I can’t remember where it came from—maybe it’s a comedy bit, or maybe it’s an old Tumblr text post—but I can recall a joke that’s stuck with me regarding technology. It’s premised on the idea that people and popular culture have always been obsessed with flying cars—really, with the nonexistence of the promised flying car—ignoring the fact that, if flying cars existed, they would essentially function exactly like the planes we have now. Does the premise totally work? Not exactly. But it’s a useful meditation on the ways we view the technology we have in contrast to the technology we wish we had, or that we had imagined would exist by now. Bogost’s article is predicated on this same idea. When the “corporate fashion” of AI software is taken in kind with how “press and popular discourse sometimes inflate simple features into AI miracles,” the notion of artificial intelligence becomes something like the proverbial flying car: an ideal some subset of culture is convinced that it and the rest of society not only want, but are mere decades, years, or months away from achieving. Bogost frames this conflation as a kind of technocratic theology, a worship of “false idols” like AI and algorithms at the expense of serious consideration of what services these kinds of software can—and more importantly, can’t—be expected to perform.
What become eminently useful are analyses like Crawford’s, which engage head-on with the notions and consequences of “bias” and “classification” as they emerge, nominally and implicitly, from precisely the sort of “nothing special” systems Bogost reminds us are inevitably “made by people.” Through a leapfrogging historical overview of human classification systems that today appear somewhere between “quaint and old-fashioned” at best and systematically harmful at worst, Crawford explains how the “arbitrary classification” of computation works both to the benefit of oppression and to the detriment of the oppressed. She quotes Stuart Hall: “Systems of classification become the objects of power.” In response, Crawford proposes crafting technologies with an eye toward the “kind of world we want”—a process fundamentally reliant on cross-disciplinary collaboration. This reach “beyond [the] technical approach to the socio-technical” is one we have explored multiple times in the texts we’ve read for class, but it can sometimes appear worryingly reflective and after-the-fact in its approach to critique. I can’t help but think back to Bridle’s invocation of Epimetheus.
Bogost discusses how the term “artificial intelligence” has become so overused that it is essentially meaningless. He asserts that, “in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.” The phrase has become popular well beyond the tech sector — applied to systems composing music and news stories, and deployed as corporate strategy, showing up in earnings call transcripts. Simple features, Bogost argues, are being inflated into AI miracles, producing the “fable” of AI. Ultimately, he advocates for understanding these systems as “particular implementations of software in corporations, not as totems of otherworldly AI.” He ends with the statement that “today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.” It is from this point that Kate Crawford’s talk connects with Bogost’s discussion. Crawford focuses on the social factors also shaping AI, and on some of those flaws, particularly bias in AI. She asserts that “the legacies of inequality are being embedded in our digital systems and they are now being built into the logics of AI itself.” She surveys the different historical and disciplinary interpretations of bias, but settles on an understanding of it as “a skew that produces a type of harm.” She unpacks this harm through two pipelines: harms of allocation and harms of representation. Her broader discussion of the politics of classification situates the question of bias in AI within its larger social context, underscoring her point that the bias in the world is embedded in our data and that structural bias is fundamentally social. To fully understand this, we must talk about that social history, and this is what Crawford attempts in her talk.
While Crawford suggests some things we can do to combat bias and harm related to AI (fairness forensics, connecting across fields, and an ethics of classification), she really advocates for a shift in perspective. We should first ask, “What type of world do we want?” and then, “How does technology serve that vision?” I think this is a critical shift. We so often focus on the technological invention or the increase in efficiency (read: corporate profit) that we don’t stop to question the real consequences of these technologies for our world and the human experience. These technologies are developing and being implemented faster than we can ask the more fundamental questions. If we don’t take a moment to adjust and engage them soon, we may miss the opportunity to do so (if we haven’t already).
Ian Bogost’s article “‘Artificial Intelligence’ Has Become Meaningless” and Kate Crawford’s presentation “AI Now: Social and Political Questions for Artificial Intelligence” both pose interesting questions about the impact of artificial intelligence in the world today. While I agree with much of what they say regarding ‘AI’, both seem to miss a critical point about the very concept of artificial intelligence as a technology, though Crawford dances around the subject in her talk. The point both academics miss is that the concept of intelligence itself is artificial; artificial intelligence will therefore inherit a diluted definition of an already artificial, arbitrary classification of intellect, along with all the biases typically associated with that intellect. Let’s assume for the sake of time that ‘intelligence’ is quantifiable and, in the case of this discussion, refers only to a type of mind found in the human body: already a very pernicious assumption, but it seems to be a common one. When thinking about ‘human intelligence’, not the ‘artificial’ type, what does it do? What is its purpose? And, more to the point, is it the ‘ideal’, the ‘gold standard’, for intelligence, artificial or otherwise? In order, the answers are: classification, judgment for survival purposes, and no. I’ve simplified a bit, but when you look at the literature on the mind and consider its history, it becomes easy to see that the human mind is a ‘biased’ machine that short-circuits to the path of least resistance much more often than we would like to admit. Consider James Bridle’s discussion of automation bias and people driving into the ocean or into Russian airspace as a good example. The mind is full of biases: Wikipedia lists 175, which can probably be reduced to around 20 once potential overlaps and causal effects are considered.
To think that anything written in code [a sub-classification of machine language] and data [a reductive classification of global information] could result in something more complicated than ‘human intelligence’ is reductio ad absurdum. So in a way, Bogost is right, just not for the reasons he presumes. AI today does not meet its Hollywood equivalent and probably never will, because we are basing it on human intelligence and training it with human biases (Crawford’s point).
A better approach to both arguments would be to rename ‘Artificial Intelligence’ as ‘Artificial Insanity’; you even get to keep the same acronym. From this new name you get a truer picture of what AI supposedly represents, namely human bias as a machine. It is a rather useful machine, one that can be used to study the way we classify and reduce the world into smaller and smaller problems, whether those reductions are useful, harmful, or deadly, and how we can change our own minds in order to fix the problem. That assumes the problem can be fixed; there is a good chance that the problem is us. We continually try to offload work and thinking to some ‘worker’ that gets ‘more accurate’, ‘higher quality’, ‘cheaper’, ‘faster’ results. Each of these attributes or goals of automation, when associated with thought and life, is an oxymoron. Life is life and information is information; classification of either by anything other than our own minds is an exercise in generalized categorization and reductionism. So by all means, let’s take pictures of the world and segment them into classes that will train our machines on what can and cannot be hit while we allow them to drive us down the road to nowhere, freeing us up to discuss the semantics of what AI can and cannot do.
Bogost’s article on artificial intelligence is critical both of AI as a concept and of the technologists and large companies that claim to be pioneers in creating machines with the ability to think. He is convinced that many of today’s hyped algorithms are just regular software with cool names. These supposedly super-smart, critically engaged objects, said to be outpacing human intellect while making digital social spaces safe, are not living up to their lofty claims. Bogost references Turing’s early and accidental ideas that laid the foundation for how software engineers (and the larger society) began to think about how machines would be situated within our lives. He emphasizes that machines are tricking people all right, but not in the ways Turing imagined decades ago. The narratives behind these technologies and AI can be misleading; this much I agree with. While I don’t think completely scrapping the name artificial intelligence and replacing it with some other term is helpful, I do agree with being critical of these technologies, both as a computer scientist and as a user of many of the systems produced by companies considered thought leaders in the space. However, if we’re going to be critical of these spaces, practices, and the implications surrounding both, it’s helpful to critically engage with solutions as well. As we discussed last week, some scholars and companies have begun to address certain issues.
Having accurate views and definitions of AI helps us conceptualize the potential of these smart programs. It also allows us to recognize the drawbacks and limitations that have begun to surface when humans interact with these systems. I’m thinking of Crawford’s discussion of how people of color have experienced bias in their encounters with machines that are thought to be intelligent. Even more to Bogost’s point, these algorithms are built by humans and exhibit the successes and flaws found in human work and existence. We are always operating in various contexts, so the bias that surfaces in AI stems directly from the real-world issues of representation and recognition that Crawford discussed. Real people, working with often stereotyped datasets, build systems that impact real people. Since we just read Bridle, the “dark age” and the biased systems he believes will cost us dearly come to mind as well. I appreciate Crawford’s attempt to incorporate some solutions to this huge problem scholars are talking about, such as “fairness forensics” for assessing bias in AI, or practices of approaching development from the social rather than the technical.
The paper and lecture for this week’s discussion, both focused on artificial intelligence, present various ways that technology has implemented cultural bias. Kate Crawford’s talk at the University of Washington draws on research at the New York-based AI Now Institute and other institutions globally. One concept ever-present in the lecture is that bias in AI technology has been with us since the field’s early days, which were dominated by white males, and that recognition software is still limited in its ability to recognize people of color and women. The examples in the lecture show that many of these technologies incorporate data that is biased. Also interesting was the number of recent cases in which companies have created platforms that exclude a gender, a race, or another group of individuals because the software doesn’t recognize them. One example was the Google Arts & Culture selfie app, which miscategorized the faces of women and people of color because the art included in the software was mainly Western. What was most interesting in the talk was that, even though this research is ongoing, there is a need for policymakers, computer scientists, cultural studies scholars, psychologists, and others to work together to determine ways to address the bias in current software. Ian Bogost’s article, “‘Artificial Intelligence’ Has Become Meaningless,” offers a similar perspective on the impact of AI in society. Bogost shows that popular notions of AI are largely a product of media and technology culture. In the article, his method of understanding AI’s importance is to look at the ways it appears in media formats like film and television. Bogost tries to arrive at a better understanding of why the field started through interviews with AI scholars, like Professor Charles Isbell, who defines artificial intelligence as, “making computers act like they do in the movies.
That might sound glib, but it underscores AI’s fundamental relationship to theories of cognition and sentience.” This statement demonstrates that science fiction and media have played a significant role in framing what people think of as AI. One of the main things I believe both Bogost and Crawford ask us to question is the significance of AI to our current society and who the primary audience for this technology has been. I think once people begin to ask those questions, maybe we can start to think of ways to restructure or reform the field.