Feats and Flaws, Technical and Social – What type of world do we want?

Bogost discusses how the term “artificial intelligence” has been overused to the point that it has essentially become meaningless. He asserts that “in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.” The phrase has spread well beyond the tech sector, applied to systems that compose music and news stories and invoked as corporate strategy in earnings call transcripts. Simple features, according to Bogost, are being inflated into AI miracles, producing what he calls the “fable” of AI. Ultimately, Bogost advocates for understanding these systems as “particular implementations of software in corporations, not as totems of otherworldly AI.” He ends with the statement that “today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.”

It is at this point that Kate Crawford’s talk connects with Bogost’s discussion. Crawford turns to the social factors shaping AI, examining some of those flaws, particularly bias. She asserts that “the legacies of inequality are being embedded in our digital systems and they are now being built into the logics of AI itself.” She surveys the different historical and disciplinary interpretations of bias, but settles on an understanding of it as “a skew that produces a type of harm.” She unpacks this harm through two pipelines: harms of allocation and harms of representation. Her broader discussion of the politics of classification situates bias in AI within its larger social context, reinforcing her point that the bias in the world is embedded in our data and that structural bias is fundamentally social. To fully understand it, we must talk about that social history, and that is what Crawford attempts in her talk.

While Crawford suggests some things we can do to combat bias and harm related to AI (fairness forensics, connecting across fields, and an ethics of classification), she really advocates for a shift in perspective. We should first ask, “What type of world do we want?” and then, “How does technology serve that vision?” I think this is a critical shift. So often we are so focused on the technological invention or the increase in efficiency (read: corporate profit) that we don’t stop to question the real consequences of these technologies on our world and the human experience. These technologies are developing and being implemented faster than we can ask the more fundamental questions. If we don’t take a moment to adjust and engage them soon, we may miss the opportunity to do so (if we haven’t already).