Digital Disruption Talk

We are excited to invite you to Part IV of the Digital Disruption series, a collaboration between Scheller College of Business at Georgia Tech and Accenture that explores trends in today’s ever-changing digital arena. The series examines ways successful organizations and managers embrace technological advances and leverage opportunities to accelerate innovation, create value, and build the workforce of the future.

For more info: https://www.scheller.gatech.edu/news-events/events/event.html?event_id=025666ef0a1504470d5ea080de343036

Crawford and Bogost: “You’ve Got to Tell Them… the AI Data Set… Is… People!”

I can’t remember where it came from—maybe it’s a comedy bit, or maybe it’s an old Tumblr text post—but a joke about technology has stuck with me. It’s premised on the idea that people and popular culture have always been obsessed with flying cars—really, with the nonexistence of the promised flying car—ignoring the fact that, if flying cars existed, they would essentially function exactly like the planes we have now. Does the premise totally work? Not exactly. But it’s a useful meditation on the ways we view the technology we have in contrast to the technology we wish we had, or that we had imagined would exist by now. Bogost’s article is predicated on this same idea. When the “corporate fashion” of AI software is taken together with how “press and popular discourse sometimes inflate simple features into AI miracles,” the notion of artificial intelligence becomes something like the proverbial flying car: an ideal that some subset of culture is convinced it and the rest of society not only want but are mere decades, years, or months away from achieving. Bogost frames this conflation as a kind of technocratic theology, a worship of “false idols” like AI and algorithms at the expense of serious consideration of what services these kinds of software can—and more importantly, can’t—be expected to perform.

Analyses like Crawford’s thus become eminently useful, engaging head-on with notions and consequences of “bias” and “classification” as they emerge, nominally and implicitly, from precisely the sort of “nothing special” systems Bogost reminds us are inevitably “made by people.” Through a leapfrogging historical overview of human classification systems that today appear somewhere between “quaint and old-fashioned” at best and systematically harmful at worst, Crawford explains how the “arbitrary classification” of computation works both to the benefit of oppression and to the detriment of the oppressed. She quotes Stuart Hall: “Systems of classification become the objects of power.” In response, Crawford proposes crafting technologies with an eye toward the “kind of world we want”—a process fundamentally reliant on cross-disciplinary collaboration. This reach “beyond [the] technical approach to the socio-technical” is one we have explored multiple times in the texts we’ve read for class, but it can sometimes appear worryingly reflective, after-the-fact in its approach to critique. I can’t help but think back to Bridle’s invocation of Epimetheus.

Feats and Flaws, Technical and Social – What type of world do we want?

Bogost discusses how the term “artificial intelligence” has become so overused that it is essentially meaningless. He asserts that, “in most cases, the systems making claims to artificial intelligence aren’t sentient, self-aware, volitional, or even surprising. They’re just software.” The phrase has spread well beyond the tech sector: it is attached to systems composing music and news stories, and it has become corporate strategy, showing up in earnings call transcripts. Simple features are inflated into AI miracles, producing what Bogost calls the “fable” of AI. Ultimately, Bogost advocates for understanding these systems as “particular implementations of software in corporations, not as totems of otherworldly AI.” He ends with the statement that “today’s computer systems are nothing special. They are apparatuses made by people, running software made by people, full of the feats and flaws of both.”

It is from this point that Kate Crawford’s talk connects with Bogost’s discussion. Crawford takes up the social factors shaping AI, focusing on some of those flaws, particularly bias. She asserts that “the legacies of inequality are being embedded in our digital systems and they are now being built into the logics of AI itself.” She surveys the different historical and disciplinary interpretations of bias but settles on an understanding of it as “a skew that produces a type of harm.” She unpacks this harm through two pipelines: harm of allocation and harm of representation. Her broader discussion of the politics of classification situates bias in AI within its larger social context, underscoring her point that the bias of the world is embedded in our data and that structural bias is fundamentally social. To fully understand it, we must talk about that social history, which is what Crawford attempts in her talk.

While Crawford suggests some things we can do to combat bias and harm related to AI (fairness forensics, connecting across fields, and an ethics of classification), she really advocates a shift in perspective. We should first ask, “What type of world do we want?” and then, “How does technology serve that vision?” I think this is a critical shift. So often we are so focused on the technological invention or the increase in efficiency (read: corporate profit) that we don’t stop to question the real consequences of these technologies on our world and the human experience. These technologies are developing and being implemented faster than we can ask the more fundamental questions. If we don’t take a moment to adjust and engage them soon, we may miss the opportunity to do so (if we haven’t already).
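To make concrete the point that the bias of the world is embedded in our data, here is a toy sketch of my own (not from Crawford’s talk): if historical loan approvals demanded a higher credit score from one group for purely social reasons, a model trained on that history faithfully reproduces the skew, a textbook harm of allocation. The data, thresholds, and feature names below are all invented for illustration.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic "historical" lending data: a group label and a credit score.
    group = rng.integers(0, 2, size=n)
    score = rng.normal(600, 50, size=n)

    # The history is biased: group 1 needed a higher score to be approved.
    approved = (score > np.where(group == 1, 640, 600)).astype(int)

    # Train on the biased history, with group membership as a feature.
    X = np.column_stack([group, score])
    model = LogisticRegression(max_iter=1000).fit(X, approved)

    # The model learns the skew: same score, different outcomes by group.
    print(model.predict([[0, 620], [1, 620]]))  # likely [1 0]

Nothing in the software malfunctions here; the model does exactly what it was asked to do, which is Crawford’s point that structural bias is social rather than merely technical.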

Can you be copied anew?

One day in the fall of 2017, in Magerko’s Expressive Computing seminar, Magerko leapt to his feet after reading a news snippet that appeared on his laptop. If my memory serves me correctly, he called Elon Musk a moron over some comment Musk had made about AI. Unfortunately, as Bogost’s article shows, Musk is in good company. I think Gates, though, isn’t worried about a sci-fi dystopia so much as the economic implications of AI implementations: the social wealth disparities that will grow exponentially as new technologies displace human labor on a scale not seen since the Industrial Revolution. That is a less sexy claim to make, and Hollywood can’t make a good movie about social inequity. But Bogost is right: AI as a catch-all term is pointless. The Jerry Kaplan article Bogost mentions is very blunt: “Machines are not people, and there’s no persuasive evidence that they are on a path toward sentience.” I’ve never seen it stated this clearly, and it doesn’t make for good news or marketing.

Kate Crawford’s talk spells out the nearer-term stakes in public policy surrounding AI. Shall we rely on Europe to get it right, especially when the stakes around diversity there are different from “ours,” or from those elsewhere in the world? Do we need specialized data sets? Is the geopolitics of AI between the US and China potentially another form of colonialism?

Here are a couple of things we can try out in class. Has anyone tried https://thispersondoesnotexist.com? This website, created by Philip Wang, presents a random computer-generated photo of a fictional person; every time you refresh, there’s a new face. You can see how these pictures skew: heavily white, with few Black faces and a smattering of Asian ones. Also, https://havetheyfaked.me/ takes the same data set and searches for a match using your selfie. It should promote discussion. According to the second website, my face was very close to being faked.
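For anyone curious what a refresh actually does under the hood, here is a minimal sketch: the site samples a fresh random latent vector and decodes it with a generator network trained on face photos (a StyleGAN, in Wang’s case). The tiny generator below is untrained and purely a stand-in; only the sampling loop reflects the real technique.

    import torch
    import torch.nn as nn

    LATENT_DIM = 512  # StyleGAN draws from a 512-dimensional latent space

    # Untrained stand-in for a generator trained on face photographs.
    toy_generator = nn.Sequential(
        nn.Linear(LATENT_DIM, 1024),
        nn.ReLU(),
        nn.Linear(1024, 3 * 64 * 64),  # a small 64x64 RGB "image"
        nn.Tanh(),
    )

    def refresh():
        """Simulate one page refresh: new random latent in, new face out."""
        z = torch.randn(1, LATENT_DIM)  # a different z every call
        with torch.no_grad():
            return toy_generator(z).view(1, 3, 64, 64)

    face = refresh()
    print(face.shape)  # torch.Size([1, 3, 64, 64])

The racial skew we notice follows directly from this setup: the generator can only recombine the distribution of faces it was trained on, so a training set that skews white yields outputs that skew white too.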