Science fiction’s superpower isn’t thinking up new technologies – it’s thinking up new social arrangements for technology. What the gadget does is nowhere near as important as who the gadget does it for and who it does it to. Your car can use a cutting-edge computer vision system to alert you when you’re drifting out of your lane – or it can use that same system to narc you out to your insurer so they can raise your premiums by $10 that month to punish you for inattentive driving. Same gadget, different social arrangement.
Here’s why that’s so important: tech hucksters want you to think there’s only one way to use the gadget (their way). Mark Zuckerberg wants you to believe that it’s unthinkable that you might socialize with your friends without letting him spy on you all from asshole to appetite. Conversing with friends without Creepy Zuck listening in? That’s like water that’s not wet!
But of course, it’s all up for grabs. There’s nothing inevitable about it. Zuck spies on you because he wants to, not because he has to. He could stop. We could make him stop. That’s what the best science fiction does: It makes us question the social arrangements of our technology, and inspires us to demand better ones.
This idea – that who a technology acts for (and upon) is more important than the technology’s operating characteristics – has a lot of explanatory power.
Take AI: There are a lot of people in my orbit who use AI tools and describe them in glowing terms, as something useful and even delightful. Then there are people I know and trust who describe AI as an immiserating, dehumanizing technology that they hate using. This is true even for people who have similar levels of technological know-how, who are using the very same tools.
But the mystery vanishes as soon as you learn about the social arrangements around the AI usage.
I recently installed some AI software on my laptop: an open source model called Whisper that can transcribe audio files. I installed it because I was writing an article and I wanted to cite something I’d heard an expert say on a podcast. I couldn’t remember which expert, nor even which podcast. So I downloaded Whisper, threw 30 or 40 hours’ worth of podcasts I’d recently listened to at it, and then, a couple hours later, searched the text until I found the episode, along with the timecode for the relevant passage. I was able to call up the audio, check it against the transcript, correct a few small errors, and paste the passage into my essay.
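For the curious, a workflow like that takes only a few lines of code. This is a hedged sketch, not my exact setup: it assumes the open source `openai-whisper` Python package (`pip install openai-whisper`), and the file pattern and searched phrase are made up for illustration.

```python
# Sketch: transcribe a folder of podcasts with Whisper, keeping
# per-segment timecodes, then search the text for a remembered phrase.
# Assumes the open source "openai-whisper" package; file names are
# hypothetical.
import glob


def transcribe_all(pattern="podcasts/*.mp3", model_name="base"):
    """Transcribe every matching audio file into tab-separated lines:
    file, [seconds], text."""
    import whisper  # deferred import: the search helper below works without it
    model = whisper.load_model(model_name)  # "base" runs on a modest laptop
    lines = []
    for path in sorted(glob.glob(pattern)):
        for seg in model.transcribe(path)["segments"]:
            lines.append(f"{path}\t[{seg['start']:.1f}s]\t{seg['text'].strip()}")
    return lines


def find_passage(lines, phrase):
    """Return (file, seconds, text) for transcript lines containing phrase."""
    hits = []
    for line in lines:
        path, stamp, text = line.split("\t", 2)
        if phrase.lower() in text.lower():
            hits.append((path, float(stamp.strip("[]s")), text))
    return hits
```

With the transcripts saved, finding the half-remembered quote is an ordinary text search: `find_passage(lines, "enclosure")` points you at the file and the timecode, and you can jump straight to that spot in the audio to verify the wording.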
A year ago, I simply would have omitted the reference. There was no way I was ever going to re-listen to hours and hours of podcasts looking for this half-remembered passage. Thanks to a free AI model that ran on my modest laptop, in the background while I was doing other work, I was able to write a better essay. In that moment, I felt pretty good about this little AI model, especially since it’s an open source project that will endure long after the company that made it has run out of dumb money and been sold for parts. The ability to use your personal computer to turn arbitrary amounts of recorded speech into a pretty accurate transcript is now a permanent fact of computing, like the ability to use your PC to crop an image or make a sign advertising your garage sale.
That’s one social arrangement for AI. Here’s another: last May, the Chicago Sun-Times included a 64-page “Best of Summer” insert from Hearst Publishing, containing lists of things to do that summer, including a summer reading list. Of the 15 books on that list, ten did not exist. They were AI “hallucinations” (jargon used by AI hucksters in place of the less sexy, but more accurate, term: “errors”).
This briefly lit up the internet, as well it should have, because it’s a pretty wild error to see in a major daily newspaper. Jason Koebler from 404 Media tracked down the list’s “author,” a freelancer called Marco Buscaglia, who confessed that he had used AI to write the story and professed his shame and embarrassment at his failure to fact-check the AI’s output.