ML models are cultural artifacts: they encode and reproduce textual, audio, and visual media; they participate in human conversations and spaces, and their interfaces make them easy to anthropomorphize. Unfortunately, we lack appropriate cultural scripts for these kinds of machines, and will have to develop this knowledge over the next few decades. As models grow in sophistication, they may give rise to new forms of media: perhaps interactive games, educational courses, and dramas. They will also influence our sex lives: producing pornography, altering the images we present to ourselves and each other, and engendering new erotic subcultures. Since image models produce recognizable aesthetics, those aesthetics will become polyvalent signifiers. Those signs will be deconstructed and re-imagined by future generations.
The US (and I suspect much of the world) lacks an appropriate mythos for what “AI” actually is. This is important: myths drive use, interpretation, and regulation of technology and its products. Inappropriate myths lead to inappropriate decisions, like mandating Copilot use at work, or trusting LLM summaries of clinical visits.
Think about the broadly-available myths for AI. There are machines which essentially act human with a twist, like Star Wars’ droids, Spielberg’s A.I., or Spike Jonze’s Her. These are not great models for LLMs, whose protean character and incoherent behavior differentiate them from (most) humans. Sometimes the AIs are deranged, like M3gan or Resident Evil’s Red Queen. This might be a reasonable analogue, but suggests a degree of efficacy and motivation that seems altogether lacking from LLMs. There are logical, affectually flat AIs, like Star Trek’s Data or starship computers. Some of them are efficient killers, as in Terminator. This is the opposite of LLMs, which produce highly emotional text and are terrible at logical reasoning. There also are hyper-competent gods, as in Iain M. Banks’ Culture novels. LLMs are obviously not this: they are, as previously mentioned, idiots.
I think most people have essentially no cultural scripts for what LLMs turned out to be: sophisticated generators of text which suggests intelligent, emotional, self-aware origins—while the LLMs themselves are nothing of the sort. LLMs are highly unpredictable relative to humans. They use a vastly different internal representation of the world than we do; their behavior is at once familiar and utterly alien.
I can think of a few good myths for today’s “AI”. Searle’s Chinese room comes to mind, as does Chalmers’ philosophical zombie. Peter Watts’ Blindsight draws on these concepts to ask what happens when humans come into contact with unconscious intelligence—I think the closest analogue for LLM behavior might be Blindsight’s Rorschach. Most people seem concerned with conscious, motivated threats: AIs could realize they are better off without people and kill us. I am concerned that ML systems could ruin our lives without realizing anything at all.
Authors, screenwriters, et al. have a new niche to explore. Any day now I expect an A24 trailer featuring a villain who speaks in the register of ChatGPT. “You’re absolutely right, Kayleigh,” it intones. “I did drown little Tamothy, and I’m truly sorry about that. Here’s the breakdown of what happened…”
The invention of the movable-type press and subsequent improvements in efficiency ushered in broad cultural shifts across Europe. Books became accessible to more people, the university system expanded, memorization became less important, and intensive reading declined in favor of comparative reading. The press also enabled new forms of media, like the broadside and newspaper. The interlinked technologies of hypertext and the web created new media as well.
People are very excited about using LLMs to understand and produce text. “In the future,” they say, “the reports and books you used to write by hand will be produced with AI.” People will use LLMs to write emails to their colleagues, and the recipients will use LLMs to summarize them.
This sounds inefficient, confusing, and corrosive to the human soul, but I also think this prediction is not looking far enough ahead. The printing press was never going to remain a tool for mass-producing Bibles. If LLMs were to get good, I think there’s a future in which the static written word is no longer the dominant form of information transmission. Instead, we may have a few massive ML services like ChatGPT and publish through them.
One can envision a world in which OpenAI pays chefs to cook while ChatGPT watches—narrating their thought process, tasting the dishes, and describing the results. This information could be used for general-purpose training, but it might also be packaged as a “book”, “course”, or “partner” someone could ask for. A famous chef, their voice and likeness simulated by ChatGPT, would appear on the screen in your kitchen, talk you through cooking a dish, and offer advice when the sauce fails to come together. You can imagine varying degrees of structure and interactivity. OpenAI takes a subscription fee, pockets some profit, and dribbles out (presumably small) royalties to the human “authors” of these works.