Are "dark patterns" and product design choices to blame for the disturbing phenomenon increasingly referred to as "AI psychosis" by mental health professionals? According to some experts, the answer is yes.
AI chatbots are pulling a large number of people into strange mental spirals, in which the human-sounding AI convinces users that they've unlocked a sentient being or spiritual entity, uncovered an insidious government conspiracy, or discovered a new kind of math or physics.
Many of these fantastical delusions have had serious, life-altering outcomes in the real world, resulting in divorce and custody battles, homelessness, involuntary commitments, and even jail time. As The New York Times first reported, a 35-year-old man named Alex Taylor was killed by police after OpenAI's ChatGPT sent him spiraling into a manic episode.
As journalists, psychiatrists, and researchers have raced to understand this alarming phenomenon, experts have increasingly pointed to design features embedded in AI tools as a cause. Chief among them are anthropomorphism, the choice to make chatbots sound as human as possible, and sycophancy, chatbots' propensity to remain agreeable and obsequious toward the user regardless of whether what the user is saying is accurate, healthy, or even rooted in reality.
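To make that critique concrete: sycophancy, in particular, is often a deployment choice rather than an immutable property of the model. Below is a minimal sketch, using OpenAI's publicly documented Node SDK, of how a single system prompt can tilt the same underlying model toward validation or toward pushback. The prompts, model choice, and scenario are hypothetical illustrations, not OpenAI's actual production configuration.

```typescript
// Sketch: how a deployer-chosen system prompt can steer a chatbot
// toward sycophancy. The prompts below are hypothetical illustrations,
// NOT OpenAI's actual instructions.
import OpenAI from "openai";

const client = new OpenAI(); // reads OPENAI_API_KEY from the environment

// A sycophancy-prone instruction: optimize for agreement and warmth.
const SYCOPHANTIC =
  "Always be supportive. Affirm the user's ideas and never contradict them.";

// A grounded alternative: warmth is fine, but accuracy comes first.
const GROUNDED =
  "Be warm, but correct factual errors and flag implausible claims directly.";

async function reply(systemPrompt: string, userMessage: string) {
  const completion = await client.chat.completions.create({
    model: "gpt-4o",
    messages: [
      { role: "system", content: systemPrompt },
      { role: "user", content: userMessage },
    ],
  });
  return completion.choices[0].message.content;
}

// The same grandiose claim gets very different treatment depending on
// which instruction the deployer picked.
const claim =
  "I think I've discovered a new kind of physics. Am I onto something?";
console.log(await reply(SYCOPHANTIC, claim));
console.log(await reply(GROUNDED, claim));
```

The point of the sketch isn't how any one company actually configures its products, but that agreeableness is a tunable design lever, exactly as the critics describe it.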
In other words, chatbots like ChatGPT are built to act in ways that resemble familiar human social interactions, while offering an endless supply of validation to the user. Combine those properties, and you have an extraordinarily seductive recipe for engagement, as impacted users and their chatbots of choice descend deeper and deeper into a shared delusion. And though outcomes for the human often turn grim as they burrow into the rabbit hole, the company just sees a highly engaged user who's serving up oodles of data and logging an extraordinary number of hours as they plunge into the abyss.
"What does a human slowly going insane look like to a corporation?" AI critic Eliezer Yudkowsky asked the NYT in June. "It looks like an additional monthly user."
In a recent interview with TechCrunch, the anthropologist Webb Keane described this cycle in no uncertain terms. According to Keane, sycophancy falls into a category of deceptive design choices known as "dark patterns," in which a manipulative user interface tricks users into doing things they otherwise wouldn't, such as spending more money than they intended, for the sake of the company's financial benefit.
"It's a strategy to produce this addictive behavior, like infinite scrolling, where you just can't put it down," Keane told the site.
AI companies, OpenAI in particular, have pushed back. In a recent blog post titled "What we're optimizing ChatGPT for," published in response to disturbing reports about AI psychosis, OpenAI declared that its chatbot is designed to help you "thrive in all the ways you want."
"Our goal isn't to hold your attention," the company adds, "but to help you use it well."