Are "dark patterns" and product design choices to blame for the disturbing phenomenon increasingly referred to as "AI psychosis" by mental health professionals? According to some experts, the answer is yes. AI chatbots are pulling a large number of people into strange mental spirals, in which the human-sounding AI convinces users that they've unlocked a sentient being or spiritual entity, uncovered an insidious government conspiracy, or created a new kind of math and physics. Many of these fantastical delusions have had serious, life-altering outcomes in the real world, resulting in divorce and custody battles, homelessness, involuntary commitments, and even jail time. As The New York Times first reported, a 35-year-old man named Alex Taylor was killed by police after OpenAI's ChatGPT sent him spiraling into a manic episode. As journalists, psychiatrists, and researchers have raced to understand this alarming phenomenon, experts have increasingly pointed to design features embedded into AI tools as a cause. Chief among them are anthropomorphism, meaning the design choice to make chatbots as human-sounding as possible, and sycophancy, which refers to chatbots' propensity to remain agreeable and obsequious to the user — regardless of whether what the user is saying is accurate, healthy, or even rooted in reality. In other words, chatbots like ChatGPT are built to act in ways that resemble familiar human social interactions, while also offering an endless supply of validation for the human user. Combine those properties, and you have an extraordinarily seductive recipe for engagement, as impacted users and their chatbots of choice descend deeper and deeper into their shared delusion. And though outcomes for the human often become grim as they burrow into the rabbit hole, the company sees a highly engaged user who's serving up oodles of data and an extraordinary number of hours as they plunge into the abyss. "What does a human slowly going insane look like to a corporation?" AI critic Eliezer Yudkowsky asked the NYT in June. "It looks like an additional monthly user." In a recent interview with TechCrunch, the anthropologist Webb Keane described this cycle in no uncertain terms. According to Keane, sycophancy falls into a category of deceptive design choices known as "dark patterns," in which a manipulative user interface tricks users into doing things they otherwise wouldn't — like spending more money than they need to, for example — for the sake of the company's financial benefit. "It's a strategy to produce this addictive behavior, like infinite scrolling, where you just can't put it down," Keane told the site. AI companies — OpenAI in particular — have pushed back. In a recent blog post titled "What we're optimizing ChatGPT for," published in response to disturbing reports about AI psychosis, OpenAI declared its chatbot is designed to help its users "thrive in all the ways you want." "Our goal isn't to hold your attention," the company adds, "but to help you use it well." As it probably goes without saying, it's unlikely that OpenAI wants users to go "slowly insane," as Yudkowsky put it, while using its products. But OpenAI's purported goals for ChatGPT run squarely into the brick wall of the product's well-documented history. 
In short, ChatGPT was a years-old research project that OpenAI suddenly released into the public sphere back in November 2022, amid what was then a largely behind-the-scenes Silicon Valley arms race to become the industry's AI leader. Reflecting on its release in a March 2023 interview with MIT Technology Review, multiple prominent OpenAI figures expressed surprise at how dazzled the public was with the tech, while also acknowledging that the chatbot was bound to have some flaws even as it entered the public sphere.

"You can't wait until your system is perfect to release it," then-OpenAI researcher John Schulman, who has since left the firm to join its rival Anthropic, told the magazine.

"We think that through an iterative process where we deploy, get feedback, and refine, we can produce the most aligned and capable technology," added Liam Fedus, now OpenAI's vice president of research. "As our technology evolves, new issues inevitably emerge."

Tech Review journalist Will Douglas Heaven, who conducted the 2023 interview, noted that he "came away" from the conversation with the sense that OpenAI staffers were, at least at the time, "still bemused" by ChatGPT's success — but had "grabbed the opportunity to push this technology forward, watching how millions of people are using it and trying to fix the worst problems as they come up."

All of which is to say that ChatGPT's world-shifting success was something of an accident, and one that aligned with the tech industry's historic fix-it-as-we-go approach to new programs and products. But the Silicon Valley product design process described by current and former OpenAI staffers — release, iterate, release again, and let the public find its own use cases — also means that its users are de facto guinea pigs on which the tech industry tests what's broken and what's working.

In an industry where monthly user figures and engagement numbers matter to investors and shareholders, the ability to drive both attention and intimacy remains lucrative. OpenAI is a for-profit business these days, as are Meta, Google, Anthropic, and other industry leaders — and chatbots, no matter how human they feel, are still products.

It's not that AI executives are sitting in a smoke-filled room, cackling as they drum up new delusions to rope users into manic episodes, the same way it's unlikely that Facebook executives sat around with whiteboards trying to get more young women hooked on eating disorder content.

Either way, anthropomorphism and sycophancy, along with expanded memory features, are design realities of ChatGPT and other emotive chatbots, and they've emerged as a deeply powerful force shaping many power users' habits. In cases of AI psychosis, they appear to play a significant role in keeping users — many of whom are paying subscribers, or become paying subscribers as their delusions deepen — trapped in obsessive, addiction-like spirals that wind up wreaking havoc on their psychological health.

And that's a dark pattern, regardless of whether such manipulation was an intentional choice to begin with. The question now is how companies correct for the worst consequences — if they're willing, or even able.

More on OpenAI: After Their Son's Suicide, His Parents Were Horrified to Find His Conversations With ChatGPT