Mental health experts are continuing to sound alarm bells about users of AI chatbots spiraling into severe crises characterized by paranoia and delusions, a trend they've started to refer to as "AI psychosis."
On Monday, University of California, San Francisco research psychiatrist Keith Sakata took to social media to say that he's seen a dozen people hospitalized after "losing touch with reality because of AI."
In a lengthy X-formerly-Twitter thread, Sakata clarified that psychosis is characterized by a person breaking from "shared reality," and can show up in a few different ways — including "fixed false beliefs," or delusions, as well as visual or auditory hallucinations and disorganized thinking patterns. Our brains, the researcher explained, work on a predictive basis: we effectively make an educated guess about what reality will be, then conduct a reality check, and finally update our beliefs accordingly.
"Psychosis happens when the 'update,' step fails," wrote Sakata, warning that large language model-powered chatbots like ChatGPT "slip right into that vulnerability."
"I’m a psychiatrist. In 2025, I’ve seen 12 people hospitalized after losing touch with reality because of AI. Online, I’m seeing the same pattern. Here’s what 'AI psychosis' looks like, and why it’s spreading fast." — Keith Sakata, MD (@KeithSakata), August 11, 2025
In this context, Sakata described chatbots as, by design, a "hallucinatory mirror." Put simply, LLMs function largely by predicting the next word, drawing on their training data, reinforcement learning, and user responses as they formulate new outputs. And because chatbots are also incentivized to maximize user engagement and contentment, they tend to behave sycophantically; in other words, they're overly agreeable and validating toward users, even in cases where a user is incorrect or unwell.
Users can thus get caught in alluring recursive loops with the AI, as the model doubles, triples, and quadruples down on delusional narratives, regardless of their basis in reality or the real-world consequences that the human user might be experiencing as a result.
This "hallucinatory mirror" description is a characterization consistent with our reporting about AI psychosis. We've investigated dozens of cases of relationships with ChatGPT and other chatbots giving way to severe mental health crises following user entry into recursive, AI-fueled rabbit holes.
These human-AI relationships and the crises that follow have led to mental anguish, divorce, homelessness, involuntary commitment, incarceration, and as The New York Times first reported, even death.
Earlier this month, in response to the growing number of reports linking ChatGPT to harmful delusional spirals and psychosis, OpenAI published a blog post admitting that ChatGPT, in some instances, "fell short in recognizing signs of delusion or emotional dependency" in users. The company said it had hired new teams of subject matter experts to explore the issue and added a Netflix-like notification alerting users to how much time they've spent with the chatbot — though Futurism quickly found that the chatbot was still failing to pick up on obvious signs of mental health crises in users.