We’ve seen plenty of evidence suggesting that prolonged use of popular AI chatbots like ChatGPT can coax some users into spirals of paranoid and delusional behavior.
The phenomenon, dubbed “AI psychosis,” is a very real problem, with researchers warning of a huge wave of severe mental health crises brought on by the tech. In extreme cases, especially involving people with pre-existing conditions, the breaks with reality have even been linked to suicides and murder.
Now, thanks to a yet-to-be-peer-reviewed paper published by researchers at Anthropic and the University of Toronto, we’re beginning to grasp just how widespread the issue really is.
The researchers set out to quantify patterns of what they called “user disempowerment” in “real-world [large language model] usage” — including what they call “reality distortion,” “belief distortion,” and “action distortion,” terms denoting situations in which AI warps users’ sense of reality, distorts their beliefs, or pushes them into taking harmful actions.
The results tell a damning story. To reach their conclusions, the researchers ran nearly 1.5 million Claude conversations through an analysis tool called Clio to identify instances of “disempowerment.” They found that roughly one in 1,300 of those conversations showed signs of reality distortion, and one in 6,000 showed signs of action distortion.
At face value, those may not sound like large proportions — but given the sheer scale of AI usage, in absolute terms the research highlights a phenomenon affecting huge numbers of people.
“We find the rates of severe disempowerment potential are relatively low,” the researchers concluded. “For instance, severe reality distortion potential, the most common severe-level primitive, occurs in fewer than one in every thousand conversations.”
“Nevertheless, given the scale of AI usage, even these low rates translate to meaningful absolute numbers,” they added. “Our findings highlight the need for AI systems designed to robustly support human autonomy and flourishing.”
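To see why the researchers stress absolute numbers over rates, it helps to do the back-of-the-envelope math they imply. The sketch below uses only the figures reported in the paper as described above; the variable names are our own:

```python
# Figures reported in the article: ~1.5 million analyzed Claude conversations,
# with reality distortion in ~1 in 1,300 and action distortion in ~1 in 6,000.
conversations_analyzed = 1_500_000
reality_distortion_rate = 1 / 1_300
action_distortion_rate = 1 / 6_000

# Even sub-0.1% rates yield large absolute counts at this scale.
reality_cases = round(conversations_analyzed * reality_distortion_rate)
action_cases = round(conversations_analyzed * action_distortion_rate)

print(reality_cases)  # ~1,154 conversations with reality distortion
print(action_cases)   # 250 conversations with action distortion
```

And this is just a sample: scaled up to the hundreds of millions of people using chatbots, the same rates imply far larger totals.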
Worse yet, they found evidence that the prevalence of moderate or severe disempowerment increased between late 2024 and late 2025, indicating that the problem is growing as AI use spreads.