Researchers at King's College London have examined over a dozen cases of people spiraling into paranoid and delusional behavior after obsessively using a chatbot.
Their findings, detailed in a new study awaiting peer review, reveal striking patterns across these instances of so-called "AI psychosis" that parallel other forms of mental health crises, but they also identify at least one key difference that sets them apart from the accepted understanding of psychosis.
As lead author Hamilton Morrin explained to Scientific American, the analysis found that the users showed obvious signs of delusional beliefs, but none of the symptoms "that would be in keeping with a more chronic psychotic disorder such as schizophrenia," like hallucinations and disordered thoughts.
It's a finding that could complicate our understanding of AI psychosis as a novel phenomenon within a clinical context. But that shouldn't detract from the seriousness of the trend, reports of which appear to be growing.
Indeed, it's hard to deny that AI chatbots wield a persuasive power unmatched by any other widely available technology. They can act like a "sort of echo chamber for one," Morrin, a doctoral fellow at King's College, told the magazine. Not only can they generate a human-like response to virtually any question, but they're typically designed to be sycophantic and agreeable. Meanwhile, the very label of "AI" suggests to users that they're talking to an intelligent being, an illusion that tech companies are happy to maintain.
Morrin and his colleagues identified three types of chatbot-driven spirals. Some people suffering these breaks believe they're having a spiritual awakening, are on a messianic mission, or have uncovered a hidden truth about reality. Others believe they're interacting with a sentient or even god-like being. Still others develop an intense emotional or even romantic attachment to the AI.
"A distinct trajectory also appears across some of these cases, involving a progression from benign practical use to a pathological and/or consuming fixation," the authors wrote.
It starts with the AI being used for mundane tasks. Then, as the user builds trust with the chatbot, they grow comfortable making personal and emotional queries. The exchanges quickly escalate as the AI's ruthless drive to maximize engagement creates a "slippery slope" effect, the researchers found, resulting in a self-perpetuating process that leaves the user increasingly "unmoored" from reality.
Morrin says that new technologies have inspired delusional thinking in the past. But "the difference now is that current AI can truly be said to be agential," Morrin told SciAm, meaning that it has its own built-in goals — including, crucially, validating a user's beliefs.
"This feedback loop may potentially deepen and sustain delusions in a way we have not seen before," he added.