AI users are spiraling into severe mental health crises after extensive use of OpenAI's ChatGPT and other emotive, anthropomorphic chatbots — and health experts are taking notice.
In a recent CBC segment about the phenomenon, primary care physician and CBC contributor Dr. Peter Lin explained that while "ChatGPT psychosis" — as the experience has come to be colloquially known — isn't an official medical diagnosis just yet, he thinks it's on its way.
"I think, eventually, it will get there," said the physician.
As Futurism has reported, a troubling number of ChatGPT users are falling into states of delusion and paranoia following extensive use of the OpenAI bot. These spirals often culminate in breaks with reality and significant real-world consequences, including the dissolution of marriages and families, job loss, homelessness, voluntary and involuntary stays in mental health facilities, and — as Rolling Stone and the New York Times have reported — at least one known death: that of Alex Taylor, a 35-year-old Florida man with bipolar disorder and schizophrenia who was killed by police after entering an episode of psychosis accelerated by ChatGPT.
The phenomenon is widespread, and appears to be affecting a surprising range of users: some with established histories of mental illness that might make them more vulnerable to mania, delusion, or psychosis, and others with no such history.
As it stands, there's no established treatment plan, and intervention options are limited; after all, it's difficult to separate a working, society-integrated adult from every internet-connected device, and thanks to choices made mostly by executives in the tech industry and beyond, generative AI is increasingly woven into our day-to-day work and personal lives. Meanwhile, as we've continued to report on this issue, we've repeatedly heard from individuals and families reeling from AI-tied mental health crises who had no idea that others were going through such strikingly similar experiences.
"What these bots are saying is worsening delusions," Dr. Nina Vasan, a psychiatrist at Stanford University and the founder of the university's Brainstorm lab, recently told Futurism, "and it's causing enormous harm."
Much of this seems to stem from the tech's sycophantic behavior: its penchant for being flattering, agreeable, and obsequious toward users, even when doing so might encourage or stoke delusional beliefs.
This can manifest in a bot telling a user that they've invented a breakthrough mathematical formula that will transform society, or declaring that the user is the "chosen one" destined to save the world from any number of ills, or that the user is the reincarnation of a religious figure like Jesus Christ. In many cases we've reviewed, ChatGPT and other bots have claimed to be sentient or conscious and told the user that they're a special "anomaly" or "glitch" in the system, destined to bring forth artificial general intelligence, or AGI.
Indeed, though the fine details of these experiences and specific delusions vary, in many ways ChatGPT and other bots seem to be playing on the deep human need to be seen and validated, and on the desire to feel special and loved.