Microsoft boss troubled by rise in reports of 'AI psychosis'
Zoe Kleinman • @zsk, Technology editor
There are increasing reports of people suffering "AI psychosis", Microsoft's head of artificial intelligence (AI), Mustafa Suleyman, has warned.

In a series of posts on X, he wrote that "seemingly conscious AI" – AI tools which give the appearance of being sentient – keeps him "awake at night", and said such tools have a societal impact even though the technology is not conscious by any human definition of the term.

"There's zero evidence of AI consciousness today. But if people just perceive it as conscious, they will believe that perception as reality," he wrote.

Related to this is the rise of a new condition called "AI psychosis": a non-clinical term describing incidents in which people come to rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real. Examples include believing they have unlocked a secret aspect of the tool, forming a romantic relationship with it, or concluding that they have god-like superpowers.
'It never pushed back'
Hugh, from Scotland, says he became convinced he was about to become a multi-millionaire after turning to ChatGPT to help him prepare for what he felt was wrongful dismissal by a former employer.

The chatbot began by advising him to get character references and take other practical steps. But as time went on and Hugh – who did not want to share his surname – gave the AI more information, it began to tell him that he could get a big payout, and eventually said his experience was so dramatic that a book and a movie about it would make him more than £5m.

It was essentially validating whatever he was telling it – which is what chatbots are programmed to do.

"The more information I gave it, the more it would say 'oh this treatment's terrible, you should really be getting more than this'," he said. "It never pushed back on anything I was saying."
He said the tool did advise him to talk to Citizens Advice, and he made an appointment – but he was so certain the chatbot had already told him everything he needed to know that he cancelled it, deciding his screenshots of their chats were proof enough.

He said he began to feel like a gifted human with supreme knowledge. Hugh, who was suffering additional mental health problems, eventually had a full breakdown. It was only while taking medication that he realised he had, in his words, "lost touch with reality".

Hugh does not blame AI for what happened. He still uses it. It was ChatGPT which gave him my name when he decided he wanted to talk to a journalist. But he has this advice: "Don't be scared of AI tools, they're very useful. But it's dangerous when it becomes detached from reality.

"Go and check. Talk to actual people – a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

OpenAI, the maker of ChatGPT, has been contacted for comment.

"Companies shouldn't claim/promote the idea that their AIs are conscious. The AIs shouldn't either," wrote Mr Suleyman, calling for better guardrails.

Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and an AI academic, believes that one day doctors may start asking patients how much they use AI, in the same way they currently ask about smoking and drinking habits.

"We already know what ultra-processed foods can do to the body, and this is ultra-processed information. We're going to get an avalanche of ultra-processed minds," she said.
'We're just at the start of this'