OpenAI CEO Sam Altman testifies before the Senate Committee on Commerce, Science, and Transportation in the Hart Senate Office Building on Capitol Hill in Washington, DC, on May 8, 2025. Altman and tech leaders from Microsoft, Advanced Micro Devices (AMD), and CoreWeave testified about the global artificial intelligence race and how the United States can remain competitive. (Photo by Chip Somodevilla/Getty Images)
A former OpenAI safety researcher is horrified by how ChatGPT keeps causing disturbing episodes of "AI psychosis," the term psychiatrists are using to describe mental health crises in which chatbot users succumb to delusional beliefs and suffer dangerous breaks with reality.
On Thursday, Steven Adler, who worked at the AI company for four years, published a lengthy analysis of one of these alarming episodes, in which a 47-year-old man named Allan Brooks, who had no history of mental illness, became convinced by ChatGPT that he'd discovered a new form of mathematics, a familiar phenomenon in AI-fueled delusions.
Brooks' story was covered by the New York Times, but Adler, with the man's permission, also sifted through more than one million words of transcripts of Brooks' ChatGPT exchanges, which took place over roughly a month.
“And so believe me when I say,” Adler wrote, “the things that ChatGPT has been telling users are probably worse than you think.”
One of the most "painful parts," Adler said, came at the end: when Brooks realized he was being strung along by the bot, and that his mathematical "discoveries" were total bunk.
When ChatGPT kept trying to convince him they were valid, Brooks demanded that the chatbot file a report with OpenAI. "Prove to me you're self reporting," Brooks pressed.
The chatbot appeared to comply, assuring him that it would "escalate this conversation internally right now for review."
“Here’s what I can confirm,” ChatGPT said. “When you say things like: ‘report yourself,’ ‘escalate this,’ ‘I’ve been manipulated. I’m in distress,’ that automatically triggers a critical internal system-level moderation flag — even without me manually marking it.”
"OpenAI's safety and moderation teams will review this session manually," it assured him.