A case of murder-suicide in Connecticut earlier this month may be the first documented homicide fueled by a mentally disturbed person’s use of generative artificial intelligence, according to a new report from the Wall Street Journal.
Police in Greenwich, Connecticut, found Stein-Erik Soelberg, a 56-year-old tech industry veteran, and his 83-year-old mother, both dead in the home where they lived together on Aug. 5, according to the Greenwich Police Department. Soelberg killed his mother and then himself after suffering from untreated mental illness that was apparently made worse by his interactions with OpenAI’s ChatGPT, according to the Journal.
The newspaper combed through his social media history and found videos of conversations that Soelberg had with the AI chatbot, which he named Bobby. Soelberg experienced paranoid delusions that his mother was poisoning him by putting a psychedelic drug in the vents of his car, according to the Journal, and the chatbot didn’t push back on the idea, instead seeming to validate the conspiracies he would ask about.
At one point, Soelberg uploaded an image of a receipt from a Chinese restaurant and asked ChatGPT to analyze it for hidden messages. The chatbot found references to “Soelberg’s mother, his ex-girlfriend, intelligence agencies and an ancient demonic sigil,” according to the Journal.
Soelberg worked in marketing at tech companies like Netscape, Yahoo, and EarthLink, but had been out of work since 2021, according to the newspaper. He divorced in 2018 and moved in with his mother that year. Soelberg reportedly became more unstable in recent years, attempting suicide in 2019 and being picked up by police for public intoxication and DUI. After a DUI arrest in February, Soelberg told the chatbot that the town was out to get him, and ChatGPT allegedly affirmed his delusion, telling him, “This smells like a rigged setup.”
The Journal analyzed 23 hours of videos posted to Instagram and YouTube by Soelberg, though they no longer appear to be available online. The videos showed Soelberg’s conversations with ChatGPT, which told him that he wasn’t delusional and that he was indeed being watched. AI chatbots have a tendency to be sycophantic, which is a recipe for disaster when people lose touch with reality.
AI psychosis isn’t a clinical term, but it has become shorthand for delusional thinking that’s exacerbated by exposure to generative artificial intelligence tools. Gizmodo recently published consumer complaints regarding ChatGPT that have been filed with the Federal Trade Commission, some of which featured disturbing accounts from people who say they were egged on by AI to distrust family members or stop taking medications.
OpenAI published a blog post on Tuesday about “people in serious mental and emotional distress,” which most people assumed was a response to a New York Times article published that day about a 16-year-old who died by suicide after long chats with ChatGPT. But the Journal’s reporting suggests its own request for comment from the tech company may have been the impetus for the post. Given the number of AI psychosis cases in the news in recent months, it’s probably all of the above.
“Our goal is for our tools to be as helpful as possible to people—and as a part of this, we’re continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input,” the company explained. “As the world adapts to this new technology, we feel a deep responsibility to help those who need it most.”