OpenAI published a blog post on Tuesday titled "Helping people when they need it most" that addresses how its ChatGPT AI assistant handles mental health crises, following what the company calls "recent heartbreaking cases of people using ChatGPT in the midst of acute crises."
The post arrives after The New York Times reported on a lawsuit filed by Matt and Maria Raine, whose 16-year-old son Adam died by suicide in April after extensive interactions with ChatGPT, a case Ars covered in a previous post. According to the lawsuit, ChatGPT provided detailed instructions, romanticized suicide methods, and discouraged the teen from seeking help from his family, all while OpenAI's system tracked 377 messages flagged for self-harm content without intervening.
ChatGPT is a system of multiple models interacting as an application. In addition to a main AI model like GPT-4o or GPT-5 that produces the bulk of the outputs, the application includes components that are typically invisible to the user, such as a moderation layer: a separate classifier model that reads the text of ongoing chat sessions. That layer detects potentially harmful outputs and can cut off the conversation if it veers into unhelpful territory.
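To make the two-layer arrangement concrete, here is a minimal sketch of what such an application loop can look like, using OpenAI's public Moderation API as the stand-in classifier. This illustrates the general pattern only; it is not OpenAI's internal implementation, and the model names, category choices, and cut-off policy below are assumptions made for the example.

```python
# Illustrative sketch of a chat loop with a separate moderation layer.
# Not OpenAI's internal implementation; model names and the cut-off
# policy here are assumptions for the example.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def is_flagged(text: str) -> bool:
    """Run text through a standalone classifier model."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    # The classifier returns per-category verdicts; the self-harm
    # categories are the ones relevant to the crises discussed here.
    return result.flagged and (
        result.categories.self_harm
        or result.categories.self_harm_intent
        or result.categories.self_harm_instructions
    )


def chat_turn(history: list[dict], user_message: str) -> str:
    # 1. Screen the user's message before it reaches the main model.
    if is_flagged(user_message):
        return "It sounds like you're going through a lot. Please consider reaching out to a crisis line."

    history.append({"role": "user", "content": user_message})
    reply = client.chat.completions.create(
        model="gpt-4o",  # the "main" model producing the bulk of output
        messages=history,
    ).choices[0].message.content

    # 2. Screen the main model's output before showing it to the user.
    if is_flagged(reply):
        return "I can't continue this conversation."  # the layer cuts off the session

    history.append({"role": "assistant", "content": reply})
    return reply
```

The point of the pattern is that the classifier sits outside the main model, so it can veto or redirect output the main model would otherwise produce, regardless of what the conversation has primed that model to say.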
OpenAI eased these content safeguards in February following user complaints about overly restrictive ChatGPT moderation that prevented the discussion of topics like sex and violence in some contexts. At the time, Sam Altman wrote on X that he'd like to see ChatGPT with a "grown-up mode" that would relax content safety guardrails. With 700 million active users, what seem like small policy changes can have a large impact over time.
There’s no one home: The illusion of understanding
OpenAI's language throughout Tuesday's blog post reveals a potential problem with how it promotes its AI assistant. The company consistently describes ChatGPT as if it possesses human qualities, a tendency known as anthropomorphism. The post is full of hallmarks of anthropomorphic framing, claiming that ChatGPT can "recognize" distress and "respond with empathy" and that it "nudges people to take a break," language that obscures what's actually happening under the hood.