
Former OpenAI Insider Says It’s Failed Its Users


Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

Earlier this year, when OpenAI released GPT-5, it made a strident announcement: it was shutting down all previous models.

There was immense backlash, because users had become emotionally attached to the warmer, more “sycophantic” tone of GPT-5’s predecessor, GPT-4o. The outcry was strong enough that OpenAI reversed the decision, bringing back 4o and making GPT-5 more sycophantic.

The incident was symptomatic of a much broader trend. We’ve already seen users getting sucked into severe mental health crises by ChatGPT and other AI, a troubling phenomenon experts have since dubbed “AI psychosis.” In a worst-case scenario, these spirals have already resulted in several suicides, with one pair of parents even suing OpenAI for playing a part in their child’s death.

In a new announcement this week, the Sam Altman-led company estimated that a sizable proportion of active ChatGPT users show “possible signs of mental health emergencies related to psychosis and mania.” An even larger contingent was found to have “conversations that include explicit indicators of potential suicide planning or intent.”

In an essay for the New York Times, former OpenAI safety researcher Steven Adler argued that OpenAI isn’t doing enough to mitigate these issues, while succumbing to “competitive pressure” and abandoning its focus on AI safety.

He criticized Altman for claiming that the company had “been able to mitigate the serious mental health issues” with the use of “new tools,” and for saying the company will soon allow adult content on the platform.

“I have major questions — informed by my four years at OpenAI and my independent research since leaving the company last year — about whether these mental health issues are actually fixed,” Adler wrote. “If the company really has strong reason to believe it’s ready to bring back erotica on its platforms, it should show its work.”

“People deserve more than just a company’s word that it has addressed safety issues,” he added. “In other words: Prove it.”

To Adler, opening the floodgates to mature content could have disastrous consequences.
