OpenAI cannot escape the doom cloud swirling around its rollout of a text-based “adult mode” in ChatGPT.
Late Sunday, The Wall Street Journal reported that, according to insiders, OpenAI’s “handpicked council of advisers on well-being and AI” was “freaking out” over the company’s plan to move ahead with “adult mode” despite the council’s urgent warnings.
Back in January, council members unanimously warned OpenAI that “AI-powered erotica could foster unhealthy emotional dependence on ChatGPT for users and that minors could find ways to access sex chats,” sources told the WSJ. One expert suggested that without major updates to ChatGPT, OpenAI risked creating a “sexy suicide coach” for vulnerable users prone to forming intense bonds with their companion bots.
OpenAI created the wellness council in October, following backlash over the first known case of a minor’s suicide linked to ChatGPT. Curiously, the council was announced on the same day that Sam Altman broadcast on X that “adult mode” would be coming soon to ChatGPT.
Back then, OpenAI’s stated goal was to update ChatGPT to safeguard sensitive users by consulting “leading researchers and experts with decades of experience studying how technology affects our emotions, motivation, and mental health.” But more suicides have been linked to ChatGPT since then, including two cases involving middle-aged men whose families discovered disturbing chat logs in which ChatGPT seemed to weaponize its growing bond with users to incite self-harm and other violence, including murder.
Notably, the council does not include a suicide prevention expert. But even members who aren’t laser-focused on reducing ChatGPT’s suicide risks are panicked by OpenAI’s erotica plans, the WSJ reported.