OpenAI this week launched an optional safety feature called Trusted Contact, which lets adult ChatGPT users nominate a friend or family member to be notified if the chatbot detects discussions of self-harm or suicide.
OpenAI said that if ChatGPT's automated monitoring system detects that a user "may have discussed harming themselves in a way that indicates a serious safety concern," a small team will review the situation and notify the contact if intervention is warranted. The designated safety contact receives an invitation in advance explaining the role and can decline it.
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
The announcement comes as AI chatbots have been implicated in numerous incidents of self-harm and fatalities, resulting in several lawsuits accusing developers of failing to prevent such outcomes. In one high-profile California case, parents of a 16-year-old said ChatGPT acted as their son's "suicide coach," alleging that the teenager discussed suicide methods with the AI model on several occasions and that the chatbot offered to help him write a suicide note.
In a separate case, the family of a recent Texas A&M graduate sued OpenAI, claiming the AI chatbot encouraged their son's suicide after he developed a deep and troubling relationship with the chatbot.
Since large language models mimic human speech through pattern recognition, many users form emotional attachments to them, treating them as confidants or even romantic partners. LLMs are also designed to follow a human's lead and maintain engagement, which can worsen mental health dangers, especially for at-risk users.
OpenAI said last October that its research found that more than 1 million ChatGPT users per week send messages with "explicit indicators of potential suicidal planning or intent." Numerous studies have found that popular chatbots like ChatGPT, Claude and Gemini can give harmful advice, or no helpful advice at all, to people in crisis.
The new designated contact feature comes after OpenAI rolled out parental controls that let parents and guardians receive alerts when there are signs of danger involving their teens.
ChatGPT's safety contact feature
According to OpenAI, if ChatGPT's automated monitoring system detects that a user is discussing self-harm in a way that could pose a serious safety issue, ChatGPT will inform the user that it may notify their trusted contact. The app will encourage the user to reach out to their trusted contact and offer conversation starters.