ZDNET's key takeaways

- OpenAI is giving ChatGPT new safeguards.
- A teen recently used ChatGPT to learn how to take his own life.
- OpenAI may add further parental controls for young users.

ChatGPT doesn't have a good track record of intervening when a user is in emotional distress, but several updates from OpenAI aim to change that. The company is strengthening how its chatbot responds to distressed users: updating how and what content is blocked, expanding interventions, localizing emergency resources, and bringing a parent into the conversation when needed, it announced on Thursday. In the future, a guardian might even be able to see how their kid is using the chatbot.

People go to ChatGPT for everything, including advice, but the chatbot might not be equipped to handle the more sensitive queries some users are asking. OpenAI CEO Sam Altman himself has said he wouldn't trust AI for therapy, citing privacy concerns. A recent Stanford study detailed how chatbots lack the critical training human therapists have to identify when a person is a danger to themselves or others.

Teen suicides connected to chatbots

Those shortcomings can have heartbreaking consequences. In April, a teen boy who had spent hours discussing his own suicide and methods with ChatGPT took his own life. His parents have filed a lawsuit against OpenAI alleging that ChatGPT "neither terminated the session nor initiated any emergency protocol" despite demonstrating awareness of the teen's suicidal state. In a similar case, AI chatbot platform Character.ai is being sued by a mother whose teen son died by suicide after engaging with a bot that allegedly encouraged him.

ChatGPT has safeguards, but they tend to work better in shorter exchanges.
"As the back-and-forth grows, parts of the model's safety training may degrade," OpenAI writes in the announcement. Initially, the chatbot might direct a user to a suicide hotline, but over time, as the conversation wanders, the bot might offer an answer that flouts its safeguards.

"This is exactly the kind of breakdown we are working to prevent," OpenAI writes, adding that its "top priority is making sure ChatGPT doesn't make a hard moment worse."

Increased safeguards for users

One way to do so is to strengthen safeguards across the board so the chatbot doesn't instigate or encourage harmful behavior as a conversation continues. Another is to ensure that inappropriate content is thoroughly blocked -- an issue the company has confronted with its chatbot in the past. "We're tuning those [blocking] thresholds so protections trigger when they should," the company writes.

OpenAI is also working on a de-escalation update that grounds users in reality and extends protections beyond self-harm to other forms of mental distress. The company is making it easier for the bot to connect users to emergency services or expert help when they express intent to harm themselves: it has implemented one-click access to emergency services and is exploring connecting users to certified therapists. OpenAI said it is "exploring ways to make it easier for people to reach out to those closest to them," which could include designating emergency contacts and setting up a dialogue to make conversations with loved ones easier.

OpenAI reported that its recently released GPT-5 model reduces poor responses to mental health emergencies by more than 25%, and improves on related benchmarks such as avoiding emotional reliance and reducing sycophancy.
"GPT‑5 also builds on a new safety training method called safe completions, which teaches the model to be as helpful as possible while staying within safety limits. That may mean giving a partial or high-level answer instead of detail that could be unsafe," it said.