ZDNET's key takeaways
OpenAI is giving ChatGPT new safeguards.
The updates follow a case in which a teen used ChatGPT to learn how to take his own life.
OpenAI may add further parental controls for young users.
ChatGPT doesn't have a good track record of intervening when a user is in emotional distress, but several updates from OpenAI aim to change that.
The company announced Thursday that it is improving how its chatbot responds to distressed users: strengthening safeguards, updating how and what content is blocked, expanding intervention, localizing emergency resources, and bringing a parent into the conversation when needed. In the future, a guardian might even be able to see how their kid is using the chatbot.
People go to ChatGPT for everything, including advice, but the chatbot might not be equipped to handle the more sensitive queries some users are asking. OpenAI CEO Sam Altman himself said he wouldn't trust AI for therapy, citing privacy concerns. A recent Stanford study detailed how chatbots lack the critical training human therapists have to identify when a person is a danger to themselves or others, for example.