Starting today, OpenAI is rolling out ChatGPT safety tools intended for parents to use with their teenagers. This worldwide update includes the ability for parents, and in some cases law enforcement, to be notified if a child (in this case, users between the ages of 13 and 18) engages in chatbot conversations about self-harm or suicide. These changes arrive as OpenAI is being sued by parents who allege ChatGPT played a role in the death of their child. The chatbot allegedly encouraged the suicidal teen to hide a noose in their room out of sight from family members, according to reporting from The New York Times.

The update also changes the overall content experience for teens using ChatGPT. “Once parents and teens connect their accounts, the teen account will automatically get additional content protections,” reads OpenAI’s blog post announcing the launch, “including reduced graphic content, viral challenges, sexual, romantic or violent roleplay, and extreme beauty ideals, to help keep their experience age-appropriate.”

Under the new restrictions, if a teen using a ChatGPT account enters a prompt related to self-harm or suicidal ideation, the prompt is sent to a team of human reviewers who decide whether to trigger a parental notification. “We will contact you as a parent in every way we can,” says Lauren Haber Jonas, OpenAI’s head of youth well-being. Parents can opt to receive these alerts by text, email, and push notification from the ChatGPT app.

The warnings parents receive in these situations are expected to arrive within hours of the conversation being flagged for review. In moments where every minute counts, this delay will likely frustrate parents who want more immediate alerts about their child’s safety. OpenAI is working to reduce the lag time for notifications.

The alert OpenAI may send to parents will state broadly that the child may have written a prompt related to suicide or self-harm. It may also include conversation strategies from mental health experts for parents to use when talking with their child. In a prelaunch demo, the example email’s subject line shown to WIRED highlighted safety concerns but did not explicitly mention suicide.

The parental notifications won’t include any direct quotes from the child’s conversation, neither the prompts nor the outputs. Parents can follow up on a notification and request conversation timestamps. “We want to give parents enough information to take action and have a conversation with their teens while still maintaining some amount of teen privacy,” says Jonas, “because the content can also include other sensitive information.”

Both the parent’s and the teen’s accounts have to be opted in for these safety features to be activated. This means parents will need to send their teen an invitation to have their account monitored, and the teen must accept it. The teen can also initiate the account linkage.

OpenAI may contact law enforcement in situations where human moderators determine that a teen may be in danger and the parents cannot be reached via notification. It’s unclear what this coordination with law enforcement will look like, especially on a global scale.