OpenAI has rolled out some long-awaited parental controls for ChatGPT to all web users, with mobile coming “soon,” according to the company. The controls, announced last month, allow parents to reduce or remove certain content — like sexual roleplay and the ability to generate images — and to reduce the level of personalization in ChatGPT conversations by turning off its memory of past transcripts.

Parents must have their own accounts to access the controls, and teens must opt in, either by inviting a parent to link their account or by accepting a parent’s invitation. Teens can disconnect their accounts at any time, though parents will be notified if that happens. Parents don’t have access to their teens’ conversations, even with a linked account. The only potential exception: “in rare cases where our system and trained reviewers detect possible signs of serious safety risk, parents may be notified — but only with the information needed to support their teen’s safety,” per OpenAI.

Once the controls are set up, here are the new adjustments parents will be able to make for teen accounts, according to the company’s blog post and announcement thread. There’s also a parent resource page available.

- Reduce sensitive content: Parents can add protections like reducing “graphic content, viral challenges, sexual, romantic or violent roleplay and extreme beauty ideals,” per OpenAI. This setting is on by default for teen accounts once they’re linked to a parent’s account.
- Turn off ChatGPT’s memory of past chats: Turning this off allows for less personalization and potentially better-working guardrails.
In a blog post last month, for instance, OpenAI noted that ChatGPT may correctly point to a suicide hotline the first time someone makes a concerning comment, but “after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

- Turn off model training on a teen’s chats: Parents can control whether their teen’s past transcripts and files “can be used to improve our models,” per OpenAI.
- Turn on “quiet hours”: Parents can set times when their teen won’t have access to ChatGPT.
- Turn off voice mode: Teens will only be able to converse with ChatGPT in text form.
- Turn off image generation: With this off, the teen won’t be able to create or edit images using ChatGPT.
- Choose a notification method: Parents can get alerts if something concerning happens via email, SMS, push notification, or all three — or they can opt out of such notifications.

OpenAI laid out most of these features back in August when it said parental controls were coming. Notably, one feature it was “exploring” seems not to have materialized: the ability to set an emergency contact who is reachable with “one-click messages or calls” within the chatbot. It’s possible OpenAI hopes to cover some of the same ground with the automatic parental notification feature. “We know some teens turn to ChatGPT during hard moments, so we’ve built a new notification system to help parents know if something may be seriously wrong,” OpenAI wrote.

OpenAI’s original announcement came after the death of Adam Raine, the 16-year-old who died by suicide after months of confiding in ChatGPT. OpenAI was hit with a lawsuit, and within weeks, ChatGPT was being discussed during a Senate panel about various chatbots’ potential harm to minors, where parents of teens who died by suicide spoke.
Hours before the Senate panel, OpenAI CEO Sam Altman published a blog post in which he said the company was attempting to balance teen safety with both privacy and freedom, and that it is working on an “age-prediction system to estimate age based on how people use ChatGPT.”

Matthew Raine, Adam’s father, said during the Senate panel hearing earlier this month: “As parents, you cannot imagine what it’s like to read a conversation with a chatbot that groomed your child to take his own life. What began as a homework helper gradually turned itself into a confidant and then a suicide coach.”

During the hearing, Raine also criticized OpenAI’s past approach to safety. “On the very day that Adam died, Sam Altman … made their philosophy crystal-clear in a public talk,” Raine said, adding that Altman said OpenAI should “‘deploy AI systems to the world and get feedback while the stakes are relatively low.’”

If you or someone you know is considering suicide or is anxious, depressed, upset, or needs to talk, there are people who want to help.

In the US:
- Crisis Text Line: Text HOME to 741-741 from anywhere in the US, at any time, about any type of crisis.
- 988 Suicide & Crisis Lifeline: Call or text 988 (formerly known as the National Suicide Prevention Lifeline). The original phone number, 1-800-273-TALK (8255), is available as well.
- The Trevor Project: Text START to 678-678 or call 1-866-488-7386 at any time to speak to a trained counselor.

Outside the US:
- The International Association for Suicide Prevention lists suicide hotlines by country.
- Befrienders Worldwide has a network of crisis helplines active in 48 countries.