
OpenAI’s ChatGPT parental controls are rolling out — here’s what you should know




OpenAI has rolled out some long-awaited parental controls for ChatGPT to all web users, with mobile coming “soon,” according to the company.

The controls, announced last month, allow for reducing or removing certain content — like sexual roleplay and the ability to generate images — and reducing the level of personalization in ChatGPT conversations by turning off its memory of past chats.

Parents must have their own accounts to access the controls, and teens must opt in, either by inviting a parent to link their account or by accepting a parent’s invitation. Teens can disconnect their accounts at any time, though parents will be notified if that happens. Parents don’t have access to their teens’ conversations, even with a linked account. The only potential exception: “in rare cases where our system and trained reviewers detect possible signs of serious safety risk, parents may be notified — but only with the information needed to support their teen’s safety,” per OpenAI.

Once the controls are set up, here are the new adjustments parents will be able to make for teen accounts, according to the company’s blog post and announcement thread. There’s also a parent resource page available.

Reduce sensitive content: Parents can add additional protections like reducing “graphic content, viral challenges, sexual, romantic or violent roleplay and extreme beauty ideals,” per OpenAI, and the setting is on by default for teen accounts once they’re linked to a parent’s account.

Turn off ChatGPT’s memory of past chats: Turning this off allows for less personalization and potentially better-working guardrails. In a blog post last month, for instance, OpenAI noted ChatGPT may correctly point to a suicide hotline the first time someone makes a concerning comment — but “after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

Turn off OpenAI’s own models training on a teen’s chats: Parents can control whether their teen’s past transcripts and files “can be used to improve our models,” per OpenAI.

Turn on “quiet hours”: Parents will be able to set times when their teen won’t have access to ChatGPT.
