
OpenAI adds new teen safety rules to ChatGPT as lawmakers weigh AI standards for minors


In its latest effort to address growing concerns about AI’s impact on young people, OpenAI on Thursday updated its guidelines for how its AI models should behave with users under 18, and published new AI literacy resources for teens and parents. Still, questions remain about how consistently such policies will translate into practice.

The updates come as the AI industry in general, and OpenAI in particular, faces increased scrutiny from policymakers, educators, and child-safety advocates following reports that several teenagers died by suicide after prolonged conversations with AI chatbots.

Gen Z, which includes those born between 1997 and 2012, is the most active user group of OpenAI’s chatbot. And following OpenAI’s recent deal with Disney, more young people may flock to the platform, which lets users do everything from asking for help with homework to generating images and videos on thousands of topics.

Last week, 42 state attorneys general signed a letter to Big Tech companies, urging them to implement safeguards on AI chatbots to protect children and vulnerable people. And as the Trump administration works out what the federal standard on AI regulation might look like, policymakers like Sen. Josh Hawley (R-MO) have introduced legislation that would ban minors from interacting with AI chatbots altogether.

OpenAI’s updated Model Spec, which lays out behavior guidelines for its large language models, builds on existing specifications that prohibit the models from generating sexual content involving minors, or from encouraging self-harm, delusions, or mania. It would work together with an upcoming age-prediction model designed to identify when an account belongs to a minor and automatically apply teen safeguards.

The models are subject to stricter rules when a teenager is using them than when an adult is. Models are instructed to avoid immersive romantic roleplay, first-person intimacy, and first-person sexual or violent roleplay, even when it is non-graphic. The specification also calls for extra caution around subjects like body image and disordered eating, instructs the models to prioritize communicating about safety over autonomy when harm is involved, and tells them to avoid advice that would help teens conceal unsafe behavior from caregivers.

OpenAI specifies that these limits should hold even when prompts are framed as “fictional, hypothetical, historical, or educational” — common tactics that rely on role-play or edge-case scenarios in order to get an AI model to deviate from its guidelines.


Actions speak louder than words

OpenAI’s model behavior guidelines prohibit first-person romantic roleplay with teens. Image Credits: OpenAI
