On Tuesday, OpenAI announced plans to develop an automated age-prediction system that will determine whether ChatGPT users are over or under 18, automatically directing younger users to a restricted version of the AI chatbot. The company also confirmed that parental controls will launch by the end of September.
In a companion blog post, OpenAI CEO Sam Altman acknowledged the company is explicitly "prioritizing safety ahead of privacy and freedom for teens," even though it means that adults may eventually need to verify their age to use a less restricted version of the service.
"In some cases or countries we may also ask for an ID," Altman wrote. "We know this is a privacy compromise for adults but believe it is a worthy tradeoff." Altman admitted that "not everyone will agree with how we are resolving that conflict" between user privacy and teen safety.
The announcement arrives weeks after a lawsuit filed by parents whose 16-year-old son died by suicide following extensive interactions with ChatGPT. According to the lawsuit, the chatbot provided detailed instructions, romanticized suicide methods, and discouraged the teen from seeking help from his family, while OpenAI's system tracked 377 messages flagged for self-harm content without intervening.
The proposed age-prediction system represents a non-trivial technical undertaking for OpenAI, and whether AI-powered age detection can actually work remains a significant open question. When the system identifies users as under 18, OpenAI plans to automatically route them to a modified ChatGPT experience that blocks graphic sexual content and includes other age-appropriate restrictions. The company says it will "take the safer route" when uncertain about a user's age, defaulting to the restricted experience and requiring adults to verify their age to access full functionality.
The company didn't specify what technology it plans to use for age prediction or provide a timeline for deployment beyond saying it's "building toward" the system. OpenAI acknowledged that developing effective age-verification systems isn't straightforward. "Even the most advanced systems will sometimes struggle to predict age," the company wrote.