ChatGPT’s age prediction model works by examining behavioral and account-level signals, including a user’s stated age, how old the account is, when the user is active, and usage patterns over time (a rough sketch of how such signals might be combined follows the list below). Additional protections will then be applied to users ChatGPT estimates are under 18, restricting their exposure to sensitive content such as:
Graphic violence or gory content
Viral challenges that could encourage risky or harmful behavior in minors
Sexual, romantic, or violent roleplay
Depictions of self-harm
Content that promotes extreme beauty standards, unhealthy dieting, or body-shaming
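OpenAI hasn’t published how its model actually weighs these signals, but a minimal Python sketch can illustrate the general idea of combining stated age, account age, and activity patterns into an under-18 estimate. Every name, threshold, and weight below is an invented assumption for illustration, not OpenAI’s implementation, which is a learned model rather than a hand-tuned heuristic.

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical signal bundle; the real feature set is not public.
@dataclass
class AccountSignals:
    stated_age: int | None            # age the user entered at signup, if any
    account_created: datetime         # used to derive account age
    late_night_activity_ratio: float  # share of sessions between midnight and 5 a.m. (0-1)
    usage_pattern_score: float        # stand-in for longer-term behavioral features (0-1)

def is_probably_minor(signals: AccountSignals, now: datetime | None = None) -> bool:
    """Toy heuristic: decide whether to apply the under-18 experience.

    Illustrative only; thresholds and weights are made up for this example.
    """
    now = now or datetime.utcnow()

    # A stated age under 18 is treated as decisive.
    if signals.stated_age is not None and signals.stated_age < 18:
        return True

    account_age_days = (now - signals.account_created).days

    # Combine the remaining signals into a rough score.
    score = 0.0
    if account_age_days < 90:
        score += 0.3
    score += 0.4 * signals.late_night_activity_ratio
    score += 0.3 * signals.usage_pattern_score

    return score >= 0.5
```

In practice an account flagged this way would get the restricted content policies above applied by default, with the selfie-based verification described next acting as the override path for adults caught by mistake.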
Adult users who are incorrectly placed in the under-18 experience can restore their unrestricted access by verifying their age with a selfie. Age prediction is rolling out globally, except in the EU, where OpenAI says it will “roll out in the coming weeks to account for regional requirements.”
These age prediction features and additional safeguards come after ChatGPT became the focus of a teen suicide lawsuit and a Senate panel hearing on chatbots’ potential harm to minors.