OpenAI on Wednesday released a new policy blueprint for how it should address one of the most important and consequential issues of the AI age: protecting its youngest users.
Like every AI company trying to avoid lawsuits, OpenAI has guardrails to prevent its AI from being used for illegal or harmful purposes. But, as with every tech company, those rules have proven easy to get around. This can come with devastating results, particularly for children and teenagers, as in a Florida family's lawsuit against OpenAI alleging that their 17-year-old son used ChatGPT as a "suicide coach."
OpenAI's plan focuses on strengthening existing laws and technical safeguards to keep up with the capabilities of generative AI. The framework was developed in collaboration with the child safety advocacy groups Thorn and the National Center for Missing and Exploited Children, as well as the Attorney General Alliance's AI task force, led by North Carolina Attorney General Jeff Jackson and Utah Attorney General Derek Brown.
This plan includes a series of recommendations, including guardrails OpenAI has already implemented and others it's actively building, the company told CNET. The roadmap is broad, calling for coordination between tech companies, state and federal governments, law enforcement and advocacy groups. While that kind of coordination could bolster the odds of success, regulating AI models has proven to be an ongoing challenge, and implementing effective policy is hardly a guarantee.
Keeping kids safe online, including when using AI, is an especially heated debate in the tech world. It has been reignited in the wake of two landmark court cases in which Meta and Google were found negligent for failing to protect young users. Given all this, AI companies are under increased pressure to lay out how they plan to keep users safe and avoid past mistakes.
Tougher laws and technical guardrails
One of the biggest issues the blueprint deals with is child sexual abuse material. CSAM existed before AI, but generative AI has turbocharged the work of bad actors. This became startlingly clear in January, when people using xAI's Grok generated approximately 3 million sexual AI images over 11 days, 23,000 of which depicted children.
The deepfake trend was extensive and sparked widespread outrage, prompting investigations into Elon Musk's xAI and a lawsuit from three teenage girls who were victims of these nonconsensual AI-generated sexual images. xAI removed Grok's image-editing ability from X (formerly Twitter), but its "spicy mode" is still available through the standalone website.
OpenAI and its collaborators are recommending updates to existing laws governing the creation and sharing of deepfakes and CSAM. So far, 45 states have criminalized AI- and computer-generated CSAM, according to a 2025 report. The new plan calls for enacting such laws in all 50 states and the District of Columbia. It also calls for clarifying liability rules so law enforcement can prosecute those who try to make CSAM, even if those attempts are blocked by the AI company.