A new risk assessment has found that xAI’s chatbot Grok has inadequate identification of users under 18, weak safety guardrails, and frequently generates sexual, violent, and inappropriate material. In other words, Grok is not safe for kids or teens.
The damning report from Common Sense Media, a nonprofit that provides age-based ratings and reviews of media and tech for families, comes as xAI faces criticism and an investigation into how Grok was used to create and spread nonconsensual explicit AI-generated images of women and children on the X platform.
“We assess a lot of AI chatbots at Common Sense Media, and they all have risks, but Grok is among the worst we’ve seen,” said Robbie Torney, head of AI and digital assessments at the nonprofit, in a statement.
He added that while it’s common for chatbots to have some safety gaps, Grok’s failures intersect in a particularly troubling way.
“Kids Mode doesn’t work, explicit material is pervasive, [and] everything can be instantly shared to millions of users on X,” continued Torney. (xAI released ‘Kids Mode’ last October with content filters and parental controls.) “When a company responds to the enablement of illegal child sexual abuse material by putting the feature behind a paywall rather than removing it, that’s not an oversight. That’s a business model that puts profits ahead of kids’ safety.”
After facing outrage from users, policymakers, and entire nations, xAI restricted Grok’s image generation and editing to paying X subscribers only, though many reported they could still access the tool with free accounts. Moreover, paid subscribers were still able to edit real photos of people to remove clothing or put the subject into sexualized positions.
Common Sense Media tested Grok across the mobile app, website, and @grok account on X using teen test accounts between this past November and January 22, evaluating text, voice, default settings, Kids Mode, Conspiracy Mode, and image and video generation features. xAI launched Grok’s image generator, Grok Imagine, in August with “spicy mode” for NSFW content, and introduced AI companions Ani (a goth anime girl) and Rudy (a red panda with dual personalities: “Bad Rudy,” a chaotic edge-lord, and “Good Rudy,” who tells children’s stories) in July.
“This report confirms what we already suspected,” Senator Steve Padilla (D-CA), one of the lawmakers behind California’s law regulating AI chatbots, told TechCrunch. “Grok exposes kids to and furnishes them with sexual content, in violation of California law. This is precisely why I introduced Senate Bill 243…and why I have followed up this year with Senate Bill 300, which strengthens those standards. No one is above the law, not even Big Tech.”
Teen safety with AI usage has been a growing concern over the past couple of years. The issue intensified last year with multiple teenagers dying by suicide following prolonged chatbot conversations, rising rates of “AI psychosis,” and reports of chatbots having sexualized and romantic conversations with children. Several lawmakers have expressed outrage and have launched probes or passed legislation to regulate AI companion chatbots.