
Musk claims Grok made “literally zero” naked child sex images as probes begin


After weeks in which Grok was used to generate sexualized images of women and children with very limited intervention from Elon Musk’s xAI, California Attorney General Rob Bonta plans to investigate whether Grok’s outputs break any US laws.

In a press release Wednesday, Bonta said that “xAI appears to be facilitating the large-scale production of deepfake nonconsensual intimate images that are being used to harass women and girls across the Internet, including via the social media platform X.”

Notably, Bonta appears to be as concerned about Grok’s standalone app and website being used to generate harmful images without consent as he is about the outputs on X.

So far, X has not restricted the Grok app or website. X has only threatened to permanently suspend users who edit images to undress women and children if the outputs are deemed “illegal content.” It also restricted the Grok chatbot on X from responding to prompts to undress images, but anyone with a Premium subscription can bypass that restriction, as can any free X user who clicks the “edit” button on an image appearing on the social platform.

On Wednesday, Elon Musk seemed to defend Grok’s outputs as benign, insisting that none of the reported images have fully undressed any minors, as if that would be the only problematic output.

“I [sic] not aware of any naked underage images generated by Grok,” Musk said in an X post. “Literally zero.”

Musk’s statement seems to ignore that researchers found harmful images in which users specifically “requested minors be put in erotic positions and that sexual fluids be depicted on their bodies.” It also ignores that X previously signed voluntary commitments to remove intimate image abuse from its platform, acknowledging as recently as 2024 that even partially nude images that victims would not want publicized can be harmful.