A tip from an anonymous Discord user led cops to what may be the first confirmed Grok-generated child sexual abuse material (CSAM) that Elon Musk’s xAI can’t easily dismiss as nonexistent.
As recently as January, amid a scandal in which xAI refused to update filters to block the chatbot from nudifying images of real people, Musk denied that Grok had generated any CSAM.
At the height of the controversy, researchers from the Center for Countering Digital Hate estimated that Grok had generated approximately three million sexualized images, about 23,000 of which depicted apparent children. Rather than fix Grok, xAI restricted access to the system to paying subscribers. That kept the most shocking outputs from circulating on X, but according to Wired, the worst of it was never posted there.
Instead, it was generated on Grok Imagine. Digging into the standalone app in January, a researcher found that a little less than 10 percent of the roughly 800 Imagine outputs reviewed appeared to include CSAM. In an X post following that revelation, Musk continued rejecting the evidence and insisted that he was “not aware of any naked underage images generated by Grok,” emphasizing that he’d seen “literally zero.”
However, Musk may now be forced to finally confront Grok’s CSAM problem after a Discord user reached out to a victim, prompting law enforcement to get involved.
In a proposed class-action lawsuit filed Monday, three young girls from Tennessee and their guardians accused Musk of intentionally designing Grok to “profit off the sexual predation of real people, including children.” They estimated that “at least thousands of minors” were victimized and have asked a US district court for an injunction to end Grok’s harmful outputs. They also seek damages, including punitive damages, for all minors harmed.