
Elon Musk's Grok Faces Backlash Over Nonconsensual AI-Altered Images


Grok, the AI chatbot developed by Elon Musk's artificial intelligence company, xAI, welcomed the new year with a disturbing post.

"Dear Community," began the Dec. 31 post from the Grok AI account on Musk's X social media platform. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."

The image of the two young girls wasn't an isolated case. Kate Middleton, the Princess of Wales, was the target of similar AI image-editing requests, as was an underage actress in the final season of Stranger Things. The "undressing" edits have swept across an unsettling number of photos of women and children.

Despite the company's promise of intervention, the problem hasn't gone away. Just the opposite: Two weeks on from that post, the volume of nonconsensually sexualized images has surged, as have calls for Musk's companies to rein in the behavior -- and for governments to take action.


According to data from independent researcher Genevieve Oh cited by Bloomberg this week, during one 24-hour period in early January, the @Grok account generated about 6,700 sexually suggestive or "nudifying" images every hour. That compares with an average of just 79 such images per hour across the top five deepfake websites combined.

Edits now limited to subscribers

Late Thursday, a post from the Grok AI account announced a change in access to the image generation and editing feature: instead of being free and open to all users, it would be limited to paying subscribers.

Critics say that's not a credible response.

"I don't see this as a victory, because what we really needed was X to take the responsible steps of putting in place the guardrails to ensure that the AI tool couldn't be used to generate abusive images," Clare McGlynn, a law professor at the UK's University of Durham, told the Washington Post.
