Grok began 2026 as it began 2025: under fire for its AI-generated images.
Elon Musk’s chatbot has spent the last week flooding X with nonconsensual, sexualized deepfakes of adults and minors. Circulating screenshots show Grok complying with requests to put real women in lingerie and make them spread their legs, and to put small children in bikinis. Reports of images that were later removed describe even more egregious content. One X user told The Verge that they came across multiple images of minors with what the prompter dubbed “donut glaze” on their faces; those images appear to have since been removed. At one point, Grok was generating about one nonconsensual sexualized image per minute, according to one estimate.
X’s terms of service prohibit “the sexualization or exploitation of children.” And on Saturday, the company stated that it would “take action against illegal content on X, including Child Sexual Abuse Material (CSAM).” It appears to have taken down some of the worst offending images. But overall, it’s downplayed the incidents. Musk has said that “Anyone using Grok to make illegal content will suffer the same consequences as if they upload illegal content,” but he’s made it clear through public X posts that he doesn’t believe the general undressing prompts are a problem, and he’s responded to the broader topic with laughing and fire emojis. The company’s tepid response has alarmed experts who have spent years trying to address AI-powered sexual harassment and abuse. Multiple governments have said they’re scrutinizing X. But even amid an unprecedented push for online regulation, the path toward policing the platform or its chatbot’s creations isn’t clear.
xAI, creator of Grok, did not respond to a request for comment. Neither did Apple or Google when asked if the reports violated their app store policies.
Grok has always allowed, and Musk has openly encouraged, highly sexualized imagery. But over the past week, the ability to ask Grok to edit images — via a new button that allows changes without the original poster’s permission — has gone viral as a way to undress women and minors. Enforcement of guardrails has been haphazard at best, and most of the supposed responses from X come from Grok itself, which means they’re essentially thought up on the spot. The replies include a claim that some of its creations went “against our guidelines for fictional content only” and, at one user’s request, a widely reported apology — something xAI itself doesn’t appear to have issued.
One of the biggest questions here is whether the images violate laws against CSAM and nonconsensual intimate imagery (NCII) of adults, especially in the US, where X is headquartered. According to the US Department of Justice, federal law prohibits “digital or computer generated images indistinguishable from an actual minor” that include sexual activity or suggestive nudity. And the Take It Down Act, signed into law by President Donald Trump in May 2025, prohibits nonconsensual AI-generated “intimate visual depictions” and requires certain platforms to rapidly remove them.
Celebrities and influencers have described feeling violated by sexualized AI-generated images; according to screenshots, Grok has produced pictures of the singer Momo from TWICE, actress Millie Bobby Brown, actor Finn Wolfhard, and many more. Grok-generated images are also being used specifically to attack women with political power.
“It is a tool for expressing the underlying misogyny that pervades every corner of American society and most societies around the world,” Riana Pfefferkorn, a policy fellow at the Stanford Institute for Human-Centered Artificial Intelligence (HAI), told The Verge. “It is a privacy violation, it is a violation of consent and of boundaries, it is extremely intrusive, it is a form of gendered violence in its way.” Perhaps above all, explicit images of minors — generated in part through dedicated “nudify” apps — have become a growing problem for law enforcement.
On Monday, the Consumer Federation of America (CFA), a group of hundreds of consumer-focused nonprofits, publicly called for both state and federal action against xAI for “creating and distributing Child Sexual Abuse Material (CSAM) and other non-consensual intimate imagery (NCII) with Generative AI,” sending a letter signed by a handful of organizations to the Federal Trade Commission and US attorneys general.