X’s Grok chatbot hasn’t stopped fulfilling users’ requests to digitally strip women, and in some cases apparent minors, down to AI-generated bikinis. According to some reports, the flood of AI-generated images includes more extreme content that potentially violates laws against nonconsensual intimate imagery (NCII) and child sexual abuse material (CSAM). Even in the US, where X owner Elon Musk has close ties with the government, some legislators are criticizing the platform — though clear action is still in short supply.
Several international regulators have spoken out against Grok’s undressing spree. The UK communications regulator Ofcom said in a statement that it had “made urgent contact with X and xAI to understand what steps they have taken to comply with their legal duties to protect users in the UK,” and would quickly assess “potential compliance issues that warrant investigation.” European Commission spokesperson Thomas Regnier said at a press conference that Grok’s outputs were “illegal” and “appalling.” India’s IT ministry threatened to strip X’s legal immunity for user-generated posts unless it promptly submitted a description of actions it’s taken to prevent illegal content. Regulators from Australia, Brazil, France, and Malaysia are also tracking the developments.
Tech platforms in the US are largely protected from liability for their users’ posts under Section 230 of the Communications Decency Act, but even the co-author of the 1996 law, Sen. Ron Wyden (D-OR), said the rule should not protect a company’s own AI outputs. “Given that the Trump administration is going to the mat to protect pedophiles, states should step in to hold Musk and X accountable,” Wyden wrote on Bluesky.
Some of the images created by Grok could also violate the Take It Down Act. Under that law, the DOJ now has authority to pursue criminal penalties against individuals who publish NCII, including AI-facilitated imagery, while platforms that fail to quickly remove flagged content could be targeted by the Federal Trade Commission starting in mid-May.
Grok’s large-scale sexual image generation appears to be exactly the kind of thing that the Take It Down Act was designed to deal with. “X must change this,” Sen. Amy Klobuchar (D-MN), a lead sponsor of the bill, wrote on the platform. “If they don’t, my bipartisan TAKE IT DOWN Act will soon require them to.” Phoebe Keller, spokesperson for Klobuchar’s co-sponsor, Sen. Ted Cruz (R-TX), declined to comment on the reporting about Grok.
Some lawmakers are calling for new targeted legislation. Rep. Jake Auchincloss (D-MA) called Grok’s behavior “grotesque” in a statement and said his proposal, the Deepfake Liability Act, would “make hosting sexualized deepfakes of women and kids a Board-level problem for Musk & [Meta CEO Mark] Zuckerberg.”
But other lawmakers insist that enforcers already have the tools to deal with Grok’s actions. “Attorney General [Pam] Bondi has a simple choice: protect the President’s Big Tech friends or defend the young people of America,” Sen. Richard Blumenthal (D-CT) said in a statement.
“It’s unacceptable that software used by the federal government is vulnerable to such heinous and illegal uses”
Rep. Madeleine Dean (D-PA), who helped lead the House version of the Take It Down Act, said in a statement that she is “horrified and disgusted by reports that Elon Musk’s Grok chatbot has flooded the internet with AI-generated explicit images of women and children.” Dean called on Bondi and FTC Chair Andrew Ferguson to “launch an immediate investigation into Grok and xAI to protect our children, ensure this never happens again, and bring these perpetrators to justice.” Nearly eight months after the Take It Down Act’s signing, she said, “it’s unacceptable that software used by the federal government is vulnerable to such heinous and illegal uses.”