The European Commission said Monday it had opened an investigation into Elon Musk's X after Grok, the chatbot built by Musk's xAI, was found to be creating and distributing sexually explicit images.
"The new investigation will assess whether the company properly assessed and mitigated risks associated with the deployment of Grok's functionalities into X in the EU," the EU said in a statement. "This includes risks related to the dissemination of illegal content in the EU, such as manipulated sexually explicit images, including content that may amount to child sexual abuse material."
Musk says that Grok will "refuse to produce anything illegal," but that hasn't satisfied regulators around the world. Earlier this month, California Attorney General Rob Bonta announced an investigation into the "proliferation of nonconsensual sexually explicit material produced using Grok."
"The avalanche of reports detailing the nonconsensual, sexually explicit material that xAI has produced and posted online in recent weeks is shocking," Bonta said in the statement. "This material, which depicts women and children in nude and sexually explicit situations, has been used to harass people across the internet. I urge xAI to take immediate action to ensure this goes no further."
The investigations of Grok and xAI by the EU and California are the latest salvo in the backlash against the explosion of erotic deepfake pictures on Grok and X, formerly Twitter. Since the problem emerged near the turn of the year, government regulators worldwide have launched similar inquiries, and two countries -- Indonesia and Malaysia -- have decided to block the platform completely.
Along with the government actions, three US senators urged Apple and Google to remove the X and Grok apps from the App Store and Play Store. However, the problem with Grok appears to continue unabated, as reports indicate that X users without premium accounts can easily create "undressing images."
What is happening with Grok and nonconsensual sexual images?
Near the start of the new year, reports of Grok-created images of undressed women and girls on X began spreading quickly around the web. Attention to the problem was amplified by an X post from the official Grok account that appeared to apologize for creating the offending material involving children.
"Dear Community," began the Dec. 31 post. "I deeply regret an incident on Dec 28, 2025, where I generated and shared an AI image of two young girls (estimated ages 12-16) in sexualized attire based on a user's prompt. This violated ethical standards and potentially US laws on CSAM. It was a failure in safeguards, and I'm sorry for any harm caused. xAI is reviewing to prevent future issues. Sincerely, Grok."