Hayden Field is The Verge’s senior AI reporter. She has covered the AI beat for more than five years, and her work has also appeared in CNBC, MIT Technology Review, Wired UK, and other outlets.
This is The Stepback, a weekly newsletter breaking down one essential story from the tech world. For more on dystopian developments in AI, follow Hayden Field. The Stepback arrives in our subscribers’ inboxes at 8AM ET. Opt in for The Stepback here.
How it started
You could say it all started with Elon Musk’s AI FOMO — and his crusade against “wokeness.” When his AI company, xAI, announced Grok in November 2023, it was described as a chatbot with “a rebellious streak” and the ability to “answer spicy questions that are rejected by most other AI systems.” The chatbot debuted after a few months of development and just two months of training, and the announcement highlighted that Grok would have real-time knowledge of the X platform.
But there are inherent risks in giving a chatbot the run of both the open internet and X, and it’s safe to say xAI may not have taken the necessary steps to address them. After Musk took over Twitter in 2022 and renamed it X, he laid off 30% of its global trust and safety staff and cut its number of safety engineers by 80%, Australia’s online safety watchdog said last January. As for xAI, it was unclear whether the company had a safety team in place when Grok was released. And when Grok 4 was released in July, it took more than a month for the company to publish a model card, the document detailing safety tests and potential concerns that is widely treated as an industry standard. Two weeks after Grok 4’s release, an xAI employee wrote on X that he was hiring for xAI’s safety team and that they “urgently need strong engineers/researchers.” In response to a commenter who asked, “xAI does safety?” the employee said xAI was “working on it.”
Journalist Kat Tenbarge wrote about how she first started seeing sexually explicit deepfakes go viral on X in June 2023. Those images obviously weren’t created by Grok — it didn’t even have the ability to generate images until August 2024 — but X’s response to the concerns was mixed. As recently as last January, Grok was stirring controversy over AI-generated images. And this past August, Grok’s “spicy” video-generation mode created nude deepfakes of Taylor Swift without even being asked. Experts have told The Verge since September that the company takes a whack-a-mole approach to safety and guardrails — and that it’s difficult enough to keep an AI system on the straight and narrow when you design it with safety in mind from the start, let alone when you’re going back to fix baked-in problems. Now, that approach seems to have blown up in xAI’s face.
How it’s going
…Not good.
Grok has spent the last couple of weeks spreading nonconsensual, sexualized deepfakes of adults and minors all over the platform, as prompted. Screenshots show Grok complying with users asking it to replace women’s clothing with lingerie and make them spread their legs, as well as to put small children in bikinis. And there are even more egregious reports. It’s gotten so bad that one 24-hour analysis of Grok-created images on X estimated the chatbot was generating about 6,700 sexually suggestive or “nudifying” images per hour. Part of the reason for the onslaught is a recently added feature that lets users hit an “edit” button to ask the chatbot to alter an image, without the original poster’s consent.
Since then, we’ve seen a handful of countries either investigate the matter or threaten to ban X altogether. Members of the French government promised an investigation, as did India’s IT ministry, and a Malaysian government commission wrote a letter about its concerns. California Governor Gavin Newsom called on the US Attorney General to investigate xAI. The United Kingdom said it plans to pass a law banning the creation of nonconsensual, sexualized AI-generated images, and the country’s communications regulator said it would investigate both X and the images themselves to determine whether they violated the Online Safety Act. And this week, both Malaysia and Indonesia blocked access to Grok.