Tech News

The MechaHitler defense contract is raising red flags


The author is The Verge’s senior AI reporter. An AI beat reporter for more than five years, she has also written for CNBC, MIT Technology Review, Wired UK, and other outlets.

Ask someone their worst fears about AI, and you’ll find a few recurring topics — from near-term fears like AI tools replacing human workers and the loss of critical thinking to apocalyptic scenarios like AI-designed weapons of mass destruction and automated war. Most have one thing in common: a loss of human control.

And the system many AI experts fear most will spiral out of our grip? Elon Musk’s Grok.

Grok was designed to compete with leading AI systems like Anthropic’s Claude and OpenAI’s ChatGPT. From the beginning, its selling point has been loose guardrails. When xAI, Musk’s AI startup, debuted Grok in November 2023, the announcement said it would “answer spicy questions that are rejected by most other AI systems” and had a “rebellious streak, so please don’t use it if you hate humor!”

Fast-forward a year and a half, and the cutting edge of AI is getting more dangerous, with multiple companies flagging increased risks of their systems being used for tasks like chemical and biological weapon development. As that’s happening, Grok’s “rebellious streak” has taken over more times than most people can count. And when its “spicy” answers go too far, the slapdash fixes have left experts unconvinced it can handle a bigger threat.

Senator Elizabeth Warren (D-MA) sent a letter Wednesday to US Defense Secretary Pete Hegseth, detailing her concerns about the Department of Defense’s decision to award xAI a $200 million contract in order to “address critical national security challenges.” Though the contracts also went to OpenAI, Anthropic, and Google, Warren has unique concerns about the contract with xAI, she wrote in the letter viewed by The Verge — including that “Musk and his companies may be improperly benefitting from the unparalleled access to DoD data and information that he obtained while leading the Department of Government Efficiency,” as well as “the competition concerns raised by xAI’s use and rights to sensitive government data” and Grok’s propensity to generate “erroneous outputs and misinformation.”

Sen. Warren cited reports that xAI was a “late-in-the-game addition under the Trump administration,” noting that the company had not been considered for such contracts before March of this year and lacked the type of reputation or proven record that typically precedes DoD awards. The letter requests that the DoD provide, in response, the full scope of work for xAI, an explanation of how its contract differs from those of the other AI companies, and “to what extent DoD will implement Grok, and who will be held accountable for any program failures related to Grok.”

One of Sen. Warren’s key reasons for concern, per the letter, was specifically “the slew of offensive and antisemitic posts generated by Grok,” which went viral this summer. xAI did not immediately respond to a request for comment.

A ‘patchwork’ approach to safety

The height of Grok’s power, up to now, has been posting answers to users’ queries on X. But even in this relatively limited capacity, it has racked up a remarkable number of controversies, often caused by patchwork tweaks and addressed with equally patchwork fixes. In February, the chatbot temporarily blocked results that mentioned Musk or President Trump spreading misinformation. In May, it briefly went viral for constant tirades about “white genocide” in South Africa. In July, it developed a habit of searching for Musk’s opinion on hot-button topics like Israel and Palestine, immigration, and abortion before responding to questions about them. And most infamously, last month it went on an antisemitic bender — spreading stereotypes about Jewish people, praising Adolf Hitler, and even going so far as to call itself “MechaHitler.”
