Anthropic’s weekslong battle with the Department of Defense has played out over social media posts, admonishing public statements, and direct quotes from unnamed Pentagon officials to the news media. But the future of the $380 billion AI startup comes down to just three words: “any lawful use.” The new terms, which OpenAI and xAI have reportedly already agreed to, would give the US military carte blanche to use its services for mass surveillance and lethal autonomous weapons, meaning AI with full power to track and kill targets with no humans involved in the decision-making process.
The negotiations have turned ugly, with Pentagon CTO Emil Michael, formerly a top executive at the ride-hailing company Uber, driving the government’s threats to designate Anthropic a “supply chain risk,” according to two people familiar with the negotiations. This classification is usually reserved for threats to national security, including malicious foreign influence or cyber warfare. Anthropic CEO Dario Amodei will reportedly meet with Defense Secretary Pete Hegseth on Tuesday at the Pentagon, and an unnamed Defense official described it as a “shit-or-get-off-the-pot meeting.”
The Pentagon issuing this threat to an American company is unprecedented. But the Pentagon publicly issuing this threat is even more bizarre.
For security purposes, the Pentagon does not publicly disclose which companies are on these lists, to say nothing of publicly threatening those companies when their views don’t align. In fact, Geoffrey Gertz, a senior fellow at the Center for a New American Security (CNAS), told The Verge that under current federal regulations the Pentagon could have classified Anthropic as a risk without informing the public at all or stating why. “It’s the extra step of trying to specifically label them a national security risk, and keep other companies from doing business with Anthropic, that goes above and beyond here.”
If the classification were made official, it would end Anthropic’s $200 million contract with the Pentagon, but it would have a more devastating ripple effect on Anthropic’s overall bottom line. Major defense contractors and tech companies, like AWS, Palantir, and Anduril, use Anthropic’s Claude in their work for the Pentagon because it was the first AI model cleared for use with classified information. Put more bluntly: If Anthropic is labeled a “supply chain risk,” any company that currently works with the military or ever hopes to win a military contract would have to drop Anthropic’s AI systems, which are thought to be some of the best in the industry. (The evening before Amodei’s scheduled meeting with Hegseth, the Pentagon confirmed that it had signed an agreement to use Grok, the controversial AI model made by Elon Musk’s xAI, in classified systems. The Pentagon did not immediately respond to a request for comment.)
This could be implemented in a very narrow sense — or an extremely broad one. “I suspect the more logical explanation would be the narrower definition, that Anthropic can’t be used as part of a specific statement of work for the Pentagon,” said Gertz. “But based on some of the reporting and effort to make this seem like a punitive move against Anthropic, it’s worth thinking through both of those scenarios.”
Although the Pentagon and its media allies have gone on a campaign to label Anthropic “woke,” they have yet to make any real accusations about security vulnerabilities or potential for espionage. Instead, the clash is over Anthropic’s enforcement of its “acceptable use policy,” according to people familiar with the internal discussions.
A source familiar with the situation, who requested anonymity due to the sensitive nature of the negotiations, told The Verge that Anthropic has been very clear with the government about its red lines, and that there are two narrow things the company won’t agree to: autonomous kinetic operations and mass domestic surveillance. The latter, the source said, is because the “laws haven’t caught up to what AI can do” and it may infringe on Americans’ civil liberties. As for the former, lethal autonomous weapons, the source said the technology “isn’t there yet for fully autonomous weapons with no humans in loop.”
Hamza Chaudhry, the AI and national security lead at the Future of Life Institute, a nonpartisan research group focused on AI governance, noted that Anthropic’s red lines already reflected current government directives that have not been repealed.