The past two weeks have been defined by a clash between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth as the two battle over the military’s use of AI.
Anthropic refuses to allow its AI models to be used for mass surveillance of Americans or for fully autonomous weapons that conduct strikes without human input. Secretary Hegseth, meanwhile, has argued that the Department of Defense shouldn't be limited by a vendor's rules and that any "lawful use" of the technology should be permitted.
On Thursday, Amodei publicly signaled that Anthropic isn’t backing down — despite threats that his company could be designated as a supply chain risk as a result. But with the news cycle moving fast, it’s worth revisiting exactly what’s at stake in the fight.
At its core, this fight is about who controls powerful AI systems — the companies that build them, or the government that wants to deploy them.
What is Anthropic worried about?
As we said above, Anthropic doesn’t want its AI models to be used for mass surveillance of Americans or for autonomous weapons with no humans in the loop for targeting and firing decisions. Traditional defense contractors typically have little say in how their products will be used, but Anthropic has argued from its inception that AI technology poses unique risks and therefore requires unique safeguards. From the company’s perspective, the question is how to maintain those safeguards when the technology is being used by the military.
The U.S. military already relies on highly automated systems, some of which are lethal. The decision to use lethal force has historically been left to humans, but there are few legal restrictions on military use of autonomous weapons. The DoD doesn't categorically ban fully autonomous weapons systems. According to a 2023 DoD directive, AI systems can select and engage targets without human intervention, as long as they meet certain standards and pass review by senior defense officials.
That’s precisely what makes Anthropic nervous. Military technology is secretive by nature, so if the U.S. military were taking steps to automate lethal decision-making, we might not know about it until it was operational. And if it used Anthropic’s models, it could count as “lawful use.”