The contract dispute between the US Department of Defense and the AI developer Anthropic that boiled over at the end of February exposed in stark terms how laws and regulations have failed to keep up with the capabilities of artificial intelligence.
The Pentagon wanted to be able to use Anthropic's Claude AI for "all lawful purposes," while Anthropic wanted to prohibit the military from using it for mass domestic surveillance or for fully autonomous weapons systems. After Anthropic refused to meet the government's demands, President Donald Trump and Secretary of Defense Pete Hegseth said they would declare the company a "supply chain risk," prohibiting the use of its products in defense contract work. The Pentagon did, and Anthropic filed suit Monday in federal court challenging the designation, calling it an "unprecedented and unlawful" attack on the company's right to free speech.
Pentagon officials said the dispute is moot because current law doesn't allow for such surveillance, and the department has no plans to use the tool for autonomous weapons systems. But the laws and regulations aren't actually that clear, according to privacy and tech experts, and a contract dispute between a private company and a federal agency isn't the place to settle them.
"This week exposed a real governance vacuum, and it should be a wake-up call for Congress," said Hamza Chaudhry, AI and national security lead at the Future of Life Institute.
The immediate result of the dispute was the Pentagon striking a deal with OpenAI instead. That agreement was less clear about limits on using the company's products for mass surveillance or autonomous weapons, but OpenAI leaders said this week that they have taken steps to strengthen those guardrails. CEO Sam Altman said in a post on X that the Pentagon affirmed the technology would not be used by the department's intelligence agencies.
(Disclosure: Ziff Davis, CNET's parent company, in 2025 filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
OpenAI research scientist Noam Brown posted on X that he believed the world "should not have to rely on trust in AI labs or intelligence agencies" to ensure things like safety. "I know that legislation can sometimes be slow, but I'm afraid of a slippery slope where we become accustomed to circumventing the democratic process for important policy decisions," he wrote.
The question is whether, and how, Congress will deal with these issues.
AI plays a growing role in surveillance