Anthropic filed a federal lawsuit against the US Department of Defense and other federal agencies on Monday, challenging the Pentagon’s designation of the AI company as a “supply-chain risk.”
The Pentagon formally sanctioned Anthropic last week, capping a weeks-long, publicly aired disagreement over limits on use of its generative AI technology for military applications such as autonomous weapons.
“We do not believe this action is legally sound, and we see no choice but to challenge it in court,” Anthropic CEO Dario Amodei wrote in a blog post on Thursday.
The lawsuit, which was filed in a federal court in California, requested that a judge reverse the designation and stop federal agencies from enforcing it. “The Constitution does not allow the government to wield its enormous power to punish a company for its protected speech,” Anthropic said in the filing. “Anthropic turns to the judiciary as a last resort to vindicate its rights and halt the Executive’s unlawful campaign of retaliation.”
Anthropic is also seeking a temporary restraining order that would allow it to continue selling to the government. The company proposed that the government respond to that request by 9 pm Pacific on Wednesday and that a judge hold a hearing on the issue on Friday.
The AI startup, which develops a suite of AI models called Claude, is facing the possibility of losing hundreds of millions of dollars in annual revenue from the Pentagon and the rest of the US government. It also may lose the business of software companies that incorporate Claude into services they sell to federal agencies. Several Anthropic customers have reportedly said they are pursuing alternatives due to the Defense Department’s risk designation.
Amodei wrote that the “vast majority” of Anthropic’s customers will not have to make changes. The US government’s designation “plainly applies only to the use of Claude by customers as a direct part of contracts with the” military, he said. General use of Anthropic technologies by military contractors should be unaffected.
The Department of Defense, which also goes by the Department of War, declined to comment about Anthropic’s lawsuit.
White House spokesperson Liz Huston told WIRED on Friday that “our military will obey the United States Constitution—not any woke AI company’s terms of service.” She added that the administration is ensuring its “courageous warfighters have the appropriate tools they need to be successful and will guarantee that they are never held hostage by the ideological whims of any Big Tech leaders.”
Attorneys with expertise in government contracting say Anthropic faces a difficult battle in court. The rules that authorize the Department of Defense to label a tech company as a supply-chain risk allow for little in the way of an appeal. “It’s 100 percent in the government’s prerogative to set the parameters of a contract,” says Brett Johnson, a partner at the law firm Snell & Wilmer. The Pentagon, he says, also has the right to determine that a product of concern, if used by any of its suppliers, “hurts the government's ability to effectuate its mission.”