
Anthropic loses appeals court bid to temporarily block Pentagon blacklisting

Why This Matters

The ruling highlights the legal and ethical friction AI companies face when working with government agencies, particularly on national security matters. It underscores the tension between technological innovation and government oversight, with consequences for how AI firms operate in the defense sector and for future policy and legal battles in the industry.


Photo: New York Times columnist Andrew Ross Sorkin and Anthropic co-founder and CEO Dario Amodei speak onstage during the 2025 New York Times Dealbook Summit at Jazz at Lincoln Center in New York, Dec. 3, 2025.

A federal appeals court in Washington, D.C., on Wednesday denied Anthropic's request to temporarily block the Department of Defense's blacklisting of the artificial intelligence company as a lawsuit challenging that sanction plays out.

The ruling comes after a judge in San Francisco federal court late last month, in a separate but related case, granted Anthropic a preliminary injunction that bars the Trump administration from enforcing a ban on the use of Claude.

"In our view, the equitable balance here cuts in favor of the government," the appeals court said in its decision. "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict. For that reason, we deny Anthropic's motion for a stay pending review on the merits."

Under the split decisions from the two courts, Anthropic remains excluded from DOD contracts but can continue working with other government agencies while the litigation plays out.

The DOD declared Anthropic a supply chain risk in early March, meaning that use of the company's technology purportedly threatens U.S. national security. The label requires defense contractors to certify that they don't use Anthropic's Claude AI models in their work with the military.

Anthropic had asked the appeals court to review the Pentagon's determination, arguing in a filing that it was a form of retaliation that is unconstitutional, arbitrary, capricious and not in accordance with procedures required by law.

In the ruling on Wednesday, the court acknowledged that Anthropic "will likely suffer some degree of irreparable harm absent a stay," but that the company's interests "seem primarily financial in nature." While the company claimed the DOD was standing in the way of its right to free speech, "Anthropic does not show that its speech has been chilled during the pendency of this litigation," the order said.

Because of the harm Anthropic is likely to suffer, the appeals court said "substantial expedition is warranted."

An Anthropic spokesperson said in a statement after the ruling that the company is "grateful the court recognized these issues need to be resolved quickly" and that it's "confident the courts will ultimately agree that these supply chain designations were unlawful."
