
Pentagon ban on Anthropic faces judge as Claude AI maker seeks injunction

Why This Matters

The legal battle between Anthropic and the Pentagon highlights the growing tensions around AI regulation, government use, and national security concerns. The outcome could significantly impact AI companies' ability to work with government agencies and influence future policies on AI deployment in sensitive areas. This case underscores the importance of clear legal frameworks for AI technology in the defense sector and beyond.


[Photo caption: Dario Amodei, co-founder and chief executive officer of Anthropic, at the AI Impact Summit in New Delhi, India, on Thursday, Feb. 19, 2026.]

Anthropic heads to San Francisco federal court on Tuesday to ask a judge to temporarily pause the Pentagon's blacklisting of its Claude artificial intelligence models and President Donald Trump's directive banning federal government agencies from using that technology.

If the preliminary injunction is granted, the AI startup will be able to continue doing business with government contractors and federal agencies while its lawsuit against the Trump administration plays out in court.

Without the injunction, the company has said, it could lose billions of dollars in business.

The hearing on Anthropic's request, which will be conducted by U.S. District Judge Rita Lin, is set to begin at 4:30 p.m. ET. The hearing can be viewed via Zoom.

Earlier in March, the Department of Defense designated Anthropic a so-called supply chain risk, meaning that use of the company's technology purportedly threatens U.S. national security. It was the first time an American company had been hit with that designation.

The label, if allowed to stand, would require defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.

Palantir is continuing to use Claude in its work with the department while the legal battle plays out, CEO Alex Karp told CNBC on March 12. Anthropic's model is also being used in the war with Iran.

Anthropic has argued that there is no basis to consider the company a supply chain risk. The company also said it is being unfairly retaliated against because it demanded that the DOD not use Claude for fully autonomous weapons or mass surveillance of Americans. The Pentagon insists it does not use the AI models for such purposes.

Lin could issue a ruling from the bench about Anthropic's motion on Tuesday, or she could deliver a written ruling later.