The Pentagon's decision to blacklist Anthropic's Claude artificial intelligence models "looks like an attempt to cripple" the company, a federal judge said at a court hearing on Tuesday.
Lawyers for Anthropic appeared in San Francisco federal court to ask Judge Rita Lin to temporarily pause the Pentagon's blacklisting and President Donald Trump's directive banning federal government agencies from using its technology.
The company noted that an injunction would not require the U.S. government to use its models or prevent it from transitioning to another AI vendor.
During the hearing, Lin asked lawyers for Anthropic and the U.S. government a number of questions about the details of the case. She said her concern is whether Anthropic is being "punished for criticizing the government's contracting position in the press."
"Everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor," Lin said. "I don't see that as being what this case is about. I see the question in this case as being a very different one, which is whether the government violated the law."
Lin said she expects to issue an order on Anthropic's motion in the next few days.
If the preliminary injunction is granted, the AI startup would be able to continue doing business with government contractors and federal agencies as its lawsuit against the Trump administration plays out in court. Without it, the company has said in filings, it could lose billions of dollars in business and suffer further reputational harm.
Earlier in March, the Department of Defense designated Anthropic a so-called supply chain risk, meaning that use of the company's technology purportedly threatens U.S. national security. The label, if allowed to stand, would require defense contractors, including Amazon, Microsoft, and Palantir, to certify that they do not use Claude in their work with the military.
Eric Hamilton, a lawyer for the U.S. government, said Tuesday that the DOD had "come to worry that Anthropic may in the future take action to sabotage or subvert IT systems," which is why the company was designated a supply chain risk.
"What happens if Anthropic installs a kill switch or functionality that changes how it functions? That is an unacceptable risk," Hamilton said.