After Anthropic’s weeks-long standoff with the Pentagon, the company has scored a milestone victory: a judge granted Anthropic a preliminary injunction in its lawsuit, which seeks to reverse its government blacklisting while the judicial process plays out.
“The Department of War’s records show that it designated Anthropic as a supply chain risk because of its ‘hostile manner through the press,’” Judge Rita F. Lin, a district judge in the Northern District of California, wrote in the order, which will go into effect in seven days. “Punishing Anthropic for bringing public scrutiny to the government’s contracting position is classic illegal First Amendment retaliation.”
A final verdict could be weeks or months out.
Anthropic spokesperson Danielle Cohen said in a Thursday statement, “We’re grateful to the court for moving swiftly, and pleased they agree Anthropic is likely to succeed on the merits. While this case was necessary to protect Anthropic, our customers, and our partners, our focus remains on working productively with the government to ensure all Americans benefit from safe, reliable AI.”
“I do think this case touches on an important debate,” Judge Lin said during the Tuesday hearing. “On the one hand, Anthropic is saying that its AI product, Claude, is not safe to use for autonomous lethal weapons and domestic mass surveillance. Anthropic’s position is that if the government wants to use its technology, the government has to agree not to use it for those purposes. On the other hand, the Department of War is saying that military commanders have to decide what is safe for its AI to do.”
On Tuesday, Judge Lin went on to say, “It’s not my role to decide who’s right in that debate… The Department of War decides what AI product it wants to use and buy. And everyone, including Anthropic, agrees that the Department of War is free to stop using Claude and look for a more permissive AI vendor.” She added, “I see the question in this case as being … whether the government violated the law when it went beyond that.”
It all started with a memo sent by Defense Secretary Pete Hegseth on Jan. 9, calling for “any lawful use” language to be written into every AI services procurement contract within 180 days, including existing contracts with companies like Anthropic, OpenAI, xAI, and Google. Anthropic’s negotiations with the Pentagon stretched on for weeks, hinging on two “red lines” the company refused to cross: letting the military use its AI for domestic mass surveillance or for lethal autonomous weapons (AI systems with the power to kill targets with no human involvement in the decision-making process). The rollercoaster series of events that followed has included a barrage of social media insults, a formal “supply chain risk” designation with the potential to significantly handicap Anthropic’s business, competing AI companies swooping in to make deals, and an ensuing lawsuit.
With its lawsuit, Anthropic argues that it was punished for speech protected under the First Amendment, and it’s seeking to reverse the supply chain risk designation.
It’s rare, and possibly unprecedented, for a US company to be named a supply chain risk, a designation typically reserved for non-US companies with potential links to foreign adversaries. Anthropic’s designation raised eyebrows nationwide and stirred bipartisan controversy over concerns that disagreeing with a presidential administration could invite outsized retribution against a business in any sector.