A federal judge handed Anthropic a temporary victory Thursday in its escalating fight with the Pentagon, blocking the Defense Department from enforcing its designation of the company as a national security supply chain risk — and calling the government's actions what they appeared to be: punishment.
U.S. District Judge Rita Lin, a Biden appointee sitting in San Francisco, issued a preliminary injunction preventing the Pentagon from enforcing its designation of Anthropic as a risk to the defense supply chain. Lin also blocked a separate Trump administration directive ordering all federal agencies to cease using Anthropic's Claude chatbot. The ruling does not take effect for seven days, giving the administration time to appeal.
The legal clash stems from negotiations over a potential defense contract that collapsed when Anthropic refused to allow its AI to be used for fully autonomous weapons or the mass surveillance of Americans. After the company went public with its objections, the administration moved to blacklist it.
Lin was direct in her assessment. "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," she wrote in a 43-page ruling. The record, she said, supported "an inference that Anthropic is being punished for criticizing the government's contracting position in the press" — which she called "classic illegal First Amendment retaliation."
The constitutional arguments Anthropic raised — First Amendment retaliation and Fifth Amendment due process deprivation — are novel in this context. The supply chain risk designation rests on an obscure procurement statute that has historically been used against foreign companies suspected of vulnerability to state-sponsored sabotage. Applying it to a U.S. AI lab, without notice or a chance to contest the label, is a move with few legal parallels.
Anthropic filed suit in early March after Defense Secretary Pete Hegseth used the authority to effectively bar the company from certain military contracts. Executives warned the designation could cost the company billions in lost business and inflict irreparable reputational harm. The company has a second, separate lawsuit pending in D.C. federal court over a different supply chain risk designation that could exclude it from civilian government contracting.
The case drew an unusually broad coalition of supporters. Microsoft, major industry trade groups, rank-and-file technology workers, retired U.S. military leaders, and a group of Catholic theologians all filed briefs backing Anthropic's position.
The Pentagon maintained that Anthropic's refusal to accept contractual terms, not its advocacy around AI safety, prompted the designation. The Justice Department argued the designation was lawful and that uncertainty over Claude's availability could disrupt military operations.
Lin acknowledged the ruling was not about the underlying policy debate — whether the military should use AI for autonomous weapons or domestic surveillance — but about the government's response to Anthropic's refusal. "If the concern is the integrity of the operational chain of command, the Department of War could just stop using Claude," she noted. "Instead, these measures appear designed to punish Anthropic."
Anthropic said it was grateful for the court's swift action and remains focused on "working productively with the government to ensure all Americans benefit from safe, reliable AI." The company has not backed down on its core positions regarding autonomous weapons and surveillance.
What this means for other AI labs under government pressure — and there are reports that others have received similar scrutiny — is unclear. But the precedent is significant: for now, at least, one court has drawn a line between legitimate national security concern and retaliation against a company for its principles. The administration has a week to decide whether to test it.