A federal judge just handed Anthropic, the AI safety company behind Claude, a significant legal win — and used two words that don't usually appear in procurement disputes: First Amendment retaliation.
U.S. District Judge Rita Lin, sitting in San Francisco, granted Anthropic a preliminary injunction on March 26 blocking the Defense Department's effort to designate the company a supply chain risk. In a 43-page ruling, Lin called the government's conduct exactly what it was: "classic illegal First Amendment retaliation." She also called the underlying theory — that a company could be branded an adversary for disagreeing publicly with the government — an "Orwellian notion" with no support in the governing statute.
The ruling gives the government until April 2, when the injunction takes effect, to seek an emergency stay from the Ninth Circuit Court of Appeals, which it has indicated it will do. But the win is real, and it's a direct result of Anthropic's willingness to litigate rather than comply.
The underlying dispute is specific. For months, the Defense Department had pressed Anthropic to remove two longstanding restrictions from its Claude AI model: a ban on enabling mass surveillance of U.S. citizens, and a ban on lethal autonomous weapons, meaning systems that could select and engage targets without a human in the loop. Anthropic refused. DefenseOne reported that negotiations were cordial and that Anthropic even offered to help DOD transition to another vendor. Then the dispute went public: CEO Dario Amodei published an essay on AI safety, and on February 26 the company issued a direct statement laying out its position.
Within 24 hours, President Trump issued a government-wide ban on Anthropic products via Truth Social, and Defense Secretary Pete Hegseth designated Anthropic a supply chain risk. Neither cited statutory authority, according to Lin's ruling.
What makes Lin's ruling unusual is that she found the government's own words incriminating. An internal Defense Department memo, cited by CNN, stated that Anthropic's risk level had escalated specifically because it was engaging in an "increasingly hostile manner through the press." Lin also noted that Trump called Anthropic a "radical left, woke company" and Hegseth attacked its "sanctimonious rhetoric" and "Silicon Valley ideology." Those aren't national security findings. They're political grievances.
The business damage was immediate. Politico reported that three contractors either terminated their work with Anthropic or were instructed to do so by the government, and that three additional deals worth more than $180 million, all on the verge of closing, collapsed. Anthropic had been working with the Defense Department since late 2024 through a partnership with Palantir Technologies and had launched a standalone product, Claude Gov, on June 5, 2025.
But the win is incomplete. Lin blocked the designation under 10 U.S.C. § 3252, the statute governing covered defense contracts. A separate designation under 41 U.S.C. § 4713 — the statute covering civilian agency procurement — remains active and is already being challenged in the D.C. Circuit Court of Appeals, where a three-judge panel includes Trump appointees Gregory Katsas and Neomi Rao. Both have taken an expansive view of government national security powers. Lawyers who spoke to Politico said it was likely that panel would rule differently than Lin did.
Under Secretary Emil Michael, the DOD official who had been negotiating with Anthropic, called Lin's ruling "a disgrace with dozens of factual errors" in a post on X. On the same day the designation was formalized, Michael had sent Amodei a message saying the two sides were "very close" to a deal. The injunction takes effect in seven days.
The implications extend well beyond this case. Microsoft, Google, and OpenAI employees have all filed amicus briefs in support of Anthropic, as have several industry associations. The tech lobby is watching not just for Anthropic's fate but for whether the government can weaponize procurement designations against any AI company that draws an ethical line, and whether that line can be defended in court. If the answer is yes to the first and no to the second, every frontier model company will have to factor that risk into its policy decisions.
What happens next: the Ninth Circuit will rule on the stay. The D.C. Circuit will hear arguments on the 41 U.S.C. § 4713 designation separately. Both cases could reach different conclusions on the same underlying conduct — which would put the Supreme Court in the unusual position of having to settle whether the government can cite national security to punish a company for speaking publicly about where it draws ethical lines.