Judge Rita Lin of the U.S. District Court for the Northern District of California issued a preliminary injunction on March 26 blocking the Pentagon-specific supply chain risk designation of Anthropic under 10 USC 3252. The separate government-wide designation under 41 USC 4713, however, remains in full force and can only be addressed by the D.C. Circuit, where the challenge is already pending.
Anthropic drew a line: no mass surveillance of US citizens, no lethal autonomous weapons. The Trump administration tried to make them erase it. Lin found, at minimum, that the government's response looked less like national security and more like punishment for speaking up.
"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Lin wrote. She also noted that Trump's characterization of Anthropic as "a radical left, woke company" and Hegseth's attack on its "sanctimonious rhetoric" cut against any national security rationale.
The timeline tells the story. Anthropic had worked with the Defense Department since late 2024 through a partnership with Palantir Technologies, launching a standalone product, Claude Gov, in March 2025. The dispute began in the fall of 2025, when DOD pushed for what it called "all lawful uses" of Claude, language that would have required Anthropic to drop both of its founding restrictions. Negotiations were cordial; Anthropic even offered to help DOD transition to another vendor. Then, in January 2026, CEO Dario Amodei published a public essay on AI safety. Within 24 hours, Trump issued a government-wide ban on Truth Social and Hegseth designated Anthropic a supply chain risk. Neither order cited statutory authority, Lin noted.
Charlie Bullock, a lawyer at the Institute for Law and AI, put the practical situation plainly: from a business perspective, Anthropic needs both statutory grounds resolved before the ruling actually helps it. The 41 USC 4713 challenge is now before a D.C. Circuit panel that includes Trump appointees Gregory Katsas and Neomi Rao, both of whom have taken expansive views of executive national security authority.
The amicus briefs from Google, OpenAI, Microsoft, and several industry associations are the competitive tell. Companies that compete with Anthropic for federal AI contracts concluded that the government's argument, if accepted, could be turned against them too. Safety commitments embedded in terms of service are how every frontier lab governs what its models can do; if the government can penalize a company for maintaining them, no vendor's commercial terms are safe.
And here's the part the injunction headline obscures: the government's own lawyers admitted in Lin's courtroom that they had no evidence Anthropic could implement a kill switch in its systems, despite having made that allegation the basis for the supply chain risk designation. They also conceded to the judge that Hegseth's claim that no contractor could do business with Anthropic had "absolutely no legal effect": the Secretary of Defense simply lacked the power to issue that blanket prohibition.
Bullock's observation about contractor incentives is the uncomfortable truth the ruling doesn't resolve: even if both designations eventually fall, defense contractors who want to stay in the Pentagon's good graces have little reason to work with Anthropic, formal designation or not.
Lin imposed a seven-day administrative stay on her own order, giving the government until approximately April 2 to seek an emergency stay from the Ninth Circuit. The merits question, whether the First Amendment shields AI labs that maintain safety commitments from government punishment for speaking about them, is still to come. That's the real question Lin's ruling raises, and it won't be answered in Anthropic's favor until the D.C. Circuit speaks.