A federal appeals court handed the Pentagon a win Wednesday, declining to pause a designation that effectively blacklists Anthropic from government contracts while the AI company's legal challenge proceeds. The decision from the U.S. Court of Appeals for the D.C. Circuit came the same day the Pentagon's chief technology officer said the ban remains in effect despite a separate ruling from a California judge who blocked the military-specific portion of the action two weeks earlier.
The result is a split: two courts, two statutes, opposite outcomes on the same company and the same government action. The Pentagon is using the D.C. ruling to keep the broader government-wide restriction in place while complying with the narrower California order.
Anthropic was the first AI company publicly designated a supply chain risk under two obscure federal statutes: 10 U.S.C. Section 3252, a Pentagon-specific authority, and the Federal Acquisition Supply Chain Security Act of 2018, which covers civilian agencies government-wide, per Mayer Brown's legal analysis of the designation. The company filed twin lawsuits on March 9 challenging both designations, arguing they amount to retaliation for refusing to give the Defense Department unrestricted access to its Claude models.
The California case got the first ruling. On March 26, U.S. District Judge Rita Lin issued a preliminary injunction blocking the Section 3252 designation, calling it "classic illegal First Amendment retaliation." Her order prevented the Pentagon from enforcing the military-specific restriction while the case proceeds.
The D.C. Circuit was Anthropic's next stop. The company asked the appellate panel to halt the FASCSA designation while its separate challenge winds through court. Wednesday, a three-judge panel said no. Reuters confirmed the split: the California case deals with the narrower statute excluding Anthropic from Pentagon contracts involving military information systems, while the D.C. case concerns the broader law that could extend the blacklist to civilian agencies. The ruling is not a decision on the merits. It is a procedural loss: Anthropic wanted the designation paused during litigation and the court declined. The underlying case remains live.
The FASCSA statute carries stakes beyond the immediate dispute. The law triggers an interagency review process that could, in theory, extend the blacklisting beyond defense contractors to the broader civilian federal government. If the designation survives legal challenge, it creates a template for future use against other AI vendors whose politics the government dislikes.
The human conflict behind the legal one is documented in internal Pentagon memos reported by Defense One. The dispute emerged in fall 2025 when DOD pushed for unrestricted access to Claude for all lawful uses. Anthropic refused to remove two long-standing safety restrictions it embeds in the model: no mass surveillance of U.S. citizens and no lethal autonomous warfare. An internal Pentagon memo said Anthropic's risk level escalated because it was "engaging in an increasingly hostile manner through the press."
Defense Secretary Pete Hegseth publicly attacked what he called Anthropic's "sanctimonious rhetoric" and "Silicon Valley ideology." President Trump wrote on Truth Social: "WE will decide the fate of our Country — NOT some out-of-control, Radical Left AI company."
Behind the scenes, the picture was different. The day before the designation was finalized, Under Secretary Emil Michael and Anthropic CEO Dario Amodei were still exchanging draft usage terms cordially, with Michael writing: "After reviewing with our attorneys and seeing your last draft, I think we are very close here," per Judge Lin's ruling.
Anthropic executives have said the designation could cost the company billions of dollars in lost business. The company had signed a $200 million contract with the Pentagon in July 2025 for Claude deployment on the Defense Department's GenAI.mil platform. It was, at the time, the first AI company to deploy models on DOD classified networks.
Employees of Anthropic's competitors, including Google and OpenAI, filed briefs supporting the company, as did Microsoft. The industry's view of the government's legal theory appears largely unified against it.
What comes next: The D.C. Circuit case proceeds on the merits. The California case proceeds on its merits. The government has already sought an emergency stay from the Ninth Circuit. Both courts will eventually rule on the underlying constitutionality of what the Pentagon did. Wednesday's appellate ruling is a procedural step in one of those two tracks. Significant enough to report, not significant enough to declare a winner.