Two Courts. Two Rulings. One Blacklisted AI Company.
On May 19 the DC Circuit hears Anthropic’s appeal. Three weeks earlier, a California judge had already blocked the same Pentagon blacklisting. Two courts, two opposite rulings, one company caught between them.

On May 19, a federal appeals court will hear oral arguments that could determine whether the Pentagon can keep Anthropic, the AI safety company behind the Claude family of models, permanently blacklisted from federal contracts. Anthropic will arrive at that hearing having already won a partial court order saying the opposite, and the collision between those two rulings is the story almost no one has covered as a single piece.
Three weeks before the DC Circuit denied Anthropic's motion to pause the blacklisting, a California federal judge blocked one of the underlying Pentagon directives. CNBC reported that the California court found the government appeared to have retaliated against Anthropic for its public positions on AI safety, violating both the First Amendment's free speech guarantees and the Fifth Amendment's due process right to contest a government action before it takes effect. The DC Circuit panel, all three judges Republican appointees and two of them named by President Trump, acknowledged that Anthropic "will likely suffer some irreparable harm" but said the balance of the equities cut in the government's favor, Ars Technica reported. Acting Attorney General Todd Blanche called it a victory: "Military authority and operational control belong to the Commander-in-Chief and Department of War, not a tech company."
The dispute started with a $200 million contract Anthropic won from the Pentagon last July to develop AI capabilities for national security purposes, according to CBS News. After Anthropic refused subsequent terms that would have expanded Claude's use for surveillance and autonomous weapons applications, Defense Secretary Pete Hegseth issued orders designating the company a supply-chain risk, a classification Reuters reported was the first time a US company had been publicly flagged under these particular procurement statutes. That designation effectively bars Anthropic from the federal contracting market. The Computer and Communications Industry Association, a trade group, called the designation "a tool normally reserved for foreign adversaries," and warned that using it against a domestic company sends a chilling signal at a moment when US AI companies compete directly with Chinese ones.
The two cases are running on separate legal tracks, which is why neither ruling resolves the other. The California injunction, per the court order, blocks a specific Hegseth directive under constitutional retaliation claims. The DC Circuit case, per the appellate ruling, addresses the broader supply-chain designation under federal procurement law. The DC Circuit panel noted that the California case raises distinct statutory questions under different laws, meaning the conflict may be more procedural than doctrinal. But if the Ninth Circuit affirms the California injunction while the DC Circuit case goes the other way, the government could find itself operating under court orders that are difficult to reconcile.
What May 19 actually decides is narrower than the size of the fight suggests. The DC Circuit expedited the case at Anthropic's request, and oral arguments will focus on whether the appellate court should reinstate a stay or let the blacklisting stand while the full appeal plays out. The stakes for the AI industry are not narrow: this is the first time the supply-chain-risk designation has been used against a US AI company, and whatever legal framework emerges will define the government's leverage over domestic AI labs for years.
