The government's case against Anthropic keeps unraveling in public. A federal judge in San Francisco issued a preliminary injunction on March 26 blocking the Pentagon from enforcing its supply chain risk designation against the AI company, and her reasoning was unusually direct. Judge Rita Lin found that the government's standard for blacklisting Anthropic set, as she put it at a March 24 hearing, "a pretty low bar" — under it, a company could be designated a supply chain risk simply for being stubborn and asking annoying questions. She also found the record supported an inference that Anthropic was being punished for publicly refusing to let its AI be used for fully autonomous weapons or mass surveillance, which she called "classic illegal First Amendment retaliation."
That is a remarkable thing for a federal judge to say about the Department of Defense. But the evidence behind the dispute is even stranger than the ruling suggests.
The Financial Times first framed this as a governance test, and that remains the right frame — just not in the abstract way the FT meant it. The real test is concrete: did the Pentagon's second-most-senior official tell Anthropic's CEO they were close to resolving their differences on the same day the government formally designated the company a national security risk? And did that same official hold between $2 million and $10 million in Perplexity AI, one of Anthropic's direct competitors?
According to a court filing reviewed by TechCrunch, Under Secretary of Defense Emil Michael emailed Anthropic CEO Dario Amodei on March 4 — one day after the supply chain risk designation was formally finalized — saying the two sides were "very close" on the two issues the government now cites as evidence that Anthropic is a national security threat. Those red lines were autonomous weapons and domestic mass surveillance: exactly the restrictions Anthropic had refused to abandon. Michael then publicly posted on X that there was no active negotiation. The timing and the contradiction are in the court record.
Michael holds between $2 million and $10 million in vested and unvested stock in Perplexity AI and served on its board, according to reporting by The Outpost and The Lever. He is also the Pentagon's chief technology officer. Anthropic was the only large language model provider approved for classified U.S. government networks until the Pentagon announced a deal with xAI to deploy its Grok model on those same networks, per Reuters. The conflict of interest is not subtle.
Anthropic signed a $200 million contract with the Pentagon in July 2025 for responsible AI deployment in defense operations, becoming the first frontier AI lab to operate on classified networks, according to CNBC. The contract's responsible AI provisions — restrictions on how Claude could be used — appear to be exactly what the government objected to.
The OpenAI deal had been in motion since February 24, before the dispute with Anthropic escalated. According to Axios, CEO Sam Altman told staff that day that it might be a good time to work out a deal with Michael, and Michael contacted Altman that same afternoon. By February 27, OpenAI had reached a deal with the Pentagon.
On the afternoon of February 27, Defense Secretary Pete Hegseth gave Dario Amodei an ultimatum: relent by 5:01 p.m. and allow unrestricted use of Claude for all legal purposes. Anthropic refused. The next day, OpenAI announced its deal publicly, with Altman later acknowledging the arrangement was "definitely rushed," per MIT Technology Review.
Anthropic filed two federal lawsuits challenging the designation. In one filing, the company described how the Pentagon's position had escalated after it publicly refused to allow its technology to be used for autonomous lethal weapons or mass surveillance, and after it asked questions about how its AI was used during a military raid that captured Venezuelan President Nicolas Maduro, per Reuters. An internal DOD memo cited by Defense One stated that Anthropic's risk level had escalated specifically because it was "engaging in an increasingly hostile manner through the press."
Lin found that the Department of Defense violated due process by failing to give Anthropic advance notice or an opportunity to respond before the ban took effect, and by not following the procedures required by the supply chain risk statute. She also found that neither the President nor Secretary Hegseth cited any statutory authority for the directives. Her 43-page ruling quoted from prior precedent: "Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the United States," as the New York Times reported.
Michael called the decision a "disgrace" and claimed the ruling contained dozens of factual errors, per the Los Angeles Times. The Pentagon's CTO also said the ban still stands even after the preliminary injunction, per the National Law Journal — a signal that the government does not intend to back down.
More than 30 employees from OpenAI and Google DeepMind, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief in support of Anthropic. The brief made an argument that goes beyond this specific case: that in the absence of public law governing AI deployment, the contractual and technological requirements AI developers impose on their systems represent a vital safeguard against catastrophic misuse. The companies that built the technology are trying to enforce their own safety rules. The government wants to override them. That is the actual governance test — not whether the Pentagon can buy AI services, but who gets to decide what those systems are allowed to refuse to do.
Lin put her own preliminary injunction on hold for seven days to allow the Justice Department to appeal to the Ninth Circuit, per Politico. The case is Anthropic PBC v. U.S. Department of War, 3:26-cv-01996 in the Northern District of California. Whatever the outcome, the fight has already exposed something the government probably did not want exposed: that the objections to Anthropic looked less like a coherent national security assessment and more like a negotiation that went wrong, conducted by an official with a financial stake in the outcome, concluded with a designation finalized the day after an email suggested a deal was close, and then defended with public statements that contradicted the private record.