Pentagon Official Called AI Talks 'Very Close' Before Declaring Anthropic a Threat
The email that undercuts the Pentagon's case against Anthropic

The most damaging document in Anthropic's fight with the Trump administration is not a legal brief. It is an email.
On March 4, one day after the Defense Department formally designated Anthropic a supply chain risk and effectively blocked American defense contractors from using its Claude AI, Under Secretary of Defense Emil Michael sent Anthropic CEO Dario Amodei a message saying the two sides were "very close" on the two issues the government now cites as evidence that Anthropic poses an unacceptable risk to national security. That email is the core of Anthropic's new evidentiary filing, submitted late Friday in California federal court ahead of a hearing next Tuesday before Judge Rita Lin, according to a TechCrunch report.
The email is worth pairing with what Michael said publicly in the days that followed. On March 5, Amodei said in a statement posted by the company that Anthropic had been having "productive conversations" with the Pentagon. On March 6, Michael posted on X that "there is no active Department of War negotiation with Anthropic." Roughly a week later, he told CNBC there was "no chance" of renewed talks.
Anthropic's argument, laid out in two sworn declarations filed alongside its reply brief, is straightforward: if the company's positions on autonomous weapons and mass surveillance of Americans make it a national security threat, the Pentagon's own officials apparently did not think so, at least not in the days immediately surrounding the designation.
What the declarations say
Sarah Heck, Anthropic's Head of Policy and a former National Security Council official under Obama, was present at the February 24 meeting where Amodei sat down with Defense Secretary Pete Hegseth and Michael. Her declaration takes direct aim at what she calls a "central falsehood" in the government's filings: the claim that Anthropic demanded some kind of approval role over military operations. "At no time during Anthropic's negotiations with the Department did I or any other Anthropic employee state that the company wanted that kind of role," she wrote.
Heck also disputes the government's characterization of how the negotiation unfolded. She says the concern about Anthropic potentially disabling or altering its technology mid-operation — one of the stated justifications for the supply chain designation — was never raised during months of actual negotiations. It appeared for the first time in the government's court filings, giving Anthropic no opportunity to respond.
Thiyagu Ramasamy, Anthropic's Head of Public Sector, who spent six years at AWS managing AI deployments in classified government environments, addresses the "operational veto" claim directly. His argument is that such a veto is technically impossible as a matter of architecture: once Claude is deployed inside an air-gapped government system operated by a third-party contractor, Anthropic has no access to it. There is no remote kill switch, no backdoor, and no mechanism to push unauthorized updates. Any change to the model requires the Pentagon's explicit approval and its own action to install it. Anthropic, he says, cannot even see what government users are typing into the system.
The legal question
Anthropic's lawsuits, filed in California federal court and the D.C. Circuit, argue that the supply chain designation is government retaliation for protected speech, in violation of the First Amendment, and that it exceeds the statutory authority of the supply chain risk law. The designation is the first ever applied to an American company under an authority designed to block foreign adversaries from Pentagon supply chains. Using it against a domestic AI company whose stated positions on weapons and surveillance differ from the government's preferences is exactly the kind of viewpoint-based punishment the First Amendment was designed to prevent, Anthropic argues, according to AP News.
The government, in a 40-page response filed this week, rejects this framing entirely. It argues that Anthropic's refusal to allow all lawful military uses of its technology was a business decision, not protected speech, and that the designation was a straightforward national security call made on the merits.
The hearing on Tuesday will test whether the court agrees that the designation deserves a closer look — or whether the executive branch has the authority it claims.
The Palantir problem
The lawsuits land against a backdrop that makes the dispute stranger than a pure constitutional question. Palantir's Maven Smart System, which the Pentagon just designated its official AI program of record, runs on Anthropic's Claude. The company now barred from the defense contractor ecosystem is thus embedded in the primary AI targeting system of the U.S. military, Reuters reports. Anthropic's lawsuit does not spell out what follows from this; the legal argument is about the designation, not Palantir's architecture. But the practical contradiction is there for the reader to see.
Separately, OpenAI and xAI have been cleared for use in classified systems in the weeks since Anthropic was designated. The competitive dynamics are not incidental to the story.
What happens next
Judge Lin will hear arguments Tuesday. Anthropic is asking for an emergency block on the designation while the case proceeds. The underlying timeline, from "very close" to "no chance" in roughly a week, raises the factual question the court will need to resolve before it reaches the constitutional one.

