US Govt Says Anthropic AI An 'Unacceptable Risk' To Military - Barron's
The U.S. government told a federal court Tuesday that Anthropic poses an unacceptable national security risk because the company could alter or disable its AI systems in ways that serve its own interests rather than America’s — and that AI models are inherently “acutely vulnerable to manipulation.”
The 40-page filing (Case 3:26-cv-01996-RFL) in U.S. District Court for the Northern District of California is the government’s first direct response to two Anthropic lawsuits challenging Defense Secretary Pete Hegseth’s decision last month to designate the company a “supply chain risk.” A hearing is scheduled March 24 before Judge Rita F. Lin.
The DOJ’s core argument, laid out in a section titled “Anthropic Rejects the Department of War’s Standard ‘Any Lawful Use’ Policy”: Anthropic’s refusal to permit “any lawful use” of its technology means the company, not the government, retains ultimate control over how Claude is deployed. “Giving Anthropic access to the Department of War’s warfighting infrastructure would introduce unacceptable risk into DoW supply chains,” the filing states.
Anthropic has maintained that the military — not the company — decides how its technology is used. In a February 26 statement, CEO Dario Amodei said Anthropic “has never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.” The company argues the blacklist is ideological punishment; the government argues Anthropic’s own guardrails make it an unreliable partner.
The filing also challenges Anthropic’s constitutional claims. It argues that refusing a contractual term is not speech, that the government would have acted identically regardless of any speech, and that the Secretarial Determination was “lawful and reasonable.” A separate section disputes whether Hegseth’s social media post constitutes a final agency action reviewable under the APA.
The broader context sharpens the irony. Anthropic spent years positioning itself as the cautious, safety-first lab — the one that wouldn’t race recklessly toward artificial general intelligence. That caution now looks like a liability. The same week Hegseth declared Anthropic a supply chain risk, OpenAI secured a deal to put its technology in classified military systems. Google announced Gemini-for-government. Meanwhile, Anthropic stands alone, arguing that current-generation AI isn’t accurate enough to be used in weapons, and that safety guardrails are not a political preference but a capability constraint.
Thirty-seven researchers from OpenAI and Google, including Google Chief Scientist Jeff Dean, filed an amicus brief in support of Anthropic, arguing the episode could chill open debate about AI risks. “By silencing one lab, the government reduces the industry’s potential to innovate solutions,” they wrote.
Anthropic executives have said the blacklist could cost the company billions of dollars in 2026 revenue. One partner has already switched to a rival model, eliminating a pipeline worth more than $100 million, and negotiations with financial institutions worth roughly $180 million have been disrupted. The company has not ruled out returning to the negotiating table.
Notebook: Maven, the classified system name for military access to Claude, is now being discussed in open court. That alone shows how far this dispute has escalated.