A federal judge on Tuesday called the Pentagon's decision to blacklist Anthropic's AI tools from U.S. government systems "troubling," pressing a Justice Department lawyer to explain how the unprecedented supply-chain-risk designation of an American company could survive basic legal scrutiny.
U.S. District Judge Rita Lin, sitting in San Francisco, held a hearing on Anthropic's bid for a preliminary injunction that would halt the designation while its lawsuit plays out. The posture was narrow; Lin repeatedly noted that her job is to decide whether the government's actions were unlawful, not whether they were good policy. But her questions made clear she is not persuaded by the national security rationale the government has offered.
The government's case rests on a theory: that Anthropic's Claude models pose a supply-chain risk because the company could, in principle, remotely disable or alter its software running on government systems. It's a kill-switch argument. But the Department of War, Lin observed, doesn't actually know whether that is true.
"Can Anthropic secretly push an update without DoW's knowledge or consent?" Lin asked the government's lawyer, in an exchange reported by the legal newsletter Ashtalks. The answer, on the record: "We haven't taken a position on whether Anthropic maintains that ability. There's an audit underway." Lin pressed until the government said it plainly: "Sitting here today, the Department of War does not know one way or the other whether Anthropic can in fact update its software on DoD systems without DoW's knowledge or consent."
The designation, signed by Secretary of War Pete Hegseth on March 3, 2026, was preceded by a risk assessment memorandum from Emil Michael dated March 2, just one day earlier, according to a filing cited by Ashtalks. The administrative record supporting the designation was, in Lin's reading, notably thin on the connection the government was asking her to accept.
"The bar seems low enough to apply to any stubborn IT vendor that insists on certain terms," Lin told DOJ's lawyer, in remarks covered by CNBC. She later asked why Hegseth would post the claim publicly if it had no legal effect. The government's lawyer said he didn't know.
On the question of whether Anthropic's own behavior undermines the kill-switch theory, Lin was direct. A real saboteur, she noted, would accept the contract and act quietly. Everything Anthropic has been accused of — including going public with its contract disputes — was done openly and with direct communication to the Department. "I'm not seeing the connection there," she said on the record.
The congressional notification letters that must accompany a supply-chain-risk designation under 10 U.S.C. § 3252 also drew scrutiny. Lin asked the government directly: does it concede that Secretary Hegseth's letters to the relevant congressional committees did not contain the required discussion of less intrusive measures? Government counsel answered yes: the letters contain no such discussion.
Anthropic filed suit in two courts: the Northern District of California, challenging the § 3252 designation directly, and the D.C. Circuit, challenging the parallel FASCSA order through that statute's separate review pathway. The dual filing reflects a statutory structure in which the two authorities carry distinct review processes, as an analysis by the law firm Mayer Brown explains.
The case has attracted an unusual coalition of amici. A bipartisan group of nearly 150 retired federal and state judges filed a brief in the D.C. litigation arguing the law contains no national security exception that lets the government avoid judicial review entirely. Google chief scientist Jeff Dean and others from Google and OpenAI signed a separate brief warning that the designation could have serious ramifications for the AI industry and raising concerns about AI being used for government surveillance.
If Lin grants the preliminary injunction, the practical effect would be straightforward: Anthropic could continue doing business with government contractors and federal agencies while the case proceeds. The injunction would not require the U.S. government to use Claude or prevent it from transitioning to another vendor, the company noted in its filing. Lin must now work through the four-part preliminary-injunction standard: likelihood of success on the merits, irreparable harm absent an injunction, the balance of equities, and the public interest.
What she does next will create precedent where none exists: no American company has ever been designated a supply-chain risk under the statutory framework the Pentagon invoked. If the injunction is granted, it will be because a court found the government acted arbitrarily. If it is denied, the designation stands, and every AI company doing business with the federal government will be operating under a new and less well understood kind of regulatory risk.
The government's position, as Lin characterized it, is that a company can be punished for being difficult in a contract negotiation — for being, in her words, "stubborn" and for "asking annoying questions" — if it happens to work in an area touching national security. "A real saboteur," she observed, would not pick a public fight. Anthropic picked the public fight. Whether that fact is exculpatory or damning is now, at least in part, for the court to decide.