Dario Amodei published a blog post last month that contained a sentence no American tech CEO had ever written: the company, he said, cannot in good conscience accede to a demand from the United States government. The demand was not classified. The company did not invent a pretext to refuse it. The refusal was public, documented, and absolute.
Then the government struck back with a blunt instrument: the Pentagon designated Anthropic a supply chain risk, the same legal mechanism used against Huawei and Kaspersky. No American technology company had ever been blacklisted that way simply for refusing a government demand.
Anthropic sued on March 9, filing in federal court in San Francisco. The case is Anthropic v. Department of Defense, and it is now the most concrete test of whether the executive branch can use national security procurement rules to punish a company for having red lines. U.S. District Judge Rita Lin was not gentle. At a hearing on March 24, she described the government's position as troubling and said the bar for a supply chain risk designation seemed low enough that a company could qualify simply by being stubborn and asking annoying questions. That is not the language judges use when they find an argument compelling.
The two red lines Anthropic drew were specific: mass domestic surveillance and fully autonomous weapons. These were not abstractions. They were use cases the Defense Department was seeking for its GenAI.mil platform, according to court filings and reporting by CNBC. Anthropic had been a willing government partner before this — it was the first frontier AI company to deploy on classified U.S. government networks, the first to operate at the National Laboratories, and the first to provide custom models for national security customers. The company had signed a $200 million contract with the DoD in July 2025 alongside Google and OpenAI. This was not a company that avoided the government. It was a company that drew a line.
The negotiations collapsed in September, after Anthropic and the Pentagon began discussing expanded deployment of Claude on GenAI.mil. The sticking point, per CNBC: the DoD wanted contractual guarantees that it could use the models for the two disputed use cases. Anthropic said no. Amodei's public statement, posted to the Anthropic blog, made the position unambiguous. "We cannot in good conscience accede to their request," he wrote. "Regardless, these threats do not change our position."
The retaliation was fast. On February 24, Defense Secretary Pete Hegseth gave Amodei a deadline: relent by 5:01 p.m. on Friday, February 27. When Anthropic did not relent, President Trump ordered federal agencies that same day to stop using Anthropic's products, and Hegseth designated the firm a supply chain risk. Pentagon Undersecretary Emil Michael went further, posting on X that Amodei had a "God complex" and wanted to personally control the U.S. military, an extraordinary charge from a senior defense official against the head of a domestic technology company. The characterizations in Michael's post could not be independently verified against the original X thread.
The Justice Department's legal argument for the designation was striking on its own terms. DOJ lawyers said the risk was that Anthropic might install a kill switch or change how its models function in ways that could sabotage or subvert IT systems. The company that built Claude, the argument went, might secretly turn it against the systems it already powers. Lin pushed back directly: that standard, she observed, seemed to describe almost any software vendor.
The irony cuts deeper when you know the history. Claude had already been used operationally by the U.S. government, including, according to The Guardian, as part of the U.S. operation to seize Venezuelan President Nicolas Maduro, a raid that involved bombing across Caracas and, per Venezuelan defense ministry sources, killed 83 people. The model was used through Anthropic's partnership with Palantir Technologies, a defense contractor. The government found the model useful enough to deploy in a sensitive kinetic operation. Two months later it was an unacceptable supply chain risk.
Anthropic is not a small company that can be quietly squeezed. Its annualized revenue run rate reached $19 billion during the dispute, up from $14 billion a few weeks earlier, per Reuters. Enterprise sales make up roughly 80 percent of that revenue. The company had also voluntarily walked away from several hundred million dollars in revenue by cutting off Claude access for firms linked to the Chinese Communist Party. This is a company that has shown it will leave money on the table for principled reasons. That changes the leverage calculation.
Whether it changes the legal outcome is another question. The case is likely to settle before a final ruling; the political and economic pressure on both sides is significant. But the precedent being set is already visible. Future labs will know what happens when you refuse a government demand: you lose your contracts, you get blacklisted, you get called names on social media by undersecretaries. Judge Lin called the government's position troubling. That word matters, but what matters more is whether a company with a $19 billion revenue run rate can absorb this fight, and whether one without that financial standing could survive it.
The answer to the second question is almost certainly no.