DOJ: Anthropic's Guardrail Refusal Is 'Conduct,' Not Protected Speech
The Trump administration filed a legal brief defending the Pentagon's blacklisting of Anthropic, arguing the company's refusal to remove safety guardrails on its AI models is conduct, not protected speech — and therefore not shielded by the First Amendment.
The Justice Department filing, submitted in California federal court, came one week before a scheduled March 24 hearing on Anthropic's request for a preliminary injunction that would temporarily block the designation while its lawsuit plays out.
"The President directed all federal agencies to terminate their business relationships with Anthropic," the DOJ filing stated, "because Anthropic refused to release the restrictions on the use of its products — which refusal is conduct, not protected speech."
Anthropic sued the administration in March, arguing the supply-chain risk designation was unlawful. The company has maintained it cannot agree to contracts permitting unrestricted use of its AI in autonomous weapons or mass surveillance of American citizens. The DOJ's response: those are contractual terms, not expression.
Meanwhile, the Pentagon is already moving forward without Anthropic. Cameron Stanley, the department's chief digital and AI officer, told Bloomberg the Department of Defense "is actively pursuing multiple LLMs into appropriate government-owned environments" and that engineering work on replacement systems "has begun." Stanley said he expects the alternatives to be available for operational use "very soon."
The government has also signed agreements with OpenAI and xAI to fill the gap, including a deal for Elon Musk's xAI to deploy Grok in classified systems.
Anthropic said in a statement it was reviewing the filing: "Seeking judicial review does not change our longstanding commitment to harnessing AI to protect our national security, but this is a necessary step to protect our business, our customers, and the public."
The case raises a question with no clear precedent: can an AI company claim First Amendment protection for what its model will and will not do? Legal experts are divided. The Cato Institute argued in a friend-of-the-court brief that the administration has the right to set procurement standards. The Brennan Center countered that forcing a company to alter its product to win government contracts is a content-neutral regulation that still triggers First Amendment scrutiny.
A ruling in Anthropic's favor could establish that AI safety commitments constitute protected expression — a precedent with sweeping implications for how the government regulates frontier models. A ruling against the company would give the executive branch broad power to exclude AI vendors based on their safety policies.