Anthropic told an appeals court this week it has built Claude in a way that makes the AI functionally untouchable by its own creators once deployed inside classified Pentagon networks — a technical claim that forms the centerpiece of the company's legal challenge to a Trump administration designation that threatened its federal business.
The argument, laid out in a 96-page filing with the U.S. Court of Appeals for the D.C. Circuit, marks a rare instance of an AI company staking its defense on what it cannot do rather than what it can. Anthropic is not arguing the technology is safe to use for military applications. It is arguing the technology cannot be misused in the specific ways the government fears, because the company itself lacks the technical ability to intervene once Claude is operating inside a secure government environment.
In April, the Pentagon canceled a $200 million contract with Anthropic after the company refused to remove from its terms of service usage restrictions prohibiting lethal autonomous warfare and mass surveillance of Americans. OpenAI subsequently struck a deal to provide its technology to the U.S. military. Oral arguments in the D.C. Circuit case are scheduled for May 19.
The technical architecture Anthropic is invoking is called "model locking" in industry parlance — a design philosophy where a deployed AI system is configured so that its behavior cannot be altered remotely, even by the organization that built it. The company declined to detail which specific technical mechanisms enforce this in Claude's case, citing the sensitive nature of the filing. The government has not yet filed its formal response.
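Because the filing does not disclose the actual mechanisms, any illustration is necessarily speculative. One generic ingredient security engineers associate with locked deployments is integrity pinning: the serving environment refuses to load model weights unless they match a cryptographic digest fixed at deployment time, which closes off remote substitution even by the vendor. The sketch below shows that one idea only; the file name, digest, and function names are hypothetical, not a description of Claude's design.

```python
# Illustrative sketch only: the filing does not describe Claude's mechanisms.
# Shows one generic "model locking" ingredient: pinning deployed weights to a
# digest recorded at deployment time, so altered weights cannot be loaded.

import hashlib
import sys
from pathlib import Path

# Placeholder value; in a real deployment the digest would be recorded once,
# inside the secure environment, when the system is installed.
PINNED_SHA256 = "0" * 64

def sha256_of(path: Path) -> str:
    """Stream the file through SHA-256 so large weight files fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_locked_model(weights_path: Path) -> bytes:
    """Refuse to serve weights whose digest differs from the pinned value."""
    actual = sha256_of(weights_path)
    if actual != PINNED_SHA256:
        sys.exit(f"integrity check failed: {actual} does not match pin")
    return weights_path.read_bytes()  # hand verified bytes to the runtime

if __name__ == "__main__":
    load_locked_model(Path("model.weights"))  # hypothetical file name
```

A design like this makes tampering detectable rather than impossible, which is one reason the security researchers quoted later in this piece question how far such claims of immutability can go.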
Anthropic previously prevailed in a separate case in San Francisco federal court that raised the same issues, a ruling that prompted the Trump administration to withdraw stigmatizing labels it had applied to the company. But the D.C. Circuit proceeding is distinct and turns on a different legal question: whether the Pentagon's supply-chain risk designation, which treats Anthropic as a potential national security threat, can survive Anthropic's argument that the product in question cannot be weaponized in the manner the designation assumes.
Supply-chain risk designations typically target companies whose products could be used against U.S. interests. Anthropic's filing argues that Claude, as deployed in classified networks, cannot be used that way at all — a claim that, if accepted, would undermine the factual basis for the designation regardless of the company's other activities or partnerships.
What the government will argue in response is not yet public. The D.C. Circuit rejected Anthropic's request for an injunction blocking the Pentagon's actions while the case proceeds, a procedural ruling that does not signal how the panel will ultimately decide the merits. The May 19 hearing will be the first opportunity to hear government lawyers articulate why they believe the supply-chain risk framework applies to a company whose core technical claim is that its product cannot perform the harmful acts the framework is designed to prevent.
What to watch: whether the government's response invokes classified evidence the court will review in camera — a legal term meaning the judges will examine sensitive materials privately without public disclosure — and whether Anthropic's technical architecture claim survives scrutiny from security researchers who have questioned how truly immutable any deployed AI system can be. The company's legal victory in San Francisco bought it regulatory relief. The D.C. Circuit case will test whether that relief holds.