Trump Administration Defends Decision to Blacklist Anthropic While Amicus Briefs Mount - MeriTalk
The Pentagon's legal team has a new argument for why Anthropic is an unacceptable risk to national security: the AI company might turn off its own technology during a war.
The DOD filed a 40-page document in California federal court Tuesday laying out, for the first time, the specific nature of its concern. Anthropic might "attempt to disable its technology or preemptively alter the behavior of its model" during "warfighting operations" if the company felt its corporate red lines were being crossed, the filing said. The red lines in question are Anthropic's refusals to allow Claude to be used for mass surveillance of Americans or for autonomous weapons targeting decisions.
This is the legal theory underlying the supply chain risk designation: not that Anthropic is a security vulnerability in the conventional sense, but that a company willing to set ethical constraints on its technology is inherently unreliable in a conflict. The argument frames Anthropic's principles as a military liability.
Chris Mattei, a constitutional rights lawyer and former Justice Department attorney, called the argument unconvincing. "The government is relying completely on conjectural, speculative imaginings to justify a very, very serious legal step they've taken against Anthropic," he told TechCrunch. "There has been no investigation to support the department's concerns of Anthropic potentially disabling or altering its AI models during warfighting operations." He also noted the department failed to "articulate a credible or even comprehensible rationale for why Anthropic's refusal to agree to an 'all lawful use' provision rendered it a supply chain risk as opposed to a vendor that DOD simply didn't want to do business with."
The preliminary injunction hearing is scheduled for next Tuesday. Anthropic has asked the court to temporarily block enforcement of the designation while the lawsuits proceed.
Meanwhile, the amicus briefs supporting Anthropic continue to accumulate. On March 13, a group of 14 Catholic moral theologians and ethicists filed their own brief in the Northern District of California, arguing that Church teaching supports Anthropic's refusals. The scholars — including professors from Georgetown, Loyola, and Santa Clara University, and Legionary Father Michael Baggot — wrote that Anthropic "was acting as a responsible and moral corporate citizen, not as a threat to the safety of the American supply chain."
On autonomous weapons specifically, the ethicists drew on Catholic just war doctrine: decisions about proportionality and discrimination in targeting require human judgment that "is not mere pattern matching." On mass surveillance, they argued it violates the dignity of every person surveilled and undermines the principle of subsidiarity — that local bodies closest to a problem should handle it, not a distant centralized system. The Vatican has expressed opposition to lethal autonomous weapons systems since 2013.