Steve Bannon just told the Pentagon it was wrong. That is not the story — it is the corroboration.
The story is a contract clause, a lawsuit, and a meeting scheduled for Friday that could determine whether this stays a legal dispute or becomes a wider rupture over who sets the ethical boundaries for AI in warfare.
Anthropic CEO Dario Amodei is scheduled to meet White House chief of staff Susie Wiles on Friday, per AOL. The agenda is not publicly disclosed, but the context is. For months, Anthropic has refused to accept Defense Department contract language requiring it to make its AI systems available to the military for "any lawful use," a phrase that, in practice, would extend to fully autonomous weapons and mass domestic surveillance. That refusal is the subject of a federal lawsuit the company filed in March against Defense Secretary Pete Hegseth, the Pentagon, the Executive Office of the President, and other agencies, per Business Insider.
Hegseth pressed the company to accept the "any lawful use" standard or forfeit its Defense Department contracts, per Business Insider. Anthropic's position, stated in a company blog post and echoed by Amodei in interviews, is that frontier AI systems are simply not reliable enough to power fully autonomous weapons, and that the company "cannot in good conscience" provide a product that puts American warfighters and civilians at risk, per Business Insider. Anthropic offered to work with the department on research to improve system reliability; the department declined.
When Anthropic refused, the Pentagon labeled it a supply chain risk, a designation historically reserved for foreign adversaries and never before applied to an American AI firm, per AOL. Judge Lin, ruling in March on Anthropic's initial challenge, described the Pentagon's move as something that "looks like an attempt to cripple the company," per Politico. Even so, a federal appeals court declined on April 8 to temporarily block the designation, leaving Anthropic's challenge to proceed through the courts.
The fracture inside Trump-world came after. Steve Bannon, the former White House strategist who built his brand on disrupting the Republican establishment, filed a brief supporting Anthropic's position and, speaking at the Semafor World Economy Summit, said that letting the Pentagon run Anthropic's flagship model, Claude, with minimal restrictions would be "almost too dangerous." He proposed an atomic-energy-style federal commission to oversee the industry, a regulatory framework the current administration has largely rejected, per AOL. "I think Anthropic had it right," Bannon said.
OpenAI accepted the same terms and received the contract Anthropic refused, per CNBC. The refusal is all the more notable because Anthropic had been working with the Defense Department and intelligence agencies since late 2024 through a partnership with Palantir; the company walked away from an existing relationship, not a hypothetical one.
What to watch: Friday's Amodei-Wiles meeting. Whether it produces a negotiated settlement, narrows the legal confrontation, or hardens both sides will determine which way this breaks: back toward a fight over contract terms, or on toward the wider rupture the lawsuit has already opened.