A Pentagon official called Amodei a liar with a "God-complex" after Anthropic refused to strip AI safeguards. Then he cashed out a massive stake in xAI. The government banned Anthropic. Its own senior officials still take Amodei's calls.
The Pentagon blacklisted the only AI company it had authorized for classified systems. The trigger was not a policy disagreement. It was a raid.
In early January, when U.S. forces captured Nicolas Maduro in Venezuela, Claude was part of the operation. An Anthropic executive asked Palantir whether the company's AI had been used in the mission. The question, sources familiar with the matter say, was routine business due diligence. In the Pentagon, it landed like a declaration of war.
Emil Michael, the Under Secretary of Defense for Research and Engineering, had what he later described on a podcast as a "whoa moment": the U.S. military was dependent on a company that might refuse to cooperate at some future point. He went to Pete Hegseth. On February 28, 2026, Hegseth declared Anthropic a "supply chain risk," a designation historically reserved for foreign adversaries and applied to an American company for the first time, according to Defense One. Trump issued a Truth Social directive ordering all agencies to cease using Anthropic technology within six months.
The blacklisting was not inevitable. It was championed by a man who had a financial stake in the company that would replace Anthropic.
Michael held between $500,000 and $1 million in xAI stock when he was confirmed as Under Secretary in May 2025, The Guardian reported. He received an OGE divestiture certificate on December 18. The Pentagon announced its xAI deal on December 22. Michael sold his xAI stake on January 9 for a gain that could have been as high as 4,800 percent, potentially $24 million on an initial position valued at under $1 million. He is the architect of Anthropic's blacklisting and the primary beneficiary of the xAI deal that replaced Anthropic's contract.
He has called Dario Amodei a liar with a "God-complex" on social media, The Guardian reported. On the All-In podcast, he described Amodei as someone who thought he knew better than the government what to do with American military data, Fortune reported. The Pentagon declined to comment on the financial disclosures.
Anthropic had signed a $200 million contract with the Pentagon in July 2025 to deploy Claude on GenAI.mil, CNBC reported. Negotiations stalled in September when DOD demanded "all lawful purposes" access: unrestricted use of Claude for any legal application, including mass surveillance of Americans and fully autonomous weapons, according to Defense One. Anthropic refused. Its statement at the time said the company "cannot in good conscience" comply. The applications DOD demanded are precisely the ones Anthropic's safeguards were built to prevent.
OpenAI, faced with the same demand hours after Anthropic's ban, accepted it. The company signed a deal with the Pentagon on February 27, 2026, for "any lawful purpose" access, The New York Times reported. Then came the backlash. Within days, OpenAI amended the agreement to add the same protections Anthropic had refused to remove: no mass surveillance of Americans without judicial oversight, no lethal autonomous weapons without human authorization. An OpenAI executive, Caitlin Kalinowski, resigned, saying the red lines deserved more deliberation. The outcome of Anthropic's refusal was a ban. The outcome of OpenAI's capitulation, followed by quiet amendment, was a contract. The market processed which red lines were actually unreasonable, and they were not Anthropic's.
On March 26, Judge Rita Lin of the U.S. District Court for Northern California issued a preliminary injunction blocking the government from enforcing the ban. Her language was remarkable for a court order. The Pentagon's action, she wrote, was "classic illegal First Amendment retaliation," Defense One reported, the government punishing a company for speaking publicly about its dispute. She called the blacklisting an "Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S." for refusing to surrender its core operating principles. The government appealed.
On April 8, a federal appeals court in Washington, D.C., stayed Judge Lin's injunction, allowing the ban to remain in effect while the litigation proceeds, CNBC reported. The ruling was a setback for Anthropic, but a narrow one. The court's reasoning revealed the government's actual argument: "On one side is a relatively contained risk of financial harm to a single private company. On the other side is judicial management of how, and through whom, the Department of War secures vital AI technology during an active military conflict." The court was not saying Anthropic was wrong. It was saying wartime acquisition priorities supersede normal legal protections while litigation proceeds. Two separate statutes govern the dispute, requiring challenges in two different courts. This legal architecture could keep the case moving for years.
The internal Pentagon memo that formed the basis for the supply chain risk designation is extraordinary. According to a court filing, the memo found that Anthropic's risk level escalated into a supply chain risk "principally because it was leveraging DoW ongoing good faith negotiations for Anthropic own public relations and its increasingly hostile manner through the press." The government did not blacklist Anthropic because its technology was a security risk. It blacklisted the company because the company's leadership was publicly critical of the government's demands.
Despite the ban, Amodei was on a private call last week with Vice President JD Vance, Treasury Secretary Scott Bessent, Elon Musk, and the CEOs of OpenAI, Google, and Microsoft, CNBC reported. The government has branded Anthropic a security risk. The government's own senior officials are still in the room with its CEO.
Anthropic released Mythos Preview on April 7, the day before the appeals court ruling. The technical report describes a model "strikingly capable" at cybersecurity tasks: it finds and exploits zero-day vulnerabilities across every major operating system and browser, and non-experts can use it to develop working exploits overnight. The capability emerged from general improvements rather than explicit training for offensive security work. Anthropic had briefed senior U.S. government officials on the full scope of these capabilities, including offensive cyber applications, before releasing the model to roughly 40 companies through a program called Project Glasswing, which focuses on defensive security research, CNBC reported. Fed Chair Jerome Powell and Treasury Secretary Bessent were also briefed on the national security implications, along with major banks.
That is the leverage Anthropic still holds. The Pentagon banned the one AI company it had trusted with classified systems, scrambled to OpenAI and xAI to fill the gap, and is now being briefed by that same banned company on a model so capable it can autonomously penetrate the infrastructure of every major technology provider. Anthropic is simultaneously fighting the government in court and briefing that government on its most powerful offensive cyber tool. The appeals court sided with the Pentagon on procedure. It said nothing about who is right.
The litigation continues. The war in Iran continues. The calls between Amodei and Vance continue. The contradiction has not been resolved. It has only been papered over.