DoD's Claude Ban Timeline Underestimated by Up to a Year, AI Certification Experts Say
As the Trump administration pushes to remove Anthropic's AI tools from military networks, Pentagon insiders say the transition is neither quick nor painless — and some are betting it won't happen at all.

Image: Gemini Imagen 4
The Pentagon's order to remove Anthropic's Claude from military networks is running into a basic problem: the agencies that depend on it say replacing the tool could take more than twice as long as the six-month deadline allows.
Defense Secretary Pete Hegseth designated Anthropic a supply-chain risk on March 3, following a dispute over guardrails on how the military could use the company's AI tools, according to a Reuters report. The order gives the Pentagon and its contractors six months to stop using Claude. But contractors and current Pentagon officials, speaking anonymously because they were not authorized to discuss the matter publicly, say the technical reality of replacing an AI system embedded across military workflows is far messier than the order suggests.
The most concrete challenge is recertification. Joe Saunders, CEO of RunSafe Security, a company that helps the military incorporate AI tools, told Reuters that certifying a replacement system for use on classified or military networks takes 12 to 18 months when an existing system is swapped for a new one. "It's not just costly, it's a loss of productivity," he said.
The depth of Claude's integration complicates the picture further. Palantir's Maven Smart System — a platform used for intelligence analysis and weapons targeting, with Pentagon contracts worth more than $1 billion — built key workflows using Anthropic's Claude Code, Reuters reported. Palantir will need to replace Claude with another AI model and rebuild parts of its software, one of the sources said. Palantir did not respond to a request for comment.
Some Pentagon staff are "slow-rolling" their replacement of Claude because they are actively using it to create workflows, according to a Pentagon technologist. One chief information officer at a federal agency told Reuters the agency plans to drag out the phase-out, betting that the dispute will be resolved before the six-month deadline expires.
The resistance reflects how deeply Claude is embedded in Pentagon operations. Anthropic announced a $200 million defense contract in July 2025, and Claude became the first AI model approved to operate on classified military networks. Sources told Reuters the technology remains in use despite the blacklisting, and that the Pentagon used Claude tools to support operations during the conflict with Iran — what one expert called "the clearest signal" of how highly the Defense Department values the tool.
The political dispute is playing out in parallel with the technical one. Anthropic has sued the Pentagon, the Executive Office of the President, and a host of other federal agencies to block Hegseth's directive. Meanwhile, orders to stop using Claude are filtering through the chain of command. One official said staff are complying because "no one wants to end their career over this," but described the shift as wasteful. Tasks previously handled by Claude, such as querying large datasets, are in some cases now being done manually using tools like Microsoft Excel.
The contrast with available alternatives is also a factor. One IT contractor who works with the Pentagon told Reuters that Anthropic's Claude AI model "is the best," while xAI's Grok often produced inconsistent answers to the same query.
Roger Zakheim, director of the Ronald Reagan Presidential Foundation and Institute, offered a blunt summary of the situation: "What we are seeing play out here is the tension of adoption, both inside the Pentagon as well as the political level."

