The Pentagon Banned the AI That Found a 27-Year-Old Bug. China Is 6 Months Behind.
The Defense Department has given its offices six months to remove Anthropic from its networks, and a federal judge just hit the brakes on a follow-on order. That is the state of play at the center of a widening rift between the Pentagon and the White House over one of the most capable vulnerability research tools the US military has ever had access to.
Defense Secretary Pete Hegseth labeled Anthropic a "supply chain risk" in February and ordered the company off Defense Department networks. A federal judge temporarily paused a follow-on Trump administration directive, but the department has given its offices six months to remove Anthropic's products (The Hill). Meanwhile, CEO Dario Amodei told an audience last week that Chinese AI systems are roughly six to 12 months behind the capability level of Mythos, Anthropic's model, leaving a window of as little as six months for the US to close its own vulnerabilities before adversarial actors can exploit them at machine speed (CNBC).
The department signed agreements with seven AI companies last week to fill the gap: SpaceX, OpenAI, Google, Nvidia, Reflection AI, Microsoft, and Amazon Web Services. Anthropic was not among them (Reuters). Pentagon CTO Emil Michael said the model has "capabilities that are particular to finding cyber vulnerabilities" even as he insisted the company behind it is too risky to trust.
The contradiction is not subtle. And it is not settled.
The capability that makes Mythos simultaneously valuable and dangerous lives in a technical preview Anthropic published in April. In internal benchmarks, the model succeeded at exploit development 181 times on tasks where its predecessor, Opus 4.6, had failed nearly 100 percent of the time. It unearthed a 27-year-old bug in OpenBSD, a critical remote code execution flaw in FreeBSD's NFS server, and roughly 300 vulnerabilities in Firefox alone, where an earlier Claude model had found about 20. Across all software tested, the total runs into the tens of thousands, most still unpatched and most not publicly disclosed (Anthropic red team blog). The company's red team published the findings publicly, dense enough to unsettle anyone who runs enterprise software at scale, as part of its effort to make the case through evidence rather than politics.
The guardrails that sank the deal are not fully public. The Hill reported that negotiations between the Pentagon and Anthropic collapsed over whether Anthropic's AI could be used for domestic surveillance or fully autonomous attacks, uses that Anthropic's terms of service prohibit but that the Pentagon wanted to permit. Anthropic declined to accept contract terms that would have allowed the military to override those restrictions. When the company would not bend, the department moved to remove it.
Pentagon staffers and contractors told Reuters they view Anthropic's technology as superior to the alternatives and are reluctant to give it up (Reuters). The department's own AI platform, GenAI.mil, has drawn more than 1.3 million Defense Department personnel since its launch. Removing Claude means removing the system those users have been trained on.
Anthropic built its safety reputation partly on refusing to let its most powerful models be weaponized. The Pentagon's current position — that Anthropic is too dangerous to trust while simultaneously racing to adopt competing systems — assumes those competitors have comparable safety architectures. Whether they do is not publicly known. The agreements announced last week did not include detailed terms of service.
The White House is not finished with Anthropic. Trump told CNBC he thinks a deal is "possible" and that he likes "smart people" and "high-IQ people" (CNBC). Treasury Secretary Scott Bessent, Fed Chair Jerome Powell, and White House chief of staff Susie Wiles have all spoken with Anthropic about Mythos. Several civilian agencies have requested and received access to the model (The Hill). The administration appears to be distinguishing between Mythos, useful for defensive patching, and Anthropic itself, which it still regards with suspicion.
This bifurcation is not sustainable, said Jessica Tillipman, associate dean for government procurement law studies at George Washington University Law School. "They've now adopted completely inconsistent positions across the government where one agency continues to dig its heels on this ridiculous designation, and the rest is trying to actually work in reality," she told The Hill.
What happens inside the six-month window is the unresolved question. The Pentagon is onboarding new AI providers at a pace its own officials describe as unprecedented: less than three months, compared to the 18-month norm that preceded this crisis (Reuters). Whether those providers can match Mythos's vulnerability research capabilities before Chinese AI closes the gap is not answered by the agreements announced last week. Reflection AI, the one newcomer with a direct Trump family connection (Donald Trump Jr.'s venture firm 1789 Capital is an investor), has not published comparable benchmark data.
Anthropic is suing to remove the supply chain risk designation. The case is working through the courts. If Amodei's timeline is right, Chinese AI systems will reach Mythos-level capability sometime in the next six to 12 months. The question is what will be left to exploit when they do.