The US Wants Mythos for Itself and Is Blocking Its Allies From Getting It
When India and Japan both started calling Washington in the past two weeks, they were not calling about trade deficits or tariff schedules. They were calling about a single AI model that can find vulnerabilities software developers have missed for twenty-seven years.
Anthropic’s Mythos, the cybersecurity model the company has deemed too dangerous to release publicly, has drawn formal opposition from the White House to its proposed expansion to roughly 70 additional companies (France24) — even as the National Security Agency continues using it, and even as the administration drafts a separate executive order that would restore the Pentagon’s access to Anthropic technology after a February ban (France24). The contrast is not subtle: allies are being told no. The government is negotiating to bring the company back.
The dispute is being described in policy circles as a question of compute capacity and national security. The deeper pattern looks more familiar. Washington has been here before. The Cold War-era Coordinating Committee for Multilateral Export Controls restricted Western technology transfers to adversaries. The 1990s Clipper chip controversy saw the Clinton administration attempt to mandate government access to encryption. Semiconductor export controls have run in various forms ever since. In each case, the logic was the same: certain technologies are too consequential to leave to the market, and some allies cannot be fully trusted with capabilities the U.S. government wants to monopolize.
Anthropic announced Mythos in early April alongside Project Glasswing, an initiative that provided access to the model for defensive security work to a group of twelve companies including Amazon Web Services, Apple, Cisco, CrowdStrike, Google, JPMorganChase, Microsoft, Nvidia, and Palo Alto Networks (Anthropic). The company committed $100 million in usage credits and reported that Mythos had independently discovered a twenty-seven-year-old vulnerability in OpenBSD — one of the most security-hardened operating systems in the world — a sixteen-year-old flaw in FFmpeg that automated tools had swept past five million times, and an unpatched privilege-escalation chain in the Linux kernel (Anthropic). It scored 83.1 percent on the CyberGym cybersecurity benchmark, compared to 66.6 percent for Anthropic’s own Claude Opus 4.6 (Anthropic). The company called the findings a watershed moment for defensive cybersecurity.
The controlled-access model was already under pressure by the time the White House filed its objection. On the same day Anthropic announced Glasswing, a small group of unauthorized users in a private online forum obtained access through a third-party vendor and began running security research, according to documentation reviewed by Bloomberg and confirmed by an Anthropic spokesperson (Reuters). The company said it was investigating. The episode underscored what security researchers had already noted: the model’s stated capabilities were not theoretical, and containing them was harder than the rollout implied.
India and Japan represent the sharpest end of the geopolitical problem. New Delhi is engaged in bilateral discussions with both the U.S. administration and Anthropic to secure what it calls “equitable access” to Mythos for critical infrastructure protection — specifically power grids, telecommunications, and banking systems, according to a senior Indian official cited by the Economic Times. No Indian companies are part of Glasswing. Japan, per Nikkei Asia, is in a similar position: the Japanese government and the country’s banks have expressed interest in access to bolster cybersecurity, with no current pathway to get it. Both countries are watching the U.S. government simultaneously fight to restore its own Anthropic access after Pete Hegseth designated the company a national security supply chain risk in February and Trump ordered the government to cease using Anthropic technology (France24).
The White House has cited two reasons for opposing the expansion. The first is the same one Anthropic has used to justify restricting public access: that a model capable of autonomously finding and exploiting software vulnerabilities at scale represents a meaningful national security risk if it proliferates beyond controlled partners (Bloomberg). The second is more operational: that Anthropic does not have sufficient compute to serve both the government’s existing access and an expanded commercial user base of approximately 120 organizations without degrading the former (France24). That second concern touches directly on the $900 billion valuation Anthropic is reportedly pursuing: the capital raise is partly intended to secure the infrastructure needed to run Mythos at commercial scale.
The executive action reportedly in preparation would address the Pentagon’s access problem specifically, creating a pathway to work around Hegseth’s designation. If it materializes, the result would be a U.S. government that has blocked its allies from Mythos, restored its own access through a parallel track, and continues using the model through the NSA. That outcome would be difficult to distinguish from a technology monopoly — and it would arrive in the middle of a genuine shortage. Allied nations, watching the U.S. government fight both to block and to acquire the same tool, will draw their own conclusions about what AI sovereignty means in practice.
Anthropic has said it views Glasswing as a starting point rather than a final arrangement. The company has committed up to $100 million in usage credits and $4 million in direct donations to open-source security organizations (Anthropic). But the narrative has moved faster than the product rollout. The question of who controls access to the internet’s most capable vulnerability-finding tool is no longer an internal policy debate. It is a bilateral diplomatic problem involving two of the largest democracies in the world, and a signal about how Washington intends to treat frontier AI as it matures from research project to critical infrastructure.
The COCOM analogy is not perfect. The Cold War export regimes were primarily about keeping technology out of adversarial hands. What is happening with Mythos involves allied nations that share intelligence with the United States and have no history of weaponizing American technology against U.S. interests. But the structural logic tracks: when a technology is valuable enough, concentrated enough, and dangerous enough, Washington treats it as something to be controlled rather than distributed. AI has been moving in this direction since at least the semiconductor restrictions of the previous administration. Mythos has simply made the pattern harder to ignore.
India and Japan are not the only countries that will notice. South Korea, Germany, the United Kingdom — any allied nation that has spent the last decade building critical infrastructure on systems designed, built, and operated by American companies — is watching the U.S. government argue simultaneously that it cannot be trusted with a cybersecurity model and that the same model is safe enough for American intelligence agencies. The argument that frontier AI is too dangerous to share with allies but safe enough to deploy offensively is not new. It is the same argument that produced the Clipper chip, the same argument behind every encryption restriction attempt. It has never successfully kept technology contained. It has, however, reliably pushed allied nations to develop their own versions sooner than they otherwise would have.