Two private companies just decided who gets cyber superpowers.
OpenAI released GPT-5.4-Cyber to participants in its Trusted Access for Cyber program on April 14, six days after Anthropic released Claude Mythos Preview to members of Project Glasswing. Both models find and exploit software vulnerabilities. Both are restricted to approved partners. Both cite safety as the reason. Nobody elected either company to make that call.
The partner lists read like a roll call of critical infrastructure: JPMorganChase, Google, Microsoft, Apple, Cisco, CrowdStrike, Nvidia, and a dozen others through Anthropic's Glasswing coalition. Palo Alto Networks and the SANS Institute joined OpenAI's program. These organizations will get access to models that can autonomously find zero-day vulnerabilities, chain exploits, and execute multi-stage network intrusions — capabilities that, a year ago, required a team of human researchers.
The companies say they are acting responsibly. The experts they cite disagree.
Wendi Whitmore, senior vice president at Palo Alto Networks, told Security Boulevard that similar capabilities will inevitably leak or be replicated in open-source models within weeks. Rob T. Lee of the SANS Institute said the ability to find flaws in aging codebases is a fundamental feature of modern language models that cannot be easily unlearned. The safety rationale, in other words, depends on keeping a door closed that the experts closest to these systems say will not stay shut.
The question of who decides is not rhetorical. Jonathan Iwry, a Wharton researcher who has studied AI security governance, put it plainly: the world is relying on the judgment of a handful of private actors who are not accountable to the public. That is not a talking point. It is a description of a governance vacuum.
What makes the vacuum durable is the money behind it. Anthropic committed $100 million in usage credits for Mythos across Project Glasswing, along with $4 million in direct donations to open-source security organizations. OpenAI put $10 million in API credits on the table for its program. These are not small experiments. They are infrastructure investments designed to make the partner ecosystem sticky before the open-weight models catch up.
Whether they catch up is itself an open question. AISLE, an AI security company that has been running vulnerability discovery in production since mid-2025, published independent research showing that several of the specific vulnerabilities Anthropic highlighted in its Mythos announcement could be detected by smaller, freely available models, not only by the frontier model behind that $100 million credit commitment. The capability gap is real. It is not clear it justifies the price gap.
OpenAI's next flagship model, internally codenamed Spud, is expected to have comparable cyber capabilities. Sasha Baker, OpenAI's national security policy lead, said at an event that those capabilities will be rolled out through the same defender-access process Anthropic used for Mythos. The playbook is established. The next model will follow it.
What happens after that is not a technical question. Two companies have set a norm: frontier cyber AI gets gatekept behind partner programs, and the rest of the world is trusted to wait. The people who study this for a living say the capabilities will not stay behind that wall. The people who built the wall say the wall is the point.