The attackers already have cyber superpowers. OpenAI is deciding who among the defenders gets them too.
The company released GPT-5.4-Cyber on April 14 — a version of its flagship model stripped of the safety restrictions that normally prevent it from helping with offensive security tasks like finding vulnerabilities in compiled software, chaining exploits, and analyzing malware. The stated purpose is defensive: help the good guys find and fix flaws before the bad guys exploit them. The question OpenAI is not answering is whether that framing holds.
The timing is not neutral. Anthropic released Claude Mythos Preview on April 7 under Project Glasswing, a consortium that includes Amazon Web Services, Anthropic, Apple, Broadcom, Cisco, CrowdStrike, Google, JPMorganChase, the Linux Foundation, and Microsoft. One week later, OpenAI announced its own program with a functionally similar product. The competitive move is not hidden — Reuters noted it, Bruce Schneier noted it. OpenAI's post frames itself as a democratizing force making advanced security tools widely available. The product it describes — vetted partners only, iterative rollout, top capability tier restricted to the highest authentication level — is a partner program with a marketing budget.
What GPT-5.4-Cyber actually does is specific enough to evaluate. The model adds binary reverse engineering to GPT-5.4: the ability to analyze compiled software without source code, to look at a finished program and figure out how it works and where it is vulnerable. OpenAI notes in its own announcement that this is not new for adversaries, who have had tools like Ghidra for years. The company is offering a capability the other side already has, inside a gated access program the other side cannot join.
The evidence that the capability gap is narrower than the announcements suggest is in the reporting. Several security researchers told Fortune that much of what Anthropic's Mythos can do may already be achievable with smaller, cheaper, openly available models. AISLE research cited in the same article found that several of the vulnerabilities Anthropic highlighted — including bugs dating back decades — could have been detected by freely available models. Spencer Whitman, chief product officer at Gray Swan, said the hardest part of what Mythos achieved was autonomously finding vulnerabilities inside large codebases and then validating the exploits — not the detection itself. If the hardest part is the scaffold rather than the model, the moat is the pipeline.
OpenAI points to Codex Security as proof its approach produces real results. The system has contributed to over 3,000 critical- and high-severity vulnerabilities being fixed since launch. That is a concrete number. But Codex Security is a separate product running on existing models — not GPT-5.4-Cyber, which is new and not yet deployed at scale. The credited output and the new product are different things.
The governance question underneath is not rhetorical. Jonathan Iwry, a Wharton researcher who has studied AI security governance, told Fortune that the world is relying on the judgment of a handful of private actors who are not accountable to the public. OpenAI frames its tiered access as neutral and automated — identity verification replaces arbitrary decisions. The expert consensus that cheaper models already replicate these capabilities is the strongest argument that the gate is not as necessary as the company argues, and that the real moat is the access program itself.