OpenAI and Anthropic Are Writing Cyber Access Rules They Won't Publish
OpenAI's enterprise application form for its new cyber model asks companies to certify they will not use it for offensive operations. The form does not say what "offensive" means. The Pentagon defines the term more narrowly than the public typically hears, and that gap is where the story lives: a defense contractor can run an operation that counts as defensive under Pentagon doctrine while OpenAI, reading its own form, would call the same operation prohibited.
That contrast is the new pressure point in GPT-5.5-Cyber's restricted rollout. According to The Verge, the model will go only to a select group of trusted defenders rather than the general public. The enterprise application form requires applicants to confirm they will not use trusted access for offensive cyber operations. What OpenAI does not publish is the definition of "offensive," the threshold for approval, or the criteria it uses to separate legitimate defense from prohibited activity.
That ambiguity is not a technicality. The Trusted Access for Cyber post describes GPT-5.4-Cyber, the earlier model in the same program, as capable of penetration testing, vulnerability identification and exploitation, and malware reverse engineering. In practice, the technical steps a security team uses to find and test a weakness are the same steps an attacker would use to exploit it. That overlap makes the admission rules load-bearing. If a lab is going to gate access to a model that can help users probe real systems, the definition of prohibited use is not a compliance footnote. It is the product policy.
Both labs are now making these decisions privately. Anthropic has said Mythos access runs through its Glasswing partner program but has not published the criteria that separate a qualified security organization from everyone else. That leaves both companies in the same position: deciding behind closed doors which companies, agencies, and contractors may use models capable of finding software flaws and helping exploit them before most targets can patch them.
The practical argument for secrecy has real force. OpenAI says in its Trusted Access for Cyber post that it is scaling the program to thousands of verified individual defenders and hundreds of teams while consulting with the U.S. government. OpenAI also briefed roughly 50 federal cyber practitioners in Washington on April 22, according to Axios. Treating cyber access as a live security problem rather than a standard software rollout reasonably calls for some discretion.
But restricted access has already shown its limits. TechCrunch reported on April 21, citing Bloomberg, that an unauthorized group gained access to Anthropic's Mythos through a third-party vendor and used it regularly. Anthropic confirmed it was investigating. The official gates did not fully contain the model. They just made the side doors more valuable.
The White House has made clear it views these access decisions as government business, not purely private ones. The Wall Street Journal reported that Anthropic wanted to expand Mythos access from roughly 50 organizations to about 120, and that the White House opposed the plan. That turns the access question into something stranger than a product launch or a safety measure. A private lab proposes a guest list for frontier cyber capability, government officials object, and the public still does not get to see the admission rubric.
Sam Altman made the contradiction unavoidable. TechCrunch reported that he mocked Anthropic's Mythos restrictions on April 21 as "fear-based marketing." Nine days later, OpenAI adopted the same basic restricted-access structure for GPT-5.5-Cyber. The contradiction is real and documented. But the more durable problem is that both labs now agree on the underlying premise: these models are capable enough that broad public release feels dangerous, yet neither lab will tell the public exactly how it decides who counts as safe.
The specific new fact is the certification language in the enterprise form itself. By requiring applicants to confirm they will not engage in offensive cyber operations without defining the term, OpenAI has created a standard it reserves the right to interpret privately. For a capability that can move from security research to active system probing, that interpretive discretion is not a minor loophole. It is the mechanism through which the broader access problem expresses itself.
That is the pressure point to watch. Not whether Altman contradicted himself within nine days. Whether either lab, or any government leaning on one, is willing to publish a definition the rest of the market can actually inspect.