OpenAI is building a cyber defense tool. It won't say who qualifies to use it.
OpenAI built its most permissive cyber AI ever and is quietly deciding which security researchers get to use it — without publishing the criteria.

OpenAI has released GPT-5.4-Cyber, a cyber defense model with significantly lower refusal boundaries and binary reverse engineering capabilities. The model lets security researchers analyze compiled code and reverse engineer software for vulnerabilities, tasks most AI systems decline on safety grounds. Access is controlled through the proprietary Trusted Access Program, which has no public application process, no published eligibility criteria, and no appeal mechanism for rejected applicants, raising concerns that OpenAI is acting as an unregulated gatekeeper for advanced offensive-security tooling. European regulators (the ECB, the Bank of England, and German supervisors) are actively examining risks from both this model and Anthropic's comparable Mythos, illustrating how frontier cybersecurity AI is outpacing governance frameworks.
- GPT-5.4-Cyber's lower refusal boundaries enable analysis of malicious compiled code and binary reverse engineering that standard AI systems refuse.
- U.S. federal agencies were briefed on the model but cannot use it; Five Eyes partners begin receiving briefings this week.
- The Trusted Access Program has no public application process, no disclosed eligibility requirements, and no appeal process for denied applicants: approved users sign NDAs, while rejected applicants receive no explanation.
