OpenAI wants governments inside its security infrastructure. The evidence it offers for why they should trust it has not been independently checked.
This week, OpenAI began briefing members of the Five Eyes intelligence alliance as part of vetting governments for access to GPT-5.4-Cyber, the company's AI system for finding and fixing software vulnerabilities before attackers exploit them. The briefings, confirmed by Reuters on Tuesday, are the first concrete signal that OpenAI is actively pursuing a formal government customer for its cyber model. No agency has publicly committed, and OpenAI declined to say which governments attended.
The number OpenAI has used to make its case for that trust is precise: Codex Security, the underlying system, has contributed to fixing more than 3,000 critical and high-severity software vulnerabilities since launch, according to the company's blog. The figure appears in every press release, every blog post, and every reporter's notebook since the product launched eight days ago. It has not been independently verified against the National Vulnerability Database, the canonical public record for tracked security flaws. Security Magazine reported that AI systems are discovering vulnerabilities faster than the infrastructure to patch them can handle, a structural mismatch that could widen exposure even as detection improves. What nobody has answered in public: whether 3,000 fixes represents meaningful coverage for a system operating at scale, or a small fraction of a much larger problem the model is not reaching.
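For scale, one side of that comparison is checkable: the NVD exposes a public CVE API, even though it cannot attribute fixes to any particular tool. Below is a minimal sketch, assuming Python with the widely used requests library and an illustrative three-month window (neither drawn from anything OpenAI has published), of counting how many critical and high-severity CVEs were published to the NVD in that window, the kind of denominator a 3,000-fix claim would need to be weighed against.

```python
# Minimal sketch: baseline count of CRITICAL and HIGH CVEs published to the NVD
# in a given window, via NVD's public CVE API 2.0. The date window below is
# illustrative only; the count reflects all tracked flaws, not which ones any
# particular tool helped find or fix.
import requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def count_cves(severity: str, start: str, end: str) -> int:
    """Return the total number of CVEs at `severity` published between start and end."""
    params = {
        "cvssV3Severity": severity,   # one severity per request (API constraint)
        "pubStartDate": start,        # ISO-8601, e.g. "2025-01-01T00:00:00.000"
        "pubEndDate": end,            # window may span at most 120 days
        "resultsPerPage": 1,          # we only need the totalResults field
    }
    resp = requests.get(NVD_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()["totalResults"]

if __name__ == "__main__":
    start, end = "2025-01-01T00:00:00.000", "2025-03-31T23:59:59.999"
    total = sum(count_cves(sev, start, end) for sev in ("CRITICAL", "HIGH"))
    print(f"Critical/high CVEs published {start[:10]} to {end[:10]}: {total}")
```

The NVD caps any single query at a 120-day span and rate-limits unauthenticated requests, so a longer comparison would need multiple windows or an API key; none of that changes the basic point that the overall count is public while the list of flaws behind the 3,000 figure is not.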
Trey Ford, a former security executive at Bugcrowd who has tracked the AI-cybersecurity convergence, said the government briefings are the more significant development regardless of the vulnerability count. "Both labs are racing to define who gets access to powerful models, and governments are the prize," Ford said. "Whoever gets government sign-off first sets the terms for everyone else." Anthropic took a different path with its competing Mythos system, briefing senior U.S. officials, including Treasury Secretary Scott Bessent, and the Cybersecurity and Infrastructure Security Agency before deciding not to release the model publicly. Google has made similar moves within its own security products.
OpenAI, by contrast, is attempting a commercial product with government vetting built in. Its Trusted Access Program, the gating system that controls which organizations can use its most powerful models, already lists Bank of America, Citi, CrowdStrike, Nvidia, Oracle, Zscaler, and 11 other firms, according to Forbes. No U.S. federal agency appears on that list. This week's briefings are an attempt to change that.
The question all three companies are racing to answer, whether frontier AI in cybersecurity is a product to be sold or a public utility to be governed, is not settled. What is new is the specificity of OpenAI's offer: formal access to national security networks, in exchange for the kind of endorsement that competitors cannot easily replicate. Whether any government says yes is the thing to watch next. OpenAI declined to comment on whether any agency has committed to the program.