OpenAI Will Lock Your Account So Tightly It Cannot Help You Back In
OpenAI just told its most at-risk users something nobody in the AI industry has said plainly before: we will lock your account down so tightly that if you lose your key, we ourselves cannot help you get back in.
The company launched Advanced Account Security on Thursday, an opt-in hardening layer for ChatGPT that requires a physical security key or passkey to sign in, shortens session lengths, alerts users to new logins, and automatically excludes enrolled accounts from model training. For high-risk users like journalists, political dissidents, elected officials, and researchers handling sensitive work, the appeal is real. The catch is absolute: if you lose your security key and your backup passkeys, OpenAI support cannot recover your account. There is no email reset. There is no SMS fallback. There is no customer service lifeline.
The tradeoff is not new in the security world. Google launched its Advanced Protection Program in October 2017, and the cybersecurity industry has since treated it as the appropriate baseline for anyone facing elevated targeting. OpenAI is arriving nearly a decade late to the same conclusion, and it is doing so at a moment when AI accounts hold far more sensitive context than a search history ever did. A ChatGPT account can now hold years of personal conversations, proprietary code, business strategy, legal research, and medical questions. For some users, losing control of that account is not an inconvenience. It is a catastrophe.
The mechanics are specific: enrollment disables email and SMS recovery entirely, leaving backup passkeys, security keys, and recovery keys as the only paths back in. OpenAI has partnered with Yubico to offer preferred pricing on YubiKey bundles, and any FIDO-compliant security key or passkey works — YubiKey is not required. The setting covers ChatGPT and Codex under the same login, so a single enrollment carries across both products. Conversations from enrolled accounts are automatically excluded from model training, a meaningful convenience for users who previously had to remember to opt out manually.
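None of this is exotic on the client side. As a rough sketch of what FIDO enrollment looks like in any modern browser (the rpId, user fields, and challenge handling below are generic placeholders, not OpenAI's code), registering a security key or passkey comes down to a single WebAuthn call:

```ts
// Generic sketch of FIDO2/WebAuthn credential registration.
// All names here are hypothetical; real flows fetch the challenge
// from the server and verify the attestation response there.

async function enrollSecurityKey(): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32));

  const credential = await navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { id: "example.com", name: "Example Service" },
      user: {
        id: new TextEncoder().encode("user-1234"),
        name: "user@example.com",
        displayName: "Example User",
      },
      // ES256 (-7) and RS256 (-257) cover most FIDO-compliant authenticators.
      pubKeyCredParams: [
        { type: "public-key", alg: -7 },
        { type: "public-key", alg: -257 },
      ],
      authenticatorSelection: {
        // "cross-platform" targets roaming keys like a YubiKey; omit it
        // to also allow platform passkeys (Face ID, Windows Hello).
        authenticatorAttachment: "cross-platform",
        userVerification: "preferred",
        residentKey: "preferred",
      },
      timeout: 60_000,
    },
  });

  if (!credential) throw new Error("Enrollment was cancelled");
  // The server stores only the credential ID and public key.
  console.log("Registered credential:", (credential as PublicKeyCredential).id);
}
```

The private key never leaves the authenticator; the server holds only a credential ID and a public key. That is exactly why there is nothing for support to reset when the key is gone.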
The FIDO2/WebAuthn standard that underpins this is not experimental technology. Yubico, the security key maker that helped co-author the FIDO2 and WebAuthn specifications, called OpenAI's adoption a sign the industry is "crossing the threshold from password-optional to phishing-resistant" at scale. The question is whether the implementation is a substantive deployment or a checkbox. Yubico's involvement as a preferred partner, and the fact that OpenAI is requiring Advanced Account Security for its highest-risk API and ChatGPT users, suggest this is more than a compliance gesture. But the bar set by Google's program since 2017 is high, and independent auditors have not yet assessed OpenAI's implementation.
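What makes the standard phishing-resistant is visible in the sign-in half of the protocol. In a generic sketch (again with placeholder names, not OpenAI's implementation), the browser binds every assertion to the origin that requested it:

```ts
// Generic sketch of a WebAuthn sign-in (assertion). The property to
// notice: the signature covers the requesting origin, so a credential
// "phished" on a lookalike domain never produces a usable signature.

async function signInWithSecurityKey(credentialId: Uint8Array): Promise<void> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice

  const assertion = await navigator.credentials.get({
    publicKey: {
      challenge,
      // The browser refuses to exercise this credential on any other origin.
      rpId: "example.com",
      allowCredentials: [{ type: "public-key", id: credentialId }],
      userVerification: "preferred",
      timeout: 60_000,
    },
  });

  if (!assertion) throw new Error("Sign-in was cancelled");
  const response = (assertion as PublicKeyCredential)
    .response as AuthenticatorAssertionResponse;
  // The server verifies this signature against the stored public key,
  // checking that it covers its own origin and its own challenge.
  console.log("Assertion signature bytes:", new Uint8Array(response.signature).length);
}
```

A lookalike domain cannot harvest anything replayable, because the key only ever signs over the real origin. That property, not the hardware itself, is what separates a security key from an SMS code.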
The June 1 deadline is the detail that turns a product update into a policy signal. Members of OpenAI's Trusted Access for Cyber program must enable Advanced Account Security by June 1, 2026, or submit an alternative phishing-resistant single sign-on attestation. That means OpenAI is treating this not as an optional hardening layer but as a condition of access for the users most likely to be targeted by state-sponsored actors, commercial espionage, or sophisticated personal attacks. It also means OpenAI is already drawing the line it has asked those users to live behind: we will not be able to help you if you lose your key.
The consequences of not acting are not abstract. Researchers tracking credential theft have found OpenAI authentication credentials among the data compiled by infostealer malware, the commodity attack tool that has compromised hundreds of millions of machines globally. In 2025, a third-party analytics provider breach exposed names and email addresses linked to ChatGPT accounts. For a targeted researcher or journalist, a compromised AI account is not a password reset away from resolved. It is a months-long exposure of everything they asked the model to help them think through.
The second-order stakes are industry-wide. As AI systems have moved from research demos into production workflows holding sensitive corporate and personal data, the retrofit problem has become visible. Legacy account security, built for casual consumer use around email recovery and SMS fallbacks, is structurally mismatched to the threat model that applies when a hostile actor might spend weeks patiently compromising a journalist's or researcher's account to extract months of AI conversations. FIDO2/WebAuthn has been available for years. OpenAI is not pioneering. It is catching up to a standard the security community has considered table stakes for high-risk users since Google's Advanced Protection Program made it the reference point in 2017.
What took so long is the question the industry itself is asking. For years, AI labs optimized for growth and capability. Account security was an afterthought, managed by the same mass-market infrastructure used for social media and streaming accounts. The change is that the consequences of that afterthought are now legible: high-profile compromises of AI accounts carry real operational risk for the companies running them, not just for the individual users. OpenAI's move suggests the industry has started pricing that risk.