OpenAI turns the ChatGPT account into a security control
OpenAI introduced Advanced Account Security for ChatGPT and Codex accounts on April 30.
This is more than a new setting for security keys. OpenAI says ChatGPT accounts can now contain sensitive personal and professional context, and that the same login also protects Codex. For enterprise leaders, the AI account is becoming a privileged work surface, not just another SaaS password.
The facts: Advanced Account Security is an opt-in mode in ChatGPT security settings on the web. Once enabled, OpenAI requires passkeys or physical security keys and disables password-based login. Email and SMS recovery are removed. Users must rely on backup passkeys, security keys, or recovery keys instead. OpenAI Support will not be able to recover enrolled accounts.
OpenAI is also tightening session management. Users get shorter sign-in windows, alerts when a new login occurs, and better visibility into active ChatGPT and Codex sessions. A notable privacy consequence is that conversations from accounts with Advanced Account Security enabled are automatically excluded from model training.
The company is pairing the launch with Yubico through discounted YubiKey bundles, while still supporting FIDO-compliant keys and software passkeys. For Trusted Access for Cyber, the requirement is stricter: individual users accessing OpenAI's most cyber-capable and permissive models must enable Advanced Account Security from June 1, 2026, unless their organization attests that it has phishing-resistant SSO.
Assessment: this shows where OpenAI sees the operational risk moving. The issue is not only the model. It is the account around the model: prompt history, uploaded files, source code, agents, connected tools, and API access. As AI tools gain more context and more ability to act, account takeover becomes a business risk.
The CIO implication is concrete. Identify who uses ChatGPT, Codex, and similar tools for source code, board material, customer data, or security work. Require SSO, phishing-resistant MFA, and managed recovery for high-risk users. Do not make hardware keys an individual side project; connect them to IAM, device management, offboarding, and lost-key procedures.
For boards, the lesson is straightforward: AI security is no longer only about model behavior and data sharing. Account and access governance is becoming part of AI governance.
📬 Did you like this one?
AI news for leaders. Curated by a CIO who builds it himself. Daily in your inbox.