For anyone fearing that their ChatGPT or Codex account could be targeted by attackers, OpenAI announced on Thursday that it is adding an optional new level of account protection. Called Advanced Account Security, the feature enforces tight access controls, making account takeover attacks far more difficult.
Such measures are not a new idea in account security. Google, for example, has offered its Advanced Protection security tier for almost a decade. But as mainstream AI services spread rapidly around the globe, there is an urgent need to put basic safeguards like these in place. OpenAI says the launch is part of a broader cybersecurity strategy announced earlier this month.
“People are turning to AI for highly personal questions and increasingly high-stakes work,” the company said Thursday in a blog post. “Over time, a ChatGPT account can contain sensitive personal and professional context and be at the center of connected tools and workflows. For some people, such as journalists, elected officials, political dissidents, researchers and those particularly concerned about security, the stakes are even higher.”
People who enable Advanced Account Security will no longer be able to use regular passwords on their accounts. Instead, they must register two physical security keys or passkeys, which significantly reduces the risk of successful phishing attacks. The feature also eliminates email- and text-message-based account recovery routes; users must instead rely on recovery keys, backup keys, or physical security keys. OpenAI says it has partnered with Yubico to offer discounted YubiKey bundles to Advanced Account Security users.
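Why are security keys and passkeys so much more phishing-resistant than passwords? The authenticator signs the server's login challenge together with the origin (domain) it was actually used on, so credentials captured on a look-alike site simply fail verification. The sketch below is purely illustrative, not OpenAI's implementation: it stands in for the real public-key signature with an HMAC, and the domain names are hypothetical.

```python
# Illustrative sketch of origin-bound challenge signing (the core idea behind
# WebAuthn/FIDO2 phishing resistance). A real authenticator uses a per-site
# key pair; HMAC with a shared stand-in secret is used here only to keep the
# example self-contained.
import hashlib
import hmac

DEVICE_SECRET = b"stand-in-for-authenticator-private-key"

def sign_assertion(challenge: bytes, origin: str) -> bytes:
    # The signature covers both the server challenge and the origin the
    # user actually authenticated on.
    msg = challenge + hashlib.sha256(origin.encode()).digest()
    return hmac.new(DEVICE_SECRET, msg, hashlib.sha256).digest()

def verify(challenge: bytes, expected_origin: str, assertion: bytes) -> bool:
    # The server only accepts assertions bound to its own origin.
    expected = sign_assertion(challenge, expected_origin)
    return hmac.compare_digest(expected, assertion)

challenge = b"random-server-challenge"
legit = sign_assertion(challenge, "https://chatgpt.com")
# An assertion produced on a phishing domain is bound to the wrong origin:
phished = sign_assertion(challenge, "https://chatgpt-login.example")

print(verify(challenge, "https://chatgpt.com", legit))    # True
print(verify(challenge, "https://chatgpt.com", phished))  # False
```

Because the origin is baked into what gets signed, a stolen assertion is useless anywhere but the real site, which is why this class of authentication blocks the credential-phishing attacks that defeat passwords.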
Most importantly, once a user enables Advanced Account Security, they can no longer turn to OpenAI's support team to recover their account, because support staff no longer have access to or control over any of the recovery options. That means attackers cannot take over accounts by socially engineering support channels.
Advanced Account Security also enforces shorter session lifetimes, requiring users to sign in again on each device more frequently. It generates notifications whenever someone logs into a protected account, pointing to a dashboard that shows active ChatGPT and Codex sessions. Additionally, while OpenAI lets every user opt out of having their ChatGPT conversations used for model training, that opt-out is enabled by default for Advanced Account Security users.
Members of OpenAI’s Trusted Access for Cyber program, which gives cybersecurity professionals, researchers and others early access to new models, will be required to enable Advanced Account Security, or to attest that they have implemented phishing-resistant authentication through enterprise single sign-on, starting June 1.
