OpenAI is hiring a Chief of Preparedness. In other words, someone whose main job is to think of all the ways AI could go terribly, terribly wrong. In a post on X, Altman himself announced the position, admitting that the rapid improvement of AI models poses “some real challenges.” The post goes on to specifically highlight the dangers posed by AI-based cyberweapons and the technology’s potential impact on people’s mental health.
The job listing states that the person in this position will be responsible for the following:
“Track and prepare for frontier capabilities that pose new risks of significant harm. You will be directly responsible for building and coordinating capability assessments, threat models, and mitigations that create a coherent, rigorous, and operationally scalable safety pipeline.”
Altman also says that in the future, this person will be responsible for implementing the company’s “preparedness framework,” safeguarding AI models that unlock “biological capabilities,” and even establishing guardrails for self-improving systems. He also notes that it will be a “stressful job,” which seems like an understatement.
