OpenAI Seeking Executive to Study AI-Related Risks
OpenAI is seeking a new executive to study emerging risks associated with artificial intelligence. The role's responsibilities span a range of AI-related risks, from computer security to mental health.
CEO Sam Altman, in a recent post on X, acknowledged the challenges posed by AI models. These challenges include the potential impact of models on mental health and the increasing proficiency of models in identifying critical vulnerabilities in computer security.
Altman highlighted the importance of strengthening cybersecurity defenses with advanced AI capabilities while ensuring those same capabilities cannot be misused by attackers. The company is also focused on risks tied to models' biological capabilities and on ensuring the safety of systems that can self-improve.
The job listing for the Head of Preparedness role at OpenAI says the position is responsible for executing the company's Preparedness Framework, which involves tracking and preparing for frontier capabilities that introduce new risks of severe harm.
The compensation package for the role is $555,000 plus equity. OpenAI first announced its preparedness team in 2023, tasking it with studying potential catastrophic risks, from immediate threats like phishing attacks to more speculative ones like nuclear threats.
Recently, the company reassigned its previous Head of Preparedness, Aleksander Madry, to a role focused on AI reasoning. Other safety executives at OpenAI have also moved into roles outside of preparedness and safety.
OpenAI has recently updated its Preparedness Framework, stating that it may adjust its safety requirements if a competing AI lab releases a high-risk model without similar protections in place.
As Altman noted in his post, generative AI chatbots like OpenAI's ChatGPT have faced scrutiny over their impact on mental health. Lawsuits allege that ChatGPT reinforced users' delusions, deepened social isolation, and in some cases contributed to suicides. OpenAI is actively working to improve ChatGPT's ability to identify signs of emotional distress and connect users to appropriate support services.

