OpenAI Begins Search for New Head of Preparedness

Editorial Staff

December 29, 2025

OpenAI has launched a search for a new Head of Preparedness, an executive role focused on identifying and managing emerging risks linked to advanced artificial intelligence systems. The position will oversee research and strategy on risks in areas such as cybersecurity and mental health, along with other potential harms that could arise as AI models grow more capable and more widely used.

In a post shared on X, OpenAI CEO Sam Altman said the company’s latest AI models are beginning to present serious challenges. He highlighted concerns about AI’s possible effects on mental health, as well as about systems that have become highly skilled at computer security and can discover critical software vulnerabilities. Altman emphasized the need to balance empowering defenders with advanced tools against the risk of attackers using those same capabilities to cause harm.

Altman also pointed to broader risks related to biological research and self-improving AI systems, saying the company needs stronger confidence in the safety of technologies that could rapidly evolve. He encouraged candidates who want to help shape how powerful AI tools are released and controlled to apply for the role.

According to OpenAI’s job listing, the Head of Preparedness will be responsible for executing the company’s Preparedness Framework, which outlines how OpenAI tracks frontier AI capabilities and prepares for risks that could lead to severe harm if not properly managed. The role carries a listed salary of $555,000, plus equity.

OpenAI first introduced its preparedness team in 2023, saying the group would study catastrophic risks ranging from near-term threats such as phishing and cyberattacks to more speculative dangers, including large-scale geopolitical or nuclear risks. The team has seen significant turnover over the past year, however: Aleksander Madry, who previously led preparedness efforts, was reassigned to focus on AI reasoning, and several other safety leaders have either left the company or moved to different roles.

The company recently updated its Preparedness Framework, noting that it may adjust its safety standards if rival AI labs release high-risk models without similar protections. This change reflects growing competitive pressure in the AI industry.