OpenAI is hiring a new senior leader to strengthen the safety of its artificial intelligence systems. The company is looking for a Head of Preparedness, a role focused on identifying possible risks from advanced AI models and creating systems to reduce those risks before the models are released to the public. OpenAI has described the role as critical and is offering a high salary along with equity.
According to the job listing, the Head of Preparedness will be part of OpenAI’s Safety Systems team, which is responsible for making sure powerful AI models are developed and deployed responsibly. The position is based in San Francisco and offers an annual salary of up to $555,000, along with company equity. This makes it one of the highest-paid safety roles in the AI industry.
OpenAI CEO Sam Altman shared the job opening publicly and said the role is critical as AI capabilities advance rapidly. He also noted that the job will be challenging and stressful, as the person hired will have to deal with complex safety issues from day one. The role involves reviewing advanced AI models before launch and ensuring they meet safety standards.
The Head of Preparedness will be responsible for building and leading OpenAI’s preparedness framework. This includes designing systems to test what AI models can do, identifying potential threats across different risk areas, and developing clear plans to reduce harm. The goal is to create a safety process that is practical, detailed, and scalable as models become more powerful.
The role requires deep expertise in machine learning and AI safety. Strong technical judgement and clear communication skills are also essential, as the position involves working closely with multiple teams across the company. The leader will guide how safety checks are built into fast AI development cycles.
This hiring move comes at a time when OpenAI is facing growing scrutiny. The company is dealing with lawsuits and criticism related to claims that its AI tools, including ChatGPT, may have contributed to harmful user behaviour. There have also been concerns about security weaknesses such as prompt injection attacks in AI-powered tools.
By creating this role, OpenAI appears to be taking a more proactive approach to AI safety. The company aims to understand risks in advance and prevent misuse or unintended harm. As AI becomes more powerful and widely used, positions like Head of Preparedness are expected to play a key part in shaping safer AI systems for the future.