OpenAI is hiring a ‘head of preparedness’ with a $555,000 salary to mitigate AI dangers that CEO Sam Altman warns will be ‘stressful’
OpenAI is looking for a new employee to help address the growing risks of artificial intelligence, and the technology company is willing to spend more than half a million dollars to fill the position.
OpenAI is hiring a “head of preparedness” to mitigate harms associated with the technology, such as risks to user mental health and cybersecurity, CEO Sam Altman wrote in an X post on Saturday. The position will pay $555,000 annually, plus equity, according to the job listing.
“This will be a stressful job and you will jump into the deep end almost immediately,” Altman said.
OpenAI’s push to hire a safety executive comes amid growing corporate concern about AI’s risks to operations and reputation. A November analysis of annual Securities and Exchange Commission filings by financial data and analytics firm AlphaSense found that in the first 11 months of the year, 418 companies worth at least $1 billion cited AI-related reputational damage among their risk factors. These reputational risks include AI data sets that expose biased information or compromise security. Mentions of AI-related reputational damage have increased 46% since 2024, according to the analysis.
“The models are improving rapidly and are now capable of doing many great things, but they are also starting to present some real challenges,” Altman said in a social media post.
He added: “If you would like to help the world learn how to empower cybersecurity defenders with cutting-edge capabilities while ensuring attackers cannot use them to do harm, ideally by making all systems more secure, and similarly for how to unlock biological capabilities and even gain confidence in the safety of operating systems that can self-improve, please consider applying.”
OpenAI’s former head of preparedness, Aleksander Madry, was reassigned last year to a role focused on AI reasoning, with AI safety remaining a relevant part of the job.
OpenAI’s efforts to address AI risks
Founded in 2015 as a nonprofit with the goal of using AI to improve and benefit humanity, OpenAI has struggled, in the eyes of some of its former leaders, to prioritize its commitment to developing safe technology. The company’s former vice president of research, Dario Amodei, along with his sister Daniela Amodei and several other researchers, left OpenAI in 2020, partly over concerns that the company was prioritizing commercial success over safety. Amodei founded Anthropic the following year.
OpenAI has faced several wrongful death lawsuits this year alleging that ChatGPT encouraged users’ delusions and claiming that conversations with the bot were linked to some users’ suicides. A New York Times investigation published in November found nearly 50 cases of ChatGPT users experiencing mental health crises while speaking with the bot.
OpenAI said last August that its safety features may “degrade” after long conversations between users and ChatGPT, but the company has since made changes to improve how its models interact with users. It created an eight-person council earlier this year to advise the company on guardrails to support users’ well-being, and updated ChatGPT to respond better in sensitive conversations and to surface crisis hotlines more readily. At the beginning of the month, the company announced grants to fund research into the intersection of artificial intelligence and mental health.
The company has also acknowledged the need to strengthen safety measures, saying in a blog post this month that some of its upcoming models could pose a “high” cybersecurity risk as artificial intelligence advances rapidly. It is taking steps to mitigate these risks, such as training models not to respond to requests that compromise cybersecurity and improving its monitoring systems.
“We have a solid basis for measuring growing capabilities,” Altman wrote on Saturday. “But we are entering a world where we need to more accurately understand and measure how these capabilities are being abused, and how we can reduce those negative aspects in our products and in the world, in a way that allows us all to enjoy the tremendous benefits.”
This story originally appeared on Fortune.com