
OpenAI Is Hiring for a Scary-Sounding Position

“This is going to be a stressful job,” OpenAI CEO Sam Altman wrote on X on Saturday, announcing the “Head of Preparedness” role at OpenAI and warning that whoever takes it will be thrown into the work almost immediately.

To earn $555,000 a year, according to OpenAI’s job ad, the Head of Preparedness must “grow, strengthen, and direct” the existing preparedness program within OpenAI’s safety systems department. This side of OpenAI builds the protections that, in theory, make OpenAI models “behave as intended in real-world settings.”

But hey, wait a minute: are they saying that OpenAI models behave as intended in real-world settings now? In 2025, ChatGPT kept showing up in legal filings and attracted hundreds of FTC complaints, including complaints that it harmed users’ mental health, and it apparently turned photos of clothed women into bikini deepfakes. Sora had its ability to make videos of figures like Martin Luther King Jr. rolled back because users were abusing the feature to make famous historical figures say basically anything.

When lawsuits over problems with OpenAI products reach court, such as the wrongful death suit filed by the family of Adam Raine, who allegedly received advice and encouragement from ChatGPT that led to his death, there is a legal argument to be made that users were misusing OpenAI’s products. In November, a filing by OpenAI’s attorneys pointed to Raine’s alleged violations of the company’s terms of use as a possible cause of his death.

Whether you buy the misuse argument or not, it’s clearly a big part of how OpenAI makes sense of what its products do out in the world. In his X post about the Head of Preparedness role, Altman admits that the company’s models can affect people’s mental health and can uncover security vulnerabilities. He says, “we’re entering a world where we need a more flexible understanding of how those skills can be misused, and how we can limit that harm to our products and the world, in a way that allows us all to enjoy greater benefits.”

After all, if the goal was to never cause any harm, the quickest way to ensure that would be to remove ChatGPT and Sora from the market altogether.

The Head of Preparedness at OpenAI, then, is the person who will thread this needle: “[o]wn OpenAI’s eventual preparedness strategy,” find ways to evaluate models’ dangerous capabilities, and design mitigations for them. The ad says this person will need to “modify the preparedness framework as new risks, capabilities, or external expectations emerge.” That can only mean finding new ways in which OpenAI products might be capable of harming people or society, then coming up with a rubric for letting those products exist anyway, while presumably showing that the risks are sufficiently blunted that OpenAI is not legally responsible for the seemingly inevitable misuse to come.

It would be a hard enough job at a company that was treading water, but OpenAI is under enormous pressure to bring in money and ship products quickly. In an interview last month, Altman stated flatly that he would take the company’s revenue from where it is now (apparently somewhere north of $13 billion a year) to $100 billion in less than two years. Altman said that “the consumer device business is going to be important,” and that “AI that can do science automatically is going to create enormous value.”

So if you’d like to oversee the mitigation design for every new version of existing OpenAI products, plus new wearable gadgets, plus platforms that don’t exist yet but are supposed to do things like automated science, all while the CEO muses about needing to pull in roughly the same annual revenue as the Walt Disney Company next year, enjoy being OpenAI’s Head of Preparedness. Just try not to expose the world to too much harm while you’re at it.
