Inside OpenAI’s Raid on Thinking Machines Lab

If someone ever makes an HBO Max series about the field of AI, this week's events would make quite the episode.
On Wednesday, OpenAI's CEO of Applications, Fidji Simo, announced that the company has rehired Barret Zoph and Luke Metz, cofounders of Mira Murati's AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI in late 2024.
We reported last night on two competing accounts of what led to the departures, and we have since learned new information.
A source with direct knowledge says that the leadership of Thinking Machines believes Zoph committed an incident of misconduct while at the company last year. That incident broke Murati's trust, the source said, and disrupted the working relationship between the two. The source also alleged that Murati fired Zoph on Wednesday, before she knew he was going to OpenAI, because of what the company says were issues that arose after the allegations of misconduct. When the company found out that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph did not respond to several requests for comment from WIRED.)
Meanwhile, in a Wednesday memo to employees, Simo said the two sides had been in talks for weeks and that Zoph told Murati he was considering leaving Thinking Machines on Monday, the day before he was fired. Simo also told staff that OpenAI does not share Thinking Machines' concerns about Zoph's behavior.
Alongside Zoph and Metz, Sam Schoenholz, another former OpenAI researcher who worked at Thinking Machines, is joining the maker of ChatGPT, according to Simo's announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Tech reporter Alex Heath was the first to report on the additional hires.
A separate source familiar with the matter pushed back on the idea that the recent personnel changes were entirely related to Zoph. “This was part of a long conversation at Thinking Machines. There were disagreements about what the company wanted to build—it was about the product, the technology, and the future.”
Thinking Machines Lab and OpenAI declined to comment.
In the wake of these events, we’ve been hearing from several researchers at leading AI labs who say they’re fed up with the ongoing drama in their industry. This particular incident is reminiscent of OpenAI’s firing of Sam Altman in 2023, known within OpenAI as “the blip.” Murati, the company’s CTO at the time, played a major role in that event, according to a report by The Wall Street Journal.
In the years since Altman’s firing, the drama in the AI industry has continued, with the departures of key founders from several major AI labs, including Igor Babuschkin of xAI, Daniel Gross of Safe Superintelligence, and Meta’s Yann LeCun (he co-founded Facebook’s AI lab, FAIR, after all).
Some might say the drama is fitting for a fledgling industry whose spending is contributing to America’s GDP growth. And if you buy into the idea that one of these researchers might come up with a few breakthroughs on the road to AGI, it’s probably worth tracking where they’re going.
That said, many researchers entered the field well before the success of ChatGPT and seem surprised that their industry has become the source of such ongoing drama.
As long as researchers can continue to raise multibillion-dollar seed rounds in a flash, we predict that the AI industry’s personnel shake-ups will continue apace. HBO Max writers, take note.
How AI Labs Train Agents to Do Your Job
People in Silicon Valley have mused about AI taking jobs away for decades. In the past few months, however, efforts to get AI to perform economically valuable work have become more concrete.
AI labs are getting smarter about the data they use to create AI agents. Last week, WIRED reported that OpenAI was asking third-party contractors on Handshake to upload examples of their real work from previous jobs to help train OpenAI’s agents. Contractors are asked to check these documents for any confidential data and personally identifiable information. While it’s possible that some company secrets or names could slip through, that’s likely not what OpenAI is after (although the company could be in big trouble if that happens, experts say).


