On Wednesday, OpenAI Applications CEO Fidji Simo announced that the company had rehired Barret Zoph and Luke Metz, co-founders of Mira Murati’s AI startup Thinking Machines Lab. Zoph and Metz left OpenAI at the end of 2024.
Last night we reported on the two competing narratives that have formed around the reasons for the departures, and we have since learned new details.
A source with direct knowledge said Thinking Machines management believed Zoph had committed serious misconduct at the company last year. The source said the incident broke Murati’s trust and disrupted the pair’s working relationship. The source also claimed that Murati fired Zoph on Wednesday – before it was known that he was moving to OpenAI – over what the company said were problems that arose after the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he was sharing confidential information with competitors. (Zoph did not respond to several requests for comment from WIRED.)
Meanwhile, in a Wednesday memo to employees, Simo said the recruitment effort had been under way for weeks and that Zoph told Murati on Monday – before he was fired – that he was considering leaving Thinking Machines. Simo also told employees that OpenAI did not share Thinking Machines’ concerns about Zoph’s conduct.
According to Simo’s announcement, Sam Schoenholz, another former OpenAI researcher who worked at Thinking Machines, is rejoining the ChatGPT maker alongside Zoph and Metz. At least two more Thinking Machines employees will join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath first reported the additions.
Another source familiar with the matter rejected the notion that the recent personnel changes were entirely about Zoph. “It was part of a long discussion at Thinking Machines,” the source said. “The discussions and disagreements were about what the company wanted to build – it was about the product, the technology, and the future.”
Thinking Machines Lab and OpenAI declined to comment.
In the wake of these events, several researchers at leading artificial intelligence labs said they were exhausted by the ongoing drama in their industry. This particular incident is reminiscent of Sam Altman’s brief removal from OpenAI in 2023, known inside OpenAI as “the Blip.” Murati played a key role in that episode as the company’s then-chief technology officer, according to reporting by The Wall Street Journal.
In the years since Altman’s ouster, drama in the AI industry has continued, with co-founders departing several major AI labs, including Igor Babuschkin of xAI, Daniel Gross of Safe Superintelligence, and Yann LeCun of Meta (who, after all, co-founded FAIR, Facebook’s long-running AI lab).
Some might argue that the drama is justified for a nascent industry whose spending is contributing to America’s GDP growth. And if you believe that one of these researchers could deliver a breakthrough on the road to AGI, it’s probably worth following where they go.
That said, many researchers began their work before ChatGPT’s groundbreaking success and seem surprised that their industry is now a source of near-constant scrutiny.
As long as researchers can raise billion-dollar seed rounds on a whim, our guess is that power shifts in the AI industry will keep coming. Take notes, HBO Max showrunners.
How AI labs train agents to do your job
Silicon Valley has speculated for decades about artificial intelligence replacing jobs. But over the last few months, efforts to actually get AI to do economically valuable work have become much more sophisticated.
AI labs are getting more sophisticated about the data they use to build AI agents. Last week, WIRED reported that OpenAI is asking third-party contractors from Handshake to submit examples of real work from their previous jobs to evaluate OpenAI’s agents. Contractors are asked to strip all confidential and personal information from these documents. While it’s possible that some corporate secrets or names might slip through, that’s probably not OpenAI’s intention (although the company could get into serious trouble if it happens, experts say).
