Another of OpenAI’s lead safety researchers, Lilian Weng, announced on Friday that she was leaving the startup. Weng has served as vice president of research and safety since August and was previously head of OpenAI’s safety systems team.
In a post on X, Weng said that “after 7 years at OpenAI, I feel ready to reset and discover something new.” Weng said her last day would be November 15, but did not specify where she would go next.
“I have made the extremely difficult decision to leave OpenAI,” Weng said in the post. “Looking at what we have achieved, I am very proud of all the members of the Safety Systems team and I have great confidence that the team will continue to grow.”
Weng’s departure is the latest in a long string of AI safety researchers, policy researchers and other executives who have left the company in the past year, with several of them accusing OpenAI of prioritizing commercial products over AI safety. Weng joins Ilya Sutskever and Jan Leike – leaders of OpenAI’s now-disbanded Superalignment team, which tried to develop methods to control superintelligent AI systems – who also left the startup this year to work on AI safety elsewhere.
According to her LinkedIn, Weng joined OpenAI in 2018, working on the startup’s robotics team that ultimately built a robotic hand that could solve a Rubik’s Cube – a task that took two years to achieve, according to her post.
As OpenAI began to focus more on the GPT paradigm, Weng did so as well. The researcher began helping build the startup’s applied artificial intelligence research team in 2021. After the launch of GPT-4, Weng was tasked in 2023 with building a dedicated team to develop safety systems for the startup. Today, OpenAI’s safety systems unit employs more than 80 scientists, researchers and policy experts, according to Weng’s post.
That’s a lot of people working on AI safety, but many have expressed concerns about OpenAI’s focus on safety as it tries to build increasingly powerful AI systems. Miles Brundage, a longtime policy researcher, left the startup in October and announced that OpenAI was disbanding its AGI readiness team, which he had advised. On the same day, The New York Times profiled former OpenAI researcher Suchir Balaji, who said he left OpenAI because he believed the startup’s technology would do more harm than good to society.
OpenAI tells TechCrunch that executives and safety researchers are working to replace Weng.
“We deeply appreciate Lilian’s contributions to groundbreaking safety research and building rigorous technical safeguards,” an OpenAI spokesperson said in an emailed statement. “We are confident that the Safety Systems team will continue to play a critical role in ensuring the safety and reliability of our systems, serving hundreds of millions of people around the world.”
Other executives who have left OpenAI in recent months include CTO Mira Murati, chief research officer Bob McGrew and vice president of research Barret Zoph. In August, prominent researcher Andrej Karpathy and co-founder John Schulman also announced they were leaving the startup. Some of these people, including Leike and Schulman, left to join OpenAI’s rival company, Anthropic, while others started their own ventures.