In newly released testimony in Elon Musk’s case against OpenAI, the technology executive attacked OpenAI’s safety record, claiming that his xAI company has a better approach to safety. He even went as far as to say that “Nobody committed suicide because of Grok, but apparently it happened because of ChatGPT.”
The comment came in response to questions about a public letter Musk signed in March 2023. In it, he called on artificial intelligence labs to pause for at least six months the development of AI systems more powerful than GPT-4, OpenAI’s flagship model at the time. The letter, signed by more than 1,100 people, including many AI experts, argued that AI labs lack adequate planning and management because they are locked in an “out-of-control race to develop and deploy increasingly powerful digital minds that no one – not even their creators – can understand, predict or reliably control.”
A transcript of Musk’s video testimony, which took place in September, was made public this week, ahead of an expected jury trial next month.
The lawsuit against OpenAI focuses on the company’s transition from a nonprofit artificial intelligence research lab to a for-profit company, which Musk claims violated its founding agreements. As part of his argument, Musk claims that OpenAI’s commercial structure could threaten AI safety because it would prioritize speed, scale and revenue over safety concerns.
However, since this testimony was recorded, xAI has been dealing with safety issues of its own. Last month, Musk’s X social network was flooded with nonconsensual nude images generated by xAI’s Grok, some of which featured minors. The California Attorney General’s office has opened an investigation into the matter, the EU is conducting its own investigation, and other governments have also taken action by imposing restrictions and bans.
In the newly filed testimony, Musk claimed he signed the AI safety letter because it “seemed like a good idea,” not because he was about to launch an AI company to compete with OpenAI.
“I signed it, like many people, to urge caution in the development of artificial intelligence,” Musk said. “I just wanted… AI safety to be prioritized.”
Musk also responded to other questions in the testimony, including about artificial general intelligence, or AGI – the concept of artificial intelligence that can match or exceed human reasoning on a wide range of tasks – stating that “it’s risky.” He also confirmed that he was “wrong” about his alleged $100 million donation to OpenAI; according to the second amended complaint in the case, the actual amount is closer to $44.8 million.
He also recalled why OpenAI was founded, which in his view was due to “increasing concern about the danger posed by Google as a monopoly in AI,” adding that his conversations with Google co-founder Larry Page were “troubling because he didn’t seem to take AI safety seriously.” Musk claimed that OpenAI was created as a counterweight to this threat.
