During Thursday’s testimony in federal court, Elon Musk appeared to indicate that his artificial intelligence lab may have used OpenAI models to train its own xAI models. He touched on the topic while on the witness stand, answering questions posed by OpenAI’s lawyer as part of his ongoing legal battle with the maker of ChatGPT.
Here’s the exchange as best WIRED could capture it:
OpenAI lawyer William Savitt: Do you know what distillation is?
Musk: This means using one AI model to train another AI model.
Savitt: Did xAI do this to OpenAI?
Musk: Basically all AI companies [do that].
Savitt: So that’s it.
Musk: Partly.
Distillation is a technique in which a smaller AI model is trained to mimic the behavior of a larger, more capable model, making the smaller model cheaper and faster to run while retaining much of the larger model’s performance.
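In broad strokes, the technique works by having the student model match the teacher’s "softened" output probabilities rather than hard labels. The following is a minimal sketch of that core objective in plain Python with NumPy; the function names and numbers are illustrative only, not any lab’s actual training pipeline.

```python
import numpy as np

def softmax(logits, T=1.0):
    # Temperature-scaled softmax; a higher T produces softer,
    # more informative probability distributions.
    z = np.asarray(logits, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(student_logits, teacher_logits, T=2.0):
    # KL divergence between the teacher's softened distribution and
    # the student's: the quantity a student minimizes during distillation.
    p = softmax(teacher_logits, T)  # teacher's soft targets
    q = softmax(student_logits, T)  # student's predictions
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A student whose outputs match the teacher's incurs (near-)zero loss;
# a mismatched student incurs a positive loss it would train to reduce.
teacher = [4.0, 1.0, 0.5]
loss_matched = distillation_loss(teacher, teacher)
loss_off = distillation_loss([0.1, 2.0, 3.0], teacher)
```

Minimizing this loss over many examples is what lets a small model inherit much of a larger model’s behavior without access to its weights.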
Savitt then asked whether OpenAI technology had been used in any way to develop xAI.
Musk: Using other AIs to check your AI is standard practice.
OpenAI and xAI did not immediately respond to WIRED’s request for comment.
OpenAI has sought to prevent competitors, most notably Chinese AI lab DeepSeek, from distilling its models. In a February 2026 note to a House committee, OpenAI wrote that it has “taken steps to protect and strengthen our models from distillation.” In the same note, OpenAI said it was focused on ensuring conditions in which “China cannot develop autocratic artificial intelligence by appropriating and repackaging American innovations.”
The Trump administration has also taken steps to prevent Chinese companies from distilling American artificial intelligence models. Michael Kratsios, director of the White House Office of Science and Technology Policy, said in an April 2026 note that the administration would share information about foreign distillation efforts with US AI companies. Kratsios wrote in a post on X that “the U.S. government is committed to the free and equitable development of artificial intelligence technologies in a competitive ecosystem.”
US AI labs have used each other’s AI models in a variety of ways to test progress and assess safety. In today’s competitive landscape, however, some AI companies have shut out competing labs entirely. In August 2025, Anthropic blocked OpenAI’s access to its Claude coding models after alleging that OpenAI had violated its terms of service. More recently, Anthropic also cut off xAI’s access to its models for coding.
During Musk’s multi-day hearing, Savitt questioned him about his attempts to gain control of OpenAI and, later, his quest to defeat the creator of ChatGPT. On Wednesday, Savitt produced emails and text messages from 2017 to support a series of questions about whether Musk had pressured OpenAI by withholding funding and hiring away key researchers.
