News · May 4, 2026 · 3 min read

Musk admits xAI used OpenAI models to train Grok during trial

Elon Musk testified that his competing AI startup distilled OpenAI's models to improve Grok while suing OpenAI for abandoning its mission.

By Agentic Daily · Verified Source: The Verge

Our Take

Musk's admission that xAI used OpenAI's models to train Grok while simultaneously claiming OpenAI betrayed its founding principles exposes the lawsuit's opportunistic nature.

Why it matters

The testimony shows a standard industry practice, model distillation, being used between direct competitors, and suggests Musk's $150 billion lawsuit is driven more by competitive advantage than by principle.

Do this week

AI teams: audit your model distillation policies this week and ensure you have clear documentation of training data sources in case of legal scrutiny.

Musk confirms xAI distilled OpenAI models

During federal court testimony on Thursday, Elon Musk confirmed that his AI startup xAI used OpenAI's models to train its own Grok chatbot through model distillation. The admission came as part of Musk's $150 billion lawsuit against OpenAI, where he claims the company abandoned its founding mission of developing AI to benefit humanity.

Model distillation is a common industry practice where a larger AI model acts as a "teacher" to pass knowledge to a smaller "student" model. While often used legitimately within companies, it's also employed by smaller AI labs to mimic competitors' performance.
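For readers unfamiliar with the technique, the core of distillation is training the "student" to match the "teacher's" temperature-softened output distribution rather than hard labels. A minimal sketch in plain Python follows; the function names and toy logits are illustrative, not drawn from any lab's actual pipeline:

```python
import math

def softmax(logits, temperature=1.0):
    """Softmax with temperature scaling: a higher temperature softens
    the distribution, exposing the teacher's relative preferences
    among wrong answers as well as the top answer."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's and student's softened
    distributions. Minimizing this nudges the student's outputs
    toward the teacher's, which is how a smaller model inherits
    behavior from a larger one."""
    p = softmax(teacher_logits, temperature)  # teacher targets
    q = softmax(student_logits, temperature)  # student predictions
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy example: the student roughly matches the teacher's ranking,
# so the loss is small but nonzero.
teacher = [4.0, 1.0, 0.2]
student = [2.5, 1.5, 0.5]
loss = distillation_loss(teacher, student)
```

In practice this loss is computed over a teacher model's responses to many prompts, which is why a lab can "distill" a competitor's model simply by querying its API at scale.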

In the trial, which began April 27th with jury selection, Musk is seeking the removal of OpenAI CEO Sam Altman and president Greg Brockman, and demanding that OpenAI stop operating as a public benefit corporation. OpenAI has dismissed the lawsuit as "a baseless and jealous bid to derail a competitor."

Court evidence revealed that before the trial, Musk attempted to settle and threatened Brockman and Altman: "By the end of this week, you and Sam will be the most hated men in America." OpenAI's lawyers are seeking to admit this as evidence of Musk's competitive motivations.

Legal strategy meets competitive reality

The testimony exposes a contradiction at the heart of Musk's case. While arguing that OpenAI betrayed its nonprofit mission by focusing on profits, Musk simultaneously used OpenAI's own models to build a competing commercial product. This undermines his position as a principled defender of AI safety and open development.

Court documents show Musk "largely drafted OpenAI's mission and heavily influenced its early structure," according to evidence presented. However, internal emails revealed early concerns from Greg Brockman and Ilya Sutskever about "Musk's level of control over the company."

Expert witness Stuart Russell, charging $4,000 per hour (company-reported) for his first 40 hours of work with Musk's legal team, provided generic AI risk testimony that Judge Yvonne Gonzalez Rogers noted seemed disconnected from the specific dispute at hand.

Document your training practices now

The case highlights how model distillation practices can become legal evidence in competitive disputes. AI teams should maintain clear documentation of all training data sources, especially when using competitors' models for distillation or evaluation purposes.
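One lightweight way to keep that documentation is a per-dataset provenance record written to an auditable log. The sketch below is a hypothetical schema, not an industry standard; every field name and value is illustrative:

```python
import json

# Illustrative provenance record for one training-data source.
# The field names are hypothetical assumptions, not a standard schema.
record = {
    "dataset_id": "sft-batch-017",        # internal identifier
    "source_type": "model_output",        # e.g. human, web_crawl, model_output
    "upstream_provider": "third-party-api",  # which vendor/model produced it, if any
    "license_or_tos_reviewed": True,      # was the provider's ToS checked?
    "collected_on": "2026-05-04",
    "approved_by": "legal-review",        # sign-off trail for later audits
}

# Serialize to JSON so the record can live in an append-only audit log.
provenance_log = json.dumps(record, indent=2)
```

The point is less the format than the discipline: if a dispute surfaces, a dated, signed-off record of where each batch of training data came from is far easier to defend than reconstructed recollections.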

The testimony also reveals the standard nature of cross-pollination in AI development. Russell mentioned legitimate applications like medical technology and protein structure prediction, while acknowledging that "each company individually feels it needs to be in this race."

For enterprises using multiple AI providers, the case demonstrates how vendor relationships and data flows could become scrutinized in future legal disputes. Clear contractual terms around model usage and training practices are becoming essential compliance measures, not just technical considerations.

#LLM #AI Ethics #Legal AI #Enterprise AI