Musk Admitted xAI Trained on OpenAI Models. The Industry’s Distillation Double Standard Is Now on Record.
Musk just said something remarkable in a federal courtroom. Under oath. In a lawsuit he filed against the company he helped found.
On Thursday, while testifying in his own lawsuit against OpenAI in Oakland, California, Elon Musk admitted that xAI — the AI company he founded to compete with the very firm he’s suing — has partly used OpenAI’s models to train its own (WIRED). The exchange was captured by multiple outlets. OpenAI lawyer William Savitt asked whether xAI had “distilled” OpenAI’s technology. Musk’s response: “Generally all the AI companies [do that].” Pressed again, he said “Partly.”
Distillation is a standard machine learning technique: a smaller model is trained to mimic a larger one, preserving most of its capabilities at lower computational cost. It’s useful. It’s ubiquitous. And it is, at this exact moment, the subject of a multi-front political and legal campaign by the very companies now implicated by Musk’s admission.
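For readers unfamiliar with the mechanics, here is a minimal sketch of the classic soft-target distillation objective (Hinton et al.'s formulation, where a student model is trained to match a teacher's temperature-softened output distribution). The function names and logits are illustrative, not drawn from any lab's actual pipeline; production distillation of LLMs typically works on sampled outputs rather than raw logits.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature > 1 flattens the distribution, exposing the teacher's
    # relative preferences among wrong answers ("dark knowledge").
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    # KL divergence from the softened teacher distribution to the
    # softened student distribution -- the core distillation objective.
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Hypothetical raw scores over three classes.
teacher = [4.0, 1.5, 0.5]   # large model
student = [3.0, 2.0, 1.0]   # smaller model being trained
loss = distillation_loss(teacher, student)
```

Minimizing this loss over many examples pushes the small model toward the large model's behavior at a fraction of the training cost, which is why the technique is both ubiquitous and contested.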
Because while xAI was apparently using OpenAI’s outputs to build Grok, OpenAI was in Washington telling Congress the opposite.
In a February 2026 memo to a House committee, OpenAI wrote that it had “taken steps to protect and harden our models against distillation,” specifically naming the Chinese AI lab DeepSeek (WIRED). The memo argued that China should not be allowed to “advance autocratic AI by appropriating and repackaging American innovation.” The Trump administration subsequently issued its own memo framing distillation as a threat requiring government-industry coordination to counter.
Anthropic has made the same case more publicly. In a February blog post, the company described distillation as “a method of intellectual property theft” and said it had identified DeepSeek, Moonshot, and MiniMax conducting industrial-scale campaigns — more than 16 million exchanges through roughly 24,000 fraudulent accounts — to extract capabilities from Claude (Anthropic). Google has said the same: distillation violates its terms of service and constitutes IP theft (The Verge).
All three of America’s major AI labs have drawn a line: distillation by foreign actors is a national security issue. OpenAI has told Congress so in writing. Anthropic has published it. Google has said it repeatedly.
Then Musk, the world’s richest person and the plaintiff in an ongoing lawsuit alleging that OpenAI betrayed its nonprofit mission, testified that xAI did the same thing (Reuters) — just with an American company.
The irony is not subtle. OpenAI has spent months building a policy case that model distillation is an act of competitive appropriation and a threat to America’s AI leadership. That case is now being used to justify export controls, trade restrictions, and a harder line on Chinese AI development. The same framing, applied to xAI, has no obvious enforcement mechanism — xAI is an American company running on American infrastructure, and distillation between U.S. entities is not illegal.
Musk, for his part, described the practice differently when it benefited him. “It is standard practice to use other AIs to validate your AI,” he testified (Reuters). This is technically true. It is also the same argument the Chinese labs have made in their own defense (Semafor).
The lawsuit itself continues. Musk’s core claim — that OpenAI abandoned its charitable mission by converting to a for-profit structure — does not depend on this admission. But the admission sits in the record as a complication. He sued OpenAI for betraying its founding purpose. He also apparently used OpenAI’s technology to build his competing product, the same technology he is now arguing in court was never meant to benefit any individual.
OpenAI and xAI declined to comment. The White House memo on distillation was issued in April; the February OpenAI memo was first reported by Bloomberg.
The question the testimony raises is not whether distillation is legal — it is, between U.S. companies, generally speaking — but whether the industry’s public position on it is coherent. If the practice is a national security threat when DeepSeek does it (Semafor) and standard industry behavior when an American competitor does it, the policy framework built on that distinction has a calibration problem. And the lawyers in Oakland will be reminded of it every time the case returns to court.