Mira Murati Testified Altman Made a False Statement About a Safety Review at OpenAI
The most damaging thing Mira Murati said in court on Wednesday was not that Sam Altman created chaos. It was that, by her account, he told her something untrue.
In a video deposition played during the second week of Elon Musk's lawsuit against OpenAI, the former chief technology officer testified that Altman told her OpenAI's legal department had determined a new AI model did not require review by the company's deployment safety board. Under questioning, Murati said that statement was not true. "As you understand it, was Mr. Altman telling the truth when he made that statement to you?" she was asked. "No," Murati said.
That exchange, in which a former colleague told a federal court that the CEO of the world's most valuable AI company made a false statement about a safety governance procedure, is the specific detail that separates this testimony from the workplace-dysfunction narrative dominating coverage of the case. Every outlet will run the chaos angle. The false-statement allegation, if it is supported by documentary evidence in the trial record, is the kind of claim that securities regulators and institutional investors typically require a company to disclose ahead of a public listing. Whether that corroboration exists is the question worth watching as the trial continues.
Murati's broader testimony painted a picture of internal disorder that she herself had a hand in managing. She told the court that Altman made her job as technology chief harder by pitting executives against each other and undermining her authority. "I had an incredibly hard job to do in an organization that was very complex. I was asking Sam to lead, and lead with clarity, and not undermine my ability to do my job," she said. She described Altman as "creating chaos" and said he was at times deceptive with her and other colleagues. She also testified that after the board briefly ousted Altman in November 2023, she told directors she wanted him to continue as CEO while pressing them for a fuller explanation of the decision.
Murati separately told the court that during that same period, OpenAI faced what she called catastrophic risk of collapse. "I was concerned about the company completely blowing up," she said. The internal dysfunction had operational consequences: she said researchers were preparing to leave for rival labs, including Google DeepMind and Elon Musk's own AI company, xAI. According to MIT Technology Review's reporting on the trial, xAI has since become part of SpaceX and is valued at approximately $1.25 trillion.
The trial record shows why Murati's testimony is commercially sensitive beyond its characterizations of management style. Musk is seeking $150 billion in damages from OpenAI and Microsoft, with any recovery directed to OpenAI's nonprofit arm, according to Reuters' coverage of the proceedings. His central claim is that the company's transition from nonprofit to for-profit structure breached its founding charitable mission. If successful, the case could unwind the restructuring OpenAI needs to complete before a potential public listing. The company, now valued at more than $850 billion, has described the lawsuit as "a baseless and jealous bid to derail a competitor," a reference to its rivalry with Musk's xAI.
The personal stakes for Murati are also material. She left OpenAI in late 2024 and subsequently founded Thinking Machines Lab, which raised approximately $2 billion at a $12 billion valuation in a funding round led by Andreessen Horowitz, with participation from Nvidia, AMD, Cisco, and Jane Street, according to Reuters' reporting on the startup's formation. She has a financial interest in her former colleague's credibility being impaired. In the proceedings, OpenAI has disputed her characterization of its leadership.
The federal court docket for the case, Musk v. Altman, No. 4:24-cv-04722-YGR in the Northern District of California, contains the full record of filings, witness lists, and trial schedule. Judge Yvonne Gonzalez Rogers will decide the charitable trust claims herself, with a nine-member advisory jury delivering a nonbinding verdict on liability to guide her ruling. Microsoft's role as both a defendant and a significant investor adds a complexity the trial has not yet fully resolved.
The governance questions the trial raises are not unique to OpenAI. As AI systems become more consequential, the internal processes by which companies decide what to release, how to safety-test it, and who has authority to override those decisions are becoming de facto public infrastructure. Murati's testimony describes a company whose safety governance depended on a CEO being honest about what his legal department had cleared. The trial record will determine whether that standard was met.