The Board Found Out About ChatGPT the Same Day the Public Did
Helen Toner served on OpenAI's board for two years and never learned about ChatGPT the way a board member is supposed to: she learned about it from Twitter. That detail, from Day 7 of Elon Musk's federal lawsuit against Sam Altman, is the most concrete fact to emerge from a week of testimony about how OpenAI's nonprofit board actually functioned while the company's commercial entity built and shipped one of the most consequential technology products in recent history.
Toner testified that she first became aware ChatGPT was about to launch when an OpenAI employee asked another board member whether the board knew about the development, as Business Insider reported and the San Francisco Business Times separately confirmed. She was not surprised, she said, because she was "used to the board not being very informed about things," The Verge reported from the trial proceedings. Shivon Zilis, another board member, corroborated the account, testifying that the entire board had raised concerns that ChatGPT was released without any communication to the board.
The product launched November 30, 2022. The board found out the same day the public did.
The board fired Altman nearly a year later, in November 2023. Toner described why in a deposition posted to the Effective Altruism Forum: "the end effect was that after years of this thing, all four of us who fired him came to the conclusion that we just couldn't believe things that Sam was telling us. That's a completely unworkable place to be in as a board." All four board members who voted to remove him reached the same conclusion independently.
Toner also testified that AI safety testing at the company had no rigorous standard — that building AI models was, in her words, "more like alchemy than chemistry," meaning there was no clear-cut way to test whether a system was safe before it was released. The Verge captured the quote from trial proceedings: "People are just throwing things together to see what happens." The quote appears in court reporting and deposition accounts; the exact trial transcript is not independently accessible. The description matters because it frames what the board was supposed to oversee: a process Toner herself called untestable by any defined method.
The contrast with OpenAI's current posture is worth noting. The structured evaluation framework the company now describes on its Deployment Safety Hub (pre-deployment red-teaming, system cards, benchmark requirements) was not in place in 2022, according to Toner's account. Whether the board in 2022 was merely uninformed about a process that was still informal, or whether management actively resisted formalizing it, is one of the factual disputes this trial is now litigating.
Zilis also testified about her resignation from the board, which she said came after Altman called to inform her about the launch of xAI, Musk's rival AI company. But WIRED reported that text messages presented in court suggested Zilis had known about xAI before Altman's call. The discrepancy matters because it bears on whether the resignation was genuinely voluntary or a response to discovering that management had been selectively managing the board's information. The same WIRED reporting revealed that Musk had attempted, in 2017 and 2018, to recruit Altman to lead Tesla's artificial intelligence lab, going so far as to offer him a seat on Tesla's board as part of an effort to fold OpenAI into Tesla. The emails and testimony were entered into the federal court record during this week's proceedings.
The trial is being heard in the Northern District of California before Judge Yvonne Gonzalez Rogers. The case is docketed as 4:24-cv-04722, and the federal docket was last updated May 7, 2026. The proceedings are expected to continue for several weeks.
There is a counterargument worth naming. Toner is associated with the effective altruism (EA) community, which has been publicly critical of OpenAI's trajectory under Altman, and the movement's concerns about AI risk shaped some of the board's composition in the first place. It is reasonable to ask whether Toner's account reflects genuine governance failure or a philosophical disagreement about how much oversight a fast-moving startup should have, and whether the EA-appointed board members were trying to impose a governance style that was never realistic for a company racing to ship products its competitors were shipping too. No other frontier lab in 2022 had a formal pre-deployment safety evaluation process either. The question is not only whether OpenAI's board was dysfunctional, but whether the dysfunction was cause or consequence of Altman's management choices.
What the testimony establishes, at minimum, is that OpenAI's board understood itself to be operating without the information it needed to function. Toner's "alchemy" description of safety testing is also a description of a board that had no metric it could check against. Whether that reflects a management team that deliberately kept the board uninformed, a governance structure that was never designed to exert real oversight, or the honest limitations of overseeing a process the field itself had not yet formalized is the factual question the trial is now building a record to answer.