The board of OpenAI received roughly seventy pages of Slack messages and human resources documents. The compilation, assembled by Ilya Sutskever, OpenAI's former chief scientist, documented what multiple insiders described as a years-long pattern of alleged deceptions by CEO Sam Altman. Sutskever sent it as disappearing messages, designed to leave no trace. The board fired Altman on November 17, 2023. Five days later, he was back.
That sequence is the factual core of an investigation by The New Yorker, published April 6, 2026. Reporters Ronan Farrow and Andrew Marantz spoke with more than one hundred people with firsthand knowledge of how Altman conducts business and obtained on-record testimony from a board member. The story they tell is not primarily about personality. It is about what happens when the structure designed to prevent a CEO from acting against an organization's stated mission is confronted with evidence of exactly that and still cannot hold.
What the documents contained
Sutskever spent weeks working with like-minded colleagues to compile approximately seventy pages of Slack messages and human resources documents, according to The New Yorker. He sent them to board members using a disappearing-message protocol, a setting that automatically deletes messages from participants' devices after a set time. The documents alleged a pattern of deceptive conduct by Altman that multiple insiders had spent considerable effort corroborating.
Dario Amodei, who was then OpenAI's research director and would later co-found Anthropic, separately maintained more than two hundred pages of private notes that also documented alleged deceptions by Altman, The New Yorker reported. Amodei left OpenAI in 2020, well before the 2023 crisis, but his notes were part of the documentary record the board was working from.
A board member who spoke on the record described Altman to Farrow and Marantz as possessing two traits that rarely coexist in one person. The first was a strong desire to please, to be liked in any given interaction. The second was, in the board member's phrasing, "almost a sociopathic lack of concern for the consequences that may come from deceiving someone." The quote captures something specific: not a diagnosis, but a behavioral pattern the board member had directly observed.
When confronted by the board with the documented evidence, Altman's response was blunt. "This is just so fucked up," he said repeatedly. "I can't change my personality." The line, if accurate, does not dispute the underlying conduct. It reframes it as immutable.
The structure that was supposed to prevent this
OpenAI was founded in 2015 as a nonprofit research organization with a stated mission to develop artificial general intelligence safely. Its charter says OpenAI's "primary fiduciary duty is to humanity," according to Fortune, which reviewed the document at the time of the 2023 board crisis. Not to shareholders. To humanity. In practice, the structure is more complex. OpenAI operates a for-profit subsidiary that has accepted billions in outside investment, most notably from Microsoft, which had invested approximately $13 billion as of November 2023, The New Yorker reported. The arrangement gives the nonprofit board nominal control over the commercial entity. It is the structure that was supposed to make OpenAI different.
The theory, as described by the organization's own founding documents, is that a mission-driven board with fiduciary duty to humanity would constrain the commercial incentives that typically drive technology development. If the CEO acted against the mission, the board could act against the CEO. The evidence compiled by Sutskever and Amodei was supposed to give the board the basis to do exactly that.
During the weekend Altman was fired, OpenAI was negotiating an investment from Thrive Capital at a valuation of $86 billion, according to The New Yorker. The deal would have made OpenAI one of the most highly valued private companies in the world. The Thrive negotiations were part of the context the board was navigating as it weighed what to do with the documented evidence it had received.
The reversal
Within five days, the board had reversed itself. Altman was reinstated. The evidence in the disappearing messages Sutskever had sent did not constrain him. Neither did the two hundred pages of notes Amodei had kept.
According to The New Yorker, Altman told Mira Murati, then OpenAI's chief technology officer, that his allies were "going all out and finding bad things to damage her reputation." Murati had been among those who supported the board's initial decision to remove him. The threat, if accurately reported, is not the language of someone who had been wrongfully removed and was eager to reunite the organization. It is the language of someone mapping retaliation.
The board members who reversed course did not publicly explain what changed. The documentary evidence that had seemed urgent enough to justify firing a CEO on a Friday afternoon was apparently not sufficient to keep him out by the following Wednesday.
What this means for AI governance
OpenAI is reportedly preparing for an initial public offering at a potential valuation of $1 trillion, The New Yorker reported. The company that was supposed to prove that artificial intelligence could be developed responsibly, constrained by a mission rather than by profit, is now heading toward the public markets where quarterly earnings and shareholder expectations define the terms of accountability.
The New Yorker story is the most detailed account to date of what happened inside OpenAI's board in November 2023. It comes with caveats any serious reader should weigh. The board member who spoke on the record was removed during the crisis and has obvious incentives to cast the conflict in the most favorable terms. Sutskever, who compiled the seventy pages, is a safety-focused researcher who left OpenAI after the reinstatement and has his own institutional interests. The disappearing-message protocol means the documents themselves have not been made public. What we know about them comes from The New Yorker's reporting, which is itself based on sources with varying perspectives on what happened.
But the core of the story does not rest entirely on any single source. The fact of the documentation is consistent with what was reported at the time about the board's internal processes. The reinstatement within five days is a matter of public record. The charter's stated primary duty to humanity is verifiable from OpenAI's own website. And the fact that a company with that stated mission, that structure, and that much external investment is now preparing an IPO is a structural tension that no amount of board-level documentation resolved.
OpenAI's governance structure was designed to make this kind of accountability possible. What the documented evidence and its aftermath describe is a structure that could document the problem, distribute the documentation, and still fail to act on it. The governance worked exactly as designed. The design did not work.
Ronan Farrow and Andrew Marantz's investigation "Sam Altman May Control Our Future. Can He Be Trusted?" was published in The New Yorker on April 6, 2026.