The internal memo began with six words: "Sam exhibits a consistent pattern of." The first item on the list was "Lying." That document, one of roughly seventy pages of Slack messages and HR records that Ilya Sutskever compiled and sent as disappearing messages to three OpenAI board members in the fall of 2023, is now the most concrete evidence of what happened inside the world's most important AI company during the most turbulent week in its history. It is also, depending on your perspective, either the justification for firing Sam Altman or the product of a paranoid colleague who had lost faith in the founder he helped recruit.
The New Yorker published a detailed account on April 13, drawing on those memos, court depositions, and interviews with more than a dozen people close to the board and the company. The result is the most comprehensive public accounting of an event that the AI industry spent eighteen months trying to move past. The question the article cannot fully answer is whether the board that fired Altman had the information, the authority, or the nerve to do what they apparently believed was necessary.
Sutskever, OpenAI's cofounder and chief scientist, was one of the architects of the company's nonprofit structure, designed to insulate the board from commercial pressure. When the board voted to remove Altman on November 17, 2023, Sutskever was on the call. Altman was in Las Vegas, at a Formula 1 race. Sutskever invited him to a video call, and the board told him he was fired.
What happened next is well documented. Microsoft, which had invested some $13 billion in OpenAI, moved within hours to reinstate Altman. Greg Brockman, OpenAI's president and Altman's closest ally, resigned. Employees threatened to resign en masse. The board that had voted to remove Altman found itself negotiating his return. Sutskever, who had initiated the process, never returned to the company. He later explained in a court deposition: "I felt that if we were to go down the path where Sam would not return, then OpenAI would be destroyed."
That sentence deserves attention. The chief scientist who helped compile seventy pages of evidence that Altman was dishonest believed that removing Altman would destroy the company. The memos do not exist in a vacuum. They were written by someone who understood, better than almost anyone, what OpenAI was and what it would become. The article does not resolve the contradiction.
The governance structure that made this crisis possible is not an accident. OpenAI's unusual arrangement was designed to allow a nonprofit board to oversee a commercial entity capable of raising the billions needed to compete with Google and Meta. At the time of the board dispute, OpenAI was negotiating a deal with Thrive that would have valued the company at $86 billion. The structure worked, in a sense: the board fired the CEO. It did not work in the way the designers intended, because the board lacked the sustained authority to make the decision stick. Within five days, Altman was back, the board had been restructured, and two new members, the former Harvard president Lawrence Summers and the former Facebook CTO Bret Taylor, were selected after what the article describes as close conversations with Altman. Taylor was proposed by Altman himself in a text to Microsoft CEO Satya Nadella: "bret, larry summers, adam as the board and me as ceo and then bret handles the investigation." That text is in the memos.
The superalignment team, which OpenAI formed in 2023 to study long-term existential risks from AI, was dissolved the following year without completing its mission. Both of its leaders, Sutskever and Jan Leike, had already left the company. Mira Murati, who briefly served as interim CEO during the crisis, told the New Yorker: "We need institutions worthy of the power they wield." That line appears in an article about a company that is now preparing for an IPO that could value it at $1 trillion, according to Reuters, with a filing potentially in the second half of 2026 raising $60 billion or more.
The governance problem has not been solved. It has been deferred. The board that returned Altman was not the board that fired him. The structure that allowed Altman to text Microsoft's CEO and propose a new board before the old one had finished calling him is still, in modified form, the structure in place. OpenAI is a more valuable company than it was in November 2023. It is not a better-governed one. The memos show what the board knew and when they knew it. What the board did about it, and what the result says about who actually controls OpenAI, is the story the AI industry has been trying to close the book on since the day the robotic ring bearer walked down the aisle.