The board had the power to fire him. It used that power for five days.
In November 2023, Sam Altman was ousted as CEO of OpenAI while attending a Formula 1 race in Las Vegas. Ilya Sutskever, OpenAI's chief scientist, had spent weeks compiling seventy pages of Slack messages, HR documents, and explanatory text—sent as disappearing messages to three board members—alleging that Altman was not fit to have his finger on the button. Dario Amodei, who had led the company's safety efforts before leaving to co-found Anthropic, had written in more than two hundred pages of private notes that "the problem with OpenAI is Sam himself." The board voted Altman out.
Five days later, he was back. The board that tried to enforce OpenAI's own governance structure is gone. The memos that documented their reasoning are now legendary in Silicon Valley. And the company is preparing for an IPO that could value it at a trillion dollars.
The New Yorker's Ronan Farrow and Andrew Marantz have published an eighteen-month investigation drawing on those internal documents and more than a hundred interviews. The story of what happened inside OpenAI in the fall of 2023 is not, at its core, a story about personalities or drama. It is a story about what happens when the governance structure designed to check a single person's power meets the realities of venture capital, employee loyalty, and a $13 billion investment from Microsoft.
The architecture OpenAI's founders built was unusual and deliberate. The company was established as a nonprofit in 2015, and its board had a legal duty to prioritize the safety of humanity over the company's success or even its survival. The CEO could be fired. That was the point.
What Farrow and Marantz document is how consistently and thoroughly that structure failed. Sutskever's memos, which the reporters reviewed in full, allege that Altman misrepresented facts to executives and board members and deceived them about internal safety protocols. One memo begins with a list headed "Sam exhibits a consistent pattern of..." The first item: "Lying."
The board members who received the memos—Helen Toner, an AI policy expert, and Tasha McCauley, an entrepreneur—already harbored doubts. Toner later testified that she and McCauley had come to believe that Altman's role entrusted him with the future of humanity, and that he could not be trusted with it.
Microsoft learned of the plan to fire Altman moments before it happened. Satya Nadella, Microsoft's CEO, told the reporters he was "very stunned" and "couldn't get anything out of anybody." Within hours, Thrive Capital had put on hold a planned tender offer that would have valued OpenAI at eighty-six billion dollars. A public letter demanding Altman's return circulated at the organization. Most of OpenAI's employees signed it. Some who hesitated received imploring calls and messages from colleagues.
The board was backed into a corner. "Control Z, that's one option," Toner said at the time. "Or the other option is the company falls apart."
What followed was a master class in where real power resided. Altman set up what he called a "government-in-exile" at his twenty-seven-million-dollar San Francisco mansion. Crisis communications managers, lawyers, and allies including Airbnb co-founder Brian Chesky joined for hours each day. Sutskever, who had compiled the case against Altman, was ultimately persuaded to reverse course. Anna Brockman, the wife of OpenAI co-founder Greg Brockman, approached him at the office and pleaded with him to reconsider. "You're a good person—you can fix this," she said. Sutskever later explained in a court deposition that he believed if Altman did not return, "OpenAI would be destroyed."
The three board members who had moved against Altman lost their seats. Two new members—former Harvard president Lawrence Summers and former Facebook CTO Bret Taylor—were selected after close conversations with Altman himself. As a condition of their exit, the departing members demanded that the allegations against Altman be investigated. The new board, chosen by Altman, oversaw that investigation.
Jan Leike, who co-led the Superalignment team with Sutskever, told The New Yorker the team's formation was "a pretty effective retention tool." OpenAI announced the team would receive twenty percent of the company's compute. In practice, according to four people who worked on or closely with the team, actual resources amounted to one to two percent, allocated on the oldest cluster with the worst chips. The team was dissolved the following year without completing its mission.
In a statement, OpenAI said the allegations in the memos were inaccurate and that the company has "continuously improved our safety processes." Altman, in extensive interviews with The New Yorker, disputed the characterization of his behavior and attributed criticism to a tendency early in his career "to be too much of a conflict avoider."
The company is now one of the most valuable in the world. It is reportedly preparing for an initial public offering at a potential valuation of a trillion dollars. Altman is building AI infrastructure that includes partnerships with foreign governments. OpenAI is securing government contracts and setting standards for AI use in immigration enforcement, domestic surveillance, and autonomous weaponry.
In a blog post last year, Altman wrote that "astounding triumphs—fixing the climate, establishing a space colony, and the discovery of all of physics—will eventually become commonplace." He has also warned, at times, that the industry may be in a bubble. "Someone is going to lose a phenomenal amount of money," he told reporters last year.
The question Farrow and Marantz leave unanswered—the question OpenAI's own governance was supposed to answer—is whether anyone can actually stop him if the bubble bursts, or if the alignment problem proves unsolvable, or if the technology does exactly what the company's own charter says it might: disempower humanity.
The board tried. It had five days.