Sam Altman built the most consequential AI company on earth. He also appears to have built something else, according to two of the people who were supposed to keep him in check: a machine for deceiving the people trying to govern him.
The New Yorker investigation, by Ronan Farrow and Andrew Marantz, pulls together more than two hundred pages of internal documents kept by Dario Amodei, OpenAI's former vice-president of research; a seventy-page memo that Ilya Sutskever compiled with allies in the fall of 2023; and interviews with more than a hundred people with firsthand knowledge of Altman. What those sources found, independently and in parallel, is consistent enough to make for uncomfortable reading for anyone who has accepted the nonprofit-plus-commercial hybrid at face value.
The nonprofit structure was supposed to be the lock. The commercial arm could raise money, hire researchers, build products, and move fast. The nonprofit board could slow things down, demand evidence, and pull the emergency brake. Two of the people who sat closest to that lock have now documented what they found when they tried to use it.
Ilya Sutskever co-founded OpenAI with Altman in 2015. By the fall of 2023, he had spent eight years watching the company he helped build diverge from its founding premise. The seventy-page memo he assembled from Slack messages and HR documents began with a single heading: Lying. Under it, according to The New Yorker, Sutskever documented what he described as a consistent pattern of deception. His conclusion, as quoted in the piece: "I don't think Sam is the guy who should have his finger on the button."
Dario Amodei, who left OpenAI at the end of 2020 and co-founded Anthropic the following year, kept notes. More than two hundred pages of them, according to The New Yorker, including internal emails and memos. His assessment was blunt: "The problem with OpenAI is Sam himself."
These are not peripheral figures. Sutskever co-led the Superalignment team, the internal effort meant to ensure that future AI systems do what their operators intend. Amodei oversaw safety research before he left. Their documented concerns converge on the same failure point: the governance structure that was supposed to make OpenAI safe was not working, and both men concluded the reason was the person it was supposed to govern.
The alignment compute figure is the most specific number in this story. OpenAI publicly committed to dedicating twenty percent of its most powerful compute to safety research. The actual allocation to Sutskever's team was closer to one to two percent, according to The Block. That is not a rounding error. It is at least a ten-to-one gap between promise and delivery, in the one area where the company's stated mission most directly depended on follow-through.
What the board did with this information is instructive. In November 2023, the OpenAI board moved to remove Altman, citing, per internal communications in The New Yorker, a pattern of dishonest communication. Altman responded with a call campaign that, according to phone records reviewed by Farrow and Marantz, kept him on calls with journalists for more than twelve hours a day over five days. He told the board, in a message quoted in The New Yorker: "I can't change my nature." Four days later, the board reversed itself.
The WilmerHale law firm was brought in to investigate. When Altman returned to the company, the investigation produced no written report, The Block reported. The board that approved his return has since been reshaped, and its independence is now harder to assess than it was before November 2023.
Microsoft has invested roughly thirteen billion dollars in OpenAI. One Microsoft executive, quoted in The Block's coverage of the New Yorker piece, offered a stark verdict: "He distorts, twists, renegotiates, and violates agreements. There is a small but real possibility he will ultimately be remembered like Bernie Madoff or Sam Bankman-Fried."
The comparison comes from a party with an obvious commercial stake in OpenAI's reputation. But the New Yorker reporting forces a narrower question: whether a structure built around a nonprofit board's authority over an increasingly commercial entity was ever actually designed to work, or whether it was designed to look like it worked while the real decisions moved elsewhere.
OpenAI's stated mission is to build artificial general intelligence that benefits humanity. The company is now valued at eighty-six billion dollars, its products are embedded in tools used by hundreds of millions of people, and its next generation of models is the subject of intense speculation in every capital where AI policy is being made. If the nonprofit oversight layer is ornamental, the accountability mechanism that was supposed to make the mission statement binding does not exist.
Altman is forty years old. He remains in control.