The night before a 20-year-old man allegedly threw a Molotov cocktail at Sam Altman's San Francisco home, the OpenAI CEO published a blog post acknowledging that he had "underestimated the power of words and narratives." He wrote that after an article raised questions about his honesty, someone had warned him it could make things "more dangerous" for him. He shrugged the warning off. At 3:45 a.m. on April 11, police say, the suspect threw the incendiary device at Altman's house; he was later arrested at OpenAI headquarters, where, according to Reuters and The New York Times, he threatened to burn the building down.
The article that preceded the attack was no ordinary press clipping. It was a 16,000-word investigation by Ronan Farrow and Andrew Marantz in The New Yorker, published April 6, drawing on more than one hundred interviews and on seventy pages of secret Slack messages and HR documents compiled by Ilya Sutskever. The attack on Altman's home gives the question of his trustworthiness a new and uncomfortable weight.
But the most alarming part of the story is not the attack itself. It is a conversation that allegedly took place inside OpenAI: executives discussing how the company could position itself as a kind of nuclear weapon in global politics. Former employees described a strategy in which world powers would compete to invest in OpenAI's technology, each afraid of being left behind. Jack Clark, OpenAI's former policy director, described the approach as a "prisoner's dilemma," in which countries would face pressure to fund the company or risk dangerous consequences. A junior researcher who attended a meeting about the plan told the reporters it was "completely insane." Several employees discussed resigning en masse in response.
OpenAI disputed the characterization, calling it "ridiculous." The accounts of the former employees who described the conversations directly contradict that denial. And the plan, whatever its ultimate seriousness, reflects something real about how OpenAI's leadership has thought about the company's geopolitical leverage: not as a responsibility but as a tool.
The investigation also details Sutskever's secret campaign, beginning in the fall of 2023, to alert board members to what he saw as a consistent pattern of deception by Altman. Sutskever compiled seventy pages of evidence, including Slack messages, HR documents, and photos taken on a personal phone to avoid detection on company systems, and sent them to three board members as disappearing messages. "He was terrified," one board member recalled. The documents alleged that Altman had misrepresented facts to executives and board members and had deceived them about internal safety protocols. In one memo, the first item on a list headed "Sam exhibits a consistent pattern of..." was the word "Lying."
The board acted on those memos in November 2023, removing Altman and telling the public only that he "was not consistently candid in his communications." What followed was one of the most compressed corporate crises in Silicon Valley history. Microsoft learned of the firing only moments before it happened; Satya Nadella told the reporters he "couldn't get anything out of anybody." Altman set up what he called a "government-in-exile" at his $27 million San Francisco mansion and, with help from crisis-communications strategist Chris Lehane, Airbnb co-founder Brian Chesky, and investor Ron Conway, mounted a rapid counteroffensive. Thrive, Josh Kushner's venture firm, put its planned investment on hold and made clear it would proceed only if Altman returned. Within days, Sutskever had reversed course, employees had signed a letter demanding Altman's return, and the board had capitulated. Altman was reinstated. The three board members who had sought his removal lost their seats.
The investigation quotes a board member's description of Altman that has circulated in Silicon Valley for years but has rarely appeared in print: "He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone."
In his blog post responding to the article, Altman acknowledged making mistakes, including in his handling of the board conflict. "I am not proud of handling myself badly in a conflict with our previous board that led to a huge mess for the company," he wrote. "I am sorry to people I've hurt and wish I had learned more faster." He framed the episode as the product of a "'ring of power' dynamic" in the AI industry, and proposed that the solution was "orienting towards sharing the technology with people broadly, and for no one to have the ring." He did not address the specific allegations about the geopolitical strategy or the board memos.
What the investigation makes clear is that the question of Altman's trustworthiness is not a personality quirk or a media narrative. It is a governance problem at the center of the most consequential technology company in the world. OpenAI is reportedly preparing for an IPO at a valuation that could reach a trillion dollars. Its infrastructure is being built inside foreign autocracies. It is securing contracts that set standards for how AI is used in immigration enforcement, domestic surveillance, and autonomous weapons in war zones. The board that oversees those decisions was structured to prioritize the safety of humanity over the company's survival. The people who designed that structure no longer sit on the board.
Internal surveys cited in The New Yorker investigation found that only 19% of OpenAI employees believed the company would be truthful in its self-reported progress on safety. Jan Leike, who resigned over concerns about the company's safety culture, echoed Sutskever's criticism of Altman's leadership. And Sutskever himself, the man who once told a board member that Altman was not the person who should "have his finger on the button," no longer works there. The button belongs to Altman alone.
The attack on Altman's home is being investigated as a criminal matter. No one was injured. But the incident is a reminder of what happens when the question of who controls a transformative technology becomes indistinguishable from the question of who can be trusted.