Eight weeks before publishing its new principles, OpenAI signed a Pentagon contract. In the interval, that deal sparked protests from the company's own employees and prompted an internal memo from Sam Altman in which he acknowledged the company had rushed to close the agreement. Then OpenAI published a document that rewrote the ethical commitments it made to the world in 2018. The most discussed deletion: the stop-and-assist clause, which committed OpenAI to stop competing with and start assisting any rival that approached artificial general intelligence first. Also gone: the pledge to publish most AI research as a public good, and the explicit definition of AGI as systems that outperform humans at most economically valuable work. In their place: five principles the 2018 charter never mentioned, including Adaptability, which says OpenAI can imagine periods where it has to trade off some empowerment for more resilience.
The document, dated April 26, 2026 and attributed to Sam Altman, removes three commitments from its 2018 founding charter. No regulator required this. No court ordered it. No public vote approved it. A company changed its own ethical obligations in a blog post, the way a person might update a LinkedIn bio.
That is not unusual in the technology industry. What is unusual is the stakes. OpenAI is the most consequential AI laboratory in the world, an organization that has shaped the global direction of artificial intelligence development and sits inside a web of government partnerships, defense contracts, and research commitments that affect how the technology evolves. It is also, now, a defense contractor: on February 28, 2026, OpenAI signed an agreement to deploy its models on classified military networks through the Department of Defense. The deal prompted protests outside its San Francisco and London offices, an open letter signed by 98 OpenAI employees and 796 Google employees, and an internal memo in which Altman said the company had rushed to close the agreement. OpenAI subsequently amended the contract to add language prohibiting use of its systems for domestic surveillance of US persons.
The Pentagon contract and the principles rewrite are not formally connected. Nothing required OpenAI to update its ethical framework as a condition of the defense deal. But the sequence tells a story. Eight weeks after closing a contract it had to amend to rule out domestic surveillance, and after employees and outside researchers pushed back on the implications, OpenAI published a document that says, in essence, that the company gets to decide what it owes the world, and that the list is negotiable.
The 2018 charter was not a legal contract. Critics argued it functioned mainly as a recruiting tool and a regulatory signal, a way of telling Washington that OpenAI could be trusted not to race recklessly toward AGI while continuing to race toward AGI anyway. The charter served that function for eight years. Now the company has replaced it with a document that says the opposite.
The initial signal for this story came from Business Insider, which has an investment from OpenAI and disclosed the conflict. The specific document changes — the deleted stop-and-assist clause, the removed publication commitment, the new Adaptability language — are verifiable directly from the two source texts at openai.com/charter and openai.com/index/our-principles. This article is an independent confirmation with additional context from the Pentagon contract timeline.
OpenAI did not respond to a request for comment on the principle changes.