The pattern is the story. OpenAI has now retreated from three consumer-facing products in a single week: Sora, which generated $1.4 million in net revenue before being shut down, and whose $1 billion Disney licensing deal collapsed before money changed hands; Instant Checkout, which launched in September 2025 and was pulled back in March 2026 after only about a dozen of Shopify's millions of merchants went live; and adult content mode, announced in October 2025, delayed multiple times, and killed indefinitely this week. Three announcements, three retreats. The adult content retreat is the most revealing, because the feature never shipped — and that is exactly why it matters.
Adult content was never going to move the needle at OpenAI's scale. OpenAI has 900 million weekly active users and 50 million consumer subscribers, according to Axios. The AI-generated adult content market exists, but it is served by smaller, less scrutinized companies. For a company courting enterprise customers and preparing for a public listing, the brand risk outweighed the potential revenue by a wide margin. The danger, as one OpenAI advisor put it in January, was that the company could end up building what the Wall Street Journal's reporting summarized as a "sexy suicide coach": a chatbot that discussed sexual content while also generating material no enterprise customer or regulator would tolerate. According to The Next Web, which cited Financial Times reporting, some investors questioned why OpenAI would jeopardize its reputation for a product with relatively small upside. That is a liability calculation, not a values change.
The feature also ran into documented engineering difficulty. According to reporting by The Next Web, which cited Financial Times sources, internal testing surfaced content safety problems, including outputs involving illegal scenarios, that proved difficult to filter at the level the feature would have required. Whether those problems crossed into technical impossibility depends on whom you ask, but the testing difficulties were real enough that OpenAI delayed the feature repeatedly before walking away entirely.
The legal exposure is the part that gets underplayed in the cultural analysis framing. OpenAI disclosed wrongful death and harm lawsuits as among the top risks to its business in a financial document shown to investors. Among them is the case of Adam Raine, a 16-year-old from Southern California who died in April 2025. The Raine lawsuit, filed in August 2025, alleges that Adam mentioned suicide roughly 200 times in his conversations with ChatGPT, and that the chatbot responded with suicide-related content approximately 1,275 times, roughly six times as often as Adam raised the subject himself. OpenAI has denied the allegations. The company faces multiple such suits, and the pattern is not abstract: this is the kind of exposure that shows up in investor risk disclosures before it shows up in headlines.
What makes this week's retreat different from earlier stumbles is the IPO timeline. OpenAI closed a $110 billion funding round in February 2026 — $50 billion from Amazon, $30 billion from Nvidia, and $30 billion from SoftBank — at a $730 billion valuation. That valuation is predicated on a path to a public offering where institutional investors and retail shareholders will own pieces of the company. Enterprise customers, the ones who will actually determine whether OpenAI becomes a durable business rather than a subsidized research lab, do not want their AI assistant associated with a product category that creates legal and reputational risk they are not equipped to manage.
Apple's decision to open Siri to Anthropic's Claude and Google's Gemini in iOS 27, reported by Bloomberg this week, adds a layer of urgency that the Axios framing misses entirely. OpenAI's exclusivity arrangement with Apple was one of the most valuable distribution partnerships in the history of consumer software. Losing it — or having it diluted — means OpenAI needs to compete on product merit rather than proximity to hardware. Sam Altman's internal code red in December 2025, declaring a critical period for ChatGPT as Google's Gemini 3 outperformed the company's models on benchmarks, was the alarm bell. The adult content retreat is the quiet consequence of that alarm.
There is a version of this story where OpenAI discovered something principled about its values. The company committed to building AGI that benefits humanity, and some employees found it difficult to reconcile that ambition with engineering a chatbot that could discuss sexual content without generating illegal material. That version is not wrong, but it is incomplete. The feature was killed because the liability math did not work, not because the culture shifted. The culture, if anything, shifted toward whatever protects the IPO.
The three retreats in quick succession reveal something about how OpenAI operates under pressure. The company announces boldly and retreats when reality proves harder than the announcement suggested. Sora was going to transform video. Checkout was going to transform commerce. Adult content was going to treat users like adults. None of it happened at the scale the announcements implied, and the walk-back in each case was quieter than the rollout. That pattern is worth watching as the IPO approaches. A company that keeps retreating is a company whose forward statements deserve extra scrutiny.
The adult content mode is not the story. The story is that OpenAI spent engineering resources on a feature that was never going to move the needle at its scale, could not resolve the safety and engineering problems required to ship it, and is now cleaning house before it goes public. That is a business and strategy story. Sonny's angle was right.