OpenAI on Thursday indefinitely shelved plans to release an erotic chatbot, according to people familiar with the decision, as the company moves toward a public listing. The product, internally called adult mode, would have allowed ChatGPT users to engage in text-based conversations with sexual themes. It is at least the second controversial product OpenAI has pulled in recent months ahead of an IPO.
The governance breakdown here is worth sitting with. OpenAI's internal wellbeing advisory board, an eight-member body, voted unanimously against the feature in January, according to Winbuzzer. One member warned that combining erotic content with ChatGPT's existing emotional bonding capabilities could produce what they called a sexy suicide coach. OpenAI told the board it was proceeding anyway. The product was delayed twice, most recently in early March, with no confirmed launch date. Then Sora was canceled. Then the IPO pressure intensified. Then the whole thing went away quietly, as first reported by the Financial Times.
The age verification problem is structural, not incidental. OpenAI projected roughly 100 million underage weekly active users on ChatGPT, and the company's own age detection system misidentifies minors as adults approximately 12 percent of the time, according to Winbuzzer's reporting on the internal deliberations. At those numbers, a launch-day adult mode could have meant on the order of 12 million minors accessing sexual content on a platform specifically designed to form emotional bonds with its users, a system that would only get better at modeling those bonds over time. Nobody published a plan for what happens when that system has five years of data on someone who signed up at 14.
Ryan Beiermeister, the product policy executive who raised internal opposition to the rollout, was fired in January, according to CNET. OpenAI declined to comment on personnel matters. Beiermeister's departure came after the unanimous advisory board vote and before the product was finally shelved. The company now has to explain to regulators and institutional investors why its safety processes exist and when they actually carry weight. That is a harder case to make after the person who exercised those processes was removed.
OpenAI CFO Sarah Friar said in a Wired interview that the company needs to be ready to be a public company. The context was Sora's discontinuation, another product killed for business reasons with a genuine safety case behind the decision. What the shelving of adult mode reveals is that the IPO timeline now functions as a de facto safety review when every other review mechanism has failed. The board spoke. The board was ignored. The market spoke. The product went away.
OpenAI also discontinued its Sora text-to-video model, citing compute constraints and IPO preparation, according to Reuters. The company did not respond to a request for comment on the adult mode decision.
The structural question this episode leaves open is whether OpenAI can operate safety processes with genuine authority, or whether those processes exist to provide cover when products are popular and get overruled when they are not. The company is months away from one of the largest technology IPOs in history. The next product decision that creates tension between commercial interests and safety review will be the real test. The last one already failed it.
Related: OpenAI is not the only lab rethinking product boundaries ahead of public markets. type0 has covered the broader pattern of frontier AI companies simplifying their product lines and cutting features with regulatory exposure as IPO processes intensify.