OpenAI is folding its reasoning model family into GPT-5, and that is not just a product decision — it is an enterprise consolidation move. The company confirmed this week that GPT-5 will power all of ChatGPT, replacing GPT-4o, o3, o4-mini, and o4-mini-high with a single model that users no longer have to select manually. Enterprise API customers, for now, can still call the older models. The line between consumer simplification and enterprise roadmap management just got harder to draw.
The timing is deliberate. OpenAI published an AI progress and recommendations paper this week alongside the model consolidation, laying out a capability timeline the company says is grounded in its research trajectory: systems capable of "very small discoveries" in 2026, and, by 2028 and beyond, what OpenAI says it is "pretty confident" will be more significant contributions to knowledge creation. The framing is hedged ("we could of course be wrong"), but the confidence level is notable coming from a company that has spent years being cautious about capability forecasts.
The paper also quantifies something OpenAI has been implying for months: the cost per unit of intelligence has been falling roughly 40x per year over the last few years. That is the economic engine that makes the consolidation possible. When intelligence gets cheap enough to commoditize internally, maintaining a menu of specialized models for a mass-market product becomes a UX liability, not a feature.
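To make the magnitude concrete, a 40x annual decline compounds quickly. The rate is OpenAI's reported figure; the multi-year extrapolation and the $1.00 starting cost below are illustrative arithmetic, not a forecast:

```python
# Illustrative compounding of OpenAI's reported ~40x/year decline in
# cost per unit of intelligence. The $1.00 starting cost is arbitrary.
start_cost = 1.00
rate = 40  # reported annual cost-reduction factor

for years in range(1, 4):
    factor = rate ** years
    cost = start_cost / factor
    print(f"after {years} year(s): {factor:,}x cheaper -> ${cost:.6f}")
# after 1 year(s): 40x cheaper -> $0.025000
# after 2 year(s): 1,600x cheaper -> $0.000625
# after 3 year(s): 64,000x cheaper -> $0.000016
```

Three years of that curve turns a dollar of compute into a few thousandths of a cent, which is the arithmetic behind treating intelligence as an internal commodity rather than a menu of priced tiers.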
The OpenAI Foundation, which we wrote about earlier this week, is the other half of this story. The Foundation published its own update this week, committing at least $1 billion over the next year toward life sciences, economic impact, AI resilience, and community programs, an early deployment of a longer-term $25 billion disease-curing pledge. The Foundation holds a 26% equity stake in OpenAI Group, worth approximately $130 billion at the company's current valuation. That stake funds the philanthropy; the philanthropy is meant to manage the consequences of the commercial speed.
The AI resilience pillar is where the model consolidation and the Foundation intersect most directly. The Foundation's update lists three initial focus areas: AI's impact on children and youth, biosecurity, and AI model safety. Those are not abstract concerns. They are the categories of harm that get harder to govern once a single model serves 400 million ChatGPT users. Jacob Trefethen, hired from Coefficient Giving, where he oversaw more than $500 million in grantmaking, leads the life sciences work. Wojciech Zaremba, an OpenAI co-founder, takes the AI resilience portfolio. The personnel choices signal that OpenAI wants people with delivery track records, not just researchers, running these programs.
The consolidation also reveals something about how OpenAI thinks about its product surface area. Sam Altman acknowledged in February that the ChatGPT model picker had become unwieldy: a dropdown offering multiple reasoning and non-reasoning models was creating confusion rather than empowerment. Removing the picker is a UX fix. But it is also an admission that the proliferating model family had outgrown the intended user base, and that the company misjudged how much differentiation the average user actually needed.
The counterpoint is the one OpenAI has already built: the API. Enterprise developers who have architected around GPT-4o or the o-series reasoning models are explicitly protected from the consumer-side sunset. A spokesperson told VentureBeat that older models will not be deprecated on the API side. That carve-out is not charity. It is a recognition that the enterprise customer base has made architectural decisions based on specific model behaviors, and reversing those decisions without a migration path would create the kind of friction that sends developers to competitors.
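The friction that carve-out avoids is easy to see in code. The sketch below is hypothetical: the model IDs come from the article, but the mapping table and `resolve_model` helper are illustrative, not an official OpenAI deprecation mechanism. It shows the kind of shim an enterprise team would need if pinned legacy models were sunset without a migration path:

```python
# Hypothetical migration shim for requests pinned to legacy model IDs.
# The mapping is illustrative, not an official deprecation table.
LEGACY_TO_CONSOLIDATED = {
    "gpt-4o": "gpt-5",
    "o3": "gpt-5",
    "o4-mini": "gpt-5",
    "o4-mini-high": "gpt-5",
}

def resolve_model(requested: str, allow_legacy: bool = True) -> str:
    """Return the model ID to call. With the API carve-out in place
    (allow_legacy=True), pinned legacy IDs pass through unchanged;
    without it, every legacy request is forced onto the new model."""
    if allow_legacy or requested not in LEGACY_TO_CONSOLIDATED:
        return requested
    return LEGACY_TO_CONSOLIDATED[requested]

print(resolve_model("gpt-4o"))                      # carve-out: unchanged
print(resolve_model("gpt-4o", allow_legacy=False))  # forced migration
```

The point of the sketch is that "architected around specific model behaviors" means model IDs are baked into configs, evals, and prompt tuning; flipping `allow_legacy` is trivial, but revalidating behavior under the new model is the real migration cost.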
The capability timeline in the progress paper is the part worth reading carefully. OpenAI's claim that it expects systems capable of small discoveries in 2026 and more significant ones by 2028 is more concrete than the usual lab boilerplate. The 40x cost curve underpins it: if intelligence keeps getting cheaper at that rate, the unit economics of scientific research change in ways that make drug discovery, materials science, and climate modeling qualitatively different exercises within a few years. The Foundation is positioned to capture some of that value for non-commercial purposes, or at least to fund the research infrastructure that identifies what the commercial side cannot easily own.
Whether that structure actually delivers on the resilience framing is an open question. Catherine Bracy of TechEquity told Vox, in coverage of the Foundation launch, that "they are never going to make a decision that is bad for the company." The Foundation's 26% stake and the commercial parent's burn rate make that tension structural, not incidental. The model consolidation this week is the commercial side moving fast. The Foundation is the answer to what happens around it.