OpenAI published the rate card before it published the price
Enterprise teams can now build shared AI agents that work autonomously across their organizations. The catch: nobody knows what it costs.

OpenAI shipped workspace agents on Tuesday, and one detail from the announcement tells you almost everything you need to know about how the company is approaching enterprise AI right now: it published the rate card before it published the price.
The new feature, rolling out in research preview to ChatGPT Business, Enterprise, Edu, and Teachers plans, lets teams build shared agents that run in the cloud, work across multiple steps, connect to tools, and keep going when no one is watching. It is the team-scale evolution of custom GPTs, and OpenAI is positioning it as the backbone of an AI-powered organization: sales agents that qualify leads and draft emails, accounting agents that close the books, IT agents that process software requests and file tickets.
The pricing, such as it is, arrives May 6. Until then, workspace agents are free. After that, credit-based pricing kicks in. The rate card shows what a million tokens costs in credits, but credits themselves have no published dollar value. An enterprise buyer reading the documentation learns exactly how fast credits leave their account (GPT-5.4: 62.50 credits per million input tokens, 375 per million output; ChatGPT Agent feature: 30 credits per message), but cannot calculate what any of this costs in actual money without a separate conversation with a sales rep.
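To make the gap concrete, here is a back-of-the-envelope sketch of what a buyer can actually compute from the rate card. The credit rates are the ones quoted above; the function name and the example workload are illustrative, not from OpenAI's documentation. The punchline is in the return value: the math bottoms out in credits, because no per-credit dollar price has been published.

```python
# Credit rates quoted in OpenAI's rate card (per the figures above).
# A credit's dollar value is NOT public, so this can only ever
# estimate credits, never dollars.
RATES = {
    "gpt-5.4": {"input": 62.50, "output": 375.0},  # credits per 1M tokens
}
AGENT_MESSAGE_CREDITS = 30  # ChatGPT Agent feature: flat credits per message


def estimate_credits(model: str, input_tokens: int, output_tokens: int,
                     agent_messages: int = 0) -> float:
    """Credits consumed by one workload (illustrative helper, not an API)."""
    r = RATES[model]
    token_credits = (input_tokens / 1_000_000) * r["input"] \
                  + (output_tokens / 1_000_000) * r["output"]
    return token_credits + agent_messages * AGENT_MESSAGE_CREDITS


# A hypothetical agent run: 2M input tokens, 500k output, 10 agent messages.
credits = estimate_credits("gpt-5.4", 2_000_000, 500_000, agent_messages=10)
print(credits)  # 2 * 62.5 + 0.5 * 375 + 10 * 30 = 612.5 credits
```

That 612.5 is a precise answer to the wrong question. Converting it to a budget line requires a per-credit dollar figure that, as of the announcement, only a sales rep can supply.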
This is not an accident. OpenAI has built a consumer product empire on simplicity, one $20/month subscription at a time. Enterprise pricing is messier, and the company appears to be feeling its way toward a number rather than committing to one. "On average, Codex costs $100-$200 per developer per month," the help center notes, with large variance depending on the model used, the number of instances, automations, and usage of fast mode. That range covers a lot of ground.
The competitive timing is notable. Anthropic made Claude Cowork generally available for enterprise on April 9, dropping the research preview label and positioning itself squarely in the same workspace-agent market. Google, at its Cloud Next event this same week, announced enterprise agent plays of its own. Every major lab is converging on the same idea at the same moment: not just an AI assistant that answers your question, but an AI coworker that does your job.
OpenAI's angle is the full-stack argument. The company that builds the models, the interface, and the infrastructure believes it can offer something competitors who only do one piece cannot: coherence. That claim is only as strong as the integrations OpenAI has built to actually make good on it, and those remain partially in progress.
Workspace agents are a credible product in a market that is suddenly very crowded. The autonomous multi-step capability is genuinely new for OpenAI's team-facing tier. The Slack integration means they show up where work already happens rather than demanding workers come to a new surface. And the compliance API gives enterprise IT something it desperately wants: visibility into what the AI is doing and what data it is touching.
The OpenClaw founder Peter Steinberger is now at OpenAI, adding a small but pointed irony to the announcement. Steinberger spent years building a product that tried to bring AI capabilities to team workflows. Now he is inside the company most directly competing for that same territory.
What OpenAI has not yet done is tell enterprise buyers what they are actually paying. The rate card is not a price list. It is a menu with no numbers in the price column. Companies making six- and seven-figure annual commitments to AI infrastructure will get their first real invoice in six weeks. By then, the agents will have been running long enough to accumulate costs no one budgeted for.
That is either a bold pricing experiment or a sales strategy that front-loads adoption before the sticker shock arrives. The answer is probably both.
https://openai.com/index/introducing-workspace-agents-in-chatgpt/
https://openai.com/index/next-phase-of-enterprise-ai/
https://9to5mac.com/2026/04/22/openai-updates-chatgpt-with-codex-powered-workspace-agents-for-teams/
https://9to5mac.com/2026/04/09/anthropic-scales-up-with-enterprise-features-for-claude-cowork-and-managed-agents/
https://help.openai.com/en/articles/20001106-codex-rate-card
https://help.openai.com/en/articles/11481834-chatgpt-rate-card





