On March 30, OpenAI published a plugin on GitHub that runs inside Anthropic's Claude Code. The next day, the source code for Claude Code leaked across the internet. The timing is either perfect or catastrophic, depending on your vantage point.
The plugin, openai/codex-plugin-cc, installs into Claude Code and gives developers six slash commands for calling OpenAI's Codex from within their existing workflow: standard review, adversarial review, task delegation, background job management, status checks, and result retrieval. OpenAI published it under an Apache 2.0 license. Within a week it had roughly 3,700 GitHub stars. The code is real. The plugin is shipped. And the adversarial review command is the most interesting thing in it.
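A session using the six commands might look something like the sketch below. Only `/codex:review` is quoted verbatim later in this piece; the other five command names are illustrative placeholders mapped onto the capabilities listed above, not the plugin's actual interface.

```
# Hypothetical session -- command names other than /codex:review are guesses.
/codex:review            # standard review of the current changes
/codex:adversarial       # adversarial second-opinion review before shipping
/codex:delegate <task>   # hand a task off to Codex as a subagent
/codex:jobs              # manage background jobs
/codex:status <id>       # check on a running job
/codex:results <id>      # retrieve a finished job's output
```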
Called the Review Gate, the adversarial review feature can be configured to automatically run a Codex review whenever Claude Code produces output in a session. If Codex finds issues, it blocks the output from reaching the developer and forces Claude to address the problems first. One AI reviewing another AI's output before it reaches the human. OpenAI's own documentation warns this can create long-running loops and drain usage limits quickly. The feature ships disabled by default. Nobody has good intuitions about when it helps and when it just burns API credits, but the pattern it represents is new: the interface layer between model and developer is becoming a site of active arbitration, not just passive display.
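The loop OpenAI warns about can be sketched abstractly. Nothing below comes from the plugin's source; the function names (`generate`, `review`), the feedback shape, and the iteration cap are all assumptions about how such an arbitration loop might bound itself rather than run forever:

```python
# Hypothetical sketch of a review-gate loop: a generator model produces
# output, a reviewer model critiques it, and the output is withheld from
# the user until the reviewer passes it -- or an iteration cap is hit.
# All names here are illustrative; none come from the actual plugin.

def review_gate(generate, review, prompt, max_rounds=3):
    """Run generate/review rounds; return (output, accepted, rounds_used)."""
    feedback = None
    for round_num in range(1, max_rounds + 1):
        output = generate(prompt, feedback)
        issues = review(output)   # reviewer returns a list of found issues
        if not issues:
            return output, True, round_num
        feedback = issues         # feed the issues back into the next round
    # Cap reached: surface the last output anyway, flagged as not accepted,
    # which is what prevents the unbounded loops the documentation warns about.
    return output, False, max_rounds


# Stub models to demonstrate the control flow: this "generator" fixes one
# issue per round, and the "reviewer" flags every occurrence of "bug".
def stub_generate(prompt, feedback):
    bugs = 2 if feedback is None else len(feedback) - 1
    return prompt + " bug" * bugs

def stub_review(output):
    return ["found: bug"] * output.count("bug")

result, accepted, rounds = review_gate(stub_generate, stub_review, "patch")
```

With these stubs the gate converges after three rounds, but swapping in a reviewer that always finds something shows why the cap matters: without it, the two models would trade output and objections indefinitely, burning credits the whole time.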
The interoperability signal
The market for AI coding tools has been competing on lock-in: more integrations, better context windows, tighter IDE support. The implicit bet was that developers would pick one assistant and stay. That bet hasn't fully paid off. The market has remained pluralistic. Developers use Claude and Codex the way they use VS Code and Neovim — not as mutually exclusive choices but as tools for different contexts.
OpenAI's plugin accepts this reality and builds on top of it. Rather than asking developers to migrate to Codex, OpenAI brings Codex to developers already using Claude Code. Every time a developer runs /codex:review on a pull request, OpenAI's infrastructure handles the request. The plugin reuses the local Codex installation and credentials already on the machine, so there's no separate auth flow. It just works, as long as you have a ChatGPT subscription or an API key.
The technical design is worth noting: the plugin delegates through the local Codex CLI and reuses the same MCP server setup, environment variables, and project-level config that Codex uses directly. This is shallow integration — OpenAI isn't rewriting Anthropic's codebase or asking for API access to Claude's internals. It's wiring into a surface that Anthropic deliberately exposed. The fact that a competitor could publish an official, first-party integration there in under a week is a testament to how openly Anthropic built the platform. Claude Code has a marketplace and an extensible command system. The success of that architecture creates a distribution opportunity for anyone who can serve its users.
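The shallow-integration pattern is simple enough to sketch. This is not the plugin's code — the `codex_bin` default and the argument shape are assumptions — but it shows the general move: shell out to whatever CLI is already installed, inheriting the user's environment and therefore their existing credentials and config, with no separate auth flow.

```python
import os
import subprocess

# Illustrative sketch of "shallow integration": delegate to a locally
# installed CLI, reusing the caller's environment variables (and hence any
# credentials and config already set up) instead of authenticating anew.
# The binary name and argument shape are assumptions, not the plugin's API.

def delegate(prompt: str, codex_bin: str = "codex") -> str:
    """Run the local CLI with the caller's environment and return stdout."""
    result = subprocess.run(
        [codex_bin, prompt],
        capture_output=True,
        text=True,
        env=os.environ.copy(),  # inherit existing env vars / credentials
        check=True,             # surface CLI failures as exceptions
    )
    return result.stdout.strip()
```

Substituting a harmless binary demonstrates the control flow without the real CLI installed: `delegate("hello", codex_bin="echo")` returns `"hello"`.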
The three primary use cases OpenAI lists are mundane but revealing. A standard review is what a linter does. An adversarial review is a second opinion before shipping. Task delegation is offloading work to a subagent. None of these require deep platform access. They're exactly the kind of surface-level but genuinely useful integration that plugin architectures are designed to expose. If OpenAI can build an official plugin for Claude Code, other vendors can too. The pattern — shallow integration through a marketplace, reusing local credentials — is reproducible. The precedent it sets is that rival platforms don't need to be exclusive.
The realignment behind it
The plugin arrives during a period of significant restructuring at OpenAI. The company has been cutting projects and consolidating tools after a stretch where it launched Sora, Atlas, a hardware device, and various other initiatives simultaneously. The Verge reported in March, citing Wall Street Journal reporting, that Fidji Simo, OpenAI's CEO of Applications, explicitly called Anthropic's Claude Code success an internal wake-up call in an all-hands meeting. She told employees the company was "distracted by side quests" and could not afford to miss what she described as a defining moment. The company is now merging ChatGPT, Codex, and its Atlas browser into a single desktop application. Greg Brockman is reportedly leading the transition.
This is the strategic context the plugin sits inside. OpenAI isn't just adding features — it's trying to hold developers who chose Anthropic's tooling. The plugin is a shorter-term move: get Codex in front of developers currently committed to Claude Code, while the superapp gets built. The question is whether that strategy works in the other direction. A developer who installs the Codex plugin and gets comfortable running adversarial reviews as part of their Claude Code workflow now has a relationship with both platforms. That could make them more likely to try Codex directly when the superapp launches — or it could mean they get what they need from the plugin and never bother switching.
The broader pattern the plugin illustrates is one of the more significant structural shifts in the AI tooling space. The interaction layer — the terminal, the editor, the agent loop — is where the actual user relationship lives. Model capabilities can be replicated by competitors with enough data and compute. But the workflow a developer has already integrated into their process, the scripts and aliases and habits, the way Claude Code has become part of how a team thinks about code review — that is not in the weights. That's in the interface. Whoever controls the interface layer controls what can't be easily migrated.
The Review Gate is the sharp end of this dynamic. It's not a linter. It's a checkpoint. And the fact that OpenAI shipped it as a feature — the adversarial review use case pushed to its logical endpoint — suggests someone inside OpenAI believes the interface layer is where the competitive moat will be built, not in the model itself.
What to watch
The plugin's utility depends heavily on how often developers find Codex's perspective genuinely complementary to Claude's — not just a marginally different take, but something that catches real bugs or suggests meaningfully different approaches. The 3,700 GitHub stars in under a week suggest real developer interest. Whether that translates to sustained usage or just reflects novelty is the open question.
The superapp launch, expected later this year according to the WSJ reporting, will be the more significant competitive move. If it delivers a unified coding environment that matches Claude Code's capability, the plugin strategy becomes a footnote — a short-term distribution tactic that the parent product eventually renders unnecessary. If the superapp is delayed or underwhelming, the plugin becomes OpenAI's primary foothold in a developer base it is currently losing.
The other thing worth watching is whether other vendors follow the integration pattern OpenAI just demonstrated. If Codex can live inside Claude Code, can Claude live inside Codex? The plugin architecture Anthropic built is deliberately open. The question is whether competitors decide it's worth the investment to integrate with a rival's platform rather than build their own.
OpenAI has published the plugin at github.com/openai/codex-plugin-cc under an Apache 2.0 license. Installation requires Claude Code, a ChatGPT subscription or OpenAI API key, and Node.js 18.18 or later.