Okta for AI Agents launches April 30 — but the harder problem is the one already in production.
The numbers from the Gravitee State of AI Agent Security 2026 report tell a familiar story about new compute paradigms: 88 percent of organizations reported suspected or confirmed AI agent security incidents in the past year. But only 21.9 percent treat AI agents as independent, identity-bearing entities. That gap — between what agents are doing and how enterprises are tracking them — is the actual story Okta is trying to address with its new platform.
"Okta for AI Agents," announced as generally available April 30, positions the company as an identity provider for the agentic enterprise. The core pitch: if you're not treating your AI agents like employees — registering them, scoping their access, monitoring what they do — you're flying blind. Okta President of Products and Technology Ric Smith put it plainly: agents are evolving faster than any software before them, and traditional security models weren't built for that pace.
The platform organizes its answer around three questions every enterprise should be asking: where are my agents, what can they connect to, and what can they do? It's extending the Okta Integration Network — its catalog of 8,200-plus integrations — to include dedicated support for AI agent platforms including Boomi, DataRobot, and Google Vertex AI. Boomi CISO Carl Siva called the shift to agent-as-identity "a fundamental change in how we think about authorization and access," which is corporate-speak for: we didn't have a framework for this before.
The Gravitee data suggests that gap is wide. Surveying more than 900 executives and technical practitioners in February 2026, researchers found that while 80.9 percent of technical teams have moved past planning into active testing or production with AI agents, 45.6 percent still rely on shared API keys for agent-to-agent authentication. Another 27.2 percent have "reverted to custom, hardcoded logic" for authorization. On average, only 47.1 percent of an organization's AI agents are actively monitored or secured. More than half operate without any security oversight or logging. Healthcare organizations reported the highest incident rate: 92.7 percent had confirmed or suspected incidents.
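The shared-key finding is worth dwelling on, because it explains the monitoring gap. When every agent presents the same credential, an audit log can record that an action happened but not which agent took it. A minimal sketch of the anti-pattern, with hypothetical names throughout (none of this reflects any surveyed organization's actual code):

```python
# Hypothetical sketch of the attribution gap; no real vendor APIs involved.
AUDIT_LOG: list[dict] = []
SHARED_KEY = "sk-prod-0001"          # one key shared by every agent

def act_with_shared_key(action: str) -> None:
    """Anti-pattern: the log can't say which agent acted."""
    AUDIT_LOG.append({"credential": SHARED_KEY, "action": action})

def act_with_agent_identity(agent_id: str, action: str) -> None:
    """Per-agent credential: every action is attributable and revocable."""
    AUDIT_LOG.append({"agent": agent_id, "action": action})

act_with_shared_key("export_customer_table")                  # actor unknown
act_with_agent_identity("billing-agent-07", "export_customer_table")
print(AUDIT_LOG)
```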
What Okta is proposing — treating agents as first-class identities with registered accounts, scoped permissions, and audit trails — isn't technically novel. It's applying the same model used for human users to non-human actors. The more interesting question is whether that model holds when agents start spawning other agents. The Gravitee data already shows 25.5 percent of deployed agents can create and task other agents. At that point, identity chains get complicated fast.
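Here is what agent-as-identity with spawning looks like in miniature. The sketch below is a hypothetical illustration of scope attenuation, not Okta's data model: the rule that keeps delegation chains tractable is that a spawned agent may hold at most a subset of its parent's permissions, and every identity traces back to an accountable human.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AgentIdentity:
    """Hypothetical first-class identity record for an AI agent."""
    agent_id: str
    owner: str                              # the accountable human or service
    scopes: frozenset                       # what this agent may do
    parent: "AgentIdentity | None" = None   # who spawned it, if anyone

    def spawn(self, agent_id: str, scopes: set) -> "AgentIdentity":
        # Attenuation: a child may hold at most its parent's scopes.
        if not scopes <= self.scopes:
            raise PermissionError(f"{agent_id} requests scopes beyond parent's")
        return AgentIdentity(agent_id, self.owner, frozenset(scopes), parent=self)

    def chain(self) -> list:
        """Audit trail from this agent back to its root."""
        node, path = self, []
        while node:
            path.append(node.agent_id)
            node = node.parent
        return path

root = AgentIdentity("support-agent", "alice@example.com",
                     frozenset({"tickets:read", "tickets:write", "crm:read"}))
triage = root.spawn("triage-sub-agent", {"tickets:read"})     # ok: subset
print(triage.chain())              # ['triage-sub-agent', 'support-agent']

try:
    root.spawn("rogue-agent", {"payments:write"})   # exceeds parent's scopes
except PermissionError as err:
    print(err)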
The technical foundation Okta is building on is the IETF Identity Assertion Authorization Grant (IAAG) draft, authored by Aaron Parecki of Okta and Brian Campbell of Ping Identity and dated March 2, 2026. IAAG enables an application to use an identity assertion to obtain an access token for a third-party API via a common enterprise identity provider. In plainer terms: agents get credentials scoped to what they're allowed to do, and those credentials travel with them across systems. The draft expires September 3, 2026, which means the specification won't be finalized before Okta's GA date. That's worth noting: Okta is launching a product on an unratified specification.
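For the protocol-minded, IAAG profiles OAuth 2.0 Token Exchange (RFC 8693) in two steps: trade an identity assertion at the enterprise IdP for a signed authorization grant, then redeem that grant at the third-party app's token endpoint. The sketch below is a simplified reading of the draft; the URLs are placeholders and the token-type URIs come from a document that can still change before ratification.

```python
import requests

IDP_TOKEN_URL = "https://idp.example.com/oauth2/token"       # placeholder
RESOURCE_TOKEN_URL = "https://thirdparty.example.com/token"  # placeholder

def iaag_flow(id_token: str, resource: str) -> str:
    # Step 1: exchange the user's identity assertion (an OpenID Connect
    # ID token) at the enterprise IdP for an identity-assertion JWT
    # authorization grant, via RFC 8693 token exchange.
    r = requests.post(IDP_TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
        "subject_token": id_token,
        "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
        "requested_token_type": "urn:ietf:params:oauth:token-type:id-jag",
        "resource": resource,  # the third-party API the agent needs
    })
    r.raise_for_status()
    jwt_grant = r.json()["access_token"]

    # Step 2: redeem the grant at the third-party app's token endpoint
    # using the standard JWT bearer grant (RFC 7523). The resulting
    # access token is scoped to this agent, this user, this resource.
    r = requests.post(RESOURCE_TOKEN_URL, data={
        "grant_type": "urn:ietf:params:oauth:grant-type:jwt-bearer",
        "assertion": jwt_grant,
    })
    r.raise_for_status()
    return r.json()["access_token"]
```

The design detail that matters, if the draft holds: the third-party API never sees the user's primary credentials. It trusts only the identity provider's signed grant, which is what makes the resulting access both scoped and revocable.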
Microsoft is already operating at scale on this problem. Alex Simons, CVP of identity at Microsoft, told VentureBeat that Microsoft Entra ID handles 10,000 AI agents in a single pilot program while processing 8 billion authentications daily. CrowdStrike tracks 15 billion AI-related events daily across customer environments. These aren't small numbers; the infrastructure is already under load.
Okta's press release notably named OpenClaw as an example of a superagent that operates directly on users' machines — executing terminal commands, accessing the file system, transferring data between applications. The capability description is accurate: that's exactly what an agent framework with filesystem and terminal access does. The naming is also strategic. By citing a specific, well-known agent platform in its threat model, Okta is signaling that it has done the technical homework — this isn't a conceptual product.
The Gravitee figure that should land hardest: 82 percent of executives feel confident their existing policies protect them from unauthorized agent actions. That confidence appears largely unearned. The deployment numbers are real, and agents are in production, but the security infrastructure to govern them is, in most organizations, not keeping pace.
Okta's framing is that the identity layer is the control plane for agentic AI. Whether that framing becomes a standard or just a product differentiator depends on whether enterprises actually move from "we should be doing this" to "we have a system for this." The GA date gives Okta something concrete to point at. The IETF draft gives it a future. The gap between the two is where the story lives.