The governance problem in agentic AI has a name. Its called OpenClaw.
A survey released Tuesday at RSAC 2026 by the Cloud Security Alliance and Aembit found that 68 percent of organizations cannot clearly distinguish AI agent activity from human activity — yet 73 percent expect those same agents to become mission-critical within a year. The gap between operational reliance and operational visibility is not a rounding error. Its a structural failure. And the story of how one open-source agent framework became one of the fastest-deployed pieces of infrastructure in recent memory is the clearest illustration of why that failure matters.
OpenClaw was built by Peter Steinberger, an Austrian developer formerly at Basecamp, as a framework for orchestrating AI agents through modular skills. By early 2026 it had roughly 250,000 GitHub stars — it crossed that milestone on March 3, 2026, per the project blog — and the count has since climbed to over 334,000 as of March 24, 2026, according to the GitHub API. It was running in over 2.2 million deployed instances. The distribution curve was one that security teams at Cisco, SecurityScorecard, and multiple independent researchers would later describe as unprecedented. Then the vulnerabilities arrived in clusters.
SecurityScorecards STRIKE team found more than 135,000 internet-exposed OpenClaw instances with default credentials still active — a figure from a broader scan than an earlier 30,000-instance count. The Koi Security audit identified 824 malicious skills out of 10,700 on ClawHub, OpenClaws community skill marketplace — up from 341 when the registry stood at 2,857 in early February; Bitdefenders independent scan put the number closer to 900, roughly one in five packages on the platform. Snyks ToxicSkills project found that 36 percent of all ClawHub skills contained detectable prompt injection — a technique where an attacker embeds instructions inside data the agent processes, effectively hijacking its behavior. Seven-point-one percent exposed credentials in plain text.
The headline vulnerability was CVE-2026-25253, a one-click remote code execution flaw through WebSocket hijacking discovered by Mav Levin of DepthFirst in late January 2026. Ciscos AI Threat and Security Research team ran its own analysis on OpenClaws top-ranked skill, What Would Elon Do? — nine security findings surfaced, including two critical and five high-severity issues. Groundbreaking from a capability perspective, Cisco assessed, but an absolute nightmare from a security perspective.
The compounding incidents did not stop at the framework itself. A misconfigured Moltbook database — Moltbook runs on top of OpenClaw — exposed 1.5 million API tokens and 35,000 email addresses with no access controls in place. Wiz researchers discovered the breach in early February 2026. Anyone who found the database could read private messages between agents and take control of any agent on the platform. Jamieson OReilly, a researcher who has documented OpenClaws security posture extensively, has found Anthropic API keys, Telegram bot tokens, Slack OAuth credentials, and complete conversation histories pulled from exposed instances — a cross-section of the credential stack that agents accumulate by design.
Simon Willison, the researcher and developer behind the Datasette project, has described what he calls the lethal trifecta for AI agents: access to private data, exposure to untrusted content, and the ability to take actions externally. OpenClaws architecture ticked every box.
The pattern is not a vendor problem. The CSA/Aembit survey, which drew on 228 respondents in January 2026, found that 85 percent of organizations already have AI agents in production environments. Only 18 percent say their current identity and access management systems can manage agent identities effectively. Only 21 percent maintain a real-time registry or inventory of the agents running in their environments. Seventy-four percent say their agents receive more access than necessary; 79 percent say agents create access pathways that are difficult to monitor. These are not edge cases. They describe the standard deployment.
Jonathan Armstrong, a partner at Punter Southall Law who advises firms on technology risk, has watched the pattern unfold inside enterprises. Nearly always, nobody at the top of the organization, nobody in the CISOs team, nobody in the compliance team, nobody in legal team knows that its happening, he told GovInfo Security. Shadow AI experimentation — agents spun up by individual teams without central visibility — is the operative mode, not the exception.
The governance gap that the CSA data documents is not primarily a security problem. It is an identity problem. When an agent can read your email, execute code in your cloud environment, and send messages on your behalf — and your organization cannot tell whether that activity originated from a person or a process — the access control model has a fundamental blind spot. A prompt injection attack or a malicious skill does not just compromise a system. It compromises an identity.
NIST launched its AI Agent Standards Initiative on February 18, 2026, four days after Steinberger announced on Valentines Day that OpenClaw was joining OpenAI. The initiative, led by the National Cybersecurity Center of Excellence, published a request for information on securing AI agent systems with comments due March 9. Whether the resulting guidance carries enough weight to shift deployment behavior before the next OpenClaw-scale incident is the open question.
Steinbergers move to OpenAI on February 14 closed one chapter of the OpenClaw story. Meta acquired Moltbook and brought its creator into Meta Superintelligence Labs. The community has begun organizing around skill signing, credential hygiene defaults, and better instance monitoring. ClawHub has removed confirmed malicious packages. But the underlying dynamic — frameworks deployed faster than governance frameworks can form — has not changed.
The numbers from the CSA survey describe the scope of the problem. The OpenClaw case describes the mechanism. Agent frameworks are being embedded into production environments at scale before the organizational capacity to observe, attribute, and control what those agents do has been built. The infrastructure is there. The governance is not.
The Cloud Security Alliance and Aembit published the survey at RSAC 2026 on March 24, 2026.