The dependency layer is where the attack surface lives, and it is not getting smaller. Nine OpenClaw CVEs dropped in four days last month. Belgium's cybersecurity agency issued a Patch Immediately advisory for seven more. Microsoft's security team flagged the same class of flaw six weeks earlier, calling it the defining risk for the software layer between AI models and the applications built on them. One April patch addresses part of it.
OpenClaw patched a flaw this week that illustrates a coming wave of AI security failures. The core problem: when AI agents (software that decides for itself which tools to use and in what order) pull in plugins at runtime, the old security model breaks down. Traditionally, a company secures the code it writes. Now an AI agent can install a tool from outside the company, chain it with built-in system commands, and let output from that external plugin flow into trusted operations with no check on where it came from.
This is the novel attack surface that Microsoft's February security analysis named explicitly: the software layer between AI models and production applications is becoming the primary target, and as agents gain the ability to install and compose tools at runtime, the question of which tool's output flows into which tool's input stops being an architecture decision and becomes a security boundary. The nine OpenClaw security advisories that dropped between March 18 and March 21, including a 9.9 critical-severity flaw that let an authenticated user become full admin by manipulating WebSocket scope self-declaration, plus six high-severity follow-ons, are what that boundary looks like when it tears open. Belgium's Centre for Cybersecurity issued a Patch Immediately advisory for seven more CVEs in the Nextcloud Talk plugin alone, all scoring 9.2–9.4. The jgamblin/OpenClawCVEs tracker now lists 156 total security advisories, with 128 still awaiting CVE assignment.
The April patch is subtler than those. The release notes call it a bug fix. It closes the possibility that results from a user-installed plugin could flow into a trusted built-in tool without checking which domain they came from — the same class of boundary failure that drove the March CVE flood, just at a different layer. One data point. The pattern is the story.
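A minimal sketch of that kind of cross-origin check, with hypothetical names throughout (this is not OpenClaw's actual internals): tag every tool result with the domain it came from, and refuse to hand untrusted results to a trusted built-in tool.

```python
from dataclasses import dataclass

# All names here are hypothetical illustrations, not OpenClaw's API.
@dataclass
class ToolResult:
    value: str
    origin: str  # where the result came from: "builtin" or a plugin's domain

TRUSTED_ORIGINS = {"builtin"}

def guard(result: ToolResult) -> str:
    """Block untrusted plugin output from flowing into a trusted built-in tool."""
    if result.origin not in TRUSTED_ORIGINS:
        raise PermissionError(f"untrusted origin: {result.origin}")
    return result.value

def run_builtin(command: str) -> str:
    """Stand-in for a trusted built-in tool."""
    return f"ran: {command}"

# Pre-patch failure mode: plugin output reaches the built-in tool unchecked.
# With the guard, the untrusted result is rejected before the call is made.
plugin_out = ToolResult("rm -rf ~/.config", origin="community-plugin.example")
builtin_out = ToolResult("ls /tmp", origin="builtin")
```

The point of the sketch is the choke point: one function every tool result must pass through before it can become another tool's input, so provenance is checked by construction rather than by convention.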
The layer between the model and the application is where security boundaries get tested. OpenClaw closes the gaps as it finds them. The next one is probably already there.
Also in Thursday's beta: a new Gemini 3.1 Flash text-to-speech plugin with expressive inline tags like [excitedly] or [whispers], and a bundled image-understanding model promoted to Claude Opus 4.7, which launched Thursday with a one-million-token context window. The Opus 4.7 upgrade means any OpenClaw deployment using vision — document parsing, screenshot analysis, diagram understanding — gets the improvement automatically, without developer reconfiguration. The TTS plugin removes a friction point that has been forcing voice-agent developers to build their own workarounds.
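The inline-tag pattern is simple enough to sketch. Here is a hypothetical parser (not the plugin's actual implementation; the real tag grammar may differ) that splits a tagged script into (style, text) segments before synthesis:

```python
import re

# Matches an inline expressive tag like [excitedly] or [whispers],
# plus any whitespace after it. Hypothetical; illustrative only.
TAG = re.compile(r"\[(\w+)\]\s*")

def segments(script: str) -> list[tuple[str, str]]:
    """Split a tagged script into (style, text) pairs.

    re.split with a capture group returns
    [leading_text, tag1, text1, tag2, text2, ...]; any text before
    the first tag is dropped in this sketch.
    """
    parts = TAG.split(script)
    return [(parts[i], parts[i + 1].strip()) for i in range(1, len(parts), 2)]
```

For example, `segments("[excitedly] Nine CVEs! [whispers] Patch now.")` yields one styled segment per tag, which is the shape a synthesis backend would consume.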
The OpenAI Codex and GPT-5.4 Cloudflare regression, introduced April 14 and still unfixed in this beta, is a reminder that the layer also fails in non-security ways. For developers running code-understanding agents on OpenClaw, Cloudflare bot-mitigation pages served where model responses should be have been a production incident for two days. The dependency graph is visible when it breaks: when OpenClaw routes your agent to a model and that route goes dark, your agent goes dark with it. The TTS plugin is what developers will mention at the next meetup. The unfixed route is what they will deal with at 2 a.m.