The people who write the policies governing how AI agents behave are acquiring a form of enterprise authority that used to belong to human managers, and almost no organization is acknowledging it openly.
That redistribution is the thing hiding inside the governance gap. The urgency is in the data: a global survey of 300 enterprise leaders across security, fraud, identity, and AI functions, published by Arkose Labs, found that 97 percent expect a material AI-agent security or fraud incident within the next year, and nearly half expect one within six months. Yet only 32 percent report established guardrails, while 88 percent say their broader governance programs are still being implemented or are not yet complete (2026 Agentic AI Security Report). The gap between those two numbers is the governance problem. A refresh of the Forrester AEGIS framework, published just three days ago — a structured six-domain model for securing autonomous AI across governance, identity, data, application security, threat operations, and Zero Trust — is the vendor community's answer to exactly that problem (Forrester AEGIS Framework). Vendors including F5 and BigID have spent the past three months publishing product-framework mappings against it (F5 Blog; BigID Blog).
The uncomfortable implication is that the security team configuring an AEGIS-aligned control plane may have more effective authority over enterprise workflows than the manager whose decisions the control plane encodes. This is not necessarily wrong: distributed policy enforcement at the agent layer may be exactly what Zero Trust demands at scale. But it is a redistribution of decision-making power that organizations adopting agentic AI fastest are rarely explicitly acknowledging. Vendors selling guardrail products are not leading their marketing with it.
The AEGIS framework introduces principles it calls least agency, continuous assurance, and explainable outcomes. Least agency is the reframe that matters: where traditional security constrains what a human user can access, agentic security must constrain what an autonomous system can do. That is a fundamentally different governance problem. Agents are relentless; their persistence is effectively unlimited. They do not get tired, do not second-guess, and do not stop when the business case changes. The guardrail configuration is not a technical detail. It is a governance question.
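To make the least-agency idea concrete, here is a minimal sketch of what a deny-by-default action guardrail might look like in code. This is an illustration of the principle, not an implementation from the AEGIS framework; all class and action names (`LeastAgencyGuard`, `refund`, the per-action ceiling) are hypothetical.

```python
# Illustrative least-agency guardrail: an agent may perform only
# actions that are explicitly allowlisted, within explicit limits.
# Everything not granted is denied -- the inverse of access control
# models that start from what a user can see.
from dataclasses import dataclass


@dataclass(frozen=True)
class ActionPolicy:
    action: str          # hypothetical action name, e.g. "refund"
    max_amount: float    # per-invocation ceiling for that action


class LeastAgencyGuard:
    def __init__(self, policies: list[ActionPolicy]) -> None:
        self._policies = {p.action: p for p in policies}

    def authorize(self, action: str, amount: float = 0.0) -> bool:
        """Allow only explicitly granted actions within their limits."""
        policy = self._policies.get(action)
        if policy is None:
            return False  # deny by default: no policy, no action
        return amount <= policy.max_amount


# The agent can issue small refunds, and nothing else.
guard = LeastAgencyGuard([ActionPolicy("refund", max_amount=100.0)])
```

The design choice the sketch highlights is the default: a manager delegating to a human can rely on judgment to fill policy gaps, but an agent will exercise every permission it has, so the safe default for an autonomous system is refusal.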
AEGIS recommends a phased implementation path: governance and risk functions first, zero to three months; identity and data security modernization, three to six; agent lifecycle controls, six to twelve; Zero Trust maturity, twelve months and beyond. The sequence is sensible. In most enterprises, it runs exactly backwards. A Cyware survey of security professionals at RSAC 2026 found 77 percent prefer AI-driven tools that operate with analyst oversight, but only 32 percent have established guardrails in place (Cyware RSAC 2026 Survey). Agents are being deployed into production now, often by business lines chasing efficiency gains, before foundational controls are in place. The twelve-month path to Zero Trust maturity means that for at least the next year, most enterprise agentic deployments will be running with controls that are, by Forrester's own analysis, inadequate for the threat model.
The framework was written for exactly this window. The Arkose data suggests the window is already open, and most enterprises are still deciding which lock to buy.
F5, which acquired AI testing firm CalypsoAI, maps its products to the framework's Application Security domain. BigID published a direct product-framework mapping in February. CybersecurityAsia covered the framework's Asian rollout in August, quoting Forrester analyst Cody Scott on the need for continuous assurance rather than periodic audits (CybersecurityAsia). Independent CISO guidance from Conifers AI also flags the phased implementation timeline as the critical risk point for enterprise deployments (Conifers AI Blog). Whether enterprises are moving fast enough to use the framework before the incident that 97 percent of leaders predict actually arrives is the question nobody has answered yet.