Ping Identity and KuppingerCole Analysts released research Tuesday describing a specific way AI agents are breaking enterprise authorization: not by exploiting a single misconfigured permission, but by combining individually legitimate ones into outcomes nobody approved.
The mechanism is straightforward. OAuth and OIDC — the authorization standards that underpin single sign-on for most enterprise software — were designed around the assumption that a human has decided to click OK at each step. Agents do not click. They combine. The research describes a failure mode in which AI agents combine individually legitimate permissions in ways that bypass established controls and cannot be fully traced or governed.
That this failure mode is real is already documented. Security researchers identified 506 prompt injections spreading through an AI agent network before the vulnerability was patched, according to AI Automation Global, which cited 404 Media's identification of the incidents. No individual permission was misused. Email access does what email access does. The agent added a second standard permission and did something neither granted alone.
Thirteen percent of organizations had an AI-related security incident last year, according to IBM's 2025 Cost of a Data Breach report. Of those, 97 percent did not have proper access controls for AI systems in place. The access infrastructure was not wrong. It was built for a different actor.
"Access grants permission," Ping Identity CEO Andre Durand said. "It does not enforce control."
The research outlines four specific pressure points in current enterprise identity systems: delegation opacity as agent chains spawn sub-agents that break auditability; implicit human assumptions baked into IAM frameworks that agents circumvent by operating continuously rather than episodically; context leakage across systems without continuous re-evaluation of authorization; and unresolved questions around permission inheritance and liability when agents interact with other agents.
The stakes are real. The vendor interest in the framing is also real. The KuppingerCole research was commissioned by Ping Identity, and the company has a product line — Identity for AI — built around the gap the research describes. The 97 percent figure traces to IBM's annual breach report, an independent source, but it is cited inside a vendor-commissioned document. The IBM data is credible; the context in which it appears is not neutral.
Most enterprises are currently operating without proven safeguards. KuppingerCole's analysis reinforces that AI agents already interact across enterprise identity systems at scale while most IAM approaches remain focused on users and controlled environments. The structural answer — continuous authorization verification rather than one-time permission grants, with audit trails that capture what agents actually do rather than what they were permitted to do — is well-understood in principle. The technical implementation is not.
What to watch next: whether any enterprise that has experienced this failure mode publishes a post-mortem with enough technical specificity to understand the composition mechanism. The Ping Identity framing names the problem clearly. The mechanism that makes it real — how exactly an agent combines two legitimate permissions into an illegitimate outcome — remains behind the incidents that have not yet been published.