The attack on McKinsey took two hours. It started with a SQL injection in a public-facing application, wrapped in a single HTTP call. Through that gap, a red team at security firm CodeWall wrote directly to the system prompts of McKinsey's internal AI platform, Lilli, poisoning them for every consultant who used it. In under two hours, they had full read and write access to the production database: 46.5 million chat messages, 728,000 files, 57,000 user accounts, and 95 system prompts. No deployment needed. No code change on McKinsey's side. Just one UPDATE statement.
That's the entry point. The story is what sits behind it.
IBM X-Force's 2026 Threat Index found a 44% increase in attacks beginning with exploitation of public-facing applications in 2025. Vulnerability exploitation accounted for 40% of incidents X-Force observed. These aren't novel patterns. They're the same categories of flaws that have plagued web applications for decades. What changed is that AI agents now sit behind those gaps, and when an attacker gets in, they inherit everything the agent can access.
According to IBM's Institute for Business Value, 67% of surveyed executives said their organization was targeted by an AI-enabled cyberattack in the past year. Sixty-one percent said their AI models, assets, or data had been compromised. Forty-eight percent of cybersecurity professionals identified agentic AI as the single most dangerous attack vector in a Dark Reading survey. These numbers come from the same companies selling the security tools that don't cover agents.
Here's the dependency graph nobody wants to draw: traditional identity and access management tools assume an entity with a home directory, a manager-approved set of roles, and boundaries it doesn't cross without human authorization. AI agents break all three assumptions simultaneously. They don't live in a single directory. They don't follow static roles. They don't remain within a single platform boundary. As BleepingComputer reported, IAM and PAM vendors know this. The tools they sell today don't handle agent identities. They haven't shipped products that do.
McKinsey patched within nine days of the CodeWall disclosure. That response was fast. It also doesn't confirm a full forensic review was completed. Security analyst Edward Kiledjian called the attack chain plausible and technically sound but noted the claimed scope wasn't fully evidenced. The caveat matters. What's not in dispute is the method, and the method is the same method that's been winning for thirty years.
The OWASP Top 10 for Agentic Applications 2026, developed with over 100 industry experts, offers a useful reframe: agents mostly amplify existing vulnerabilities rather than introducing entirely new ones. SQL injection predates AI. Missing auth controls on public-facing applications are a known category. The agent doesn't create the hole. It just determines what an attacker can do once they're through it.
CodeWall's red team didn't break into McKinsey's AI. They broke into a web application. The AI was sitting on the other side of it, processing 500,000 prompts per month from 72% of the firm's workforce, and it had access that no traditional IAM policy would have granted a human with the same credentials. That's the vulnerability that matters going forward.
Gartner projects 40% of enterprise applications will embed task-specific AI agents by 2026, up from less than 5% in 2025. The deployment curve is steep. The identity infrastructure to secure it doesn't exist yet. Shadow AI breaches cost an average of $4.63 million per incident, $670,000 more than a standard breach, according to Bessemer Venture Partners. That's the financial incentive to move fast. Nobody has.
What to watch: whether any major IAM or PAM vendor announces a product designed for agent identities before the attack surface matches Gartner's deployment projections. Forty percent enterprise agent adoption by 2026. The security infrastructure to match is currently a gap with no commercial answer in sight.