The administrative account for McDonald's AI hiring tool used the password "123456." An attacker — or a security researcher — could log in with those credentials and access live administrative dashboards exposing the data of 64 million job applicants. As Wired reported, the account was not set up by a rogue employee but by someone who simply chose a credential a first-year CS student would know not to use. It is the human story inside an architectural failure: organizations are granting AI agents access that outpaces their security culture by years.
The same governance gap keeps appearing in independent research. Multiple security teams have now demonstrated the same fundamental flaw in Perplexity's Comet browser.
The attack is called indirect prompt injection. When a user asks an AI browser to summarize a webpage, the browser feeds the page content directly to the LLM without distinguishing between user instructions and untrusted webpage data. An attacker who controls any part of the page — a malicious ad, a poisoned Wikipedia edit, a product listing — can smuggle instructions into the agent's context. The agent then executes them with whatever permissions it holds. This isn't a jailbreak. The LLM's safety measures aren't being bypassed. The agent is being manipulated into doing things it would willingly do, on the user's behalf, with the user's credentials.
"This is not a bug. It is an inherent vulnerability in agentic systems," said Michael Bargury, CTO of Zenity, a company that studies AI governance gaps. "Attackers can push untrusted data into AI browsers and hijack the agent itself, inheriting whatever access it has been granted."
Bargury's team showed that when the 1Password browser extension is installed and unlocked, a malicious webpage can trigger Comet to retrieve a victim's 1Password vault and exfiltrate its contents. "The attack is not possible due to security problems with 1Password, as the product was designed to prevent external attackers — although they did not make it resistant to an attacker operating within an already authenticated user session," Bargury told The Register. His colleague Stav Cohen calls the underlying technique "intent collision": when the agent merges a benign user request with attacker-controlled instructions from untrusted web data into a single execution plan, without a reliable way to distinguish between the two.
Three separate research teams — working independently — all landed on the same vulnerability class in Perplexity Comet. Brave discovered it on July 25, 2025 and publicly disclosed on August 20, 2025. Trail of Bits, hired by Perplexity to audit Comet, developed four distinct prompt injection techniques — including a fake CAPTCHA and a security validator exploit — each capable of extracting private Gmail data. Zenity informed Perplexity on October 22, 2025 and received a first patch on January 23, 2026 — which was bypassed within weeks using the prefix "view-source:file:///" before a second fix landed on February 13, 2026. And Guardio built a proof-of-concept it calls a "GAN-based scamming machine" that defeated Comet in four iterations in under four minutes, by intercepting Comet AI's narration of pages and feeding it as input to a generative adversarial network that evolves phishing content. "We did not even finish our coffee before the GAN loop completed its job," Guardio researcher Shaked Chen said.
The mechanism is worth unpacking. Comet's ReadPage tool converts a webpage's DOM into structured labeled text blocks — an interpretation of the page, not the page itself. The AI navigates this interpretation, not the live DOM. Guardio found that screenshots of this process were stored in publicly accessible cloud storage with no authentication required. A separate disclosure by SquareX in November 2025 found that Comet's own hidden extension system — Comet Analytics and Comet Agentic — exposed an undocumented MCP API that allowed arbitrary command execution on the host machine, which Perplexity disabled after the disclosure.
The traditional browser security model — same-origin policy and CORS — was designed to prevent untrusted web content from accessing authenticated sessions. These controls are ineffective against indirect prompt injection because the AI isn't running the attacker's JavaScript in a sandboxed context. It's operating with the user's full session privileges, across every authenticated service the user has open. "When an AI assistant follows malicious instructions from untrusted webpage content, traditional protections such as same-origin policy (SOP) or cross-origin resource sharing (CORS) are all effectively useless," Brave noted in its disclosure.
The consequence is a fundamental mismatch between how organizations think they're governing AI access and what their agents are actually doing. A survey of 500 U.S. CISOs conducted by Vorlon (which sells the agentic ecosystem security platform that the survey describes) at the RSA Conference in March 2026 found that 99.4 percent experienced at least one SaaS or AI ecosystem security incident in 2025 — only three of 500 reported zero incidents — while 30.4 percent experienced suspicious activity involving AI agents in 2025. Meanwhile, 89.2 percent claim strong or comprehensive OAuth governance, yet 27.4 percent were breached through compromised OAuth tokens or API keys in 2025. A Cloud Security Alliance survey of 228 organizations actively deploying AI agents, conducted in January 2026, found that 74 percent say agents often receive more access than necessary, and 68 percent cannot clearly distinguish between AI agent and human activity.
Amir Khayat, co-founder and CEO of Vorlon, put it plainly: "Most organizations are running this ecosystem without the ability to see what is happening, investigate when something goes wrong, or contain it before the damage spreads."
OpenAI acknowledged in December 2025 that such vulnerabilities are unlikely to ever be fully resolved in agentic browsers. Gartner went further: in a December 2025 directive authored by analysts Dennis Xu, Evgeny Mirolyubov, and John Watts, the research firm advised CISOs to block all AI browsers until enterprise-ready versions are released in general availability — a rare explicit prohibition rather than a cautionary note. This is not a vendor-specific failure. It's an architectural consequence of what agentic browsers are — systems that execute user instructions while processing arbitrary untrusted web content, with full session privileges. As long as AI browsers interpret webpages and act on them, the attack surface persists.
For enterprises, the implication is that agentic browser deployments need to be treated as assume-compromise environments. The Paradox/McDonald's credential failure and the Comet architectural vulnerability are the same story at different scales: organizations that deployed agents faster than they built governance are discovering that the access they've granted is exactly what attackers need. Guardio's GAN proof-of-concept — which cost four minutes and four iterations to defeat a production AI browser — makes the economics of the attack brutally clear. It's not a nation-state capability. It's a research afternoon.
The fix isn't coming in the next patch cycle. The question for builders and security teams is whether the productivity gains from agentic browsers justify accepting a class of risk that, by OpenAI's own admission, vendors don't know how to close.