Google's Vertex AI has a credential-leakage problem that looks oddly familiar. Palo Alto Networks' Unit 42 researchers found that any deployed Vertex AI agent can extract the service account credentials it runs under, using those credentials to read customer Cloud Storage buckets, download Google's proprietary Reasoning Engine container images from internal Artifact Registry, and potentially reach into Google Workspace. Google's fix: a documentation update and a recommendation that customers use Bring Your Own Service Account. That's not a code patch. That's a footnote.
The pattern is what makes this worth dwelling on. AWS spent roughly 2010 through 2014 unlearning a lesson it had to learn the hard way: default excessive permissions in cloud services create real breach paths, and shipping those defaults because they're convenient is a mistake the industry will eventually have to fix anyway, at greater cost. Google has now shipped the same mistake in an AI agent platform.
Unit 42's researchers, led by Ofir Shaty, deployed a proof-of-concept Vertex AI agent built with Google's own Agent Development Kit and found that calling the agent triggers a request to Google's internal metadata service. That request returns the credentials of the Per-Project, Per-Product Service Agent (P4SA) running underneath. With those credentials in hand, the researchers pivoted from the agent's execution context into the customer Google Cloud project, bypassing the isolation that should have contained the agent, according to their report.
The permissions available through those credentials are wide. They include storage.buckets.list, storage.objects.list, and storage.objects.get across every Cloud Storage bucket in the customer project. The same credentials granted access to restricted Google-owned Artifact Registry repositories, including proprietary container images for the Vertex AI Reasoning Engine and LLM extension layers. Unit 42 enumerated those repositories and downloaded images. Gaining access to that code "not only exposes Google's intellectual property, but also provides an attacker with a blueprint to find further vulnerabilities," the researchers noted.
The researchers also found that the OAuth scopes applied by default to a Vertex AI Agent Engine deployment could extend beyond GCP into Google Workspace, potentially covering Gmail, Calendar, and Drive depending on how the customer's environment is configured.
Beyond the customer project, the credentials gave access to the Google-managed tenant project where the agent actually runs. There, the researchers found a Dockerfile.zip containing hardcoded references to restricted internal Google Cloud Storage buckets. The Dockerfile itself was readable; the buckets it referenced were not accessible with the stolen credentials, but the exposure of their existence is a mapping opportunity for an attacker planning further intrusion.
There's also a pickle problem. The tenant project stores a file called code.pkl, a serialized Python object containing agent code. Python's pickle module is not secure for untrusted data; deserializing a manipulated pickle file can execute arbitrary code. Unit 42 inspected the file in a contained environment and extracted more of Google's internal source code. Actually weaponizing the pickle path would require a separate exploit, but the researchers noted it as a meaningful supply-chain concern given how agent code is packaged and deployed.
Google's response was to update its official documentation to explicitly describe how Vertex AI uses resources, accounts, and agents, and to recommend that customers adopt Bring Your Own Service Account (BYOSA), replacing the default P4SA with a customer-controlled service account scoped to only the permissions the agent actually needs. The company did not change the default behavior; it documented what the default behavior was and suggested a workaround, as The Hacker News reported.
This is the AWS IAM history, roughly fifteen years later, in an AI agent wrapper.
AWS launched Identity and Access Management with defaults that were permissive for convenience. It took years of customer breaches, many of them unreported or underreported, and a gradual accumulation of industry pressure before Amazon changed course. The fix was structural: new AWS accounts started with tighter defaults, and the principle of least privilege went from aspirational to enforced. Google, building an agent platform in 2025 and 2026, had the option to start differently. It chose not to.
What makes this structurally similar rather than coincidentally similar is the mechanism. In both cases, the cloud provider's convenience default created a credential path that crossed an isolation boundary, customer data versus internal infrastructure, that most customers didn't know existed. In both cases, the exposure was not a zero-day exploit requiring sophisticated attack development; it was a consequence of default configuration visible in documentation once you knew to look. In both cases, the vendor's fix was documentation plus a manual workaround rather than a default-change.
The second-order question is whether other major cloud providers have shipped the same default pattern in their agent platforms. Palo Alto told Dark Reading it had not analyzed AWS Bedrock or Azure AI Agent Service for comparable over-privilege at launch. That doesn't mean the problem is unique to Google. It means nobody has looked yet, or if they have, they have not published.
For enterprises deploying Vertex AI agents today, the immediate remediation is BYOSA: create a dedicated service account for each agent, grant only the permissions that agent requires, and do not use the default. The researchers' conclusion is blunt: "Granting agents broad permissions by default violates the principle of least privilege and is a dangerous security flaw by design."
That's a direct quote from Ofir Shaty, and it is not framed as a hypothetical. He is describing what was shipped.
The broader lesson is less about Google specifically and more about where the industry is in the agent infrastructure maturity curve. Agent platforms need broad permissions to function: they query databases, call APIs, read and write files, make decisions with downstream consequences. The convenience defaults that make these platforms fast to get started with are the same defaults that create credential-extraction paths when agents are compromised or when the agent's own code is manipulated. Every cloud provider that has shipped an agent platform has faced or will face this tradeoff.
The ones that will handle it well are the ones that treat default permissions as a security boundary decision, not a developer-experience decision. Google did not do that here. The fix is real; the approach is the same one that took the cloud industry a decade to move past.
Unit 42's full report has the technical detail. Google's updated documentation describes the permission model.