Google Says AI Agents Should Ask Permission. Its Own Product Does Not.
Google is testing an AI agent called Remy that would learn your habits, monitor what matters to you, and take actions on your behalf without being asked — in the same week its cloud division published guidance telling enterprise customers that agents should do the opposite.
Remy, described in an internal document seen by Business Insider, is positioned as a 24/7 personal agent that turns Gemini from a chatbot into something that acts for you. "Remy is your 24/7 personal agent for work, school, and daily life, powered by Gemini," the document reads. "It elevates the Gemini app into a true assistant that can take actions on your behalf — not just answer questions or generate content." Two people familiar with the project confirmed employees are currently testing it. Google declined to comment.
The timing creates a juxtaposition Google likely did not intend. Gemini Enterprise Agent Platform, the product Google Cloud uses to sell agent infrastructure to enterprises, launched nine days earlier with a governance framework built around limiting what agents can do. The platform emphasizes least-privilege access — agents should get only the permissions they need for a specific task — along with observable actions, auditable logs, and human oversight before any consequential step. Google Cloud's own AI governance documentation states that agents operating on behalf of users must have "carefully limited powers" and be designed with "well-defined human controllers."
Remy's internal description contains none of those constraints. The document describes a system that learns preferences over time, monitors user-relevant information continuously, and acts proactively rather than reactively.
The competitive pressure behind Remy is not subtle. In February, OpenAI hired Peter Steinberger, creator of the OpenClaw agent framework, to "drive the next generation of personal agents," a hire Sam Altman announced himself, and moved OpenClaw to an independent foundation that OpenAI continues to fund. OpenClaw became a viral sensation earlier this year for its ability to autonomously reply to messages, conduct research, and execute tasks on behalf of users. Google has no comparable public product.
Google DeepMind CEO Demis Hassabis has talked for years about building a digital assistant that goes beyond answering questions. Remy appears to be the execution of that vision, but it exists only in employee testing — what the industry calls dogfooding — with no public launch date. The company holds its annual I/O developer conference later this month, where agents are expected to be a central theme.
The unresolved questions are whether Remy requires user confirmation before taking any action, and whether users can audit what the agent has done. Business Insider's reporting answered neither. Google's privacy hub for Gemini lets users review and delete activity, manage data used for personalization, and control which apps connect to the assistant. But those controls were designed for a reactive chatbot, not a system that monitors continuously and acts without being prompted. Whether the same interface gives users meaningful oversight of a 24/7 proactive agent remains unknown.
For Google, the problem is not simply being late. The company has spent the past two years positioning itself as the trustworthy enterprise AI vendor — the cloud provider whose governance tools, audit trails, and guardrails make it safe to deploy agents in regulated industries. That positioning is now visible in every Gemini Enterprise Agent Platform presentation, every security whitepaper, and every customer case study. Remy, as described internally, appears to be the opposite of that product.