Google says three-quarters of its new code is generated by artificial intelligence. The figure, announced by CEO Sundar Pichai at the Cloud Next conference this week, is real. But it lands against an uncomfortable backdrop: Google has tied AI usage to employee performance reviews, creating pressure to adopt the tools, and that pressure may be driving the numbers up faster than the code's quality can be verified.
The adoption trajectory is steep. Google reported that 75 percent of new code is AI-generated and approved by engineers, up from 50 percent last fall and 25 percent in October 2024. Complex migrations now run six times faster than a year ago, and Pichai described engineers orchestrating fully autonomous digital task forces: AI agents that fire off sub-agents to complete work without constant human oversight. The company has set specific AI usage goals that will factor into performance reviews this year, Business Insider reported, an incentive structure that rewards engagement with AI coding tools whether or not engineers believe the tools improve their work.
The distinction between engagement and adoption is not academic. Addy Osmani, a senior Google engineer, said on X that more than 40,000 Google software engineers now use AI coding tools weekly, Google's own count of its progress. But Steve Yegge, a former Google engineering executive, described weekly usage as a low bar, one that includes people who opened a tool once and went back to writing code by hand. The volume of interactions does not measure whether engineers changed how they work.
When Google moved to equalize access to AI coding tools across the company, its proposed solution was to remove Anthropic's Claude Code for everyone. Several DeepMind engineers objected so strongly they threatened to leave, Business Insider reported. Their preference for a competitor's product over Google's own Gemini for the most sensitive AI-assisted work is an internal signal Google has not disclosed publicly.
What the 75 percent figure does not tell you is whether the humans approving the code understand what they are approving. The risk is structural: AI can generate code faster than engineers can audit it, and if the people evaluating the output are the same people under pressure to demonstrate AI usage, the metric and the quality assurance may be working at cross purposes. That question — who vets the code when the code is everywhere and the reviewers are measured by how much of it ships — is one Google has not answered.
Google declined to comment.