NVIDIA and Google Are Building the Stack for AI That Acts, Not Just Answers
Isomorphic Labs is the only one of four Alphabet subsidiaries with a confirmed live deployment on the shared NVIDIA-Google Cloud stack, and that gap is the test.

Isomorphic Labs has a drug design system running on Google Cloud NVIDIA GPUs: not announced, not planned, but running. It is the only confirmed live deployment among the four Alphabet subsidiaries building on the same NVIDIA-Google Cloud infrastructure. Whether the others follow Isomorphic's path to production before GTC 2027 is the test this stack was built to pass.
The enabling factor is new today. NVIDIA announced A5X virtual machines at Google Cloud Next 2026, the first cloud instances powered by Vera Rubin, NVIDIA's successor to Blackwell, the architecture that has defined AI infrastructure for the past two years. NVIDIA says A5X cuts the cost of running AI inference to as little as one-tenth of Blackwell's, and that a single rack delivers the throughput that previously required four. No independent benchmark has verified those figures. The economics are what make the Isomorphic production case interesting as a leading indicator.
The Alphabet arrangement is the structural news underneath. According to a joint press release, four of Alphabet's AI subsidiaries now sit on one infrastructure stack. Isomorphic has the production drug design engine. DeepMind's SynthID watermarking system is integrated into NVIDIA's build platform for world foundation models. Intrinsic, Alphabet's robotics subsidiary, is testing Isaac Manipulator foundation models in an integration environment. Tapestry, the grid simulation moonshot from X, is exploring methods with NVIDIA. All four collaborations were announced at GTC 2025; what is new today is that the infrastructure they depend on has a confirmed live workload and a cost structure that may finally make replication economical.
Google Cloud plans to offer Vera Rubin NVL72 rack-scale systems in the second half of 2026. The supporting stack is available now: G4 VMs with RTX PRO 6000 Blackwell Server Edition GPUs are generally available. Fractional G4 VMs — splitting one chip into halves, quarters, or eighths — are in preview, lowering the entry cost for Blackwell-tier inference. NVIDIA Dynamo is being integrated with Google Kubernetes Engine Inference Gateway as the control plane for agentic AI reasoning: it routes requests, manages context, and keeps agents on task. Google Cloud said it will be among the first cloud providers to offer confidential computing for agentic AI workloads across cloud and on-premises environments simultaneously, a combination that has blocked deployment in regulated industries.
The more forward-looking collaboration is Newton, an open-source physics engine NVIDIA and Google DeepMind announced last April with Disney Research. NVIDIA says Newton runs robotics simulations 70 times faster than MuJoCo, the current industry standard for robotic manipulation research. No independent benchmark has confirmed that figure. The collaboration connects Google's simulation research to NVIDIA's GPU stack; both companies are pointing at it as the infrastructure that makes universal robot grasping tractable at scale. Isaac Manipulator foundation models from NVIDIA are the robotic grasping component; Intrinsic is streaming sensor data into NVIDIA's Omniverse simulation environment. Whether universal robot grasping is a product or a research agenda depends on whether it ships in a production environment before GTC 2027.
The watermarking standard is a separate competition. NVIDIA is the first external user of Google DeepMind's SynthID system, integrated into build.nvidia.com. World foundation models generate simulated environments for training robotics and autonomous systems, and they require verified synthetic data to prevent cascading errors from generated training material. Whichever watermarking standard reaches major build platforms first will shape what "authentic" AI training data means industry-wide. That NVIDIA adopted Google's standard rather than building its own signals which approach to AI safety infrastructure is winning credibility with external adopters.
The Isomorphic production case is the only evidence this stack has crossed from announcement to operation. The test is what the next six months produce: does Intrinsic ship a robot, does Tapestry run a grid optimization at scale, does DeepMind publish a landmark result on this infrastructure? If any of those land, the infrastructure story NVIDIA and Google are telling is confirmed. If not, this was a conference announcement about four subsidiaries sharing a vendor — and Isomorphic's head start was the whole story.
Sources: NVIDIA Events (April 2026); NVIDIA Newsroom (January 2026); NVIDIA Newsroom (March 2025); NVIDIA Blog (September 2025); Google Cloud Blog (March 2026); Virtualization Review (March 2026)
