Training a robot in simulation is cheap and fast. Getting it to perform like a robot trained in the real world is the whole problem, and nobody has figured out how to prove the two are the same thing.
That question is at the center of the robotics industry's biggest bet right now: that factories can skip years of slow, expensive real-world data collection and train robots entirely in virtual environments, then deploy them to floors where they perform identically. Cadence and Nvidia announced an expanded partnership at CadenceLIVE Silicon Valley on April 15, weaving Cadence's multiphysics simulation engines into Nvidia's Isaac robotics platform. Jensen Huang appeared in person alongside Cadence CEO Anirudh Devgan. ABB Robotics, FANUC, YASKAWA, and KUKA — four companies with more than 2 million robots deployed globally — are already using Nvidia's Isaac simulation tools for virtual commissioning, testing robotic systems before they touch a physical floor.
ABB has gone further than anyone else in claiming it solved the gap. The company says its RobotStudio HyperReality system, built on Nvidia Omniverse, can close the sim-to-real gap with up to 99% accuracy — meaning a robot trained entirely in simulation would perform almost identically when unboxed on a real assembly line. If that number holds, it changes the economics of automation entirely.
ABB's number. ABB's lab. Nobody else has checked.
That's the uncomfortable reality behind every press release about simulation-based robot training. Virtual commissioning has been standard practice in factories for decades, used to validate layouts and cell choreography. What's new is the ambition — simulation as the primary training environment for robots performing precise manipulation tasks, not just checking spatial fit.
Cadence brings its multiphysics roots to the problem. The company built its simulation tools for semiconductor design — predicting how electrons move through a chip requires modeling heat, power, and material interactions with high precision. Cadence argues it can apply that same mathematical rigor to robot training scenarios: a warehouse robot handling a package of unknown flexibility, a collaborative robot working alongside a human whose position changes by the second, a manufacturing arm encountering a part slightly out of spec. Those edge cases separate a simulation that looks good from one that actually works.
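Neither company has published training code, but the standard technique for covering edge cases like these is domain randomization: every training episode samples its physical parameters from a distribution rather than reusing one idealized world, so the policy cannot overfit to a single set of conditions. A minimal sketch of the idea, with entirely hypothetical parameter names and ranges:

```python
import random

def sample_episode_params(rng=random):
    """Draw one randomized set of physics parameters for a training
    episode. All names and ranges here are illustrative assumptions,
    not anything Cadence or Nvidia has disclosed."""
    return {
        # Package flexibility: soft foam up to rigid plastic (Young's
        # modulus in MPa), for the "package of unknown flexibility" case
        "package_stiffness_mpa": rng.uniform(0.05, 3000.0),
        # Part dimensions drift from nominal spec (mm), for the
        # slightly-out-of-spec part
        "part_tolerance_mm": rng.gauss(0.0, 0.15),
        # A nearby human's position shifts every episode (meters)
        "human_offset_m": (rng.uniform(-1.0, 1.0), rng.uniform(0.5, 2.5)),
        # Surface friction varies with wear and contamination
        "surface_friction": rng.uniform(0.2, 1.1),
    }

params = sample_episode_params()
```

The design choice that matters is the width of those ranges: too narrow and the robot still overfits to the simulator's assumptions; too wide and training wastes capacity on conditions no factory will ever present.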
Nvidia contributes Isaac Sim and Isaac Lab — open-source simulation environments that have become the default platform for a generation of robotics researchers — along with Jetson edge compute hardware for deployment. The combined workflow spans virtual training in Isaac tools, evaluation through Cadence's physics models and VTD mission-scale simulation, and deployment on Nvidia hardware.
For now, the partnership is a proof of concept. No customer has publicly disclosed a robot trained on the Cadence-Nvidia stack in full production. Cadence cited early deployments of its ChipStack AI tools, a separate product line, at more than 10 customers, with up to 10X productivity gains in design and verification. The robotics simulation workflow is being positioned as the next step.
VCs put $7.2 billion into robotics startups in 2025, more than double the $3.1 billion invested in 2023. Simulation infrastructure is getting a significant share of that capital. The question is whether the simulation actually holds.
ABB's 99% figure is the most specific performance claim in the space. It is also, so far as the public record shows, asserted rather than independently confirmed. The factories running these systems are betting on a number nobody has audited. The researchers who could audit it haven't been invited to look.
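Part of what an audit would have to pin down is what "99% accuracy" even measures. ABB has not published its methodology; one plausible reading, sketched below purely as an illustration with made-up numbers, is the fraction of simulated task performance a robot retains when the same task suite is rerun on hardware:

```python
# Illustrative only: ABB has not disclosed how its 99% figure is
# computed. This hypothetical metric compares per-task success rates
# measured in simulation against the same tasks on real hardware.

def sim_to_real_retention(sim_success: dict, real_success: dict) -> float:
    """Average fraction of simulated success retained on hardware,
    over tasks present in both result sets (hypothetical metric)."""
    tasks = sim_success.keys() & real_success.keys()
    if not tasks:
        raise ValueError("no overlapping tasks to compare")
    # Cap each ratio at 1.0 so tasks where the real robot happens to
    # beat the simulator don't inflate the aggregate score
    ratios = [min(real_success[t] / sim_success[t], 1.0) for t in tasks]
    return sum(ratios) / len(ratios)

# Made-up example: a robot retaining 95-100% of its simulated success
# rate on each task scores in the high nineties.
sim = {"pick": 0.98, "place": 0.96, "insert": 0.90}
real = {"pick": 0.97, "place": 0.95, "insert": 0.86}
score = sim_to_real_retention(sim, real)
```

Even this toy metric shows why independent verification matters: the headline number depends entirely on which tasks are in the suite, how trials are counted, and who chooses both.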
The question of whether a simulation faithfully reproduces the physical world is not philosophical. It is the line between a robot that works and one that doesn't. Right now, the industry is asking factories to take its word that the line is crossed.