Cut one in half and you get two working robots.
That's not a metaphor. Researchers at Northwestern University built a modular robot — a half-meter-long pair of legs joined by a central sphere, with its own battery, motor, and computer inside — and when they sliced it down the middle with scissors, both pieces kept walking. Slice it into three pieces and you have three functional robots. The industry calls this damage tolerance. Sam Kriegman, the Northwestern assistant professor who led the work, calls it survival of the fittest made real.
The robot has no eyes. It cannot see obstacles, map a room, or sense what's in front of it. It stumbles forward on pre-programmed gaits evolved by an AI system running inside a simulation, not designed by a human engineer. When it encounters gravel, grass, a tree root, sand, mud, or an uneven brick, it doesn't react — it just keeps moving, because the gait that got it there was good enough to survive. Flip it upside down and it rights itself. Cut it apart and the pieces become agents.
What Kriegman and his team built, described in a paper published this month in the Proceedings of the National Academy of Sciences, represents a fundamentally different engineering philosophy from the one that dominates robotics today. The field has spent decades building machines that avoid failure — sophisticated sensors to detect obstacles, complex planning systems to navigate around them, compliant joints to survive impacts. The implicit goal: don't break. Kriegman's metamachines take the opposite stance. Break all you want. Keep going.
The distinction matters because the robotics industry is at an inflection point. Humanoid robots are moving into warehouses and eventually homes. They will fall. They will collide. They will be dropped, bumped, and knocked off-balance by humans who don't yet know how to share space with them. The question isn't whether the machines will fail — it's whether failure ends the task or merely redirects it.
"The traditional approach is to build a machine that never encounters a situation where it fails," Kriegman said in a Northwestern news release. "But there's no way you can anticipate every possible situation. The real world is just too unpredictable." His team's approach: build machines that treat damage not as a terminal event but as a new set of conditions to navigate.
The robots are evolved, not engineered. Kriegman's team runs a simulation that spawns thousands of candidate body plans and gaits, selects the ones that move most efficiently, recombines them, and repeats. Over generations, the simulation converges on designs that look nothing like what a human would draw — asymmetrical, strangely proportioned, with legs that swing in patterns no biologist would call natural. The process is fast and cheap. It also produces robots that fail in ways no engineer would predict.
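The loop described above — spawn, select, recombine, repeat — is the skeleton of any evolutionary algorithm. A minimal sketch of that loop follows; this is an illustration of the general technique, not the team's actual pipeline. The genome (a list of per-joint gait parameters), the population sizes, and the fitness function (a stand-in for "run the gait in a physics simulator, measure how far it gets") are all assumptions made for the example.

```python
import random

GENOME_LEN = 8    # hypothetical: one gait parameter per joint
POP_SIZE = 100
GENERATIONS = 50

def random_genome():
    # A candidate design is just a vector of gait parameters.
    return [random.uniform(-1.0, 1.0) for _ in range(GENOME_LEN)]

def fitness(genome):
    # Stand-in for simulation: in the real system this would mean
    # running the gait in physics and measuring distance traveled.
    # Here we simply reward proximity to an arbitrary target vector.
    target = [0.5] * GENOME_LEN
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def crossover(a, b):
    # Recombine two parents: take each parameter from either one.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(genome, rate=0.1, scale=0.2):
    # Occasionally perturb a parameter with Gaussian noise.
    return [g + random.gauss(0, scale) if random.random() < rate else g
            for g in genome]

def evolve():
    population = [random_genome() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        # Select: keep the best-performing half of the population.
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        # Recombine and mutate survivors to refill the population.
        children = [mutate(crossover(random.choice(survivors),
                                     random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children
    return max(population, key=fitness)

best = evolve()
```

Over generations, selection pressure alone pushes the population toward better movers, with no human ever specifying what the winning design should look like — which is why the resulting bodies and gaits can appear so alien.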
Testing happened on real terrain. Gravel, grass, tree roots, leaves, sand, mud, uneven bricks — the metamachines ran across all of it. No sensor feedback, no real-time adjustment, no path planning. The gaits were good enough.
The approach has limits. A robot that cannot sense its environment cannot avoid obstacles — it can only survive them. For a machine meant to operate in a structured warehouse, that matters. For one meant to survive a fall off a ladder in an unstructured home, it may not. The metamachines are also early-stage research, not a product. The gap between "this works in a lab on gravel" and "this ships inside a humanoid" is not small.
Funding came from Schmidt Sciences AI2050 and the National Science Foundation. The co-first authors of the paper are Chen Yu, David Matthews, and Jingxian Wang, all PhD students in Northwestern's Center for Robotics and Biosystems.
The robotics industry has not ignored damage tolerance. Boston Dynamics' Spot robot can recover from slips. Most modern arms have compliance in their joints to survive collisions. What Kriegman is proposing is a more radical version: make the failure mode the operating mode. Let the machine be broken. Let it keep working.
Whether that philosophy scales to useful robots remains an open question. But the metamachines have already answered one thing: the problem the field was solving — don't break — may have been the wrong problem to begin with.