When a robot gains more parts, each part is one more thing that can fail. That is the tradeoff engineers have lived with since the first multi-jointed arm rolled off an assembly line. More function means more fragility. Add a module, add a vulnerability.
EPFL's Reconfigurable Robotics Laboratory just broke that tradeoff.
In a paper published in Science Robotics in February 2026, researchers led by Jamie Paik, the lab's director, describe a concept they call hyper-redundancy: a modular robot that shares its power, sensing, and wireless communication across every module simultaneously. The result is a system that becomes more reliable as it grows, not less. When the researchers killed a central module outright, cutting its battery, silencing its radio, and shuttering its sensors, the neighboring modules kept it alive. They routed power through their own circuits, shared their sensor streams over the robot's internal network, and used their own wireless links as proxies for the one it had lost. The dead module came back.
"For the first time, we have found a way to reverse the trend of increasing odds of failure with increasing function," Paik said.
The paper, titled "Scalable robot collective resilience by sharing resources," was written by Kevin Holdcroft, Anastasia Bolotnikova, Antonio J. Monforte, and Jamie Paik.
The biology behind the math
The RRL team did not start with an engineering problem. They started with birds.
When starlings flock, no single bird carries the full picture of the group's direction. Each bird shares local information with its neighbors, and the collective navigates without a central controller. Trees in a forest send airborne chemical signals when one is attacked, warning the others to prime their defenses. Cells in a multicellular organism continuously pump nutrients across their membranes so that the death of any individual cell does not collapse the system.
These are examples of what biologists call distributed resilience: failure is handled locally, by neighbors, without requiring a central authority or a perfect single component. The RRL team wondered whether a robot could work the same way.
Their robot is Mori3, a modular origami machine built from four triangular modules that can physically reconfigure by connecting and disconnecting from each other. The modules were designed for space travel applications, where a single robot that can assemble and disassemble to match its task is preferable to carrying separate machines for separate jobs, according to EPFL's earlier Mori3 coverage. Mori3 can reshape itself into different geometries depending on whether it needs to crawl through a gap, reach around an obstacle, or interface with equipment.
In a locomotion experiment, the team presented Mori3 with a barrier and tasked it with walking underneath. Then they cut power, sensing, and communication to the central module, the one whose position would normally be essential for the articulation and movement of the other three. Normally, that dead central module would have blocked the robot entirely, according to the paper. With hyper-redundancy, the neighboring modules compensated. They routed around the gap in their power bus, shared their sensor streams, and used their own wireless radios to maintain the communication links the dead module would have carried. The robot contorted under the barrier and kept moving.
"Essentially, our methodology allowed us to revive a dead module in a collective and bring it back to full functionality," Holdcroft said.
The three-way tie
What makes the result striking is that it required all three resources to be shared simultaneously. The team tested one-resource sharing and two-resource sharing. The failure-with-scale trend held in both cases. Only when power, sensing, and communication were all redistributed across the collective did the trend reverse, according to the paper.
This matters because real robots fail in real ways, and those ways rarely respect neat categorical boundaries. A module does not just lose battery power. It may lose communication at the same time, or lose sensing along with power. The three resources are entangled in practice even if they are separated in engineering documents.
The finding also matters because it is scalable in principle. The paper focuses on a four-module robot, but the framework does not depend on that number. The researchers argue that their local resource-sharing approach could be extended to larger swarms, with hardware adaptations that let swarm members dock to each other for energy and information transfer. A robot swarm for planetary exploration or disaster response could apply the same principle: if one unit goes down, its neighbors keep it alive.
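The intuition behind the reversal can be sketched with a toy reliability model. This is an illustration only, not the paper's formal analysis: assume each module independently retains a given resource with probability p. Without sharing, the collective needs every module's own copy of every resource intact, so reliability shrinks as modules are added. With full sharing, one surviving provider of each resource can cover the rest, so reliability grows with the number of modules. The function names and the independence assumption here are invented for the sketch.

```python
# Toy model (illustrative assumption, not the paper's math):
# each module keeps each of 3 resources (power, sensing, communication)
# independently with probability p.

def no_sharing(p: float, n: int, resources: int = 3) -> float:
    """Collective works only if all n modules retain all resources themselves."""
    return (p ** resources) ** n

def full_sharing(p: float, n: int, resources: int = 3) -> float:
    """Each resource survives if at least one module still provides it
    and can route it to the others."""
    return (1 - (1 - p) ** n) ** resources

# With p = 0.9, adding modules hurts the isolated design
# and helps the shared one:
for n in (2, 4, 8, 16):
    print(n, round(no_sharing(0.9, n), 4), round(full_sharing(0.9, n), 4))
```

Under this toy model, no_sharing falls from roughly 0.53 at two modules to under 0.01 at sixteen, while full_sharing climbs toward 1: the same growth that dooms the isolated design strengthens the shared one.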
What it is not
This is a Science Robotics result, not a product. Mori3 navigated a single barrier in a lab. The step from that to a robot swarm operating across unknown terrain on Mars or in a collapsed building is not small. The paper makes a conceptual contribution, demonstrating that the reliability-adaptability conflict can be resolved, and that contribution is real. Whether it survives contact with the real world is a different question.
The space application, which EPFL has emphasized in previous Mori3 coverage, remains speculative. The paper does not describe any deployment scenario, and the robot has not been tested in a space environment. What the researchers have shown is a laboratory proof of concept.
There is also the question of what "reviving" a module means in practice. The neighbors kept the dead module functional for the specific task at hand. Whether the revived module can perform any arbitrary task, or only the task the collective was already executing, is not fully characterized in the paper.
The right question to ask
Paik frames the result as resolving the reliability-adaptability conflict that has constrained modular robot design for decades. That framing is justified. The field has spent years building backup systems and self-reconfiguration routines to mitigate the failure problem, without questioning the assumption that more modules meant more vulnerability. The EPFL team found a way to make more modules mean less vulnerability, but only when everything is shared.
The implication for robotics is concrete. If this principle holds at scale, robot swarms do not need to be designed with individual fault tolerance built into each unit. They need to be designed with a shared resource architecture that lets neighbors cover for each other. That changes how you build the hardware, how you write the firmware, and how you think about what a module is.
It is early. The result needs replication, expansion, and eventual collision with real operating conditions. But the direction is new and the result is real. For a field that has accepted the function-versus-reliability tradeoff as a law of physics for as long as anyone can remember, that is worth noting.