A worm that learns by forgetting
There is a worm on a lab bench in Amsterdam. It has twelve motorized hinges and an elastic spine. Each hinge runs its own microcontroller. No single controller runs the show. Train it one way and it wriggles forward. Train it differently and it curls around an object and holds. It learned both behaviors. And it can learn a new one without being rebuilt.
The work, published April 7 in Nature Physics by researchers at the University of Amsterdam, sits at an odd intersection: materials science, robotics, and a problem AI labs are currently working around rather than solving. Yao Du, a PhD candidate in the Machine Materials Lab at UvA and first author of the paper, puts it this way: "The most exciting observation of our research was that learning gives our metamaterials the ability to evolve. Once the system starts to learn, the possibilities of where it ends up feel almost limitless."
How the worm learns
The system is built from chains of motorized hinges. Each hinge carries a microcontroller that measures how far it has rotated, remembers its past movements, and talks to its neighbors. Learning happens locally. Training works like this: researchers fix certain hinges in a bent position to define an input. Then they repeatedly nudge the remaining hinges toward a target shape, clamping and releasing with each cycle. With each repetition, the microcontrollers update how much force their hinge should apply. Eventually the chain has learned: whenever it senses the same input configuration, it morphs into the trained shape on its own.
The researchers call this contrastive learning, a term borrowed from machine learning. The material learns by being shown examples rather than by being explicitly programmed. It can learn multiple shapes and switch between them. It can learn a new shape and overwrite an old one. And it can perform tasks like gripping objects and moving across surfaces that typically require pre-programmed robots.
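The clamp-nudge-release cycle can be illustrated with a toy model. This is a hypothetical sketch, not the authors' controller code: it assumes each hinge stores a single learned parameter (its rest angle) and updates it using only the gap between where it settles on its own and where the trainer nudged it, with no controller seeing the whole chain.

```python
class Hinge:
    """Toy hinge with one learned parameter and a purely local update rule."""

    def __init__(self):
        self.rest_angle = 0.0   # learned parameter, in degrees
        self.lr = 0.2           # learning rate (hypothetical value)

    def settle(self):
        """Angle the hinge relaxes to when released (the 'free' phase)."""
        return self.rest_angle

    def update(self, nudged_angle):
        """Contrastive-style rule: compare the free state to the nudged
        (clamped) state and move the learned rest angle toward the latter."""
        self.rest_angle += self.lr * (nudged_angle - self.settle())


def train(chain, target_shape, cycles=50):
    """One clamp-nudge-release pass per cycle; every update is local."""
    for _ in range(cycles):
        for hinge, target_angle in zip(chain, target_shape):
            hinge.update(target_angle)


chain = [Hinge() for _ in range(12)]               # twelve motorized hinges
target = [10.0 if i % 2 else -10.0 for i in range(12)]  # a wriggle-like shape
train(chain, target)

max_err = max(abs(h.settle() - t) for h, t in zip(chain, target))
print(f"max error after training: {max_err:.4f} degrees")
```

After enough cycles, each hinge settles into the trained shape on its own, which is the behavior the article describes: the shape response ends up stored in the hinges themselves rather than in a central program.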
Corentin Coulais leads the Machine Materials Lab, which has been building toward this for years. Earlier work showed that odd mechanical objects could roll, crawl, and wiggle over unpredictable terrain without any centralized control. Those systems could move, but they could not learn. Adding learning opens something different: a machine that can be retrained for a new task without a hardware swap.
The researchers are careful about the comparison to AI. The metamaterial is not reasoning or generalizing. It learns specific shape responses through repeated physical training. The training itself requires human researchers repeatedly nudging hinges into position. It is not autonomous discovery.
What the paper shows and what it does not
The system demonstrated three distinct capabilities. It can learn a new shape and retain it. It can overwrite an old learned shape with a new one. And it can learn multiple shapes and toggle between them on demand, a capability the researchers call multi-shape memory.
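All three capabilities follow naturally if each hinge keys what it has learned on the sensed input configuration. The sketch below is an assumed model, not the paper's implementation: input configurations are reduced to labels, and each hinge keeps a small local table from label to learned angle, so toggling is a lookup and overwriting is retraining under the same label.

```python
class MemoryHinge:
    """Toy hinge that stores one learned angle per sensed input label."""

    def __init__(self):
        self.memory = {}   # input label -> learned angle (degrees)
        self.lr = 0.5      # learning rate (hypothetical value)

    def train_step(self, input_label, nudged_angle):
        """Local update: pull the stored angle for this input toward
        the angle the trainer nudged the hinge into."""
        current = self.memory.get(input_label, 0.0)
        self.memory[input_label] = current + self.lr * (nudged_angle - current)

    def respond(self, input_label):
        """Morph to the angle stored for this input (0.0 if untrained)."""
        return self.memory.get(input_label, 0.0)


def train(chain, input_label, shape, cycles=30):
    for _ in range(cycles):
        for hinge, angle in zip(chain, shape):
            hinge.train_step(input_label, angle)


chain = [MemoryHinge() for _ in range(12)]
wriggle = [10.0 if i % 2 else -10.0 for i in range(12)]  # alternating bends
curl = [15.0] * 12                                       # uniform curl

train(chain, "input_A", wriggle)   # learn and retain a shape
train(chain, "input_B", curl)      # learn a second shape; toggle on demand

# Overwriting: retraining under the same input replaces the old shape.
train(chain, "input_A", curl)
```

In this picture, multi-shape memory is just multiple entries in each hinge's local table, and overwriting an old shape is the same training loop pointed at an existing entry.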
The hardware is not soft robotics. The motorized hinges with embedded microcontrollers are a specific technical choice that gives the system programmable stiffness but limits it to the centimeter-to-meter scale. A truly deployable shape-shifting material would likely need different actuation mechanisms.
The next steps the researchers describe are time-dependent behaviors: learning different locomotion gaits like crawling versus rolling depending on environmental conditions. They also want to explore learning under noise and uncertainty, where responses become probabilistic rather than fixed.
Dutch Research Agenda funding, announced in the 2026 NWA program on materials that learn, signals that this is not a one-off curiosity. A new PhD candidate is joining the lab in August to work specifically on extending the system toward time-dependent learning.
The broader context is a quiet shift in where researchers think intelligence can live. For decades, the assumption was that adaptation required something central receiving signals, deciding, and issuing commands. This work adds to a body of evidence that distributed, local decision-making can produce surprisingly complex behavior. One independent commentator described it as the boundary between software intelligence and physical learning systems beginning to dissolve.
For robotics, a material that learns a new task by being physically trained rather than having a program downloaded suggests a different paradigm. The learning is partly physical, embedded in the structure itself, rather than entirely in software.
Whether that path leads anywhere practical is an open question. But the worm on the bench in Amsterdam is doing something its builders did not have to rebuild it to teach.