Something with no eyes can still see. That is the finding that kept Michael Levin up at night, and it is the reason you should care about neurobots.
Researchers at Tufts University and the Wyss Institute at Harvard have built living robots from frog embryonic cells that develop a functional nervous system inside them, according to the Wyss Institute announcement. These neurobots move in looping spirals, explore their surroundings more widely than their predecessors did, and respond to drugs in ways that suggest the neural tissue is actively shaping behavior. This is more than reflex.
The most striking discovery came from the gene expression data. When researchers sequenced RNA from the neurobots, they found nearly 6,800 genes firing at significantly higher levels than in xenobots, the non-neural cell clusters that preceded them. Among those genes: opsins, the light-sensitive proteins found in eyes. The full phototransduction cascade had switched on. There were no eyes anywhere in the structure: no retina, no optic nerve, no visual cortex. The organism had turned on its own visual hardware anyway.
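To give a sense of what "significantly higher levels" means in practice, here is a minimal sketch of the kind of filter typically applied to differential expression output. Everything specific in it is hypothetical: the file name, column names, thresholds, and gene list are illustrative, not taken from the paper.

```python
import pandas as pd

# Hypothetical differential expression table: one row per gene, with a
# log2 fold change (neurobot vs. xenobot) and an FDR-adjusted p-value.
# File and column names are illustrative, not from the study.
de = pd.read_csv("neurobot_vs_xenobot_DE.csv")  # columns: gene, log2fc, padj

# A common convention: "upregulated" means higher expression in neurobots
# with an adjusted p-value below 0.05.
upregulated = de[(de["log2fc"] > 0) & (de["padj"] < 0.05)]
print(f"{len(upregulated)} genes upregulated in neurobots")

# Check whether known phototransduction genes appear in the upregulated set.
# This short gene list is a hand-picked example, not the paper's annotation.
phototransduction = {"rho", "opn4", "gnat1", "pde6b", "cnga1", "rcvrn"}
hits = upregulated[upregulated["gene"].str.lower().isin(phototransduction)]
print(hits[["gene", "log2fc", "padj"]])
```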
"We have no framework for this," said Kate Adamala, a synthetic biologist at the University of Minnesota who was not involved in the study, in an interview with IEEE Spectrum Robotics. "This truly puts the engineering component into bioengineering."
The research was published in Advanced Science on February 20, 2026, and announced publicly on March 16, 2026. The first author was Haleh Fotowat, a senior scientist at the Wyss Institute who spearheaded the work with Levin. The Department of Defense funded it under awards HR0011-18-2-0022 and W911NF1920027, along with the John Templeton Foundation and Northpond Ventures.
To understand what makes a neurobot different, it helps to know what came before. The original xenobots, described in 2020, were clusters of frog embryonic cells that could move, survive about nine to ten days on nutrients stored in those original cells, and repair minor damage. They were alive but reflex-driven: no neurons, no synaptic signaling, no internal processing. A neurobot, by contrast, gets neural precursor cells implanted alongside the skin cells. Those precursors develop into neurons that extend axons and dendrites, form synapses, and begin firing in primitive networks. Calcium imaging confirmed the electrical activity. Inside these small structures, a nervous system was assembling itself.
The behavior tells a similar story. When treated with PTZ (pentylenetetrazole), a convulsant drug that heightens neural activity, the neurobots did something the non-neural versions did not: some of them increased the complexity of their movement. The non-neural biobots became less motile under the same exposure. The nervous system was not just along for the ride. It was changing what the organism did.
More than 54 percent of the upregulated genes in neurobots fell into the two most ancient phylostratigraphic gene age categories, shared across all living organisms or across eukaryotes broadly. The researchers described it as the transcriptome shifting toward very old genetic programs, the deep operating system that predates specialized organs. Levin, who holds the Vannevar Bush Chair at Tufts and directs the Allen Discovery Center, has spent years arguing that cellular collectives have real problem-solving capacities that have nothing to do with having a brain. The neurobot data is the strongest evidence yet that he is right.
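The 54 percent figure is, mechanically, a simple tally once each upregulated gene carries a gene-age label. Here is a sketch of that arithmetic; the stratum names and the toy gene table are illustrative stand-ins, not the paper's annotation.

```python
import pandas as pd

# Hypothetical table of upregulated genes, each annotated with a
# phylostratum (gene age category). Labels and genes are illustrative only.
genes = pd.DataFrame({
    "gene": ["g1", "g2", "g3", "g4", "g5"],
    "phylostratum": [
        "cellular organisms",  # oldest stratum: shared by all life
        "Eukaryota",           # second oldest: shared across eukaryotes
        "Metazoa",
        "cellular organisms",
        "Vertebrata",
    ],
})

# Fraction of upregulated genes falling in the two oldest strata.
oldest_two = {"cellular organisms", "Eukaryota"}
fraction = genes["phylostratum"].isin(oldest_two).mean()
print(f"{fraction:.0%} of upregulated genes fall in the two oldest strata")
```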
Fauna Systems, a startup co-founded by Levin and Josh Bongard, is already working to commercialize the underlying xenobot platform for environmental sensing. The company is targeting aquaculture, wastewater monitoring, and pollutant detection, according to CEO Naimish Patel. The idea is that a living sensor, capable of responding to chemical gradients in its environment, could do things a silicon sensor cannot: adapt to new conditions, heal itself, and eventually degrade harmlessly when its work is done.
None of that is deployed yet. The gap between a published study and a working system in a wastewater pipe is not small. But the biological machinery being commercialized is now well characterized, and the research team has shown it can be tuned. What the nervous system actually computes, and what behaviors it enables in real environments rather than petri dishes, is the open question.
The stranger implication is harder to get at. Evolution built light sensing long before it built brains. The opsin machinery predates the visual cortex by an enormous phylogenetic distance. What does it mean for a cell cluster to run phototransduction with no light-gathering organ to feed it data? The researchers are careful about what they claim, but the question is not going away. Something in these organisms is responding to light, and the mechanism for that response exists independently of the hardware most animals use to see.
That is not a robotics story. It is not quite a biology story either. It is a story about where the boundary is between a machine and an organism, and whether that boundary means what we thought it meant.