Cortical Labs (@CorticalLabs)
Cortical Labs, the Melbourne-based biotech that grew 800,000 human neurons on a chip and taught them to play Pong in 2022, has opened its biological computing platform to the public — and the neurons are now integrated with large language models.
The company announced Cortical Cloud this week, giving external developers live access to its CL1 biocomputer: a self-contained unit housing living neurons on a silicon chip with built-in life support designed to keep them alive for up to six months. The CL1 runs the company's Biological Intelligence Operating System (biOS), which simulates a world for the neurons and sends real-time electrical signals in and out of the neural structure. For the first time, anyone can write Python code, deploy it to real neurons over the internet, and interact with or train the live network.
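The article doesn't spell out what the Python API actually looks like, so everything in the sketch below (the class name, field names, and endpoint path) is hypothetical. It only illustrates the general shape of the workflow the launch enables: describe a stimulation pattern in ordinary Python, serialise it, and send it over the internet to the cloud-hosted CL1.

```python
import json
from dataclasses import dataclass, asdict

# Hypothetical description of one stimulation burst sent to the live network.
# Field names are illustrative; the real biOS API may differ entirely.
@dataclass
class StimPattern:
    electrode: int        # which electrode on the chip to drive
    amplitude_uv: float   # pulse amplitude in microvolts
    frequency_hz: float   # pulse-train frequency
    duration_ms: int      # how long to stimulate

def encode_stim(patterns: list[StimPattern]) -> str:
    """Serialise a batch of stimulation patterns as the JSON body of a
    (hypothetical) request to a Cortical Cloud endpoint."""
    return json.dumps({"stim": [asdict(p) for p in patterns]})

# Build a two-electrode pattern locally; deploying it would then be a
# single HTTP call to the cloud service, e.g. with requests.post(...)
# against whatever endpoint the real API exposes.
body = encode_stim([
    StimPattern(electrode=12, amplitude_uv=150.0, frequency_hz=4.0, duration_ms=200),
    StimPattern(electrode=47, amplitude_uv=150.0, frequency_hz=4.0, duration_ms=200),
])
```

The point is less the specific schema than the division of labour: the developer's code runs anywhere, while the biOS on the CL1 translates the request into real electrical signals in and out of the living network.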
The power draw for a CL1 rack is 850 to 1,000 watts, a figure reported consistently by IEEE Spectrum, NotebookCheck, and Wikipedia. That is a fraction of what conventional AI data centers consume (tens of kilowatts per rack), though still far more than a light bulb. The company claims the neurons themselves are extraordinarily efficient compared to transistors doing equivalent computation. CEO Hon Weng Chong has said the biological approach requires less power than a handheld calculator, though that claim refers to the neural tissue, not the full device infrastructure.
On the capability side, Cortical Labs has moved beyond Pong. An independent developer named Sean Cole used the CL1's Python API to teach the neurons to play Doom — a three-dimensional first-person shooter requiring spatial navigation, enemy detection, and corridor planning. According to Cortical Labs chief scientist Brett Kagan, Cole achieved this in around a week, working with roughly a quarter as many neurons as the earlier Pong demonstration. The performance is better than random firing but far below human play. Kagan noted that unlike the Pong work, which took years of scientific effort, the Doom demo was built in days by someone without deep biology expertise — a sign, he argues, that the platform is becoming accessible.
The more significant announcement is the LLM integration. The neurons can now be connected to language models, a setup Cortical Labs calls Wetware-as-a-Service. The company's technical approach is rooted in the Free Energy Principle, a neuroscience framework developed by Karl Friston: the brain organises itself to minimise surprise, constantly predicting its inputs and adjusting when reality diverges from expectation. Instead of the dopamine-based rewards of biological learning, the CL1 uses electrical signals: ordered, predictable input when the neurons hit a target, unstructured noise when they miss. The neurons have no evolutionary past, no body, no sensory history. Yet within minutes of first exposure to a task, they begin improving.
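The order-versus-chaos scheme has a simple intuition under the Free Energy Principle: a predictable signal is one a surprise-minimising system can learn to expect, while noise is not. The toy below is my illustration, not Cortical Labs' algorithm: a scalar predictor updates toward each observation, and squared prediction error ("surprise") collapses on a structured stream but never settles on a random one.

```python
import random

def surprise_minimiser(observations, lr=0.2):
    """Toy free-energy-style learner: keep a running prediction of the
    input, measure surprise as squared prediction error, and update the
    prediction to shrink it. Illustration only, not the biOS."""
    prediction = 0.0
    errors = []
    for obs in observations:
        errors.append((obs - prediction) ** 2)   # surprise before updating
        prediction += lr * (obs - prediction)    # move toward the observation
    return prediction, errors

# "Order": a structured, repeatable stimulus; surprise collapses fast.
_, order_err = surprise_minimiser([1.0] * 30)

# "Chaos": unstructured noise; surprise stays high on average.
random.seed(1)
_, chaos_err = surprise_minimiser([random.uniform(-1.0, 1.0) for _ in range(30)])
```

Because the structured stream is the only one the learner can drive its surprise down on, pairing order with success and noise with failure turns predictability itself into the reward, which, per the description above, is the logic the CL1's feedback relies on.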
The Cortical Cloud launch is partly a commercial play. The CL1 hardware costs around $35,000 per unit. Cortical Cloud lets researchers and developers experiment without buying a physical device — a wetware equivalent of cloud GPU access. Whether biological neurons can perform useful work at scale, beyond demonstrating learning in constrained game environments, remains an open question. The efficiency argument is real but the practical constraints — keeping neurons alive, interfacing reliably, scaling beyond hundreds of thousands of cells — are substantial.
The LLM integration is the part to watch. If Cortical Labs can reliably combine biological learning with the generalisation capabilities of foundation models, the use cases extend beyond games: drug discovery screens, robotics control, adaptive sensors. None of that is demonstrated yet. But the platform is now open, and the code is in developers' hands.