Your brain is not a corporation. It does not operate by consensus. Every region is constantly competing for finite resources: glucose, oxygen, cognitive bandwidth. Yet for twenty years, the models researchers used to simulate brain activity treated it like a choir singing in harmony.
That changes this week.
Karl Friston and colleagues at University College London and the University of Cambridge published findings in Nature Neuroscience showing that whole-brain digital models fit real brain activity roughly twice as well when they account for competitive interactions between regions, not just cooperative ones. The work was validated across 14,000 neuroimaging studies spanning three species: humans, macaques, and mice. If the result holds, it could finally make personalized brain simulation useful for drug development, an area where roughly 90 percent of treatments for central nervous system disorders fail in human trials after working in animal models.
The competition finding is, in retrospect, obvious. When you focus attention on a task, some brain regions increase activity while others suppress theirs. You cannot do everything at once, and neither can your neurons. But the modeling field largely ignored this dynamic in silico, building simulations where neighboring regions were forced to cooperate whether their dynamics supported it or not.
The result was models that looked like brains in the aggregate but did not behave like any brain in particular. Digital twins that were, as the researchers note, more like distant cousins — models that could not reliably capture what made one person's brain different from another.
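To see why forcing cooperation distorts the picture, consider a toy two-region sketch. This is far simpler than any whole-brain model and is not the paper's actual formulation; it is just a noisy linear rate network in which coupling weights are either constrained to be positive (cooperative) or allowed to be negative (competitive). Only the signed version can reproduce the anti-correlated activity you see when one region suppresses another:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(W, steps=20_000, dt=0.01, noise=0.05):
    """Euler integration of a toy rate network: dx/dt = -x + W @ x + noise."""
    n = W.shape[0]
    x = np.zeros(n)
    trace = np.empty((steps, n))
    for t in range(steps):
        x = x + dt * (-x + W @ x) + noise * np.sqrt(dt) * rng.standard_normal(n)
        trace[t] = x
    return trace

# Cooperative-only coupling: the two regions excite each other.
W_coop = np.array([[0.0, 0.4],
                   [0.4, 0.0]])
# Competitive coupling: the two regions inhibit each other.
W_comp = np.array([[ 0.0, -0.4],
                   [-0.4,  0.0]])

coop = simulate(W_coop)
comp = simulate(W_comp)

# Mutual excitation makes the regions co-fluctuate (positive correlation);
# mutual inhibition makes their activity anti-correlate.
r_coop = np.corrcoef(coop.T)[0, 1]
r_comp = np.corrcoef(comp.T)[0, 1]
print(f"cooperative-only correlation: {r_coop:+.2f}")
print(f"competitive correlation:      {r_comp:+.2f}")
```

A model restricted to the first kind of coupling can never produce the negative correlation, no matter how its weights are tuned; that is the structural blind spot the paper targets.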
The clinical problem
The immediate stakes are drug development. About 90 percent of central nervous system drug candidates fail in Phase II or Phase III clinical trials — the highest failure rate of any therapeutic area after cancer. The reasons are many: incomplete understanding of neurobiology, the blood-brain barrier, heterogeneous patient populations, and endpoints that are difficult to measure objectively. But one contributing factor is increasingly clear: researchers have been testing drugs in people whose specific brain wiring they do not actually understand.
Whole-brain modeling offers a partial answer. If you can build a digital twin of a specific patient's brain dynamics, you can simulate how a drug propagates through their neural architecture before giving it to them. Epilepsy is the furthest along: personalized whole-brain models are already used to identify seizure onset zones for surgical planning. The competition finding could sharpen those models further, making simulated seizure dynamics more accurately match what the surgeon will actually encounter.
The same logic applies to drug screening. A compound that looks effective in a generic brain model might fail in a patient whose specific network architecture renders it ineffective or harmful. If the digital twin captures real inter-individual variation, researchers gain a way to stratify patients and predict response before the first human dose.
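The stratification logic can be made concrete with a deliberately crude sketch. Everything here is invented for illustration — the "drug" is modeled as nothing more than a scaling of inhibitory couplings, which is not any real pharmacological mechanism — but it shows how the same intervention can shift one patient's network dynamics while leaving another's untouched, purely because of differences in wiring:

```python
import numpy as np

def excitability(W):
    """Largest real part of the eigenvalues of the linearized dynamics -I + W:
    the closer to zero, the nearer the network sits to instability."""
    return np.linalg.eigvals(-np.eye(len(W)) + W).real.max()

def apply_drug(W, inhibition_boost=1.5):
    """Toy 'drug': scale up inhibitory (negative) couplings only.
    Purely illustrative, not drawn from the study."""
    W = W.copy()
    W[W < 0] *= inhibition_boost
    return W

rng = np.random.default_rng(2)
# Two hypothetical patients: A has signed (competitive) couplings,
# B has the same weight magnitudes but no inhibitory connections at all.
patient_a = 0.1 * rng.standard_normal((10, 10))
patient_b = np.abs(patient_a)

for name, W in [("patient A", patient_a), ("patient B", patient_b)]:
    before, after = excitability(W), excitability(apply_drug(W))
    print(f"{name}: excitability {before:+.3f} -> {after:+.3f}")
```

Patient B's readout does not move at all, because the drug's target simply is not present in that architecture — the in-silico version of a compound that works in one cohort and fails in another.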
The paper also found that models with competitive interactions were more subject-specific — better at capturing the unique brain fingerprint that distinguishes one person from another. That is the difference between a model that knows you are human and a model that knows you are you.
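Subject-specificity of this kind is often scored with a "fingerprinting" identification test: simulate every subject, then check whether each subject's simulated connectivity matches their own empirical connectivity better than anyone else's. The sketch below computes that metric on synthetic data — an assumed scoring scheme for illustration, which may differ from the paper's exact measure:

```python
import numpy as np

def upper_tri(fc):
    """Vectorize the upper triangle of a connectivity matrix."""
    i, j = np.triu_indices(fc.shape[0], k=1)
    return fc[i, j]

def identification_rate(empirical, simulated):
    """Fraction of subjects whose simulated connectivity correlates
    best with their own empirical connectivity."""
    emp = np.array([upper_tri(m) for m in empirical])
    sim = np.array([upper_tri(m) for m in simulated])
    n = len(emp)
    # Rows 0..n-1 index simulated subjects, columns index empirical subjects.
    match = np.corrcoef(sim, emp)[:n, n:]
    return (match.argmax(axis=1) == np.arange(n)).mean()

# Synthetic cohort: shared group structure plus individual variation.
rng = np.random.default_rng(1)
n_sub, n_reg = 10, 20
base = rng.standard_normal((n_reg, n_reg))
empirical, simulated = [], []
for _ in range(n_sub):
    indiv = base + rng.standard_normal((n_reg, n_reg))
    emp = indiv @ indiv.T                    # symmetric "connectivity"
    noise = rng.standard_normal((n_reg, n_reg))
    simulated.append(emp + (noise + noise.T) / 2)  # imperfect digital twin
    empirical.append(emp)

rate = identification_rate(empirical, simulated)
print(f"identification rate: {rate:.2f}")
```

A model that captures only group-level structure scores near chance (1/n subjects); a genuinely subject-specific twin scores near 1. That gap is what "knows you are you" means in practice.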
The validation question
Cross-species agreement is a meaningful evidence base. Findings that hold across humans, macaques, and mice in neuroimaging are rare; such agreement suggests the competitive dynamics are not an artifact of one imaging modality or one species' particular cortical folding patterns. The 14,000-study validation set gives the result statistical weight that a single-center study could not claim.
The limitation worth watching: the paper demonstrates that adding competition improves model fit. It does not yet demonstrate that the resulting models reliably predict individual patient outcomes in a clinical context. That is the step between "this is a better model" and "this changed medicine." The authors are careful about this in the paper. The press materials they authored may be less so.
The bottom line
Friston is among the most cited neuroscientists alive — his free energy principle has been applied everywhere from psychiatry to robotics. His lab's findings carry real weight in the modeling community. If the competition framework holds, it changes what "personalized" means in computational neuroscience: not just mapping someone's connectivity, but capturing the actual competitive dynamics that make their brain behave the way it does.
For founders in neurotech, digital health, or central nervous system drug discovery: the tooling for personalized brain simulation is moving faster than the underlying validation. The models are getting better. The clinical translation is still being worked out. Watch whether this result accelerates adoption in clinical trial design — that is where the money would follow.