Jason Moore has a word for what agentic AI, software that plans, executes, and debugs multi-step research tasks without being prompted for each step, is doing to his lab at Cedars-Sinai. He uses it carefully, because the word sounds like hype: mind-boggling.
"We have not replaced anyone yet," Moore, chair of the Department of Computational Biomedicine, told the medical center's newsroom Cedars-Sinai Newsroom. But the productivity gains are, by his accounting, unprecedented. "We are seeing things we have never seen before."
That tension — between gains that sound like replacement and a workforce that has not been replaced — is the actual story in biomedical AI right now. It is not that the technology cannot displace scientists. It is that it has not yet, and the reason why is as interesting as the reason it might.
Moore is one of the authors of a Nature Biotechnology perspective paper published in February 2026, "Agentic AI and the rise of in silico team science." The paper argues that AI is becoming a peer in the research process, not just a tool. An AI agent can assemble a team of specialized sub-agents, assign roles, catch errors, and hand back results that would have taken a human lab group weeks to produce. The paper's framing is optimistic. The subtext is competitive: labs that figure this out first will have a structural advantage.
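The paper does not publish code for this pattern, but in rough outline the coordination loop it describes looks something like the sketch below. Every name here is hypothetical: SubAgent, Orchestrator, and the plan and verify steps are stand-ins for illustration, not ESCARGOT's or the paper's actual design.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SubAgent:
    """One specialist in the in silico team, e.g. literature review."""
    role: str
    run: Callable[[str], str]  # takes a subtask description, returns a result

@dataclass
class Orchestrator:
    """Decomposes a question, delegates to specialists, checks their work."""
    team: list[SubAgent] = field(default_factory=list)

    def plan(self, question: str) -> list[tuple[SubAgent, str]]:
        # A real system would use an LLM to decompose the question into
        # role-specific subtasks; here every specialist gets the question.
        return [(agent, f"[{agent.role}] {question}") for agent in self.team]

    def verify(self, result: str) -> bool:
        # Stand-in for the error-catching step (a checker agent or
        # rule-based validator in a real system).
        return bool(result.strip())

    def answer(self, question: str) -> str:
        results = []
        for agent, subtask in self.plan(question):
            result = agent.run(subtask)
            if self.verify(result):  # drop outputs that fail the check
                results.append(f"{agent.role}: {result}")
        return "\n".join(results)

# Toy usage: lambdas stand in for LLM-backed specialists.
team = Orchestrator(team=[
    SubAgent("literature review", lambda t: f"found 3 relevant papers for {t!r}"),
    SubAgent("data cleanup", lambda t: f"validated dataset for {t!r}"),
])
print(team.answer("Which genes modulate response to drug X?"))
```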
Cedars-Sinai built its own agentic AI system, called ESCARGOT, and tested it against ChatGPT on a specific task: answering queries about Alzheimer's disease genes and drug interactions. The results were stark. ESCARGOT scored 80 to 90 percent accuracy. ChatGPT came in at roughly 50 percent (Cedars-Sinai Newsroom).
That gap matters. A general-purpose chatbot trained on the broad internet will always sacrifice domain depth for breadth. A system built on proprietary biomedical literature, tuned for how drug-gene interactions are actually evaluated inside a research hospital, will outperform any general model on the tasks that matter most to that domain. The labs with the best training data, not the best algorithms, are going to win.
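The mechanism behind that gap can be made concrete. A domain system grounds each answer in a curated store before falling back to a general model. What follows is a minimal sketch under invented assumptions: GENE_DRUG_KG stands in for a curated biomedical knowledge graph, general_llm for a broad-internet chat model, and the record is toy data, not any real interaction or API.

```python
# Everything below is invented for illustration; no real data or API is used.
GENE_DRUG_KG = {
    ("GENE_A", "DRUG_X"): "documented interaction (toy record)",
}

def general_llm(prompt: str) -> str:
    # Placeholder for a general-purpose model: fluent output, but with no
    # guarantee of matching the curated record.
    return "No known interaction."

def grounded_answer(gene: str, drug: str) -> str:
    fact = GENE_DRUG_KG.get((gene, drug))
    if fact is not None:
        # Answer straight from the curated store and say so.
        return f"{gene} x {drug}: {fact} [source: curated KG]"
    # Fall back to the general model, flagged as ungrounded.
    return general_llm(f"Interaction between {gene} and {drug}?") + " [ungrounded]"

print(grounded_answer("GENE_A", "DRUG_X"))  # served from the curated store
print(grounded_answer("GENE_B", "DRUG_Y"))  # falls back, and is labeled as such
```

The moat in this sketch is the dictionary, not the model: swap in a stronger general_llm and the grounded path still wins wherever a curated record exists.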
Moore does not frame this as replacement. He frames it as augmentation. His Nature Biotechnology paper describes AI agents as members of a research team, not replacements for one. "Unlike traditional AI that is primarily designed to complete a single task, agentic AI is a new generation of AI models that can independently perform multiple tasks simultaneously to achieve specific objectives," he told the Cedars-Sinai Newsroom.
That framing is the honest part. Nobody has been laid off because of ESCARGOT. The productivity gains Moore describes are real, but they are gains in throughput, not headcount reduction. The AI is doing the scut work: literature review, data cleanup, hypothesis scaffolding. The scientists are still the ones asking the questions.
The technology industry laid off nearly 80,000 employees in the first quarter of 2026, with roughly half those cuts attributed to AI (Tom's Hardware). That number is real, and it is being used by every company that announces a reorganization to justify decisions that have more to do with stock price than with technology readiness.
But the biomedical research world is not the technology industry. Lab work is expensive, slow, and deeply relational. A graduate student who spent three years learning to culture cells does not become obsolete overnight because an AI can generate a protocol. The training pipeline for scientists runs on apprenticeship logic — you learn by doing the work, under supervision, for years. If AI takes the doing out of the apprenticeship, the pipeline breaks in ways that will not show up in quarterly headcount reports.
The Brookings Institution raised this paradox in a March 2026 analysis: agentic AI may simultaneously increase the productivity of senior researchers and narrow the development path for junior ones. The work that once trained a scientist is the work that now trains the AI.
The Nature Biotechnology paper does not answer who decides what gets automated first. It is a vision paper, not an audit. It describes what agentic AI could do in an ideal world where the technology is developed responsibly and the productivity gains are shared.
ESCARGOT works because Cedars-Sinai has years of labeled biomedical data that Google does not. That data is a competitive moat. The labs and hospitals that sit on proprietary research records have a structural advantage in building specialized AI systems that outperform anything a general model can produce. The question of who owns that data, and who controls access to it, will determine whether the productivity gains Moore describes flow back into the research pipeline or simply compound the advantage of the institutions that were already winning.
Moore's own paper hints at this tension. The future it describes is one where AI agents are team members, not tools. But team membership implies power sharing, and power sharing is not what institutions with proprietary data typically want. Watch for the licensing agreements that research hospitals sign with AI companies — they will determine whether the labs that built the training data ever see a return.