The Real Story Behind Doctronic's $40M Isn't the Money
Doctronic, the AI healthcare startup that bills itself as the "world's most popular AI doctor," isn't just raising money.

image from Gemini Imagen 4
Doctronic, the AI healthcare startup that bills itself as the "world's most popular AI doctor," isn't just raising money. It's testing whether an algorithm can handle routine prescribing inside a supervised state sandbox — and a growing patchwork of federal programs suggests government interest in AI-delivered healthcare is broader than any single experiment.
The company announced a $40 million Series B on Monday, co-led by Abstract Ventures and Lightspeed Venture Partners, bringing total funding to $65 million. STAT News reported that Doctronic is on track for $10 million in revenue this year. But the raise is the least interesting thing happening here. The real story is a state regulatory experiment — and a federal bill that keeps failing — that together illuminate how fast the ground is shifting under AI prescribing.
The Utah experiment
In January, the Utah Department of Commerce announced a first-of-its-kind partnership with Doctronic for AI-assisted prescription medication renewals. The arrangement runs through Utah's regulatory sandbox, a framework that temporarily waives certain licensing requirements to let new technologies operate under supervised conditions. Utah's release describes the pilot as keeping "clinicians at the center" of care — Doctronic handles routine renewal workflows, but the sandbox framework maintains clinical oversight.
Margaret Woolley Busse, executive director of the Utah Department of Commerce, and state Sen. Kirk Cullimore, who sponsored the sandbox legislation, both backed the announcement. Matt Pavelle, Doctronic's co-CEO, framed the AI as a "doctor, not a device" — language that turns out to be load-bearing.
The pilot is limited: 190 medications, no controlled substances, phased rollout, prescription renewals only. But the framing matters more than the formulary. If AI is a "practitioner" rather than a "device," it sidesteps the Food and Drug Administration entirely. Medical devices go through FDA clearance. Practitioners get licensed by states.
The federal bill
That framing is what the Healthy Technology Act of 2025 would begin to codify at the federal level — with significant guardrails. H.R. 238, introduced by Rep. David Schweikert (R-Arizona) on Jan. 7, 2025, would amend the Federal Food, Drug, and Cosmetic Act to allow AI and machine learning systems to qualify as prescribing practitioners, but only if authorized by state law and approved, cleared, or authorized under separate federal law. It's narrower than "AI replaces your doctor" — the bill explicitly ties AI prescribing authority to both state authorization and federal regulatory approval.
Schweikert's own press page describes it as a way to "restore the human connection in medicine" by letting AI handle routine tasks.
The bill has no co-sponsors and hasn't moved from committee, according to GovTrack, a congressional tracking service. This is its third introduction. Schweikert filed nearly identical versions in the 117th and 118th Congresses; both died without a vote.
Federal programs worth watching
Empirical Health, an independent healthcare analysis publication, flagged something worth noting: several federal programs are pushing toward AI in clinical care on parallel tracks. ARPA-H, the Advanced Research Projects Agency for Health, is funding ADVOCATE, a program to build an autonomous AI cardiologist. The Centers for Medicare and Medicaid Services launched ACCESS, an outcomes-based payment model that would require AI-scale efficiency. The FDA introduced TEMPO, an enforcement discretion framework for AI medical devices.
These programs emerged independently, and there's no evidence they represent a coordinated strategy. But taken together, they suggest a federal appetite for AI in healthcare delivery that extends well beyond Utah's sandbox.
The evidence problem
About that evidence. The clinical validation that underpins Doctronic's safety claims comes primarily from a preprint posted to medRxiv, the preprint server for health sciences, in July 2025. The study reports 99.2 percent concordance between Doctronic's AI treatment plans and those of human physicians across 500 urgent care cases — a figure that has been widely cited in coverage of the company.
But the preprint has significant conflicts of interest. Every author holds equity in Doctronic. The ethics review was conducted by Doctronic's own research committee. And the adjudication method — the process for determining whether the AI's recommendations matched human ones — used another large language model, meaning an AI judged the AI. The study has not been peer-reviewed, and a rapid response in the BMJ, the British Medical Journal, raised concerns about the evidence and its scope.
A 99.2 percent concordance rate from a 500-person study, authored entirely by equity holders and reviewed by nobody outside the company, is not the same thing as independent clinical validation. It is a company telling you its own product works.
The security problem
There's a more immediate concern. In late January, Mindgard, an AI security research firm, published a red-team assessment of Doctronic's system. The findings were severe. Researchers extracted approximately 60 pages of system prompts, manipulated the AI into spreading vaccine misinformation, and — most alarmingly — tripled an OxyContin dose in the AI's SOAP notes by injecting fake regulatory updates into the conversation.
The Register reported that Doctronic said it had reviewed Mindgard's findings, but Mindgard's Peter Portnoy expressed doubt that meaningful fixes had been implemented. Zach Boyd, AI policy director at the Utah Department of Commerce, told The Register that the Utah deployment includes additional safeguards beyond the generic Doctronic product, and that controlled substances are excluded from the pilot. The Washington Post published an opinion piece on March 9 raising questions about AI prescribing based on thin evidence — five days after Mindgard's report, two weeks before the Series B announcement.
As of March, Mindgard says the vulnerabilities it disclosed remain exploitable. Doctronic has not responded publicly since the initial disclosure.
The access argument
None of this means the underlying problem is fake. Lightspeed's investment thesis cites the Association of American Medical Colleges' projection that the United States will face a shortage of up to 86,000 physicians by 2036. More than 100 million Americans lack easy access to primary care. Doctronic says it has conducted over 15 million AI consultations and served more than one million unique patients. The company's website offers free AI consultations, with $39 video visits for cases that need a human clinician.
Pavelle, who was previously founding CTO of Moda Operandi, the luxury fashion platform, co-founded Doctronic with Dr. Adam Oskowitz, a vascular surgeon at the University of California, San Francisco. The company uses a multi-agent architecture — multiple specialized AI models working in concert rather than a single chatbot.
The physician shortage is real. The access gap is real. The question is whether the regulatory interest building around AI prescribing is moving faster than the evidence, the security infrastructure, and the accountability mechanisms that should accompany a system that handles medication decisions for humans.
What to watch
The Utah sandbox has an expiration date — regulatory sandboxes are temporary by design. Whether Utah extends the arrangement, and whether other states follow, will depend in part on outcomes data that doesn't exist yet. H.R. 238 is likely dead again in this Congress, but the "practitioner, not device" framing has already taken hold at the state level, which is where licensing decisions are actually made.
If Doctronic's model works — genuinely works, with independent validation and robust security — it could meaningfully expand access to routine care for millions of underserved patients. If it doesn't, the growing government appetite for AI in healthcare will have outpaced the evidence base.
The $40 million is table stakes. The regulatory question is the bet.

