OpenAI named its first dedicated life sciences reasoning model after Rosalind Franklin — the chemist whose X-ray photograph of DNA was co-opted by James Watson and Francis Crick without credit, and who died before the Nobel Prize her colleagues collected was even awarded.
The choice is not subtle. OpenAI is pitching GPT-Rosalind to pharmaceutical companies and research institutions as a model that can reason across genomics, protein engineering, and chemistry — one that actually understands what a helix is, rather than pattern-matching to the word. To get those customers, OpenAI needs something it has historically struggled to provide: the credibility that scientists require before they trust an AI with decisions that affect human health.
Naming the model after Franklin is the company essentially waving a white flag at the scientific community, acknowledging that it knows it has a trust problem.
GPT-Rosalind launched Thursday as OpenAI's first purpose-built biology model (OpenAI blog post). The benchmarks are real: 0.751 on BixBench, a bioinformatics benchmark, outperforming every other model with published scores. On LABBench2, a broader research-task benchmark, it beat OpenAI's own GPT-5.4 on six of eleven tasks, with the largest gains in CloningQA, the end-to-end design of DNA and enzyme reagents for molecular cloning. In a partnership with Dyno Therapeutics, the model ranked above the 95th percentile of human experts on RNA sequence prediction and around the 84th percentile on sequence generation (OpenAI blog post).
Those numbers are from OpenAI's own evaluations, and Dyno has commercial relationships with the company. Independent benchmark verification does not yet exist. But the underlying capability claim — that a domain-specific model fine-tuned for biology outperforms general-purpose frontier models on scientific tasks — aligns with what researchers have been arguing for two years.
The model comes with a feature OpenAI is betting will differentiate it in a field saturated with confident AI mistakes: deliberate skepticism tuning. GPT-Rosalind is trained to tell users when something is a bad drug target, rather than generating plausible-sounding reasons to pursue a dead end (Ars Technica). The motivation is obvious to anyone who has watched AI-generated research literature proliferate through preprint servers. Overconfident AI outputs in drug discovery are not merely wrong; they are expensive, costing years and hundreds of millions of dollars in misdirected research.
Access is restricted to qualified U.S. enterprise customers through a trusted access program (OpenAI blog post). The reason, per Ars Technica's coverage: concerns that the model could be prompted to optimize virus infectivity. OpenAI is upfront about this. What is more notable is the Los Alamos National Laboratory partnership disclosed in the announcement. The lab, which conducts biosecurity research for the U.S. government, is working with OpenAI on AI-guided protein and catalyst design, including the ability to modify biological structures (OpenAI blog post). This is not standard government contracting. It suggests OpenAI is building institutional review capacity into product development rather than treating biosecurity as a PR problem.
The advisory tier around the launch reveals where OpenAI actually expects to make money. Alongside pharmaceutical partners Amgen, Moderna, and the Allen Institute, OpenAI listed McKinsey, Boston Consulting Group, and Bain as advisory partners (OpenAI blog post). That puts the consulting firms — not the AI itself — at the center of how pharmaceutical companies decide whether to trust GPT-Rosalind's outputs. The model is the instrument. The advisory layer is the revenue.
Franklin's name will appear in pharmaceutical research workflows for the first time since her X-ray photograph of DNA's structure was shown, without her knowledge or consent, to Watson and Crick in 1952. She died in 1958, four years before the Nobel Prize was awarded jointly to the three men who described the double helix — Franklin's data was instrumental in their conclusion. She did not share the prize.
OpenAI picked her name for a product designed to help pharma decide what to believe about biology. The choice suggests the company understands exactly how the scientific community sees it.