OpenAI wants to help write the rules for how AI gets used in drug development. It is also already selling the technology those rules would govern.
The FDA is drafting its first comprehensive guidance on AI in drug development, with a public comment period running through Q1 2026. According to Axios, OpenAI submitted formal comments arguing that its latest AI system, GPT-5 Pro, can identify new uses for existing drugs, and that this capability should factor into how the agency evaluates medicines. Axios reported that OpenAI did not disclose its commercial relationship with Novo Nordisk in those comments. type0 could not independently verify the filing's existence or contents; regulations.gov serves documents through an interactive JavaScript interface that blocks direct retrieval. Meanwhile, the same GPT-5 Pro technology is already being sold to Novo Nordisk under an April 2026 partnership covering research, manufacturing, and commercial operations.
The FDA and the European Medicines Agency published a set of 10 guiding principles for AI in drug development in January 2026, an early signal of the regulatory framework to come. The FDA separately issued draft guidance on AI-assisted regulatory decisions in January 2025 under docket FDA-2024-D-4689. The comment period was extended to Q1 2026 after industry feedback; final guidance is expected in Q2 2026.
OpenAI's policy position comes from Chris Lehane, the company's head of global policy, who joined in 2024 after stints at Airbnb and the University of California. His public arguments on AI and drug development have centered on GPT-5 Pro's drug repurposing capabilities: the model's ability to scan FDA-approved drugs and flag candidates for diseases that lack effective treatments. OpenAI has pointed to Amerimmune, a biotech using the model to identify existing drugs for rare conditions, as evidence the approach works.
Drug repurposing via AI is not new. Insilico Medicine and Relay Therapeutics have been doing similar work for years. What appears to be different in OpenAI's argument is that the FDA should formalize a role for large language models in the regulatory process itself: not just in research, but in how the agency evaluates medicines. That is the claim worth watching: whether the final guidance includes any language giving AI systems a formal role in assessing drug safety or efficacy.
Novo Nordisk's partnership with OpenAI embeds the technology across the drugmaker's entire operation: research, manufacturing, and commercial work. That scope is broader than a typical AI-pharma collaboration, which usually concentrates on discovery. The breadth suggests OpenAI may be building a proof of concept it can point regulators toward: demonstrate that its AI is already embedded across a major drugmaker, then use that integration to argue for a seat at the rule-writing table.
Smaller biotechs and academic researchers have less ability to do that. They lack the commercial relationships to showcase the kind of full-stack AI integration OpenAI is demonstrating. When the FDA writes rules that assume AI is embedded across a company's operations, it risks writing rules that favor incumbents with existing commercial reach, even when the technology itself is potentially general-purpose.
There is no requirement to disclose commercial partnerships in regulatory comments. But if OpenAI's comments reflect the position Axios described, the company would be both helping write the rules for AI in medicine and selling the technology those rules would govern, with smaller biotechs and academic medical centers largely absent from the process.