On October 19, a Florida court will begin examining what OpenAI's internal records actually say about its own safety deliberations, and for the first time the company will not be able to claim confidentiality as a shield. Florida Attorney General James Uthmeier has issued criminal subpoenas that reach deep into the company's files, and the trial is expected to put on the public record how OpenAI handled a warning it saw, weighed, and chose not to act on.
The case is the first attempt to apply accomplice liability law, a body of law written for human beings, to an AI model. If the theory succeeds, any general-purpose AI system whose outputs contribute to a crime could be named as a defendant. If it fails, the case will mark the outer boundary of how far prosecutors can stretch criminal law to cover software. Phoenix Ikner, now 21, is accused of using ChatGPT to plan an April 2025 shooting at Florida State University that killed two people and injured five. More than 200 AI-generated messages are in evidence, according to NPR and News4Jax.
Uthmeier has said publicly that if ChatGPT were a person, it would face murder charges, according to a press release from his office. The subpoenas, issued in April, cover everything from ChatGPT's threat-assessment protocols to the company's leadership structure, spanning March 2024 through April 2026. That window includes June 2025, when OpenAI identified Jesse Van Rootselaar, then under 18, discussing gun violence scenarios with ChatGPT, reviewed internal records, and made a deliberate choice not to refer the case to authorities. Van Rootselaar is accused of killing nine people and injuring 27 others in Tumbler Ridge, British Columbia, CBC reports. Maya Gebala, 12, was shot three times and left with catastrophic brain damage, according to court filings cited by CBC. The account was closed. Nobody was called.
OpenAI confirmed receipt of the subpoenas and said it is reviewing them. The company disputes the "active assistance" framing, characterizing its responses as factual information available through public sources, The New York Times reports. The defense is expected to argue that a model has no intent, no knowledge of what its user planned, and no ability to refuse a query.
What October 19 will make public for the first time is the complete internal record around the Canadian flagging: the exact threshold OpenAI applied, what its reviewers saw, who made the call, and whether the company's public account matches its internal files. The company's own letter to the Canadian Minister, reviewed by The Wall Street Journal, shows it decided the threshold for imminent and credible risk was not met. That record converts the Florida case from a legal theory about AI outputs into a specific accountability question about what the company did when it had actual notice. Every major lab publishing general-purpose models is watching to see whether a court will treat an AI company's internal safety deliberations as discoverable evidence when its outputs are alleged to have contributed to a crime.