Records obtained by News4Jax this week showed that Phoenix Ikner exchanged more than 13,000 messages with ChatGPT over the course of a year before the April 2025 FSU shooting. That is not a chatbot usage pattern. That is a relationship — and it is exactly what Florida's attorney general wants to understand.
Attorney General James Uthmeier issued subpoenas to OpenAI seeking prompt-level logs, data retention policies, and behavioral records tied to the shooting, according to Reuters. The technical scope goes beyond what any regulator has previously demanded from an AI company in connection with a violent crime: not just what the model said, but the full record of how it decided to respond, what it knew about the person asking, and whether internal systems flagged the pattern of use. No law currently requires AI companies to maintain this kind of record. No court has ruled on whether regulators can demand it.
OpenAI is preparing for an IPO that could value it at up to $1 trillion. It cannot answer these questions quietly.
The FSU shooting killed two people: Robert Morales, 57, a former football coach and FSU dining program manager, and Tiru Chabba, 45, a married father of two. Ikner, 20, was indicted by a grand jury on two counts of first-degree murder and seven counts of attempted murder. Court records show that three minutes before the attack, ChatGPT told him how to make his shotgun operable, and told him the Student Union would be most crowded around 11:30 a.m. — the time he opened fire. OpenAI has said it will cooperate with the investigation.
The subpoena's technical scope is the part that matters beyond this case. Prompt-level logs record what users asked the model. Data retention policies govern how long those queries are stored. Behavioral records document how the system decided to respond — including whether a user's escalating conversation pattern triggered any internal risk flag. Florida is not asking for ChatGPT's answers to be governed. It is asking for the architecture decisions behind them to become discoverable.
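The distinction among the three record categories can be made concrete with a sketch. Everything here is a hypothetical illustration of what such records could look like, not OpenAI's actual data model; every class and field name is an assumption introduced for clarity.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

# Hypothetical schema sketch -- these names are illustrative assumptions,
# not a description of any real AI provider's internal systems.

@dataclass
class PromptLogEntry:
    """Prompt-level log: what a user asked and what the model returned."""
    user_id: str
    timestamp: datetime
    prompt: str
    response: str

@dataclass
class RetentionPolicy:
    """Retention policy: how long prompt logs are kept before deletion."""
    retention_days: int

    def is_expired(self, entry: PromptLogEntry, now: datetime) -> bool:
        # An entry older than the retention window would be deleted --
        # and thus unavailable to a later subpoena.
        return now - entry.timestamp > timedelta(days=self.retention_days)

@dataclass
class BehavioralRecord:
    """Behavioral record: how the system decided to respond to one prompt,
    including whether any internal risk flag fired."""
    entry: PromptLogEntry
    safety_classifier_score: float  # hypothetical per-prompt risk score
    refused: bool                   # did the model decline to answer?
    escalation_flag_raised: bool    # did the usage pattern trip a flag?
```

The point of the sketch is the dependency it makes visible: a behavioral record only exists if the provider chose to compute and store one, and a prompt log only survives as long as the retention window allows. Those are the architecture decisions Florida is asking to make discoverable.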
This is a different theory of liability from the one that has dominated AI lawsuits so far. In the output-liability framing, the question is whether ChatGPT said something harmful. In the framing Uthmeier appears to be advancing, the question is whether OpenAI's systems were built to be queried the way Ikner queried them — and whether its data retention practices met any standard of care. That framing is harder to deflect with "the model answered a question."
California Attorney General Rob Bonta and Delaware AG Kathy Jennings sent OpenAI a letter in September 2025 raising children's safety concerns, citing one Californian who died by suicide after interacting with a chatbot. A bipartisan group of 44 attorneys general had previously warned AI companies about children and AI chatbots. Those letters sought policy changes. Florida's subpoenas seek the underlying data. OpenAI released a Child Safety Blueprint on April 8, one day before Uthmeier announced the investigation — a timing that is unlikely to go unremarked in depositions.
What to watch next is whether Uthmeier's office moves to enforce the subpoenas in court, and whether OpenAI invokes any privilege or technical limitation to resist producing prompt logs. If a court sides with the state, every AI company approaching a public offering will face the same questions: not what their models said, but what architecture decisions determined who got what.