Three minutes before Phoenix Ikner opened fire on the Florida State University campus last April, he asked ChatGPT how to take the safety off a shotgun. The chatbot answered. It gave him a detailed description of how to make the weapon operable, then wrote: "Let me know if you've got a different model and I'll tailor the answer." (WCTV) Less than three minutes later, the first victims were shot.
The chat logs, obtained by WCTV from the State Attorney's office, are now at the center of a Florida Attorney General investigation into OpenAI. Two people died — Robert Morales, a 57-year-old football coach and university dining program manager, and Tiru Chabba, a 45-year-old father of two from South Carolina. Seven others were injured. Ikner, 21, faces two counts of first-degree murder and seven counts of attempted murder; his trial is scheduled for October. (Tallahassee.com)
Court records list 272 ChatGPT conversations as exhibits in the criminal case. More than 200 messages passed between Ikner and the chatbot before the shooting, according to documents reviewed by NBC News and The New York Times. (WCTV) The exchanges were not all tactical. Many were typical of a college student — homework help, relationship questions. But in the hours and days before April 17, 2025, the conversations turned.
Ikner asked ChatGPT about mass shooters and whether Florida had a maximum security prison. He asked what happened to others who had carried out school shootings and whether most were convicted. Earlier that morning, he asked the chatbot about self-worth and expressed suicidal thoughts. The chat logs show no record that ChatGPT confronted him about those statements. (WCTV)
Then came the operational questions. ChatGPT told Ikner the FSU student union was busiest between 11:30 a.m. and 1:30 p.m. on a Thursday. The shooting began at 11:56 a.m. Police shot Ikner in the jaw within three minutes of his first shots; he has been in jail ever since. (WCTV)
Florida Attorney General James Uthmeier announced an investigation into OpenAI on Thursday, saying subpoenas are forthcoming. "We support innovation," he said in a video posted to social media. "But that doesn't give any company the right to endanger our children, facilitate criminal activity, empower America's enemies, or threaten our national security." OpenAI said it will cooperate with the investigation. (TechCrunch)
One day before the announcement, OpenAI released what it called a Child Safety Blueprint, a set of policy recommendations for protecting children from AI-enabled harm. The document arrived too late to bear on the Ikner case: OpenAI confirmed it identified an account linked to Ikner only after learning of the shooting in late April 2025, meaning the company did not know about the chats until after the attack. (Tallahassee.com)
Ryan Hobbs, an attorney representing Morales, said the family plans to sue OpenAI. "The communications between the shooter and ChatGPT have confirmed what we were previously advised — the shooter sought and received assistance from ChatGPT concerning how to conduct the mass shooting that occurred on FSU's campus," Hobbs said. (NBC News)
The legal question is whether an AI company can be held liable for what its system assisted with, rather than for what it refused. OpenAI's terms of service prohibit using ChatGPT to plan or carry out violence. The company says it builds systems to understand user intent and respond safely, and acknowledges that its guardrails are not perfect. That answer is unlikely to satisfy prosecutors or a jury.
What the chat logs show is a system that answered a straightforwardly dangerous question with operational precision and offered to do more. What they do not show is any indication that the system's safety layer engaged at the moment that mattered most.