Altman Apologized Thursday. OpenAI Knew Seven Months Earlier. Nobody Did Anything.
Maya Gebala was shot three times at school and survived. OpenAI had flagged the shooter seven months earlier and banned the account. Nobody told the police. Altman apologized Thursday.

Sam Altman apologized Thursday for a failure that cost eight people their lives. Seven months earlier, OpenAI had flagged the shooter's ChatGPT account for violent activity, banned it, and decided not to tell the police. No law required the company to report what it had found. None does.
Jesse Van Rootselaar's account was detected by OpenAI's automated systems in June 2025, reviewed by human moderators, and banned for violence-related activity, CBC reported. About a dozen OpenAI employees argued the company should refer the account to law enforcement. Management said no, flyingpenguin reported, citing the Wall Street Journal. The stated reason: the activity did not meet OpenAI's internal threshold for an imminent and credible risk of serious physical harm. On February 10, Van Rootselaar killed eight people in Tumbler Ridge, British Columbia: five children and a teacher's aide at a school, and two others at home, before dying by suicide, the National Post reported.
The day after the shooting, OpenAI met with B.C. government officials to express horror and offer support. The company did not mention that it had already identified the shooter as a threat months earlier, the Globe and Mail reported. Premier David Eby called the apology necessary but grossly insufficient given the devastation inflicted on the families, CP24 reported.
The apology landed two days after OpenAI published a formal protocol for when flagged accounts are reported to law enforcement, a document that did not exist in June 2025. Ann O'Leary, an OpenAI vice president, said this week that if the current protocol had been in place then, the account would have been referred, CP24 reported. The shift is narrow in scope: it sets a lower bar for when AI-generated content crosses into a reportable threat. It does not answer the harder question of who inside a company makes that judgment, on what evidence, and whether those decision-makers are equipped to assess real-world danger or are primarily managing legal exposure.
Maya Gebala, who was shot three times at the school and survived, is suing OpenAI, alleging that the company provided the shooter with information, guidance, and assistance, and that it knew of the threat but took no action, CBC reported. OpenAI declined to specify what the AI told the user or who approved the June 2025 decision, citing active litigation.
The RCMP already had an active file on Van Rootselaar, including prior weapons seizures and mental health apprehensions under the Mental Health Act, the Globe and Mail reported. Whether a police referral from OpenAI in June would have changed the outcome is unknowable. It is also, for the families of the dead, unanswerable.
What the litigation surfaces, and what OpenAI's public statements have so far obscured, is who was in the room when a dozen employees wanted to act and management chose not to. The apology acknowledges a failure. It does not name the decision-makers, the specific information they weighed, or the standard they were applying. Eight people are dead. The chain that might have interrupted the shooting broke at the point where OpenAI's internal process gave management the authority to override the people who saw the flag and wanted to report it.