OpenAI’s threat-referral rule is now on trial
OpenAI is facing a more dangerous kind of lawsuit than the usual claim that a chatbot somehow contributed to harm. Families of victims in the Tumbler Ridge, British Columbia, mass shooting are trying to prove that OpenAI recognized a credible violent threat inside ChatGPT, debated whether to call police, and stopped at banning the account instead. (Reuters)
What raises the stakes beyond the headline is the timing. The lawsuits landed the same day OpenAI published a new safety post saying it notifies law enforcement when conversations indicate an imminent and credible risk of harm to others. (OpenAI) The Guardian reported that OpenAI published that post after the paper approached the company for comment. That turns the case into a test of something more specific than chatbot harm: whether an AI lab's internal threat-referral judgment can become a legal duty.
Reuters reported that seven lawsuits were filed in federal court in San Francisco against OpenAI and chief executive Sam Altman. According to one complaint cited by Reuters, OpenAI's automated systems flagged the shooter's ChatGPT conversations in June 2025 after she described gun-violence scenarios. Reuters also reported that safety team members recommended contacting police after concluding she posed a credible and imminent threat of harm.
That recommendation is the hinge. Social platforms have spent years arguing that bad moderation is not the same thing as legal responsibility for offline violence. These plaintiffs are trying to move OpenAI into a harder category: not just a platform that hosted dangerous language, but an operator that identified a real-world threat and still kept the decision inside an internal safety process.
OpenAI's defense, as Reuters described it, is that the flagged account did not meet the company's internal criteria for a law-enforcement referral. (Reuters) The local publication Tumbler RidgeLines reported that Altman said on April 24, "I am deeply sorry that we did not alert law enforcement to the account that was banned in June." That matters because it makes the dispute look less like a factual denial and more like an argument over where OpenAI believed its reporting threshold should have been.
OpenAI's new safety post makes that threshold newly visible. The company said it notifies law enforcement when conversations indicate an imminent and credible risk of harm to others, and that its enforcement tools can include disabling accounts, banning other accounts from the same user, and trying to stop new ones from being opened. (OpenAI) The public record still leans heavily on lawsuit allegations and prior reporting rather than documents the company has released itself.
There is a real counterforce here. Courts may be reluctant to create a broad duty for AI labs to report dangerous user conversations to police, even in a case this horrific. A rule that sounds obvious when a threat later turns into violence gets harder to apply once it has to contend with false positives, privacy concerns, and users describing fictional scenarios. OpenAI can argue that safety teams see disturbing material all the time and that a referral standard built on imperfect model flags could create its own harms.
Still, the lawsuits put pressure on a question every frontier lab would rather keep internal: when a model company says it can identify an imminent and credible threat, what exactly does it owe the public after that? If judges or regulators decide that banning an account is not enough once a company has crossed its own danger threshold, OpenAI's safety language stops being a blog promise and starts looking like evidence.