OpenAI Faces Seven Lawsuits Over Shooter's ChatGPT Account It Flagged but Did Not Report
Seven families whose loved ones were killed or injured in the February 2026 mass shooting in Tumbler Ridge, British Columbia, have sued OpenAI and CEO Sam Altman in San Francisco federal court, alleging the company knew eight months before the attack that the shooter was using ChatGPT to plan violence and chose not to alert law enforcement.
OpenAI confirmed the lawsuits, filed April 29, after disclosing that it had flagged the shooter's account in June 2025 for discussing gun violence, considered whether to refer the account to the Royal Canadian Mounted Police, and decided the threshold for referral was not met. The company banned the account and moved on.
On February 10, 2026, 18-year-old Jesse Van Rootselaar killed her mother and 11-year-old stepbrother at home, then traveled to Tumbler Ridge Secondary School, where she killed five children and an education assistant before killing herself. Twenty-five people were injured. It was one of Canada's deadliest mass shootings in years.
"I am deeply sorry that we did not alert law enforcement to the account that was banned in June," Altman wrote in a letter dated April 23 and published April 24. British Columbia Premier David Eby called the apology "necessary, and yet grossly insufficient for the devastation done to the families of Tumbler Ridge."
The lawsuits name seven families and seek both damages and court orders that legal experts say could reshape how every AI company handles credible threats identified through its systems. The Gebala lawsuit, filed on behalf of 12-year-old survivor Maya Gebala, asks the court to require OpenAI to alert law enforcement whenever its systems identify a user who poses a real-world risk of violence. A second lawsuit alleges OpenAI made a "conscious decision" not to warn authorities because doing so could have exposed the volume of violence-related conversations on ChatGPT and damaged the company's path to a nearly $1 trillion initial public offering.
The legal theory invokes the landmark Tarasoff precedent, which established that therapists who identify credible threats from patients have a duty to warn intended victims. The lawsuits argue that by engaging with users "as a therapist, a life coach" — Altman himself acknowledged on a podcast that people discuss their most personal problems with ChatGPT — OpenAI assumed a heightened duty to act when it detected violence planning. The cases also invoke product liability doctrine, arguing that ChatGPT is a defective product because its design — willing to engage endlessly on any topic, to validate any fixation — is what made it dangerous.
Robin Feldman, director of the AI Law and Innovation Institute at UC Law San Francisco, said the cases enter "untried territory." "The old doctrines are being applied to new circumstances," she told KQED. The core questions: are LLMs protected like bulletin boards under Section 230, or do they have an affirmative duty to act when they detect credible threats of violence? And if so, what does that duty look like operationally?
That second question is what the plaintiffs are actually after. The injunctive relief they're seeking — mandatory reporting infrastructure, automatic triggers when violence risk is detected, permanent bans for violent users — would effectively write the engineering specification for how every general-purpose AI company must respond to credible threats. If courts grant it, "safe AI" stops being a marketing claim and becomes an operational mandate with product liability consequences for failure.
The cases also cite what they describe as a familiar corporate calculus. In the 1970s, Ford calculated that paying settlements to families of victims whose Pintos caught fire after rear-end collisions cost less than fixing the fuel tank design. "For Ford, the dangerous design was a flaw in an otherwise ordinary product," the Gebala lawsuit states. "But for OpenAI, the dangerous design is the product."
OpenAI said in a statement it has "a zero-tolerance policy for using our tools to assist in committing violence" and has strengthened safeguards, including improved detection of repeat policy violators. The company declined to comment on specific allegations.
The lawsuits are not the first allegations that OpenAI's chatbot provided harmful guidance to users who went on to commit violence. In a separate case, the defendant in the Florida State University mass shooting received tactical advice from ChatGPT shortly before opening fire, according to chat logs, and OpenAI faces a criminal probe in Florida over that case. Jay Edelson, the lawyer representing the Tumbler Ridge families, also represents the parents of a California teenager who died by suicide after using ChatGPT, and the estate of an 83-year-old Connecticut woman killed by her son after what that lawsuit alleges was ChatGPT's amplification of his paranoid delusions.
The pattern is what makes the Tumbler Ridge case different from a typical product liability suit, plaintiffs say. This is not a user who stumbled onto harmful content. This is a company that identified a specific threat from a specific account, weighed whether to act, and decided the reputational cost of reporting was higher than the risk of staying silent. Eight months later, eight people were dead.
Altman's apology acknowledged the failure. What the lawsuits will test is whether that failure constitutes negligence under the law — and whether the remedies go beyond damages to rewrite the operating rules for every AI company that builds a system capable of understanding human language.