OpenAI’s safety pledges in the wake of Tumbler Ridge aren’t AI regulation — they’re surveillance
Eight months before Jesse Van Rootselaar killed eight people in Tumbler Ridge, British Columbia, his ChatGPT account was flagged for violent queries. Human moderators reviewed the interactions. Some advocated reporting the matter to law enforcement; others, applying the company's internal thresholds, decided against it. The account was banned. Nothing was reported to the RCMP.
The failure was not mechanical. It was institutional. And the response that followed is the subject of an urgent debate about what AI governance actually means.
In the two days after news of the failure became public, OpenAI CEO Sam Altman met with Canada's federal AI minister, Evan Solomon, and British Columbia Premier David Eby. The meetings produced a set of commitments from the company: reporting threats directly to the RCMP, retroactive review of previously flagged accounts, distress-redirect protocols, access to OpenAI's safety office for Canadian experts, and an agreement to work with the province on regulatory recommendations to Ottawa. Altman also agreed to apologize to Tumbler Ridge.
These are significant gestures. They are also, according to University of Toronto researcher Michael Lydeen, who wrote about the case in The Conversation, the wrong answer to the right question.
The governance question Tumbler Ridge raised was not "should OpenAI report flagged accounts more quickly?" It was "who decides, under what legal authority, with what oversight?" In the absence of an answer, OpenAI supplied its own: faster internal thresholds, direct RCMP referral, proprietary criteria for what counts as a threat. "That is not a fix," Lydeen writes. "It is the same unaccountable architecture with a faster trigger."
The deeper problem Lydeen identifies is what he calls the "surveillance substitution." The proposed settlement does not regulate AI. It regulates users. The entire apparatus being constructed — internal threat identification, flagging, RCMP referral — is oriented toward monitoring what people say to AI, not toward governing how AI systems are designed, trained, or constrained in their responses. True AI regulation asks whether a model might amplify harmful ideation through its interaction patterns. It asks how the system is built and what obligations attach to its deployment. The current arrangement asks none of these questions.
The chilling effect is unstudied but potentially severe. Research on how people interact with AI in distress consistently shows that users disclose suicidal or violent thoughts to chatbots precisely because the interaction feels private and non-judgmental. If that space becomes a monitored channel where concerning disclosures trigger law enforcement referrals based on opaque corporate criteria, the most vulnerable users may stop seeking help altogether.
OpenAI is not acting in bad faith. It is behaving as a rational private entity in the absence of a regulatory framework, offering the minimum viable response to political pressure while preserving operational autonomy. Look south and the logic becomes clear: when the Pentagon sought AI models with safety guardrails removed, OpenAI moved to fill the gap. In Canada, the dynamic is inverted — OpenAI is volunteering concessions designed to pre-empt binding legislation that would actually constrain its operations. The pattern is consistent: support broad norms with no legal force; resist specific domestic obligations that carry real consequences. "This is how regulatory capture begins," Lydeen writes. "Not with corruption, but with convenience."
Canada has genuine leverage: unusual cross-party consensus that something must change, public attention that has given AI governance a human face, and a provincial government that understands the stakes. The question is whether it uses that leverage to define binding thresholds for AI flagging — developed with mental health professionals, privacy experts, and law enforcement — or accepts OpenAI's pledges as sufficient and normalizes corporate self-regulation as the baseline.
What Tumbler Ridge demands is not more efficient surveillance of users. It is a regulatory architecture that addresses the systems themselves.