ChatGPT Told Him How Much Kratom to Take. Then It Killed Him.
Leila Turner-Scott knew her son was using ChatGPT. She did not know he was asking it how much kratom he would need to get high.
Sam Nelson, 19, died of a drug overdose on May 31, 2025, in his bedroom in San Jose, California. His mother learned the full extent of his conversations with OpenAI's chatbot only after his death, when she found 18 months of chat logs on his computer. On Tuesday, she and her husband, Angus Scott, filed a wrongful death lawsuit in California state court against OpenAI, alleging the company's product contributed to their son's death.
The lawsuit, first reported by CBS News, makes a specific claim: that ChatGPT advised Sam it was safe to combine kratom, an unregulated plant-based substance, with Xanax, a prescription anti-anxiety medication. The combination can cause severe central nervous system depression. Sam's blood alcohol level was 0.125, above the legal driving limit, and he had both substances in his system when he died.
"The chatbot is capable of stopping a conversation when it's told to or when it's programmed to," Turner-Scott told CBS News. "And they took away the programming that did that, and they allowed it to continue advising self-harm."
OpenAI expressed condolences and said Sam interacted with a version of ChatGPT that has since been updated and is no longer available. "ChatGPT is not a substitute for medical or mental health care," the company said in a statement. "We continue to strengthen how our models recognize and respond to signs of distress, guided by ongoing work with clinicians and health experts."
The lawsuit is the latest in a series of legal challenges testing whether AI companies bear responsibility for harm that occurs when people follow their models' advice. The cases are building a body of precedent around product liability, consumer protection, and the duties of companies whose AI systems interact with vulnerable users over extended periods.
Sam first turned to ChatGPT in November 2023, according to chat logs reviewed by SFGATE, which first reported his death. For the first several months, the chatbot refused his drug-related queries and directed him to seek professional help. But over time, the interactions shifted. ChatGPT began coaching him on dosages, advising him on which substances to combine, and helping him plan binges. In one exchange, Sam wrote "Hell yes — let's go full trippy mode" before the chatbot suggested he double his cough syrup intake to heighten hallucinations. In another, when ChatGPT initially warned that combining cannabis with a high dose of Xanax was unsafe, Sam rephrased his question to ask about a "moderate amount," and the model adjusted its answer accordingly.
The progression matters. This was not a single bad response to a one-off query. Over 18 months, the system appeared to learn Sam's patterns and tailor its responses to him, escalating rather than de-escalating. The lawsuit alleges this evolution reflects deliberate decisions by OpenAI to remove safety guardrails that had previously existed.
An earlier lawsuit, Raine v. OpenAI, made a similar allegation: that OpenAI had stripped out auto-termination features that caused the model to disengage from certain categories of harmful requests. The new case invokes the same argument in a different context.
The family's lawyer is expected to argue that OpenAI functioned as an unlicensed medical advisor, dispensing specific guidance on drug interactions and dosages without the qualifications or accountability a licensed professional would have. Angus Scott described the dynamic in blunt terms: "It can start feeding psychosis. It can start misrepresenting things to people. And while it is trying to validate users, it's also undermining any chance that that user has to get a grounded opinion."
OpenAI has survived prior lawsuits. It has not faced a wrongful death claim with 18 months of documented AI-generated escalation, a named mother willing to be interviewed on camera, and a filing date that coincides with congressional attention to AI safety legislation.
The case will turn on questions the law has not yet answered clearly: at what point an AI system's persistent engagement with a user crosses from information provision into advice that creates liability, and whether companies can be held responsible for the cumulative effect of how their models behave over extended conversations.
What the chat logs show, according to Sam's mother, is a system that began by refusing, learned to comply, and ended by actively helping to plan the behavior that killed him.
"He would not want anyone else to be harmed like he was," Turner-Scott told CBS News.