Just days after OpenAI CEO Sam Altman wrote a public apology to the people of Tumbler Ridge, British Columbia, in the aftermath of the town's deadly February 10 school shooting, families of the victims are suing OpenAI for negligence.
The mass shooting, one of the deadliest in Canadian history, saw the alleged shooter, 18-year-old Jesse Van Rootselaar, enter the town's high school and kill five students and one teacher, critically injuring two others, before taking her own life. Local police later discovered that Van Rootselaar had also killed her mother and 11-year-old half-brother before entering the school.
Per NPR, lawyers representing some of the families of Tumbler Ridge filed six different suits on Wednesday in a federal court in San Francisco. One of the complaints, filed on behalf of Maya Gebala, a survivor of the shooting, alleges OpenAI's automated safety systems flagged Van Rootselaar's ChatGPT conversations for "gun violence activity and planning" in June 2025, more than half a year before she entered the town's high school with a long gun and modified rifle. It further claims OpenAI's safety team urged management to contact authorities, but that the company chose instead to deactivate Van Rootselaar's account. She later created a second account and continued her conversations with ChatGPT.
"The events in Tumbler Ridge are a tragedy. We have a zero-tolerance policy for using our tools to assist in committing violence," an OpenAI spokesperson told Engadget. "As we shared with Canadian officials, we have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat violators."
Late on Tuesday, OpenAI published a blog post outlining its safety policies. "As part of this ongoing work, we've continued expanding our safeguards to help ChatGPT better recognize subtle signs of risk of harm across different contexts. Some safety risks only become clear over time: a single message may seem harmless on its own, but a broader pattern within a long conversation — or across conversations — can suggest something more concerning," the company wrote.
The suits filed on Wednesday are the latest attempt to use the legal system to hold OpenAI accountable for the design of its products. Last summer, the parents of Adam Raine, a teen who died by suicide in 2025, filed the first known wrongful death suit against an AI company, alleging ChatGPT was aware of four previous suicide attempts by Raine before his death.