
OpenAI Faces Lawsuits Over Deadly Mass Shooting in Canada

Why This Matters

This lawsuit highlights the growing legal and ethical challenges faced by AI companies like OpenAI in preventing misuse of their technology, especially in violent contexts. It underscores the importance of developing robust safeguards and accountability measures to protect users and society from potential harm caused by AI-driven interactions.

Key Takeaways

The families of victims of a February school shooting in British Columbia filed seven lawsuits Wednesday in federal court in San Francisco against OpenAI, the creator of ChatGPT. The suits claim that OpenAI's handling of the shooter's use of its AI allowed the shooting to happen.

The cases could have major implications for future chatbot safeguards and whether companies can be held liable for how people use artificial intelligence.

The shooting occurred on Feb. 10 when an 18-year-old former student entered a secondary school in Tumbler Ridge, British Columbia, and opened fire using a modified handgun, killing five children and an education assistant, according to news reports. Investigators allege that the shooter had also killed her mother and half-brother. The combined fatalities made this one of the deadliest shootings in Canadian history. The shooter died at the scene, apparently of a self-inflicted gunshot wound.

Before the attack, the shooter had engaged in conversations with ChatGPT involving violence.

OpenAI says it has taken steps intended to address issues raised by the lawsuits.

"We have already strengthened our safeguards, including improving how ChatGPT responds to signs of distress, connecting people with local support and mental health resources, strengthening how we assess and escalate potential threats of violence, and improving detection of repeat policy violators," an OpenAI spokesperson told CNET in an email.

OpenAI co-founder and chief executive Sam Altman wrote a letter to the families, which was published on the local news site Tumbler RidgeLines.

"The pain your community has endured is unimaginable," Altman wrote.

He referred to the shooter's ChatGPT account, writing, "I am deeply sorry that we did not alert law enforcement to the account that was banned in June."

CBS News reports that the shooter's account was flagged in 2025 for misusing ChatGPT for "violent activities" and then banned. OpenAI told CBS that it considered flagging the account to law enforcement but determined it "did not pose an imminent and credible risk of serious physical harm to others."
