Yesterday, OpenAI published an anodyne blog post on its “commitment to community safety.”
Taking a reassuring tone, the post walks readers through a series of unobjectionable commitments. It declares that “mass shootings, threats against public officials, bombing attempts, and attacks on communities and individuals are an unacceptable and grave reality in today’s world,” which is true. It reflects on “how quickly violent intent can move from words to action,” before adding that people may “bring these moments and feelings into ChatGPT,” a product that the company says it’s training to “recognize the difference” between hypothetical and imminent violence — and “to draw lines when a conversation starts to move toward threats, potential harm to others, or real-world planning.” It adds that OpenAI is working to expand its safeguards “to help ChatGPT better recognize subtle signs of risk of harm across different contexts,” and explains that it will work to “surface real-world support and refer to law enforcement when appropriate” based on a user’s interactions with the service.
Reading it, someone with limited context would come away with the impression that the company is addressing purely theoretical concerns: that it's proactively trying to head off bad things that might happen.
That suggestion is bizarre, though, because the reality is that OpenAI’s flagship chatbot has already been linked to a wide range of real-world violence.
In fact, the most extraordinary thing about the post is what OpenAI neglected to mention: the news that almost certainly motivated it in the first place. The company published the blog as news organizations, Futurism included, were reaching out for comment on a new round of seven lawsuits, set to be made public the next day, filed against it by families of victims of the February school massacre in Tumbler Ridge, British Columbia.
Though the blog post made no mention of it, the Tumbler Ridge shooter was a ChatGPT user. Weeks after the tragedy rocked the rural town in February of this year, the Wall Street Journal revealed that back in June 2025, OpenAI's automated moderation tools had flagged the shooter's account for graphic descriptions of gun violence. Human reviewers were so alarmed that several pushed OpenAI leaders to alert local officials. Those leaders chose not to, and the company instead deactivated that specific account; as OpenAI later admitted, though, the shooter simply opened a new account and continued to use the service, a workaround that OpenAI's own customer service has been found encouraging deactivated users to employ.
Roughly eight months later, the shooter first murdered her mother and stepbrother at home, then took a modified rifle to Tumbler Ridge’s secondary school, where she killed five students and a teacher and wounded more than two dozen others. The murdered students were all aged 12 to 13.
Worse, Tumbler Ridge isn't the only mass shooting that ChatGPT has been linked to.
Florida investigators recently launched a criminal probe into ChatGPT over the chatbot's role in the April 2025 shooting at Florida State University, which killed two people and wounded several others. Extensive chat logs between ChatGPT and the alleged shooter, then-20-year-old Phoenix Ikner, obtained by The Florida Phoenix, show the chatbot openly discussing mass violence with the user. Ikner asked whether Oklahoma City bomber Timothy McVeigh “was right” and whether ChatGPT thought a shooting at FSU would make the news; in his final prompt before killing two people, he turned to the bot for help switching off the safety on his firearm, a request to which the AI service reportedly responded with detailed instructions.