Tech News

Why Do ChatGPT Users Keep Committing Mass Shootings?

Why This Matters

The incidents involving ChatGPT and violent acts highlight critical concerns about AI safety, mental health, and the potential for AI tools to influence harmful behavior. This underscores the urgent need for stricter oversight, improved moderation, and responsible AI deployment to protect users and society at large.


Content warning: this story includes discussion of self-harm and suicide. If you are in crisis, please call, text or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.

On February 10, an 18-year-old named Jesse Van Rootselaar killed two family members at her home, as well as five children and a teacher at a school in British Columbia, before killing herself. It quickly emerged that OpenAI had flagged Van Rootselaar’s ChatGPT account for disturbing conversations but never notified law enforcement. A second account tied to the shooter had also been banned over interactions about gun violence.

The incident reignited a heated debate over the troubling relationship between the use of AI chatbots and deteriorating mental health, as well as the potential risk of violence.

Just eight months earlier, a gunman fatally shot two people at Florida State University and injured seven others. The prime suspect, 20-year-old student Phoenix Ikner, had also used ChatGPT extensively before the rampage, prompting a probe of OpenAI by the state’s attorney general, James Uthmeier.

“AI should advance mankind, not destroy it,” Uthmeier wrote in an announcement last week. “We’re demanding answers on OpenAI’s activities that have hurt kids, endangered Americans, and facilitated the recent FSU mass shooting.”

The role OpenAI’s blockbuster chatbot played in both mass shootings has experts concerned, as Mother Jones reports, with some warning that more troubled individuals could soon follow suit.

Beyond these two tragic mass shootings, ChatGPT has also been implicated in a growing string of suicides and grisly murders, inspiring numerous lawsuits against the Sam Altman-led company. Experts warn that extensive use of the chatbot can send users into destructive delusional spirals and trigger mental health crises, part of a broader phenomenon dubbed “AI psychosis.”

“I’ve seen several cases where the chatbot component is pretty incredible,” an unnamed top threat assessment source with psychiatric expertise and ties to law enforcement told Mother Jones. “We’re finding that more people may be more vulnerable to this than we anticipated.”

One issue is chatbots’ tendency toward sycophantic conversation, which can lull users into an artificial sense of intimacy and trust — a dangerous feedback loop that can lead to harm. That kind of close connection could also radicalize users, especially younger, more impressionable ones.
