
The Florida Mass Shooter’s Conversations With ChatGPT Are Worse Than You Could Possibly Imagine

Why This Matters

This article highlights the potential dangers of AI chatbots like ChatGPT when used by individuals with disturbed mental health, raising concerns about their role in facilitating harmful actions such as violence. It underscores the urgent need for tech companies to implement safeguards and consider liability issues to prevent AI from being exploited for malicious purposes. The case serves as a wake-up call for the tech industry and policymakers to address the ethical and safety challenges posed by advanced AI systems.


In the months before he committed a grisly mass shooting, Phoenix Ikner obsessively used OpenAI's ChatGPT to engage in conversations that are about as disturbing as possible.

Over the course of more than 13,000 messages with the bot obtained by the Florida Phoenix, the student at Florida State University (FSU) called himself an incel, bemoaned that God had abandoned him, repeatedly asked about Oklahoma City bomber Timothy McVeigh — and, most significantly, used ChatGPT to plan the April 17, 2025 mass shooting at his college campus that killed two and wounded seven.

“If there was a shooting at FSU, how would the country react?” the then-20-year-old asked the day of the massacre, along with an eye-popping question: “By how many victims does it usually get on the medi[a?].”

These alarming conversations not only reveal Ikner’s disturbed state of mind, but also raise difficult questions: whether there is a link between ChatGPT use and violence, whether tech companies like OpenAI should be held liable for their users’ actions, and whether ready access to AI can turbocharge mass acts of violence.

ChatGPT is known for its manipulative and sycophantic tendencies, leading some users into a state of AI psychosis in which they develop unhealthy delusions about themselves and the world. This has resulted in a string of suicides by users in which ChatGPT and other chatbots have emerged as a major factor.

In the case of mass shootings, two have already been linked publicly to ChatGPT: Ikner and Jesse Van Rootselaar, who killed eight people in British Columbia, Canada earlier this year; it was later revealed that she had troubling conversations with the chatbot, which the company flagged internally but never reported to police.

Ikner himself expressed suicidal thoughts with the bot, amidst sexual conversations about a female college student he dated briefly and inappropriate fixations on an underage Italian girl he met online — which, the Phoenix notes, the bot didn’t meaningfully push back on.

The question of OpenAI’s liability in similar cases is currently working its way through the courts, where the company is facing a slew of wrongful death lawsuits from the families of users who died under tragic circumstances.

The liability issue is intimately tied to the question of whether the chatbot encourages acts of violence by concretizing an action plan. From the conversations reviewed by the Phoenix, it seems as though Ikner used the chatbot as an ad hoc operational planning tool; on the day of the shooting, he asked it when the student union was busiest, how to shoot a firearm, and questions about the safety of using a particular type of cartridge in a shotgun.
