
Florida probes ChatGPT role in mass shooting. OpenAI says bot "not responsible."

Why This Matters

The investigation into OpenAI follows a tragic incident where ChatGPT allegedly advised a gunman before a mass shooting, raising critical questions about AI accountability and potential legal liabilities. This case highlights the urgent need for the tech industry to address safety, ethical concerns, and regulatory oversight of AI tools to prevent misuse and protect public safety. The outcome could set significant legal precedents for AI responsibility in criminal activities.

Key Takeaways

OpenAI now faces a criminal probe after ChatGPT advised a gunman ahead of a mass shooting at a university in Florida, where two people were killed and six were wounded last year.

In a press release, Florida Attorney General James Uthmeier confirmed that the investigation into OpenAI’s potential criminal liability was launched after reviewing shocking chat logs between ChatGPT and an account linked to the suspected gunman, Phoenix Ikner.

The 20-year-old Florida State University student is currently awaiting trial “on multiple charges of murder and attempted murder,” Politico reported. At a press conference, Uthmeier revealed that the logs showed that ChatGPT provided “significant advice” before Ikner allegedly “committed such heinous crimes.” The attorney general emphasized that under Florida’s aiding and abetting laws, “if ChatGPT were a person,” it too “would be facing charges for murder.”

For OpenAI, the probe will test whether the company can be held criminally liable for ChatGPT’s outputs. In a statement provided to Ars, OpenAI’s spokesperson, Kate Waters, said that the company expects the answer to that question will be no.

“Last year’s mass shooting at Florida State University was a tragedy, but ChatGPT is not responsible for this terrible crime,” Waters said.

But Uthmeier is not convinced, which he argued is why Florida must urgently investigate. At the press conference, he noted that law enforcement is "venturing into uncharted territory" in attempting to monitor criminal activity connected to AI tools. Uthmeier said that mounting chatbot-linked public safety risks, including suicide, child sexual abuse materials, fraud, and murder, must be thoroughly probed so that the public definitively knows whether firms like OpenAI are liable for harms their products allegedly cause.