
FBI Director Kash Patel Says AI Has Stopped Numerous Violent Attacks Against America. We’d Love to See a Single Whiff of Evidence

Why This Matters

While FBI officials claim that AI has played a role in preventing violent attacks, the evidence remains inconclusive, and separate research raises concerns about AI's potential to inadvertently promote violence. This highlights the complex and often controversial role of AI in national security and public safety efforts, and underscores the need for cautious implementation and oversight. For consumers and the industry alike, it is a reminder of AI's limitations and risks in sensitive applications.


In a recent interview on Sean Hannity’s YouTube podcast, FBI head Kash Patel lauded AI for helping stop multiple violent attacks on innocent people.

“AI was never used at the FBI till we got there, literally crazy,” Patel said in his characteristically hopped-up affect. “I’m using it everywhere.”

Specifically, Patel, who has been accused of severe issues related to alcohol consumption, alleges that the FBI has used AI to foil numerous planned mass shootings at schools throughout the US.

“We stopped a school massacre in North Carolina because we got a tip from our private-sector partners who are building out AI infrastructure,” he bragged.

As with everything coming out of the Trump administration, we need to take this statement with a Mar-a-Lago-sized grain of salt. While it remains to be seen whether AI has really helped the FBI thwart mass casualty events, there’s extremely compelling evidence that the exact opposite is also true.

For starters, research has shown that AI chatbots are actually twice as likely to encourage humans to commit violent acts as they are to step in and stop them. One Stanford study found that AI chatbots discouraged violence only 16.7 percent of the time, while the same chatbots actively supported violent thoughts in an alarming 33.3 percent of cases.

In the real world, this is manifesting as a disturbing pattern of violence. After the second shooting at Florida State University — the 2025 one, not the 2014 one — in which two were killed and seven injured, it emerged that the perpetrator had not only confided in ChatGPT about his plans to commit a mass shooting, but had used the chatbot to organize the attack.

The mass shooter in Tumbler Ridge, Canada, held conversations with ChatGPT so disturbing that they were automatically flagged by the company’s internal moderation systems, spurring the company’s leadership to debate whether to inform law enforcement. They ultimately didn’t, and the attack killed seven and injured dozens more.

Meanwhile, in South Korea, police investigators allege that a 21-year-old serial killer used ChatGPT to help plan at least two murders. A Connecticut man with a history of violent mental health episodes likewise allegedly killed his mother before taking his own life, after long-running conversations with ChatGPT fueled a disturbing break from reality. And one wrongful death suit in Florida alleges that Google’s chatbot, Gemini, encouraged a man to kill others in order to procure a “robot body” for his AI lover; when that failed, he killed himself.
