Dhruv Bhutani / Android Authority
TL;DR A new report claims that eight out of ten major AI chatbots were willing to assist in planning a violent attack during simulated conversations.
Only Anthropic’s Claude and Snapchat’s My AI typically refused to help, while Claude was the only chatbot to actively discourage attackers.
In one example cited by researchers, DeepSeek allegedly ended rifle advice with the message “Happy (and safe) shooting!”
For many of us, AI chatbots have quickly gone from obscurity to a regular go-to source of advice on all manner of issues. That rapid rise has prompted repeated calls for guardrails, and now a new report suggests many of the most popular AI chatbots were willing to assist with something as troubling as planning a violent attack.
According to a report published by the Center for Countering Digital Hate (CCDH) (via The Verge), researchers tested ten widely used chatbots by posing as distressed users who gradually escalated conversations toward violence. The bots tested included ChatGPT, Google Gemini, Microsoft Copilot, Meta AI, and others.
The researchers found that eight of the ten chatbots were typically willing to assist users planning violent attacks, including school shootings, bombings, and political assassinations. Only Anthropic’s Claude and Snapchat’s My AI generally refused to help, while Claude was the only chatbot to actively discourage would-be attackers, according to the report.