
National security experts warn extremist groups are experimenting with AI. Here’s how


According to experts and spy agencies, AI could become a powerful recruiting tool for militant groups.

As the rest of the world rushes to harness the power of artificial intelligence, militant groups are also experimenting with the technology, even if they aren't sure exactly what to do with it.

For extremist organizations, AI could be a powerful tool for recruiting new members, churning out realistic deepfake images and refining their cyberattacks, national security experts and spy agencies have warned.

Someone posting on a pro-Islamic State group website last month urged other IS supporters to make AI part of their operations. "One of the best things about AI is how easy it is to use," the user wrote in English.

"Some intelligence agencies worry that AI will contribute (to) recruiting," the user continued. "So make their nightmares into reality."

IS, which once seized territory in Iraq and Syria but is now a decentralized alliance of militant groups sharing a violent ideology, realized years ago that social media could be a potent tool for recruitment and disinformation, so it's not surprising that the group is testing out AI, national security experts say.

For loose-knit, poorly resourced extremist groups, or even an individual bad actor with a web connection, AI can be used to pump out propaganda or deepfakes at scale, widening their reach and expanding their influence.

"For any adversary, AI really makes it much easier to do things," said John Laliberte, a former vulnerability researcher at the National Security Agency who is now CEO of cybersecurity firm ClearVector. "With AI, even a small group that doesn't have a lot of money is still able to make an impact."