It was almost an hour into our Google Meet call. I was interviewing Kitboga, a popular YouTube scam baiter with nearly 3.7 million subscribers, known for humorously entrapping fraudsters in common scams while livestreaming.
"I assume I'm talking to Evan Zimmer," he says with a mischievous glance, his eyes exposed without his trademark aviator sunglasses on. We were close to the end of our conversation when he realized that my image and audio could have been digitally altered to impersonate me this whole time. "If I'm completely honest with you, there was not a single moment where I thought you could be deepfaking," he says.
He had reason to be paranoid, even though I wasn't using AI to trick him at all. "That's the big problem, because you could be!" he says.
True enough. Artificial intelligence is the tool of choice for cybercriminals, who increasingly use it to do their dirty work, building a fleet of bots that don't need to eat or sleep. Large-scale telemarketing calls are being replaced by more targeted AI-driven attacks, as scammers access tools, from deepfakes to voice clones, that look and sound frighteningly realistic. Big shopping events like Amazon Prime Day offer scammers bountiful targets among bargain-hungry consumers.
Generative AI, capable of creating fake video and audio content based on learned patterns and data — almost as easily as ChatGPT and Gemini churn out emails and meeting summaries — makes financial fraud and identity theft easier than ever before. Victim losses from these machine-learning systems are predicted to reach $40 billion annually by 2027.
Now imagine if the good guys had an AI-powered army of their own.
A group of vloggers, content creators and computer engineers are creating a shield against hordes of scammers, bot or not. These fraud fighters are flipping the script to expose the thieves and hackers who are out to steal your money and your identity.
Sometimes, scam baiters use AI technology to waste fraudsters' time or showcase common scams to educate the public. In other cases, they work closely with financial institutions and the authorities to integrate AI into their systems to prevent fraud and target bad actors.
Businesses, banks and federal agencies already use AI to detect fraudulent activity, leveraging large language models to identify patterns and spot biometric anomalies. Companies ranging from American Express to Amazon employ neural networks trained on transaction data to distinguish authentic activity from synthetic fraud.
But it's an uphill battle. AI systems are progressing at an incredible rate, which means the methods used to "scam the scammers" must constantly evolve.