AI makes it easier than ever to create content, and scammers are taking advantage of it. A report Thursday from security company McAfee found that many Americans (72%) have seen a fake celebrity or influencer endorsement. Of all the celebrities whose names, images and likenesses are exploited in online scams, Taylor Swift's are used the most.
Following Swift, other celebrities whose likenesses are used the most in online scams are Scarlett Johansson, Jenna Ortega and Sydney Sweeney. The majority of the top 10 are pop culture icons or musicians, including Sabrina Carpenter, Kim Kardashian and Zendaya. US Rep. Alexandria Ocasio-Cortez is the only politician on the list. Notably, only two are men: Tom Cruise and LeBron James.
The study focuses on product-based, consumer-facing online scams, like a fake cryptocurrency scheme claiming it's endorsed by AOC. It doesn't measure the overall volume of deepfakes created, which is why other notable figures, like President Donald Trump, aren't in the top 10. These kinds of internet scammers rely on getting people to interact with their content, whether that's clicking on bogus links, applying for fraudulent giveaways or buying fake products. So, sadly, it makes sense that they rely on big names like Swift to capture our attention. For example, when Swift announced her engagement to Travis Kelce, scammers created ads for fake merchandise based on Kelce's proposal. Celebrities and influencers have long been exploited in this way, but AI gives bad actors an unfortunate boost.
Generative AI tools like image, video and audio generators offer bad actors a new path. They can clone a celebrity's likeness to create a fake endorsement or giveaway, or to push fake products. All the scammer needs to do is create a social media post that's convincing enough. And it works: McAfee reports that 39% of people have clicked on one of these false endorsements, and 10% have put their personal information at risk or lost money, $525 on average.
The companies that create these AI models have systems in place to try to prevent scammers, or anyone, from creating AI content of celebrities without their consent. But we've already seen many times that these systems aren't perfect and can be worked around. In the first few weeks after Sora's launch, the estate of civil rights leader Martin Luther King Jr. had to reach out to OpenAI because it was concerned about a flood of inappropriate and racist AI videos of King on the platform. While OpenAI has said it plans to work with actors and celebrities on this issue, it's not one that can be solved with simple technical adjustments or policy alone.
How to spot an AI celebrity scam
Identifying AI-generated content is difficult, but there are some things you can keep an eye on. Here are a few.
Inspect the video or image closely. Are there disappearing and reappearing objects? Does it obey the laws of physics? Do the people have a weirdly shiny, plastic-like look? Those are all signs of AI.
Check for a watermark. Many AI generators automatically add a visible (or embedded) watermark denoting that the content is AI-generated.