As remote work has become the norm, a shadowy threat has emerged in corporate hiring departments: sophisticated AI-powered fake candidates who can pass video interviews, submit convincing resumes, and even fool human resources professionals into offering them jobs.
Now, companies are racing to deploy advanced identity verification technologies to combat what security experts describe as an escalating crisis of candidate fraud, driven largely by generative AI tools and coordinated efforts by foreign actors, including North Korean state-sponsored groups seeking to infiltrate American businesses.
San Francisco-based Persona, a leading identity verification platform, announced Tuesday a major expansion of its workforce screening capabilities, introducing new tools specifically designed to detect AI-generated personas and deepfake attacks during the hiring process. The enhanced solution integrates directly with major enterprise platforms including Okta’s Workforce Identity Cloud and Cisco Duo, allowing organizations to verify candidate identities in real time.
“In today’s environment, ensuring the person behind the screen is who they claim to be is more important than ever,” said Rick Song, CEO and co-founder of Persona, in an exclusive interview with VentureBeat. “With state-sponsored actors infiltrating enterprises and generative AI making impersonation easier than ever, our enhanced Workforce IDV solution gives organizations the confidence that every access attempt is tied to a real, verified individual.”
The timing of Persona’s announcement reflects growing urgency around what cybersecurity professionals call an “identity crisis” in remote hiring. According to an April 2025 Gartner report, by 2028 one in four candidate profiles globally will be fake — a staggering prediction that underscores how AI tools have lowered the barriers to creating convincing false identities.
75 million blocked deepfake attempts reveal massive scope of AI-powered hiring fraud
The threat extends far beyond individual bad actors. In 2024 alone, Persona blocked over 75 million AI-based face spoofing attempts across its platform, which serves major technology companies including OpenAI, Coursera, Instacart, and Twilio. The company has observed a 50-fold increase in deepfake activity over recent years, with attackers deploying increasingly sophisticated techniques.
“The North Korean IT worker threat is real,” Song explained. “But it’s not just North Korea. A lot of foreign actors are all doing things like this right now in terms of finding ways to infiltrate organizations. The insider threat for businesses is higher than ever.”
Recent high-profile cases have highlighted the severity of the issue. In 2024, cybersecurity firm KnowBe4 inadvertently hired a North Korean IT worker who attempted to load malware onto company systems. Other Fortune 500 companies have reportedly fallen victim to similar schemes, where foreign actors use fake identities to gain access to sensitive corporate systems and intellectual property.