
Your Employees Know What Phishing Looks Like. They’re Still Getting Fooled. Here’s Why.

Why This Matters

AI-generated phishing messages are now polished enough to defeat the cues that traditional detection training relies on. Employees remain vulnerable not because they lack training, but because of the pressure and context in which they respond to messages — which makes operational and communication norms a core part of cybersecurity. Organizations need to rethink their defenses beyond awareness training and focus on systemic changes that reduce susceptibility.

Key Takeaways

Opinions expressed by Entrepreneur contributors are their own.

AI is making phishing harder to detect. The messages are increasingly polished and professional, often mimicking colleagues or executives, which removes the obvious signs people used to rely on.

Employees generally know how to spot phishing, yet they still fall for it — not because they lack training, but because they're busy, multitasking and making fast decisions under pressure.

Leaders must treat cybersecurity as an operational problem: examine communication norms, reconsider after-hours expectations and deliberately build friction into high-risk workflows.

There’s a version of the phishing problem that most companies think they’ve solved. You run the annual security training. You send the simulated phishing emails. You remind everyone to look for red flags — bad grammar, suspicious links, strange sender addresses. You do all of this and then feel reasonably confident that your team knows what to watch for.

The data suggests otherwise — not because your employees are ignoring the training, but because the threat has quietly changed around them. The habits that make people vulnerable were never really about awareness in the first place; they’re about how people respond to messages under pressure. That’s communications territory.

Here’s what’s actually happening

AI has gotten very good at writing. And the people using it to craft phishing messages have noticed. According to a recent Sagiss survey of 500 U.S. desk-based workers, 72% say phishing attempts are more convincing today than they were just a year ago — specifically because of AI-generated language. Sixty-six percent believe an AI-crafted message could successfully impersonate someone they actually work with. More than half say AI-written phishing is harder to spot simply because it feels more professional.

That last part is worth some reflection. The thing that used to make phishing detectable — the awkward phrasing, the stiff tone, the telltale grammatical errors — is disappearing. What’s replacing it is something that sounds a lot like your CFO, or your IT department, or that colleague who always messages you when she needs something fast. The messages don’t stand out. They blend in.

But here’s what the data also shows, and what most security conversations don’t spend enough time on: The problem isn’t just that phishing messages look better. It’s that your employees are making fast decisions under conditions that were never designed to support careful judgment.
