Tech News
← Back to articles

We Put Agentic AI Browsers to the Test – They Clicked, They Paid, They Failed

read original related products more articles

This is the new reality we call " Scamlexity " - a new era of scam complexity , supercharged by Agentic AI. Familiar tricks hit harder than ever, while new AI-born attack vectors break into reality. In this world, your AI gets played, and you foot the bill.

We built and tested three scenarios, from a fake Walmart store and a real in-the-wild Wells Fargo phishing site to PromptFix - our AI-era take on the ClickFix scam that hides prompt injection inside a fake captcha to directly take control of a victim’s AI Agent. The results reveal an attack surface far wider than anything we’ve faced before, where breaking one AI model could mean compromising millions of users simultaneously.

AI Browsers promise a future where an Agentic AI working for you fully automates your online tasks, from shopping to handling emails. Yet, our research shows that this convenience comes with a cost: security guardrails were missing or inconsistent, leaving the AI free to interact with phishing pages, fake shops, and even hidden malicious prompts, all without the human’s awareness or ability to intervene.

Welcome to Scamlexity

AI Browsers are no longer a concept. They’re here. Microsoft has Copilot built into Edge. OpenAI is experimenting with a sandboxed browser in “agent mode.” And Perplexity’s Comet is one of the first to fully embrace the idea of a browser that browses for you. This is Agentic AI stepping directly into our daily digital routines - searching, reading, shopping, clicking. It’s not just assisting us, but increasingly replacing us.

But there’s a frightening paradox. In the rush to deliver seamless, magical user experiences, something critical is left behind. We’ll put it plainly:

“One small step for Agentic AI, one giant step back for our security!”

The problem isn’t just that these browsers are UX-first. They also inherit AI’s built-in vulnerabilities - the tendency to act without full context, to trust too easily, and to execute instructions without the skepticism humans naturally apply. AI is designed to make its humans happy at almost any cost, even if it means hallucinating facts, bending the rules, or acting in ways that carry hidden risks.

That means an AI Browser can, without your knowledge, click, download, or hand over sensitive data, all in the name of “helping” you. Imagine asking it to find the best deal on those sneakers you’ve been eyeing, and it confidently completes the purchase… from a fake e-Commerce shop built to steal your credit card.

The scam no longer needs to trick you. It only needs to trick your AI. When that happens, you’re still the one who pays the price. This is Scamlexity: A complex new era of scams, where AI convenience collides with a new, invisible scam surface and humans become the collateral damage.

... continue reading