Dhruv Bhutani / Android Authority
Would you trust an AI chatbot like ChatGPT or Gemini with your emails, financial data, or even browsing habits and data? Most of us would probably answer no to that question, and yet that’s exactly what companies like OpenAI and Perplexity are asking us to do with their new AI browsers, Atlas and Comet.
OpenAI’s Atlas is a new browser with ChatGPT built-in, and it goes much further than Google’s addition of Gemini to Chrome. Atlas has agentic capabilities meaning the AI can surf the internet on your behalf by opening tabs, navigating to specific websites, clicking on buttons, and even filling in text fields. If that sounds like a potential game-changer, I’d like to temper that excitement because it’s also a big security gamble. Here’s why I’m not switching to Atlas in a hurry.
Would you switch to an AI browser like ChatGPT Atlas? 26 votes Yes, I'm excited to switch right away. 19 % Yes, but only once they mature. 8 % I'm on the fence. 23 % No, I don't want AI capabilities in my web browser. 50 %
Why I refuse to touch a browser with agentic capabilities
Calvin Wankhede / Android Authority
It hasn’t taken long for security researchers to find flaws in the new wave of AI-powered browsers. Within just a couple of months, they’ve shown how attackers could manipulate the built-in AI models into leaking your private data or maliciously interacting with your online accounts. The browser company Brave has now confirmed several vulnerabilities tied to this type of exploit, which researchers have dubbed prompt injection.
Injection attacks are not new; in fact, they’ve been around for nearly as long as the Internet itself. A classic example is SQL injection, in which an attacker gains control of a website’s database. This involves entering malicious code into a seemingly harmless field, like the one for a username. If the website fails to follow good security practices and “sanitize” this input, it would mistakenly recognize the attacker’s code as a legitimate database command, allowing them to perform unauthorized actions like reading other people’s data, changing passwords, or wiping the entire database.
AI prompt injection works in much the same way — imagine you’re scrolling through Reddit and you come across a post with malicious instructions. You might not interact with it in any way, but the AI in your browser might if you’ve given it the authority to do so. It could follow the attacker’s instructions to open a new tab, navigate to financial or social media websites, potentially siphon private information, or interact with the page through keyboard and mouse inputs.
A prompt injection attack allows an attacker to hijack the AI model that has full control over your browser.
... continue reading