As AI assistants become capable of controlling web browsers, a new security challenge has emerged: users must now trust that every website they visit won't try to hijack their AI agent with hidden malicious instructions. Experts voiced concerns about this emerging threat this week after testing from a leading AI chatbot vendor revealed that AI browser agents can be successfully tricked into harmful actions nearly a quarter of the time.
On Tuesday, Anthropic announced the launch of Claude for Chrome, a web browser-based AI agent that can take actions on behalf of users. Due to security concerns, the extension is only rolling out as a research preview to 1,000 subscribers on Anthropic's Max plan, which costs between $100 and $200 per month, with a waitlist available for other users.
The Claude for Chrome extension allows users to chat with the Claude AI model in a sidebar window that maintains the context of everything happening in their browser. Users can grant Claude permission to perform tasks such as managing calendars, scheduling meetings, drafting email responses, handling expense reports, and testing website features.
The browser extension builds on Anthropic's Computer Use capability, which the company released in October 2024. Computer Use is an experimental feature that allows Claude to take screenshots and control a user's mouse cursor to perform tasks, but the new Chrome extension provides more direct browser integration.
Claude for Chrome demo video by Anthropic.
Zooming out, it appears Anthropic's browser extension reflects a new phase of AI lab competition. In July, Perplexity launched its own browser, Comet, which features an AI agent that attempts to offload tasks for users. OpenAI recently released ChatGPT Agent, a bot that uses its own sandboxed browser to take actions on the web. Google has also launched Gemini integrations with Chrome in recent months.
But this rush to integrate AI into browsers has exposed a fundamental security flaw that could put users at serious risk.
Security challenges and safety measures
In preparation for the Chrome extension launch, Anthropic says it has conducted extensive testing that revealed browser-using AI models can face prompt injection attacks, where malicious actors embed hidden instructions into websites to trick AI systems into performing harmful actions without user knowledge.