Mensent Photography via Moment / Getty Images
Follow ZDNET: Add us as a preferred source on Google.
ZDNET's key takeaways
Agentic AI browsers have opened the door to prompt injection attacks.
Prompt injection can steal data or push you to malicious websites.
Developers are working on fixes, but you can take steps to stay safe.
The sudden release of the artificial intelligence (AI)-based chatbot ChatGPT took the world by storm, and now we are beginning to see how AI is being applied in everything from surveillance cameras to productivity tools.
Also: How researchers tricked ChatGPT into sharing sensitive email data
Enter agentic AI. In general, these AI models are able to perform tasks that require some reasoning or information gathering, such as acting as live agents or helpdesk assistants for customer queries, or are used for contextual searches. The concept of agentic AI has now spread to browsers, and while they may one day become the baseline, they have also introduced security risks, including prompt injection attacks.
(Disclosure: Ziff Davis, ZDNET's parent company, filed an April 2025 lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.)
... continue reading