Tech News
← Back to articles

Security Experts Warn of Vulnerabilities in ChatGPT Atlas Browser

read original related products more articles

Key Takeaways Researchers from NeuralTrust, LayerX, and SPLX discovered that OpenAI’s ChatGPT Atlas browser is vulnerable to prompt-injection attacks, tainted-memory exploits, and AI-targeted cloaking.

OpenAI’s Chief Information Security Officer, Dane Stuckey, confirmed that prompt injections remain an active risk and advised users to browse in “logged-out mode” or use “Watch Mode” on sensitive sites to stay safer.

We recommend using it only for non-sensitive tasks, such as reading or comparing products. Avoid logged-in sessions or handling personal data until OpenAI strengthens its defenses against prompt injections, phishing sites, and other security risks.

Security Experts Warn of Vulnerabilities in ChatGPT Atlas Browser

OpenAI launched its AI-powered browser, ChatGPT Atlas, a few days ago. It promises to increase your efficiency by completing various tasks on your behalf, such as filling forms, booking tickets, and comparing options. But multiple cybersecurity experts have already raised concerns about potential vulnerabilities.

NeuralTrust’s security team found that attackers could exploit ChatGPT Atlas through prompt injection attacks. Cybersecurity researchers at LayerX have identified potential tainted memory exploits in the browser. Additionally, the SPLX security team has identified that it is vulnerable to AI-targeted cloaking attacks.

We took a closer look at these findings to understand critical vulnerabilities that experts have uncovered in ChatGPT Atlas so far.

Here’s what we found.

Security Vulnerabilities in ChatGPT Atlas

Agentic browsing, where the browser performs actions on your behalf, has long raised concerns about security and privacy.

... continue reading