Tech News
← Back to articles

Google won’t fix new ASCII smuggling attack in Gemini

read original related products more articles

Google has decided not to fix a new ASCII smuggling attack in Gemini that could be used to trick the AI assistant into providing users with fake information, alter the model’s behavior, and silently poison its data.

ASCII smuggling is an attack where special characters from the Tags Unicode block are used to introduce payloads that are invisible to users but can still be detected and processed by large-language models (LLMs).

It’s similar to other attacks that researchers presented recently against Google Gemini, which all exploit a gap between what users see and what machines read, like performing CSS manipulation or exploiting GUI limitations.

While LLMs’ susceptibility to ASCII smuggling attacks isn’t a new discovery, as several researchers have explored this possibility since the advent of generative AI tools, the risk level is now different [1, 2, 3, 4].

Before, chatbots could only be maliciously manipulated by such attacks if the user was tricked into pasting specially crafted prompts. With the rise of agentic AI tools like Gemini, which have widespread access to sensitive user data and can perform tasks autonomously, the threat is more significant.

Viktor Markopoulos, a security researcher at FireTail cybersecurity company, has tested ASCII smuggling against several widely used AI tools and found that Gemini (Calendar invites or email), DeepSeek (prompts), and Grok (X posts), are vulnerable to the attack.

Claude, ChatGPT, and Microsoft CoPilot proved secure against ASCII smuggling, implementing some form of input sanitization, FireTail found.

Susceptibility to ASCII smuggling

Source: FireTail

Regarding Gemini, its integration with Google Workspace poses a high risk, as attackers could use ASCII smuggling to embed hidden text in Calendar invites or emails.

... continue reading