Tech News
← Back to articles

Google says it won’t fix Gemini security flaw that could send your sensitive info to a stranger

read original related products more articles

Edgar Cervantes / Android Authority

TL;DR A security researcher found that Gemini is susceptible to ASCII smuggling attacks.

These attacks hide malicious prompts in emails or calendar invites that LLMs can read when asked to summarize text.

Google has dismissed the threat as a social engineering attack, placing the responsibility on the end user.

Google tends to take the security of its users seriously, implementing a range of measures to keep its products safe to use. In fact, that’s part of the thought process behind the company’s crackdown on sideloading apps from unverified developers on Android. But it looks like the company isn’t too concerned about fixing an issue that makes Gemini susceptible to a troubling type of cyber threat.

Don’t want to miss the best from Android Authority? Set us as a favorite source in Google Discover to never miss our latest exclusive reports, expert analysis, and much more.

to never miss our latest exclusive reports, expert analysis, and much more. You can also set us as a preferred source in Google Search by clicking the button below.

According to Bleeping Computer, security researcher Viktor Markopoulos tested some of the most popular LLMs against ASCII smuggling attacks. Markopoulos found that Gemini, DeepSeek, and Grok were susceptible to this type of cyberattack. However, Claude, ChatGPT, and Copilot had protections, proving these options to be secure.

If you’re unfamiliar with this type of cyber threat, ASCII smuggling involves “smuggling” (hiding) a prompt for an AI to read. For example, the bad actor could write a secret prompt in an email in the smallest font size available, and the victim would be none the wiser. If the victim were to ask an AI tool, like Gemini, to summarize the text in the message, the AI would also read this covert prompt.

There are a few reasons why something like this is problematic. For example, the prompt could tell the AI to search your inbox for sensitive information or send contact details. Considering that Gemini is now integrated with Google Workspace, this issue poses an even higher risk.

... continue reading