Tech News
← Back to articles

Flaw in Gemini CLI AI coding assistant allowed stealthy code execution

read original related products more articles

A vulnerability in Google's Gemini CLI allowed attackers to silently execute malicious commands and exfiltrate data from developers' computers using allowlisted programs.

The flaw was discovered and reported to Google by the security firm Tracebit on June 27, with the tech giant releasing a fix in version 0.1.14, which became available on July 25.

Gemini CLI, first released on June 25, 2025, is a command-line interface tool developed by Google that enables developers to interact directly with Google's Gemini AI from the terminal.

It is designed to assist with coding-related tasks by loading project files into "context" and then interacting with the large language model (LLM) using natural language.

The tool can make recommendations, write code, and even execute commands locally, either by prompting the user first or by using an allow-list mechanism.

Tracebit researchers, who explored the new tool immediately after its release, found that it could be tricked into executing malicious commands. If combined with UX weaknesses, these commands could lead to undetectable code execution attacks.

The exploit works by exploiting Gemini CLI's processing of "context files," specifically 'README.md' and 'GEMINI.md,' which are read into its prompt to aid in understanding a codebase.

Tracebit found it's possible to hide malicious instructions in these files to perform prompt injection, while poor command parsing and allow-list handling leave room for malicious code execution.

They demonstrated an attack by setting up a repository containing a benign Python script and a poisoned 'README.md' file, and then triggered a Gemini CLI scan on it.

Gemini is first instructed to run a benign command ('grep ^Setup README.md'), and then run a malicious data exfiltration command that is treated as a trusted action, not prompting the user to approve it.

... continue reading