Tech News
← Back to articles

Reprompt attack hijacked Microsoft Copilot sessions for data theft

read original related products more articles

Researchers identified an attack method dubbed “Reprompt” that could allow attackers to infiltrate a user’s Microsoft Copilot session and issue commands to exfiltrate sensitive data.

By hiding a malicious prompt inside a legitimate URL and bypassing Copilot’s protections, a hacker could maintain access to a victim’s LLM session after the user clicks on a single link.

Apart from the one-click interaction, Reprompt does not require any plugins or other tricks and allows invisible data exfiltration.

Copilot connects to a personal account and acts as an AI assistant, being integrated into Windows and the Edge browser, as well as various consumer applications.

As such, it can access and reason over user-provided prompts, conversation history, and certain personal Microsoft data, depending on context and permissions.

How Reprompt works

Security researchers at data security and analytics company Varonis discovered that access to a user's Copilot session is possible by leveraging three techniques.

They found that Copilot accepts prompts via the 'q' parameter in the URL and executes them automatically when the page loads. If an attacker could embed malicious instructions in this parameter and deliver the URL to a target user, they could make Copilot perform actions on behalf of the user without their knowledge.

However, additional methods are required to bypass Copilot's safeguards and exfiltrate data continuously via follow-up instructions from the attacker.

In a report shared with BleepingComputer, Varonis explains that a Reprompt attack flow involves phishing the victim with a legitimate Copilot link, triggering Copilot to execute injected prompts, and then maintaining an ongoing back-and-forth exchange between Copilot and the attacker's server.

... continue reading