OPINION
On March 10, 2026, Microsoft patched CVE-2026-26144, a cross-site scripting (XSS) vulnerability in Excel. XSS in Office isn't anything new, but what makes this XSS different is what happens after the script executes.
The vulnerability chains with Copilot Agent mode. An attacker embeds a malicious payload in an Excel file. After a user opens it, the XSS fires without the user ever clicking anything. However, unlike most XSS attacks, which aim to steal a session cookie or redirect the user to a phishing site, this attack hijacks the Copilot Agent and silently exfiltrates data from the spreadsheet to an attacker-controlled endpoint: no user interaction, no visual prompt to indicate that anything had happened. The AI does the exfiltration for you.
Zero Day Initiative's Dustin Childs called it "a fascinating bug" and warned that this attack scenario will become more common. While that is true, it is an understatement. This is not merely a single bug; it marks the start of a new wave of exploits that leverage AI agents' capabilities.
Related:NIST Revamps CVE Framework to Focus on High-Impact Vulnerabilities
For 30 years, we have categorized vulnerabilities by type, such as XSS, SQL injection, buffer overflow, and path traversal. Based on those classifications, we build detection rules, set patch priorities, and train developers on them. The mental model is that the vulnerability category determines the impact: an XSS steals cookies, an SSRF leaks internal data, and a command injection grants shell access.
AI agents have broken this model. When an AI agent operates inside the application, every traditional vulnerability gains a new capability: autonomous action. The XSS that previously stole a cookie can now instruct Copilot to read every cell in the workbook and post the contents to an external URL. The potential damage is no longer bounded by what the exploit code can do. It is bounded by the permissions granted to the AI agent.
The hardest lesson from production I learned is that the trust boundary between an application and its AI agent is effectively non-existent. Copilot Agent in Excel can read, analyze, and transmit data because that is what Excel does. There is no separate permission layer between "what Excel can access," and "what Copilot can do with that access." When the application is compromised, the AI inherits the compromise automatically.
This concept is what I call "privilege amplification." The bug serves as the entry point, while AI acts as the weapon. The blast radius is determined by the AI agent's access scope rather than the exploit's technical capabilities.
Related:Privilege Elevation Dominates Massive Microsoft Patch Update
... continue reading