Tech News
← Back to articles

Are Copilot prompt injection flaws vulnerabilities or AI limits?

read original related products more articles

Microsoft has pushed back against claims that multiple prompt injection and sandbox-related issues raised by a security engineer in its Copilot AI assistant constitute security vulnerabilities.

The development highlights a growing divide between how vendors and researchers define risk in generative AI systems.

AI vulnerabilities or known limitations?

"Last month, I discovered 4 vulnerabilities in Microsoft Copilot. They've since closed my cases stating they do not qualify for serviceability," posted cybersecurity engineer John Russell on LinkedIn.

Specifically, the issues disclosed by Russell and later dismissed by Microsoft as not qualifying as security vulnerabilities include:

Indirect and direct prompt injection leading to system prompt leak

Copilot file upload type policy bypass via base64-encoding

Command execution within Copilot's isolated Linux environment

Of these, the file upload restriction bypass is particularly interesting. Copilot may not generally allow "risky" file formats from being uploaded. But, users can simply encode these into base64 text strings and workaround the restriction.

"Once submitted as a plain text file, the content passes initial file-type checks, can be decoded within the session, and the reconstructed file is subsequently analyzed — effectively circumventing upload policy controls," explains Russell.

... continue reading