New attack on ChatGPT research agent pilfers secrets from Gmail inboxes
So far, prompt injections have proved impossible to prevent, much like memory-corruption vulnerabilities in certain programming languages and SQL injections in Web applications are. That has left OpenAI and the rest of the LLM market reliant on mitigations that are often introduced on a case-by-case basis, and only in response to the discovery of a working exploit. Accordingly, OpenAI mitigated the prompt-injection technique ShadowLeak fell to—but only after Radware privately alerted the LLM ma