Skip to content
Tech News
← Back to articles

"Disregard That" Attacks

read original more articles
Why This Matters

Sharing your LLM's context window can expose sensitive information and introduce security vulnerabilities like prompt injection, risking data leaks or malicious manipulation. Understanding the importance of controlling and safeguarding this input is crucial for both developers and users to maintain privacy and security in AI applications.

Key Takeaways

"Disregard that!" attacks

Why you shouldn't share your context window with others

There is a joke from the olden days of the internet; it goes a bit like this:

<Jeff> I'm going away from my keyboard now, but Henry is still here. <Jeff> If I talk in the next 25 minutes it's not me talking, it's Henry <Jeff> DISREGARD THAT! - I am indeed Jeff and I would like to now make a series of shameful public admissions... [snip]

Ultimately this is the same security problem that many, many LLM use-cases have: a vulnerability sometimes called "prompt injection", though I think that "Disregard that!" is a much clearer way to refer to this class of vulnerabilities.

The context window

LLMs run on a "context window". The context window is the input text (though it isn't always text) that the LLM ponders prior to outputting something. If you are using an LLM as a chatbot, the context window is the entire chat history.

If you're using an LLM as a coding assistant, the context window includes the code you're working on, your coding style guide instructions (ie CLAUDE.md ), and perhaps pieces of the documentation that the LLM has looked up for you.

Imaginary context window from a Claude Code session

If you're using an LLM as a better version of Google, the context window includes your query, the documents that it's found so far, perhaps the documents that it's found previously, and so on.

... continue reading