
How to Fix Your Context


Mitigating & Avoiding Context Failures

Following up on our earlier post, “How Long Contexts Fail”, let’s run through the ways we can mitigate or avoid these failures entirely.

But before we do, let’s briefly recap some of the ways long contexts can fail:

Context Poisoning: When a hallucination or other error makes it into the context, where it is repeatedly referenced.

Context Distraction: When a context grows so long that the model over-focuses on the context, neglecting what it learned during training.

Context Confusion: When superfluous information in the context is used by the model to generate a low-quality response.

Context Clash: When you accrue new information and tools in your context that conflict with other information in the prompt.

Everything here is about information management. Everything in the context influences the response. We’re back to the old programming adage: “Garbage in, garbage out.” Thankfully, there are plenty of options for dealing with the issues above.

RAG

Retrieval-Augmented Generation (RAG) is the act of selectively adding relevant information to the context to help the LLM generate a better response.
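To make the pattern concrete, here is a minimal sketch, not the implementation behind any particular product. The retrieval step is a toy word-overlap score, and the `score`, `retrieve`, `build_prompt`, and `llm()` names are hypothetical; in practice you would swap in an embedding model or search index and your own LLM client.

```python
# Minimal RAG sketch: retrieve only the most relevant documents,
# then add just those to the prompt instead of the whole corpus.

def score(query: str, doc: str) -> float:
    """Toy relevance score: fraction of query words that appear in the doc."""
    q_words = set(query.lower().split())
    d_words = set(doc.lower().split())
    return len(q_words & d_words) / max(len(q_words), 1)

def retrieve(query: str, documents: list[str], k: int = 3) -> list[str]:
    """Keep only the top-k most relevant documents so irrelevant text
    never enters the context."""
    return sorted(documents, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Selectively add the retrieved snippets to the prompt."""
    context = "\n\n".join(retrieve(query, documents))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Usage (hypothetical llm() client):
# prompt = build_prompt("How do I reset my password?", knowledge_base_docs)
# response = llm(prompt)
```

The key design choice is the filter: rather than stuffing every available document into the window, only the top-k candidates are admitted, which directly limits the confusion and distraction failures described above.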
