
How attention sinks keep language models stable


We discovered why language models catastrophically fail on long conversations: when old tokens are removed to save memory, models produce complete gibberish. We found models dump massive attention onto the first few tokens as "attention sinks"—places to park unused attention since softmax requires weights to sum to 1. Our solution, StreamingLLM, simply keeps these first 4 tokens permanently while sliding the window for everything else, enabling stable processing of 4 million+ tokens instead of just thousands. This mechanism is now in HuggingFace, NVIDIA TensorRT-LLM, and OpenAI's latest models.
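The core trick is a cache-eviction policy: keep the first few "sink" tokens forever, keep a sliding window of the most recent tokens, and evict everything in between. Below is a minimal sketch of that policy, assuming a KV cache laid out with the sequence dimension first; the function name, the window size, and the tensor shapes are illustrative, not the StreamingLLM API.

```python
import torch

def evict_kv_cache(keys, values, num_sink_tokens=4, window_size=1024):
    """Illustrative StreamingLLM-style eviction.

    Permanently keeps the first `num_sink_tokens` entries (the attention
    sinks) plus the most recent `window_size` entries, and drops the middle.
    `keys` and `values` are tensors with the sequence axis first.
    """
    seq_len = keys.shape[0]
    if seq_len <= num_sink_tokens + window_size:
        return keys, values  # nothing to evict yet

    keep_sink = slice(0, num_sink_tokens)
    keep_recent = slice(seq_len - window_size, seq_len)
    keys = torch.cat([keys[keep_sink], keys[keep_recent]], dim=0)
    values = torch.cat([values[keep_sink], values[keep_recent]], dim=0)
    return keys, values
```

Because the sink tokens never leave the cache, the softmax always has its familiar "parking spot" for surplus attention, and generation stays stable as the window slides over millions of tokens.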

This week, OpenAI made headlines by releasing their first open-source large language models, GPT-OSS-20B and GPT-OSS-120B. Buried in the technical documentation was a fascinating architectural detail: the inclusion of attention sink mechanisms.

Their implementation adds a trainable scalar value to each attention head's softmax calculation:
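A minimal PyTorch sketch of the idea, assuming the per-head scalar is appended as an extra logit to each row of attention scores before the softmax and its column is discarded afterward; the class and variable names here are illustrative, not OpenAI's code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SinkAttention(nn.Module):
    """Scaled dot-product attention with a learnable per-head sink logit.

    The sink logit acts as a virtual token: a head can route probability
    mass to it instead of being forced to spread all attention over real
    tokens. Dropping the sink column afterward lets the weights over real
    tokens sum to less than 1.
    """

    def __init__(self, num_heads: int, head_dim: int):
        super().__init__()
        self.head_dim = head_dim
        # One trainable scalar per attention head.
        self.sink_logit = nn.Parameter(torch.zeros(num_heads))

    def forward(self, q, k, v):
        # q, k, v: (batch, heads, seq_len, head_dim). Causal masking
        # omitted for brevity.
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5   # (B, H, T, T)

        # Append the sink logit as an extra column of scores.
        b, h, t, _ = scores.shape
        sink = self.sink_logit.view(1, h, 1, 1).expand(b, h, t, 1)
        scores = torch.cat([scores, sink], dim=-1)                # (B, H, T, T+1)

        weights = F.softmax(scores, dim=-1)
        weights = weights[..., :-1]                               # drop sink column
        return weights @ v
```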

This simple modification—adding just one learnable parameter per attention head—enables the model to "pay no attention to any tokens" when needed, a design choice OpenAI's model card explicitly attributes to our StreamingLLM work.

OpenAI's model card for GPT-OSS-20B explains the attention sink mechanism, directly connecting the design to our research.
