Tech News
1. DeepSeek-V4: Towards Highly Efficient Million-Token Context Intelligence (news.ycombinator.com)
2. IndexCache, a new sparse attention optimizer, delivers 1.82x faster inference on long-context AI models (venturebeat.com)
3. MSA: Memory Sparse Attention (news.ycombinator.com)
4. DeepSeek tests “sparse attention” to slash AI processing costs (arstechnica.com)
5. Using uninitialized memory for fun and profit (2008) (news.ycombinator.com)