Nuclear clocks could use electrical signals to keep time (feeds.nature.com)
92. HTML as an Accessible Format for Papers (2023) (news.ycombinator.com)
94. Even in space, telescopes can’t escape photobombers (feeds.nature.com)
97. An Economy of AI Agents (news.ycombinator.com)
98. Gwern's "Stem Humor" Directory (news.ycombinator.com)
99. AMD Radeon RX 9060 Review: The Non-XT GPU (techspot.com)
104. Preprint site arXiv is banning computer-science reviews: here’s why (feeds.nature.com)
105. LLMs Encode How Difficult Problems Are (news.ycombinator.com)
107. HyperX Pulsefire Saga Pro Review: Modular, but slightly behind (tomshardware.com)
110. Language Models Are Injective and Hence Invertible (news.ycombinator.com)
112. EntropyLong: Effective Long-Context Training via Predictive Uncertainty (news.ycombinator.com)
113. Magic sizes enable high-fidelity assembly of programmable shells (news.ycombinator.com)
114. Why can't transformers learn multiplication? (news.ycombinator.com)
115. ChunkLLM: A Lightweight Pluggable Framework for Accelerating LLMs Inference (news.ycombinator.com)
116. Fast-DLLM: Training-Free Acceleration of Diffusion LLM (news.ycombinator.com)
117. Antislop: A Framework for Eliminating Repetitive Patterns in Language Models (news.ycombinator.com)
120. Llama-Factory: Unified, Efficient Fine-Tuning for 100 Open LLMs (news.ycombinator.com)