7.
Fire Phone 2.0? Amazon wants a second chance at failing to make a phone work
(androidauthority.com)
8.
Attention Residuals
(news.ycombinator.com)
13.
Show HN: Duplicate 3 layers in a 24B LLM, logical deduction .22→.76. No training
(news.ycombinator.com)
15.
Why Resilient, Mentally Healthy Employees Drive Unstoppable Performance
(feeds.feedburner.com)
16.
Nvidia says it can shrink LLM memory 20x without changing model weights
(venturebeat.com)
18.
Executing programs inside transformers with exponentially faster inference
(news.ycombinator.com)
20.
Show HN: How I topped the HuggingFace open LLM leaderboard on two gaming GPUs
(news.ycombinator.com)
23.
Finite-Element Approaches to Transformer Harmonic and Transient Analysis
(spectrum.ieee.org)
24.
10-202: Introduction to Modern AI (CMU)
(news.ycombinator.com)
25.
Smallest transformer that can add two 10-digit numbers
(news.ycombinator.com)
26.
Learnings from 4 months of Image-Video VAE experiments
(news.ycombinator.com)
27.
Lidar waveforms are worth 40x128x33 words
(news.ycombinator.com)
28.
Show HN: AI Timeline – 171 LLMs from Transformer (2017) to GPT-5.3 (2026)
(news.ycombinator.com)
29.
The JavaScript Oxidation Compiler
(news.ycombinator.com)