Tiny-LLM – a course on serving LLMs on Apple Silicon, for systems engineers

Published on: 2025-08-06 21:24:41

tiny-llm - LLM Serving in a Week

Still WIP and in a very early stage. A tutorial on LLM serving using MLX, written for systems engineers. The codebase is based (almost!) solely on MLX array/matrix APIs, without any high-level neural network APIs, so that we can build the model serving infrastructure from scratch and dig into the optimizations. The goal is to learn the techniques behind efficiently serving an LLM (i.e., Qwen2 models).

Book

The tiny-llm book is available at https://skyzh.github.io/tiny-llm/. You can follow the guide and start building.

Community

You may join skyzh's Discord server and study with the tiny-llm community.

Roadmap

Week + Chapter  Topic                                Code  Test  Doc
1.1             Attention                            ✅    ✅    ✅
1.2             RoPE                                 ✅    ✅    ✅
1.3             Grouped Query Attention              ✅    🚧    🚧
1.4             RMSNorm and MLP                      ✅    🚧    🚧
1.5             Transformer Block                    ✅    🚧    🚧
1.6             Load the Model                       ✅    🚧    🚧
1.7             Generate Responses (aka Decoding)    ✅    ✅    🚧
2.1             KV Cache                             ✅    🚧    🚧
2.2             Quantized Matmul and Linear - CPU    ✅    🚧    🚧
2.3             Quantized Matmul and Linear - GPU    ✅    🚧    🚧
2.4             Flash Atten ...
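To give a flavor of the "array APIs only" style the course follows, here is a minimal sketch of week 1.1's scaled dot-product attention. The actual course uses MLX; this illustration swaps in NumPy (which has near-identical array semantics) so it runs anywhere, and the function names are my own, not the course's:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(q, k, v, mask=None):
    # q, k, v: (..., seq_len, head_dim); built from raw matmuls only,
    # no neural-network-layer APIs -- the same constraint tiny-llm imposes.
    scale = 1.0 / np.sqrt(q.shape[-1])
    scores = (q @ k.swapaxes(-1, -2)) * scale  # (..., seq_len, seq_len)
    if mask is not None:
        scores = scores + mask  # additive mask, e.g. -inf above the diagonal for causal decoding
    return softmax(scores) @ v

# Tiny usage example: one head, sequence length 3, head_dim 4.
q = k = v = np.arange(12, dtype=np.float32).reshape(3, 4)
out = scaled_dot_product_attention(q, k, v)
print(out.shape)  # (3, 4)
```

Later chapters (GQA, KV cache, quantized matmul) layer optimizations on top of this same handful of array operations.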