Train Your Own LLM From Scratch
A hands-on workshop where you write every piece of a GPT training pipeline yourself, understanding what each component does and why.
Andrej Karpathy's nanoGPT was my first real exposure to LLMs and transformers. Seeing how a working language model could be built in a few hundred lines of PyTorch completely changed how I thought about AI and inspired me to go deeper into the space.
This workshop is my attempt to give others that same experience. nanoGPT targets reproducing GPT-2 (124M params) and covers a lot of ground. This project strips it down to the essentials and scales it down to a ~10M-parameter model that trains on a laptop in under an hour — designed to be completed in a single workshop session.
No black-box libraries. No `model = AutoModel.from_pretrained()`. You build it all.
What You'll Build
A working GPT model trained from scratch on your MacBook, capable of generating Shakespeare-like text. You'll write:
Tokenizer — turning text into numbers the model can process
Model architecture — the transformer: embeddings, attention, feed-forward layers
Training loop — forward pass, loss, backprop, optimizer, learning rate scheduling

Rough sketches of each piece follow below.