Z80-μLM: A Retrocomputing Micro Language Model
Z80-μLM is a 'conversational AI' that generates short character-by-character text sequences, trained with quantization-aware training (QAT) so it can run on a Z80 processor with 64 KB of RAM.
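The core idea of QAT is to simulate the target's low-precision arithmetic already during training, so the weights that ship are exactly what the forward pass was optimized for. A minimal sketch of the fake-quantization step in Python (the function name and the 8-bit symmetric scheme here are illustrative assumptions, not the project's actual code):

```python
import numpy as np

def fake_quantize(w, bits=8):
    """Simulate low-precision weights during training (QAT).

    Weights are rounded to a symmetric integer grid and mapped back
    to float, so training 'sees' the same values the Z80 will use.
    NOTE: hypothetical sketch; the real scheme may differ.
    """
    qmax = 2 ** (bits - 1) - 1                      # 127 for int8
    scale = max(np.max(np.abs(w)) / qmax, 1e-12)    # per-tensor scale
    q = np.clip(np.round(w / scale), -qmax, qmax)   # integer grid
    return q * scale, scale

# Toy example: a weight row as trained in float32...
w = np.array([0.31, -0.07, 0.9, -0.42])
wq, scale = fake_quantize(w)
# ...and the integer values that would actually ship in the binary:
ints = np.round(wq / scale).astype(np.int8)
```

At inference time only the int8 values and the scale need to be stored, which is what keeps the weights small enough for 64 KB.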
The question behind this project: how small can a model get while still having personality, and can it be trained or fine-tuned easily, with simple self-hosted distribution?
The answer is yes! A 40 KB .com binary (including inference code, weights, and a chat-style UI) running on a 4 MHz processor from 1976.
It won't pass the Turing test, but it might make you smile at the green screen.
For insight on how to best train your own model, see TRAINING.md.
Examples
Two pre-built examples are included:
A conversational chatbot trained on casual Q&A pairs. Responds to greetings, questions about itself, and general banter with terse personality-driven answers.
> hello
HI
> are you a robot
YES
> do you dream
MAYBE
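Replies like these come from a character-by-character sampling loop: the model is queried once per output character until a stop character appears. A minimal sketch in Python, where `next_char_probs` is a hypothetical stand-in for the model's forward pass (not the project's actual API):

```python
import random

def generate(prompt, next_char_probs, max_len=16, stop="\n"):
    """Character-by-character generation: repeatedly sample the next
    character from the model's distribution until a stop character.

    `next_char_probs(context)` is an assumed interface returning a
    dict mapping candidate characters to probabilities.
    """
    out, context = "", prompt
    while len(out) < max_len:
        probs = next_char_probs(context)
        chars, weights = zip(*probs.items())
        c = random.choices(chars, weights=weights)[0]
        if c == stop:
            break
        out += c
        context += c
    return out

# Toy stand-in model that deterministically answers "YES":
reply = iter("YES\n")
print(generate("> are you a robot\n", lambda ctx: {next(reply): 1.0}))
```

On the Z80 the same loop runs over int8 weights, which is why answers stay terse: every extra character costs another full forward pass.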