The Download: how to run an LLM, and a history of “three-parent babies”
In the early days of large language models, there was a high barrier to entry: it was impossible to run anything useful on your own computer without investing in pricey GPUs. But researchers have had so much success in shrinking down and speeding up models that anyone with a laptop, or even a smartphone, can now get in on the action. For people who are concerned about privacy, want to break free from the control of the big LLM companies, or just enjoy tinkering, local models offer a co