
I tried local AI on my M1 Mac, and the experience was brutal - here's why


Screenshot by Tiernan Ray for ZDNET


ZDNET's key takeaways

Ollama makes it fairly easy to download open-source LLMs.

Even small models can run painfully slowly.

Don't try this without a recent machine and at least 36GB of RAM.

As a reporter who has covered artificial intelligence for over a decade, I have always known that running AI brings all kinds of computer engineering challenges. For one thing, large language models keep getting bigger, and they demand more and more DRAM to hold their model "parameters," or "neural weights."
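To put rough numbers on that demand: a model's weights alone occupy roughly parameters × bits-per-weight ÷ 8 bytes, before counting the KV cache or any runtime overhead. This back-of-the-envelope sketch (my own illustration, not a figure from the article) shows why even a modest 7-billion-parameter model is a squeeze on a 16GB laptop at full precision:

```python
# Rough estimate of the RAM needed just to hold a model's weights.
# Ignores the KV cache, activations, and runtime overhead, which add more.

def weight_memory_gb(params_billions: float, bits_per_weight: int) -> float:
    """Weights-only footprint: parameters * bits-per-weight / 8, in GB."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

# A 7-billion-parameter model:
print(weight_memory_gb(7, 16))  # FP16: 14.0 GB
print(weight_memory_gb(7, 4))   # 4-bit quantized: 3.5 GB
```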

Also: How to install an LLM on MacOS (and why you should)

I have known all that, but I wanted to get a feel for it firsthand. I wanted to run a large language model on my home computer.

Now, downloading and running an AI model can involve a lot of work to set up the "environment." So, inspired by my colleague Jack Wallen's coverage of the open-source tool Ollama, I downloaded the MacOS binary of Ollama as my gateway to local AI.
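For anyone who wants to retrace that route, the flow once the Ollama app is installed and its local server is running is short. Here is a minimal sketch using Ollama's official Python client; the model tag is my pick for illustration, not necessarily the one used in this article, and any tag from the Ollama model library works the same way:

```python
# pip install ollama  -- assumes the Ollama app (local server) is already running.
import ollama

# Download a small open-source model from the Ollama library.
ollama.pull("llama3.2")

# Send one chat message to the local model and print its reply.
response = ollama.chat(
    model="llama3.2",
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
)
print(response["message"]["content"])
```

The same two steps work from the terminal as `ollama pull llama3.2` followed by `ollama run llama3.2`.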
