
My Journey to a reliable and enjoyable locally hosted voice assistant

Why This Matters

This article highlights the shift towards fully local, reliable voice assistants using HomeAssistant and llama.cpp, emphasizing the importance of privacy, customization, and hardware flexibility in the evolving smart home landscape. It demonstrates how consumers and developers can optimize performance with various GPU options, reducing dependence on cloud services.

Key Takeaways

I have been watching HomeAssistant's progress with Assist for some time. We previously used Google Home via Nest Minis, and we have since switched to a fully local Assist setup, local-first and backed by llama.cpp (previously Ollama). In this post I will share the steps I took to get to where I am today, the decisions I made, and why they were right for my specific use case.
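Assist talks to a local LLM over an OpenAI-compatible chat endpoint, so the llama.cpp side of a setup like this can be sketched roughly as follows. The model path, port, and layer count below are placeholders, not the author's exact configuration:

```shell
# Sketch: serve a local GGUF model with llama.cpp's OpenAI-compatible server.
# Tune -ngl (number of layers offloaded to the GPU) to what fits in VRAM.
llama-server -m ./models/your-model.gguf --host 0.0.0.0 --port 8080 -ngl 99
```

A HomeAssistant conversation integration that speaks the OpenAI API can then be pointed at that server's `/v1` endpoint on the host's address.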

Links to Additional Improvements

Here are links to additional improvements posted about in this thread.

New Features

Fixing Unwanted HA / LLM Behaviors

Optimizing Performance

Hardware Details

I have tested a wide variety of hardware, from a 3050 to a 3090. Most modern discrete GPUs can be used for local Assist effectively; the hardware you need just depends on your expectations of capability and speed.
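A rough way to reason about which GPU fits which model is a back-of-the-envelope VRAM estimate: parameter count times bits per weight, divided by 8, plus some fixed overhead for the KV cache and runtime buffers. This is a common heuristic, not a figure from this post:

```python
def approx_vram_gb(params_billions: float, bits_per_weight: float,
                   overhead_gb: float = 1.5) -> float:
    """Rough VRAM estimate for a quantized model: weights + fixed overhead.

    A heuristic sketch, not an exact formula; real usage also depends on
    context length, KV-cache size, and the runtime's own allocations.
    """
    weight_gb = params_billions * bits_per_weight / 8  # GB for the weights
    return weight_gb + overhead_gb

# Example: an 8B model at ~4.5 bits/weight (typical 4-bit quantization)
# lands around 6 GB, so it is tight on a 3050 (8 GB) and comfortable
# on anything with more VRAM.
print(round(approx_vram_gb(8, 4.5), 1))
```

The overhead constant is an assumption; the point is that quantized 7B-8B models fit on entry-level cards, while much larger models push you toward a 3090-class GPU.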

I am running HomeAssistant on my UnRaid NAS; its specs are not really important, as the NAS itself has nothing to do with HA Voice.

Voice Hardware:
