How I built my own Rabbit R1 without any VC funding or a previous crypto gig

2024-07-04 10:30:04

This project takes a lot of off-the-shelf components, puts them together, and produces a private, secure, and simple AI companion that answers any question I throw at it.

My AI assistant takes a similar approach to other successful voice assistants (most notably Rhasspy). In my case, however, a Raspberry Pi 4B runs speech recognition software (VOSK) locally and talks to a large language model hosted on one of my PCs through Ollama’s OpenAI-compatible API endpoint. Thanks to NordVPN’s Meshnet, I can use it (and actually do) from anywhere in the world.
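The Pi-side client can be sketched in a few lines: the text VOSK produces is wrapped in an OpenAI-style chat payload and POSTed to the Ollama server's `/v1/chat/completions` endpoint. This is a minimal sketch using only the standard library; the host address and model name below are assumptions, so adjust them to your own setup.

```python
# Minimal sketch: send VOSK's transcript to an Ollama server through its
# OpenAI-compatible chat endpoint. Host/port and model name are assumptions.
import json
import urllib.request

OLLAMA_URL = "http://192.168.1.50:11434/v1/chat/completions"  # hypothetical Meshnet/LAN address
MODEL = "llama3"  # whichever model your Ollama instance is serving

def build_chat_request(transcript: str) -> dict:
    """Wrap the recognized speech in an OpenAI-style chat payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": transcript}],
        "stream": False,
    }

def ask(transcript: str) -> str:
    """POST the payload to the Ollama endpoint and return the reply text."""
    payload = json.dumps(build_chat_request(transcript)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(ask("What is the capital of France?"))
```

Because the endpoint speaks the OpenAI wire format, the same client works unchanged against any OpenAI-compatible server, which is what makes swapping models or hosts painless.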

This guide will show how I built my own AI assistant. I already mentioned most of the things I used for this project. However, there is still a lot to cover.

Ollama lets you set up a language model with a couple of commands; it’s unbelievable how streamlined the process has become. There are even installers for macOS and Windows, ready to download and run.
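On Linux, the whole setup boils down to a handful of commands. This is a sketch of the typical flow; `llama3` is just an example model name, so substitute whichever model you want to serve.

```shell
# Install Ollama (Linux one-liner from the official site; macOS/Windows have installers)
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model and try it out interactively (llama3 is an example)
ollama pull llama3
ollama run llama3 "Hello!"
```

Once the server is running, it also listens on port 11434, which is the endpoint the Raspberry Pi talks to.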

With a fairly new and powerful graphics card, anyone can take advantage of GPU acceleration, which makes the model’s responses significantly quicker. However, even on a midrange CPU (like the Ryzen 5 5600G APU in my home lab server), responses are nearly instant, especially with smaller language models.
