
How to run an LLM on your PC, not in the cloud, in less than 10 minutes

submitted by
Style Pass
2024-06-22 10:30:04

Hands on With all the talk of massive machine-learning training clusters and AI PCs you’d be forgiven for thinking you need some kind of special hardware to play with text-and-code-generating large language models (LLMs) at home.

In reality, there’s a good chance the desktop system you’re reading this on is more than capable of running a wide range of LLMs, including chatbots like Mistral or source code generators like Code Llama.

In fact, with openly available tools like Ollama, LM Suite, and Llama.cpp, it’s relatively easy to get these models running on your system.

In the interest of simplicity and cross-platform compatibility, we’re going to be looking at Ollama, which, once installed, works more or less the same across Windows, Linux, and macOS.
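On Linux, getting up and running is typically a couple of terminal commands; Ollama publishes a one-line install script, and the `ollama run` subcommand pulls a model's weights on first use. The commands below are a sketch of that flow (macOS and Windows users instead grab the installer from ollama.com):

```shell
# Install Ollama on Linux using its published install script
curl -fsSL https://ollama.com/install.sh | sh

# Pull and start chatting with Mistral 7B; the first run
# downloads the model weights (roughly 4 GB)
ollama run mistral

# Or fetch a code-generation model instead
ollama run codellama
```

Once the model finishes downloading, `ollama run` drops you into an interactive prompt in the terminal.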

In general, large language models like Mistral or Llama 2 run best with dedicated accelerators. There’s a reason datacenter operators are buying and deploying GPUs in clusters of 10,000 or more, though to run a single model at home you’ll need only the merest fraction of such resources.
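Once Ollama is running on your own modest hardware, it also exposes a REST API on localhost (port 11434 by default), so you can script against the model rather than typing into the terminal. Here's a minimal Python sketch, assuming the Mistral model has already been pulled; the helper names are our own:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "mistral") -> dict:
    """Build a one-shot (non-streaming) generation request body."""
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(prompt: str, model: str = "mistral",
               host: str = "http://localhost:11434") -> str:
    """POST the prompt to Ollama's /api/generate endpoint and return the reply text."""
    data = json.dumps(build_payload(prompt, model)).encode("utf-8")
    req = urllib.request.Request(f"{host}/api/generate", data=data,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

# Example (requires a running Ollama server):
# print(ask_ollama("Why is the sky blue?"))
```

Setting `"stream": False` returns the whole reply in one JSON object; leave streaming on if you want tokens as they are generated.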
