
Local LLM Agents - Do they work?


A few months back I really enjoyed reading Thorsten Ball’s How to Build an Agent. At the time I came across it, I had been trying to understand how tool calling works in Ollama. His ‘wink if you want me to use this tool’ explanation clicked. But I’m not interested in depending on a private model hosted by someone else.
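To make that ‘wink’ concrete, here is roughly what the round trip looks like against Ollama’s /api/chat endpoint. This is a minimal sketch, not Thorsten’s code: the read_file tool and its schema are my own illustration, and it assumes Ollama is running on its default port with the model already pulled.

```go
// A sketch of the tool-calling round trip over raw HTTP/JSON. Assumes
// Ollama on its default localhost:11434 with devstral pulled; the
// read_file tool is hypothetical.
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// chatResponse picks out just the fields we need from /api/chat.
type chatResponse struct {
	Message struct {
		Role      string `json:"role"`
		Content   string `json:"content"`
		ToolCalls []struct {
			Function struct {
				Name      string         `json:"name"`
				Arguments map[string]any `json:"arguments"`
			} `json:"function"`
		} `json:"tool_calls"`
	} `json:"message"`
}

func main() {
	// The request lists the tools the model may ask for. The "wink" is the
	// model answering with a tool_calls entry instead of plain text.
	body := map[string]any{
		"model":  "devstral",
		"stream": false,
		"messages": []map[string]any{
			{"role": "user", "content": "What is in main.go?"},
		},
		"tools": []map[string]any{{
			"type": "function",
			"function": map[string]any{
				"name":        "read_file",
				"description": "Read a file and return its contents",
				"parameters": map[string]any{
					"type":     "object",
					"required": []string{"path"},
					"properties": map[string]any{
						"path": map[string]any{"type": "string"},
					},
				},
			},
		}},
	}

	buf, err := json.Marshal(body)
	if err != nil {
		panic(err)
	}
	resp, err := http.Post("http://localhost:11434/api/chat", "application/json", bytes.NewReader(buf))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out chatResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	for _, tc := range out.Message.ToolCalls {
		fmt.Printf("model wants %s with args %v\n", tc.Function.Name, tc.Function.Arguments)
	}
}
```

If the model decides it needs the tool, the reply carries a tool_calls list rather than (or alongside) plain text content; otherwise it just answers directly.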

So, I set out to follow along with Thorsten’s examples, but instead of Claude, I chose to use a model I could host locally. I went with Devstral, which describes itself as an ‘agentic LLM for software engineering tasks’, and at 24 billion parameters, it runs comfortably on GPUs with 24GB of VRAM. I’ve been happy using Ollama to host models locally for quite a while, so my code uses their client to talk to their server (HTTP and JSON under the covers).
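For the client side of that adaptation, here is roughly the shape of the loop, sketched against the official Go client (github.com/ollama/ollama/api). Again, this is illustrative rather than the code I’ll link to: the read_file tool is hypothetical, and I’m unmarshalling the tool schema from JSON just to keep the listing short.

```go
// Illustrative agent loop using the Ollama Go client; not the linked code.
// Assumes devstral is pulled and Ollama is reachable (OLLAMA_HOST or the
// default localhost:11434). The read_file tool is hypothetical.
package main

import (
	"context"
	"encoding/json"
	"fmt"
	"os"

	"github.com/ollama/ollama/api"
)

// The tool schema, unmarshalled into api.Tools to keep the example short.
const toolsJSON = `[{
  "type": "function",
  "function": {
    "name": "read_file",
    "description": "Read a file and return its contents",
    "parameters": {
      "type": "object",
      "required": ["path"],
      "properties": {"path": {"type": "string", "description": "relative file path"}}
    }
  }
}]`

func main() {
	client, err := api.ClientFromEnvironment()
	if err != nil {
		panic(err)
	}
	var tools api.Tools
	if err := json.Unmarshal([]byte(toolsJSON), &tools); err != nil {
		panic(err)
	}

	ctx := context.Background()
	stream := false
	messages := []api.Message{{Role: "user", Content: "Summarise main.go"}}

	for {
		var reply api.Message
		err := client.Chat(ctx, &api.ChatRequest{
			Model:    "devstral",
			Messages: messages,
			Tools:    tools,
			Stream:   &stream,
		}, func(resp api.ChatResponse) error {
			reply = resp.Message
			return nil
		})
		if err != nil {
			panic(err)
		}
		messages = append(messages, reply)

		// No tool calls means the model is done talking; print and stop.
		if len(reply.ToolCalls) == 0 {
			fmt.Println(reply.Content)
			return
		}
		// Otherwise run each requested tool and feed the result back as a
		// role:"tool" message, then loop so the model can continue.
		for _, tc := range reply.ToolCalls {
			path, _ := tc.Function.Arguments["path"].(string)
			contents, err := os.ReadFile(path)
			result := string(contents)
			if err != nil {
				result = "error: " + err.Error()
			}
			messages = append(messages, api.Message{Role: "tool", Content: result})
		}
	}
}
```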

This post will focus on the conversations themselves, and I will link to the code, which is mostly Thorsten’s, with minimal adaptations to talk to the Ollama API.
