I try to run an experiment once a week with open-source LLMs. This week's experiment was using Llama3 via Ollama and AgentRun to build an open-source, 100% local Code Interpreter.

The idea: give the LLM a query that is better answered via code execution than from its training data, run the generated code in AgentRun, and return the result to the user. It is more or less a proof of concept that can be expanded with additional tools the LLM can use.

For this experiment, I had Ollama installed and running, as well as the AgentRun API. My goal was to use code generated by an LLM to answer questions that an LLM would normally struggle with. Like, what is 12345 * 54321? Or, what is the largest prime number under 1000?
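To make that concrete, here is a minimal sketch of the first half of the loop: asking Llama3, served by a local Ollama instance on its default port, to write the code for one of those questions. The prompt wording is my own illustration, not the exact prompt from this post.

```python
import requests

query = "What is the largest prime number under 1000?"

# Ask Llama3 (via Ollama's standard REST endpoint) to write code for the query.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",
        "prompt": f"Reply with only a Python script that prints the answer to: {query}",
        "stream": False,  # get one complete JSON response instead of a stream
    },
)
generated_code = resp.json()["response"]  # the model's Python snippet
```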

In the file, we start off by importing the necessary libraries. We'll need json for handling data and requests for making HTTP calls. We're also using a handy library called json_repair, just in case our JSON data decides to act up and we need to fix it on the fly. This is especially useful with the 8B version of Llama3, where the JSON is sometimes slightly broken.
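Something like this, where the fallback through json_repair's repair_json helper (and the parse_llm_json name) is a sketch of the idea rather than the post's exact code:

```python
import json

import requests  # used below for the HTTP calls to Ollama and AgentRun
from json_repair import repair_json


def parse_llm_json(raw: str) -> dict:
    """Parse JSON emitted by the model, repairing it if it's slightly broken."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # Llama3 8B sometimes produces almost-valid JSON (trailing commas,
        # missing quotes); repair_json fixes it into a parseable string.
        return json.loads(repair_json(raw))
```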

We've crafted a simple function, execute_python_code. This function is pretty straightforward: it sends a Python code snippet to the code execution environment provided by AgentRun and fetches the output.
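Here is a sketch of what that function can look like. The AgentRun API route and the payload/response keys below are assumptions on my part, so check the AgentRun API docs for the exact contract:

```python
import requests

AGENTRUN_API = "http://localhost:8000/v1/run/"  # assumed local AgentRun API route


def execute_python_code(code: str) -> str:
    """Send a Python snippet to the AgentRun sandbox and return its output."""
    response = requests.post(AGENTRUN_API, json={"code": code})
    response.raise_for_status()
    return response.json()["output"]  # assumed response shape: {"output": "..."}


# The sandbox does the arithmetic the LLM itself tends to fumble.
print(execute_python_code("print(12345 * 54321)"))  # -> 670592745
```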
