Programmatically Interacting with LLMs


I decided to sit down over a few days (I'm at a trampoline park right now, being a money dispenser for the kids to buy slurpees while they tire themselves out) and figure out how to code with LLMs. We're going to look at how to do a single-shot prompt, a multi-message chat, RAG (Retrieval-Augmented Generation), and tool usage, each with ollama-js, openai-js, and then LangChain with both Ollama and OpenAI.
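To give a flavor of the plain-interface versions before comparing them, here is a minimal sketch of a single-shot prompt with ollama-js and openai-js. The model names and prompt are placeholders, and it assumes a local Ollama server is running and OPENAI_API_KEY is set in the environment.

```ts
import ollama from "ollama";
import OpenAI from "openai";

// Single-shot prompt via ollama-js (assumes `ollama serve` is running locally).
const ollamaReply = await ollama.chat({
  model: "llama3.1", // placeholder: use whatever model you have pulled
  messages: [{ role: "user", content: "Explain RAG in one sentence." }],
});
console.log(ollamaReply.message.content);

// Single-shot prompt via openai-js (reads OPENAI_API_KEY from the environment).
const openai = new OpenAI();
const openaiReply = await openai.chat.completions.create({
  model: "gpt-4o-mini", // placeholder model name
  messages: [{ role: "user", content: "Explain RAG in one sentence." }],
});
console.log(openaiReply.choices[0].message.content);
```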

The straightforward code that calls the interfaces directly is shorter, but not necessarily clearer, especially with tool calling. It is super cool to be able to retarget the LangChain code onto a different provider (sketched below), and it smooths over the differences between implementations; I learned a lot about what you could do just by reading through its documentation. But in practice there's a lot of overhead, and classes and concepts you need to learn that aren't clearly necessary. It feels like a slightly premature abstraction, but it is nice overall, and worth learning even if you don't end up using it.
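As a rough sketch of that retargeting, assuming the @langchain/openai and @langchain/ollama packages (package names and model strings vary across LangChain versions), the same invoke call works against either provider:

```ts
import { ChatOpenAI } from "@langchain/openai";
import { ChatOllama } from "@langchain/ollama";

// Pick a provider at startup; the rest of the code doesn't change.
const model = process.env.USE_OPENAI
  ? new ChatOpenAI({ model: "gpt-4o-mini" }) // placeholder model name
  : new ChatOllama({ model: "llama3.1" });   // placeholder model name

const reply = await model.invoke("Explain RAG in one sentence.");
console.log(reply.content);
```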

LangChain is a library that lets you build things on top of LLMs, and it makes it relatively easy to paper over the differences between the various implementations and APIs. It also has a bunch of high-level concepts built in. I like the RecursiveCharacterTextSplitter, for example: when I started I just split text into sentences, and the splitter turned out to be a more interesting solution (see the sketch below).
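A minimal sketch of the splitter, assuming the @langchain/textsplitters package (older versions export it from langchain/text_splitter) and made-up chunk sizes:

```ts
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Falls back through paragraph, line, and word separators until chunks fit the size limit.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 500,   // max characters per chunk (made-up value)
  chunkOverlap: 50, // characters shared between neighboring chunks (made-up value)
});

const longDocumentText = "Your document text goes here..."; // placeholder input
const chunks = await splitter.splitText(longDocumentText);
console.log(chunks.length, chunks[0]);
```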
