
Generating Conversation

2024-03-28 18:30:04

In the 6 or so months we’ve been writing this blog, we’ve alluded to what we’re building at RunLLM from time to time — we’re sure a few of you have looked at our (often outdated, until recently!) website — but we haven’t taken the time to explain what we’re building and why. This is mainly because we’ve been working out kinks in the product and iterating on its form factor with early users. We’re (finally) ready to start taking the wraps off.

Briefly, RunLLM is a domain-specific, AI-powered assistant for developer-first tools. You can see RunLLM in action live in the MotherDuck and RisingWave community Slack channels, among others. We’re in beta, but if you’d like to get access, please reach out!

RunLLM is a custom assistant for developer-first tools that can generate code, answer conceptual questions, and help with debugging. Using fine-tuned LLMs and cutting-edge data augmentation and retrieval techniques, RunLLM learns from data like documentation, guides, and community discussions to help developers navigate your product and its APIs. You can integrate RunLLM via our Slack and Discord bots or our web widget (and there’s more to come beyond chat!). You can see a quick demo of the RunLLM admin UI here:
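To make the retrieval idea above concrete, here is a minimal sketch of the retrieval half of a retrieval-augmented assistant: documentation snippets are ranked against a question, and the best matches are packed into a prompt for an LLM. This is purely illustrative, not RunLLM’s actual implementation; the embedding here is a toy bag-of-words vector, where a real system would use a learned embedding model, a vector store, and (as we do) a fine-tuned model on top.

```python
# Illustrative sketch of documentation retrieval for an LLM assistant.
# NOT RunLLM's implementation -- a hypothetical minimal example.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: lowercase word counts (a real system would use a model)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documentation snippets most similar to the question."""
    q = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question: str, docs: list[str]) -> str:
    """Assemble retrieved snippets plus the question into an LLM prompt."""
    context = "\n---\n".join(retrieve(question, docs))
    return f"Answer using only this documentation:\n{context}\n\nQ: {question}"

# Hypothetical documentation snippets for a database-like product.
docs = [
    "To create a table, run CREATE TABLE with column definitions.",
    "Use the COPY command to load CSV files into an existing table.",
    "Authentication tokens are configured in the settings panel.",
]
print(build_prompt("How do I load a CSV file?", docs))
```

The sketch shows why grounding matters: the model only sees the retrieved snippets, so answer quality depends on how well retrieval matches the question to your product’s actual documentation.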

When you think of LLM-powered developer assistants, your mind probably jumps to GitHub Copilot or RAG + GPT-4-based solutions. Our approach is fundamentally different: rather than relying on generic LLMs and search techniques, we use fine-tuning to build a narrowly tailored expert on your product. If you were lucky enough to be at Data Council this week, Joey spoke about some of the things we’re working on in his keynote with DJ Patil. We’ll share that video when it’s available.
