I recently gave this talk at a lovely event put on by our friends at Jamsocket, where we discussed different experiences running LLMs in production. With Townie, we’ve been dealing with the magic and eccentricities of this new kind of engineering.
Val Town is mostly a platform for running little bits of JavaScript that we call “vals”. It makes programming accessible to a lot of new people because vals can be super small and simple, and they don’t require any configuration. But we’ve been using LLMs to make it accessible to even more people: people who want to create things with natural language.
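To give a feel for how small a val can be, here’s a rough sketch of an HTTP val: a single exported handler and nothing else. This is illustrative only; the exact handler signature and conventions on Val Town may differ.

```typescript
// A minimal "val" sketch: one exported HTTP handler, no config files,
// no build step. (Hypothetical example; not taken from Val Town's docs.)
export default async function handler(req: Request): Promise<Response> {
  // Greet by query parameter, defaulting to "world".
  const name = new URL(req.url).searchParams.get("name") ?? "world";
  return new Response(`Hello, ${name}!`);
}
```

That’s the whole program: the platform handles deployment, routing, and hosting, which is why a chatbot that emits this kind of code can take someone from English to a running service.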
Here’s the feature I’m talking about today: Townie, the Val Town bot. If you want to be old-fashioned, you can write the code yourself with your hands and fingers, but Townie lets you write it indirectly, in English. It’s similar in broad strokes to Anthropic’s Claude Artifacts or Vercel’s v0, but one of the biggest differences is that Townie’s artifacts have full-stack backends, can be shared and forked, and so on.
We’re running a pretty plain-vanilla LLM setup! The meat and potatoes of Val Town is the challenge of running a lot of user code, at scale, with good security guarantees, and building community and collaboration tools. We aren’t training our own models or running our own GPU clusters.