Local-first search

I hear a lot of people are debating whether to adopt a local‑first architecture. This approach is often marketed as a near-magical solution that gives users the best of both worlds: cloud-like collaboration with zero latency and offline capability. Advocates even claim it can improve developer experience (DX) by simplifying state handling and reducing server costs. After two years of building applications in this paradigm, however, I’ve found the reality is more nuanced.

Local-first applications do have major benefits for users, but I think the DX claims fall off a cliff as your application grows in complexity or volume of data. In other words, the more state your app manages, the harder your life as a developer becomes.

One of the areas I struggled with the most was implementing full-text search for Fika, my local-first app, which I used to write this very blog post. Now that I’m finally happy with the solution, I want to share the journey with y’all to illustrate how local-first ideals can be at odds with practical constraints.

Fika is a local-first web application built with Replicache (for syncing), Postgres (as authoritative database), and Cloudflare Workers (as the server runtime). It’s a platform for content curators, and it has three types of entities that need to be searchable: stories, feeds, and blog posts. A power user can easily have ~10k of those entities, and each entity can contain up to ~10k characters of text. In other words, we’re dealing with on the order of 100 million characters of text that might need to be searched locally.
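To make that scale concrete, here’s a quick back-of-envelope sketch. The entity counts come from the paragraph above; the UTF-16 and index-overhead figures are my own assumptions, not measurements from Fika:

```python
# Back-of-envelope estimate of the local search corpus,
# using the rough figures from the post: ~10k entities,
# each holding up to ~10k characters of text.
entities = 10_000
chars_per_entity = 10_000

total_chars = entities * chars_per_entity
print(f"{total_chars:,} characters")  # 100,000,000

# JS strings are stored as UTF-16, so that's roughly 200 MB of raw
# text in memory before any search index exists. An inverted index
# typically adds overhead on top of that (assumption; the ratio
# varies widely with the tokenizer and language).
utf16_bytes = total_chars * 2
print(f"~{utf16_bytes / 1_000_000:.0f} MB as in-memory JS strings")
```

Even under these rosy assumptions, that’s far more than you’d want to hold in a browser tab, which is why the naive “just load everything and search it” approach stops working at this size.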
