Tejas' Blog - Caching optimizations

2024-12-27 16:30:07

In my earlier blog post we saw how Redis pipelines can be used to improve caching performance. Now we will look at some other effective optimizations that can significantly improve caching performance. This post focuses on caching API responses as JSON, backed by Redis, but the ideas are generic and can be applied elsewhere as well.

We want to achieve the maximum cache hit ratio, but we have a large amount of data to cache. One entity or another within that JSON keeps updating and invalidating the cache. For example, consider the following cache structure:
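The original example did not survive here; a hypothetical structure along the lines the post describes (all field names and values below are illustrative, not from the original) might be a single JSON blob holding everything the checkout page needs:

```python
# Hypothetical single combined cache entry: the whole checkout page
# serialized as one JSON value under one key (checkout_page_cache).
# Any change to any nested entity invalidates the entire entry.
checkout_page_cache = {
    "user_profile": {"id": 42, "name": "Alice", "email": "alice@example.com"},
    "shopping_cart": {"items": [{"product_id": 7, "qty": 2}], "total": 59.98},
    "order_status": {"last_order_id": 1001, "status": "shipped"},
}
```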

With this structure, the cache is invalidated whenever any product in the shopping cart, the order status, or the user profile changes. More invalidations mean more cache misses and lower performance.

On average, the shopping cart changes far more often than the user profile. If we split the cache into separate parts, then when the shopping cart updates we don't need to regenerate the user profile JSON. Not only does this increase cache hits, it also reduces the DB queries needed to fetch related entities. So instead of a single checkout_page_cache we have three separate ones:
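The split-cache idea can be sketched as follows. This is a minimal illustration, not the post's actual code: the key names, the `FragmentCache` helper, and the dict-backed store are all assumptions. In production the store would be a `redis.Redis` client; a plain dict keeps the sketch self-contained.

```python
import json

class FragmentCache:
    """Each page fragment lives under its own key, so updating one
    fragment does not invalidate the others. `store` is a dict here
    for runnability; a Redis client would play the same role."""

    def __init__(self, store):
        self.store = store

    def fetch(self, key, loader):
        cached = self.store.get(key)
        if cached is not None:
            return json.loads(cached)      # cache hit
        value = loader()                   # cache miss: hit the database
        self.store[key] = json.dumps(value)
        return value

    def invalidate(self, key):
        self.store.pop(key, None)

def checkout_page(cache, user_id, db):
    # Three independent keys instead of one checkout_page_cache blob.
    return {
        "user_profile": cache.fetch(f"user_profile:{user_id}", lambda: db["profile"]),
        "shopping_cart": cache.fetch(f"shopping_cart:{user_id}", lambda: db["cart"]),
        "order_status": cache.fetch(f"order_status:{user_id}", lambda: db["order"]),
    }
```

With this layout, invalidating `shopping_cart:42` after a cart update leaves `user_profile:42` and `order_status:42` warm in the cache.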

Applications generally use Redis for multiple purposes apart from caching, e.g. storing background jobs for Sidekiq. The recommended configuration for a cache store usually differs from what these tools need. For separation of concerns, too, we should keep them apart. Ideally we would run separate Redis instances, but at the very least we should use separate logical databases.
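As a sketch of that separation (a config fragment, not from the original post; URLs and database numbers are illustrative), the cache and the job queue can point at different logical databases of the same server, or better, at different instances:

```python
import redis

# Illustrative: cache traffic and Sidekiq-style job queues go to
# separate logical databases (/0 and /1). Separate instances are
# better still, since each can then have its own memory settings.
cache_redis = redis.Redis.from_url("redis://localhost:6379/0")
jobs_redis = redis.Redis.from_url("redis://localhost:6379/1")

# The cache instance can safely evict keys under memory pressure
# (e.g. maxmemory-policy allkeys-lru), while the jobs instance must
# never evict (noeviction), or queued jobs would silently disappear.
```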
