

Submitted by Style Pass, 2024-05-01

Semantic Cache is a tool for caching natural text based on semantic similarity. It's ideal for any task that involves querying or retrieving information based on meaning, such as natural language classification or caching AI responses. Two pieces of text can be similar but not identical (e.g., "great places to check out in Spain" vs. "best places to visit in Spain"). Traditional caching doesn't recognize this semantic similarity and misses opportunities for reuse.

First, create an Upstash Vector database in the Upstash console. You'll need the URL and token credentials to connect your semantic cache. Important: choose a pre-made embedding model when creating your database.
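With the database created, wiring the credentials into a cache client might look like the sketch below. This assumes the `@upstash/semantic-cache` and `@upstash/vector` packages and the environment variable names the Upstash clients conventionally read (`UPSTASH_VECTOR_REST_URL`, `UPSTASH_VECTOR_REST_TOKEN`); verify the exact names against the library's documentation.

```typescript
import { SemanticCache } from "@upstash/semantic-cache";
import { Index } from "@upstash/vector";

// Index() reads UPSTASH_VECTOR_REST_URL and UPSTASH_VECTOR_REST_TOKEN
// from the environment; you can also pass { url, token } explicitly.
const index = new Index();

// minProximity is the similarity threshold for a cache hit.
const semanticCache = new SemanticCache({ index, minProximity: 0.95 });

async function main() {
  await semanticCache.set("best places to visit in Spain", "Barcelona, Madrid, Seville");

  // A semantically similar (but not identical) query should return the cached answer.
  const answer = await semanticCache.get("great places to check out in Spain");
  console.log(answer);
}

main();
```

Because the cache is backed by a live Upstash Vector database, this snippet needs valid credentials in the environment before it will run.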

Different embedding models are great for different use cases. For example, if low latency is a priority, choose a model with a smaller dimension size like bge-small-en-v1.5. If accuracy is important, choose a model with more dimensions.

The minProximity parameter ranges from 0 to 1 and defines the minimum relevance score that counts as a cache hit. The higher the value, the more similar a user input must be to the cached content to produce a hit. In practice, a score of 0.95 indicates very high similarity, while a score of 0.75 indicates fairly low similarity. At the extreme, a value of 1.00 would accept only an exact match between the user query and the cached content as a cache hit.
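The thresholding itself is simple: the vector database returns a similarity score for the nearest cached entry, and the cache treats it as a hit only when the score reaches minProximity. A self-contained sketch of that decision, using cosine similarity over toy 3-dimensional vectors (real embedding models produce hundreds of dimensions; the function names here are illustrative, not the library's API):

```typescript
// Cosine similarity between two embedding vectors: dot(a, b) / (|a| * |b|).
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// A cache hit requires the similarity score to reach the threshold.
function isCacheHit(score: number, minProximity: number): boolean {
  return score >= minProximity;
}

// Toy "embeddings": two near-duplicate queries and one unrelated query.
const cached = [0.9, 0.1, 0.2];
const similarQuery = [0.85, 0.15, 0.25];
const unrelatedQuery = [0.1, 0.9, 0.3];

console.log(isCacheHit(cosineSimilarity(cached, similarQuery), 0.95));   // hit
console.log(isCacheHit(cosineSimilarity(cached, unrelatedQuery), 0.95)); // miss
```

Raising minProximity toward 1.00 shrinks the set of queries that match, trading reuse for precision; lowering it increases reuse but risks serving answers for only loosely related questions.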
