
Semantic Search with Milvus Lite and Voyage AI


We’re excited to partner with Milvus to bring you Milvus Lite, the newly available, lightweight, in-memory version of their leading vector database. This powerful tool is now just a pip install away, ready to run on Jupyter Notebooks, laptops, or edge devices, and is fully integrated with Voyage AI embeddings, making the development of GenAI applications easier than ever.
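As a quick illustration, getting Milvus Lite running locally is a minimal sketch like the following. The database file name and collection name are placeholders, and the 1024-dimension setting assumes a Voyage model such as voyage-law-2 or voyage-large-2-instruct:

```python
# Milvus Lite ships inside the pymilvus package:
#   pip install -U pymilvus voyageai

from pymilvus import MilvusClient

# Passing a local file path (rather than a server URI) tells pymilvus to
# use the embedded Milvus Lite engine, so nothing else needs to be running.
client = MilvusClient("./milvus_demo.db")  # hypothetical file name

# Create a collection sized for 1024-dimensional Voyage embeddings.
client.create_collection(collection_name="demo_collection", dimension=1024)
print(client.list_collections())
```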

Our cutting-edge general-purpose and domain-specific embedding models are easily accessible through a hosted API endpoint. Voyage embedding models dramatically boost retrieval quality for enterprise retrieval-augmented generation (RAG) applications, and Voyage’s portfolio frequently tops the Massive Text Embedding Benchmark (MTEB) leaderboards, including the general-purpose voyage-large-2-instruct and the legal-specific voyage-law-2 (ranked #1 for legal retrieval). Voyage models consistently outperform commercial alternatives, including those from OpenAI and Cohere.
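For reference, calling the hosted endpoint from Python looks roughly like this sketch; it assumes the voyageai package and a VOYAGE_API_KEY environment variable, and the sample sentences are made up:

```python
import os
import voyageai

# The client can also pick up VOYAGE_API_KEY automatically; passing it
# explicitly here just makes the assumption visible.
vo = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])

documents = [
    "The lessee shall maintain the premises in good repair.",
    "Milvus Lite is an embedded, lightweight vector database.",
]

# input_type lets the model distinguish documents from queries,
# which improves retrieval quality.
result = vo.embed(documents, model="voyage-law-2", input_type="document")
print(len(result.embeddings), len(result.embeddings[0]))  # e.g. 2 vectors, 1024 dims each
```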

Together, Milvus Lite and Voyage’s hosted models let developers add powerful semantic search to their GenAI applications within seconds, with little to no code changes when scaling to production. The same client-side code works against a full Milvus deployment on Kubernetes or managed Milvus on Zilliz Cloud, simplifying migration and saving valuable time.
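Putting the two together, an end-to-end sketch might look like the following. The collection name, document texts, and choice of voyage-large-2-instruct are illustrative; swapping the local .db path for a Milvus server URI (or a Zilliz Cloud endpoint plus token) is the only change needed to point the same code at a production cluster:

```python
import os
import voyageai
from pymilvus import MilvusClient

vo = voyageai.Client(api_key=os.environ["VOYAGE_API_KEY"])

# Swap this URI for "http://<milvus-host>:19530" or a Zilliz Cloud endpoint
# (plus a token) to move to production; the rest of the code stays the same.
client = MilvusClient("./milvus_demo.db")

docs = [
    "Milvus Lite is an embedded, lightweight vector database.",
    "Voyage AI provides general-purpose and domain-specific embeddings.",
    "RAG pipelines retrieve relevant context before generation.",
]
doc_vecs = vo.embed(docs, model="voyage-large-2-instruct", input_type="document").embeddings

# Size the collection from the embeddings themselves.
client.create_collection(collection_name="articles", dimension=len(doc_vecs[0]))
client.insert(
    collection_name="articles",
    data=[{"id": i, "vector": v, "text": t} for i, (v, t) in enumerate(zip(doc_vecs, docs))],
)

# Embed the query with input_type="query" and run a vector search.
query_vec = vo.embed(
    ["What is Milvus Lite?"], model="voyage-large-2-instruct", input_type="query"
).embeddings[0]
hits = client.search(
    collection_name="articles", data=[query_vec], limit=2, output_fields=["text"]
)
for hit in hits[0]:
    print(hit["distance"], hit["entity"]["text"])
```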
