This year, vector databases have sprung up like mushrooms, enabling applications to retrieve context via semantic search. Many of these applications use the retrieved context to augment the abilities of large language models (LLMs), a pattern known as retrieval-augmented generation (RAG). On November 7th, OpenAI released its Assistants API, which makes it possible to build AI chat interfaces with context retrieval without a separate message store or vector database. Does this new API make vector databases obsolete?
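To ground the comparison, here is a toy sketch of the retrieval step a vector database performs: documents and the query are embedded as vectors, and the documents closest to the query by cosine similarity become the context handed to the LLM. The hand-written three-dimensional vectors below stand in for real embedding-model output; everything here is illustrative, not any particular database's API.

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Toy "embeddings": in practice these come from an embedding model.
documents = {
    "Order #123: wireless mouse, delivered last week": [0.9, 0.1, 0.0],
    "Refund policy: returns accepted within 30 days": [0.1, 0.8, 0.2],
    "Order #124: USB-C cable, shipped yesterday": [0.8, 0.2, 0.1],
}

def retrieve(query_vec, k=2):
    # Rank all documents by similarity to the query and keep the top k.
    ranked = sorted(
        documents,
        key=lambda doc: cosine_similarity(documents[doc], query_vec),
        reverse=True,
    )
    return ranked[:k]

# A query vector "about recent orders" lands near the two order documents.
context = retrieve([0.85, 0.15, 0.05])
```

A real vector database does the same ranking over millions of vectors with approximate nearest-neighbor indexes instead of a full sort, but the retrieval contract is the same: query vector in, top-k documents out.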
This post concludes a series comparing three different implementations of AI chat backends to help answer this question. Read on for the final verdict.
In the current age of ChatGPT, GPTs, Bards, Bings, and Claudes, it might not be obvious why anyone should build another AI-powered chat, but there are good reasons:
Combining all three can lead to integrations that are not possible within the confines of existing general-purpose chat interfaces. For example, we could provide the LLM with user-specific information, give it precise instructions, and have it take actions directly in our product. Think: going to Amazon, asking it to reorder an item you bought last week at double the quantity, and having your shopping cart update to reflect the conversation. We can render product information inline and make it interactive, without the user having to navigate between different websites or apps.
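The Amazon scenario hinges on the model taking actions through tool (function) calling: the product exposes operations as JSON-schema tools, the model responds with a structured call, and the backend executes it. A minimal sketch of that dispatch loop, with hypothetical names like `reorder_item` standing in for real product code and a hand-written dict standing in for the model's tool-call output:

```python
import json

# Hypothetical product action exposed to the model as a tool.
def reorder_item(cart, item_id, quantity):
    cart[item_id] = cart.get(item_id, 0) + quantity
    return cart

# Tool schema in the JSON-schema style used by LLM function-calling APIs.
REORDER_TOOL = {
    "name": "reorder_item",
    "description": "Re-add a previously purchased item to the user's cart.",
    "parameters": {
        "type": "object",
        "properties": {
            "item_id": {"type": "string"},
            "quantity": {"type": "integer"},
        },
        "required": ["item_id", "quantity"],
    },
}

def dispatch(cart, tool_call):
    # The model returns a tool name plus JSON-encoded arguments; we execute them.
    args = json.loads(tool_call["arguments"])
    if tool_call["name"] == "reorder_item":
        return reorder_item(cart, **args)
    raise ValueError(f"unknown tool: {tool_call['name']}")

# Simulated model output for "reorder and double the mouse I bought last week".
cart = dispatch({}, {"name": "reorder_item",
                     "arguments": '{"item_id": "mouse-123", "quantity": 2}'})
```

The backend stays in control: the model only proposes structured calls, and the product code decides what actually runs, which is what makes this pattern safe to wire into a real shopping cart.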