Introducing the First Hallucination-Free LLM

2024-04-01 11:30:08

While Pinecone is best known for its vector database, which helps reduce hallucinations through Retrieval Augmented Generation, we’re also investing in finding other ways to reduce hallucinations. Today, we’re excited to announce a breakthrough in our research: the first-ever LLM that never hallucinates — ever.

It’s called Luna, and Pinecone users can interact with the model today through a chatbot interface for free. We will open-source the model eventually, but for now, due to the far-reaching implications of an AI model that never hallucinates, we’re only sharing the model’s source and weights with vetted institutions.

Hallucinations are the predominant reason most AI applications never reach production. LLMs can answer most questions about public information, but they don’t have sufficient knowledge to answer questions that require access to private data. This is already being addressed with RAG — using a vector database to retrieve and feed relevant context to the LLM — but we wondered if there was an even easier way.
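For readers unfamiliar with the RAG pattern mentioned above, here is a minimal sketch of the retrieve-then-augment loop. The embedding function and in-memory index are toy stand-ins invented for illustration; in practice you would use a real embedding model and a vector database such as Pinecone.

```python
from math import sqrt

def embed(text: str) -> list[float]:
    # Toy "embedding": a normalized character-frequency vector over a-z.
    # A real system would call an embedding model here.
    vec = [0.0] * 26
    for ch in text.lower():
        if "a" <= ch <= "z":
            vec[ord(ch) - ord("a")] += 1.0
    norm = sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def cosine(a: list[float], b: list[float]) -> float:
    # Vectors are already normalized, so the dot product is cosine similarity.
    return sum(x * y for x, y in zip(a, b))

class ToyVectorIndex:
    """A stand-in for a vector database: stores (vector, document) pairs."""

    def __init__(self):
        self.items: list[tuple[list[float], str]] = []

    def upsert(self, doc: str) -> None:
        self.items.append((embed(doc), doc))

    def query(self, text: str, top_k: int = 1) -> list[str]:
        qv = embed(text)
        ranked = sorted(self.items, key=lambda it: cosine(qv, it[0]), reverse=True)
        return [doc for _, doc in ranked[:top_k]]

def answer_with_rag(question: str, index: ToyVectorIndex) -> str:
    # Retrieve relevant context and feed it to the LLM alongside the question.
    # Here we just return the augmented prompt instead of calling a model.
    context = index.query(question, top_k=1)
    return f"Context: {context[0]}\nQuestion: {question}"
```

The point of the pattern is that the model answers from retrieved private context rather than from its parametric memory alone, which is what reduces hallucinations.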

Our novel approach targets the root issue causing all other LLMs to hallucinate: They don’t know the limits of their knowledge, so they often fail to admit when they don’t know the answer. And so they make something up. And therein lies the key insight: A model will never hallucinate if it always admits what it does not know.
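Luna’s actual mechanism isn’t described here, but the insight above has one trivially guaranteed implementation, which we can sketch in the spirit of the announcement date: a model that always admits what it does not know can never, by construction, hallucinate.

```python
def luna(prompt: str) -> str:
    # For any input whatsoever, honestly report the limits of the model's
    # knowledge. Hallucination rate: a provable 0%.
    return "I don't know."
```

Purely illustrative, of course — but note that the guarantee really does hold for every possible prompt.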
