Correcting Hallucinations in Large Language Models


In this blog post, we share the results of our initial experiments aimed at correcting hallucinations generated by Large Language Models (LLMs). Our focus is on the open-book setting, which encompasses tasks such as summarization and Retrieval-Augmented Generation (RAG).

September 03, 2024 by Utkarsh Jain & Suleman Kazi & Ofer Mendelevitch

In the context of LLMs, hallucination refers to the phenomenon where the model makes up information when responding to a user’s prompt or question. While the exact causes of hallucinations remain unclear and are the subject of ongoing research, they can have significant real-world consequences, especially in enterprise applications, and even more so with Agentic RAG.

As we mention in our hallucination detection blog post, one of the most effective methods for reducing hallucinations is grounding LLM responses in a set of provided documents (also called references): in other words, augmenting generation with retrieval, aka RAG. However, even RAG is not a catch-all solution to the hallucination problem. As Vectara’s Hallucination Leaderboard shows, modern LLMs hallucinate anywhere from 1% to nearly 30% of the time even when generating outputs based on reference sources, a setting known as open-book generation.
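
To make the open-book setting concrete, here is a minimal sketch of grounding an LLM answer in retrieved references. The prompt wording, the model name, and the use of the OpenAI client are illustrative assumptions, not Vectara’s actual pipeline; the point is simply that the model is asked to answer only from the supplied passages, and hallucination occurs when its output is not supported by them.

```python
# Minimal sketch of open-book (grounded) generation.
# Assumptions: OpenAI client, an illustrative model name, and example prompt wording.
from openai import OpenAI

client = OpenAI()

def grounded_answer(question: str, references: list[str]) -> str:
    # Number and concatenate the retrieved passages the answer must be grounded in.
    context = "\n\n".join(f"[{i + 1}] {passage}" for i, passage in enumerate(references))
    prompt = (
        "Answer the question using only the references below. "
        "If the references do not contain the answer, say so.\n\n"
        f"References:\n{context}\n\nQuestion: {question}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; any instruction-tuned LLM works here
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```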
