From: aidotengineer
Evaluating Retrieval-Augmented Generation (RAG) systems has traditionally faced a significant hurdle: the need for “golden answers” or “golden chunks” to measure performance [00:00:22]. This requirement makes RAG evaluation hard to scale [00:00:29].
OpenRAG Eval: A Solution for Scalable RAG Evaluation
To address this challenge, OpenRAG Eval was developed as an open-source project for quick, scalable RAG evaluation [00:00:06]. It is research-backed, developed in collaboration with Jimmy Lin’s lab at the University of Waterloo [00:00:32].
How OpenRAG Eval Works
The evaluation process begins with a set of queries, ranging from tens to thousands, that are important for a specific RAG system [00:00:47].
- RAG Connector: A RAG connector collects the actual outputs of the RAG pipeline, namely the retrieved chunks and the generated answers [00:00:56]. Connectors are available for systems like Vectara, LangChain, and LlamaIndex, with more being added [00:01:04]. These connectors produce the RAG outputs that feed the evaluation [00:01:11].
- Evaluation Run: The evaluation process then runs a series of metrics, which are grouped into evaluators [00:01:16].
- Output: These evaluators generate RAG evaluation files, providing comprehensive data to assess the RAG pipeline (a minimal sketch of this flow follows the list) [00:01:24].
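The flow can be pictured in a few lines of Python. The sketch below is illustrative only; the names (RAGOutput, BaseConnector, run_evaluation) are hypothetical and do not reflect open-rag-eval’s actual API.

```python
# Illustrative sketch of the connector -> evaluator -> output flow described above.
# All names here (RAGOutput, BaseConnector, run_evaluation) are hypothetical and are
# NOT open-rag-eval's real API.
from dataclasses import dataclass


@dataclass
class RAGOutput:
    """What a connector collects for one query: the retrieved chunks and the generated answer."""
    query: str
    chunks: list[str]
    answer: str


class BaseConnector:
    """Wraps a specific RAG pipeline (e.g., Vectara, LangChain, LlamaIndex)."""
    def fetch(self, query: str) -> RAGOutput:
        raise NotImplementedError


def run_evaluation(connector: BaseConnector, queries: list[str], evaluators: list) -> list[dict]:
    """Run every evaluator (a bundle of metrics) over each query's RAG output."""
    results = []
    for query in queries:
        output = connector.fetch(query)
        row = {"query": query}
        for evaluator in evaluators:
            # Each evaluator returns a dict of metric scores, e.g. {"umbrela": 2.3}.
            row.update(evaluator.score(output))
        results.append(row)
    return results  # the real tool serializes this into RAG evaluation files
```

In this picture, supporting a new RAG system only requires implementing the fetch() step; the evaluators stay unchanged.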
Key Metrics for Evaluation Without Golden Answers
OpenRAG Eval employs several metrics that circumvent the need for golden answers [00:01:37]:
- UMBRELA (Retrieval Metric):
  - This metric allows retrieval to be evaluated without relying on golden chunks [00:01:44].
  - It assigns a score to each retrieved chunk or passage, ranging from zero (no relevance to the query) to three (dedicated to the query and containing the exact answer) [00:02:13].
  - Research indicates that this approach correlates well with human judgment, ensuring reliable results even without golden chunks [00:02:32].
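To make the 0-to-3 grading concrete, here is a minimal sketch of UMBRELA-style scoring. The judge callable and the rubric wording are assumptions for illustration, not the exact prompt or interface used by open-rag-eval.

```python
# UMBRELA-style retrieval scoring: an LLM judge grades each retrieved chunk on the 0-3
# scale, and the per-chunk grades are aggregated into one retrieval score per query.
# `judge` is a hypothetical LLM wrapper; the rubric below paraphrases the scale and is
# not the exact prompt used by open-rag-eval.
from typing import Callable

RUBRIC = (
    "Grade the passage's relevance to the query on a 0-3 scale:\n"
    "0 = no relevance, 1 = related but does not answer the query,\n"
    "2 = partially answers the query, 3 = dedicated to the query and contains the exact answer.\n"
    "Reply with a single digit."
)


def umbrela_grades(query: str, chunks: list[str], judge: Callable[[str], str]) -> list[int]:
    """Ask the judge for a 0-3 grade for every retrieved chunk."""
    grades = []
    for chunk in chunks:
        reply = judge(f"{RUBRIC}\n\nQuery: {query}\nPassage: {chunk}\nGrade:")
        grades.append(int(reply.strip()[0]))  # assumes the judge answers with a leading digit
    return grades


def retrieval_score(grades: list[int]) -> float:
    """One simple aggregation: the mean grade, normalized to [0, 1]."""
    return sum(grades) / (3 * len(grades)) if grades else 0.0
```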
- Auto Nuggetizer (Generation Metric):
  - This metric evaluates generation without requiring golden answers [00:02:54].
  - It involves three steps:
    - Nugget Creation: Atomic units called “nuggets” are created [00:03:00].
    - Nugget Rating and Sorting: Each nugget is assigned a “vital” or “okay” rating, and the top 20 are selected [00:03:07].
    - LLM Judge Analysis: An LLM (Large Language Model) judge analyzes the RAG response to determine whether each selected nugget is fully or partially supported by the answer [00:03:15].
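One way to turn those nugget judgments into a generation score is sketched below. The vital/okay labels and the full/partial/none support levels come from the description above; the specific weighting (partial support = 0.5) is an assumption, not necessarily open-rag-eval’s exact formula.

```python
# Turning nugget judgments into a generation score. The vital/okay labels and the
# full/partial/none support levels follow the description above; the weighting
# (partial support = 0.5) is an assumption, not necessarily open-rag-eval's formula.
from dataclasses import dataclass


@dataclass
class Nugget:
    text: str
    importance: str  # "vital" or "okay", assigned during nugget rating
    support: str     # "full", "partial", or "none", assigned by the LLM judge


def nugget_score(nuggets: list[Nugget], vital_only: bool = False) -> float:
    """Fraction of (optionally vital-only) nuggets supported by the RAG answer."""
    credit = {"full": 1.0, "partial": 0.5, "none": 0.0}
    pool = [n for n in nuggets if n.importance == "vital"] if vital_only else nuggets
    if not pool:
        return 0.0
    return sum(credit[n.support] for n in pool) / len(pool)
```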
- Citation Faithfulness:
  - This metric assesses whether the citations within the response are accurate and high-fidelity [00:03:30].
  - It checks whether each citation is fully supported, partially supported, or not supported by the source it points to [00:03:37].
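A citation-faithfulness check can be aggregated the same way: pair each cited statement with the passage it cites and ask a judge for a support label. The judge callable and label names here are illustrative assumptions.

```python
# Citation faithfulness as a simple aggregate: each cited statement is paired with the
# passage it cites, and a judge labels the pair "full", "partial", or "none".
# `judge` is a hypothetical LLM wrapper; the label names are illustrative.
from typing import Callable


def citation_faithfulness(
    citations: list[tuple[str, str]],      # (statement, cited passage) pairs
    judge: Callable[[str, str], str],      # returns "full", "partial", or "none"
) -> float:
    if not citations:
        return 0.0
    credit = {"full": 1.0, "partial": 0.5, "none": 0.0}
    labels = [judge(statement, passage) for statement, passage in citations]
    return sum(credit[label] for label in labels) / len(labels)
```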
- Hallucination Detection:
  - Using Vectara’s Hallucination Evaluation Model (HHEM), this metric checks whether the entire RAG response is grounded in the retrieved content, flagging instances of hallucination [00:03:45].
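For reference, Vectara publishes an open HHEM model on Hugging Face. A minimal sketch of scoring a response against its retrieved content might look like the following, assuming the model id "vectara/hallucination_evaluation_model" and its predict() helper as shown on the public model card (check the card for the current interface).

```python
# Minimal sketch of hallucination detection with an HHEM-style model, assuming the open
# model published on Hugging Face as "vectara/hallucination_evaluation_model" and its
# predict() helper (see the model card for the current interface).
from transformers import AutoModelForSequenceClassification

model = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

retrieved_content = "OpenRAG Eval was developed by Vectara with the University of Waterloo."
rag_answer = "OpenRAG Eval is a closed-source product with no academic involvement."

# predict() scores (premise, hypothesis) pairs; a score near 1 means the answer is
# consistent with the retrieved content, while a score near 0 suggests hallucination.
score = float(model.predict([(retrieved_content, rag_answer)])[0])
print(f"consistency score: {score:.2f}")
```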
User Interface and Benefits
OpenRAG Eval includes a user-friendly interface [00:03:58]. After running an evaluation, users can drag and drop the generated files onto open-evaluation.ai to view a UI that displays queries, retrieval scores, and the different generation scores [00:04:08].
This open-source package is a powerful tool for optimizing and tuning RAG pipelines [00:04:29], and its open-source nature keeps the metrics fully transparent [00:04:34]. The project encourages contributions, especially new RAG pipeline connectors [00:04:50].