RAGAS for RAG in LLMs: A Comprehensive Guide to Evaluation Metrics.

Karthikeyan Dhanakotti
6 min read · Aug 15, 2024


Introduction

In the era of Large Language Models (LLMs), generating accurate and contextually appropriate responses is a critical challenge, especially when these models need to handle diverse and complex queries. To address this, Retrieval-Augmented Generation (RAG) has emerged as a promising approach. RAG models combine the generative power of LLMs with the retrieval of relevant documents, enabling them to generate more informed and accurate responses.

However, evaluating the performance of RAG models is not straightforward. Traditional metrics like BLEU and ROUGE, widely used in natural language processing, often fall short in capturing the nuanced requirements of RAG applications, such as factual accuracy and context relevance. This is where RAGAS (Retrieval-Augmented Generation Assessment) comes into play.

What is RAGAS?

RAGAS is a specialized suite of metrics designed to evaluate the performance of RAG models in a more comprehensive manner. Unlike traditional metrics, RAGAS focuses on assessing key aspects of the generated responses, including their factual accuracy, relevance to the query, and the effectiveness of the retrieved context. By providing a more holistic evaluation framework, RAGAS helps developers and researchers ensure that their RAG models are not only generating relevant responses but also grounding those responses in factual evidence.

Traditional Evaluation Metrics: BLEU and ROUGE

Before diving into RAGAS metrics, it’s essential to understand why traditional metrics like BLEU and ROUGE might not be suitable for RAG applications.

BLEU (Bilingual Evaluation Understudy)

BLEU is a precision-based metric that evaluates the overlap of n-grams (sequences of words) between the generated response and a reference response.

Limitations in RAG:

  • Surface-Level Comparison: BLEU focuses on surface-level text similarity, which might not capture the underlying correctness or relevance of the information in RAG-generated responses (see the short sketch after this list).
  • Lack of Context Understanding: BLEU does not consider the context provided by retrieved documents, making it less effective in evaluating whether a response is supported by the evidence.
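
To make this concrete, here is a minimal sketch using NLTK (the sentences are purely illustrative): a factually wrong answer that preserves more n-grams can out-score a correct but reworded one.

# Minimal sketch: BLEU rewards n-gram overlap, not factual correctness.
from nltk.translate.bleu_score import sentence_bleu

reference = "paris is the capital of france".split()
correct_reworded = "the capital of france is paris".split()
factually_wrong = "lyon is the capital of france".split()

print(sentence_bleu([reference], correct_reworded))  # ~0.56
print(sentence_bleu([reference], factually_wrong))   # ~0.76, higher despite naming the wrong city

Because only surface n-grams are compared, the single word that changes the meaning barely affects the score.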

ROUGE (Recall-Oriented Understudy for Gisting Evaluation)

ROUGE measures the overlap of n-grams, word sequences, and word pairs between the generated response and reference texts, with a focus on recall.

Limitations in RAG:

  • Recall Focus: ROUGE emphasizes recall, but in RAG, both precision (how much of the retrieved information is relevant) and recall (how much relevant information is retrieved) are crucial (see the sketch after this list).
  • Context Ignorance: Like BLEU, ROUGE does not account for the context or evidence provided by the retrieved documents, leading to potential gaps in evaluating the factual accuracy of the response.
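
A similar sketch with the rouge-score package (again with illustrative sentences) shows the recall emphasis: an answer that copies the reference and then appends an unsupported claim still gets perfect ROUGE-1 recall.

# Minimal sketch: perfect ROUGE-1 recall despite an unsupported extra claim.
from rouge_score import rouge_scorer

reference = "Paris is the capital of France."
hypothesis = "Paris is the capital of France and the largest city in Europe."

scorer = rouge_scorer.RougeScorer(["rouge1"], use_stemmer=True)
scores = scorer.score(reference, hypothesis)
print(scores["rouge1"].recall)     # 1.0: every reference word is covered
print(scores["rouge1"].precision)  # 0.5: half the answer is unsupported padding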

Why BLEU and ROUGE Fall Short for RAG

In RAG applications, the primary goal is not just to generate responses that look similar to reference answers but to ensure that the responses are factually correct, relevant, and supported by retrieved documents. BLEU and ROUGE, while useful for general text generation tasks, do not adequately address these needs. This is where RAGAS metrics, specifically designed for RAG models, become essential.

RAGAS: Specialized Metrics for RAG Models

RAGAS introduces several metrics that provide a more holistic evaluation of RAG models, focusing on aspects like faithfulness, answer relevancy, context precision, and context recall.

Figure: the RAGAS score.

Faithfulness

Faithfulness measures the factual accuracy of the generated response based on the retrieved documents.

Formula:

Faithfulness = (Number of facts in the response that are supported by the retrieved context) / (Total number of facts in the response)

Example:

Query: “What is the capital of France?”

Generated Response: “Paris is the capital of France, and it is the largest city in Europe.”

Correct Facts: 1 (Paris is the capital of France)

Total Facts: 2

Faithfulness = 1/2 = 0.5
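
A toy version of that calculation in Python (the claims and support flags are written by hand here; ragas extracts the claims and verifies them against the retrieved context with an LLM):

# Toy faithfulness calculation for the worked example above.
claims = [
    ("Paris is the capital of France", True),        # supported by the context
    ("Paris is the largest city in Europe", False),  # not supported
]
supported = sum(1 for _, is_supported in claims if is_supported)
print(supported / len(claims))  # 0.5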

Answer Relevancy

Answer Relevancy evaluates how relevant the generated response is to the original query.

Formula:

Answer Relevancy = (Number of concepts in the response that are relevant to the query) / (Total number of concepts in the response)

Example:

Query: “What is the capital of France?”

Generated Response: “The capital of France is Paris, known for its art, culture, and fashion.”

Relevant Concepts: 3 (capital, Paris, France)

Total Concepts: 4

Answer Relevancy = 3/4 = 0.75
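
The same counting pattern applies here; the concept labels below are hand-picked for illustration (the ragas implementation instead scores relevancy with an LLM and embeddings rather than by counting concepts):

# Toy answer-relevancy calculation for the worked example above.
concepts = ["capital", "Paris", "France", "art, culture, and fashion"]
relevant = {"capital", "Paris", "France"}
print(sum(1 for c in concepts if c in relevant) / len(concepts))  # 0.75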

Context Precision

Context Precision measures the precision of the retrieved documents in providing relevant information to the query.

Formula:

Context Precision = (Number of relevant sentences in the retrieved context) / (Total number of sentences retrieved)

Example:

Total Sentences Retrieved: 10

Relevant Sentences: 7

Context Precision = 7/10 = 0.7

Context Recall

Context Recall assesses how well the retrieved documents cover all relevant aspects of the query.

Formula:

Context Recall = (Number of relevant sentences retrieved) / (Total number of relevant sentences available)

Example:

Total Relevant Sentences Available: 8

Relevant Sentences Retrieved: 7

Context Recall = 7/8 = 0.875
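
Both context metrics can be reproduced from the counts in the two worked examples (in ragas, the relevance judgments themselves come from an LLM rather than being given):

# Toy context precision and recall for the worked examples above.
total_sentences_retrieved = 10
relevant_sentences_retrieved = 7
total_relevant_sentences_available = 8

print(relevant_sentences_retrieved / total_sentences_retrieved)           # 0.7 (precision)
print(relevant_sentences_retrieved / total_relevant_sentences_available)  # 0.875 (recall)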

Example Calculation of RAGAS Score

Let’s assume the following:

  • Faithfulness: 0.8 (out of 1)
  • Answer Relevancy: 0.75 (out of 1)
  • Context Precision: 0.7 (out of 1)
  • Context Recall: 0.875 (out of 1)

A simple way to combine these metrics could be an average, though in practice, weighting might be applied based on the importance of each aspect:

RAGAS Score = (0.8 + 0.75 + 0.7 + 0.875) / 4 = 3.125 / 4 ≈ 0.781

Thus, the overall RAGAS Score would be approximately 0.781, indicating the model’s performance in terms of relevance, faithfulness, precision, and recall.
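
In code, that combination is a one-liner; the sketch below uses the simple average from the calculation above, with the weights left explicit so they can be adjusted for a given application:

# Combine the four component metrics into a single score.
scores = {
    "faithfulness": 0.8,
    "answer_relevancy": 0.75,
    "context_precision": 0.7,
    "context_recall": 0.875,
}
weights = {name: 0.25 for name in scores}  # equal weights = simple average

ragas_score = sum(scores[name] * weights[name] for name in scores)
print(ragas_score)  # about 0.781, as in the worked example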

Implementing RAGAS Metrics in Python

To demonstrate the practical application of RAGAS metrics, let’s walk through a Python code example using the RAGAS library along with the Hugging Face Datasets library.

from datasets import Dataset
from ragas import evaluate
from ragas.metrics import (
    faithfulness,
    answer_relevancy,
    answer_correctness,
    context_precision,
    context_recall,
)

# Example data, in the column format ragas expects: "question", "answer",
# "contexts", and "ground_truth" (the ground truth is needed by metrics
# such as answer_correctness and context_recall).
data = {
    "question": ["What is the capital of France?"],
    "answer": ["Paris is the capital of France."],
    "contexts": [[
        "Paris is the capital of France. It is a major European city known for its culture."
    ]],
    "ground_truth": ["Paris is the capital of France."],
}

# Convert the data to a Hugging Face Dataset
dataset = Dataset.from_dict(data)

# Define the metrics you want to evaluate
metrics = [
    faithfulness,
    answer_relevancy,
    answer_correctness,
    context_precision,
    context_recall,
]

# Evaluate the dataset using the selected metrics. These metrics are
# LLM-based, so ragas needs an LLM and embeddings configured (by default,
# an OpenAI key in the OPENAI_API_KEY environment variable).
results = evaluate(dataset, metrics=metrics)

# Display the results (the result object behaves like a dict of
# metric name -> score)
for metric_name, score in results.items():
    print(f"{metric_name}: {score:.2f}")

Explanation of the Code

Data Structure: The dataset comprises four key components:

question: The input question to the RAG model.

answer: The response generated by the model.

contexts: The documents retrieved by the model to help ground the response.

ground_truth: The reference answer, required by metrics such as answer_correctness and context_recall.

Metrics: The code evaluates the generated responses using the faithfulness, answer_relevancy, answer_correctness, context_precision, and context_recall metrics.

Output: The results are printed, providing a score for each metric, helping to understand how well the model performs across these dimensions.
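
If you prefer a tabular view (assuming a ragas version whose result object exposes to_pandas()), the per-sample scores can also be inspected as a pandas DataFrame:

# Optional: inspect per-sample scores as a pandas DataFrame.
df = results.to_pandas()
print(df.head())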

Conclusion

RAGAS offers a specialized suite of metrics that go beyond traditional text generation evaluation methods like BLEU and ROUGE. These traditional metrics, while effective for general text similarity tasks, fall short in capturing the nuances of factual accuracy, context relevance, and support in RAG applications. RAGAS metrics such as Faithfulness, Answer Relevancy, Context Precision, and Context Recall provide a more comprehensive evaluation framework for RAG models, ensuring that generated responses are not only relevant but also grounded in factual evidence.

By integrating RAGAS with libraries like Hugging Face Datasets, developers can easily test and refine their RAG models, ultimately leading to more reliable and trustworthy AI systems. As RAG models continue to evolve, having robust and specialized evaluation tools like RAGAS will be crucial in maintaining the quality and integrity of AI-generated content.

Karthikeyan Dhanakotti is on LinkedIn.
