Viottery committed (verified)
Commit eb966ee · Parent(s): ba58fdd

Update README.md
Welcome to mmRAG, the first comprehensive benchmark for evaluating Retrieval-Augmented Generation (RAG) systems across multiple modalities and modular components. mmRAG offers:

A Unified, Multimodal Document Pool
• Text, tables, and knowledge graph data all indexed together
• Enables truly cross-format retrieval and grounding

Extensive Chunk-Level Labels
• High-precision relevance annotations for every query–chunk pair, generated via LLM-based annotation
• Supports fine-grained retrieval module evaluation

Dataset-Level Routing Labels
• Aggregated from chunk-level annotations to indicate which source dataset (e.g., WebQA, TriviaQA, NQ, CWQ, OTT, KG) best answers each query
• The first benchmark signal for the query-routing stage in RAG

With these multi-level annotations, mmRAG allows you to measure:

• Retrieval: using "relevant chunks" labels
• Generation: using "answer" labels
• Routing: using "dataset score" labels

Whether you’re developing new dense retrievers, fine-tuning generators, or designing smarter query routers, mmRAG provides the labels, metrics, and data you need to evaluate every piece of your RAG pipeline. Dive in to benchmark your models on text, tables, and graphs—together, in one unified evaluation suite!
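To make the three evaluation levels concrete, here is a minimal sketch of scoring a single query. The field names and IDs (`relevant_ids`, `dataset_scores`, chunk IDs like `"nq_12"`) are illustrative assumptions, not the benchmark's actual schema:

```python
# Hypothetical scoring helpers for mmRAG-style labels.
# Field names and IDs below are assumptions for illustration only.

def recall_at_k(retrieved_ids, relevant_ids, k=5):
    """Fraction of labeled relevant chunks found in the top-k retrieved list."""
    if not relevant_ids:
        return 0.0
    hits = sum(1 for cid in retrieved_ids[:k] if cid in relevant_ids)
    return hits / len(relevant_ids)

def routing_hit(predicted_dataset, dataset_scores):
    """1.0 if the router chose the highest-scoring source dataset, else 0.0."""
    best = max(dataset_scores, key=dataset_scores.get)
    return float(predicted_dataset == best)

# Example: a retriever run and a router decision for one query.
retrieved = ["nq_12", "ott_3", "kg_7", "nq_40", "webqa_2"]
relevant = {"nq_12", "nq_40"}
print(recall_at_k(retrieved, relevant, k=5))            # 1.0
print(routing_hit("NQ", {"NQ": 0.9, "OTT": 0.4}))       # 1.0
```

Generation quality would be scored separately against the "answer" labels with standard QA metrics (e.g., exact match or F1).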

Files changed (1)
  1. README.md +8 -1
README.md CHANGED

```diff
@@ -1,3 +1,10 @@
+---
+license: apache-2.0
+task_categories:
+- question-answering
+language:
+- en
+---
 # 📚 mmrag benchmark
 
 ## 📁 Files Overview
@@ -63,4 +70,4 @@ This file is a list of documents used for document retrieval, which contains the
 
 ## 📄 License
 
-This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
+This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).
```