|
|
--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- question-answering |
|
|
language: |
|
|
- en |
|
|
--- |
|
|
# mmRAG Benchmark
|
|
|
|
|
## Files Overview
|
|
|
|
|
- `mmrag_train.json`: Training set.
|
|
- `mmrag_dev.json`: Validation set for hyperparameter tuning and development. |
|
|
- `mmrag_test.json`: Test set for evaluation. |
|
|
- `processed_documents.json`: The document chunks used for retrieval (the knowledge base).
|
|
|
|
|
--- |
|
|
## Example: How to Use the mmRAG Dataset
|
|
|
|
|
You can load and explore the mmRAG dataset with standard Python libraries such as `json`. Below is a simple example of working with the data files.
|
|
|
|
|
### Step 1: Load the Dataset
|
|
|
|
|
```python |
|
|
import json |
|
|
|
|
|
# Load query datasets |
|
|
with open("mmrag_train.json", "r", encoding="utf-8") as f: |
|
|
train_data = json.load(f) |
|
|
|
|
|
with open("mmrag_dev.json", "r", encoding="utf-8") as f: |
|
|
dev_data = json.load(f) |
|
|
|
|
|
with open("mmrag_test.json", "r", encoding="utf-8") as f: |
|
|
test_data = json.load(f) |
|
|
|
|
|
# Load document chunks |
|
|
with open("processed_documents.json", "r", encoding="utf-8") as f: |
|
|
documents = json.load(f) |
|
|
# Build an id -> text mapping; the examples below look chunks up by ID
|
|
documents = {doc["id"]: doc["text"] for doc in documents} |
|
|
``` |
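A quick sanity check confirms the files loaded as expected (each query file is a list of dictionaries, and `documents` is now an id-to-text mapping):

```python
# Print the size of each split and of the knowledge base
print(f"Train queries: {len(train_data)}")
print(f"Dev queries: {len(dev_data)}")
print(f"Test queries: {len(test_data)}")
print(f"Chunks: {len(documents)}")
```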
|
|
### Step 2: Access Query and Document Examples
|
|
|
|
|
```python |
|
|
# Example query |
|
|
query_example = train_data[0] |
|
|
print("Query:", query_example["query"]) |
|
|
print("Answer:", query_example["answer"]) |
|
|
print("Relevant Chunks:", query_example["relevant_chunks"]) |
|
|
|
|
|
# Look up the text of each annotated relevant chunk
|
|
for chunk_id, relevance in query_example["relevant_chunks"].items(): |
|
|
if relevance > 0: |
|
|
print(f"Chunk ID: {chunk_id}, Relevance label: {relevance}\nText: {documents[chunk_id]}") |
|
|
``` |
|
|
### Step 3: Get Sorted Routing Scores
|
|
The following example shows how to extract and sort a query's `dataset_score` field to see which source dataset is most relevant to it.
|
|
|
|
|
```python |
|
|
# Choose a query from the dataset |
|
|
query_example = train_data[0] |
|
|
|
|
|
print("Query:", query_example["query"]) |
|
|
print("Answer:", query_example["answer"]) |
|
|
|
|
|
# Get dataset routing scores |
|
|
routing_scores = query_example["dataset_score"] |
|
|
|
|
|
# Sort datasets by relevance score (descending) |
|
|
sorted_routing = sorted(routing_scores.items(), key=lambda x: x[1], reverse=True) |
|
|
|
|
|
print("\nRouting Results (sorted):") |
|
|
for dataset, score in sorted_routing: |
|
|
print(f"{dataset}: {score}") |
|
|
``` |
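### Step 4: Evaluate Retrieval Against the Annotations

The `relevant_chunks` annotations can also serve as ground truth for retrieval evaluation. The sketch below uses a deliberately naive word-overlap scorer as a stand-in retriever (it is not part of the benchmark; substitute your own retriever) and checks whether a gold chunk lands in the top-k results:

```python
# Toy retrieval-evaluation sketch; the word-overlap scorer is a placeholder.
def retrieve(query, documents, k=10):
    # Score every chunk by shared words with the query (fine for a demo)
    query_terms = set(query.lower().split())
    scores = {
        chunk_id: len(query_terms & set(text.lower().split()))
        for chunk_id, text in documents.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

query_example = train_data[0]
top_k = retrieve(query_example["query"], documents, k=10)

# Count a hit if any gold chunk (relevance label 2) was retrieved
gold_ids = {cid for cid, rel in query_example["relevant_chunks"].items() if rel == 2}
print("Gold chunk in top-10:", bool(gold_ids & set(top_k)))
```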
|
|
|
|
|
--- |
|
|
|
|
|
## Query Datasets: `mmrag_train.json`, `mmrag_dev.json`, `mmrag_test.json`
|
|
|
|
|
All three files are lists of dictionaries, one per query. Each dictionary contains the following fields:
|
|
|
|
|
### `id`
|
|
|
|
|
- **Description**: Unique query identifier, structured as `SourceDataset_queryIDinDataset`. |
|
|
- **Example**: `ott_144`, meaning this query comes from the OTT-QA dataset.
|
|
|
|
|
### `query`
|
|
|
|
|
- **Description**: Text of the query. |
|
|
- **Example**: `"What is the capital of France?"` |
|
|
|
|
|
### `answer`
|
|
|
|
|
- **Description**: The gold-standard answer corresponding to the query. |
|
|
- **Example**: `"Paris"` |
|
|
|
|
|
### `relevant_chunks`
|
|
|
|
|
- **Description**: Dictionary mapping annotated chunk IDs to their relevance scores. The text of each chunk can be retrieved from `processed_documents.json`. Relevance scores take values in {0 (irrelevant), 1 (partially relevant), 2 (gold)}.
|
|
- **Example**: `{"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}`
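For instance, continuing from the usage example above, the gold chunk texts for a query can be pulled out of this dictionary directly:

```python
# Texts of the gold (relevance == 2) chunks for one query
gold_texts = [
    documents[chunk_id]
    for chunk_id, relevance in query_example["relevant_chunks"].items()
    if relevance == 2
]
```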
|
|
|
|
|
### `ori_context`
|
|
|
|
|
- **Description**: A list of the original document IDs associated with the query. This field can be used to recover the relevant documents provided by the source dataset.
|
|
- **Example**: `["ott_144"]`, meaning all chunk IDs starting with `ott_144` come from the original document.
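A small sketch of how this field can be used, based on the chunk ID convention described in the knowledge-base section below (`query_example` and `documents` come from the usage example above):

```python
# Recover every chunk that belongs to the query's original documents
original_chunks = {
    chunk_id: text
    for chunk_id, text in documents.items()
    if any(chunk_id.startswith(doc_id + "_") for doc_id in query_example["ori_context"])
}
```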
|
|
|
|
|
### `dataset_score`
|
|
|
|
|
- **Description**: The dataset-level relevance labels, giving the routing score of each source dataset with respect to this query.
|
|
- **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`, where 0 means the dataset contains no relevant chunks. The higher the score, the more relevant chunks the dataset contains.
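For instance, the single best-routed dataset can be read off directly (complementing the sorted view in Step 3 above):

```python
# Dataset with the highest routing score for this query
scores = query_example["dataset_score"]
best_dataset = max(scores, key=scores.get)
```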
|
|
|
|
|
--- |
|
|
|
|
|
## Knowledge Base: `processed_documents.json`
|
|
|
|
|
This file is a list of chunks used for document retrieval. Each chunk contains the following fields:
|
|
|
|
|
### `id`
|
|
|
|
|
- **Description**: Unique chunk identifier, structured as `dataset_documentID_chunkIndex` (equivalent to `dataset_queryID_chunkIndex`).
|
|
- **Example 1**: `ott_8075_0` (chunks from NQ, TriviaQA, OTT, and TAT)
|
|
- **Example 2**: `m.0cpy1b_5` (chunks from knowledge-graph (Freebase) documents)
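Since the chunk index is always the final underscore-separated part, a small helper can split an ID from the right, so that document IDs which themselves contain underscores or dots stay intact (a sketch based on the convention above):

```python
def parse_chunk_id(chunk_id):
    """Split a chunk ID into (document ID, chunk index), e.g. 'ott_8075_0' -> ('ott_8075', 0)."""
    doc_id, _, chunk_index = chunk_id.rpartition("_")
    return doc_id, int(chunk_index)
```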
|
|
|
|
|
### `text`
|
|
|
|
|
- **Description**: Text of the chunk.
|
|
- **Example**: `A molecule editor is a computer program for creating and modifying representations of chemical structures.` |
|
|
|
|
|
--- |
|
|
|
|
|
## License
|
|
|
|
|
This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). |