---
license: apache-2.0
task_categories:
- question-answering
language:
- en
---
# mmRAG Benchmark
## Files Overview
- `mmrag_train.json`: Training set for model training.
- `mmrag_dev.json`: Validation set for hyperparameter tuning and development.
- `mmrag_test.json`: Test set for evaluation.
- `processed_documents.json`: The chunks used for retrieval.
---
## Example: How to Use the mmRAG Dataset
You can load and work with the mmRAG dataset using standard Python libraries like `json`. Below is a simple example of how to load and interact with the data files.
### Step 1: Load the Dataset
```python
import json

# Load the query splits
with open("mmrag_train.json", "r", encoding="utf-8") as f:
    train_data = json.load(f)
with open("mmrag_dev.json", "r", encoding="utf-8") as f:
    dev_data = json.load(f)
with open("mmrag_test.json", "r", encoding="utf-8") as f:
    test_data = json.load(f)

# Load the document chunks and index them by chunk ID
with open("processed_documents.json", "r", encoding="utf-8") as f:
    documents = json.load(f)
documents = {doc["id"]: doc["text"] for doc in documents}
```
### Step 2: Access Query and Document Examples
```python
# Inspect an example query
query_example = train_data[0]
print("Query:", query_example["query"])
print("Answer:", query_example["answer"])
print("Relevant Chunks:", query_example["relevant_chunks"])

# Look up the text of each relevant chunk
for chunk_id, relevance in query_example["relevant_chunks"].items():
    if relevance > 0:
        print(f"Chunk ID: {chunk_id}, Relevance label: {relevance}\nText: {documents[chunk_id]}")
```
### Step 3: Get Sorted Routing Scores
The following example shows how to extract and sort the `dataset_score` field of a query to understand which dataset is most relevant to the query.
```python
# Choose a query from the dataset
query_example = train_data[0]
print("Query:", query_example["query"])
print("Answer:", query_example["answer"])
# Get dataset routing scores
routing_scores = query_example["dataset_score"]
# Sort datasets by relevance score (descending)
sorted_routing = sorted(routing_scores.items(), key=lambda x: x[1], reverse=True)
print("\nRouting Results (sorted):")
for dataset, score in sorted_routing:
    print(f"{dataset}: {score}")
```
---
## Query Datasets: `mmrag_train.json`, `mmrag_dev.json`, `mmrag_test.json`
The three files are all lists of dictionaries. Each dictionary contains the following fields:
### `id`
- **Description**: Unique query identifier, structured as `SourceDataset_queryIDinDataset`.
- **Example**: `ott_144` means this query comes from the OTT-QA dataset.
### `query`
- **Description**: Text of the query.
- **Example**: `"What is the capital of France?"`
### `answer`
- **Description**: The gold-standard answer corresponding to the query.
- **Example**: `"Paris"`
### `relevant_chunks`
- **Description**: Dictionary mapping annotated chunk IDs to relevance scores. The text of each chunk can be retrieved from `processed_documents.json`. Relevance scores take values in {0 (irrelevant), 1 (partially relevant), 2 (gold)}.
- **Example**: `{"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}`
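Because the labels are graded, a standard metric such as nDCG can be computed per query against a retriever's ranked output. Below is a minimal sketch; the metric choice, the function names, and the ranked list are ours for illustration, not part of the benchmark's official evaluation:

```python
import math

def dcg(relevances):
    """Discounted cumulative gain of a ranked list of relevance labels."""
    return sum(rel / math.log2(rank + 2) for rank, rel in enumerate(relevances))

def ndcg_at_k(retrieved_ids, relevant_chunks, k=5):
    """nDCG@k for one query, scored against the graded `relevant_chunks` labels."""
    gains = [relevant_chunks.get(cid, 0) for cid in retrieved_ids[:k]]
    ideal = sorted(relevant_chunks.values(), reverse=True)[:k]
    ideal_dcg = dcg(ideal)
    return dcg(gains) / ideal_dcg if ideal_dcg > 0 else 0.0

labels = {"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}
# A retriever that ranks the gold chunk first scores a perfect 1.0
print(ndcg_at_k(["ott_114_0", "ott_23573_2", "m.12345_0"], labels))  # -> 1.0
```

Averaging `ndcg_at_k` over all queries in a split gives a single retrieval score for comparison.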
### `ori_context`
- **Description**: A list of the original document IDs related to the query. This field helps recover the relevant documents provided by the source dataset.
- **Example**: `["ott_144"]` means all chunk IDs starting with `ott_144` come from the original document.
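Since chunk IDs are prefixed with their document ID, the chunks of an `ori_context` document can be collected by prefix matching. A small sketch (the helper name and toy dictionary are ours; in practice `documents` is the `{id: text}` mapping built in Step 1):

```python
def chunks_of_document(doc_id, documents):
    """All chunk IDs carved from one original document (prefix match on the ID)."""
    return [cid for cid in documents if cid.startswith(doc_id + "_")]

# Toy stand-in for the {id: text} mapping built from processed_documents.json
documents = {"ott_144_0": "...", "ott_144_1": "...", "ott_1440_0": "..."}
print(chunks_of_document("ott_144", documents))  # -> ['ott_144_0', 'ott_144_1']
```

Matching on `doc_id + "_"` rather than `doc_id` alone avoids false hits such as `ott_1440_0`.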
### `dataset_score`
- **Description**: Dataset-level relevance labels, i.e., the routing score of each dataset with respect to this query.
- **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`, where 0 means the dataset contains no relevant chunks. The higher the score, the more relevant chunks the dataset contains.
---
## Knowledge Base: `processed_documents.json`
This file is a list of chunks used for document retrieval. Each chunk contains the following fields:
### `id`
- **Description**: Unique chunk identifier, structured as `dataset_documentID_chunkIndex` (equivalently, `dataset_queryID_chunkIndex`).
- **Example 1**: `ott_8075_0` (chunks from NQ, TriviaQA, OTT, TAT)
- **Example 2**: `m.0cpy1b_5` (chunks from knowledge-graph (Freebase) documents)
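Because the chunk index is always the last underscore-separated field, both ID shapes above can be split the same way. A small helper (the name is ours, not part of the dataset):

```python
def parse_chunk_id(chunk_id):
    """Split a chunk ID at its last underscore into (document ID, chunk index)."""
    doc_id, _, idx = chunk_id.rpartition("_")
    return doc_id, int(idx)

print(parse_chunk_id("ott_8075_0"))   # -> ('ott_8075', 0)
print(parse_chunk_id("m.0cpy1b_5"))   # -> ('m.0cpy1b', 5)
```

Splitting at the *last* underscore matters: Freebase IDs like `m.0cpy1b` contain no underscore, while IDs like `ott_8075` contain one of their own.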
### `text`
- **Description**: Text of the chunk.
- **Example**: `A molecule editor is a computer program for creating and modifying representations of chemical structures.`
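To illustrate how these chunks serve as a retrieval corpus, here is a deliberately naive keyword-overlap retriever over the `{id: text}` mapping. Everything below is a toy sketch with made-up names; a real pipeline would substitute BM25 or a dense retriever:

```python
import re

def keyword_score(query, text):
    """Naive bag-of-words overlap between a query and a chunk."""
    q_tokens = set(re.findall(r"\w+", query.lower()))
    t_tokens = set(re.findall(r"\w+", text.lower()))
    return len(q_tokens & t_tokens)

def retrieve(query, documents, k=3):
    """Return the k chunk IDs with the highest keyword overlap."""
    ranked = sorted(documents, key=lambda cid: keyword_score(query, documents[cid]),
                    reverse=True)
    return ranked[:k]

# Toy corpus in the processed_documents.json {id: text} shape
documents = {
    "nq_1_0": "Paris is the capital of France.",
    "nq_2_0": "Berlin is the capital of Germany.",
}
print(retrieve("What is the capital of France?", documents, k=1))  # -> ['nq_1_0']
```

The ranked IDs returned by `retrieve` are in the same form as the keys of `relevant_chunks`, so they can be scored directly against the annotations.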
---
## License
This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0). |