---
license: apache-2.0
task_categories:
- question-answering
language:
- en
---
# mmrag benchmark
## Files Overview
- `mmrag_train.json`: Training set for model training.
- `mmrag_dev.json`: Validation set for hyperparameter tuning and development.
- `mmrag_test.json`: Test set for evaluation.
- `processed_documents.json`: The chunks used for retrieval.
---
## Query Datasets: `mmrag_train.json`, `mmrag_dev.json`, `mmrag_test.json`
Each of these three files is a list of dictionaries. Each dictionary contains the following fields:
### `id`
- **Description**: Unique query identifier, structured as `SourceDataset_queryIDinDataset`.
- **Example**: `ott_144` means this query is drawn from the OTT-QA dataset.
### `query`
- **Description**: Text of the query.
- **Example**: `"What is the capital of France?"`
### `answer`
- **Description**: The gold-standard answer corresponding to the query.
- **Example**: `"Paris"`
### `relevant_chunks`
- **Description**: Dictionary mapping annotated chunk IDs to their relevance scores. The text of each chunk can be found in `processed_documents.json`. Relevance scores take values in {0 (irrelevant), 1 (partially relevant), 2 (gold)}.
- **Example**: `{"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}`
### `ori_context`
- **Description**: A list of the original document IDs related to the query. This field can be used to recover the relevant documents provided by the source dataset.
- **Example**: `["ott_144"]` means all chunk IDs starting with `ott_144` come from the original document.
### `dataset_score`
- **Description**: Dataset-level relevance labels, giving the routing score of every source dataset with respect to this query.
- **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`, where 0 means the dataset contains no relevant chunks; the higher the score, the more relevant chunks the dataset contains.
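A minimal sketch of working with one query record, using the fields described above. The record below is assembled from the example values in this card (it is illustrative, not an actual entry from the files); it shows how to pull out the gold-labeled chunk IDs from `relevant_chunks`.

```python
# Illustrative query record mirroring the schema above; values are taken
# from the examples in this card, not from the actual dataset files.
record = {
    "id": "ott_144",
    "query": "What is the capital of France?",
    "answer": "Paris",
    "relevant_chunks": {"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0},
    "ori_context": ["ott_144"],
    "dataset_score": {"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0},
}

# Keep only gold chunks (relevance score 2).
gold_chunks = [cid for cid, score in record["relevant_chunks"].items() if score == 2]
print(gold_chunks)  # ['ott_114_0']
```

The same filter with `score >= 1` would yield the partially relevant chunks as well.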
---
## Knowledge Base: `processed_documents.json`
This file is a list of the chunks used for document retrieval. Each chunk contains the following fields:
### `id`
- **Description**: Unique chunk identifier, structured as `dataset_documentID_chunkIndex` (equivalently `dataset_queryID_chunkIndex`).
- **Example 1**: `ott_8075_0` (chunks from NQ, TriviaQA, OTT, TAT)
- **Example 2**: `m.0cpy1b_5` (chunks from Freebase knowledge-graph documents)
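Since the chunk index is always the segment after the last underscore, a chunk ID can be split into its document ID and chunk index with a right-split. A small sketch (the helper name is ours, not part of the dataset):

```python
# Sketch: split a chunk ID of the form dataset_documentID_chunkIndex into
# its document ID and chunk index. The chunk index is the part after the
# last underscore, so rsplit keeps underscores inside the document ID intact.
def parse_chunk_id(chunk_id: str) -> tuple[str, int]:
    doc_id, idx = chunk_id.rsplit("_", 1)
    return doc_id, int(idx)

print(parse_chunk_id("ott_8075_0"))   # ('ott_8075', 0)
print(parse_chunk_id("m.0cpy1b_5"))   # ('m.0cpy1b', 5)
```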
### `text`
- **Description**: Text of the chunk.
- **Example**: `"A molecule editor is a computer program for creating and modifying representations of chemical structures."`
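To resolve the chunk IDs in `relevant_chunks` to their text, you can build an `id -> text` index over the knowledge base. A sketch, using two made-up chunks in place of loading `processed_documents.json`:

```python
# Sketch: index the knowledge base by chunk ID so that chunk IDs from
# relevant_chunks can be resolved to text. In practice `chunks` would be
# json.load(open("processed_documents.json")); the entries here are stand-ins.
chunks = [
    {"id": "ott_114_0", "text": "Paris is the capital of France."},
    {"id": "m.0cpy1b_5", "text": "A molecule editor is a computer program."},
]
index = {chunk["id"]: chunk["text"] for chunk in chunks}

print(index["ott_114_0"])  # Paris is the capital of France.
```

A plain dict lookup is sufficient here because chunk IDs are unique across the knowledge base.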
---
## License
This dataset is licensed under the [Apache License 2.0](https://www.apache.org/licenses/LICENSE-2.0).