- `mmrag_train.json`: Training set for model training.
- `mmrag_dev.json`: Validation set for hyperparameter tuning and development.
- `mmrag_test.json`: Test set for evaluation.
- `processed_documents.json`: The chunks used for retrieval.
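Each split file is a plain JSON list of query dictionaries, so it can be loaded with the standard library alone. A minimal sketch (the one-record sample and the file path here are illustrative stand-ins written to a temporary directory, not real release data; the fields match the schema documented below):

```python
import json
import os
import tempfile

# Illustrative one-record sample following the documented schema.
sample = [{
    "id": "ott_144",
    "query": "What is the capital of France?",
    "answer": "Paris",  # placeholder answer for the sample query
    "relevant_chunks": {"ott_23573_2": 1},
    "ori_context": ["ott_144"],
    "dataset_score": {"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0},
}]

# Write the sample to a temporary file, then load it back exactly the way
# the real mmrag_{train,dev,test}.json files would be loaded.
with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "mmrag_dev.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(sample, f)
    with open(path, encoding="utf-8") as f:
        dev = json.load(f)

print(len(dev), dev[0]["id"])
```

For the real files, replace the temporary path with the location of the released JSON files.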

---

The three files are all lists of dictionaries. Each dictionary contains the following fields:
### `id`

- **Description**: Unique query identifier, structured as `SourceDataset_queryIDinDataset`.
- **Example**: `ott_144` means this query is drawn from the OTT-QA dataset.

### `query`

- **Description**: Text of the query.
- **Example**: `"What is the capital of France?"`

### `answer`

### `relevant_chunks`

- **Description**: Dictionary mapping annotated chunk IDs to their relevance scores. The text of each chunk can be looked up in `processed_documents.json`. Relevance scores take values in {0 (irrelevant), 1 (partially relevant), 2 (gold)}.
- **Example**: `{"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}`
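Given a query's `relevant_chunks`, the gold evidence can be selected by score and its text looked up in the knowledge base. A minimal sketch (the `processed_documents` entries and their `text` values below are invented stand-ins for real knowledge-base records):

```python
# Invented stand-ins for entries of processed_documents.json;
# real entries map each chunk "id" to its "text".
processed_documents = [
    {"id": "ott_23573_2", "text": "partially relevant chunk"},
    {"id": "ott_114_0", "text": "gold evidence chunk"},
    {"id": "m.12345_0", "text": "irrelevant chunk"},
]
chunk_text = {doc["id"]: doc["text"] for doc in processed_documents}

# Annotation for one query, as documented above.
relevant_chunks = {"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}

# Keep only gold chunks (score == 2) and fetch their text.
gold = {cid: chunk_text[cid]
        for cid, score in relevant_chunks.items() if score == 2}
print(sorted(gold))
```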

### `ori_context`

- **Description**: A list of the original document IDs related to the query. This field helps locate the relevant documents provided by the source dataset.
- **Example**: `["ott_144"]` means all chunk IDs starting with `ott_144` come from the original document.
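Because chunk IDs extend document IDs with a `_chunkIndex` suffix, the chunks of an original document can be recovered with a prefix check. A sketch with invented chunk IDs:

```python
# Invented chunk IDs; real IDs follow dataset_documentID_chunkIndex.
all_chunk_ids = ["ott_144_0", "ott_144_1", "ott_23573_2", "m.12345_0"]
ori_context = ["ott_144"]

# A chunk belongs to an original document when its ID is the document ID
# followed by "_" and a chunk index.
ori_chunks = [
    cid for cid in all_chunk_ids
    if any(cid.startswith(doc_id + "_") for doc_id in ori_context)
]
print(ori_chunks)
```

Matching on `doc_id + "_"` rather than the bare `doc_id` avoids accidental matches such as `ott_1440_0` against `ott_144`.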

### `dataset_score`

- **Description**: The dataset-level relevance labels, giving the routing score of each dataset for this query.
- **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`, where 0 means the dataset contains no relevant chunks; the higher the score, the more relevant chunks the dataset has.
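One way to use these labels (a sketch, not a routing policy prescribed by the dataset) is to rank the candidate datasets by score, skipping any dataset with score 0:

```python
# dataset_score for one query, as in the example above.
dataset_score = {"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}

# Route to datasets that contain at least one relevant chunk,
# most relevant first; a score of 0 means the dataset has none.
routing_order = sorted(
    (d for d, s in dataset_score.items() if s > 0),
    key=lambda d: dataset_score[d],
    reverse=True,
)
print(routing_order)
```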

---

## Knowledge Base: `processed_documents.json`

This file is a list of the chunks used for document retrieval. Each chunk contains the following fields:

### `id`

- **Description**: Unique chunk identifier, structured as `dataset_documentID_chunkIndex`, equivalent to `dataset_queryID_chunkIndex`.
- **Example 1**: `ott_8075_0` (chunks from NQ, TriviaQA, OTT, and TAT)
- **Example 2**: `m.0cpy1b_5` (chunks from documents of the knowledge graph, Freebase)
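Note that Freebase IDs like `m.0cpy1b_5` have no dataset prefix, so only the final underscore reliably separates the chunk index. A small hypothetical helper (`parse_chunk_id` is not part of the release) that splits both ID styles:

```python
def parse_chunk_id(chunk_id: str):
    """Split a chunk ID into (document_id, chunk_index).

    Non-KG IDs look like dataset_documentID_chunkIndex (e.g. ott_8075_0);
    Freebase IDs look like MID_chunkIndex (e.g. m.0cpy1b_5), so only the
    last underscore reliably separates the chunk index.
    """
    doc_id, idx = chunk_id.rsplit("_", 1)
    return doc_id, int(idx)

print(parse_chunk_id("ott_8075_0"))
print(parse_chunk_id("m.0cpy1b_5"))
```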

### `text`