Askio committed on
Commit 9229617 · verified · 1 Parent(s): 72f010b

Update README.md

Files changed (1)
  1. README.md +13 -12
README.md CHANGED
@@ -12,7 +12,7 @@ language:
12
  - `mmrag_train.json`: Training set for model training.
13
  - `mmrag_dev.json`: Validation set for hyperparameter tuning and development.
14
  - `mmrag_test.json`: Test set for evaluation.
15
- - `processed_documents.json`: The document pool used for document retrieval.
16
 
17
  ---
18
 
@@ -22,12 +22,12 @@ The three files are all lists of dictionaries. Each dictionary contains the foll
22
 
23
  ### 🔑 `id`
24
 
25
- - **Description**: Unique query identifier, structured as `dataset_queryID`.
26
- - **Example**: `ott_144`.
27
 
28
  ### ❓ `query`
29
 
30
- - **Description**: Text of the question.
31
  - **Example**: `"What is the capital of France?"`
32
 
33
  ### ✅ `answer`
@@ -37,29 +37,30 @@ The three files are all lists of dictionaries. Each dictionary contains the foll
37
 
38
  ### 📑 `relevant_chunks`
39
 
40
- - **Description**: Dictionary of relevant document IDs and their corresponding relevance scores. The document IDs are structured as `dataset_documentID_chunkIndex`, equivalent to `dataset_queryID_chunkIndex`
41
- - **Example**: `{"ott_23573_2": 1}`
42
 
43
  ### 📖 `ori_context`
44
 
45
- - **Description**: A list of the original document IDs related to the query.
46
- - **Example**: `["ott_6104"]`
47
 
48
  ### 📜 `dataset_score`
49
 
50
- - **Description**: The relevance score of all datasets regarding this query.
51
- - **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`
52
 
53
  ---
54
 
55
  ## 📚 Knowledge Base: `processed_documents.json`
56
 
57
- This file is a list of documents used for document retrieval, which contains the following fields:
58
 
59
  ### 🔑 `id`
60
 
61
  - **Description**: Unique document identifier, structured as `dataset_documentID_chunkIndex`, equivalent to `dataset_queryID_chunkIndex`
62
- - **example**: `ott_8075_0`
 
63
 
64
  ### 📄 `text`
65
 
 
12
  - `mmrag_train.json`: Training set for model training.
13
  - `mmrag_dev.json`: Validation set for hyperparameter tuning and development.
14
  - `mmrag_test.json`: Test set for evaluation.
15
+ - `processed_documents.json`: The chunks used for retrieval.
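All four files are plain JSON, so loading them needs only the standard library. A minimal sketch (the paths are assumed to be relative to the dataset root, and the existence guard lets the snippet run even where the files are absent):

```python
import json
from pathlib import Path

root = Path(".")  # hypothetical dataset root; adjust to the local download location

splits = {}
for name in ("mmrag_train", "mmrag_dev", "mmrag_test", "processed_documents"):
    path = root / f"{name}.json"
    if path.exists():  # guard: skip files that are not present locally
        splits[name] = json.loads(path.read_text(encoding="utf-8"))

print(sorted(splits))  # names of the files found locally
```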
16
 
17
  ---
18
 
 
22
 
23
  ### 🔑 `id`
24
 
25
+ - **Description**: Unique query identifier, structured as `SourceDataset_queryIDinDataset`.
26
+ - **Example**: `ott_144`, meaning this query is drawn from the OTT-QA dataset.
27
 
28
  ### ❓ `query`
29
 
30
+ - **Description**: Text of the query.
31
  - **Example**: `"What is the capital of France?"`
32
 
33
  ### ✅ `answer`
 
37
 
38
  ### 📑 `relevant_chunks`
39
 
40
+ - **Description**: Dictionary of annotated chunk IDs and their corresponding relevance scores. The text of each chunk can be retrieved from `processed_documents.json`. Relevance scores take values in {0 (irrelevant), 1 (partially relevant), 2 (gold)}.
41
+ - **Example**: `{"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0}`
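As a quick sketch of how these annotations can be used, chunk IDs can be joined against the pool in `processed_documents.json` and filtered by score. The record and chunk pool below are made-up stand-ins that follow the schema described in this README:

```python
# Made-up stand-ins mirroring the documented schema.
query_record = {
    "id": "ott_144",
    "query": "What is the capital of France?",
    "relevant_chunks": {"ott_23573_2": 1, "ott_114_0": 2, "m.12345_0": 0},
}
chunk_pool = [  # shape of entries in processed_documents.json
    {"id": "ott_23573_2", "text": "partially relevant chunk"},
    {"id": "ott_114_0", "text": "gold chunk"},
    {"id": "m.12345_0", "text": "irrelevant chunk"},
]

# Build an id -> text lookup, then keep only gold chunks (score == 2).
text_by_id = {chunk["id"]: chunk["text"] for chunk in chunk_pool}
gold_texts = [
    text_by_id[cid]
    for cid, score in query_record["relevant_chunks"].items()
    if score == 2
]
print(gold_texts)  # ['gold chunk']
```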
42
 
43
  ### 📖 `ori_context`
44
 
45
+ - **Description**: A list of the original document IDs related to the query. This field helps recover the relevant document provided by the source dataset.
46
+ - **Example**: `["ott_144"]`, meaning all chunk IDs starting with "ott_144" come from the original document.
47
 
48
  ### 📜 `dataset_score`
49
 
50
+ - **Description**: The dataset-level relevance labels, i.e. the routing score of each dataset with respect to this query.
51
+ - **Example**: `{"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}`, where 0 means the dataset contains no relevant chunks; the higher the score, the more relevant chunks the dataset contains.
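A minimal sketch of how `dataset_score` could drive retrieval routing. The `score > 0` filter and the ordering are illustrative assumptions, not part of the dataset:

```python
# dataset_score for one query, as in the example above.
dataset_score = {"tat": 0, "triviaqa": 2, "ott": 4, "kg": 1, "nq": 0}

# Keep only datasets that contain relevant chunks (score > 0),
# ordered from most to least relevant.
candidates = sorted(
    (name for name, score in dataset_score.items() if score > 0),
    key=lambda name: dataset_score[name],
    reverse=True,
)
print(candidates)  # ['ott', 'triviaqa', 'kg']
```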
52
 
53
  ---
54
 
55
  ## 📚 Knowledge Base: `processed_documents.json`
56
 
57
+ This file is a list of chunks used for document retrieval. Each chunk contains the following fields:
58
 
59
  ### 🔑 `id`
60
 
61
  - **Description**: Unique document identifier, structured as `dataset_documentID_chunkIndex`, equivalent to `dataset_queryID_chunkIndex`
62
+ - **Example 1**: `ott_8075_0` (chunks from NQ, TriviaQA, OTT, and TAT)
63
+ - **Example 2**: `m.0cpy1b_5` (chunks from knowledge-graph (Freebase) documents)
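Since the chunk index is always the final underscore-separated component, both ID styles can be split the same way. A small helper as a sketch (not part of the dataset tooling):

```python
def split_chunk_id(chunk_id: str) -> tuple[str, int]:
    """Split a chunk ID into (document ID, chunk index).

    Splitting on the LAST underscore also handles Freebase-style
    IDs such as 'm.0cpy1b_5', whose document part is 'm.0cpy1b'.
    """
    doc_id, index = chunk_id.rsplit("_", 1)
    return doc_id, int(index)

print(split_chunk_id("ott_8075_0"))  # ('ott_8075', 0)
print(split_chunk_id("m.0cpy1b_5"))  # ('m.0cpy1b', 5)
```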
64
 
65
  ### 📄 `text`
66