Commit aed43b4 (verified, 0 parents) by caobiao24 and drt

Duplicate from drt/kqa_pro

Co-authored-by: Yuanchun <drt@users.noreply.huggingface.co>

Files changed (7)
  1. .gitattributes +53 -0
  2. README.md +230 -0
  3. kb.json +3 -0
  4. kqa_pro.py +123 -0
  5. test.json +3 -0
  6. train.json +3 -0
  7. val.json +3 -0
.gitattributes ADDED
@@ -0,0 +1,53 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.lz4 filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ # Audio files - uncompressed
+ *.pcm filter=lfs diff=lfs merge=lfs -text
+ *.sam filter=lfs diff=lfs merge=lfs -text
+ *.raw filter=lfs diff=lfs merge=lfs -text
+ # Audio files - compressed
+ *.aac filter=lfs diff=lfs merge=lfs -text
+ *.flac filter=lfs diff=lfs merge=lfs -text
+ *.mp3 filter=lfs diff=lfs merge=lfs -text
+ *.ogg filter=lfs diff=lfs merge=lfs -text
+ *.wav filter=lfs diff=lfs merge=lfs -text
+ # Image files - uncompressed
+ *.bmp filter=lfs diff=lfs merge=lfs -text
+ *.gif filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
+ *.tiff filter=lfs diff=lfs merge=lfs -text
+ # Image files - compressed
+ *.jpg filter=lfs diff=lfs merge=lfs -text
+ *.jpeg filter=lfs diff=lfs merge=lfs -text
+ *.webp filter=lfs diff=lfs merge=lfs -text
README.md ADDED
@@ -0,0 +1,230 @@
+ ---
+ annotations_creators:
+ - machine-generated
+ - expert-generated
+ language:
+ - en
+ language_creators:
+ - found
+ license:
+ - mit
+ multilinguality:
+ - monolingual
+ pretty_name: KQA-Pro
+ size_categories:
+ - 10K<n<100K
+ source_datasets:
+ - original
+ tags:
+ - knowledge graph
+ - freebase
+ task_categories:
+ - question-answering
+ task_ids:
+ - open-domain-qa
+ ---
+
+ # Dataset Card for KQA Pro
+
+ ## Table of Contents
+ - [Table of Contents](#table-of-contents)
+ - [Dataset Description](#dataset-description)
+   - [Dataset Summary](#dataset-summary)
+   - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
+   - [Languages](#languages)
+ - [Dataset Structure](#dataset-structure)
+   - [Data Configs](#data-configs)
+   - [Data Splits](#data-splits)
+ - [Additional Information](#additional-information)
+   - [How to run SPARQLs and programs](#how-to-run-sparqls-and-programs)
+   - [Knowledge Graph File](#knowledge-graph-file)
+   - [How to Submit to Leaderboard](#how-to-submit-results-of-test-set)
+   - [Licensing Information](#licensing-information)
+   - [Citation Information](#citation-information)
+   - [Contributions](#contributions)
+
+ ## Dataset Description
+
+ - **Homepage:** http://thukeg.gitee.io/kqa-pro/
+ - **Repository:** https://github.com/shijx12/KQAPro_Baselines
+ - **Paper:** [KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base](https://aclanthology.org/2022.acl-long.422/)
+ - **Leaderboard:** http://thukeg.gitee.io/kqa-pro/leaderboard.html
+ - **Point of Contact:** shijx12 at gmail dot com
+
+ ### Dataset Summary
+
+ KQA Pro is a large-scale dataset for complex question answering over a knowledge base. The questions are diverse and challenging, requiring multiple reasoning capabilities including compositional reasoning, multi-hop reasoning, quantitative comparison, and set operations. Strong supervision in the form of SPARQL queries and programs is provided for each question.
+
+ ### Supported Tasks and Leaderboards
+
+ It supports knowledge-graph-based question answering. Specifically, it provides a SPARQL query and a *program* for each question.
+
+ ### Languages
+
+ English
+
+ ## Dataset Structure
+
+ **train.json/val.json**
+ ```
+ [
+     {
+         'question': str,
+         'sparql': str, # executable in our virtuoso engine
+         'program':
+         [
+             {
+                 'function': str, # function name
+                 'dependencies': [int], # functional inputs, representing indices of the preceding functions
+                 'inputs': [str], # textual inputs
+             }
+         ],
+         'choices': [str], # 10 answer choices
+         'answer': str, # golden answer
+     }
+ ]
+ ```
+
+ **test.json**
+ ```
+ [
+     {
+         'question': str,
+         'choices': [str], # 10 answer choices
+     }
+ ]
+ ```
+
+ ### Data Configs
+
+ This dataset has two configs, `train_val` and `test`, because the splits have different available fields. Specify the config like `load_dataset('drt/kqa_pro', 'train_val')`.
+
+ ### Data Splits
+
+ train, val, test
+
+
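As a quick sanity check, the train/val schema above can be mirrored in plain Python. The record below is a hypothetical example shaped like one entry of `train.json` (the question, SPARQL, function names, and choices are invented for illustration, not taken from the dataset):

```python
# Hypothetical record shaped like one entry of train.json / val.json.
sample = {
    "question": "Who directed Tron: Legacy?",
    "sparql": "SELECT ?x WHERE { ... }",  # placeholder, not a real query
    "program": [
        {"function": "Find", "dependencies": [], "inputs": ["Tron: Legacy"]},
        {"function": "QueryRelation", "dependencies": [0], "inputs": ["director"]},
    ],
    "choices": ["Joseph Kosinski"] + ["choice_%d" % i for i in range(9)],
    "answer": "Joseph Kosinski",
}

# Check the record against the documented schema.
assert isinstance(sample["question"], str) and isinstance(sample["answer"], str)
assert len(sample["choices"]) == 10  # always 10 answer choices
for i, step in enumerate(sample["program"]):
    assert set(step) == {"function", "dependencies", "inputs"}
    # dependencies may only point at preceding steps
    assert all(0 <= d < i for d in step["dependencies"])
print("schema ok")
```

The key invariant is that `dependencies` are indices into earlier program steps, so a program is always a topologically ordered computation.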
+ ## Additional Information
+
+ ### Knowledge Graph File
+
+ You can find the knowledge graph file `kb.json` in the original GitHub repository. It has the following format:
+
+ ```
+ {
+     'concepts':
+     {
+         '<id>':
+         {
+             'name': str,
+             'instanceOf': ['<id>', '<id>'], # ids of parent concepts
+         }
+     },
+     'entities': # excluding concepts
+     {
+         '<id>':
+         {
+             'name': str,
+             'instanceOf': ['<id>', '<id>'], # ids of parent concepts
+             'attributes':
+             [
+                 {
+                     'key': str, # attribute key
+                     'value': # attribute value
+                     {
+                         'type': 'string'/'quantity'/'date'/'year',
+                         'value': float/int/str, # float or int for quantity, int for year, 'yyyy/mm/dd' for date
+                         'unit': str, # for quantity
+                     },
+                     'qualifiers':
+                     {
+                         '<qk>': # qualifier key; one key may have multiple corresponding qualifier values
+                         [
+                             {
+                                 'type': 'string'/'quantity'/'date'/'year',
+                                 'value': float/int/str,
+                                 'unit': str,
+                             }, # the format of a qualifier value is similar to an attribute value
+                         ]
+                     }
+                 },
+             ],
+             'relations':
+             [
+                 {
+                     'predicate': str,
+                     'object': '<id>', # NOTE: it may be a concept id
+                     'direction': 'forward'/'backward',
+                     'qualifiers':
+                     {
+                         '<qk>': # qualifier key; one key may have multiple corresponding qualifier values
+                         [
+                             {
+                                 'type': 'string'/'quantity'/'date'/'year',
+                                 'value': float/int/str,
+                                 'unit': str,
+                             }, # the format of a qualifier value is similar to an attribute value
+                         ]
+                     }
+                 },
+             ]
+         }
+     }
+ }
+ ```
+
+
+
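To illustrate the shape, a toy fragment in this format can be traversed with plain dict access. Every id and value below is invented for illustration; only the structure matches the `kb.json` schema:

```python
# Toy fragment shaped like kb.json (ids and values are invented).
kb = {
    "concepts": {
        "C1": {"name": "human", "instanceOf": []},
    },
    "entities": {
        "E1": {
            "name": "Marie Curie",
            "instanceOf": ["C1"],
            "attributes": [
                {
                    "key": "date of birth",
                    "value": {"type": "date", "value": "1867/11/07"},
                    "qualifiers": {},
                },
            ],
            "relations": [
                {
                    "predicate": "award received",
                    "object": "E2",  # id of another entity
                    "direction": "forward",
                    "qualifiers": {"point in time": [{"type": "year", "value": 1903}]},
                },
            ],
        },
    },
}

def describe(entity_id):
    """Return an entity's name, parent-concept names, and a key->value map of attributes."""
    ent = kb["entities"][entity_id]
    concepts = [kb["concepts"][c]["name"] for c in ent["instanceOf"]]
    attrs = {a["key"]: a["value"]["value"] for a in ent["attributes"]}
    return ent["name"], concepts, attrs

print(describe("E1"))
```

Note how `instanceOf` points from entities into the `concepts` table, while `relations[*].object` may reference either table.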
+ ### How to run SPARQLs and programs
+
+ We implement multiple baselines in our [codebase](https://github.com/shijx12/KQAPro_Baselines), which includes a supervised SPARQL parser and a program parser.
+
+ For the SPARQL parser, we implement a query engine based on [Virtuoso](https://github.com/openlink/virtuoso-opensource.git).
+ You can install the engine following our [instructions](https://github.com/shijx12/KQAPro_Baselines/blob/master/SPARQL/README.md) and then feed it your predicted SPARQL to get the answer.
+
+ For the program parser, we implement a rule-based program executor, which receives a predicted program and returns the answer.
+ Detailed introductions to our functions can be found in our [paper](https://arxiv.org/abs/2007.03875).
+
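The real executor lives in the baselines repo; as a rough sketch of the idea only, a program in this format can be interpreted by resolving each step's dependencies before applying its function. The functions below are invented stand-ins, not KQA Pro's actual operator inventory:

```python
# Toy interpreter for the program format: each step names a function,
# the indices of earlier steps it consumes, and its textual inputs.
# These functions are invented stand-ins, not KQA Pro's real operators.
FUNCS = {
    "Const": lambda deps, inputs: int(inputs[0]),
    "Add":   lambda deps, inputs: deps[0] + deps[1],
    "Mul":   lambda deps, inputs: deps[0] * deps[1],
}

def execute(program):
    results = []
    for step in program:
        deps = [results[i] for i in step["dependencies"]]
        results.append(FUNCS[step["function"]](deps, step["inputs"]))
    return results[-1]  # the last step's output is the answer

program = [
    {"function": "Const", "dependencies": [], "inputs": ["2"]},
    {"function": "Const", "dependencies": [], "inputs": ["3"]},
    {"function": "Add", "dependencies": [0, 1], "inputs": []},
    {"function": "Mul", "dependencies": [2, 1], "inputs": []},
]
print(execute(program))  # (2 + 3) * 3 -> 15
```

Because dependencies only reference earlier steps, a single forward pass suffices; no recursion or topological sort is needed.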
+ ### How to submit results of test set
+ You need to predict answers for all questions of the test set and write them to a text file **in order**, one per line.
+ Here is an example:
+ ```
+ Tron: Legacy
+ Palm Beach County
+ 1937-03-01
+ The Queen
+ ...
+ ```
+
+ Then send the prediction file to us by email at <caosl19@mails.tsinghua.edu.cn>, and we will reply with your performance as soon as possible.
+ To appear on the leaderboard, you also need to provide the following information:
+
+ - model name
+ - affiliation
+ - open-ended or multiple-choice
+ - whether your model uses the supervision of SPARQL
+ - whether your model uses the supervision of program
+ - single model or ensemble model
+ - (optional) paper link
+ - (optional) code link
+
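Producing the submission file amounts to dumping one answer per line in test-set order. A minimal sketch, with hypothetical predictions (the file name is arbitrary):

```python
import os
import tempfile

# Hypothetical predicted answers, already in the same order as test.json.
predictions = ["Tron: Legacy", "Palm Beach County", "1937-03-01", "The Queen"]

path = os.path.join(tempfile.mkdtemp(), "predict.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("\n".join(predictions) + "\n")

# Sanity check: one line per test question, order preserved.
with open(path, encoding="utf-8") as f:
    lines = f.read().splitlines()
assert lines == predictions
print("wrote %d predictions" % len(lines))
```

Keeping the order aligned with `test.json` is the one hard requirement, since answers are matched to questions by line number.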
+ ### Licensing Information
+
+ MIT License
+
+ ### Citation Information
+
+ If you find our dataset helpful in your work, please cite us:
+
+ ```
+ @inproceedings{KQAPro,
+   title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
+   author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
+   booktitle={ACL'22},
+   year={2022}
+ }
+ ```
+
+ ### Contributions
+
+ Thanks to [@happen2me](https://github.com/happen2me) for adding this dataset.
kb.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:04da7408320c5cb7023c44372cce32846d56d369d8865d2e61a18c3956661a7c
+ size 79341787
kqa_pro.py ADDED
@@ -0,0 +1,123 @@
+ """KQA Pro: A large-scale, diverse, challenging dataset of complex question answering over knowledge base."""
+
+ import json
+
+ import datasets
+
+ logger = datasets.logging.get_logger(__name__)
+
+ _CITATION = """\
+ @inproceedings{KQAPro,
+   title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
+   author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
+   booktitle={ACL'22},
+   year={2022}
+ }
+ """
+
+ _DESCRIPTION = """\
+ A large-scale, diverse, challenging dataset of complex question answering over knowledge base.
+ """
+
+ _URL = "https://thukeg.gitee.io/kqa-pro/"
+ _DOWNLOAD_URL = "https://cloud.tsinghua.edu.cn/f/df54ff66d1dc4ca7823e/?dl=1"
+ _URLS = {
+     "train": "train.json",
+     "val": "val.json",
+     "test": "test.json",
+ }
+
+ _TRAIN_CONFIG_NAME = "train_val"
+ _TEST_CONFIG_NAME = "test"
+
+
+ class KQAProConfig(datasets.BuilderConfig):
+     """BuilderConfig for KQA Pro."""
+
+     def __init__(self, **kwargs):
+         """BuilderConfig for KQA Pro.
+
+         Args:
+             **kwargs: keyword arguments forwarded to super.
+         """
+         super(KQAProConfig, self).__init__(**kwargs)
+
+
+ class KQAPro(datasets.GeneratorBasedBuilder):
+     """KQAPro: A large-scale knowledge-based question answering dataset."""
+
+     BUILDER_CONFIGS = [
+         KQAProConfig(
+             name=_TRAIN_CONFIG_NAME,
+             description="KQA Pro",
+         ),
+         KQAProConfig(
+             name=_TEST_CONFIG_NAME,
+             description="KQA Pro",
+         ),
+     ]
+
+     def _info(self):
+         if self.config.name == _TEST_CONFIG_NAME:
+             return datasets.DatasetInfo(
+                 description=_DESCRIPTION,
+                 features=datasets.Features(
+                     {
+                         "question": datasets.Value("string"),
+                         "choices": datasets.features.Sequence(datasets.Value("string")),
+                     }
+                 ),
+                 supervised_keys=None,
+                 homepage=_URL,
+                 citation=_CITATION,
+             )
+
+         return datasets.DatasetInfo(
+             description=_DESCRIPTION,
+             features=datasets.Features(
+                 {
+                     "question": datasets.Value("string"),
+                     "sparql": datasets.Value("string"),
+                     "program": datasets.features.Sequence(
+                         {
+                             "function": datasets.Value("string"),
+                             "dependencies": datasets.features.Sequence(datasets.Value("int32")),
+                             "inputs": datasets.features.Sequence(datasets.Value("string")),
+                         }
+                     ),
+                     "choices": datasets.features.Sequence(datasets.Value("string")),
+                     "answer": datasets.Value("string"),
+                 }
+             ),
+             # No default supervised_keys (as we have to pass both question
+             # and context as input).
+             supervised_keys=None,
+             homepage=_URL,
+             citation=_CITATION,
+         )
+
+     def _split_generators(self, dl_manager):
+         downloaded_files = dl_manager.download_and_extract(_URLS)
+
+         if self.config.name == _TEST_CONFIG_NAME:
+             return [
+                 datasets.SplitGenerator(
+                     name=datasets.Split.TEST,
+                     gen_kwargs={"filepath": downloaded_files["test"]},
+                 )
+             ]
+
+         return [
+             datasets.SplitGenerator(
+                 name=datasets.Split.TRAIN,
+                 gen_kwargs={"filepath": downloaded_files["train"]},
+             ),
+             datasets.SplitGenerator(
+                 name=datasets.Split.VALIDATION,
+                 gen_kwargs={"filepath": downloaded_files["val"]},
+             ),
+         ]
+
+     def _generate_examples(self, filepath):
+         """Yield examples in the raw (text) form."""
+         logger.info("generating examples from = %s", filepath)
+         with open(filepath, encoding="utf-8") as f:
+             kqa = json.load(f)
+             for idx, sample in enumerate(kqa):
+                 yield idx, sample
test.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b2142ed6124ae525b7d7fd8d1edb338c1b025751ac0167ff1498608111911822
+ size 3257326
train.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:e9fbe4c1cdf207aac83ae0d5e4a1a53a9965a2b13b403de699ca6d5dae6e4510
+ size 88119411
val.json ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4aed6ab3d7ad071722064fe3bb02bc028cfbeb15da5f7115d57a1e2d198f3bb
+ size 11047970