drt committed on
Commit 08cf776 · 1 parent: e514d48

Add load script and update README

Files changed (2):
  1. README.md +211 -1
  2. kqa_pro.py +126 -0
README.md CHANGED
@@ -1,3 +1,213 @@
---
annotations_creators:
- machine-generated
- expert-generated
language:
- en
language_creators:
- found
license:
- mit
multilinguality:
- monolingual
pretty_name: KQA-Pro
size_categories:
- 10K<n<100K
source_datasets:
- original
tags:
- knowledge graph
- freebase
task_categories:
- question-answering
task_ids:
- open-domain-qa
---

# Dataset Card for KQA Pro

## Table of Contents
- [Table of Contents](#table-of-contents)
- [Dataset Description](#dataset-description)
  - [Dataset Summary](#dataset-summary)
  - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards)
  - [Languages](#languages)
- [Dataset Structure](#dataset-structure)
  - [Data Splits](#data-splits)
- [Additional Information](#additional-information)
  - [Knowledge Graph File](#knowledge-graph-file)
  - [How to run SPARQLs and programs](#how-to-run-sparqls-and-programs)
  - [How to submit results of test set](#how-to-submit-results-of-test-set)
  - [Licensing Information](#licensing-information)
  - [Citation Information](#citation-information)
  - [Contributions](#contributions)

## Dataset Description

- **Homepage:** http://thukeg.gitee.io/kqa-pro/
- **Repository:** https://github.com/shijx12/KQAPro_Baselines
- **Paper:** [KQA Pro: A Dataset with Explicit Compositional Programs for Complex Question Answering over Knowledge Base](https://aclanthology.org/2022.acl-long.422/)
- **Leaderboard:** http://thukeg.gitee.io/kqa-pro/leaderboard.html
- **Point of Contact:** shijx12 at gmail dot com

### Dataset Summary

KQA Pro is a large-scale dataset for complex question answering over a knowledge base. Its questions are diverse and challenging, requiring multiple reasoning capabilities including compositional reasoning, multi-hop reasoning, quantitative comparison, and set operations. Strong supervision is provided for each question in the form of a gold SPARQL query and a gold program.

### Supported Tasks and Leaderboards

The dataset supports knowledge-graph-based question answering. In addition to the answer, it provides a SPARQL query and a *program* for each question.

### Languages

English

## Dataset Structure

**train.json/val.json**
```
[
  {
    'question': str,
    'sparql': str, # executable in our virtuoso engine
    'program':
    [
      {
        'function': str, # function name
        'dependencies': [int], # functional inputs: indices of the preceding functions
        'inputs': [str], # textual inputs
      }
    ],
    'choices': [str], # 10 answer choices
    'answer': str, # golden answer
  }
]
```

**test.json**
```
[
  {
    'question': str,
    'choices': [str], # 10 answer choices
  }
]
```

### Data Splits

train, val, test
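Each record can be consumed directly as a Python dict. As a minimal sketch, here is a fabricated sample following the train.json/val.json schema above (the question, SPARQL, program, and choices are invented for illustration, not real dataset entries):

```python
# A fabricated sample following the train.json/val.json schema
# (all values are illustrative, not real KQA Pro data).
sample = {
    "question": "How many episodes does the series have?",
    "sparql": "SELECT DISTINCT ?v WHERE { ... }",  # placeholder query
    "program": [
        {"function": "Find", "dependencies": [], "inputs": ["the series"]},
        {"function": "QueryAttr", "dependencies": [0], "inputs": ["number of episodes"]},
    ],
    "choices": [str(i) for i in range(10)],
    "answer": "7",
}

# The program forms a DAG: each step lists the indices of the earlier
# steps whose outputs it consumes.
for i, step in enumerate(sample["program"]):
    deps = ", ".join(str(d) for d in step["dependencies"]) or "-"
    print(f"step {i}: {step['function']}(inputs={step['inputs']}, deps={deps})")

assert sample["answer"] in sample["choices"]
```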

## Additional Information

### Knowledge Graph File

You can find the knowledge graph file `kb.json` in the original GitHub repository. It has the following format:

```
{
  'concepts':
  {
    '<id>':
    {
      'name': str,
      'instanceOf': ['<id>', '<id>'], # ids of parent concepts
    }
  },
  'entities': # excluding concepts
  {
    '<id>':
    {
      'name': str,
      'instanceOf': ['<id>', '<id>'], # ids of parent concepts
      'attributes':
      [
        {
          'key': str, # attribute key
          'value': # attribute value
          {
            'type': 'string'/'quantity'/'date'/'year',
            'value': float/int/str, # float or int for quantity, int for year, 'yyyy/mm/dd' for date
            'unit': str, # for quantity
          },
          'qualifiers':
          {
            '<qk>': # qualifier key; one key may have multiple corresponding qualifier values
            [
              {
                'type': 'string'/'quantity'/'date'/'year',
                'value': float/int/str,
                'unit': str,
              }, # qualifier values have the same format as attribute values
            ]
          }
        },
      ],
      'relations':
      [
        {
          'predicate': str,
          'object': '<id>', # NOTE: it may be a concept id
          'direction': 'forward'/'backward',
          'qualifiers':
          {
            '<qk>': # qualifier key; one key may have multiple corresponding qualifier values
            [
              {
                'type': 'string'/'quantity'/'date'/'year',
                'value': float/int/str,
                'unit': str,
              }, # qualifier values have the same format as attribute values
            ]
          }
        },
      ]
    }
  }
}
```
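As a sketch of how this schema is traversed, here is a toy fragment in the same shape (the ids, names, and values below are invented, not taken from the real `kb.json`) together with a small attribute lookup:

```python
# Toy fragment following the kb.json schema above; ids, names, and
# values are invented for illustration.
kb = {
    "concepts": {
        "Q5": {"name": "human", "instanceOf": []},
    },
    "entities": {
        "Q1001": {
            "name": "Alan Turing",
            "instanceOf": ["Q5"],
            "attributes": [
                {
                    "key": "date of birth",
                    "value": {"type": "date", "value": "1912/06/23"},
                    "qualifiers": {},
                },
            ],
            "relations": [
                {
                    "predicate": "educated at",
                    "object": "Q2002",
                    "direction": "forward",
                    "qualifiers": {},
                },
            ],
        },
    },
}

def get_attribute(kb, entity_id, key):
    """Return the first matching attribute value dict of an entity, or None."""
    for attr in kb["entities"][entity_id]["attributes"]:
        if attr["key"] == key:
            return attr["value"]
    return None

dob = get_attribute(kb, "Q1001", "date of birth")
assert dob == {"type": "date", "value": "1912/06/23"}
```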

### How to run SPARQLs and programs

We implement multiple baselines in our [codebase](https://github.com/shijx12/KQAPro_Baselines), including a supervised SPARQL parser and a program parser.

For the SPARQL parser, we implement a query engine based on [Virtuoso](https://github.com/openlink/virtuoso-opensource.git).
You can install the engine following our [instructions](https://github.com/shijx12/KQAPro_Baselines/blob/master/SPARQL/README.md), and then feed your predicted SPARQL queries to it to get answers.

For the program parser, we implement a rule-based program executor, which receives a predicted program and returns the answer.
Detailed descriptions of the functions can be found in our [paper](https://arxiv.org/abs/2007.03875).
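To make the execution model concrete, here is a heavily simplified executor sketch: it walks a program in order, feeding each step the outputs of the steps listed in its `dependencies`. The `Find`/`QueryAttr` names and the toy knowledge base are illustrative only; the real executor in the baselines repository implements the full function inventory described in the paper.

```python
# Minimal rule-based executor sketch. KQA Pro defines a much richer
# function inventory; Find and QueryAttr here are illustrative.
TOY_KB = {
    "LeBron James": {"height": "206 centimetre"},
}

def execute(program, kb):
    results = []  # results[i] holds the output of step i
    for step in program:
        args = [results[d] for d in step["dependencies"]]
        if step["function"] == "Find":
            # Resolve an entity by its textual name.
            results.append(step["inputs"][0])
        elif step["function"] == "QueryAttr":
            # Look up an attribute of the entity produced by a previous step.
            entity = args[0]
            results.append(kb[entity][step["inputs"][0]])
        else:
            raise ValueError(f"unknown function: {step['function']}")
    return results[-1]

program = [
    {"function": "Find", "dependencies": [], "inputs": ["LeBron James"]},
    {"function": "QueryAttr", "dependencies": [0], "inputs": ["height"]},
]
assert execute(program, TOY_KB) == "206 centimetre"
```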

### How to submit results of test set

You need to predict answers for all questions of the test set and write them to a text file **in order**, one per line.
Here is an example:
```
Tron: Legacy
Palm Beach County
1937-03-01
The Queen
...
```
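Producing such a file takes only a few lines of Python. In this sketch, `predictions` stands in for your model's answers, already ordered like the questions in test.json, and `predict.txt` is an assumed output filename:

```python
# `predictions` stands in for your model's answers, in test-set order.
predictions = ["Tron: Legacy", "Palm Beach County", "1937-03-01", "The Queen"]

# Write one answer per line, preserving order.
with open("predict.txt", "w", encoding="utf-8") as f:
    f.write("\n".join(predictions) + "\n")

# Sanity check: one line per question, order preserved.
with open("predict.txt", encoding="utf-8") as f:
    assert f.read().splitlines() == predictions
```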

### Licensing Information

MIT License

### Citation Information

If you find our dataset helpful in your work, please cite us:

```
@inproceedings{KQAPro,
  title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
  author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
  booktitle={ACL'22},
  year={2022}
}
```

### Contributions

Thanks to [@happen2me](https://github.com/happen2me) for adding this dataset.
kqa_pro.py ADDED
@@ -0,0 +1,126 @@
```python
"""KQA Pro: A large-scale, diverse, challenging dataset of complex question answering over knowledge base."""

import json
import os

import datasets

logger = datasets.logging.get_logger(__name__)

_CITATION = """\
@inproceedings{KQAPro,
  title={{KQA P}ro: A Large Diagnostic Dataset for Complex Question Answering over Knowledge Base},
  author={Cao, Shulin and Shi, Jiaxin and Pan, Liangming and Nie, Lunyiu and Xiang, Yutong and Hou, Lei and Li, Juanzi and He, Bin and Zhang, Hanwang},
  booktitle={ACL'22},
  year={2022}
}
"""

_DESCRIPTION = """\
A large-scale, diverse, challenging dataset of complex question answering over knowledge base.
"""

_URL = "https://thukeg.gitee.io/kqa-pro/"
_DOWNLOAD_URL = "https://cloud.tsinghua.edu.cn/f/df54ff66d1dc4ca7823e/?dl=1"

_TRAIN_CONFIG_NAME = "train_val"
_TEST_CONFIG_NAME = "test"


class KQAProConfig(datasets.BuilderConfig):
    """BuilderConfig for KQA Pro."""

    def __init__(self, **kwargs):
        """BuilderConfig for KQA Pro.

        Args:
            **kwargs: keyword arguments forwarded to super.
        """
        super(KQAProConfig, self).__init__(**kwargs)


class KQAPro(datasets.GeneratorBasedBuilder):
    """KQAPro: A large-scale knowledge-based question answering dataset."""

    BUILDER_CONFIGS = [
        KQAProConfig(
            name=_TRAIN_CONFIG_NAME,
            description="KQA Pro train and validation splits",
            data_dir="data",
        ),
        KQAProConfig(
            name=_TEST_CONFIG_NAME,
            description="KQA Pro test split",
            data_dir="data",
        ),
    ]

    def _info(self):
        if self.config.name == _TEST_CONFIG_NAME:
            # Test examples carry no gold SPARQL, program, or answer.
            return datasets.DatasetInfo(
                description=_DESCRIPTION,
                features=datasets.Features(
                    {
                        "question": datasets.Value("string"),
                        "choices": datasets.features.Sequence(datasets.Value("string")),
                    }
                ),
                supervised_keys=None,
                homepage=_URL,
                citation=_CITATION,
            )

        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=datasets.Features(
                {
                    "question": datasets.Value("string"),
                    "sparql": datasets.Value("string"),
                    "program": datasets.features.Sequence(
                        {
                            "function": datasets.Value("string"),
                            "dependencies": datasets.features.Sequence(datasets.Value("int32")),
                            "inputs": datasets.features.Sequence(datasets.Value("string")),
                        }
                    ),
                    "choices": datasets.features.Sequence(datasets.Value("string")),
                    "answer": datasets.Value("string"),
                }
            ),
            # No default supervised_keys (as we have to pass both question
            # and context as input).
            supervised_keys=None,
            homepage=_URL,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager):
        download_dir = dl_manager.download_and_extract(_DOWNLOAD_URL)
        data_dir = os.path.join(download_dir, self.config.data_dir)
        downloaded_files = {
            "train": os.path.join(data_dir, "train.json"),
            "val": os.path.join(data_dir, "val.json"),
            "test": os.path.join(data_dir, "test.json"),
        }

        if self.config.name == _TEST_CONFIG_NAME:
            return [
                datasets.SplitGenerator(
                    name=datasets.Split.TEST,
                    gen_kwargs={"filepath": downloaded_files["test"]},
                ),
            ]

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={"filepath": downloaded_files["train"]},
            ),
            datasets.SplitGenerator(
                name=datasets.Split.VALIDATION,
                gen_kwargs={"filepath": downloaded_files["val"]},
            ),
        ]

    def _generate_examples(self, filepath):
        """Yield the examples in raw (text) form."""
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="utf-8") as f:
            kqa = json.load(f)
            for idx, sample in enumerate(kqa):
                yield idx, sample
```
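The generator at the bottom of the script simply streams records out of a JSON array. That behavior can be mimicked offline with a tiny fabricated file (the records below are placeholders, not real dataset entries):

```python
import json
import os
import tempfile

# Fabricated records in the train.json shape (placeholders, not real data).
records = [
    {"question": "Q1?", "sparql": "...", "program": [], "choices": [], "answer": "A1"},
    {"question": "Q2?", "sparql": "...", "program": [], "choices": [], "answer": "A2"},
]

def generate_examples(filepath):
    # Mirrors _generate_examples: load the JSON array, yield (index, record).
    with open(filepath, encoding="utf-8") as f:
        for idx, sample in enumerate(json.load(f)):
            yield idx, sample

with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "train.json")
    with open(path, "w", encoding="utf-8") as f:
        json.dump(records, f)
    examples = list(generate_examples(path))

assert [q["question"] for _, q in examples] == ["Q1?", "Q2?"]
```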