Title: The Web as a Knowledge-base for Answering Complex Questions

URL Source: https://arxiv.org/html/1803.06643

Alon Talmor

Tel-Aviv University

alontalmor@mail.tau.ac.il

Jonathan Berant

Tel-Aviv University

joberant@cs.tau.ac.il

###### Abstract

Answering complex questions is a time-consuming activity for humans that requires reasoning and integration of information. Recent work on reading comprehension made headway in answering simple questions, but tackling complex questions is still an ongoing research challenge. Conversely, semantic parsers have been successful at handling compositionality, but only when the information resides in a target knowledge-base. In this paper, we present a novel framework for answering broad and complex questions, assuming answering simple questions is possible using a search engine and a reading comprehension model. We propose to decompose complex questions into a sequence of simple questions, and compute the final answer from the sequence of answers. To illustrate the viability of our approach, we create a new dataset of complex questions, ComplexWebQuestions, and present a model that decomposes questions and interacts with the web to compute an answer. We empirically demonstrate that question decomposition improves performance from 20.8 precision@1 to 27.5 precision@1 on this new dataset.
1 Introduction
--------------

Humans often want to answer complex questions that require reasoning over multiple pieces of evidence, e.g., “From what country is the winner of the Australian Open women’s singles 2008?”. Answering such questions in broad domains can be quite onerous for humans, because it requires searching and integrating information from multiple sources.

Recently, interest in question answering (QA) has surged in the context of reading comprehension (RC), where an answer is sought for a question given one or more documents Hermann et al. ([2015](https://arxiv.org/html/1803.06643#bib.bib14)); Joshi et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib20)); Rajpurkar et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib32)). Neural models trained over large datasets have led to great progress in RC, nearing human-level performance Wang et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib37)). However, analyses of these models revealed Jia and Liang ([2017](https://arxiv.org/html/1803.06643#bib.bib19)); Chen et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib7)) that they mostly excel at matching questions to local contexts, but struggle with questions that require reasoning. Moreover, RC assumes that documents with the information relevant to the answer are available – but when questions are complex, even retrieving the documents can be difficult.

Conversely, work on QA through semantic parsing has focused primarily on compositionality: questions are translated to compositional programs that encode a sequence of actions for finding the answer in a knowledge-base (KB) Zelle and Mooney ([1996](https://arxiv.org/html/1803.06643#bib.bib43)); Zettlemoyer and Collins ([2005](https://arxiv.org/html/1803.06643#bib.bib44)); Artzi and Zettlemoyer ([2013](https://arxiv.org/html/1803.06643#bib.bib1)); Krishnamurthy and Mitchell ([2012](https://arxiv.org/html/1803.06643#bib.bib21)); Kwiatkowski et al. ([2013](https://arxiv.org/html/1803.06643#bib.bib22)); Liang et al. ([2011](https://arxiv.org/html/1803.06643#bib.bib26)). However, this reliance on a manually-curated KB has limited the coverage and applicability of semantic parsers.

Figure 1: Given a complex question $q$, we decompose it into a sequence of simple questions $q_1, q_2, \dots$, use a search engine and a QA model to answer the simple questions, and compute the final answer $a$ from their answers.

In this paper we present a framework for QA that is _broad_, i.e., it does not assume information is in a KB or in retrieved documents, and _compositional_, i.e., to compute an answer we must perform some computation or reasoning. Our thesis is that answering simple questions can be achieved by combining a search engine with an RC model. Thus, answering complex questions can be addressed by decomposing the question into a sequence of simple questions, and computing the answer from the corresponding answers. Figure [1](https://arxiv.org/html/1803.06643#S1.F1) illustrates this idea. Our model decomposes the question in the figure into a sequence of simple questions, each of which is submitted to a search engine; an answer is then extracted from the search results. Once all answers are gathered, a final answer can be computed using symbolic operations such as union and intersection.

To evaluate our framework we need a dataset of complex questions that calls for reasoning over multiple pieces of information. Because an adequate dataset is missing, we created ComplexWebQuestions, a new dataset of complex questions that builds on WebQuestionsSP, a dataset that includes pairs of simple questions and their corresponding SPARQL queries. We take SPARQL queries from WebQuestionsSP and automatically create more complex queries that include phenomena such as function composition, conjunctions, superlatives and comparatives. Then, we use Amazon Mechanical Turk (AMT) to generate natural language questions, and obtain a dataset of 34,689 question-answer pairs (along with SPARQL queries, which our model ignores). Data analysis shows that the examples are diverse and that AMT workers perform substantial paraphrasing of the original machine-generated questions.

We propose a model for answering complex questions through question decomposition. Our model uses a sequence-to-sequence architecture Sutskever et al. ([2014](https://arxiv.org/html/1803.06643#bib.bib34)) to map utterances to short programs that indicate how to decompose the question and compose the retrieved answers. To obtain supervision for our model, we perform a noisy alignment from machine-generated questions to natural language questions and automatically generate noisy supervision for training.¹

¹We defer training from question-answer pairs to future work.

We evaluate our model on ComplexWebQuestions and find that question decomposition substantially improves precision@1 from 20.8 to 27.5. We find that humans are able to reach 63.0 precision@1 under a limited time budget, leaving ample room for improvement in future work.

To summarize, our main contributions are:
1. A framework for answering complex questions through question decomposition.

2. A sequence-to-sequence model for question decomposition that substantially improves performance.

3. A dataset of 34,689 examples of complex and broad questions, along with answers, web snippets, and SPARQL queries.

Our dataset, ComplexWebQuestions, can be downloaded from [http://nlp.cs.tau.ac.il/compwebq](http://nlp.cs.tau.ac.il/compwebq) and our codebase can be downloaded from [https://github.com/alontalmor/WebAsKB](https://github.com/alontalmor/WebAsKB).
2 Problem Formulation
---------------------

Our goal is to learn a model that, given a question $q$ and a black-box QA model for answering simple questions, $\textsc{SimpQA}(\cdot)$, produces a _computation tree_ $t$ (defined below) that decomposes the question and computes the answer. The model is trained from a set of $N$ question-computation tree pairs $\{q^{i},t^{i}\}_{i=1}^{N}$ or question-answer pairs $\{q^{i},a^{i}\}_{i=1}^{N}$.

Figure 2: A computation tree for “What city is the birthplace of the author of ‘Without end’, and hosted Euro 2012?”. The leaves are strings, and inner nodes are functions (red) applied to their children to produce answers (blue).

A computation tree is a tree where leaves are labeled with strings, and inner nodes are labeled with functions. The arguments of a function are its children sub-trees. To compute an answer, or _denotation_, from a tree, we recursively apply the function at the root to its children. More formally, given a tree rooted at node $t$, labeled by the function $f$, that has children $c_{1}(t),\dots,c_{k}(t)$, the denotation $\llbracket t\rrbracket=f(\llbracket c_{1}(t)\rrbracket,\dots,\llbracket c_{k}(t)\rrbracket)$ is an arbitrary function applied to the denotations of the root’s children. Denotations are computed recursively, and the denotation of a string at a leaf is the string itself, i.e., $\llbracket l\rrbracket=l$. This is closely related to “semantic functions” in semantic parsing Berant and Liang ([2015](https://arxiv.org/html/1803.06643#bib.bib4)), except that we do not interact with a KB, but rather compute directly over the breadth of the web through a search engine.

Figure [2](https://arxiv.org/html/1803.06643#S2.F2) provides an example computation tree for our running example. Notice that words at the leaves are not necessarily in the original question, e.g., “city” is paraphrased to “cities”. More broadly, our framework allows paraphrasing questions in any way that is helpful for the function $\textsc{SimpQA}(\cdot)$. Paraphrasing for better interaction with a QA model has been recently suggested by Buck et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib6)) and Nogueira and Cho ([2016](https://arxiv.org/html/1803.06643#bib.bib28)).

We defined the function $\textsc{SimpQA}(\cdot)$ for answering simple questions, but in fact it comprises two components in this work. First, the question is submitted to a search engine that retrieves a list of web snippets. Next, an RC model extracts the answer from the snippets. While it is possible to train the RC model jointly with question decomposition, in this work we pre-train it separately, and later treat it as a black box.

The expressivity of our QA model is determined by the functions used, which we turn to next.
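The recursive evaluation of a computation tree can be sketched in a few lines. This is an illustrative sketch, not the paper's code: `Node`, `denotation`, and the hard-coded answer sets in `simp_qa` are all stand-ins (a real `simp_qa` would call a search engine and an RC model).

```python
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Node:
    label: Union[str, Callable]              # a function at inner nodes, a string at leaves
    children: List["Node"] = field(default_factory=list)

def denotation(t: Node):
    # Leaf: the denotation of a string is the string itself.
    if not t.children:
        return t.label
    # Inner node: apply the function to its children's denotations.
    return t.label(*(denotation(c) for c in t.children))

def simp_qa(question: str) -> set:
    # Stand-in for SimpQA (search engine + RC model); answers are fabricated
    # for illustration only.
    fake = {
        "cities that hosted Euro 2012": {"Warsaw", "Kyiv", "Lviv"},
        "birthplace of Adam Zagajewski": {"Lviv"},
    }
    return fake.get(question, set())

def conj(a: set, b: set) -> set:
    return a & b

# A tree in the spirit of Figure 2: intersect the answers of two simple questions.
tree = Node(conj, [
    Node(simp_qa, [Node("cities that hosted Euro 2012")]),
    Node(simp_qa, [Node("birthplace of Adam Zagajewski")]),
])
print(denotation(tree))  # {'Lviv'}
```

The key design point is that `denotation` is oblivious to what the functions do, which is exactly what lets the framework swap in web search instead of a KB executor.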
3 Formal Language
-----------------

Functions in our formal language take arguments and return values that can be strings (when decomposing or re-phrasing the question), sets of strings, or sets of numbers. Our set of functions includes:

1. $\textsc{SimpQA}(\cdot)$: a model for answering simple questions, which takes a string argument and returns a set of strings or numbers as the answer.

2. $\textsc{Comp}(\cdot,\cdot)$: takes a string containing one unique variable VAR, and a set of answers. E.g., in Figure [2](https://arxiv.org/html/1803.06643#S2.F2) the first argument is “birthplace of VAR”, and the second argument is “{Ken Follett, Adam Zagajewski}”. The function replaces the variable with each answer’s string representation and returns the union of the results. Formally, $\textsc{Comp}(q,\mathcal{A})=\cup_{a\in\mathcal{A}}\textsc{SimpQA}(q/a)$, where $q/a$ denotes the string produced when replacing VAR in $q$ with $a$. This is similar to function composition in CCG Steedman ([2000](https://arxiv.org/html/1803.06643#bib.bib33)), or to a join operation in $\lambda$-DCS Liang ([2013](https://arxiv.org/html/1803.06643#bib.bib25)), where the string is a function applied to previously-computed values.

3. $\textsc{Conj}(\cdot,\cdot)$: takes two sets and returns their intersection. Other set operations can be defined analogously. As syntactic sugar, we allow $\textsc{Conj}(\cdot,\cdot)$ to take strings as input, which means that we run $\textsc{SimpQA}(\cdot)$ to obtain a set and then perform the intersection. The root node in Figure [2](https://arxiv.org/html/1803.06643#S2.F2) illustrates an application of Conj.

4. $\textsc{Add}(\cdot,\cdot)$: takes two singleton sets of numbers and returns a set with their sum. Similar mathematical functions can be defined analogously. While we support such operations, they were not required in our dataset.
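The Comp and Conj operators above admit a direct sketch. This is a hedged illustration: `simp_qa` and its answer table are fabricated stand-ins for the real search-plus-RC pipeline, and the function names are ours, not the paper's.

```python
FAKE_ANSWERS = {
    # Fabricated answer table standing in for a search engine + RC model.
    "the author of Without End": {"Adam Zagajewski"},
    "birthplace of Adam Zagajewski": {"Lviv"},
}

def simp_qa(q: str) -> set:
    return FAKE_ANSWERS.get(q, set())

def comp(q: str, answers: set) -> set:
    # Comp(q, A) = union over a in A of SimpQA(q with VAR replaced by a).
    result = set()
    for a in answers:
        result |= simp_qa(q.replace("VAR", a))
    return result

def conj(s1, s2) -> set:
    # Syntactic sugar: string arguments are first run through SimpQA.
    if isinstance(s1, str):
        s1 = simp_qa(s1)
    if isinstance(s2, str):
        s2 = simp_qa(s2)
    return s1 & s2

print(comp("birthplace of VAR", simp_qa("the author of Without End")))  # {'Lviv'}
```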
#### Other logical operations

In semantic parsing, superlative and comparative questions like “What is the highest European mountain?” or “What European mountains are higher than Mont Blanc?” are answered by joining the set of European mountains with their elevation. While we could add such functions to the formal language, answering such questions from the web is cumbersome: we would have to extract a list of entities and a numerical value for each. Instead, we handle such constructions using SimpQA directly, assuming they are mentioned verbatim in some web document.

Similarly, negation questions (“What countries are not in the OECD?”) are difficult to handle when working against a search engine only, as this is an open-world setup and we do not hold a closed set of countries over which we can perform set subtraction.

In future work, we plan to interface with tables Pasupat and Liang ([2015](https://arxiv.org/html/1803.06643#bib.bib30)) and KBs Zhong et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib45)). This will allow us to perform set operations over well-defined sets, and to handle superlatives and comparatives in a compositional manner.
4 Dataset
---------

Evaluating our framework requires a dataset of broad and complex questions that examines the importance of question decomposition. While many QA datasets have been developed recently Yang et al. ([2015](https://arxiv.org/html/1803.06643#bib.bib41)); Rajpurkar et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib32)); Hewlett et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib15)); Nguyen et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib27)); Onishi et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib29)); Hill et al. ([2015](https://arxiv.org/html/1803.06643#bib.bib16)); Welbl et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib40)), they lack a focus on the importance of question decomposition.

Most RC datasets contain simple questions that can be answered from a short input document. Recently, TriviaQA Joshi et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib20)) presented a larger portion of complex questions, but most still do not require reasoning. Moreover, the focus of TriviaQA is on answer extraction from documents that are given. We, conversely, highlight question decomposition for finding the relevant documents. Put differently, RC is complementary to question decomposition and can be used as part of the implementation of SimpQA. In Section [6](https://arxiv.org/html/1803.06643#S6) we demonstrate that question decomposition is useful for two different RC approaches.

### 4.1 Dataset collection

Figure 3: Overview of the data collection process.

To generate complex questions we use the dataset WebQuestionsSP Yih et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib42)), which contains 4,737 questions paired with SPARQL queries for Freebase Bollacker et al. ([2008](https://arxiv.org/html/1803.06643#bib.bib5)). Questions are broad but simple. Thus, we sample question-query pairs, automatically create more complex SPARQL queries, automatically generate questions that are understandable to AMT workers, and then have them paraphrase those into natural language (similar to Wang et al. ([2015](https://arxiv.org/html/1803.06643#bib.bib38))). We compute answers by executing the complex SPARQL queries against Freebase, and obtain broad and complex questions. Figure [6](https://arxiv.org/html/1803.06643#Sx3.F6) provides an example of this procedure, and we elaborate next.

Table 1: Rules for generating a complex query $r^{\prime}$ from a query $r$ (‘.’ in SPARQL corresponds to logical and). The query $r$ returns the variable $?x$ and contains an entity $e$. We denote by $r[e/y]$ the replacement of the entity $e$ with a variable $?y$. $\text{pred}_{1}$ and $\text{pred}_{2}$ are any KB predicates, obj is any KB entity, $V$ is a numerical value, and $?c$ is a variable of a CVT type in Freebase, which refers to events. The last column provides an example NL question for each type.
#### Generating SPARQL queries

Given a SPARQL query $r$, we create four types of more complex queries: conjunctions, superlatives, comparatives, and compositions. Table [7](https://arxiv.org/html/1803.06643#Sx3.T7) gives the exact rules for generation. For conjunctions, superlatives, and comparatives, we identify queries in WebQuestionsSP whose denotation is a set $\mathcal{A}$, $|\mathcal{A}|\geq 2$, and generate a new query $r^{\prime}$ whose denotation is a strict non-empty subset $\mathcal{A}^{\prime}\subset\mathcal{A}$. For conjunctions this is done by traversing the KB and looking for SPARQL triplets that can be added and will yield a valid set $\mathcal{A}^{\prime}$. For comparatives and superlatives we find a numerical property common to all $a\in\mathcal{A}$, and add a triplet and a restrictor to $r$ accordingly. For compositions, we find an entity $e$ in $r$, replace $e$ with a variable $?y$, and add to $r$ a triplet such that the denotation of that triplet is $\{e\}$.

#### Machine-generated (MG) questions

To have AMT workers paraphrase SPARQL queries into natural language, we need to present them in an understandable form. Therefore, we automatically generate a question they can paraphrase. When we generate new SPARQL queries, new predicates are added to the query (Table [7](https://arxiv.org/html/1803.06643#Sx3.T7)). We manually annotated 687 templates mapping KB predicates to text for different compositionality types (with 462 unique KB predicates), and use those templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. E.g., the template for $?x$ ns:book.author.works_written obj is “the author who wrote OBJ”. For brevity, we provide the details in the supplementary material.

#### Question Rephrasing

We used AMT workers to paraphrase MG questions into natural language (NL). Each question was paraphrased by one AMT worker and validated by 1–2 other workers. To encourage diversity, workers received a bonus if the edit distance of a paraphrase from the MG question was high. A total of 200 workers were involved, and 34,689 examples were produced at an average cost of $0.11 per question. Table [7](https://arxiv.org/html/1803.06643#Sx3.T7) gives an example for each compositionality type.

A drawback of our method for generating data is that, because queries are generated automatically, the question distribution is artificial from a semantic perspective. Still, developing models that are capable of reasoning is an important direction for natural language understanding, and ComplexWebQuestions provides an opportunity to develop and evaluate such models.

To summarize, each of our examples contains a question, an answer, a SPARQL query (that our models ignore), and all web snippets harvested by our model when attempting to answer the question. This renders ComplexWebQuestions useful for both the RC and semantic parsing communities.
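The composition rule described above ($r[e/y]$ plus a triplet whose denotation is $\{e\}$) can be illustrated with naive string manipulation. This is purely a hypothetical sketch: the query, entity id, and predicate below are fabricated, and real generation operates over Freebase as specified in Table 1, not over raw strings.

```python
def make_composition_query(r: str, entity: str, triplet: str) -> str:
    # r[e/y]: substitute the entity e with a fresh variable ?y ...
    body = r.replace(entity, "?y")
    # ... and constrain ?y with a triplet denoting exactly {e}.
    return body.replace("}", triplet + " }")

# Fabricated example query and predicate names, for illustration only.
r = "SELECT ?x WHERE { ns:australian_open_2008 ns:event.winner ?x . }"
r_prime = make_composition_query(
    r, "ns:australian_open_2008", "?y ns:time.event.year 2008 ."
)
print(r_prime)
# SELECT ?x WHERE { ?y ns:event.winner ?x . ?y ns:time.event.year 2008 . }
```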
### 4.2 Dataset analysis

ComplexWebQuestions builds on WebQuestions Berant et al. ([2013](https://arxiv.org/html/1803.06643#bib.bib3)). Questions in WebQuestions are usually about properties of entities (“What is the capital of France?”), often with some filter for the semantic type of the answer (“Which director”, “What city”). WebQuestions also contains questions that refer to events with multiple entities (“Who did Brad Pitt play in Troy?”). ComplexWebQuestions contains all these semantic phenomena, and we add four compositionality types by generating composition questions (45% of the time), conjunctions (45%), superlatives (5%), and comparatives (5%).

#### Paraphrasing

To generate rich paraphrases, we gave a bonus to workers who substantially modified MG questions. To check whether this worked, we measured surface similarity between MG and NL questions. Using normalized edit distance and the DICE coefficient, we found that NL questions are different from MG questions and that the similarity distribution has wide support (Figure [4](https://arxiv.org/html/1803.06643#S4.F4)). We also found that AMT workers tend to shorten the MG question (MG avg. length: 16; NL avg. length: 13.18) and use a richer vocabulary (MG # unique tokens: 9,489; NL # unique tokens: 14,282).

Figure 4: MG and NL question similarity measured with normalized edit distance and the DICE coefficient (bars are stacked).

Figure 5: Heat map of the similarity matrix between an MG and an NL question. The red line indicates a known MG split point. The blue line is the approximated NL split point.

We created a heuristic for approximating the amount of word re-ordering performed by AMT workers. For every question, we constructed a matrix $A$, where $A_{ij}$ is the similarity between token $i$ in the MG question and token $j$ in the NL question. Similarity is 1 if lemmas match, the cosine similarity according to GloVe embeddings Pennington et al. ([2014](https://arxiv.org/html/1803.06643#bib.bib31)) when above a threshold, and 0 otherwise. The matrix $A$ allows us to estimate whether parts of the MG question were re-ordered when paraphrased to NL (details in the supplementary material). We find that word re-ordering happened in 44.7% of the conjunction questions and 13.2% of the composition questions, illustrating that substantial changes to the MG question have been made. Figure [8](https://arxiv.org/html/1803.06643#Sx4.F8) illustrates the matrix $A$ for a pair of questions with re-ordering.

Last, we find that in WebQuestions almost all questions start with a wh-word, but in ComplexWebQuestions 22% of the questions start with another word, again showing substantial paraphrasing of the original questions.
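The construction of the alignment matrix $A$ can be sketched as follows. Note the stand-ins: exact lowercase comparison replaces real lemma matching, and the GloVe cosine is abstracted behind an optional `cos` callback, since the paper's exact threshold and embeddings are not specified here.

```python
def token_similarity(mg_tok: str, nl_tok: str, cos=None, threshold=0.5) -> float:
    if mg_tok.lower() == nl_tok.lower():    # stand-in for a lemma match -> 1
        return 1.0
    if cos is not None:                      # e.g., cosine of GloVe embeddings
        c = cos(mg_tok, nl_tok)
        if c > threshold:
            return c
    return 0.0                               # otherwise 0

def alignment_matrix(mg_tokens, nl_tokens, cos=None):
    # A[i][j] = similarity of MG token i and NL token j.
    return [[token_similarity(m, n, cos) for n in nl_tokens] for m in mg_tokens]

A = alignment_matrix(["birthplace", "of", "VAR"], ["where", "VAR", "was", "born"])
```

Plotting such a matrix as a heat map (as in Figure 5) makes re-ordered spans visible as off-diagonal blocks of high similarity.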
#### Qualitative analysis

We randomly sampled 100 examples from the development set and manually identified prevalent phenomena in the data. We present these types in Table [2](https://arxiv.org/html/1803.06643#S4.T2) along with their frequency. In 18% of the examples a conjunct in the MG question becomes a modifier of a wh-word in the NL question (Wh-modifier). In 22% substantial word re-ordering of the MG question occurred, and in 42% minor word re-ordering occurred (“number of building floors is 50” paraphrased as “has 50 floors”). AMT workers used a synonym in 54% of the examples, omitted words in 27%, and added new lexical material in 29%.

Table 2: Examples and frequency of prevalent phenomena in the NL questions for a manually analyzed subset (see text).

To obtain intuition for operations that will be useful in our model, we analyzed the same 100 examples for the types of operations that should be applied to the NL question during question decomposition. We found that splitting the NL question is insufficient: in 53% of the cases a word in the NL question needs to be copied to multiple questions after decomposition (row 3 in Table [3](https://arxiv.org/html/1803.06643#S5.T3)). Moreover, words that did not appear in the MG question need to be added in 39% of the cases, and words need to be deleted in 28% of the examples.
5 Model and Learning
--------------------

We would like to develop a model that translates questions into arbitrary computation trees with arbitrary text at the tree leaves. However, this requires training from denotations using methods such as maximum marginal likelihood or reinforcement learning Guu et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib13)), which are difficult to optimize. Moreover, such approaches involve issuing large amounts of queries to a search engine at training time, incurring high costs and slowing down training.

Instead, we develop a simple approach in this paper. We consider a subset of all possible computation trees that allows us to automatically generate noisy full supervision. In what follows, we describe the subset of computation trees considered and their representation, a method for automatically generating noisy supervision, and a pointer network model for decoding.

| Program | Question | Split |
| --- | --- | --- |
| SimpQA | “What building in Vienna, Austria has 50 floors” | – |
| Comp 5 9 | “Where is the birthplace of the writer of Standup Shakespeare” | “Where is the birthplace of VAR”; “the writer of Standup Shakespeare” |
| Conj 5 1 | “What film featured Taylor Swift and was directed by Deborah Aquila” | “What film featured Taylor Swift”; “film and was directed by Deborah Aquila” |

Table 3: Examples of the types of computation trees that can be decoded by our model.
#### Representation

We represent computation trees as a sequence of tokens, and consider trees with at most one compositional operation. We denote a sequence of question tokens by $q_{i:j}=(q_{i},\dots,q_{j})$, and the decoded sequence by $z$. We consider the following token sequences (see Table [3](https://arxiv.org/html/1803.06643#S5.T3)):

1. SimpQA: the function SimpQA is applied to the question $q$ without paraphrasing. In prefix notation this is the tree $\textsc{SimpQA}(q)$.

2. Comp $i$ $j$: this sequence of tokens corresponds to the computation tree $\textsc{Comp}(q_{1:i-1}\circ\texttt{VAR}\circ q_{j+1:|q|},\textsc{SimpQA}(q_{i:j}))$, where $\circ$ is the concatenation operator. This is used for questions where a substring is answered by SimpQA and the answers replace a variable before computing a final answer.

3. Conj $i$ $j$: this sequence of tokens corresponds to the computation tree $\textsc{Conj}(\textsc{SimpQA}(q_{0:i-1}),\textsc{SimpQA}(q_{j}\circ q_{i:|q|}))$. The idea is that a conjunction can be answered by splitting the question at a single point, where one token may be copied to the second part as well (“film” in Table [3](https://arxiv.org/html/1803.06643#S5.T3)). If nothing needs to be copied, then $j=-1$.
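Interpreting a decoded program over the question tokens is mechanical, and can be sketched as follows (0-based indices for illustration; `interpret` is our name, not the paper's). The Conj example reproduces the "Conj 5 1" split shown in Table 3.

```python
def interpret(tokens, op, i, j):
    if op == "Comp":
        # Inner question tokens[i..j] is answered first; the outer question
        # gets VAR in its place.
        inner = " ".join(tokens[i:j + 1])
        outer = " ".join(tokens[:i] + ["VAR"] + tokens[j + 1:])
        return outer, inner
    if op == "Conj":
        # Split at i; optionally copy token j into the second conjunct.
        first = " ".join(tokens[:i])
        second = " ".join(([tokens[j]] if j >= 0 else []) + tokens[i:])
        return first, second
    raise ValueError(f"unknown operator: {op}")

q = "What film featured Taylor Swift and was directed by Deborah Aquila".split()
print(interpret(q, "Conj", 5, 1))
# ('What film featured Taylor Swift', 'film and was directed by Deborah Aquila')
```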
| 187 |
+
|
| 188 |
+
This representation supports one compositional operation, and a single copying operation is allowed without any re-phrasing. In future work, we plan to develop a more general representation, which will require training from denotations.
#### Supervision

Training from denotations is difficult, as it involves querying a search engine frequently, which is expensive. Therefore, we take advantage of the original SPARQL queries and MG questions to generate noisy programs for composition and conjunction questions. Note that these noisy programs are used only as supervision, to avoid the costly process of manual annotation; the model itself does not assume SPARQL queries in any way.

We generate noisy programs from SPARQL queries in the following manner. First, we automatically identify composition and conjunction questions. Because we generated the MG question, we can exactly identify the split points ($i,j$ in composition questions and $i$ in conjunction questions) in the MG question. Then, we use a rule-based algorithm that takes the alignment matrix $A$ (Section [Dataset](https://arxiv.org/html/1803.06643#Sx3 "Dataset ‣ The Web as a Knowledge-base for Answering Complex Questions")) and approximates the split points in the NL question and the index $j$ to copy in conjunction questions. The red line in Figure [8](https://arxiv.org/html/1803.06643#Sx4.F8 "Figure 8 ‣ Generating noisy supervision ‣ The Web as a Knowledge-base for Answering Complex Questions") corresponds to the known split point in the MG question, and the blue one is the approximated split point in the NL question. The details of this rule-based algorithm are in the supplementary material.

Thus, we obtain noisy supervision for all composition and conjunction questions and can train a model that translates a question $q$ to a representation $z=z_{1}\ z_{2}\ z_{3}$, where $z_{1}\in\{\texttt{Comp},\texttt{Conj}\}$ and $z_{2},z_{3}$ are integer indices.
#### Pointer network

The representation $z$ points to indices in the input, so pointer networks Vinyals et al. ([2015](https://arxiv.org/html/1803.06643#bib.bib36)) are a sensible choice. Because we also need to decode the tokens Comp and Conj, we use “augmented pointer networks” Zhong et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib45)): for every question $q$, an augmented question $\hat{q}$ is created by appending the tokens “Comp Conj” to $q$. This allows us to decode the representation $z$ with one pointer network that, at each decoding step, points to one token in the augmented question. We encode $\hat{q}$ with a one-layer GRU Cho et al. ([2014](https://arxiv.org/html/1803.06643#bib.bib9)), and decode $z$ with a one-layer GRU with attention, as in Jia and Liang ([2016](https://arxiv.org/html/1803.06643#bib.bib18)). The only difference is that we decode tokens from the augmented question $\hat{q}$ rather than from a fixed vocabulary.
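The augmented-question convention can be sketched as follows. The helper names are ours, and the actual model scores positions with a GRU encoder-decoder rather than returning tokens directly:

```python
# Sketch of the "augmented pointer network" input/output convention: the
# operator tokens Comp and Conj are appended to the question so that a single
# pointer decoder can emit both operators and question indices.

def augment(question_tokens):
    """Append the operator tokens so they become pointable positions."""
    return question_tokens + ["Comp", "Conj"]

def pointers_to_tokens(q_hat, pointer_indices):
    """Map a decoded sequence of pointer positions back to tokens of q_hat."""
    return [q_hat[i] for i in pointer_indices]
```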
We train the model with token-level cross-entropy loss, minimizing $-\sum_{j}\log p_{\theta}(z_{j}\mid x,z_{1:j-1})$, where $x$ is the augmented question. The parameters $\theta$ include the GRU encoder and decoder, and embeddings for unknown tokens (tokens not covered by the pre-trained GloVe embeddings Pennington et al. ([2014](https://arxiv.org/html/1803.06643#bib.bib31))).
The trained model decodes Comp and Conj representations, but sometimes using $\textsc{SimpQA}(q)$ without decomposition is better. To handle such cases, we assume that we always have access to a score for every answer, provided by the final invocation of SimpQA (in Conj questions this score is the maximum of the scores given by SimpQA for the two conjuncts), and use the following rule to decide between the decoded representation $z$ and $\textsc{SimpQA}(q)$: given the answer scores produced by $z$ and those produced by $\textsc{SimpQA}(q)$, we return the single answer with the highest score. The intuition is that the confidence reflected in the SimpQA scores is correlated with answer correctness. In future work, we will train directly from denotations and handle all logical functions in a uniform manner.
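The selection rule can be sketched as follows (our paraphrase; the function name and score format are illustrative assumptions, with scores coming from the final SimpQA invocation):

```python
# Sketch of the final answer-selection rule: given scored answers from the
# decomposed program z and from running SimpQA on the whole question, return
# the single answer with the highest confidence score.

def select_answer(z_answers, simpqa_answers):
    """Each argument is a dict mapping answer string -> confidence score."""
    best_answer, best_score = None, float("-inf")
    for answers in (z_answers, simpqa_answers):
        for answer, score in answers.items():
            if score > best_score:
                best_answer, best_score = answer, score
    return best_answer
```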
6 Experiments
-------------

In this section, we examine whether question decomposition can empirically improve the performance of QA models on complex questions.
#### Experimental setup

We used 80% of the examples in ComplexWebQuestions for training, 10% for development, and 10% for test, training the pointer network on 24,708 composition and conjunction examples. The hidden state dimension of the pointer network is 512, and we used Adagrad Duchi et al. ([2010](https://arxiv.org/html/1803.06643#bib.bib11)) combined with $L_2$ regularization and a dropout rate of 0.25. We initialize 50-dimensional word embeddings with GloVe and learn embeddings for missing words.
#### Simple QA model

As our SimpQA function, we download the web-based QA model of Talmor et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib35)). This model sends the question to Google’s search engine and extracts a distribution over answers from the top-100 web snippets using manually engineered features. We re-train the model on our data with one new feature: for every question $q$ and candidate answer mention in a snippet, we run RaSoR, an RC model by Lee et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib24)), and add the output logit score as a feature. We found that combining the web-facing model of Talmor et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib35)) with RaSoR improved performance.
#### Evaluation

For evaluation, we measure precision@1 (p@1), i.e., whether the highest-scoring answer returned by the model string-matches one of the correct answers (while answers are sets, 70% of the questions have a single answer, and the average size of the answer set is 2.3).
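A sketch of the metric as described (our own implementation; the exact string normalization applied before matching is an assumption left out here):

```python
# precision@1: the top-scoring predicted answer counts as correct if it
# string-matches any member of the gold answer set for that question.

def precision_at_1(predictions, gold_answer_sets):
    """predictions: one top answer per question; gold_answer_sets: sets of
    acceptable answer strings, aligned with predictions."""
    hits = sum(1 for pred, gold in zip(predictions, gold_answer_sets)
               if pred in gold)
    return hits / len(predictions)
```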
We evaluate the following models and oracles:

1. SimpQA: running SimpQA on the entire question, i.e., without decomposition.

2. SplitQA: our main model, which answers complex questions by decomposition.

3. SplitQAOracle: an _oracle_ model that chooses in hindsight whether to perform question decomposition or to use SimpQA, based on whichever performs better.

4. RCQA: identical to SimpQA, except that we replace the RC model from Talmor et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib35)) with the RC model DocQA Clark and Gardner ([2017](https://arxiv.org/html/1803.06643#bib.bib10)), whose performance is comparable to the state of the art on TriviaQA.

5. SplitRCQA: identical to SplitQA, except that we replace the RC model from Talmor et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib35)) with DocQA.

6. GoogleBox: we sample 100 random development set questions and check whether Google returns a box that contains one of the correct answers.

7. Human: we sample 100 random development set questions and manually answer them with Google’s search engine, using all available information. We limit the time allowed per question to 4 minutes.

Table 4: precision@1 results on the development set and test set for ComplexWebQuestions.
Table [4](https://arxiv.org/html/1803.06643#S6.T4 "Table 4 ‣ Evaluation ‣ 6 Experiments ‣ The Web as a Knowledge-base for Answering Complex Questions") presents the results on the development and test sets. SimpQA, which does not decompose questions, obtained 20.8 p@1, while performing question decomposition substantially improves performance to 27.5 p@1. An upper bound with perfect knowledge of when to decompose is given by SplitQAOracle, at 33.7 p@1.

RCQA obtained lower performance than SimpQA, as it was trained on data from a different distribution. More importantly, SplitRCQA outperforms RCQA by 3.4 points, illustrating that this RC model also benefits from question decomposition, despite the fact that it was not created with question decomposition in mind. This shows the importance of question decomposition for retrieving documents from which an RC model can extract answers. GoogleBox finds a correct answer in 2.5% of the cases, showing that complex questions are challenging for search engines.

To conclude, we demonstrated that question decomposition substantially improves performance on answering complex questions using two independent RC models.
#### Analysis

We estimate human performance (Human) at 63.0 p@1, and find that answering a complex question takes roughly 1.3 minutes on average. Among the questions we were unable to answer: in 27% the answer was correct but exact string match with the gold answers failed; in 23.1% the time required to compute the answer was beyond our capabilities; for 15.4% we could not find an answer on the web; 11.5% were ambiguous; 11.5% involved paraphrasing errors by AMT workers; and the remaining 11.5% did not contain a correct gold answer.
SplitQA decides whether to decompose a question based on the confidence of SimpQA. In 61% of the questions the model chooses to decompose the question, and for the rest it sends the question as-is to the search engine. When one of the strategies (decomposition vs. no decomposition) works, our model chooses the right one in 86% of the cases. Moreover, in 71% of these answerable questions, only one strategy yields a correct answer.
We evaluate the ability of the pointer network to mimic our labeling heuristic on the development set. We find that the model outputs the exact correct output sequence 60.9% of the time; allowing errors of one word to the left or right (which often does not change the final output), accuracy is 77.1%. Token-level accuracy is 83.0%, or 89.7% when allowing one-word errors. This shows that SplitQA learned to identify decomposition points in questions. We also observed that SplitQA often produces decomposition points that are better than the heuristic’s, e.g., for “What is the place of birth for the lyricist of Roman Holiday”, SplitQA produced “the lyricist of Roman Holiday”, but the heuristic produced “the place of birth for the lyricist of Roman Holiday”. Additional examples of SplitQA question decompositions are provided in Table [5](https://arxiv.org/html/1803.06643#S6.T5 "Table 5 ‣ Analysis ‣ 6 Experiments ‣ The Web as a Knowledge-base for Answering Complex Questions").
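The two accuracy variants used in this analysis can be sketched as follows (our own rendering; representations are encoded as (operator, i, j) triples, and the function names are assumptions):

```python
# Exact-match accuracy of predicted decompositions, plus the relaxed variant
# that forgives split indices that are off by one word.

def exact_match(pred, gold):
    """pred and gold are (operator, i, j) triples."""
    return pred == gold

def match_within_one(pred, gold):
    """Same operator, and every index within one position of the gold index."""
    if pred[0] != gold[0]:
        return False
    return all(abs(p - g) <= 1 for p, g in zip(pred[1:], gold[1:]))
```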
| Question | Split-1 | Split-2 |
| --- | --- | --- |
| _“Find the actress who played Hailey Rogers, what label is she signed to”_ | “the actress who played Hailey Rogers” | “Find VAR, what label is she signed to” |
| _“What are the colors of the sports team whose arena stadium is the AT&T Stadium”_ | “the sports team whose arena stadium is the AT&T Stadium” | “What are the colors of VAR” |
| _“What amusement park is located in Madrid Spain and includes the stunt fall ride”_ | “What amusement park is located in Madrid Spain and” | “park includes the stunt fall ride” |
| _“Which university whose mascot is The Trojan did Derek Fisher attend”_ | “Which university whose mascot is The Trojan did” | “university Derek Fisher attend” |

Table 5: Examples of question decompositions from SplitQA.
#### ComplexQuestions

To further examine the ability of web-based QA models, we run an experiment on ComplexQuestions Bao et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib2)), a small dataset of question-answer pairs designed for semantic parsing against Freebase.
Table 6: F1 results for ComplexQuestions.
We ran SimpQA on this dataset (Table [6](https://arxiv.org/html/1803.06643#S6.T6 "Table 6 ‣ ComplexQuestions ‣ 6 Experiments ‣ The Web as a Knowledge-base for Answering Complex Questions")) and obtained 38.6 F1 (the official metric), slightly lower than CompQ, the best system, which operates directly against Freebase. (By adding the output logit from RaSoR, we improved test F1 from 32.6, as reported by Talmor et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib35)), to 38.6.) By analyzing the training data, we found that we can decompose Comp questions with a rule that splits the question when the words “when” or “during” appear, e.g., “Who was vice president when JFK was president?” (the dataset is too small to train our decomposition model). We decomposed questions with this rule and obtained 39.7 F1 (SplitQARule). Analyzing the development set errors, we found that occasionally SplitQARule returns a correct answer that fails to string-match with the gold answer. By manually fixing these cases, our development set F1 reaches 46.9 (SplitQARule++). Note that CompQ does not suffer from string matching issues, as it operates directly against the Freebase KB and is thus guaranteed to output the answer in the correct form. This short experiment shows that a web-based QA model can rival a semantic parser that works against a KB, and that simple question decomposition is beneficial and leads to results comparable to the state of the art.
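A sketch of this rule (illustrative only; the exact implementation, e.g. how the second part's answer is substituted back, is an assumption on our part):

```python
# Rule-based decomposition for ComplexQuestions Comp questions: split at the
# first occurrence of "when" or "during"; the second part is answered first
# and its answer replaces VAR in the first part.

def rule_based_split(question):
    tokens = question.rstrip("?").split()
    for idx, tok in enumerate(tokens):
        if tok.lower() in ("when", "during"):
            first = " ".join(tokens[:idx] + ["VAR"])
            second = " ".join(tokens[idx + 1:])
            return first, second
    return question, None  # no split point found
```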
7 Related work
--------------

This work is related to a body of work in semantic parsing and RC, in particular to datasets that focus on complex questions, such as TriviaQA Joshi et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib20)), WikiHop Welbl et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib40)), and Race Lai et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib23)). Our distinction is in proposing a framework for complex QA that focuses on question decomposition.
Our work is related to Chen et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib8)) and Watanabe et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib39)), who combined retrieval and answer extraction over a large set of documents. We work against the entire web, and propose question decomposition for finding information.

This work is also closely related to Dunn et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib12)) and Buck et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib6)): we start with questions directly and do not assume documents are given. Buck et al. ([2017](https://arxiv.org/html/1803.06643#bib.bib6)) also learn to phrase questions given a black-box QA model, but while they focus on paraphrasing, we address decomposition. Using a black-box QA model is challenging because one cannot assume differentiability, and reproducibility is difficult as black boxes change over time. Nevertheless, we argue that such QA setups provide a holistic view of the problem of QA and can shed light on important research directions going forward.
Another closely related research direction is Iyyer et al. ([2016](https://arxiv.org/html/1803.06643#bib.bib17)), who answered complex questions by decomposing them. However, they used crowdsourcing to obtain direct supervision for the gold decompositions, while we do not assume such supervision. Moreover, they work against web tables, while we interact with a search engine over the entire web.
8 Conclusion
------------

In this paper, we propose a new framework for answering complex questions that is based on question decomposition and interaction with the web. We develop a model under this framework and demonstrate that it improves complex QA performance on two datasets, using two RC models. We also release a new dataset, ComplexWebQuestions, including questions, SPARQL programs, answers, and web snippets harvested by our model. We believe this dataset will serve the QA and semantic parsing communities, drive research on compositionality, and push the community to work on holistic solutions for QA.
In future work, we plan to train our model directly from weak supervision, i.e., denotations, and to extract information not only from the web, but also from structured sources such as web tables and KBs.
Acknowledgements
----------------

We thank Jonathan Herzig, Ni Lao, and the anonymous reviewers for their constructive feedback. This work was supported by the Samsung runway project and the Israel Science Foundation, grant 942/16.
References
----------
* Artzi and Zettlemoyer (2013) Y.Artzi and L.Zettlemoyer. 2013. Weakly supervised learning of semantic parsers for mapping instructions to actions. Transactions of the Association for Computational Linguistics (TACL) 1:49–62.
* Bao et al. (2016) J.Bao, N.Duan, Z.Yan, M.Zhou, and T.Zhao. 2016. Constraint-based question answering with knowledge graph. In International Conference on Computational Linguistics (COLING).
* Berant et al. (2013) J.Berant, A.Chou, R.Frostig, and P.Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).
* Berant and Liang (2015) J.Berant and P.Liang. 2015. Imitation learning of agenda-based semantic parsers. Transactions of the Association for Computational Linguistics (TACL) 3:545–558.
* Bollacker et al. (2008) K.Bollacker, C.Evans, P.Paritosh, T.Sturge, and J.Taylor. 2008. Freebase: a collaboratively created graph database for structuring human knowledge. In International Conference on Management of Data (SIGMOD). pages 1247–1250.
* Buck et al. (2017) C.Buck, J.Bulian, M.Ciaramita, A.Gesmundo, N.Houlsby, W.Gajewski, and W.Wang. 2017. Ask the right questions: Active question reformulation with reinforcement learning. arXiv preprint arXiv:1705.07830 .
* Chen et al. (2016) D.Chen, J.Bolton, and C.D. Manning. 2016. A thorough examination of the CNN / Daily Mail reading comprehension task. In Association for Computational Linguistics (ACL).
* Chen et al. (2017) D.Chen, A.Fisch, J.Weston, and A.Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051 .
* Cho et al. (2014) K.Cho, B.van Merriënboer, D.Bahdanau, and Y.Bengio. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259 .
* Clark and Gardner (2017) C.Clark and M.Gardner. 2017. Simple and effective multi-paragraph reading comprehension. arXiv preprint arXiv:1710.10723 .
* Duchi et al. (2010) J.Duchi, E.Hazan, and Y.Singer. 2010. Adaptive subgradient methods for online learning and stochastic optimization. In Conference on Learning Theory (COLT).
* Dunn et al. (2017) M.Dunn, L.Sagun, M.Higgins, U.Guney, V.Cirik, and K.Cho. 2017. SearchQA: A new Q&A dataset augmented with context from a search engine. arXiv preprint.
* Guu et al. (2017) K.Guu, P.Pasupat, E.Z. Liu, and P.Liang. 2017. From language to programs: Bridging reinforcement learning and maximum marginal likelihood. In Association for Computational Linguistics (ACL).
* Hermann et al. (2015) K.M. Hermann, T.Kočiský, E.Grefenstette, L.Espeholt, W.Kay, M.Suleyman, and P.Blunsom. 2015. Teaching machines to read and comprehend. In Advances in Neural Information Processing Systems (NIPS).
* Hewlett et al. (2016) D.Hewlett, A.Lacoste, L.Jones, I.Polosukhin, A.Fandrianto, J.Han, M.Kelcey, and D.Berthelot. 2016. Wikireading: A novel large-scale language understanding task over Wikipedia. In Association for Computational Linguistics (ACL).
* Hill et al. (2015) F.Hill, A.Bordes, S.Chopra, and J.Weston. 2015. The goldilocks principle: Reading children’s books with explicit memory representations. In International Conference on Learning Representations (ICLR).
* Iyyer et al. (2016) M.Iyyer, W.Yih, and M.Chang. 2016. Answering complicated question intents expressed in decomposed question sequences. CoRR.
* Jia and Liang (2016) R.Jia and P.Liang. 2016. Data recombination for neural semantic parsing. In Association for Computational Linguistics (ACL).
* Jia and Liang (2017) R.Jia and P.Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Empirical Methods in Natural Language Processing (EMNLP).
* Joshi et al. (2017) M.Joshi, E.Choi, D.Weld, and L.Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Association for Computational Linguistics (ACL).
* Krishnamurthy and Mitchell (2012) J.Krishnamurthy and T.Mitchell. 2012. Weakly supervised training of semantic parsers. In Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL). pages 754–765.
* Kwiatkowski et al. (2013) T.Kwiatkowski, E.Choi, Y.Artzi, and L.Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP).
* Lai et al. (2017) G.Lai, Q.Xie, H.Liu, Y.Yang, and E.Hovy. 2017. Race: Large-scale reading comprehension dataset from examinations. arXiv preprint arXiv:1704.04683 .
* Lee et al. (2016) K.Lee, M.Lewis, and L.Zettlemoyer. 2016. Global neural CCG parsing with optimality guarantees. In Empirical Methods in Natural Language Processing (EMNLP).
* Liang (2013) P.Liang. 2013. Lambda dependency-based compositional semantics. arXiv preprint arXiv:1309.4408 .
* Liang et al. (2011) P.Liang, M.I. Jordan, and D.Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL). pages 590–599.
* Nguyen et al. (2016) T.Nguyen, M.Rosenberg, X.Song, J.Gao, S.Tiwary, R.Majumder, and L.Deng. 2016. MS MARCO: A human generated machine reading comprehension dataset. In Workshop on Cognitive Computing at NIPS.
* Nogueira and Cho (2016) R.Nogueira and K.Cho. 2016. End-to-end goal-driven web navigation. In Advances in Neural Information Processing Systems (NIPS).
* Onishi et al. (2016) T.Onishi, H.Wang, M.Bansal, K.Gimpel, and D.McAllester. 2016. Who did what: A large-scale person-centered cloze dataset. In Empirical Methods in Natural Language Processing (EMNLP).
* Pasupat and Liang (2015) P.Pasupat and P.Liang. 2015. Compositional semantic parsing on semi-structured tables. In Association for Computational Linguistics (ACL).
* Pennington et al. (2014) J.Pennington, R.Socher, and C.D. Manning. 2014. Glove: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP).
* Rajpurkar et al. (2016) P.Rajpurkar, J.Zhang, K.Lopyrev, and P.Liang. 2016. Squad: 100,000+ questions for machine comprehension of text. In Empirical Methods in Natural Language Processing (EMNLP).
* Steedman (2000) M.Steedman. 2000. The Syntactic Process. MIT Press.
* Sutskever et al. (2014) I.Sutskever, O.Vinyals, and Q.V. Le. 2014. Sequence to sequence learning with neural networks. In Advances in Neural Information Processing Systems (NIPS). pages 3104–3112.
* Talmor et al. (2017) A.Talmor, M.Geva, and J.Berant. 2017. Evaluating semantic parsing against a simple web-based question answering model. In *SEM.
* Vinyals et al. (2015) O.Vinyals, M.Fortunato, and N.Jaitly. 2015. Pointer networks. In Advances in Neural Information Processing Systems (NIPS). pages 2674–2682.
* Wang et al. (2017) W.Wang, N.Yang, F.Wei, B.Chang, and M.Zhou. 2017. Gated self-matching networks for reading comprehension and question answering. In Association for Computational Linguistics (ACL).
* Wang et al. (2015) Y.Wang, J.Berant, and P.Liang. 2015. Building a semantic parser overnight. In Association for Computational Linguistics (ACL).
* Watanabe et al. (2017) Y.Watanabe, B.Dhingra, and R.Salakhutdinov. 2017. Question answering from unstructured text by retrieval and comprehension. arXiv preprint arXiv:1703.08885 .
* Welbl et al. (2017) J.Welbl, P.Stenetorp, and S.Riedel. 2017. Constructing datasets for multi-hop reading comprehension across documents. arXiv preprint arXiv:1710.06481 .
* Yang et al. (2015) Y.Yang, W.Yih, and C.Meek. 2015. WikiQA: A challenge dataset for open-domain question answering. In Empirical Methods in Natural Language Processing (EMNLP). pages 2013–2018.
* Yih et al. (2016) W.Yih, M.Richardson, C.Meek, M.Chang, and J.Suh. 2016. The value of semantic parse labeling for knowledge base question answering. In Association for Computational Linguistics (ACL).
* Zelle and Mooney (1996) M.Zelle and R.J. Mooney. 1996. Learning to parse database queries using inductive logic programming. In Association for the Advancement of Artificial Intelligence (AAAI). pages 1050–1055.
* Zettlemoyer and Collins (2005) L.S. Zettlemoyer and M.Collins. 2005. Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars. In Uncertainty in Artificial Intelligence (UAI). pages 658–666.
* Zhong et al. (2017) V.Zhong, C.Xiong, and R.Socher. 2017. Seq2sql: Generating structured queries from natural language using reinforcement learning. arXiv preprint arXiv:1709.00103.

Supplementary Material
----------------------

Dataset
-------
#### Generating SPARQL queries

Given a SPARQL query $r$, we create four types of more complex queries: conjunctions, superlatives, comparatives, and compositions. For conjunctions, superlatives, and comparatives, we identify SPARQL queries in WebQuestionsSP whose denotation is a set $\mathcal{A}$, $|\mathcal{A}|\geq 2$, and generate a new query $r'$ whose denotation is a strict non-empty subset $\mathcal{A}'\subset\mathcal{A}$, $\mathcal{A}'\neq\emptyset$. We also discard questions that contain the answer within the new machine-generated question.
For conjunctions, this is done by traversing the KB and looking for SPARQL triplets that can be added and yield a valid set $\mathcal{A}'$.
For comparatives and superlatives, this is done by finding a numerical property common to all $a\in\mathcal{A}$ and adding a clause to $r$ accordingly.
For compositions, we find an entity $e$ in $r$, replace $e$ with a variable $y$, and add to $r$ a clause such that the denotation of the clause is $\{e\}$. We also check for, and discard, ambiguous questions that yield more than one answer for the entity $e$.
Table 7: Rules for generating a complex query $r'$ from a query $r$ (‘.’ in SPARQL corresponds to logical and). The query $r$ returns the variable $?x$ and contains an entity $e$. We denote by $r[e/y]$ the replacement of the entity $e$ with a variable $?y$. $\text{pred}_1$ and $\text{pred}_2$ are any KB predicates, obj is any KB entity, $V$ is a numerical value, and $?c$ is a variable of a CVT type in Freebase, which refers to events. The last column provides an example NL question for each type.
Table [7](https://arxiv.org/html/1803.06643#Sx3.T7 "Table 7 ‣ Generating SPARQL queries ‣ Dataset ‣ The Web as a Knowledge-base for Answering Complex Questions") gives the exact rules for generation.
#### Machine-generated (MG) questions

To have AMT workers paraphrase SPARQL queries into natural language, we need to present the queries in an understandable form. Therefore, we automatically generate a question they can paraphrase. When we generate SPARQL queries, new predicates are added to the query (Table [7](https://arxiv.org/html/1803.06643#Sx3.T7 "Table 7 ‣ Generating SPARQL queries ‣ Dataset ‣ The Web as a Knowledge-base for Answering Complex Questions")). We manually annotated 503 templates mapping predicates to text for the different compositionality types (covering 377 unique KB predicates). We annotated the templates in the context of several machine-generated questions to ensure that the resulting templates are phrased in understandable language.
We use these templates to modify the original WebQuestionsSP question according to the meaning of the generated SPARQL query. E.g., the template for `?x ns:book.author.works_written obj` is “the author who wrote OBJ”. Table [8](https://arxiv.org/html/1803.06643#Sx3.T8 "Table 8 ‣ Machine-generated (MG) questions ‣ Dataset ‣ The Web as a Knowledge-base for Answering Complex Questions") shows several examples of such templates. “Obj” is replaced in turn by the name of the object at hand, according to Freebase. Freebase represents events that contain multiple arguments using a special node called a CVT, which represents the event and is connected by edges to all event arguments. Therefore, some of our templates include two predicates that go through a CVT node; these are denoted in Table [8](https://arxiv.org/html/1803.06643#Sx3.T8 "Table 8 ‣ Machine-generated (MG) questions ‣ Dataset ‣ The Web as a Knowledge-base for Answering Complex Questions") with ’+’.
To fuse the templates with the original WebQuestionsSP natural language questions, templates contain lexical material that glues them back to the question, conditioned on the compositionality type. For example, in Conj questions we use the coordinating phrase “and is”, so that “the author who wrote OBJ” produces “Who was born in London and is the author who wrote OBJ”.
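As a minimal sketch of this fusing step (our own illustration; `fuse_conj` and the glue-phrase handling generalize only the single Conj example above):

```python
# Fuse an annotated predicate template with a seed WebQuestionsSP question to
# produce a machine-generated Conj question, using the "and is" glue phrase.

def fuse_conj(seed_question, template, obj_name):
    """Append a conjunct built from a predicate template to a seed question."""
    clause = template.replace("OBJ", obj_name)
    return seed_question.rstrip("?") + " and is " + clause
```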


Figure 6: Overview of the data collection process. Blue text denotes the different stages of term addition, green represents the obj value, and red the intermediate text that connects the new term to the seed question.
| Freebase Predicate | Template |
| --- | --- |
| ns:book.author.works_written | “the author who wrote obj” |
| ns:aviation.airport.airlines + ns:aviation.airline_airport_presence.airline | “the airport with the obj airline” |
| ns:award.competitor.competitions_won | “the winner of obj” |
| ns:film.actor.film + ns:film.performance.film | “the actor that played in the film obj” |

Table 8: Template examples.
#### First word distribution



Figure 7: Distribution of the first word in questions.
We find that in WebQuestions almost all questions start with a wh-word, but in ComplexWebQuestions 22% of the questions start with another word, again showing substantial paraphrasing of the original questions. Figure [7](https://arxiv.org/html/1803.06643#Sx3.F7 "Figure 7 ‣ First word distribution ‣ Dataset ‣ The Web as a Knowledge-base for Answering Complex Questions") shows the distribution of first words in questions.
Generating noisy supervision
----------------------------

We created a heuristic for approximating the amount of global word re-ordering performed by AMT workers and for creating noisy supervision. For every question, we construct a matrix $A$, where $A_{ij}$ is the similarity between token $i$ in the NL question and token $j$ in the MG question (the indexing used by Equations 1 and 2 below). The similarity is 1 if the lemmas match, the cosine similarity according to GloVe embeddings when that similarity is above a threshold, and 0 otherwise. This allows us to compute an approximate word alignment between the MG question tokens and the NL question tokens and to assess whether word re-ordering occurred.
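A sketch of the matrix construction, with a stand-in lemma test (exact token match) and a generic word-vector cosine in place of the lemmas and GloVe vectors used in the paper; the threshold value is an assumption, and rows index NL tokens while columns index MG tokens, the orientation used by the split-point equations:

```python
import math

def cosine(u, v):
    """Plain cosine similarity between two dense vectors."""
    num = sum(a * b for a, b in zip(u, v))
    den = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return num / den if den else 0.0

def alignment_matrix(nl_tokens, mg_tokens, vectors, threshold=0.6):
    """A[i][j]: 1 for a lemma (here: exact) match, the embedding cosine
    similarity if above the threshold, and 0 otherwise."""
    A = [[0.0] * len(mg_tokens) for _ in nl_tokens]
    for i, w in enumerate(nl_tokens):
        for j, v in enumerate(mg_tokens):
            if w == v:
                A[i][j] = 1.0
            elif w in vectors and v in vectors:
                sim = cosine(vectors[w], vectors[v])
                if sim > threshold:
                    A[i][j] = sim
    return A
```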
For a natural language Conj question of length $n$ and a machine-generated question of length $m$ with a known split point index $r$, the algorithm first computes the best point to split the NL question assuming there is no re-ordering. This is done by iterating over all candidate split points $p$ and returning the split point $p^{*}_{1}$ that maximizes:
$$\sum_{0\leq i<p}\max_{0\leq j<r}A(i,j)+\sum_{p\leq i<n}\max_{r\leq j<m}A(i,j)\qquad(1)$$
We then compute $p^{*}_{2}$ by finding the best split point under the assumption that there is re-ordering in the NL question:
$$\sum_{0\leq i<p}\max_{r\leq j<m}A(i,j)+\sum_{p\leq i<n}\max_{0\leq j<r}A(i,j)\qquad(2)$$
We then determine the final split point, and whether re-ordering occurred, by comparing the two scores and choosing the split point that achieves the higher one.
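Under the same notation, the Conj split-point search can be sketched as below. This is an illustrative implementation, assuming $A$ is an $n \times m$ NumPy array (NL tokens on rows, MG tokens on columns) and $0 < r < m$:

```python
import numpy as np

def conj_split(A, r):
    """Return (p*, reordered) for a Conj question.
    A: n x m similarity matrix (NL tokens x MG tokens);
    r: known split point in the MG question, with 0 < r < m."""
    n, m = A.shape
    # Best match of each NL token against each MG half.
    left = A[:, :r].max(axis=1)   # best match among MG tokens 0..r-1
    right = A[:, r:].max(axis=1)  # best match among MG tokens r..m-1
    best = (-1.0, 1, False)
    for p in range(1, n):
        s1 = left[:p].sum() + right[p:].sum()   # Eq. (1): no re-ordering
        s2 = right[:p].sum() + left[p:].sum()   # Eq. (2): re-ordering
        for score, reordered in ((s1, False), (s2, True)):
            if score > best[0]:
                best = (score, p, reordered)
    return best[1], best[2]
```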
In Comp questions, two split points are returned, representing the beginning and end of the phrase that is sent to the QA model. Therefore, if $r_{1},r_{2}$ are the known split points in the machine-generated question, we return the points $p_{1},p_{2}$ that maximize:
$$\sum_{0\leq i<p_{1}}\max_{0\leq j<r_{1}}A(i,j)+\sum_{p_{1}\leq i<p_{2}}\max_{r_{1}\leq j<r_{2}}A(i,j)+\sum_{p_{2}\leq i<n}\max_{r_{2}\leq j<m}A(i,j).$$
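A corresponding sketch for Comp questions, with the same assumed layout for $A$ and a brute-force search over pairs $p_1 \leq p_2$ (assuming $0 < r_1 < r_2 < m$):

```python
import numpy as np

def comp_split(A, r1, r2):
    """Return (p1*, p2*) maximizing the Comp objective.
    A: n x m similarity matrix (NL tokens x MG tokens);
    r1, r2: known MG split points, with 0 < r1 < r2 < m."""
    n, m = A.shape
    outer_l = A[:, :r1].max(axis=1)   # best match among MG tokens before r1
    inner = A[:, r1:r2].max(axis=1)   # best match among MG tokens r1..r2-1
    outer_r = A[:, r2:].max(axis=1)   # best match among MG tokens from r2
    best_score, best = -1.0, (0, 0)
    for p1 in range(n + 1):
        for p2 in range(p1, n + 1):
            score = outer_l[:p1].sum() + inner[p1:p2].sum() + outer_r[p2:].sum()
            if score > best_score:
                best_score, best = score, (p1, p2)
    return best
```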
Figure [8](https://arxiv.org/html/1803.06643#Sx4.F8 "Figure 8 ‣ Generating noisy supervision ‣ The Web as a Knowledge-base for Answering Complex Questions") illustrates finding the split point for a Conj question using Equation ([2](https://arxiv.org/html/1803.06643#Sx4.E2 "In Generating noisy supervision ‣ The Web as a Knowledge-base for Answering Complex Questions")). The red line corresponds to the known split point in the MG question, and the blue line to the estimated split point $p^{*}$ in the NL question.
Figure 8: Heat map of the similarity matrix between an MG and an NL question. The red line indicates the known MG split point, and the blue line the approximated NL split point. Below is a graph of the score of each candidate split point.
|