{ "paper_id": "P16-1003", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T08:57:18.506185Z" }, "title": "Inferring Logical Forms From Denotations", "authors": [ { "first": "Panupong", "middle": [], "last": "Pasupat", "suffix": "", "affiliation": {}, "email": "ppasupat@cs.stanford.edu" }, { "first": "Percy", "middle": [], "last": "Liang", "suffix": "", "affiliation": {}, "email": "pliang@cs.stanford.edu" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "A core problem in learning semantic parsers from denotations is picking out consistent logical forms-those that yield the correct denotation-from a combinatorially large space. To control the search space, previous work relied on a restricted set of rules, which limits expressivity. In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms, which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.", "pdf_parse": { "paper_id": "P16-1003", "_pdf_hash": "", "abstract": [ { "text": "A core problem in learning semantic parsers from denotations is picking out consistent logical forms-those that yield the correct denotation-from a combinatorially large space. To control the search space, previous work relied on a restricted set of rules, which limits expressivity. 
In this paper, we consider a much more expressive class of logical forms, and show how to use dynamic programming to efficiently represent the complete set of consistent logical forms. Expressivity also introduces many more spurious logical forms, which are consistent with the correct denotation but do not represent the meaning of the utterance. To address this, we generate fictitious worlds and use crowdsourced denotations on these worlds to filter out spurious logical forms. On the WIKITABLEQUESTIONS dataset, we increase the coverage of answerable questions from 53.5% to 76%, and the additional crowdsourced supervision lets us rule out 92.1% of spurious logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Consider the task of learning to answer complex natural language questions (e.g., \"Where did the last 1st place finish occur?\") using only question-answer pairs as supervision (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013). Semantic parsers map the question into a logical form (e.g., R[Venue].argmax(Position.1st, Index)) that can be executed on a knowledge source to obtain the answer (denotation). Logical forms are very expressive since they can be recursively composed, but this very expressivity makes it more difficult to search over the space of logical forms. Previous work sidesteps this obstacle by restricting the set of possible logical form compositions, but this is limiting. 
For instance, for the system in Pasupat and Liang (2015) , in only 53.5% of the examples was the correct logical form even in the set of generated logical forms.", "cite_spans": [ { "start": 176, "end": 197, "text": "(Clarke et al., 2010;", "ref_id": "BIBREF3" }, { "start": 198, "end": 217, "text": "Liang et al., 2011;", "ref_id": "BIBREF12" }, { "start": 218, "end": 238, "text": "Berant et al., 2013;", "ref_id": "BIBREF2" }, { "start": 304, "end": 311, "text": "[Venue]", "ref_id": null }, { "start": 742, "end": 766, "text": "Pasupat and Liang (2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The goal of this paper is to solve two main challenges that prevent us from generating more expressive logical forms. The first challenge is computational: the number of logical forms grows exponentially as their size increases. Directly enumerating over all logical forms becomes infeasible, and pruning techniques such as beam search can inadvertently prune out correct logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The second challenge is the large increase in spurious logical forms-those that do not reflect the semantics of the question but coincidentally execute to the correct denotation. For example, while logical forms z 1 , . . . , z 5 in Figure 1 are all consistent (they execute to the correct answer y), the logical forms z 4 and z 5 are spurious and would give incorrect answers if the table were to change.", "cite_spans": [], "ref_spans": [ { "start": 233, "end": 241, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We address these two challenges by solving two interconnected tasks. 
The first task, which addresses the computational challenge, is to enumerate the set Z of all consistent logical forms given a question x, a knowledge source w (\"world\"), and the target denotation y (Section 4). Observing that the space of possible denotations grows much more slowly than the space of logical forms, we perform dynamic programming on denotations (DPD) to make search feasible. Our method is guaranteed to find all consistent logical forms up to some bounded size.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Given the set Z of consistent logical forms, the second task is to filter out spurious logical forms from Z (Section 5). Using the property that spurious logical forms ultimately give a wrong answer when the data in the world w changes, we create", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 1: Six logical forms generated from the question x. The first five are consistent: they execute to the correct answer y. Of those, correct logical forms z 1 , z 2 , and z 3 are different ways to represent the semantics of x, while spurious logical forms z 4 and z 5 get the right answer y for the wrong reasons. [Figure fragment: \"Among rows with Position = 1st, pick the one with minimum index, then return the Venue.\" (= Finland)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "fictitious worlds to test the denotations of the logical forms in Z. We use crowdsourcing to annotate the correct denotations on a subset of the generated worlds. To reduce the amount of annotation needed, we choose the subset that maximizes the expected information gain. 
The pruned set of logical forms would provide a stronger supervision signal for training a semantic parser.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We test our methods on the WIKITABLEQUESTIONS dataset of complex questions on Wikipedia tables. We define a simple, general set of deduction rules (Section 3), and use DPD to confirm that the rules generate a correct logical form in", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Figure 2: The table in Figure 1 (rows r 1 : Finland, 1st; r 2 : Germany, 11th; r 3 : Thailand, 1st) is converted into a graph. The recursive execution of logical form z 1 is shown via the different colors and styles.", "cite_spans": [], "ref_spans": [ { "start": 6, "end": 14, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "76% of the examples, up from the 53.5% in Pasupat and Liang (2015). Moreover, unlike beam search, DPD is guaranteed to find all consistent logical forms up to a bounded size. Finally, by using annotated data on fictitious worlds, we are able to prune out 92.1% of the spurious logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "The overarching motivation of this work is allowing people to ask questions involving computation on semi-structured knowledge sources such as tables from the Web. This section introduces how the knowledge source is represented, how the computation is carried out using logical forms, and our task of inferring correct logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "Worlds. 
We use the term world to refer to a collection of entities and relations between entities. One way to represent a world w is as a directed graph with nodes for entities and directed edges for relations. (For example, a world about geography would contain a node Europe with an edge Contains to another node Germany.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "In this paper, we use data tables from the Web as knowledge sources, such as the one in Figure 1 . We follow the construction in Pasupat and Liang (2015) for converting a table into a directed graph (see Figure 2 ). Rows and cells become nodes (e.g., r 0 = first row and Finland) while columns become labeled directed edges between them (e.g., Venue maps r 1 to Finland). The graph is augmented with additional edges Next (from each row to the next) and Index (from each row to its index number). In addition, we add normalization edges to cell nodes, including Number (from the cell to the first number in the cell), Num2 (the second number), Date (interpretation as a date), and Part (each list item if the cell represents a list). For example, a cell with content \"3-4\" has a Number edge to the integer 3, a Num2 edge to 4, and a Date edge to XX-03-04.", "cite_spans": [ { "start": 129, "end": 153, "text": "Pasupat and Liang (2015)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 1", "ref_id": null }, { "start": 204, "end": 212, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "Logical forms. We can perform computation on a world w using a logical form z, a small program that can be executed on the world, resulting in a denotation z w .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "We use lambda DCS (Liang, 2013) as the language of logical forms. As a demonstration, we will use z 1 in Figure 2 as an example. 
The smallest units of lambda DCS are entities (e.g., 1st) and relations (e.g., Position). Larger logical forms can be constructed using logical operations, and the denotation of the new logical form can be computed from denotations of its constituents. For example, applying the join operation on Position and 1st gives Position.1st, whose denotation is the set of entities with relation Position pointing to 1st. With the world in Figure 2 , the denotation is Position.1st w = {r 1 , r 3 }, which corresponds to the 2nd and 4th rows in the table. The partial logical form Position.1st is then used to construct argmax(Position.1st, Index), the denotation of which can be computed by mapping the entities in Position.1st w = {r 1 , r 3 } using the relation Index ({r 0 : 0, r 1 : 1, . . . }), and then picking the one with the largest mapped value (r 3 , which is mapped to 3). The resulting logical form is finally combined with R[Venue] with another join operation. The relation R[Venue] is the reverse of Venue, which corresponds to traversing Venue edges in the reverse direction.", "cite_spans": [ { "start": 18, "end": 31, "text": "(Liang, 2013)", "ref_id": "BIBREF13" } ], "ref_spans": [ { "start": 105, "end": 113, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 561, "end": 569, "text": "Figure 2", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "Semantic parsing. A semantic parser maps a natural language utterance x (e.g., \"Where did the last 1st place finish occur?\") into a logical form z. With denotations as supervision, a semantic parser is trained to put high probability on z's that are consistent-logical forms that execute to the correct denotation y (e.g., Thailand). 
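Concretely, checking consistency amounts to executing z on w and comparing the result with y. The following is a minimal sketch (not the authors' system) of executing z 1 = R[Venue].argmax(Position.1st, Index) on the graph of Figure 2; r 0 's Position (\"2nd\") and Venue (\"Hungary\") are not given in the text and are assumptions for illustration:

```python
# Toy world from Figure 2: relations map each source node to a set of
# target nodes.  r0's Position and Venue are assumed values.
VENUE = {"r0": {"Hungary"}, "r1": {"Finland"}, "r2": {"Germany"}, "r3": {"Thailand"}}
POSITION = {"r0": {"2nd"}, "r1": {"1st"}, "r2": {"11th"}, "r3": {"1st"}}
INDEX = {"r0": {0}, "r1": {1}, "r2": {2}, "r3": {3}}

def join(rel, values):
    """rel.values: sources whose rel edge points into `values`."""
    return {src for src, tgts in rel.items() if tgts & values}

def reverse_join(rel, sources):
    """R[rel].sources: targets reached from `sources` along rel edges."""
    return {tgt for src in sources for tgt in rel[src]}

def argmax(unary, rel):
    """The entity in `unary` with the largest value under `rel`."""
    return {max(unary, key=lambda e: max(rel[e]))}

rows_1st = join(POSITION, {"1st"})      # Position.1st -> {'r1', 'r3'}
last_row = argmax(rows_1st, INDEX)      # argmax(..., Index) -> {'r3'}
answer = reverse_join(VENUE, last_row)  # R[Venue].(...)
print(answer)                           # {'Thailand'}
```

A logical form is then consistent exactly when `answer` equals the annotated denotation y, here {\"Thailand\"}.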
When the space of logical forms is large, searching for consistent logical forms z can become a challenge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "As illustrated in Figure 1 , consistent logical forms can be divided into two groups: correct logical forms represent valid ways for computing the answer, while spurious logical forms accidentally get the right answer for the wrong reasons (e.g., z 4 picks the row with the maximum time but gets the correct answer anyway).", "cite_spans": [], "ref_spans": [ { "start": 18, "end": 26, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "Tasks. Denote by Z and Z c the sets of all consistent and correct logical forms, respectively. The first task is to efficiently compute Z given an utterance x, a world w, and the correct denotation y (Section 4). With the set Z, the second task is to infer Z c by pruning spurious logical forms from Z (Section 5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Setup", "sec_num": "2" }, { "text": "The space of logical forms given an utterance x and a world w is defined recursively by a set of deduction rules (Table 1) . In this setting, each constructed logical form belongs to a category (Set, Rel, or Map). These categories are used for type checking in a similar fashion to categories in syntactic parsing. Each deduction rule specifies the categories of the arguments, category of the resulting logical form, and how the logical form is constructed from the arguments.", "cite_spans": [], "ref_spans": [ { "start": 113, "end": 122, "text": "(Table 1)", "ref_id": null } ], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "Deduction rules are divided into base rules and compositional rules. 
A base rule follows one of the following templates:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "TokenSpan[span] \u2192 c [f (span)]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "(1)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "\u2205 \u2192 c [f ()]", "eq_num": "(2)" } ], "section": "Deduction rules", "sec_num": "3" }, { "text": "A rule of Template 1 is triggered by a span of tokens from x (e.g., to construct z 1 in Figure 2 from x in Figure 1 , Rule B1 from Table 1 constructs 1st of category Set from the phrase \"1st\"). Meanwhile, a rule of Template 2 generates a logical form without any trigger (e.g., Rule B5 generates Position of category Rel from the graph edge Position without a specific trigger in x).", "cite_spans": [], "ref_spans": [ { "start": 88, "end": 96, "text": "Figure 2", "ref_id": "FIGREF0" }, { "start": 107, "end": 115, "text": "Figure 1", "ref_id": null }, { "start": 131, "end": 138, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "Compositional rules then construct larger logical forms from smaller ones:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "c 1 [z 1 ] + c 2 [z 2 ] \u2192 c [g(z 1 , z 2 )]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "(3)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "c 1 [z 1 ] \u2192 c [g(z 1 )]", 
"eq_num": "(4)" } ], "section": "Deduction rules", "sec_num": "3" }, { "text": "A rule of Template 3 combines partial logical forms z 1 and z 2 of categories c 1 and c 2 into g(z 1 , z 2 ) of category c (e.g., Rule C1 uses 1st of category Set and Position of category Rel to construct Position.1st of category Set). Template 4 works similarly. Most rules construct logical forms without requiring a trigger from the utterance x. This is", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Deduction rules", "sec_num": "3" }, { "text": "TokenSpan \u2192 Set fuzzymatch(span) (entity fuzzily matching the text: \"chinese\" \u2192 China) B2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "TokenSpan \u2192 Set val(span) (interpreted value: \"march 2015\" \u2192 2015-03-XX) B3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "\u2205 \u2192 Set Type.Row (the set of all rows) B4 \u2205 \u2192 Set c \u2208 ClosedClass (any entity from a column with few unique entities) (e.g., 400m or relay from the Event column) B5 \u2205 \u2192 Rel r \u2208 GraphEdges (any relation in the graph:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Venue, Next, Num2, . . . 
) B6 \u2205 \u2192 Rel != | < | <= | > | >= Compositional Rules C1 Set + Rel \u2192 Set z2.z1 | R[z2].z1 (R[z]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "is the reverse of z; i.e., flip the arrow direction) C2", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Set \u2192 Set a(z1) (a \u2208 {count, max, min, sum, avg}) C3", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Set", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "+ Set \u2192 Set z1 \u2293 z2 | z1 \u2294 z2 | z1 \u2212 z2 (subtraction is only allowed on numbers) Compositional Rules with Maps Initialization M1 Set \u2192 Map (z1, x) (identity map) Operations on Map M2 Map + Rel \u2192 Map (u1, z2.b1) | (u1, R[z2].b1) M3 Map \u2192 Map (u1, a(b1)) (a \u2208 {count, max, min, sum, avg}) M4 Map + Set \u2192 Map (u1, b1 \u2293 z2) | . . . M5 Map + Map \u2192 Map (u1, b1 \u2293 b2) | . . .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "(Allowed only when u1 = u2) (Rules M4 and M5 are repeated for \u2294 and \u2212)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Finalization M6 Map \u2192 Set argmin(u1, R[\u03bbx.b1]) | argmax(u1, R[\u03bbx.b1])", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Table 1: Deduction rules define the space of logical forms by specifying how partial logical forms are constructed. The logical form of the i-th argument is denoted by z i (or (u i , b i ) if the argument is a Map). 
The set of final logical forms contains any logical form with category Set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "crucial for generating implicit relations (e.g., generating Year from \"what's the venue in 2000?\" without a trigger \"year\"), and generating operations without a lexicon (e.g., generating argmax from \"where's the longest competition\"). However, the downside is that the space of possible logical forms becomes very large.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "The Map category. The technique in this paper requires execution of partial logical forms. This poses a challenge for argmin and argmax operations, which take a set and a binary relation as arguments. The binary could be a complex function (e.g., in z 3 from Figure 1) . While it is possible to build the binary independently from the set, executing a complex binary is sometimes impossible (e.g., the denotation of \u03bbx.count(x) is impossible to write explicitly without knowledge of x).", "cite_spans": [], "ref_spans": [ { "start": 259, "end": 268, "text": "Figure 1)", "ref_id": null } ], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "We address this challenge with the Map category. A Map is a pair (u, b) of a finite set u (unary) and a binary relation b. The denotation of (u, b) is ( u w , b\u2032 w ), where the binary b\u2032 w is b w with the domain restricted to the set u w . For example, consider the construction of argmax(Position.1st, Index). After constructing Position.1st with denotation {r 1 , r 3 }, Rule M1 initializes (Position.1st, x) with denotation ({r 1 , r 3 }, {r 1 : {r 1 }, r 3 : {r 3 }}). Rule M2 is then applied to generate (Position.1st, R[Index].x) with denotation ({r 1 , r 3 }, {r 1 : {1}, r 3 : {3}}). 
Finally, Rule M6 converts the Map into the desired argmax logical form with denotation {r 3 }.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Generality of deduction rules. Using domain knowledge, previous work restricted the space of logical forms by manually defining the categories c or the semantic functions f and g to fit the domain. For example, the category Set might be divided into Records, Values, and Atomic when the knowledge source is a table (Pasupat and Liang, 2015) . Another example is when a compositional rule g (e.g., sum(z 1 )) must be triggered by some phrase in a lexicon (e.g., words like \"total\" that align to sum in the training data). Such restrictions make search more tractable but greatly limit the scope of questions that can be answered.", "cite_spans": [ { "start": 315, "end": 340, "text": "(Pasupat and Liang, 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Here, we have increased the coverage of logical forms by making the deduction rules simple and general, essentially following the syntax of lambda DCS. The base rules generate only entities that approximately match the utterance, but all possible relations, and all possible further combinations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Beam search. Given the deduction rules, an utterance x and a world w, we would like to generate all derived logical forms Z. We first present the floating parser (Pasupat and Liang, 2015) , which uses beam search to generate Z b \u2286 Z, a usually incomplete subset. 
Intuitively, the algorithm first constructs base logical forms based on spans of the utterance, and then builds larger logical forms of increasing size in a \"floating\" fashion-without requiring a trigger from the utterance.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Formally, partial logical forms with category c and size s are stored in a cell (c, s). The algorithm first generates base logical forms from base deduction rules and stores them in cells (c, 0) (e.g., the cell (Set, 0) contains 1st, Type.Row, and so on). Then for each size s = 1, . . . , s max , we populate the cells (c, s) by applying compositional rules to logical forms in cells of smaller sizes, keeping only a bounded number of logical forms in each cell (the beam). Due to the generality of our deduction rules, the number of logical forms grows quickly as the size s increases. As such, partial logical forms that are essential for building the desired logical forms might fall off the beam early on. In the next section, we present a new search method that compresses the search space using denotations.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "(Fragment of a parsing chart: cells such as (Set, 7, {Thailand}) and (Set, 7, {Finland}).)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Base Rules B1", "sec_num": null }, { "text": "Our first step toward finding all correct logical forms is to represent all consistent logical forms (those that execute to the correct denotation). Formally, given x, w, and y, we wish to generate the set Z of all logical forms z such that z w = y.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "As mentioned in the previous section, beam search does not recover the full set Z due to pruning. 
Our key observation is that while the number of logical forms explodes, the number of distinct denotations of those logical forms is much more controlled, as multiple logical forms can share the same denotation. So instead of directly enumerating logical forms, we use dynamic programming on denotations (DPD), which is inspired by similar methods from program induction (Lau et al., 2003; Liang et al., 2010; Gulwani, 2011) .", "cite_spans": [ { "start": 469, "end": 487, "text": "(Lau et al., 2003;", "ref_id": "BIBREF10" }, { "start": 488, "end": 507, "text": "Liang et al., 2010;", "ref_id": "BIBREF11" }, { "start": 508, "end": 522, "text": "Gulwani, 2011)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "The main idea of DPD is to collapse logical forms with the same denotation together. Instead of using cells (c, s) as in beam search, we perform dynamic programming using cells (c, s, d) where d is a denotation. 
For instance, the logical form Position.Number.1 will now be stored in cell (Set, 2, {r 1 , r 3 }).", "cite_spans": [], "ref_spans": [ { "start": 288, "end": 310, "text": "(Set, 2, {r 1 , r 3 })", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "For DPD to work, each deduction rule must have a denotationally invariant semantic function g, meaning that the denotation of the resulting logical form g(z 1 , z 2 ) only depends on the denotations of z 1 and z 2 : for any z 1 \u2032 and z 2 \u2032 ,", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "z 1 w = z 1 \u2032 w \u2227 z 2 w = z 2 \u2032 w \u21d2 g(z 1 , z 2 ) w = g(z 1 \u2032 , z 2 \u2032 ) w", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "All of our deduction rules in Table 1 are denotationally invariant, but a rule that, for instance, returns the argument with the larger logical form size would not be. Applying a denotationally invariant deduction rule on any pair of logical forms from (c 1 , s 1 , d 1 ) and (c 2 , s 2 , d 2 ) always results in a logical form with the same denotation d in the same cell (c, s 1 + s 2 + 1, d). 1 (For example, the cell (Set, 4, {r 3 }) contains z 1 := argmax(Position.1st, Index) and z 1 \u2032 := argmin(Event.Relay, Index). Combining each of these with Venue using Rule C1 gives R[Venue].z 1 and R[Venue].z 1 \u2032 , which belong to the same cell (Set, 5, {Thailand})).", "cite_spans": [], "ref_spans": [ { "start": 30, "end": 37, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "Algorithm. DPD proceeds in two forward passes. The first pass finds the possible combinations of cells (c, s, d) that lead to the correct denotation y, while the second pass enumerates the logical forms in the cells found in the first pass. 
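The cell-collapsing idea behind the first pass can be sketched as follows. This is an illustrative toy, not the deduction rules of Table 1: the grammar here has only three base sets and intersection/union, and each cell keeps one witness logical form while backpointers record every rule combination reaching it:

```python
# Toy first pass of DPD: cells keyed by (size, denotation) collapse
# all logical forms that denote the same set of rows.
ROWS = frozenset({"r0", "r1", "r2", "r3"})
BASE = {
    "Position.1st": frozenset({"r1", "r3"}),
    "Venue.Finland": frozenset({"r1"}),
    "Type.Row": ROWS,
}

cells = {(0, d): z for z, d in BASE.items()}  # (size, denotation) -> witness
backptrs = {}                                 # (size, denotation) -> combos

MAX_SIZE = 3
for s in range(1, MAX_SIZE + 1):
    snapshot = list(cells.items())            # cells of size < s
    for (s1, d1), z1 in snapshot:
        for (s2, d2), z2 in snapshot:
            if s1 + s2 + 1 != s:
                continue
            for op, d in (("and", d1 & d2), ("or", d1 | d2)):
                key = (s, d)
                backptrs.setdefault(key, []).append((op, (s1, d1), (s2, d2)))
                cells.setdefault(key, f"({z1} {op} {z2})")  # keep one witness

n_combos = sum(len(v) for v in backptrs.values())
print(f"{len(cells)} cells summarize {n_combos} rule combinations")
# -> 12 cells summarize 108 rule combinations
```

Even on this toy grammar, a handful of cells summarizes two orders of magnitude more rule applications, which is the effect DPD exploits when the space of logical forms explodes.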
Figure 3 illustrates the DPD algorithm.", "cite_spans": [], "ref_spans": [ { "start": 241, "end": 249, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "In the first pass, we are only concerned about finding relevant cell combinations and not the actual logical forms. Therefore, any logical form that belongs to a cell could be used as an argument of a deduction rule to generate further logical forms. Thus, we keep at most one logical form per cell; subsequent logical forms that are generated for that cell are discarded.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "After populating all cells up to size s max , we list all cells (Set, s, y) with the correct denotation y, and then note all possible rule combinations (cell 1 , rule) or (cell 1 , cell 2 , rule) that lead to those final cells, including the combinations that yielded discarded logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "The second pass retrieves the actual logical forms that yield the correct denotation. To do this, we simply populate the cells (c, s, d) with all logical forms, using only rule combinations that lead to final cells. This elimination of irrelevant rule combinations effectively reduces the search space. (In Section 6.2, we empirically show that the number of cells considered is reduced by 98.7%.)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "The parsing chart is represented as a hypergraph as in Figure 3 . After eliminating unused rule combinations, each of the remaining hyperpaths from base predicates to the target denotation corresponds to a single logical form. 
This makes the remaining parsing chart a compact implicit representation of all consistent logical forms. This representation is guaranteed to cover all logical forms that can be constructed by the deduction rules under the size limit s max .", "cite_spans": [], "ref_spans": [ { "start": 55, "end": 63, "text": "Figure 3", "ref_id": "FIGREF1" } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "In our experiments, we apply DPD on the deduction rules in Table 1 and explicitly enumerate the logical forms produced by the second pass. For efficiency, we prune logical forms that are clearly redundant (e.g., applying max on a set of size 1). We also restrict a few rules that might otherwise create too many denotations. For example, we restricted the union operation (\u2294) to unions of two entities (e.g., we allow Germany \u2294 Finland but not Venue.Hungary \u2294 . . . ), subtraction when building a Map, and count on a set of size 1. 2", "cite_spans": [], "ref_spans": [ { "start": 59, "end": 66, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "4" }, { "text": "After finding the set Z of all consistent logical forms, we want to filter out spurious logical forms. To do so, we observe that semantically correct logical forms should also give the correct denotation in worlds w\u2032 other than w. In contrast, spurious logical forms will fail to produce the correct denotation on some other world.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "Generating fictitious worlds. With the observation above, we generate fictitious worlds w 1 , w 2 , . . . , where each world w i is a slight alteration of w. 
As we will be executing logical forms z \u2208 Z on w i , we should ensure that all entities and relations in z \u2208 Z appear in the fictitious world w i (e.g., z 1 in Figure 1 would be meaningless if the entity 1st did not appear in w i ). To this end, we impose that all predicates present in the original world w are also present in w i . Figure 4 : From the example in Figure 1 , we generate a table for the fictitious world w 1 .", "cite_spans": [], "ref_spans": [ { "start": 318, "end": 326, "text": "Figure 1", "ref_id": null }, { "start": 408, "end": 416, "text": "Figure 4", "ref_id": null }, { "start": 439, "end": 447, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "(Contents of Figure 5: denotation tuples on worlds w, w 1 , w 2 , . . . : z 1 , z 2 , and z 3 all give (Thailand, China, Finland, . . . ) and form equivalence class q 1 ; z 4 gives (Thailand, Germany, China, . . . ), forming q 2 ; z 5 and z 6 give (Thailand, China, China, . . . ), forming q 3 .)", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "In our case where the world w comes from a data table t, we construct w i from a new table t i as follows: we go through each column of t and resample the cells in that column. The cells are sampled using random draws without replacement if the original cells are all distinct, and with replacement otherwise. Sorted columns are kept sorted. To ensure that predicates in w exist in w i , we use the same set of table columns and enforce that any entity fuzzily matching a span in the question x must be present in t i (e.g., for the example in Figure 1 , the generated t i must contain \"1st\"). 
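A minimal sketch of this column-resampling procedure follows. The sorted-column constraint is omitted for brevity, and the fix-up that forces required entities (those fuzzily matching the question) back into the column is an assumed implementation detail, not taken from the paper:

```python
import random

def resample_column(column, required, rng):
    """Resample one table column as described in the text."""
    if len(set(column)) == len(column):
        new = rng.sample(column, len(column))       # all distinct: permute
    else:
        new = [rng.choice(column) for _ in column]  # duplicates: with replacement
    for entity in required:                         # e.g., "1st" must survive
        if entity in column and entity not in new:
            new[rng.randrange(len(new))] = entity
    return new

rng = random.Random(0)
position_column = ["2nd", "1st", "11th", "1st"]  # "2nd" for r0 is assumed
print(resample_column(position_column, {"1st"}, rng))
```

Repeating this per column (and regenerating the graph edges from the new table) yields one fictitious world w i.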
Figure 4 shows an example fictitious table generated from the table in Figure 1 .", "cite_spans": [], "ref_spans": [ { "start": 544, "end": 552, "text": "Figure 1", "ref_id": null }, { "start": 594, "end": 602, "text": "Figure 4", "ref_id": null }, { "start": 665, "end": 673, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "Fictitious worlds are similar to test suites for computer programs. However, unlike manually designed test suites, we do not yet know the correct answer for each fictitious world or whether a world is helpful for filtering out spurious logical forms. The next subsections introduce our method for choosing a subset of useful fictitious worlds to be annotated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "Equivalence classes. Let W = (w 1 , . . . , w k ) be the list of all possible fictitious worlds. For each z \u2208 Z, we define the denotation tuple ⟦z⟧ W = ( ⟦z⟧ w 1 , . . . , ⟦z⟧ w k ). We observe that some logical forms produce the same denotation across all fictitious worlds. This may be due to an algebraic equivalence in logical forms (e.g., z 1 and z 2 in Figure 1) or due to the constraints in the construction of fictitious worlds (e.g., z 1 and z 3 in Figure 1 are equivalent as long as the Year column is sorted). We group logical forms into equivalence classes based on their denotation tuples, as illustrated in Figure 5 . When the question is unambiguous, we expect at most one equivalence class to contain correct logical forms.", "cite_spans": [], "ref_spans": [ { "start": 353, "end": 362, "text": "Figure 1)", "ref_id": null }, { "start": 452, "end": 460, "text": "Figure 1", "ref_id": null }, { "start": 615, "end": 623, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "Annotation.
To pin down the correct equivalence class, we acquire the correct answers to the question x on some subset W′ = (w 1 , . . . , w ℓ ) \u2286 W of fictitious worlds, as it is impractical to obtain annotations on all fictitious worlds in W . We compile equivalence classes that agree with the annotations into a set Z c of correct logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "We want to choose the subset W′ that gives us as much information about the correct equivalence class as possible. This is analogous to standard practices in active learning (Settles, 2010) . 3 Let Q be the set of all equivalence classes q, and let ⟦q⟧ W′ be the denotation tuple computed by executing an arbitrary z \u2208 q on W′ . The subset W′ divides Q into partitions F t = {q \u2208 Q : ⟦q⟧ W′ = t} based on the denotation tuples t (e.g., from Figure 5 , if W′ contains just w 2 , then q 2 and q 3 will be in the same partition F (China) ). The annotation t * , which is also a denotation tuple, will mark one of these partitions F t * as correct. Thus, to prune out many spurious equivalence classes, the partitions should be as numerous and as small as possible.", "cite_spans": [ { "start": 163, "end": 178, "text": "(Settles, 2010)", "ref_id": "BIBREF18" } ], "ref_spans": [ { "start": 422, "end": 430, "text": "Figure 5", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "More formally, we choose a subset W′ that maximizes the expected information gain (or equivalently, the reduction in entropy) about the correct equivalence class given the annotation. With random variables Q \u2208 Q representing the correct equivalence class and T * W′ for the annotation on worlds W′ , we seek to find arg min W′ H(Q | T * W′ ).
Assuming a uniform prior on Q (p(q) = 1/|Q|) and accurate annotation (p(t * | q) = I[q \u2208 F t * ]):", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "EQUATION", "cite_spans": [], "ref_spans": [], "eq_spans": [ { "start": 0, "end": 8, "text": "EQUATION", "ref_id": "EQREF", "raw_str": "H(Q \\mid T^*_{W'}) = \\sum_{q,t} p(q,t) \\log \\frac{p(t)}{p(q,t)} = \\frac{1}{|Q|} \\sum_{t} |F_t| \\log |F_t|.", "eq_num": "(*)" } ], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "We exhaustively search for W′ that minimizes (*). The objective value follows our intuition since \u2211 t |F t | log |F t | is small when the terms |F t | are small and numerous.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "In our experiments, we approximate the full set W of fictitious worlds by generating k = 30 worlds to compute equivalence classes. We choose a subset of ℓ = 5 worlds to be annotated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "5" }, { "text": "For the experiments, we use the training portion of the WIKITABLEQUESTIONS dataset (Pasupat and Liang, 2015) , which consists of 14,152 questions on 1,679 Wikipedia tables gathered by crowd workers. Answering these complex questions requires different types of operations. The same operation can be phrased in different ways (e.g., \"best\", \"top ranking\", or \"lowest ranking number\") and the interpretation of some phrases depends on the context (e.g., \"number of \" could be a table lookup or a count operation).
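As an aside, the world-selection objective (*) from Section 5 admits a direct brute-force implementation, since the subsets are small (ℓ = 5 out of k = 30 worlds gives 142,506 candidates). The sketch below is illustrative, not the authors' code; choose_worlds and class_tuples are our names, and each equivalence class is represented by its length-k denotation tuple:

```python
import math
from collections import Counter
from itertools import combinations

def choose_worlds(class_tuples, k, ell):
    """Pick the size-ell subset of world indices {0..k-1} minimizing
    sum_t |F_t| log |F_t|, i.e. maximizing the expected information gain
    about the correct equivalence class under a uniform prior."""
    def objective(idx):
        # Partition the classes by their denotation tuples projected
        # onto the chosen worlds; |F_t| is each partition's size.
        parts = Counter(tuple(t[i] for i in idx) for t in class_tuples)
        return sum(n * math.log(n) for n in parts.values())
    return min(combinations(range(k), ell), key=objective)
```
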
The lexical content of the questions is also quite diverse: even excluding numbers and symbols, the 14,152 training examples contain 9,671 unique words, only 10% of which appear more than 10 times.", "cite_spans": [ { "start": 83, "end": 108, "text": "(Pasupat and Liang, 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We attempted to manually annotate the first 300 examples with lambda DCS logical forms. We successfully constructed correct logical forms for 84% of these examples, which is a good number considering the questions were created by humans who could use the table however they wanted. The remaining 16% reflect limitations in our setup: for example, non-canonical table layouts, answers appearing in running text or images, and common sense reasoning (e.g., knowing that \"Quarterfinal\" is better than \"Round of 16\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experiments", "sec_num": "6" }, { "text": "We compare our set of deduction rules with the one given in Pasupat and Liang (2015) (henceforth PL15). PL15 reported generating the annotated logical form in 53.5% of the first 200 examples. With our more general deduction rules, we use DPD to verify that the rules are able to generate the annotated logical form in 76% of the first 300 examples, within the logical form size limit s max of 7. This is 90.5% of the examples that were successfully annotated. Figure 6 shows some examples of logical forms we cover that PL15 could not.
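The two-pass DPD procedure used throughout these experiments can be sketched as follows. This is a simplified, illustrative version (binary rules only, logical forms as strings); dpd, seeds, and combine are our names, not the paper's. Denotational invariance is what lets the first pass manipulate (category, size, denotation) cells without materializing logical forms; the second pass then enumerates forms only along paths that reach the target denotation:

```python
from collections import defaultdict

def dpd(seeds, rules, s_max, target):
    """seeds: (category, denotation, form) items of size 0.
    rules: (name, combine) pairs where combine(c1, d1, c2, d2) returns
        (category, denotation) or None; the result depends only on the
        children's categories and denotations (denotational invariance).
    Returns every form of size <= s_max in category 'Set' denoting target."""
    # Pass 1: build cells (cat, size, den) with backpointers, no forms stored.
    cells = defaultdict(set)  # (cat, size, den) -> set of backpointers
    for cat, den, _ in seeds:
        cells[(cat, 0, den)]  # touch to create the seed cell
    for s in range(1, s_max + 1):
        snapshot = list(cells)
        for (c1, s1, d1) in snapshot:
            for (c2, s2, d2) in snapshot:
                if s1 + s2 + 1 != s:
                    continue
                for name, combine in rules:
                    out = combine(c1, d1, c2, d2)
                    if out is not None:
                        cat, den = out
                        cells[(cat, s, den)].add(
                            (name, (c1, s1, d1), (c2, s2, d2)))
    # Pass 2: enumerate logical forms only along paths to the target.
    def forms(cell):
        cat, size, den = cell
        if size == 0:
            return [f for c, d, f in seeds if c == cat and d == den]
        result = []
        for name, left, right in cells[cell]:
            for fl in forms(left):
                for fr in forms(right):
                    result.append(f"{name}({fl},{fr})")
        return result
    out = []
    for (cat, s, den) in list(cells):
        if cat == "Set" and den == target:
            out.extend(forms((cat, s, den)))
    return out
```
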
Since DPD is guaranteed to find all consistent logical forms, we can be sure that the logical forms not covered are due to limitations of the deduction rules. (Figure 6 : Several example logical forms our system can generate that are not covered by the deduction rules from the previous work PL15, e.g., for \"which opponent has the most wins\".)", "cite_spans": [], "ref_spans": [ { "start": 460, "end": 468, "text": "Figure 6", "ref_id": null }, { "start": 678, "end": 686, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Generality of deduction rules", "sec_num": "6.1" }, { "text": "Indeed, the remaining examples either have logical forms with size larger than 7 or require other operations such as addition, union of arbitrary sets, etc.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Generality of deduction rules", "sec_num": "6.1" }, { "text": "Search space. To demonstrate the savings gained by collapsing logical forms with the same denotation, we track the growth of the number of unique logical forms and denotations as the logical form size increases. The plot in Figure 7 shows that the space of logical forms explodes much more quickly than the space of denotations. The use of denotations also saves us from considering a significant amount of irrelevant partial logical forms. On average over 14,152 training examples, DPD generates approximately 25,000 consistent logical forms. The first pass of DPD generates \u2248 153,000 cells (c, s, d), while the second pass generates only \u2248 2,000 cells from \u2248 8,000 rule combinations, a 98.7% reduction in the number of cells that have to be considered.", "cite_spans": [], "ref_spans": [ { "start": 224, "end": 232, "text": "Figure 7", "ref_id": "FIGREF4" } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "6.2" }, { "text": "Comparison with beam search.
We compare DPD to beam search on the ability to generate (but not rank) the annotated logical forms. We consider two settings: when the beam search parameters are uninitialized (i.e., the beams are pruned randomly), and when the parameters are trained using the system from PL15 (i.e., the beams are pruned based on model scores). The plot in Figure 8 shows that DPD generates more annotated logical forms (76%) compared to beam search (53.7%), even when beam search is guided heuristically by learned parameters. Note that DPD is an exact algorithm and does not require a heuristic. (Figure 8 : The number of annotated logical forms that can be generated by beam search, both uninitialized (dashed) and initialized (solid), increases with the number of candidates generated (controlled by beam size), but lags behind DPD (star).)", "cite_spans": [], "ref_spans": [ { "start": 372, "end": 380, "text": "Figure 8", "ref_id": null }, { "start": 613, "end": 621, "text": "Figure 8", "ref_id": null } ], "eq_spans": [], "section": "Dynamic programming on denotations", "sec_num": "6.2" }, { "text": "We now explore how fictitious worlds divide the set of logical forms into equivalence classes, and how the annotated denotations on the chosen worlds help us prune spurious logical forms.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "6.3" }, { "text": "Equivalence classes. Using 30 fictitious worlds per example, we produce an average of 1,237 equivalence classes. One possible concern with using a limited number of fictitious worlds is that we may fail to distinguish some pairs of nonequivalent logical forms. We verify the equivalence classes against the ones computed using 300 fictitious worlds. We found that only 5% of the logical forms are split from the original equivalence classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "6.3" }, { "text": "Ideal Annotation.
After computing equivalence classes, we choose a subset W′ of ℓ = 5 fictitious worlds to be annotated based on the information-theoretic objective. For each of the 252 examples with an annotated logical form z * , we use the denotation tuple t * = ⟦z * ⟧ W′ as the annotated answers on the chosen fictitious worlds. We are able to rule out 98.7% of the spurious equivalence classes and 98.3% of spurious logical forms. Furthermore, we are able to filter down to just one equivalence class in 32.7% of the examples, and at most three equivalence classes in 51.3% of the examples. If we choose 5 fictitious worlds randomly instead of maximizing information gain, then the above statistics are 22.6% and 36.5%, respectively. When more than one equivalence class remains, usually a single dominant class contains many equivalent logical forms, while the other classes are small and contain logical forms with unusual patterns (e.g., z 5 in Figure 1 ). The average size of the correct equivalence class is \u2248 3,000 with a standard deviation of \u2248 8,000. Because we have an expressive logical language, there are fundamentally many equivalent ways of computing the same quantity.", "cite_spans": [], "ref_spans": [ { "start": 948, "end": 956, "text": "Figure 1", "ref_id": null } ], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "6.3" }, { "text": "Crowdsourced Annotation. Data from crowdsourcing is more susceptible to errors. From the 252 annotated examples, we use 177 examples where at least two crowd workers agree on the answer of the original world w. When the crowdsourced data is used to rule out spurious logical forms, the entire set Z of consistent logical forms is pruned out in 11.3% of the examples, and the correct equivalence class is removed in 9% of the examples. These issues are due to annotation errors, inconsistent data (e.g., having date of death before birth date), and different interpretations of the question on the fictitious worlds.
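The class-based pruning used in both the ideal and crowdsourced settings (group logical forms by denotation tuple, then keep only classes that agree with the annotated tuple t *) can be sketched as follows. The names are ours, not the authors'; execute(z, w) is assumed given, and tolerance = 1 corresponds to the relaxation of allowing disagreement on at most one fictitious world:

```python
from collections import defaultdict

def equivalence_classes(logical_forms, worlds, execute):
    """Group logical forms by their denotation tuple across worlds.
    execute(z, w) returns the denotation of logical form z on world w."""
    classes = defaultdict(list)
    for z in logical_forms:
        classes[tuple(execute(z, w) for w in worlds)].append(z)
    return dict(classes)

def filter_consistent(classes, annotation, tolerance=0):
    """Keep classes whose denotation tuple disagrees with the annotated
    tuple on at most `tolerance` worlds; tolerance=0 is exact filtering."""
    return {tup: forms for tup, forms in classes.items()
            if sum(a != b for a, b in zip(tup, annotation)) <= tolerance}
```
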
For the remaining examples, we are able to prune out 92.1% of spurious logical forms (or 92.6% of spurious equivalence classes).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "6.3" }, { "text": "To prevent the entire Z from being pruned, we can relax our assumption and keep logical forms z that disagree with the annotation in at most 1 fictitious world. The number of times Z is pruned out is reduced to 3%, but the number of spurious logical forms pruned also decreases to 78%.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Fictitious worlds", "sec_num": "6.3" }, { "text": "This work evolved from a long tradition of learning executable semantic parsers, initially from annotated logical forms (Zelle and Mooney, 1996; Kate et al., 2005; Zettlemoyer and Collins, 2005; Zettlemoyer and Collins, 2007; Kwiatkowski et al., 2010) , but more recently from denotations (Clarke et al., 2010; Liang et al., 2011; Berant et al., 2013; Kwiatkowski et al., 2013; Pasupat and Liang, 2015) . 
A central challenge in learning from denotations is finding consistent logical forms (those that execute to a given denotation).", "cite_spans": [ { "start": 120, "end": 144, "text": "(Zelle and Mooney, 1996;", "ref_id": "BIBREF21" }, { "start": 145, "end": 163, "text": "Kate et al., 2005;", "ref_id": "BIBREF7" }, { "start": 164, "end": 194, "text": "Zettlemoyer and Collins, 2005;", "ref_id": "BIBREF22" }, { "start": 195, "end": 225, "text": "Zettlemoyer and Collins, 2007;", "ref_id": "BIBREF23" }, { "start": 226, "end": 251, "text": "Kwiatkowski et al., 2010)", "ref_id": "BIBREF8" }, { "start": 289, "end": 310, "text": "(Clarke et al., 2010;", "ref_id": "BIBREF3" }, { "start": 311, "end": 330, "text": "Liang et al., 2011;", "ref_id": "BIBREF12" }, { "start": 331, "end": 351, "text": "Berant et al., 2013;", "ref_id": "BIBREF2" }, { "start": 352, "end": 377, "text": "Kwiatkowski et al., 2013;", "ref_id": "BIBREF9" }, { "start": 378, "end": 402, "text": "Pasupat and Liang, 2015)", "ref_id": "BIBREF16" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussion", "sec_num": "7" }, { "text": "As Kwiatkowski et al. (2013) and Berant and Liang (2014) both noted, a chief difficulty with executable semantic parsing is the \"schema mismatch\"-words in the utterance do not map cleanly onto the predicates in the logical form. This mismatch is especially pronounced in the WIKITABLEQUESTIONS of Pasupat and Liang (2015) . In the second example of Figure 6 , \"how long\" is realized by a logical form that computes a difference between two dates. The ramification of this mismatch is that finding consistent logical forms cannot solely proceed from the language side. This paper is about using annotated denotations to drive the search over logical forms.", "cite_spans": [ { "start": 3, "end": 28, "text": "Kwiatkowski et al. 
(2013)", "ref_id": "BIBREF9" }, { "start": 33, "end": 56, "text": "Berant and Liang (2014)", "ref_id": "BIBREF1" }, { "start": 297, "end": 321, "text": "Pasupat and Liang (2015)", "ref_id": "BIBREF16" } ], "ref_spans": [ { "start": 349, "end": 357, "text": "Figure 6", "ref_id": null } ], "eq_spans": [], "section": "Related Work and Discussion", "sec_num": "7" }, { "text": "This takes us into the realm of program induction, where the goal is to infer a program (logical form) from input-output pairs (for us, world-denotation pairs). Here, previous work has also leveraged the idea of dynamic programming on denotations (Lau et al., 2003; Liang et al., 2010; Gulwani, 2011) , though for more constrained spaces of programs. Continuing the program analogy, generating fictitious worlds is similar in spirit to fuzz testing for generating new test cases (Miller et al., 1990) , but the goal there is coverage in a single program rather than identifying the correct (equivalence class of) programs. This connection can potentially improve the flow of ideas between the two fields.", "cite_spans": [ { "start": 247, "end": 265, "text": "(Lau et al., 2003;", "ref_id": "BIBREF10" }, { "start": 266, "end": 285, "text": "Liang et al., 2010;", "ref_id": "BIBREF11" }, { "start": 286, "end": 300, "text": "Gulwani, 2011)", "ref_id": "BIBREF5" }, { "start": 479, "end": 500, "text": "(Miller et al., 1990)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussion", "sec_num": "7" }, { "text": "Finally, the effectiveness of dynamic programming on denotations relies on having a manageable set of denotations. 
For more complex logical forms and larger knowledge graphs, there are many possible angles worth exploring: performing abstract interpretation to collapse denotations into equivalence classes (Cousot and Cousot, 1977) , relaxing the notion of getting the correct denotation (Steinhardt and Liang, 2015) , or working in a continuous space and relying on gradient descent (Guu et al., 2015; Neelakantan et al., 2016; Yin et al., 2016; Reed and de Freitas, 2016) . This paper, by virtue of exact dynamic programming, sets the standard.", "cite_spans": [ { "start": 307, "end": 332, "text": "(Cousot and Cousot, 1977)", "ref_id": "BIBREF4" }, { "start": 389, "end": 417, "text": "(Steinhardt and Liang, 2015)", "ref_id": "BIBREF19" }, { "start": 485, "end": 503, "text": "(Guu et al., 2015;", "ref_id": "BIBREF6" }, { "start": 504, "end": 529, "text": "Neelakantan et al., 2016;", "ref_id": "BIBREF15" }, { "start": 530, "end": 547, "text": "Yin et al., 2016;", "ref_id": "BIBREF20" }, { "start": 548, "end": 574, "text": "Reed and de Freitas, 2016)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work and Discussion", "sec_num": "7" }, { "text": "Semantic functions f with one argument work similarly.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "While we technically can apply count on sets of size 1, the number of spurious logical forms explodes as there are too many sets of size 1 generated.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "The difference is that we are obtaining partial information about an individual example rather than partial information about the parameters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "Acknowledgments. We gratefully acknowledge the support of the Google Natural Language Understanding Focused Program. 
In addition, we would like to thank the anonymous reviewers for their helpful comments. Reproducibility. Code and experiments for this paper are available on the CodaLab platform at https://worksheets.codalab.org/worksheets/0x47cc64d9c8ba4a878807c7c35bb22a42/.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "acknowledgement", "sec_num": null } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "UW SPF: The University of Washington semantic parsing framework", "authors": [ { "first": "Y", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": { "arXiv": [ "arXiv:1311.3011" ] }, "num": null, "urls": [], "raw_text": "Y. Artzi and L. Zettlemoyer. 2013. UW SPF: The University of Washington semantic parsing framework. arXiv preprint arXiv:1311.3011.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Semantic parsing via paraphrasing", "authors": [ { "first": "J", "middle": [], "last": "Berant", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2014, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Berant and P. Liang. 2014. Semantic parsing via paraphrasing.
In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Semantic parsing on Freebase from question-answer pairs", "authors": [ { "first": "J", "middle": [], "last": "Berant", "suffix": "" }, { "first": "A", "middle": [], "last": "Chou", "suffix": "" }, { "first": "R", "middle": [], "last": "Frostig", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Berant, A. Chou, R. Frostig, and P. Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "Driving semantic parsing from the world's response", "authors": [ { "first": "J", "middle": [], "last": "Clarke", "suffix": "" }, { "first": "D", "middle": [], "last": "Goldwasser", "suffix": "" }, { "first": "M", "middle": [], "last": "Chang", "suffix": "" }, { "first": "D", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2010, "venue": "Computational Natural Language Learning (CoNLL)", "volume": "", "issue": "", "pages": "18--27", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Clarke, D. Goldwasser, M. Chang, and D. Roth. 2010. Driving semantic parsing from the world's response.
In Computational Natural Language Learning (CoNLL), pages 18-27.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints", "authors": [ { "first": "P", "middle": [], "last": "Cousot", "suffix": "" }, { "first": "R", "middle": [], "last": "Cousot", "suffix": "" } ], "year": 1977, "venue": "Principles of Programming Languages (POPL)", "volume": "", "issue": "", "pages": "238--252", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Cousot and R. Cousot. 1977. Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints. In Principles of Programming Languages (POPL), pages 238-252.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Automating string processing in spreadsheets using input-output examples", "authors": [ { "first": "S", "middle": [], "last": "Gulwani", "suffix": "" } ], "year": 2011, "venue": "ACM SIGPLAN Notices", "volume": "46", "issue": "1", "pages": "317--330", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Gulwani. 2011. Automating string processing in spreadsheets using input-output examples. ACM SIGPLAN Notices, 46(1):317-330.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Traversing knowledge graphs in vector space", "authors": [ { "first": "K", "middle": [], "last": "Guu", "suffix": "" }, { "first": "J", "middle": [], "last": "Miller", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "K. Guu, J. Miller, and P. Liang. 2015. Traversing knowledge graphs in vector space.
In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Learning to transform natural to formal languages", "authors": [ { "first": "R", "middle": [ "J" ], "last": "Kate", "suffix": "" }, { "first": "Y", "middle": [ "W" ], "last": "Wong", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 2005, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "1062--1068", "other_ids": {}, "num": null, "urls": [], "raw_text": "R. J. Kate, Y. W. Wong, and R. J. Mooney. 2005. Learning to transform natural to formal languages. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1062-1068.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Inducing probabilistic CCG grammars from logical form with higher-order unification", "authors": [ { "first": "T", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" }, { "first": "S", "middle": [], "last": "Goldwater", "suffix": "" }, { "first": "M", "middle": [], "last": "Steedman", "suffix": "" } ], "year": 2010, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "1223--1233", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Kwiatkowski, L. Zettlemoyer, S. Goldwater, and M. Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification.
In Empirical Methods in Natural Language Processing (EMNLP), pages 1223-1233.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Scaling semantic parsers with on-the-fly ontology matching", "authors": [ { "first": "T", "middle": [], "last": "Kwiatkowski", "suffix": "" }, { "first": "E", "middle": [], "last": "Choi", "suffix": "" }, { "first": "Y", "middle": [], "last": "Artzi", "suffix": "" }, { "first": "L", "middle": [], "last": "Zettlemoyer", "suffix": "" } ], "year": 2013, "venue": "Empirical Methods in Natural Language Processing (EMNLP)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Kwiatkowski, E. Choi, Y. Artzi, and L. Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Empirical Methods in Natural Language Processing (EMNLP).", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Programming by demonstration using version space algebra", "authors": [ { "first": "T", "middle": [], "last": "Lau", "suffix": "" }, { "first": "S", "middle": [], "last": "Wolfman", "suffix": "" }, { "first": "P", "middle": [], "last": "Domingos", "suffix": "" }, { "first": "D", "middle": [ "S" ], "last": "Weld", "suffix": "" } ], "year": 2003, "venue": "Machine Learning", "volume": "53", "issue": "", "pages": "111--156", "other_ids": {}, "num": null, "urls": [], "raw_text": "T. Lau, S. Wolfman, P. Domingos, and D. S. Weld. 2003. Programming by demonstration using version space algebra.
Machine Learning, 53:111-156.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Learning programs: A hierarchical Bayesian approach", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "International Conference on Machine Learning (ICML)", "volume": "", "issue": "", "pages": "639--646", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2010. Learning programs: A hierarchical Bayesian approach. In International Conference on Machine Learning (ICML), pages 639-646.", "links": null }, "BIBREF12": { "ref_id": "b12", "title": "Learning dependency-based compositional semantics", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" }, { "first": "M", "middle": [ "I" ], "last": "Jordan", "suffix": "" }, { "first": "D", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2011, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "590--599", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang, M. I. Jordan, and D. Klein. 2011. Learning dependency-based compositional semantics. In Association for Computational Linguistics (ACL), pages 590-599.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Lambda dependency-based compositional semantics", "authors": [ { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2013, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Liang. 2013. Lambda dependency-based compositional semantics.
arXiv.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "An empirical study of the reliability of UNIX utilities", "authors": [ { "first": "B", "middle": [ "P" ], "last": "Miller", "suffix": "" }, { "first": "L", "middle": [], "last": "Fredriksen", "suffix": "" }, { "first": "B", "middle": [], "last": "So", "suffix": "" } ], "year": 1990, "venue": "Communications of the ACM", "volume": "33", "issue": "12", "pages": "32--44", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. P. Miller, L. Fredriksen, and B. So. 1990. An empirical study of the reliability of UNIX utilities. Communications of the ACM, 33(12):32-44.", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Neural programmer: Inducing latent programs with gradient descent", "authors": [ { "first": "A", "middle": [], "last": "Neelakantan", "suffix": "" }, { "first": "Q", "middle": [ "V" ], "last": "Le", "suffix": "" }, { "first": "I", "middle": [], "last": "Sutskever", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "A. Neelakantan, Q. V. Le, and I. Sutskever. 2016. Neural programmer: Inducing latent programs with gradient descent. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Compositional semantic parsing on semi-structured tables", "authors": [ { "first": "P", "middle": [], "last": "Pasupat", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Association for Computational Linguistics (ACL)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Pasupat and P. Liang. 2015. Compositional semantic parsing on semi-structured tables.
In Association for Computational Linguistics (ACL).", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Neural programmer-interpreters", "authors": [ { "first": "S", "middle": [], "last": "Reed", "suffix": "" }, { "first": "N", "middle": [], "last": "De Freitas", "suffix": "" } ], "year": 2016, "venue": "International Conference on Learning Representations (ICLR)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "S. Reed and N. de Freitas. 2016. Neural programmer-interpreters. In International Conference on Learning Representations (ICLR).", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "Active learning literature survey", "authors": [ { "first": "B", "middle": [], "last": "Settles", "suffix": "" } ], "year": 2010, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "B. Settles. 2010. Active learning literature survey. Technical report, University of Wisconsin, Madison.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Learning with relaxed supervision", "authors": [ { "first": "J", "middle": [], "last": "Steinhardt", "suffix": "" }, { "first": "P", "middle": [], "last": "Liang", "suffix": "" } ], "year": 2015, "venue": "Advances in Neural Information Processing Systems (NIPS)", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "J. Steinhardt and P. Liang. 2015. Learning with relaxed supervision. In Advances in Neural Information Processing Systems (NIPS).", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Neural enquirer: Learning to query tables with natural language.
arXiv", "authors": [ { "first": "P", "middle": [], "last": "Yin", "suffix": "" }, { "first": "Z", "middle": [], "last": "Lu", "suffix": "" }, { "first": "H", "middle": [], "last": "Li", "suffix": "" }, { "first": "B", "middle": [], "last": "Kao", "suffix": "" } ], "year": 2016, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "P. Yin, Z. Lu, H. Li, and B. Kao. 2016. Neural en- quirer: Learning to query tables with natural lan- guage. arXiv.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "Learning to parse database queries using inductive logic programming", "authors": [ { "first": "M", "middle": [], "last": "Zelle", "suffix": "" }, { "first": "R", "middle": [ "J" ], "last": "Mooney", "suffix": "" } ], "year": 1996, "venue": "Association for the Advancement of Artificial Intelligence (AAAI)", "volume": "", "issue": "", "pages": "1050--1055", "other_ids": {}, "num": null, "urls": [], "raw_text": "M. Zelle and R. J. Mooney. 1996. Learning to parse database queries using inductive logic pro- gramming. In Association for the Advancement of Artificial Intelligence (AAAI), pages 1050-1055.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars", "authors": [ { "first": "L", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2005, "venue": "Uncertainty in Artificial Intelligence (UAI)", "volume": "", "issue": "", "pages": "658--666", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. S. Zettlemoyer and M. Collins. 2005. Learning to map sentences to logical form: Structured classifica- tion with probabilistic categorial grammars. 
In Un- certainty in Artificial Intelligence (UAI), pages 658- 666.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Online learning of relaxed CCG grammars for parsing to logical form", "authors": [ { "first": "L", "middle": [ "S" ], "last": "Zettlemoyer", "suffix": "" }, { "first": "M", "middle": [], "last": "Collins", "suffix": "" } ], "year": 2007, "venue": "Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP/CoNLL)", "volume": "", "issue": "", "pages": "678--687", "other_ids": {}, "num": null, "urls": [], "raw_text": "L. S. Zettlemoyer and M. Collins. 2007. Online learn- ing of relaxed CCG grammars for parsing to log- ical form. In Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning (EMNLP/CoNLL), pages 678-687.", "links": null } }, "ref_entries": { "FIGREF0": { "num": null, "text": "R[Venue]. argmax( Position. 1st , Index) The table in", "uris": null, "type_str": "figure" }, "FIGREF1": { "num": null, "text": "The first pass of DPD constructs cells (c, s, d) (square nodes) using denotationally invariant semantic functions (circle nodes). The second pass enumerates all logical forms along paths that lead to the correct denotation y (solid lines).the cells (c, s) by applying compositional rules on partial logical forms with size less than s. For instance, when s = 2, we can apply Rule C1 on logical forms Number.1 from cell (Set, s 1 = 1) and Position from cell (Rel, s 2 = 0) to create Position.Number.1 in cell (Set, s 0 +s 1 +1 = 2). After populating each cell (c, s), the list of logical forms in the cell is pruned based on the model scores to a fixed beam size in order to control the search space. Finally, the set Z b is formed by collecting logical forms from all cells (Set, s) for s = 1, . . . 
, s max .", "uris": null, "type_str": "figure" }, "FIGREF2": { "num": null, "text": "We execute consistent logical forms z i \u2208 Z on fictitious worlds to get denotation tuples. Logical forms with the same denotation tuple are grouped into the same equivalence class q j .", "uris": null, "type_str": "figure" }, "FIGREF3": { "num": null, "text": "R[Opponent].Type.Row, R[\u03bbx.count(Opponent.x Result.Lost]) \"how long did ian armstrong serve?\" z = R[Num2].R[Term].Member.IanArmstrong \u2212 R[Number].R[Term].Member.IanArmstrong \"which players came in a place before lukas bauer?\" z = R[Name].Index.<.R[Index].Name.LukasBauer \"which players played the same position as ardo kreek?\" z = R[Player].Position.R[Position].Player.Ardo !=.Ardo", "uris": null, "type_str": "figure" }, "FIGREF4": { "num": null, "text": "The median of the number of logical forms (dashed) and denotations (solid) as the formula size increases. The space of logical forms grows much faster than the space of denotations.", "uris": null, "type_str": "figure" }, "TABREF0": { "text": ": \"Where did the last 1st place finish occur?\" y:Thailand Consistent Correct z1: R[Venue].argmax(Position.1st, Index)Among rows with Position = 1st, pick the one with maximum index, then return the Venue of that row. z2:R[Venue].Index.max(R[Index].Position.1st)Find the maximum index of rows with Position = 1st, then return the Venue of the row with that index.", "num": null, "html": null, "content": "
YearVenuePosition EventTime
2001 Hungary2nd400m47.12
2003Finland1st400m46.69
2005 Germany11th400m46.62
2007 Thailand1strelay182.05
2008China7threlay180.32
z3: R[Venue].argmax(Position.Number.1, R[\u03bbx.R[Date].R[Year].x])
Among rows with Position number 1, pick the one with the latest date in the Year column and return the Venue.
Spurious
z4: R[Venue].argmax(Position.Number.1, R[\u03bbx.R[Number].R[Time].x])
Among rows with Position number 1, pick the one with the maximum Time number. Return the Venue.
", "type_str": "table" } } } }
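The table above contrasts consistent logical forms (z1, z2) with spurious ones (z4): both kinds yield the correct denotation Thailand on this table, even though z4 does not reflect the question's meaning. This behavior can be illustrated with a small sketch that executes z1 and z4 as plain Python over the example rows. This is not the paper's lambda-DCS executor; the row representation and helper names are hypothetical, chosen only to mirror the glosses in the table.

```python
import re

# Example table from the paper, as a list of row dicts.
# Index is the 0-based row position used by argmax(..., Index).
table = [
    {"Index": 0, "Year": 2001, "Venue": "Hungary",  "Position": "2nd",  "Event": "400m",  "Time": "47.12"},
    {"Index": 1, "Year": 2003, "Venue": "Finland",  "Position": "1st",  "Event": "400m",  "Time": "46.69"},
    {"Index": 2, "Year": 2005, "Venue": "Germany",  "Position": "11th", "Event": "400m",  "Time": "46.62"},
    {"Index": 3, "Year": 2007, "Venue": "Thailand", "Position": "1st",  "Event": "relay", "Time": "182.05"},
    {"Index": 4, "Year": 2008, "Venue": "China",    "Position": "7th",  "Event": "relay", "Time": "180.32"},
]

def position_number(row):
    """Numeric part of an ordinal Position string: '1st' -> 1, '11th' -> 11."""
    return int(re.match(r"\d+", row["Position"]).group())

def execute_z1(rows):
    """Consistent z1 = R[Venue].argmax(Position.1st, Index):
    among rows with Position = 1st, pick the one with maximum Index,
    then return the Venue of that row."""
    first_place = [r for r in rows if r["Position"] == "1st"]
    last = max(first_place, key=lambda r: r["Index"])
    return last["Venue"]

def execute_z4(rows):
    """Spurious z4: among rows with Position number 1, pick the one with
    the maximum Time number and return its Venue."""
    first_place = [r for r in rows if position_number(r) == 1]
    slowest = max(first_place, key=lambda r: float(r["Time"]))
    return slowest["Venue"]

print(execute_z1(table))  # Thailand (row 3: last 1st-place finish)
print(execute_z4(table))  # Thailand (row 3: Time 182.05 > 46.69, by coincidence)
```

On this table both forms return Thailand, so denotation alone cannot separate them; on a fictitious world where the last 1st-place row is not the one with the largest Time value, z1 and z4 would diverge, which is exactly what the crowdsourced-denotation filtering exploits.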