| { |
| "paper_id": "Q16-1012", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:06:48.205610Z" |
| }, |
| "title": "Learning to Make Inferences in a Semantic Parsing Task", |
| "authors": [ |
| { |
| "first": "Kyle", |
| "middle": [], |
| "last": "Richardson", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Kuhn", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Stuttgart", |
| "location": {} |
| }, |
| "email": "jonas.kuhn@ims.uni-stuttgart.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We introduce a new approach to training a semantic parser that uses textual entailment judgements as supervision. These judgements are based on high-level inferences about whether the meaning of one sentence follows from another. When applied to an existing semantic parsing task, they prove to be a useful tool for revealing semantic distinctions and background knowledge not captured in the target representations. This information is used to improve the quality of the semantic representations being learned and to acquire generic knowledge for reasoning. Experiments are done on the benchmark Sportscaster corpus (Chen and Mooney, 2008), and a novel RTE-inspired inference dataset is introduced. On this new dataset our method strongly outperforms several strong baselines. Separately, we obtain state-of-the-art results on the original Sportscaster semantic parsing task.", |
| "pdf_parse": { |
| "paper_id": "Q16-1012", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We introduce a new approach to training a semantic parser that uses textual entailment judgements as supervision. These judgements are based on high-level inferences about whether the meaning of one sentence follows from another. When applied to an existing semantic parsing task, they prove to be a useful tool for revealing semantic distinctions and background knowledge not captured in the target representations. This information is used to improve the quality of the semantic representations being learned and to acquire generic knowledge for reasoning. Experiments are done on the benchmark Sportscaster corpus (Chen and Mooney, 2008), and a novel RTE-inspired inference dataset is introduced. On this new dataset our method strongly outperforms several strong baselines. Separately, we obtain state-of-the-art results on the original Sportscaster semantic parsing task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Semantic Parsing is the task of automatically translating natural language text to formal meaning representations (e.g., statements in a formal logic). Recent work has centered around learning such translations using parallel data, or raw collections of text-meaning pairs, often by employing methods from statistical machine translation (Wong and Mooney, 2006; Jones et al., 2012; Andreas et al., 2013) and parsing (Zettlemoyer and Collins, 2009; Kwiatkowski et al., 2010) . Earlier attempts focused on learning to map natural language questions to simple database queries for database retrieval using collections of target ques-tions and formal queries. A more recent focus has been on learning representations using weaker forms of supervision that require minimal amounts of manual annotation effort (Clarke et al., 2010; Liang et al., 2011; Krishnamurthy and Mitchell, 2012; Artzi and Zettlemoyer, 2013; Berant et al., 2013; Kushman et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 338, |
| "end": 361, |
| "text": "(Wong and Mooney, 2006;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 362, |
| "end": 381, |
| "text": "Jones et al., 2012;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 382, |
| "end": 403, |
| "text": "Andreas et al., 2013)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 416, |
| "end": 447, |
| "text": "(Zettlemoyer and Collins, 2009;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 448, |
| "end": 473, |
| "text": "Kwiatkowski et al., 2010)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 804, |
| "end": 825, |
| "text": "(Clarke et al., 2010;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 826, |
| "end": 845, |
| "text": "Liang et al., 2011;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 846, |
| "end": 879, |
| "text": "Krishnamurthy and Mitchell, 2012;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 880, |
| "end": 908, |
| "text": "Artzi and Zettlemoyer, 2013;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 909, |
| "end": 929, |
| "text": "Berant et al., 2013;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 930, |
| "end": 951, |
| "text": "Kushman et al., 2014)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "For example, Liang et al. (2011) train a semantic parser in a question-answering domain using the denotation (or answer) of each question as the sole supervision. Particularly impressive is their system's ability to learn complex linguistic structure not handled by earlier methods that use more direct supervision. Similarly, Artzi and Zettlemoyer (2013) train a parser that generates higherorder logical representations in a navigation domain using low-level navigation cues. What is missing in such approaches, however, is an explicit account of entailment (e.g., learning entailment rules from such corpora), which has long been considered one of the basic aims of semantics (Montague, 1970) . An adequate semantic parser that captures the core aspects of natural language meaning should support inferences about sentence-level entailments (i.e., determining whether the meaning of one sentence follows from another). In many cases, the target representations being learned remain inexpressive, making it difficult to learn the types of semantic generalizations and world-knowledge needed for modeling entailment (see discussion in Schubert (2015) ).", |
| "cite_spans": [ |
| { |
| "start": 13, |
| "end": 32, |
| "text": "Liang et al. (2011)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 679, |
| "end": 695, |
| "text": "(Montague, 1970)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 1136, |
| "end": 1151, |
| "text": "Schubert (2015)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Attempts to integrate more general knowledge into semantic parsing pipelines have often involved additional hand-engineering or external lexical resources (Wang et al., 2014; Tian et al., 2014; Beltagy et al., 2014) . We propose a different learning-based approach that uses textual inference judgements between sentences as additional supervision to learn semantic generaliza- Figure 1: The original Sportscaster training setup: a text x paired with a set of meaning representations z derived from events occurring in a 2-d soccer simulator. The goal is to learn a latent translation, y, from the text to the correct representation.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 174, |
| "text": "(Wang et al., 2014;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 175, |
| "end": 193, |
| "text": "Tian et al., 2014;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 194, |
| "end": 215, |
| "text": "Beltagy et al., 2014)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "tions in a semantic parsing task. Our assumption is that differences in sentence realizations provide a strong, albeit indirect, signal about differences in meaning. When paired with entailment judgements, this evidence can reveal important semantic distinctions (e.g., sense distinctions, modification) that are not captured in target meaning representations. These judgements can also be used to learn general knowledge about a domain (e.g., meaning postulates or ontological relations).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper, we introduce a novel recognizing textual entailment (RTE) inspired inference task for training and evaluating semantic parsers that extends previous approaches. Our method learns jointly using structured meaning representations (as done in previous approaches) and raw textual inference judgements as the main supervision. In order to learn and model entailment phenomena, we introduce a new method that integrates natural logic (symbolic) reasoning (MacCartney and Manning, 2009) directly into a data-driven semantic parsing model.", |
| "cite_spans": [ |
| { |
| "start": 465, |
| "end": 495, |
| "text": "(MacCartney and Manning, 2009)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We perform experiments on the Sportscaster corpus (Chen and Mooney, 2008 ), which we extend by annotating pairs of sentences in the original dataset with inference judgements. On a new inference task based on this extended dataset, we achieve an accuracy of 73%, which is an improvement of 13 percentage points over a strong baseline. As a separate result, part of our approach outperforms previously published results (from around 89% accuracy to 96%) on the original Sportscaster semantic parsing task. ", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 72, |
| "text": "(Chen and Mooney, 2008", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we describe the idea of modeling inference in a semantic parsing task using examples from the Sportscaster domain. Figure 1 shows a training example from the original Sportscaster corpus used in Chen and Mooney (2008) , consisting of a text x paired with a set of formal meaning representations z. The goal for training a semantic parser in this setup is to learn a hidden translation y from the text to the correct representation using such raw pairs as supervision. In this case, human commentary (i.e., x) was collected by having participants watch a 2-d simulation of several Robocup 1 soccer league games and comment on events in the game. Rather than hand annotating the verbal sports commentary, sentences were paired with symbolic (logical) representations underlying the original simulator actions (Andr\u00e9 et al., 2000) . These representations serve as a proxy for the grounded game context and the denotation of individual events (shown as JzK). While the representations capture the general events being discussed, they often fail to capture other aspects of meaning and additional details that the human commentators found to be relevant and expressed verbally.", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 234, |
| "text": "Chen and Mooney (2008)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 824, |
| "end": 844, |
| "text": "(Andr\u00e9 et al., 2000)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 132, |
| "end": 140, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Motivation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "These issues are illustrated in Figure 2 , where example sentences are shown with target meaning representations. Sentence-level entailment judgements 2 between orderings of text are shown using a standard 3-way entailment scheme (Cooper et al., 1996; Bentivogli et al., 2011) , along with a na\u00efve inference computed by comparing the target labels. The mismatch between some of the na\u00efve inferences and the actual entailment judgements show that the target representations alone fail to capture certain semantic distinctions. This is related to two problems:", |
| "cite_spans": [ |
| { |
| "start": 230, |
| "end": 251, |
| "text": "(Cooper et al., 1996;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 252, |
| "end": 276, |
| "text": "Bentivogli et al., 2011)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 32, |
| "end": 40, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Imprecise labels: The corpus representations fail to account for certain aspects of meaning. For example, the first two sentences in Figure 2 map to the same formal meaning representation (i.e., pass(pink3,pink7)) despite having slightly different semantics and divergent entailment patterns. This shift in meaning is related to the adverbial modifier quickly, which is not explicitly analyzed in the target representation. The same is true for the modifier long in example 4, and for all other forms of modification. For a semantic parser or generator trained on this data, both sentences are treated as having an identical meaning. As shown in the example 2, other representations fail to capture important sense distinctions, such as the difference between the two senses of the kick relation. While shooting for the goal in general entails kicking, such an entailment does not hold in the reverse direction. Without making this distinction explicit at the representation level, such inferences and distinctions cannot be made.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 133, |
| "end": 141, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Missing Domain Knowledge: Since the logical representations are not based on an underlying logical theory or domain ontology, semantic relations between different symbols are not known. For example, computing the entailments in example 3 requires knowing that in general, a pass event entails or implies a kick event (i.e., the set of things kicking at a given moment includes the set of people passing). Other such factoids are involved in reasoning about the sentences in example 4: purple7 is part of the purple team, and a score event entails a kick event (but not conversely).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Our goal is to learn a semantic parser that can capture the inferential properties of language de- 2 We adopt the definition of entailment used in the RTE challenges (Dagan et al., 2005) : a text T entails a hypothesis H if \"typically, a human reading T would infer that H is most likely True\" t: pink3", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 100, |
| "text": "2", |
| "ref_id": null |
| }, |
| { |
| "start": 166, |
| "end": 186, |
| "text": "(Dagan et al., 2005)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "passes to pink1 a:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "h:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "pink3 quickly kicks y 0 : pink3 \u2318 pink3 (rel) pink3 \u2318 pink3 w vc (mod) w quickly pass v kick, pink1 v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "(infer) passes to pink1 v kicks (infer) passes to pink 1 # quickly kicks (infer) pink3 passes to pink1 # pink3 quickly kicks", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "z 0 : Unknown (= #) pink3/pink3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "pink1/ /vc pass/kick Figure 3 : The inference training setup: an ordered pair (t, h) annotated with a sentence-level inference relation z 0 . The goal is to learn a hidden alignment a between t, h, and a hidden proof (tree) y 0 that generates the target inference. scribed above. Rather than re-annotating the corpus and creating a domain ontology from scratch, we use the raw entailment judgements to help improve and learn about the existing representations. We show that entailment judgements prove to be a powerful tool for solving the two problems described above.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 29, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Problems of Representation", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Our approach addresses the problems outlined above by adding pairs of text annotated with inference judgements (as shown in Figure 2 ) to the original Sportscaster data (as shown in Figure 1 ). While training an ordinary semantic parser, we use such pairs to jointly reason about the Sportscaster concepts/symbols and prove theorems about the target entailments using a simple logical calculus. The idea is that these proofs reveal distinctions not captured in the original representations, and can be used to improve the semantic parser's internal representations and acquire knowledge.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 124, |
| "end": 132, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| }, |
| { |
| "start": 182, |
| "end": 190, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning from Entailment", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "This training setup is illustrated in Figure 3 , where each training instance consists of a text t and hypothesis h, and an entailment judgement z 0 . The goal is to learn a hidden proof y 0 that derives the target entailment by transforming the text into the hypothesis. Such a proof is driven by latent semantic relationships (shown on the top row in y 0 and rel) between aligned pairs of symbols (the arc labels in a, delimited by \"/\"). These relations record the effect of substituting or inserting/deleting symbols in the text with related symbols in the hypothesis and compare the denotations of these symbols. These relations are then projected up a proof tree using generic inference rules (infer and mod) to compute a global inference.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 46, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning from Entailment", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "This proof gives rise to several new facts: the pass symbol is found to forward entail or imply (shown using the set inclusion symbol v) the kick symbol. The adverbial modifier, which is previously unanalyzed, is treated as an entailing modifier v c , which results in a reverse entailment or implication (shown using the symbol w) when inserted (or substituted for the empty symbol ) on the hypothesis side. The first fact can be used for building a domain theory, and the second for assigning more precise labels to modifiers for the semantic parser. The overall effect of inserting the adverbial modifier (shown in red) is then propagated up the proof tree leading to an Uncertain inference (shown using the # symbol).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning from Entailment", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Computing entailments is driven by learning the correct semantic relations between primitive domain symbols, as well as the semantic effect of deleting/inserting symbols. We focus on learning the following very broad types of linguistic inferences (Fyodorov et al., 2003) : constructionbased inferences, or inferences generated from specific (syntactic) constructions or lexical items in the language, and lexical-based inferences, or inferences generated between words or primitive concepts due to their inherent lexical meaning.", |
| "cite_spans": [ |
| { |
| "start": 248, |
| "end": 271, |
| "text": "(Fyodorov et al., 2003)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning from Entailment", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Construction-based inferences are inferences related to modifier constructions: quickly(pass) v pass, goal w nice(goal), gets a(free kick) \u2318 (equivalence) free kick, where the entailments relate to default properties of particular modifiers when they are added or dropped. Lexical-based inferences relate to general inferences and implications between primitive semantic symbols or concepts: kick w score, pass v kick, and pink1 v pink team.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning from Entailment", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Experiments are done by first training a standard semantic parser on the Sportscaster dataset, then improving this parser using an extended corpus of sentences annotated with entailment judgements. Semantic parsing is done using a probabilistic grammar induction approach (B\u00f6rschinger et al., 2011; Angeli et al., 2012 ), which we extend to accommodate entailment modeling. The natural logic calculus is used as the underlying logical inference engine (MacCartney and Manning, 2009) .", |
| "cite_spans": [ |
| { |
| "start": 272, |
| "end": 298, |
| "text": "(B\u00f6rschinger et al., 2011;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 299, |
| "end": 318, |
| "text": "Angeli et al., 2012", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 452, |
| "end": 482, |
| "text": "(MacCartney and Manning, 2009)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Outline of Approach", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "To evaluate the quality of our resulting semantic parser and the acquired knowledge, we run our system on a held-out set of inference pairs. The results are compared to the na\u00efve inferences computed by the initial semantic parser.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Outline of Approach", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In this section, we describe the technical details behind the semantic parsing. We also describe the underlying natural logic inference engine used for computing inferences, and how to integrate this into a standard semantic parsing pipeline for modeling our extended corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Semantic Parsing", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Semantic grammars (Allen, 1987) are used to perform the translation between text and logical meaning representations. The rules in these grammars are automatically constructed from the target corpus representations using a small set of rule templates, building on B\u00f6rschinger et al. 2011(henceforth BJJ). Figure 4 shows a set of rule templates in the form of context-free productions, along with examples from the Sportscaster domain.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 31, |
| "text": "(Allen, 1987)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 305, |
| "end": 313, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Base Semantic Grammars", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Meanings representations (MR) are atomic formulae of predicate logic and take the following form: Rel(x arg1 ,..,x argN ). The production rules break down each representation to smaller parts: lexical rules associate MR constituents or symbols (e.g., Rel,x arg1 instances) to individual words, phrase rules associate these constituents to word sequences, concept rules associate phrase rules to domain concepts, and glue rules combine concepts to build complete MRs. 3 Lexical rules are created by breaking down all MRs in a target corpus, and associating each constituent with all words in the target corpus. Phrase rules are from (Johnson et al., 2010) and allow arbitrary word sequences to be associated with constituents as opposed to single words. Such rules can be used to skip words that don't contribute direct meaning to a constituent or are unanalyzed, which is represented using the empty word symbol w . This is shown in the treatment of the adverbial quickly in the phrase passes quickly to in Figure 4 .2a.", |
| "cite_spans": [ |
| { |
| "start": 467, |
| "end": 468, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 632, |
| "end": 654, |
| "text": "(Johnson et al., 2010)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 1007, |
| "end": 1015, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Glue rules are constructed by marking constituent concepts with syntax-semantic roles and Base Grammar Rule Templates Examples: derivation (a), input (b) and interpretation (c) word orders ={sv, vs, os, vo} 1b:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Semx glue ! { E Aarg1 } x 2 {sv,vs} Semx glue ! { E' (Aarg2) } x 2 {ov,vo} Sem empty ! ( c ) w E glue ! { Rr (Aarg2) } E' glue ! { Rr Aarg1 } Rr concept ! { Rc ( c) } Ax concept ! { Ic ( c ) } x 2 {arg1,arg2} xc atomic ! { C1c C2c, ... } x 2 {I,R} xc phrase ! xp x 2 {I, R, , C} xp phrase ! (xphx) xw xp phrase ! xph ( w) xph phrase ! xph ( w) xph phrase ! xphx (xw) xphx skip ! (xphx) w xphx phrase ! (xphx) xw xw lexical ! w 2 corpus x 2 {I, R, , C}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Purple 10 shoots 2b: Purple 7 quickly passes to purple 4 1c:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "kick(purple10) 2c: pass(purple7,purple4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "3a. The pink goalie blocks the ball 4b: Free kick for the purple team 3c: combining these roles according to the structure of the MRs. For example, the constituent symbol block in Figure 4 .3 is marked as an intransitiveplay relation and pink1 as a play-argument, both of which combine to create a well-formed MR. Here we diverge from BJJ, where such abstractions are not used and full MRs are encoded as separate grammar symbols. As in BJJ, word-order rules are used to account for regularities in the order in which arguments combine with relations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 180, |
| "end": 188, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "block(pink1) 4c: playmode(free kick l) Inference Rules S 2 {\u2318, |, #, w, v}; P 2 S \\ {#, \u2318},M 2 {v, w, \u2318, #}; Y 2 {R, I} (S ./ S')x join ! {SE S'A arg1 } x 2 {sv, vs} (S ./ S')x join ! {S E 0 S'A arg2 } x 2 {ov, vo} |x fun. ! {|E SA } x 2 {sv, vs, ...} |x fun. ! {|A SE } (S ./ S')E join ! {SE (S'A arg2 ) } (S ./ S') E 0 join ! {SE S'A arg1 } (S ./ M)x mod. ! {Sx Mc} |f fun. ! {Sf |x} f 2 {E, A, c} Px sub. ! Yc / Y'c x 2 {E, A} vc delete ! x / \u2318c in/del ! \u2318c / | / \u2318c wc insert ! / x 5a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "c and Atomic Rules In addition to the skip phrase rules, concept rules are padded with a new empty concept c , which are used for modeling phrases and modifiers surrounding concept phrases. For example, in The pink goalie in Figure 4 .3a, the is treated as a separate phrase that modifies pink goalie. As described later, these phrases will get classified according to their effect on entailment using our extended corpus, but in the base semantic grammar get treated as not con-tributing any additional meaning.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 225, |
| "end": 233, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "The atomic rules optionally break down some of the domain symbols to smaller concepts, rather than using the original corpus symbols directly as in BJJ. For example, the concept pink3 is treated as consisting of two concepts: pink and 3. Similarly, the game symbol free kick l is broken down to two concepts: free kick and purple team (or l). Unlike in BJJ, some flexibility is permitted in terms of dropping or skipping over some constituent symbols that do not get realized in sentences. For example, playmode in Figure 4 .4a is dropped, since it is not explicitly described in the associated text.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 515, |
| "end": 523, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "Interpretation Using this grammar, a given sentence input will generate a large space of output derivations, each related to a particular semantic representation. An interpretation of a derivation d is the MR produced from the derivation by applying the glue rules. By assigning probabilities to Table: . the rules in our grammar, we can learn the correct interpretations using our training data.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 296, |
| "end": 302, |
| "text": "Table:", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "/ \u2318 v w | # \u2318 \u2318 v w | # v v v # | # w w # w # # | | # | # # # # # # # # D blocks the ball | scores Example Atomic Joins R S R' = R ./ S pink3 v pink scores v shoots pink3 scores v", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Rule Templates and Extraction", |
| "sec_num": null |
| }, |
| { |
| "text": "The base semantic parser described in the previous section makes it possible to translate sentences to formal representations. Entailment modeling aims to discover abstract relations between symbols in these representations. For example, knowing how the meaning, or denotation, of the symbol purple7 in general relates to the meaning of purple team, or how score relates to kick.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Entailment Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this section, we describe our general framework used for modeling textual entailment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Entailment Modeling", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We use a fragment of the natural logic calculus to model entailment (MacCartney and Manning, 2009; Icard III, 2012) . Natural logic derives from work in linguistics on proof-theoretic approaches to semantics (van Benthem, 2008; Moss, 2010) . More recently, it has been used in NLP for work on RTE (MacCartney and Manning, 2008; Angeli and Manning, 2014; Bowman et al., 2014) .", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 98, |
| "text": "(MacCartney and Manning, 2009;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 99, |
| "end": 115, |
| "text": "Icard III, 2012)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 208, |
| "end": 227, |
| "text": "(van Benthem, 2008;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 228, |
| "end": 239, |
| "text": "Moss, 2010)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 297, |
| "end": 327, |
| "text": "(MacCartney and Manning, 2008;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 328, |
| "end": 353, |
| "text": "Angeli and Manning, 2014;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 354, |
| "end": 374, |
| "text": "Bowman et al., 2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Natural Logic Calculus", |
| "sec_num": null |
| }, |
| { |
| "text": "Components of the calculus are shown in Figure 5 . A small set of primitive set-theoretic relations are defined, which are used to relate the denotations of arbitrary lexical items (w.r.t to a domain of discourse D). We use a subset of the original seven relations to relate symbols (and by extension, word/phrases) in our domain. For example, purple3 (or \"purple 3\") has a v (or subset) relation to purple team (or \"purple team\"), which is illustrated in Figure 5 using a Venn diagram. These primitive relations are then composed using two operations: atomic join rules, or generic inference rules for combining two inference relations to create a new relation (shown in the join table) , and function rules, or inference rules associated with particular lexical items that project certain properties onto other relations.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 40, |
| "end": 48, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 456, |
| "end": 464, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 681, |
| "end": 687, |
| "text": "table)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Natural Logic Calculus", |
| "sec_num": null |
| }, |
| { |
| "text": "Sentence-level entailment recognition is done by finding an alignment between a text and hypothesis pair. Such an alignment transforms the text into the hypothesis by substituting each part of the text with lexical items in the hypothesis, and inserting/deleting other items. Each local transformation is marked with a semantic relation, and these relations are composed using the join and function rules. A proof tree records the result of this overall process, and the top-most node shows the overall inference relation (e.g., | in Figure 4 .5a or w in Figure 4 .6a).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 534, |
| "end": 542, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 555, |
| "end": 563, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Natural Logic Calculus", |
| "sec_num": null |
| }, |
| { |
| "text": "Semantic relations between symbols and functions are usually recorded in a semantic lexicon. Since we have no prior knowledge about how symbols relate to one another, we learn these relations using the resulting entailment judgements as supervision. For example, since we do not know the exact relation between kick and pass, we start by assuming all semantic relations and find the correct relation by looking into the (latent) proof trees that produce the correct entailments in our training data (e.g., the tree in Figure 3 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 518, |
| "end": 526, |
| "text": "Figure 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Natural Logic Calculus", |
| "sec_num": null |
| }, |
| { |
| "text": "All actions in our domain are fixed in time and place, and as such apply to a unique group of objects. For example, if a passing event is being described, the person doing the passing or being passed to at a particular moment must always be a unique individual. Substituting this individual with someone else will always result in a contradiction (see examples on the bottom of Figure 5 ). We therefore use a single default function rule that always projects negations | up the proof tree. See (MacCartney, 2009) for more information about a: pink 5 / pink 5 substitute ((t = \"pink 5 steals the ball\", h = \"good defense at the goal by pink 5\"), z 0 = Uncertain) Figure 6 : An example produced by our model: (t, h) are the text and hypothesis, z 0 is the inference annotation/relation that holds between t ! h, a is a phrase alignment between both sentences, and y 0 is a (simplified) proof tree that generates the target inference. these types of functional relations.", |
| "cite_spans": [ |
| { |
| "start": 494, |
| "end": 512, |
| "text": "(MacCartney, 2009)", |
| "ref_id": "BIBREF30" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 378, |
| "end": 386, |
| "text": "Figure 5", |
| "ref_id": null |
| }, |
| { |
| "start": 662, |
| "end": 670, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Natural Logic Calculus", |
| "sec_num": null |
| }, |
| { |
| "text": "We encode natural logic operations as production rules and add them to the base semantic grammar described in Section 3.1. Rule templates are shown on the bottom of Figure 4 . Substitute rules assign a semantic relation to a pair of symbols: e.g., w play-intr. ! kick c / pass c , where the subscript on the semantic relation is the role of the left concept being substituted. Substitutions occur between symbols with the same role, such as all relation symbols or all argument symbols in a domain and set of MRs. Function rules project negations (regardless of role) up a proof tree. Join rules compose relations using the join function ./ and, like the glue rules, are used to construct wellformed MRs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 165, |
| "end": 173, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Inference Grammar Rules", |
| "sec_num": null |
| }, |
| { |
| "text": "Substitution rules arbitrarily assign semantic relations to pairs of symbols since the correct relation is not known at the start (as discussed previously). This set of relations can be constrained by adding knowledge into the grammar. In our experiments, we assume a single negation rule between all arguments of the same semantic type (e.g., player arguments).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference Grammar Rules", |
| "sec_num": null |
| }, |
| { |
| "text": "We add two concept symbols to the grammar: v c and \u2318 c to replace the c / w rules in the base grammar. These are used to classify modifiers (e.g., the adverbial modifier in Figure 2 .1) and other expressions that are unanalyzed. The modifier rule allows these to combine with other symbols to affect entailment in various ways when added/dropped. With the empty symbol , other symbols can also be arbitrarily added/dropped via the insert and delete rules.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 173, |
| "end": 181, |
| "text": "Figure 2", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modifiers and Senses", |
| "sec_num": null |
| }, |
| { |
| "text": "We handle sense distinctions by allowing relation symbols (e.g., kick) in the base grammar to break down into a fixed number of specific senses in the grammar (e.g., kick 1 ,kick 2 , . . . ). Using the substitution rule, these different senses can be compared in the standard way to account for the semantic distinctions discussed in Section 2.1. In our experiments, we assigned a random number of senses to the most frequent events.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifiers and Senses", |
| "sec_num": null |
| }, |
| { |
| "text": "Since the inference grammar is built on top of the base semantic grammar, these additional sense and modifier distinctions can be used for improving the base parser. Figure 8 shows examples of improvements in the semantic parse output after training with the extended corpus.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 166, |
| "end": 174, |
| "text": "Figure 8", |
| "ref_id": "FIGREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modifiers and Senses", |
| "sec_num": null |
| }, |
| { |
| "text": "Construction vs. Lexical The distinction between construction-based and lexical-based inferences described in Section 2.2 is the difference between insert/delete and substitution rules in the inference grammar rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modifiers and Senses", |
| "sec_num": null |
| }, |
| { |
| "text": "Interpretation A given input will generate a large set of proof trees and an even larger set of semantic relations between different symbols. The crucial aspect of the interpretation of the proof tree is the overall inference relation marked at the root node. These relations are mapped into particular inference judgements as shown in Figure 5 .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 336, |
| "end": 344, |
| "text": "Figure 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modifiers and Senses", |
| "sec_num": null |
| }, |
| { |
| "text": "The inference grammar assumes as input a word/phrase alignment between sentence pairs. Such an alignment is done in a heuristic fashion by parsing each sentence individually using the se-mantic grammar and aligning nodes in the resulting parse trees that have matching roles. A string is produced by pairing the yield of each matching subtree using a delimiter /. Subtrees that do not have a matching role in the other tree or are modifier expressions are isolated and aligned to the empty symbol .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tree Alignment", |
| "sec_num": null |
| }, |
| { |
| "text": "An example tree alignment is shown in Figure 6a , where the relation nodes play-intr and ar-gument1 nodes player arg1 are aligned. Since there are no modifiers in the argument subtrees, the yields of the two trees are simply combined to create the string pink 5 / pink 5. The modifier phrase c is removed from the second relation subtree and aligned to the empty string:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 38, |
| "end": 47, |
| "text": "Figure 6a", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Alignment", |
| "sec_num": null |
| }, |
| { |
| "text": "/ at the goal by. The remaining part of the relation subtrees are then aligned: good defense / steals the ball. With this input, the standard phrase and concept rules from the base grammar are used to tag each phrase and inference rules are then applied to generate proofs. 4 In our experiments, we use the tags from the semantic parse trees used during the alignment step to restrict the space of proofs considered. For example, we already know from the semantic parser output in Figure 6a that the text involves a steal event and the hypothesis a defense event, so we can constrain the search to consider only proofs that involve these two types of events.", |
| "cite_spans": [ |
| { |
| "start": 274, |
| "end": 275, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 481, |
| "end": 490, |
| "text": "Figure 6a", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Tree Alignment", |
| "sec_num": null |
| }, |
| { |
| "text": "Semantic parsing and inference computation is performed using a single generative probabilistic framework as shown in Figure 7 . A probabilistic context-free grammar (PCFG) transforms input to logical representations and entailment judgements using the grammar rules defined above. Learning reduces to the problem of finding the optimal parameters \u2713 for our PCFG given example input/output training pairs", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 126, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "{(x i , Z i )} n i=1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": ". This is done via a EM bootstrapping approach that uses a k-best approximation of grammar derivations to estimate \u2713 (Angeli et al., 2012) . At each iteration in the EM procedure t = 1 . . . T , a set of k-best derivations is generated for each input x, D(x) = {(d j , p j )} k j=1 , using the current parameters \u2713 t . The set of valid derivations, 4 For readability, substitute rules such as R ! X / Y in many of the proof trees are simplified to the following: R ! X/Y and X/Y ! x string/y string, without showing the full concept/phrase analysis for X, Y . See Figure 4 .5a for a more precise example. Figure 7 : The prediction model: input is mapped to a hidden derivation d using a PCFG (parameterized by \u2713), which is then used to generate an output semantic representation. In the case of semantic parsing (top), d is a latent semantic parse tree and the output is a logical representation. For entailment detection, d is a latent proof tree and the output is a human judgement about entailment.", |
| "cite_spans": [ |
| { |
| "start": 117, |
| "end": 138, |
| "text": "(Angeli et al., 2012)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 349, |
| "end": 350, |
| "text": "4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 564, |
| "end": 572, |
| "text": "Figure 4", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 605, |
| "end": 613, |
| "text": "Figure 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "C(x, Z) \u2713 D(x)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": ", includes all derivations that, when interpreted, are included in the set of training labels Z. From this set, C 0 (x, Z) is computed by normalizing the probabilities, each p, to create a proper probability distribution. For a given rule A ! , the parameter updates are computed using these sets of normalized valid derivations. This is given by the following (unnormalized) formula (with Dirichlet prior \u21b5):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "\u2713 t+1 A! = \u21b5+ n X i=1 X (d,p)2C 0 (x i ,Z i ) count(d, A ! ) p", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "In this section, we discuss the Sportscaster dataset and our experimental setup.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Sportscaster The Sportscaster corpus (Chen and Mooney, 2008) While the domain has a relatively small set of concepts and limited scope, reasoning in this domain still requires a large set of semantic relations and background knowledge. From this small set of concepts, the inference grammar described in Section 3.2 encodes around 3,000 inference rules. Since soccer is a topic that most people are familiar with, it is also easy to get non-experts to provide judgements about entailment.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 60, |
| "text": "(Chen and Mooney, 2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Extended Inference Corpus The extended corpus consists of 461 unaligned pairs of texts from the original Sportscaster corpus annotated with sentence-level entailment judgements. We annotated 356 pairs using local human judges an average of 2.5 times 5 . Following Dagan et al. 2005, we discarded pairs without a majority agreement, which resulted in 306 pairs (or 85% of the initial set). We also annotated an additional 155 pairs using Amazon Mechanical Turk, which were mitigated by a local annotator.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "In addition to this core set of 461 entailment pairs, we separately experimented with adding unlabeled data (i.e., pairs without inference judgements) and ambiguously labelled data (i.e., pairs with multiple inference judgements) to train our inference grammars (shown in the results as More Data) and test the flexibility of our model. This included 250 unlabeled pairs taken from the original dataset, as well as 592 (ambiguous) pairs created by deriving new conclusions from the annotated set. This last group was constructed by exploiting the transitive nature of various inference relations and mapping pairs with matching labels in training to {Entail,Unknown}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We perform two types of experiments: A semantic parsing experiment (Task 1) to test our approach on the original task of generating Sportscaster representations. In addition, we introduce an inference experiment (Task 2) to test our approach on the problem of detecting entailments/contradictions between sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For the semantic parsing experiment, we follow the original setup of Chen and Mooney (2008) . 4fold cross validation is employed by training on all variations of 3 games and evaluating on a left out game. Each representation produced in the evaluation phrase is considered correct if it matches exactly a gold representation.", |
| "cite_spans": [ |
| { |
| "start": 69, |
| "end": 91, |
| "text": "Chen and Mooney (2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The second experiment imitates an RTE-style evaluation and tests the quality of the background knowledge being learned using our infer-Task 1: Semantic Parsing Match F1 (Chen et al., 2010) 0.80 (B\u00f6rschinger et al., 2011) (BJJ) 0.86 (Gaspers and Cimiano, 2014) 0 ence grammars. Like in the semantic parsing task, we perform cross-validation on the games using both the original data and sentence pairs to jointly train our models, and evaluate on left-out sets of inference pairs. Each proof generated in the evaluation phrase is considered correct if the resulting inference label matches a gold inference. We implemented the learning algorithm in Section 3.3 using the k-best algorithm by Huang and Chiang (2005) , with a beam size of 1,000. The base semantic grammars were each trained for 3 iterations and re-trained using the additional inference grammar rules for 10 iterations. Two Dirichlet priors were used, \u21b5 1 = 0.05 (for lexical rules) and \u21b5 2 = 0.3 (for non-lexical rules) throughout. Lexical rule probabilities were initialized using co-occurrence statistics estimated using an IBM Model1 word aligner (uniform initialization otherwise). 5 additional senses were added to the inference grammar for the most frequent events.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 188, |
| "text": "(Chen et al., 2010)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 194, |
| "end": 226, |
| "text": "(B\u00f6rschinger et al., 2011) (BJJ)", |
| "ref_id": null |
| }, |
| { |
| "start": 232, |
| "end": 259, |
| "text": "(Gaspers and Cimiano, 2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 690, |
| "end": 713, |
| "text": "Huang and Chiang (2005)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Evaluation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The results of both tasks are shown in Table 1 . Scores are averaged over all held out test sets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 39, |
| "end": 46, |
| "text": "Table 1", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Task 1: Semantic Parsing We compare the results of our base semantic parser model with previously published semantic parsing results. While our grammar model simplifies how some of the knowledge is represented in grammar derivations (e.g., in comparison to BJJ), the set of output representations or interpretations is restricted to the original Sportscaster formal representations making our results fully comparable. As shown, our base grammar strongly outperforms all previously published results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We also show the performance of our inference grammars on the semantic parsing task after being trained with additional inference sentence pairs. This was done under two conditions: when the inference grammar was trained using fully labeled inference data and unlabeled/ambiguously labeled data (more data). While not fully comparable to previous results, both cases achieve the same results as the base grammar, indicating that our additional training setup does not lead to an improvement on the original task.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Task 2: Inference Task The main result of our paper is the performance of our inference grammars on the inference task. For comparison, we developed several baselines, including a majority baseline (i.e., guess the most frequent inference label from training). We also use an RTE max-entropy classifier that is trained on the raw text inference pairs to make predictions. This classifier uses a standard set of RTE features (e.g., word overlap, word entity cooccurrence/mismatch). Both of these approaches are strongly outperformed by our main inference grammar (or IG Full).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The Na\u00efve Inference baseline compares the full Sportscaster representations generated by our semantic parser for each sentence in a pair and assigns an Entailment for representations that match and a Contradiction otherwise (see discussion in Section 2.1). This baseline compares the inferential power of the original representations (without background knowledge and more precise labels) to the inferential power of the inference grammars. The strong increase in performance suggests that important distinctions that are not captured in the original representations are indeed being captured in the inference grammars.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We tested another classification approach using a Flat Classifier, which is a multi-class SVM classifier that makes predictions using features from the input to the inference grammar. Such input includes both sentences in a pair, their parse trees and predicted semantic labels, and the alignment between the sentences. In Figure 6 , for example, this includes all of the information excluding the proof tree in y 0 . This baseline aims to test the effect of using hierarchical, natural logic inference rules as opposed to a flat or linear representation of the input, and to see whether our model learns more than the just the presence of important words that are not modeled in the orig- inal representations. Features include the particular words/phrases aligned or inserted/deleted, the category of these words/phrases in the parse trees, the rules in both parse trees and between the trees, the types of predicates/arguments in the predicted representations and various combinations of these features. This is also strongly outperformed by our main model, suggesting that the natural logic system is learning more general inference patterns.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 323, |
| "end": 331, |
| "text": "Figure 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Finally, we also experimented with removing insertions and deletions of modifiers from alignment inputs to test the effect of only using lexical knowledge to solve the entailment problems (Lexical Inference Only). In Figure 6 this involves removing \"at the goal\" from the alignment input and relying only on the grammars knowledge about how steal (or \"steals the ball\") relates to defense (or \"good defense by\") to make an entailment decision. This only slightly reduced the accuracy, which suggests that the real strength of the grammar lies in its lexical knowledge. Figure 8 shows example parse derivations before and after being trained using the inference grammars and additional inference pairs. In example 1, the parser learns to cor-a.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 217, |
| "end": 225, |
| "text": "Figure 6", |
| "ref_id": null |
| }, |
| { |
| "start": 569, |
| "end": 577, |
| "text": "Figure 8", |
| "ref_id": "FIGREF11" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(t, h):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "(a beautiful pass to,passes to) ( gets a free kick,freekick from the) ( yet again passes to,kicks to) ( purple 10,purple 10 who is out front) analysis:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "vc ./ \u2318play-tran= vplay-tran modifier \u2318play-tran. \u2318player arg2 purple10/purple10 \"purple 10\"/\"purple 10\" generalization:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "beautiful(X) v X get(X) \u2318 X yet-again(X) v X X w out front(X)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "b.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "(t, h):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "(pink team is offsides,purple 9 passes) ( bad pass.., loses the ball to) ( free kick for, steals the ball from) ( purple 6 kicks to,purple 6 kicks)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "analysis: rectly treat the modifier \"under pressure\" as a separate constituent. The particular analysis also captures the correct semantics by treating this phrase as forward-entailing, which allows us to predict how the entailment changes if we insert or delete this constituent. Similarly, the parser learns a more fine-grained analysis for the phrase \"passes out to\" by treating \"out\" as a type of modifier that does not affect entailment. Example 3 shows how the improved model learns to distinguish two senses of the kick relation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "|", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "On the inference task, one advantage of the natural logic approach is that it is easy to see how our models make entailment decisions by looking directly at the resulting proof trees. Figure 9 shows the types of knowledge learned by our system and used in proofs. Figure 9a shows example construction-based inferences, or modifier constructions. For example, the first example treats the word \"beautiful\" in \"a beautiful pass\" as a type of modifier that changes the entailment or implication when it is inserted (forward-entails) or deleted (reverse-entails). In set-theoretic terms, this rule says that the set of \"beautiful passes\" is a subset of the set of all \"passes\". The model also learns the semantics of longer phrases, including how certain types of relative clauses (last example) affect entailment. Figure 9b show types of lexical-based inferences, or relations between specific symbols. For example, the model learns that the pink team is disjoint from a particular player from the purple team, purple9, and that a bad pass implies a turnover event. Figure 10 shows three common cases where our system fails. The first error (1a) involves a sense error, where the system treats \"shoots\" as having a distinct sense from \"shoots for the goal\". This can be explained by observing that \"shoots\" is used ambiguously throughout the corpus to refer to both shooting for the goal and ordinary kicking. The second example (1b) shows how errors in the semantic parser (which is used to generate an alignment) propagate up the processing pipeline. In this case, the semantic parser erroneously predicted that \"pink 6\" is the first argument of the steal relation (a common type of word-order error), and subsequently aligned \"Purple 8\" to \"pink 6\". Similarly, the semantic parse tree for the hypothesis in the last (1c) failed to predict \"another\" as a modifier, which would generate an alignment with the empty string .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 184, |
| "end": 192, |
| "text": "Figure 9", |
| "ref_id": null |
| }, |
| { |
| "start": 264, |
| "end": 273, |
| "text": "Figure 9a", |
| "ref_id": null |
| }, |
| { |
| "start": 811, |
| "end": 820, |
| "text": "Figure 9b", |
| "ref_id": null |
| }, |
| { |
| "start": 1063, |
| "end": 1072, |
| "text": "Figure 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "To better understand the results, a confusion matrix for our inference grammars on the crossvalidation experiments is shown in Figure 10 .2. It reveals that our system is worst at predicting Uncertain inferences. An informal survey of a portion of the data suggests that this is largely due to the alignment/modifier errors discussed above. This is also reflected in the results that use only lexical inference rules to make predictions, which had a minimal effect on the inference performance.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 127, |
| "end": 136, |
| "text": "Figure 10", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Qualitative Analysis", |
| "sec_num": null |
| }, |
| { |
| "text": "We have focused on learning representations for semantic parsing that capture the inferential properties of language. Since the goal of semantic parsing is to generate useful meaning representations, the representations being learned should facilitate entailment and inference. Since entailment is also closely tied to how we evaluate and make decisions about representations, we believe that learning methods for semantic parsing should also be able to use such judgements as supervision to influence and guide the learning. We proposed a general framework based on these ideas, which uses textual inference judgements between pairs of sentences and symbolic reasoning as a tool to learn more precise representations. While our approach uses natural logic (MacCartney and Manning, 2009) as the underlying reasoning engine, other reasoning frameworks with comparable features could be used.", |
| "cite_spans": [ |
| { |
| "start": 757, |
| "end": 787, |
| "text": "(MacCartney and Manning, 2009)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Technically, our natural logic inference system is encoded as a PCFG, in which the background rules of the logic are expressed as probabilistic rewrite rules. Learning in this framework reduces to a probabilistic grammatical inference task, in this case using entailment judgements as the primary supervision. These entailments give indirect clues about a domain's denotational semantics, and can be used to reason about and find gaps in the target meaning representations. While our approach focuses on natural language, it closely relates to work on learning from entailment in the probabilistic logic literature (De Raedt and Kersting, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 629, |
| "end": 644, |
| "text": "Kersting, 2004)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our setup closely follows other work on situated semantic interpretation (as advocated by Mooney (2008) ) and other approaches to semantic parsing that use low-level feedback to learn representations. In real situated learning tasks, however, learners will often find themselves in a situation where they observe two linguistic utterances describing the same situation. Our training setup tries to imitate these types of cases, where being able to reason and learn about entailment directly is essential.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 103, |
| "text": "Mooney (2008)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Since capturing inference is our main goal, we also propose using textual entailment as an evaluation metric for semantic parsing. As reflected in the results, our inference task (i.e., Task 2) is considerably harder than the original semantic parsing evaluation (i.e., Task 1). This is not surprising, given that entailment recognition is in general known to involve considerable amounts of lexical and world knowledge (LoBue and Yates, 2011). Since the difference in performance on the original task is minimal between our base grammars and the inference grammars, one might conclude that the original evaluation does not tell us very much about the quality of the semantic grammar being learned in the same way as our new inference evaluation. We hope that our work pushes others in the direction of using entailment not only as a tool for learning, but for evaluating and comparing the quality of semantic parsers. While our current model only handles simple types of inferences relating to inclusion/exclusion, we believe that our overall approach can be used to tackle more complex entailment phenomena. Future work will focus on extending our method to new datasets and inference phenomena.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion and Conclusion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "http://www.robocup.org/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Capital letters (e.g., E, A, ..) are used as variables in the grammar to refer to sets of symbol types, and x is used to refer to all symbols in the grammar.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We used a version of the elicitation instructions used in the RTE experiments ofSnow et al. (2008)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work was funded by the Deutsche Forschungsgemeinschaft (DFG) via SFB 732, project D2. We thank our anonymous reviewers and the action editor for their helpful comments and feedback. Thanks also to our IMS colleagues, in particular Christian Scheible, for providing feedback on earlier drafts, as well as to Ekaterina Ovchinnikova and Cleo Condoravdi for helpful discussions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Natural Language Understanding", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Allen", |
| "suffix": "" |
| } |
| ], |
| "year": 1987, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Allen. 1987. Natural Language Understanding. Benjamin/Cummings Publishing Company, Inc.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Three robocup simulation league commentator systems", |
| "authors": [ |
| { |
| "first": "Elisabeth", |
| "middle": [], |
| "last": "Andr\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Kim", |
| "middle": [], |
| "last": "Binsted", |
| "suffix": "" |
| }, |
| { |
| "first": "Kumiko", |
| "middle": [], |
| "last": "Tanaka-Ishii", |
| "suffix": "" |
| }, |
| { |
| "first": "Sean", |
| "middle": [], |
| "last": "Luke", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerd", |
| "middle": [], |
| "last": "Herzog", |
| "suffix": "" |
| }, |
| { |
| "first": "Thomas", |
| "middle": [], |
| "last": "Rist", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "21", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Elisabeth Andr\u00e9, Kim Binsted, Kumiko Tanaka-Ishii, Sean Luke, Gerd Herzog, and Thomas Rist. 2000. Three robocup simulation league commentator sys- tems. AI Magazine, 21(1):57.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Semantic parsing as machine translation", |
| "authors": [ |
| { |
| "first": "Jacob", |
| "middle": [], |
| "last": "Andreas", |
| "suffix": "" |
| }, |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Vlachos", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of ACL-2013", |
| "volume": "", |
| "issue": "", |
| "pages": "47--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jacob Andreas, Andreas Vlachos, and Stephen Clark. 2013. Semantic parsing as machine translation. In Proceedings of ACL-2013, pages 47-52.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Naturalli: Natural logic inference for common sense reasoning", |
| "authors": [ |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP-2014", |
| "volume": "", |
| "issue": "", |
| "pages": "534--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabor Angeli and Christopher D Manning. 2014. Nat- uralli: Natural logic inference for common sense reasoning. In Proceedings of EMNLP-2014, pages 534-545.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Parsing time: Learning to interpret time expressions", |
| "authors": [ |
| { |
| "first": "Gabor", |
| "middle": [], |
| "last": "Angeli", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of NAACL-2012", |
| "volume": "", |
| "issue": "", |
| "pages": "446--455", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gabor Angeli, Christopher D Manning, and Daniel Ju- rafsky. 2012. Parsing time: Learning to interpret time expressions. In Proceedings of NAACL-2012, pages 446-455.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Weakly supervised learning of semantic parsers for mapping instructions to actions", |
| "authors": [ |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "1", |
| "issue": "", |
| "pages": "49--62", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yoav Artzi and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping instructions to actions. Transactions of the Associa- tion for Computational Linguistics, 1:49-62.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Semantic parsing using distributional semantics and probabilistic logic", |
| "authors": [ |
| { |
| "first": "Islam", |
| "middle": [], |
| "last": "Beltagy", |
| "suffix": "" |
| }, |
| { |
| "first": "Katrin", |
| "middle": [], |
| "last": "Erk", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL-2014", |
| "volume": "", |
| "issue": "", |
| "pages": "7--12", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Islam Beltagy, Katrin Erk, and Raymond Mooney. 2014. Semantic parsing using distributional seman- tics and probabilistic logic. In Proceedings of ACL- 2014, pages 7-12.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "The seventh Pascal Recognizing Textual Entailment challenge. Proceedings of TAC", |
| "authors": [ |
| { |
| "first": "Luisa", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Ido", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoa", |
| "middle": [], |
| "last": "Dang", |
| "suffix": "" |
| }, |
| { |
| "first": "Danilo", |
| "middle": [], |
| "last": "Giampiccolo", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luisa Bentivogli, Peter Clark, Ido Dagan, Hoa Dang, and Danilo Giampiccolo. 2011. The seventh Pas- cal Recognizing Textual Entailment challenge. Pro- ceedings of TAC, 2011.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Semantic parsing on Freebase from question-answer pairs", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Berant", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Chou", |
| "suffix": "" |
| }, |
| { |
| "first": "Roy", |
| "middle": [], |
| "last": "Frostig", |
| "suffix": "" |
| }, |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of EMNLP-2013", |
| "volume": "", |
| "issue": "", |
| "pages": "1533--1544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of EMNLP- 2013, pages 1533-1544.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Reducing grounded learning tasks to grammatical inference", |
| "authors": [ |
| { |
| "first": "Benjamin", |
| "middle": [], |
| "last": "B\u00f6rschinger", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Bevan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of EMNLP-2011", |
| "volume": "", |
| "issue": "", |
| "pages": "1416--1425", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Benjamin B\u00f6rschinger, Bevan K. Jones, and Mark Johnson. 2011. Reducing grounded learning tasks to grammatical inference. In Proceedings of EMNLP-2011, pages 1416-1425.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Recursive neural networks can learn logical semantics", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Samuel R Bowman", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Potts", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of 3rd Workshop on Continuous Vector Space Models and their Compositionality", |
| "volume": "", |
| "issue": "", |
| "pages": "12--21", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Samuel R Bowman, Christopher Potts, and Christo- pher D Manning. 2014. Recursive neural networks can learn logical semantics. In Proceedings of 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 12-21.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Learning to sportscast: A test of grounded language acquisition", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "David", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ICML-2008", |
| "volume": "", |
| "issue": "", |
| "pages": "128--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David L. Chen and Raymond J. Mooney. 2008. Learn- ing to sportscast: A test of grounded language acqui- sition. In Proceedings of ICML-2008, pages 128- 135.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Training a multilingual sportscaster: Using perceptual context to learn language", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [ |
| "L" |
| ], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Joohyun", |
| "middle": [], |
| "last": "Kim", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Journal of Artificial Intelligence Research", |
| "volume": "37", |
| "issue": "", |
| "pages": "397--435", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David L. Chen, Joohyun Kim, and Raymond J. Mooney. 2010. Training a multilingual sportscaster: Using perceptual context to learn language. Journal of Artificial Intelligence Research, 37:397-435.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Driving semantic parsing from the world's response", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Clarke", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Goldwasser", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming-Wei", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [ |
| "Roth" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of CoNLL-10", |
| "volume": "", |
| "issue": "", |
| "pages": "18--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Clarke, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In Proceedings of CoNLL-10, pages 18-27.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Using the framework", |
| "authors": [ |
| { |
| "first": "Robin", |
| "middle": [], |
| "last": "Cooper", |
| "suffix": "" |
| }, |
| { |
| "first": "Dick", |
| "middle": [], |
| "last": "Crouch", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Van Eijck", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Fox", |
| "suffix": "" |
| }, |
| { |
| "first": "Josef", |
| "middle": [], |
| "last": "Van Genabith", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Jaspars", |
| "suffix": "" |
| }, |
| { |
| "first": "Hans", |
| "middle": [], |
| "last": "Kamp", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milward", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Steve", |
| "middle": [], |
| "last": "Pulman", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "The FraCaS Consortium", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Robin Cooper, Dick Crouch, Jan van Eijck, Chris Fox, Josef van Genabith, Jan Jaspars, Hans Kamp, David Milward, Manfred Pinkal, Massimo Poesio, and Steve Pulman. 1996. Using the framework. Techni- cal Report LRE 62-051 D-16, The FraCaS Consor- tium.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "The PASCAL Recognizing Textual Entailment Challenge", |
| "authors": [ |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Ido Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "Bernardo", |
| "middle": [], |
| "last": "Glickman", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Magnini", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the PASCAL Challenges Workshop on Recognizing Textual Entailment", |
| "volume": "", |
| "issue": "", |
| "pages": "1--9", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ido Dagan, Oren Glickman, and Bernardo Magnini. 2005. The PASCAL Recognizing Textual Entail- ment Challenge. In Proceedings of the PASCAL Challenges Workshop on Recognizing Textual En- tailment, pages 1-9.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Probabilistic inductive logic programming", |
| "authors": [ |
| { |
| "first": "Kristian", |
| "middle": [], |
| "last": "Luc De Raedt", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Kersting", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Algorithmic Learning Theory", |
| "volume": "", |
| "issue": "", |
| "pages": "19--36", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luc De Raedt and Kristian Kersting. 2004. Proba- bilistic inductive logic programming. In Algorith- mic Learning Theory, pages 19-36. Springer.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Order-based inference in natural logic", |
| "authors": [ |
| { |
| "first": "Yaroslav", |
| "middle": [], |
| "last": "Fyodorov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoad", |
| "middle": [], |
| "last": "Winter", |
| "suffix": "" |
| }, |
| { |
| "first": "Nissim", |
| "middle": [], |
| "last": "Francez", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Logic Journal of IGPL", |
| "volume": "11", |
| "issue": "4", |
| "pages": "385--416", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yaroslav Fyodorov, Yoad Winter, and Nissim Francez. 2003. Order-based inference in natural logic. Logic Journal of IGPL, 11(4):385-416.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Learning a semantic parser from spoken utterances", |
| "authors": [ |
| { |
| "first": "Judith", |
| "middle": [], |
| "last": "Gaspers", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Cimiano", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of IEEE-ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "3201--3205", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Judith Gaspers and Philipp Cimiano. 2014. Learning a semantic parser from spoken utterances. In Pro- ceedings of IEEE-ICASSP, pages 3201-3205.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Better k-best parsing", |
| "authors": [ |
| { |
| "first": "Liang", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of IWPT-2005", |
| "volume": "", |
| "issue": "", |
| "pages": "53--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang Huang and David Chiang. 2005. Better k-best parsing. In Proceedings of IWPT-2005, pages 53- 64.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Inclusion and exclusion in natural language", |
| "authors": [ |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "Thomas F Icard", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Studia Logica", |
| "volume": "100", |
| "issue": "4", |
| "pages": "705--725", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thomas F Icard III. 2012. Inclusion and exclusion in natural language. Studia Logica, 100(4):705-725.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Synergies in learning words and their referents", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "Katherine", |
| "middle": [], |
| "last": "Demuth", |
| "suffix": "" |
| }, |
| { |
| "first": "Bevan", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael J", |
| "middle": [], |
| "last": "Black", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings NIPS-2010", |
| "volume": "", |
| "issue": "", |
| "pages": "1018--1026", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Johnson, Katherine Demuth, Bevan Jones, and Michael J Black. 2010. Synergies in learning words and their referents. In Proceedings NIPS- 2010, pages 1018-1026.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Semantic parsing with Bayesian Tree Transducers", |
| "authors": [ |
| { |
| "first": "Keeley", |
| "middle": [], |
| "last": "Bevan", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of ACL-2012", |
| "volume": "", |
| "issue": "", |
| "pages": "488--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bevan Keeley Jones, Mark Johnson, and Sharon Gold- water. 2012. Semantic parsing with Bayesian Tree Transducers. In Proceedings of ACL-2012, pages 488-496.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Weakly supervised training of semantic parsers", |
| "authors": [ |
| { |
| "first": "Jayant", |
| "middle": [], |
| "last": "Krishnamurthy", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Tom", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of EMNLP/CoNLL-2012", |
| "volume": "", |
| "issue": "", |
| "pages": "754--765", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jayant Krishnamurthy and Tom M Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of EMNLP/CoNLL-2012, pages 754- 765.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Learning to automatically solve algebra word problems", |
| "authors": [ |
| { |
| "first": "Nate", |
| "middle": [], |
| "last": "Kushman", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoav", |
| "middle": [], |
| "last": "Artzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Regina", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL-2014", |
| "volume": "", |
| "issue": "", |
| "pages": "271--281", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nate Kushman, Yoav Artzi, Luke Zettlemoyer, and Regina Barzilay. 2014. Learning to automatically solve algebra word problems. In Proceedings of ACL-2014, pages 271-281.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Inducing probabilistic CCG grammars from logical form with higherorder unification", |
| "authors": [ |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Kwiatkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Sharon", |
| "middle": [], |
| "last": "Goldwater", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Steedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of EMNLP-2010", |
| "volume": "", |
| "issue": "", |
| "pages": "1223--1233", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tom Kwiatkowski, Luke Zettlemoyer, Sharon Goldwa- ter, and Mark Steedman. 2010. Inducing probabilis- tic CCG grammars from logical form with higher- order unification. In Proceedings of EMNLP-2010, pages 1223-1233.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Learning dependency-based compositional semantics", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL-2011", |
| "volume": "", |
| "issue": "", |
| "pages": "590--599", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Percy Liang, Michael I. Jordan, and Dan Klein. 2011. Learning dependency-based compositional seman- tics. In Proceedings of ACL-2011, pages 590-599.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Types of common-sense knowledge needed for Recognizing Textual Entailment", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Lobue", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Yates", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL/HLT-2011", |
| "volume": "", |
| "issue": "", |
| "pages": "329--334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter LoBue and Alexander Yates. 2011. Types of common-sense knowledge needed for Recognizing Textual Entailment. In Proceedings of ACL/HLT- 2011, pages 329-334.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Modeling semantic containment and exclusion in natural language inference", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Maccartney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of COLING-2008", |
| "volume": "", |
| "issue": "", |
| "pages": "521--528", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney and Christopher Manning. 2008. Modeling semantic containment and exclusion in natural language inference. In Proceedings of COLING-2008, pages 521-528.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "An extended model of natural logic", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Maccartney", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Christopher", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the eighth international conference on computational semantics", |
| "volume": "", |
| "issue": "", |
| "pages": "140--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney and Christopher D Manning. 2009. An extended model of natural logic. In Proceedings of the eighth international conference on computa- tional semantics, pages 140-156.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Natural Language Inference", |
| "authors": [ |
| { |
| "first": "Bill", |
| "middle": [], |
| "last": "Maccartney", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bill MacCartney. 2009. Natural Language Infer- ence. Ph.D. thesis, Department of Computer Sci- ence, Stanford University.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Universal grammar. Theoria", |
| "authors": [ |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Montague", |
| "suffix": "" |
| } |
| ], |
| "year": 1970, |
| "venue": "", |
| "volume": "36", |
| "issue": "", |
| "pages": "373--398", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Richard Montague. 1970. Universal grammar. Theo- ria, 36(3):373-398.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Learning to connect language and perception", |
| "authors": [ |
| { |
| "first": "Ray", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of AAAI-2008", |
| "volume": "", |
| "issue": "", |
| "pages": "1598--1601", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ray Mooney. 2008. Learning to connect language and perception. In Proceedings of AAAI-2008, pages 1598-1601.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Natural logic and semantics", |
| "authors": [ |
| { |
| "first": "Lawrence", |
| "middle": [], |
| "last": "Moss", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of 17th Amsterdam Colloquium", |
| "volume": "", |
| "issue": "", |
| "pages": "71--80", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lawrence Moss. 2010. Natural logic and seman- tics. In Proceedings of 17th Amsterdam Colloquium, pages 71-80.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Semantic representation", |
| "authors": [ |
| { |
| "first": "Lenhart", |
| "middle": [], |
| "last": "Schubert", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of AAAI-2015", |
| "volume": "", |
| "issue": "", |
| "pages": "4132--4139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lenhart Schubert. 2015. Semantic representation. In Proceedings of AAAI-2015, pages 4132-4139.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks", |
| "authors": [ |
| { |
| "first": "Rion", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "Brendan", |
| "middle": [], |
| "last": "O'Connor", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew Y", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP-2008", |
| "volume": "", |
| "issue": "", |
| "pages": "254--263", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rion Snow, Brendan O'Connor, Daniel Jurafsky, and Andrew Y Ng. 2008. Cheap and fast - but is it good? Evaluating non-expert annotations for natural language tasks. In Proceedings of EMNLP-2008, pages 254-263.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Logical inference on dependency-based compositional semantics", |
| "authors": [ |
| { |
| "first": "Ran", |
| "middle": [], |
| "last": "Tian", |
| "suffix": "" |
| }, |
| { |
| "first": "Yusuke", |
| "middle": [], |
| "last": "Miyao", |
| "suffix": "" |
| }, |
| { |
| "first": "Takuya", |
| "middle": [], |
| "last": "Matsuzaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of ACL-2014", |
| "volume": "", |
| "issue": "", |
| "pages": "79--89", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ran Tian, Yusuke Miyao, and Takuya Matsuzaki. 2014. Logical inference on dependency-based compositional semantics. In Proceedings of ACL-2014, pages 79-89.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "A brief history of natural logic", |
| "authors": [ |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Van Benthem", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johan van Benthem. 2008. A brief history of natural logic. Technical Report PP-2008-05, ILLC, University of Amsterdam.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Morpho-syntactic lexical generalization for CCG semantic parsing", |
| "authors": [ |
| { |
| "first": "Adrienne", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Tom", |
| "middle": [], |
| "last": "Kwiatkowski", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of EMNLP-2014", |
| "volume": "", |
| "issue": "", |
| "pages": "1284--1295", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Adrienne Wang, Tom Kwiatkowski, and Luke Zettlemoyer. 2014. Morpho-syntactic lexical generalization for CCG semantic parsing. In Proceedings of EMNLP-2014, pages 1284-1295.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Learning for semantic parsing with statistical machine translation", |
| "authors": [ |
| { |
| "first": "Yuk", |
| "middle": [ |
| "Wah" |
| ], |
| "last": "Wong", |
| "suffix": "" |
| }, |
| { |
| "first": "Raymond", |
| "middle": [ |
| "J" |
| ], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of HLT/NAACL-2006", |
| "volume": "", |
| "issue": "", |
| "pages": "439--446", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yuk Wah Wong and Raymond J. Mooney. 2006. Learning for semantic parsing with statistical machine translation. In Proceedings of HLT/NAACL-2006, pages 439-446.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Learning context-dependent mappings from sentences to logical form", |
| "authors": [ |
| { |
| "first": "Luke", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Collins", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of ACL-2009", |
| "volume": "", |
| "issue": "", |
| "pages": "976--984", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Luke S. Zettlemoyer and Michael Collins. 2009. Learning context-dependent mappings from sentences to logical form. In Proceedings of ACL-2009, pages 976-984.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Transactions of the Association for Computational Linguistics, vol. 4, pp. 155-168, 2016. Action Editor: Mark Steedman. Submission batch: 10/2015; Revision batch: 2/2016; Published 5/2016. c 2016 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF2": { |
| "text": "", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF3": { |
| "text": "Example sentence pairs and semantic representations with textual inference judgements. Na\u00efve entailments are a type of closed-world assumption that results from matching semantic representations and assigning an entailment for matches and a contradiction otherwise.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF6": { |
| "text": "", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF7": { |
| "text": "The top are rule templates for building a semantic grammar with examples from the Sportscaster domain. The bottom are templates for encoding natural logic inference rules as grammar rules. Rules in {.} are expanded to all orders. Example derivations are shown on the right (some derivations are collapsed using dashed lines).", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF8": { |
| "text": "x \u2229 y = \u2205, x \u222a y \u2260 D. Entail: {\u2261, \u2291}, Contr.: {|}, Unknown: {#, \u2292}. pass to \u2292 bad pass to. Join", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF9": { |
| "text": "(arg1 \u2a1d #play-intr.) = # = Uncertain. join (\u2292c \u2a1d \u2291play-intran.) = #play-intr.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF10": { |
| "text": "The Sportscaster corpus consists of 4 simulated Robocup soccer games annotated with human commentary. The English portion includes 1872 sentences paired with sets of logical meaning representations. On average, each training instance is paired with 2.3 meaning representations. The representations have 46 different types of concepts, consisting of 22 entity types and 24 event (and event-like) predicate types.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF11": { |
| "text": "Example semantic parse trees (1,2) before (a) and after (b) training on the extended corpus. Example 3 shows two senses learned for the kick relation.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "FIGREF14": { |
| "text": "Example construction-based (a) and lexical-based (b) inferences (both defined in Section 2.2) taken from parts of proof trees learned during our experiments. While inferences are computed between semantic concept symbols (e.g., purple4,pass,v c ), the generalizations in a. show how such structures can be used to generate lexicalized inference rules.sense/context error: 1a. t : Pink 9 shoots h : Pink 9 shoots for the goal z 0 : Entail (predicted: Uncertain) semantic parse error: 1b. t : Purple 8 steals the ball back h : Purple 8 steals the ball from pink 6 z 0 : Uncertain (predicted: Contr.) alignment/modifier error: 1c. t : A goal for the purple team. h : And the purple team scored another goal z 0 : Uncertain (predicted: Entail) Example inference pairs where our system fails (1a-c). A confusion matrix is shown in 2.", |
| "type_str": "figure", |
| "uris": null, |
| "num": null |
| }, |
| "TABREF4": { |
| "text": "", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>: Results on the semantic parsing (top) and</td></tr><tr><td>inference (bottom) cross validation experiments.</td></tr></table>" |
| } |
| } |
| } |
| } |