{
"paper_id": "Q14-1030",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:11:13.349932Z"
},
"title": "Large-scale Semantic Parsing without Question-Answer Pairs",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": "siva.reddy@ed.ac.uk"
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {
"addrLine": "10 Crichton Street",
"postCode": "EH8 9AB",
"settlement": "Edinburgh"
}
},
"email": "steedman@inf.ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the FREE917 and WEBQUESTIONS benchmark datasets show our semantic parser improves over the state of the art.",
"pdf_parse": {
"paper_id": "Q14-1030",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper we introduce a novel semantic parsing approach to query Freebase in natural language without requiring manual annotations or question-answer pairs. Our key insight is to represent natural language via semantic graphs whose topology shares many commonalities with Freebase. Given this representation, we conceptualize semantic parsing as a graph matching problem. Our model converts sentences to semantic graphs using CCG and subsequently grounds them to Freebase guided by denotations as a form of weak supervision. Evaluation experiments on a subset of the FREE917 and WEBQUESTIONS benchmark datasets show our semantic parser improves over the state of the art.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Querying a database to retrieve an answer, telling a robot to perform an action, or teaching a computer to play a game are tasks requiring communication with machines in a language interpretable by them. Semantic parsing addresses the specific task of learning to map natural language (NL) to machine interpretable formal meaning representations. Traditionally, sentences are converted into logical form grounded in the symbols of some fixed ontology or relational database.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Approaches for learning semantic parsers have been for the most part supervised, using annotated training data consisting of sentences and their corresponding logical forms (Zelle and Mooney, 1996; Zettlemoyer and Collins, 2005; Wong and Mooney, 2007; Kwiatkowski et al., 2010) . More recently, alternative forms of supervision have been proposed to alleviate the annotation burden, e.g., by learning from conversational logs (Artzi and Zettlemoyer, 2011) , from sentences paired with system behavior (Chen and Mooney, 2011; Goldwasser and Roth, Question What is the capital of Texas? Logical Form \u03bbx. city(x) \u2227 capital(x, Texas) Answer {Austin} Figure 1 : An example question with annotated logical query, and its answer.",
"cite_spans": [
{
"start": 173,
"end": 197,
"text": "(Zelle and Mooney, 1996;",
"ref_id": "BIBREF35"
},
{
"start": 198,
"end": 228,
"text": "Zettlemoyer and Collins, 2005;",
"ref_id": "BIBREF38"
},
{
"start": 229,
"end": 251,
"text": "Wong and Mooney, 2007;",
"ref_id": "BIBREF33"
},
{
"start": 252,
"end": 277,
"text": "Kwiatkowski et al., 2010)",
"ref_id": "BIBREF24"
},
{
"start": 426,
"end": 455,
"text": "(Artzi and Zettlemoyer, 2011)",
"ref_id": "BIBREF0"
},
{
"start": 501,
"end": 524,
"text": "(Chen and Mooney, 2011;",
"ref_id": "BIBREF9"
},
{
"start": 525,
"end": 554,
"text": "Goldwasser and Roth, Question",
"ref_id": null
}
],
"ref_spans": [
{
"start": 646,
"end": 654,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "2011; Artzi and Zettlemoyer, 2013) , via distant supervision (Krishnamurthy and Mitchell, 2012; Cai and Yates, 2013) , from questions Poon, 2013; Fader et al., 2013) , and questionanswer pairs (Clarke et al., 2010; Liang et al., 2011) . Indeed, methods which learn from question-answer pairs have been gaining momentum as a means of scaling semantic parsers to large, open-domain problems (Kwiatkowski et al., 2013; Berant et al., 2013; Berant and Liang, 2014; Yao and Van Durme, 2014) . Figure 1 shows an example of a question, its annotated logical form, and answer (or denotation). In this paper, we build a semantic parser that does not require example annotations or question-answer pairs but instead learns from a large knowledge base (KB) and web-scale corpora. Specifically, we exploit Freebase, a large community-authored knowledge base that spans many sub-domains and stores real world facts in graphical format, and parsed sentences from a large corpus. We formulate semantic parsing as a graph matching problem. We convert the output of an open-domain combinatory categorial grammar (CCG) parser (Clark and Curran, 2007) into a graphical representation and subsequently map it onto Freebase. The parser's graphs (also called ungrounded graphs) are mapped to all possible Freebase subgraphs (also called grounded graphs) by replacing edges and nodes with relations and types in Freebase. Each grounded graph corresponds to a unique grounded logical query. During learning, our semantic parser is trained to identify which KB subgraph best corresponds to the NL graph. Problem-capital(Austin) \u2227 UNIQUE(Austin) \u2227 capital.of.arg1(e, Austin) \u2227 capital.of.arg2(e, Texas) atically, ungrounded graphs may give rise to many grounded graphs. Since we do not make use of manual annotations of sentences or question-answer pairs, we do not know which grounded graphs are correct. 
To overcome this, we rely on comparisons between denotations of natural language queries and related Freebase queries as a form of weak supervision in order to learn the mapping between NL and KB graphs. Figure 2 illustrates our approach for the sentence Austin is the capital of Texas. From the CCG syntactic derivation (which we omit here for the sake of brevity) we obtain a semantic parse (Figure 2a ) and convert it to an ungrounded graph ( Figure 2b ). Next, we select an entity from the graph and replace it with a variable x, creating a graph corresponding to the query What is the capital of Texas? (Figure 2c ). The math function UNIQUE on Austin in Figure 2b indi-cates Austin is the only value of x which can satisfy the query graph in Figure 2c . Therefore, the denotation 1 of the NL query graph is {AUSTIN}. Figure 2d shows two different groundings of the query graph in the Freebase KB. We obtain these by replacing edges and nodes in the query graph with Freebase relations and types. We use the denotation of the NL query as a form of weak supervision to select the best grounded graph. Under the constraint that the denotation of a Freebase query should be the same as the denotation of the NL query, the graph on the left hand-side of Figure 2d is chosen as the correct grounding.",
"cite_spans": [
{
"start": 6,
"end": 34,
"text": "Artzi and Zettlemoyer, 2013)",
"ref_id": "BIBREF1"
},
{
"start": 61,
"end": 95,
"text": "(Krishnamurthy and Mitchell, 2012;",
"ref_id": "BIBREF22"
},
{
"start": 96,
"end": 116,
"text": "Cai and Yates, 2013)",
"ref_id": "BIBREF8"
},
{
"start": 134,
"end": 145,
"text": "Poon, 2013;",
"ref_id": "BIBREF30"
},
{
"start": 146,
"end": 165,
"text": "Fader et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 193,
"end": 214,
"text": "(Clarke et al., 2010;",
"ref_id": "BIBREF13"
},
{
"start": 215,
"end": 234,
"text": "Liang et al., 2011)",
"ref_id": "BIBREF26"
},
{
"start": 389,
"end": 415,
"text": "(Kwiatkowski et al., 2013;",
"ref_id": "BIBREF23"
},
{
"start": 416,
"end": 436,
"text": "Berant et al., 2013;",
"ref_id": "BIBREF3"
},
{
"start": 437,
"end": 460,
"text": "Berant and Liang, 2014;",
"ref_id": "BIBREF4"
},
{
"start": 461,
"end": 485,
"text": "Yao and Van Durme, 2014)",
"ref_id": "BIBREF34"
},
{
"start": 1108,
"end": 1132,
"text": "(Clark and Curran, 2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 488,
"end": 496,
"text": "Figure 1",
"ref_id": null
},
{
"start": 2084,
"end": 2092,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 2273,
"end": 2283,
"text": "(Figure 2a",
"ref_id": "FIGREF1"
},
{
"start": 2326,
"end": 2335,
"text": "Figure 2b",
"ref_id": "FIGREF1"
},
{
"start": 2488,
"end": 2498,
"text": "(Figure 2c",
"ref_id": "FIGREF1"
},
{
"start": 2540,
"end": 2549,
"text": "Figure 2b",
"ref_id": "FIGREF1"
},
{
"start": 2628,
"end": 2637,
"text": "Figure 2c",
"ref_id": "FIGREF1"
},
{
"start": 2703,
"end": 2712,
"text": "Figure 2d",
"ref_id": "FIGREF1"
},
{
"start": 3135,
"end": 3144,
"text": "Figure 2d",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experimental results on two benchmark datasets consisting of questions to Freebase -FREE917 (Cai and Yates, 2013) and WEBQUESTIONS (Berant et al., 2013) -show that our semantic parser improves over state-of-the-art approaches. Our contributions include: a novel graph-based method to convert natural language sentences to grounded semantic parses which exploits the similarities in the topology of knowledge graphs and linguistic structure, together with the ability to train using a wide range of features; a proposal to learn from a large scale web corpus, without question-answer pairs, based on denotations of queries from natural language statements as weak supervision; and the development of a scalable semantic parser which besides Freebase uses CLUEWEB09 for training, a corpus of 503.9 million webpages. Our semantic parser can be downloaded from http://sivareddy.in/ downloads.",
"cite_spans": [
{
"start": 92,
"end": 113,
"text": "(Cai and Yates, 2013)",
"ref_id": "BIBREF8"
},
{
"start": 131,
"end": 152,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our goal is to build a semantic parser which maps a natural language sentence to a logical form that can be executed against Freebase. We begin with CLUEWEB09, a web-scale corpus automatically annotated with Freebase entities (Gabrilovich et al., 2013) . We extract the sentences containing at least two entities linked by a relation in Freebase. We parse these sentences using a CCG syntactic parser, and build semantic parses from the syntactic output. Semantic parses are then converted to semantic graphs which are subsequently grounded to Freebase. Grounded graphs can be easily converted to a KB query deterministically. During training we learn which grounded graphs correspond best to the natural language input. In the following, we provide a brief introduction to Freebase and its graph structure. Next, we explain how we obtain semantic parses from CCG (Section 2.2), how we convert them to graphs (Section 2.3), and ground them in Freebase (Section 2.4). Section 3 presents our learning algorithm.",
"cite_spans": [
{
"start": 226,
"end": 252,
"text": "(Gabrilovich et al., 2013)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Framework",
"sec_num": "2"
},
{
"text": "Freebase consists of 42 million entities and 2.5 billion facts. A fact is defined by a triple containing two entities and a relation between them. Entities represent real world concepts, and edges represent relations, thus forming a graph-like structure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Freebase Knowledge Graph",
"sec_num": "2.1"
},
{
"text": "A Freebase subgraph is shown in Figure 3 with Figure 3 with the same identifiers e.g., m and n). For reasons of uniformity, we assume that simple facts are also represented via mediator nodes and split single edges into two with each subedge going from the mediator node to the target node (see person.nationality.arg1 and person.nationality.arg2 in Figure 3) . Finally, Freebase also has entity types defining is-a relations. In Figure 3 types are represented by rounded rectangles (e.g., BARACK OBAMA is of type US president, and COLUMBIA UNIVERSITY is of type education.university).",
"cite_spans": [],
"ref_spans": [
{
"start": 32,
"end": 40,
"text": "Figure 3",
"ref_id": null
},
{
"start": 46,
"end": 54,
"text": "Figure 3",
"ref_id": null
},
{
"start": 350,
"end": 359,
"text": "Figure 3)",
"ref_id": null
},
{
"start": 430,
"end": 438,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Freebase Knowledge Graph",
"sec_num": "2.1"
},
{
"text": "The graph like structure of Freebase inspires us to create a graph like structure for natural language, and learn a mapping between them. To do this we take advantage of the representational power of Combinatory Categorial Grammar (Steedman, 2000) . CCG is a linguistic formalism that tightly couples syntax and semantics, and can be used to model a wide range of language phenom- ena. CCG is well known for capturing long-range dependencies inherent in constructions such as coordination, extraction, raising and control, as well as standard local predicate-argument dependencies (Clark et al., 2002) , thus supporting wide-coverage semantic analysis. Moreover, due to the transparent interface between syntax and semantics, it is relatively straightforward to built a semantic parse for a sentence from its corresponding syntactic derivation tree (Bos et al., 2004) .",
"cite_spans": [
{
"start": 231,
"end": 247,
"text": "(Steedman, 2000)",
"ref_id": "BIBREF32"
},
{
"start": 581,
"end": 601,
"text": "(Clark et al., 2002)",
"ref_id": "BIBREF12"
},
{
"start": 849,
"end": 867,
"text": "(Bos et al., 2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "In our case, the choice of syntactic parser is motivated by the scale of our problem; the parser must be broad-coverage and robust enough to handle a web-sized corpus. For these reasons, we rely on the C&C parser (Clark and Curran, 2004) , a generalpurpose CCG parser, to obtain syntactic derivations. To our knowledge, we present the first attempt to use a CCG parser trained on treebanks for grounded semantic parsing. Most previous work has induced task-specific CCG grammars Collins, 2005, 2007; Kwiatkowski et al., 2010 ). An example CCG derivation is shown in Figure 4 .",
"cite_spans": [
{
"start": 213,
"end": 237,
"text": "(Clark and Curran, 2004)",
"ref_id": "BIBREF10"
},
{
"start": 479,
"end": 499,
"text": "Collins, 2005, 2007;",
"ref_id": null
},
{
"start": 500,
"end": 524,
"text": "Kwiatkowski et al., 2010",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 566,
"end": 574,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "Semantic parses are constructed from syntactic CCG parses, with semantic composition being guided by the CCG syntactic derivation. 2 We use a neo-Davidsonian (Parsons, 1990) semantics to represent semantic parses. 3 Each word has a semantic category based on its syntactic category and part of speech. For example, the syntactic category for directed is (S\\NP)/NP, i.e., it 2 See Bos et al. (2004) for a detailed introduction to semantic representation using CCG. 3 Neo-Davidsonian semantics is a form of first-order logic that uses event identifiers (e) to connect verb predicates and their subcategorized arguments through conjunctions. takes two argument NPs and becomes S. To represent its semantic category, we use a lambda term \u03bby\u03bbx. directed.arg1(e, x) \u2227 directed.arg2(e, y), where e identifies the event of directed, and x and y are arguments corresponding to the NPs in the syntactic category. We obtain semantic categories automatically using the indexed syntactic categories provided by the C&C parser.",
"cite_spans": [
{
"start": 131,
"end": 132,
"text": "2",
"ref_id": null
},
{
"start": 158,
"end": 173,
"text": "(Parsons, 1990)",
"ref_id": "BIBREF29"
},
{
"start": 380,
"end": 397,
"text": "Bos et al. (2004)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "The latter reveal the bindings of basic constituent categories in more complex categories. For example, in order to convert ((S\\NP)\\(S\\NP))/NP to its semantic category, we must know whether all NPs have the same referent and thus use the same variable name. The indexed category ((S e \\NP x )\\(S e \\NP x ))/NP y reveals that there are only two different NPs, x and y, and that one of them (i.e., x) is shared across two subcategories. We discuss the details of semantic category construction in the Appendix.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "Apart from n-ary predicates representing events (mostly verbs), we also use unary predicates representing types in language (mostly common nouns and noun modifiers). For example, capital(Austin) indicates Austin is of type capital. Prepositions, adjectives and adverbs are represented by predicates lexicalized with their head words to provide more information (see capital.of.arg1 instead of of.arg1 in Figure 2a ).",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 413,
"text": "Figure 2a",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Combinatory Categorial Grammar",
"sec_num": "2.2"
},
{
"text": "We will now illustrate how we create ungrounded semantic graphs from CCG-derived semantic parses. Figure 5a displays the ungrounded graph for the sen- Entity Nodes (Rectangles) Entity nodes are denoted by rectangles and represent entities e.g., Cameron in Figure 5a . In cases where an entity is not known, we use variables e.g., x in Figure 6a . Entity variables are connected to their corresponding word nodes from which they originate by dotted links e.g., x in Figure 6a is connected to the word node who.",
"cite_spans": [],
"ref_spans": [
{
"start": 98,
"end": 107,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 256,
"end": 265,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 335,
"end": 344,
"text": "Figure 6a",
"ref_id": "FIGREF5"
},
{
"start": 465,
"end": 474,
"text": "Figure 6a",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Ungrounded Semantic Graphs",
"sec_num": "2.3"
},
{
"text": "Mediator Nodes (Circles) Mediator nodes are denoted by circles and represent events in language. They connect pairs of entities which participate in an event forming a clique (see the entities Cameron, Titanic and 1997 in Figure 5a ). We define an edge as a link that connects any two entities via a mediator. The subedge of an edge i.e., the link between a mediator and an entity, corresponds to the predi-cate denoting the event and taking the entity as its argument (e.g. directed.arg1 links e and Cameron in Figure 5a ). Mediator nodes are connected to their corresponding word nodes from which they originate by dotted links e.g. mediators in Figure 5a are connected to word node directed.",
"cite_spans": [],
"ref_spans": [
{
"start": 222,
"end": 231,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 512,
"end": 521,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 648,
"end": 657,
"text": "Figure 5a",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Ungrounded Semantic Graphs",
"sec_num": "2.3"
},
{
"text": "Type nodes (Rounded rectangles) Type nodes are denoted by rounded rectangles. They represent unary predicates in natural language. In Figure 6b type nodes capital and capital.state are attached to Austin denoting Austin is of type capital and capital.state. Type nodes are also connected to their corresponding word nodes from which they originate by dotted links e.g. type node capital.state and word node state in Figure 6b .",
"cite_spans": [],
"ref_spans": [
{
"start": 134,
"end": 143,
"text": "Figure 6b",
"ref_id": "FIGREF5"
},
{
"start": 416,
"end": 425,
"text": "Figure 6b",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Ungrounded Semantic Graphs",
"sec_num": "2.3"
},
{
"text": "Math nodes (Diamonds) Math nodes are denoted by diamonds. They describe functions to be applied on the nodes/subgraphs they attach to. The function TARGET attaches to the entity variable of interest. For example, the graph in Figure 6a represents the question Who directed The Nutty Professor?. Here, TARGET attaches to x representing the word who. UNIQUE attaches to the entity variable modified by the definite article the. In Figure 6b , UNIQUE attaches to Austin implying that only Austin satisfies the graph. Finally, COUNT attaches to entity nodes which have to be counted. For the sentence Julie Andrews has appeared in 40 movies in Figure 7 , the KB could either link Julie Andrews and 40, with type node movies matching the grounded type integer, or it could link Julie Andrews to each movie she acted in and the count of these different movies add to 40. In anticipation of this ambiguity, we generate two semantic parses resulting in two ungrounded graphs (see Figures 7a and 7b) . We generate all possible grounded graphs corresponding to each ungrounded graph, and leave it up to the learning to decide which ones the KB prefers.",
"cite_spans": [],
"ref_spans": [
{
"start": 226,
"end": 235,
"text": "Figure 6a",
"ref_id": "FIGREF5"
},
{
"start": 429,
"end": 438,
"text": "Figure 6b",
"ref_id": "FIGREF5"
},
{
"start": 640,
"end": 648,
"text": "Figure 7",
"ref_id": "FIGREF6"
},
{
"start": 972,
"end": 990,
"text": "Figures 7a and 7b)",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Ungrounded Semantic Graphs",
"sec_num": "2.3"
},
{
"text": "We ground semantic graphs in Freebase by mapping edge labels to relations, type nodes to entity types, and entity nodes to Freebase entities. Entity nodes Previous approaches (Cai and Yates, 2013; Berant et al., 2013; Kwiatkowski et al., 2013 ) use a manual lexicon or heuristics to ground named entities to Freebase entities. Fortunately, CLUEWEB09 sentences have been automatically annotated with Freebase entities, so we use these annotations to ground proper names to Freebase entities (denoted by uppercase words) e.g., Cameron in Figure 5a is grounded to Freebase entity CAMERON in Figure 5b . we use an automatically constructed lexicon which maps ungrounded types to grounded ones (see Section 4.2 for details).",
"cite_spans": [
{
"start": 175,
"end": 196,
"text": "(Cai and Yates, 2013;",
"ref_id": "BIBREF8"
},
{
"start": 197,
"end": 217,
"text": "Berant et al., 2013;",
"ref_id": "BIBREF3"
},
{
"start": 218,
"end": 242,
"text": "Kwiatkowski et al., 2013",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 536,
"end": 545,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 588,
"end": 597,
"text": "Figure 5b",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Grounded Semantic Graphs",
"sec_num": "2.4"
},
{
"text": "Edges An edge between two entities is grounded using all edges linking the two entities in the knowledge graph. For example, to ground the edge between Titanic and Cameron in Figure 5 , we use the following edges linking TITANIC and CAMERON in Freebase: (film.directed by.arg1, film.directed by.arg2), (film.produced by.arg1, film.produced by.arg2). If only one entity is grounded, we use all possible edges from this grounded entity. If no entity is grounded, we use a mapping lexicon which is automatically created as described in Section 4.2. Given an ungrounded graph with n edges, there are O((k + 1) n ) possible grounded graphs, with k being the grounded edges in the knowledge graph for each ungrounded edge together with an additional empty (no) edge.",
"cite_spans": [],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Grounded Semantic Graphs",
"sec_num": "2.4"
},
{
"text": "Mediator nodes In an ungrounded graph, mediator nodes represent semantic event identifiers. In the grounded graph, they represent Freebase fact identifiers. Fact identifiers help distinguish if neighboring edges belong to a single complex fact, which may or may not be coextensive with an ungrounded event. In Figure 8a , the edges corresponding to the event identifier e are grounded to a single complex fact in Figure 8b , with the fact identifier m. However, in Figure 5a , the edges of the ungrounded event e are grounded to different Freebase facts, distinguished in Figure 5b by the identifiers m and n. Furthermore, the edge in 5a between CAMERON and 1997 is not grounded in 5b, since no Freebase edge exists between the two entities. We convert grounded graphs to SPARQL queries, but for readability we only show logical expressions. The conversion is deterministic and is exactly the inverse of the semantic parse to graph conversion (Section 2.3). Wherever a node/edge is instantiated with a grounded entity/type/relation in Freebase, we use them in the grounded parse (e.g., type node capital.state in Figure 6b becomes location.capital city). Math function TARGET is useful in retrieving instantiations of entity variables of interest (see Figure 6a ).",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 319,
"text": "Figure 8a",
"ref_id": "FIGREF7"
},
{
"start": 413,
"end": 422,
"text": "Figure 8b",
"ref_id": "FIGREF7"
},
{
"start": 465,
"end": 474,
"text": "Figure 5a",
"ref_id": "FIGREF4"
},
{
"start": 572,
"end": 581,
"text": "Figure 5b",
"ref_id": "FIGREF4"
},
{
"start": 1113,
"end": 1122,
"text": "Figure 6b",
"ref_id": "FIGREF5"
},
{
"start": 1252,
"end": 1261,
"text": "Figure 6a",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Grounded Semantic Graphs",
"sec_num": "2.4"
},
{
"text": "A natural language sentence may give rise to several grounded graphs. But only one (or a few) of them will be a faithful representation of the sentence in Freebase. We next describe our algorithm for finding the best Freebase graph for a given sentence, our learning model, and the features it uses.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning",
"sec_num": "3"
},
{
"text": "Freebase has a large number of relations and entities, and as a result there are many possible grounded graphs g for each ungrounded graph u. We construct and score graphs incrementally, traversing each node in the ungrounded graph and matching its edges and types in Freebase. Given a NL sentence s, we construct from its CCG syntactic derivation all corresponding ungrounded graphs u. Using a beam search procedure (described in Section 4.2), we find the best scoring graphs (\u011d,\u00fb), maximizing over different graph configurations (g, u) of s:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "(\u011d,\u00fb) = arg max g,u \u03a6(g, u, s, K B) \u2022 \u03b8 (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "We define the score of (\u011d,\u00fb) as the dot product between a high dimensional feature representation \u03a6 = (\u03a6 1 , . . . \u03a6 m ) and a weight vector \u03b8 (see Section 3.3 for details on the features we employ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "We estimate the weights \u03b8 using the averaged structured perceptron algorithm (Collins, 2002) . As shown in Algorithm 1, the perceptron makes several passes over sentences, and in each iteration it computes the best scoring (\u011d,\u00fb) among the candidate graphs for a given sentence. In line 6, the algorithm updates \u03b8 with the difference (if any) be-Algorithm 1: Averaged Structured Perceptron Input: Training sentences:",
"cite_spans": [
{
"start": 77,
"end": 92,
"text": "(Collins, 2002)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "{s i } N i=1 1 \u03b8 \u2190 0 2 for t \u2190 1 . . . T do 3 for i \u2190 1 . . . N do 4 (\u011d i ,\u00fb i ) = arg max g i ,u i \u03a6(g i , u i , s i , K B) \u2022 \u03b8 5 if (u + i , g + i ) = (\u00fb i ,\u011d i ) then 6 \u03b8 \u2190 \u03b8 + \u03a6(g + i , u + i , s i , K B)\u2212\u03a6(\u011d i ,\u00fb i , s i , K B) 7 return 1 T \u2211 T t=i 1 N \u2211 N i=1 \u03b8 i t",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "tween the feature representations of the best scoring graph (\u011d,\u00fb) and the gold standard graph (g + , u + ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "The goal of the algorithm is to rank gold standard graphs higher than the any other graphs. The final weight vector \u03b8 is the average of weight vectors over T iterations and N sentences. This averaging procedure avoids overfitting and produces more stable results (Collins, 2002 ).",
"cite_spans": [
{
"start": 263,
"end": 277,
"text": "(Collins, 2002",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "As we do not make use of question-answer pairs or manual annotations of sentences, gold standard graphs (g + , u + ) are not available. In the following, we explain how we approximate them by relying on graph denotations as a form of weak supervision.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithm",
"sec_num": "3.1"
},
{
"text": "Let u be an ungrounded semantic graph of s. We select an entity E in u, replace it with a variable x, and make it a target node. Let u + represent the resulting ungrounded graph. Next, we obtain all grounded graphs g + which correspond to u + such that the denotations",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "[[u + ]] K B = [[g + ]] N L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "We use these surrogate graphs g + as gold standard, and the pairs (u + , g + ) for model training. There is considerable latitude in choosing which entity E to replace. This can be done randomly, according to entity frequency, or some other criterion. We found that substituting the entity with the most connections to other entities in the sentence works well in practice. All the entities that can replace x in u + to constitute a valid fact in Freebase will be the denotation of To ensure that graphs u + and g + have the same denotations, we impose the following constraints:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "u + , [[u + ]] N L .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "Constraint 1 If the math function UNIQUE is attached to the entity being replaced in the ungrounded graph, we assume the denotation of u + contains only that entity. For example, in Figure 2b, we replace Austin by x, and thus assume [[u + ] ] N L = {AUSTIN}. 4 Any grounded graph which results in [[g + ]] K B = {AUSTIN} will be considered a surrogate gold graph. This allows us to learn entailment relations, e.g., capital.of should be grounded to location.capital (left hand-side graph in Figure 2d ) and not to location.containedby which results in all locations in Texas (right hand-side graph in Figure 2d ).",
"cite_spans": [
{
"start": 233,
"end": 240,
"text": "[[u + ]",
"ref_id": null
}
],
"ref_spans": [
{
"start": 182,
"end": 188,
"text": "Figure",
"ref_id": null
},
{
"start": 491,
"end": 500,
"text": "Figure 2d",
"ref_id": "FIGREF1"
},
{
"start": 601,
"end": 610,
"text": "Figure 2d",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "Constraint 2 If the target entity node is a number, we select the Freebase graphs with denotation close to this number. For example, in Figure Figure 8b , or must be enumerated as in Figure 7c .",
"cite_spans": [],
"ref_spans": [
{
"start": 136,
"end": 142,
"text": "Figure",
"ref_id": null
},
{
"start": 143,
"end": 152,
"text": "Figure 8b",
"ref_id": "FIGREF7"
},
{
"start": 183,
"end": 192,
"text": "Figure 7c",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "Constraint 3 If the target entity node is a date, we select the grounded graph which results in the smallest set containing the date based on the intuition that most sentences in the data describe specific rather than general events.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
{
"text": "Constraint 4 If none of the above constraints apply to the target entity E, we know E \u2208 [[u + ]] N L , and hence we select the grounded graphs which satisfy E \u2208 [[g + ]] K B as surrogate gold graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Selecting Surrogate Gold Graphs",
"sec_num": "3.2"
},
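The four constraints above can be collected into a single selection routine. The sketch below is illustrative only, not the authors' code: the denotations-as-Python-sets representation, the (graph_id, kb_denotation) candidate format, and the function name are all assumptions, and in the paper the KB denotations come from executing SPARQL queries against Freebase.

```python
# Illustrative sketch of Constraints 1-4 (Section 3.2). All structures
# (denotations as Python sets, candidates as (graph_id, kb_denotation)
# pairs) and the function name are assumptions made for this sketch.

def select_surrogates(nl_denotation, candidates, target_kind, unique=False):
    """Return the candidate grounded graphs usable as surrogate gold."""
    if unique:
        # Constraint 1: UNIQUE on the replaced entity -> the grounded
        # denotation must equal the (singleton) ungrounded one.
        return [g for g, d in candidates if d == nl_denotation]
    if target_kind == 'number':
        # Constraint 2: prefer denotations numerically closest to the
        # target number (singleton sets assumed for simplicity).
        target = next(iter(nl_denotation))
        best = min(abs(next(iter(d)) - target) for _, d in candidates)
        return [g for g, d in candidates
                if abs(next(iter(d)) - target) == best]
    if target_kind == 'date':
        # Constraint 3: smallest denotation set containing the date.
        target = next(iter(nl_denotation))
        containing = [(g, d) for g, d in candidates if target in d]
        smallest = min(len(d) for _, d in containing)
        return [g for g, d in containing if len(d) == smallest]
    # Constraint 4: the replaced entity must appear in the KB denotation.
    entity = next(iter(nl_denotation))
    return [g for g, d in candidates if entity in d]
```

For the capital.of example, unique=True keeps only the location.capital grounding, whereas the plain membership test of Constraint 4 would also admit the location.containedby grounding.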
{
"text": "Our feature vector \u03a6(g, u, s, K B) denotes the features extracted from a sentence s and its corresponding graphs u and g with respect to a knowledge base K B. The elements of the vector (\u03c6 1 , \u03c6 2 , . . . ) take integer values denoting the number of times a feature appeared. We devised the following broad feature classes:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "Lexical alignments Since ungrounded graphs are similar in topology to grounded graphs, we extract ungrounded and grounded edge and type alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "So, from graphs 5a and 5b, we obtain the edge alignment \u03c6 edge (directed.arg1, directed.arg2, film.directed by.arg2, film.directed by.arg1) and the subedge alignments \u03c6 edge (directed.arg1, film.directed by.arg2) and \u03c6 edge (directed.arg2, film.directed by.arg1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
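The edge and sub-edge alignments above can be sketched as counted features. This is a minimal Python illustration under our own conventions: each event's edges are assumed to be (arg1, arg2) label pairs, the tuple encoding of feature names is ours, and the underscored film.directed_by spelling is used purely for readability.

```python
from collections import Counter

# Sketch of the integer-valued phi_edge alignment features of
# Section 3.3: one full edge alignment plus one sub-edge alignment
# per argument slot. The encoding conventions are assumptions.

def alignment_features(u_edge, g_edge):
    """u_edge/g_edge are (arg1_label, arg2_label) pairs for one event."""
    phi = Counter()
    phi[('edge',) + u_edge + g_edge] += 1       # full edge alignment
    for u_arg, g_arg in zip(u_edge, g_edge):
        phi[('subedge', u_arg, g_arg)] += 1     # per-argument alignment
    return phi
```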
{
"text": "In a similar fashion we extract type alignments (e.g., \u03c6 type (capital,location.city)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "Contextual features In addition to lexical alignments, we also use contextual features which essentially record words or word combinations surrounding grounded edge labels. Feature \u03c6 event records an event word and its grounded predicates (e.g., in Figure 7c we extract features \u03c6 event (appear, performance.film) and \u03c6 event (appear, performance.actor).",
"cite_spans": [],
"ref_spans": [
{
"start": 249,
"end": 258,
"text": "Figure 7c",
"ref_id": "FIGREF6"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "Feature \u03c6 arg records a predicate and its argument words (e.g., \u03c6 arg (performance.film, movie) in Figure 7c ). Word combination features are extracted from the parser's dependency output. The feature \u03c6 dep records a predicate and the dependencies of its event word (e.g., from the grounded version of Figure 6b we extract features \u03c6 dep (location.state.capital.arg1, capital, state) and \u03c6 dep (location.state.capital.arg2, capital, state)). Using such features, we are able to handle multiword predicates.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 108,
"text": "Figure 7c",
"ref_id": "FIGREF6"
},
{
"start": 302,
"end": 311,
"text": "Figure 6b",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Features",
"sec_num": "3.3"
},
{
"text": "We count the number of word stems 5 shared by grounded and ungrounded edge labels e.g., in Figure 5 directed.arg1 and film.directed by.arg2 have one stem overlap (ignoring the argument labels arg1 and arg2). For a grounded graph, we compute \u03c6 stem , the aggregate stem overlap count over all its grounded and ungrounded edge labels. We did not incorporate WordNet/Wiktionarybased lexical similarity features but these were found fruitful in Kwiatkowski et al. (2013) . We also have a feature for stem overlap count between the grounded edge labels and the context words.",
"cite_spans": [
{
"start": 441,
"end": 466,
"text": "Kwiatkowski et al. (2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [
{
"start": 91,
"end": 99,
"text": "Figure 5",
"ref_id": "FIGREF4"
}
],
"eq_spans": [],
"section": "Lexical similarity",
"sec_num": null
},
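The stem overlap count can be sketched as below. The sketch assumes dot-separated edge labels with arg1/arg2 suffixes, and its crude 4-character-prefix "stemmer" is only a stand-in for the Porter stemmer the paper uses.

```python
# Sketch of the phi_stem overlap count between an ungrounded and a
# grounded edge label. Label format and the default prefix stemmer
# are assumptions; the paper uses the Porter stemmer.

def stem_overlap(u_label, g_label, stem=lambda w: w[:4]):
    """Number of stems shared by the two labels, ignoring the
    argument markers arg1/arg2."""
    def stems(label):
        words = label.replace('_', '.').split('.')
        return {stem(w) for w in words if not w.startswith('arg')}
    return len(stems(u_label) & stems(g_label))
```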
{
"text": "These features penalize graphs with non-standard topologies. For example, we do not want a final graph with no edges. The feature value \u03c6 hasEdge is one if there exists at least one edge in the graph. We also have a feature \u03c6 nodeCount for counting the number of connected nodes in the graph. Finally, feature \u03c6 colloc captures the collocation of grounded edges (e.g., edges belonging to a single complex fact are likely to cooccur; see Figure 8b ).",
"cite_spans": [],
"ref_spans": [
{
"start": 437,
"end": 446,
"text": "Figure 8b",
"ref_id": "FIGREF7"
}
],
"eq_spans": [],
"section": "Graph connectivity features",
"sec_num": null
},
{
"text": "In this section we present our experimental set-up for assessing the performance of the semantic parser described above. We present the datasets on which our model was trained and tested, discuss implementation details, and briefly introduce the models used for comparison with our approach.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "4"
},
{
"text": "We evaluated our approach on the FREE917 (Cai and Yates, 2013) and WEBQUESTIONS (Berant et al., 2013) datasets. FREE917 consists of 917 questions and their meaning representations (written in a variant of lambda calculus) which we, however, do not use. The dataset represents 81 domains covering 635 Freebase relations, with most domains containing fewer than 10 questions. We report results on three domains, namely film, business, and people as these are relatively large in both FREE917 and Freebase. WEBQUESTIONS consists of 5,810 question-answer pairs, 2,780 of which are reserved for testing. Our experiments used a subset of WEBQUESTIONS representing the three target domains. We extracted domain-specific queries semi-automatically by identifying questionanswer pairs with entities in target domain relations. In both datasets, named entities were disambiguated to Freebase entities with a named entity lexicon. 6 Table 1 presents descriptive statistics for each domain. Evaluating on all domains in Freebase would generate a very large number of queries for which denotations would have to be computed (the number of queries is linear in the number of domains and the size of training data). Our system loads Freebase using Virtuoso 7 and queries it with SPARQL. Virtuoso is slow in dealing with millions of queries indexed on the entire Freebase, and is the only reason we did not work with the complete Freebase.",
"cite_spans": [
{
"start": 41,
"end": 62,
"text": "(Cai and Yates, 2013)",
"ref_id": "BIBREF8"
},
{
"start": 80,
"end": 101,
"text": "(Berant et al., 2013)",
"ref_id": "BIBREF3"
},
{
"start": 920,
"end": 921,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "To train our model, we extracted sentences from CLUEWEB09 which contain at least two entities associated with a relation in Freebase, and have an edge between them in the ungrounded graph. These were further filtered so as to remove sentences which do not yield at least one semantic parse without an uninstantiated entity variable. For example, the sentence Avatar is directed by Cameron would be used for training, whereas Avatar directed by Cameron received a critical review wouldn't. In the latter case, any semantic parse will have an uninstantiated entity variable for review. Table 1 (Train) shows the number of sentences we obtained.",
"cite_spans": [],
"ref_spans": [
{
"start": 584,
"end": 591,
"text": "Table 1",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.2"
},
{
"text": "In order to train our semantic parser, we initialized the alignment and type features (\u03c6 edge and \u03c6 type , respectively) with the alignment lexicon weights. These weights are computed as follows. Let count(r , r) denote the number of pairs of entities which are linked with edge r in Freebase and edge r in CLUEWEB09 sentences. We then estimate the probability distribution P(r /r) = count(r ,r) \u2211 i count(r i ,r) . Analogously, we created a type alignment lexicon. The counts were collected from CLUEWEB09 sentences containing pairs of entities linked with an edge in Freebase (business 390k, film 130k, and people 490k). Contextual features were initialized to \u22121 since most word contexts and grounded predicates/types do not appear together. All other features were set to 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.2"
},
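The lexicon weights are plain relative frequencies and can be sketched in a few lines. The pair data used below is illustrative; in the paper, the counts are collected from CLUEWEB09 sentences whose entity pairs are linked in Freebase.

```python
from collections import Counter

# Relative-frequency estimate of the alignment lexicon weights:
# P(r' | r) = count(r', r) / sum_i count(r'_i, r), where r' is a
# grounded (Freebase) edge and r an ungrounded (CLUEWEB09) edge.

def alignment_probabilities(pairs):
    """pairs: iterable of (grounded_edge r', ungrounded_edge r)."""
    counts = Counter(pairs)                  # (r', r) -> count(r', r)
    totals = Counter(r for _, r in pairs)    # r -> sum_i count(r'_i, r)
    return {(gr, r): c / totals[r] for (gr, r), c in counts.items()}
```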
{
"text": "We used a beam-search algorithm to convert ungrounded graphs to grounded ones. The edges and types of each ungrounded graph are placed in a priority queue. Priority is based on edge/type tf-idf scores collected over CLUEWEB09. At each step, we pop an element from the queue and ground it in Freebase. We rank the resulting grounded graphs us-ing the perceptron model, and pick the n-best ones, where n is the beam size. We continue until the queue is empty. In our experiments we used a beam size of 100. We trained a single model for all the domains combined together. We ran the perceptron for 20 iterations (around 5-10 million queries). At each training iteration we used 6,000 randomly selected sentences from the training corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Implementation",
"sec_num": "4.2"
},
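The grounding procedure can be sketched as a generic beam search. Here `expand` (all Freebase groundings of one edge/type) and `score` (the perceptron model) are assumed hooks, and each item carries its tf-idf priority; none of this is the authors' implementation.

```python
import heapq

# Generic sketch of the beam-search grounding of Section 4.2. Items
# are popped in decreasing tf-idf order; at each step every surviving
# partial grounded graph is extended, and only the `beam` best kept.

def beam_ground(items, expand, score, beam=100):
    """items: (tfidf, edge_or_type) pairs; expand(graph, item) yields
    grounded extensions; score(graph) ranks grounded graphs."""
    queue = [(-tfidf, item) for tfidf, item in items]
    heapq.heapify(queue)                # pop highest tf-idf first
    beams = [()]                        # start from an empty grounding
    while queue:
        _, item = heapq.heappop(queue)
        candidates = [g2 for g in beams for g2 in expand(g, item)]
        beams = sorted(candidates, key=score, reverse=True)[:beam]
    return beams
```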
{
"text": "We compared our graph-based semantic parser (henceforth GRAPHPARSER) against two state-ofthe-art systems both of which are open-domain and work with Freebase. The semantic parser developed by Kwiatkowski et al. (2013) ",
"cite_spans": [
{
"start": 192,
"end": 217,
"text": "Kwiatkowski et al. (2013)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "4.3"
},
{
"text": "(henceforth KCAZ13)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "4.3"
},
{
"text": "is learned from question-answer pairs and follows a two-stage procedure: first, a natural language sentence is converted to a domain-independent semantic parse and then grounded onto Freebase using a set of logical-type equivalent operators. The operators explore possible ways sentential meaning could be expressed in Freebase and essentially transform logical form to match the target ontology. Our approach also has two steps (i.e., we first generate multiple ungrounded graphs and then ground them to different Freebase graphs). We do not use operators to perform structure matching, rather we create multiple graphs and leave it up to the learner to find an appropriate grounding using a rich feature space. To give a specific example, their operator literal to constant is equivalent to having named entities for larger text chunks in our case. Their operator split literal explores different edge possibilities in an event whereas we start with a clique and remove unwanted edges. Our approach has (almost) similar expressive power but is conceptually simpler. Our second comparison system was the semantic parser of Berant and Liang (2014) (henceforth PARASEMPRE) which also uses QA pairs for training and makes use of paraphrasing. Given an input NL sentence, they first construct a set of logical forms based on hand-coded rules, and then generate sentences from each logical form (using generation templates and a lexicon). Pairs of logical forms and natural language are finally scored using a paraphrase model consisting of two components. An association model determines whether they contain phrase pairs likely to be paraphrases and a vector space model assigns a vector representation for each sentence, and learns a scoring function that ranks paraphrase candidates. Our semantic parser employs a graph-based representation as a means of handling the mismatch between natural language, whereas PARASEMPRE opts for a textbased one through paraphrasing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "4.3"
},
{
"text": "Finally, we compared our semantic parser against a baseline which is based on graphs but employs no learning. The baseline converts an ungrounded graph to a grounded one by replacing each ungrounded edge/type with the highest weighted grounded label creating a maximum weighted graph, henceforth MWG. Both GRAPHPARSER and the baseline use the same alignment lexicon (a weighted mapping from ungrounded to grounded labels). Table 2 summarizes our results on FREE917. As described earlier, we evaluated GRAPHPARSER on a subset of the dataset representing three domains (business, film, and people). Since this subset contains a relatively small number of instances (124 in total), we performed 10-fold cross validation with 9 folds as development data 8 , and one fold as test data. We report results averaged over all test folds. With respect to KCAZ13, we present results with their cross-domain trained models, where training data from multiple domains is used to test foreign domains. 9 KCAZ13 used generic features like string similarity and knowledge base features which apply across domains and do not require indomain training data. We do not report results with PARASEMPRE as the small number of training instances would put their method at a disadvantage. We treat a predicted query as correct if its denota- tion is exactly equal to the denotation of the manually annotated gold query. As can be seen, GRAPHPARSER outperforms KCAZ13 and the MWG baseline by a wide margin. This is an encouraging result bearing in mind that our model does not use question-answer pairs. We should also point out that our domain relation set is larger compared to KCAZ13. We do not prune any of the relations in Freebase, whereas KCAZ13 use only 112 relations and 83 types from our three domains (see Table 1 ). We further performed a feature ablation study to examine the contribution of different feature classes. 
As shown in Table 3 , the most important features are those based on lexical similarity, as also observed in KCAZ13. Graph connectivity and lexical alignments are equally important (these features are absent from KCAZ13). Contextual features are not very helpful over and above alignment features which also encode contextual information. Overall, generic features like lexical similarity are helpful only to a certain extent; the performance of GRAPHPARSER improves considerably when additional graph-related features are taken into account.",
"cite_spans": [],
"ref_spans": [
{
"start": 423,
"end": 430,
"text": "Table 2",
"ref_id": null
},
{
"start": 1791,
"end": 1798,
"text": "Table 1",
"ref_id": "TABREF7"
},
{
"start": 1918,
"end": 1925,
"text": "Table 3",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "Comparison Systems",
"sec_num": "4.3"
},
{
"text": "We also analyzed the errors GRAPHPARSER makes. 25% of these are caused by the C&C parser and are cases where it either returns no syntactic analysis or a wrong one. 19% of the errors are due to Freebase inconsistencies. For example, our system answered the question How many stores are in Nittany mall? with 65 using the relation shopping center.number of stores whereas the gold standard provides the answer 25 counting all stores using the relation shopping center.store. Around 15% of errors include structural mismatches between natural language and Freebase; for the question Who is the president of Gap Inc?, our method grounds president to a grounded type whereas in Freebase it is represented as a relation employment.job.title. The remain-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Prec Rec F1 MWG 39.4 34.0 36.5 PARASEMPRE 37.5 37.5 37.5 GRAPHPARSER 41.9 37.0 39.3 GRAPHPARSER + PARA 44.7 38.4 41.3 Table 4 : Experimental results on WEBQUESTIONS.",
"cite_spans": [],
"ref_spans": [
{
"start": 118,
"end": 125,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "ing errors are miscellaneous. For example, the question What are some films on Antarctica? receives two interpretations, i.e., movies filmed in Antarctica or movies with Antarctica as their subject.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "We next discuss our results on WEBQUESTIONS. PARASEMPRE was trained with 1,115 QA pairs (corresponding to our target domains) together with question paraphrases obtained from the PARALEX corpus (Fader et al., 2013) . 10 While training PARASEMPRE, out-of-domain Freebase relations and types were removed. Both GRAPHPARSER and PARASEMPRE were tested on the same set of 570 in-domain QA pairs with exact answer match as the evaluation criterion. For development purposes, GRAPHPARSER uses 200 QA pairs. Table 4 displays our results. We observe that GRAPHPARSER obtains a higher F1 against MWG and PARASEMPRE. Differences in performance among these systems are less pronounced compared to FREE917. This is for a good reason. WEBQUESTIONS is a challenging dataset, created by non-experts. The questions are not tailored to Freebase in any way, they are more varied and display a wider vocabulary. As a result the mismatch between natural language and Freebase is greater and the semantic parsing task harder.",
"cite_spans": [
{
"start": 194,
"end": 214,
"text": "(Fader et al., 2013)",
"ref_id": "BIBREF15"
},
{
"start": 217,
"end": 219,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 500,
"end": 507,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
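The set-based precision/recall/F1 figures of the kind aggregated in Table 4 can be sketched generically. The function below is illustrative only; the paper's criterion for a single prediction is exact match between the predicted and gold denotations.

```python
# Generic set-based precision/recall/F1 between a gold denotation and
# a predicted one. An illustrative helper, not the authors' scorer.

def denotation_f1(gold, predicted):
    if not gold or not predicted:
        return (0.0, 0.0, 0.0)
    tp = len(gold & predicted)
    p = tp / len(predicted)
    r = tp / len(gold)
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return (p, r, f1)
```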
{
"text": "Error analysis further revealed that parsing errors are responsible for 13% of the questions GRAPH-PARSER fails to answer. Another cause of errors is mismatches between natural language and Freebase. Around 7% of the questions are of the type Where did X come from?, and our model answers with the individual's nationality, whereas annotators provide the birthplace (city/town/village) as the right answer. Moreover, 8% of the questions are of the type What does X do?, which the annotators answer with the individual's profession. In natural language, we rarely attest constructions like X does dentist/researcher/actor. The proposed framework assumes that Freebase and natural language are somewhat isomorphic, which is not always true. An obvious future direction would be to paraphrase the questions so as to increase the number of grounded and ungrounded graphs. As an illustration, we rewrote questions like Where did X come from to What is X's birth place, and What did X do to What is X's profession and evaluated our model GRAPHPARSER + PARA. As shown in Table 4 , even simple paraphrasing can boost performance.",
"cite_spans": [],
"ref_spans": [
{
"start": 1064,
"end": 1071,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "Finally, Table 3 (third column) examines the contribution of different features on the WEBQUES-TIONS development dataset. Interestingly, we observe that contextual features are not useful and in fact slightly harm performance. We hypothesize that this is due to the higher degree of mismatch between natural language and Freebase in this dataset. Features based on similarity, graph connectivity, and lexical alignments are more robust and generally useful across datasets.",
"cite_spans": [],
"ref_spans": [
{
"start": 9,
"end": 16,
"text": "Table 3",
"ref_id": "TABREF10"
}
],
"eq_spans": [],
"section": "System",
"sec_num": null
},
{
"text": "In this paper, we introduce a new semantic parsing approach for Freebase. A key idea in our work is to exploit the structural and conceptual similarities between natural language and Freebase through a common graph-based representation. We formalize semantic parsing as a graph matching problem and learn a semantic parser without using annotated question-answer pairs. We have shown how to obtain graph representations from the output of a CCG parser and subsequently learn their correspondence to Freebase using a rich feature set and their denotations as a form of weak supervision. Our parser yields state-of-the art performance on three large Freebase domains and is not limited to question answering. We can create semantic parses for any type of NL sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Our work brings together several strands of research. Graph-based representations of sentential meaning have recently gained some attention in the literature (Banarescu et al., 2013) , and attempts to map sentences to semantic graphs have met with good inter-annotator agreement. Our work is also closely related to Kwiatkowski et al. (2013) and Berant and Liang (2014) who present open-domain se-mantic parsers based on Freebase and trained on QA pairs. Despite differences in formulation and model structure, both approaches have explicit mechanisms for handling the mismatch between natural language and the KB (e.g., using logical-type equivalent operators or paraphrases). The mismatch is handled implicitly in our case via our graphical representation which allows for the incorporation of all manner of powerful features. More generally, our method is based on the assumption that linguistic structure has a correspondence to Freebase structure which does not always hold (e.g., in Who is the grandmother of Prince William?, grandmother is not directly expressed as a relation in Freebase). Additionally, our model fails when questions are too short without any lexical clues (e.g., What did Charles Darwin do? ). Supervision from annotated data or paraphrasing could improve performance in such cases. In the future, we plan to explore cluster-based semantics (Lewis and Steedman, 2013) to increase the robustness on unseen NL predicates.",
"cite_spans": [
{
"start": 158,
"end": 182,
"text": "(Banarescu et al., 2013)",
"ref_id": "BIBREF2"
},
{
"start": 316,
"end": 341,
"text": "Kwiatkowski et al. (2013)",
"ref_id": "BIBREF23"
},
{
"start": 357,
"end": 369,
"text": "Liang (2014)",
"ref_id": "BIBREF4"
},
{
"start": 1368,
"end": 1394,
"text": "(Lewis and Steedman, 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Our work joins others in exploiting the connections between natural language and open-domain knowledge bases. Recent approaches in relation extraction use distant supervision from a knowledge base to predict grounded relations between two target entities (Mintz et al., 2009; Hoffmann et al., 2011; Riedel et al., 2013) . During learning, they aggregate sentences containing the target entities, ignoring richer contextual information. In contrast, we learn from each individual sentence taking into account all entities present, their relations, and how they interact. Krishnamurthy and Mitchell (2012) formalize semantic parsing as a distantly supervised relation extraction problem combined with a manually specified grammar to guide semantic parse composition.",
"cite_spans": [
{
"start": 255,
"end": 275,
"text": "(Mintz et al., 2009;",
"ref_id": "BIBREF28"
},
{
"start": 276,
"end": 298,
"text": "Hoffmann et al., 2011;",
"ref_id": "BIBREF20"
},
{
"start": 299,
"end": 319,
"text": "Riedel et al., 2013)",
"ref_id": "BIBREF31"
},
{
"start": 570,
"end": 603,
"text": "Krishnamurthy and Mitchell (2012)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Finally, our approach learns a model of semantics guided by denotations as a form of weak supervision. Beyond semantic parsing (Artzi and Zettlemoyer, 2013; Liang et al., 2011; Clarke et al., 2010) , feedback-based learning has been previously used for interpreting and following NL instructions (Branavan et al., 2009; Chen and Mooney, 2011) , playing computer games (Branavan et al., 2012) , and grounding language in the physical world (Krishnamurthy and Kollar, 2013; Matuszek et al., 2012 : Rules used to classify words into semantic classes. * represents a wild card expression which matches anything. lex x denotes the lexicalised form of x e.g., when state : NP x /NP x : \u03bbP\u03bbx.lex x .state(x) \u2227 P(x) is applied to capital : NP : \u03bby.capital(y), the lexicalised form of x becomes capital, and therefore the predicate lex x .state becomes capital.state. The resulting semantic parse after application is \u03bbx.capital.state(x) \u2227 capital(x).",
"cite_spans": [
{
"start": 127,
"end": 156,
"text": "(Artzi and Zettlemoyer, 2013;",
"ref_id": "BIBREF1"
},
{
"start": 157,
"end": 176,
"text": "Liang et al., 2011;",
"ref_id": "BIBREF26"
},
{
"start": 177,
"end": 197,
"text": "Clarke et al., 2010)",
"ref_id": "BIBREF13"
},
{
"start": 296,
"end": 319,
"text": "(Branavan et al., 2009;",
"ref_id": "BIBREF6"
},
{
"start": 320,
"end": 342,
"text": "Chen and Mooney, 2011)",
"ref_id": "BIBREF9"
},
{
"start": 368,
"end": 391,
"text": "(Branavan et al., 2012)",
"ref_id": "BIBREF7"
},
{
"start": 439,
"end": 471,
"text": "(Krishnamurthy and Kollar, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 472,
"end": 493,
"text": "Matuszek et al., 2012",
"ref_id": "BIBREF27"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Transactions of the Association for Computational Linguistics, 2 (2014) 377-392. Action Editor: Noah Smith. Submitted 3/2014; Revised 6/2014; Published 10/2014. c 2014 Association for Computational Linguistics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The denotation of a graph is the set of feasible values for the nodes marked with TARGET.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also remove UNIQUE attached to x to exactly mimic the test time setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We use the Porter stemmer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "FREE917 comes with a named entity lexicon. For WEBQUES-TIONS we hand-coded this lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://virtuoso.openlinksw.com",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The development data is only used for model selection and for determining the optimal training iteration.9 We are grateful to Tom Kwiatkowski for supplying us with the output of their system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We used the SEMPRE package (http://www-nlp. stanford.edu/software/sempre/) which does not use any hand-coded entity disambiguation lexicon.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We are grateful to the anonymous reviewers for their valuable feedback on an earlier version of this paper. Thanks to Mike Lewis and the members of ILCC for helpful discussions and comments. We acknowledge the support of EU ERC Advanced Fellowship 249520 GRAMPLUS and EU IST Cognitive Systems IP EC-FP7-270273 \"Xperience\".",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
},
{
"text": "We use a handful of rules to divide words into semantic classes. Based on a word's semantic class and indexed syntactic category, we construct its semantic category automatically. For example, directed is a member of the EVENT class, and its indexed syntactic category is ((S e \\NP x <1>)/NP y <2>) (here, <1> and <2> indicate that x and y are the first and second arguments of e). We then generate its semantic category as \u03bbQ\u03bbP\u03bbe.\u2203x\u2203y.directed.arg1(e, x) \u2227 directed.arg2(e, y) \u2227 P(x) \u2227 Q(y). Please refer to Appendix B of Clark and Curran (2007) for a list of their indexed syntactic categories.The rules are described in Table 5 . Syntactic categories are not shown for the sake of brevity. Most rules will match any syntactic category. Exceptions are copula-related rules (see be in the sixth row) which apply only to the syntactic category (S\\NP)/NP, and rules pertaining to wh -words (see the last two rows in the table). When more than one rule apply, we end up with multiple semantic parses. There are a few cases like passives, question words, and prepositional phrases where we modified the original indexed categories for better interpretation of the semantics (these are not displayed here). We also handle non-standard CCG operators involving unary and binary rules as described in Appendix A of Clark and Curran (2007) .",
"cite_spans": [
{
"start": 523,
"end": 546,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF11"
},
{
"start": 1308,
"end": 1331,
"text": "Clark and Curran (2007)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [
{
"start": 623,
"end": 630,
"text": "Table 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Bootstrapping semantic parsers from conversations",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "421--432",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artzi, Yoav and Luke Zettlemoyer. 2011. Boot- strapping semantic parsers from conversations. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing. Edin- burgh, Scotland, pages 421-432.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Weakly supervised learning of semantic parsers for mapping instructions to actions",
"authors": [
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Transations of the Association for Computational Linguistics",
"volume": "1",
"issue": "1",
"pages": "49--62",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Artzi, Yoav and Luke Zettlemoyer. 2013. Weakly su- pervised learning of semantic parsers for mapping instructions to actions. Transations of the Associ- ation for Computational Linguistics 1(1):49-62.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Abstract meaning representation for sembanking",
"authors": [
{
"first": "Laura",
"middle": [],
"last": "Banarescu",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Bonial",
"suffix": ""
},
{
"first": "Shu",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Madalina",
"middle": [],
"last": "Georgescu",
"suffix": ""
},
{
"first": "Kira",
"middle": [],
"last": "Griffitt",
"suffix": ""
},
{
"first": "Ulf",
"middle": [],
"last": "Hermjakob",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Martha",
"middle": [],
"last": "Palmer",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Schneider",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 7th Linguistic Annotation Workshop and Interoperability with Discourse",
"volume": "",
"issue": "",
"pages": "178--186",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Banarescu, Laura, Claire Bonial, Shu Cai, Madalina Georgescu, Kira Griffitt, Ulf Hermjakob, Kevin Knight, Philipp Koehn, Martha Palmer, and Nathan Schneider. 2013. Abstract meaning repre- sentation for sembanking. In Proceedings of the 7th Linguistic Annotation Workshop and Interop- erability with Discourse. Sofia, Bulgaria, pages 178-186.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Semantic parsing on Freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berant, Jonathan, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Nat- ural Language Processing. Seattle, Washington, USA, pages 1533-1544.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic parsing via paraphrasing",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1415--1425",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Berant, Jonathan and Percy Liang. 2014. Seman- tic parsing via paraphrasing. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Baltimore, Maryland, USA, pages 1415-1425.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Wide-coverage semantic representations from a ccg parser",
"authors": [
{
"first": "Johan",
"middle": [],
"last": "Bos",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of Coling",
"volume": "",
"issue": "",
"pages": "1240--1246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bos, Johan, Stephen Clark, Mark Steedman, James R. Curran, and Julia Hockenmaier. 2004. Wide-coverage semantic representations from a ccg parser. In Proceedings of Coling 2004. Geneva, Switzerland, pages 1240-1246.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Reinforcement learning for mapping instructions to actions",
"authors": [
{
"first": "S",
"middle": [
"R",
"K"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "Harr",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP. Suntec",
"volume": "",
"issue": "",
"pages": "82--90",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Branavan, S.R.K., Harr Chen, Luke Zettlemoyer, and Regina Barzilay. 2009. Reinforcement learn- ing for mapping instructions to actions. In Pro- ceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Process- ing of the AFNLP. Suntec, Singapore, pages 82- 90.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning high-level planning from text",
"authors": [
{
"first": "S",
"middle": [
"R K"
],
"last": "Branavan",
"suffix": ""
},
{
"first": "Nate",
"middle": [],
"last": "Kushman",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Regina",
"middle": [],
"last": "Barzilay",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "126--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Branavan, S.R.K., Nate Kushman, Tao Lei, and Regina Barzilay. 2012. Learning high-level plan- ning from text. In Proceedings of the 50th An- nual Meeting of the Association for Computa- tional Linguistics. Jeju Island, Korea, pages 126- 135.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Largescale semantic parsing via schema matching and lexicon extension",
"authors": [
{
"first": "Qingqing",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Alexander",
"middle": [],
"last": "Yates",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "423--433",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Cai, Qingqing and Alexander Yates. 2013. Large- scale semantic parsing via schema matching and lexicon extension. In Proceedings of the 51st Annual Meeting of the Association for Compu- tational Linguistics. Sofia, Bulgaria, pages 423- 433.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Learning to interpret natural language navigation instructions from observations",
"authors": [
{
"first": "David",
"middle": [
"L"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Raymond",
"middle": [
"J"
],
"last": "Mooney",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 25th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "859--865",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chen, David L. and Raymond J. Mooney. 2011. Learning to interpret natural language navigation instructions from observations. In Proceedings of the 25th AAAI Conference on Artificial Intelli- gence. San Francisco, California, pages 859-865.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Parsing the wsj using CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "103--111",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, Stephen and James R Curran. 2004. Pars- ing the wsj using CCG and log-linear models. In Proceedings of the 42nd Annual Meeting on Asso- ciation for Computational Linguistics. Barcelona, Spain, pages 103-111.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Widecoverage efficient statistical parsing with CCG and log-linear models",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "James",
"middle": [
"R"
],
"last": "Curran",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "4",
"pages": "493--552",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, Stephen and James R Curran. 2007. Wide- coverage efficient statistical parsing with CCG and log-linear models. Computational Linguistics 33(4):493-552.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Building deep dependency structures with a wide-coverage CCG parser",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Clark",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Hockenmaier",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "327--334",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clark, Stephen, Julia Hockenmaier, and Mark Steed- man. 2002. Building deep dependency structures with a wide-coverage CCG parser. In Proceed- ings of the 40th Annual Meeting on Association for Computational Linguistics. pages 327-334.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Driving semantic parsing from the world's response",
"authors": [
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "Ming-Wei",
"middle": [],
"last": "Chang",
"suffix": ""
},
{
"first": "Dan",
"middle": [
"Roth"
],
"last": "",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 14th Conference on Natural Language Learning",
"volume": "",
"issue": "",
"pages": "18--27",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Clarke, James, Dan Goldwasser, Ming-Wei Chang, and Dan Roth. 2010. Driving semantic parsing from the world's response. In Proceedings of the 14th Conference on Natural Language Learning. Uppsala, Sweden, pages 18-27.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Discriminative training methods for Hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Collins, Michael. 2002. Discriminative training methods for Hidden Markov models: Theory and experiments with perceptron algorithms. In Proceedings of the 2002 Conference on Empir- ical Methods in Natural Language Processing. Philadelphia, Pennsylvania, pages 1-8.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Paraphrase-driven learning for open question answering",
"authors": [
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fader, Anthony, Luke Zettlemoyer, and Oren Et- zioni. 2013. Paraphrase-driven learning for open question answering. In Proceedings of the 51st",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "1608--1618",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computa- tional Linguistics. Sofia, Bulgaria, pages 1608- 1618.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "FACC1: Freebase annotation of ClueWeb corpora, Version 1 (Release date 2013-06-26, Format version 1, Correction level 0)",
"authors": [
{
"first": "Evgeniy",
"middle": [],
"last": "Gabrilovich",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Amarnag",
"middle": [],
"last": "Subramanya",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabrilovich, Evgeniy, Michael Ringgaard, and Amarnag Subramanya. 2013. FACC1: Freebase annotation of ClueWeb corpora, Version 1 (Re- lease date 2013-06-26, Format version 1, Correc- tion level 0).",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Confidence driven unsupervised semantic parsing",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "Roi",
"middle": [],
"last": "Reichart",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "1486--1495",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldwasser, Dan, Roi Reichart, James Clarke, and Dan Roth. 2011. Confidence driven unsupervised semantic parsing. In Proceedings of the 49th Annual Meeting of the Association for Compu- tational Linguistics: Human Language Technolo- gies. Portland, Oregon, USA, pages 1486-1495.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning from natural instructions",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Goldwasser",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 22nd International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1794--1800",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Goldwasser, Dan and Dan Roth. 2011. Learning from natural instructions. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence. Barcelona, Spain, pages 1794-1800.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Knowledge-based weak supervision for information extraction of overlapping relations",
"authors": [
{
"first": "Raphael",
"middle": [],
"last": "Hoffmann",
"suffix": ""
},
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "541--550",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hoffmann, Raphael, Congle Zhang, Xiao Ling, Luke S Zettlemoyer, and Daniel S Weld. 2011. Knowledge-based weak supervision for informa- tion extraction of overlapping relations. In Pro- ceedings of the 49th Annual Meeting of the As- sociation for Computational Linguistics: Human Language Technologies. Portland, Oregon, USA, pages 541-550.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Jointly learning to parse and perceive: Connecting natural language to the physical world. Transations of the Association for",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Kollar",
"suffix": ""
}
],
"year": 2013,
"venue": "Computational Linguistics",
"volume": "1",
"issue": "1",
"pages": "193--206",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krishnamurthy, Jayant and Thomas Kollar. 2013. Jointly learning to parse and perceive: Connect- ing natural language to the physical world. Tran- sations of the Association for Computational Lin- guistics 1(1):193-206.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Weakly supervised training of semantic parsers",
"authors": [
{
"first": "Jayant",
"middle": [],
"last": "Krishnamurthy",
"suffix": ""
},
{
"first": "Tom",
"middle": [],
"last": "Mitchell",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "754--765",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Krishnamurthy, Jayant and Tom Mitchell. 2012. Weakly supervised training of semantic parsers. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learn- ing. Jeju Island, Korea, pages 754-765.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Scaling semantic parsers with on-the-fly ontology matching",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Eunsol",
"middle": [],
"last": "Choi",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1545--1556",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwiatkowski, Tom, Eunsol Choi, Yoav Artzi, and Luke Zettlemoyer. 2013. Scaling semantic parsers with on-the-fly ontology matching. In Proceed- ings of the 2013 Conference on Empirical Meth- ods in Natural Language Processing. Seattle, Washington, USA, pages 1545-1556.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Inducing probabilistic CCG grammars from logical form with higher-order unification",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kwiatkowski",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Sharon",
"middle": [],
"last": "Goldwater",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1223--1233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kwiatkowski, Tom, Luke Zettlemoyer, Sharon Goldwater, and Mark Steedman. 2010. Inducing probabilistic CCG grammars from logical form with higher-order unification. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing. pages 1223-1233.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Combined distributional and logical semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "179--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lewis, Mike and Mark Steedman. 2013. Combined distributional and logical semantics. Transactions of the Association for Computational Linguistics 1:179-192.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning dependency-based compositional semantics",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Jordan",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "590--599",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Liang, Percy, Michael Jordan, and Dan Klein. 2011. Learning dependency-based compositional se- mantics. In Proceedings of the 49th Annual Meet- ing of the Association for Computational Linguis- tics: Human Language Technologies. Portland, Oregon, USA, pages 590-599.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A joint model of language and perception for grounded attribute learning",
"authors": [
{
"first": "Cynthia",
"middle": [],
"last": "Matuszek",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Fitzgerald",
"suffix": ""
},
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Liefeng",
"middle": [],
"last": "Bo",
"suffix": ""
},
{
"first": "Dieter",
"middle": [],
"last": "Fox",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 29th International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "1671--1678",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matuszek, Cynthia, Nicholas FitzGerald, Luke Zettlemoyer, Liefeng Bo, and Dieter Fox. 2012. A joint model of language and perception for grounded attribute learning. In Proceedings of the 29th International Conference on Machine Learn- ing. Edinburgh, Scotland, pages 1671-1678.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Distant supervision for relation extraction without labeled data",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Mintz",
"suffix": ""
},
{
"first": "Steven",
"middle": [],
"last": "Bills",
"suffix": ""
},
{
"first": "Rion",
"middle": [],
"last": "Snow",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1003--1011",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mintz, Mike, Steven Bills, Rion Snow, and Dan Jurafsky. 2009. Distant supervision for relation extraction without labeled data. In Proceedings of the Joint conference of the 47th Annual Meet- ing of the Association for Computational Linguis- tics and the 4th International Joint Conference on Natural Language Processing of the Asian Fed- eration of Natural Language Processing. pages 1003-1011.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Events in the Semantics of English",
"authors": [
{
"first": "Terence",
"middle": [],
"last": "Parsons",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Parsons, Terence. 1990. Events in the Semantics of English. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Grounded unsupervised semantic parsing",
"authors": [
{
"first": "Hoifung",
"middle": [],
"last": "Poon",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "933--943",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Poon, Hoifung. 2013. Grounded unsupervised se- mantic parsing. In Proceedings of the 51st An- nual Meeting of the Association for Computa- tional Linguistics. Sofia, Bulgaria, pages 933- 943.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Relation extraction with matrix factorization and universal schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin M",
"middle": [],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Riedel, Sebastian, Limin Yao, Andrew McCallum, and Benjamin M Marlin. 2013. Relation ex- traction with matrix factorization and universal schemas. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. Atlanta, Georgia, pages 74-84.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Steedman, Mark. 2000. The Syntactic Process. The MIT Press.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Learning synchronous grammars for semantic parsing with lambda calculus",
"authors": [
{
"first": "Yuk",
"middle": [
"Wah"
],
"last": "Wong",
"suffix": ""
},
{
"first": "Raymond",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Prague, Czech Republic",
"volume": "",
"issue": "",
"pages": "960--967",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wong, Yuk Wah and Raymond Mooney. 2007. Learning synchronous grammars for semantic parsing with lambda calculus. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics. Prague, Czech Re- public, pages 960-967.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Information extraction over structured data: Question answering with freebase",
"authors": [
{
"first": "Xuchen",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "956--966",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yao, Xuchen and Benjamin Van Durme. 2014. In- formation extraction over structured data: Ques- tion answering with freebase. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. Baltimore, Maryland, USA, pages 956-966.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Learning to parse database queries using inductive logic programming",
"authors": [
{
"first": "John",
"middle": [
"M"
],
"last": "Zelle",
"suffix": ""
},
{
"first": "Raymond J",
"middle": [],
"last": "Mooney",
"suffix": ""
}
],
"year": 1996,
"venue": "Proceedings of the National Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1050--1055",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zelle, John M and Raymond J Mooney. 1996. Learning to parse database queries using induc- tive logic programming. In Proceedings of the National Conference on Artificial Intelligence. Portland, Oregon, pages 1050-1055.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Online learning of relaxed CCG grammars for parsing to logical form",
"authors": [
{
"first": "Luke",
"middle": [],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zettlemoyer, Luke and Michael Collins. 2007. On- line learning of relaxed CCG grammars for pars- ing to logical form. In Proceedings of the 2007",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning. Prague, Czech Republic",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "678--687",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Nat- ural Language Learning. Prague, Czech Repub- lic, pages 678-687.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Learning to map sentences to logical form: Structured classification with probabilistic categorial grammars",
"authors": [
{
"first": "Luke",
"middle": [
"S"
],
"last": "Zettlemoyer",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of 21st Conference in Uncertainilty in Artificial Intelligence. Edinburgh",
"volume": "",
"issue": "",
"pages": "658--666",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zettlemoyer, Luke S. and Michael Collins. 2005. Learning to map sentences to logical form: Struc- tured classification with probabilistic categorial grammars. In Proceedings of 21st Conference in Uncertainilty in Artificial Intelligence. Edin- burgh, Scotland, pages 658-666.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "(a) Semantic parse of the sentence Austin is the capital of Texas. ) \u2227 capital(Austin)\u2227 capital.of.arg1(e, Austin) \u2227 capital.of.arg2(e, Texas)(b) Ungrounded graph for semantic parse (a); UNIQUE means that Austin is the only capital of Texas. \u2227 capital(x) \u2227 capital.of.arg1(e, x) \u2227 capital.of.arg2(e, Texas) {AUSTIN} (c) Query graph after removing Austin from graph (b) and its denotation. ) \u2227 location.city(x) \u2227 location.capital.arg1(m, x) \u2227 location.capital.arg2(m, Texas) ) \u2227 location.city(x) \u2227 location.containedby.arg1(n, x) \u2227 location.containedby.arg2(n, Texas) {AUSTIN} {AUSTIN, DALLAS, HOUSTON . . . } (d) Freebase graphs for NL graph (c) and their denotations.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF1": {
"text": "Steps involved in converting a natural language sentence to a Freebase grounded graph.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF2": {
"text": "Cameron \u03bby\u03bbx. directed.arg1(e, x) Titanic \u2227 directed.arg2(e, y) > S\\N P \u03bbx. directed.arg1(e, x) \u2227 directed.arg2(e, Titanic) < S directed.arg1(e, Cameron) \u2227 directed.arg2(e, Titanic) 1 CCG derivation containing both syntactic and semantic parse construction.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF3": {
"text": "(e, Cameron) \u2227 directed.arg2(e, Titanic) \u2227 directed.in(e, 1997) by.arg2(m, Cameron) \u2227 film.directed by.arg1(m, Titanic) \u2227 film.initial release date.arg1(n, Titanic) \u2227 film.initial release date.arg2(n, 1997) (b) Grounded graph",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF4": {
"text": "Graph representations for the sentence Cameron directed Titanic in 1997.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF5": {
"text": "Ungrounded graphs with math functions TARGET and UNIQUE.tence Cameron directed Titanic in 1997. In order to construct ungrounded graphs topologically similar to Freebase, we define five types of nodes:Word Nodes (Ovals) Word nodes are denoted by ovals. They represent natural language words (e.g., directed inFigure 5a, capital and state inFigure 6b). Word nodes are connected to other word nodes via syntactic dependencies. For readability, we do not show inter-word dependencies.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF6": {
"text": "Graph representations for the sentence Julie Andrews has appeared in 40 movies. Ungrounded graph (a) directly connects Julie Andrews and 40, whereas graph (b) uses the math function COUNT. Ungrounded graph (b) and grounded graph (c) have similar topology.",
"type_str": "figure",
"uris": null,
"num": null
},
"FIGREF7": {
"text": "of employees.inverse(m, Alcoa) \u2227 measurement unit.dated integer.number(m, 119000) \u2227 measurement unit.dated integer.year(m, 2007) \u2227 type.int(119000) Graph representations for Alcoa has 120000 employees in 2007.",
"type_str": "figure",
"uris": null,
"num": null
},
"TABREF2": {
"html": null,
"type_str": "table",
"num": null,
"text": "Math nodes remain unchanged. Though word nodes are not present in Freebase, we retain them in our grounded graphs to extract sophisticated features based on words and grounded predicates. JulieAndrews) \u2227 appeared.in(e, 40) \u2227 movies(40)",
"content": "<table><tr><td/><td colspan=\"4\">appeared</td><td/><td/><td/><td>movies</td><td>movies</td></tr><tr><td>Julie Andrews</td><td>appeared .arg1</td><td/><td>e</td><td/><td/><td colspan=\"3\">appeared.in</td><td>40 type</td></tr><tr><td colspan=\"9\">appeared.arg1(e, (a) Ungrounded Graph</td></tr><tr><td/><td colspan=\"3\">appeared</td><td/><td colspan=\"4\">movies</td><td>movies</td></tr><tr><td>Julie Andrews</td><td>appeared .arg1</td><td>e</td><td/><td>appeared .in</td><td/><td>z</td><td colspan=\"2\">type</td><td>count</td><td>40</td></tr><tr><td colspan=\"9\">appeared.arg1(e, JulieAndrews) \u2227 appeared.in(e, z) \u2227 movies(z) \u2227 count(z, 40)</td></tr><tr><td/><td colspan=\"8\">(b) Alternate Ungrounded Graph</td></tr><tr><td/><td colspan=\"4\">appeared</td><td colspan=\"4\">film</td><td>movies</td></tr><tr><td colspan=\"3\">Julie Andrews performance .actor</td><td>m</td><td colspan=\"2\">performance .film</td><td colspan=\"2\">z</td><td>type</td><td>count</td><td>40</td></tr><tr><td colspan=\"9\">performance.actor(m, JulieAndrews) \u2227 performance.film(m, z) \u2227 film(z) \u2227 count(z, 40)</td></tr><tr><td/><td/><td colspan=\"7\">(c) Grounded graph</td></tr></table>"
},
"TABREF3": {
"html": null,
"type_str": "table",
"num": null,
"text": "Common nouns like movies are left as variables to be instantiated by the entities satisfying the graph. Type nodes are grounded to Freebase entity types. Type nodes capital and capital.state in Figure 6b are grounded to all possible types of Austin (e.g., location.city, location.capital city, book.book subject, broadcast.genre). In cases where entity nodes are not grounded (e.g., z in Figure 7b),",
"content": "<table><tr><td colspan=\"4\">Ungrounded graph: Alcoa \u2192has.arg1\u2192 e; e \u2192has.arg2\u2192 120000 (type employees); e \u2192has.in\u2192 2007</td></tr><tr><td colspan=\"4\">has.arg1(e, Alcoa) \u2227 has.arg2(e, 120000) \u2227 has.in(e, 2007) \u2227 employees(120000)</td></tr></table>"
},
"TABREF4": {
"html": null,
"type_str": "table",
"num": null,
"text": "While it is straightforward to compute [[g + ]] K B , it is hard to compute [[u + ]] N L because of the mismatch between our natural language semantic representation and the Freebase query language.",
"content": "<table/>"
},
"TABREF7": {
"html": null,
"type_str": "table",
"num": null,
"text": "Domain-specific Freebase statistics (*some relations/types/triples are shared across domains); number of training CLUEWEB09 sentences; number of test questions in FREE917 and WEBQUESTIONS.",
"content": "<table/>"
},
"TABREF10": {
"html": null,
"type_str": "table",
"num": null,
"text": "GRAPHPARSER ablation results on FREE917 and WEBQUESTIONS development set.",
"content": "<table/>"
},
"TABREF11": {
"html": null,
"type_str": "table",
"num": null,
"text": "Semantic classes and semantic categories assigned to lexical entries, indexed by lemma and POS tag.",
"content": "<table><tr><td>Lemma</td><td>POS</td><td>Semantic Class</td><td>Semantic Category</td></tr><tr><td>*</td><td>VB*, IN, TO, POS</td><td>EVENT</td><td>directed : (S e \\NP x <1>)/NP y <2> : \u03bbQ\u03bbP\u03bbe.\u2203x\u2203y. directed.arg1(e, x) \u2227 directed.arg2(e, y) \u2227 P(x) \u2227 Q(y)</td></tr><tr><td>*</td><td>NN, NNS</td><td>TYPE</td><td>movie : NP : \u03bbx.movie(x)</td></tr><tr><td>*</td><td>NNP*, PRP*</td><td>ENTITY</td><td>Obama : NP : \u03bbx.equal(x, Obama)</td></tr><tr><td>*</td><td>RB*</td><td>EVENTMOD</td><td>annually : S e \\S e : \u03bbP\u03bbe.lex e .annually(e) \u2227 P(e)</td></tr><tr><td>*</td><td>JJ*</td><td>TYPEMOD</td><td>state : NP x /NP x : \u03bbP\u03bbx.lex x .state(x) \u2227 P(x)</td></tr><tr><td>be</td><td>*</td><td>COPULA</td><td>be : (S y \\NP x )/NP y : \u03bbQ\u03bbP\u03bby.\u2203x.lex y (x) \u2227 P(x) \u2227 Q(y)</td></tr><tr><td>the</td><td>*</td><td>UNIQUE</td><td>the : NP x /NP x : \u03bbP\u03bbx.UNIQUE(x) \u2227 P(x)</td></tr><tr><td>*</td><td>CD</td><td>COUNT</td><td>twenty : N x /N x : \u03bbP\u03bbx.COUNT(x, 20) \u2227 P(x); twenty : N x /N x : \u03bbP\u03bbx.equal(x, 20) \u2227 P(x)</td></tr><tr><td>*</td><td>WDT, WP*, WRB</td><td>QUESTION</td><td>what : S[wq] e /(S[dcl] e \\NP x ) : \u03bbP\u03bbe.\u2203x.TARGET(x) \u2227 P(x, e)</td></tr><tr><td>not, n't</td><td>*</td><td>NEGATION</td><td>not : (S e \\NP x )/(S e \\NP x ) : \u03bbP\u03bbQ\u03bbe.\u2203x.NEGATION(e) \u2227 P(x, e) \u2227 Q(x)</td></tr><tr><td>no</td><td>*</td><td>COMPLEMENT</td><td>no : NP x /N x : \u03bbP\u03bbx.COMPLEMENT(x) \u2227 P(x)</td></tr></table>"
},
"TABREF12": {
"html": null,
"type_str": "table",
"num": null,
"text": "",
"content": "<table/>"
}
}
}
}