{
"paper_id": "Q18-1048",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T15:10:33.070935Z"
},
"title": "Learning Typed Entailment Graphs with Global Soft Constraints",
"authors": [
{
"first": "Mohammad",
"middle": [
"Javad"
],
"last": "Hosseini",
"suffix": "",
"affiliation": {},
"email": "javad.hosseini@ed.ac.uk"
},
{
"first": "Nathanael",
"middle": [],
"last": "Chambers",
"suffix": "",
"affiliation": {},
"email": "nchamber@usna.edu"
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": "",
"affiliation": {},
"email": "sivar@stanford.edu"
},
{
"first": "Xavier",
"middle": [
"R"
],
"last": "Holt",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": "",
"affiliation": {},
"email": "scohen@inf.ed.ac.uk"
},
{
"first": "Mark",
"middle": [],
"last": "Johnson",
"suffix": "",
"affiliation": {},
"email": "mark.johnson@mq.edu.au"
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": "",
"affiliation": {},
"email": "steedman@inf.ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "This paper presents a new method for learning typed entailment graphs from text. We extract predicate-argument structures from multiple-source news corpora, and compute local distributional similarity scores to learn entailments between predicates with typed arguments (e.g., person contracted disease). Previous work has used transitivity constraints to improve local decisions, but these constraints are intractable on large graphs. We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph. Learning takes only a few hours to run over 100K predicates, and our results show large improvements over local similarity scores on two entailment data sets. We further show improvements over paraphrases and entailments from the Paraphrase Database and prior state-of-the-art entailment graphs. We show that the entailment graphs improve performance in a downstream task.",
"pdf_parse": {
"paper_id": "Q18-1048",
"_pdf_hash": "",
"abstract": [
{
"text": "This paper presents a new method for learning typed entailment graphs from text. We extract predicate-argument structures from multiple-source news corpora, and compute local distributional similarity scores to learn entailments between predicates with typed arguments (e.g., person contracted disease). Previous work has used transitivity constraints to improve local decisions, but these constraints are intractable on large graphs. We instead propose a scalable method that learns globally consistent similarity scores based on new soft constraints that consider both the structures across typed entailment graphs and inside each graph. Learning takes only a few hours to run over 100K predicates, and our results show large improvements over local similarity scores on two entailment data sets. We further show improvements over paraphrases and entailments from the Paraphrase Database and prior state-of-the-art entailment graphs. We show that the entailment graphs improve performance in a downstream task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recognizing textual entailment and paraphrasing is critical to many core natural language processing applications such as question answering and semantic parsing. The surface form of a sentence that answers a question such as \"Does Verizon own Yahoo?\" frequently does not directly correspond to the form of the question, but is rather a paraphrase or an expression such as \"Verizon bought Yahoo,\" that entails the answer. The lack of a well-established form-independent semantic representation for natural language is the most important single obstacle to bridging the gap between queries and text resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper seeks to learn meaning postulates (e.g., buying entails owning) that can be used to augment the standard form-dependent semantics. Our immediate goal is to learn entailment rules between typed predicates with two arguments, where the type of each predicate is determined by the types of its arguments. We construct typed entailment graphs, with typed predicates as nodes and entailment rules as edges. Figure 1 shows simple examples of such graphs with arguments of types company,company and person,location.",
"cite_spans": [],
"ref_spans": [
{
"start": 413,
"end": 421,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Entailment relations are detected computing a similarity score between the typed predicates based on the distributional inclusion hypothesis, which states that a word (predicate) u entails another word (predicate) v if in any context that u can be used, v can be used in its place (Dagan et al., 1999; Geffet and Dagan, 2005; Herbelot and Ganesalingam, 2013; Kartsaklis and Sadrzadeh, 2016) . Most previous work has taken a \"local learning\" approach (Lin, 1998; Weeds and Weir, 2003; Szpektor and Dagan, 2008; Schoenmackers et al., 2010) , namely, learning entailment rules independently from each other.",
"cite_spans": [
{
"start": 281,
"end": 301,
"text": "(Dagan et al., 1999;",
"ref_id": "BIBREF9"
},
{
"start": 302,
"end": 325,
"text": "Geffet and Dagan, 2005;",
"ref_id": "BIBREF16"
},
{
"start": 326,
"end": 358,
"text": "Herbelot and Ganesalingam, 2013;",
"ref_id": "BIBREF17"
},
{
"start": 359,
"end": 390,
"text": "Kartsaklis and Sadrzadeh, 2016)",
"ref_id": "BIBREF19"
},
{
"start": 450,
"end": 461,
"text": "(Lin, 1998;",
"ref_id": "BIBREF27"
},
{
"start": 462,
"end": 483,
"text": "Weeds and Weir, 2003;",
"ref_id": "BIBREF43"
},
{
"start": 484,
"end": 509,
"text": "Szpektor and Dagan, 2008;",
"ref_id": "BIBREF39"
},
{
"start": 510,
"end": 537,
"text": "Schoenmackers et al., 2010)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "One problem facing local learning approaches is that many correct edges are not identified because of data sparsity and many wrong edges are spuriously identified as valid entailments. A \"global learning\" approach, where dependencies between entailment rules are taken into account, can improve the local decisions significantly. Berant et al. (2011) imposed transitivity constraints on the entailments, such that the inclusion of rules i\u2192j and j\u2192k implies that of i\u2192k. Although they showed transitivity constraints to be effective in learning entailment graphs, the Integer Linear Programming (ILP) solution of Berant et al. is not scalable beyond a few hundred nodes. In fact, the problem of finding a maximally weighted transitive subgraph of a graph with arbitrary edge weights is NP-hard (Berant et al., 2011) .",
"cite_spans": [
{
"start": 330,
"end": 350,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 793,
"end": 814,
"text": "(Berant et al., 2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This paper instead proposes a scalable solution that does not rely on transitivity closure, but instead uses two global soft constraints that maintain structural similarity both across and within each typed entailment graph ( Figure 2 ). We introduce an unsupervised framework to learn globally consistent similarity scores given local similarity scores ( \u00a74). Our method is highly parallelizable and takes only a few hours to apply to more than 100K predicates. 1, 2 Our experiments ( \u00a76) show that the global scores improve significantly over local scores and outperform state-of-the-art entailment graphs on two standard entailment rule data sets (Berant et al., 2011; Holt, 2018) . We ultimately intend the typed entailment graphs to provide a resource for entailment and paraphrase rules for use in semantic parsing and open domain question answering, as has been done for similar resources such as the Paraphrase Database (PPDB; Ganitkevitch et al., 2013; Pavlick et al., 2015) in Wang et al. (2015) and Dong et al. (2017) . 3 With that end in view, we have included a comparison with PPDB in our evaluation on the entailment data sets. We also show that the learned entailment rules improve performance on a question-answering task ( \u00a77) with no tuning or prior knowledge of the task.",
"cite_spans": [
{
"start": 463,
"end": 465,
"text": "1,",
"ref_id": null
},
{
"start": 466,
"end": 467,
"text": "2",
"ref_id": null
},
{
"start": 650,
"end": 671,
"text": "(Berant et al., 2011;",
"ref_id": "BIBREF4"
},
{
"start": 672,
"end": 683,
"text": "Holt, 2018)",
"ref_id": "BIBREF18"
},
{
"start": 935,
"end": 961,
"text": "Ganitkevitch et al., 2013;",
"ref_id": "BIBREF15"
},
{
"start": 962,
"end": 983,
"text": "Pavlick et al., 2015)",
"ref_id": "BIBREF32"
},
{
"start": 987,
"end": 1005,
"text": "Wang et al. (2015)",
"ref_id": "BIBREF42"
},
{
"start": 1010,
"end": 1028,
"text": "Dong et al. (2017)",
"ref_id": "BIBREF11"
},
{
"start": 1031,
"end": 1032,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 226,
"end": 234,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work is closely related to Berant et al. (2011) , where entailment graphs are learned by imposing transitivity constraints on the entailment relations. However, the exact solution to the problem is not scalable beyond a few hundred predicates, whereas the number of predicates that we capture is two orders of magnitude larger ( \u00a75). Hence, it is necessary to resort to approximate methods based across different but related typed entailment graphs and (B) within each graph. 0 \u2264 \u03b2 \u2264 1 determines how much different graphs are related. The dotted edges are missing, but will be recovered by considering relationships shown by across-graph (red) and within-graph (light blue) connections.",
"cite_spans": [
{
"start": 31,
"end": 51,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "on assumptions concerning the graph structure. propose Tree-Node-Fix (TNF), an approximation method that scales better by additionally assuming the entailment graphs are \"forest reducible,\" where a predicate cannot entail two (or more) predicates j and k such that neither j\u2192k nor k\u2192j (FRG assumption). However, the FRG assumption is not correct for many real-world domains. For example, a person visiting a place entails both arriving at that place and leaving that place, although the latter do not necessarily entail each other. Our work injects two other types of prior knowledge about the structure of the graph that are less expensive to incorporate and yield better results on entailment rule data sets. Abend et al. (2014) learn entailment relations over multi-word predicates with different levels of compositionality. Pavlick et al. (2015) add variety of relations, including entailment, to phrase pairs in PPDB. This includes a broader range of entailment relations such as lexical entailment. In contrast to our method, these works rely on supervised data and take a local learning approach.",
"cite_spans": [
{
"start": 711,
"end": 730,
"text": "Abend et al. (2014)",
"ref_id": "BIBREF0"
},
{
"start": 828,
"end": 849,
"text": "Pavlick et al. (2015)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Another related strand of research is link prediction (Bordes et al., 2013; Riedel et al., 2013; Socher et al., 2013; Yang et al., 2015; Trouillon et al., 2016; Dettmers et al., 2018) , where the source data are extractions from text, facts in knowledge bases, or both. Unlike our work, which directly learns entailment relations between predicates, these methods aim at predicting the source data-that is, whether two entities have a particular relationship. The common wisdom is that entailment relations are a byproduct of these methods (Riedel et al., 2013) . However, this assumption has not usually been explicitly evaluated. Explicit entailment rules provide explainable resources that can be used in downstream tasks. Our experiments show that our method significantly outperforms a state-of-the-art link prediction method.",
"cite_spans": [
{
"start": 54,
"end": 75,
"text": "(Bordes et al., 2013;",
"ref_id": "BIBREF6"
},
{
"start": 76,
"end": 96,
"text": "Riedel et al., 2013;",
"ref_id": "BIBREF35"
},
{
"start": 97,
"end": 117,
"text": "Socher et al., 2013;",
"ref_id": "BIBREF37"
},
{
"start": 118,
"end": 136,
"text": "Yang et al., 2015;",
"ref_id": "BIBREF45"
},
{
"start": 137,
"end": 160,
"text": "Trouillon et al., 2016;",
"ref_id": "BIBREF41"
},
{
"start": 161,
"end": 183,
"text": "Dettmers et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 540,
"end": 561,
"text": "(Riedel et al., 2013)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first extract binary relations as predicateargument pairs using a combinatory categorial grammar (CCG; Steedman, 2000) semantic parser ( \u00a73.1). We map the arguments to their Wikipedia URLs using a named entity linker ( \u00a73.2). We extract types such as person and disease for each argument ( \u00a73.2). We then compute local similarity scores between predicate pairs ( \u00a73.3).",
"cite_spans": [
{
"start": 106,
"end": 121,
"text": "Steedman, 2000)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Computing Local Similarity Scores",
"sec_num": "3"
},
{
"text": "The semantic parser of Reddy et al. (2014) , GraphParser, is run on the NewsSpike corpus (Zhang and Weld, 2013) to extract binary relations between a predicate and its arguments from sentences. GraphParser uses CCG syntactic derivations and \u03bb-calculus to convert sentences to neo-Davisonian semantics, a first-order logic that uses event identifiers (Parsons, 1990) . For example, for the sentence, Obama visited Hawaii in 2012, GraphParser produces the logical form \u2203e.visit 1 (e, Obama) \u2227 visit 2 (e, Hawaii)\u2227 visit in (e, 2012), where e denotes an event. We will consider a relation for each pair of arguments, hence, there will be three relations for the given sentence: visit 1,2 with arguments (Obama, Hawaii), visit 1,in with arguments (Obama,2012), and visit 2,in with arguments (Hawaii,2012). We currently only use extracted relations that involve two named entities or one named entity and a noun. We constrain the relations to have at least one named entity to reduce ambiguity in finding entailments.",
"cite_spans": [
{
"start": 23,
"end": 42,
"text": "Reddy et al. (2014)",
"ref_id": "BIBREF34"
},
{
"start": 89,
"end": 111,
"text": "(Zhang and Weld, 2013)",
"ref_id": "BIBREF47"
},
{
"start": 350,
"end": 365,
"text": "(Parsons, 1990)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "3.1"
},
{
"text": "We perform a few automatic post-processing steps on the output of the parser. First, we normalize the predicates by lemmatization of their head words. Passive predicates are mapped to active ones and we extract negations and particle verb predicates. Next, we discard unary relations and relations involving coordination of arguments. Finally, whenever we see a relation between a subject and an object, and a relation between object and a third argument connected by a preposi-tional phrase, we add a new relation between the subject and the third argument by concatenating the relation name with the object. For example, for the sentence China has a border with India, we extract a relation have border 1,with between China and India. We perform a similar process for prepositional phrases attached to verb phrases. Most of the light verbs and multiword predicates will be extracted by the above post-processing (e.g., take care 1,of ), which will recover many salient ternary relations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "3.1"
},
{
"text": "Although entailments and paraphrasing can benefit from n-ary relations-for example, person visits a location in a time-we currently follow previous work (Lewis and Steedman, 2013a) ; in confining our attention to binary relations, leaving the construction of n-ary graphs to future work.",
"cite_spans": [
{
"start": 153,
"end": 180,
"text": "(Lewis and Steedman, 2013a)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Relation Extraction",
"sec_num": "3.1"
},
{
"text": "Entailment and paraphrasing depend on context. Although using exact context is impractical in forming entailment graphs, many authors have used the type of the arguments to disambiguate polysemous predicates (Berant et al., 2011 Lewis and Steedman, 2013a; Lewis, 2014) . Typing also reduces the size of the entailment graphs.",
"cite_spans": [
{
"start": 208,
"end": 228,
"text": "(Berant et al., 2011",
"ref_id": "BIBREF4"
},
{
"start": 229,
"end": 255,
"text": "Lewis and Steedman, 2013a;",
"ref_id": "BIBREF24"
},
{
"start": 256,
"end": 268,
"text": "Lewis, 2014)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linking and Typing Arguments",
"sec_num": "3.2"
},
{
"text": "Because named entities can be referred to in many different ways, we use a named entity linking tool to normalize the named entities. In the following experiments, we use AIDALight (Nguyen et al., 2014), a fast and accurate named entity linker, to link named entities to their Wikipedia URLs (if any). We thus type all entities that can be grounded in Wikipedia. We first map the Wikipedia URL of the entities to Freebase (Bollacker et al., 2008) . We select the most notable type of the entity from Freebase and map it to FIGER types (Ling and Weld, 2012 ) such as building, disease, person, and location, using only the first level of the FIGER type hierarchy. 4 For example, instead of event/sports_event, we use event as type. If an entity cannot be grounded in Wikipedia or its Freebase type does not have a mapping to FIGER, we assign the default type thing to it.",
"cite_spans": [
{
"start": 422,
"end": 446,
"text": "(Bollacker et al., 2008)",
"ref_id": "BIBREF5"
},
{
"start": 535,
"end": 555,
"text": "(Ling and Weld, 2012",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Linking and Typing Arguments",
"sec_num": "3.2"
},
{
"text": "For each typed predicate (e.g., visit 1,2 with types person,location), we extract a feature vector. We use as feature types the set of argument pair strings (e.g., Obama-Hawaii) that instantiate the binary relations of the predicates. The value of each feature is the pointwise mutual information between the predicate and the feature. We use the feature vectors to compute three local similarity scores (both symmetric and directional) between typed predicates: Weeds (Weeds and Weir, 2003) , Lin (Lin, 1998) , and Balanced Inclusion (BInc; Szpektor and Dagan, 2008) similarities.",
"cite_spans": [
{
"start": 469,
"end": 491,
"text": "(Weeds and Weir, 2003)",
"ref_id": "BIBREF43"
},
{
"start": 498,
"end": 509,
"text": "(Lin, 1998)",
"ref_id": "BIBREF27"
},
{
"start": 542,
"end": 567,
"text": "Szpektor and Dagan, 2008)",
"ref_id": "BIBREF39"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Local Distributional Similarities",
"sec_num": "3.3"
},
{
"text": "We learn globally consistent similarity scores based on local similarity scores. The global scores will be used to form typed entailment graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Globally Consistent Entailment Graphs",
"sec_num": "4"
},
{
"text": "Let T be a set of types and P be a set of predicates. We denote byV (t 1 , t 2 ) the set of typed predicates p(:t 1 , :t 2 ), where t 1 , t 2 \u2208 T and p \u2208 P . Each p(:t 1 , :t 2 ) \u2208V (t 1 , t 2 ) takes as input arguments of types t 1 and t 2 . An example of a typed predicate is win 1,2 (:team,:event) that can be instantiated with win 1,2 (Seahawks:team,Super Bowl:event).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "4.1"
},
{
"text": "We define V(t_1, t_2) = V\u0302(t_1, t_2) \u222a V\u0302(t_2, t_1). We often denote elements of V(t_1, t_2) by i, j, and k, where each element is a typed predicate as above. For an i = p(:t_1, :t_2) \u2208 V(t_1, t_2), we denote by \u03c0(i) = p, \u03c4_1(i) = t_1, and \u03c4_2(i) = t_2. We compute distributional similarities between predicates with the same argument types. We denote by W^0(t_1, t_2) \u2208 [0, 1]^{|V(t_1,t_2)| \u00d7 |V(t_1,t_2)|} the (sparse) matrix containing all local similarity scores w^0_ij between predicates i and j with types t_1 and t_2, where |V(t_1, t_2)| is the size of V(t_1, t_2). 5",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "4.1"
},
{
"text": "Predicates can entail each other with the same argument order (direct) or in the reverse order-that is, p(:t 1 , :t 2 ) might entail q(:t 1 , :t 2 ) or q(:t 2 , :t 1 ). For the graphs with the same types (e.g., t 1 =t 2 =person), we keep two copies of the predicates, one for each of the possible orderings. This allows us to model entailments with reverse argument orders (e.g., is son of 1,2 (:person1,:person2) \u2192 is parent of 1,2 (:person2,:person1)).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "4.1"
},
{
"text": "We define V = \u22c3_{t_1,t_2} V(t_1, t_2), the set of all typed predicates, and W^0 as a block-diagonal matrix consisting of all the local similarity matrices W^0(t_1, t_2). Similarly, we define W(t_1, t_2) and W as the matrices consisting of globally consistent similarity scores w_ij we wish to learn. The global similarity scores are used to form entailment graphs by thresholding W. For a \u03b4 > 0, we define typed entailment graphs as G_\u03b4(t_1, t_2) = (V(t_1, t_2), E_\u03b4(t_1, t_2)), where V(t_1, t_2) are the nodes and E_\u03b4(t_1, t_2) = {(i, j) | i, j \u2208 V(t_1, t_2), w_ij \u2265 \u03b4} are the edges of the entailment graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Problem Formulation",
"sec_num": "4.1"
},
{
"text": "Existing approaches to learn entailment graphs from text miss many correct edges because of data sparsity-namely, the lack of explicit evidence in the corpus that a predicate i entails another predicate j. The goal of our method is to use evidence from the existing edges that have been assigned high confidence to predict missing ones and remove spurious edges. We propose two global soft constraints that maintain structural similarity both across and within each typed entailment graph. The constraints are based on the following two observations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "First, it is standard to learn a separate typed entailment graph for each (plausible) type-pair because arguments provide necessary disambiguation for predicate meaning (Berant et al., 2011 Lewis and Steedman, 2013a,b) . However, many entailment relations for which we have direct evidence only in a few subgraphs may in fact apply over many others ( Figure 2A ). For example, we may not have found direct evidence that mentions of a living_thing (e.g., a virus) triggering a disease are accompanied by mentions of the living_thing causing that disease (because of data sparsity), whereas we have found that mentions of a government_agency triggering an event are reliably accompanied by mentions of causing that event. While we show that typing is necessary to learning entailments ( \u00a76), we propose to learn all typed entailment graphs jointly.",
"cite_spans": [
{
"start": 169,
"end": 189,
"text": "(Berant et al., 2011",
"ref_id": "BIBREF4"
},
{
"start": 190,
"end": 218,
"text": "Lewis and Steedman, 2013a,b)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 351,
"end": 360,
"text": "Figure 2A",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "Second, we encourage paraphrase predicates (where i\u2192j and j\u2192i) to have the same patterns of entailment ( Figure 2B ), that is, to entail and be entailed by the same predicates, global soft constraints that we call paraphrase resolution. Using these soft constraints, a missing entailment (e.g., medicine treats disease \u2192 medicine is useful for disease) can be identified by considering the entailments of a paraphrase predicate (e.g.,",
"cite_spans": [],
"ref_spans": [
{
"start": 105,
"end": 114,
"text": "Figure 2B",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "J(W \u2265 0, \u03b2 \u2265 0) = L withinGraph + L crossGraph + L pResolution + \u03bb 1 W 1 (1) L withinGraph = i,j\u2208V (w ij \u2212 w 0 ij ) 2 (2) L crossGraph = 1 2 i,j\u2208V (i ,j )\u2208 N (i,j) \u03b2 \u03c0(i), \u03c4 1 (i), \u03c4 2 (i) , \u03c4 1 (i ), \u03c4 2 (i ) (w ij \u2212 w i j ) 2 + \u03bb 2 2 1 \u2212 \u03b2 2 2 (3) L pResolution = 1 2 t 1 ,t 2 \u2208T i,j,k\u2208V (t 1 ,t 2 ) k =i,k =j I \u03b5 (w ij )I \u03b5 (w ji ) (w ik \u2212 w jk ) 2 + (w ki \u2212 w kj ) 2",
"eq_num": "(4)"
}
],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "Figure 3: The objective function to jointly learn global scores W and the compatibility function \u03b2, given local scores W 0 . L withinGraph encourages global and local scores to be close; L crossGraph encourages similarities to be consistent between different typed entailment graphs; L pResolution encourages paraphrase predicates to have the same pattern of entailment. We use an 1 regularization penalty to remove entailments with low confidence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "medicine cures disease \u2192 medicine is useful for disease).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "Sharing entailments across different typed entailment graphs is only semantically correct for some predicates and types. In order to learn when we can generalize an entailment from one graph to another, we define a compatibility function \u03b2 :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "P \u00d7 (T \u00d7T ) \u00d7 (T \u00d7T ) \u2192 [0, 1].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "The function is defined for a predicate and two type pairs (Figure 2A ). It specifies the extent of compatibility for a single predicate between different typed entailment graphs, with 1 being completely compatible and 0 being irrelevant. In particular, \u03b2 p, (t 1 , t 2 ), (t 1 , t 2 ) determines how much we expect the outgoing edges of p(:t 1 , :t 2 ) and p(:t 1 , :t 2 ) to be similar. We constrain \u03b2 to be symmetric between t 1 , t 2 and t 1 , t 2 as compatibility of outgoing edges of p(:t 1 , :t 2 ) with p(:t 1 , :t 2 ) should be the same as p(:t 1 , :t 2 ) with p(:t 1 , :t 2 ). We denote by \u03b2 a vectorization consisting of the values of \u03b2 for all possible input predicates and types.",
"cite_spans": [],
"ref_spans": [
{
"start": 59,
"end": 69,
"text": "(Figure 2A",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "Note that the global similarity scores W and the compatibility function \u03b2 are not known in advance. Given local similarity scores W 0 , we learn W and \u03b2 jointly. We minimize the loss function defined in Equation (1), which consists of three soft constraints defined below and an 1 regularization term (Figure 3 ). L withinGraph . Equation (2) encourages global scores w ij to be close to local scores w 0 ij , so that the global scores will not stray too far from the original scores.",
"cite_spans": [],
"ref_spans": [
{
"start": 301,
"end": 310,
"text": "(Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "L crossGraph . Equation 3encourages each predicate's entailments to be similar across typed entailment graphs (Figure 2A ) if the predicates have similar neighbors. We penalize the difference of entailments in two different graphs when the compatibility function is high. For each pair of typed predicates (i, j) \u2208 V (t 1 , t 2 ), we define a set of neighbors (predicates with different types):",
"cite_spans": [],
"ref_spans": [
{
"start": 110,
"end": 120,
"text": "(Figure 2A",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "N (i, j) = (i , j ) \u2208 V (t 1 , t 2 )|t 1 , t 2 \u2208 T, (i , j ) = (i, j), \u03c0(i) = \u03c0(i ), \u03c0(j) = \u03c0(j ), a(i, j) = a(i , j ) (5)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "where a(i, j) is true if the argument orders of i and j match, and false otherwise. For each (i , j ) \u2208 N (i, j), we penalize the difference of entailments by adding the term \u03b2(\u2022)(w ij \u2212 w i j ) 2 . We add a prior term on \u03b2 as \u03bb 2 1 \u2212 \u03b2 2 2 , where 1 is a vector of the same size as \u03b2 with all 1s. Without the prior term (i.e., \u03bb 2 =0), all the elements of \u03b2 will become zero. Increasing \u03bb 2 will keep (some of the) elements of \u03b2 non-zero and encourages communications between related graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "L pResolution . Equation (4) denotes the paraphrase resolution global soft constraints that encourage paraphrase predicates to have the same patterns of entailments ( Figure 2B ). The function I \u03b5 (x) equals x if x > \u03b5 and zero, otherwise. 6 Unlike L crossGraph in Equation (3), Equation (4) operates on the edges within each graph. If both w ij and w ji are high, their incoming and outgoing edges from/to nodes k are encouraged to be similar. We name this global constraint paraphrase resolution, because it might add missing links (e.g., i\u2192k) if i and j are paraphrases of each other and j\u2192k, or break the paraphrase relation, if the incoming and outgoing edges are very different.",
"cite_spans": [
{
"start": 240,
"end": 241,
"text": "6",
"ref_id": null
}
],
"ref_spans": [
{
"start": 167,
"end": 176,
"text": "Figure 2B",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
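Equation (4) itself is not reproduced in this excerpt, so the following is a plausible reading of the description above, not the authors' exact objective: when both w_ij and w_ji clear the threshold ε (so i and j look like paraphrases), squared differences between their edges to every other node k are penalized, gated by I_ε. All names are illustrative.

```python
def i_eps(x, eps):
    """I_eps(x) = x if x > eps, else 0 (the gating function described above)."""
    return x if x > eps else 0.0

def presolution_penalty(w, nodes, eps):
    """Within-graph paraphrase-resolution penalty (a sketch): for each ordered
    pair (i, j) gated as a paraphrase candidate, penalize differences in their
    outgoing edges (i->k vs j->k) and incoming edges (k->i vs k->j)."""
    loss = 0.0
    for i in nodes:
        for j in nodes:
            if i == j:
                continue
            gate = i_eps(w.get((i, j), 0.0), eps) * i_eps(w.get((j, i), 0.0), eps)
            if gate == 0.0:
                continue
            for k in nodes:
                if k in (i, j):
                    continue
                loss += gate * ((w.get((i, k), 0.0) - w.get((j, k), 0.0)) ** 2
                                + (w.get((k, i), 0.0) - w.get((k, j), 0.0)) ** 2)
    return loss
```

Driving this loss down either raises the missing edge (e.g., i→k when j→k exists) or shrinks w_ij/w_ji below ε, which is exactly the add-or-break behavior described in the text.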
{
"text": "We impose an 1 penalty on the elements of W as \u03bb 1 W 1 , where \u03bb 1 is a nonnegative tuning hyperparameter that controls the strength of the penalty applied to the elements of W. This term removes entailments with low confidence from the entailment graphs. Note that Equation (1) has W 0 and average of W 0 across different typed entailment graphs ( \u00a75.4) as its special cases. The former is achieved by setting \u03bb 1 =\u03bb 2 =0 and \u03b5=1 and the latter by \u03bb 1 =0, \u03bb 2 =\u221e and \u03b5=1. We do not explicitly weight the different components of the loss function, as the effect of L crossGraph and L pResolution can be controlled by \u03bb 2 and \u03b5, respectively. Equation (1) can be interpreted as an inference problem in a Markov random field (MRF) (Kindermann and Snell, 1980) , where the nodes of the MRF are the global scores w ij and the parameters \u03b2 p, (t 1 , t 2 ), (t 1 , t 2 ) . The MRF will have five log-linear factor types: one unary factor type for L withinGraph , one three-variable factor type for the first term of L crossGraph , a unary factor type for the prior on \u03b2, one four-variable factor type for L pResolution , and a unary factor type for the 1 regularization term. Figure 2 shows an example factor graph (unary factors are not shown for simplicity).",
"cite_spans": [
{
"start": 729,
"end": 757,
"text": "(Kindermann and Snell, 1980)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 1170,
"end": 1178,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "We learn W and \u03b2 jointly using a message passing approach based on the Block Coordinate Descent method (Xu and Yin, 2013) . We initialize W = W 0 . Assuming that we know the global similarity scores W, we learn how much the entailments are compatible between different types ( \u03b2) and vice versa. Given W fixed, each w ij sends messages to the corresponding \u03b2(\u2022) elements, which will be used to update \u03b2. Given \u03b2 fixed, we do one iteration of learning for each w ij . Each \u03b2(\u2022) and w ij elements send messages to the related elements in W, which will be in turn updated. Based on the update rules (Appendix A), we always have w ij \u2264 1 and \u03b2 \u2264 1.",
"cite_spans": [
{
"start": 103,
"end": 121,
"text": "(Xu and Yin, 2013)",
"ref_id": "BIBREF44"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
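The alternating scheme can be sketched as a small driver loop. This is a skeleton only: `update_beta` and `update_w` stand in for the closed-form rules of Appendix A (Equations 6-9), and the clamp to 1 mirrors the guarantee w_ij ≤ 1 stated above.

```python
def block_coordinate_descent(w0, update_w, update_beta, iterations=5):
    """Skeleton of the learning loop: initialize W = W0, then alternate
    between updating beta with W fixed and each w_ij with beta fixed.
    In the paper, five full iterations suffice for convergence."""
    w = dict(w0)  # global scores, keyed by (premise, hypothesis) edge
    beta = None
    for _ in range(iterations):
        beta = update_beta(w)                 # compatibilities from current W
        for edge in list(w):                  # one pass of per-edge updates
            w[edge] = min(1.0, update_w(edge, w, beta))
    return w, beta
```

For instance, with the toy rules `update_beta = lambda w: sum(w.values()) / len(w)` and `update_w = lambda e, w, b: 0.5 * (w[e] + b)`, the scores contract toward the mean over the five passes.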
{
"text": "Each iteration of the learning method takes O W 0 |T | 2 + i\u2208V ( w i: 0 + w :i 0 ) 2 time, where W 0 is the number of nonzero elements of W (number of edges in the current graph), |T | is the number of types, and w i: 0 ( w :i 0 ) is the number of nonzero elements of the ith row (col-umn) of the matrix (out-degree and in-degree of the node i). 7 In practice, learning converges after five iterations of full updates. The method is highly parallelizable, and our efficient implementation does the learning in only a few hours.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Learning Algorithm",
"sec_num": "4.2"
},
{
"text": "We extract binary relations from a multiple-source news corpus ( \u00a75.1) and compute local and global scores. We form entailment graphs based on the similarity scores and test our model on two entailment rules data sets ( \u00a75.2). We then discuss parameter tuning ( \u00a75.3) and baseline systems ( \u00a75.4).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Set-up",
"sec_num": "5"
},
{
"text": "We use the multiple-source NewsSpike corpus of Zhang and Weld (2013) . NewsSpike was deliberately built to include different articles from different sources describing identical news stories. They scraped RSS news feeds from January-February 2013 and linked them to full stories collected through a Web search of the RSS titles. The corpus contains 550K news articles (20M sentences). Because this corpus contains multiple sources covering the same events, it is well suited to our purpose of learning entailment and paraphrase relations.",
"cite_spans": [
{
"start": 47,
"end": 68,
"text": "Zhang and Weld (2013)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus: Multiple-Source News",
"sec_num": "5.1"
},
{
"text": "We extracted 29M binary relations using the procedure in \u00a73.1. In our experiments, we used two cut-offs within each typed subgraph to reduce the effect of noise in the corpus: (1) remove any argument-pair that is observed with fewer than C 1 =3 unique predicates; (2) remove any predicate that is observed with fewer than C 2 =3 unique argument-pairs. This leaves us with |P |=101K unique predicates in 346 entailment graphs. The maximum graph size is 53K nodes, 8 and the total number of non-zero local scores in all graphs is 66M. In the future, we plan to test our method on an even larger corpus, but preliminary experiments suggest that data sparsity will persist regardless of the corpus size, because of the power law distribution of the terms. We compared our extractions qualitatively with Stanford Open IE (Etzioni et al., 2011; . Our CCG-based extraction generated noticeably better relations for longer sentences with longrange dependencies such as those involving coordination.",
"cite_spans": [
{
"start": 816,
"end": 838,
"text": "(Etzioni et al., 2011;",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Corpus: Multiple-Source News",
"sec_num": "5.1"
},
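The two cut-offs described above can be written down directly. A minimal sketch, assuming a single filtering pass in the order stated (argument-pairs first, then predicates) rather than iteration to a fixpoint, which the text does not specify:

```python
from collections import defaultdict

def apply_cutoffs(triples, c1=3, c2=3):
    """Noise cut-offs from the text: (1) drop argument-pairs observed with
    fewer than c1 unique predicates; (2) drop predicates observed with fewer
    than c2 unique argument-pairs. `triples` is a list of
    (predicate, argument_pair) observations."""
    preds_per_args = defaultdict(set)
    for pred, args in triples:
        preds_per_args[args].add(pred)
    kept = [(p, a) for (p, a) in triples if len(preds_per_args[a]) >= c1]

    args_per_pred = defaultdict(set)
    for pred, args in kept:
        args_per_pred[pred].add(args)
    return [(p, a) for (p, a) in kept if len(args_per_pred[p]) >= c2]
```

In the paper this runs within each typed subgraph with C1 = C2 = 3, leaving 101K predicates.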
{
"text": "Levy/Holt's Entailment Data Set Levy and Dagan (2016) proposed a new annotation method (and a new data set) for collecting relational inference data in context. Their method removes a major bias in other inference data sets such as Zeichner's (Zeichner et al., 2012) , where candidate entailments were selected using a directional similarity measure. Levy and Dagan form questions of the type which city (q type ), is located near (q rel ), mountains (q arg )? and provide possible answers of the form Kyoto (a answer ), is surrounded by (a rel ), mountains (a arg ). Annotators are shown a question with multiple possible answers, where a answer is masked by q type to reduce the bias towards world knowledge. If the annotator indicates the answer as True (False), it is interpreted that the predicate in the answer entails (does not entail) the predicate in the question.",
"cite_spans": [
{
"start": 32,
"end": 53,
"text": "Levy and Dagan (2016)",
"ref_id": "BIBREF22"
},
{
"start": 243,
"end": 266,
"text": "(Zeichner et al., 2012)",
"ref_id": "BIBREF46"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Entailment Data Sets",
"sec_num": "5.2"
},
{
"text": "Whereas the Levy and Dagan entailment data set removes bias, a recent evaluation identified a high labeling error rate for entailments that hold only in one direction (Holt, 2018). Holt analyzed 150 positive examples and showed that 33% of the claimed entailments are correct only in the opposite direction, and 15% do not entail in any direction. Holt (2018) designed a task to crowdannotate the data set by a) adding the reverse entailment (q\u2192a) for each original positive entailment (a\u2192q) in Levy and Dagan's data set; and b) directly asking the annotators if a positive example (or its reverse) is an entailment or not (as opposed to relying on a factoid question). We test our method on this re-annotated data set of 18,407 examples (3,916 positive and 14,491 negative), which we refer to as Levy/Holt. 9 We run our CCG-based binary relation extraction on the examples and perform our typing procedure ( \u00a73.2) on a answer (e.g., Kyoto) and a arg (e.g., mountains) to find the types of the arguments. We split the reannotated data set into dev (30%) and test (70%) such that all the examples with the same q type and q rel are assigned to only one of the sets. ment graphs based on the predicates in their corpus. The data set contains 3,427 edges (positive), and 35,585 non-edges (negative). We evaluate our method on all the examples of Berant's entailment data set. The types of this data set do not match with FIGER types, but we perform a simple handmapping between their types and FIGER types. 10",
"cite_spans": [
{
"start": 808,
"end": 809,
"text": "9",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Entailment Data Sets",
"sec_num": "5.2"
},
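The grouped dev/test split described above (all examples with the same q_type and q_rel on the same side) can be sketched as follows; the function and field names are ours, and the exact randomization used by the authors is not specified.

```python
import random

def grouped_split(examples, dev_frac=0.3, seed=0):
    """Group-aware split: partition the (q_type, q_rel) keys, not the
    individual examples, so no question relation leaks across the split.
    `examples` are dicts carrying 'q_type' and 'q_rel' fields."""
    keys = sorted({(e["q_type"], e["q_rel"]) for e in examples})
    rng = random.Random(seed)
    rng.shuffle(keys)
    dev_keys = set(keys[: int(dev_frac * len(keys))])
    dev = [e for e in examples if (e["q_type"], e["q_rel"]) in dev_keys]
    test = [e for e in examples if (e["q_type"], e["q_rel"]) not in dev_keys]
    return dev, test
```

Note that because whole groups move together, the realized dev fraction over examples only approximates dev_frac.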
{
"text": "We selected \u03bb 1 =.01 and \u03b5=.3 based on preliminary experiments on the dev set of Levy/Holt's data set. The hyperparameter \u03bb 2 is selected from {0, 0.01, 0.1, 0.5, 1, 1.5, 2, 10, \u221e}. 11 We do not tune \u03bb 2 for Berant's data set. We instead use the selected value based on the Levy/Holt dev set. In all our experiments, we remove any local score w 0 ij < .01. We show precision-recall curves by changing the threshold \u03b4 on the similarity scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "5.3"
},
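Sweeping the threshold δ over the similarity scores yields the precision-recall curves reported throughout the experiments. A minimal sketch of that sweep (one curve point per distinct threshold position):

```python
def precision_recall_curve(scores, labels):
    """Trace (precision, recall) points by lowering the decision threshold
    over the similarity scores, one example at a time. `labels` are 1 for
    gold entailments and 0 otherwise."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(labels)
    curve, tp = [], 0
    for n, i in enumerate(order, start=1):
        tp += labels[i]
        curve.append((tp / n, tp / total_pos))  # (precision, recall)
    return curve
```

Ties between equal scores would need consistent breaking in a full implementation; the paper does not describe that detail.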
{
"text": "We test our model by ablation of the global soft constraints L crossGraph and L pResolution , testing simple baselines to resolve sparsity and comparing to the state-of-the-art resources. We also compare with two distributional approaches that can be used to predict predicate similarity. We compare the following models and resources.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.4"
},
{
"text": "CG_PR is our novel model with both global soft constraints L crossGraph and L pResolution . CG is our model without L pResolution . Local is the local distributional similarities without any change.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.4"
},
{
"text": "AVG is the average of local scores across all the entailment graphs that contain both predicates in an entailment of interest. We set \u03bb 2 = \u221e, which forces all the values of \u03b2 to be 1, hence resulting in a uniform average of local scores. Untyped scores are local scores learned without types. We set the cut-offs C 1 =20 and C 2 =20 to have a graph with total number of edges similar to the typed entailment graphs.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.4"
},
{
"text": "ConvE scores are cosine similarities of lowdimensional predicate representations learned by ConvE (Dettmers et al., 2018) , a state-of-theart model for link prediction. ConvE is a multilayer convolutional network model that is highly parameter efficient. We learn 200-dimensional vectors for each predicate (and argument) by applying ConvE to the set of extractions of the above untyped graph. We learned embeddings for each predicate and its reverse to handle examples where the argument order of the two predicates are different. Additionally, we tried TransE (Bordes et al., 2013) , another link prediction method that, despite its simplicity, produces very competitive results in knowledge base completion. However, we do not present its full results, as they were worse than ConvE. 12 PPDB is based on the Paraphrase Database (PPDB) of Pavlick et al. (2015) . We accept an example as entailment if it is labeled as a paraphrase or entailment in the PPDB XL lexical or phrasal collections. 13 Berant_ILP is based on the entailment graphs of Berant et al. (2011) . 14 For Berant's data set, we directly compared our results to the ones reported in Berant et al. (2011) . For Levy/Holt's data set, we used publicly available entailment rules derived from Berant et al. (2011) that give us one point of precision and recall in the plots. Although the rules are typed and can be applied in a context-sensitive manner, ignoring the types and applying the rules out of context yields much better results (Levy and Dagan, 2016) . This is attributable to both the non-standard types used by Berant et al. (2011) and also the general data sparsity issue.",
"cite_spans": [
{
"start": 98,
"end": 121,
"text": "(Dettmers et al., 2018)",
"ref_id": "BIBREF10"
},
{
"start": 562,
"end": 583,
"text": "(Bordes et al., 2013)",
"ref_id": "BIBREF6"
},
{
"start": 787,
"end": 789,
"text": "12",
"ref_id": null
},
{
"start": 841,
"end": 862,
"text": "Pavlick et al. (2015)",
"ref_id": "BIBREF32"
},
{
"start": 994,
"end": 996,
"text": "13",
"ref_id": null
},
{
"start": 1045,
"end": 1065,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 1151,
"end": 1171,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 1257,
"end": 1277,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 1502,
"end": 1524,
"text": "(Levy and Dagan, 2016)",
"ref_id": "BIBREF22"
},
{
"start": 1587,
"end": 1607,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.4"
},
{
"text": "In all our experiments, we first test a set of rule-based constraints introduced by Berant et al. (2011) on the examples before the prediction by our methods. In the experiments on Levy/Holt's data set, in order to maintain compatibility with Levy and Dagan (2016) , we also run the lemmabased heuristic process used by them before applying our methods.We do not apply the lemmabased process on Berant's data set in order to compare with Berant et al's (2011) reported results directly. In experiments with CG_PR and CG, if the typed entailment graph corresponding to an example does not have one or both predicates, we resort to the average score between all typed entailment graphs.",
"cite_spans": [
{
"start": 243,
"end": 264,
"text": "Levy and Dagan (2016)",
"ref_id": "BIBREF22"
},
{
"start": 438,
"end": 459,
"text": "Berant et al's (2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison",
"sec_num": "5.4"
},
{
"text": "To test the efficacy of our globally consistent entailment graphs, we compare them with the baseline systems in Section 6.1. We test the effect of approximating transitivity constraints in Section 6.2. Section 6.3 concerns error analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results and Discussion",
"sec_num": "6"
},
{
"text": "We test our method using three distributional similarity measures: Weeds similarity (Weeds and Weir, 2003) , Lin similarity (Lin, 1998) , and Balanced Inclusion (BInc; Szpektor and Dagan, 2008) . The first two similarity measures are symmetric, 15 and BInc is directional. Figures 4A and 4B show precision-recall curves of the different methods on Levy/Holt's and Berant's data sets, respectively, using BInc. We show the full curve for BInc; as it is directional and on the development portion of Levy/Holt's data set, it yields better results than Weeds and Lin.",
"cite_spans": [
{
"start": 84,
"end": 106,
"text": "(Weeds and Weir, 2003)",
"ref_id": "BIBREF43"
},
{
"start": 124,
"end": 135,
"text": "(Lin, 1998)",
"ref_id": "BIBREF27"
},
{
"start": 168,
"end": 193,
"text": "Szpektor and Dagan, 2008)",
"ref_id": "BIBREF39"
},
{
"start": 245,
"end": 247,
"text": "15",
"ref_id": null
}
],
"ref_spans": [
{
"start": 273,
"end": 291,
"text": "Figures 4A and 4B",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Globally Consistent Entailment Graphs",
"sec_num": "6.1"
},
{
"text": "In addition, Table 1 shows the area under the precision-recall curve (AUC) for all variants of the three similarity measures. Note that each method covers a different range of precisions and recalls. We compute AUC for precisions in the range [0.5, 1], because predictions with precision better than random guess are more important for end applications such as question answering and semantic parsing. For each similarity measure, we tested statistical significance between the methods using bootstrap resampling with 10K experiments (Efron and Tibshirani, 1985; Koehn, 2004) . In Table 1 , the best result for each data set and similarity measure is boldfaced. If the difference of another model with the best result is not significantly different with p-value < 0.05, the second model is also boldfaced.",
"cite_spans": [
{
"start": 534,
"end": 562,
"text": "(Efron and Tibshirani, 1985;",
"ref_id": "BIBREF12"
},
{
"start": 563,
"end": 575,
"text": "Koehn, 2004)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 13,
"end": 20,
"text": "Table 1",
"ref_id": "TABREF1"
},
{
"start": 581,
"end": 588,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Globally Consistent Entailment Graphs",
"sec_num": "6.1"
},
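The restricted AUC used for Table 1 admits a simple reading: trapezoidal area over recall, counting only curve points whose precision is at least 0.5. A sketch under that assumption (the authors' exact integration scheme is not given):

```python
def auc_precision_range(points, pmin=0.5):
    """Trapezoidal area under a precision-recall curve, restricted to points
    with precision >= pmin. `points` is a list of (precision, recall) pairs."""
    pts = sorted((r, p) for p, r in points if p >= pmin)  # order by recall
    area = 0.0
    for (r0, p0), (r1, p1) in zip(pts, pts[1:]):
        area += (r1 - r0) * (p0 + p1) / 2.0
    return area
```

Restricting to precision ≥ 0.5 means a method is rewarded only for the operating points that beat random guessing, matching the motivation stated above.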
{
"text": "Among the distributional similarities based on BInc, BInc_CG_PR outperforms all the other models in both data sets. In comparison with BInc score's AUC, we observe more than 100% improvement on Levy/Holt's data set and about 30% improvement on Berant's. Given the consistent gains, our proposed model appears to alleviate the data sparsity and the noise inherent to local scores. Our method also outperforms PPDB and Berant_ILP on both data sets. The second-best performing model is BInc_CG, which improves the results significantly, especially on Berant's data set, over the BInc_AVG (AUC of .177 vs. .144) . This confirms that learning what subset of entailments should be generalized across different typed entailment graphs ( \u03b2) is effective.",
"cite_spans": [
{
"start": 585,
"end": 607,
"text": "(AUC of .177 vs. .144)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Consistent Entailment Graphs",
"sec_num": "6.1"
},
{
"text": "The untyped models yield a single large entailment graph. It contains (noisy) edges that are not found in smaller typed entailment graphs. Despite the noise, untyped models for all three similarity measures still perform better than the typed ones in terms of AUC. However, they do worse in the high-precision range. For example, BInc_untyped is worse than BInc for precision > 0.85. The AVG models do surprisingly well (only about 0.5 to 3.5 below CG_PR in terms of AUC), but note that only a subset of the typed entailment graphs might have (untyped) predicates p and q of interest (usually not more than 10 typed entailment graphs out of 367 graphs). Therefore, the AVG models are generally expected to outperform the untyped ones (with only one exception in our experiments), as typing has refined the entailments and averaging just improves the recall. Comparison of CG_PR with CG models confirms that explicitly encouraging paraphrase predicates to have the same patterns of entailment is effective. It improves the results for BInc score, which is a directional similarity measure. We also tested applying the paraphrase resolution soft constraints alone, but the differences with the local scores were not statistically significant. This suggests that the paraphrase resolution is more helpful when similarities are transferred between graphs, as this can cause inconsistencies around the predicates with transferred similarities, which are then resolved by the paraphrase resolution constraints.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Consistent Entailment Graphs",
"sec_num": "6.1"
},
{
"text": "The results of the distributional representations learned by ConvE are worse than most other methods. We attribute this outcome to the fact that a) while entailment relations are directional, these methods are symmetric; b) the learned embeddings are optimized for tasks other than entailment or paraphrase detection; and c) the embeddings are learned regardless of argument types. However, even the BInc_untyped baseline outperforms ConvE, showing that it is important to use a directional measure that directly models entailment. We hypothesize that learning predicate representations based on the distributional inclusion hypotheses which do not have the above limitations might yield better results.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Globally Consistent Entailment Graphs",
"sec_num": "6.1"
},
{
"text": "Our largest graph has 53K nodes; we thus tested approximate methods instead of the ILP to close entailment relations under transitivity ( \u00a72). The approximate TNF method of Berant et al. (2011) did not scale to the size of our graphs with moderate sparsity parameters. also present a heuristic method, High-To-Low Forest Reducible Graph (HTL-FRG), which gets slightly better results than TNF on their data set, and which scales to graphs of the size we work with. 16 We applied the HTL-FRG method to the globally consistent similarity scores (BInc_CG_ PR_HTL) and changed the threshold on the scores to get a precision-recall curve. Figures 4C and 4D show the results of this method on Levy/Holt's and Berant's data sets. Our experiments show, in contrast to the results of , that the HTL-FRG method leads to worse results when applied to our global scores. This result is caused both by the use of heuristic methods in place of globally optimizing via ILP, and by the removal of many valid edges arising from the fact that the FRG assumption is not correct for many realworld domains.",
"cite_spans": [
{
"start": 173,
"end": 193,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
},
{
"start": 464,
"end": 466,
"text": "16",
"ref_id": null
}
],
"ref_spans": [
{
"start": 633,
"end": 650,
"text": "Figures 4C and 4D",
"ref_id": "FIGREF3"
}
],
"eq_spans": [],
"section": "Effect of Transitivity Constraints",
"sec_num": "6.2"
},
{
"text": "We analyzed 100 false positive (FP) and 100 false negative (FN) randomly selected examples (using BInc_CG_ST results on Levy/Holt's data set and at the precision level of Berant_ILP, i.e. 0.76). We present our findings in Table 2 . Most of the FN errors are due to data sparsity, but a few errors are due to wrong labeling of the data and parsing errors. More than half of the FP errors are because of spurious correlations in the data that are captured by the similarity scores, but are not judged to constitute entailment by the human judges. About one-third of the FP errors are because of the normalization we currently perform on the relations (e.g., we remove modals and auxiliaries). The remaining errors are mostly due to parsing and our use of Levy and Dagan's (2016) lemmabased heuristic process.",
"cite_spans": [
{
"start": 753,
"end": 776,
"text": "Levy and Dagan's (2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 222,
"end": 229,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "6.3"
},
{
"text": "To further test the utility of explicit entailment rules, we evaluate the learned rules on an extrinsic task: answer selection for machine reading comprehension on NewsQA, a data set that The report said opium has accounted for more than half of Afghanistan's gross domestic product in 2007.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "What makes up half of Afghanistans GDP ? contains questions about CNN articles (Trischler et al., 2017) . Machine reading comprehension is usually evaluated by posing questions about a text passage and then assessing the answers of a system (Trischler et al., 2017) . The data sets that are used for this task are often in the form of (document,question,answer) triples, where answer is a short span of the document. Answer selection is an important task, where the goal is to select the sentence(s) that contain the answer. We show improvements by adding knowledge from our learned entailments without changing the graphs or tuning them to this task in any way.",
"cite_spans": [
{
"start": 79,
"end": 103,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 241,
"end": 265,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "Inverse sentence frequency (ISF) is a strong baseline for answer selection (Trischler et al., 2017) . The ISF score between a sentence S i and a question Q is defined as ISF(S i , Q) = w\u2208S i \u2229Q IDF(w), where IDF(w) is the inverse document frequency of the word w by considering each sentence in the whole corpus as one document. The state-of-the-art methods for answer selection use ISF, and by itself it already does quite well (Trischler et al., 2017; Narayan et al., 2018) . We propose to extend the ISF score with entailment rules. We define a new score,",
"cite_spans": [
{
"start": 75,
"end": 99,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 429,
"end": 453,
"text": "(Trischler et al., 2017;",
"ref_id": "BIBREF40"
},
{
"start": 454,
"end": 475,
"text": "Narayan et al., 2018)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "ISFEnt(S i , Q) = \u03b1ISF(S i , Q) + (1 \u2212 \u03b1)|{r 1 \u2208 S i , r 2 \u2208 Q : r 1 \u2192 r 2 }|",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "where \u03b1 \u2208 [0, 1] is a hyper-parameter and r 1 and r 2 denote relations in the sentence and the question, respectively. The intuition is that if a sentence such as \"Luka Modric sustained a fracture to his right fibula\" is a paraphrase of or entails the answer of a question such as \"What does Luka Modric suffer from?\", it will contain the answer span. We consider an entailment decision between two typed predicates if their global similarity BInc_CG_PR is higher than a threshold \u03b4.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
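The ISF and ISFEnt scores above translate directly into code. A minimal sketch, with illustrative data structures: words and relations are pre-extracted lists, `idf` is a word-to-IDF map, and `entails` holds the (r1, r2) rules whose global similarity exceeds the threshold δ.

```python
def isf(sent_words, q_words, idf):
    """ISF(S_i, Q): sum of IDF over words shared by sentence and question."""
    return sum(idf.get(w, 0.0) for w in set(sent_words) & set(q_words))

def isf_ent(sent_words, sent_rels, q_words, q_rels, idf, entails, alpha=0.8):
    """ISFEnt: interpolate ISF with the count of relation pairs
    (r1 in S_i, r2 in Q) such that r1 entails r2. The value of alpha here is
    arbitrary; in the paper it is tuned on the NewsQA dev set."""
    n_rules = sum(1 for r1 in sent_rels for r2 in q_rels if (r1, r2) in entails)
    return alpha * isf(sent_words, q_words, idf) + (1 - alpha) * n_rules
```

Sentences are then ranked by ISFEnt instead of ISF, so a sentence whose relations entail the question's relation can outrank one with only higher word overlap.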
{
"text": "We also considered entailments between unary relations (one argument) by leveraging our learned binary entailments. We split each binary entailment into two potential unary entailments. For example, the entailment visit 1,2 (:person,:location) \u2192 arrive 1,in (:person,:location), is split into visit 1 (:person) \u2192 arrive 1 (:person) and visit 2 (:location) \u2192 arrive in (:location). We computed unary similarity scores by averaging over all related binary scores. This is particularly helpful when one argument is not present (e.g., adjuncts or Wh questions) or does not exactly match between the question and the answer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
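The binary-to-unary averaging step can be sketched as follows. This simplifies the paper's slot labels (e.g., arrive_in) to positional slot indices, and the key layout is ours, not the authors' representation.

```python
from collections import defaultdict

def unary_scores(binary_scores):
    """Split each typed binary entailment into two unary ones and average all
    related binary scores. `binary_scores` maps
    ((pred, type1, type2), (pred, type1, type2)) -> score; the result maps
    ((pred, slot, type), (pred, slot, type)) -> averaged score."""
    sums, counts = defaultdict(float), defaultdict(int)
    for ((p, t1, t2), (q, u1, u2)), score in binary_scores.items():
        for slot, (ta, tb) in ((1, (t1, u1)), (2, (t2, u2))):
            key = ((p, slot, ta), (q, slot, tb))
            sums[key] += score
            counts[key] += 1
    return {k: sums[k] / counts[k] for k in sums}
```

Averaging pools evidence from every type signature in which the binary entailment was observed, which is what makes the unary scores usable when only one argument is available.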
{
"text": "We test the proposed answer selection score on NewsQA, a data set that contains questions about CNN articles (Trischler et al., 2017) . The data set is collected in a way that encourages lexical and syntactic divergence between questions and documents. The crowdworkers who wrote questions saw only a news article headline and its summary points, but not the full article. This process encourages curiosity about the contents of the full article and prevents questions that are simple reformulations of article sentences (Trischler et al., 2017) . This is a more realistic and suitable setting to test paraphrasing and entailment capabilities.",
"cite_spans": [
{
"start": 109,
"end": 133,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF40"
},
{
"start": 521,
"end": 545,
"text": "(Trischler et al., 2017)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "We use the development set of the data set (5,165 samples) to tune \u03b1 and \u03b4 and report results on the test set (5,124 examples) in Table 4 .",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 4",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "w ij = 1(c ij > \u03bb 1 )(c ij \u2212 \u03bb 1 )/\u03c4 ij (6) c ij = w 0 ij + (i ,j )\u2208N (i,j) \u03b2(\u2022)w i j \u2212 1(w ij > \u03b5)I \u03b5 (w ji ) k\u2208V (\u03c4 1 (i),\u03c4 2 (i)) (w ik \u2212 w jk ) 2 + (w ki \u2212 w kj ) 2 + 2 k\u2208V (\u03c4 1 (i),\u03c4 2 (i))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "I \u03b5 (w jk )I \u03b5 (w kj )w ik + I \u03b5 (w ik )I \u03b5 (w ki )w kj (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03c4 ij = 1 + (i ,j )\u2208N (i,j) \u03b2(\u2022) + 2 k\u2208V (\u03c4 1 (i),\u03c4 2 (i)) I \u03b5 (w jk )I \u03b5 (w kj ) + I \u03b5 (w ik )I \u03b5 (w ki )",
"eq_num": "(8)"
}
],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03b2(\u2022) = I 0 1 \u2212 j\u2208V (\u03c4 1 (i),\u03c4 2 (i)) (i ,j )\u2208N (i,j) (w ij \u2212 w i j ) 2 /\u03bb 2 .",
"eq_num": "(9)"
}
],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "Figure 5: The update rules for w ij and \u03b2(\u2022).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
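The outer step of the update rules in Figure 5 is a soft-threshold followed by a normalization. A minimal sketch of that step alone (function names are ours; computing c_ij and τ_ij from Equations 7-8 is left abstract):

```python
def i_eps(x, eps):
    """I_eps(x) = x if x > eps, else 0, as used inside Equations (7)-(8)."""
    return x if x > eps else 0.0

def update_w(c_ij, tau_ij, lambda1=0.01):
    """Equation (6): soft-threshold the aggregate score c_ij by the l1 weight
    lambda1 and normalize by tau_ij. Scores at or below lambda1 are zeroed,
    which is what prunes low-confidence edges from the entailment graphs."""
    return (c_ij - lambda1) / tau_ij if c_ij > lambda1 else 0.0
```

Since τ_ij ≥ 1 (Equation 8 starts from 1 and adds nonnegative terms), the normalization shrinks rather than inflates the aggregated score.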
{
"text": "We observe about 1.4% improvement in accuracy (ACC) and 1% improvement in mean reciprocal rank (MRR) and mean average precision (MAP), confirming that entailment rules are helpful for answer selection. 17 Table 3 shows some of the examples where ISFEnt ranks the correct sentences higher than ISF. These examples are very challenging for methods that do not have entailment and paraphrasing knowledge, and illustrate the semantic interpretability of the entailment graphs. We also performed a similar evaluation on the Stanford Natural Language Inference data set (SNLI; Bowman et al., 2015) and obtained 1% improvement over a basic neural network architecture that models sentences with an n-layered LSTM (Conneau et al., 2017) . However, we did not obtain improvements over the state-of-theart results, because only a few of the SNLI examples require external knowledge of predicate entailments. Most examples require reasoning capabilities such as A \u2227 B \u2192 B and simple lexical entailments such as boy \u2192 person, which are often present in the training set.",
"cite_spans": [
{
"start": 706,
"end": 728,
"text": "(Conneau et al., 2017)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [
{
"start": 205,
"end": 212,
"text": "Table 3",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Extrinsic Evaluation",
"sec_num": "7"
},
{
"text": "We have introduced a scalable framework to learn typed entailment graphs directly from text. We use global soft constraints to learn globally consistent entailment scores for entailment relations. Our experiments show that generalizing in this way across different but related typed entail- 17 The accuracy results of Narayan et al. (2018) are not consistent with their own MRR and MAP (ACC>MRR in come cases), as they break ties between ISF scores differently when computing ACC compared to MRR and MAP. See also http://homepages.inf.ed.ac.uk/scohen/ acl18external-errata.pdf. ment graphs significantly improves performance over local similarity scores on two standard textentailment data sets. We show around 100% increase in AUC on Levy/Holt's data set and 30% on Berant's data set. The method also outperforms PPDB and the prior state-of-the-art entailment graph-building approach due to Berant et al. (2011) . Paraphrase resolution further improves the results. We have in addition showed the utility of entailment rules on answer selection for machine reading comprehension.",
"cite_spans": [
{
"start": 291,
"end": 293,
"text": "17",
"ref_id": null
},
{
"start": 318,
"end": 339,
"text": "Narayan et al. (2018)",
"ref_id": "BIBREF29"
},
{
"start": 892,
"end": 912,
"text": "Berant et al. (2011)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "In the future, we plan to show that the global soft constraints developed in this paper can be extended to other structural properties of entailment graphs such as transitivity. Future work might also look at entailment relation learning and link prediction tasks jointly. The entailment graphs can be used to improve relation extraction, similar to Eichler et al. (2017) , but covering more relations. In addition, we intend to collapse cliques in the entailment graphs to paraphrase clusters with a single relation identifier, to replace the form-dependent lexical semantics of the CCG parser with these form-independent relations (Lewis and Steedman, 2013a) , and to use the entailment graphs to derive meaning postulates for use in tasks such as question-answering and construction of knowledge-graphs from text (Lewis and Steedman, 2014) . and zero, otherwise. The compatibility functions \u03b2(\u2022) are updated using Equation (9).",
"cite_spans": [
{
"start": 350,
"end": 371,
"text": "Eichler et al. (2017)",
"ref_id": "BIBREF13"
},
{
"start": 633,
"end": 660,
"text": "(Lewis and Steedman, 2013a)",
"ref_id": "BIBREF24"
},
{
"start": 816,
"end": 842,
"text": "(Lewis and Steedman, 2014)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and Future Work",
"sec_num": "8"
},
{
"text": "We performed our experiments on a 32-core 2.3 GHz machine with 256GB of RAM.2 Our code, extracted binary relations, and the learned entailment graphs are available at https://github.com/ mjhosseini/entGraph.3 Predicates inside each clique in the entailment graphs are considered to be paraphrases. t 3 =living_thing,t 4 =disease t 1 =government_agency,t 2 =event !(trigger,(t 1 ,t 2 ),(t 3 ,t 4 ))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "49 types out of 113 FIGER types.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "For each similarity measure, we define one separate matrix and run the learning algorithm separately, but for simplicity of notation, we do not show the similarity measure names.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our experiments, we set \u03b5 = .3. Smaller values of \u03b5 yield similar results, but learning is slower.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "In our experiments, the total number of edges is \u2248 .01|V | 2 and most of predicate pairs are seen in less than 20 subgraphs, rather than |T | 2 .8 There are 4 graphs with more than 20K nodes, 3 graphs with 10K to 20K nodes, and 16 graphs with 1K to 10K nodes.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "10 mappings in total (e.g., animal to living_thing). 11 The selected value was usually around 1.5.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tried the average of GloVe embeddings(Pennington et al., 2014) of the words in each predicate, but the results were worse than ConvE.13 We also tested the largest collection (XXXL), but the precision was very low on Berant's data set (below 30%).14 We also tested, but do not report the results as they are very similar.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Weeds similarity is the harmonic average of Weeds precision and Weeds recall, hence a symmetric measure.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "TNF did not converge after two weeks for threshold \u03b4 = .04. For \u03b4 = .12 (precisions higher than 80%), it converged, but with results slightly worse than HTL-FRG on both data sets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We thank Thomas Kober and Li Dong for helpful comments and feedback on the work, Reggie Long for preliminary experiments on openIE extractions, and Ronald Cardenas for providing baseline code for the NewsQA experiments. The authors would also like to thank Katrin Erk and the three anonymous reviewers for their valuable feedback. This work was supported in part by the Alan Turing Institute under EPSRC grant EP/N510129/1. The experiments were made possible by Microsoft's donation of Azure credits to The Alan Turing Institute. The research was supported in part by ERC Advanced Fellowship GA 742137 SEMANTAX, a Google faculty award, a Bloomberg L. P. Gift award, and a University of Edinburgh/Huawei Technologies award to Steedman. Chambers was supported in part by the National Science Foundation under grant IIS-1617952. Steedman and Johnson were supported by the Australian Research Council's Discovery Projects funding scheme (project number DP160102156).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "Appendix A Figure 5 shows the update rules of the learning algorithm. The global similarity scores w ij are updated using Equation 6, where c ij and \u03c4 ij are defined in Equation 7and Equation 8, respectively. 1(x) equals 1 if the condition x is satisfied",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 19,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "annex",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Lexical Inference over Multi-Word Predicates: A Distributional Approach",
"authors": [
{
"first": "Omri",
"middle": [],
"last": "Abend",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "644--654",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omri Abend, Shay B. Cohen, and Mark Steedman. 2014. Lexical Inference over Multi- Word Predicates: A Distributional Approach. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 644-654.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Leveraging Linguistic Structure for Open Domain Information Extraction",
"authors": [
{
"first": "Gabor",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Melvin",
"middle": [
"Johnson"
],
"last": "Premkumar",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabor Angeli, Melvin Johnson Premkumar, and Christopher D. Manning. 2015. Leveraging Lin- guistic Structure for Open Domain Information Extraction. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 344-354.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Efficient Global Learning of Entailment Graphs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Noga",
"middle": [],
"last": "Alon",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2015,
"venue": "Computational Linguistics",
"volume": "42",
"issue": "",
"pages": "221--263",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Noga Alon, Ido Dagan, and Jacob Goldberger. 2015. Efficient Global Learning of Entailment Graphs. Computational Linguistics, 42:221-263.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Efficient Tree-Based Approximation for Entailment Graph Learning",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Meni",
"middle": [],
"last": "Adler",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "117--125",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Ido Dagan, Meni Adler, and Jacob Goldberger. 2012. Efficient Tree-Based Approximation for Entailment Graph Learning. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 117-125.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Global Learning of Typed Entailment Rules",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "610--619",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Jacob Goldberger, and Ido Dagan. 2011. Global Learning of Typed Entail- ment Rules. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 610-619.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge",
"authors": [
{
"first": "Kurt",
"middle": [],
"last": "Bollacker",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Evans",
"suffix": ""
},
{
"first": "Praveen",
"middle": [],
"last": "Paritosh",
"suffix": ""
},
{
"first": "Tim",
"middle": [],
"last": "Sturge",
"suffix": ""
},
{
"first": "Jamie",
"middle": [],
"last": "Taylor",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the ACM SIGMOD international conference on Management of data",
"volume": "",
"issue": "",
"pages": "1247--1250",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: A Collaboratively Created Graph Database for Structuring Human Knowledge. In Proceedings of the ACM SIGMOD international conference on Management of data, pages 1247-1250.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Translating Embeddings for Modeling Multi-Relational Data",
"authors": [
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
},
{
"first": "Nicolas",
"middle": [],
"last": "Usunier",
"suffix": ""
},
{
"first": "Alberto",
"middle": [],
"last": "Garcia-Duran",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Weston",
"suffix": ""
},
{
"first": "Oksana",
"middle": [],
"last": "Yakhnenko",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "2787--2795",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antoine Bordes, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. 2013. Translating Embeddings for Modeling Multi-Relational Data. In Advances in neural information processing systems, pages 2787-2795.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A Large Annotated Corpus for Learning Natural Language Inference",
"authors": [
{
"first": "R",
"middle": [],
"last": "Samuel",
"suffix": ""
},
{
"first": "Gabor",
"middle": [],
"last": "Bowman",
"suffix": ""
},
{
"first": "Christopher",
"middle": [],
"last": "Angeli",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Potts",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "632--642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel R. Bowman, Gabor Angeli, Christopher Potts, and Christopher D. Manning. 2015. A Large Annotated Corpus for Learning Natu- ral Language Inference. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 632-642.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Supervised Learning of Universal Sentence Representations from Natural Language Inference Data",
"authors": [
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Douwe",
"middle": [],
"last": "Kiela",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Antoine",
"middle": [],
"last": "Bordes",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "670--680",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alexis Conneau, Douwe Kiela, Holger Schwenk, Lo\u00efc Barrault, and Antoine Bordes. 2017. Su- pervised Learning of Universal Sentence Rep- resentations from Natural Language Inference Data. In Proceedings of the Conference on Em- pirical Methods in Natural Language Process- ing, pages 670-680.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Similarity-Based Models of Word Cooccurrence Probabilities",
"authors": [
{
"first": "Lillian",
"middle": [],
"last": "Ido Dagan",
"suffix": ""
},
{
"first": "Fernando",
"middle": [
"C N"
],
"last": "Lee",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine learning",
"volume": "34",
"issue": "1-3",
"pages": "43--69",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ido Dagan, Lillian Lee, and Fernando C.N. Pereira. 1999. Similarity-Based Models of Word Cooccurrence Probabilities. Machine learning, 34(1-3):43-69.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Convolutional 2D Knowledge Graph Embeddings",
"authors": [
{
"first": "Tim",
"middle": [],
"last": "Dettmers",
"suffix": ""
},
{
"first": "Minervini",
"middle": [],
"last": "Pasquale",
"suffix": ""
},
{
"first": "Stenetorp",
"middle": [],
"last": "Pontus",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 32th AAAI Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "1811--1818",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tim Dettmers, Minervini Pasquale, Stenetorp Pontus, and Sebastian Riedel. 2018. Convo- lutional 2D Knowledge Graph Embeddings. In Proceedings of the 32th AAAI Conference on Artificial Intelligence, pages 1811-1818.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Learning to Paraphrase for Question Answering",
"authors": [
{
"first": "Li",
"middle": [],
"last": "Dong",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "875--886",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Li Dong, Jonathan Mallinson, Siva Reddy, and Mirella Lapata. 2017. Learning to Paraphrase for Question Answering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 875-886.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "The Bootstrap Method for Assessing Statistical Accuracy",
"authors": [
{
"first": "Bradley",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "Robert",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1985,
"venue": "Behaviormetrika",
"volume": "12",
"issue": "17",
"pages": "1--35",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bradley Efron and Robert Tibshirani. 1985. The Bootstrap Method for Assessing Statistical Accuracy. Behaviormetrika, 12(17):1-35.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Generating Pattern-Based Entailment Graphs for Relation Extraction",
"authors": [
{
"first": "Kathrin",
"middle": [],
"last": "Eichler",
"suffix": ""
},
{
"first": "Feiyu",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Hans",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Krause",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (* SEM 2017)",
"volume": "",
"issue": "",
"pages": "220--229",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kathrin Eichler, Feiyu Xu, Hans Uszkoreit, and Sebastian Krause. 2017. Generating Pattern- Based Entailment Graphs for Relation Ex- traction. In Proceedings of the 6th Joint Conference on Lexical and Computational Semantics (* SEM 2017), pages 220-229.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Open Information Extraction: The Second Generation",
"authors": [
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Anthony",
"middle": [],
"last": "Fader",
"suffix": ""
},
{
"first": "Janara",
"middle": [],
"last": "Christensen",
"suffix": ""
},
{
"first": "Stephen",
"middle": [],
"last": "Soderland",
"suffix": ""
},
{
"first": "Mausam",
"middle": [],
"last": "Mausam",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 22nd International Joint Conference on Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "3--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Oren Etzioni, Anthony Fader, Janara Christensen, Stephen Soderland, and Mausam Mausam. 2011. Open Information Extraction: The Sec- ond Generation. In Proceedings of the 22nd International Joint Conference on Artificial Intelligence, pages 3-10.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "PPDB: The Paraphrase Database",
"authors": [
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "758--764",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2013. PPDB: The Para- phrase Database. In Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 758-764.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "The Distributional Inclusion Hypotheses and Lexical Entailment",
"authors": [
{
"first": "Maayan",
"middle": [],
"last": "Geffet",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "107--114",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maayan Geffet and Ido Dagan. 2005. The Dis- tributional Inclusion Hypotheses and Lexical Entailment. In Proceedings of the 43rd Annual Meeting on Association for Computational Lin- guistics, pages 107-114.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Measuring Semantic Content in Distributional Vectors",
"authors": [
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "Herbelot",
"suffix": ""
},
{
"first": "Mohan",
"middle": [],
"last": "Ganesalingam",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "440--445",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aur\u00e9lie Herbelot and Mohan Ganesalingam. 2013. Measuring Semantic Content in Distributional Vectors. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, pages 440-445.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Probabilistic Models of Relational Implication",
"authors": [
{
"first": "R",
"middle": [],
"last": "Xavier",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Holt",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xavier R. Holt. 2018. Probabilistic Models of Relational Implication. Master's thesis, Mac- quarie University.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Distributional Inclusion Hypothesis for Tensor-based Composition",
"authors": [
{
"first": "Dimitri",
"middle": [],
"last": "Kartsaklis",
"suffix": ""
},
{
"first": "Mehrnoosh",
"middle": [],
"last": "Sadrzadeh",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2849--2860",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dimitri Kartsaklis and Mehrnoosh Sadrzadeh. 2016. Distributional Inclusion Hypothesis for Tensor-based Composition. In Proceedings of the 26th International Conference on Computational Linguistics: Technical Papers, pages 2849-2860.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Markov Random Fields and their Applications",
"authors": [
{
"first": "Ross",
"middle": [],
"last": "Kindermann",
"suffix": ""
},
{
"first": "Laurie",
"middle": [],
"last": "Snell",
"suffix": ""
}
],
"year": 1980,
"venue": "",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ross Kindermann and J Laurie Snell. 1980. Markov Random Fields and their Applications, volume 1. American Mathematical Society.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Statistical Significance Tests for Machine Translation Evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical Significance Tests for Machine Translation Evaluation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 388-395.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Annotating Relation Inference in Context via Question Answering",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "249--255",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy and Ido Dagan. 2016. Annotating Re- lation Inference in Context via Question An- swering. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, pages 249-255.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Combined Distributional and Logical Semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis. 2014. Combined Distributional and Logical Semantics. Ph.D. thesis, University of Edinburgh.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Combined Distributional and Logical Semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "179--192",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2013a. Com- bined Distributional and Logical Semantics. Transactions of the Association for Computa- tional Linguistics, 1:179-192.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Unsupervised Induction of Cross-Lingual Semantic Relations",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "681--692",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2013b. Unsu- pervised Induction of Cross-Lingual Semantic Relations. In Proceedings of the Conference on Empirical Methods in Natural Language Pro- cessing, pages 681-692.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Combining Formal and Distributional Models of Temporal and Intensional Semantics",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ACL Workshop on Semantic Parsing",
"volume": "",
"issue": "",
"pages": "28--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Lewis and Mark Steedman. 2014. Com- bining Formal and Distributional Models of Temporal and Intensional Semantics. In Pro- ceedings of the ACL Workshop on Semantic Parsing, pages 28-32.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Automatic Retrieval and Clustering of Similar Words",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "768--774",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin. 1998. Automatic Retrieval and Clus- tering of Similar Words. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics, pages 768-774.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Fine-Grained Entity Recognition",
"authors": [
{
"first": "Xiao",
"middle": [],
"last": "Ling",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the National Conference of the Association for Advancement of Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "94--100",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiao Ling and Daniel S. Weld. 2012. Fine- Grained Entity Recognition. In Proceedings of the National Conference of the Associa- tion for Advancement of Artificial Intelligence, pages 94-100.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Document Modeling with External Attention For Sentence Extraction",
"authors": [
{
"first": "Shashi",
"middle": [],
"last": "Narayan",
"suffix": ""
},
{
"first": "Ronald",
"middle": [],
"last": "Cardenas",
"suffix": ""
},
{
"first": "Nikos",
"middle": [],
"last": "Papasarantopoulos",
"suffix": ""
},
{
"first": "Shay",
"middle": [
"B"
],
"last": "Cohen",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Jiangsheng",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Yi",
"middle": [],
"last": "Chang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "2020--2030",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shashi Narayan, Ronald Cardenas, Nikos Papasarantopoulos, Shay B. Cohen, Mirella Lapata, Jiangsheng Yu, and Yi Chang. 2018. Document Modeling with External Attention For Sentence Extraction. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics, pages 2020-2030.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "AIDAlight: High-Throughput Named-Entity Disambiguation",
"authors": [
{
"first": "Johannes",
"middle": [],
"last": "Dat Ba Nguyen",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Hoffart",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Theobald",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2014,
"venue": "Workshop on Linked Data on the Web",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dat Ba Nguyen, Johannes Hoffart, Martin Theobald, and Gerhard Weikum. 2014. AIDA- light: High-Throughput Named-Entity Dis- ambiguation. In Workshop on Linked Data on the Web, pages 1-10.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Events in the Semantics of English: A Study in Subatomic Semantics",
"authors": [
{
"first": "Terence",
"middle": [],
"last": "Parsons",
"suffix": ""
}
],
"year": 1990,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terence Parsons. 1990. Events in the Semantics of English: A Study in Subatomic Semantics. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "PPDB 2.0: Better Paraphrase Ranking, Fine-Grained Entailment Relations, Word Embeddings, and Style Classification",
"authors": [
{
"first": "Ellie",
"middle": [],
"last": "Pavlick",
"suffix": ""
},
{
"first": "Pushpendre",
"middle": [],
"last": "Rastogi",
"suffix": ""
},
{
"first": "Juri",
"middle": [],
"last": "Ganitkevitch",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [],
"last": "Van Durme",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "425--430",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevitch, Benjamin Van Durme, and Chris Callison-Burch. 2015. PPDB 2.0: Better Paraphrase Ranking, Fine-Grained Entailment Relations, Word Embeddings, and Style Clas- sification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 425-430.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "GloVe: Global Vectors for Word Representation",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Pennington",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1532--1543",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global Vectors for Word Representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 1532-1543.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Large-Scale Semantic Parsing without Question-Answer Pairs",
"authors": [
{
"first": "Siva",
"middle": [],
"last": "Reddy",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2014,
"venue": "Transactions of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "377--392",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Siva Reddy, Mirella Lapata, and Mark Steedman. 2014. Large-Scale Semantic Parsing with- out Question-Answer Pairs. Transactions of the Association for Computational Linguistics, 2:377-392.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Relation Extraction with Matrix Factorization and Universal Schemas",
"authors": [
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "Limin",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "Benjamin",
"middle": [
"M"
],
"last": "Marlin",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "74--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sebastian Riedel, Limin Yao, Andrew McCallum, and Benjamin M. Marlin. 2013. Relation Ex- traction with Matrix Factorization and Univer- sal Schemas. In Proceedings of the Conference of the North American Chapter of the Associ- ation for Computational Linguistics: Human Language Technologies, pages 74-84.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Learning First-Order Horn Clauses From Web Text",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Schoenmackers",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Etzioni",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
},
{
"first": "Jesse",
"middle": [],
"last": "Davis",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1088--1098",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Schoenmackers, Oren Etzioni, Daniel S. Weld, and Jesse Davis. 2010. Learning First- Order Horn Clauses From Web Text. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 1088-1098.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Reasoning with Neural Tensor Networks for Knowledge Base Completion",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
},
{
"first": "Danqi",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "926--934",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Socher, Danqi Chen, Christopher D. Manning, and Andrew Ng. 2013. Reasoning with Neural Tensor Networks for Knowledge Base Completion. In Advances in neural infor- mation processing systems, pages 926-934.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The Syntactic Process",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Steedman",
"suffix": ""
}
],
"year": 2000,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Learning Entailment Rules for Unary Templates",
"authors": [
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "849--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Idan Szpektor and Ido Dagan. 2008. Learning Entailment Rules for Unary Templates. In Pro- ceedings of the 22nd International Conference on Computational Linguistics, pages 849-856.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "NewsQA: A Machine Comprehension Dataset",
"authors": [
{
"first": "Adam",
"middle": [],
"last": "Trischler",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Xingdi",
"middle": [],
"last": "Yuan",
"suffix": ""
},
{
"first": "Justin",
"middle": [],
"last": "Harris",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Bachman",
"suffix": ""
},
{
"first": "Kaheer",
"middle": [],
"last": "Suleman",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2nd Workshop on Representation Learning for NLP",
"volume": "",
"issue": "",
"pages": "191--200",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adam Trischler, Tong Wang, Xingdi Yuan, Justin Harris, Alessandro Sordoni, Philip Bachman, and Kaheer Suleman. 2017. NewsQA: A Ma- chine Comprehension Dataset. In Proceedings of the 2nd Workshop on Representation Learn- ing for NLP, pages 191-200.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Complex Embeddings for Simple Link Prediction",
"authors": [
{
"first": "Th\u00e9o",
"middle": [],
"last": "Trouillon",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Welbl",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Riedel",
"suffix": ""
},
{
"first": "\u00c9ric",
"middle": [],
"last": "Gaussier",
"suffix": ""
},
{
"first": "Guillaume",
"middle": [],
"last": "Bouchard",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 33rd International Conference on International Conference on Machine Learning",
"volume": "",
"issue": "",
"pages": "2071--2080",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Th\u00e9o Trouillon, Johannes Welbl, Sebastian Riedel, \u00c9ric Gaussier, and Guillaume Bouchard. 2016. Complex Embeddings for Simple Link Pre- diction. In Proceedings of the 33rd Interna- tional Conference on International Conference on Machine Learning, pages 2071-2080.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Building a Semantic Parser Overnight",
"authors": [
{
"first": "Yushi",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1332--1342",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yushi Wang, Jonathan Berant, and Percy Liang. 2015. Building a Semantic Parser Overnight. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics, pages 1332-1342.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "A General Framework for Distributional Similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds and David Weir. 2003. A Gen- eral Framework for Distributional Similarity. In Proceedings of the Conference on Empiri- cal Methods in Natural Language Processing, pages 81-88.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Completion",
"authors": [
{
"first": "Yangyang",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Wotao",
"middle": [],
"last": "Yin",
"suffix": ""
}
],
"year": 2013,
"venue": "SIAM Journal on imaging sciences",
"volume": "6",
"issue": "3",
"pages": "1758--1789",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yangyang Xu and Wotao Yin. 2013. A Block Coordinate Descent Method for Regularized Multiconvex Optimization with Applications to Nonnegative Tensor Factorization and Com- pletion. SIAM Journal on imaging sciences, 6(3):1758-1789.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "Embedding Entities and Relations for Learning and Inference in Knowledge Bases",
"authors": [
{
"first": "Bishan",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Wen-Tau",
"middle": [],
"last": "Yih",
"suffix": ""
},
{
"first": "Xiaodong",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Jianfeng",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Bishan Yang, Wen-tau Yih, Xiaodong He, Jianfeng Gao, and Li Deng. 2015. Embedding Entities and Relations for Learning and Infer- ence in Knowledge Bases. In Proceedings of the International Conference on Learning Rep- resentations.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Crowdsourcing Inference-Rule Evaluation",
"authors": [
{
"first": "Naomi",
"middle": [],
"last": "Zeichner",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "156--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Naomi Zeichner, Jonathan Berant, and Ido Dagan. 2012. Crowdsourcing Inference-Rule Evalu- ation. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 156-160.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Harvesting Parallel News Streams to Generate Paraphrases of Event Relations",
"authors": [
{
"first": "Congle",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [
"S"
],
"last": "Weld",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1776--1786",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Congle Zhang and Daniel S. Weld. 2013. Harvest- ing Parallel News Streams to Generate Para- phrases of Event Relations. In Proceedings of the Conference on Empirical Methods in Natu- ral Language Processing, pages 1776-1786.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Transactions of the Association for Computational Linguistics, vol. 6, pp. 703-718, 2018. Action Editor: Katrin Erk. Submission batch: 4/2018; Revision batch: 8/2018; Published 12/2018. c 2018 Association for Computational Linguistics. Distributed under a CC-BY 4.0 license. Examples of typed entailment graphs for arguments of types company,company and person, location."
},
"FIGREF1": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Learning entailments that are consistent (A)"
},
"FIGREF2": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Berant's Entailment Data Set Berant et al. (2011) annotated all the edges of 10 typed entail-9 www.github.com/xavi-ai/relationalimplication-dataset."
},
"FIGREF3": {
"num": null,
"type_str": "figure",
"uris": null,
"text": "Comparison of globally consistent entailment graphs to the baselines on Levy/Holt's (A) and Berant's (B) data sets. The results are compared to graphs learned by Forest Reducible Graph Assumption on Levy/Holts's (C) and Berant's (D) data sets."
},
"TABREF1": {
"text": "",
"content": "<table><tr><td>: Area under the precision-recall curve (for</td></tr><tr><td>precision &gt; 0.5) for different variants of similarity</td></tr><tr><td>measures: local, untyped, AVG, crossGraph (CG) and</td></tr><tr><td>crossGraph + pResolution (CG_PR). We report results</td></tr><tr><td>on two data sets. Bold indicates statistical significance</td></tr><tr><td>(see text).</td></tr></table>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Examples of different error categories and relative frequencies. The cause of errors is boldfaced.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF4": {
"text": "The board hailed Romney for his solid credentials. Who praised Mitt Romney's credentials? Researchers announced this week that they've found a new gene, ALS6, which is responsible for . . . Which gene did the ALS association discover ? One out of every 17 children under 3 years old in America has a food allergy, and some will outgrow their sensitivities. How many Americans suffer from food allergies? The reported compromise could itself run afoul of European labor law, opening the way for foreign workers . . . What law might the deal break? . . . Barnes & Noble CEO William Lynch said as he unveiled his company's Nook Tablet on Monday. Who launched the Nook Tablet?",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF5": {
"text": "Examples where explicit entailment relations improve the rankings. The related words are boldfaced.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
},
"TABREF7": {
"text": "Results (in percentage) for answer selection on the NewsQA data set.",
"content": "<table/>",
"html": null,
"num": null,
"type_str": "table"
}
}
}
}