{
"paper_id": "K15-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T07:08:57.290195Z"
},
"title": "Learning to Exploit Structured Resources for Lexical Inference",
"authors": [
{
"first": "Vered",
"middle": [],
"last": "Shwartz",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {}
},
"email": ""
},
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {}
},
"email": "omerlevy@cs.biu.ac.il"
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {}
},
"email": "dagan@cs.biu.ac.il"
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Bar-Ilan University",
"location": {}
},
"email": "jacob.goldberger@biu.ac.il"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of propernames and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names. 1",
"pdf_parse": {
"paper_id": "K15-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "Massive knowledge resources, such as Wikidata, can provide valuable information for lexical inference, especially for proper-names. Prior resource-based approaches typically select the subset of each resource's relations which are relevant for a particular given task. The selection process is done manually, limiting these approaches to smaller resources such as WordNet, which lacks coverage of propernames and recent terminology. This paper presents a supervised framework for automatically selecting an optimized subset of resource relations for a given target inference task. Our approach enables the use of large-scale knowledge resources, thus providing a rich source of high-precision inferences over proper-names. 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Recognizing lexical inference is an important component in semantic tasks. Various lexicalsemantic relations, such as synonomy, classmembership, part-of, and causality may be used to infer the meaning of one word from another, in order to address lexical variability. For instance, a question answering system asked \"which artist's net worth is $450 million?\" might retrieve the candidates Beyonc\u00e9 Knowles and Lloyd Blankf ein, who are both worth $450 million. To correctly answer the question, the application needs to know that Beyonc\u00e9 is an artist, and that Lloyd Blankfein is not.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Corpus-based methods are often employed to recognize lexical inferences, based on either cooccurrence patterns (Hearst, 1992; Turney, 2006) or distributional representations (Weeds and Weir, 2003; Kotlerman et al., 2010) . While earlier methods were mostly unsupervised, recent trends introduced supervised methods for the task (Baroni et al., 2012; Turney and Mohammad, 2015; Roller et al., 2014) . In these settings, a targeted lexical inference relation is implicitly defined by a training set of term-pairs, which are annotated as positive or negative examples of this relation. Several such datasets have been created, each representing a somewhat different flavor of lexical inference.",
"cite_spans": [
{
"start": 111,
"end": 125,
"text": "(Hearst, 1992;",
"ref_id": "BIBREF9"
},
{
"start": 126,
"end": 139,
"text": "Turney, 2006)",
"ref_id": "BIBREF24"
},
{
"start": 174,
"end": 196,
"text": "(Weeds and Weir, 2003;",
"ref_id": "BIBREF27"
},
{
"start": 197,
"end": 220,
"text": "Kotlerman et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 328,
"end": 349,
"text": "(Baroni et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 350,
"end": 376,
"text": "Turney and Mohammad, 2015;",
"ref_id": "BIBREF23"
},
{
"start": 377,
"end": 397,
"text": "Roller et al., 2014)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "While corpus-based methods usually enjoy high recall, their precision is often limited, hindering their applicability. An alternative common practice is to mine high-precision lexical inferences from structured resources, particularly WordNet (Fellbaum, 1998) . Nevertheless, Word-Net is an ontology of the English language, which, by definition, does not cover many propernames (Beyonc\u00e9 \u2192 artist) and recent terminology (F acebook \u2192 social network). A potential solution may lie in rich and up-to-date structured knowledge resources such as Wikidata (Vrande\u010di\u0107, 2012) , DBPedia (Auer et al., 2007) , and Yago (Suchanek et al., 2007) . In this paper, we investigate how these resources can be exploited for lexical inference over proper-names.",
"cite_spans": [
{
"start": 243,
"end": 259,
"text": "(Fellbaum, 1998)",
"ref_id": null
},
{
"start": 551,
"end": 568,
"text": "(Vrande\u010di\u0107, 2012)",
"ref_id": "BIBREF26"
},
{
"start": 579,
"end": 598,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 610,
"end": 633,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We begin by examining whether the common usage of WordNet for lexical inference can be extended to larger resources. Typically, a subset of WordNet relations is manually selected (e.g. all synonyms and hypernyms). By nature, each application captures a different aspect of lexical inference, and thus defines different relations as indicative of its particular flavor of lexical infer-",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "David Petraeus \u2192 Director of CIA performer Sheldon Cooper \u2192 Jim Parsons operating system iPhone \u2192 iOS ence. For instance, the hypernym relation is indicative of the is a flavor of lexical inference (e.g. musician \u2192 artist), but does not indicate causality.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example position held",
"sec_num": null
},
{
"text": "Since WordNet has a relatively simple schema, manually finding such an optimal subset is feasible. However, structured knowledge resources' schemas contain thousands of relations, dozens of which may be indicative. Many of these are not trivial to identify by hand, as shown in Table 1 . A manual effort to construct a distinct subset for each task is thus quite challenging, and an automated method is required.",
"cite_spans": [],
"ref_spans": [
{
"start": 278,
"end": 285,
"text": "Table 1",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Example position held",
"sec_num": null
},
{
"text": "We present a principled supervised framework, which automates the selection process of resource relations, and optimizes this subset for a given target inference relation. This automation allows us to leverage large-scale resources, and extract many high-precision inferences over propernames, which are absent from WordNet. Finally, we show that our framework complements stateof-the-art corpus-based methods. Combining the two approaches can particularly benefit real-world tasks in which proper-names are prominent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Example position held",
"sec_num": null
},
{
"text": "WordNet (Fellbaum, 1998) is widely used for identifying lexical inference. It is usually used in an unsupervised setting where the relations relevant for each specific inference task are manually selected a priori.",
"cite_spans": [
{
"start": 8,
"end": 24,
"text": "(Fellbaum, 1998)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Common Use of WordNet for Inference",
"sec_num": "2.1"
},
{
"text": "One approach looks for chains of these predefined relations (Harabagiu and Moldovan, 1998) , e.g. dog \u2192 mammal using a chain of hypernyms: dog \u2192 canine \u2192 carnivore \u2192 placental mammal \u2192 mammal. Another approach is via WordNet Similarity (Pedersen et al., 2004) , which takes two synsets and returns a numeric value that represents their similarity based on WordNet's hierarchical hypernymy structure.",
"cite_spans": [
{
"start": 60,
"end": 90,
"text": "(Harabagiu and Moldovan, 1998)",
"ref_id": "BIBREF8"
},
{
"start": 236,
"end": 259,
"text": "(Pedersen et al., 2004)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Common Use of WordNet for Inference",
"sec_num": "2.1"
},
{
"text": "While there is a broad consensus that synonyms entail each other (elevator \u2194 lif t) and hyponyms entail their hypernyms (cat \u2192 animal), other relations, such as meronymy, are not agreed Resource #Entities #Properties Version DBPedia 4,500,000 1,367 July 2014 Wikidata 6,000,000 1,200 July 2014 Yago 10,000,000 70 December 2014 WordNet 150,000 13 3.0 upon, and may vary depending on task and context (e.g. living in London \u2192 living in England, but leaving London \u2192 leaving England).",
"cite_spans": [],
"ref_spans": [
{
"start": 217,
"end": 347,
"text": "Version DBPedia 4,500,000 1,367 July 2014 Wikidata 6,000,000 1,200 July 2014 Yago 10,000,000 70 December 2014 WordNet",
"ref_id": "TABREF0"
}
],
"eq_spans": [],
"section": "Common Use of WordNet for Inference",
"sec_num": "2.1"
},
{
"text": "Overall, there is no principled way to select the subset of relevant relations, and a suitable subset is usually tailored to each dataset and task. This work addresses this issue by automatically learning the subset of relations relevant to the task.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Common Use of WordNet for Inference",
"sec_num": "2.1"
},
{
"text": "While WordNet is quite extensive, it is handcrafted by expert lexicographers, and thus cannot compete in terms of scale with community-built knowledge bases such as Wikidata (Vrande\u010di\u0107, 2012) , which connect millions of entities through a rich variety of structured relations (properties). Using these resources for various NLP tasks has become exceedingly popular (Wu and Weld, 2010; Rahman and Ng, 2011; Unger et al., 2012; Berant et al., 2013) . Little attention, however, was given to leveraging them for identifying lexical inference; the exception being Shnarch et al. (2009) , who used structured data from Wikipedia for this purpose.",
"cite_spans": [
{
"start": 174,
"end": 191,
"text": "(Vrande\u010di\u0107, 2012)",
"ref_id": "BIBREF26"
},
{
"start": 365,
"end": 384,
"text": "(Wu and Weld, 2010;",
"ref_id": "BIBREF29"
},
{
"start": 385,
"end": 405,
"text": "Rahman and Ng, 2011;",
"ref_id": "BIBREF16"
},
{
"start": 406,
"end": 425,
"text": "Unger et al., 2012;",
"ref_id": "BIBREF25"
},
{
"start": 426,
"end": 446,
"text": "Berant et al., 2013)",
"ref_id": "BIBREF4"
},
{
"start": 560,
"end": 581,
"text": "Shnarch et al. (2009)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Structured Knowledge Resources",
"sec_num": "2.2"
},
{
"text": "In this paper, we experimented with such resources, in addition to WordNet. DBPedia (Auer et al., 2007) contains structured information from Wikipedia: info boxes, redirections, disambiguation links, etc. Wikidata (Vrande\u010di\u0107, 2012) contains facts edited by humans to support Wikipedia and other Wikimedia projects. Yago (Suchanek et al., 2007) is a semantic knowledge base derived from Wikipedia, WordNet, and GeoNames. 2 Table 2 compares the scale of the resources we used. The massive scale of the more recent resources and their rich schemas can potentially increase the coverage of current WordNet-based approaches, yet make it difficult to manually select an optimized subset of relations for a task. Our method automatically learns such a subset, and provides lexical inferences on entities that are absent from WordNet, particularly proper-names. Figure 1 : An excerpt of a resource graph (Wikidata) connecting \"Beyonc\u00e9\" to \"artist\". Resource graphs contain two types of nodes: terms (ellipses) and concepts (rectangles).",
"cite_spans": [
{
"start": 84,
"end": 103,
"text": "(Auer et al., 2007)",
"ref_id": "BIBREF1"
},
{
"start": 320,
"end": 343,
"text": "(Suchanek et al., 2007)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [
{
"start": 422,
"end": 429,
"text": "Table 2",
"ref_id": "TABREF1"
},
{
"start": 854,
"end": 862,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Structured Knowledge Resources",
"sec_num": "2.2"
},
{
"text": "We wish to leverage the information in structured resources to identify whether a certain lexicalinference relation R holds between a pair of terms. Formally, we wish to classify whether a term-pair (x, y) satisfies the relation R. R is implicitly defined by a training set of (x, y) pairs, annotated as positive or negative examples. We are also given a set of structured resources, which we will utilize to classify (x, y). Each resource can be naturally viewed as a directed graph G (Figure 1 ). There are two types of nodes in G: term (lemma) nodes and concept (synset) nodes. The edges in G are each labeled with a property (edge type), defining a wide range of semantic relations between concepts (e.g. occupation, subclass of). In addition, terms are mapped to the concepts they represent via termconcept edge types.",
"cite_spans": [],
"ref_spans": [
{
"start": 486,
"end": 495,
"text": "(Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Task Definition and Representation",
"sec_num": "3"
},
{
"text": "When using multiple resources, G is a disconnected graph composed of a subgraph per resource, without edges connecting nodes from different resources. One may consider connecting multiple resource graphs at the term nodes. However, this may cause sense-shifts, i.e. connect two distinct concepts (in different resources) through the same term. For example, the concept January 1 st in Wikidata is connected to the concept f ruit in WordNet through the polysemous term date. The alternative, aligning resources in the concept space, is not trivial. Some partial mappings exist (e.g. Yago-WordNet), which can be explored in future work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Task Definition and Representation",
"sec_num": "3"
},
{
"text": "We present an algorithmic framework for learning whether a term-pair (x, y) satisfies a relation R, given an annotated set of term-pairs and a resource graph G. We first represent (x, y) as the set of paths connecting x and y in G ( \u00a74.1). We then classify each such path as indicative or not of R, and decide accordingly whether xRy ( \u00a74.2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Algorithmic Framework",
"sec_num": "4"
},
{
"text": "We represent each (x, y) pair as the set of paths that link x and y within each resource. We retain only the shortest paths (all paths x ; y of minimal length) as they yielded better performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Term-Pairs as Path-Sets",
"sec_num": "4.1"
},
{
"text": "Resource graphs are densely connected, and thus have a huge branching factor b. We thus limited the maximum path length to = 8 and employed bidirectional search (Russell and Norvig, 2009, Ch. 3) to find the shortest paths. This algorithm runs two simultaneous instances of breadthfirst search (BFS), one from x and another from y, halting when they meet in the middle. It is much more efficient, having a complexity of O(",
"cite_spans": [
{
"start": 161,
"end": 191,
"text": "(Russell and Norvig, 2009, Ch.",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Term-Pairs as Path-Sets",
"sec_num": "4.1"
},
{
"text": "b /2 ) = O(b 4 ) instead of BFS's O(b ) = O(b 8 ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Term-Pairs as Path-Sets",
"sec_num": "4.1"
},
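The bidirectional search described above can be sketched as follows. This is a minimal illustration over a toy adjacency-list graph, not the authors' implementation; the node names and the per-side depth cutoff are our own illustrative choices.

```python
from collections import deque

def bidirectional_shortest_path_length(graph, x, y, max_len=8):
    """Bidirectional BFS: grow one frontier from x and one from y,
    stopping as soon as they meet, so each side explores O(b^(l/2))
    nodes rather than the O(b^l) of a one-sided BFS."""
    if x == y:
        return 0
    dist_x, dist_y = {x: 0}, {y: 0}
    frontier_x, frontier_y = deque([x]), deque([y])
    while frontier_x and frontier_y:
        # expand the smaller frontier first (cheaper on average)
        if len(frontier_x) <= len(frontier_y):
            frontier, dist, other = frontier_x, dist_x, dist_y
        else:
            frontier, dist, other = frontier_y, dist_y, dist_x
        next_frontier = deque()
        for node in frontier:
            for neighbor in graph.get(node, ()):
                if neighbor in other:  # the two searches met in the middle
                    return dist[node] + 1 + other[neighbor]
                if neighbor not in dist:
                    dist[neighbor] = dist[node] + 1
                    if 2 * dist[neighbor] <= max_len:  # half the budget per side
                        next_frontier.append(neighbor)
        if frontier is frontier_x:
            frontier_x = next_frontier
        else:
            frontier_y = next_frontier
    return None  # no path within max_len
```

On a tiny graph such as {"Beyonce": ["musician"], "musician": ["Beyonce", "artist"], "artist": ["musician"]}, the two frontiers meet after one expansion each, returning a path length of 2 for (Beyonce, artist).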
{
"text": "To further reduce complexity, we split the search to two phases: we first find all nodes along the shortest paths between x and y, and then reconstruct the actual paths. Searching for relevant nodes ignores edge types, inducing a simpler resource graph, which can be represented as a sparse adjacency matrix and manipulated efficiently with matrix operations (elaborated in appendix A). Once the search space is limited to relevant nodes alone, path-finding becomes trivial.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Representing Term-Pairs as Path-Sets",
"sec_num": "4.1"
},
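The two-phase idea, finding all nodes on the shortest paths before reconstructing the paths themselves, can be sketched with matrix operations: a node v lies on some shortest x-y path exactly when d(x, v) + d(v, y) = d(x, y), and each BFS level is one matrix-vector product over the untyped adjacency matrix. This is our own simplified dense-matrix version, not the exact method of the appendix (which uses sparse matrices):

```python
import numpy as np

def nodes_on_shortest_paths(adj, x, y, max_len=8):
    """Phase 1 of the search: find every node lying on some shortest
    x-y path, ignoring edge types. adj is a symmetric 0/1 adjacency
    matrix; a node v qualifies exactly when d(x, v) + d(v, y) == d(x, y)."""
    n = adj.shape[0]

    def distances(source):
        dist = np.full(n, -1)                 # -1 marks "not reached"
        reached = np.zeros(n, dtype=bool)
        frontier = np.zeros(n, dtype=bool)
        frontier[source] = True
        d = 0
        while frontier.any() and d <= max_len:
            dist[frontier] = d
            reached |= frontier
            # one BFS level = one matrix-vector product
            frontier = ((adj @ frontier) > 0) & ~reached
            d += 1
        return dist

    dx, dy = distances(x), distances(y)
    if dx[y] < 0:
        return set()  # x and y are not connected within max_len
    on_path = (dx >= 0) & (dy >= 0) & (dx + dy == dx[y])
    return set(np.nonzero(on_path)[0])
```

Once this node set is known, enumerating the actual typed paths is restricted to a tiny subgraph, which is why the paper calls the second phase trivial.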
{
"text": "We consider edge types that typically connect between concepts in R to be \"indicative\"; for example, the occupation edge type is indicative of the is a relation, as in \"Beyonc\u00e9 is a musician\". Our framework's goal is to learn which edge types are indicative of a given relation R, and use that information to classify new (x, y) term-pairs. Figure 2 presents the dependencies between edge types, paths, and term-pairs. As discussed in the previous section, we represent each term-pair as a set of paths. In turn, we represent each path as a \"bag of edges\", a vector with an entry for each edge type. 3 To model the edges' \"indicativeness\", we assign a parameter to each edge type, and learn these parameters from the term-pair level supervision provided by the training data.",
"cite_spans": [
{
"start": 600,
"end": 601,
"text": "3",
"ref_id": null
}
],
"ref_spans": [
{
"start": 341,
"end": 349,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Classification Framework",
"sec_num": "4.2"
},
{
"text": "In this work, we are not only interested in optimizing accuracy or F 1 , but in exploring the entire recall-precision trade-off. Therefore, we optimize the F \u03b2 objective, where \u03b2 2 balances the recallprecision trade-off. 4 In particular, we expect structured resources to facilitate high-precision inferences, and are thus more interested in lower values of \u03b2 2 , which emphasize precision over recall.",
"cite_spans": [
{
"start": 221,
"end": 222,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Classification Framework",
"sec_num": "4.2"
},
{
"text": "A typical neural network approach is to assign a weight w i to each edge type e i , where more indicative edge types should have higher values of w i . The indicativeness of a path (p) is modeled using logistic regression:p \u03c3( w \u2022 \u03c6), where \u03c6 is the path's \"bag of edges\" representation, i.e. a feature vector of each edge type's frequency in the path.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Edge Model",
"sec_num": "4.2.1"
},
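The weighted edge model's scoring can be sketched directly from this definition; the edge-type names and weight values below are invented for illustration, and the pooling variants correspond to the two trained alternatives described next:

```python
import math

def path_score(path_edges, weights):
    """Weighted edge model: a path is a 'bag of edges' phi, and its
    indicativeness is sigma(w . phi); summing a weight once per edge
    occurrence equals the dot product with the count vector phi."""
    dot = sum(weights.get(edge, 0.0) for edge in path_edges)
    return 1.0 / (1.0 + math.exp(-dot))

def pair_score(paths, weights, pooling="max"):
    """Term-pair score: either the score of the most indicative path
    (max-pooling) or the sum of all path scores."""
    scores = [path_score(p, weights) for p in paths]
    if not scores:
        return 0.0
    return max(scores) if pooling == "max" else sum(scores)
```

A path using only strongly weighted edges scores close to 1, while a path through a negatively weighted edge (e.g. an antonym-like relation) scores below 0.5.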
{
"text": "The probability of a term-pair being positive can be determined using either the sum of all path scores or the score of its most indicative path (max-pooling). We trained both variants with back-propagation (Rumelhart et al., 1986) and gradient ascent. In particular, we optimized F \u03b2 using a variant of Jansche's (2005) derivation of F \u03b2 -optimized logistic regression (see suplementary material 5 for full derivation).",
"cite_spans": [
{
"start": 207,
"end": 231,
"text": "(Rumelhart et al., 1986)",
"ref_id": "BIBREF19"
},
{
"start": 304,
"end": 320,
"text": "Jansche's (2005)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Edge Model",
"sec_num": "4.2.1"
},
{
"text": "This model can theoretically quantify how indicative each edge type is of R. Specifically, it can differentiate weakly indicative edges (e.g. meronyms) from those that contradict R (e.g. antonyms). However, on our datasets, this model yielded sub-optimal results (see \u00a76.1), and therefore serves as a baseline to the binary model presented in the following section.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Weighted Edge Model",
"sec_num": "4.2.1"
},
{
"text": "Preliminary experiments suggested that in most datasets, each edge type is either indicative or non-indicative of the target relation R. We therefore developed a binary model, which defines a",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
{
"text": "4 F \u03b2 = (1+\u03b2 2 )\u2022precision\u2022recall \u03b2 2 \u2022precision+recall",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
{
"text": "5 http://u.cs.biu.ac.il/%7enlp/wp-content/uploads/LinKeR-sup.pdf global set of edge types that are indicative of R: a whitelist.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
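The F_beta formula from footnote 4 is a one-line helper (a trivial sketch; the parameter here is beta^2 itself, matching how the paper sweeps beta^2 values):

```python
def f_beta(precision, recall, beta_sq):
    """F_beta as in footnote 4, parameterized directly by beta^2;
    beta_sq < 1 emphasizes precision, beta_sq > 1 emphasizes recall."""
    if precision == 0.0 and recall == 0.0:
        return 0.0
    return (1.0 + beta_sq) * precision * recall / (beta_sq * precision + recall)
```

With beta_sq = 1 this reduces to the usual F_1 harmonic mean; with beta_sq below 1, a high-precision/low-recall operating point outscores its mirror image, which is the regime the paper targets.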
{
"text": "Classification We represent each path p as a binary \"bag of edges\" \u03c6, i.e. the set of edge types that were applied in p. Given a term-pair (x, y) represented as a path-set paths(x, y), and a whitelist w, the model classifies (x, y) as positive if:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u2203\u03c6 \u2208 paths(x, y) : \u03c6 \u2286 w",
"eq_num": "(1)"
}
],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
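Equation (1) translates almost literally into code; the edge-type names used below are illustrative, not drawn from any particular resource:

```python
def classify(paths, whitelist):
    """Binary edge model, Eq. (1): a term-pair (x, y) is positive iff
    there exists a path whose set of edge types phi is contained in
    the whitelist w."""
    return any(set(path) <= whitelist for path in paths)
```

Note that a path may repeat a whitelisted edge type any number of times (e.g. a chain of subclass-of edges) and still be indicative, since only the *set* of edge types is tested against w.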
{
"text": "In other words:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
{
"text": "1. A path is classified as indicative if all its edge types are whitelisted. The first design choice essentially assumes that R is a transitive relation. This is usually the case in most inference relations (e.g. hypernymy, causality). In addition, notice that the second modeling assumption is unidirectional; in some cases xRy, yet an indicative path between them does not exist. This can happen, for example, if the relation between them is not covered by the resource, e.g. causality in WordNet.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
{
"text": "Training Learning the optimal whitelist over a training set can be cast as a subset selection problem: given a set of possible edge types E = {e 1 , ..., e n } and a utility function u : 2 E \u2192 R, find the subset (whitelist) w \u2286 E that maximizes the utility, i.e. w * = arg max w u(w). In our case, the utility u is the F \u03b2 score over the training set. Structured knowledge resources contain hundreds of different edge types, making E very large, and an exhaustive search over its powerset infeasible. The standard approach to this class of subset selection problems is to apply local search algorithms, which find an approximation of the optimal subset. We tried several local search algorithms, and found that genetic search (Russell and Norvig, 2009 , Ch.4) performed well. In general, genetic search is claimed to be a preferred strategy for subset selection (Yang and Honavar, 1998) .",
"cite_spans": [
{
"start": 726,
"end": 751,
"text": "(Russell and Norvig, 2009",
"ref_id": "BIBREF20"
},
{
"start": 862,
"end": 886,
"text": "(Yang and Honavar, 1998)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
{
"text": "In our application of genetic search, each individual (candidate solution) is a whitelist, represented by a bit vector with a bit for each edge type. We defined the fitness function of a whitelist w according to the F \u03b2 score of w over the training set. We also applied L 2 regularization to reduce the fitness of large whitelists. The binary edge model works well in practice, successfully replicating the common practice of manually selected relations from WordNet (see \u00a76.1). In addition, the model outputs a humaninterpretable set of indicative edges. Although the weighted model's hypothesis space subsumes the binary model's, the binary model performed better on our datasets. We conjecture that this stems from the limited amount of training instances, which prevents a more general model from converging into an optimal solution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Binary Edge Model",
"sec_num": "4.2.2"
},
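A rough sketch of the genetic search over whitelists follows. The population size, mutation rate, single-point crossover, and truncation selection are our own illustrative choices, not the paper's settings; the fitness callback stands in for the F_beta-minus-regularization objective described above:

```python
import random

def genetic_whitelist_search(edge_types, fitness, pop_size=20,
                             generations=30, mutation_rate=0.05, seed=0):
    """Genetic search over whitelists: each individual is a bit vector
    with one bit per edge type; fitness would be, e.g., F_beta on the
    training set minus a size penalty. Returns the best whitelist found."""
    rng = random.Random(seed)
    n = len(edge_types)
    population = [[rng.random() < 0.5 for _ in range(n)] for _ in range(pop_size)]

    def crossover(a, b):
        cut = rng.randrange(1, n)  # single-point crossover
        return a[:cut] + b[cut:]

    def mutate(bits):
        return [(not b) if rng.random() < mutation_rate else b for b in bits]

    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]  # truncation selection (elitist)
        children = [mutate(crossover(rng.choice(parents), rng.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    best = max(population, key=fitness)
    return {e for e, bit in zip(edge_types, best) if bit}
```

Because the top half of each generation is carried over unchanged, the best whitelist found so far is never lost, which keeps the search stable on small edge-type inventories.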
{
"text": "We used 3 existing common-noun datasets and one new proper-name dataset. Each dataset consists of annotated (x, y) term-pairs, where both x and y are noun phrases. Since each dataset was created in a slightly different manner, the underlying semantic relation R varies as well.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Datasets",
"sec_num": "5"
},
{
"text": "kotlerman2010 (Kotlerman et al., 2010 ) is a manually annotated lexical entailment dataset of distributionally similar nouns. turney2014 (Turney and Mohammad, 2015) is based on a crowdsourced dataset of semantic relations, from which we removed non-nouns and lemmatized plurals. levy2014 (Levy et al., 2014) was generated from manually annotated entailment graphs of subject-verb-object tuples. Table 3 provides metadata on each dataset.",
"cite_spans": [
{
"start": 14,
"end": 37,
"text": "(Kotlerman et al., 2010",
"ref_id": "BIBREF11"
},
{
"start": 288,
"end": 307,
"text": "(Levy et al., 2014)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 395,
"end": 402,
"text": "Table 3",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Existing Datasets",
"sec_num": "5.1"
},
{
"text": "Two additional datasets were created using WordNet (Baroni and Lenci, 2011; Baroni et al., 2012) , whose definition of R can be trivially captured by a resource-based approach using Word-Net. Hence, they are omitted from our evaluation.",
"cite_spans": [
{
"start": 51,
"end": 75,
"text": "(Baroni and Lenci, 2011;",
"ref_id": "BIBREF2"
},
{
"start": 76,
"end": 96,
"text": "Baroni et al., 2012)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Existing Datasets",
"sec_num": "5.1"
},
{
"text": "An important linguistic component that is missing from these lexical-inference datasets is propernames. We conjecture that much of the added value in utilizing structured resources is the ability to cover terms such as celebrities (Lady Gaga) and recent terminology (social networks) that do not appear in WordNet. We thus created a new dataset of (x, y) pairs in which x is a proper-name, y is a common noun, and R is the is a relation. For instance, (Lady Gaga, singer) is true, but (Lady Gaga, f ilm) is false.",
"cite_spans": [
{
"start": 231,
"end": 242,
"text": "(Lady Gaga)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "A New Proper-Name Dataset",
"sec_num": "5.2"
},
{
"text": "To construct the dataset, we sampled 70 articles in 9 different topics from a corpus of recent events (online magazines). As candidate (x, y) pairs, we extracted 24,000 pairs of noun phrases x and y that belonged to the same paragraph in the original text, selecting those in which x is a propername. These pairs were manually annotated by graduate students, who were instructed to use their world knowledge and the original text for disambiguation (e.g. England \u2192 team in the context of football). The agreement on a subset of 4,500 pairs was \u03ba = 0.954.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Proper-Name Dataset",
"sec_num": "5.2"
},
{
"text": "After annotation, we had roughly 800 positive and 23,000 negative pairs. To balance the dataset, we sampled negative examples according to the frequency of y in positive pairs, creating \"harder\" negative examples, such as (Sherlock, lady) and (Kylie M inogue, vice president).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "A New Proper-Name Dataset",
"sec_num": "5.2"
},
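The frequency-weighted negative sampling could be sketched as follows; the +0.1 smoothing term for unseen y values and the example pairs are our own assumptions, not details from the paper:

```python
import random
from collections import Counter

def sample_hard_negatives(negatives, positives, k, seed=0):
    """Sample negative (x, y) pairs with probability proportional to how
    often y appears among the positive pairs, yielding 'harder' negatives
    such as (Sherlock, lady); unseen y values get a small smoothing weight."""
    rng = random.Random(seed)
    y_freq = Counter(y for _, y in positives)
    weights = [y_freq[y] + 0.1 for _, y in negatives]  # +0.1 smoothing: our choice
    return rng.choices(negatives, weights=weights, k=k)
```

Negatives whose y is frequent among the positives (e.g. singer) are drawn far more often than negatives with a y that never appears positively.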
{
"text": "We first validate our framework by checking whether it can automatically replicate the common manual usage of WordNet. We then evaluate it on the proper-name dataset using additional resources. Finally, we compare our method to stateof-the-art distributional methods.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "Experimental Setup While F 1 is a standard measure of performance, it captures only one point on the recall-precision curve. Instead, we present the entire curve, while expecting the contribution of structured resources to be in the high-precision region. To create these curves, we optimized our method and the baselines using F \u03b2 with 40 values of \u03b2 2 \u2208 (0, 2).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We randomly split each dataset into 70% train, 25% test and 5% validation. 7 We applied L 2 regularization to our method and the baselines, tuning the regularization parameter on the validation set.",
"cite_spans": [
{
"start": 75,
"end": 76,
"text": "7",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "6"
},
{
"text": "We examine whether our algorithm can replicate the common use of WordNet ( \u00a72.1), by manually constructing 4 whitelists based on the literature Figure 3 : Recall-precision curve of each dataset with Word-Net as the only resource. Each point in the graph stands for the performance on a certain value of \u03b2. Notice that in some of the graphs, different \u03b2 values yield the same performance, causing less points to be displayed.",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Performance on WordNet",
"sec_num": "6.1"
},
{
"text": "(see Table 4 ), and evaluating their performance using the classification methods in \u00a74.2. In addition, we compare our method to Resnik's (1995) Word-Net similarity, which scores each pair of terms based on their lowest common hypernym. This score was used as a single feature in F \u03b2 -optimized logistic regression to create a classifier. Figure 3 compares our algorithm to Word-Net's baselines, showing that our binary model always replicates the best-performing manuallyconstructed whitelists, for certain values of \u03b2 2 . Synonyms and hypernyms are often selected, and additional edges are added to match the semantic flavor of each particular dataset. In turney2014, for example, where meronyms are common, our binary model learns that they are indicative by including meronymy in its whitelist. In levy2014, however, where meronyms are less indicative, the model does not select them.",
"cite_spans": [
{
"start": 129,
"end": 144,
"text": "Resnik's (1995)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [
{
"start": 5,
"end": 12,
"text": "Table 4",
"ref_id": "TABREF5"
}
],
"eq_spans": [],
"section": "Performance on WordNet",
"sec_num": "6.1"
},
{
"text": "We also observe that, in most cases, our algorithm outperforms Resnik's similarity. In addition, the weighted model does not perform as well as the binary model, as discussed in \u00a74.2. We therefore focus our presentation on the binary model.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Performance on WordNet",
"sec_num": "6.1"
},
{
"text": "We evaluated our model on the new proper-name dataset proper2015 described in \u00a75.2. This time, we incorporated all the resources described in \u00a72.2 (including WordNet) into our framework, and compared the performance to that of using WordNet alone. Indeed, our algorithm is able to exploit the information in the additional resources and greatly increase performance, particularly recall, on this dataset (Figure 4 ). 8 Figure 5 : Recall-precision curve of each dataset using: (1) Supervised word2vec (2) Our binary model. The binary model yields 97% precision at 29% recall, at the top of the \"precision cliff\". The whitelist learnt at this point contains 44 edge types, mainly from Wikidata and Yago. Even though the is a relation implicitly defined in proper2015 is described using many different edge types, our binary model still manages to learn which of the over 2,500 edge types are indicative. Table 5 shows some of the learnt edge types (see the supplementary material for the complete list).",
"cite_spans": [],
"ref_spans": [
{
"start": 404,
"end": 413,
"text": "(Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 419,
"end": 427,
"text": "Figure 5",
"ref_id": null
},
{
"start": 902,
"end": 909,
"text": "Table 5",
"ref_id": "TABREF7"
}
],
"eq_spans": [],
"section": "Lexical Inference over Proper-Names",
"sec_num": "6.2"
},
{
"text": "The performance boost in proper2015 demonstrates that community-built resources have much added value when considering proper-names. As expected, many proper-names do not appear in WordNet (Doctor Who). That said, even when both terms appear in WordNet, they often lack important properties covered by other resources (Louisa May Alcott is a woman).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Inference over Proper-Names",
"sec_num": "6.2"
},
{
"text": "Lexical inference has been thoroughly explored in distributional semantics, with recent supervised methods (Baroni et al., 2012; Turney and Mohammad, 2015). While these methods leverage huge corpora to increase coverage, they often introduce noise that affects their precision. Structured resources, on the other hand, are precision-oriented. We therefore expect our approach to complement distributional methods in high-precision scenarios.",
"cite_spans": [
{
"start": 107,
"end": 128,
"text": "(Baroni et al., 2012;",
"ref_id": "BIBREF3"
},
{
"start": 129,
"end": 155,
"text": "Turney and Mohammad, 2015)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Corpus-based Methods",
"sec_num": "6.3"
},
{
"text": "To represent term-pairs with distributional features, we downloaded the pre-trained word2vec embeddings. 9 These vectors were trained over a huge corpus (100 billion words) using a state-of-the-art embedding algorithm (Mikolov et al., 2013) . Since each vector represents a single term (either x or y), we used three state-of-the-art methods to construct a feature vector for each term-pair: concatenation x \u2295 y (Baroni et al., 2012) , difference y \u2212 x (Roller et al., 2014; Fu et al., 2014; Weeds et al., 2014) , and similarity x \u2022 y. We then used F \u03b2 -optimized logistic regression to train a classifier. Figure 5 compares our methods to concatenation, which was the best-performing corpus-based method. 10 In turney2014 and proper2015, the embeddings retain over 80% precision while boasting higher recall than our method's. In turney2014, this is often a result of the more associative relations prominent in the dataset (football \u2192 playbook), which are seldom expressed in structured resources. In proper2015, the difference in recall seems to stem from missing terminology (Twitter \u2192 social network). However, the corpus-based method's precision does not exceed the low 80s, while our binary algorithm yields 93% precision at 27% recall on turney2014 and 97% at 29% on proper2015.",
"cite_spans": [
{
"start": 217,
"end": 239,
"text": "(Mikolov et al., 2013)",
"ref_id": "BIBREF14"
},
{
"start": 407,
"end": 428,
"text": "(Baroni et al., 2012)",
"ref_id": "BIBREF3"
},
{
"start": 448,
"end": 469,
"text": "(Roller et al., 2014;",
"ref_id": "BIBREF18"
},
{
"start": 470,
"end": 486,
"text": "Fu et al., 2014;",
"ref_id": "BIBREF7"
},
{
"start": 487,
"end": 506,
"text": "Weeds et al., 2014)",
"ref_id": "BIBREF28"
},
{
"start": 701,
"end": 703,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 602,
"end": 610,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Comparison to Corpus-based Methods",
"sec_num": "6.3"
},
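The three term-pair representations above can be sketched in a few lines. This is a minimal illustration, not the paper's code: the function name `pair_features` is ours, and the toy 2-dimensional vectors stand in for the pre-trained word2vec embeddings.

```python
import numpy as np


def pair_features(x_vec, y_vec):
    """Build the three term-pair representations used as classifier features."""
    concat = np.concatenate([x_vec, y_vec])  # concatenation x (+) y (Baroni et al., 2012)
    diff = y_vec - x_vec                     # difference y - x (Roller et al., 2014)
    sim = float(np.dot(x_vec, y_vec))        # similarity x . y
    return concat, diff, sim


# Toy vectors standing in for the embeddings of a term-pair (x, y).
x = np.array([1.0, 0.0])
y = np.array([0.5, 0.5])
concat, diff, sim = pair_features(x, y)
```

Each representation is then fed to the F \u03b2 -optimized logistic regression as the term-pair's feature vector.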
{
"text": "In levy2014, there is an overwhelming advantage to our resource-based method over the corpus-based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Corpus-based Methods",
"sec_num": "6.3"
},
{
"text": "This dataset contains healthcare terms and might require a domain-specific corpus to train the embeddings. Having said that, many of its examples are of an ontological nature (drug x treats disease y), which may be more suited to our resource-based approach, regardless of domain.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Comparison to Corpus-based Methods",
"sec_num": "6.3"
},
{
"text": "Since resource-based methods are precision-oriented, we analyzed our binary model by selecting the setting with the highest attainable recall that maintains high precision. This point is often at the top of a \"precision cliff\" in Figures 3 and 4 . These settings are presented in Table 6 .",
"cite_spans": [],
"ref_spans": [
{
"start": 229,
"end": 244,
"text": "Figures 3 and 4",
"ref_id": "FIGREF2"
},
{
"start": 279,
"end": 286,
"text": "Table 6",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "The high-precision settings we chose resulted in few false positives, most of which are caused by annotation errors or resource errors. Naturally, regions of higher recall and lower precision will yield more false positives and fewer false negatives. We thus focus the rest of our discussion on false negatives (Table 7) .",
"cite_spans": [],
"ref_spans": [
{
"start": 310,
"end": 319,
"text": "(Table 7)",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "While structured resources cover most terms, Table 7 : Analysis of false negatives in each dataset. We observed the following errors: (1) one of the terms is out-of-vocabulary; (2) none of the paths is indicative; (3) an indicative path exists, but is discarded by the whitelist; (4) the resource describes an inaccurate relation between the terms; (5) the term-pair was incorrectly annotated as positive.",
"cite_spans": [],
"ref_spans": [
{
"start": 45,
"end": 52,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "the majority of false negatives stem from the lack of indicative paths between them. Many important relations are not explicitly covered by the resources, such as noun-quality (saint \u2192 holiness), which are abundant in turney2014, or causality (germ \u2192 infection), which appears in levy2014. These examples are occasionally captured by other (more specific) relations, and tend to be domain-specific.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "In kotlerman2010, we found that many false negatives are caused by annotation errors in this dataset. Pairs are often annotated as positive based on associative similarity (e.g. transport \u2192 environment, financing \u2192 management), making it difficult to even manually construct a coherent whitelist. This may explain the poor performance of our method and other baselines on this dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Error Analysis",
"sec_num": "7"
},
{
"text": "In this paper, we presented a supervised framework for utilizing structured resources to recognize lexical inference. We demonstrated that our framework replicates the common manual practice over WordNet and can increase the coverage of proper-names by exploiting larger structured resources. Compared to the prior practice of manually identifying useful relations in structured resources, our contribution offers a principled learning approach for automating and optimizing this common need.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "While our method enjoys high precision, its recall is limited by the resources' coverage. In future work, combining our method with high-recall corpus-based methods may have synergistic results. Another direction for increasing recall is to use cross-resource mappings to allow cross-resource paths (connected at the concept-level).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Finally, our method can be extended to become context-sensitive, that is, deciding whether the lexical inference holds in a given context. This may be done by applying a resource-based WSD approach similar to (Brody et al., 2006; Agirre et al., 2014) , detecting the concept node that matches the term's sense in the given context.",
"cite_spans": [
{
"start": 209,
"end": 229,
"text": "(Brody et al., 2006;",
"ref_id": "BIBREF5"
},
{
"start": 230,
"end": 250,
"text": "Agirre et al., 2014)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion and Future Work",
"sec_num": "8"
},
{
"text": "Our code and data are available at: https://github.com/vered1986/LinKeR",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also considered Freebase, but it required significantly larger computational resources to work in our framework, which, at the time of writing, exceeded our capacity. \u00a74.1 discusses complexity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We add special markers to the first and last edges within each path. This allows the algorithm to learn that applying term-to-concept and concept-to-term edge types in the middle of a path causes sense-shifts.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "As a corollary, if the relation does not hold between x and y, then every path between them is non-indicative.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Since our methods do not use lexical features, we did not use lexical splits as in (Levy et al., 2015).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also evaluated our algorithm on the common-nouns datasets with all resources, but adding resources did not significantly improve performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://code.google.com/p/word2vec/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Note that the corpus-based method benefits from lexical memorization (Levy et al., 2015), overfitting to the lexical terms in the training set, while our resource-based method does not. This means that Figure 5 paints a relatively optimistic picture of the embeddings' actual performance.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work was supported by an Intel ICRI-CI grant, the Google Research Award Program and the German Research Foundation via the German-Israeli Project Cooperation (DIP, grant DA 1600/1-1).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
},
{
"text": "We split the search into two phases: we first find all nodes along the shortest paths between x and y, and then reconstruct the actual paths. The first phase ignores edge types, inducing a simpler resource graph, which we represent as a sparse adjacency matrix and manipulate efficiently with matrix operations (Algorithm 1). Once the search space is limited to relevant nodes only, the second phase becomes trivial.\n\nAlgorithm 1 Find Relevant Nodes\n1: function NodesInPath(nx, ny, len)\n2:   if len == 1 then\n3:     return nx \u222a ny\n4:   for 0 < k \u2264 len do\n5:     if k is odd then\n6:       nx = nx \u2022 A\n7:     else\n8:       ny = ny \u2022 A^T\n9:     if nx \u2022 ny > 0 then\n10:      nxy = nx \u2229 ny\n11:      n_forward = NodesInPath(nx, nxy, k/2)\n12:      n_backward = NodesInPath(nxy, ny, k/2)\n13:      return n_forward \u222a n_backward\n14:  return \u2205\n\nThe algorithm finds all nodes on the paths between x and y subject to the maximum length (len). A is the resource adjacency matrix, and nx, ny are one-hot vectors of x and y. At each iteration, we make either a forward step (line 6) or a backward step (line 8). If the forward and backward searches meet (line 9), we recursively call the algorithm for each side (lines 11-12) and merge their results (line 13). The stop conditions are len = 0, returning an empty set when no path was found, and len = 1, merging both sides when they are connected by single edges.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Appendix A Efficient Path-Finding",
"sec_num": null
}
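The meet-in-the-middle core of the search can be sketched as follows. This is an illustrative reimplementation under our own assumptions, not the authors' code: it uses dense NumPy arrays instead of the paper's sparse matrices, the function name `frontiers_meet` and the toy graph are ours, and it returns only the nodes where the forward and backward frontiers first intersect.

```python
import numpy as np


def frontiers_meet(adj, x, y, max_len):
    """Alternate forward steps from x and backward steps from y over the
    adjacency matrix; return the meeting nodes, or an empty set."""
    n = adj.shape[0]
    nx = np.zeros(n); nx[x] = 1.0  # one-hot vector for x
    ny = np.zeros(n); ny[y] = 1.0  # one-hot vector for y
    for k in range(1, max_len + 1):
        if k % 2 == 1:
            nx = ((adj.T @ nx) > 0).astype(float)  # forward: nodes reachable from nx
        else:
            ny = ((adj @ ny) > 0).astype(float)    # backward: nodes that reach ny
        if nx @ ny > 0:                            # the two frontiers intersect
            return set(np.flatnonzero(nx * ny))
    return set()


# Toy path graph 0 -> 1 -> 2: the searches meet at node 1.
adj = np.array([[0, 1, 0],
                [0, 0, 1],
                [0, 0, 0]], dtype=float)
```

With a sparse adjacency matrix (e.g. scipy.sparse.csr_matrix), each step is a sparse matrix-vector product, which is what makes the first phase tractable on large resource graphs.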
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Random walks for knowledge-based word sense disambiguation",
"authors": [
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Oier",
"middle": [],
"last": "Lopez De Lacalle",
"suffix": ""
},
{
"first": "Aitor",
"middle": [],
"last": "Soroa",
"suffix": ""
}
],
"year": 2014,
"venue": "Computational Linguistics",
"volume": "40",
"issue": "1",
"pages": "57--84",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eneko Agirre, Oier Lopez de Lacalle, and Aitor Soroa. 2014. Random walks for knowledge-based word sense disambiguation. Computational Linguistics, 40(1):57-84.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Dbpedia: A nucleus for a web of open data",
"authors": [
{
"first": "S\u00f6ren",
"middle": [],
"last": "Auer",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Bizer",
"suffix": ""
},
{
"first": "Georgi",
"middle": [],
"last": "Kobilarov",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Cyganiak",
"suffix": ""
},
{
"first": "Zachary",
"middle": [],
"last": "Ives",
"suffix": ""
}
],
"year": 2007,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S\u00f6ren Auer, Christian Bizer, Georgi Kobilarov, Jens Lehmann, Richard Cyganiak, and Zachary Ives. 2007. Dbpedia: A nucleus for a web of open data. Springer.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "How we blessed distributional semantic evaluation",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Lenci",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics",
"volume": "",
"issue": "",
"pages": "1--10",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Pro- ceedings of the GEMS 2011 Workshop on GEomet- rical Models of Natural Language Semantics, pages 1-10, Edinburgh, UK, July. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Entailment above the word level in distributional semantics",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Baroni",
"suffix": ""
},
{
"first": "Raffaella",
"middle": [],
"last": "Bernardi",
"suffix": ""
},
{
"first": "Ngoc-Quynh",
"middle": [],
"last": "Do",
"suffix": ""
},
{
"first": "Chung-Chieh",
"middle": [],
"last": "Shan",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "23--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Baroni, Raffaella Bernardi, Ngoc-Quynh Do, and Chung-chieh Shan. 2012. Entailment above the word level in distributional semantics. In Proceed- ings of the 13th Conference of the European Chap- ter of the Association for Computational Linguistics, pages 23-32, Avignon, France, April. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Semantic parsing on Freebase from question-answer pairs",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Berant",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Chou",
"suffix": ""
},
{
"first": "Roy",
"middle": [],
"last": "Frostig",
"suffix": ""
},
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1533--1544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Berant, Andrew Chou, Roy Frostig, and Percy Liang. 2013. Semantic parsing on Freebase from question-answer pairs. In Proceedings of the 2013 Conference on Empirical Methods in Natural Lan- guage Processing, pages 1533-1544, Seattle, Wash- ington, USA, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Ensemble methods for unsupervised wsd",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "Brody",
"suffix": ""
},
{
"first": "Roberto",
"middle": [],
"last": "Navigli",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "97--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel Brody, Roberto Navigli, and Mirella Lapata. 2006. Ensemble methods for unsupervised wsd. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Lin- guistics, pages 97-104. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning semantic hierarchies via word embeddings",
"authors": [
{
"first": "Ruiji",
"middle": [],
"last": "Fu",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Guo",
"suffix": ""
},
{
"first": "Bing",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wanxiang",
"middle": [],
"last": "Che",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Ting",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1199--1209",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ruiji Fu, Jiang Guo, Bing Qin, Wanxiang Che, Haifeng Wang, and Ting Liu. 2014. Learning semantic hier- archies via word embeddings. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1199-1209, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Knowledge processing on an extended wordNet. WordNet: An electronic lexical database",
"authors": [
{
"first": "Sanda",
"middle": [],
"last": "Harabagiu",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Moldovan",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "305",
"issue": "",
"pages": "381--405",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sanda Harabagiu and Dan Moldovan. 1998. Knowl- edge processing on an extended wordNet. WordNet: An electronic lexical database, 305:381-405.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Automatic acquisition of hyponyms from large text corpora",
"authors": [
{
"first": "A",
"middle": [],
"last": "Marti",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "The 15th International Conference on Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "529--545",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti A Hearst. 1992. Automatic acquisition of hy- ponyms from large text corpora. In COLING 1992 Volume 2: The 15th International Conference on Computational Linguistics, pages 529-545, Nantes, France.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Maximum expected f-measure training of logistic regression models",
"authors": [
{
"first": "Martin",
"middle": [],
"last": "Jansche",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "692--699",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Martin Jansche. 2005. Maximum expected f-measure training of logistic regression models. In Proceed- ings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, pages 692-699, Vancouver, British Columbia, Canada, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Directional distributional similarity for lexical inference",
"authors": [
{
"first": "Lili",
"middle": [],
"last": "Kotlerman",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Idan",
"middle": [],
"last": "Szpektor",
"suffix": ""
},
{
"first": "Maayan",
"middle": [],
"last": "Zhitomirsky-Geffet",
"suffix": ""
}
],
"year": 2010,
"venue": "Natural Language Engineering",
"volume": "16",
"issue": "04",
"pages": "359--389",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lili Kotlerman, Ido Dagan, Idan Szpektor, and Maayan Zhitomirsky-Geffet. 2010. Directional distribu- tional similarity for lexical inference. Natural Lan- guage Engineering, 16(04):359-389.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Focused entailment graphs for open ie propositions",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Goldberger",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the Eighteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "87--97",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Ido Dagan, and Jacob Goldberger. 2014. Focused entailment graphs for open ie propositions. In Proceedings of the Eighteenth Conference on Computational Natural Language Learning, pages 87-97, Ann Arbor, Michigan, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Do supervised distributional methods really learn lexical inference relations?",
"authors": [
{
"first": "Omer",
"middle": [],
"last": "Levy",
"suffix": ""
},
{
"first": "Steffen",
"middle": [],
"last": "Remus",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Biemann",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "970--976",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omer Levy, Steffen Remus, Chris Biemann, and Ido Dagan. 2015. Do supervised distributional meth- ods really learn lexical inference relations? In Pro- ceedings of the 2015 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 970-976, Denver, Colorado, May-June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Distributed representations of words and phrases and their compositionality",
"authors": [
{
"first": "Tomas",
"middle": [],
"last": "Mikolov",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Kai",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Gregory",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Corrado",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2013,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3111--3119",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S Corrado, and Jeffrey Dean. 2013. Distributed rep- resentations of words and phrases and their compo- sitionality. In Advances in Neural Information Pro- cessing Systems, pages 3111-3119.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Wordnet::similarity -measuring the relatedness of concepts",
"authors": [
{
"first": "Ted",
"middle": [],
"last": "Pedersen",
"suffix": ""
},
{
"first": "Siddharth",
"middle": [],
"last": "Patwardhan",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Michelizzi",
"suffix": ""
}
],
"year": 2004,
"venue": "HLT-NAACL 2004: Demonstration Papers",
"volume": "",
"issue": "",
"pages": "38--41",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ted Pedersen, Siddharth Patwardhan, and Jason Miche- lizzi. 2004. Wordnet::similarity -measuring the re- latedness of concepts. In Daniel Marcu Susan Du- mais and Salim Roukos, editors, HLT-NAACL 2004: Demonstration Papers, pages 38-41, Boston, Mas- sachusetts, USA, May 2 -May 7. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Coreference resolution with world knowledge",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "814--824",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2011. Coreference res- olution with world knowledge. In Proceedings of the 49th Annual Meeting of the Association for Com- putational Linguistics: Human Language Technolo- gies, pages 814-824, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Using information content to evaluate semantic similarity in a taxonomy",
"authors": [
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 14th international joint conference on Artificial intelligence",
"volume": "1",
"issue": "",
"pages": "448--453",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philip Resnik. 1995. Using information content to evaluate semantic similarity in a taxonomy. In Proceedings of the 14th international joint confer- ence on Artificial intelligence -Volume 1, IJCAI'95, pages 448-453. Morgan Kaufmann Publishers Inc.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Inclusive yet selective: Supervised distributional hypernymy detection",
"authors": [
{
"first": "Stephen",
"middle": [],
"last": "Roller",
"suffix": ""
},
{
"first": "Katrin",
"middle": [],
"last": "Erk",
"suffix": ""
},
{
"first": "Gemma",
"middle": [],
"last": "Boleda",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "1025--1036",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stephen Roller, Katrin Erk, and Gemma Boleda. 2014. Inclusive yet selective: Supervised distributional hy- pernymy detection. In Proceedings of COLING 2014, the 25th International Conference on Compu- tational Linguistics: Technical Papers, pages 1025- 1036, Dublin, Ireland, August. Dublin City Univer- sity and Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Learning representations by back-propagating errors",
"authors": [
{
"first": "G E",
"middle": [],
"last": "D E Rumelhart",
"suffix": ""
},
{
"first": "R J",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 1986,
"venue": "Nature",
"volume": "",
"issue": "",
"pages": "533--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D E Rumelhart, G E Hinton, and R J Williams. 1986. Learning representations by back-propagating er- rors. Nature, pages 533-536.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Artificial Intelligence: A Modern Approach",
"authors": [
{
"first": "Stuart",
"middle": [],
"last": "Russell",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Norvig",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart Russell and Peter Norvig. 2009. Artificial Intel- ligence: A Modern Approach. Prentice Hall.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Extracting lexical reference rules from wikipedia",
"authors": [
{
"first": "Eyal",
"middle": [],
"last": "Shnarch",
"suffix": ""
},
{
"first": "Libby",
"middle": [],
"last": "Barak",
"suffix": ""
},
{
"first": "Ido",
"middle": [],
"last": "Dagan",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP",
"volume": "",
"issue": "",
"pages": "450--458",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eyal Shnarch, Libby Barak, and Ido Dagan. 2009. Ex- tracting lexical reference rules from wikipedia. In Proceedings of the Joint Conference of the 47th An- nual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP, pages 450-458, Suntec, Singapore, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Yago: a core of semantic knowledge",
"authors": [
{
"first": "M",
"middle": [],
"last": "Fabian",
"suffix": ""
},
{
"first": "Gjergji",
"middle": [],
"last": "Suchanek",
"suffix": ""
},
{
"first": "Gerhard",
"middle": [],
"last": "Kasneci",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weikum",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 16th international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "697--706",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fabian M Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of the 16th international con- ference on World Wide Web, pages 697-706. ACM.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Experiments with three approaches to recognizing lexical entailment",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Saif",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Mohammad",
"suffix": ""
}
],
"year": 2015,
"venue": "Natural Language Engineering",
"volume": "21",
"issue": "03",
"pages": "437--476",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney and Saif M Mohammad. 2015. Ex- periments with three approaches to recognizing lex- ical entailment. Natural Language Engineering, 21(03):437-476.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Similarity of semantic relations",
"authors": [
{
"first": "D",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Turney",
"suffix": ""
}
],
"year": 2006,
"venue": "Computational Linguistics",
"volume": "32",
"issue": "3",
"pages": "379--416",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter D Turney. 2006. Similarity of semantic relations. Computational Linguistics, 32(3):379-416.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Template-based question answering over rdf data",
"authors": [
{
"first": "Christina",
"middle": [],
"last": "Unger",
"suffix": ""
},
{
"first": "Lorenz",
"middle": [],
"last": "B\u00fchmann",
"suffix": ""
},
{
"first": "Jens",
"middle": [],
"last": "Lehmann",
"suffix": ""
},
{
"first": "Axel-Cyrille Ngonga",
"middle": [],
"last": "Ngomo",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gerber",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Cimiano",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st international conference on World Wide Web",
"volume": "",
"issue": "",
"pages": "639--648",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christina Unger, Lorenz B\u00fchmann, Jens Lehmann, Axel-Cyrille Ngonga Ngomo, Daniel Gerber, and Philipp Cimiano. 2012. Template-based question answering over rdf data. In Proceedings of the 21st international conference on World Wide Web, pages 639-648. ACM.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Wikidata: A new platform for collaborative data collection",
"authors": [
{
"first": "Denny",
"middle": [],
"last": "Vrande\u010di\u0107",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 21st international conference companion on World Wide Web",
"volume": "",
"issue": "",
"pages": "1063--1064",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Denny Vrande\u010di\u0107. 2012. Wikidata: A new platform for collaborative data collection. In Proceedings of the 21st international conference companion on World Wide Web, pages 1063-1064. ACM.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A general framework for distributional similarity",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds and David Weir. 2003. A general framework for distributional similarity. In Michael Collins and Mark Steedman, editors, Proceedings of the 2003 Conference on Empirical Methods in Nat- ural Language Processing, pages 81-88.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Learning to distinguish hypernyms and co-hyponyms",
"authors": [
{
"first": "Julie",
"middle": [],
"last": "Weeds",
"suffix": ""
},
{
"first": "Daoud",
"middle": [],
"last": "Clarke",
"suffix": ""
},
{
"first": "Jeremy",
"middle": [],
"last": "Reffin",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Weir",
"suffix": ""
},
{
"first": "Bill",
"middle": [],
"last": "Keller",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of COL-ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers",
"volume": "",
"issue": "",
"pages": "2249--2259",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Julie Weeds, Daoud Clarke, Jeremy Reffin, David Weir, and Bill Keller. 2014. Learning to distinguish hy- pernyms and co-hyponyms. In Proceedings of COL- ING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 2249-2259, Dublin, Ireland, August. Dublin City University and Association for Computational Lin- guistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Open information extraction using wikipedia",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Daniel",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Weld",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "118--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Wu and Daniel S. Weld. 2010. Open information extraction using wikipedia. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics, pages 118-127, Uppsala, Swe- den, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Feature subset selection using a genetic algorithm",
"authors": [
{
"first": "Jihoon",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Vasant",
"middle": [],
"last": "Honavar",
"suffix": ""
}
],
"year": 1998,
"venue": "Feature extraction, construction and selection",
"volume": "",
"issue": "",
"pages": "117--136",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jihoon Yang and Vasant Honavar. 1998. Feature subset selection using a genetic algorithm. In Feature ex- traction, construction and selection, pages 117-136. Springer.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "The dependencies between term-pairs (x \u2192 y), paths (pj), and edge types (ei).",
"uris": null,
"num": null
},
"FIGREF1": {
"type_str": "figure",
"text": "2.A term-pair is classified as positive if at least one of its paths is indicative. 6",
"uris": null,
"num": null
},
"FIGREF2": {
"type_str": "figure",
"text": "Recall-precision curve for proper2015.",
"uris": null,
"num": null
},
"TABREF0": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Examples of Wikidata relations that are indicative of lexical inference."
},
"TABREF1": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Structured resources explored in this work."
},
"TABREF4": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "Datasets evaluated in this work."
},
"TABREF5": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "The manual whitelists commonly used in WordNet."
},
"TABREF6": {
"content": "<table><tr><td>Edge Type</td><td>Example</td></tr><tr><td>occupation</td><td>Daniel Radcliffe \u2192 actor</td></tr><tr><td>sex or gender</td><td>Louisa May Alcott \u2192 woman</td></tr><tr><td>instance of</td><td>Doctor Who \u2192 series</td></tr><tr><td>acted in</td><td>Michael Keaton \u2192 Beetlejuice</td></tr><tr><td>genre</td><td>Touch \u2192 drama</td></tr><tr><td>position played on team</td><td>Jason Collins \u2192 center</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "showing promising results. While"
},
"TABREF7": {
"content": "<table/>",
"type_str": "table",
"html": null,
"num": null,
"text": "An excerpt of the whitelist learnt for proper2015 by the binary model with accompanying true-positives that do not have an indicative path in WordNet."
},
"TABREF9": {
"content": "<table><tr><td>Error Type</td><td colspan=\"4\">kotlerman levy turney proper 2010 2014 2014 2015</td></tr><tr><td>Not Covered</td><td>2%</td><td>12%</td><td>4%</td><td>13%</td></tr><tr><td>No Indicative Paths</td><td>35%</td><td>48%</td><td>73%</td><td>75%</td></tr><tr><td>Whitelist Error</td><td>6%</td><td>3%</td><td>5%</td><td>8%</td></tr><tr><td>Resource Error</td><td>15%</td><td>11%</td><td>7%</td><td>0%</td></tr><tr><td>Annotation Error</td><td>40%</td><td>23%</td><td>7%</td><td>1%</td></tr><tr><td>Other</td><td>2%</td><td>3%</td><td>4%</td><td>3%</td></tr></table>",
"type_str": "table",
"html": null,
"num": null,
"text": "The error analysis setting of each dataset."
}
}
}
}