| { |
| "paper_id": "P09-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:54:13.152507Z" |
| }, |
| "title": "A Metric-based Framework for Automatic Taxonomy Induction", |
| "authors": [ |
| { |
| "first": "Hui", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "huiyang@cs.cmu.edu" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Callan", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Carnegie Mellon University", |
| "location": {} |
| }, |
| "email": "callan@cs.cmu.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on an ontology metric, a score indicating semantic distance, and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering by incorporating heterogeneous features. The flexible design of the framework allows a further study of which features are best for the task under various conditions. The experiments not only show that our system achieves a higher F1-measure than other state-of-the-art systems, but also reveal the interaction between features and various types of relations, as well as the interaction between features and term abstractness.", |
| "pdf_parse": { |
| "paper_id": "P09-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper presents a novel metric-based framework for the task of automatic taxonomy induction. The framework incrementally clusters terms based on an ontology metric, a score indicating semantic distance, and transforms the task into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. It combines the strengths of both lexico-syntactic patterns and clustering by incorporating heterogeneous features. The flexible design of the framework allows a further study of which features are best for the task under various conditions. The experiments not only show that our system achieves a higher F1-measure than other state-of-the-art systems, but also reveal the interaction between features and various types of relations, as well as the interaction between features and term abstractness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Automatic taxonomy induction is an important task in the fields of Natural Language Processing, Knowledge Management, and the Semantic Web. It has been receiving increasing attention because semantic taxonomies, such as WordNet (Fellbaum, 1998), play an important role in solving knowledge-rich problems, including question answering (Harabagiu et al., 2003) and textual entailment (Geffet and Dagan, 2005). Nevertheless, most existing taxonomies are manually created at great cost. These taxonomies are rarely complete, and it is difficult to include new terms in them from emerging or rapidly changing domains. Moreover, manual taxonomy construction is time-consuming, which may make it infeasible for specialized domains and personalized tasks. Automatic taxonomy induction is a solution to augment existing resources and to produce new taxonomies for such domains and tasks.", |
| "cite_spans": [ |
| { |
| "start": 224, |
| "end": 240, |
| "text": "(Fellbaum, 1998)", |
| "ref_id": null |
| }, |
| { |
| "start": 331, |
| "end": 355, |
| "text": "(Harabagiu et al., 2003)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 379, |
| "end": 403, |
| "text": "(Geffet and Dagan, 2005)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Automatic taxonomy induction can be decomposed into two subtasks: term extraction and relation formation. Since term extraction is relatively easy, relation formation becomes the focus of most research on automatic taxonomy induction. In this paper, we also assume that terms in a taxonomy are given and concentrate on the subtask of relation formation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Existing work on automatic taxonomy induction has been conducted under a variety of names, such as ontology learning, semantic class learning, semantic relation classification, and relation extraction. The approaches fall into two main categories: pattern-based and clustering-based. Pattern-based approaches define lexico-syntactic patterns for relations, and use these patterns to discover instances of relations. Clustering-based approaches hierarchically cluster terms based on similarities of their meanings, usually represented by a vector of quantifiable features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Pattern-based approaches are known for their high accuracy in recognizing instances of relations if the patterns are carefully chosen, either manually (Berland and Charniak, 1999; Kozareva et al., 2008) or via automatic bootstrapping (Hearst, 1992; Widdows and Dorow, 2002; Girju et al., 2003). The approaches, however, suffer from sparse coverage of patterns in a given corpus. Recent studies (Kozareva et al., 2008) show that if the size of a corpus, such as the Web, is nearly unlimited, a pattern has a higher chance of explicitly appearing in the corpus. However, corpus size is often not that large; hence the problem still exists. Moreover, since patterns usually extract instances in pairs, the approaches suffer from the problem of inconsistent concept chains after connecting pairs of instances to form taxonomy hierarchies.", |
| "cite_spans": [ |
| { |
| "start": 151, |
| "end": 179, |
| "text": "(Berland and Charniak, 1999;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 180, |
| "end": 202, |
| "text": "Kozareva et al., 2008)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 234, |
| "end": 248, |
| "text": "(Hearst, 1992;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 249, |
| "end": 273, |
| "text": "Widdows and Dorow, 2002;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 274, |
| "end": 293, |
| "text": "Girju et al., 2003)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 395, |
| "end": 417, |
| "text": "Kozareva et al., 2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Clustering-based approaches have a main advantage: they are able to discover relations that do not explicitly appear in text. They also avoid the problem of inconsistent chains by addressing the structure of a taxonomy globally from the outset. Nevertheless, it is generally believed that clustering-based approaches cannot generate relations as accurate as those produced by pattern-based approaches. Moreover, their performance is largely influenced by the types of features used. The common types of features include contextual (Lin, 1998), co-occurrence (Yang and Callan, 2008), and syntactic dependency (Pantel and Lin, 2002). So far there is no systematic study on which features are best for automatic taxonomy induction under various conditions.", |
| "cite_spans": [ |
| { |
| "start": 518, |
| "end": 529, |
| "text": "(Lin, 1998)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 546, |
| "end": 569, |
| "text": "(Yang and Callan, 2008)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 597, |
| "end": 619, |
| "text": "(Pantel and Lin, 2002;", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "This paper presents a metric-based taxonomy induction framework. It combines the strengths of both pattern-based and clustering-based approaches by incorporating lexico-syntactic patterns as one type of features in a clustering framework. The framework integrates contextual, co-occurrence, syntactic dependency, lexical-syntactic patterns, and other features to learn an ontology metric, a score indicating semantic distance, for each pair of terms in a taxonomy; it then incrementally clusters terms based on their ontology metric scores. The incremental clustering is transformed into an optimization problem based on two assumptions: minimum evolution and abstractness. The flexible design of the framework allows a further study of the interaction between features and relations, as well as that between features and term abstractness.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There has been a substantial amount of research on automatic taxonomy induction. As we mentioned earlier, two main approaches are patternbased and clustering-based.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Pattern-based approaches are the main trend for automatic taxonomy induction. Though suffering from the problems of sparse coverage and inconsistent chains, they are still popular due to their simplicity and high accuracy. They have been applied to extract various types of lexical and semantic relations, including is-a, part-of, sibling, synonym, causal, and many others.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Pattern-based approaches started from and still pay a great deal of attention to the most common is-a relations. Hearst (1992) pioneered using a hand-crafted list of hyponym patterns as seeds and employing bootstrapping to discover is-a relations. Since then, many approaches (Mann, 2002; Snow et al., 2005) have used Hearst-style patterns in their work on is-a relations. For instance, Mann (2002) extracted is-a relations for proper nouns with Hearst-style patterns. Later work extended is-a relation acquisition to the terascale, and automatically identified hypernym patterns by minimal edit distance.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 126, |
| "text": "Hearst (1992)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 276, |
| "end": 288, |
| "text": "(Mann, 2002;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 289, |
| "end": 307, |
| "text": "Snow et al., 2005)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 387, |
| "end": 398, |
| "text": "Mann (2002)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Another common relation is sibling, which describes the relation of sharing similar meanings and being members of the same class. Terms in sibling relations are also known as class members or similar terms. Inspired by conjunction and appositive structures, Riloff and Shepherd (1997) and Roark and Charniak (1998) used co-occurrence statistics in local context to discover sibling relations. The KnowItAll system extended the work in (Hearst, 1992) and bootstrapped patterns on the Web to discover siblings; it also ranked and selected the patterns by statistical measures. Widdows and Dorow (2002) combined symmetric patterns and graph link analysis to discover sibling relations. Davidov and Rappoport (2006) also used symmetric patterns for this task. Recently, Kozareva et al. (2008) combined a double-anchored hyponym pattern with graph structure to extract siblings.", |
| "cite_spans": [ |
| { |
| "start": 262, |
| "end": 288, |
| "text": "Riloff and Shepherd (1997)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 291, |
| "end": 316, |
| "text": "Roark and Charniak (1998)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 436, |
| "end": 450, |
| "text": "(Hearst, 1992)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 576, |
| "end": 600, |
| "text": "Widdows and Dorow (2002)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 767, |
| "end": 789, |
| "text": "Kozareva et al. (2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The third common relation is part-of. Berland and Charniak (1999) used two meronym patterns to discover part-of relations, and also used statistical measures to rank and select the matching instances. Girju et al. (2003) took a similar approach to Hearst (1992) for part-of relations.", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 65, |
| "text": "Berland and Charniak (1999)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 201, |
| "end": 220, |
| "text": "Girju et al. (2003)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 248, |
| "end": 261, |
| "text": "Hearst (1992)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Other types of relations that have been studied by pattern-based approaches include question-answer relations (such as birthdates and inventors) (Ravichandran and Hovy, 2002), synonyms and antonyms (Lin et al., 2003), general purpose analogy (Turney et al., 2003), verb relations (including similarity, strength, antonym, enablement and temporal) (Chklovski and Pantel, 2004), entailment (Szpektor et al., 2004), and more specific relations, such as purpose, creation (Cimiano and Wenderoth, 2007), LivesIn, and EmployedBy (Bunescu and Mooney, 2007).", |
| "cite_spans": [ |
| { |
| "start": 143, |
| "end": 172, |
| "text": "(Ravichandran and Hovy, 2002)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 197, |
| "end": 215, |
| "text": "(Lin et al., 2003)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 242, |
| "end": 263, |
| "text": "(Turney et al., 2003)", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 348, |
| "end": 376, |
| "text": "(Chklovski and Pantel, 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 390, |
| "end": 413, |
| "text": "(Szpektor et al., 2004)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 471, |
| "end": 500, |
| "text": "(Cimiano and Wenderoth, 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 527, |
| "end": 554, |
| "text": "(Bunescu and Mooney , 2007)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The most commonly used technique in pattern-based approaches is bootstrapping (Hearst, 1992; Girju et al., 2003; Ravichandran and Hovy, 2002; Pantel and Pennacchiotti, 2006). It utilizes a few hand-crafted seed patterns to extract instances from corpora, then extracts new patterns using these instances, and continues the cycle to find new instances and new patterns. It is effective and scalable to large datasets; however, uncontrolled bootstrapping soon generates undesired instances once a noisy pattern is brought into the cycle.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 92, |
| "text": "(Hearst, 1992;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 93, |
| "end": 112, |
| "text": "Girju et al., 2003;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 113, |
| "end": 141, |
| "text": "Ravichandran and Hovy, 2002;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 142, |
| "end": 173, |
| "text": "Pantel and Pennacchiotti, 2006)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "To aid bootstrapping, methods of pattern quality control are widely applied. Statistical measures, such as point-wise mutual information (Pantel and Pennacchiotti, 2006) and conditional probability (Cimiano and Wenderoth, 2007), have been shown to be effective for ranking and selecting patterns and instances. Pattern quality control has also been investigated by using WordNet (Girju et al., 2006), graph structures built among terms (Widdows and Dorow, 2002; Kozareva et al., 2008), and pattern clusters (Davidov and Rappoport, 2008).", |
| "cite_spans": [ |
| { |
| "start": 137, |
| "end": 168, |
| "text": "Pantel and Pennacchiotti, 2006)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 197, |
| "end": 226, |
| "text": "(Cimiano and Wenderoth, 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 366, |
| "end": 386, |
| "text": "(Girju et al., 2006)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 424, |
| "end": 449, |
| "text": "(Widdows and Dorow, 2002;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 450, |
| "end": 472, |
| "text": "Kozareva et al., 2008)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 496, |
| "end": 525, |
| "text": "(Davidov and Rappoport, 2008)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Clustering-based approaches usually represent word contexts as vectors and cluster words based on similarities of the vectors (Brown et al., 1992; Lin, 1998). Besides contextual features, the vectors can also be built from verb-noun relations (Pereira et al., 1993), syntactic dependency (Snow et al., 2005), co-occurrence (Yang and Callan, 2008), and conjunction and appositive features (Caraballo, 1999). More work is described in (Buitelaar et al., 2005; Cimiano and Volker, 2005). Clustering-based approaches allow discovery of relations that do not explicitly appear in text. Pantel and Pennacchiotti (2006), however, pointed out that clustering-based approaches generally fail to produce coherent clusters for small corpora. In addition, clustering-based approaches have only been applied to is-a and sibling relations.", |
| "cite_spans": [ |
| { |
| "start": 126, |
| "end": 146, |
| "text": "(Brown et al., 1992;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 147, |
| "end": 157, |
| "text": "Lin, 1998)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 248, |
| "end": 270, |
| "text": "(Pereira et al., 1993)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 294, |
| "end": 312, |
| "text": "Snow et al., 2005)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 329, |
| "end": 352, |
| "text": "(Yang and Callan, 2008)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 391, |
| "end": 407, |
| "text": "(Caraballo, 1999", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 437, |
| "end": 461, |
| "text": "(Buitelaar et al., 2005;", |
| "ref_id": null |
| }, |
| { |
| "start": 462, |
| "end": 487, |
| "text": "Cimiano and Volker, 2005)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Many clustering-based approaches face the challenge of appropriately labeling non-leaf clusters. The labeling amplifies the difficulty of creating and evaluating taxonomies. Agglomerative clustering (Brown et al., 1992; Caraballo, 1999; Rosenfeld and Feldman, 2007; Yang and Callan, 2008) iteratively merges the most similar clusters into bigger clusters, which need to be labeled. Divisive clustering, such as CBC (Clustering By Committee), which constructs cluster centroids by averaging the feature vectors of a subset of carefully chosen cluster members (Pantel and Lin, 2002), also needs to label the parents of split clusters. In this paper, we take an incremental clustering approach, in which terms and relations are added into a taxonomy one at a time and their parents are drawn from the existing taxonomy. The advantage of the incremental approach is that it eliminates the trouble of inventing cluster labels and concentrates on placing terms in the correct positions in a taxonomy hierarchy. The work of Snow et al. (2006) is the most similar to ours, because they also took an incremental approach to constructing taxonomies. In their work, a taxonomy grows based on maximization of the conditional probability of relations given evidence, while in ours it grows based on optimization of taxonomy structures and modeling of term abstractness. Moreover, our approach employs heterogeneous features from a wide range, while theirs used only syntactic dependency. We compare system performance between (Snow et al., 2006) and our framework in Section 5.", |
| "cite_spans": [ |
| { |
| "start": 202, |
| "end": 222, |
| "text": "(Brown et al., 1992;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 223, |
| "end": 239, |
| "text": "Caraballo, 1999;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 240, |
| "end": 268, |
| "text": "Rosenfeld and Feldman, 2007;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 269, |
| "end": 291, |
| "text": "Yang and Callan, 2008)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 572, |
| "end": 582, |
| "text": "Lin, 2002;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 1002, |
| "end": 1020, |
| "text": "Snow et al. (2006)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1493, |
| "end": 1512, |
| "text": "(Snow et al., 2006)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The features used in this work are indicators of semantic relations between terms. Given two input terms (c_x, c_y), a feature is defined as a function generating a single numeric score", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "h(c_x, c_y) \u2208 \u211d, or a vector of numeric scores h(c_x, c_y) \u2208 \u211d^n.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The features include contextual, co-occurrence, syntactic dependency, lexical-syntactic pattern, and miscellaneous features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The first set of features captures contextual information of terms. According to the Distributional Hypothesis (Harris, 1954), words appearing in similar contexts tend to be similar. Therefore, word meanings can be inferred from and represented by contexts. Based on this hypothesis, we develop the following features: (1) Global Context KL-Divergence: The global context of each input term is the search results collected by querying search engines against several corpora (details in Section 5.1). It is built into a unigram language model without smoothing for each term. This feature function measures the Kullback-Leibler divergence (KL divergence) between the language models associated with the two inputs. (2) Local Context KL-Divergence: The local context is the collection of all the left two and the right two words surrounding an input term. Similarly, the local context is built into a unigram language model without smoothing for each term; the feature function outputs the KL divergence between the models.", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 121, |
| "text": "(Harris, 1954)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The second set of features is co-occurrence. In our work, co-occurrence is measured by the point-wise mutual information (PMI) between two terms, where Count(.) is defined as the number of documents or sentences containing the term(s), or as n in \"Results 1-10 of about n for term\" appearing on the first page of Google search results for a term or the concatenation of a term pair. Based on different definitions of Count(.), we have (3) Document PMI, (4) Sentence PMI, and (5) Google PMI as the co-occurrence features. The third set of features employs syntactic dependency analysis. We have (6) Minipar Syntactic Distance, which measures the average length of the shortest syntactic paths (in the first syntactic parse tree returned by Minipar 1 ) between two terms in sentences containing them, and (7) Modifier Overlap, (8) Object Overlap, (9) Subject Overlap, and (10) Verb Overlap, which measure the number of overlaps between modifiers, objects, subjects, and verbs, respectively, for the two terms in sentences containing them. We use Assert 2 to label the semantic roles.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "PMI(c_x, c_y) = log ( Count(c_x, c_y) / ( Count(c_x) Count(c_y) ) )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The fourth set of features is lexical-syntactic patterns. We have (11) Hypernym Patterns, based on patterns proposed by (Hearst, 1992) and (Snow et al., 2005), (12) Sibling Patterns, which are basically conjunctions, and (13) Part-of Patterns, based on patterns proposed by (Girju et al., 2003) and (Cimiano and Wenderoth, 2007). Table 1 lists all patterns. Each feature function returns a vector of scores for two input terms, one score per pattern. A score is 1 if the two terms match a pattern in text, and 0 otherwise. The last set of features is miscellaneous. We have (14) Word Length Difference to measure the length difference between two terms, and (15) Definition Overlap to measure the number of word overlaps between the term definitions obtained by querying Google with \"define:term\".", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 133, |
| "text": "(Hearst, 1992)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 138, |
| "end": 157, |
| "text": "(Snow et al., 2005)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 272, |
| "end": 292, |
| "text": "(Girju et al., 2003)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 297, |
| "end": 326, |
| "text": "(Cimiano and Wenderoth, 2007)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "These heterogeneous features vary from simple statistics to complicated syntactic dependency features, from basic word length to comprehensive Web-based contextual features. The flexible design of our learning framework allows us to use all of them, and even allows us to use different sets of them under different conditions, for instance, different types of relations and different abstraction levels. We study the interaction between features and relations and that between features and abstractness in Section 5. 1 http://www.cs.ualberta.ca/lindek/minipar.htm. 2 http://cemantix.org/assert.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Features", |
| "sec_num": "3" |
| }, |
| { |
| "text": "This section presents the metric-based framework, which incrementally clusters terms to form taxonomies. By minimizing the changes to the taxonomy structure and modeling term abstractness at each step, it finds the optimal position for each term in a taxonomy. We first introduce definitions, terminology, and assumptions about taxonomies; then we formulate automatic taxonomy induction as a multi-criterion optimization and solve it with a greedy algorithm; lastly, we show how to estimate ontology metrics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Metric-based Framework", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We define a taxonomy T as a data model that represents a set of terms C and a set of relations R between these terms; T can be written as T(C,R). Note that for the subtask of relation formation, we assume that the term set C is given. A full taxonomy is a tree containing all the terms in C. A partial taxonomy is a tree containing only a subset of the terms in C. In our framework, automatic taxonomy induction is the process of constructing a full taxonomy T given a set of terms C and an initial partial taxonomy", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomies, Ontology Metric, Assumptions, and Information Functions", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "T_0(S_0, R_0), where S_0 \u2286 C", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomies, Ontology Metric, Assumptions, and Information Functions", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": ". Note that T_0 is possibly empty. The process starts from the initial partial taxonomy T_0 and randomly adds terms from C to it one by one, until a full taxonomy is formed, i.e., all terms in C have been added.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Taxonomies, Ontology Metric, Assumptions, and Information Functions", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We define an ontology metric as a distance measure between two terms (c x ,c y ) in a taxonomy T(C,R). Formally, it is a function", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology Metric", |
| "sec_num": null |
| }, |
| { |
| "text": "d : C \u00d7 C \u2192 \u211d+,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Ontology Metric", |
| "sec_num": null |
| }, |
| { |
| "text": "where C is the set of terms in T. An ontology metric d on a taxonomy T with edge weights w is, for any term pair (c_x, c_y) \u2208 C \u00d7 C, the sum of all edge weights along the shortest path between the pair; E(c_x, c_y) denotes the set of edges defining the shortest path from term c_x to c_y. Figure 1 illustrates ontology metrics for a 5-node taxonomy. Section 4.3 presents the details of learning ontology metrics.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 266, |
| "end": 274, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Ontology Metric", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "d(c_x, c_y) = \u2211_{e \u2208 E(c_x, c_y)} w(e)", |
| "eq_num": "" |
| } |
| ], |
| "section": "Ontology Metric", |
| "sec_num": null |
| }, |
| { |
| "text": "The amount of information in a taxonomy T is measured and represented by an information function Info(T). An information function is defined as the sum of the ontology metrics among a set of term pairs. The function can be defined over a taxonomy, or on a single level of a taxonomy. For a taxonomy T(C,R), we define its information function as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Functions", |
| "sec_num": null |
| }, |
| { |
| "text": "Info(T) = \u2211_{c_x, c_y \u2208 C, x < y} d(c_x, c_y) (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Functions", |
| "sec_num": null |
| }, |
| { |
| "text": "Similarly, we define the information function for an abstraction level L i as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Functions", |
| "sec_num": null |
| }, |
| { |
| "text": "Info(L_i) = \u2211_{c_x, c_y \u2208 L_i, x < y} d_i(c_x, c_y) (2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Functions", |
| "sec_num": null |
| }, |
| { |
| "text": "where L_i is the subset of terms lying at the i-th level of a taxonomy T. For example, in Figure 1, node 1 is at level L_1, and nodes 2 and 5 are at level L_2.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 90, |
| "end": 98, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Information Functions", |
| "sec_num": null |
| }, |
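| { |
| "text": "To make Equation (1) concrete, the sketch below (our illustration, not from the paper; the toy taxonomy and its term names are invented) computes the ontology metric d(c_x, c_y) as a shortest-path distance with unit edge weights, and Info(T) as the sum of d over all term pairs:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Functions", |
| "sec_num": null |
| }, |

```python
# Illustration only: a 4-node toy taxonomy (names invented, not the paper's data).
from itertools import combinations

edges = {("gathering", "meal"), ("gathering", "party"), ("meal", "dinner")}

def neighbors(node):
    return [b if a == node else a for (a, b) in edges if node in (a, b)]

def d(cx, cy):
    # Ontology metric: BFS shortest-path distance, every edge weight set to 1.
    frontier, seen, dist = [cx], {cx}, 0
    while frontier:
        if cy in frontier:
            return dist
        frontier = [n for f in frontier for n in neighbors(f) if n not in seen]
        seen.update(frontier)
        dist += 1
    return float("inf")

def info(terms):
    # Equation (1): Info(T) = sum of d(c_x, c_y) over all pairs with x < y.
    return sum(d(cx, cy) for cx, cy in combinations(sorted(terms), 2))
```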
| { |
| "text": "Given the above definitions about taxonomies, we make the following assumptions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": "Minimum Evolution Assumption. Inspired by the minimum evolution tree selection criterion widely used in phylogeny (Hendy and Penny, 1985) , we assume that a good taxonomy not only minimizes the overall semantic distance among the terms but also avoid dramatic changes. Construction of a full taxonomy is proceeded by adding terms one at a time, which yields a series of partial taxonomies. After adding each term, the current taxonomy T n+1 from the previous taxonomy T n is one that introduces the least changes between the information in the two taxonomies:", |
| "cite_spans": [ |
| { |
| "start": 114, |
| "end": 137, |
| "text": "(Hendy and Penny, 1985)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": ") , ( min arg ' ' 1 T T Info T n T n \u2206 = +", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": "where the information change function is", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": "| ) ( ) ( | ) , ( b a b a T Info T Info T T Info \u2212 = \u2206 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": "Abstractness Assumption. In a taxonomy, concrete concepts usually lay at the bottom of the hierarchy while abstract concepts often occupy the intermediate and top levels. Concrete concepts often represent physical entities, such as \"basketball\" and \"mercury pollution\". While abstract concepts, such as \"science\" and \"economy\", do not have a physical form thus we must imagine their existence. This obvious difference suggests that there is a need to treat them differently in taxonomy induction. Hence we assume that terms at the same abstraction level have common characteristics and share the same Info(.) function. We also assume that terms at different abstraction levels have different characteristics; hence they do not necessarily share the same Info(.) function. That is to say,", |
| "cite_spans": [ |
| { |
| "start": 601, |
| "end": 608, |
| "text": "Info(.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": ", concept T c \u2208 \u2200 , level n abstractio T L i \u2282 (.). uses i i Info c L c \u21d2 \u2208", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Assumptions", |
| "sec_num": null |
| }, |
| { |
| "text": "The Minimum Evolution Objective Based on the minimum evolution assumption, we define the goal of taxonomy induction is to find the optimal full taxonomy T\u02c6 such that the information changes are the least since the initial partial taxonomy T 0 , i.e., to find:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ") , ( min ar\u011d ' 0 ' T T Info T T \u2206 = (3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "where ' T is a full taxonomy, i.e., the set of terms in ' T equals C. To find the optimal solution for Equation 3, T\u02c6, we need to find the optimal term set \u0108 and the optimal relation set R . Since the optimal term set for a full taxonomy is always C, the only unknown part left is R . Thus, Equation (3) can be transformed equivalently into:", |
| "cite_spans": [ |
| { |
| "start": 6, |
| "end": 7, |
| "text": "'", |
| "ref_id": null |
| }, |
| { |
| "start": 56, |
| "end": 57, |
| "text": "'", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ")) , ( ), , ( ( min ar\u011d 0 0 0 ' ' ' R S T R C T Info R R \u2206 =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Note that in the framework, terms are added incrementally into a taxonomy. Each term insertion yields a new partial taxonomy T. By the minimum evolution assumption, the optimal next partial taxonomy is one gives the least information change. Therefore, the updating function for the set of relations 1 + n R after a new term z is inserted can be calculated as:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": ")) , ( ), }, { ( ( min ar\u011d ' ' n n n R R S T R z S T Info R \u222a \u2206 =", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "By plugging in the definition of the information change function (.,.) Info \u2206 in Section 4.1 and Equation (1), the updating function becomes:", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 70, |
| "text": "(.,.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "| ) , ( ) , ( | min ar\u011d , } { , ' \u2211 \u2211 \u2208 \u222a \u2208 \u2212 = n S y c x c y x z n S y c x c y x R c c d c c d R", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The above updating function can be transformed into a minimization problem: ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "< \u2212 \u2264 \u2212 \u2264 \u2211 \u2211 \u2211 \u2211 \u222a \u2208 \u2208 \u2208 \u222a \u2208 } { , , , } { , ) , ( ) , ( ) , ( ) , ( subject to min", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The minimization follows the minimum evolution assumption; hence we call it the minimum evolution objective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Problem Formulation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The abstractness assumption suggests that term abstractness should be modeled explicitly by learning separate information functions for terms at different abstraction levels. We approximate an information function by a linear interpolation of some underlying feature functions. Each abstraction level L i is characterized by its own information function Info i (.). The least square fit of Info i (.) is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Abstractness Objective", |
| "sec_num": null |
| }, |
| { |
| "text": ". | ) ( | min 2 i T i i i H W L Info \u2212", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Abstractness Objective", |
| "sec_num": null |
| }, |
| { |
| "text": "By plugging Equation (2) and minimizing over every abstraction level, we have: (.,.) . This minimization follows the abstractness assumption; hence we call it the abstractness objective.", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 84, |
| "text": "(.,.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Abstractness Objective", |
| "sec_num": null |
| }, |
| { |
| "text": "2 , , , )) , ( ) , ( ( min y x j i j j i i i L y c x c y x c c h w c c d \u2211 \u2211 \u2211 \u2212 \u2208 where j i h , (.,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Abstractness Objective", |
| "sec_num": null |
| }, |
| { |
| "text": "We propose that both minimum evolution and abstractness objectives need to be satisfied. To optimize multiple criteria, the Pareto optimality needs to be satisfied (Boyd and Vandenberghe, 2004) . We handle this by introducing \u07e3 \u202b\u05d0\u202c \u123e0,1\u123f to control the contribution of each objective. The multi-criterion optimization function is:", |
| "cite_spans": [ |
| { |
| "start": 164, |
| "end": 193, |
| "text": "(Boyd and Vandenberghe, 2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multi-Criterion Optimization Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "y x c c h w c c d v c c d c c d u c c d c c d u v u y x j i j j i i L c c y x z S c c y x S c c y x S c c y x z S c c y x i y x n y x n y x n y x n y x < \u2212 = \u2212 \u2264 \u2212 \u2264 \u2212 + \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2211 \u2208 \u222a \u2208 \u2208 \u2208 \u222a \u2208 2 )) , ( ) , ( ( ) , ( ) , ( ) , ( ) , ( subject to ) 1 ( min , , , } { , , , } { , \u03bb \u03bb", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multi-Criterion Optimization Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "The above optimization can be solved by a greedy optimization algorithm. At each term insertion step, it produces a new partial taxonomy by adding to the existing partial taxonomy a new term z, and a new set of relations R(z,.). z is attached to every nodes in the existing partial taxonomy; and the algorithm selects the optimal position indicated by R(z,.), which minimizes the multicriterion objective function. The algorithm is:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multi-Criterion Optimization Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "); , ( )}; ) 1 ( ( min {arg ; \\ R S T v u R R {z} S S S C z (z,.) R Output foreach \u03bb \u03bb \u2212 + \u222a \u2192 \u222a \u2192 \u2208", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multi-Criterion Optimization Algorithm", |
| "sec_num": null |
| }, |
| { |
| "text": "The above algorithm presents a general incremental clustering procedure to construct taxonomies. By minimizing the taxonomy structure changes and modeling term abstractness at each step, it finds the optimal position of each term in the taxonomy hierarchy.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multi-Criterion Optimization Algorithm", |
| "sec_num": null |
| }, |
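| { |
| "text": "The greedy loop above can be sketched as follows (our paraphrase, not the authors' code; the objective used here is an invented stand-in for the paper's u and v terms):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Multi-Criterion Optimization Algorithm", |
| "sec_num": null |
| }, |

```python
# Greedy incremental clustering sketch: each new term z is tried under every
# existing node, and the attachment minimizing (1 - lam)*u + lam*v is kept.
def induce_taxonomy(terms, objective, lam=0.5):
    root, *rest = terms
    parent = {root: None}                   # partial taxonomy: child -> parent
    for z in rest:
        def score(p):
            u, v = objective(parent, z, p)
            return (1 - lam) * u + lam * v
        parent[z] = min(parent, key=score)  # optimal position for z
    return parent

# Toy stand-in objective (invented): u penalizes non-prefix parents,
# v penalizes large differences in term length (a crude abstractness proxy).
def toy_objective(parent, z, p):
    u = 0 if z.startswith(p) else 1
    v = abs(len(z) - len(p)) / 10
    return u, v
```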
| { |
| "text": "Learning a good ontology metric is important for the multi-criterion optimization algorithm. In this work, the estimation and prediction of ontology metric are achieved by ridge regression (Hastie et al., 2001 ). In the training data, an ontology metric d(c x ,c y ) for a term pair (c x ,c y ) is generated by assuming every edge weight as 1 and summing up all the edge weights along the shortest path from c x to c y . We assume that there are some underlying feature functions which measure the semantic distance from term c x to c y . A weighted combination of these functions approximates the ontology metric for (c x ,c y ):", |
| "cite_spans": [ |
| { |
| "start": 189, |
| "end": 209, |
| "text": "(Hastie et al., 2001", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Ontology Metric", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "\u2211 = ) , ( ) , ( y x j j j c c h w y x d", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Ontology Metric", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where j w is the j th weight for", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Ontology Metric", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": ") , ( y x j c c h", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Ontology Metric", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": ", the j th feature function. The feature functions are generated as mentioned in Section 3.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Ontology Metric", |
| "sec_num": "4.3" |
| }, |
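| { |
| "text": "As a sketch of the ridge-regression step (our illustration; the feature values below are invented, and plain closed-form ridge regression is an assumption about the setup rather than the paper's exact procedure), the weights w are fit so that the weighted feature combination approximates the gold shortest-path distances:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimating Ontology Metric", |
| "sec_num": "4.3" |
| }, |

```python
import numpy as np

# Rows: feature-function values h_j(c_x, c_y) for four training term pairs
# (values invented for illustration).
H = np.array([[1.0, 0.2],
              [0.5, 0.9],
              [0.1, 0.4],
              [0.8, 0.7]])
d = np.array([1.0, 2.0, 1.0, 2.0])   # gold ontology metrics (edge weights = 1)

alpha = 0.1                           # ridge penalty
# Closed-form ridge solution: w = (H^T H + alpha * I)^-1 H^T d
w = np.linalg.solve(H.T @ H + alpha * np.eye(H.shape[1]), H.T @ d)

def predict(h):
    # Predicted ontology metric: d(c_x, c_y) = sum_j w_j * h_j(c_x, c_y)
    return float(h @ w)
```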
| { |
| "text": "The gold standards used in the evaluation are hypernym taxonomies extracted from WordNet and ODP (Open Directory Project), and meronym taxonomies extracted from WordNet. In WordNet taxonomy extraction, we only use the word senses within a particular taxonomy to ensure no ambiguity. In ODP taxonomy extraction, we parse the topic lines, such as \"Topic r:id=`Top/Arts/Movies'\", in the XML databases to obtain relations, such as is_a(movies, arts). In total, there are 100 hypernym taxonomies, 50 each extracted from WordNet 3 and ODP 4 , and 50 meronym taxonomies from WordNet 5 . summarizes the data statistics.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We also use two Web-based auxiliary datasets to generate features mentioned in Section 3:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "\u2022 Wikipedia corpus. The entire Wikipedia corpus is downloaded and indexed by Indri 6 . The top 100 documents returned by Indri are the global context of a term when querying with the term. \u2022 Google corpus. A collection of the top 1000 documents by querying Google using each term, and each term pair. Each top 1000 documents are the global context of a query term. Both corpora are split into sentences and are used to generate contextual, co-occurrence, syntactic dependency and lexico-syntactic pattern features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We evaluate the quality of automatic generated taxonomies by comparing them with the gold standards in terms of precision, recall and F1measure. F1-measure is calculated as 2*P*R/ (P+R), where P is precision, the percentage of correctly returned relations out of the total returned relations, R is recall, the percentage of correctly returned relations out of the total relations in the gold standard.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.2" |
| }, |
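| { |
| "text": "The evaluation metrics above reduce to a few lines; here is a sketch (the relation tuples are invented examples, not the gold-standard data):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.2" |
| }, |

```python
# Precision, recall, and F1 = 2*P*R/(P+R) over returned vs. gold relation sets.
def prf1(returned, gold):
    correct = len(returned & gold)
    p = correct / len(returned) if returned else 0.0
    r = correct / len(gold) if gold else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

# Invented example relations (child, parent):
gold = {("movies", "arts"), ("drama", "arts"), ("opera", "arts")}
returned = {("movies", "arts"), ("drama", "arts"), ("drama", "movies")}
```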
| { |
| "text": "Leave-one-out cross validation is used to average the system performance across different training and test datasets. For each 50 datasets from WordNet hypernyms, WordNet meronyms or ODP hypernyms, we randomly pick 49 of them to generate training data, and test on the remaining dataset. We repeat the process for 50 times, with different training and test sets at each 6 http://www.lemurproject.org/indri/. time, and report the averaged precision, recall and F1-measure across all 50 runs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We also group the fifteen features in Section 3 into six sets: contextual, co-concurrence, patterns, syntactic dependency, word length difference and definition. Each set is turned on one by one for experiments in Section 5.4 and 5.5.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Methodology", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In this section, we compare the following automatic taxonomy induction systems: HE, the system by Hearst (1992) with 6 hypernym patterns; GI, the system by Girju et al. (2003) with 3 meronym patterns; PR, the probabilistic framework by Snow et al. (2006) ; and ME, the metric-based framework proposed in this paper. To have a fair comparison, for PR, we estimate the conditional probability of a relation given the evidence P(R ij |E ij ), as in (Snow et al. 2006) , by using the same set of features as in ME. Table 3 shows precision, recall, and F1measure of each system for WordNet hypernyms (is-a), WordNet meronyms (part-of) and ODP hypernyms (is-a). Bold font indicates the best performance in a column. Note that HE is not applicable to part-of, so is GI to is-a. Table 3 shows that systems using heterogeneous features (PR and ME) achieve higher F1measure than systems only using patterns (HE and GI) with a significant absolute gain of >30%. Generally speaking, pattern-based systems show higher precision and lower recall, while systems using heterogeneous features show lower precision and higher recall. However, when considering both precision and recall, using heterogeneous features is more effective than just using patterns. The proposed system ME consistently produces the best F1-measure for all three tasks.", |
| "cite_spans": [ |
| { |
| "start": 98, |
| "end": 111, |
| "text": "Hearst (1992)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 156, |
| "end": 175, |
| "text": "Girju et al. (2003)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 236, |
| "end": 254, |
| "text": "Snow et al. (2006)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 446, |
| "end": 464, |
| "text": "(Snow et al. 2006)", |
| "ref_id": "BIBREF34" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 511, |
| "end": 518, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 771, |
| "end": 778, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Performance of Taxonomy Induction", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The performance of the systems for ODP/is-a is worse than that for WordNet/is-a. This may be because there is more noise in ODP than in WordNet. For example, under artificial intelligence, ODP has neural networks, natural language and academic departments. Clearly, academic departments is not a hyponym of artificial intelligence. The noise in ODP interferes with the learning process, thus hurts the performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Performance of Taxonomy Induction", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "This section studies the impact of different sets of features on different types of relations. Table 4 shows F1-measure of using each set of features alone on taxonomy induction for WordNet is-a, sibling, and part-of relations. Bold font means a feature set gives a major contribution to the task of automatic taxonomy induction for a particular type of relation. Table 4 shows that different relations favor different sets of features. Both co-occurrence and lexico-syntactic patterns work well for all three types of relations. It is interesting to see that simple co-occurrence statistics work as good as lexico-syntactic patterns. Contextual features work well for sibling relations, but not for is-a and part-of. Syntactic features also work well for sibling, but not for is-a and part-of. The similar behavior of contextual and syntactic features may be because that four out of five syntactic features (Modifier, Subject, Object, and Verb overlaps) are just surrounding context for a term.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 95, |
| "end": 102, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 364, |
| "end": 371, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features vs. Relations", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Comparing the is-a and part-of columns in Table 4 and the ME rows in Table 3 , we notice a significant difference in F1-measure. It indicates that combination of heterogeneous features gives more rise to the system performance than a single set of features does.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 42, |
| "end": 49, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 69, |
| "end": 76, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features vs. Relations", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "This section studies the impact of different sets of features on terms at different abstraction le-vels. In the experiments, F1-measure is evaluated for terms at each level of a taxonomy, not the whole taxonomy. Table 5 and 6 demonstrate F1measure of using each set of features alone on each abstraction levels. Columns 2-6 are indices of the levels in a taxonomy. The larger the indices are, the lower the levels. Higher levels contain abstract terms, while lower levels contain concrete terms. L 1 is ignored here since it only contains a single term, the root. Bold font indicates good performance in a column.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 212, |
| "end": 219, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Features vs. Abstractness", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "Both tables show that abstract terms and concrete terms favor different sets of features. In particular, contextual, co-occurrence, pattern, and syntactic features work well for terms at L 4 -L 6 , i.e., concrete terms; co-occurrence works well for terms at L 2 -L 3, i.e., abstract terms. This difference indicates that terms at different abstraction levels have different characteristics; it confirms our abstractness assumption in Section 4.1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features vs. Abstractness", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "We also observe that for abstract terms in WordNet, patterns work better than contextual features; while for abstract terms in ODP, the conclusion is the opposite. This may be because that WordNet has a richer vocabulary and a more rigid definition of hypernyms, and hence is-a relations in WordNet are recognized more effectively by using lexico-syntactic patterns; while ODP contains more noise, and hence it favors features requiring less rigidity, such as the contextual features generated from the Web.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Features vs. Abstractness", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "This paper presents a novel metric-based taxonomy induction framework combining the strengths of lexico-syntactic patterns and clustering. The framework incrementally clusters terms and transforms automatic taxonomy induction into a multi-criteria optimization based on minimization of taxonomy structures and modeling of term abstractness. The experiments show that our framework is effective; it achieves higher F1measure than three state-of-the-art systems. The paper also studies which features are the best for different types of relations and for terms at different abstraction levels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Most prior work uses a single rule or feature function for automatic taxonomy induction at all levels of abstraction. Our work is a more general framework which allows a wider range of features and different metric functions at different abstraction levels. This more general framework has the potential to learn more complex taxonomies than previous approaches. Word Length 0.15 0.15 0.15 0.14 0.14 Definition 0.13 0.13 0.13 0.12 0.12 Table 6 . F1-measure for Features vs. Abstractness:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 436, |
| "end": 443, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "ODP/is-a.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "6" |
| }, |
| { |
| "text": "WordNet hypernym taxonomies are from 12 topics:gathering, professional, people, building, place, milk, meal, water, beverage, alcohol, dish, and herb. 4 ODP hypernym taxonomies are from 16 topics: computers, robotics, intranet, mobile computing, database, operating system, linux, tex, software, computer science, data communication, algorithms, data formats, security multimedia, and artificial intelligence. 5 WordNet meronym taxonomies are from 15 topics: bed, car, building, lamp, earth, television, body, drama, theatre, water, airplane, piano, book, computer, and watch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This research was supported by NSF grant IIS-0704210. Any opinions, findings, conclusions, or recommendations expressed in this paper are of the authors, and do not necessarily reflect those of the sponsor.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgements", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Finding parts in very large corpora. ACL'99", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Berland", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Berland and E. Charniak. 1999. Finding parts in very large corpora. ACL'99.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Convex optimization", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Boyd", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Vandenberghe", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Boyd and L. Vandenberghe. 2004. Convex optimization. In Cambridge University Press, 2004.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Class-based ngram models for natural language", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "D" |
| ], |
| "last": "Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Desouza", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lai", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Computational Linguistics", |
| "volume": "18", |
| "issue": "4", |
| "pages": "468--479", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Brown, V. D. Pietra, P. deSouza, J. Lai, and R. Mercer. 1992. Class-based ngram models for natural language. Computational Linguistics, 18(4):468-479.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Learning to Extract Relations from the Web using Minimal Supervision", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bunescu", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mooney", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Bunescu and R. Mooney. 2007. Learning to Extract Relations from the Web using Minimal Supervision. ACL'07.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Automatic construction of a hypernymlabeled noun hierarchy from text", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Caraballo", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Caraballo. 1999. Automatic construction of a hypernym- labeled noun hierarchy from text. ACL'99.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "VerbOcean: mining the web for fine-grained semantic verb relations", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Chklovski", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Chklovski and P. Pantel. 2004. VerbOcean: mining the web for fine-grained semantic verb relations. EMNLP '04.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Towards large-scale, opendomain and ontology-based named entity classification", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Cimiano", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Volker", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Cimiano and J. Volker. 2005. Towards large-scale, open- domain and ontology-based named entity classification. RANLP'07.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Automatic Acquisition of Ranked Qualia Structures from the Web", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Cimiano", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wenderoth", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Cimiano and J. Wenderoth. 2007. Automatic Acquisition of Ranked Qualia Structures from the Web. ACL'07.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Efficient Unsupervised Discovery of Word Categories Using Symmetric Patterns and High Frequency Words", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Davidov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Davidov and A. Rappoport. 2006. Efficient Unsuper- vised Discovery of Word Categories Using Symmetric Patterns and High Frequency Words. ACL'06.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Classification of Semantic Relationships between Nominals Using Pattern Clusters", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Davidov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Davidov and A. Rappoport. 2008. Classification of Se- mantic Relationships between Nominals Using Pattern Clusters. ACL'08.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "A Probabilistic model of redundancy in information extraction", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Downey, O. Etzioni, and S. Soderland. 2005. A Probabilistic model of redundancy in information extraction. IJCAI'05.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Unsupervised named-entity extraction from the web: an experimental study", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Cafarella", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Popescu", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Shaked", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Weld", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Yates", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Artificial Intelligence", |
| "volume": "165", |
| "issue": "1", |
| "pages": "91--134", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the web: an experimental study. Artificial Intelligence, 165(1):91-134.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "WordNet: An Electronic Lexical Database", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Fellbaum", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "The Distributional Inclusion Hypotheses and Lexical Entailment. ACL'05", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Geffet", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Geffet and I. Dagan. 2005. The Distributional Inclusion Hypotheses and Lexical Entailment. ACL'05.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Learning Semantic Constraints for the Automatic Discovery of Part-Whole Relations", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Girju", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Badulescu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Moldovan", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Girju, A. Badulescu, and D. Moldovan. 2003. Learning Semantic Constraints for the Automatic Discovery of Part-Whole Relations. HLT'03.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic Discovery of Part-Whole Relations", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Girju", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Badulescu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Moldovan", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Computational Linguistics", |
| "volume": "32", |
| "issue": "1", |
| "pages": "83--135", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Girju, A. Badulescu, and D. Moldovan. 2006. Automatic Discovery of Part-Whole Relations. Computational Lin- guistics, 32(1): 83-135.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Distributional structure", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Harris", |
| "suffix": "" |
| } |
| ], |
| "year": 1954, |
| "venue": "Word", |
| "volume": "10", |
| "issue": "", |
| "pages": "146--162", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Harris. 1954. Distributional structure. Word, 10(23): 146-162.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "The Elements of Statistical Learning: Data Mining, Inference, and Prediction", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Hastie", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Tibshirani", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Hastie, R. Tibshirani and J. Friedman. 2001. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer-Verlag.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Automatic acquisition of hyponyms from large text corpora", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. COLING'92.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Branch and bound algorithms to determine minimal evolutionary trees", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [ |
| "D" |
| ], |
| "last": "Hendy", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Penny", |
| "suffix": "" |
| } |
| ], |
| "year": 1982, |
| "venue": "Mathematical Biosciences", |
| "volume": "59", |
| "issue": "", |
| "pages": "277--290", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. D. Hendy and D. Penny. 1982. Branch and bound algorithms to determine minimal evolutionary trees. Mathematical Biosciences 59: 277-290.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Kozareva, E. Riloff, and E. Hovy. 2008. Semantic Class Learning from the Web with Hyponym Pattern Linkage Graphs. ACL'08.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Automatic retrieval and clustering of similar words", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Lin. 1998. Automatic retrieval and clustering of similar words. COLING'98.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Identifying Synonyms among Distributionally Similar Words", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Qin", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Lin, S. Zhao, L. Qin, and M. Zhou. 2003. Identifying Synonyms among Distributionally Similar Words. IJCAI'03.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Fine-Grained Proper Noun Ontologies for Question Answering", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [ |
| "S" |
| ], |
| "last": "Mann", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of SemaNet'02: Building and Using Semantic Networks", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. S. Mann. 2002. Fine-Grained Proper Noun Ontologies for Question Answering. In Proceedings of SemaNet'02: Building and Using Semantic Networks, Taipei.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Discovering word senses from text", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel and D. Lin. 2002. Discovering word senses from text. SIGKDD'02.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Automatically labeling semantic classes", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel and D. Ravichandran. 2004. Automatically labeling semantic classes. HLT/NAACL'04.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Towards terascale knowledge acquisition", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel, D. Ravichandran, and E. Hovy. 2004. Towards terascale knowledge acquisition. COLING'04.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pennacchiotti", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel and M. Pennacchiotti. 2006. Espresso: Leveraging Generic Patterns for Automatically Harvesting Semantic Relations. ACL'06.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Distributional clustering of English words", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Tishby", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Pereira, N. Tishby, and L. Lee. 1993. Distributional clustering of English words. ACL'93.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Learning surface text patterns for a question answering system", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Ravichandran and E. Hovy. 2002. Learning surface text patterns for a question answering system. ACL'02.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "A corpus-based approach for building semantic lexicons", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Shepherd", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Riloff and J. Shepherd. 1997. A corpus-based approach for building semantic lexicons. EMNLP'97.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Noun-phrase cooccurrence statistics for semi-automatic semantic lexicon construction", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Roark", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "ACL/COLING'98", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Roark and E. Charniak. 1998. Noun-phrase co-occurrence statistics for semi-automatic semantic lexicon construction. ACL/COLING'98.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Learning syntactic patterns for automatic hypernym discovery", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Snow, D. Jurafsky, and A. Y. Ng. 2005. Learning syntactic patterns for automatic hypernym discovery. NIPS'05.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Semantic Taxonomy Induction from Heterogeneous Evidence", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Snow, D. Jurafsky, and A. Y. Ng. 2006. Semantic Taxonomy Induction from Heterogeneous Evidence. ACL'06.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Clustering for unsupervised relation identification", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Rosenfeld", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Feldman", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Rosenfeld and R. Feldman. 2007. Clustering for unsupervised relation identification. CIKM'07.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Combining independent modules to solve multiplechoice synonym and analogy problems", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Turney", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Littman", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Bigham", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Shnayder", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Turney, M. Littman, J. Bigham, and V. Shnayder. 2003. Combining independent modules to solve multiple-choice synonym and analogy problems. RANLP'03.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Open-Domain Textual Question Answering Techniques", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [ |
| "M" |
| ], |
| "last": "Harabagiu", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "J" |
| ], |
| "last": "Maiorano", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Pasca", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Natural Language Engineering", |
| "volume": "9", |
| "issue": "3", |
| "pages": "1--38", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. M. Harabagiu, S. J. Maiorano and M. A. Pasca. 2003. Open-Domain Textual Question Answering Techniques. Natural Language Engineering, 9(3): 1-38.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Scaling web-based acquisition of entailment relations", |
| "authors": [ |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Szpektor", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tanev", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [], |
| "last": "Dagan", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Coppola", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "I. Szpektor, H. Tanev, I. Dagan, and B. Coppola. 2004. Scaling web-based acquisition of entailment relations. EMNLP'04.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "A graph model for unsupervised lexical acquisition", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Widdows", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Dorow", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Widdows and B. Dorow. 2002. A graph model for unsupervised lexical acquisition. COLING'02.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Learning the Distance Metric in a Personal Ontology", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Callan", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Workshop on Ontologies and Information Systems for the Semantic Web of CIKM'08", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Yang and J. Callan. 2008. Learning the Distance Metric in a Personal Ontology. Workshop on Ontologies and Information Systems for the Semantic Web of CIKM'08.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "type_str": "figure", |
| "text": "Figure 1. Illustration of Ontology Metric." |
| }, |
| "TABREF0": { |
| "content": "<table><tr><td>Hypernym Patterns</td><td>Part-of Patterns</td></tr><tr><td>NP x (,)? and/or other NP y</td><td>NP y is made (up)? of NP x</td></tr><tr><td>such NP y as NP x</td><td>NP y comprises NP x</td></tr><tr><td>NP y (,)? such as NP x</td><td>NP y consists of NP x</td></tr><tr><td>NP y (,)? including NP x</td><td>NP x of NP y</td></tr><tr><td>NP y (,)? especially NP x</td><td>NP y 's NP x</td></tr><tr><td>NP y like NP x</td><td>NP y has/had/have NP x</td></tr><tr><td>NP y called NP x</td><td>Sibling Patterns</td></tr><tr><td>NP x is a/an NP y</td><td>NP x and/or NP y</td></tr><tr><td>NP x , a/an NP y</td><td/></tr></table>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF1": { |
| "content": "<table/>", |
| "num": null, |
| "text": "h i,j (.) is the j-th underlying feature function for term pairs at level L i ; w i,j is the weight for h i,j .", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td>Statistics</td><td colspan=\"3\">WN/is-a ODP/is-a WN/part-of</td></tr><tr><td>#taxonomies</td><td>50</td><td>50</td><td>50</td></tr><tr><td>#terms</td><td>1,964</td><td>2,210</td><td>1,812</td></tr><tr><td>Avg #terms</td><td>39</td><td>44</td><td>37</td></tr><tr><td>Avg depth</td><td>6</td><td>6</td><td>5</td></tr><tr><td/><td colspan=\"2\">Table 2. Data Statistics.</td><td/></tr></table>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table/>", |
| "num": null, |
| "text": "F1-measure for Features vs. Relations: WordNet.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF5": { |
| "content": "<table><tr><td>Feature</td><td>L 2</td><td>L 3</td><td>L 4</td><td>L 5</td><td>L 6</td></tr><tr><td>Contextual</td><td colspan=\"5\">0.29 0.31 0.47 0.44 0.42 0.39 0.40</td></tr><tr><td>Syntactic</td><td colspan=\"5\">0.31 0.28 0.36 0.38 0.39</td></tr><tr><td colspan=\"6\">Word Length 0.16 0.16 0.16 0.16 0.16</td></tr><tr><td>Definition</td><td colspan=\"5\">0.12 0.12 0.12 0.12 0.12</td></tr><tr><td colspan=\"6\">Table 5. F1-measure for Features vs. Abstractness:</td></tr><tr><td/><td colspan=\"2\">WordNet/is-a.</td><td/><td/><td/></tr><tr><td>Feature</td><td>L 2</td><td>L 3</td><td>L 4</td><td>L 5</td><td>L 6</td></tr><tr><td>Contextual</td><td colspan=\"5\">0.30 0.30 0.33 0.29 0.29</td></tr><tr><td colspan=\"6\">Co-occurrence 0.34 0.36 0.34 0.31 0.31</td></tr><tr><td>Patterns</td><td colspan=\"5\">0.23 0.25 0.30 0.28 0.28</td></tr><tr><td>Syntactic</td><td colspan=\"5\">0.18 0.18 0.23 0.27 0.27</td></tr></table>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |