| { |
| "paper_id": "D13-1039", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T16:41:39.606836Z" |
| }, |
| "title": "Open-Domain Fine-Grained Class Extraction from Web Search Queries", |
| "authors": [ |
| { |
| "first": "Marius", |
| "middle": [], |
| "last": "Pa\u015fca", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Google Inc", |
| "location": { |
| "addrLine": "1600 Amphitheatre Parkway Mountain View", |
| "postCode": "94043", |
| "region": "California" |
| } |
| }, |
| "email": "mars@google.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "This paper introduces a method for extracting fine-grained class labels (\"countries with double taxation agreements with india\") from Web search queries. The class labels are more numerous and more diverse than those produced by current extraction methods. Also extracted are representative sets of instances (singapore, united kingdom) for the class labels.", |
| "pdf_parse": { |
| "paper_id": "D13-1039", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "This paper introduces a method for extracting fine-grained class labels (\"countries with double taxation agreements with india\") from Web search queries. The class labels are more numerous and more diverse than those produced by current extraction methods. Also extracted are representative sets of instances (singapore, united kingdom) for the class labels.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Motivation: As more semantic constraints are added, concepts like companies become more specific, e.g., companies that are in the software business, and have been started in a garage. The sets of instances associated with the classes become smaller; the class labels used to concisely describe the meaning of more specific concepts tend to become longer. In fact, fine-grained class labels such as \"software companies started in a garage\" are often complex noun phrases, since they must somehow summarize multiple semantic constraints. Although Web users are interested in both coarse (e.g., \"companies\") and fine-grained (e.g., \"software companies started in a garage\") class labels, virtually all class labels acquired from text by previous extraction methods (Etzioni et al., 2005; Van Durme and Pa\u015fca, 2008; Kozareva and Hovy, 2010; Snow et al., 2006) exhibit little syntactic diversity. Indeed, instances and class labels that are relatively complex nouns are known to be difficult to detect and pick out precisely from surrounding text (Downey et al., 2007) . This and other challenges associated with large-scale extraction from Web text cause the extracted class labels to usually follow a rigid modifiers-plus-nouns format. The format covers nouns (\"companies\") possibly preceded by one or many modifiers (\"software companies\", \"computer security software companies\"). Examples of actual extractions include \"european cities\" (Etzioni et al., 2005) , \"strong acids\" (Pantel and Pennacchiotti, 2006) , \"prestigious private schools\" (Van Durme and , \"aquatic birds\" (Kozareva and Hovy, 2010) .", |
| "cite_spans": [ |
| { |
| "start": 762, |
| "end": 784, |
| "text": "(Etzioni et al., 2005;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 785, |
| "end": 811, |
| "text": "Van Durme and Pa\u015fca, 2008;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 812, |
| "end": 836, |
| "text": "Kozareva and Hovy, 2010;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 837, |
| "end": 855, |
| "text": "Snow et al., 2006)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 1042, |
| "end": 1063, |
| "text": "(Downey et al., 2007)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1435, |
| "end": 1457, |
| "text": "(Etzioni et al., 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 1487, |
| "end": 1507, |
| "text": "Pennacchiotti, 2006)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 1573, |
| "end": 1598, |
| "text": "(Kozareva and Hovy, 2010)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "As an alternative to extracting class labels from text, some methods simply import them from human-curated resources, for example from the set of categories encoded in Wikipedia (Remy, 2002) . As a result, class labels potentially exhibit higher syntactic diversity. The modifiers-plus-nouns format (\"computer security software companies\") is usually still the norm. But other formats are possible: \"software companies based in london\", \"software companies of the united kingdom\". Vocabulary coverage gaps remain a problem, with many relevant class labels (\"software companies of texas\" \"software companies started in a garage\", \"software companies that give sap training\") still missing. There is a need for methods that more aggressively identify fine-grained class labels, beyond those extracted by previous methods or encoded in existing, manually-created resources. Such class labels increase coverage, for example in scenarios that enrich Web search results with instances available for the class labels specified in the queries.", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 190, |
| "text": "(Remy, 2002)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The contributions of this paper are twofold. First, it proposes a weakly-supervised method to assemble a large vocabulary of class labels from queries. The class labels include finegrained class labels (\"countries with double taxation agreements with india\", \"no front license plate states\") that are difficult to extract from text by previous methods for open-domain information extraction. Second, the method acquires representative instances (singapore, united kingdom; arizona, new mexico) that belong to fine-grained class labels (\"countries with double taxation agreements with india\", \"no front license plate states\"). Both class labels and their instances are extracted from Web search queries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contributions:", |
| "sec_num": null |
| }, |
| { |
| "text": "Overview: Given a set of arbitrary Web search queries as input, our method produces a vocabulary of fine-grained class labels. For this purpose, it: a) selects an initial vocabulary of class labels, as a subset of input queries that are likely to correspond to search requests for classes; b) expands the vocabulary, by generating a large, noisy set of other possible class labels, through replacements of ngrams within initial class labels with their similar phrases; c) restricts the generated class labels to those that match the syntactic structure of class labels within the initial vocabulary; and d) further restricts the generated class labels to those that appear within the larger set of arbitrary Web search queries. Initial Vocabulary of Class Labels: Out of a set of arbitrary search queries available as input, the queries in the format \"list of ..\" are selected as the initial vocabulary of class labels. The prefix \"list of\" is discarded from each query. Thus, the query \"list of software companies that use linux\" gives the class label \"software companies that use linux\". Generation via Phrase Similarities: As a prerequisite to generating class labels, distributionally similar phrases (Lin and Pantel, 2002; Lin and Wu, 2009; and their scores are collected in advance. A phrase is represented as a vector of its contextual features. A feature is a word, collected from windows of three words centered around the occurrences of the phrase in sentences across Web documents (Lin and Wu, 2009) . In the contextual vector of a phrase, the weight of a feature is the pointwise-mutual information (Lin and Wu, 2009) between the phrase P and the feature F . The distributional similarity score between two phrases is the cosine similarity between the contextual vectors of the two phrases. 
The lists of most distributionally similar phrases of a phrase P are thus compiled offline, by ranking the similar phrases of P in decreasing order of their similarity score relative to P .", |
| "cite_spans": [ |
| { |
| "start": 1205, |
| "end": 1227, |
| "text": "(Lin and Pantel, 2002;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 1228, |
| "end": 1245, |
| "text": "Lin and Wu, 2009;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1492, |
| "end": 1510, |
| "text": "(Lin and Wu, 2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1611, |
| "end": 1629, |
| "text": "(Lin and Wu, 2009)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction from Queries 2.1 Extraction of Class Labels", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Each class label from the initial vocabulary is expanded into a set of generated, candidate class labels. To this effect, every ngram P within a given class label is replaced with each of the distributionally similar phrases, if any, available for the ngram. As shown later in the experimental section, the expansion can increase the vocabulary by a factor of 100. Approximate Syntactic Filtering: The set of generated class labels is noisy. The set is filtered, by retaining only class labels whose syntactic structure matches the syntactic structure of some class label(s) from the initial vocabulary. The syntactic structure is loosely approximated at surface rather than syntactic level. A generated class label is retained, if its sequence of part of speech tags matches the sequence of part of speech tags of one of the class labels from the initial vocabulary. As an additional constraint, the sequence must contain one tag corresponding to a common noun in plural form, i.e., NNS. Otherwise, the class label is discarded. Query Filtering: Generated class labels that pass previous filters are further restricted. They are intersected with the set of arbitrary Web search queries available as input. Generated class labels that are not full queries are discarded.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction from Queries 2.1 Extraction of Class Labels", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Overview: Our method mines instances of finegrained class labels from queries. In a nutshell, it identifies queries containing two types of information simultaneously. First, the queries contain an instance (marvin gaye) of the more general class labels (\"musicians\") from which the fine-grained class labels (\"musicians who have been shot\") can be obtained. Second, the queries contain the constraints added by the fine-grained class labels (\"... shot\") on top of the more general class labels. Instances of General Class Labels: Following (Ponzetto and Strube, 2007) , the Wikipedia category network is refined into a hierarchy that discards non-IsA (thematic) edges, and retains only IsA (subsumption) edges from the network (Ponzetto and Strube, 2007) . Instances, i.e., titles of Wikipedia articles, are propagated upwards to all their ancestor categories. The class label \"musicians\" would be mapped into madonna, marvin gaye, jon bon jovi etc. The mappings from each ancestor category, to all its descendant instances in the Wikipedia hierarchy, represent our mappings from more general class labels to instances. Decomposition of Fine-Grained Class Labels: A fine-grained class label (e.g., \"musicians who have been shot\") is effectively decomposed into pairs of two pieces of information. The first piece is a more general class label (\"musicians\"), if any occurs in it. The second piece is a bag of words, collected from the remainder of the fine-grained class label after discarding stop words. Note that the standard set of stop words is augmented with auxiliary verbs (e.g., does, has, is, would), determiners, conjunctions, prepositions, and question wh-words (Radev et al., 2005 ) (e.g., where, how). In the first piece of each pair, the general class label is then replaced with each of its instances. This produces multiple pairs of a candidate instance and a bag of words, for each fine-grained class label. 
As an illustration, the class labels \"musicians who have been shot\" and \"automobiles with remote start\" are decomposed into pairs like <madonna, {shot}>, <marvin gaye, {shot}>; and <buick lacrosse, {remote, start}>, <nissan versa, {remote, start}>, respectively. Matching of Candidate Instances: A decomposed class label is retained, if there are matching queries that contain the candidate instance, the bag of words, and optionally stop words. Otherwise, the decomposed class label is discarded. The word matching is performed after word stemming (Porter, 1980) . The aggregated frequency of the matching queries is assigned as the score of the candidate instance for the fine-grained class label:", |
| "cite_spans": [ |
| { |
| "start": 541, |
| "end": 568, |
| "text": "(Ponzetto and Strube, 2007)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 728, |
| "end": 755, |
| "text": "(Ponzetto and Strube, 2007)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 1674, |
| "end": 1693, |
| "text": "(Radev et al., 2005", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 2475, |
| "end": 2489, |
| "text": "(Porter, 1980)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction of Instances", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Score(I, C) = Q (F req(Q)|M atch(Q, < I, C >)) (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction of Instances", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "For example, the score of the candidate instance marvin gaye for the class label \"musicians who have been shot\", is the sum of the frequencies of the matching queries \"marvin gaye is shot\", \"when was marvin gaye shot\", \"why marvin gaye was shot\" etc. Similarly, the score of buick lacrosse for \"au-tomobiles with remote start\" is given by the aggregated frequencies of the queries \"buick lacrosse remote start\", \"how to remote start buick lacrosse\", \"remote start for buick lacrosse\". Candidate instances of a class label are ranked in decreasing order of their scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Extraction of Instances", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Web Textual Data: The experiments rely on a sample of 1 billion queries in English submitted by users of a Web search engine. Each query is accompanied by its frequency of occurrence. Also available is a sample of around 200 million Web documents in English. Phrase Similarities: Web documents are used in the experiments only to construct a phrase similarity repository following (Lin and Wu, 2009; . The repository contains ranked lists of the top 1000 phrases, computed to be the most distributionally similar to each of around 16 million phrases. Text Pre-Processing: The TnT tagger (Brants, 2000) assigns part of speech tags to words in class labels. Instances: To collect mappings from Wikipedia categories (as more general class labels) to titles of descendant Wikipedia articles (as instances), a snapshot of Wikipedia articles was intersected with the Wikipedia category hierarchy from (Ponzetto and Strube, 2007) . The mappings connect a total of 1,535,083 instances to a total of 108,756 class labels.", |
| "cite_spans": [ |
| { |
| "start": 381, |
| "end": 399, |
| "text": "(Lin and Wu, 2009;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 895, |
| "end": 922, |
| "text": "(Ponzetto and Strube, 2007)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experimental Setting", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Experimental Runs: Human-compiled information available within Wikipedia serves as the source of data for two baseline runs. The set of all categories, listed in Wikipedia for any of its articles, corresponds to the set of class labels \"acquired\" in run R wc . Categories used for internal Wikipedia bookkeeping (Ponzetto and Strube, 2007) are discarded. Their names contain one of the words article(s), category(ies), indices, pages, redirects, stubs, or templates. Similarly, the titles of Wikipedia articles with the prefix \"List of ..\" (e.g., \"List of automobile manufacturers of Germany\") form the set of class labels \"acquired\" in run R wl . The prefix \"List of\" is discarded.", |
| "cite_spans": [ |
| { |
| "start": 312, |
| "end": 339, |
| "text": "(Ponzetto and Strube, 2007)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For completeness, a third baseline run, R dc , corresponds to class labels extracted from Web documents. The class labels are noun phrases C that fill extraction patterns equivalent to \"C such as I\". The patterns are matched to document sentences. The boundaries of the class labels C are approximated from part of speech tags of sentence words (Van Durme and . The patterns were proposed in (Hearst, 1992) . They were employed widely in subsequent methods (Etzioni et al., 2005; Kozareva et al., 2008; Wu et al., 2012) , which extract class labels precisely from the set of class labels C produced by the extraction patterns. Even methods using queries as a textual data source still extract class labels from documents using the same extraction patterns (Pa\u015fca, 2010) . Therefore, from the point of view of evaluating class labels, run R dc is a valid representative of previous extraction methods, including (Etzioni et al., 2005; Kozareva et al., 2008; Van Durme and Pa\u015fca, 2008; Pa\u015fca, 2010; Wu et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 392, |
| "end": 406, |
| "text": "(Hearst, 1992)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 457, |
| "end": 479, |
| "text": "(Etzioni et al., 2005;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 480, |
| "end": 502, |
| "text": "Kozareva et al., 2008;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 503, |
| "end": 519, |
| "text": "Wu et al., 2012)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 756, |
| "end": 769, |
| "text": "(Pa\u015fca, 2010)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 911, |
| "end": 933, |
| "text": "(Etzioni et al., 2005;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 934, |
| "end": 956, |
| "text": "Kozareva et al., 2008;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 957, |
| "end": 983, |
| "text": "Van Durme and Pa\u015fca, 2008;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 984, |
| "end": 996, |
| "text": "Pa\u015fca, 2010;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 997, |
| "end": 1013, |
| "text": "Wu et al., 2012)", |
| "ref_id": "BIBREF40" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Besides the baseline runs, three experimental runs are considered. In run R ql , the queries starting with the prefix \"list of\" form the set of class labels. The prefix \"list of\" is discarded from each query. In run R qg , the class labels are generated via phrase similarities, starting from R ql as an initial set of class labels. Run R qa represents an ablation experiment. It is created from R qg , by limiting the expansion of a given class label via distributional similarities to only one, rather than multiple, phrases within the class label. Note that, by design, none of the class labels that appear in R ql also appear in runs R qa or R qg . Therefore, the intersection between R ql , on one hand, and R qa and R qg , on the other hand, is the empty set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "All data, including the class labels extracted in all experimental runs, is converted to lower case.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Coverage Over Entire Sets: Table 1 : Coverage of class labels extracted by various experimental runs, relative to class labels available in Wikipedia before and after intersecting them with a large set of arbitrary queries (A = reference set, relative to which coverage is computed; B = measured set, for which coverage is computed relative to the reference set; |A| = size of set A; Q = set of input queries) part of the table. Note that the number of class labels extracted by the individual run shown in the second column (B) is shown in the fourth column (|B|). In particular, there are around 1.6 million unique \"list of ..\" queries, from which class labels are collected in run R ql .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 27, |
| "end": 34, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relative Coverage of Class Labels", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "During the computation of coverage, the reference set, and the set for which coverage is being computed, are intersected. Intersection relies on strict string matching. All words, including punctuation, must match exactly in order for a class label to be part of the intersection. The reference sets are intersected with the set of all Web search queries Q used in the experiments. Coverage is computed both before and after intersection. Less than half (126,318 of 295,587) of the class labels, for the reference set R wc ; and about a third (47,442 of 134,840) for R wl ; appear in the set Q of all queries.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Coverage of Class Labels", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Three conclusions can be drawn from the results. First, query-based runs vastly outperform Wikipedia-based runs in terms of absolute coverage. Run R ql contains around 5 and 12 times more class labels, than R wc and R wl respectively. On top of that, generating class labels via phrase similarities further increases the class label count by about 20 times for R qa , and 80 times for R qg . Second, querybased runs R qa and R qg surpass the document-based run R dc . Third, higher class label counts translate into higher relative coverage. In the upper part of the table, run R wl contains 3.9% (relative to R wc ) and 7.1% (relative to R wc \u2229Q) of the reference set. But the relative coverage doubles for R ql at 7.4% (relative to R wc ) and 17.3% (relative to R wc \u2229Q). Coverage again doubles for R qg at 14.8% (relative to R wc ) and 34.7% (relative to R wc \u2229Q). The union of query-based initial and generated class labels is R ql \u222aR qg . The union contains about a quarter (i.e., 22.2%) or half (52.1%) of the reference set R wc , depending on whether the reference set is intersected with the set of all queries or not. In the lower part of the table, more than 90% of the queries in the reference set R wl that are also queries are found among the class labels collectively extracted in the querybased runs. Note that, since R ql is disjoint from R qa and R qg , none of the class labels already in R ql can be \"re-discovered\" (generated) again in R qa or R qg . Therefore, by experimental design, relative coverage scores of R ql may be relatively difficult to surpass by R qa or R qg taken individually.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Coverage of Class Labels", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Diversity: Class labels restricted to those that have the format \".. that/which/who ..\" are relatively more specific, e.g., \"grocery stores that double coupons in omaha\", \"airlines which fly from santa barbara\", \"writers who were doctors\". The most frequent head phrases of such restricted class labels offer an idea about how diverse the class labels are. The counts of class labels for the most frequent head phrases are in the order of 10's in the case of R wl vs. 10,000's for R qg . In comparison, none of the class labels of run R dc have this format. The lack of such class labels in run R dc , and their smaller proportion in run R wl vs. R qg , suggest that class labels extracted by the proposed method exhibit higher lexical and syntactic diversity than previous methods do. Tag (Value) : Examples of Class Labels correct (1.0): angioplasty specialists in kolkata, good things pancho villa did, eating disorders inpatient units in the uk nhs specialist services questionable (0.5): picture framers adelaide cbd, side effects bicalutamide, different eating disorders, private hospitals treat kidney stones uk incorrect (0.0): al hirschfield theatre hours, value of berkshire hathaway shares, remove spaces in cobol, dogs with loss of appetite, 1999 majorca open ", |
| "cite_spans": [ |
| { |
| "start": 786, |
| "end": 797, |
| "text": "Tag (Value)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relative Coverage of Class Labels", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Evaluation Metric: Class labels being evaluated are manually assigned a correctness tag. A class label is deemed correct, if it is grammatically well-formed and describes a relevant concept that embodies some (unspecified) set of instances that share similar properties; questionable, if it is relevant but not wellformed; or incorrect. A questionable class label is not well-formed because it lacks necessary linking particles (e.g., the prepositions of or for in \"side effects bicalutamide\"), or contains undesirable modifiers (\"different eating disorders\"). Examples of correct and incorrect class labels are \"angioplasty specialists in kolkata\" and \"al hirschfield theatre hours\" respectively.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Precision of Class Labels", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "To compute the precision score, the correctness tags are converted to numeric values, as shown in Table 2 : correct to 1; questionable to 0.5; and incorrect to 0. Precision over a list of class labels is measured as the sum of the correctness values of the class labels in the list, divided by the size of the list. Precision Relative to Target Phrases: The precision of the class labels in each run is determined similarly to how relative coverage was computed earlier. More precisely, the precision is computed over the class labels whose names contain each phrase from the set of 75 target phrases from (Alfonseca et al., 2010) . For each phrase, and for each run, a random sample of at most 50 of the class labels that match the phrase is selected for evaluation. The samples taken for each run, corresponding to the same phrase, are combined into a merged list. This produces one merged list for each phrase, for a total of 75 merged lists. The precision score over a target phrase is the precision score over its sample of class labels.", |
| "cite_spans": [ |
| { |
| "start": 606, |
| "end": 630, |
| "text": "(Alfonseca et al., 2010)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 98, |
| "end": 105, |
| "text": "Table 2", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Precision of Class Labels", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "The last two columns of Table 3 capture the precision scores for the class labels. The scores are computed in two ways: averaged over the (variable) subsets of target phrases for which some matching class label(s) exist, in the last but one column, e.g., over 19 of the 75 target phrases for R wc ; and averaged over the entire set of 75 target phrases, in the last column. The former does not penalize a run for not being able to extract any class labels containing a particular target phrase, whereas the latter does penalize. Naturally, precision scores over the entire set of target phrases decrease when coverage is lower, for runs R wc , R wl and, to a lesser extent, R dc and R ql . But even after ignoring target phrases with no matching class labels, precision scores in the last but one column in Table 3 reveal important properties of the experimental runs. First, between the two Wikipedia-based runs, R wl has perfect class labels, whereas as many as 1 in 4 class labels of run R wc are marked as incorrect during the evaluation. Second, the class labels collected from \"list of ..\" queries in run R ql correspond to relevant, wellformed concepts in 80% of the cases. Third, the generation of class labels via phrase similarities (R qg ) greatly increases coverage as shown earlier. The increase comes at the expense of lowering precision from 80% to 72%. However, the phrases from initial queries that are expanded via distributional similarities can be limited from multiple to only one, by switching from R qg to R qa . This gives higher precision for R qa than for R qg .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 24, |
| "end": 31, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 807, |
| "end": 814, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Precision of Class Labels", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "As a complement to Table 3 , the graphs in Figure 1 offer a more detailed view into the precision of class labels. The figure covers a Wikipedia-based run (R wc ) and two query-based runs (R ql , R qg ). The graphs show the precision scores, over each of the 75 target phrases. Among target phrases for which some matching class labels exist in the respective run, the target phrases with the lowest precision scores are robotics (score of 0.15) and karlsruhe (0.33), for R wc ; carotid arteries and kidney stones, both with a score of 0.00 because their matching class labels are all incorrect, for R dc ; african population and chester arthur, both with a score of 0.00 because their matching class labels are all incorrect, for R ql ; and arlene martel (0.00) and right to vote Table 3 . It also affects the coverage values relative to R wc in Table 1 . Ideally, high-precision experimental runs would not extract any incorrect class labels that happen to appear in R wc , for example \"austrian contemporary art\". But the coverage relative to R wc would artificially penalize such runs, for not extracting the incorrect class labels from R wc . As a proxy for estimating class label complexity, Table 4 shows the longest class labels derived from Wikipedia (R wl ) vs. generated from queries (R qg ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 19, |
| "end": 26, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 43, |
| "end": 49, |
| "text": "Figure", |
| "ref_id": null |
| }, |
| { |
| "start": 781, |
| "end": 788, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 847, |
| "end": 854, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1198, |
| "end": 1205, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Precision of Class Labels", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Class labels derived from Web search queries may be semantically overlapping. Examples are \"writers who killed themselves\" vs. \"writers who committed suicide\". The overlap is desirable, since different Web users may request the same information via different queries. The same phenomenon has been observed in other information extraction tasks. It also affects manually-created resources like Wikipedia. The continuous manual refinements to Wikipedia content still cannot prevent the occurrence of duplicate class labels among Wikipedia List-Of categories. The duplicates are present in run R wl . Examples are \"formula one drivers that never qualified for a race\" vs. \"formula one drivers who never qualified for a race\"; or \"goaltenders who have scored a goal in a nhl game\" vs. \"goaltenders who have scored a goal in an nhl game\". Some of the lexical differences among class labels are due to undesirable misspellings. Again, similar problems occasionally affect existing Wikipedia categories: \"nobel laureates who endorse barack obama\" vs. \"nobel laureates who endorse barrack obama\". Table 5 (Set of 50 class labels, used in the evaluation of extracted instances): 007 movie actors, .308 weapons, actors with obsessive compulsive disorder, antibiotics for multiple sclerosis, astronauts in space station, automobiles with remote start, beatles songs of love, beetles that bite, companies with sustainable competitive advantage, countries with double taxation agreements with india, criminals who have been executed, daft punk live albums, dallas medical companies, direct democracy states, electronic companies in electronic city bangalore, expensive brands of shoes, eye diseases in cats, f1 car companies, fwd sports cars, garden landscaping magazines, heliskiing resorts, hell in a cell wrestlers, holidays celebrated in sydney, ibf weight classes, ibiza 2011 djs, immunology scientists, jewelry manufacturing companies, kanye west songs on youtube, kingston upon thames supermarkets, latin military ranks, ludhiana newspapers, maastricht treaty countries, musicians who have been shot, no front license plate states, non-profit organizations in nashville tennessee, organic chocolate companies, plants which are used in homeopathy, programming languages for server side programming, qatar chemical companies, qld private schools, real estate companies in virginia beach virginia, respiratory infection antibiotics, serial killers with antisocial personality disorder, singers with curly hair, telecommunications companies in the philippines, trains from la to san diego, visual basic database management systems, warmblood colors, washington university basketball players, world heritage sites in northern ireland.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 2176, |
| "end": 2183, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Precision of Class Labels", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Target Set of Class Labels: The target set for evaluation is shown in Table 5 . Initially, a random sample of 100 class labels is selected from all class labels in run R qg . Class labels deemed incorrect, as well as class labels for which no instances are extracted, are manually removed from the sample. Out of the remaining class labels, a smaller random sample of 50 is retained, for the purpose of evaluating the quality of instances extracted for various class labels. Table 6 (Correctness tags manually assigned to instances extracted from queries for various class labels): correct (1.0): countries with double taxation agreements with india: thailand; hell in a cell wrestlers: brock lesnar; ibiza 2011 djs: dimitri from paris; heliskiing resorts: valle nevado. questionable (0.5): 007 movie actors: david niven; kanye west songs on youtube: the good life; holidays celebrated in sydney: waitangi day. incorrect (0.0): electronic companies in electronic city bangalore: bank of baroda; garden landscaping magazines: marquis; immunology scientists: rosalind franklin. Evaluation Metric: The evaluation computes the precision of the ranked list of instances extracted for each target class label. To remove any undesirable bias towards higher-ranked instances, the ranked list is sorted alphabetically; each instance is then assigned one of the correctness tags from Table 6 . Instances are deemed questionable if they would be correct only under a rather obscure interpretation of the class label. For example, david niven is an actor in one of the spoofs, rather than the main releases, of the 007 movies. Instances that would be correct if a few words were dropped or added are also deemed questionable: the good life is not one of the \"kanye west songs on youtube\", but good life is.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 70, |
| "end": 77, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 690, |
| "end": 697, |
| "text": "Table 6", |
| "ref_id": null |
| }, |
| { |
| "start": 1435, |
| "end": 1442, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To compute the precision score over a ranked list of instances, the correctness tags are converted to numeric values. Precision at some rank N in the list is measured as the sum of the correctness values of the instances extracted up to rank N, divided by the number of instances extracted up to rank N.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Procedure", |
| "sec_num": "5.1" |
| }, |
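The precision computation described above can be sketched as follows. The tag-to-value mapping (correct = 1.0, questionable = 0.5, incorrect = 0.0) matches the correctness tags of Table 6; the function name and code structure are illustrative assumptions, not the authors' implementation.

```python
# Sketch of precision at rank N: correctness tags are converted to
# numeric values and averaged over the instances up to rank N.
TAG_VALUES = {"correct": 1.0, "questionable": 0.5, "incorrect": 0.0}

def precision_at(tags, n):
    """Sum of the correctness values of the instances extracted up to
    rank n, divided by the number of instances extracted up to rank n."""
    ranked = tags[:n]
    if not ranked:
        return 0.0
    return sum(TAG_VALUES[t] for t in ranked) / len(ranked)

# Example: five instances tagged by a human judge.
tags = ["correct", "correct", "questionable", "incorrect", "correct"]
print(precision_at(tags, 5))  # 0.7
```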
| { |
| "text": "Precision: Precision scores in Table 7 vary across target class labels. For some class labels, the extracted instances are noisy enough that scores are below 0.50 at ranks 10 and higher. This is the case, for example, for \"electronic companies in electronic city bangalore\". Examples of extracted instances (target class label: extracted instances): countries with double taxation agreements with india: [singapore, malaysia, mauritius, kenya, australia, united kingdom, cyprus, turkey, thailand, germany, ..]; direct democracy states: [california, oregon, nevada, wisconsin, louisiana, arizona, vermont, alaska, illinois, michigan, ..]. In additional experiments, the same evaluation procedure is applied to output from two previous extraction methods. The first method starts by internally generating a small set of seed instances for a class label given as input (Wang and Cohen, 2009) . A set expansion module then expands the seed set into a longer, ranked list of instances. The instances are extracted from unstructured and semi-structured text within Web documents. The documents are accessed via the search interface of a general-purpose Web search engine (cf. (Wang and Cohen, 2009) for more details). The second method extracts instances of class labels using the extraction patterns proposed in (Hearst, 1992) . As such, it is similar to (Kozareva et al., 2008; Van Durme and Pa\u015fca, 2008; Wu et al., 2012) . The method corresponds to the run R dc described earlier, where the relative ranking of instances and class labels uses the co-occurrence of instances and class labels within queries (Pa\u015fca, 2010) . For the purpose of the evaluation, when no instances are available for a target class label, the class label is generalized into iteratively shorter phrases containing fewer modifiers, until some instances are available for the shorter phrase. For example, target class labels like actors with obsessive compulsive disorder, beatles songs of love, garden landscaping magazines do not have any instances extracted by the second method. Therefore, the instances evaluated for the second method for these target class labels are collected from the instances of the more general actors, beatles songs, landscaping magazines. Without the generalization, the target class labels would receive no credit during the evaluation, and the two previous methods would have lower precision scores. Over the 50 target class labels, the precision of the two methods is 0.11 and 0.27 at rank 5; 0.06 and 0.25 at rank 10; 0.05 and 0.22 at rank 20; and 0.05 and 0.20 at rank 50. The results confirm that, as explained earlier, previous methods for open-domain information extraction have limited ability to extract instances of fine-grained class labels. Discussion: Earlier errors in the acquisition of class labels affect the usefulness of any instances that may be subsequently extracted for them. The experiments require candidate instances to appear in Wikipedia. This may improve precision, at the expense of not extracting instances that are not yet in Wikipedia.",
| "cite_spans": [ |
| { |
| "start": 367, |
| "end": 461, |
| "text": ", malaysia, mauritius, kenya, australia, united kingdom, cyprus, turkey, thailand, germany,..]", |
| "ref_id": null |
| }, |
| { |
| "start": 486, |
| "end": 585, |
| "text": "[california, oregon, nevada, wisconsin, louisiana, arizona, vermont, alaska, illinois, michigan,..]", |
| "ref_id": null |
| }, |
| { |
| "start": 814, |
| "end": 836, |
| "text": "(Wang and Cohen, 2009)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1118, |
| "end": 1140, |
| "text": "(Wang and Cohen, 2009)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 1255, |
| "end": 1269, |
| "text": "(Hearst, 1992)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1298, |
| "end": 1321, |
| "text": "(Kozareva et al., 2008;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1322, |
| "end": 1348, |
| "text": "Van Durme and Pa\u015fca, 2008;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 1349, |
| "end": 1365, |
| "text": "Wu et al., 2012)", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 1551, |
| "end": 1564, |
| "text": "(Pa\u015fca, 2010)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 31, |
| "end": 38, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 256, |
| "end": 366, |
| "text": "Target Class Label Extracted Instances countries with double taxation agreements with india [singapore", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Precision of Instances", |
| "sec_num": "5.2" |
| }, |
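The fallback generalization used for the second baseline, shortening a class label until some instances are available for a shorter phrase, can be sketched as below. The paper does not specify the exact order in which words are dropped, so the breadth-first search over sub-phrases (dropping one leading or trailing word at a time) and the `has_instances` callback are assumptions.

```python
from collections import deque

def generalize(label, has_instances):
    """Return the longest sub-phrase of `label`, obtained by repeatedly
    dropping a word from either end, for which instances are available;
    return None if no sub-phrase has instances."""
    seen = {label}
    queue = deque([label])
    while queue:
        phrase = queue.popleft()
        if has_instances(phrase):
            return phrase
        words = phrase.split()
        if len(words) > 1:
            # Enqueue phrases with one fewer leading or trailing word.
            for shorter in (" ".join(words[1:]), " ".join(words[:-1])):
                if shorter not in seen:
                    seen.add(shorter)
                    queue.append(shorter)
    return None

# Assuming instances exist only for the more general labels,
# "garden landscaping magazines" falls back to "landscaping magazines".
known = {"actors", "beatles songs", "landscaping magazines"}
print(generalize("garden landscaping magazines", lambda p: p in known))
# landscaping magazines
```

The breadth-first order guarantees that longer (more specific) phrases are tried before shorter ones, matching the "iteratively shorter phrases" description.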
| { |
| "text": "Previous methods for extracting classes of instances from text acquire sets of instances that are each either unlabeled (Pennacchiotti and Pantel, 2009; Jain and Pennacchiotti, 2010; Shi et al., 2010) , or associated with a class label (Banko et al., 2007; Wang and Cohen, 2009) . The sets of instances and/or class labels may be organized as flat sets or hierarchically, relative to inferred hierarchies (Kozareva and Hovy, 2010) or existing hierarchies such as WordNet (Snow et al., 2006; Davidov and Rappoport, 2009) or the category network within Wikipedia (Wu and Weld, 2008; Ponzetto and Navigli, 2009) . Semi-structured text from Web documents is a complementary resource to unstructured text, for the purpose of extracting relations in general (Cafarella et al., 2008) , and classes and instances in particular (Talukdar et al., 2008; Dalvi et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 120, |
| "end": 152, |
| "text": "(Pennacchiotti and Pantel, 2009;", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 153, |
| "end": 182, |
| "text": "Jain and Pennacchiotti, 2010;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 183, |
| "end": 200, |
| "text": "Shi et al., 2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 236, |
| "end": 256, |
| "text": "(Banko et al., 2007;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 257, |
| "end": 278, |
| "text": "Wang and Cohen, 2009)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 405, |
| "end": 430, |
| "text": "(Kozareva and Hovy, 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 471, |
| "end": 490, |
| "text": "(Snow et al., 2006;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 491, |
| "end": 519, |
| "text": "Davidov and Rappoport, 2009)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 561, |
| "end": 580, |
| "text": "(Wu and Weld, 2008;", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 581, |
| "end": 608, |
| "text": "Ponzetto and Navigli, 2009)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 752, |
| "end": 776, |
| "text": "(Cafarella et al., 2008)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 819, |
| "end": 842, |
| "text": "(Talukdar et al., 2008;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 843, |
| "end": 862, |
| "text": "Dalvi et al., 2012)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "With previous methods, the vocabulary of class labels potentially produced for any instance is confined to a closed set provided manually as input (Wang and Cohen, 2009; Carlson et al., 2010) . The closed set is often derived from resources like Wikipedia (Talukdar and Pereira, 2010; Hoffart et al., 2013) or Freebase (Pantel et al., 2012) . Alternatively, the vocabulary is not a closed set, but instead is acquired along with the instances (Pantel and Pennacchiotti, 2006; Snow et al., 2006; Banko et al., 2007; Van Durme and Pa\u015fca, 2008; Kozareva and Hovy, 2010) . In the latter case, the extracted class labels take the form of head nouns preceded by modifiers. Examples are \"cities\", \"european cities\" (Etzioni et al., 2005) ; \"artists\", \"strong acids\" (Pantel and Pennacchiotti, 2006) ; \"outdoor activities\", \"prestigious private schools\" (Van Durme and Pa\u015fca, 2008) ; \"methaterians\", \"aquatic birds\" (Kozareva and Hovy, 2010) . In contrast, the class labels extracted in our method exhibit greater syntactic diversity and are finer-grained. In addition, they are not constrained to a particular set of categories available in resources like Wikipedia.",
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 169, |
| "text": "(Wang and Cohen, 2009;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 170, |
| "end": 191, |
| "text": "Carlson et al., 2010)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 256, |
| "end": 284, |
| "text": "(Talukdar and Pereira, 2010;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 285, |
| "end": 306, |
| "text": "Hoffart et al., 2013)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 319, |
| "end": 340, |
| "text": "(Pantel et al., 2012)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 443, |
| "end": 475, |
| "text": "(Pantel and Pennacchiotti, 2006;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 476, |
| "end": 494, |
| "text": "Snow et al., 2006;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 495, |
| "end": 514, |
| "text": "Banko et al., 2007;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 515, |
| "end": 541, |
| "text": "Van Durme and Pa\u015fca, 2008;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 542, |
| "end": 566, |
| "text": "Kozareva and Hovy, 2010)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 708, |
| "end": 730, |
| "text": "(Etzioni et al., 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 759, |
| "end": 791, |
| "text": "(Pantel and Pennacchiotti, 2006)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 895, |
| "end": 920, |
| "text": "(Kozareva and Hovy, 2010)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Fine-grained class labels roughly correspond to queries submitted in typed search (Demartini et al., 2009) or entity search (Balog et al., 2010) , or to list-seeking questions (\"name the circuit judges in the cayman islands that are british\"). But our focus is on generating, rather than answering, such queries or, more generally, attempting to deeply understand their semantics (Li, 2010) . Phrase similarities can be derived with various methods, using documents (Lin and Wu, 2009) or search queries (Jain and Pennacchiotti, 2010) .",
| "cite_spans": [ |
| { |
| "start": 82, |
| "end": 106, |
| "text": "(Demartini et al., 2009)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 124, |
| "end": 144, |
| "text": "(Balog et al., 2010)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 373, |
| "end": 383, |
| "text": "(Li, 2010)", |
| "ref_id": null |
| }, |
| { |
| "start": 455, |
| "end": 473, |
| "text": "(Lin and Wu, 2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 492, |
| "end": 522, |
| "text": "(Jain and Pennacchiotti, 2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Whether Web search queries are a useful textual data source for open-domain information extraction has been investigated in several tasks. Examples are collecting unlabeled sets of similar instances (Jain and Pennacchiotti, 2010) , ranking of class labels already extracted from text (Pa\u015fca, 2010) , extracting attributes of instances (Alfonseca et al., 2010) and identifying the occurrences in queries of instances of several types, where the types are defined in a manually-created resource (Pantel et al., 2012) . Comparatively, we show that queries are useful in identifying possible class labels, not only reranking them; and even in populating the class labels with relevant, albeit small, sets of corresponding instances.", |
| "cite_spans": [ |
| { |
| "start": 199, |
| "end": 229, |
| "text": "(Jain and Pennacchiotti, 2010)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 284, |
| "end": 297, |
| "text": "(Pa\u015fca, 2010)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 335, |
| "end": 359, |
| "text": "(Alfonseca et al., 2010)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 493, |
| "end": 514, |
| "text": "(Pantel et al., 2012)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "As automatically-extracted class labels become finer-grained, they more clearly illustrate a phenomenon that received little attention. Namely, class labels of an instance, on one hand, and relations linking the instance with other instances and classes, on the other hand, are not mutually exclusive pieces of knowledge. Their extraction does not necessarily require different, dedicated techniques. Quite the opposite, class labels serve in text as nothing more than convenient lexical representations, or lexical shorthands, of relations linking instances with other instances. The class labels \"no front license plate states\" and \"states with no front license plate requirement\" are applicable to arizona. If so, it is because arizona is a state, and states require the installation of license plates on vehicles, and the requirement does not apply to the front of vehicles in the case of arizona. The connection between class labels and relations has been judiciously exploited in (Nastase and Strube, 2008) . In that study, relations encoded implicitly within Wikipedia categories are transformed into explicit relations. As an example, the explicit relation that deconstructing harry is directed by woody allen is obtained from the fact that deconstructing harry is listed under \"movies directed by woody allen\" in Wikipedia. Ours is the first approach to examine the potential for extracting relations from search queries, where relations are compactly and loosely folded into the respective class labels. A variety of methods address the more general task of acquisition of open-domain relations from documents, e.g., (Zhu et al., 2009; Carlson et al., 2010; Lao et al., 2011) .",
| "cite_spans": [ |
| { |
| "start": 987, |
| "end": 1013, |
| "text": "(Nastase and Strube, 2008)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 1628, |
| "end": 1646, |
| "text": "(Zhu et al., 2009;", |
| "ref_id": "BIBREF41" |
| }, |
| { |
| "start": 1647, |
| "end": 1668, |
| "text": "Carlson et al., 2010;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 1669, |
| "end": 1686, |
| "text": "Lao et al., 2011)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "The approach introduced in this paper exploits knowledge loosely encoded within Web search queries. It acquires a vocabulary of class labels that are finer grained than in previous literature. The class labels have precision comparable to that of class labels derived from human-created knowledge repositories. Furthermore, representative instances are extracted from queries for the fine-grained class labels, at encouraging levels of accuracy. Current work explores the use of noisy syntactic features to increase the accuracy of extracted class labels; the extraction of instances from evidence in multiple, rather than single, queries; the expansion of extracted instances into larger sets; and the conversion of fine-grained class labels into relations among classes.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Precision of Instances @1 @5 @10 @50 007 movie actors 1.00 1.00 0.85 0.85 actors with obsessive compulsive disorder 0.00 0.60 0.70 0.70 antibiotics for multiple sclerosis 0.50 0.60 0.55 0.58 astronauts in space station 1.00 0.70 0.85 0.83 automobiles with remote start 1.00 1.00 0.75 0.75 beatles songs of love 0.00 0.50 0.65 0.52 beetles that bite 1.00 0.80 0. 1.00 1.00 0.80 0.30 fwd sports cars 1.00 1.00 1.00 1.00 garden landscaping magazines 0.00 0.10 0.15 0.06 heliskiing resorts 1.00 1.00 1.00 1.00 hell in a cell wrestlers 1.00 1.00 1.00 0.92 holidays celebrated in sydney Table 7 : Precision at various ranks in the ranked lists of instances extracted from queries, for various target class labels and as an average over the entire set of 50 target class labels. Scores are below 0.50 at higher ranks for \"electronic companies in electronic city bangalore\" and \"daft punk live albums\", and especially for \"garden landscaping magazines\", which has the worst precision. On the other hand, instances extracted for \"companies with sustainable competitive advantage\" or \"criminals who have been executed\" have high precision across all ranks. As an average over all target class labels, precision is 0.76 at rank 10, and 0.71 at rank 50. Although there is room for improvement, we find these accuracy levels to be encouragingly good, especially at rank 50. As a reminder, instances are extracted from noisy queries, and for class labels as fine-grained as those acquired and used in our experiments. Some of the extracted ranked lists of instances are shown in Table 8 .",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 581, |
| "end": 588, |
| "text": "Table 7", |
| "ref_id": null |
| }, |
| { |
| "start": 1475, |
| "end": 1482, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Target Class Label", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Acquisition of instance attributes via labeled and related instances", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Alfonseca", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pa\u015fca", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Robledo-Arnuncio", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 33rd International Conference on Research and Development in Information Retrieval (SIGIR-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "58--65", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Alfonseca, M. Pa\u015fca, and E. Robledo-Arnuncio. 2010. Acquisition of instance attributes via labeled and re- lated instances. In Proceedings of the 33rd Interna- tional Conference on Research and Development in In- formation Retrieval (SIGIR-10), pages 58-65, Geneva, Switzerland.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Categorybased query modeling for entity search", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Balog", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Bron", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "De Rijke", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 32nd European Conference on Information Retrieval (ECIR-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "319--331", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Balog, M. Bron, and M. de Rijke. 2010. Category- based query modeling for entity search. In Proceed- ings of the 32nd European Conference on Information Retrieval (ECIR-10), pages 319-331, Milton Keynes, United Kingdom.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Open information extraction from the Web", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Banko", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cafarella", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Broadhead", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07)", |
| "volume": "", |
| "issue": "", |
| "pages": "224--231", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Banko, Michael J Cafarella, S. Soderland, M. Broad- head, and O. Etzioni. 2007. Open information ex- traction from the Web. In Proceedings of the 20th In- ternational Joint Conference on Artificial Intelligence (IJCAI-07), pages 2670-2676, Hyderabad, India. T. Brants. 2000. TnT -a statistical part of speech tagger. In Proceedings of the 6th Conference on Applied Natu- ral Language Processing (ANLP-00), pages 224-231, Seattle, Washington.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "WebTables: Exploring the power of tables on the Web", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Cafarella", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Halevy", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 34th Conference on Very Large Data Bases (VLDB-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "538--549", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Cafarella, A. Halevy, D. Wang, E. Wu, and Y. Zhang. 2008. WebTables: Exploring the power of tables on the Web. In Proceedings of the 34th Conference on Very Large Data Bases (VLDB-08), pages 538-549, Auckland, New Zealand.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Coupled semi-supervised learning for information extraction", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Carlson", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Betteridge", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hruschka", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 3rd ACM Conference on Web Search and Data Mining (WSDM-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "101--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Carlson, J. Betteridge, R. Wang, E. Hruschka, and T. Mitchell. 2010. Coupled semi-supervised learn- ing for information extraction. In Proceedings of the 3rd ACM Conference on Web Search and Data Mining (WSDM-10), pages 101-110, New York.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Websets: Extracting sets of entities from the Web using unsupervised information extraction", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Dalvi", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Callan", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 5th ACM Conference on Web Search and Data Mining (WSDM-12)", |
| "volume": "", |
| "issue": "", |
| "pages": "243--252", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Dalvi, W. Cohen, and J. Callan. 2012. Websets: Ex- tracting sets of entities from the Web using unsuper- vised information extraction. In Proceedings of the 5th ACM Conference on Web Search and Data Mining (WSDM-12), pages 243-252, Seattle, Washington.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Enhancement of lexical concepts using cross-lingual Web mining", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Davidov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Rappoport", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "852--861", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Davidov and A. Rappoport. 2009. Enhancement of lexical concepts using cross-lingual Web mining. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP- 09), pages 852-861, Singapore.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Overview of the INEX 2009 Entity Ranking track", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Demartini", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Iofciu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "De Vries", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "INitiative for the Evaluation of XML Retrieval Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "254--264", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Demartini, T. Iofciu, and A. de Vries. 2009. Overview of the INEX 2009 Entity Ranking track. In INitiative for the Evaluation of XML Retrieval Workshop, pages 254-264, Brisbane, Australia.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Locating complex named entities in Web text", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Broadhead", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence (IJCAI-07)", |
| "volume": "", |
| "issue": "", |
| "pages": "2733--2739", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Downey, M. Broadhead, and O. Etzioni. 2007. Locat- ing complex named entities in Web text. In Proceed- ings of the 20th International Joint Conference on Ar- tificial Intelligence (IJCAI-07), pages 2733-2739, Hy- derabad, India.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Unsupervised named-entity extraction from the Web: an experimental study", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Cafarella", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Popescu", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Shaked", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Weld", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Yates", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Artificial Intelligence", |
| "volume": "165", |
| "issue": "1", |
| "pages": "91--134", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Etzioni, M. Cafarella, D. Downey, A. Popescu, T. Shaked, S. Soderland, D. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the Web: an experimental study. Artificial Intelligence, 165(1):91-134.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Open information extraction: The second generation", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Christensen", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Mausam", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 22nd International Joint Conference on Artificial Intelligence (IJCAI-11)", |
| "volume": "", |
| "issue": "", |
| "pages": "3--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Etzioni, A. Fader, J. Christensen, S. Soderland, and Mausam. 2011. Open information extraction: The second generation. In Proceedings of the 22nd In- ternational Joint Conference on Artificial Intelligence (IJCAI-11), pages 3-10, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Identifying relations for open information extraction", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fader", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-11)", |
| "volume": "", |
| "issue": "", |
| "pages": "1535--1545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Fader, S. Soderland, and O. Etzioni. 2011. Identifying relations for open information extraction. In Proceed- ings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-11), pages 1535-1545, Edinburgh, Scotland.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Automatic acquisition of hyponyms from large text corpora", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hearst", |
| "suffix": "" |
| } |
| ], |
| "year": 1992, |
| "venue": "Proceedings of the 14th International Conference on Computational Linguistics (COLING-92)", |
| "volume": "", |
| "issue": "", |
| "pages": "539--545", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 14th In- ternational Conference on Computational Linguistics (COLING-92), pages 539-545, Nantes, France.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "YAGO2: a spatially and temporally enhanced knowledge base from Wikipedia", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Hoffart", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Suchanek", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Berberich", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Artificial Intelligence", |
| "volume": "194", |
| "issue": "", |
| "pages": "28--61", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Hoffart, F. Suchanek, K. Berberich, and G. Weikum. 2013. YAGO2: a spatially and temporally enhanced knowledge base from Wikipedia. Artificial Intelli- gence, 194:28-61.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Open entity extraction from Web search query logs", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pennacchiotti", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "510--518", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Jain and M. Pennacchiotti. 2010. Open entity ex- traction from Web search query logs. In Proceed- ings of the 23rd International Conference on Com- putational Linguistics (COLING-10), pages 510-518, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "A semi-supervised method to learn and construct taxonomies using the web", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "1110--1118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Kozareva and E. Hovy. 2010. A semi-supervised method to learn and construct taxonomies using the web. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing (EMNLP-10), pages 1110-1118, Cambridge, Mas- sachusetts.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Semantic class learning from the Web with hyponym pattern linkage graphs", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Kozareva", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics (ACL-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "1048--1056", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Kozareva, E. Riloff, and E. Hovy. 2008. Semantic class learning from the Web with hyponym pattern linkage graphs. In Proceedings of the 46th Annual Meeting of the Association for Computational Linguis- tics (ACL-08), pages 1048-1056, Columbus, Ohio.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Random walk inference and learning in a large scale knowledge base", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Lao", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP-11)", |
| "volume": "", |
| "issue": "", |
| "pages": "1337--1345", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lao, T. Mitchell, and W. Cohen. 2011. Random walk inference and learning in a large scale knowledge base. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing (EMNLP- 11), pages 529-539, Edinburgh, Scotland. X. Li. 2010. Understanding the semantic struc- ture of noun phrase queries. In Proceedings of the 48th Annual Meeting of the Association for Compu- tational Linguistics (ACL-10), pages 1337-1345, Up- psala, Sweden.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Concept discovery from text", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the 19th International Conference on Computational linguistics (COLING-02)", |
| "volume": "", |
| "issue": "", |
| "pages": "1--7", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Lin and P. Pantel. 2002. Concept discovery from text. In Proceedings of the 19th International Conference on Computational linguistics (COLING-02), pages 1- 7, Taipei, Taiwan.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Phrase clustering for discriminative learning", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "1030--1038", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Lin and X. Wu. 2009. Phrase clustering for discrim- inative learning. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguis- tics (ACL-IJCNLP-09), pages 1030-1038, Singapore.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "No noun phrase left behind: Detecting and typing unlinkable entities", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mausam", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP-CoNLL-12)", |
| "volume": "", |
| "issue": "", |
| "pages": "893--903", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Lin, Mausam, and O. Etzioni. 2012. No noun phrase left behind: Detecting and typing unlinkable enti- ties. In Proceedings of the Joint Conference on Em- pirical Methods in Natural Language Processing and Computational Natural Language Learning (EMNLP- CoNLL-12), pages 893-903, Jeju Island, Korea.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Decoding Wikipedia categories for knowledge acquisition", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Nastase", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "1219--1224", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Nastase and M. Strube. 2008. Decoding Wikipedia categories for knowledge acquisition. In Proceedings of the 23rd National Conference on Artificial Intelli- gence (AAAI-08), pages 1219-1224, Chicago, Illinois.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "The role of queries in ranking labeled instances extracted from text", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pa\u015fca", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "955--962", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Pa\u015fca. 2010. The role of queries in ranking la- beled instances extracted from text. In Proceedings of the 23rd International Conference on Computa- tional Linguistics (COLING-10), pages 955-962, Bei- jing, China.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Espresso: Leveraging generic patterns for automatically harvesting semantic relations", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pennacchiotti", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL-06)", |
| "volume": "", |
| "issue": "", |
| "pages": "113--120", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel and M. Pennacchiotti. 2006. Espresso: Lever- aging generic patterns for automatically harvesting se- mantic relations. In Proceedings of the 21st Interna- tional Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computa- tional Linguistics (COLING-ACL-06), pages 113-120, Sydney, Australia.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Web-scale distributional similarity and entity set expansion", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Crestan", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Borkovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Popescu", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Vyas", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "938--947", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel, E. Crestan, A. Borkovsky, A. Popescu, and V. Vyas. 2009. Web-scale distributional similarity and entity set expansion. In Proceedings of the 2009 Con- ference on Empirical Methods in Natural Language Processing (EMNLP-09), pages 938-947, Singapore.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Mining entity types from query logs via user intent modeling", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Gamon", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics (ACL-12)", |
| "volume": "", |
| "issue": "", |
| "pages": "563--571", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Pantel, T. Lin, and M. Gamon. 2012. Mining entity types from query logs via user intent modeling. In Proceedings of the 50th Annual Meeting of the Associ- ation for Computational Linguistics (ACL-12), pages 563-571, Jeju Island, Korea.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Entity extraction via ensemble semantics", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pennacchiotti", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Pantel", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "238--247", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Pennacchiotti and P. Pantel. 2009. Entity extrac- tion via ensemble semantics. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing (EMNLP-09), pages 238-247, Singapore.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Large-scale taxonomy mapping for restructuring and integrating Wikipedia", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 21st International Joint Conference on Artificial Intelligence (IJCAI-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "2083--2088", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Ponzetto and R. Navigli. 2009. Large-scale taxonomy mapping for restructuring and integrating Wikipedia. In Proceedings of the 21st International Joint Confer- ence on Artificial Intelligence (IJCAI-09), pages 2083- 2088, Pasadena, California.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Deriving a large scale taxonomy from Wikipedia", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI-07)", |
| "volume": "", |
| "issue": "", |
| "pages": "1440--1447", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Ponzetto and M. Strube. 2007. Deriving a large scale taxonomy from Wikipedia. In Proceedings of the 22nd National Conference on Artificial Intelligence (AAAI- 07), pages 1440-1447, Vancouver, British Columbia.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "An algorithm for suffix stripping", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Porter", |
| "suffix": "" |
| } |
| ], |
| "year": 1980, |
| "venue": "Program", |
| "volume": "14", |
| "issue": "3", |
| "pages": "130--137", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Porter. 1980. An algorithm for suffix stripping. Pro- gram, 14(3):130-137.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Probabilistic question answering on the Web", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Fan", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Qi", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Grewal", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of the American Society for Information Science and Technology", |
| "volume": "56", |
| "issue": "3", |
| "pages": "571--583", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Radev, W. Fan, H. Qi, H. Wu, and A. Grewal. 2005. Probabilistic question answering on the Web. Journal of the American Society for Information Science and Technology, 56(3):571-583.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Wikipedia: The free encyclopedia. Online Information Review", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Remy", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "26", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Remy. 2002. Wikipedia: The free encyclopedia. On- line Information Review, 26(6):434.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Corpus-based semantic class mining: Distributional vs. pattern-based approaches", |
| "authors": [], |
| "year": null, |
| "venue": "Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "993--1001", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Corpus-based semantic class mining: Distributional vs. pattern-based approaches. In Proceedings of the 23rd International Conference on Computational Linguistics (COLING-10), pages 993-1001, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Semantic taxonomy induction from heterogenous evidence", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Snow", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the 21st International Conference on Computational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING-ACL-06)", |
| "volume": "", |
| "issue": "", |
| "pages": "801--808", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Snow, D. Jurafsky, and A. Ng. 2006. Semantic tax- onomy induction from heterogenous evidence. In Pro- ceedings of the 21st International Conference on Com- putational Linguistics and 44th Annual Meeting of the Association for Computational Linguistics (COLING- ACL-06), pages 801-808, Sydney, Australia.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Experiments in graphbased semi-supervised learning methods for classinstance acquisition", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Talukdar", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL-10)", |
| "volume": "", |
| "issue": "", |
| "pages": "1473--1481", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Talukdar and F. Pereira. 2010. Experiments in graph- based semi-supervised learning methods for class- instance acquisition. In Proceedings of the 48th An- nual Meeting of the Association for Computational Linguistics (ACL-10), pages 1473-1481, Uppsala, Sweden.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Weakly-supervised acquisition of labeled class instances using graph random walks", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Talukdar", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Reisinger", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pa\u015fca", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Ravichandran", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Bhagat", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "582--590", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Talukdar, J. Reisinger, M. Pa\u015fca, D. Ravichandran, R. Bhagat, and F. Pereira. 2008. Weakly-supervised acquisition of labeled class instances using graph ran- dom walks. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing (EMNLP-08), pages 582-590, Honolulu, Hawaii.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Finding cars, goddesses and enzymes: Parametrizable acquisition of labeled instances for open-domain information extraction", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Van Durme", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Pa\u015fca", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 23rd National Conference on Artificial Intelligence (AAAI-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "1243--1248", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Van Durme and M. Pa\u015fca. 2008. Finding cars, god- desses and enzymes: Parametrizable acquisition of la- beled instances for open-domain information extrac- tion. In Proceedings of the 23rd National Confer- ence on Artificial Intelligence (AAAI-08), pages 1243- 1248, Chicago, Illinois.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Automatic set instance extraction using the Web", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Cohen", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "441--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Wang and W. Cohen. 2009. Automatic set instance extraction using the Web. In Proceedings of the 47th Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP-09), pages 441-449, Singa- pore.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Automatically refining the Wikipedia infobox ontology", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 17th World Wide Web Conference (WWW-08)", |
| "volume": "", |
| "issue": "", |
| "pages": "635--644", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Wu and D. Weld. 2008. Automatically refining the Wikipedia infobox ontology. In Proceedings of the 17th World Wide Web Conference (WWW-08), pages 635-644, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Probase: a probabilistic taxonomy for text understanding", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 International Conference on Management of Data (SIGMOD-12)", |
| "volume": "", |
| "issue": "", |
| "pages": "481--492", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wu, , H. Li, H. Wang, and K. Zhu. 2012. Probase: a probabilistic taxonomy for text understanding. In Proceedings of the 2012 International Conference on Management of Data (SIGMOD-12), pages 481-492, Scottsdale, Arizona.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Stat-Snowball: a statistical approach to extracting entity relationships", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Nie", |
| "suffix": "" |
| }, |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Wen", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 18th World Wide Web Conference (WWW-09)", |
| "volume": "", |
| "issue": "", |
| "pages": "101--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Zhu, Z. Nie, X. Liu, B. Zhang, and J. Wen. 2009. Stat- Snowball: a statistical approach to extracting entity re- lationships. In Proceedings of the 18th World Wide Web Conference (WWW-09), pages 101-110, Madrid, Spain.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td>Counts</td><td>Cvg</td></tr><tr><td>A</td><td>B</td><td>|A|</td><td>|B| |A\u2229B| |A\u2229B| |A|</td></tr><tr><td colspan=\"3\">vs. Wikipedia categories:</td><td/></tr><tr><td>R wc \u2229Q</td><td colspan=\"3\">R dc 126,318 2,884,390 14,840 0.117</td></tr><tr><td/><td colspan=\"3\">R ql 126,318 1,649,261 21,979 0.173</td></tr><tr><td/><td colspan=\"3\">R qa 126,318 33,073,741 33,502 0.265</td></tr><tr><td/><td colspan=\"3\">R qg 126,318 134,235,151 43,935 0.347</td></tr><tr><td colspan=\"4\">R ql \u222aR qg 126,318 135,884,412 65,914 0.521</td></tr><tr><td colspan=\"3\">vs. Wikipedia List-Of categories:</td><td/></tr><tr><td>R wl</td><td colspan=\"3\">R dc 134,840 2,884,390 8,099 0.060</td></tr><tr><td/><td colspan=\"3\">R ql 134,840 1,649,261 26,446 0.196</td></tr><tr><td/><td colspan=\"3\">R qa 134,840 33,073,741 16,204 0.120</td></tr><tr><td/><td colspan=\"3\">R qg 134,840 134,235,151 20,021 0.148</td></tr><tr><td colspan=\"4\">R ql \u222aR qg 134,840 135,884,412 46,467 0.344</td></tr><tr><td colspan=\"4\">vs. Wikipedia List-Of categories that are queries:</td></tr><tr><td>R wl \u2229Q</td><td colspan=\"3\">R dc 47,442 2,884,390 7,985 0.168</td></tr><tr><td/><td colspan=\"3\">R ql 47,442 1,649,261 24,821 0.523</td></tr><tr><td/><td colspan=\"3\">R qa 47,442 33,073,741 16,204 0.341</td></tr><tr><td/><td colspan=\"3\">R qg 47,442 134,235,151 20,021 0.422</td></tr><tr><td>R</td><td/><td/><td/></tr></table>", |
| "html": null, |
| "text": "Table 1 illustrates the overall coverage of the various experimental runs. The table takes all class labels into account, relative to the Wikipedia-based runs as reference sets: R wc (Wikipedia categories), in the upper part of the table; and R wl (Wikipedia List-Of categories), in the lower R dc 295,587 2,884,390 15,011 0.051 R ql 295,587 1,649,261 21,979 0.074 R qa 295,587 33,073,741 33,502 0.113 R qg 295,587 134,235,151 43,935 0.148 R ql \u222aR qg 295,587 135,884,412 65,914 0.222 vs. Wikipedia categories that are queries: R ql \u222aR qg 47,442 135,884,412 44,842 0.945", |
| "num": null |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "text": "Correctness tags manually assigned to class labels containing one of the (underlined) target phrases, extracted by various runs", |
| "num": null |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td/><td colspan=\"30\">: Precision of class labels that match (i.e., whose</td></tr><tr><td colspan=\"33\">names contain) each target phrase, computed as an av-</td></tr><tr><td colspan=\"33\">erage over (variable) subsets of target phrases for which</td></tr><tr><td colspan=\"33\">some matching class label(s) exist, and as an average over</td></tr><tr><td colspan=\"18\">the entire set of 75 target phrases</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"16\">Per-Phrase Precision for Run Rwc</td><td colspan=\"16\">Per-Phrase Precision for Run Rql</td></tr><tr><td>Precision</td><td>0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Precision</td><td>0 0.1 0.2 0.3 0.4 0.5 0.6 1 0.9 0.8 0.7</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>aaa 1</td><td>adelaide cbd 5</td><td>american fascism 10</td><td>antarctic region 15</td><td>baquba 20</td><td>boulder colorado 25</td><td>chester arthur 30</td><td>contemporary art 35</td><td>eating disorders 40</td><td>halogens 45</td><td>juan carlos 50</td><td>lucky ali 55</td><td>phosphorus 60</td><td>rouen 65</td><td>u.s. 70</td><td>wlan 75</td><td>aaa 1</td><td>adelaide cbd 5</td><td>american fascism 10</td><td>antarctic region 15</td><td>baquba 20</td><td>boulder colorado 25</td><td>chester arthur 30</td><td>contemporary art 35</td><td>eating disorders 40</td><td>halogens 45</td><td>juan carlos 50</td><td>lucky ali 55</td><td>phosphorus 60</td><td>rouen 65</td><td>u.s. 
70</td><td>wlan 75</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Phrase</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Phrase</td><td/><td/><td/><td/><td/><td/></tr><tr><td/><td colspan=\"16\">Per-Phrase Precision for Run Rdc</td><td colspan=\"16\">Per-Phrase Precision for Run Rqg</td></tr><tr><td>Precision</td><td>0 0.1 0.2 0.3 0.4 0.5 0.6 0.7 0.8 0.9 1</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td>Precision</td><td>0 0.1 0.2 0.3 0.4 0.5 0.6 1 0.9 0.8 0.7</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td/><td>aaa 1</td><td>adelaide cbd 5</td><td>american fascism 10</td><td>antarctic region 15</td><td>baquba 20</td><td>boulder colorado 25</td><td>chester arthur 30</td><td>contemporary art 35</td><td>eating disorders 40</td><td>halogens 45</td><td>juan carlos 50</td><td>lucky ali 55</td><td>phosphorus 60</td><td>rouen 65</td><td>u.s. 70</td><td>wlan 75</td><td>aaa 1</td><td>adelaide cbd 5</td><td>american fascism 10</td><td>antarctic region 15</td><td>baquba 20</td><td>boulder colorado 25</td><td>chester arthur 30</td><td>contemporary art 35</td><td>eating disorders 40</td><td>halogens 45</td><td>juan carlos 50</td><td>lucky ali 55</td><td>phosphorus 60</td><td>rouen 65</td><td>u.s. 70</td><td>wlan 75</td></tr><tr><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Phrase</td><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td/><td colspan=\"4\">Phrase</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"33\">Figure 1: Discussion: As noted in (Ponzetto and Strube,</td></tr><tr><td colspan=\"33\">2007), Wikipedia organizes its articles and cate-</td></tr><tr><td colspan=\"33\">gories into a category network that mixes IsA (sub-</td></tr><tr><td colspan=\"33\">sumption) edges with non-IsA (thematic) edges.</td></tr><tr><td colspan=\"33\">Whenever an edge in Wikipedia is not IsA, the par-</td></tr></table>", |
| "html": null, |
| "text": "Precision scores for runs R wc , R ql , R dc and R qg , over class labels that match (i.e., contain) each of the 75 target phrases (0.25), for R qg . precision is separately computed over a random sample of 400 class labels per experimental run. The samples are selected from the set of all class labels extracted by the respective run. The precision scores are: 0.759 for R wc ; 1.000 for R wl ; 0.806 for R dc ; 0.811 for R ql ; 0.856 for R qa ; and 0.711 for R qg . The scores are in line with scores computed earlier over the target phrases, in the fourth column ofTable 3. : [japanese army and navy members in military or politic services in proper japan korea manchuria occupied china and nearest areas in previous times and pacific war epoch(1930-40s), mental disorders as defined by the diagnostic and statistical manual of mental disorders and the international statistical classification of diseases and related health problems,..] R", |
| "num": null |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "text": "Longest class labels extracted by runs R wl and R", |
| "num": null |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "text": "Ranked lists of instances extracted for a sample of class labels", |
| "num": null |
| } |
| } |
| } |
| } |