| { |
| "paper_id": "Q15-1023", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T15:07:44.388398Z" |
| }, |
| "title": "Design Challenges for Entity Linking", |
| "authors": [ |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": { |
| "settlement": "Seattle", |
| "region": "WA" |
| } |
| }, |
| "email": "xiaoling@cs.washington.edu" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": { |
| "settlement": "Seattle", |
| "region": "WA" |
| } |
| }, |
| "email": "sameer@cs.washington.edu" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Washington", |
| "location": { |
| "settlement": "Seattle", |
| "region": "WA" |
| } |
| }, |
| "email": "weld@cs.washington.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-theart systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.", |
| "pdf_parse": { |
| "paper_id": "Q15-1023", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Recent research on entity linking (EL) has introduced a plethora of promising techniques, ranging from deep neural networks to joint inference. But despite numerous papers there is surprisingly little understanding of the state of the art in EL. We attack this confusion by analyzing differences between several versions of the EL problem and presenting a simple yet effective, modular, unsupervised system, called VINCULUM, for entity linking. We conduct an extensive evaluation on nine data sets, comparing VINCULUM with two state-of-theart systems, and elucidate key aspects of the system that include mention extraction, candidate generation, entity type prediction, entity coreference, and coherence.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Entity Linking (EL) is a central task in information extraction -given a textual passage, identify entity mentions (substrings corresponding to world entities) and link them to the corresponding entry in a given Knowledge Base (KB, e.g. Wikipedia or Freebase). For example, JetBlue begins direct service between Barnstable Airport and JFK International.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Here, \"JetBlue\" should be linked to the entity KB:JetBlue, \"Barnstable Airport\" to KB:Barnstable Municipal Airport, and \"JFK International\" to KB:John F. Kennedy International Airport 1 . The links not only provide semantic annotations to human readers but also a machine-consumable representation of the most basic semantic knowledge in the text. Many other NLP applications can benefit from such links, such as distantly-supervised relation extraction (Craven and Kumlien, 1999; Riedel et al., 2010; Hoffmann et al., 2011; Koch et al., 2014) that uses EL to create training data, and some coreference systems that use EL for disambiguation (Hajishirzi et al., 2013; Zheng et al., 2013; Durrett and Klein, 2014) . Unfortunately, in spite of numerous papers on the topic and several published data sets, there is surprisingly little understanding about state-of-the-art performance.", |
| "cite_spans": [ |
| { |
| "start": 454, |
| "end": 480, |
| "text": "(Craven and Kumlien, 1999;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 481, |
| "end": 501, |
| "text": "Riedel et al., 2010;", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 502, |
| "end": 524, |
| "text": "Hoffmann et al., 2011;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 525, |
| "end": 543, |
| "text": "Koch et al., 2014)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 642, |
| "end": 667, |
| "text": "(Hajishirzi et al., 2013;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 668, |
| "end": 687, |
| "text": "Zheng et al., 2013;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 688, |
| "end": 712, |
| "text": "Durrett and Klein, 2014)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We argue that there are three reasons for this confusion. First, there is no standard definition of the problem. A few variants have been studied in the literature, such as Wikification (Milne and Witten, 2008; Ratinov et al., 2011; Cheng and Roth, 2013) which aims at linking noun phrases to Wikipedia entities and Named Entity Linking (aka Named Entity Disambiguation) (McNamee and Dang, 2009; Hoffart et al., 2011) which targets only named entities. Here we use the term Entity Linking as a unified name for both problems, and Named Entity Linking (NEL) for the subproblem of linking only named entities. But names are just one part of the problem. For many variants there are no annotation guidelines for scoring links. What types of entities are valid targets? When multiple entities are plausible for annotating a mention, which one should be chosen? Are nested mentions allowed? Without agreement on these issues, a fair comparison is elusive.", |
| "cite_spans": [ |
| { |
| "start": 186, |
| "end": 210, |
| "text": "(Milne and Witten, 2008;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 211, |
| "end": 232, |
| "text": "Ratinov et al., 2011;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 233, |
| "end": 254, |
| "text": "Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 371, |
| "end": 395, |
| "text": "(McNamee and Dang, 2009;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 396, |
| "end": 417, |
| "text": "Hoffart et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Secondly, it is almost impossible to assess approaches, because systems are rarely compared using the same data sets. For instance, Hoffart et al. (2011) developed a new data set (AIDA) based on the CoNLL 2003 Named Entity Recognition data set but failed to evaluate their system on MSNBC previously created by (Cucerzan, 2007) ; Wikifier (Cheng and Roth, 2013) compared to the authors' previous system (Ratinov et al., 2011) using the originally selected datasets but didn't evaluate using AIDA data.", |
| "cite_spans": [ |
| { |
| "start": 132, |
| "end": 153, |
| "text": "Hoffart et al. (2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 311, |
| "end": 327, |
| "text": "(Cucerzan, 2007)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 339, |
| "end": 361, |
| "text": "(Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 403, |
| "end": 425, |
| "text": "(Ratinov et al., 2011)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Finally, when two end-to-end systems are compared, it is rarely clear which aspect of a system makes one better than the other. This is especially problematic when authors introduce complex mechanisms or nondeterministic methods that involve learning-based reranking or joint inference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To address these problems, we analyze several significant inconsistencies among the data sets. To have a better understanding of the importance of various techniques, we develop a simple and modular, unsupervised EL system, VINCULUM. We compare VINCULUM to the two leading sophisticated EL systems on a comprehensive set of nine datasets. While our system does not consistently outperform the best EL system, it does come remarkably close and serves as a simple and competitive baseline for future research. Furthermore, we carry out an extensive ablation analysis, whose results illustrate 1) even a near-trivial model using CrossWikis (Spitkovsky and Chang, 2012) performs surprisingly well, and 2) incorporating a fine-grained set of entity types raises that level even higher. In summary, we make the following contributions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We analyze the differences among several versions of the entity linking problem, compare existing data sets and discuss annotation inconsistencies between them. (Sections 2 & 3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We present a simple yet effective, modular, unsupervised system, VINCULUM, for entity linking. We make the implementation open source and publicly available for future research. 2 (Section 4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 We compare VINCULUM to 2 state-of-the-art systems on an extensive evaluation of 9 data sets. We also investigate several key aspects of the system including mention extraction, candidate generation, entity type prediction, entity coreference, and coherence between entities. (Section 5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "2 http://github.com/xiaoling/vinculum", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this section, we describe some of the key differences amongst evaluations reported in existing literature, and propose a candidate benchmark for EL.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Standard Benchmark", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Nine data sets are in common use for EL evaluation; we partition them into three groups. The UIUC group (ACE and MSNBC datasets) (Ratinov et al., 2011) , AIDA group (with dev and test sets) (Hoffart et al., 2011) , and TAC-KBP group (with datasets ranging from the 2009 through 2012 competitions) (Mc-Namee and Dang, 2009) . Their statistics are summarized in Table 1 3 . Our set of nine is not exhaustive, but most other datasets, e.g. CSAW (Kulkarni et al., 2009) and AQUAINT (Milne and Witten, 2008) , annotate common concepts in addition to named entities. As we argue in Sec. 3.1, it is extremely difficult to define annotation guidelines for common concepts, and therefore they aren't suitable for evaluation. For clarity, this paper focuses on linking named entities. Similarly, we exclude datasets comprising Tweets and other short-length documents, since radically different techniques are needed for the specialized corpora. Table 2 presents a list of recent EL publications showing the data sets that they use for evaluation. The sparsity of this table is striking -apparently no system has reported the performance data from all three of the major evaluation groups.", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 151, |
| "text": "(Ratinov et al., 2011)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 190, |
| "end": 212, |
| "text": "(Hoffart et al., 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 297, |
| "end": 322, |
| "text": "(Mc-Namee and Dang, 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 442, |
| "end": 465, |
| "text": "(Kulkarni et al., 2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 478, |
| "end": 502, |
| "text": "(Milne and Witten, 2008)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 360, |
| "end": 367, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| }, |
| { |
| "start": 935, |
| "end": 942, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Sets", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Existing benchmarks have also varied considerably in the knowledge base used for link targets. Wikipedia has been most commonly used (Milne and Witten, 2008; Ratinov et al., 2011; Cheng and Roth, 2013) , however datasets were annotated using different snapshots and subsets. Other KBs include Yago (Hoffart et al., 2011) , Freebase (Sil and Yates, 2013) , DBpedia (Mendes et al., 2011 ) and a subset of Wikipedia (Mayfield et al., 2012) . Given that almost all KBs are descendants of Wikipedia, we use Wikipedia as the base KB in this work. 4", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 157, |
| "text": "(Milne and Witten, 2008;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 158, |
| "end": 179, |
| "text": "Ratinov et al., 2011;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 180, |
| "end": 201, |
| "text": "Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 298, |
| "end": 320, |
| "text": "(Hoffart et al., 2011)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 332, |
| "end": 353, |
| "text": "(Sil and Yates, 2013)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 356, |
| "end": 384, |
| "text": "DBpedia (Mendes et al., 2011", |
| "ref_id": null |
| }, |
| { |
| "start": 413, |
| "end": 436, |
| "text": "(Mayfield et al., 2012)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Knowledge Base", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Table 1: Statistics of the nine data sets. BOC F1 is the Bag-of-Concept F1 of (Ratinov et al., 2011; Cheng and Roth, 2013); B^3+ F1, used in TAC KBP, measures accuracy in terms of entity clusters, grouped by the mentions linked to the same entity. Each row lists # of mentions, entity types, KB, # of NILs, and evaluation metric. ACE: 244, any Wikipedia topic, Wikipedia, 0, BOC F1. MSNBC: 654, any Wikipedia topic, Wikipedia, 0, BOC F1. AIDA-dev: 5917, PER/ORG/LOC/MISC, Yago, 1126, Accuracy. AIDA-test: 5616, PER/ORG/LOC/MISC, Yago, 1131, Accuracy. TAC09: 3904, PER_T/ORG_T/GPE, TAC \u2282 Wiki, 2229, Accuracy. TAC10: 2250, PER_T/ORG_T/GPE, TAC \u2282 Wiki, 1230, Accuracy. TAC10T: 1500, PER_T/ORG_T/GPE, TAC \u2282 Wiki, 426, Accuracy. TAC11: 2250, PER_T/ORG_T/GPE, TAC \u2282 Wiki, 1126, B^3+ F1. TAC12: 2226, PER_T/ORG_T/GPE, TAC \u2282 Wiki, 1049, B^3+ F1.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 100, |
| "text": "(Ratinov et al., 2011;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 101, |
| "end": 122, |
| "text": "Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 0, |
| "end": 7, |
| "text": "Table 1", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": null |
| }, |
| { |
| "text": "Data Set ACE MSNBC AIDA-test TAC09 TAC10 TAC11 TAC12 AQUAINT CSAW Cucerzan (2007) x Milne and Witten (2008) x Kulkarni et al. (2009) x x Ratinov et al. 2011x x x Hoffart et al. 2011x Han and Sun (2012) x x He et al. 2013ax x He et al. 2013bx x x Cheng and Roth (2013) x x x x Sil and Yates 2013x x x Li et al. 2013x x Cornolti et al. 2013x x x TAC-KBP participants x x x x Table 2 : A sample of papers on entity linking with the data sets used in each paper (ordered chronologically). TAC-KBP proceedings comprise additional papers (McNamee and Dang, 2009; Ji et al., 2010; Ji et al., 2010; Mayfield et al., 2012) . Our intention is not to exhaust related work but to illustrate how sparse evaluation impedes comparison.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 81, |
| "text": "Cucerzan (2007)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 84, |
| "end": 107, |
| "text": "Milne and Witten (2008)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 110, |
| "end": 132, |
| "text": "Kulkarni et al. (2009)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 183, |
| "end": 201, |
| "text": "Han and Sun (2012)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 246, |
| "end": 267, |
| "text": "Cheng and Roth (2013)", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 532, |
| "end": 556, |
| "text": "(McNamee and Dang, 2009;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 557, |
| "end": 573, |
| "text": "Ji et al., 2010;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 574, |
| "end": 590, |
| "text": "Ji et al., 2010;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 591, |
| "end": 613, |
| "text": "Mayfield et al., 2012)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 373, |
| "end": 380, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": null |
| }, |
| { |
| "text": "NIL entities: In spite of Wikipedia's size, there are many real-world entities that are absent from the KB. When such a target is missing for a mention, it is said to link to a NIL entity (McNamee and Dang, 2009) (aka out-of-KB or unlinkable entity (Hoffart et al., 2014) ). In the TAC KBP, in addition to determining if a mention has no entity in the KB to link, all the mentions that represent the same real world entities must be clustered together. Since our focus is not to create new entities for the KB, NIL clustering is beyond the scope of this paper. The AIDA data sets similarly contain such NIL annotations whereas ACE and MSNBC omit these mentions altogether. We only evaluate whether a mention with no suitable entity in the KB is predicted as NIL.", |
| "cite_spans": [ |
| { |
| "start": 249, |
| "end": 271, |
| "text": "(Hoffart et al., 2014)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Set", |
| "sec_num": null |
| }, |
| { |
| "text": "While a variety of metrics have been used for evaluation, there is little agreement on which one to use. However, this detail is quite important, since the choice of metric strongly biases the results. We describe the most common metrics below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Bag-of-Concept F1 (ACE, MSNBC): For each document, a gold bag of Wikipedia entities is evaluated against a bag of system output entities requiring exact segmentation match. This metric may have its historical reason for comparison but is in fact flawed since it will obtain 100% F1 for an annotation in which every mention is linked to the wrong entity, but the bag of entities is the same as the gold bag.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Micro Accuracy (TAC09, TAC10, TAC10T): For a list of given mentions, the metric simply measures the percentage of correctly predicted links.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "TAC-KBP B 3 + F1 (TAC11, TAC12): The mentions that are predicted as NIL entities are required to be clustered according to their identities (NIL clustering). The overall data set is evaluated using a entity cluster-based B 3 + F1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "NER-style F1 (AIDA): Similar to official CoNLL NER F1 evaluation, a link is considered correct only if the mention matches the gold boundary and the linked entity is also correct. A wrong link with the correct boundary penalizes both precision and recall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We note that Bag-of-Concept F1 is equivalent to the measure for Concept-to-Wikipedia task proposed in (Cornolti et al., 2013) and NER-style F1 is the same as strong annotation match. In the experiments, we use the official metrics for the TAC data sets and NER-style F1 for the rest.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 125, |
| "text": "(Cornolti et al., 2013)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation Metrics", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Not only do we lack a common data set for evaluation, but most prior researchers fail to even define the problem under study, before developing algorithms. Often an overly general statement such as annotating the mentions to \"referent Wikipedia pages\" or \"corresponding entities\" is used to describe which entity link is appropriate. This section shows that failure to have a detailed annotation guideline causes a number of key inconsistencies between data sets. A few assumptions are subtly made in different papers, which makes direct comparisons unfair and hard to comprehend.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "No Annotation Guidelines", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Which entities deserve links? Some argue for restricting to named entities. Others argue that any phrase that can be linked to a Wikipedia entity adds value. Without a clear answer to this issue, any data set created will be problematic. It's not fair to penalize a NEL system for skipping a common noun phrases; nor would it be fair to lower the precision of a system that \"incorrectly\" links a common concept. However, we note that including mentions of common concepts is actually quite problematic, since the choice is highly subjective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Entity Mentions: Common or Named?", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Example 1 In December 2008, Hoke was hired as the head football coach at San Diego State University. WikipediaAt first glance, KB:American football seems the gold-standard link. However, there is another entity KB:College football, which is clearly also, if not more, appropriate. If one argues that KB:College football should be the right choice given the context, what if KB:College football does not exist in the KB? Should NIL be returned in this case? The question is unanswered. 5 For the rest of this paper, we focus on the (better defined) problem of solely linking named entities. 6 AQUAINT and CSAW are therefore not used for evaluation due to an disproportionate number of common concept annotations.", |
| "cite_spans": [ |
| { |
| "start": 485, |
| "end": 486, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 590, |
| "end": 591, |
| "text": "6", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Entity Mentions: Common or Named?", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "It is important to resolve disagreement when more than one annotation is plausible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Specific Should Linked Entities Be?", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The TAC-KBP annotation guidelines (tac, 2012) specify that different iterations of the same organization (e.g. the KB:111th U.S. Congress and the KB:112th U.S. Congress) should not be considered as distinct entities. Unfortunately, this is not a common standard shared across the data sets, where often the most specific possible entity is preferred.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Specific Should Linked Entities Be?", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Example 2 Adams and Platt are both injured and will miss England's opening World Cup qualifier against Moldova on Sunday. (AIDA) Here the mention \"World Cup\" is labeled as KB:1998 FIFA World Cup, a specific occurrence of the event KB:FIFA World Cup.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Specific Should Linked Entities Be?", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "It is indeed difficult to decide how specific the gold link should be. Given a static knowledge base, which is often incomplete, one cannot always find the most specific entity. For instance, there is no Wikipedia page for the KB:116th U.S. Congress because the Congress has not been elected yet. On the other hand, using general concepts can cause troubles for machine reading. Consider president-of relation extraction on the following sentence. Figure 1 : Entities divided by their types. For named entities, the solid squares represent 4 CoNLL(AIDA) classes; the red dashed squares display 3 TAC classes; the shaded rectangle depicts common concepts.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 448, |
| "end": 456, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "How Specific Should Linked Entities Be?", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Failure to distinguish different Congress iterations would cause an information extraction system to falsely extracting the fact that KB:Joe Biden is the Senate President of the KB:United States Congress at all times!", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "How Specific Should Linked Entities Be?", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Another situation in which more than one annotation is plausible is metonymy, which is a way of referring to an entity not by its own name but rather a name of some other entity it is associated with. A common example is to refer to a country's government using its capital city.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metonymy", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Example 4 Moscow's as yet undisclosed proposals on Chechnya's political future have , meanwhile, been sent back to do the rounds of various government departments. (AIDA)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metonymy", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The mention here, \"Moscow\", is labeled as KB:Government of Russia in AIDA. If this sentence were annotated in TAC-KBP, it would have been labeled as KB:Moscow (the city) instead. Even the country KB:Russia seems to be a valid label. However, neither the city nor the country can actually make a proposal. The real entity in play is KB:Government of Russia.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Metonymy", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Even in the data sets consisting of solely named entities, the types of the entities vary and therefore the data distribution differs. TAC-KBP has a clear definition of what types of entities require links, namely Person, Organization and Geo-political entities. AIDA, which adopted the NER data set from the CoNLL shared task, includes entities from 4 classes, Person, Organization, Location and Misc. 7 Com-pared to the AIDA entity types, it is obvious that TAC-KBP is more restrictive, since it does not have Misc. entities (e.g. KB:FIFA World Cup). Moreover, TAC entities don't include fictional characters or organizations, such as KB:Sherlock Holmes. TAC GPEs include some geographical regions, such as KB:France, but exclude those without governments, such as KB:Central California or locations such as KB:Murrayfield Stadium. 8 Figure 1 summarizes the substantial differences between the two type sets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 836, |
| "end": 844, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Named Entities, But of What Types?", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "We often see one entity mention nested in another. For instance, a U.S. city is often followed by its state, such as \"Portland, Oregon\". One can split the whole mention to individual ones, \"Portland\" for the city and \"Oregon\" for the city's state. AIDA adopts this segmentation. However, annotations in an early TAC-KBP dataset (2009) select the whole span as the mention. We argue that all three mentions make sense. In fact, knowing the structure of the mention would facilitate the disambiguation (i.e. the state name provides enough context to uniquely identify the city entity). Besides the mention segmentation, the links for the nested entities may also be ambiguous.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Can Mention Boundaries Overlap?", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "Example 5 Dorothy Byrne, a state coordinator for the Florida Green Party, said she had been inundated with angry phone calls and e-mails from Democrats, but has yet to receive one regretful note from a Nader voter.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Can Mention Boundaries Overlap?", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "The gold annotation from ACE is KB:Green Party of Florida even though the mention doesn't contain \"Florida\" and can arguably be linked to KB:US Green Party.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Can Mention Boundaries Overlap?", |
| "sec_num": "3.5" |
| }, |
| { |
| "text": "In this section, we present VINCULUM, a simple, unsupervised EL system that performs comparably to the state of the art. As input, VINCULUM takes a plain-text document d and outputs a set of segmented mentions with their associated entities Figure 2 illustrates the linking pipeline that follows mention extraction. For each mention, VINCULUM ranks the candidates at each stage based on an ever widening context. For example, candidate generation (Section 4.2) merely uses the mention string, entity typing (Section 4.3) uses the sentence, while coreference (Section 4.4) and coherence (Section 4.5) use the full document and Web respectively. Our pipeline mimics the sieve structure introduced in (Lee et al., 2013), but instead of merging coreference clusters, we adjust the probability of candidate entities at each stage. The modularity of VINCULUM enables us to study the relative impact of its subcomponents.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 241, |
| "end": 249, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Simple & Modular Linking Method", |
| "sec_num": "4" |
| }, |
| { |
| "text": "A d = {(m i , l i )}. VINCULUM", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "A Simple & Modular Linking Method", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The first step of EL extracts potential mentions from the document. Since VINCULUM restricts attention to named entities, we use a Named Entity Recognition (NER) system (Finkel et al., 2005) . Alternatively, an NP chunker may be used to identify the mentions.", |
| "cite_spans": [ |
| { |
| "start": 169, |
| "end": 190, |
| "text": "(Finkel et al., 2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Extraction", |
| "sec_num": "4.1" |
| }, |
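As an illustration only, the mention-extraction step can be sketched with a toy capitalized-span extractor. The regex heuristic below is our own stand-in, not part of VINCULUM, which uses a real NER system (Finkel et al., 2005):

```python
import re

def extract_mentions(text):
    """Toy stand-in for an NER system: returns maximal runs of
    consecutive capitalized words. A trained NER model should be
    used in practice."""
    pattern = re.compile(r'\b[A-Z][a-zA-Z]*(?:\s+[A-Z][a-zA-Z]*)*\b')
    return [m.group() for m in pattern.finditer(text)]

mentions = extract_mentions("Barack Obama visited Washington last week.")
# mentions contains "Barack Obama" and "Washington"
```

This toy extractor shares NER's limitation noted in Section 5.1: uncapitalized noun phrases are missed entirely.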
| { |
| "text": "While in theory a mention could link to any entity in the KB, in practice one sacrifices little by restricting attention to a subset (dozens) precompiled using a dictionary. A common way to build such a dictionary D is by crawling Web pages and aggregating anchor links that point to Wikipedia pages. The frequency with which a mention (anchor text), m, links to a particular entity (anchor link), c, allows one to estimate the conditional probability p(c|m). We adopt the CrossWikis dictionary, which was computed from a Google crawl of the Web (Spitkovsky and Chang, 2012). The dictionary contains more than 175 million unique strings with the entities they may represent. In the literature, the dictionary is often built from the anchor links within the Wikipedia website (e.g., (Ratinov et al., 2011; Hoffart et al., 2011) ).", |
| "cite_spans": [ |
| { |
| "start": 782, |
| "end": 804, |
| "text": "(Ratinov et al., 2011;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 805, |
| "end": 826, |
| "text": "Hoffart et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dictionary-based Candidate Generation", |
| "sec_num": "4.2" |
| }, |
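A minimal sketch of the dictionary-based candidate generation described above. The anchor counts and entity names are invented for illustration; CrossWikis itself aggregates such statistics from a full Web crawl:

```python
from collections import Counter, defaultdict

# Toy anchor statistics: (anchor text, linked entity) pairs, as would be
# aggregated from Web pages linking to Wikipedia. Counts are invented.
anchors = ([("Washington", "George_Washington")] * 6 +
           [("Washington", "Washington,_D.C.")] * 3 +
           [("Washington", "Washington_(state)")] * 1)

def build_dictionary(pairs):
    """Estimate p(c|m) as the fraction of times mention m links to entity c."""
    counts = defaultdict(Counter)
    for mention, entity in pairs:
        counts[mention][entity] += 1
    return {m: {c: n / sum(ctr.values()) for c, n in ctr.items()}
            for m, ctr in counts.items()}

def candidates(dictionary, mention, k=30):
    """Top-k candidate entities for a mention, ranked by p(c|m)."""
    dist = dictionary.get(mention, {})
    return sorted(dist.items(), key=lambda x: -x[1])[:k]

D = build_dictionary(anchors)
top = candidates(D, "Washington")
# top[0] is ("George_Washington", 0.6)
```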
| { |
| "text": "In addition, we employ two small but precise dictionaries for U.S. state abbreviations and demonyms when the mention satisfies certain conditions. For U.S. state abbreviations, a comma before the mention is required. For demonyms, we ensure that the mention is either an adjective or a plural noun.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dictionary-based Candidate Generation", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For an ambiguous mention such as \"Washington\", knowing that the mention denotes a person allows an EL system to promote KB:George Washington while lowering the rank of the capital city in the candidate list. We incorporate this intuition by combining it probabilistically with the CrossWikis prior.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "p(c|m, s) = \u03a3_{t \u2208 T} p(c, t|m, s) = \u03a3_{t \u2208 T} p(c|m, t, s) p(t|m, s),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where s denotes the sentence containing this mention m and T represents the set of all possible types. We assume the candidate c and the sentential context s are conditionally independent if both the mention m and its type t are given. In other words, p(c|m, t, s) = p(c|m, t), the RHS of which can be estimated by renormalizing p(c|m) w.r.t. type t:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "p(c|m, t) = p(c|m) / \u03a3_{c' \u2192 t} p(c'|m),", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "where c \u2192 t indicates that t is one of c's entity types. 9 The other part of the equation, p(t|m, s), can be estimated by any off-the-shelf Named Entity Recognition system, e.g. Finkel et al. (2005) and Ling and Weld (2012) .", |
| "cite_spans": [ |
| { |
| "start": 178, |
| "end": 198, |
| "text": "Finkel et al. (2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 203, |
| "end": 223, |
| "text": "Ling and Weld (2012)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "4.3" |
| }, |
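The two equations above can be combined in a few lines. The candidate priors, entity type sets, and type posterior below are invented for illustration:

```python
def p_entity_given_type(prior, entity_types, t):
    """Renormalize the dictionary prior p(c|m) over candidates whose
    type set contains t, giving p(c|m, t)."""
    mass = sum(p for c, p in prior.items() if t in entity_types[c])
    return {c: (p / mass if t in entity_types[c] and mass > 0 else 0.0)
            for c, p in prior.items()}

def p_entity_given_context(prior, entity_types, type_posterior):
    """p(c|m, s) = sum over types t of p(c|m, t) * p(t|m, s)."""
    score = {c: 0.0 for c in prior}
    for t, pt in type_posterior.items():
        cond = p_entity_given_type(prior, entity_types, t)
        for c in prior:
            score[c] += cond[c] * pt
    return score

# Hypothetical example: "Washington" in a sentence whose context
# suggests a person with probability 0.8 and a location with 0.2.
prior = {"George_Washington": 0.6, "Washington,_D.C.": 0.4}
types = {"George_Washington": {"person"}, "Washington,_D.C.": {"location"}}
posterior = {"person": 0.8, "location": 0.2}
scores = p_entity_given_context(prior, types, posterior)
# scores == {"George_Washington": 0.8, "Washington,_D.C.": 0.2}
```

Note how the type posterior overrides the dictionary prior: the person reading is promoted even though both candidates had substantial prior mass.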
| { |
| "text": "It is common for entities to be mentioned more than once in a document. Since some mentions are less ambiguous than others, it makes sense to use the most representative mention for linking. To this end, VINCULUM applies a coreference resolution system (e.g. Lee et al. (2013)) to cluster coreferent mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "The representative mention of a cluster is chosen for linking. 10 While there are more sophisticated ways to integrate EL and coreference (Hajishirzi et al., 2013) , VINCULUM's pipeline is simple and modular.", |
| "cite_spans": [ |
| { |
| "start": 138, |
| "end": 163, |
| "text": "(Hajishirzi et al., 2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference", |
| "sec_num": "4.4" |
| }, |
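A sketch of the representative-mention idea, assuming mention clusters come from an external coreference system. Choosing the longest mention string as representative is our simplifying assumption; Lee et al. (2013) select a representative inside the coreference system itself:

```python
def link_cluster(cluster, candidate_fn):
    """Link every mention in a coreference cluster using the candidate
    list of a representative mention (here, the longest string, which
    is typically the least ambiguous)."""
    representative = max(cluster, key=len)
    cands = candidate_fn(representative)
    return {m: cands for m in cluster}

# Hypothetical candidate function returning (entity, score) pairs.
links = link_cluster(
    ["Barack Obama", "Obama", "he"],
    lambda m: [("Barack_Obama", 0.9)])
# every mention in the cluster shares the representative's candidates
```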
| { |
| "text": "When KB:Barack Obama appears in a document, it is more likely that the mention \"Washington\" represents the capital KB:Washington, D.C. as the two entities are semantically related, and hence the joint assignment is coherent. A number of researchers found inclusion of some version of coherence is beneficial for EL (Cucerzan, 2007; Milne and Witten, 2008; Ratinov et al., 2011; Hoffart et al., 2011; Cheng and Roth, 2013). To incorporate coherence in VINCULUM, we seek a document-wide assignment of entity links that maximizes the sum of the pairwise coherence scores over all entity links predicted in the document d, i.e.", |
| "cite_spans": [ |
| { |
| "start": 315, |
| "end": 331, |
| "text": "(Cucerzan, 2007;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 332, |
| "end": 355, |
| "text": "Milne and Witten, 2008;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 356, |
| "end": 377, |
| "text": "Ratinov et al., 2011;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 378, |
| "end": 399, |
| "text": "Hoffart et al., 2011;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 400, |
| "end": 421, |
| "text": "Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "\u03a3_{1 \u2264 i < j \u2264 |M_d|} \u03c6(l_{m_i}, l_{m_j})", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "where \u03c6 is a function that measures the coherence between two entities and M_d is the set of mentions in d. Specifically, for a mention m and each of its candidates c \u2208 C_m, we compute a score coh(c), defined below. Since both measures take values between 0 and 1, we denote the coherence score coh(c) as p_\u03c6(c|P_d), the conditional probability of an entity given P_d, the set of entities currently predicted in the document. The final score of a candidate is the sum of the coherence p_\u03c6(c|P_d) and the type compatibility p(c|m, s). Two coherence measures have been found to be useful: Normalized Google Distance (NGD) (Milne and Witten, 2008; Ratinov et al., 2011) and relational score (Cheng and Roth, 2013). NGD between two entities c_i and c_j is defined based on the link structure between Wikipedia articles as follows:", |
| "cite_spans": [ |
| { |
| "start": 532, |
| "end": 556, |
| "text": "(Milne and Witten, 2008;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 557, |
| "end": 578, |
| "text": "Ratinov et al., 2011)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 600, |
| "end": 622, |
| "text": "(Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "coh(c) = 1/(|P_d| \u2212 1) \u03a3_{p \u2208 P_d \\ {p_m}} \u03c6(p, c), c \u2208 C_m,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "4.5" |
| }, |
| { |
| "text": "\u03c6_NGD(c_i, c_j) = 1 \u2212 [log(max(|L_i|, |L_j|)) \u2212 log(|L_i \u2229 L_j|)] / [log(W) \u2212 log(min(|L_i|, |L_j|))]", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "4.5" |
| }, |
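The NGD formula translates directly to code. The link sets and W below are invented; in practice L_i and L_j come from the Wikipedia link graph:

```python
import math

def ngd_coherence(links_i, links_j, W):
    """phi_NGD(c_i, c_j) = 1 - [log(max(|L_i|,|L_j|)) - log(|L_i ∩ L_j|)]
                             / [log(W) - log(min(|L_i|,|L_j|))].
    Returns 0.0 when the two articles share no links (undefined log)."""
    common = len(links_i & links_j)
    if common == 0:
        return 0.0
    num = math.log(max(len(links_i), len(links_j))) - math.log(common)
    den = math.log(W) - math.log(min(len(links_i), len(links_j)))
    return 1.0 - num / den

# Toy link sets (invented): two related entities share most inlinks.
a = {1, 2, 3, 4}
b = {2, 3, 4, 5}
coh = ngd_coherence(a, b, W=1000)
# close to 1.0, since the overlap is large relative to W
```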
| { |
| "text": "where L_i and L_j are the sets of incoming (or outgoing) links of the Wikipedia articles for c_i and c_j respectively, and W is the total number of entities in Wikipedia. The relational score between two entities is a binary indicator of whether a relation exists between them. We use Freebase 11 as the source of relation triples F = {(sub, rel, obj)}. Relational coherence \u03c6_REL is thus defined as \u03c6_REL(e_i, e_j) = 1 if \u2203r such that (e_i, r, e_j) \u2208 F or (e_j, r, e_i) \u2208 F, and 0 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "4.5" |
| }, |
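A sketch of relational coherence and the averaged score coh(c) from Section 4.5, with a hypothetical Freebase-style triple store F:

```python
def phi_rel(e_i, e_j, triples):
    """Relational coherence: 1 if some relation links the two entities
    (in either direction) in the triple store F, else 0."""
    return 1.0 if any((s == e_i and o == e_j) or (s == e_j and o == e_i)
                      for s, _, o in triples) else 0.0

def coherence_score(candidate, predictions, phi):
    """coh(c) = 1/(|P_d|-1) * sum of phi(p, c) over the other predicted
    entities p in the document (assumes distinct predicted entities)."""
    others = [p for p in predictions if p != candidate]
    if not others:
        return 0.0
    return sum(phi(p, candidate) for p in others) / (len(predictions) - 1)

# Hypothetical triples; the relation name is invented for illustration.
F = [("Barack_Obama", "place_lived", "Washington,_D.C.")]
preds = ["Barack_Obama", "Washington,_D.C."]
score = coherence_score("Washington,_D.C.", preds,
                        lambda a, b: phi_rel(a, b, F))
# score == 1.0: the only other predicted entity is related to the candidate
```

Averaging \u03c6_NGD and \u03c6_REL inside `phi` would give the "+BOTH" variant evaluated in Section 5.4.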
| { |
| "text": "In this section, we present experiments to address the following questions:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Is NER sufficient to identify mentions? (Sec. 5.1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 How much does candidate generation affect final EL performance? (Sec. 5.2)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 How much does entity type prediction help EL? What type set is most appropriate? (Sec. 5.3)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 How much does coherence improve the EL results? (Sec. 5.4)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 How well does VINCULUM perform compared to the state-of-the-art? (Sec. 5.5)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "\u2022 Finally, which of VINCULUM's components contribute the most to its performance? (Sec. 5.6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We start by using Stanford NER for mention extraction and measure its efficacy by the recall of correct mentions, shown in Table 3 (Performance (%; R: Recall, P: Precision) of the correct mentions using different mention extraction strategies). ACE and MSNBC annotate only a subset of all mentions, and therefore the absolute precision values are largely underestimated.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 122, |
| "end": 129, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Mention Extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "When NER alone is used to detect mentions, some mentions are missed; these are often noun phrases without capitalization, a well-known limitation of automated extractors. To recover them, we experiment with an NP chunker (NP) 12 and a deterministic noun phrase extractor based on parse trees (DP). Although we expect them to introduce spurious mentions, the purpose is to estimate an upper bound for mention recall. The results confirm this intuition: both methods improve recall, but the cost in precision is prohibitive. Therefore, we use only NER in subsequent experiments. Note that the recall of mention extraction is an upper bound on the recall of end-to-end predictions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Mention Extraction", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In this section, we inspect the performance of candidate generation. We compare CrossWikis with an intra-Wikipedia dictionary 13 and Freebase Search API 14 . Each candidate generation component takes a mention string as input and returns an ordered list of candidate entities representing the mention. The candidates produced by CrossWikis and the intra-Wikipedia dictionary are ordered by their conditional probabilities given the mention string. Freebase API provides scores for the entities using a combination of text similarity and an in-house entity relevance score. We compute candidates for the union of all the non-NIL mentions from all 9 data sets and measure their efficacy by recall@k. From Figure 3 , it is clear that CrossWikis outperforms both the intra-Wikipedia dictionary and Freebase Search API for almost all k. (Figure 4: Recall@k using CrossWikis for candidate generation, split by data set; 30 is chosen as the cut-off value, balancing efficiency and accuracy.) The intra-Wikipedia dictionary is on a par with CrossWikis at k = 1 but in general has a", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 703, |
| "end": 711, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Candidate Generation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "lower coverage of the gold candidates compared to CrossWikis 15 . Freebase API offers a better coverage than the intra-Wikipedia dictionary but is less efficient than CrossWikis. In other words, Freebase API needs a larger cut-off value to include the gold entity in the candidate set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Candidate Generation", |
| "sec_num": "5.2" |
| }, |
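The recall@k measure used to compare candidate generators can be sketched as follows (the gold labels and candidate lists are toy data):

```python
def recall_at_k(gold_and_candidates, k):
    """Fraction of mentions whose gold entity appears among the top-k
    candidates, as used to compare candidate generation components."""
    hits = sum(1 for gold, cands in gold_and_candidates if gold in cands[:k])
    return hits / len(gold_and_candidates)

# Toy evaluation data: (gold entity, ranked candidate list) per mention.
data = [
    ("A", ["A", "B", "C"]),
    ("B", ["C", "B", "A"]),
    ("D", ["A", "B", "C"]),  # gold entity never generated
]
r1 = recall_at_k(data, 1)    # 1/3: only the first mention is ranked first
r30 = recall_at_k(data, 30)  # 2/3: the third gold entity is missing entirely
```

A dictionary with higher recall@k at a small cut-off (here k = 30) lets the later stages re-rank a short list without losing the gold entity.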
| { |
| "text": "Using CrossWikis for candidate generation, we plot the recall@k curves per data set (Figure 4 ). To our surprise, in most data sets, CrossWikis alone can achieve more than 70% recall@1. The only exceptions are TAC11 and TAC12 because the organizers intentionally selected mentions that are highly ambiguous (e.g., \"ABC\") or incomplete (e.g., \"Brown\"). For efficiency, we set a cut-off threshold at 30 (> 80% recall for all but one data set). Note that CrossWikis itself can be used as a context-insensitive EL system by looking up the mention string and predicting the entity with the highest conditional probability. The second row in Table 4 presents the results using this simple baseline. CrossWikis alone, using only the mention string, performs fairly reasonably. Table 4: Performance (%) after incorporating entity types, comparing two sets of entity types (NER and FIGER). Using a set of fine-grained entity types (FIGER) generally achieves better results.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 84, |
| "end": 93, |
| "text": "(Figure 4", |
| "ref_id": null |
| }, |
| { |
| "start": 641, |
| "end": 648, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 784, |
| "end": 791, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Candidate Generation", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Here we investigate the impact of the entity types on the linking performance. The most obvious choice is the traditional NER types (T_NER = {PER, ORG, LOC, MISC}). To predict the types of the mentions, we run Stanford NER (Finkel et al., 2005) and set the predicted type t_m of each mention m to have probability 1 (i.e. p(t_m|m, s) = 1). As for the types of the entities, we map their Freebase types to the four NER types 16 . A more expressive choice is the set of 112 fine-grained entity types introduced by Ling and Weld (2012) in FIGER, a publicly available package 17 . These fine-grained types are not disjoint, i.e. each mention is allowed to have more than one type. For each mention, FIGER returns a set of types, each accompanied by a score: t_FIGER(m) = {(t_j, g_j) : t_j \u2208 T_FIGER}. A softmax function is used to probabilistically interpret the results as follows:", |
| "cite_spans": [ |
| { |
| "start": 223, |
| "end": 244, |
| "text": "(Finkel et al., 2005)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 424, |
| "end": 426, |
| "text": "16", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "p(t_j|m, s) = exp(g_j)/Z if (t_j, g_j) \u2208 t_FIGER(m), and 0 otherwise, where Z = \u03a3_{(t_k, g_k) \u2208 t_FIGER(m)} exp(g_k).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "5.3" |
| }, |
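The softmax interpretation of FIGER scores is a one-liner per type. The type names and scores below are hypothetical:

```python
import math

def type_posterior(figer_output):
    """Softmax over FIGER scores: p(t|m, s) = exp(g_t) / Z for returned
    types, implicitly zero for types FIGER did not return."""
    Z = sum(math.exp(g) for _, g in figer_output)
    return {t: math.exp(g) / Z for t, g in figer_output}

# Hypothetical FIGER output for one mention: (type, score) pairs.
post = type_posterior([("person", 2.0), ("person/politician", 1.0)])
# probabilities sum to 1; "person" gets the larger share
```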
| { |
| "text": "We evaluate the utility of entity types in Table 4 , which shows that using NER types typically worsens the performance. This drop may be attributed to the rigid binary values used for type incorporation; the chain model adopted in Stanford NER makes it hard to output calibrated probabilities of the entity types for a mention. We also notice that FIGER types consistently improve the results across the data sets, indicating that a finer-grained type set may be more suitable for the entity linking task.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 43, |
| "end": 50, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "To further confirm this assertion, we simulate the scenario where the gold types are provided for each mention (the oracle types of its gold entity). The performance is significantly boosted with the assistance of the gold types, which suggests that a better-performing NER/FIGER system could further improve performance. Similarly, we notice that the results using FIGER types almost consistently outperform the ones using NER types. This observation endorses our previous recommendation of using fine-grained types for EL tasks.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Incorporating Entity Types", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Two coherence measures suggested in Section 4.5 are tested in isolation to better understand their effects on the linking performance (Table 5). In general, the link-based NGD works slightly better than the relational facts in 6 out of 9 data sets (comparing row \"+NGD\" with row \"+REL\"). We hypothesize that the inferior results of REL may be due to the incompleteness of Freebase triples, which makes it less robust than NGD. We also combine the two by taking the average score, which performs best in most data sets (\"+BOTH\"), indicating that the two measures provide complementary sources of information.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 143, |
| "end": 151, |
| "text": "(Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coherence", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "To answer the question of how well VINCULUM performs overall, we conduct an end-to-end comparison against two publicly available systems with leading performance: 18 AIDA (Hoffart et al., 2011) : We use the recommended GRAPH variant of the AIDA package (Version 2.0.4) and are able to replicate their results when gold-standard mentions are given. Table 5: Performance (%) after re-ranking candidates using coherence scores, comparing two coherence measures (NGD and REL). \"no COH\": no coherence-based re-ranking is used. \"+BOTH\": an average of the two scores is used for re-ranking. Coherence in general helps: a combination of both measures often achieves the best effect, and NGD has a slight advantage over REL. Table 6: End-to-end performance (%): We compare VINCULUM at different stages with two state-of-the-art systems, AIDA and WIKIFIER. The column \"Overall\" lists the average performance over the nine data sets for each approach. CrossWikis appears to be a strong baseline. VINCULUM is 0.6% shy of WIKIFIER, with each winning in four data sets; AIDA tops both VINCULUM and WIKIFIER on AIDA-test.", |
| "cite_spans": [ |
| { |
| "start": 171, |
| "end": 173, |
| "text": "18", |
| "ref_id": null |
| }, |
| { |
| "start": 179, |
| "end": 201, |
| "text": "(Hoffart et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 356, |
| "end": 363, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 720, |
| "end": 727, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Overall Performance", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "WIKIFIER (Cheng and Roth, 2013) : We are able to reproduce the reported results on ACE and MSNBC and obtain a close enough B^3+ F1 number on TAC11 (82.4% vs 83.7%). Since WIKIFIER overgenerates mentions and produces links for common concepts, we restrict its output on the AIDA data to the mentions that Stanford NER predicts. Table 6 shows the performance of VINCULUM after each stage of candidate generation (CrossWikis), entity type prediction (+FIGER), coreference (+Coref) and coherence (+Coherence). The column \"Overall\" displays the average of the performance numbers over nine data sets for each approach. WIKIFIER achieves the highest overall performance. VINCULUM performs quite comparably, only 0.6% shy of WIKIFIER, despite its simplicity and unsupervised nature. Looking at the performance per data set, VINCULUM and WIKIFIER are each superior in 4 out of 9 data sets, while AIDA tops the performance only on AIDA-test. The performance of all the systems on TAC12 is generally lower than on the other datasets, mainly because of a low recall in the candidate generation stage.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 31, |
| "text": "(Cheng and Roth, 2013)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 327, |
| "end": 334, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Overall Performance", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "We notice that even using CrossWikis alone works pretty well, indicating a strong baseline for future comparisons. The entity type prediction provides the highest boost on performance, an absolute 1.7% increase, among other subcomponents. The coreference stage and the coherence stage also give a reasonable lift.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overall Performance", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "In terms of running time, VINCULUM runs reasonably fast. For a document with 20-40 entity mentions on average, VINCULUM takes only a few seconds to finish the linking process on a single thread.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Overall Performance", |
| "sec_num": "5.5" |
| }, |
| { |
| "text": "We outline the differences between the three system architectures in Table 7 . For identifying mentions to link, both VINCULUM and AIDA rely solely on NER-detected mentions, while WIKIFIER additionally includes common noun phrases and trains a classifier to determine whether a mention should be linked. For candidate generation, CrossWikis provides better coverage of entity mentions. For example, in Figure 3, we observe a recall of 93.2% at a cut-off of 30 by CrossWikis, outperforming the 90.7% of AIDA's dictionary. Further, Hoffart et al. (2011) report a precision of 65.84% using gold mentions on AIDA-test, while CrossWikis achieves a higher precision of 69.24%. Both AIDA and WIKIFIER use coarse NER types as features, while VINCULUM incorporates fine-grained types that lead to dramatically improved performance, as shown in Section 5.3. Figure 2: Components found to be most useful for VINCULUM are highlighted.", |
| "cite_spans": [ |
| { |
| "start": 527, |
| "end": 548, |
| "text": "Hoffart et al. (2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 76, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| }, |
| { |
| "start": 403, |
| "end": 409, |
| "text": "Figure", |
| "ref_id": null |
| }, |
| { |
| "start": 844, |
| "end": 852, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "Neither coreference nor coherence is crucial to performance, as each provides relatively small gains. Finally, VINCULUM is an unsupervised system whereas AIDA and WIKIFIER are trained on labeled data. Reliance on labeled data can often hurt performance through overfitting and/or inconsistent annotation guidelines; AIDA's lower performance on the TAC datasets, for instance, may be caused by the difference between the data/label distribution of its training data and that of the other datasets (e.g., CoNLL-2003 contains many scoreboard reports without complete sentences, and annotates metonymic mentions with more specific entities).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "We analyze the errors made by VINCULUM and categorize them into six classes (Table 8) . \"Metonymy\" consists of the errors where the mention is metonymic but the prediction links to its literal name. The errors in \"Wrong Entity Types\" are mainly due to the failure to recognize the correct entity type of the mention. In Table 8 's example, the link would have been right if FIGER had correctly predicted the airport type. The mistakes by the coreference system often propagate and lead to the errors under the \"Coreference\" category. The \"Context\" category indicates a failure of the linking system to take into account general contextual information not covered by the aforementioned categories. \"Specific Labels\" refers to the errors where the gold label is a specific instance of a general entity; this includes instances where the prediction is the parent company of the gold entity, or where the gold label is a township whereas the prediction is the corresponding city. \"Misc\" accounts for the rest of the errors. In the example, the location name appearing in the byline of a news article is usually a city name; VINCULUM, without knowledge of this convention, mistakenly links it to a state with the same name.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 76, |
| "end": 85, |
| "text": "(Table 8)", |
| "ref_id": "TABREF10" |
| }, |
| { |
| "start": 320, |
| "end": 327, |
| "text": "Table 8", |
| "ref_id": "TABREF10" |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "The distribution of errors shown in Table 9 provides valuable insights into VINCULUM's varying performance across the nine datasets. First, we observe a notably high percentage of metonymy-related errors. Since many of these errors are caused by incorrect type prediction by FIGER, improvements in type prediction for metonymic mentions can provide substantial gains in the future. The especially high percentage of metonymic mentions in the AIDA datasets thus explains VINCULUM's lower performance there (see Table 6 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 36, |
| "end": 43, |
| "text": "Table 9", |
| "ref_id": "TABREF11" |
| }, |
| { |
| "start": 510, |
| "end": 517, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "Second, we note that VINCULUM makes quite a number of \"Context\" errors on the TAC11 and TAC12 datasets. One possible reason is that when highly ambiguous mentions have been intentionally selected, link-based similarity and relational triples are insufficient for capturing the context. For example, in \"... while returning from Freeport to Portland. (TAC)\", the mention \"Freeport\" is not qualified by a state; one needs to know that it is more likely for both \"Freeport\" and \"Portland\" to be in the same state (i.e. Maine) to make a correct prediction 19 . Another reason may be TAC's higher percentage of Web documents; since contextual information is more scattered in Web text than in newswire documents, this increases the difficulty of context modeling. We leave a more sophisticated context model for future work (Chisholm and Hachey, 2015; Singh et al., 2012) .", |
| "cite_spans": [ |
| { |
| "start": 546, |
| "end": 548, |
| "text": "19", |
| "ref_id": null |
| }, |
| { |
| "start": 813, |
| "end": 840, |
| "text": "(Chisholm and Hachey, 2015;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 841, |
| "end": 860, |
| "text": "Singh et al., 2012)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "Since \"Specific Labels\", \"Metonymy\", and \"Wrong Entity Types\" correspond to the annotation issues discussed in Sections 3.2, 3.3, and 3.4, the distribution of errors is also useful in studying annotation inconsistencies. The fact that the errors vary considerably across the datasets (for instance, VINCULUM makes many more \"Specific Labels\" mistakes in ACE and MSNBC) strongly suggests that annotation guidelines have a considerable impact on the final performance. (Table 9: We analyze a random sample of 250 of VINCULUM's errors, categorize them into six classes, and display the frequencies of each type across the nine datasets.) We also observe that annotation inconsistencies cause reasonable predictions to be treated as mistakes,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "for example, AIDA predicts KB:West Virginia Mountaineers football for \"..., Alabama offered the job to Rich Rodriguez, but he decided to stay at West Virginia. (MSNBC)\" but the gold label is KB:West Virginia University.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System Analysis", |
| "sec_num": "5.6" |
| }, |
| { |
| "text": "Most related work has been discussed in the earlier sections; see Shen et al. (2014) for an EL survey. Two other papers deserve comparison. Cornolti et al. (2013) present a variety of evaluation measures and experimental results on five systems compared head-to-head. In a similar spirit, Hachey et al. (2014) provide an easy-to-use evaluation toolkit on the AIDA data set. In contrast, our analysis focuses on the problem definition and annotations, revealing the lack of consistent evaluation and of a clear annotation guideline. We also show an extensive set of experimental results conducted on nine data sets, as well as a detailed ablation analysis to assess each subcomponent of a linking system.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 84, |
| "text": "Shen et al. (2014)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 140, |
| "end": 162, |
| "text": "Cornolti et al. (2013)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 288, |
| "end": 308, |
| "text": "Hachey et al. (2014)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Despite recent progress in Entity Linking, the community has had little success in reaching an agreement on annotation guidelines or building a standard benchmark for evaluation. When complex EL systems are introduced, there are limited ablation studies for readers to interpret the results. In this paper, we examine 9 EL data sets and discuss the inconsistencies among them. To have a better understanding of an EL system, we implement a simple yet effective, unsupervised system, VINCULUM, and conduct extensive ablation tests to measure the relative impact of each component. From the experimental results, we show that a strong candidate generation component (CrossWikis) leads to a surprisingly good result; using fine-grained entity types helps filter out incorrect links; and finally, a simple unsupervised system like VINCULUM can achieve comparable performance with existing machine-learned linking systems and, therefore, is suitable as a strong baseline for future research. There are several directions for future work. We hope to catalyze agreement on a more precise EL annotation guideline that resolves the issues discussed in Section 3. We would also like to use crowdsourcing (Bragg et al., 2014) to collect a large set of these annotations for subsequent evaluation. Finally, we hope to design a joint model that avoids cascading errors from the current pipeline (Wick et al., 2013; Durrett and Klein, 2014) .", |
| "cite_spans": [ |
| { |
| "start": 1194, |
| "end": 1214, |
| "text": "(Bragg et al., 2014)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 1382, |
| "end": 1401, |
| "text": "(Wick et al., 2013;", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 1402, |
| "end": 1426, |
| "text": "Durrett and Klein, 2014)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Future Work", |
| "sec_num": "7" |
| }, |
| { |
| "text": "We use typewriter font, e.g., KB:Entity, to indicate an entity in a particular KB, and quotes, e.g., \"Mention\", to denote textual mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Transactions of the Association for Computational Linguistics, vol. 3, pp. 315-328, 2015. Action Editor: Kristina Toutanova. Submission batch: 11/2014; Revision batch: 3/2015; Published 6/2015. \u00a9 2015 Association for Computational Linguistics. Distributed under a CC-BY-NC-SA 4.0 license.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "An online appendix containing details of the data sets is available at https://github.com/xiaoling/vinculum/raw/master/appendix.pdf. Since the knowledge bases for all the data sets date from around 2011, we use the Wikipedia dump of 20110513.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that linking common noun phrases is closely related to Word Sense Disambiguation (Moro et al., 2014). We define named entity mention extensionally: any name uniquely referring to one entity of a predefined class, e.g. a specific person or location.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.cnts.ua.ac.be/conll2003/ner/annotation.txt", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://nlp.cs.rpi.edu/kbp/2014/elquery.pdf", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We notice that an entity often has multiple appropriate types, e.g. a school can be either an organization or a location depending on the context. We use Freebase to provide the entity types and map them appropriately to the target type set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Note that the representative mention in coreference resolution is not always the best mention for linking. When the representative mention contains a relative clause, we use the submention without the clause, which is favorable for candidate generation. When the representative mention is a location, a longer, non-conjunctive mention is preferred if possible. We also apply some heuristics to find organization acronyms, etc.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The mapping between Freebase and Wikipedia is provided at https://developers.google.com/freebase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "OpenNLP NP Chunker: opennlp.apache.org. Adopted from AIDA (Hoffart et al., 2011). https://www.googleapis.com/freebase/v1/search, restricted to no more than 220 candidates per query.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We also compared to another intra-Wikipedia dictionary (Table 3 in (Ratinov et al., 2011)). A recall of 86.85% and 88.67% is reported for ACE and MSNBC, respectively, at a cutoff level of 20. CrossWikis has a recall of 90.1% and 93.3% at the same cutoff.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The Freebase types \"/person/*\" are mapped to PER, \"/location/*\" to LOC, \"/organization/*\" plus a few others like \"/sports/sports team\" to ORG, and the rest to MISC. http://github.com/xiaoling/figer", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We are also aware of other systems such as TagMe-2 (Ferragina and Scaiella, 2012), DBpedia Spotlight (Mendes et al., 2011) and WikipediaMiner (Milne and Witten, 2008). A trial test on the AIDA data set shows that both Wikifier and AIDA outperform the other systems reported in (Cornolti et al., 2013), and therefore it is sufficient to compare with these two systems in the evaluation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "E.g., Cucerzan (2012) uses geo-coordinates as features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "Acknowledgements The authors thank Luke Zettlemoyer, Tony Fader, Kenton Lee, and Mark Yatskar for constructive suggestions on an early draft and all members of the LoudLab group and the LIL group for helpful discussions. We also thank the action editor and the anonymous reviewers for valuable comments. This work is supported in part by the Air Force Research Laboratory (AFRL) under prime contract no. FA8750-13-2-0019, ONR grant N00014-12-1-0211, a WRF / TJ Cable Professorship, a gift from Google, ARO grant number W911NF-13-1-0246, and by TerraSwarm, one of six centers of STARnet, a Semiconductor Research Corporation program sponsored by MARCO and DARPA. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of DARPA, AFRL, or the US government.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "acknowledgement", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Parallel task routing for crowdsourcing", |
| "authors": [ |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Bragg", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrey", |
| "middle": [], |
| "last": "Kolobov", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Second AAAI Conference on Human Computation and Crowdsourcing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jonathan Bragg, Andrey Kolobov, and Daniel S Weld. 2014. Parallel task routing for crowdsourcing. In Second AAAI Conference on Human Computation and Crowdsourcing.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Relational inference for wikification", |
| "authors": [ |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Cheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiao Cheng and Dan Roth. 2013. Relational inference for wikification. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Entity disambiguation with web links", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Chisholm", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Hachey", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "145--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Chisholm and Ben Hachey. 2015. Entity disambiguation with web links. Transactions of the Association for Computational Linguistics, 3:145-156.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A framework for benchmarking entity-annotation systems", |
| "authors": [ |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Cornolti", |
| "suffix": "" |
| }, |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Ferragina", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimiliano", |
| "middle": [], |
| "last": "Ciaramita", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 22nd international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "249--260", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marco Cornolti, Paolo Ferragina, and Massimiliano Ciaramita. 2013. A framework for benchmarking entity-annotation systems. In Proceedings of the 22nd international conference on World Wide Web, pages 249-260. International World Wide Web Conferences Steering Committee.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Constructing biological knowledge bases by extracting information from text sources", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Craven", |
| "suffix": "" |
| }, |
| { |
| "first": "Johan", |
| "middle": [], |
| "last": "Kumlien", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology (ISMB-1999)", |
| "volume": "", |
| "issue": "", |
| "pages": "77--86", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mark Craven and Johan Kumlien. 1999. Constructing biological knowledge bases by extracting information from text sources. In Proceedings of the Seventh International Conference on Intelligent Systems for Molecular Biology (ISMB-1999), pages 77-86.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Large-scale named entity disambiguation based on Wikipedia data", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Cucerzan", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "708--716", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Cucerzan. 2007. Large-scale named entity disambiguation based on Wikipedia data. In Proceedings of EMNLP-CoNLL, volume 2007, pages 708-716.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The MSR system for entity linking at TAC 2012", |
| "authors": [ |
| { |
| "first": "Silviu", |
| "middle": [], |
| "last": "Cucerzan", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Silviu Cucerzan. 2012. The MSR system for entity linking at TAC 2012. In Text Analysis Conference 2012.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "A joint model for entity analysis: Coreference, typing, and linking", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "477--490", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477-490.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Fast and accurate annotation of short texts with Wikipedia pages", |
| "authors": [ |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Ferragina", |
| "suffix": "" |
| }, |
| { |
| "first": "Ugo", |
| "middle": [], |
| "last": "Scaiella", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "IEEE Software", |
| "volume": "29", |
| "issue": "1", |
| "pages": "70--75", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Paolo Ferragina and Ugo Scaiella. 2012. Fast and accurate annotation of short texts with Wikipedia pages. IEEE Software, 29(1):70-75.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Grenager", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "363--370", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.R. Finkel, T. Grenager, and C. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pages 363-370. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Cheap and easy entity evaluation", |
| "authors": [ |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Hachey", |
| "suffix": "" |
| }, |
| { |
| "first": "Joel", |
| "middle": [], |
| "last": "Nothman", |
| "suffix": "" |
| }, |
| { |
| "first": "Will", |
| "middle": [], |
| "last": "Radford", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ben Hachey, Joel Nothman, and Will Radford. 2014. Cheap and easy entity evaluation. In ACL.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Joint Coreference Resolution and Named-Entity Linking with Multi-pass Sieves", |
| "authors": [ |
| { |
| "first": "Hannaneh", |
| "middle": [], |
| "last": "Hajishirzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Leila", |
| "middle": [], |
| "last": "Zilles", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, and Luke Zettlemoyer. 2013. Joint Coreference Resolution and Named-Entity Linking with Multi-pass Sieves. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "An entity-topic model for entity linking", |
| "authors": [ |
| { |
| "first": "Xianpei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Le", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "105--115", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xianpei Han and Le Sun. 2012. An entity-topic model for entity linking. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 105-115. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Learning entity representation for entity disambiguation", |
| "authors": [ |
| { |
| "first": "Zhengyan", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Shujie", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mu", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Ming", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Longkai", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Houfeng", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. ACL 2013", |
| "volume": "", |
| "issue": "", |
| "pages": "426--435", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhengyan He, Shujie Liu, Mu Li, Ming Zhou, Longkai Zhang, and Houfeng Wang. 2013a. Learning entity representation for entity disambiguation. Proc. ACL 2013. Zhengyan He, Shujie Liu, Yang Song, Mu Li, Ming Zhou, and Houfeng Wang. 2013b. Efficient collective entity linking with stacking. In EMNLP, pages 426-435.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Robust disambiguation of named entities in text", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Hoffart", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohamed", |
| "middle": [ |
| "A" |
| ], |
| "last": "Yosef", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilaria", |
| "middle": [], |
| "last": "Bordino", |
| "suffix": "" |
| }, |
| { |
| "first": "Hagen", |
| "middle": [], |
| "last": "F\u00fcrstenau", |
| "suffix": "" |
| }, |
| { |
| "first": "Manfred", |
| "middle": [], |
| "last": "Pinkal", |
| "suffix": "" |
| }, |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Spaniol", |
| "suffix": "" |
| }, |
| { |
| "first": "Bilyana", |
| "middle": [], |
| "last": "Taneva", |
| "suffix": "" |
| }, |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Thater", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "782--792", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Hoffart, Mohamed A. Yosef, Ilaria Bordino, Hagen F\u00fcrstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782-792. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Discovering emerging entities with ambiguous names", |
| "authors": [ |
| { |
| "first": "Johannes", |
| "middle": [], |
| "last": "Hoffart", |
| "suffix": "" |
| }, |
| { |
| "first": "Yasemin", |
| "middle": [], |
| "last": "Altun", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 23rd international conference on World wide web", |
| "volume": "", |
| "issue": "", |
| "pages": "385--396", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Johannes Hoffart, Yasemin Altun, and Gerhard Weikum. 2014. Discovering emerging entities with ambiguous names. In Proceedings of the 23rd international conference on World Wide Web, pages 385-396. International World Wide Web Conferences Steering Committee.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Knowledgebased weak supervision for information extraction of overlapping relations", |
| "authors": [ |
| { |
| "first": "Raphael", |
| "middle": [], |
| "last": "Hoffmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Congle", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "541--550", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raphael Hoffmann, Congle Zhang, Xiao Ling, Luke Zettlemoyer, and Daniel S Weld. 2011. Knowledge-based weak supervision for information extraction of overlapping relations. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 541-550.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Overview of the TAC 2010 knowledge base population track", |
| "authors": [ |
| { |
| "first": "Heng", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoa", |
| "middle": [ |
| "Trang" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| }, |
| { |
| "first": "Kira", |
| "middle": [], |
| "last": "Griffitt", |
| "suffix": "" |
| }, |
| { |
| "first": "Joe", |
| "middle": [], |
| "last": "Ellis", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heng Ji, Ralph Grishman, Hoa Trang Dang, Kira Griffitt, and Joe Ellis. 2010. Overview of the TAC 2010 knowledge base population track. In Text Analysis Conference (TAC 2010).", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Type-aware distantly supervised relation extraction with linked arguments", |
| "authors": [ |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Koch", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Gilmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mitchell Koch, John Gilmer, Stephen Soderland, and Daniel S Weld. 2014. Type-aware distantly supervised relation extraction with linked arguments. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Collective annotation of Wikipedia entities in web text", |
| "authors": [ |
| { |
| "first": "Sayali", |
| "middle": [], |
| "last": "Kulkarni", |
| "suffix": "" |
| }, |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Ganesh", |
| "middle": [], |
| "last": "Ramakrishnan", |
| "suffix": "" |
| }, |
| { |
| "first": "Soumen", |
| "middle": [], |
| "last": "Chakrabarti", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "457--466", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in web text. In Proceedings of the 15th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 457-466. ACM. Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, pages 1-54.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Mining evidences for named entity disambiguation", |
| "authors": [ |
| { |
| "first": "Yang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Chi", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Fangqiu", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Xifeng", |
| "middle": [], |
| "last": "Yan", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining", |
| "volume": "", |
| "issue": "", |
| "pages": "1070--1078", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yang Li, Chi Wang, Fangqiu Han, Jiawei Han, Dan Roth, and Xifeng Yan. 2013. Mining evidences for named entity disambiguation. In Proceedings of the 19th ACM SIGKDD international conference on Knowledge discovery and data mining, pages 1070-1078. ACM.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Fine-grained entity recognition", |
| "authors": [ |
| { |
| "first": "Xiao", |
| "middle": [], |
| "last": "Ling", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "AAAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiao Ling and Daniel S Weld. 2012. Fine-grained entity recognition. In AAAI.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Overview of the TAC 2012 knowledge base population track", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Mayfield", |
| "suffix": "" |
| }, |
| { |
| "first": "Javier", |
| "middle": [], |
| "last": "Artiles", |
| "suffix": "" |
| }, |
| { |
| "first": "Hoa", |
| "middle": [ |
| "Trang" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Mayfield, Javier Artiles, and Hoa Trang Dang. 2012. Overview of the TAC 2012 knowledge base population track. Text Analysis Conference (TAC 2012).", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Overview of the TAC knowledge base population track", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Mcnamee", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "T" |
| ], |
| "last": "Dang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Text Analysis Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. McNamee and H.T. Dang. 2009. Overview of the TAC knowledge base population track. Text Analysis Conference (TAC 2009).", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "DBpedia Spotlight: shedding light on the web of documents", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Pablo", |
| "suffix": "" |
| }, |
| { |
| "first": "Max", |
| "middle": [], |
| "last": "Mendes", |
| "suffix": "" |
| }, |
| { |
| "first": "Andr\u00e9s", |
| "middle": [], |
| "last": "Jakob", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Garc\u00eda-Silva", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Bizer", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 7th International Conference on Semantic Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "1--8", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pablo N Mendes, Max Jakob, Andr\u00e9s Garc\u00eda-Silva, and Christian Bizer. 2011. DBpedia Spotlight: shedding light on the web of documents. In Proceedings of the 7th International Conference on Semantic Systems, pages 1-8. ACM.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Learning to link with Wikipedia", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 17th ACM conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "509--518", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Milne and Ian H. Witten. 2008. Learning to link with Wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge management, pages 509-518. ACM.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Entity linking meets word sense disambiguation: A unified approach", |
| "authors": [ |
| { |
| "first": "Andrea", |
| "middle": [], |
| "last": "Moro", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Raganato", |
| "suffix": "" |
| }, |
| { |
| "first": "Roberto", |
| "middle": [], |
| "last": "Navigli", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrea Moro, Alessandro Raganato, and Roberto Navigli. 2014. Entity linking meets word sense disambiguation: A unified approach. Transactions of the Association for Computational Linguistics, 2.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Local and global algorithms for disambiguation to Wikipedia", |
| "authors": [ |
| { |
| "first": "Lev-Arie", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Doug", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACL", |
| "volume": "11", |
| "issue": "", |
| "pages": "1375--1384", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev-Arie Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to Wikipedia. In ACL, volume 11, pages 1375-1384.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Modeling relations and their mentions without labeled text", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Riedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Limin", |
| "middle": [], |
| "last": "Yao", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "ECML/PKDD (3)", |
| "volume": "", |
| "issue": "", |
| "pages": "148--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Riedel, Limin Yao, and Andrew McCallum. 2010. Modeling relations and their mentions without labeled text. In ECML/PKDD (3), pages 148-163.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Entity linking with a knowledge base: Issues, techniques, and solutions", |
| "authors": [ |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jianyong", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jiawei", |
| "middle": [], |
| "last": "Han", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "TKDE", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Wei Shen, Jianyong Wang, and Jiawei Han. 2014. Entity linking with a knowledge base: Issues, techniques, and solutions. TKDE.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Re-ranking for joint named-entity recognition and linking", |
| "authors": [ |
| { |
| "first": "Avirup", |
| "middle": [], |
| "last": "Sil", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Yates", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proceedings of the 22nd ACM international conference on Conference on information & knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "2369--2374", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Avirup Sil and Alexander Yates. 2013. Re-ranking for joint named-entity recognition and linking. In Proceedings of the 22nd ACM international conference on Conference on information & knowledge management, pages 2369-2374. ACM.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Wikilinks: A large-scale cross-document coreference corpus labeled via links to wikipedia", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Amarnag", |
| "middle": [], |
| "last": "Subramanya", |
| "suffix": "" |
| }, |
| { |
| "first": "Fernando", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "CMPSCI Technical Report", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Singh, Amarnag Subramanya, Fernando Pereira, and Andrew McCallum. 2012. Wikilinks: A large-scale cross-document coreference corpus labeled via links to wikipedia. Technical report, University of Massachusetts Amherst, CMPSCI Technical Report, UM-CS-2012-015.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "A crosslingual dictionary for english wikipedia concepts", |
| "authors": [ |
| { |
| "first": "Valentin", |
| "middle": [ |
| "I" |
| ], |
| "last": "Spitkovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Angel", |
| "middle": [ |
| "X" |
| ], |
| "last": "Chang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "3168--3175", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Valentin I Spitkovsky and Angel X Chang. 2012. A cross-lingual dictionary for english wikipedia concepts. In LREC, pages 3168-3175.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "A joint model for discovering and linking entities", |
| "authors": [ |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Wick", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Harshal", |
| "middle": [], |
| "last": "Pandya", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "CIKM Workshop on Automated Knowledge Base Construction (AKBC)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Michael Wick, Sameer Singh, Harshal Pandya, and Andrew McCallum. 2013. A joint model for discovering and linking entities. In CIKM Workshop on Automated Knowledge Base Construction (AKBC).", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Dynamic knowledge-base alignment for coreference resolution", |
| "authors": [ |
| { |
| "first": "Jiaping", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Vilnis", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinho", |
| "middle": [ |
| "D" |
| ], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "McCallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jiaping Zheng, Luke Vilnis, Sameer Singh, Jinho D. Choi, and Andrew McCallum. 2013. Dynamic knowledge-base alignment for coreference resolution. In Conference on Computational Natural Language Learning (CoNLL).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "The process of finding the best entity for a mention. All possible entities are sifted through as VINCULUM proceeds at each stage with a widening range of context in consideration. s(c_j | m, d) is based on the entity type compatibility, its coreference mentions, and other entity links around this mention. The candidate entity with the maximum score, i.e. l = arg max_{c \u2208 C_m} s(c | m, d), is picked as the predicted link of m.", |
| "num": null |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "M_d denotes the set of all the mentions detected in d and l_{m_i} (l_{m_j}) is one of the candidates of m_i (m_j). Instead of searching for the exact solution in a brute-force manner (O(|C|^{|M_d|}) where |C| = max_{m \u2208 M_d} |C_m|), we isolate each mention and greedily look for the best candidate by fixing the predictions of other mentions, allowing linear-time search (O(|C| \u2022 |M_d|)).", |
| "num": null |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "where P_d is the union of all intermediate links {p_m} in the document.", |
| "num": null |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "type_str": "figure", |
| "text": "Recall@k on an aggregate of nine data sets, comparing three candidate generation methods.", |
| "num": null |
| }, |
| "TABREF0": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Characteristics of the nine NEL data sets. Entity types: The AIDA data sets include named entities in four NER classes,", |
| "content": "<table><tr><td>Person (PER), Organization (ORG), Location (LOC) and Misc. In TAC KBP data sets, both Person (PER T ) and Organization entities</td></tr><tr><td>(ORG T ) are defined differently from their NER counterparts and geo-political entities (GPE), different from LOC, exclude places</td></tr><tr><td>like KB:Central California. KB (Sec. 2.2): The knowledge base used when each data was being developed. Evaluation</td></tr><tr><td>Metric (Sec. 2.3): Bag-of-Concept F1 is used as the evaluation metric in</td></tr></table>" |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Example 3 Joe Biden is the Senate President in the 113th United States Congress.", |
| "content": "<table><tr><td>Location</td><td>Person</td></tr><tr><td>TAC GPE (Geo-</td><td>TAC Person</td></tr><tr><td>political Entities)</td><td/></tr><tr><td/><td>Common Concepts</td></tr><tr><td/><td>E.g. Brain_Tumor,</td></tr><tr><td/><td>Desk, Water, etc.</td></tr><tr><td>TAC Organization</td><td/></tr><tr><td>Organization</td><td>Misc.</td></tr></table>" |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "begins with mention extraction. For each identified mention m, candidate entities C_m = {c_j} are generated for linking. VINCULUM assigns each candidate a linking score", |
| "content": "<table><tr><td>All possible entities</td><td>less context</td></tr><tr><td>Candidate Generation</td><td>Mention phrases</td></tr><tr><td>Entity Type</td><td>sentence</td></tr><tr><td>Coreference</td><td>document</td></tr><tr><td>Coherence</td><td>world knowledge</td></tr><tr><td>One most likely entity</td><td>more context</td></tr></table>" |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td>. TAC data sets are not</td></tr><tr><td>included because the mention strings are given in that</td></tr><tr><td>competition. The results indicate that at least 10% of</td></tr><tr><td>the gold-standard mentions are left out when NER,</td></tr></table>" |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Comparison of entity linking pipeline architectures. VINCULUM components are described in detail in Section 4, and correspond to", |
| "content": "<table/>" |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "to avoid a fifth successive defeat in 1996 at the hands of the All Blacks ... South Africa national rugby union team South Africa Wrong Entity Types Instead of Los Angeles International, for example, consider flying into Burbank or John Wayne Airport ... Bob Hope Airport Burbank, California Coreference It is about his mysterious father, Barack Hussein Obama, an imperious if alluring voice gone distant and then missing. Barack Obama Sr. Barack ObamaContextScott Walker removed himself from the race, but Green never really stirred the passions of former Walker supporters, nor did he garner outsized support \"outstate\".", |
| "content": "<table><tr><td>Category</td><td>Example</td><td>Gold Label</td><td>Prediction</td></tr><tr><td>Metonymy</td><td colspan=\"3\">South Africa managed Scott Walker (politician) Scott Walker (singer)</td></tr><tr><td>Specific Labels</td><td>What we like would be Seles , ( Olympic champion Lindsay ) Davenport and Mary Joe Fernandez .</td><td>1996 Summer Olympics</td><td>Olympic Games</td></tr><tr><td>Misc</td><td>NEW YORK 1996-12-07</td><td>New York City</td><td>New York</td></tr></table>" |
| }, |
| "TABREF10": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "We divide linking errors into six error categories and provide an example for each class.", |
| "content": "<table><tr><td>Error Category</td><td colspan=\"9\">TAC09 TAC10 TAC10T TAC11 TAC12 AIDA-dev AIDA-test ACE MSNBC</td></tr><tr><td colspan=\"7\">Metonymy Wrong Entity Types 13.3% 23.3% 20.0% 6.7% 10.0% 16.7% 0.0% 3.3% 0.0% 0.0% Coreference 30.0% 6.7% 20.0% 6.7% 3.3% Context 30.0% 26.7% 26.7% 70.0% 70.0% 13.3% 60.0% 6.7% 0.0% Specific Labels 6.7% 36.7% 16.7% 10.0% 3.3% 3.3% Misc 3.3% 6.7% 13.3% 6.7% 13.3% 16.7%</td><td colspan=\"3\">60.0% 10.0% 31.6% 5.0% 5.3% 20.0% 0.0% 0.0% 20.0% 16.7% 15.8% 15.0% 3.3% 36.9% 25.0% 10.0% 10.5% 15.0%</td></tr><tr><td colspan=\"2\"># of examined errors 30</td><td>30</td><td>30</td><td>30</td><td>30</td><td>30</td><td>30</td><td>19</td><td>20</td></tr></table>" |
| }, |
| "TABREF11": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "text": "Error analysis:", |
| "content": "<table/>" |
| } |
| } |
| } |
| } |