{ "paper_id": "D13-1029", "header": { "generated_with": "S2ORC 1.0.0", "date_generated": "2023-01-19T16:40:36.857742Z" }, "title": "Joint Coreference Resolution and Named-Entity Linking with Multi-pass Sieves", "authors": [ { "first": "Hannaneh", "middle": [], "last": "Hajishirzi", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "hannaneh@cs.washington.edu" }, { "first": "Leila", "middle": [], "last": "Zilles", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "lzilles@cs.washington.edu" }, { "first": "Daniel", "middle": [ "S" ], "last": "Weld", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "weld@cs.washington.edu" }, { "first": "Luke", "middle": [], "last": "Zettlemoyer", "suffix": "", "affiliation": { "laboratory": "", "institution": "University of Washington", "location": {} }, "email": "" } ], "year": "", "venue": null, "identifiers": {}, "abstract": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. 
Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.", "pdf_parse": { "paper_id": "D13-1029", "_pdf_hash": "", "abstract": [ { "text": "Many errors in coreference resolution come from semantic mismatches due to inadequate world knowledge. Errors in named-entity linking (NEL), on the other hand, are often caused by superficial modeling of entity context. This paper demonstrates that these two tasks are complementary. We introduce NECO, a new model for named entity linking and coreference resolution, which solves both problems jointly, reducing the errors made on each. NECO extends the Stanford deterministic coreference system by automatically linking mentions to Wikipedia and introducing new NEL-informed mention-merging sieves. Linking improves mention-detection and enables new semantic attributes to be incorporated from Freebase, while coreference provides better context modeling by propagating named-entity links within mention clusters. Experiments show consistent improvements across a number of datasets and experimental conditions, including over 11% reduction in MUC coreference error and nearly 21% reduction in F1 NEL error on ACE 2004 newswire data.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Abstract", "sec_num": null } ], "body_text": [ { "text": "Coreference resolution and named-entity linking are closely related problems, but have been largely studied in isolation. This paper demonstrates that they are complementary by introducing a simple joint model that improves performance on both tasks. Coreference resolution is the task of determining when two textual mentions name the same individual.
The biggest challenge in coreference resolution -accounting for 42% of errors in the state-of-the-art Stanford system -is the inability to reason effectively about background semantic knowledge (Lee et al., 2013) . For example, consider the sentence in Figure 1 . \"President\" refers to \"Donald Tsang\" and \"the park\" refers to \"Hong Kong Disneyland,\" but automated algorithms typically lack the background knowledge to draw such inferences. Incorporating knowledge is challenging, and many efforts to do so have actually hurt performance, e.g. (Lee et al., 2011; Durrett and Klein, 2013) .", "cite_spans": [ { "start": 548, "end": 566, "text": "(Lee et al., 2013)", "ref_id": "BIBREF14" }, { "start": 897, "end": 915, "text": "(Lee et al., 2011;", "ref_id": null }, { "start": 916, "end": 940, "text": "Durrett and Klein, 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [ { "start": 607, "end": 615, "text": "Figure 1", "ref_id": "FIGREF0" } ], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "Named-entity linking (NEL) is the task of matching textual mentions to corresponding entities in a knowledge base, such as Wikipedia or Freebase. Such links provide rich sources of semantic knowledge about entity attributes -Freebase includes president as Tsang's title and Disneyland as having the attribute park. But NEL is itself a challenging problem, and finding the correct link requires disambiguating based on the mention string and often non-local contextual features. For example, \"Michael Eisner\" is relatively unambiguous but the isolated mention \"Eisner\" is more challenging. However, these mentions could be clustered with a coreference model, allowing for improved NEL through link propagation from the easier mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We present NECO, a new algorithm for jointly solving named entity linking and coreference resolution.
Our work is related to that of Ratinov and Roth (2012) , which also uses knowledge derived from an NEL system to improve coreference. However, NECO is, to our knowledge, the first joint model; it is purely deterministic with no learning phase, performs automatic mention detection, and improves performance on both tasks.", "cite_spans": [ { "start": 133, "end": 156, "text": "Ratinov and Roth (2012)", "ref_id": "BIBREF24" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "NECO extends Stanford's sieve-based model, in which a high-recall mention detection phase is followed by a sequence of cluster merging operations ordered by decreasing precision (Raghunathan et al., 2010; Lee et al., 2013) . At each step, it merges two clusters only if all available information about their respective entities is consistent. We use NEL to increase recall during the mention detection phase and introduce two new cluster-merging sieves, which compare the Freebase attributes of entities.
NECO also improves NEL by initially favoring high-precision linking results and then propagating links and attributes as clusters are formed.", "cite_spans": [ { "start": 182, "end": 208, "text": "(Raghunathan et al., 2010;", "ref_id": "BIBREF22" }, { "start": 209, "end": 226, "text": "Lee et al., 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "In summary, we make the following contributions:", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We introduce NECO, a novel, joint approach to solving coreference and NEL, demonstrating that these tasks are complementary by achieving joint error reduction.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 We present experiments showing improved performance at coreference resolution, given both gold and automatic mention detection: e.g., a 6.2 point improvement in MUC recall on ACE 2004 newswire text and a 3.1 point improvement in MUC precision on the CoNLL 2011 test set.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "\u2022 NECO also leads to better performance at named-entity linking, given both gold and automatic linking, improving F1 from 61.7% to 69.2% on a newly labeled test set. 
1", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Introduction", "sec_num": "1" }, { "text": "We make use of existing models for coreference resolution and named entity linking.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "1 Our corpus and the source code for NECO can be downloaded from https://www.cs.washington.edu/research-projects/nlp/neco.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Background", "sec_num": "2" }, { "text": "Coreference resolution is the task of identifying all text spans (called mentions) that refer to the same entity, forming mention clusters.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "2.1" }, { "text": "Stanford's Sieve Model is a state-of-the-art coreference resolver comprising a pipeline of \"sieves\" that merge coreferent mentions according to deterministic rules. Mentions are automatically predicted by selecting all noun phrases (NP), pronouns, and named entities. 
Each sieve either merges a cluster with its single best antecedent from a list of previous clusters, or declines to merge.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "2.1" }, { "text": "Higher-precision sieves are applied earlier in the pipeline. In order, the sieves examine different aspects of the text, including: (1) speaker identification, (2-3) exact and relaxed string matches between mentions, (4) precise constructs, including appositives, acronyms and demonyms, (5-9) different notions of strict and relaxed head matches between mentions, and finally (10) a number of syntactic and distance cues for pronoun resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Coreference Resolution", "sec_num": "2.1" }, { "text": "Named-entity linking (NEL) is the task of identifying mentions in a text and linking them to the entity they name in a knowledge base, usually Wikipedia. NECO uses two existing NEL systems: GLOW (Ratinov et al., 2011) and WikipediaMiner (Milne and Witten, 2008) .", "cite_spans": [ { "start": 195, "end": 217, "text": "(Ratinov et al., 2011)", "ref_id": "BIBREF25" }, { "start": 238, "end": 262, "text": "(Milne and Witten, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Named Entity Linking", "sec_num": "2.2" }, { "text": "WikipediaMiner links mentions based on a notion of semantic similarity to Wikipedia pages, considering all substrings up to a fixed length. Since there are often many possible links, it disambiguates by choosing the entity whose Wikipedia page is most semantically related to the nearby context of the mention. 
The semantic scoring function includes n-gram statistics and also counts shared links to other unambiguous mentions in the text.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named Entity Linking", "sec_num": "2.2" }, { "text": "GLOW finds mentions by selecting all the NPs and named entities in the text. Linking is framed as an integer linear programming optimization problem that takes into account similar local constraints but also includes global constraints such as entity link co-occurrence.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named Entity Linking", "sec_num": "2.2" }, { "text": "Both systems return confidence values. To maintain high precision, NECO uses an ensemble of GLOW and WikipediaMiner, selecting only high-confidence links.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named Entity Linking", "sec_num": "2.2" }, { "text": "[Figure 2 notation: let Exemplar (c) be a representative mention of the cluster c, computed as defined below; let c j be an antecedent cluster of c i if c j has a mention which is before the first mention of c i ; let l(m) be a Wikipedia page linked to mention m, or \u2205 if there is no link; let l(c) be a Wikipedia page linked to mention Exemplar (c), or \u2205 if there is no link. Step 1 of the algorithm initializes linked mentions; Step 2 merges clusters.]", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Named Entity Linking", "sec_num": "2.2" }, { "text": "We introduce a joint model for coreference resolution and NEL. Building on the Stanford sieve architecture, our algorithm incrementally constructs clusters of mentions using deterministic coreference rules under NEL constraints. Figure 2 presents the complete algorithm. 
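To make the overall control flow concrete, here is a minimal Python sketch of a sieve loop of this kind (an illustrative simplification, not the released NECO code; the exemplar rule and the single toy sieve are hypothetical):

```python
def exemplar(cluster):
    # Toy exemplar rule: the earliest mention stands in for the cluster.
    return cluster[0]

def consistent(c1, c2, link):
    # NEL constraint (Sec. 3.4): clusters linked to different
    # entities must never be merged.
    l1, l2 = link.get(exemplar(c1)), link.get(exemplar(c2))
    return l1 is None or l2 is None or l1 == l2

def exact_nel_sieve(cluster, antecedents, link):
    # Toy "Exact NEL sieve": return an antecedent whose exemplar shares
    # the cluster's Wikipedia link, or None to decline the merge.
    target = link.get(exemplar(cluster))
    if target is None:
        return None
    for a in antecedents:
        if link.get(exemplar(a)) == target:
            return a
    return None

def joint_coref_nel(mentions, sieves, link):
    clusters = [[m] for m in mentions]      # start from singletons
    for sieve in sieves:                    # ordered by decreasing precision
        merged = []
        for c in clusters:
            best = sieve(c, merged, link)
            if best is not None and consistent(best, c, link):
                best.extend(c)              # merge c into its antecedent
                ex = exemplar(best)
                if link.get(ex) is not None:
                    for m in best:          # propagate the exemplar's link
                        link.setdefault(m, link[ex])
            else:
                merged.append(c)
        clusters = merged
    return clusters
```

For example, with link = {'Michael Eisner': 'Michael_Eisner', 'Eisner': 'Michael_Eisner'}, the toy sieve clusters 'Michael Eisner' with 'Eisner' while 'Disney' stays a singleton, mirroring the link-propagation motivation above.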
The input to NECO is a document and the output is a set C of coreference clusters, with links l(c) to Wikipedia pages for a subset of the clusters c \u2208 C.", "cite_spans": [], "ref_spans": [ { "start": 229, "end": 237, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Joint Coreference and Linking", "sec_num": "3" }, { "text": "Step 1 detects mentions, merging the outputs of the base systems (Sec. 3.1).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Coreference and Linking", "sec_num": "3" }, { "text": "Step 2 repeatedly merges coreference clusters, while ensuring that NEL constraints (Sec. 3.4) are satisfied. It uses the original Stanford sieves and also two new NEL-informed sieves (Sec. 3.6). NEL links are propagated to new clusters as they are formed (Sec. 3.5).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Joint Coreference and Linking", "sec_num": "3" }, { "text": "In Steps 1(a-c) in Fig. 2 , NECO combines mentions from the base coreference and NEL systems.", "cite_spans": [], "ref_spans": [ { "start": 19, "end": 25, "text": "Fig. 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Mention Detection", "sec_num": "3.1" }, { "text": "Let M CR be the set of mentions returned by using Stanford's rule-based mention detection algorithm (Lee et al., 2013) . Let M N EL be the set of mentions output by the two NEL systems. NECO creates an initial set of mentions, M , by taking the union of all the mentions in M N EL and M CR . In practice, taking the union increases diversity in the mention pool. For example, it is often the case that M N EL will include sub-phrases such as \"Suharto\" when they are part of a larger mention \"ex-dictator Suharto\" that is detected in M CR .", "cite_spans": [ { "start": 100, "end": 118, "text": "(Lee et al., 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [], "eq_spans": [], "section": "Mention Detection", "sec_num": "3.1" }, { "text": "Step 1(d) in Fig. 
2 assigns Wikipedia links to a subset of the detected mentions.", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 19, "text": "Fig. 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "For mentions m output by the base NEL systems, we assign an exact link l(m) if the entire mention span is linked. Mentions m\u2032 that differ from an exact linked mention m by only a pre- or post-fix stop word are similarly assigned exact links l(m\u2032) = l(m). For example, the mention \"the president\" will be assigned the same link as \"president\" but \"The governor of Alaska Sarah Palin\" would not be assigned an exact link to Sarah Palin.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "For mentions m\u2032 that do not receive an exact link, we assign a head link h(m\u2032) if the head word m of m\u2032 has been linked, by setting h(m\u2032) = l(m). For instance, the head link for the mention \"President Clinton\" (with \"Clinton\" as head word) will be the Wikipedia title of Bill Clinton. We use head links for the Relaxed NEL sieve (Sec. 
3.6).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "Next, we define L(m) to be the set containing l(m) and l(m\u2032) for all sub-phrases m\u2032 of m.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "We add the sub-phrase links only if their confidence is higher than the confidence for l(m).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "For instance, assuming appropriate confidence values, L(m) would include the pages for {List of governors of Alaska, Alaska, Sarah Palin} given the mention \"The governor of Alaska Sarah Palin.\" We will use L(m) for NEL constraints and filtering (Sec. 3.4).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "After updating the entity links for all mentions, NECO prunes spurious mentions that begin or end with a stop word when the remaining subexpression of the mention exists in M. It also removes time expressions and numbers from M if they are not included in M N EL .", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Entity Links and Pruning", "sec_num": "3.2" }, { "text": "Step 1(e) in Fig. 2 also assigns attributes for a mention m linked to Wikipedia page l(m), at both coarse and fine-grained levels, based on information from the Freebase entry corresponding to exact link l(m) or head link h(m).", "cite_spans": [], "ref_spans": [ { "start": 13, "end": 19, "text": "Fig. 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Mention Attributes", "sec_num": "3.3" }, { "text": "The coarse attributes include gender, type, and NER classes such as PERSON, LOCATION, and ORGANIZATION. 
These attributes are part of the original Stanford coreference system and are used to avoid merging conflicting clusters. We use the Freebase values for these attributes when available. For instance, if the linked entity contains the Freebase type location or organization, we set the coarse type to LOCATION or ORGANIZATION respectively. In order to account for both links to specific people (Barack Obama) and generic links to positions held by people (President), we include the type PERSON if the linked entity has any of the Freebase types person, job title, or government office or title. If no coarse Freebase types are available for an attribute, we default to predicted NER classes.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Mention Attributes", "sec_num": "3.3" }, { "text": "We add fine-grained attributes from Freebase and Wikipedia by importing additional type information. We use all of the Freebase notable types, a set of hundreds of commonly used Freebase types, ranging from us president to tropical cyclone and synthpop album. We also include all of the Wikipedia categories, on average six per entity. For example, the mention \"Indonesia\" is assigned fine-grained attributes such as book subject, military power, and olympic participating country. Since many of these fine-grained attributes are extremely specific, we use the last word of each attribute to define an additional fine-grained attribute (see Fig. 3 ). These fine-grained attributes are used in the Relaxed NEL sieve (Sec. 3.6).", "cite_spans": [], "ref_spans": [ { "start": 641, "end": 647, "text": "Fig. 3", "ref_id": "FIGREF3" } ], "eq_spans": [], "section": "Mention Attributes", "sec_num": "3.3" }, { "text": "While applying sieves to merge clusters in Step 2(a) of Figure 2 , NECO uses NEL constraints to eliminate some otherwise acceptable merges.", "cite_spans": [], "ref_spans": [ { "start": 43, "end": 51, "text": "Figure 2", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "NEL Constraints", "sec_num": "3.4" }, { "text": "We avoid merging inconsistent clusters that link to different entities. Clusters c i and c j are inconsistent if both are linked (i.e., both clusters have non-null entity assignments) and l(c i ) \u2260 l(c j ) or h(c i ) \u2260 h(c j ). Also, in order to consider an antecedent cluster c as a merge candidate, we require a pair of entities in the set of linked entities L(c) to be related to one another in Freebase. Two entities are related in Freebase if they both appear in a relation; for example, Bill Clinton and Arkansas are related because Bill Clinton has a \"governor-of\" relation with Arkansas.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NEL Constraints", "sec_num": "3.4" }, { "text": "When two clusters c i and c j are merged to form a new cluster c k , the entity link information L(c k ), l(c k ), and h(c k ) must be updated (Step 2 of Fig. 2) . We set L(c k ) to the union of the linked entities found in l(c i ) and l(c j ) and merge coarse attributes at this point.", "cite_spans": [], "ref_spans": [ { "start": 154, "end": 161, "text": "Fig. 2)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "Merging Clusters and Update Entity Links", "sec_num": "3.5" }, { "text": "In order to set the exact and head entity links l(c k ) and h(c k ), we use the exemplar mention Exemplar (c k ) that denotes the most representative mention of the cluster. Exemplar (c) is selected according to a set of rules in the Stanford system, based on textual position and mention type (proper noun vs. common). 
We augment this function by considering information from exact and head entity links as well. Preference is given to mentions that appear earlier in the text, to proper mentions, and to mentions that have exact or head named-entity links. Given exemplars, we set l(c k ) = l(Exemplar (c k )) and h(c k ) = h(Exemplar (c k )).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Merging Clusters and Update Entity Links", "sec_num": "3.5" }, { "text": "Finally, we introduce two new sieves that use NEL information at the beginning and end of the Stanford sieves pipeline in the merging stage (Step 2 of Fig. 2) .", "cite_spans": [], "ref_spans": [ { "start": 151, "end": 158, "text": "Fig. 2)", "ref_id": "FIGREF2" } ], "eq_spans": [], "section": "NEL Sieves", "sec_num": "3.6" }, { "text": "Exact NEL sieve The Exact NEL sieve merges two clusters c i and c j if both are linked and their links match, l(c i ) = l(c j ). For example, all mentions that have been linked to Barack Obama will become members of the same coreference cluster. Because the Exact NEL sieve has high precision, we place it at the very beginning of the pipeline.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NEL Sieves", "sec_num": "3.6" }, { "text": "Relaxed NEL sieve The Relaxed NEL sieve uses fine-grained attributes of the linked mentions to merge proper nouns with common nouns when they share attributes. For example, this sieve is able to merge the proper mention \"Disneyland\" with \"the mysterious park\", because park is one of the fine-grained attributes assigned to Disneyland.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NEL Sieves", "sec_num": "3.6" }, { "text": "More formally, let m i = Exemplar (c i ) and m j = Exemplar (c j ). 
For every common noun mention m i , we merge c i with an antecedent cluster c j if (1) m j is a linked proper noun, (2) m i or the title of its linked Wikipedia page is in the list of fine-grained attributes of m j , or (3) h(m j ) is related to the head link h(m i ) according to Freebase, as defined above.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NEL Sieves", "sec_num": "3.6" }, { "text": "Because this sieve has low precision, we only allow merges between mentions that are at most three sentences apart. We add the Relaxed NEL sieve near the end of the pipeline, just before pronoun resolution.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "NEL Sieves", "sec_num": "3.6" }, { "text": "Core Components and Baselines The Stanford sieve-based coreference system (Lee et al., 2013) , the GLOW NEL system (Ratinov et al., 2011) , and WikipediaMiner (Milne and Witten, 2008) provide core functionality for our joint model, and are also the state-of-the-art baselines against which we measure performance.", "cite_spans": [ { "start": 74, "end": 92, "text": "(Lee et al., 2013)", "ref_id": "BIBREF14" }, { "start": 115, "end": 137, "text": "(Ratinov et al., 2011)", "ref_id": "BIBREF25" }, { "start": 159, "end": 183, "text": "(Milne and Witten, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Parameter Settings Based on performance on the development set, we set GLOW's confidence parameter to 1.0 and WikipediaMiner's to 0.4 to ensure high-precision NEL. 
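This confidence thresholding can be sketched as a simple filtering step; the candidate-tuple format and system keys below are hypothetical illustrations, not the actual GLOW or WikipediaMiner APIs:

```python
# Development-set thresholds for the high-precision NEL ensemble.
THRESHOLDS = {'glow': 1.0, 'wikiminer': 0.4}

def high_confidence_links(candidates):
    """candidates: list of (system, mention, wikipedia_title, confidence).

    Keeps only links whose confidence clears the corresponding
    system's threshold, retaining the best surviving link per mention.
    """
    best = {}
    for system, mention, title, conf in candidates:
        if conf >= THRESHOLDS[system]:
            if mention not in best or conf > best[mention][1]:
                best[mention] = (title, conf)
    return {m: title for m, (title, _) in best.items()}
```

With these thresholds, for example, a WikipediaMiner candidate at confidence 0.3 is discarded while one at 0.5 survives.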
We also optimized the set of fine-grained attributes imported from Wikipedia and Freebase, and the way NEL constraints are incorporated into the sieve architecture.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Datasets We report results on the following three datasets: ACE2004-NWIRE, CONLL2011, and ACE2004-NWIRE-NEL. ACE2004-NWIRE, the newswire subset of the ACE 2004 corpus (NIST, 2004) , includes 128 documents. The CONLL2011 coreference dataset includes text from five different domains: broadcast conversation (BC), broadcast news (BN), magazine (MZ), newswire (NW), and web data (WB) (Pradhan et al., 2011) . The broadcast conversation and broadcast news domains consist of transcripts, whereas magazine and newswire contain more standard written text. The development data includes 303 documents and the test data includes 322 documents.", "cite_spans": [ { "start": 167, "end": 179, "text": "(NIST, 2004)", "ref_id": "BIBREF18" }, { "start": 381, "end": 403, "text": "(Pradhan et al., 2011)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "We created ACE2004-NWIRE-NEL by taking a subset of ACE2004-NWIRE and annotating it with gold-standard entity links. We segment and link all the expressions in text that refer to Wikipedia pages, allowing for nested linking. For instance, both the phrase \"Hong Kong Disneyland,\" and the sub-phrase \"Hong Kong\" are linked. This dataset includes 12 documents and 350 linked entities.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "Metrics We evaluate our system using MUC (Vilain et al., 1995) , B 3 (Bagga and Baldwin, 1998) , and pairwise scores. 
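For concreteness, the B 3 computation can be sketched as follows (a simplified illustration, not the official scorer; it assumes the predicted and gold mention sets coincide):

```python
def b_cubed(predicted, gold):
    """predicted, gold: lists of clusters, each a list of mention ids.

    For each mention, score the overlap between its predicted and
    gold clusters; B3 precision and recall average these scores.
    """
    pred_of = {m: frozenset(c) for c in predicted for m in c}
    gold_of = {m: frozenset(c) for c in gold for m in c}
    mentions = list(gold_of)
    prec = sum(len(pred_of[m] & gold_of[m]) / len(pred_of[m])
               for m in mentions) / len(mentions)
    rec = sum(len(pred_of[m] & gold_of[m]) / len(gold_of[m])
              for m in mentions) / len(mentions)
    return prec, rec
```

For predicted clusters {1, 2} and {3} against a single gold cluster {1, 2, 3}, this gives B3 precision 1.0 and recall 5/9, reflecting the metric's reward for pure (even over-split) clusters.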
MUC is a link-based metric which measures how many clusters need to be merged to cover the gold clusters and favors larger clusters; B 3 computes the proportion of intersection between predicted and gold clusters for every mention and favors singletons (Recasens and Hovy, 2010) . We computed the scores using the Stanford coreference software for ACE2004 and using the CoNLL scorer for the CoNLL 2011 dataset.", "cite_spans": [ { "start": 41, "end": 62, "text": "(Vilain et al., 1995)", "ref_id": null }, { "start": 69, "end": 94, "text": "(Bagga and Baldwin, 1998)", "ref_id": "BIBREF0" }, { "start": 371, "end": 396, "text": "(Recasens and Hovy, 2010)", "ref_id": "BIBREF27" } ], "ref_spans": [], "eq_spans": [], "section": "Experimental Setup", "sec_num": "4" }, { "text": "We first look at NECO's performance at coreference resolution and then evaluate its ability at NEL.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Experimental Results", "sec_num": "5" }, { "text": "Overall System Performance on ACE Data Table 1 shows NECO's performance at coreference resolution on ACE2004 compared to the Stanford sieve implementation (Lee et al., 2013) . The table shows that NECO significantly improves both precision and recall compared to the Stanford baseline, across all metrics. We generally observe larger gains in MUC due to better mention detection and the Relaxed NEL Sieve. Table 1 also details the performance of four variants of our system that ablate various components and features. Specifically, we consider the following cases:", "cite_spans": [ { "start": 156, "end": 174, "text": "(Lee et al., 2013)", "ref_id": "BIBREF14" } ], "ref_spans": [ { "start": 411, "end": 418, "text": "Table 1", "ref_id": null } ], "eq_spans": [], "section": "Coref. Results with Predicted Mentions", "sec_num": "5.1" }, { "text": "\u2022 No NEL Mentions: We discard additional mentions, M N EL , provided by NEL (Sec. 3.1). 
This increases B 3 precision at the expense of recall. Inspection shows that some of the errors introduced by M N EL are actually due to correctly linked entities that were not annotated as mentions in the dataset; others, however, come from improperly linked mentions.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution of System Components", "sec_num": null }, { "text": "\u2022 No Mention Pruning: We disable the initial step of updating mention boundaries and removing spurious mentions (Sec. 3.2). As expected, removing this step drops precision and recall significantly, even compared to the No NEL Mentions variant.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution of System Components", "sec_num": null }, { "text": "\u2022 No Attributes: Ablating coarse and fine-grained attributes (Sec. 3.3) drops F1 and recall across all metrics. To understand this effect, note that NECO uses attributes in two different settings. Updating coarse attributes tends to increase precision because it prevents dangerous merges, such as merging \"Staples\" with the mention \"it\" in situations where \"Staples\" refers to the person entity Todd Staples. Fine-grained attributes also help with recall, when merging a specific name of an entity with a mention that uses a more general term; for instance, \"Hong Kong Disneyland\" can be merged with \"the mysterious park\" because \"park\" is a fine-grained attribute for Disneyland. However, when fine-grained attributes are used, precision sometimes drops (e.g., \"president\" might merge with \"Bush\" when it should really merge with \"Clinton\").", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution of System Components", "sec_num": null }, { "text": "\u2022 No NEL Constraints: Removing these constraints (Sec. 3.4) drops precision dramatically, leading to a drop in F1. In the case of incorrect linking, however, NEL constraints can hurt recall. 
For instance, NEL constraints might prevent merging \"Staples\" with \"Todd Staples\" if the former were linked to the company and the latter to the politician.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Contribution of System Components", "sec_num": null }, { "text": "We also compare our full system (with added NEL sieves, constraints, and mention pruning 3 ) with the Stanford sieve coreference system on CoNLL data (Table 2) . [Table 2 caption: Coreference results on CoNLL 2011 development and test data, using predicted mentions. Rows denoted with * indicate runs using the fully automated Stanford CoreNLP pipeline rather than the predicted annotations provided with the CoNLL data. Given the relatively close results, we ran the Mann-Whitney U test for this table; values with the + superscript are significant with p < 0.05.] We ran NECO and the baseline in two settings: in the first, we use the standard predicted annotations (for POS, parses, NER, and speaker tags) provided with the CoNLL data, and in the second, we use the automated Stanford CoreNLP pipeline to predict this information. On both the development and test sets, we gain about 1 point in MUC F1 as well as a smaller improvement in B 3 . Closer inspection indicates that our system increases precision primarily due to mention pruning and NEL constraints. Due to the differences in mention annotation guidelines between ACE and CoNLL, performance on ACE benefits more from improved mention detection from NEL. Moreover, the ACE corpus is all newswire text, which contains more entities that can benefit from linking. 
CoNLL, on the other hand, contains a wider variety of texts, some of which do not mention many named entities in Wikipedia.", "cite_spans": [], "ref_spans": [ { "start": 150, "end": 157, "text": "Table 2", "ref_id": null }, { "start": 545, "end": 554, "text": "(Table 2)", "ref_id": null } ], "eq_spans": [], "section": "Overall System Performance on CoNLL Data", "sec_num": null }, { "text": "To examine the performance of our system on the different domains covered by the CoNLL data, we also test our system on each domain separately (Table 3). We found NEL provided the biggest improvement for the news domains, broadcast news (BN) and newswire (NW). These domains especially benefit from the improved mention detection and pruning provided by NEL, and strong linking benefited both precision and recall in these domains. We found that the magazine (MZ) section of the corpus benefited the least from NEL, as there were relatively few entities that our NEL systems were able to connect to Wikipedia.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Overall System Performance on CoNLL Data", "sec_num": null }, { "text": "Some of the errors introduced in our system are due to incorrect or incomplete links discovered by the automatic linking system. To assess the effect of NEL performance on NECO, we tested on a portion of the ACE2004-NWIRE dataset for which we hand-labeled correct links for the gold and predicted mentions. \"NECO + Gold NEL\" denotes a version of our system which uses gold links instead of those predicted by NEL. As shown in Table 4, gold linking significantly improves the performance of our system across all measures.
This suggests that further work to improve automatic NEL may have substantial reward.", "cite_spans": [], "ref_spans": [ { "start": 421, "end": 428, "text": "Table 4", "ref_id": "TABREF7" } ], "eq_spans": [], "section": "Coreference Results with Gold Linking", "sec_num": "5.2" }, { "text": "Haghighi and Klein (2009): 77.0 75.9 76.5 / 79.4 74.5 76.9 / 66.9 49.2 56.7; Poon and Domingos (2008): 71.3 70.5 70.9 / --- / 62.6 38.9 48.0; Finkel and Manning (2008): 78.7 58.5 67.1 / 86.8 65.2 74.5 / 76.1 44.2 55.9. Gold linking improves precision for two main reasons. First, it reduces the coreference errors caused by incorrect NEL links. For instance, gold linking replaces the erroneous link generated by our NEL systems for \"Nasser al-Kidwa\" to the correct Wikipedia entity. As another example, two mentions of \"Rutgers\" will not be merged if one links to the university and the other links to its football team. Second, gold linking leads to better mention detection and better linked mentions. For instance, under gold linking, the whole mention, \"The governor of Alaska, Sarah Palin,\" is linked to the politician, while automatic linking systems only link the substring containing her name, \"Sarah Palin.\" Still, gold NEL cannot compensate for all coreference errors in cases of generic or unlinked entities.", "cite_spans": [ { "start": 50, "end": 75, "text": "Haghighi and Klein (2009)", "ref_id": "BIBREF8" }, { "start": 121, "end": 145, "text": "Poon and Domingos (2008)", "ref_id": "BIBREF20" }, { "start": 179, "end": 204, "text": "Finkel and Manning (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [], "eq_spans": [], "section": "Coreference Results with Gold Linking", "sec_num": "5.2" }, { "text": "Many of the previous papers evaluate coreference resolution assuming gold mentions, so we also run under that condition (Table 5) using ACE2004-NWIRE data.
As the table shows, with gold mentions our system outperforms Haghighi and Klein (2009), Poon and Domingos (2008), Finkel and Manning (2008), and the Stanford sieve algorithm across all metrics. Our method shows a relatively smaller gain in precision, because this condition adds no benefit to our technique of using NEL information for pruning mentions.", "cite_spans": [ { "start": 217, "end": 242, "text": "Haghighi and Klein (2009)", "ref_id": "BIBREF8" }, { "start": 245, "end": 269, "text": "Poon and Domingos (2008)", "ref_id": "BIBREF20" }, { "start": 272, "end": 297, "text": "Finkel and Manning (2008)", "ref_id": "BIBREF6" } ], "ref_spans": [ { "start": 119, "end": 128, "text": "(Table 5)", "ref_id": "TABREF8" } ], "eq_spans": [], "section": "Coreference Results with Gold Mentions", "sec_num": "5.3" }, { "text": "While our previous experiments show that named-entity linking can improve coreference resolution, we now address the question of whether coreference techniques can help NEL. We compare NECO with a baseline ensemble\u2074 composed of GLOW (Ratinov et al., 2011) and WikipediaMiner (Milne and Witten, 2008) on our ACE2004-NWIRE-NEL dataset (Table 6). Our system gains about 8% in absolute recall and 5% in absolute precision. For instance, our system correctly adds links from \"Bullock\" to the entity Sandra Bullock because coreference resolution merges two mentions. In another example, it correctly links \"company\" to Nokia.
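The \"Bullock\" example reflects link propagation at merge time. A minimal sketch under an assumed dict representation (the real system also updates attributes during a merge, per Sec. 3.5):

```python
def merge_with_link_propagation(c_i, c_j):
    """Merge two coreference clusters, propagating whichever entity link
    exists; NEL constraints have already excluded two conflicting links."""
    return {
        "mentions": c_j["mentions"] + c_i["mentions"],
        "link": c_j.get("link") or c_i.get("link"),
    }

linked = {"mentions": ["Sandra Bullock"], "link": "Sandra_Bullock"}
unlinked = {"mentions": ["Bullock"]}
merged = merge_with_link_propagation(unlinked, linked)
assert merged["link"] == "Sandra_Bullock"  # "Bullock" is now linked too
```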
Overall, there is a 21% relative reduction in F1 error.", "cite_spans": [ { "start": 233, "end": 255, "text": "(Ratinov et al., 2011)", "ref_id": "BIBREF25" }, { "start": 275, "end": 299, "text": "(Milne and Witten, 2008)", "ref_id": "BIBREF15" } ], "ref_spans": [ { "start": 333, "end": 342, "text": "(Table 6", "ref_id": null } ], "eq_spans": [], "section": "Improving Named Entity Linking", "sec_num": "5.4" }, { "text": "Table 6: NEL performance of our system and the ensemble baseline linker on ACE2004-NWIRE-NEL (F1 / Precision / Recall): NECO 70.6 / 72.0 / 69.2; Baseline NEL 64.4 / 67.4 / 61.7.", "cite_spans": [], "ref_spans": [ { "start": 0, "end": 7, "text": "Table 6", "ref_id": null } ], "eq_spans": [], "section": "Improving Named Entity Linking", "sec_num": null }, { "text": "We analyzed 90 precision and recall errors and present our findings in Table 7. Spurious mentions accounted for the majority of non-semantic errors. Despite the improvements that come from NEL, a large portion of coreference errors can still be attributed to incomplete semantic information, including precision errors caused by incorrect linking. For instance, the mention \"Disney\" sometimes refers to the company, and other times refers to the amusement park; however, the NEL systems we used had difficulty disambiguating these cases, and NECO often incorrectly merges such mentions. Overly general fine-grained attributes caused precision errors in cases where many proper noun mentions were potential antecedents for a common noun. Although attributes such as country are useful for resolving a generic \"country\" mention, this information is insufficient when two distinct mentions such as \"China\" and \"Russia\" both have the country attribute. However, many recall errors are also caused by the lack of fine-grained attributes.
Finding the ideal set of fine-grained attributes remains an open problem.", "cite_spans": [], "ref_spans": [ { "start": 71, "end": 78, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Error Analysis", "sec_num": "5.5" }, { "text": "Coreference resolution has a fifty-year history that defies brief summarization; see Ng (2010) for a recent survey. Section 2.1 described the Stanford multi-pass sieve algorithm, which is the foundation for NECO.", "cite_spans": [ { "start": 86, "end": 95, "text": "Ng (2010)", "ref_id": "BIBREF17" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Earlier coreference resolution systems used shallow semantics and pioneered knowledge extraction from online encyclopedias (Ponzetto and Strube, 2006; Daum\u00e9 III and Marcu, 2005; Ng, 2007). Some recent work shows improvement in coreference resolution by incorporating semantic information from Web-scale structured knowledge bases. Haghighi and Klein (2009) use a rule-based system to extract fine-grained attributes for mentions by analyzing precise constructs (e.g., appositives) in Wikipedia articles. Subsequently, Haghighi and Klein (2010) used a generative approach to learn entity types from an initial list of unambiguous mention types. Bansal and Klein (2012) use statistical analysis of Web n-gram features, including lexical relations. Rahman and Ng (2011) use YAGO to extract type relations for all mentions. These methods incorporate knowledge about all possible meanings of a mention. If a mention has multiple meanings, extraneous information might be associated with it. Zheng et al. (2013) use a ranked list of candidate entities for each mention and maintain the ranked list when mentions are merged.
Unlike previous work, our method relies on NEL systems to disambiguate possible meanings of a mention and capture high-precision semantic knowledge from Wikipedia categories and Freebase notable types.", "cite_spans": [ { "start": 123, "end": 150, "text": "(Ponzetto and Strube, 2006;", "ref_id": "BIBREF19" }, { "start": 151, "end": 177, "text": "Daum\u00e9 III and Marcu, 2005;", "ref_id": "BIBREF3" }, { "start": 178, "end": 187, "text": "Ng, 2007)", "ref_id": "BIBREF16" }, { "start": 332, "end": 357, "text": "Haghighi and Klein (2009)", "ref_id": "BIBREF8" }, { "start": 519, "end": 544, "text": "Haghighi and Klein (2010)", "ref_id": "BIBREF9" }, { "start": 645, "end": 668, "text": "Bansal and Klein (2012)", "ref_id": "BIBREF1" }, { "start": 745, "end": 765, "text": "Rahman and Ng (2011)", "ref_id": "BIBREF23" }, { "start": 985, "end": 1004, "text": "Zheng et al. (2013)", "ref_id": "BIBREF29" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Ratinov and Roth (2012) investigated using NEL to improve coreference resolution, but did not consider a joint approach. They extracted attributes from Wikipedia categories and used them as features in a learned mention-pair model, but did not do mention detection. Unfortunately, it is difficult to compare directly to the results of both systems, since they reported results on portions of ACE and CoNLL datasets using gold mentions.
However, our approach provides independent evidence for the benefit of NEL, and joint modeling in particular, since it outperforms the state-of-the-art Stanford sieve system (winner of the CoNLL 2011 shared task (Pradhan et al., 2011)) and other recent comparable approaches on benchmark datasets.", "cite_spans": [ { "start": 648, "end": 670, "text": "(Pradhan et al., 2011)", "ref_id": "BIBREF21" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Our work also builds on a long trajectory of work in named entity resolution stemming from SemTag (Dill et al., 2003). Section 2.2 discussed GLOW and WikipediaMiner (Ratinov et al., 2011; Milne and Witten, 2008). Kulkarni et al. (2009) present an elegant collective disambiguation model, but do not exploit the syntactic nuances gleaned by within-document coreference resolution. Hachey et al. (2013) provide an insightful summary and evaluation of different approaches to NEL.", "cite_spans": [ { "start": 98, "end": 117, "text": "(Dill et al., 2003)", "ref_id": "BIBREF4" }, { "start": 166, "end": 188, "text": "(Ratinov et al., 2011;", "ref_id": "BIBREF25" }, { "start": 189, "end": 212, "text": "Milne and Witten, 2008)", "ref_id": "BIBREF15" }, { "start": 215, "end": 237, "text": "Kulkarni et al. (2009)", "ref_id": "BIBREF11" }, { "start": 382, "end": 402, "text": "Hachey et al. (2013)", "ref_id": "BIBREF7" } ], "ref_spans": [], "eq_spans": [], "section": "Related Work", "sec_num": "6" }, { "text": "Table 7: Examples of different error categories and the relative frequency of each. For every example, the mention to be resolved is underlined, and the correct antecedent is italicized. For precision errors, the wrongly merged mention is bolded. For recall errors, the missed mention is surrounded by [brackets]. Observing that existing coreference resolution and named-entity linking have complementary strengths", "cite_spans": [ { "start": 404, "end": 414, "text": "[brackets]", "ref_id": null } ], "ref_spans": [ { "start": 101, "end": 108, "text": "Table 7", "ref_id": null } ], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "and weaknesses, we present a joint approach. We introduce NECO, a novel algorithm that solves the problems jointly, demonstrating improved performance on both tasks. We envision several ways to improve the joint model. While the current implementation of NECO only introduces NEL once, we could also integrate predictions with different levels of confidence into different sieves. It would be interesting to more tightly integrate the NEL system so it operates on clusters rather than individual mentions: after each sieve merges an unlinked cluster, the algorithm would retry NEL with the new context information. NECO uses a relatively modest number of Freebase attributes. While using more semantic knowledge holds the promise of increased recall, the challenge is maintaining precision.
Finally, we would also like to explore the extent to which a joint probabilistic model (e.g., (Durrett and Klein, 2013)) might be used to learn how best to make this tradeoff.", "cite_spans": [ { "start": 886, "end": 911, "text": "(Durrett and Klein, 2013)", "ref_id": "BIBREF5" } ], "ref_spans": [], "eq_spans": [], "section": "Conclusions", "sec_num": "7" }, { "text": "A head word is assigned to every mention with the Stanford parser head-finding rules (Klein and Manning, 2003).", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "Due to CoNLL annotation guidelines, a named entity is added to the mention list if it is not inside a larger mention with an exact named entity link.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null }, { "text": "We take the union of all the links returned by GLOW and WikipediaMiner, but if they link a mention to two different entities, we use only the output of WikipediaMiner.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "", "sec_num": null } ], "back_matter": [ { "text": "The research was supported in part by grants from DARPA under the DEFT program through the AFRL (FA8750-13-2-0019) and the CSSG (N11AP20020), the ONR (N00014-12-1-0211), and the NSF (IIS-1115966). Support was also provided by a gift from Google, an NSF Graduate Research Fellowship, and the WRF / TJ Cable Professorship.
The authors thank Greg Durrett, Heeyoung Lee, Mitchell Koch, Xiao Ling, Mark Yatskar, Kenton Lee, Eunsol Choi, Gabriel Schubiner, Nicholas FitzGerald, Tom Kwiatkowski, and the anonymous reviewers for helpful comments and feedback on the work.", "cite_spans": [], "ref_spans": [], "eq_spans": [], "section": "Acknowledgements", "sec_num": "8" } ], "bib_entries": { "BIBREF0": { "ref_id": "b0", "title": "Algorithms for scoring coreference chains", "authors": [ { "first": "Amit", "middle": [], "last": "Bagga", "suffix": "" }, { "first": "Breck", "middle": [], "last": "Baldwin", "suffix": "" } ], "year": 1998, "venue": "International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In International Conference on Language Resources and Evaluation Workshop on Linguistics Coreference.", "links": null }, "BIBREF1": { "ref_id": "b1", "title": "Coreference semantics from web features", "authors": [ { "first": "Mohit", "middle": [], "last": "Bansal", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the 45th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Mohit Bansal and Dan Klein. 2012. Coreference semantics from web features. In Proceedings of the 45th", "links": null }, "BIBREF2": { "ref_id": "b2", "title": "Annual Meeting of the Association for Computational Linguistics", "authors": [], "year": null, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF3": { "ref_id": "b3", "title": "A large-scale exploration of effective global features for a joint entity detection and tracking model", "authors": [ { "first": "Hal", "middle": [], "last": "Daum\u00e9", "suffix": "" }, { "first": "Iii", "middle": [], "last": "", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Marcu", "suffix": "" } ], "year": 2005, "venue": "Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint entity detection and tracking model. In Proceedings of the Conference on Human Language Technology and Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF4": { "ref_id": "b4", "title": "SemTag and Seeker: bootstrapping the semantic web via automated semantic annotation", "authors": [ { "first": "Stephen", "middle": [], "last": "Dill", "suffix": "" }, { "first": "Nadav", "middle": [], "last": "Eiron", "suffix": "" }, { "first": "David", "middle": [], "last": "Gibson", "suffix": "" }, { "first": "Daniel", "middle": [], "last": "Gruhl", "suffix": "" }, { "first": "R", "middle": [], "last": "Guha", "suffix": "" }, { "first": "Anant", "middle": [], "last": "Jhingran", "suffix": "" }, { "first": "Tapas", "middle": [], "last": "Kanungo", "suffix": "" }, { "first": "Sridhar", "middle": [], "last": "Rajagopalan", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "Tomkins", "suffix": "" }, { "first": "John", "middle": [ "A" ], "last": "Tomlin", "suffix": "" }, { "first": "Jason", "middle": [ "Y" ], "last": "Zien", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 12th International Conference on World Wide Web", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stephen Dill, Nadav Eiron, David Gibson, Daniel Gruhl, R. Guha, Anant Jhingran, Tapas Kanungo, Sridhar Rajagopalan, Andrew Tomkins, John A. Tomlin, and Jason Y. Zien. 2003. SemTag and Seeker: bootstrapping the semantic web via automated semantic annotation. In Proceedings of the 12th International Conference on World Wide Web.", "links": null }, "BIBREF5": { "ref_id": "b5", "title": "Easy victories and uphill battles in coreference resolution", "authors": [ { "first": "Greg", "middle": [], "last": "Durrett", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF6": { "ref_id": "b6", "title": "Enforcing transitivity in coreference resolution", "authors": [ { "first": "Jenny", "middle": [ "Rose" ], "last": "Finkel", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jenny Rose Finkel and Christopher D. Manning. 2008. Enforcing transitivity in coreference resolution.
In Proceedings of the 46th Annual Meeting of the Association for Computational Linguistics on Human Language Technologies.", "links": null }, "BIBREF7": { "ref_id": "b7", "title": "Evaluating entity linking with Wikipedia", "authors": [ { "first": "Ben", "middle": [], "last": "Hachey", "suffix": "" }, { "first": "Will", "middle": [], "last": "Radford", "suffix": "" }, { "first": "Joel", "middle": [], "last": "Nothman", "suffix": "" }, { "first": "Matthew", "middle": [], "last": "Honnibal", "suffix": "" }, { "first": "James", "middle": [ "R" ], "last": "Curran", "suffix": "" } ], "year": null, "venue": "Artificial Intelligence Journal", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Ben Hachey, Will Radford, Joel Nothman, Matthew Honnibal, and James R. Curran. 2013. Evaluating entity linking with Wikipedia. Artificial Intelligence Journal, 194.", "links": null }, "BIBREF8": { "ref_id": "b8", "title": "Simple coreference resolution with rich syntactic and semantic features", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF9": { "ref_id": "b9", "title": "Coreference resolution in a modular, entity-centered model", "authors": [ { "first": "Aria", "middle": [], "last": "Haghighi", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" } ], "year": 2010, "venue": "Human Language Technologies: Annual Conference of the North American Chapter of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Aria Haghighi and Dan Klein. 2010. Coreference resolution in a modular, entity-centered model. In Human Language Technologies: Annual Conference of the North American Chapter of the Association for Computational Linguistics.", "links": null }, "BIBREF10": { "ref_id": "b10", "title": "Accurate unlexicalized parsing", "authors": [ { "first": "Dan", "middle": [], "last": "Klein", "suffix": "" }, { "first": "Christopher", "middle": [ "D" ], "last": "Manning", "suffix": "" } ], "year": 2003, "venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF11": { "ref_id": "b11", "title": "Collective annotation of Wikipedia entities in Web text", "authors": [ { "first": "Sayali", "middle": [], "last": "Kulkarni", "suffix": "" }, { "first": "Amit", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Ganesh", "middle": [], "last": "Ramakrishnan", "suffix": "" }, { "first": "Soumen", "middle": [], "last": "Chakrabarti", "suffix": "" } ], "year": 2009, "venue": "Proceedings of the 2009 Conference on Knowledge Discovery and Data Mining", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sayali Kulkarni, Amit Singh, Ganesh Ramakrishnan, and Soumen Chakrabarti. 2009. Collective annotation of Wikipedia entities in Web text. In Proceedings of the 2009 Conference on Knowledge Discovery and Data Mining.", "links": null }, "BIBREF13": { "ref_id": "b13", "title": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task", "authors": [], "year": null, "venue": "Proceedings of the Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Stanford's multi-pass sieve coreference resolution system at the CoNLL-2011 shared task. In Proceedings of the Conference on Computational Natural Language Learning.", "links": null }, "BIBREF14": { "ref_id": "b14", "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Angel", "middle": [], "last": "Chang", "suffix": "" }, { "first": "Yves", "middle": [], "last": "Peirsman", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Jurafsky", "suffix": "" } ], "year": 2013, "venue": "Computational Linguistics", "volume": "39", "issue": "4", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4).", "links": null }, "BIBREF15": { "ref_id": "b15", "title": "Learning to link with Wikipedia", "authors": [ { "first": "Dan", "middle": [], "last": "Milne", "suffix": "" }, { "first": "Ian", "middle": [ "H" ], "last": "Witten", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the ACM Conference on Information and Knowledge Management", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Dan Milne and Ian H. Witten. 2008. Learning to link with Wikipedia.
In Proceedings of the ACM Conference on Information and Knowledge Management.", "links": null }, "BIBREF16": { "ref_id": "b16", "title": "Shallow semantics for coreference resolution", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2007, "venue": "Proceedings of the 20th International Joint Conference on Artificial Intelligence", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng. 2007. Shallow semantics for coreference resolution. In Proceedings of the 20th International Joint Conference on Artificial Intelligence.", "links": null }, "BIBREF17": { "ref_id": "b17", "title": "Supervised noun phrase coreference research: The first fifteen years", "authors": [ { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF18": { "ref_id": "b18", "title": "The ACE 2004 evaluation plan", "authors": [ { "first": "", "middle": [], "last": "Nist", "suffix": "" } ], "year": 2004, "venue": "", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "NIST. 2004. The ACE 2004 evaluation plan.", "links": null }, "BIBREF19": { "ref_id": "b19", "title": "Exploiting semantic role labeling, Wordnet and Wikipedia for coreference resolution", "authors": [ { "first": "Simone", "middle": [ "Paolo" ], "last": "Ponzetto", "suffix": "" }, { "first": "Michael", "middle": [], "last": "Strube", "suffix": "" } ], "year": 2006, "venue": "Proceedings of the North American Association for Natural Language Processing on Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, Wordnet and Wikipedia for coreference resolution. In Proceedings of the North American Association for Natural Language Processing on Human Language Technologies.", "links": null }, "BIBREF20": { "ref_id": "b20", "title": "Joint unsupervised coreference resolution with Markov logic", "authors": [ { "first": "Hoifung", "middle": [], "last": "Poon", "suffix": "" }, { "first": "Pedro", "middle": [], "last": "Domingos", "suffix": "" } ], "year": 2008, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Hoifung Poon and Pedro Domingos. 2008. Joint unsupervised coreference resolution with Markov logic. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF21": { "ref_id": "b21", "title": "CoNLL-2011 shared task: modeling unrestricted coreference in OntoNotes", "authors": [ { "first": "Sameer", "middle": [], "last": "Pradhan", "suffix": "" }, { "first": "Lance", "middle": [], "last": "Ramshaw", "suffix": "" }, { "first": "Mitchell", "middle": [], "last": "Marcus", "suffix": "" }, { "first": "Martha", "middle": [], "last": "Palmer", "suffix": "" }, { "first": "Ralph", "middle": [], "last": "Weischedel", "suffix": "" }, { "first": "Nianwen", "middle": [], "last": "Xue", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Sameer Pradhan, Lance Ramshaw, Mitchell Marcus, Martha Palmer, Ralph Weischedel, and Nianwen Xue. 2011. CoNLL-2011 shared task: modeling unrestricted coreference in OntoNotes.
In Proceedings of the Fifteenth Conference on Computational Natural Language Learning: Shared Task.", "links": null }, "BIBREF22": { "ref_id": "b22", "title": "A multipass sieve for coreference resolution", "authors": [ { "first": "Heeyoung", "middle": [], "last": "Karthik Raghunathan", "suffix": "" }, { "first": "Sudarshan", "middle": [], "last": "Lee", "suffix": "" }, { "first": "Nathanael", "middle": [], "last": "Rangarajan", "suffix": "" }, { "first": "Mihai", "middle": [], "last": "Chambers", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Surdeanu", "suffix": "" }, { "first": "Christopher", "middle": [], "last": "Jurafsky", "suffix": "" }, { "first": "", "middle": [], "last": "Manning", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi- pass sieve for coreference resolution. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF23": { "ref_id": "b23", "title": "Coreference resolution with world knowledge", "authors": [ { "first": "Altaf", "middle": [], "last": "Rahman", "suffix": "" }, { "first": "Vincent", "middle": [], "last": "Ng", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Altaf Rahman and Vincent Ng. 2011. Coreference res- olution with world knowledge. 
In Proceedings of the 49th Annual Meeting of the Association for Computa- tional Linguistics: Human Language Technologies.", "links": null }, "BIBREF24": { "ref_id": "b24", "title": "Learning-based multisieve co-reference resolution with knowledge", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2012, "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov and Dan Roth. 2012. Learning-based multi- sieve co-reference resolution with knowledge. In Pro- ceedings of the Conference on Empirical Methods in Natural Language Processing.", "links": null }, "BIBREF25": { "ref_id": "b25", "title": "Local and global algorithms for disambiguation to Wikipedia", "authors": [ { "first": "Lev", "middle": [], "last": "Ratinov", "suffix": "" }, { "first": "Dan", "middle": [], "last": "Roth", "suffix": "" } ], "year": 2011, "venue": "Proceedings of the 49th", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike An- derson. 2011. Local and global algorithms for dis- ambiguation to Wikipedia. 
In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF27": { "ref_id": "b27", "title": "Coreference resolution across corpora: languages, coding schemes, and preprocessing information", "authors": [ { "first": "Marta", "middle": [], "last": "Recasens", "suffix": "" }, { "first": "Eduard", "middle": [], "last": "Hovy", "suffix": "" } ], "year": 2010, "venue": "Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marta Recasens and Eduard Hovy. 2010. Coreference resolution across corpora: languages, coding schemes, and preprocessing information. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics.", "links": null }, "BIBREF28": { "ref_id": "b28", "title": "A model-theoretic coreference scoring scheme", "authors": [ { "first": "Marc", "middle": [], "last": "Vilain", "suffix": "" }, { "first": "John", "middle": [], "last": "Burger", "suffix": "" }, { "first": "John", "middle": [], "last": "Aberdeen", "suffix": "" }, { "first": "Dennis", "middle": [], "last": "Connolly", "suffix": "" }, { "first": "Lynette", "middle": [], "last": "Hirschman", "suffix": "" } ], "year": 1995, "venue": "Proceedings of the 6th Conference on Message Understanding", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Connolly, and Lynette Hirschman. 1995. A model-theoretic coreference scoring scheme. 
In Proceedings of the 6th Conference on Message Understanding.", "links": null }, "BIBREF29": { "ref_id": "b29", "title": "Dynamic knowledge-base alignment for coreference resolution", "authors": [ { "first": "Jiaping", "middle": [], "last": "Zheng", "suffix": "" }, { "first": "Luke", "middle": [], "last": "Vilnis", "suffix": "" }, { "first": "Sameer", "middle": [], "last": "Singh", "suffix": "" }, { "first": "Jinho", "middle": [ "D" ], "last": "Choi", "suffix": "" }, { "first": "Andrew", "middle": [], "last": "McCallum", "suffix": "" } ], "year": 2013, "venue": "Proceedings of the Seventeenth Conference on Computational Natural Language Learning", "volume": "", "issue": "", "pages": "", "other_ids": {}, "num": null, "urls": [], "raw_text": "Jiaping Zheng, Luke Vilnis, Sameer Singh, Jinho D. Choi, and Andrew McCallum. 2013. Dynamic knowledge-base alignment for coreference resolution. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning.", "links": null } }, "ref_entries": { "FIGREF0": { "uris": null, "type_str": "figure", "num": null, "text": "A text passage illustrating interactions between coreference resolution and NEL." }, "FIGREF1": { "uris": null, "type_str": "figure", "num": null, "text": "2. Merge Clusters: For every sieve S (including NEL sieves, Sec. 3.6) and cluster c i \u2208 C
(a) For every cluster c j , j = [i \u2212 1 . . . 1] (traverse the preceding clusters in reverse order)
i. NEL constraints: Prevent merge if l(c i ) \u2260 l(c j ) (Sec. 3.4)
ii. If all rules of sieve S are satisfied for clusters c i and c j
A. c k \u2190 Merge(c i , c j ), including entity link and attribute updates (Sec. 3.5)
B. C \u2190 C \u222a {c k } \\ {c i , c j }
3. Output: Coreference clusters C and linked Wikipedia pages l(c i ) \u2200 c i \u2208 C" }, "FIGREF2": { "uris": null, "type_str": "figure", "num": null, "text": "NECO: A joint algorithm for named-entity linking and coreference resolution." 
}, "FIGREF3": { "uris": null, "type_str": "figure", "num": null, "text": "The most commonly used fine-grained attributes from Freebase and Wikipedia (out of over 500 total attributes)." }, "TABREF0": { "type_str": "table", "html": null, "num": null, "content": "", "text": "Michael Eisner] 1 and [Donald Tsang] 2 announced the grand opening of [[Hong Kong] 3 Disneyland] 4 yesterday. [Eisner] 1 thanked [the President] 2 and welcomed [fans] 5 to [the park] 4 ." }, "TABREF1": { "type_str": "table", "html": null, "num": null, "content": "
(a) Let M NEL = {m i | i = 1 . . . p} be the NEL output mentions, m i , each with a link l(m i )
(b) Let M CR = {m i | i = 1 . . . q} be the mentions m i from coreference mention detection
(c) Let M \u2190 M CR \u222a M NEL (Sec. 3.1)
(d) Update entity links for all m \u2208 M and prune M (Sec. 3.2)
(e) Extract attributes from Wikipedia and Freebase for all m \u2208 M (Sec. 3.3)
(f) Let C \u2190 M be singleton mention clusters where Exemplar (c i
", "text": "" }, "TABREF4": { "type_str": "table", "html": null, "num": null, "content": "
MUC B 3
Category: Method P R F1 P R F1
BC: NECO 62.1 64.7 63.4 69.8 57.8 63.2
BC: Stanford Sieves 60.9 65.0 62.9 69.2 58.0 63.1
BN: NECO 69.3 59.4 64.0 78.8 60.8 68.6
BN: Stanford Sieves 68.0 58.9 63.1 79.0 60.2 68.3
MZ: NECO 67.6 62.9 65.2 78.4 61.1 68.7
MZ: Stanford Sieves 66.0 63.4 64.9 77.9 61.5 68.7
NW: NECO 62.0 54.5 58.0 74.9 57.4 65.0
NW: Stanford Sieves 60.0 54.2 56.9 75.3 57.0 64.9
", "text": "Coreference results on the individual categories of CoNLL 2011 development data (BC=broadcast conversation)." }, "TABREF5": { "type_str": "table", "html": null, "num": null, "content": "
+ 51.7 53.3 + 70.0 50.8 58.8
Stanford* 52.0 52.3 + 52.1 68.9 50.8 58.5
", "text": "" }, "TABREF6": { "type_str": "table", "html": null, "num": null, "content": "
MethodMUCB 3Pairwise
PRF1PRF1PRF1
Gold Mentions
NECO + Gold NEL 85.8 75.5 80.3 91.4 81.2 86.0 89.1 68.0 77.1
NECO 84.6 74.0 78.9 90.5 80.4 85.2 83.9 66.0 73.9
Stanford Sieves 84.5 72.2 77.8 89.9 77.7 83.4 89.9 57.3 68.1
Predicted Mentions
NECO + Gold NEL 56.4 58.8 57.5 78.2 78.3 78.3 68.0 54.3 60.4
NECO 51.3 53.5 52.4 76.5 76.4 76.5 61.2 45.6 52.2
Stanford Sieves 43.9 46.4 45.1 74.4 74.2 74.3 51.3 36.1 42.4
", "text": "Coreference results on ACE2004-NWIRE-NEL with gold and predicted mentions and gold or automatic linking." }, "TABREF7": { "type_str": "table", "html": null, "num": null, "content": "
Method MUC B 3 Pairwise
P R F1 P R F1 P R F1
NECO 85.0 76.6 80.6 87.6 76.4 81.6 79.3 56.1 65.8
Stanford Sieves 84.6 75.1 79.6 87.3 74.1 80.2 79.4 50.1 61.4
", "text": "Coreference results on ACE2004-NWIRE with gold mentions and automatic linking." }, "TABREF8": { "type_str": "table", "html": null, "num": null, "content": "", "text": "" }, "TABREF9": { "type_str": "table", "html": null, "num": null, "content": "
Error Type Percentage Example
Extra mentions 31.1 The other thing Paula really important is that they talk a lot about the fact ...
Pronoun 27.7 However, [all 3 women gymnasts, taking part in the internationals for the first time], performed well, because they had strong events and their movements had difficulty.
Contextual semantic 16.6 [The Chinese side] hopes that each party concerned continues to make constructive efforts to ... Considering the requirements of the Korean side, ... the Chinese government decided to ...
NEL semantic 13.3 The most important thing about Disney is that it is a global brand. ... The subway to Disney has already been constructed.
Attributes 11.1 The Hong Kong government turned over to Disney Corporation [200 hectares of land ...]. ... this area has become a prohibited zone in Hong Kong.
", "text": "" } } } }