| { |
| "paper_id": "K16-1023", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:11:18.324411Z" |
| }, |
| "title": "Coreference in Wikipedia: Main Concept Resolution", |
| "authors": [ |
| { |
| "first": "Abbas", |
| "middle": [], |
| "last": "Ghaddar", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "abbas.ghaddar@umontreal.ca" |
| }, |
| { |
| "first": "Philippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Wikipedia is a resource of choice exploited in many NLP applications, yet we are not aware of recent attempts to adapt coreference resolution to this resource. In this work, we revisit a seldom studied task which consists in identifying in a Wikipedia article all the mentions of the main concept being described. We show that by exploiting the Wikipedia markup of a document, as well as links to external knowledge bases such as Freebase, we can acquire useful information on entities that helps to classify mentions as coreferent or not. We designed a classifier which drastically outperforms fair baselines built on top of state-of-the-art coreference resolution systems. We also measure the benefits of this classifier in a full coreference resolution pipeline applied to Wikipedia texts.", |
| "pdf_parse": { |
| "paper_id": "K16-1023", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Wikipedia is a resource of choice exploited in many NLP applications, yet we are not aware of recent attempts to adapt coreference resolution to this resource. In this work, we revisit a seldom studied task which consists in identifying in a Wikipedia article all the mentions of the main concept being described. We show that by exploiting the Wikipedia markup of a document, as well as links to external knowledge bases such as Freebase, we can acquire useful information on entities that helps to classify mentions as coreferent or not. We designed a classifier which drastically outperforms fair baselines built on top of state-of-the-art coreference resolution systems. We also measure the benefits of this classifier in a full coreference resolution pipeline applied to Wikipedia texts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Coreference Resolution (CR) is the task of identifying all mentions of entities in a document and grouping them into equivalence classes. CR is a prerequisite for many NLP tasks. For example, in Open Information Extraction (OIE) (Yates et al., 2007) , one acquires subject-predicate-object relations, many of which (e.g., <the foundation stone, was laid by, the Queen s daughter>) are useless because the subject or the object contains material coreferring to other mentions in the text being mined.", |
| "cite_spans": [ |
| { |
| "start": 229, |
| "end": 249, |
| "text": "(Yates et al., 2007)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Most CR systems, including state-of-the-art ones (Durrett and Klein, 2014; Martschat and Strube, 2015; Clark and Manning, 2015) are essentially adapted to news-like texts. This is basically imputable to the availability of large datasets where this text genre is dominant. This includes resources developed within the Message Understanding Conferences (Hirshman and Chinchor, 1998) or the Automatic Content Extraction (ACE) program (Doddington et al., 2004) , as well as resources developed within the collaborative annotation project OntoNotes (Pradhan et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 49, |
| "end": 74, |
| "text": "(Durrett and Klein, 2014;", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 75, |
| "end": 102, |
| "text": "Martschat and Strube, 2015;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 103, |
| "end": 127, |
| "text": "Clark and Manning, 2015)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 352, |
| "end": 381, |
| "text": "(Hirshman and Chinchor, 1998)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 432, |
| "end": 457, |
| "text": "(Doddington et al., 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 535, |
| "end": 567, |
| "text": "OntoNotes (Pradhan et al., 2007)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "It is now widely accepted that coreference resolution systems trained on newswire data performs poorly when tested on other text genres (Hendrickx and Hoste, 2009; Sch\u00e4fer et al., 2012) , including Wikipedia texts, as we shall see in our experiments.", |
| "cite_spans": [ |
| { |
| "start": 136, |
| "end": 163, |
| "text": "(Hendrickx and Hoste, 2009;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 164, |
| "end": 185, |
| "text": "Sch\u00e4fer et al., 2012)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Wikipedia is a large, multilingual, highly structured, multi-domain encyclopedia, providing an increasingly large wealth of knowledge. It is known to contain well-formed, grammatical and meaningful sentences, compared to say, ordinary internet documents. It is therefore a resource of choice in many NLP systems, see (Medelyan et al., 2009) for a review of some pioneering works.", |
| "cite_spans": [ |
| { |
| "start": 317, |
| "end": 340, |
| "text": "(Medelyan et al., 2009)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While being a ubiquitous resource in the NLP community, we are not aware of much work conducted to adapt CR to this text genre. Two notable exceptions are (Nguyen et al., 2007) and (Nakayama, 2008) , two studies dedicated to extract tuples from Wikipedia articles. Both studies demonstrate that the design of a dedicated rulebased CR system leads to improved extraction accuracy. The focus of those studies being information extraction, the authors did not spend much efforts in designing a fully-fledged CR designed for Wikipedia, neither did they evaluate it on a coreference resolution task.", |
| "cite_spans": [ |
| { |
| "start": 155, |
| "end": 176, |
| "text": "(Nguyen et al., 2007)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 181, |
| "end": 197, |
| "text": "(Nakayama, 2008)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our main contribution in this work is to revisit the task initially discussed in (Nakayama, 2008) which consists in identifying in a Wikipedia article all the mentions of the concept being described by this article. We refer to this concept as the \"main concept\" (MC) henceforth. For instance, within the article Chilly Gonzales, the task is to find all proper (e.g. Gonzales, Beck), nominal (e.g. the performer) and pronominal (e.g. he) mentions that refer to the MC \"Chilly Gonzales\".", |
| "cite_spans": [ |
| { |
| "start": 81, |
| "end": 97, |
| "text": "(Nakayama, 2008)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "More specifically, we frame this task as a binary classification problem, where one has to decide whether a detected mention refers to the MC. Our classifier exploits carefully designed features extracted from Wikipedia markup and characteristics, as well as from Freebase; many of which we borrowed from the related literature.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We show that our approach outperforms stateof-the-art generic coreference resolution engines on this task. We further demonstrate that the integration of our classifier into the state-of-the-art rule-based coreference system of Lee et al. (2013) improves the detection of coreference chains in Wikipedia articles.", |
| "cite_spans": [ |
| { |
| "start": 228, |
| "end": 245, |
| "text": "Lee et al. (2013)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The paper is organized as follows. We discuss related works in Section 2. We describe in Section 3 the Wikipedia-based dataset we exploited in this study, and show the drop in performance of state-of-the-art coreference resolution systems when faced to this corpus. We describe in Section 4 the baselines we built on top of two state-ofthe-art coreference resolution systems, and present our approach in Section 5. We report on experiments we conducted in section 6, and conclude in Section 7.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our approach is inspired by, and extends, previous works on coreference resolution which show that incorporating external knowledge into a CR system is beneficial. In particular, a variety of approaches (Ponzetto and Strube, 2006; Ng, 2007; Haghighi and Klein, 2009) have been shown to benefit from using external resources such as Wikipedia, WordNet (Miller, 1995) , or YAGO (Suchanek et al., 2007) . Ratinov and Roth (2012) and Hajishirzi et al. (2013) both investigate the integration of named-entity linking into machine learning and rule-based coreference resolution system respectively. They both use GLOW (Ratinov et al., 2011) a wikification system which associates detected mentions with their equivalent entity in Wikipedia. In addition, they assign to each mention a set of highly accurate knowledge attributes extracted from Wikipedia and Freebase (Bollacker et al., 2008) , such as the Wikipedia categories, gender, nationality, aliases, and NER type (ORG, PER, LOC, FAC, MISC).", |
| "cite_spans": [ |
| { |
| "start": 203, |
| "end": 230, |
| "text": "(Ponzetto and Strube, 2006;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 231, |
| "end": 240, |
| "text": "Ng, 2007;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 241, |
| "end": 266, |
| "text": "Haghighi and Klein, 2009)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 332, |
| "end": 365, |
| "text": "Wikipedia, WordNet (Miller, 1995)", |
| "ref_id": null |
| }, |
| { |
| "start": 376, |
| "end": 399, |
| "text": "(Suchanek et al., 2007)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 402, |
| "end": 425, |
| "text": "Ratinov and Roth (2012)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 430, |
| "end": 454, |
| "text": "Hajishirzi et al. (2013)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 612, |
| "end": 634, |
| "text": "(Ratinov et al., 2011)", |
| "ref_id": "BIBREF31" |
| }, |
| { |
| "start": 860, |
| "end": 884, |
| "text": "(Bollacker et al., 2008)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "One issue with all the aforementioned studies is that inaccuracies often cause cascading errors in the pipeline (Zheng et al., 2013) . Consequently, most authors concentrate on high-precision linking at the cost of low recall.", |
| "cite_spans": [ |
| { |
| "start": 112, |
| "end": 132, |
| "text": "(Zheng et al., 2013)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Dealing specifically with Wikipedia articles, we can directly exploit the wealth of markup available (redirects, internal links, links to Freebase) without resorting to named-entity linking, thus benefiting from much less ambiguous information on mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Works", |
| "sec_num": "2" |
| }, |
| { |
| "text": "As our approach is dedicated to Wikipedia articles, we used the freely 1 available resource called WikiCoref (Ghaddar and Langlais, 2016) . This ressource consists in 30 English Wikipedia articles manually coreference-annotated. It comprises 60k tokens annotated with the OntoNotes project guidelines (Pradhan et al., 2007) . Each mention is annotated with three attributes: the mention type (named-entity, noun phrase, or pronominal), the coreference type (identity, attributive or copular) and the equivalent Freebase entity if it exists. The resource contains roughly 7 000 non singleton mentions, among which 1 800 refer to the main concept, which is to say that 30 chains out of 1 469 make up for 25% of the mentions annotated.", |
| "cite_spans": [ |
| { |
| "start": 109, |
| "end": 137, |
| "text": "(Ghaddar and Langlais, 2016)", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 301, |
| "end": 323, |
| "text": "(Pradhan et al., 2007)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dataset", |
| "sec_num": "3" |
| }, |
| { |
| "text": "WikiCoref OntoNotes Dcoref 51.77 55.59 Durrett and Klein (2013) 51.01 61.41 Durrett and Klein (2014) 49 Since most coreference resolution systems for English are trained and tested on ACE (Doddington et al., 2004) or OntoNotes (Hovy et al., 2006) resources, it is interesting to measure how state-ofthe art systems perform on the WikiCoref dataset. To this end, we ran a number of recent CR systems: the rule-based system of (Lee et al., 2013) , hereafter named Dcoref; the Berkeley systems described in (Durrett and Klein, 2013; Durrett and Klein, 2014) ; the latent model of Martschat and Strube (2015); and the system described in (Clark and Manning, 2015) , hereafter named Scoref.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 63, |
| "text": "Durrett and Klein (2013)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 76, |
| "end": 100, |
| "text": "Durrett and Klein (2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 188, |
| "end": 213, |
| "text": "(Doddington et al., 2004)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 227, |
| "end": 246, |
| "text": "(Hovy et al., 2006)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 425, |
| "end": 443, |
| "text": "(Lee et al., 2013)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 504, |
| "end": 529, |
| "text": "(Durrett and Klein, 2013;", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 530, |
| "end": 554, |
| "text": "Durrett and Klein, 2014)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 634, |
| "end": 659, |
| "text": "(Clark and Manning, 2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "We evaluate the systems on the whole dataset, using the v8.01 of the CoNLL scorer 2 (Pradhan et al., 2014) . The results are reported in Table 1 along with the performance of the systems on the CoNLL 2012 test data (Pradhan et al., 2012) . Expectedly, the performance of all systems dramatically decrease on WikiCoref, which calls for further research on adapting the coreference resolution technology to new text genres.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 106, |
| "text": "(Pradhan et al., 2014)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 215, |
| "end": 237, |
| "text": "(Pradhan et al., 2012)", |
| "ref_id": "BIBREF27" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 137, |
| "end": 144, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "Somehow more surprisingly, the rule-based system of (Lee et al., 2013) works better than the machine-learning based systems on the WikiCoref dataset. Nevertheless, statistical systems can be trained or adapted to the WikiCoref dataset, a point we leave for future investigations. Also, we observe that the ranking of the statistical systems on this dataset differs from the one obtained on the OntoNotes test set.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 70, |
| "text": "(Lee et al., 2013)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "The WikiCoref dataset is far smaller than the OntoNotes one; still, the authors paid attention to sample Wikipedia articles of various characteristics: size, topic (people, organizations, locations, events, etc.) and internal link density. Therefore, we believe our results to be representative. Those results further confirm the conclusions in (Hendrickx and Hoste, 2009), which show that a CR system trained on news-paper significantly underperforms on data coming from users comments and blogs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System", |
| "sec_num": null |
| }, |
| { |
| "text": "Since there is no system readily available for our task, we devised four baselines on top of two available coreference resolution systems. Given the output of a CR system applied on a Wikipedia article, our goal here is to isolate the coreference chain that represents the main concept. We experimented with several heuristics, yielding the following baselines.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "B1 picks the longest coreference chain identified and considers that its mentions are those that co-refer to the main concept. The underlying assumption is that the most mentioned concept in a Wikipedia article is the main concept itself.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "B2 picks the longest coreference chain identified 2 http://conll.github.io/ reference-coreference-scorers if it contains a mention that exactly matches the MC title, otherwise it checks in decreasing order (longest to shortest) for a chain containing the title. We expect this baseline to be more precise than the previous one overall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "It turns out that, for CR systems, mentions of the MC often are spread over several coreference chains. Therefore we devised two more baselines that aggregate chains, with an expected increase in recall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "B3 conservatively aggregates chains containing a mention that exactly matches the MC title.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "B4 more loosely aggregates all chains that contain at least one mention whose span is a substring of the title. 3 For instance, given the main concept Barack Obama, we concatenate all chains containing either Obama or Barack in their mentions. Obviously, this baseline should show a higher recall than the previous ones, but risks aggregating mentions that are not related to the MC. For instance, it will aggregate the coreference chain referring to University of Sydney concept with a chain containing the mention Sydney.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We observed that, for pronominal mentions, those baselines were not performing very well in terms of recall. With the aim of increasing recall, we added to the chain all the occurrences of pronouns found to refer to the MC (at least once) by the baseline. This heuristic was first proposed by Nguyen et al. (2007) . For instance, if the pronoun he is found in the chain identified by the baseline, all pronouns he in the article are considered to be mentions of the MC Barack Obama. Obviously, there are cases where those pronouns do not corefer to the MC, but this step significantly improves the performance on pronouns.", |
| "cite_spans": [ |
| { |
| "start": 293, |
| "end": 313, |
| "text": "Nguyen et al. (2007)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baselines", |
| "sec_num": "4" |
| }, |
| { |
| "text": "Our approach is composed of a preprocessor which computes a representation of each mention in an article as well as its main concept; and a feature extractor which compares both representations for inducing a set of features.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Approach", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We extract mentions using the same mention detection algorithm embedded in Lee et al. (2013) and Clark and Manning (2015) . This algorithm described in (Raghunathan et al., 2010) extracts all named-entities, noun phrases and pronouns, and then removes spurious mentions.", |
| "cite_spans": [ |
| { |
| "start": 75, |
| "end": 92, |
| "text": "Lee et al. (2013)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 97, |
| "end": 121, |
| "text": "Clark and Manning (2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We leverage the hyperlink structure of the article in order to enrich the list of predicted mentions with shallow semantic attributes. For each link found within the article under consideration, we look through the list of predicted mentions for all mentions that match the surface string of the link. We assign to them the attributes (entity type, gender and number) extracted from the Freebase entry (if it exists) corresponding to the Wikipedia article the hyperlink points to. This module behaves as a substitute to the named-entity linking pipelines used in other works, such as (Ratinov and Roth, 2012; Hajishirzi et al., 2013) . We expect it to be of high quality because it exploits human-made links.", |
| "cite_spans": [ |
| { |
| "start": 584, |
| "end": 608, |
| "text": "(Ratinov and Roth, 2012;", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 609, |
| "end": 633, |
| "text": "Hajishirzi et al., 2013)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We use the WikipediaMiner (Milne and Witten, 2008) API for easily accessing any piece of structure (clean text, labels, internal links, redirects, etc) in Wikipedia, and Jena 4 to index and query Freebase.", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 50, |
| "text": "(Milne and Witten, 2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "In the end, we represent a mention by three strings (actual mention span, head word, and span up to the head noun), as well as its coarse attributes (entity type, gender and number). We represent the main concept of a Wikipedia article by its title, its inferred type (a common noun inferred from the first sentence of the article). Those attributes were used by Nguyen et al. (2007) to heuristically link a mention to the main concept of an article. We further extend this representation by the MC name variants extracted from the markup of Wikipedia (redirects, text anchored in links) as well as aliases from Freebase; the MC entity types we extracted from the Freebase notable types attribute, and its coarse attributes extracted from Freebase, such as its NER type, its gender and number. If the concept category is a person (PER), we import the profession attribute. Figure 2 illustrates The source from which the information is extracted is indicated in parentheses: (W)ikipedia, (F)reebase.", |
| "cite_spans": [ |
| { |
| "start": 363, |
| "end": 383, |
| "text": "Nguyen et al. (2007)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 873, |
| "end": 893, |
| "text": "Figure 2 illustrates", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Preprocessing", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "We experimented with a few hundred features for characterizing each mention, focusing on the most promising ones that we found simple enough to compute. In part, our features are inspired by coreference systems that use Wikipedia and Freebase as feature sources (see Section 2). These features, along with others related to the characteristics of Wikipedia texts, allow us to recognize mentions of the MC more accurately than current CR systems. We make a distinction between features computed for pronominal mentions and features computed from the other mentions.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Extraction", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "For each mention, we compute seven families of features we sketch below.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-pronominal Mentions", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "base Number of occurrences of the mention span and the mention head found in the list of candidate mentions. We also add a normal-ized version of those counts (frequency / total number of mentions).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-pronominal Mentions", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "title, inferred type, name variants, entity type Most often, a concept is referred to by its name, one of its variants, or its type which are encoded in the four first fields of our MC representation. We define four families of comparison features, each corresponding to one of the first four fields of a MC representation (see Figure 2 ). For instance, for the title family, we compare the title text span with each of the text spans of the mention representation (see Figure 1) . A comparison between a field of the MC representation and a mention text span yields 10 boolean features. These features encode string similarities (exact match, partial match, one being the substring of another, sharing of a number of words, etc.). An eleventh feature is the semantic relatedness score of Wu and Palmer (1994) . For title, we therefore end up with 3 sets of 11 feature vectors.", |
| "cite_spans": [ |
| { |
| "start": 789, |
| "end": 809, |
| "text": "Wu and Palmer (1994)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 328, |
| "end": 336, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 470, |
| "end": 479, |
| "text": "Figure 1)", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Non-pronominal Mentions", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "tag Part-of-speech tags of the first and last words of the mention, as well as the tag of the words immediately before and after the mention in the article. We convert this into 34\u00d74 binary features (presence/absence of a specific combination of tags).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-pronominal Mentions", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "main Boolean features encoding whether the MC and the mention coarse attributes matches; also we use conjunctions of all pairs of features in this family.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Non-pronominal Mentions", |
| "sec_num": "5.2.1" |
| }, |
| { |
| "text": "We characterize pronominal mentions by five families of features, which, with the exception of the first one, all capture information extracted from Wikipedia.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pronominal Mentions", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "base The pronoun span itself, number, gender and person attributes, to which we add the number of occurrences of the pronoun, as well as its normalized count. The most frequently occurring pronoun in an article is likely to co-refer to the main concept, and we expect these features to capture this to some extent.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pronominal Mentions", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "main MC coarse attributes, such as NER type, gender, number (see Figure 2 ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 73, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Pronominal Mentions", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "tag Part-of-speech of the previous and following tokens, as well as the previous and the next POS bigrams (this is converted into 2380 binary features).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pronominal Mentions", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "position Often, pronouns at the beginning of a new section or paragraph refer to the main concept. Therefore, we compute 5 (binary) features encoding the relative position (first, first tier, second tier, last tier, last) of a mention in the sentence, paragraph, section and article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pronominal Mentions", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "distance Within a sentence, we search before and after the mention for an entity that is compatible (according to Freebase information) with the pronominal mention of interest. If a match is found, one feature encodes the distance between the match and the mention; another feature encodes the number of other compatible pronouns in the same sentence. We expect that this family of features will help the model to capture the presence of local (within a sentence) co-references.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pronominal Mentions", |
| "sec_num": "5.2.2" |
| }, |
| { |
| "text": "In this section, we first describe the data preparation we conducted (section 6.1), and provide details on the classifier we trained (section 6.2). Then, we report experiments we carried out on the task of identifying the mentions co-referent (positive class) to the main concept of an article (section 6.3). We compare our approach to the baselines described in section 4, and analyze the impact of the families of features described in section 5. We also investigate a simple extension of Dcoref which takes advantage of our classifier for improving coreference resolution (section 6.4).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Each article in WikiCoref was part-of-speech tagged, syntactically parsed and the namedentities were identified. This was done thanks to the Stanford CoreNLP toolkit (Manning et al., 2014) . Since WikiCoref does not contain singleton mentions (in conformance to the OntoNotes guidelines), we automatically extract singleton mentions using the method described in (Raghunathan et al., 2010) . Overall, we added about 13 400 automatically extracted mentions (singletons) to the 7 000 coreferent mentions annotated ", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 188, |
| "text": "(Manning et al., 2014)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 363, |
| "end": 389, |
| "text": "(Raghunathan et al., 2010)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data Preparation", |
| "sec_num": "6.1" |
| }, |
| { |
| "text": "We trained two Support Vector Machine classifiers (Cortes and Vapnik, 1995) , one for pronominal mentions and one for non-pronominal ones, making use of the LIBSVM library (Chang and Lin, 2011) and the features described in Section 5.2. For both models, we selected 5 the Csupport vector classification and used a linear kernel. Since our dataset is unbalanced (at least for non-pronominal mentions), we penalized the negative class with a weight of 2.0. During training, we do not use gold mention attributes, but we automatically enrich mentions with the information extracted from Wikipedia and Freebase, as described in Section 5.", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 75, |
| "text": "(Cortes and Vapnik, 1995)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 172, |
| "end": 193, |
| "text": "(Chang and Lin, 2011)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Classifier", |
| "sec_num": "6.2" |
| }, |
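As a sketch of this setup (using scikit-learn's SVC, which wraps the same LIBSVM library; the toy feature vectors and labels are invented for illustration):

```python
from sklearn.svm import SVC

# Toy 2-d feature vectors; +1 = mention is coreferent with the main concept.
X = [[0.0, 1.0], [0.1, 0.9], [0.2, 0.8],
     [1.0, 0.0], [0.9, 0.1], [0.8, 0.2]]
y = [1, 1, 1, -1, -1, -1]

# C-support vector classification with a linear kernel; the negative
# class is penalized with a weight of 2.0, as in the paper.
clf = SVC(kernel="linear", class_weight={-1: 2.0})
clf.fit(X, y)
```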
| { |
| "text": "We focus on the task of identifying all the mentions referring to the main concept of an article. We measure the performance of the systems we devised by average precision, recall and F1 rates computed by a 10-fold cross-validation procedure.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
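For concreteness, the per-fold scores are computed from positive-class counts and averaged over the folds (a generic sketch; the fold counts below are invented):

```python
def prf1(tp, fp, fn):
    """Precision, recall and F1 on the positive class from one fold's
    counts of true positives, false positives and false negatives."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

def average_f1(fold_counts):
    """Average F1 over the folds of the cross-validation."""
    return sum(prf1(*c)[2] for c in fold_counts) / len(fold_counts)
```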
| { |
| "text": "We generated baselines for all the systems discussed in Section 3, but found results derived from statistical approaches to be close enough that we only include results of two systems in the sequel: Dcoref (Lee et al., 2013) and Scoref (Clark and Manning, 2015) . We choose these two because they use the same pipeline (parser, mention detection, etc), while applying very different techniques (rules versus machine learning). The results of the baselines and our approach are reported in Table 2 .", |
| "cite_spans": [ |
| { |
| "start": 206, |
| "end": 224, |
| "text": "(Lee et al., 2013)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 236, |
| "end": 261, |
| "text": "(Clark and Manning, 2015)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 489, |
| "end": 496, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "Clearly, our approach outperforms all baselines for both pronominal and non-pronominal mentions, and across all metrics. On all mentions, our best classifier yields an absolute F1 increase of 13 points over Dcoref, and 15 points over Scoref.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "In order to understand the impact of each family of features we considered in this study, we trained various classifiers in a greedy fashion. We started with the simplest feature set (base) and gradually added one family of features at a time, keeping at each iteration the one leading to the highest increase in F1. The outcome of this process for the pronominal mentions is reported in Table 3 : Performance of our approach on the pronominal mentions, as a function of the features.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 388, |
| "end": 395, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
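The greedy procedure can be sketched as follows (evaluate stands in for a full train-and-score run of the classifier on a candidate feature set; it is an assumed callback, not part of the paper's code):

```python
def greedy_family_selection(base, families, evaluate):
    """Start from the base feature set and repeatedly add the feature
    family whose addition yields the largest F1 gain, stopping when no
    remaining family improves the score."""
    selected = list(base)
    remaining = set(families)
    order = []
    while remaining:
        best = max(remaining, key=lambda fam: evaluate(selected + [fam]))
        if evaluate(selected + [best]) <= evaluate(selected):
            break  # no family improves F1 any further
        selected.append(best)
        remaining.remove(best)
        order.append(best)
    return order
```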
| { |
| "text": "A baseline that always considers that a pronom-inal mention is co-referent to the main concept results in an F1 measure of 63.7%. This naive baseline is outperformed by the simplest of our model (base) by a large margin (over 10 absolute points). We observe that recall significantly improves when those features are augmented with the MC coarse attributes (+main). In fact, this variant already outperforms all the Dcoref-based baselines in terms of F1 score. Each feature family added further improves the performance overall, leading to better precision and recall than any of the baselines tested. Inspection shows that most of the errors on pronominal mentions are introduced by the lack of information on noun phrase mentions surrounding the pronouns. In example (f) shown in Figure 3 , the classifier associates the mention it with the MC instead of the Johnston Atoll \" Safeguard C \" mission. Table 4 : Performance of our approach on the nonpronominal mentions, as a function of the features. Table 4 reports the results obtained for the nonpronominal mentions classifier. The simplest classifier is outperformed by most baselines in terms of F1. Still, this model is able to correctly match mentions in example (a) and (b) of Figure 3 simply because those mentions are frequent within their respective article. Of course, such a simple model is often wrong as in example (c), where all mentions the United States are associated to the MC, simply because this is a frequent mention.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 782, |
| "end": 790, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 901, |
| "end": 908, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 1001, |
| "end": 1008, |
| "text": "Table 4", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The title feature family drastically increases precision, and the resulting classifier (+title) outperforms all the baselines in terms of F1 score. Adding the inferred type feature family gives a further boost in recall (7 absolute points) with no loss in precision (gain of almost 2 points). For instance, the resulting classifier can link the mention the team to the MC Houston Texans (see example (d)) because it correctly identifies the term team as a type. The family name variants also gives a nice boost in recall, in a slight expense of precision. This drop is due to some noisy redirects in Wikipedia, misleading our classifier. For instance, Johnston and Sand Islands is a redirect of the Johnston Atoll article.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "The entity type family further improves performance, mainly because it plays a role similar to the inferred type features extracted from Freebase. This indicates that the noun type induced directly from the first sentence of a Wikipedia article is pertinent and can complement the types extracted from Freebase when available or serve as proxy when they are missing. The Houston Texans are a professional American football team based in Houston* , Texas. Finally, the main family significantly increases precision (over 4 absolute points) with no loss in recall. To illustrate a negative example, the resulting classifier wrongly recognizes mentions referring to the town Houston as coreferent to the football team in example (g). We handpicked a number of classification errors and found that most of these are difficult coreference cases. For instance, our best classifier fails to recognize that the mention the expansion team refers to the main concept Houston Texans in example (e). ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Main Concept Resolution Performance", |
| "sec_num": "6.3" |
| }, |
| { |
| "text": "While identifying all the mentions of the MC in a Wikipedia article is certainly useful in a number of NLP tasks (Nguyen et al., 2007; Nakayama, 2008) , finding all coreference chains in a Wikipedia article is also worth studying. In the following, we describe an experiment where we introduced in Dcoref a new high-precision sieve which uses our classifier 6 . Sieves in Dcoref are ranked in decreasing order of precision, and we ranked this new sieve first. The aim of this sieve is to construct the coreference chain equivalent to the main concept. It merges two chains whenever they both contain mentions to the MC according to our classifier. We further prevent other sieves from appending new mentions to the MC coreference chain. We ran this modified system (called Dcoref++) on the WikiCoref dataset, where mentions were automatically predicted. The results of this system are reported in Table 5 , measured in terms of MUC (Vilain et al., 1995) , B3 (Bagga and Baldwin, 1998) , CEAF\u03c6 4 (Luo, 2005) and the average F1 CoNLL score (Denis and Baldridge, 2009) .", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 134, |
| "text": "(Nguyen et al., 2007;", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 135, |
| "end": 150, |
| "text": "Nakayama, 2008)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 932, |
| "end": 953, |
| "text": "(Vilain et al., 1995)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 959, |
| "end": 984, |
| "text": "(Bagga and Baldwin, 1998)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 995, |
| "end": 1006, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 1038, |
| "end": 1065, |
| "text": "(Denis and Baldridge, 2009)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 897, |
| "end": 904, |
| "text": "Table 5", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Coreference Resolution Performance", |
| "sec_num": "6.4" |
| }, |
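In outline, the new sieve behaves as sketched below (a simplification: is_mc_mention stands in for our classifier's prediction, and chains are plain lists of mention strings):

```python
def mc_sieve(chains, is_mc_mention):
    """Merge every chain containing at least one mention classified as
    referring to the MC into a single MC chain; other chains are left
    untouched (later sieves are prevented from extending the MC chain)."""
    mc_parts, others = [], []
    for chain in chains:
        (mc_parts if any(is_mc_mention(m) for m in chain) else others).append(chain)
    mc_chain = [m for part in mc_parts for m in part]
    return ([mc_chain] if mc_chain else []) + others
```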
| { |
| "text": "We observe an improvement for Dcoref++ over the other systems, for all the metrics. In particular, Dcoref++ increases by 4 absolute points the CoNLL F1 score. This shows that early decisions taken by our classifier benefit other sieves as well. It must be noted, however, that the overall gain in precision is larger than the one in recall.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Coreference Resolution Performance", |
| "sec_num": "6.4" |
| }, |
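As an illustration of one of these metrics, B3 scores each mention by the overlap between its chain in the response and its chain in the key (a compact sketch assuming both sides cover the same mention set):

```python
def b_cubed(key, response):
    """key, response: dicts mapping each mention to a chain id
    (same mention set on both sides). Returns (precision, recall)."""
    def chains(assign):
        out = {}
        for mention, cid in assign.items():
            out.setdefault(cid, set()).add(mention)
        return out
    kc, rc = chains(key), chains(response)
    # per-mention overlap ratios, averaged over all mentions
    precision = sum(len(rc[response[m]] & kc[key[m]]) / len(rc[response[m]])
                    for m in response) / len(response)
    recall = sum(len(rc[response[m]] & kc[key[m]]) / len(kc[key[m]])
                 for m in key) / len(key)
    return precision, recall
```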
| { |
| "text": "We developed a simple yet powerful approach that accurately identifies all the mentions that co-refer 6 We use predicted results from 10-fold cross-validation. to the concept being described in a Wikipedia article. We tackle the problem with two (pronominal and non-pronominal) models based on well designed features. The resulting system is compared to baselines built on top of state-of-the-art systems adapted to this task. Despite being relatively simple, our model reaches 89 % in F1 score, an absolute gain of 13 F1 points over the best baseline. We further show that incorporating our system into the Stanford deterministic rule-based system (Lee et al., 2013) leads to an improvement of 4% in F1 score on a fully fledged coreference task. A natural extension of this work is to identify all coreference relations in a Wikipedia article, a task we are currently investigating.", |
| "cite_spans": [ |
| { |
| "start": 102, |
| "end": 103, |
| "text": "6", |
| "ref_id": null |
| }, |
| { |
| "start": 649, |
| "end": 667, |
| "text": "(Lee et al., 2013)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "The material used in this study, as well as a (huge) dump of all the mentions in English Wikipedia (version of April 2013) our classifier identified as referring to the main concept, along with information we extracted from Wikipedia and Freebase are available at http://rali.iro.umontreal.ca/ rali/en/wikipedia-main-concept. We hope this ressource will foster further research on Wikipedia-based coreference resolution.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7" |
| }, |
| { |
| "text": "http://rali.iro.umontreal.ca/rali/?q= en/wikicoref", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Grammatical words are not considered for matching.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://jena.apache.org", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We tried with less success other configurations on a heldout dataset.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been funded by Nuance Foundation. We are grateful to the reviewers for their helpful comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Algorithms for scoring coreference chains", |
| "authors": [ |
| { |
| "first": "Amit", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| }, |
| { |
| "first": "Breck", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "The first international conference on language resources and evaluation workshop on linguistics coreference", |
| "volume": "1", |
| "issue": "", |
| "pages": "563--566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In The first in- ternational conference on language resources and evaluation workshop on linguistics coreference, vol- ume 1, pages 563-566.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Freebase: a collaboratively created graph database for structuring human knowledge", |
| "authors": [ |
| { |
| "first": "Kurt", |
| "middle": [], |
| "last": "Bollacker", |
| "suffix": "" |
| }, |
| { |
| "first": "Colin", |
| "middle": [], |
| "last": "Evans", |
| "suffix": "" |
| }, |
| { |
| "first": "Praveen", |
| "middle": [], |
| "last": "Paritosh", |
| "suffix": "" |
| }, |
| { |
| "first": "Tim", |
| "middle": [], |
| "last": "Sturge", |
| "suffix": "" |
| }, |
| { |
| "first": "Jamie", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 2008 ACM SIGMOD international conference on Management of data", |
| "volume": "", |
| "issue": "", |
| "pages": "1247--1250", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kurt Bollacker, Colin Evans, Praveen Paritosh, Tim Sturge, and Jamie Taylor. 2008. Freebase: a col- laboratively created graph database for structuring human knowledge. In Proceedings of the 2008 ACM SIGMOD international conference on Management of data, pages 1247-1250.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "LIB-SVM: a library for support vector machines", |
| "authors": [ |
| { |
| "first": "Chih-Chung", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Chih-Jen", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACM Transactions on Intelligent Systems and Technology", |
| "volume": "2", |
| "issue": "3", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chih-Chung Chang and Chih-Jen Lin. 2011. LIB- SVM: a library for support vector machines. ACM Transactions on Intelligent Systems and Technology (TIST), 2(3):27.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Entity-centric coreference resolution with model stacking", |
| "authors": [ |
| { |
| "first": "Kevin", |
| "middle": [], |
| "last": "Clark", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Association of Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kevin Clark and Christopher D. Manning. 2015. Entity-centric coreference resolution with model stacking. In Association of Computational Linguis- tics (ACL).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Supportvector networks", |
| "authors": [ |
| { |
| "first": "Corinna", |
| "middle": [], |
| "last": "Cortes", |
| "suffix": "" |
| }, |
| { |
| "first": "Vladimir", |
| "middle": [], |
| "last": "Vapnik", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Machine learning", |
| "volume": "20", |
| "issue": "3", |
| "pages": "273--297", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Corinna Cortes and Vladimir Vapnik. 1995. Support- vector networks. Machine learning, 20(3):273-297.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Global joint models for coreference resolution and named entity classification", |
| "authors": [ |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Procesamiento del Lenguaje Natural", |
| "volume": "42", |
| "issue": "1", |
| "pages": "87--96", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascal Denis and Jason Baldridge. 2009. Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, 42(1):87-96.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "The Automatic Content Extraction (ACE) Program-Tasks, Data, and Evaluation", |
| "authors": [ |
| { |
| "first": "George", |
| "middle": [ |
| "R" |
| ], |
| "last": "Doddington", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexis", |
| "middle": [], |
| "last": "Mitchell", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [ |
| "A" |
| ], |
| "last": "Przybocki", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephanie", |
| "middle": [], |
| "last": "Strassel", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [ |
| "M" |
| ], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "LREC", |
| "volume": "2", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George R. Doddington, Alexis Mitchell, Mark A. Przy- bocki, Lance A. Ramshaw, Stephanie Strassel, and Ralph M. Weischedel. 2004. The Automatic Con- tent Extraction (ACE) Program-Tasks, Data, and Evaluation. In LREC, volume 2, page 1.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Easy victories and uphill battles in coreference resolution", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "1971--1982", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2013. Easy victories and uphill battles in coreference resolution. In EMNLP, pages 1971-1982.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A joint model for entity analysis: Coreference, typing, and linking", |
| "authors": [ |
| { |
| "first": "Greg", |
| "middle": [], |
| "last": "Durrett", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "477--490", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Greg Durrett and Dan Klein. 2014. A joint model for entity analysis: Coreference, typing, and linking. Transactions of the Association for Computational Linguistics, 2:477-490.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Wikicoref: An english coreference-annotated corpus of wikipedia articles", |
| "authors": [ |
| { |
| "first": "Abbas", |
| "middle": [], |
| "last": "Ghaddar", |
| "suffix": "" |
| }, |
| { |
| "first": "Phillippe", |
| "middle": [], |
| "last": "Langlais", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the Ninth International Conference on Language Resources and Evaluation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Abbas Ghaddar and Phillippe Langlais. 2016. Wiki- coref: An english coreference-annotated corpus of wikipedia articles. In Proceedings of the Ninth In- ternational Conference on Language Resources and Evaluation (LREC 2016).", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Simple coreference resolution with rich syntactic and semantic features", |
| "authors": [ |
| { |
| "first": "Aria", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "3", |
| "issue": "", |
| "pages": "1152--1161", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Aria Haghighi and Dan Klein. 2009. Simple coref- erence resolution with rich syntactic and semantic features. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Process- ing: Volume 3-Volume 3, pages 1152-1161.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Joint Coreference Resolution and Named-Entity Linking with Multi-Pass Sieves", |
| "authors": [ |
| { |
| "first": "Hannaneh", |
| "middle": [], |
| "last": "Hajishirzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Leila", |
| "middle": [], |
| "last": "Zilles", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [ |
| "S" |
| ], |
| "last": "Weld", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [ |
| "S" |
| ], |
| "last": "Zettlemoyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "289--299", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Hannaneh Hajishirzi, Leila Zilles, Daniel S. Weld, and Luke S. Zettlemoyer. 2013. Joint Coreference Res- olution and Named-Entity Linking with Multi-Pass Sieves. In EMNLP, pages 289-299.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Coreference resolution on blogs and commented news", |
| "authors": [ |
| { |
| "first": "Iris", |
| "middle": [], |
| "last": "Hendrickx", |
| "suffix": "" |
| }, |
| { |
| "first": "Veronique", |
| "middle": [], |
| "last": "Hoste", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Anaphora Processing and Applications", |
| "volume": "", |
| "issue": "", |
| "pages": "43--53", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Iris Hendrickx and Veronique Hoste. 2009. Corefer- ence resolution on blogs and commented news. In Anaphora Processing and Applications, pages 43- 53.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "MUC-7 coreference task definition. version 3.0", |
| "authors": [ |
| { |
| "first": "Lynette", |
| "middle": [], |
| "last": "Hirshman", |
| "suffix": "" |
| }, |
| { |
| "first": "Nancy", |
| "middle": [], |
| "last": "Chinchor", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of the Seventh Message Understanding Conference (MUC-7)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lynette Hirshman and Nancy Chinchor. 1998. MUC-7 coreference task definition. version 3.0. In Proceed- ings of the Seventh Message Understanding Confer- ence (MUC-7).", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "OntoNotes: the 90% solution", |
| "authors": [ |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitchell", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "Martha", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers", |
| "volume": "", |
| "issue": "", |
| "pages": "57--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eduard Hovy, Mitchell Marcus, Martha Palmer, Lance Ramshaw, and Ralph Weischedel. 2006. OntoNotes: the 90% solution. In Proceedings of the human language technology conference of the NAACL, Companion Volume: Short Papers, pages 57-60.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Deterministic coreference resolution based on entity-centric, precision-ranked rules", |
| "authors": [ |
| { |
| "first": "Heeyoung", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Angel", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yves", |
| "middle": [], |
| "last": "Peirsman", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathanael", |
| "middle": [], |
| "last": "Chambers", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Computational Linguistics", |
| "volume": "39", |
| "issue": "4", |
| "pages": "885--916", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Ju- rafsky. 2013. Deterministic coreference resolu- tion based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4):885-916.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "On coreference resolution performance metrics", |
| "authors": [ |
| { |
| "first": "Xiaoqiang", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the conference on Human Language Technology and Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Xiaoqiang Luo. 2005. On coreference resolution per- formance metrics. In Proceedings of the confer- ence on Human Language Technology and Empiri- cal Methods in Natural Language Processing, pages 25-32.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "The Stanford CoreNLP Natural Language Processing Toolkit", |
| "authors": [ |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Bauer", |
| "suffix": "" |
| }, |
| { |
| "first": "Jenny", |
| "middle": [ |
| "Rose" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "Steven", |
| "middle": [], |
| "last": "Bethard", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mc-Closky", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "ACL (System Demonstrations)", |
| "volume": "", |
| "issue": "", |
| "pages": "55--60", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David Mc- Closky. 2014. The Stanford CoreNLP Natural Lan- guage Processing Toolkit. In ACL (System Demon- strations), pages 55-60.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Latent structures for coreference resolution", |
| "authors": [ |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "Martschat", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Transactions of the Association for Computational Linguistics", |
| "volume": "3", |
| "issue": "", |
| "pages": "405--418", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sebastian Martschat and Michael Strube. 2015. La- tent structures for coreference resolution. Transac- tions of the Association for Computational Linguis- tics, 3:405-418.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Mining meaning from wikipedia", |
| "authors": [ |
| { |
| "first": "Olena", |
| "middle": [], |
| "last": "Medelyan", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "Catherine", |
| "middle": [], |
| "last": "Legg", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Int. J. Hum.-Comput. Stud", |
| "volume": "67", |
| "issue": "9", |
| "pages": "716--754", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olena Medelyan, David Milne, Catherine Legg, and Ian H. Witten. 2009. Mining meaning from wikipedia. Int. J. Hum.-Comput. Stud., 67(9):716- 754, September.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "WordNet: A Lexical Database for English", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "George", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Miller", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Commun. ACM", |
| "volume": "38", |
| "issue": "11", |
| "pages": "39--41", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "George A. Miller. 1995. WordNet: A Lexical Database for English. Commun. ACM, 38(11):39- 41.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Learning to link with wikipedia", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Milne", |
| "suffix": "" |
| }, |
| { |
| "first": "Ian", |
| "middle": [ |
| "H" |
| ], |
| "last": "Witten", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of the 17th ACM conference on Information and knowledge management", |
| "volume": "", |
| "issue": "", |
| "pages": "509--518", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David Milne and Ian H. Witten. 2008. Learning to link with wikipedia. In Proceedings of the 17th ACM conference on Information and knowledge manage- ment, pages 509-518.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Wikipedia mining for triple extraction enhanced by co-reference resolution", |
| "authors": [ |
| { |
| "first": "Kotaro", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "The 7th International Semantic Web Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kotaro Nakayama. 2008. Wikipedia mining for triple extraction enhanced by co-reference resolution. In The 7th International Semantic Web Conference, page 103.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Shallow Semantics for Coreference Resolution", |
| "authors": [ |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "IJcAI", |
| "volume": "", |
| "issue": "", |
| "pages": "1689--1694", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vincent Ng. 2007. Shallow Semantics for Coreference Resolution. In IJcAI, volume 2007, pages 1689- 1694.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Relation extraction from wikipedia using subtree mining", |
| "authors": [ |
| { |
| "first": "Dat", |
| "middle": [ |
| "P", |
| "T" |
| ], |
| "last": "Nguyen", |
| "suffix": "" |
| }, |
| { |
| "first": "Yutaka", |
| "middle": [], |
| "last": "Matsuo", |
| "suffix": "" |
| }, |
| { |
| "first": "Mitsuru", |
| "middle": [], |
| "last": "Ishizuka", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the National Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dat PT Nguyen, Yutaka Matsuo, and Mitsuru Ishizuka. 2007. Relation extraction from wikipedia using sub- tree mining. In Proceedings of the National Confer- ence on Artificial Intelligence, page 1414.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution", |
| "authors": [ |
| { |
| "first": "Simone", |
| "middle": [ |
| "Paolo" |
| ], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the main conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "192--199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Pro- ceedings of the main conference on Human Lan- guage Technology Conference of the North Amer- ican Chapter of the Association of Computational Linguistics, pages 192-199.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Unrestricted coreference: Identifying entities and events in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [ |
| "S" |
| ], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Lance", |
| "middle": [], |
| "last": "Ramshaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Ralph", |
| "middle": [], |
| "last": "Weischedel", |
| "suffix": "" |
| }, |
| { |
| "first": "Jessica", |
| "middle": [], |
| "last": "MacBride", |
| "suffix": "" |
| }, |
| { |
| "first": "Linnea", |
| "middle": [], |
| "last": "Micciulla", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "First IEEE International Conference on Semantic Computing", |
| "volume": "", |
| "issue": "", |
| "pages": "446--453", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer S. Pradhan, Lance Ramshaw, Ralph Weischedel, Jessica MacBride, and Linnea Micci- ulla. 2007. Unrestricted coreference: Identifying entities and events in OntoNotes. In First IEEE International Conference on Semantic Computing, pages 446-453.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "CoNLL-2012 shared task: Modeling multilingual unrestricted coreference in OntoNotes", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL-Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. CoNLL- 2012 shared task: Modeling multilingual unre- stricted coreference in OntoNotes. In Joint Confer- ence on EMNLP and CoNLL-Shared Task, pages 1- 40.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Scoring coreference partitions of predicted mentions: A reference implementation", |
| "authors": [ |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaoqiang", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| }, |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics", |
| "volume": "2", |
| "issue": "", |
| "pages": "30--35", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Xiaoqiang Luo, Marta Recasens, Ed- uard Hovy, Vincent Ng, and Michael Strube. 2014. Scoring coreference partitions of predicted men- tions: A reference implementation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Pa- pers), pages 30-35, June.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "A multipass sieve for coreference resolution", |
| "authors": [ |
| { |
| "first": "Karthik", |
| "middle": [], |
| "last": "Raghunathan", |
| "suffix": "" |
| }, |
| { |
| "first": "Heeyoung", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| }, |
| { |
| "first": "Sudarshan", |
| "middle": [], |
| "last": "Rangarajan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nathanael", |
| "middle": [], |
| "last": "Chambers", |
| "suffix": "" |
| }, |
| { |
| "first": "Mihai", |
| "middle": [], |
| "last": "Surdeanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "492--501", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Karthik Raghunathan, Heeyoung Lee, Sudarshan Ran- garajan, Nathanael Chambers, Mihai Surdeanu, Dan Jurafsky, and Christopher Manning. 2010. A multi- pass sieve for coreference resolution. In Proceed- ings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 492-501.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Learning-based multi-sieve co-reference resolution with knowledge", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "1234--1244", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Ratinov and Dan Roth. 2012. Learning-based multi-sieve co-reference resolution with knowledge. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Process- ing and Computational Natural Language Learning, pages 1234-1244.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Local and global algorithms for disambiguation to wikipedia", |
| "authors": [ |
| { |
| "first": "Lev", |
| "middle": [], |
| "last": "Ratinov", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| }, |
| { |
| "first": "Doug", |
| "middle": [], |
| "last": "Downey", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Anderson", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "1", |
| "issue": "", |
| "pages": "1375--1384", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lev Ratinov, Dan Roth, Doug Downey, and Mike Anderson. 2011. Local and global algorithms for disambiguation to wikipedia. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1, pages 1375-1384.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "A fully coreference-annotated corpus of scholarly papers from the acl anthology", |
| "authors": [ |
| { |
| "first": "Ulrich", |
| "middle": [], |
| "last": "Sch\u00e4fer", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Spurk", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Steffen", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 24th International Conference on Computational Linguistics (COLING-2012)", |
| "volume": "", |
| "issue": "", |
| "pages": "1059--1070", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ulrich Sch\u00e4fer, Christian Spurk, and J\u00f6rg Steffen. 2012. A fully coreference-annotated corpus of scholarly papers from the acl anthology. In Pro- ceedings of the 24th International Conference on Computational Linguistics (COLING-2012), pages 1059-1070.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Yago: a core of semantic knowledge", |
| "authors": [ |
| { |
| "first": "Fabian", |
| "middle": [ |
| "M" |
| ], |
| "last": "Suchanek", |
| "suffix": "" |
| }, |
| { |
| "first": "Gjergji", |
| "middle": [], |
| "last": "Kasneci", |
| "suffix": "" |
| }, |
| { |
| "first": "Gerhard", |
| "middle": [], |
| "last": "Weikum", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the 16th international conference on World Wide Web", |
| "volume": "", |
| "issue": "", |
| "pages": "697--706", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fabian M. Suchanek, Gjergji Kasneci, and Gerhard Weikum. 2007. Yago: a core of semantic knowl- edge. In Proceedings of the 16th international con- ference on World Wide Web, pages 697-706.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "A modeltheoretic coreference scoring scheme", |
| "authors": [ |
| { |
| "first": "Marc", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Aberdeen", |
| "suffix": "" |
| }, |
| { |
| "first": "Dennis", |
| "middle": [], |
| "last": "Connolly", |
| "suffix": "" |
| }, |
| { |
| "first": "Lynette", |
| "middle": [], |
| "last": "Hirschman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 6th conference on Message understanding", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceed- ings of the 6th conference on Message understand- ing, pages 45-52.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Textrunner: open information extraction on the web", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Yates", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Cafarella", |
| "suffix": "" |
| }, |
| { |
| "first": "Michele", |
| "middle": [], |
| "last": "Banko", |
| "suffix": "" |
| }, |
| { |
| "first": "Oren", |
| "middle": [], |
| "last": "Etzioni", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthew", |
| "middle": [], |
| "last": "Broadhead", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephen", |
| "middle": [], |
| "last": "Soderland", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations", |
| "volume": "", |
| "issue": "", |
| "pages": "25--26", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alexander Yates, Michael Cafarella, Michele Banko, Oren Etzioni, Matthew Broadhead, and Stephen Soderland. 2007. Textrunner: open information extraction on the web. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics: Demonstrations, pages 25-26.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Dynamic knowledge-base alignment for coreference resolution", |
| "authors": [ |
| { |
| "first": "Jianping", |
| "middle": [], |
| "last": "Zheng", |
| "suffix": "" |
| }, |
| { |
| "first": "Luke", |
| "middle": [], |
| "last": "Vilnis", |
| "suffix": "" |
| }, |
| { |
| "first": "Sameer", |
| "middle": [], |
| "last": "Singh", |
| "suffix": "" |
| }, |
| { |
| "first": "Jinho", |
| "middle": [ |
| "D" |
| ], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Conference on Computational Natural Language Learning (CoNLL)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jianping Zheng, Luke Vilnis, Sameer Singh, Jinho D. Choi, and Andrew McCallum. 2013. Dynamic knowledge-base alignment for coreference resolu- tion. In Conference on Computational Natural Lan- guage Learning (CoNLL).", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Figure 1shows the representation collected for the mention San Fernando Valley region of the city of Los Angeles found in the Los Angeles Pierce College article. string span San Fernando Valley region of the city of Los Angeles head word span region span up to the head noun San Fernando Valley region coarse attribute \u2205, neutral, singular Representation of a mention." |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Representation of a Wikipedia concept." |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Examples of mentions (underlined) associated with the MC. An asterisk indicates wrong decisions." |
| }, |
| "TABREF1": { |
| "html": null, |
| "text": "the information we collect for the Wikipedia concept Los Angeles Pierce College.", |
| "content": "<table><tr><td>title</td><td>(W)</td></tr><tr><td>Los Angeles Pierce College</td><td/></tr><tr><td>inferred type</td><td>(W)</td></tr><tr><td colspan=\"2\">Los Angeles Pierce College, also known as</td></tr><tr><td colspan=\"2\">Pierce College and just Pierce, is a commu-</td></tr><tr><td>nity college that serves . . .</td><td/></tr><tr><td>college</td><td/></tr><tr><td>name variants</td><td>(W,F)</td></tr><tr><td>Pierce Junior College, LAPC</td><td/></tr><tr><td>entity type</td><td>(F)</td></tr><tr><td>College/University</td><td/></tr><tr><td>coarse attributes</td><td>(F)</td></tr><tr><td>ORG, neutral, singular</td><td/></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "html": null, |
| "text": "Performance of the baselines on the task of identifying all MC coreferent mentions.", |
| "content": "<table><tr><td>in WikiCoref. In the end, our training set con-</td></tr><tr><td>sists of 20 362 mentions: 1 334 pronominal ones</td></tr><tr><td>(627 of them referring to the MC), and 19 028 non-</td></tr><tr><td>pronominal ones (16% of them referring to the</td></tr><tr><td>MC).</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "html": null, |
| "text": "", |
| "content": "<table><tr><td/><td/><td>.</td></tr><tr><td>P</td><td>R</td><td>F1</td></tr><tr><td colspan=\"3\">always positive 46.70 100.00 63.70</td></tr><tr><td>base 70.34</td><td colspan=\"2\">78.31 74.11</td></tr><tr><td>+main 74.15</td><td colspan=\"2\">90.11 81.35</td></tr><tr><td>+position 80.43</td><td colspan=\"2\">89.15 84.57</td></tr><tr><td>+tag 82.12</td><td colspan=\"2\">90.11 85.93</td></tr><tr><td>+distance 85.46</td><td colspan=\"2\">92.82 88.99</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF7": { |
| "html": null, |
| "text": "60.42 61.00 53.55 43.33 47.90 42.68 50.86 46.41 51.77 D&K (2013) 68.52 55.96 61.61 59.08 39.72 47.51 48.06 40.44 43.92 51.01 D&K (2014) 63.79 57.07 60.24 52.55 40.75 45.90 45.44 39.80 42.43 49.52 M&S (2015) 70.39 53.63 60.88 60.81 37.58 46.45 47.88 38.18 42.48 49.94 C&M (2015) 69.45 49.53 57.83 57.99 34.42 43.20 46.61 33.09 38.70 46.58 Dcoref++ 66.06 62.93 64.46 57.73 48.58 52.76 46.76 49.54 48.11 55.11", |
| "content": "<table><tr><td>System</td><td>P</td><td>MUC R</td><td>F1</td><td>P</td><td>B 3 R</td><td>F1</td><td>P</td><td>CEAF\u03c6 4 R</td><td>F1</td><td>CoNLL F1</td></tr><tr><td>Dcoref</td><td>61.59</td><td/><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "num": null, |
| "type_str": "table" |
| }, |
| "TABREF8": { |
| "html": null, |
| "text": "Performance of Dcoref++ on WikiCoref compared to the state-of-the-art systems: Lee et al.", |
| "content": "<table><tr><td>(2013); Durrett and Klein (2013) -Final; Durrett and Klein (2014) -Joint; Martschat and Strube (2015)</td></tr><tr><td>-Ranking:Latent; Clark and Manning (2015) -Statistical mode with clustering.</td></tr></table>", |
| "num": null, |
| "type_str": "table" |
| } |
| } |
| } |
| } |