{
"paper_id": "P12-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T09:29:12.465734Z"
},
"title": "Coreference Semantics from Web Features",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley"
}
},
"email": "mbansal@cs.berkeley.edu"
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of California",
"location": {
"settlement": "Berkeley"
}
},
"email": "klein@cs.berkeley.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "To address semantic ambiguities in coreference resolution, we use Web n-gram features that capture a range of world knowledge in a diffuse but robust way. Specifically, we exploit short-distance cues to hypernymy, semantic compatibility, and semantic context, as well as general lexical co-occurrence. When added to a state-of-the-art coreference baseline, our Web features give significant gains on multiple datasets (ACE 2004 and ACE 2005) and metrics (MUC and B 3), resulting in the best results reported to date for the end-to-end task of coreference resolution.",
"pdf_parse": {
"paper_id": "P12-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "To address semantic ambiguities in coreference resolution, we use Web n-gram features that capture a range of world knowledge in a diffuse but robust way. Specifically, we exploit short-distance cues to hypernymy, semantic compatibility, and semantic context, as well as general lexical co-occurrence. When added to a state-of-the-art coreference baseline, our Web features give significant gains on multiple datasets (ACE 2004 and ACE 2005) and metrics (MUC and B 3), resulting in the best results reported to date for the end-to-end task of coreference resolution.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Many of the most difficult ambiguities in coreference resolution are semantic in nature. For instance, consider the following example:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "When Obama met Jobs, the president discussed the economy, technology, and education. His election campaign is expected to [...] For resolving coreference in this example, a system would benefit from the world knowledge that Obama is the president. Also, to resolve the pronoun his to the correct antecedent Obama, we can use the knowledge that Obama has an election campaign while Jobs does not. Such ambiguities are difficult to resolve on purely syntactic or configurational grounds.",
"cite_spans": [
{
"start": 122,
"end": 127,
"text": "[...]",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "There have been multiple previous systems that incorporate some form of world knowledge in coreference resolution tasks. Most work (Poesio et al., 2004; Markert and Nissim, 2005; Yang et al., 2005; Bergsma and Lin, 2006) addresses special cases and subtasks such as bridging anaphora, other anaphora, definite NP reference, and pronoun resolution, computing semantic compatibility via Web-hits and counts from large corpora. There is also work on end-to-end coreference resolution that uses large noun-similarity lists (Daum\u00e9 III and Marcu, 2005) or structured knowledge bases such as Wikipedia (Yang and Su, 2007; Haghighi and Klein, 2009; Kobdani et al., 2011) and YAGO (Rahman and Ng, 2011) . However, such structured knowledge bases are of limited scope, and, while Haghighi and Klein (2010) self-acquires knowledge about coreference, it does so only via reference constructions and on a limited scale.",
"cite_spans": [
{
"start": 131,
"end": 152,
"text": "(Poesio et al., 2004;",
"ref_id": "BIBREF22"
},
{
"start": 153,
"end": 178,
"text": "Markert and Nissim, 2005;",
"ref_id": "BIBREF17"
},
{
"start": 179,
"end": 197,
"text": "Yang et al., 2005;",
"ref_id": "BIBREF33"
},
{
"start": 198,
"end": 220,
"text": "Bergsma and Lin, 2006)",
"ref_id": "BIBREF3"
},
{
"start": 519,
"end": 546,
"text": "(Daum\u00e9 III and Marcu, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 595,
"end": 614,
"text": "(Yang and Su, 2007;",
"ref_id": "BIBREF32"
},
{
"start": 615,
"end": 640,
"text": "Haghighi and Klein, 2009;",
"ref_id": "BIBREF9"
},
{
"start": 641,
"end": 662,
"text": "Kobdani et al., 2011)",
"ref_id": null
},
{
"start": 672,
"end": 693,
"text": "(Rahman and Ng, 2011)",
"ref_id": "BIBREF25"
},
{
"start": 770,
"end": 795,
"text": "Haghighi and Klein (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we look to the Web for broader if shallower sources of semantics. In order to harness the information on the Web without presupposing a deep understanding of all Web text, we instead turn to a diverse collection of Web n-gram counts (Brants and Franz, 2006) which, in aggregate, contain diffuse and indirect, but often robust, cues to reference. For example, we can collect the cooccurrence statistics of an anaphor with various candidate antecedents to judge relative surface affinities (i.e., (Obama, president) versus (Jobs, president)). We can also count co-occurrence statistics of competing antecedents when placed in the context of an anaphoric pronoun (i.e., Obama's election campaign versus Jobs' election campaign).",
"cite_spans": [
{
"start": 248,
"end": 272,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "All of our features begin with a pair of headwords from candidate mention pairs and compute statistics derived from various potentially informative queries' counts. We explore five major categories of semantically informative Web features, based on (1) general lexical affinities (via generic co-occurrence statistics), (2) lexical relations (via Hearst-style hypernymy patterns), (3) similarity of entity-based context (e.g., common values of y for which h is a y is attested), (4) matches of distributional soft cluster ids, and (5) attested substitutions of candidate antecedents in the context of a pronominal anaphor.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We first describe a strong baseline consisting of the mention-pair model of the Reconcile system (Stoyanov et al., 2009; Stoyanov et al., 2010) using a decision tree (DT) as its pairwise classifier. To this baseline system, we add our suite of features in turn, each class of features providing substantial gains. Altogether, our final system produces the best numbers reported to date on end-to-end coreference resolution (with automatically detected system mentions) on multiple data sets (ACE 2004 and ACE 2005) and metrics (MUC and B 3 ), achieving significant improvements over the Reconcile DT baseline and over the state-of-the-art results of Haghighi and Klein (2010) .",
"cite_spans": [
{
"start": 97,
"end": 120,
"text": "(Stoyanov et al., 2009;",
"ref_id": "BIBREF29"
},
{
"start": 121,
"end": 143,
"text": "Stoyanov et al., 2010)",
"ref_id": "BIBREF30"
},
{
"start": 491,
"end": 504,
"text": "(ACE 2004 and",
"ref_id": null
},
{
"start": 505,
"end": 514,
"text": "ACE 2005)",
"ref_id": null
},
{
"start": 650,
"end": 675,
"text": "Haghighi and Klein (2010)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Before describing our semantic Web features, we first describe our baseline. The core inference and features come from the Reconcile package (Stoyanov et al., 2009; Stoyanov et al., 2010) , with modifications described below. Our baseline differs most substantially from Stoyanov et al. (2009) in using a decision tree classifier rather than an averaged linear perceptron.",
"cite_spans": [
{
"start": 141,
"end": 164,
"text": "(Stoyanov et al., 2009;",
"ref_id": "BIBREF29"
},
{
"start": 165,
"end": 187,
"text": "Stoyanov et al., 2010)",
"ref_id": "BIBREF30"
},
{
"start": 271,
"end": 293,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Baseline System",
"sec_num": "2"
},
{
"text": "Reconcile is one of the best implementations of the mention-pair model (Soon et al., 2001 ) of coreference resolution. The mention-pair model relies on a pairwise function to determine whether or not two mentions are coreferent. Pairwise predictions are then consolidated by transitive closure (or some other clustering method) to form the final set of coreference clusters (chains). While our Web features could be adapted to entity-mention systems, their current form was most directly applicable to the mention-pair approach, making Reconcile a particularly well-suited platform for this investigation.",
"cite_spans": [
{
"start": 71,
"end": 89,
"text": "(Soon et al., 2001",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reconcile",
"sec_num": "2.1"
},
{
"text": "The Reconcile system provides baseline features, learning mechanisms, and resolution procedures that already achieve near state-of-the-art results on multiple popular datasets using multiple standard metrics. It includes over 80 core features that exploit various automatically generated annotations such as named entity tags, syntactic parses, and WordNet classes, inspired by Soon et al. (2001) , Ng and Cardie (2002) , and Bengtson and Roth (2008) . The Reconcile system also facilitates standardized empirical evaluation to past work. 1 In this paper, we develop a suite of simple semantic Web features based on pairs of mention headwords which stack with the default Reconcile features to surpass past state-of-the-art results.",
"cite_spans": [
{
"start": 378,
"end": 396,
"text": "Soon et al. (2001)",
"ref_id": null
},
{
"start": 399,
"end": 419,
"text": "Ng and Cardie (2002)",
"ref_id": "BIBREF18"
},
{
"start": 426,
"end": 450,
"text": "Bengtson and Roth (2008)",
"ref_id": "BIBREF2"
},
{
"start": 539,
"end": 540,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reconcile",
"sec_num": "2.1"
},
{
"text": "Among the various learning algorithms that Reconcile supports, we chose the decision tree classifier, available in Weka (Hall et al., 2009) as J48, an open source Java implementation of the C4.5 algorithm of Quinlan (1993) .",
"cite_spans": [
{
"start": 120,
"end": 139,
"text": "(Hall et al., 2009)",
"ref_id": "BIBREF11"
},
{
"start": 208,
"end": 222,
"text": "Quinlan (1993)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Classifier",
"sec_num": "2.2"
},
{
"text": "The C4.5 algorithm builds decision trees by incrementally maximizing information gain. The training data is a set of already classified samples, where each sample is a vector of attributes or features. At each node of the tree, C4.5 splits the data on an attribute that most effectively splits its set of samples into more ordered subsets, and then recurses on these smaller subsets. The decision tree can then be used to classify a new sample by following a path from the root downward based on the attribute values of the sample.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Classifier",
"sec_num": "2.2"
},
{
"text": "We find the decision tree classifier to work better than the default averaged perceptron (used by Stoyanov et al. (2009) ), on multiple datasets using multiple metrics (see Section 4.3). Many advantages have been claimed for decision tree classifiers, including interpretability and robustness. However, we suspect that the aspect most relevant to our case is that decision trees can capture non-linear interactions between features. For example, recency is very important for pronoun reference but much less so for nominal reference.",
"cite_spans": [
{
"start": 98,
"end": 120,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decision Tree Classifier",
"sec_num": "2.2"
},
{
"text": "Our Web features for coreference resolution are simple and capture a range of diffuse world knowledge. Given a mention pair, we use the head finder in Reconcile to find the lexical heads of both mentions (for example, the head of the Palestinian territories is the word territories). Next, we take each headword pair (h 1 , h 2 ) and compute various Web-count functions on it that can signal whether or not this mention pair is coreferent.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics via Web Features",
"sec_num": "3"
},
{
"text": "As the source of Web information, we use the Google n-grams corpus (Brants and Franz, 2006) which contains English n-grams (n = 1 to 5) and their Web frequency counts, derived from nearly 1 trillion word tokens and 95 billion sentences. Because we have many queries that must be run against this corpus, we apply the trie-based hashing algorithm of Bansal and Klein (2011) to efficiently answer all of them in one pass over it. The features that require word clusters (Section 3.4) use the output of Lin et al. (2010). 2 We describe our five types of features in turn. The first four types are most intuitive for mention pairs where both members are non-pronominal, but, aside from the general co-occurrence group, helped for all mention pair types. The fifth feature group applies only to pairs in which the anaphor is a pronoun but the antecedent is a non-pronoun. Related work for each feature category is discussed inline.",
"cite_spans": [
{
"start": 67,
"end": 91,
"text": "(Brants and Franz, 2006)",
"ref_id": "BIBREF5"
},
{
"start": 349,
"end": 372,
"text": "Bansal and Klein (2011)",
"ref_id": "BIBREF1"
},
{
"start": 500,
"end": 520,
"text": "Lin et al. (2010). 2",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semantics via Web Features",
"sec_num": "3"
},
{
"text": "These features capture co-occurrence statistics of the two headwords, i.e., how often h 1 and h 2 are seen adjacent or nearly adjacent on the Web. This count can be a useful coreference signal because, in general, mentions referring to the same entity will co-occur more frequently (in large corpora) than those that do not. Using the n-grams corpus (for n = 1 to 5), we collect co-occurrence Web-counts by allowing a varying number of wildcards between h 1 and h 2 in the query. The co-occurrence value is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General co-occurrence",
"sec_num": "3.1"
},
{
"text": "bin log 10 c 12 c 1 \u2022 c 2 where c 12 = count(\"h 1 h 2 \") + count(\"h 1 h 2 \") + count(\"h 1 h 2 \"),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General co-occurrence",
"sec_num": "3.1"
},
{
"text": "c 1 = count(\"h 1 \"), and",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General co-occurrence",
"sec_num": "3.1"
},
{
"text": "c 2 = count(\"h 2 \").",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "General co-occurrence",
"sec_num": "3.1"
},
{
"text": "We normalize the overall co-occurrence count of the headword pair c 12 by the unigram counts of the individual headwords c 1 and c 2 , so that high-frequency headwords do not unfairly get a high feature value (this is similar to computing scaled mutual information MI (Church and Hanks, 1989) ). 3 This normalized value is quantized by taking its log 10 and binning. The actual feature that fires is an indicator of which quantized bin the query produced. As a real example from our development set, the cooccurrence count c 12 for the headword pair (leader, president) is 11383, while it is only 95 for the headword pair (voter, president); after normalization and log 10 , the values are -10.9 and -12.0, respectively. These kinds of general Web co-occurrence statistics have been used previously for other supervised NLP tasks such as spelling correction and syntactic parsing Bansal and Klein, 2011) . In coreference, similar word-association scores were used by Kobdani et al. (2011), but from Wikipedia and for self-training.",
"cite_spans": [
{
"start": 268,
"end": 292,
"text": "(Church and Hanks, 1989)",
"ref_id": "BIBREF6"
},
{
"start": 880,
"end": 903,
"text": "Bansal and Klein, 2011)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "General co-occurrence",
"sec_num": "3.1"
},
{
"text": "These features capture templated co-occurrence of the two headwords h 1 and h 2 in the Web-corpus. Here, we only collect statistics of the headwords cooccurring with a generalized Hearst pattern (Hearst, 1992) in between. Hearst patterns capture various lexical semantic relations between items. For example, seeing X is a Y or X and other Y indicates hypernymy and also tends to cue coreference. The specific patterns we use are:",
"cite_spans": [
{
"start": 195,
"end": 209,
"text": "(Hearst, 1992)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hearst co-occurrence",
"sec_num": "3.2"
},
{
"text": "\u2022 h 1 {is | are | was | were} {a | an | the}? h 2 \u2022 h 1 {and | or} {other | the other | another} h 2 \u2022 h 1 other than {a | an | the}? h 2 \u2022 h 1 such as {a | an | the}? h 2 \u2022 h 1 , including {a | an | the}? h 2 \u2022 h 1 , especially {a | an | the}? h 2 \u2022 h 1 of {the| all}? h 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hearst co-occurrence",
"sec_num": "3.2"
},
{
"text": "For this feature, we again use a quantized normalized count as in Section 3.1, but c 12 here is restricted to n-grams where one of the above patterns occurs in between the headwords. We did not allow wildcards in between the headwords and the Hearst-patterns because this introduced a significant amount of noise. Also, we do not constrain the order of h 1 and h 2 because these patterns can hold for either direction of coreference. 4 As a real example from our development set, the c 12 count for the headword pair (leader, president) is 752, while for (voter, president), it is 0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Hearst co-occurrence",
"sec_num": "3.2"
},
{
"text": "Hypernymic semantic compatibility for coreference is intuitive and has been explored in varying forms by previous work. Poesio et al. (2004) and Markert and Nissim (2005) employ a subset of our Hearst patterns and Web-hits for the subtasks of bridging anaphora, other-anaphora, and definite NP resolution. Others (Haghighi and Klein, 2009; Rahman and Ng, 2011; Daum\u00e9 III and Marcu, 2005) use similar relations to extract compatibility statistics from Wikipedia, YAGO, and noun-similarity lists. Yang and Su (2007) use Wikipedia to automatically extract semantic patterns, which are then used as features in a learning setup. Instead of extracting patterns from the training data, we use all the above patterns, which helps us generalize to new datasets for end-to-end coreference resolution (see Section 4.3).",
"cite_spans": [
{
"start": 120,
"end": 140,
"text": "Poesio et al. (2004)",
"ref_id": "BIBREF22"
},
{
"start": 145,
"end": 170,
"text": "Markert and Nissim (2005)",
"ref_id": "BIBREF17"
},
{
"start": 313,
"end": 339,
"text": "(Haghighi and Klein, 2009;",
"ref_id": "BIBREF9"
},
{
"start": 340,
"end": 360,
"text": "Rahman and Ng, 2011;",
"ref_id": "BIBREF25"
},
{
"start": 361,
"end": 387,
"text": "Daum\u00e9 III and Marcu, 2005)",
"ref_id": "BIBREF7"
},
{
"start": 495,
"end": 513,
"text": "Yang and Su (2007)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hearst co-occurrence",
"sec_num": "3.2"
},
{
"text": "For each headword h, we first collect context seeds y using the pattern h {is | are | was | were} {a | an | the}? y taking seeds y in order of decreasing Web count. The corresponding ordered seed list Y = {y} gives us useful information about the headword's entity type. For example, for h = president, the top 30 seeds (and their parts of speech) include important cues such as president is elected (verb), president is authorized (verb), president is responsible (adjective), president is the chief (adjective), president is above (preposition), and president is the head (noun).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based context",
"sec_num": "3.3"
},
{
"text": "Matches in the seed lists of two headwords can be a strong signal that they are coreferent. For example, in the top 30 seed lists for the headword pair (leader, president), we get matches including elected, responsible, and expected. To capture this effect, we create a feature that indicates whether there is a match in the top k seeds of the two headwords (where k is a hyperparameter to tune).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based context",
"sec_num": "3.3"
},
{
"text": "We create another feature that indicates whether the dominant parts of speech in the seed lists matches for the headword pair. We first collect the POS tags (using length 2 character prefixes to indicate coarse parts of speech) of the seeds matched in the top k seed lists of the two headwords, where k is another hyperparameter to tune. If the dominant tags match and are in a small list of important tags ({JJ, NN, RB, VB}), we fire an indicator feature specifying the matched tag, otherwise we fire a nomatch indicator. To obtain POS tags for the seeds, we use a unigram-based POS tagger trained on the WSJ treebank training set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Entity-based context",
"sec_num": "3.3"
},
{
"text": "The distributional hypothesis of Harris (1954) says that words that occur in similar contexts tend to have a similar linguistic behavior. Here, we design features with the idea that this hypothesis extends to reference: mentions occurring in similar contexts in large document sets such as the Web tend to be compatible for coreference. Instead of collecting the contexts of each mention and creating sparse features from them, we use Web-scale distributional clustering to summarize compatibility.",
"cite_spans": [
{
"start": 33,
"end": 46,
"text": "Harris (1954)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster information",
"sec_num": "3.4"
},
{
"text": "Specifically, we begin with the phrase-based clusters from , which were created using the Google n-grams V2 corpus. These clusters come from distributional K-Means clustering (with K = 1000) on phrases, using the n-gram context as features. The cluster data contains almost 10 million phrases and their soft cluster memberships. Up to twenty cluster ids with the highest centroid similarities are included for each phrase in this dataset .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster information",
"sec_num": "3.4"
},
{
"text": "Our cluster-based features assume that if the headwords of the two mentions have matches in their cluster id lists, then they are more compatible for coreference. We check the match of not just the top 1 cluster ids, but also farther down in the 20 sized lists because, as discussed in Lin and Wu (2009) , the soft cluster assignments often reveal different senses of a word. However, we also assume that higher-ranked matches tend to imply closer meanings. To this end, we fire a feature indicating the value bin(i+j), where i and j are the earliest match positions in the cluster id lists of h 1 and h 2 . Binning here means that match positions in a close range generally trigger the same feature.",
"cite_spans": [
{
"start": 286,
"end": 303,
"text": "Lin and Wu (2009)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster information",
"sec_num": "3.4"
},
{
"text": "Recent previous work has used clustering information to improve the performance of supervised NLP tasks such as NER and dependency parsing (Koo et al., 2008; Lin and Wu, 2009) . However, in coreference, the only related work to our knowledge is from Daum\u00e9 III and Marcu (2005) , who use word class features derived from a Web-scale corpus via a process described in Ravichandran et al. (2005) .",
"cite_spans": [
{
"start": 139,
"end": 157,
"text": "(Koo et al., 2008;",
"ref_id": "BIBREF14"
},
{
"start": 158,
"end": 175,
"text": "Lin and Wu, 2009)",
"ref_id": "BIBREF15"
},
{
"start": 250,
"end": 276,
"text": "Daum\u00e9 III and Marcu (2005)",
"ref_id": "BIBREF7"
},
{
"start": 366,
"end": 392,
"text": "Ravichandran et al. (2005)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Cluster information",
"sec_num": "3.4"
},
{
"text": "Our last feature category specifically addresses pronoun reference, for cases when the anaphoric mention N P 2 (and hence its headword h 2 ) is a pronoun, while the candidate antecedent mention N P 1 (and hence its headword h 1 ) is not. For such a headword pair (h 1 , h 2 ), the idea is to substitute the nonpronoun h 1 into h 2 's position and see whether the result is attested on the Web.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "If the anaphoric pronominal mention is h 2 and its sentential context is l' l h 2 r r', then the substituted phrase will be l' l h 1 r r'. 5 High Web counts of substituted phrases tend to indicate semantic compatibility. Perhaps unsurprisingly for English, only the right context was useful in this capacity. We chose the following three context types, based on performance on a development set:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "\u2022 h 1 r (R1) \u2022 h 1 r r' (R2) \u2022 h 1 r (R1Gap)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "As an example of the R1Gap feature, if the anaphor h 2 + context is his victory and one candidate antecedent h 1 is Bush, then we compute the normalized value count(\"Bush s victory\") count(\" s victory\")count(\"Bush\")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "In general, we compute count(\"h 1 s r\") count(\" s r\")count(\"h 1 \")",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "The final feature value is again a normalized count converted to log 10 and then binned. 6 We have three separate features for the R1, R2, and R1Gap context types. We tune a separate bin-size hyperparameter for each of these three features.",
"cite_spans": [
{
"start": 89,
"end": 90,
"text": "6",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "These pronoun resolution features are similar to selectional preference work by Yang et al. (2005) and Bergsma and Lin (2006) , who compute semantic compatibility for pronouns in specific syntactic relationships such as possessive-noun, subject-verb, etc. In our case, we directly use the general context of any pronominal anaphor to find its most compatible antecedent.",
"cite_spans": [
{
"start": 80,
"end": 98,
"text": "Yang et al. (2005)",
"ref_id": "BIBREF33"
},
{
"start": 103,
"end": 125,
"text": "Bergsma and Lin (2006)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "Note that all our above features are designed to be non-sparse by firing indicators of the quantized Web statistics and not the lexical-or class-based identities of the mention pair. This keeps the total number of features small, which is important for the relatively small datasets used for coreference resolution. We go from around 100 features in the Reconcile baseline to around 165 features after adding all our Web features. ",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Pronoun context",
"sec_num": "3.5"
},
{
"text": "We show results on three popular and comparatively larger coreference resolution data sets -the ACE04, ACE05, and ACE05-ALL datasets from the ACE Program (NIST, 2004) . In ACE04 and ACE05, we have only the newswire portion (of the original ACE 2004 and 2005 training sets) and use the standard train/test splits reported in Stoyanov et al. (2009) and Haghighi and Klein (2010) . In ACE05-ALL, we have the full ACE 2005 training set and use the standard train/test splits reported in Rahman and Ng (2009) and Haghighi and Klein (2010) . Note that most previous work does not report (or need) a standard development set; hence, for tuning our features and its hyper-parameters, we randomly split the original training data into a training and development set with a 70/30 ratio (and then use the full original training set during testing). Details of the corpora are shown in Table 1 . 7 Details of the Web-scale corpora used for extracting features are discussed in Section 3.",
"cite_spans": [
{
"start": 154,
"end": 166,
"text": "(NIST, 2004)",
"ref_id": "BIBREF20"
},
{
"start": 324,
"end": 346,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
},
{
"start": 351,
"end": 376,
"text": "Haghighi and Klein (2010)",
"ref_id": "BIBREF10"
},
{
"start": 483,
"end": 503,
"text": "Rahman and Ng (2009)",
"ref_id": "BIBREF24"
},
{
"start": 508,
"end": 533,
"text": "Haghighi and Klein (2010)",
"ref_id": "BIBREF10"
},
{
"start": 884,
"end": 885,
"text": "7",
"ref_id": null
}
],
"ref_spans": [
{
"start": 874,
"end": 881,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "4.1"
},
{
"text": "We evaluated our work on both MUC (Vilain et al., 1995) and B 3 (Bagga and Baldwin, 1998) . Both scorers are available in the Reconcile infrastructure. 8 MUC measures how many predicted clusters need to be merged to cover the true gold clusters. B 3 computes precision and recall for each mention by computing the intersection of its predicted and gold cluster and dividing by the size of the predicted Table 2 : Incremental results for the Web features on the ACE04 development set. AvgPerc is the averaged perceptron baseline, DecTree is the decision tree baseline, and the +Feature rows show the effect of adding a particular feature incrementally (not in isolation) to the DecTree baseline. The feature categories correspond to those described in Section 3. and gold cluster, respectively. It is well known (Recasens and Hovy, 2010; Ng, 2010; Kobdani et al., 2011) that MUC is biased towards large clusters (chains) whereas B 3 is biased towards singleton clusters. Therefore, for a more balanced evaluation, we show improvements on both metrics simultaneously.",
"cite_spans": [
{
"start": 34,
"end": 55,
"text": "(Vilain et al., 1995)",
"ref_id": null
},
{
"start": 64,
"end": 89,
"text": "(Bagga and Baldwin, 1998)",
"ref_id": "BIBREF0"
},
{
"start": 811,
"end": 836,
"text": "(Recasens and Hovy, 2010;",
"ref_id": "BIBREF27"
},
{
"start": 837,
"end": 846,
"text": "Ng, 2010;",
"ref_id": "BIBREF19"
},
{
"start": 847,
"end": 868,
"text": "Kobdani et al., 2011)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 403,
"end": 410,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "4.2"
},
{
"text": "We start with the Reconcile baseline but employ the decision tree (DT) classifier, because it has significantly better performance than the default averaged perceptron classifier used in Stoyanov et al. (2009) . 9 Table 2 compares the baseline perceptron results to the DT results and then shows the incremental addition of the Web features to the DT baseline (on the ACE04 development set).",
"cite_spans": [
{
"start": 187,
"end": 209,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [
{
"start": 214,
"end": 221,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "The DT classifier, in general, is precision-biased. The Web features somewhat balance this by increasing the recall and decreasing precision to a lesser extent, improving overall F1. Each feature type incrementally increases both MUC and B 3 F1-measures, showing that they are not taking advantage of any bias of either metric. The incremental improvements also show that each Web feature type brings in some additional benefit over the information already present in the Reconcile baseline, which includes alias, animacy, named entity, and WordNet Table 3 : Primary test results on the ACE04, ACE05, and ACE05-ALL datasets. All systems reported here use automatically extracted system mentions. B 3 here is the B 3 All version of Stoyanov et al. (2009) . We also report statistical significance of the improvements from the Web features on the DT baseline, using the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1993) . The perceptron baseline in this work (Reconcile settings: 15 iterations, threshold = 0.45, SIG for ACE04 and AP for ACE05, ACE05-ALL) has different results from Stoyanov et al. (2009) because their current publicly available code is different from that used in their paper (p.c.). Also, the B 3 variant used by Rahman and Ng (2009) is slightly different from other systems (they remove all and only the singleton twinless system mentions, so it is neither B 3 All nor B 3 None). For completeness, our (untuned) B 3 None results (DT + Web) on the ACE05-ALL dataset are P=69.9|R=65.9|F1=67.8.",
"cite_spans": [
{
"start": 731,
"end": 753,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
},
{
"start": 883,
"end": 897,
"text": "(Noreen, 1989;",
"ref_id": "BIBREF21"
},
{
"start": 898,
"end": 925,
"text": "Efron and Tibshirani, 1993)",
"ref_id": "BIBREF8"
},
{
"start": 1089,
"end": 1111,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
},
{
"start": 1239,
"end": 1259,
"text": "Rahman and Ng (2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 549,
"end": 556,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "class / sense information. 10 Table 3 shows our primary test results on the ACE04, ACE05, and ACE05-ALL datasets, for the MUC and B 3 metrics. All systems reported use automatically detected mentions. We report our results (the 3 rows marked 'This Work') on the perceptron baseline, the DT baseline, and the Web features added to the DT baseline. We also report statistical significance of the improvements from the Web fea- 10 We also initially experimented with smaller datasets (MUC6 and MUC7) and an averaged perceptron baseline, and we did see similar improvements, arguing that these features are useful independently of the learning algorithm and dataset.",
"cite_spans": [
{
"start": 425,
"end": 427,
"text": "10",
"ref_id": null
}
],
"ref_spans": [
{
"start": 30,
"end": 37,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "tures on the DT baseline. 11 For significance testing, we use the bootstrap test (Noreen, 1989; Efron and Tibshirani, 1993) .",
"cite_spans": [
{
"start": 81,
"end": 95,
"text": "(Noreen, 1989;",
"ref_id": "BIBREF21"
},
{
"start": 96,
"end": 123,
"text": "Efron and Tibshirani, 1993)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
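The bootstrap significance test cited here (Noreen, 1989; Efron and Tibshirani, 1993) can be sketched roughly as follows. The function name, the document-level resampling unit, and the one-sided formulation are illustrative assumptions, not the paper's exact procedure.

```python
import random

def bootstrap_pvalue(scores_a, scores_b, n_resamples=10000, seed=0):
    """One-sided bootstrap test that system B improves over system A.

    scores_a and scores_b hold per-document scores (e.g., F1) for the two
    systems, aligned by document. Documents are resampled with replacement,
    and the p-value is the fraction of resamples whose mean improvement
    is <= 0.
    """
    rng = random.Random(seed)
    diffs = [b - a for a, b in zip(scores_a, scores_b)]
    hits = 0
    for _ in range(n_resamples):
        sample = [rng.choice(diffs) for _ in diffs]
        if sum(sample) / len(sample) <= 0:
            hits += 1
    return hits / n_resamples
```

Resampling at the document level (rather than the mention level) respects the fact that coreference decisions within a document are not independent.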
{
"text": "Our main comparison is against Haghighi and Klein (2010), a mostly-unsupervised generative approach that models latent entity types, which generate specific entities that in turn render individual mentions. They learn on large datasets including Wikipedia, and their results are state-of-the-art in coreference resolution. We outperform their system on most datasets and metrics (except on ACE05-ALL for the MUC metric). The other systems we compare to and outperform are the perceptron-based Reconcile system of Stoyanov et al. (2009) , the strong deterministic system of Haghighi and Klein (2009) , and the cluster-ranking model of Rahman and Ng (2009) .",
"cite_spans": [
{
"start": 513,
"end": 535,
"text": "Stoyanov et al. (2009)",
"ref_id": "BIBREF29"
},
{
"start": 573,
"end": 598,
"text": "Haghighi and Klein (2009)",
"ref_id": "BIBREF9"
},
{
"start": 634,
"end": 654,
"text": "Rahman and Ng (2009)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "We develop our features and tune their hyperparameter values on the ACE04 development set and then use these on the ACE04 test set. 12 On the ACE05 and ACE05-ALL datasets, we directly transfer our Web features and their hyper-parameter values from the ACE04 dev-set, without any retuning. The test improvements we get on all the datasets (see Table 3 ) suggest that our features are generally useful across datasets and metrics. 13",
"cite_spans": [],
"ref_spans": [
{
"start": 343,
"end": 350,
"text": "Table 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "4.3"
},
{
"text": "In this section, we briefly discuss errors (in the DT baseline) corrected by our Web features, and analyze the decision tree classifier built during training (based on the ACE04 development experiments).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "To study error correction, we begin with the mention pairs that are coreferent according to the goldstandard annotation (after matching the system mentions to the gold ones). We consider the pairs that are wrongly predicted to be non-coreferent by the baseline DT system but correctly predicted to be coreferent when we add our Web features. Some examples of such pairs include:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Iran ; the country the EPA ; the agency athletic director ; Mulcahy Democrat Al Gore ; the vice president 12 Note that for the ACE04 dataset only, we use the 'SmartIn-stanceGenerator' (SIG) filter of Reconcile that uses only a filtered set of mention-pairs (based on distance and other properties of the pair) instead of the 'AllPairs' (AP) setting that uses all pairs of mentions, and makes training and tuning very slow.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "13 For the ACE05 and ACE05-ALL datasets, we revert to the 'AllPairs' (AP) setting of Reconcile because this gives us baselines competitive with Haghighi and Klein (2010). Since we did not need to retune on these datasets, training and tuning speed were not a bottleneck. Moreover, the improvements from our Web features are similar even when tried over the SIG baseline; hence, the filter choice doesn't affect the performance gain from the Web features. Barry Bonds ; the best baseball player Vojislav Kostunica ; the pro-democracy leader its closest rival ; the German magazine Das Motorrad One of those difficult-to-dislodge judges ; John Marshall These pairs are cases where our features on Hearst-style co-occurrence and entity-based context-match are informative and help discriminate in favor of the correct antecedents. One advantage of using Web-based features is that the Web has a surprising amount of information on even rare entities such as proper names. Our features also correct coreference for various cases of pronominal anaphora, but these corrections are harder to convey out of context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Next, we analyze the decision tree built after training the classifier (with all our Web features included). Around 30% of the decision nodes (both non-terminals and leaves) correspond to Web features, and the average error in classification at the Web-feature leaves is only around 2.5%, suggesting that our features are strongly discriminative for pairwise coreference decisions. Some of the most discriminative nodes correspond to the general cooccurrence feature for most (binned) log-count values, the Hearst-style co-occurrence feature for its zero-count value, the cluster-match feature for its zero-match value, and the R2 pronoun context feature for certain (binned) log-count values.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
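The binned log-count featurization referred to in this analysis can be sketched as follows. The dedicated zero bin and floored-log buckets follow the description above, but the bin granularity and the feature-string format are illustrative assumptions, not the paper's exact scheme.

```python
import math

def binned_log_count_feature(name, count):
    """Map a raw Web n-gram count to a coarse feature string.

    Zero counts get their own bin (the analysis notes the zero-count
    value of the Hearst-style feature is highly discriminative);
    positive counts are bucketed by floored natural log.
    """
    if count == 0:
        return f"{name}=zero"
    return f"{name}=bin{int(math.floor(math.log(count)))}"
```

Binning keeps the decision tree from splitting on raw counts, which span many orders of magnitude on Web-scale data.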
{
"text": "We have presented a collection of simple Web-count features for coreference resolution that capture a range of world knowledge via statistics of general lexical co-occurrence, hypernymy, semantic compatibility, and semantic context. When added to a strong decision tree baseline, these features give significant improvements and achieve the best results reported to date, across multiple datasets and metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We use the default configuration settings of Reconcile(Stoyanov et al., 2010) unless mentioned otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "These clusters are derived form the V2 Google n-grams corpus. The V2 corpus itself is not publicly available, but the clusters are available at http://www.clsp.jhu.edu/ sbergsma/PhrasalClusters",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "We also tried adding count(\"h1 h2\") to c12 but this decreases performance, perhaps because truly adjacent occurrences are often not grammatical.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Two minor variants not listed above are h1 including h2 and h1 especially h2.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Possessive pronouns are replaced with an additional apostrophe, i.e., h1 's. We also use features (see R1Gap) that allow wildcards ( ) in between the headword and the context when collecting Web-counts, in order to allow for determiners and other filler words.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
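The wildcard-gap queries described in this footnote can be sketched as follows. The `*` wildcard symbol, the function name, and the maximum gap size are illustrative assumptions (the footnote's own wildcard symbol was lost in extraction).

```python
def gap_queries(headword, context, max_gap=2, wildcard="*"):
    """Generate R1Gap-style query patterns with 0..max_gap wildcard
    slots between a mention headword and its following context word,
    to absorb determiners and other filler words when collecting
    Web counts.
    """
    return [
        " ".join([headword] + [wildcard] * g + [context])
        for g in range(max_gap + 1)
    ]
```

Summing counts over the generated patterns makes the context feature robust to short intervening material like "the" or "a".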
{
"text": "Normalization helps us with two kinds of balancing. First, we divide by the count of the antecedent so that when choosing the best antecedent for a fixed anaphor, we are not biased towards more frequently occurring antecedents. Second, we divide by the count of the context so that across anaphora, an anaphor with rarer context does not get smaller values (for all its candidate antecedents) than another anaphor with a more common context.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
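The two-way normalization described in this footnote can be sketched as follows. The function name and the +1 smoothing in the denominators are illustrative assumptions added to guard against zero counts.

```python
def normalized_count(pair_count, antecedent_count, context_count):
    """Two-way normalization of a Web co-occurrence count.

    Dividing by the antecedent's own count keeps frequent antecedents
    from dominating for a fixed anaphor; dividing by the context's
    count keeps anaphora with rare contexts comparable to those with
    common ones.
    """
    return pair_count / ((antecedent_count + 1) * (context_count + 1))
```

With both divisions applied, the value behaves like a (smoothed) co-occurrence strength rather than a raw frequency.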
{
"text": "Note that the development set is used only for ACE04, because for ACE05, and ACE05-ALL, we directly test using the features tuned on ACE04.8 Note that B 3 has two versions which handle twinless (spurious) mentions in different ways (seeStoyanov et al. (2009) for details). We use the B 3 All version, unless mentioned otherwise.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Moreover, a DT classifier takes roughly the same amount of time and memory as a perceptron on our ACE04 development experiments. It is, however, slower and more memory-intensive (\u223c3x) on the bigger ACE05-ALL dataset.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "All improvements are significant, except on the small ACE05 dataset with the MUC metric (where it is weak, at p < 0.12). However, on the larger version of this dataset, ACE05-ALL, we get improvements which are both larger and more significant (at p < 0.001).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank Nathan Gilbert, Adam Pauls, and the anonymous reviewers for their helpful suggestions. This research is supported by Qualcomm via an Innovation Fellowship to the first author and by BBN under DARPA contract HR0011-12-C-0014.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Algorithms for scoring coreference chains",
"authors": [
{
"first": "Amit",
"middle": [],
"last": "Bagga",
"suffix": ""
},
{
"first": "Breck",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": 1998,
"venue": "Proceedings of MUC-7 and LREC Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Amit Bagga and Breck Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of MUC-7 and LREC Workshop.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Web-scale features for full-scale parsing",
"authors": [
{
"first": "Mohit",
"middle": [],
"last": "Bansal",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mohit Bansal and Dan Klein. 2011. Web-scale features for full-scale parsing. In Proceedings of ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Understanding the value of features for coreference resolution",
"authors": [
{
"first": "Eric",
"middle": [],
"last": "Bengtson",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Eric Bengtson and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Pro- ceedings of EMNLP.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Bootstrapping path-based pronoun resolution",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of COLING-ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma and Dekang Lin. 2006. Bootstrap- ping path-based pronoun resolution. In Proceedings of COLING-ACL.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Creating robust supervised classifiers via web-scale ngram data",
"authors": [
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shane Bergsma, Emily Pitler, and Dekang Lin. 2010. Creating robust supervised classifiers via web-scale n- gram data. In Proceedings of ACL.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "The Google Web 1T 5-gram corpus version 1.1. LDC2006T13",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Franz",
"suffix": ""
}
],
"year": 2006,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants and Alex Franz. 2006. The Google Web 1T 5-gram corpus version 1.1. LDC2006T13.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Word association norms, mutual information, and lexicography",
"authors": [
{
"first": "Kenneth",
"middle": [
"Ward"
],
"last": "Church",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Hanks",
"suffix": ""
}
],
"year": 1989,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Ward Church and Patrick Hanks. 1989. Word association norms, mutual information, and lexicogra- phy. In Proceedings of ACL.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A large-scale exploration of effective global features for a joint entity detection and tracking model",
"authors": [
{
"first": "Hal",
"middle": [],
"last": "Daum\u00e9",
"suffix": ""
},
{
"first": "Iii",
"middle": [],
"last": "",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hal Daum\u00e9 III and Daniel Marcu. 2005. A large-scale exploration of effective global features for a joint en- tity detection and tracking model. In Proceedings of EMNLP.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "An introduction to the bootstrap",
"authors": [
{
"first": "B",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Efron and R. Tibshirani. 1993. An introduction to the bootstrap. Chapman & Hall CRC.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Simple coreference resolution with rich syntactic and semantic features",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2009. Simple coreference resolution with rich syntactic and semantic features. In Proceedings of EMNLP.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Coreference resolution in a modular, entity-centered model",
"authors": [
{
"first": "Aria",
"middle": [],
"last": "Haghighi",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of NAACL-HLT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aria Haghighi and Dan Klein. 2010. Coreference resolu- tion in a modular, entity-centered model. In Proceed- ings of NAACL-HLT.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "The WEKA data mining software: An update. SIGKDD Explorations",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hall",
"suffix": ""
},
{
"first": "Eibe",
"middle": [],
"last": "Frank",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Holmes",
"suffix": ""
},
{
"first": "Bernhard",
"middle": [],
"last": "Pfahringer",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Reutemann",
"suffix": ""
},
{
"first": "Ian",
"middle": [
"H"
],
"last": "Witten",
"suffix": ""
}
],
"year": 2009,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hall, Eibe Frank, Geoffrey Holmes, Bernhard Pfahringer, Peter Reutemann, and Ian H. Witten. 2009. The WEKA data mining software: An update. SIGKDD Explorations, 11(1).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Distributional structure. Word",
"authors": [
{
"first": "Zellig",
"middle": [],
"last": "Harris",
"suffix": ""
}
],
"year": 1954,
"venue": "",
"volume": "10",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Zellig Harris. 1954. Distributional structure. Word, 10(23):146162.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Bootstrapping coreference resolution using word associations",
"authors": [
{
"first": "Marti",
"middle": [],
"last": "Hearst",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of COLING. Hamidreza Kobdani, Hinrich Schutze, Michael Schiehlen, and Hans Kamp",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marti Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of COLING. Hamidreza Kobdani, Hinrich Schutze, Michael Schiehlen, and Hans Kamp. 2011. Bootstrap- ping coreference resolution using word associations. In Proceedings of ACL.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Simple semi-supervised dependency parsing",
"authors": [
{
"first": "Terry",
"middle": [],
"last": "Koo",
"suffix": ""
},
{
"first": "Xavier",
"middle": [],
"last": "Carreras",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Terry Koo, Xavier Carreras, and Michael Collins. 2008. Simple semi-supervised dependency parsing. In Pro- ceedings of ACL.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Phrase clustering for discriminative learning",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Xiaoyun",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Proceedings of ACL.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "New tools for web-scale n-grams",
"authors": [
{
"first": "Dekang",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Church",
"suffix": ""
},
{
"first": "Heng",
"middle": [],
"last": "Ji",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Sekine",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Yarowsky",
"suffix": ""
},
{
"first": "Shane",
"middle": [],
"last": "Bergsma",
"suffix": ""
},
{
"first": "Kailash",
"middle": [],
"last": "Patil",
"suffix": ""
},
{
"first": "Emily",
"middle": [],
"last": "Pitler",
"suffix": ""
},
{
"first": "Rachel",
"middle": [],
"last": "Lathbury",
"suffix": ""
},
{
"first": "Vikram",
"middle": [],
"last": "Rao",
"suffix": ""
},
{
"first": "Kapil",
"middle": [],
"last": "Dalwani",
"suffix": ""
},
{
"first": "Sushant",
"middle": [],
"last": "Narsale",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of LREC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekang Lin, Kenneth Church, Heng Ji, Satoshi Sekine, David Yarowsky, Shane Bergsma, Kailash Patil, Emily Pitler, Rachel Lathbury, Vikram Rao, Kapil Dalwani, and Sushant Narsale. 2010. New tools for web-scale n-grams. In Proceedings of LREC.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Comparing knowledge sources for nominal anaphora resolution",
"authors": [
{
"first": "Katja",
"middle": [],
"last": "Markert",
"suffix": ""
},
{
"first": "Malvina",
"middle": [],
"last": "Nissim",
"suffix": ""
}
],
"year": 2005,
"venue": "Computational Linguistics",
"volume": "31",
"issue": "3",
"pages": "367--402",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Katja Markert and Malvina Nissim. 2005. Comparing knowledge sources for nominal anaphora resolution. Computational Linguistics, 31(3):367-402.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Improving machine learning approaches to coreference resolution",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng and Claire Cardie. 2002. Improving machine learning approaches to coreference resolution. In Pro- ceedings of ACL.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Supervised noun phrase coreference research: The first fifteen years",
"authors": [
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vincent Ng. 2010. Supervised noun phrase coreference research: The first fifteen years. In Proceedings of ACL.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "The ACE evaluation plan",
"authors": [
{
"first": "",
"middle": [],
"last": "Nist",
"suffix": ""
}
],
"year": 2004,
"venue": "NIST",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "NIST. 2004. The ACE evaluation plan. In NIST.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Computer intensive methods for hypothesis testing: An introduction",
"authors": [
{
"first": "E",
"middle": [
"W"
],
"last": "Noreen",
"suffix": ""
}
],
"year": 1989,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E.W. Noreen. 1989. Computer intensive methods for hypothesis testing: An introduction. Wiley, New York.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Learning to resolve bridging references",
"authors": [
{
"first": "Massimo",
"middle": [],
"last": "Poesio",
"suffix": ""
},
{
"first": "Rahul",
"middle": [],
"last": "Mehta",
"suffix": ""
},
{
"first": "Axel",
"middle": [],
"last": "Maroudas",
"suffix": ""
},
{
"first": "Janet",
"middle": [],
"last": "Hitzeman",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Massimo Poesio, Rahul Mehta, Axel Maroudas, and Janet Hitzeman. 2004. Learning to resolve bridging references. In Proceedings of ACL.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "C4.5: Programs for machine learning",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Quinlan",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Quinlan. 1993. C4.5: Programs for machine learn- ing. Morgan Kaufmann Publishers Inc., San Fran- cisco, CA, USA.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Supervised models for coreference resolution",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2009. Supervised models for coreference resolution. In Proceedings of EMNLP.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Coreference resolution with world knowledge",
"authors": [
{
"first": "Altaf",
"middle": [],
"last": "Rahman",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Ng",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Altaf Rahman and Vincent Ng. 2011. Coreference reso- lution with world knowledge. In Proceedings of ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Randomized algorithms and NLP: Using locality sensitive hash functions for high speed noun clustering",
"authors": [
{
"first": "Deepak",
"middle": [],
"last": "Ravichandran",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Pantel",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deepak Ravichandran, Patrick Pantel, and Eduard Hovy. 2005. Randomized algorithms and NLP: Using local- ity sensitive hash functions for high speed noun clus- tering. In Proceedings of ACL.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Coreference resolution across corpora: Languages, coding schemes, and preprocessing information",
"authors": [
{
"first": "Marta",
"middle": [],
"last": "Recasens",
"suffix": ""
},
{
"first": "Eduard",
"middle": [],
"last": "Hovy",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marta Recasens and Eduard Hovy. 2010. Corefer- ence resolution across corpora: Languages, coding schemes, and preprocessing information. In Proceed- ings of ACL.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "A machine learning approach to coreference resolution of noun phrases",
"authors": [],
"year": 2001,
"venue": "Computational Linguistics",
"volume": "27",
"issue": "4",
"pages": "521--544",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wee Meng Soon, Hwee Tou Ng, and Daniel Chung Yong Lim. 2001. A machine learning approach to corefer- ence resolution of noun phrases. Computational Lin- guistics, 27(4):521-544.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Conundrums in noun phrase coreference resolution: Making sense of the state-of-the-art",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of ACL/IJCNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov, Nathan Gilbert, Claire Cardie, and Ellen Riloff. 2009. Conundrums in noun phrase coref- erence resolution: Making sense of the state-of-the-art. In Proceedings of ACL/IJCNLP.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Reconcile: A coreference resolution research platform",
"authors": [
{
"first": "Veselin",
"middle": [],
"last": "Stoyanov",
"suffix": ""
},
{
"first": "Claire",
"middle": [],
"last": "Cardie",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Gilbert",
"suffix": ""
},
{
"first": "Ellen",
"middle": [],
"last": "Riloff",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Buttler",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Hysom",
"suffix": ""
}
],
"year": 2010,
"venue": "Technical report",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Veselin Stoyanov, Claire Cardie, Nathan Gilbert, Ellen Riloff, David Buttler, and David Hysom. 2010. Rec- oncile: A coreference resolution research platform. In Technical report, Cornell University.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Dennis Connolly, and Lynette Hirschman. 1995. A modeltheoretic coreference scoring scheme",
"authors": [
{
"first": "Marc",
"middle": [],
"last": "Vilain",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Burger",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Aberdeen",
"suffix": ""
},
{
"first": "Dennis",
"middle": [],
"last": "Connolly",
"suffix": ""
},
{
"first": "Lynette",
"middle": [],
"last": "Hirschman",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of MUC-6",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc Vilain, John Burger, John Aberdeen, Dennis Con- nolly, and Lynette Hirschman. 1995. A model- theoretic coreference scoring scheme. In Proceedings of MUC-6.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Coreference resolution using semantic relatedness information from automatically discovered patterns",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yang and Jian Su. 2007. Coreference resolu- tion using semantic relatedness information from auto- matically discovered patterns. In Proceedings of ACL.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Improving pronoun resolution using statistics-based semantic compatibility information",
"authors": [
{
"first": "Xiaofeng",
"middle": [],
"last": "Yang",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Su",
"suffix": ""
},
{
"first": "Chew Lim",
"middle": [],
"last": "Tan",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiaofeng Yang, Jian Su, and Chew Lim Tan. 2005. Im- proving pronoun resolution using statistics-based se- mantic compatibility information. In Proceedings of ACL.",
"links": null
}
},
"ref_entries": {
"TABREF1": {
"text": "Dataset characteristics -docs: the total number of documents; dev: the train/test split during development; test: the train/test split during testing; ment: the number of gold mentions in the test split; chn: the number of coreference chains in the test split.",
"html": null,
"type_str": "table",
"num": null,
"content": "<table/>"
}
}
}
}