| { |
| "paper_id": "P13-1049", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:34:33.200816Z" |
| }, |
| "title": "Improving pairwise coreference models through feature space hierarchy learning", |
| "authors": [ |
| { |
| "first": "Emmanuel", |
| "middle": [], |
| "last": "Lassalle", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "INRIA & Univ. Paris Diderot Sorbonne Paris Cit\u00e9", |
| "location": { |
| "postCode": "F-75205", |
| "settlement": "Paris" |
| } |
| }, |
| "email": "emmanuel.lassalle@ens-lyon.org" |
| }, |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "pascal.denis@inria.fr" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "This paper proposes a new method for significantly improving the performance of pairwise coreference models. Given a set of indicators, our method learns how to best separate types of mention pairs into equivalence classes for which we construct distinct classification models. In effect, our approach finds an optimal feature space (derived from a base feature set and indicator set) for discriminating coreferential mention pairs. Although our approach explores a very large space of possible feature spaces, it remains tractable by exploiting the structure of the hierarchies built from the indicators. Our experiments on the CoNLL-2012 Shared Task English datasets (gold mentions) indicate that our method is robust relative to different clustering strategies and evaluation metrics, showing large and consistent improvements over a single pairwise model using the same base features. Our best system obtains a competitive average F1 score of 67.2 over MUC, B\u00b3, and CEAF which, despite its simplicity, places it above the mean score of other systems on these datasets.",
| "pdf_parse": { |
| "paper_id": "P13-1049", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "This paper proposes a new method for significantly improving the performance of pairwise coreference models. Given a set of indicators, our method learns how to best separate types of mention pairs into equivalence classes for which we construct distinct classification models. In effect, our approach finds an optimal feature space (derived from a base feature set and indicator set) for discriminating coreferential mention pairs. Although our approach explores a very large space of possible feature spaces, it remains tractable by exploiting the structure of the hierarchies built from the indicators. Our experiments on the CoNLL-2012 Shared Task English datasets (gold mentions) indicate that our method is robust relative to different clustering strategies and evaluation metrics, showing large and consistent improvements over a single pairwise model using the same base features. Our best system obtains a competitive average F1 score of 67.2 over MUC, B\u00b3, and CEAF which, despite its simplicity, places it above the mean score of other systems on these datasets.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "Coreference resolution is the problem of partitioning a sequence of noun phrases (or mentions), as they occur in a natural language text, into a set of referential entities. A common approach to this problem is to separate it into two modules: on the one hand, one defines a model for evaluating coreference links, in general a discriminative classifier that detects coreferential mention pairs. On the other hand, one designs a method for grouping the detected links into a coherent global output (i.e. a partition over the set of entity mentions). This second step is typically achieved using greedy heuristics (McCarthy and Lehnert, 1995; Soon et al., 2001; Ng and Cardie, 2002; Bengtson and Roth, 2008) , although more sophisticated clustering approaches have been used, too, such as graph cutting methods (Nicolae and Nicolae, 2006; Cai and Strube, 2010) and Integer Linear Programming (ILP) formulations (Klenner, 2007; Denis and Baldridge, 2009) . Despite its simplicity, this two-step strategy remains competitive even when compared to more complex models utilizing a global loss (Bengtson and Roth, 2008) .",
| "cite_spans": [ |
| { |
| "start": 613, |
| "end": 641, |
| "text": "(McCarthy and Lehnert, 1995;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 642, |
| "end": 660, |
| "text": "Soon et al., 2001;", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 661, |
| "end": 681, |
| "text": "Ng and Cardie, 2002;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 682, |
| "end": 706, |
"text": "Bengtson and Roth, 2008)",
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 810, |
| "end": 837, |
| "text": "(Nicolae and Nicolae, 2006;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 838, |
| "end": 859, |
| "text": "Cai and Strube, 2010)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 910, |
| "end": 925, |
| "text": "(Klenner, 2007;", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 926, |
| "end": 952, |
| "text": "Denis and Baldridge, 2009)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1088, |
| "end": 1113, |
"text": "(Bengtson and Roth, 2008)",
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this kind of architecture, the performance of the entire coreference system strongly depends on the quality of the local pairwise classifier. 1 Consequently, a lot of research effort on coreference resolution has focused on trying to boost the performance of the pairwise classifier. Numerous studies are concerned with feature extraction, typically trying to enrich the classifier with more linguistic knowledge and/or more world knowledge (Ng and Cardie, 2002; Kehler et al., 2004; Ponzetto and Strube, 2006; Bengtson and Roth, 2008; Versley et al., 2008; Uryupina et al., 2011) . A second line of work explores the use of distinct local models for different types of mentions, specifically for different types of anaphoric mentions based on their grammatical categories (such as pronouns, proper names, definite descriptions) (Morton, 2000; Ng, 2005; Denis and Baldridge, 2008) . 2 An important justification for such specialized models is (psycho-)linguistic and comes from theoretical findings based on salience or accessibility (Ariel, 1988) . It is worth noting that, from a machine learning point of view, this is related to feature extraction in that both approaches in effect recast the pairwise classification problem in higher dimensional feature spaces.",
| "cite_spans": [ |
| { |
| "start": 444, |
| "end": 465, |
| "text": "(Ng and Cardie, 2002;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 466, |
| "end": 486, |
| "text": "Kehler et al., 2004;", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 487, |
| "end": 513, |
| "text": "Ponzetto and Strube, 2006;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 514, |
| "end": 538, |
"text": "Bengtson and Roth, 2008;",
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 539, |
| "end": 560, |
| "text": "Versley et al., 2008;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 561, |
| "end": 583, |
| "text": "Uryupina et al., 2011)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 832, |
| "end": 846, |
| "text": "(Morton, 2000;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 847, |
| "end": 856, |
| "text": "Ng, 2005;", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 857, |
| "end": 883, |
| "text": "Denis and Baldridge, 2008)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 886, |
| "end": 887, |
| "text": "2", |
| "ref_id": null |
| }, |
| { |
| "start": 1038, |
| "end": 1051, |
| "text": "(Ariel, 1988)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "In this paper, we claim that mention pairs should not be processed by a single classifier, but should instead be handled through specific models. Furthermore, we are interested in learning how to construct and select such differential models. Our argument is therefore based on statistical considerations, rather than on purely linguistic ones 3 . The main question we raise is, given a set of indicators (such as grammatical types, distance between two mentions, or named entity types), how to best partition the pool of mention pair examples in order to best discriminate coreferential pairs from non-coreferential ones. In effect, we want to learn the \"best\" subspaces for our different models: that is, subspaces that are neither too coarse (i.e., unlikely to separate the data well) nor too specific (i.e., prone to data sparseness and noise). We will see that this is also equivalent to selecting a single large adequate feature space by using the data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "Our approach generalizes earlier approaches in important ways. For one thing, the definition of the different models is no longer restricted to grammatical typing (our model allows for various other types of indicators) or to the sole typing of the anaphoric mention (our models can also be specific to a particular type of antecedent or to the two types of the mention pair). More importantly, we propose an original method for learning the best set of models that can be built from a given set of indicators and a training set. These models are organized in a hierarchy, wherein each leaf corresponds to a mutually disjoint subset of mention pair examples and the classifier that can be trained from it. Our models are trained using the Online Passive-Aggressive algorithm or PA (Crammer et al., 2006) , a large margin version of the perceptron. Our method is exact in that it explores the full space of hierarchies (of size at least 2^(2^(n-2))) definable on an indicator sequence, while remaining scalable by exploiting the particular structure of these hierarchies during the training of the distinct local models (Ng and Cardie, 2002; Uryupina, 2004) .",
| "cite_spans": [ |
| { |
| "start": 778, |
| "end": 800, |
| "text": "(Crammer et al., 2006)", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 1099, |
| "end": 1120, |
| "text": "(Ng and Cardie, 2002;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 1121, |
| "end": 1136, |
| "text": "Uryupina, 2004)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "3 However, it should be underlined that the statistical viewpoint is complementary to the linguistic work.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "These hierarchies are cut down with dynamic programming. This approach performs well, and it largely outperforms the single model. As will be shown through a variety of experiments on the CoNLL-2012 Shared Task English datasets, these improvements are consistent across different evaluation metrics and, for the most part, independent of the clustering decoder that was used.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The rest of this paper is organized as follows. Section 2 discusses the underlying statistical hypotheses of the standard pairwise model and defines a simple alternative framework that uses a simple separation of mention pairs based on grammatical types. Next, in section 3, we generalize the method by introducing indicator hierarchies and explain how to learn the best models associated with them. Section 4 provides a brief system description and Section 5 evaluates the various models on CoNLL-2012 English datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Pairwise models basically employ one local classifier to decide whether two mentions are coreferential or not. When using machine learning techniques, this involves certain assumptions about the statistical behavior of mention pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling pairs", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Let us adopt a probabilistic point of view to describe the prototype of pairwise models. Given a document, the number of mentions is fixed and each pair of mentions follows a certain distribution (that we partly observe in a feature space). The basic idea of pairwise models is to consider mention pairs independently from each other (that is why a decoder is necessary to enforce transitivity).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical assumptions", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "If we use a single classifier to process all pairs, then they are supposed to be identically distributed. We claim that pairs should not be processed by a single classifier because they are not identically distributed (or at least the distribution is too complex for the classifier); rather, we should separate different \"types\" of pairs and create a specific model for each of them.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical assumptions", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "Separating different kinds of pairs and handling them with different specific models can lead to more accurate global models. For instance, some coreference resolution systems process different kinds of anaphors separately, which suggests, for example, that pairs containing an anaphoric pronoun behave differently from pairs with non-pronominal anaphors. One could rely on a rich set of features to capture complex distributions, but here we actually have a rather limited set of elementary features (see section 4); for instance, using products of features must be done carefully to avoid introducing noise into the model. Instead of imposing heuristic products of features, we will show that a clever separation of instances leads to significant improvements of the pairwise model.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Statistical assumptions", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "We first introduce the problem more formally. Every pair of mentions m_i and m_j is modeled by a random variable:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "P_ij : \u2126 \u2192 X \u00d7 Y, \u03c9 \u21a6 (x_ij(\u03c9), y_ij(\u03c9))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "where \u2126 classically represents randomness, X is the space of objects (\"mention pairs\") that is not directly observable, and y_ij(\u03c9) \u2208 Y = {+1, \u22121} are the labels indicating whether m_i and m_j are coreferential or not. To lighten the notation, we will not always write the index ij. Now we define a mapping:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "\u03c6_F : X \u2192 F, x \u21a6 x",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
| "text": "that casts pairs into a feature space F through which we observe them. For us, F is simply a vector space over R (in our case many features are Boolean; they are cast into R as 0 and 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "For technical coherence, we assume that \u03c6_{F_1}(x(\u03c9)) and \u03c6_{F_2}(x(\u03c9)) have the same values when projected onto the feature space F_1 \u2229 F_2: that is, common features from two feature spaces have the same values.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "From this formal point of view, the task of coreference resolution consists in fixing \u03c6_F, observing labeled samples {(\u03c6_F(x), y)_t}_{t \u2208 TrainSet} and, given partially observed new variables {(\u03c6_F(x))_t}_{t \u2208 TestSet}, recovering the corresponding values of y.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Definitions", |
| "sec_num": "2.2.1" |
| }, |
| { |
"text": "We claimed before that all mention pairs do not seem to be identically distributed since, for example, pronouns do not behave like nominals. We can formulate this more rigorously: since the object space X is not directly observable, we do not know its complexity. In particular, when using a mapping to too small a feature space, the classifier cannot capture the distribution very well: the data is too noisy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formalizing the statistical assumptions", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "Now if we say that pronominal anaphora do not behave like other anaphora, we distinguish two kinds of pairs, i.e. we state that the distribution of pairs in X is a mixture of two distributions, and we deterministically assign each pair to its specific part of the mixture. In this way, we may separate positive and negative pairs more easily if we cast each kind of pair into a specific feature space. Let us call these feature spaces F_1 and F_2. We can either create two independent classifiers on F_1 and F_2 to process each kind of pair, or define a single model on a larger feature space F = F_1 \u2295 F_2. If the model is linear (which is our case), these approaches happen to be equivalent.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formalizing the statistical assumptions", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "So we can actually assume that the random variables P_ij are identically distributed, but drawn from a complex mixture. A new issue then arises: we need to find a mapping \u03c6_F that renders the best view of the distribution of the data.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formalizing the statistical assumptions", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "From a theoretical viewpoint, the higher the dimension of the feature space (imagine taking the direct sum of all feature spaces), the more detail we get on the distribution of mention pairs and the more accurately we can expect to separate positives and negatives. In practice, we have to cope with data sparsity: there will not be enough data to properly train a linear model on such a space. Finally, we seek a feature space situated between the two extremes of a space that is too big (sparseness) and a space that is too small (noisy data). The core of this work is to define a general method for choosing the most adequate space F among a huge number of possibilities when we do not know a priori which is the best.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Formalizing the statistical assumptions", |
| "sec_num": "2.2.2" |
| }, |
| { |
"text": "In this work, we try to linearly separate positive and negative instances in the large space F with the Online Passive-Aggressive (PA) algorithm (Crammer et al., 2006) : the model learns a parameter vector w that defines a hyperplane that cuts the space into two parts. The predicted class of a pair x with feature vector \u03c6_F(x) is given by:",
| "cite_spans": [ |
| { |
| "start": 145, |
| "end": 167, |
| "text": "(Crammer et al., 2006)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "C_F(x) := sign(w^T \u00b7 \u03c6_F(x))",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "Linearity implies an equivalence between: (i) separating instances of two types, t_1 and t_2, into two independent models with respective feature spaces F_1 and F_2 and parameters w_1 and w_2, and (ii) a single model on F_1 \u2295 F_2. To see why, let us define the map:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "\u03c6_{F_1 \u2295 F_2}(x) := (\u03c6_{F_1}(x)^T, 0^T)^T if x is typed t_1, and (0^T, \u03c6_{F_2}(x)^T)^T if x is typed t_2",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "and the parameter vector", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "w = (w_1, w_2) \u2208 F_1 \u2295 F_2.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "Then we have:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "C_{F_1 \u2295 F_2}(x) = C_{F_1}(x) if x is typed t_1, and C_{F_2}(x) if x is typed t_2",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
| "text": "Now we check that the same property applies when the PA fits its parameter w. For each new instance of the training set, the weight is updated according to the following rule 4 :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "w_{t+1} = argmin_{w \u2208 F} (1/2) ||w \u2212 w_t||\u00b2 s.t. l(w; (x_t, y_t)) = 0, where l(w; (x_t, y_t)) = max(0, 1 \u2212 y_t (w \u00b7 \u03c6_F(x_t))), so that when F = F_1 \u2295 F_2, the minimum if x_t is typed t_1 is w_{t+1} = (w^1_{t+1}, w^2_t), and if x_t is typed t_2 it is w_{t+1} = (w^1_t, w^2_{t+1})",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "where the w^i_{t+1} correspond to the updates in space F_i independently from the rest. This result can be extended easily to the case of n feature spaces. Thus, with a deterministic separation of the data, a large model can be learned using smaller independent models.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Linear models", |
| "sec_num": "2.2.3" |
| }, |
| { |
"text": "To motivate our approach, we first introduce a simple separation of mention pairs which creates 9 models obtained by considering all possible pairs of grammatical types {nominal, name, pronoun} for both mentions in the pair (a similar fine-grained separation can be found in (Chen et al., 2011) ). This is equivalent to using 9 different feature spaces F_1, ..., F_9 to capture the global distribution of pairs. With the PA, this is also a single model with feature space F = F_1 \u2295 ... \u2295 F_9. We will call it the GRAMTYPE model.",
| "cite_spans": [ |
| { |
| "start": 275, |
| "end": 294, |
| "text": "(Chen et al., 2011)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An example: separation by gramtype", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "As we will see in Section 5, these separated models significantly outperform a single model that uses the same base feature set. But we would like to define a method that adapts a feature space to the data by choosing the most adequate separation of pairs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "An example: separation by gramtype", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "In this section, we have to keep in mind that separating the pairs into different models is the same as building a large feature space in which the parameter w can be learned by parts, in independent subspaces.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchizing feature spaces", |
| "sec_num": "3" |
| }, |
| { |
| "text": "For establishing a structure on feature spaces, we use indicators which are deterministic functions on mention pairs with a small number of outputs. Indicators classify pairs in predefined categories in one-to-one correspondence with independent feature spaces. We can reuse some features of the system as indicators, e.g. the grammatical or named entity types. We can also employ functions that are not used as features, e.g. the approximate position of one of the mentions in the text.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Indicators on pairs", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "The small number of outputs of an indicator is required for practical reasons: if a category of pairs is too refined, the associated feature space will suffer from data sparsity. Accordingly, distance-based indicators must be approximated by coarse histograms. In our experiments, the outputs never exceeded a dozen values. One way to reduce the output span of an indicator is to binarize it, in the same way one binarizes a tree (there are many possible binarizations). This operation produces a hierarchy of indicators, which is exactly the structure we exploit in what follows.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Indicators on pairs", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We define hierarchies as combinations of indicators creating finer categories of mention pairs: given a finite sequence of indicators, a mention pair is classified by applying the indicators successively, each time refining a category into subcategories, just like in a decision tree (each node having the same number of children as the number of outputs of its indicator). We allow the classification to stop before applying the last indicator, but the behavior must be the same for all the instances. So a hierarchy is basically a sub-tree of the complete decision tree that contains copies of the same indicator at each level.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchies for separating pairs", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "If all the leaves of the decision tree have the same depth, this corresponds to taking the Cartesian product of outputs of all indicators for indexing the categories. In that case, we refer to product-hierarchies. The GRAMTYPE model can be seen as a two level product-hierarchy (figure 1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchies for separating pairs", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Figure 1: GRAMTYPE seen as a product-hierarchy", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchies for separating pairs", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Product-hierarchies will be the starting point of our method to find a feature space that fits the data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchies for separating pairs", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Choosing a relevant sequence of indicators should be guided by linguistic intuition and theoretical work (gramtype separation is one example). The system will find by itself the best usage of the indicators when optimizing the hierarchy. The sequence is a parameter of the model.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Hierarchies for separating pairs", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "As we did for the GRAMTYPE model, we associate a feature space F_i to each leaf of a hierarchy. Likewise, the direct sum F = \u2295_i F_i defines a large feature space. The corresponding parameter w of the model can be obtained by learning the w_i in the F_i.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation with feature spaces", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "Given a sequence of indicators, the number of different hierarchies we can define is equal to the number of sub-trees of the complete decision tree (each non-leaf node having all its children). The minimal case is when all indicators are Boolean. The number of full binary trees of height at most n can be computed by the following recursion: T(1) = 1 and T(n+1) = 1 + T(n)\u00b2. So T(n) \u2265 2^(2^(n-2)): even with small values of n, the number of different hierarchies (or large feature spaces) definable with a sequence of indicators is gigantic (e.g. T(10) \u2248 3.8\u00d710^90).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation with feature spaces", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Among all the possibilities for a large feature space, many are irrelevant because for them the data is too sparse or too noisy in some subspaces. We need a general method for finding an adequate space without enumerating and testing each of them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Relation with feature spaces", |
| "sec_num": "3.3" |
| }, |
| { |
"text": "Let us assume now that the sequence of indicators is fixed, and let n be its length. To find the best feature space among a very high number of possibilities, we need a criterion we can apply without too much additional computation. For that, we only evaluate the feature space locally on pairs, i.e. without applying a decoder to the output. We employ three measures on pairwise classification results: precision, recall and F1-score. Selecting the best space for one of these measures can then be achieved using dynamic programming techniques. In the rest of the paper, we will optimize the F1-score.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimizing hierarchies", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "Training the hierarchy Starting from the product-hierarchy, we associate a classifier and its proper feature space to each node of the tree 5 . The classifiers are then trained as follows: for each instance, there is a unique path from the root to a leaf of the complete tree, and each classifier situated on that path is updated with the instance. The number of iterations of the Passive-Aggressive algorithm is fixed.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimizing hierarchies", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "Computing scores After training, we test all the classifiers on another set of pairs 6 . Again, a classifier is tested on an instance only if it is situated on the path from the root to the leaf associated with the instance. We obtain TP/FP/FN numbers 7 on pair classifications that are sufficient to compute the F1-score. As for training, the data on which a classifier at a given node is evaluated is the same as the union of all data used to evaluate the classifiers corresponding to the children of this node. Thus we are able to compare the scores obtained at a node to the \"union of the scores\" obtained at its children.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimizing hierarchies", |
| "sec_num": "3.4" |
| }, |
| { |
"text": "Cutting down the hierarchy For the moment we have a complete tree with a classifier at each node. We use a dynamic programming technique to compute the best hierarchy by cutting this tree and keeping only the classifiers situated at the leaves. The algorithm assembles the best local models (or feature spaces) together to create larger models. It goes from the leaves to the root and cuts the sub-tree starting at a node whenever it does not provide a better score than the node itself, or on the contrary propagates the score of the sub-tree when there is an improvement. The details are given in Algorithm 1 (Cutting down a hierarchy), which operates on the list of nodes given by a breadth-first search. Let us briefly discuss the correctness and complexity of the algorithm. Each node is seen two times, so the time complexity is linear in the number of nodes, which is at least O(2^n). However, only nodes that have encountered at least one training instance are useful, and there are O(n \u00d7 k) such nodes (where k is the size of the training set). So we can optimize the algorithm to run in time O(n \u00d7 k) 8 . If we scan the list obtained by breadth-first search backwards, we are ensured that every node will be processed after its children. (node.children) is the set of children of node, and (node.score) its score. sum-num provides TP/FP/FN numbers by simply adding those of the children, and sum-score computes the score based on these new TP/FP/FN numbers. Line 6 cuts the children of a node when they are not used in the best score. The algorithm thus propagates the best scores from the leaves to the root, which finally gives a single score corresponding to the best hierarchy. Only the leaves used to compute the best score are kept, and they define the best hierarchy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimizing hierarchies", |
| "sec_num": "3.4" |
| }, |
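The backward pass of Algorithm 1 can be sketched as follows (our own minimal reconstruction; the `Node` container and the `(tp, fp, fn)` triples are illustrative, not the paper's code):

```python
from dataclasses import dataclass, field

def f1(tp, fp, fn):
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

@dataclass
class Node:
    counts: tuple                                 # (tp, fp, fn) of this node's own classifier
    children: list = field(default_factory=list)
    best: tuple = None                            # counts of the best sub-hierarchy below

def cut_hierarchy(bfs_nodes):
    """One backward pass over a breadth-first node list, so every node is
    processed after its children: keep the sub-tree below a node only when
    pooling its children's TP/FP/FN ("sum-num") beats the node's own F1."""
    for node in reversed(bfs_nodes):
        if node.children:
            # "sum-num": add the TP/FP/FN of the already-optimized children.
            pooled = tuple(sum(xs) for xs in zip(*(c.best for c in node.children)))
            if f1(*pooled) > f1(*node.counts):
                node.best = pooled                # propagate the sub-tree's score
            else:
                node.best = node.counts           # cut: this node becomes a leaf
                node.children = []
        else:
            node.best = node.counts
    return bfs_nodes[0].best                      # TP/FP/FN of the best hierarchy
```

Each node is touched a constant number of times, matching the linear-time argument above.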
| { |
| "text": "We can see the operation of cutting as replacing a group of subspaces by a single subspace in the sum (see figure 2) . So cutting down the product-hierarchy amounts to reducing the global initial feature space in an optimal way. To sum up, the whole procedure is equivalent to training more than O(2 n ) perceptrons simultaneously and selecting the best performing.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 107, |
| "end": 116, |
| "text": "figure 2)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Relation between cutting and the global feature space", |
| "sec_num": null |
| }, |
| { |
| "text": "Our system consists in the pairwise model obtained by cutting a hierarchy (the PA with selected feature space) and using a greedy decoder to create clusters from the output. It is parametrized by the choice of the initial sequence of indicators.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "System description", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We used classical features that can be found in details in (Bengston and Roth, 2008) and (Rahman and Ng, 2011): grammatical type and subtype of mentions, string match and substring, apposition and copula, distance (number of separating mentions/sentences/words), gender/number match, synonymy/hypernym and animacy (using WordNet), family name (based on lists), named entity types, syntactic features (gold parse) and anaphoricity detection.", |
| "cite_spans": [ |
| { |
| "start": 59, |
| "end": 84, |
| "text": "(Bengston and Roth, 2008)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The base features", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As indicators we used: left and right grammatical types and subtypes, entity types, a boolean indicating if the mentions are in the same sentence, and a very coarse histogram of distance in terms of sentences. We systematically included right gramtype and left gramtype in the sequences and added other indicators, producing sequences of different lengths. The parameter was optimized by document categories using a development set after decoding the output of the pairwise model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Indicators", |
| "sec_num": "4.2" |
| }, |
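A sequence of indicators maps a mention pair to a leaf of the product-hierarchy, with prefixes of the resulting key identifying inner nodes. A minimal sketch (the dictionary fields and indicator definitions are invented for illustration):

```python
def leaf_key(pair, indicators):
    """Apply the indicator sequence to a mention pair; the full tuple
    identifies the leaf, and each prefix identifies an inner node."""
    return tuple(ind(pair) for ind in indicators)

# Hypothetical indicator functions over a pair represented as a dict.
right_gramtype = lambda p: p["right_type"]         # grammatical type of the anaphor
left_gramtype  = lambda p: p["left_type"]          # grammatical type of the antecedent
same_sentence  = lambda p: p["sent_dist"] == 0
coarse_dist    = lambda p: min(p["sent_dist"], 3)  # coarse sentence-distance bucket

indicators = [right_gramtype, left_gramtype, same_sentence, coarse_dist]
pair = {"right_type": "PRONOUN", "left_type": "NAME", "sent_dist": 1}
key = leaf_key(pair, indicators)   # ('PRONOUN', 'NAME', False, 1)
```

Training instances sharing a key prefix land in the same node's feature subspace, which is what the hierarchy optimization then pools or cuts.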
| { |
| "text": "We tested 3 classical greedy link selection strategies that form clusters from the classifier decision: Closest-First (merge mentions with their closest coreferent mention on the left) (Soon et al., 2001 ), Best-first (merge mentions with the mention on the left having the highest positive score) (Ng and Cardie, 2002; Bengston and Roth, 2008) , and Aggressive-Merge (transitive closure on positive pairs) (McCarthy and Lehnert, 1995) . Each of these decoders is typically (although not always) used in tandem with a specific sampling selection at training. Thus, Closest-First for instance is used in combination with a sample selection that generates training instances only for the mentions that occur between the closest antecedent and the anaphor (Soon et al., 2001 ", |
| "cite_spans": [ |
| { |
| "start": 185, |
| "end": 203, |
| "text": "(Soon et al., 2001", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 298, |
| "end": 319, |
| "text": "(Ng and Cardie, 2002;", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 320, |
| "end": 344, |
| "text": "Bengston and Roth, 2008)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 407, |
| "end": 435, |
| "text": "(McCarthy and Lehnert, 1995)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 753, |
| "end": 771, |
| "text": "(Soon et al., 2001", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoders", |
| "sec_num": "4.3" |
| }, |
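The three linking strategies can be sketched as follows (a minimal illustration in our own notation; `score(i, j)` stands for the pairwise classifier's margin on mentions `i < j`, positive meaning coreferent):

```python
def closest_first(n, score):
    """Link each mention to its closest preceding positive-scoring mention."""
    links = []
    for j in range(n):
        for i in range(j - 1, -1, -1):          # scan leftwards from j
            if score(i, j) > 0:
                links.append((i, j))
                break
    return links

def best_first(n, score):
    """Link each mention to the preceding mention with the highest positive score."""
    links = []
    for j in range(n):
        cands = [(score(i, j), i) for i in range(j) if score(i, j) > 0]
        if cands:
            links.append((max(cands)[1], j))
    return links

def aggressive_merge(n, score):
    """Transitive closure over all positive pairs, via union-find."""
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]       # path halving
            x = parent[x]
        return x
    for j in range(n):
        for i in range(j):
            if score(i, j) > 0:
                parent[find(i)] = find(j)
    return [find(m) for m in range(n)]          # cluster label per mention
```

The link sets produced by the first two decoders are then closed into clusters; Aggressive-Merge returns cluster labels directly.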
| { |
| "text": "We evaluated the system on the English part of the corpus provided in the CoNLL-2012 Shared Task (Pradhan et al., 2012), referred to as CoNLL-2012 here. The corpus contains 7 categories of documents (over 2K documents, 1.3M words). We used the official train/dev/test data sets. We evaluated our system in the closed mode which requires that only provided data is used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Our baselines are a SINGLE MODEL, the GRAM-TYPE model (section 2) and a RIGHT-TYPE model, defined as the first level of the gramtype product hierarchy (i.e. grammatical type of the anaphora (Morton, 2000) ), with each greedy decoder and also the original sampling with a single model associated with those decoders.", |
| "cite_spans": [ |
| { |
| "start": 190, |
| "end": 204, |
| "text": "(Morton, 2000)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "The hierarchies were trained with 10-fold crossvalidation on the training set (the hierarchies are cut after cumulating the scores obtained by crossvalidation) and their parameters are optimized by document category on the development set: the sequence of indicators obtaining the best average score after decoding was selected as parameter for the category. The obtained hierarchy is referred to as the BEST HIERARCHY in the results. We fixed the number of iterations for the PA for all models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "In our experiments, we consider only the gold mentions. This is a rather idealized setting but our focus is on comparing various pairwise local models rather than on building a full coreference resolution system. Also, we wanted to avoid having to consider too many parameters in our experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Settings", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We use the three metrics that are most commonly used 9 , namely:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metrics", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "MUC (Vilain et al., 1995) computes for each true entity cluster the number of system clusters that are needed to cover it. Precision is this quantity divided by the true cluster size minus one. Recall is obtained by reversing true and predicated clusters. F1 is the harmonic mean.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 25, |
| "text": "(Vilain et al., 1995)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metrics", |
| "sec_num": "5.3" |
| }, |
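A minimal sketch of the MUC computation (our own illustration, not an official scorer; mentions are strings and clusters are sets, with unresolved mentions counted as singleton partitions):

```python
def muc_recall(key, response):
    """MUC recall: for each key cluster, count the response partitions
    needed to cover it; score (size - partitions) / (size - 1)."""
    num = den = 0
    for cluster in key:
        partitions = set()
        for m in cluster:
            owner = next((i for i, r in enumerate(response) if m in r), None)
            # A mention absent from the response is its own partition.
            partitions.add(("sing", m) if owner is None else owner)
        num += len(cluster) - len(partitions)
        den += len(cluster) - 1
    return num / den if den else 0.0

def muc_f1(key, response):
    r = muc_recall(key, response)
    p = muc_recall(response, key)   # precision reverses key and response
    return 2 * p * r / (p + r) if p + r else 0.0
```

On the classic example of a key entity {a, b, c, d} split into two response clusters, this gives recall 2/3, precision 1, and F1 0.8.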
| { |
| "text": "B 3 (Bagga and Baldwin, 1998) computes recall and precision scores for each mention, based on the intersection between the system/true clusters for that mention. Precision is the ratio of the intersection and the true cluster sizes, while recall is the ratio of the intersection to the system cluster sizes. Global recall, precision, and F1 scores are obtained by averaging over the mention scores.", |
| "cite_spans": [ |
| { |
| "start": 4, |
| "end": 29, |
| "text": "(Bagga and Baldwin, 1998)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metrics", |
| "sec_num": "5.3" |
| }, |
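The per-mention computation can be sketched as follows (our own illustration, assuming every mention appears in the key; a mention missing from the response is treated as a singleton):

```python
def b_cubed(key, response):
    """B3: per-mention precision/recall from the overlap of the mention's
    key (true) and response (system) clusters, averaged over mentions."""
    mentions = [m for c in key for m in c]
    p_sum = r_sum = 0.0
    for m in mentions:
        k = next(c for c in key if m in c)
        r = next((c for c in response if m in c), {m})
        overlap = len(k & r)
        p_sum += overlap / len(r)   # divide by the system cluster size
        r_sum += overlap / len(k)   # divide by the true cluster size
    n = len(mentions)
    return p_sum / n, r_sum / n     # (precision, recall)
```

Unlike MUC, B3 rewards singleton entities and penalizes over-merging proportionally to cluster sizes.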
| { |
| "text": "CEAF (Luo, 2005) scores are obtained by computing the best one-to-one mapping between the system/true partitions, which is equivalent to finding the best optimal alignment in the bipartite graph formed out of these partitions. We use the \u03c6 4 similarity function from (Luo, 2005) .", |
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 16, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 267, |
| "end": 278, |
| "text": "(Luo, 2005)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metrics", |
| "sec_num": "5.3" |
| }, |
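A sketch of entity-based CEAF with the φ4 similarity (our own illustration: it brute-forces the one-to-one alignment with `itertools.permutations`, which is fine for a toy example; Luo (2005) uses the Kuhn-Munkres algorithm for the optimal bipartite matching):

```python
from itertools import permutations

def phi4(k, r):
    """Luo's phi4 similarity between a key and a response cluster."""
    return 2 * len(k & r) / (len(k) + len(r))

def ceaf(key, response):
    """Best one-to-one alignment of clusters under phi4, by brute force."""
    small, large = sorted([key, response], key=len)
    m = len(small)
    best = max((sum(phi4(a, b) for a, b in zip(small, sel))
                for sel in permutations(large, m)), default=0.0)
    # Total similarity of the best alignment, normalized two ways.
    return best / len(response), best / len(key)   # (precision, recall)
```

Because both precision and recall normalize the same alignment score, CEAF penalizes both splitting and merging entities.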
| { |
| "text": "These metrics were recently used in the CoNLL-2011 and -2012 Shared Tasks. In addition, these campaigns use an unweighted average over the F1 scores given by the three metrics. Following common practice, we use micro-averaging when reporting our scores for entire datasets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation metrics", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "The results obtained by the system are reported in table 2. The original sampling for the single model associated to Closest-First and Best-First decoder are referred to as SOON and NGCARDIE.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The P/R/F1 pairwise scores before decoding are given in table 1. BEST HIERARCHY obtains a strong improvement in F1 (+15), a better precision and a less significant diminution of recall compared to GRAMTYPE and RIGHT-TYPE. Despite the use of greedy decoders, we observe a large positive effect of pair separation in the pairwise models on the outputs. On the mean score, the use of distinct models versus a single model yields F1 increases from 6.4 up to 8.3 depending on the decoder. Irrespective of the decoder being used, GRAMTYPE always outperforms RIGHT-TYPE and single model and is always outperformed by BEST HIERARCHY model. Interestingly, we see that the increment in pairwise and global score are not proportional: for instance, the strong improvement of F1 between RIGHT-TYPE and GRAMTYPE results in a small amelioration of the global score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Depending on the document category, we found some variations as to which hierarchy was learned in each setting, but we noticed that parameters starting with right and left gramtypes often produced quite good hierarchies: for instance right gramtype \u2192 left gramtype \u2192 same sentence \u2192 right named entity type.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We observed that product-hierarchies did not performed well without cutting (especially when using longer sequences of indicators, because of data sparsity) and could obtain scores lower than the single model. Hopefully, after cutting them the results always became better as the resulting hierarchy was more balanced.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "Looking at the different metrics, we notice that overall, pair separation improves B 3 and CEAF (but not always MUC) after decoding the output: GRAMTYPE provides a better mean score than the single model, and BEST HIERARCHY gives the highest B 3 , CEAF and mean score.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The best classifier-decoder combination reaches a score of 67.19, which would place it above the mean score (66.41) of the systems that took part in the CoNLL-2012 Shared Task (gold mentions track). Except for the first at 77.22, the best performing systems have a score around 68-69. Considering the simple decoding strategy we employed, our current system sets up a strong baseline.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "In this paper, we described a method for selecting a feature space among a very large number of choices by using linearity and by combining indicators to separate the instances. We employed dynamic programming on hierarchies of indicators to compute the feature space providing the best pairwise classifications efficiently. We applied this method to optimize the pairwise model of a coreference resolution system. Using different kinds of greedy decoders, we showed a significant improvement of the system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and perspectives", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Our approach is flexible in that we can use a variety of indicators. In the future we will apply the hierarchies on finer feature spaces to make more accurate optimizations. Observing that the general method of cutting down hierarchies is not restricted to modeling mention pairs, but can be applied to problems having Boolean aspects, we aim at employing hierarchies to address other tasks in computational linguistics (e.g. anaphoricity detection or discourse and temporal relation classification wherein position information may help separating the data).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and perspectives", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In this work, we have only considered standard, heuristic linking strategies like Closest-First. So, a natural extension of this work is to combine our method for learning pairwise models with more sophisticated decoding strategies (like Bestcut or using ILP). Then we can test the impact of hierarchies with more realistic settings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and perspectives", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Finally, the method for cutting hierarchies should be compared to more general but similar methods, for instance polynomial kernels for SVM and tree-based methods (Hastie et al., 2001) . We also plan to extend our method by breaking the symmetry of our hierarchies. Instead of cutting product-hierarchies, we will employ usual techniques to build decision trees 10 and apply our cutting method on their structure. The objective is twofold: first, we will get rid of the sequence of indicators as parameter. Second, we will avoid fragmentation or overfitting (which can arise with classification trees) by deriving an optimal large margin linear model from the tree structure.", |
| "cite_spans": [ |
| { |
| "start": 163, |
| "end": 184, |
| "text": "(Hastie et al., 2001)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and perspectives", |
| "sec_num": "6" |
| }, |
| { |
| "text": "There are however no theoretical guarantees that improving pair classification will always result in overall improvements if the two modules are optimized independently.2 Sometimes, distinct sample selections are also adopted", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "The parameter is updated to obtain a margin of a least 1. It does not change if the instance is already correctly classified with such margin.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In the experiments, the classifiers use a copy of a same feature space, but not the same data, which corresponds to crossing the features with the categories of the decision tree.6 The training set is cut into two parts, for training and testing the hierarchy. We used 10-fold cross-validation in our experiments.7 True positives, false positives and false negatives.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In our experiments, cutting down the hierarchy was achieved very quickly, and the total training time was about five times longer than with a single model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "BLANC metric(Recasens and Hovy, 2011) results are not reported since they are not used to compute the CoNLL-2012 global score. However we can mention that in our experiments, using hierarchies had a positive effect similar to what was observed on B 3 and CEAF.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank the ACL 2013 anonymous reviewers for their valuable comments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Referring and accessibility", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Journal of Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "65--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Ariel. 1988. Referring and accessibility. Journal of Linguistics, pages 65-87.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Algorithms for scoring coreference chains", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bagga", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Baldwin", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "Proceedings of LREC 1998", |
| "volume": "10", |
| "issue": "", |
| "pages": "563--566", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Bagga and B. Baldwin. 1998. Algorithms for scoring coreference chains. In Proceedings of LREC 1998, pages 563-566. 10 (Bansal and Klein, 2012) show good performances of de- cision trees on coreference resolution.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Coreference semantics from web features", |
| "authors": [ |
| { |
| "first": "Mohit", |
| "middle": [], |
| "last": "Bansal", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Long Papers", |
| "volume": "1", |
| "issue": "", |
| "pages": "389--398", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mohit Bansal and Dan Klein. 2012. Coreference se- mantics from web features. In Proceedings of the 50th Annual Meeting of the Association for Compu- tational Linguistics: Long Papers-Volume 1, pages 389-398. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Understanding the value of features for coreference resolution", |
| "authors": [ |
| { |
| "first": "Eric", |
| "middle": [], |
| "last": "Bengston", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Roth", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP 2008", |
| "volume": "", |
| "issue": "", |
| "pages": "294--303", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Eric Bengston and Dan Roth. 2008. Understanding the value of features for coreference resolution. In Proceedings of EMNLP 2008, pages 294-303, Hon- olulu, Hawaii.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "End-to-end coreference resolution via hypergraph partitioning", |
| "authors": [ |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Cai", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "143--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jie Cai and Michael Strube. 2010. End-to-end coref- erence resolution via hypergraph partitioning. In COLING, pages 143-151.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A unified event coreference resolution by integrating multiple resolvers", |
| "authors": [ |
| { |
| "first": "Bin", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Su", |
| "suffix": "" |
| }, |
| { |
| "first": "Chew Lim", |
| "middle": [], |
| "last": "Sinno Jialin Pan", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tan", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of 5th International Joint Conference on Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "102--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bin Chen, Jian Su, Sinno Jialin Pan, and Chew Lim Tan. 2011. A unified event coreference resolu- tion by integrating multiple resolvers. In Proceed- ings of 5th International Joint Conference on Nat- ural Language Processing, pages 102-110, Chiang Mai, Thailand, November. Asian Federation of Nat- ural Language Processing.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Shai Shalev-Shwartz, and Yoram Singer", |
| "authors": [ |
| { |
| "first": "Koby", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ofer", |
| "middle": [], |
| "last": "Dekel", |
| "suffix": "" |
| }, |
| { |
| "first": "Joseph", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| },
| {
| "first": "Shai",
| "middle": [],
| "last": "Shalev-Shwartz",
| "suffix": ""
| },
| {
| "first": "Yoram",
| "middle": [],
| "last": "Singer",
| "suffix": ""
| }
| ], |
| "year": 2006, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "7", |
| "issue": "", |
| "pages": "551--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev-Shwartz, and Yoram Singer. 2006. Online passive-aggressive algorithms. Journal of Machine Learning Research, 7:551-585.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Specialized models and ranking for coreference resolution", |
| "authors": [ |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP 2008", |
| "volume": "", |
| "issue": "", |
| "pages": "660--669", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascal Denis and Jason Baldridge. 2008. Specialized models and ranking for coreference resolution. In Proceedings of EMNLP 2008, pages 660-669, Hon- olulu, Hawaii.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Global joint models for coreference resolution and named entity classification", |
| "authors": [ |
| { |
| "first": "Pascal", |
| "middle": [], |
| "last": "Denis", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Baldridge", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Procesamiento del Lenguaje Natural", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Pascal Denis and Jason Baldridge. 2009. Global joint models for coreference resolution and named entity classification. Procesamiento del Lenguaje Natural, 43.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The elements of statistical learning: data mining, inference, and prediction: with 200 fullcolor illustrations", |
| "authors": [ |
| { |
| "first": "Trevor", |
| "middle": [], |
| "last": "Hastie", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [], |
| "last": "Tibshirani", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "H" |
| ], |
| "last": "Friedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Trevor Hastie, Robert Tibshirani, and J. H. Friedman. 2001. The elements of statistical learning: data mining, inference, and prediction: with 200 full- color illustrations. New York: Springer-Verlag.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "The (non)utility of predicate-argument frequencies for pronoun interpretation", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kehler", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Appelt", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Taylor", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Simma", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Kehler, D. Appelt, L. Taylor, and A. Simma. 2004. The (non)utility of predicate-argument frequencies for pronoun interpretation. In Proceedings of HLT- NAACL 2004.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Enforcing coherence on coreference sets", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Klenner", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of RANLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Klenner. 2007. Enforcing coherence on corefer- ence sets. In Proceedings of RANLP 2007.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "On coreference resolution performance metrics", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "Luo", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of HLT-NAACL 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "X. Luo. 2005. On coreference resolution performance metrics. In Proceedings of HLT-NAACL 2005, pages 25-32.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Using decision trees for coreference resolution", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "F" |
| ], |
| "last": "Mccarthy", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "G" |
| ], |
| "last": "Lehnert", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "IJCAI", |
| "volume": "", |
| "issue": "", |
| "pages": "1050--1055", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. F. McCarthy and W. G. Lehnert. 1995. Using de- cision trees for coreference resolution. In IJCAI, pages 1050-1055.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Coreference for NLP applications", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Morton", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of ACL 2000", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Morton. 2000. Coreference for NLP applications. In Proceedings of ACL 2000, Hong Kong.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Improving machine learning approaches to coreference resolution", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ACL 2002", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Ng and C. Cardie. 2002. Improving machine learn- ing approaches to coreference resolution. In Pro- ceedings of ACL 2002, pages 104-111.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Supervised ranking for pronoun resolution: Some recent improvements", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of AAAI 2005", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Ng. 2005. Supervised ranking for pronoun resolu- tion: Some recent improvements. In Proceedings of AAAI 2005.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Bestcut: A graph algorithm for coreference resolution", |
| "authors": [ |
| { |
| "first": "Cristina", |
| "middle": [], |
| "last": "Nicolae", |
| "suffix": "" |
| }, |
| { |
| "first": "Gabriel", |
| "middle": [], |
| "last": "Nicolae", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "275--283", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Cristina Nicolae and Gabriel Nicolae. 2006. Best- cut: A graph algorithm for coreference resolution. In EMNLP, pages 275-283.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution", |
| "authors": [ |
| { |
| "first": "Paolo", |
| "middle": [], |
| "last": "Simone", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Ponzetto", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Strube", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the HLT 2006", |
| "volume": "", |
| "issue": "", |
| "pages": "192--199", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Simone Paolo Ponzetto and Michael Strube. 2006. Exploiting semantic role labeling, WordNet and Wikipedia for coreference resolution. In Proceed- ings of the HLT 2006, pages 192-199, New York City, N.Y.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Conll-2012 shared task: Modeling multilingual unrestricted coreference in ontonotes", |
| "authors": [ |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Sameer Pradhan", |
| "suffix": "" |
| }, |
| { |
| "first": "Nianwen", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuchen", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Joint Conference on EMNLP and CoNLL -Shared Task", |
| "volume": "", |
| "issue": "", |
| "pages": "1--40", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sameer Pradhan, Alessandro Moschitti, Nianwen Xue, Olga Uryupina, and Yuchen Zhang. 2012. Conll- 2012 shared task: Modeling multilingual unre- stricted coreference in ontonotes. In Joint Confer- ence on EMNLP and CoNLL -Shared Task, pages 1-40, Jeju Island, Korea, July. Association for Com- putational Linguistics.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Narrowing the modeling gap: a cluster-ranking approach to coreference resolution", |
| "authors": [ |
| { |
| "first": "Altaf", |
| "middle": [], |
| "last": "Rahman", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "J. Artif. Int. Res", |
| "volume": "40", |
| "issue": "1", |
| "pages": "469--521", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Altaf Rahman and Vincent Ng. 2011. Narrowing the modeling gap: a cluster-ranking approach to coref- erence resolution. J. Artif. Int. Res., 40(1):469-521.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Blanc: Implementing the rand index for coreference evaluation", |
| "authors": [ |
| { |
| "first": "Marta", |
| "middle": [], |
| "last": "Recasens", |
| "suffix": "" |
| }, |
| { |
| "first": "Eduard", |
| "middle": [], |
| "last": "Hovy", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Natural Language Engineering", |
| "volume": "17", |
| "issue": "", |
| "pages": "485--510", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Recasens and Hovy. 2011. BLANC: Implementing the Rand index for coreference evaluation. Natural Language Engineering, 17:485-510, 9.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "A machine learning approach to coreference resolution of noun phrases", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "M" |
| ], |
| "last": "Soon", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [ |
| "T" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Lim", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Computational Linguistics", |
| "volume": "27", |
| "issue": "4", |
| "pages": "521--544", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. M. Soon, H. T. Ng, and D. Lim. 2001. A machine learning approach to coreference resolution of noun phrases. Computational Linguistics, 27(4):521-544.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Disambiguation and filtering methods in using web knowledge for coreference resolution", |
| "authors": [ |
| { |
| "first": "Olga", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Claudio", |
| "middle": [], |
| "last": "Giuliano", |
| "suffix": "" |
| }, |
| { |
| "first": "Kateryna", |
| "middle": [], |
| "last": "Tymoshenko", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "FLAIRS Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Olga Uryupina, Massimo Poesio, Claudio Giuliano, and Kateryna Tymoshenko. 2011. Disambiguation and filtering methods in using web knowledge for coreference resolution. In FLAIRS Conference.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Linguistically motivated sample selection for coreference resolution", |
| "authors": [ |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Uryupina", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of DAARC 2004", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "O. Uryupina. 2004. Linguistically motivated sample selection for coreference resolution. In Proceedings of DAARC 2004, Furnas.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Coreference systems based on kernels methods", |
| "authors": [ |
| { |
| "first": "Yannick", |
| "middle": [], |
| "last": "Versley", |
| "suffix": "" |
| }, |
| { |
| "first": "Alessandro", |
| "middle": [], |
| "last": "Moschitti", |
| "suffix": "" |
| }, |
| { |
| "first": "Massimo", |
| "middle": [], |
| "last": "Poesio", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiaofeng", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "961--968", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yannick Versley, Alessandro Moschitti, Massimo Poesio, and Xiaofeng Yang. 2008. Coreference systems based on kernels methods. In COLING, pages 961-968.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "A model-theoretic coreference scoring scheme", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Vilain", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Burger", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Aberdeen", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Connolly", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Hirschman", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proceedings of the 6th Message Understanding Conference (MUC-6)", |
| "volume": "", |
| "issue": "", |
| "pages": "45--52", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Vilain, J. Burger, J. Aberdeen, D. Connolly, and L. Hirschman. 1995. A model-theoretic coreference scoring scheme. In Proceedings of the 6th Message Understanding Conference (MUC-6), pages 45-52, San Mateo, CA. Morgan Kaufmann.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "uris": null, |
| "text": "for node in reversed list do: if node.children = \u2205 then \u2026 if sum-score(node.children) \u2026", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "num": null, |
| "uris": null, |
| "text": "Cutting down the hierarchy reduces the feature space", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "", |
| "type_str": "table", |
| "content": "<table><tr><td/><td colspan=\"3\">MUC</td><td colspan=\"3\">B 3</td><td colspan=\"3\">CEAF</td><td/></tr><tr><td>Closest-First</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>Mean</td></tr><tr><td>NGCARDIE</td><td colspan=\"10\">79.49 93.72 86.02 26.23 89.43 40.56 49.74 19.92 28.44 51.67</td></tr><tr><td>SINGLE MODEL</td><td colspan=\"10\">78.95 75.15 77.0 51.88 68.42 59.01 37.79 43.89 40.61 58.87</td></tr><tr><td>RIGHT-TYPE</td><td colspan=\"10\">79.36 67.57 72.99 69.43 56.78 62.47 41.17 61.66 49.37 61.61</td></tr><tr><td>GRAMTYPE</td><td colspan=\"10\">80.5 71.12 75.52 66.39 61.04 63.6 43.11 59.93 50.15 63.09</td></tr><tr><td>BEST HIERARCHY</td><td colspan=\"10\">83.23 73.72 78.19 73.5 67.09 70.15 47.3 60.89 53.24 67.19</td></tr><tr><td>Best-First</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>Mean</td></tr><tr><td>NGCARDIE</td><td colspan=\"10\">81.02 93.82 86.95 23.33 93.92 37.37 40.31 18.97 25.8 50.04</td></tr><tr><td>SINGLE MODEL</td><td colspan=\"10\">79.22 73.75 76.39 40.93 75.48 53.08 30.52 37.59 33.69 54.39</td></tr><tr><td>RIGHT-TYPE</td><td colspan=\"10\">77.13 65.09 70.60 48.11 66.21 55.73 31.07 47.30 37.50 54.61</td></tr><tr><td>GRAMTYPE</td><td colspan=\"10\">77.21 65.89 71.1 49.77 67.19 57.18 32.08 47.83 38.41 55.56</td></tr><tr><td>BEST HIERARCHY</td><td colspan=\"10\">78.11 69.82 73.73 53.62 70.86 61.05 35.04 46.67 40.03 58.27</td></tr><tr><td>Aggressive-Merge</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>P</td><td>R</td><td>F1</td><td>Mean</td></tr><tr><td>SINGLE MODEL</td><td colspan=\"10\">83.15 88.65 85.81 35.67 88.18 50.79 36.3 28.27 31.78 56.13</td></tr><tr><td>RIGHT-TYPE</td><td colspan=\"10\">83.48 89.79 86.52 36.82 88.08 51.93 45.30 33.84 38.74 59.07</td></tr><tr><td>GRAMTYPE</td><td colspan=\"10\">83.12 84.27 83.69 44.73 81.58 57.78 45.02 42.94 43.95 61.81</td></tr><tr><td>BEST HIERARCHY</td><td colspan=\"10\">83.26 85.2 84.22 45.65 82.48 58.77 46.28 43.13 44.65 62.55</td></tr></table>", |
| "html": null, |
| "num": null |
| }, |
| "TABREF2": { |
| "text": "CoNLL-2012 test (gold mentions): Closest-First, Best-First and Aggressive-Merge decoders.", |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null |
| } |
| } |
| } |
| } |