{
"paper_id": "N03-1028",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:07:25.272407Z"
},
"title": "Shallow Parsing with Conditional Random Fields",
"authors": [
{
"first": "Sha",
"middle": [],
"last": "Fei",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"addrLine": "200 South 33rd Street",
"postCode": "19104",
"settlement": "Philadelphia",
"region": "PA"
}
},
"email": ""
},
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Pennsylvania",
"location": {
"addrLine": "200 South 33rd Street",
"postCode": "19104",
"settlement": "Philadelphia",
"region": "PA"
}
},
"email": "feisha|pereira@cis.upenn.edu"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluation datasets and extensive comparison among methods. We show here how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model. Improved training methods based on modern optimization algorithms were critical in achieving these results. We present extensive comparisons between models and training methods that confirm and strengthen previous results on shallow parsing and training methods for maximum-entropy models. 1 Ramshaw and Marcus (1995) used transformation-based learning (Brill, 1995), which for the present purposes can be tought of as a classification-based method.",
"pdf_parse": {
"paper_id": "N03-1028",
"_pdf_hash": "",
"abstract": [
{
"text": "Conditional random fields for sequence labeling offer advantages over both generative models like HMMs and classifiers applied at each sequence position. Among sequence labeling tasks in language processing, shallow parsing has received much attention, with the development of standard evaluation datasets and extensive comparison among methods. We show here how to train a conditional random field to achieve performance as good as any reported base noun-phrase chunking method on the CoNLL task, and better than any reported single model. Improved training methods based on modern optimization algorithms were critical in achieving these results. We present extensive comparisons between models and training methods that confirm and strengthen previous results on shallow parsing and training methods for maximum-entropy models. 1 Ramshaw and Marcus (1995) used transformation-based learning (Brill, 1995), which for the present purposes can be tought of as a classification-based method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Sequence analysis tasks in language and biology are often described as mappings from input sequences to sequences of labels encoding the analysis. In language processing, examples of such tasks include part-of-speech tagging, named-entity recognition, and the task we shall focus on here, shallow parsing. Shallow parsing identifies the non-recursive cores of various phrase types in text, possibly as a precursor to full parsing or information extraction (Abney, 1991) . The paradigmatic shallowparsing problem is NP chunking, which finds the nonrecursive cores of noun phrases called base NPs. The pioneering work of Ramshaw and Marcus (1995) introduced NP chunking as a machine-learning problem, with standard datasets and evaluation metrics. The task was extended to additional phrase types for the CoNLL-2000 shared task (Tjong Kim Sang and Buchholz, 2000) , which is now the standard evaluation task for shallow parsing.",
"cite_spans": [
{
"start": 456,
"end": 469,
"text": "(Abney, 1991)",
"ref_id": "BIBREF0"
},
{
"start": 619,
"end": 644,
"text": "Ramshaw and Marcus (1995)",
"ref_id": "BIBREF24"
},
{
"start": 837,
"end": 861,
"text": "Sang and Buchholz, 2000)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Most previous work used two main machine-learning approaches to sequence labeling. The first approach relies on k-order generative probabilistic models of paired input sequences and label sequences, for instance hidden Markov models (HMMs) Kupiec, 1992) or multilevel Markov models (Bikel et al., 1999) . The second approach views the sequence labeling problem as a sequence of classification problems, one for each of the labels in the sequence. The classification result at each position may depend on the whole input and on the previous k classifications. 1 The generative approach provides well-understood training and decoding algorithms for HMMs and more general graphical models. However, effective generative models require stringent conditional independence assumptions. For instance, it is not practical to make the label at a given position depend on a window on the input sequence as well as the surrounding labels, since the inference problem for the corresponding graphical model would be intractable. Non-independent features of the inputs, such as capitalization, suffixes, and surrounding words, are important in dealing with words unseen in training, but they are difficult to represent in generative models.",
"cite_spans": [
{
"start": 240,
"end": 253,
"text": "Kupiec, 1992)",
"ref_id": "BIBREF17"
},
{
"start": 282,
"end": 302,
"text": "(Bikel et al., 1999)",
"ref_id": "BIBREF3"
},
{
"start": 559,
"end": 560,
"text": "1",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The sequential classification approach can handle many correlated features, as demonstrated in work on maximum-entropy Ratnaparkhi, 1996) and a variety of other linear classifiers, including winnow (Punyakanok and Roth, 2001) , AdaBoost (Abney et al., 1999) , and support-vector machines (Kudo and Matsumoto, 2001 ). Furthermore, they are trained to minimize some function related to labeling error, leading to smaller error in practice if enough training data are available. In contrast, generative models are trained to maximize the joint probability of the training data, which is not as closely tied to the accuracy metrics of interest if the actual data was not generated by the model, as is always the case in practice.",
"cite_spans": [
{
"start": 119,
"end": 137,
"text": "Ratnaparkhi, 1996)",
"ref_id": "BIBREF25"
},
{
"start": 198,
"end": 225,
"text": "(Punyakanok and Roth, 2001)",
"ref_id": "BIBREF23"
},
{
"start": 237,
"end": 257,
"text": "(Abney et al., 1999)",
"ref_id": "BIBREF1"
},
{
"start": 288,
"end": 313,
"text": "(Kudo and Matsumoto, 2001",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "However, since sequential classifiers are trained to make the best local decision, unlike generative models they cannot trade off decisions at different positions against each other. In other words, sequential classifiers are myopic about the impact of their current decision on later decisions (Bottou, 1991; Lafferty et al., 2001 ). This forced the best sequential classifier systems to resort to heuristic combinations of forward-moving and backward-moving sequential classifiers (Kudo and Matsumoto, 2001 ).",
"cite_spans": [
{
"start": 295,
"end": 309,
"text": "(Bottou, 1991;",
"ref_id": "BIBREF4"
},
{
"start": 310,
"end": 331,
"text": "Lafferty et al., 2001",
"ref_id": "BIBREF18"
},
{
"start": 483,
"end": 508,
"text": "(Kudo and Matsumoto, 2001",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Conditional random fields (CRFs) bring together the best of generative and classification models. Like classification models, they can accommodate many statistically correlated features of the inputs, and they are trained discriminatively. But like generative models, they can trade off decisions at different sequence positions to obtain a globally optimal labeling. Lafferty et al. (2001) showed that CRFs beat related classification models as well as HMMs on synthetic data and on a part-of-speech tagging task.",
"cite_spans": [
{
"start": 368,
"end": 390,
"text": "Lafferty et al. (2001)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the present work, we show that CRFs beat all reported single-model NP chunking results on the standard evaluation dataset, and are statistically indistinguishable from the previous best performer, a voting arrangement of 24 forward-and backward-looking support-vector classifiers (Kudo and Matsumoto, 2001 ). To obtain these results, we had to abandon the original iterative scaling CRF training algorithm for convex optimization algorithms with better convergence properties. We provide detailed comparisons between training methods.",
"cite_spans": [
{
"start": 283,
"end": 308,
"text": "(Kudo and Matsumoto, 2001",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The generalized perceptron proposed by Collins (2002) is closely related to CRFs, but the best CRF training methods seem to have a slight edge over the generalized perceptron.",
"cite_spans": [
{
"start": 39,
"end": 53,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus here on conditional random fields on sequences, although the notion can be used more generally (Lafferty et al., 2001; Taskar et al., 2002) . Such CRFs define conditional probability distributions p(Y |X) of label sequences given input sequences. We assume that the random variable sequences X and Y have the same length, and use x = x 1 \u2022 \u2022 \u2022 x n and y = y 1 \u2022 \u2022 \u2022 y n for the generic input sequence and label sequence, respectively.",
"cite_spans": [
{
"start": 104,
"end": 127,
"text": "(Lafferty et al., 2001;",
"ref_id": "BIBREF18"
},
{
"start": 128,
"end": 148,
"text": "Taskar et al., 2002)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "A CRF on (X, Y ) is specified by a vector f of local features and a corresponding weight vector \u03bb. Each local feature is either a state feature s(y, x, i) or a transition feature t(y, y , x, i), where y, y are labels, x an input sequence, and i an input position. To make the notation more uniform, we also write",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "s(y, y , x, i) = s(y , x, i) s(y, x, i) = s(y i , x, i) t(y, x, i) = t(y i\u22121 , y i , x, i) i > 1 0 i = 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "for any state feature s and transition feature t. Typically, features depend on the inputs around the given position, although they may also depend on global properties of the input, or be non-zero only at some positions, for instance features that pick out the first or last labels. The CRF's global feature vector for input sequence x and label sequence y is given by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "F (y, x) = i f (y, x, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "where i ranges over input positions. The conditional probability distribution defined by the CRF is then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "p \u03bb (Y |X) = exp \u03bb \u2022 F (Y , X) Z \u03bb (X)",
"eq_num": "(1)"
}
],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "where",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "Z \u03bb (x) = y exp \u03bb \u2022 F (y, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "Any positive conditional distribution p(Y |X) that obeys the Markov property",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "p(Y i |{Y j } j =i , X) = p(Y i |Y i\u22121 , Y i+1 , X)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "can be written in the form (1) for appropriate choice of feature functions and weight vector (Hammersley and Clifford, 1971) . The most probable label sequence for input sequence",
"cite_spans": [
{
"start": 93,
"end": 124,
"text": "(Hammersley and Clifford, 1971)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "x is\u0177 = arg max y p \u03bb (y|x) = arg max y \u03bb \u2022 F (y, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "because Z \u03bb (x) does not depend on y. F (y, x) decomposes into a sum of terms for consecutive pairs of labels, so the most likely y can be found with the Viterbi algorithm.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "We train a CRF by maximizing the log-likelihood of a given training set",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "T = {(x k , y k )} N",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "k=1 , which we assume fixed for the rest of this section:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "L \u03bb = k log p \u03bb (y k |x k ) = k [\u03bb \u2022 F (y k , x k ) \u2212 log Z \u03bb (x k )]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "To perform this optimization, we seek the zero of the gradient",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "\u2207L \u03bb = k F (y k , x k ) \u2212 E p \u03bb (Y |x k ) F (Y , x k ) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "In words, the maximum of the training data likelihood is reached when the empirical average of the global feature vector equals its model expectation. The expectation E p \u03bb (Y |x) F (Y , x) can be computed efficiently using a variant of the forward-backward algorithm. For a given x, define the transition matrix for position i as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "M i [y, y ] = exp \u03bb \u2022 f (y, y , x, i) Let f be any local feature, f i [y, y ] = f (y, y , x, i), F (y, x) = i f (y i\u22121 , y i , x, i)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": ", and let * denote component-wise matrix product. Then",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "E p \u03bb (Y |x) F (Y , x) = y p \u03bb (y|x)F (y, x) = i \u03b1 i\u22121 (f i * M i )\u03b2 i Z \u03bb (x) Z \u03bb (x) = \u03b1 n \u2022 1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "where \u03b1 i and \u03b2 i the forward and backward state-cost vectors defined by",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "\u03b1 i = \u03b1 i\u22121 M i 0 < i \u2264 n 1 i = 0 \u03b2 i = M i+1 \u03b2 i+1 1 \u2264 i < n 1 i = n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "Therefore, we can use a forward pass to compute the \u03b1 i and a backward bass to compute the \u03b2 i and accumulate the feature expectations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "To avoid overfitting, we penalize the likelihood with a spherical Gaussian weight prior (Chen and Rosenfeld, 1999) : Lafferty et al. (2001) used iterative scaling algorithms for CRF training, following earlier work on maximumentropy models for natural language (Berger et al., 1996; Della Pietra et al., 1997) . Those methods are very simple and guaranteed to converge, but as Minka (2001) and Malouf (2002) showed for classification, their convergence is much slower than that of general-purpose convex optimization algorithms when many correlated features are involved. Concurrently with the present work, Wallach (2002) tested conjugate gradient and second-order methods for CRF training, showing significant training speed advantages over iterative scaling on a small shallow parsing problem. Our work shows that preconditioned conjugate-gradient (CG) (Shewchuk, 1994) or limited-memory quasi-Newton (L-BFGS) (Nocedal and Wright, 1999) perform comparably on very large problems (around 3.8 million features). We compare those algorithms to generalized iterative scaling (GIS) (Darroch and Ratcliff, 1972), non-preconditioned CG, and voted perceptron training (Collins, 2002) . All algorithms except voted perceptron maximize the penalized loglikelihood: \u03bb * = arg max \u03bb L \u03bb . However, for ease of exposition, this discussion of training methods uses the unpenalized log-likelihood L \u03bb .",
"cite_spans": [
{
"start": 88,
"end": 114,
"text": "(Chen and Rosenfeld, 1999)",
"ref_id": "BIBREF6"
},
{
"start": 117,
"end": 139,
"text": "Lafferty et al. (2001)",
"ref_id": "BIBREF18"
},
{
"start": 261,
"end": 282,
"text": "(Berger et al., 1996;",
"ref_id": "BIBREF2"
},
{
"start": 283,
"end": 309,
"text": "Della Pietra et al., 1997)",
"ref_id": "BIBREF10"
},
{
"start": 377,
"end": 389,
"text": "Minka (2001)",
"ref_id": "BIBREF21"
},
{
"start": 394,
"end": 407,
"text": "Malouf (2002)",
"ref_id": "BIBREF19"
},
{
"start": 608,
"end": 622,
"text": "Wallach (2002)",
"ref_id": "BIBREF33"
},
{
"start": 856,
"end": 872,
"text": "(Shewchuk, 1994)",
"ref_id": "BIBREF30"
},
{
"start": 913,
"end": 939,
"text": "(Nocedal and Wright, 1999)",
"ref_id": "BIBREF22"
},
{
"start": 1163,
"end": 1178,
"text": "(Collins, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "L \u03bb = k [\u03bb \u2022 F (y k , x k ) \u2212 log Z \u03bb (x k )] \u2212 \u03bb 2 2\u03c3 2 + const with gradient \u2207L \u03bb = k F (y k , x k ) \u2212 E p \u03bb (Y |x k ) F (Y , x k ) \u2212 \u03bb \u03c3 2 3 Training Methods",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conditional Random Fields",
"sec_num": "2"
},
{
"text": "Conjugate-gradient (CG) methods have been shown to be very effective in linear and non-linear optimization (Shewchuk, 1994) . Instead of searching along the gradient, conjugate gradient searches along a carefully chosen linear combination of the gradient and the previous search direction. CG methods can be accelerated by linearly transforming the variables with preconditioner (Nocedal and Wright, 1999; Shewchuk, 1994) . The purpose of the preconditioner is to improve the condition number of the quadratic form that locally approximates the objective function, so the inverse of Hessian is reasonable preconditioner. However, this is not applicable to CRFs for two reasons. First, the size of the Hessian is dim(\u03bb) 2 , leading to unacceptable space and time requirements for the inversion. In such situations, it is common to use instead the (inverse of) the diagonal of the Hessian. However in our case the Hessian has the form",
"cite_spans": [
{
"start": 107,
"end": 123,
"text": "(Shewchuk, 1994)",
"ref_id": "BIBREF30"
},
{
"start": 379,
"end": 405,
"text": "(Nocedal and Wright, 1999;",
"ref_id": "BIBREF22"
},
{
"start": 406,
"end": 421,
"text": "Shewchuk, 1994)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preconditioned Conjugate Gradient",
"sec_num": "3.1"
},
{
"text": "H \u03bb def = \u2207 2 L \u03bb = \u2212 k {E [F (Y , x k ) \u00d7 F (Y , x k )] \u2212EF (Y , x k ) \u00d7 EF (Y , x k )}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preconditioned Conjugate Gradient",
"sec_num": "3.1"
},
{
"text": "where the expectations are taken with respect to p \u03bb (Y |x k ). Therefore, every Hessian element, including the diagonal ones, involve the expectation of a product of global feature values. Unfortunately, computing those expectations is quadratic on sequence length, as the forward-backward algorithm can only compute expectations of quantities that are additive along label sequences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preconditioned Conjugate Gradient",
"sec_num": "3.1"
},
{
"text": "We solve both problems by discarding the off-diagonal terms and approximating expectation of the square of a global feature by the expectation of the sum of squares of the corresponding local features at each position. The ap-proximated diagonal term H f for feature f has the form",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preconditioned Conjugate Gradient",
"sec_num": "3.1"
},
{
"text": "H f = Ef (Y , x k ) 2 \u2212 i \uf8eb \uf8ed y,y M i [y, y ] Z \u03bb (x) f (Y , x k ) \uf8f6 \uf8f8 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preconditioned Conjugate Gradient",
"sec_num": "3.1"
},
{
"text": "If this approximation is semidefinite, which is trivial to check, its inverse is an excellent preconditioner for early iterations of CG training. However, when the model is close to the maximum, the approximation becomes unstable, which is not surprising since it is based on feature independence assumptions that become invalid as the weights of interaction features move away from zero. Therefore, we disable the preconditioner after a certain number of iterations, determined from held-out data. We call this strategy mixed CG training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preconditioned Conjugate Gradient",
"sec_num": "3.1"
},
{
"text": "Newton methods for nonlinear optimization use secondorder (curvature) information to find search directions.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Limited-Memory Quasi-Newton",
"sec_num": "3.2"
},
{
"text": "As discussed in the previous section, it is not practical to obtain exact curvature information for CRF training. Limited-memory BFGS (L-BFGS) is a second-order method that estimates the curvature numerically from previous gradients and updates, avoiding the need for an exact Hessian inverse computation. Compared with preconditioned CG, L-BFGS can also handle large-scale problems but does not require a specialized Hessian approximations. An earlier study indicates that L-BFGS performs well in maximum-entropy classifier training (Malouf, 2002) . There is no theoretical guidance on how much information from previous steps we should keep to obtain sufficiently accurate curvature estimates. In our experiments, storing 3 to 10 pairs of previous gradients and updates worked well, so the extra memory required over preconditioned CG was modest. A more detailed description of this method can be found elsewhere (Nocedal and Wright, 1999) .",
"cite_spans": [
{
"start": 534,
"end": 548,
"text": "(Malouf, 2002)",
"ref_id": "BIBREF19"
},
{
"start": 915,
"end": 941,
"text": "(Nocedal and Wright, 1999)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Limited-Memory Quasi-Newton",
"sec_num": "3.2"
},
{
"text": "Unlike other methods discussed so far, voted perceptron training (Collins, 2002) attempts to minimize the difference between the global feature vector for a training instance and the same feature vector for the best-scoring labeling of that instance according to the current model. More precisely, for each training instance the method computes a weight update",
"cite_spans": [
{
"start": 65,
"end": 80,
"text": "(Collins, 2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Voted Perceptron",
"sec_num": "3.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03bb t+1 = \u03bb t + F (y k , x k ) \u2212 F (\u0177 k , x k )",
"eq_num": "(3)"
}
],
"section": "Voted Perceptron",
"sec_num": "3.3"
},
{
"text": "in which\u0177 k is the Viterbi pat\u0125",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voted Perceptron",
"sec_num": "3.3"
},
{
"text": "y k = arg max y \u03bb t \u2022 F (y , x k )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Voted Perceptron",
"sec_num": "3.3"
},
{
"text": "Like the familiar perceptron algorithm, this algorithm repeatedly sweeps over the training instances, updating the weight vector as it considers each instance. Instead of taking just the final weight vector, the voted perceptron algorithm takes the average of the \u03bb t . Collins (2002) reported and we confirmed that this averaging reduces overfitting considerably.",
"cite_spans": [
{
"start": 270,
"end": 284,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Voted Perceptron",
"sec_num": "3.3"
},
{
"text": "4 Shallow Parsing Figure 1 shows the base NPs in an example sentence. Following Ramshaw and Marcus (1995) , the input to the NP chunker consists of the words in a sentence annotated automatically with part-of-speech (POS) tags. The chunker's task is to label each word with a label indicating whether the word is outside a chunk (O), starts a chunk (B), or continues a chunk (I). For example, the tokens in first line of Figure 1 would be labeled BIIBIIOBOBIIO.",
"cite_spans": [
{
"start": 80,
"end": 105,
"text": "Ramshaw and Marcus (1995)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 18,
"end": 26,
"text": "Figure 1",
"ref_id": null
},
{
"start": 421,
"end": 429,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Voted Perceptron",
"sec_num": "3.3"
},
{
"text": "NP chunking results have been reported on two slightly different data sets: the original RM data set of Ramshaw and Marcus (1995) , and the modified CoNLL-2000 version of Tjong Kim Sang and Buchholz (2000) . Although the chunk tags in the RM and CoNLL-2000 are somewhat different, we found no significant accuracy differences between models trained on these two data sets. Therefore, all our results are reported on the CoNLL-2000 data set. We also used a development test set, provided by Michael Collins, derived from WSJ section 21 tagged with the Brill (1995) POS tagger.",
"cite_spans": [
{
"start": 104,
"end": 129,
"text": "Ramshaw and Marcus (1995)",
"ref_id": "BIBREF24"
},
{
"start": 181,
"end": 205,
"text": "Sang and Buchholz (2000)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Preparation",
"sec_num": "4.1"
},
{
"text": "Our chunking CRFs have a second-order Markov dependency between chunk tags. This is easily encoded by making the CRF labels pairs of consecutive chunk tags. That is, the label at position i is y i = c i\u22121 c i , where c i is the chunk tag of word i, one of O, B, or I. Since B must be used to start a chunk, the label OI is impossible. In addition, successive labels are constrained:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs for Shallow Parsing",
"sec_num": "4.2"
},
{
"text": "y i\u22121 = c i\u22122 c i\u22121 , y i = c i\u22121 c i",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs for Shallow Parsing",
"sec_num": "4.2"
},
{
"text": ", and c 0 = O. These contraints on the model topology are enforced by giving appropriate features a weight of \u2212\u221e, forcing all the forbidden labelings to have zero probability. Our choice of features was mainly governed by computing power, since we do not use feature selection and all features are used in training and testing. We use the following factored representation for features",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "CRFs for Shallow Parsing",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f (y i\u22121 , y i , x, i) = p(x, i)q(y i\u22121 , y i )",
"eq_num": "(4)"
}
],
"section": "CRFs for Shallow Parsing",
"sec_num": "4.2"
},
{
"text": "where p(x, i) is a predicate on the input sequence x and current position i and q(y i\u22121 , y i ) is a predicate on pairs of labels. For instance, p(x, i) might be \"word at position i is the\" or \"the POS tags at positions i \u2212 1, i are wi\u22121 = w c(yi) = c wi+1 = w wi\u22122 = w wi+2 = w wi\u22121 = w , wi = w wi+1 = w , wi = w ti = t ti\u22121 = t ti+1 = t ti\u22122 = t ti+2 = t ti\u22121 = t , ti = t ti\u22122 = t , ti\u22121 = t ti = t , ti+1 = t ti+1 = t , ti+2 = t ti\u22122 = t , ti\u22121 = t , ti = t ti\u22121 = t , ti = t , ti+1 = t ti = t , ti+1 = t , ti+2 = t Table 1 : Shallow parsing features DT, NN.\" Because the label set is finite, such a factoring of f (y i\u22121 , y i , x, i) is always possible, and it allows each input predicate to be evaluated just once for many features that use it, making it possible to work with millions of features on large training sets. Table 1 summarizes the feature set. For a given position i, w i is the word, t i its POS tag, and y i its label. For any label y = c c, c(y) = c is the corresponding chunk tag. For example, c(OB) = B. The use of chunk tags as well as labels provides a form of backoff from the very small feature counts that may arise in a secondorder model, while allowing significant associations between tag pairs and input predicates to be modeled. To save time in some of our experiments, we used only the 820,000 features that are supported in the CoNLL training set, that is, the features that are on at least once. For our highest F score, we used the complete feature set, around 3.8 million in the CoNLL training set, which contains all the features whose predicate is on at least once in the training set. The complete feature set may in principle perform better because it can place negative weights on transitions that should be discouraged if a given predicate is on.",
"cite_spans": [],
"ref_spans": [
{
"start": 521,
"end": 528,
"text": "Table 1",
"ref_id": null
},
{
"start": 830,
"end": 837,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "CRFs for Shallow Parsing",
"sec_num": "4.2"
},
{
"text": "As discussed previously, we need a Gaussian weight prior to reduce overfitting. We also need to choose the number of training iterations since we found that the best F score is attained while the log-likelihood is still improving. The reasons for this are not clear, but the Gaussian prior may not be enough to keep the optimization from making weight adjustments that slighly improve training log-likelihood but cause large F score fluctuations. We used the development test set mentioned in Section 4.1 to set the prior and the number of iterations.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Parameter Tuning",
"sec_num": "4.3"
},
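In symbols, the objective being tuned is the penalized log-likelihood; the notation below is ours (with \sigma the standard deviation of the Gaussian prior), consistent with the L_\lambda used in the convergence discussion:

```latex
L_\lambda \;=\; \sum_{k} \log p_\lambda\!\left(y^{(k)} \mid x^{(k)}\right) \;-\; \frac{\lVert \lambda \rVert^{2}}{2\sigma^{2}}
```

The second term shrinks weights toward zero, so a smaller \sigma means stronger regularization.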
{
"text": "The standard evaluation metrics for a chunker are precision P (fraction of output chunks that exactly match the reference chunks), recall R (fraction of reference chunks returned by the chunker), and their harmonic mean, the F1 score F 1 = 2 * P * R/(P + R) (which we call just F score in what follows). The relationships between F score and labeling error or log-likelihood are not direct, so we report both F score and the other metrics for the models we tested. For comparisons with other reported results we use F score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metric",
"sec_num": "4.4"
},
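These metrics can be computed directly from chunk spans; in the small sketch below, the (start, end, type) tuple representation is our own choice for illustration, not the CoNLL data format:

```python
def chunk_prf(reference, predicted):
    """Precision, recall, and F1 over exact-match chunks.

    Chunks are (start, end, type) tuples, so only chunks that match
    in both boundaries and phrase type count as correct.
    """
    ref, pred = set(reference), set(predicted)
    correct = len(ref & pred)
    p = correct / len(pred) if pred else 0.0
    r = correct / len(ref) if ref else 0.0
    f1 = 2 * p * r / (p + r) if p + r > 0 else 0.0
    return p, r, f1
```

Note that a chunk with one wrong boundary counts as both a precision and a recall error, which is why F score can move very differently from per-token labeling accuracy.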
{
"text": "Ideally, comparisons among chunkers would control for feature sets, data preparation, training and test procedures, and parameter tuning, and estimate the statistical significance of performance differences. Unfortunately, reported results sometimes leave out details needed for accurate comparisons. We report F scores for comparison with previous work, but we also give statistical significance estimates using McNemar's test for those methods that we evaluated directly.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Significance Tests",
"sec_num": "4.5"
},
{
"text": "Testing the significance of F scores is tricky because the wrong chunks generated by two chunkers are not directly comparable. Yeh (2000) examined randomized tests for estimating the significance of F scores, and in particular the bootstrap over the test set (Efron and Tibshirani, 1993; Sang, 2002) . However, bootstrap variances in preliminary experiments were too high to allow any conclusions, so we used instead a McNemar paired test on labeling disagreements (Gillick and Cox, 1989) . Model F score SVM combination 94.39% (Kudo and Matsumoto, 2001) CRF 94.38% Generalized winnow 93.89% (Zhang et al., 2002) Voted perceptron 94.09% MEMM 93.70% Table 2 : NP chunking F scores",
"cite_spans": [
{
"start": 127,
"end": 137,
"text": "Yeh (2000)",
"ref_id": "BIBREF34"
},
{
"start": 259,
"end": 287,
"text": "(Efron and Tibshirani, 1993;",
"ref_id": "BIBREF11"
},
{
"start": 288,
"end": 299,
"text": "Sang, 2002)",
"ref_id": "BIBREF29"
},
{
"start": 465,
"end": 488,
"text": "(Gillick and Cox, 1989)",
"ref_id": "BIBREF14"
},
{
"start": 528,
"end": 554,
"text": "(Kudo and Matsumoto, 2001)",
"ref_id": "BIBREF16"
},
{
"start": 592,
"end": 612,
"text": "(Zhang et al., 2002)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 649,
"end": 656,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Significance Tests",
"sec_num": "4.5"
},
{
"text": "All the experiments were performed with our Java implementation of CRFs,designed to handle millions of features, on 1.7 GHz Pentium IV processors with Linux and IBM Java 1.3.0. Minor variants support voted perceptron (Collins, 2002) and MEMMs (McCallum et al., 2000) with the same efficient feature encoding. GIS, CG, and L-BFGS were used to train CRFs and MEMMs. Table 2 gives representative NP chunking F scores for previous work and for our best model, with the complete set of 3.8 million features. The last row of the table gives the score for an MEMM trained with the mixed CG method using an approximate preconditioner. The published F score for voted perceptron is 93.53% with a different feature set (Collins, 2002) . The improved result given here is for the supported feature set; the complete feature set gives a slightly lower score of 94.07%. Zhang et al. (2002) reported a higher F score (94.38%) with generalized winnow using additional linguistic features that were not available to us.",
"cite_spans": [
{
"start": 217,
"end": 232,
"text": "(Collins, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 237,
"end": 266,
"text": "MEMMs (McCallum et al., 2000)",
"ref_id": null
},
{
"start": 709,
"end": 724,
"text": "(Collins, 2002)",
"ref_id": "BIBREF7"
},
{
"start": 857,
"end": 876,
"text": "Zhang et al. (2002)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [
{
"start": 364,
"end": 371,
"text": "Table 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "All the results in the rest of this section are for the smaller supported set of 820,000 features. Figures 2a and 2b show how preconditioning helps training convergence.",
"cite_spans": [],
"ref_spans": [
{
"start": 99,
"end": 116,
"text": "Figures 2a and 2b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Convergence Speed",
"sec_num": "5.2"
},
{
"text": "Since each CG iteration involves a line search that may require several forward-backward procedures (typically between 4 and 5 in our experiments), we plot the progress of penalized log-likelihood L \u03bb with respect to the number of forward-backward evaluations. The objective function increases rapidly, achieving close proximity to the maximum in a few iterations (typically 10). In contrast, GIS training increases L \u03bb rather slowly, never reaching the value achieved by CG. The relative slowness of iterative scaling is also documented in a recent evaluation of training methods for maximum-entropy classification (Malouf, 2002) . In theory, GIS would eventually converge to the L \u03bb optimum, but in practice convergence may be so slow that L \u03bb improvements may fall below numerical accuracy, falsely indicating convergence. Mixed CG training converges slightly more slowly than preconditioned CG. On the other hand, CG without preconditioner converges much more slowly than both preconditioned CG and mixed CG training. However, it is still much faster than GIS. We believe that the superior convergence rate of preconditioned CG is due to the use of approximate second-order information. This is confirmed by the performance of L-BFGS, which also uses approximate second-order information. 2 Although there is no direct relationship between F scores and log-likelihood, in these experiments F score tends to follow log-likelihood. Indeed, Figure 3 shows that preconditioned CG training improves test F scores much more rapidly than GIS training. Table 3 compares run times (in minutes) for reaching a target penalized log-likelihood for various training methods with prior \u03c3 = 1.0. GIS is the only method that failed to reach the target, after 3,700 iterations. We cannot place the voted perceptron in this table, as it does not optimize log-likelihood and does not use a prior. 
However, it reaches a fairly good F-score above 93% in just two training sweeps, but after that it improves more slowly, to a somewhat lower score, than preconditioned CG training.",
"cite_spans": [
{
"start": 616,
"end": 630,
"text": "(Malouf, 2002)",
"ref_id": "BIBREF19"
},
{
"start": 1293,
"end": 1294,
"text": "2",
"ref_id": null
}
],
"ref_spans": [
{
"start": 1442,
"end": 1450,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 1549,
"end": 1556,
"text": "Table 3",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Convergence Speed",
"sec_num": "5.2"
},
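The CG-with-line-search loop described here can be illustrated on a toy objective; the quadratic stand-in for the negative penalized log-likelihood, the PR+ update, and the Armijo parameters below are our own sketch, not the paper's trainer:

```python
import numpy as np

# Toy strictly convex quadratic standing in for -L_lambda;
# A, b, and sigma are illustrative only.
rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T + np.eye(5)            # positive definite curvature
b = rng.normal(size=5)
sigma = 1.0
H = A + np.eye(5) / sigma ** 2     # curvature including the prior term

evals = 0                          # objective evaluations: the analogue
                                   # of forward-backward passes


def f(w):
    global evals
    evals += 1
    return 0.5 * w @ A @ w - b @ w + w @ w / (2 * sigma ** 2)


def grad(w):
    return H @ w - b


def cg_minimize(w, iters=25):
    """Polak-Ribiere nonlinear CG with Armijo backtracking line search."""
    g = grad(w)
    d = -g
    for _ in range(iters):
        slope = g @ d
        if slope >= 0:             # safeguard: restart along steepest descent
            d, slope = -g, -(g @ g)
        # backtracking line search: several f evaluations per iteration
        fw, step = f(w), 1.0
        while f(w + step * d) > fw + 1e-4 * step * slope:
            step *= 0.5
        w = w + step * d
        g_new = grad(w)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))  # PR+ update
        d = -g_new + beta * d
        g = g_new
    return w


w_cg = cg_minimize(np.zeros(5))
```

The inner `while` loop is the cost driver: every trial step evaluates the objective once, which in the CRF setting is one full forward-backward pass over the training data.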
{
"text": "The accuracy rate for individual labeling decisions is over-optimistic as an accuracy measure for shallow parsing. For instance, if the chunk BIIIIIII is labled as OIIIIIII, the labeling accuracy is 87.5%, but recall is 0. However, individual labeling errors provide a more convenient basis for statistical significance tests. One (Gillick and Cox, 1989) .",
"cite_spans": [
{
"start": 331,
"end": 354,
"text": "(Gillick and Cox, 1989)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Labeling Accuracy",
"sec_num": "5.3"
},
{
"text": "With McNemar's test, we compare the correctness of the labeling decisions of two models. The null hypothesis is that the disagreements (correct vs. incorrect) are due to chance. Table 4 summarizes the results of tests between the models for which we had labeling decisions. These tests suggest that MEMMs are significantly less accurate, but that there are no significant differences in accuracy among the other models.",
"cite_spans": [],
"ref_spans": [
{
"start": 178,
"end": 185,
"text": "Table 4",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Labeling Accuracy",
"sec_num": "5.3"
},
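A minimal version of the McNemar test on paired labeling decisions can be sketched as follows; this is the exact binomial form, and the paper does not specify which variant was used:

```python
import math


def mcnemar_p(b, c):
    """Two-sided exact McNemar test on paired labeling decisions.

    b = tokens model A got right and model B got wrong,
    c = tokens model A got wrong and model B got right;
    ties (both right or both wrong) are ignored. Returns the exact
    binomial p-value for the null that disagreements are chance.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # one tail: P(X <= k) under Binomial(n, 1/2); double it for two sides
    tail = sum(math.comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)
```

Only the disagreement cells enter the statistic, which is why two chunkers with very different outputs can still be statistically indistinguishable if their disagreements are balanced.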
{
"text": "We have shown that (log-)linear sequence labeling models trained discriminatively with general-purpose optimization methods are a simple, competitive solution to learning shallow parsers. These models combine the best features of generative finite-state models and discriminative (log-)linear classifiers, and do NP chunking as well as or better than \"ad hoc\" classifier combinations, which were the most accurate approach until now. In a longer version of this work we will also describe shallow parsing results for other phrase types. There is no reason why the same techniques cannot be used equally successfully for the other types or for other related tasks, such as POS tagging or named-entity recognition.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "On the machine-learning side, it would be interesting to generalize the ideas of large-margin classification to sequence models, strengthening the results of Collins (2002) and leading to new optimal training algorithms with stronger guarantees against overfitting.",
"cite_spans": [
{
"start": 158,
"end": 172,
"text": "Collins (2002)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "On the application side, (log-)linear parsing models have the potential to supplant the currently dominant lexicalized PCFG models for parsing by allowing much richer feature sets and simpler smoothing, while avoiding the label bias problem that may have hindered earlier classifier-based parsers (Ratnaparkhi, 1997) . However, work in that direction has so far addressed only parse reranking (Collins and Duffy, 2002; Riezler et al., 2002) . Full discriminative parser training faces significant algorithmic challenges in the relationship between parsing alternatives and feature values (Geman and Johnson, 2002) and in computing feature expectations.",
"cite_spans": [
{
"start": 297,
"end": 316,
"text": "(Ratnaparkhi, 1997)",
"ref_id": "BIBREF26"
},
{
"start": 393,
"end": 418,
"text": "(Collins and Duffy, 2002;",
"ref_id": "BIBREF8"
},
{
"start": 419,
"end": 440,
"text": "Riezler et al., 2002)",
"ref_id": null
},
{
"start": 588,
"end": 613,
"text": "(Geman and Johnson, 2002)",
"ref_id": "BIBREF13"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Although L-BFGS has a slightly higher penalized loglikelihood, its log-likelihood on the data is actually lower than that of preconditioned CG and mixed CG training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "John Lafferty and Andrew McCallum worked with the second author on developing CRFs. McCallum helped by the second author implemented the first conjugategradient trainer for CRFs, which convinced us that training of large CRFs on large datasets would be practical. Michael Collins helped us reproduce his generalized per-cepton results and compare his method with ours. Erik Tjong Kim Sang, who has created the best online resources on shallow parsing, helped us with details of the CoNLL-2000 shared task. Taku Kudo provided the output of his SVM chunker for the significance test.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Parsing by chunks",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
}
],
"year": 1991,
"venue": "Principle-based Parsing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Abney. Parsing by chunks. In R. Berwick, S. Abney, and C. Tenny, editors, Principle-based Parsing. Kluwer Aca- demic Publishers, 1991.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Boosting applied to tagging and PP attachment",
"authors": [
{
"first": "S",
"middle": [],
"last": "Abney",
"suffix": ""
},
{
"first": "R",
"middle": [
"E"
],
"last": "Schapire",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Singer",
"suffix": ""
}
],
"year": 1999,
"venue": "Proc. EMNLP-VLC",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Abney, R. E. Schapire, and Y. Singer. Boosting applied to tagging and PP attachment. In Proc. EMNLP-VLC, New Brunswick, New Jersey, 1999. ACL.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "A maximum entropy approach to natural language processing",
"authors": [
{
"first": "A",
"middle": [
"L"
],
"last": "Berger",
"suffix": ""
},
{
"first": "S",
"middle": [
"A"
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"J"
],
"last": "Della Pietra",
"suffix": ""
}
],
"year": 1996,
"venue": "Computational Linguistics",
"volume": "22",
"issue": "1",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. L. Berger, S. A. Della Pietra, and V. J. Della Pietra. A maxi- mum entropy approach to natural language processing. Com- putational Linguistics, 22(1), 1996.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "An algorithm that learns what's in a name",
"authors": [
{
"first": "D",
"middle": [
"M"
],
"last": "Bikel",
"suffix": ""
},
{
"first": "R",
"middle": [
"L"
],
"last": "Schwartz",
"suffix": ""
},
{
"first": "R",
"middle": [
"M"
],
"last": "Weischedel",
"suffix": ""
}
],
"year": 1999,
"venue": "Machine Learning",
"volume": "34",
"issue": "",
"pages": "211--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. M. Bikel, R. L. Schwartz, and R. M. Weischedel. An algo- rithm that learns what's in a name. Machine Learning, 34: 211-231, 1999.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Une Approche th\u00e9orique de l'Apprentissage Connexionniste: Applications\u00e0 la Reconnaissance de la Parole",
"authors": [
{
"first": "L",
"middle": [],
"last": "Bottou",
"suffix": ""
}
],
"year": 1991,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Bottou. Une Approche th\u00e9orique de l'Apprentissage Con- nexionniste: Applications\u00e0 la Reconnaissance de la Parole. PhD thesis, Universit\u00e9 de Paris XI, 1991.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Transformation-based error-driven learning and natural language processing: a case study in part of speech tagging",
"authors": [
{
"first": "E",
"middle": [],
"last": "Brill",
"suffix": ""
}
],
"year": 1995,
"venue": "Computational Linguistics",
"volume": "21",
"issue": "",
"pages": "543--565",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. Brill. Transformation-based error-driven learning and natural language processing: a case study in part of speech tagging. Computational Linguistics, 21:543-565, 1995.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A Gaussian prior for smoothing maximum entropy models",
"authors": [
{
"first": "S",
"middle": [
"F"
],
"last": "Chen",
"suffix": ""
},
{
"first": "R",
"middle": [],
"last": "Rosenfeld",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. F. Chen and R. Rosenfeld. A Gaussian prior for smoothing maximum entropy models. Technical Report CMU-CS-99- 108, Carnegie Mellon University, 1999.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. EMNLP 2002. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins. Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algo- rithms. In Proc. EMNLP 2002. ACL, 2002.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron",
"authors": [
{
"first": "M",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "N",
"middle": [],
"last": "Duffy",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. 40th ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "M. Collins and N. Duffy. New ranking algorithms for parsing and tagging: Kernels over discrete structures, and the voted perceptron. In Proc. 40th ACL, 2002.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Generalized iterative scaling for log-linear models",
"authors": [
{
"first": "J",
"middle": [
"N"
],
"last": "Darroch",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Ratcliff",
"suffix": ""
}
],
"year": 1972,
"venue": "The Annals of Mathematical Statistics",
"volume": "43",
"issue": "5",
"pages": "1470--1480",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. N. Darroch and D. Ratcliff. Generalized iterative scaling for log-linear models. The Annals of Mathematical Statistics, 43 (5):1470-1480, 1972.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Inducing features of random fields",
"authors": [
{
"first": "S",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "V",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
}
],
"year": 1997,
"venue": "IEEE PAMI",
"volume": "19",
"issue": "4",
"pages": "380--393",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Della Pietra, V. Della Pietra, and J. Lafferty. Inducing fea- tures of random fields. IEEE PAMI, 19(4):380-393, 1997.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "An Introduction to the Bootstrap",
"authors": [
{
"first": "B",
"middle": [],
"last": "Efron",
"suffix": ""
},
{
"first": "R",
"middle": [
"J"
],
"last": "Tibshirani",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. Chapman & Hall/CRC, 1993.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Information extraction with HMM structures learned by stochastic optimization",
"authors": [
{
"first": "D",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. AAAI",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "D. Freitag and A. McCallum. Information extraction with HMM structures learned by stochastic optimization. In Proc. AAAI 2000, 2000.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Dynamic programming for parsing and estimation of stochastic unification-based grammars",
"authors": [
{
"first": "S",
"middle": [],
"last": "Geman",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. 40th ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "S. Geman and M. Johnson. Dynamic programming for parsing and estimation of stochastic unification-based grammars. In Proc. 40th ACL, 2002.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Some statistical issues in the compairson of speech recognition algorithms",
"authors": [
{
"first": "L",
"middle": [],
"last": "Gillick",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Cox",
"suffix": ""
}
],
"year": 1989,
"venue": "International Conference on Acoustics Speech and Signal Processing",
"volume": "1",
"issue": "",
"pages": "532--535",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. Gillick and S. Cox. Some statistical issues in the compairson of speech recognition algorithms. In International Confer- ence on Acoustics Speech and Signal Processing, volume 1, pages 532-535, 1989.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Markov fields on finite graphs and lattices. Unpublished manuscript",
"authors": [
{
"first": "J",
"middle": [],
"last": "Hammersley",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Clifford",
"suffix": ""
}
],
"year": 1971,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Hammersley and P. Clifford. Markov fields on finite graphs and lattices. Unpublished manuscript, 1971.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Chunking with support vector machines",
"authors": [
{
"first": "T",
"middle": [],
"last": "Kudo",
"suffix": ""
},
{
"first": "Y",
"middle": [],
"last": "Matsumoto",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. NAACL 2001. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Kudo and Y. Matsumoto. Chunking with support vector ma- chines. In Proc. NAACL 2001. ACL, 2001.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Robust part-of-speech tagging using a hidden Markov model",
"authors": [
{
"first": "J",
"middle": [],
"last": "Kupiec",
"suffix": ""
}
],
"year": 1992,
"venue": "Computer Speech and Language",
"volume": "6",
"issue": "",
"pages": "225--242",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Kupiec. Robust part-of-speech tagging using a hidden Markov model. Computer Speech and Language, 6:225-242, 1992.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data",
"authors": [
{
"first": "J",
"middle": [],
"last": "Lafferty",
"suffix": ""
},
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2001,
"venue": "Proc. ICML-01",
"volume": "",
"issue": "",
"pages": "282--289",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling se- quence data. In Proc. ICML-01, pages 282-289, 2001.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "A comparison of algorithms for maximum entropy parameter estimation",
"authors": [
{
"first": "R",
"middle": [],
"last": "Malouf",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. CoNLL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "R. Malouf. A comparison of algorithms for maximum entropy parameter estimation. In Proc. CoNLL-2002, 2002.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Maximum entropy Markov models for information extraction and segmentation",
"authors": [
{
"first": "A",
"middle": [],
"last": "Mccallum",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Pereira",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. ICML 2000",
"volume": "",
"issue": "",
"pages": "591--598",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. McCallum, D. Freitag, and F. Pereira. Maximum entropy Markov models for information extraction and segmentation. In Proc. ICML 2000, pages 591-598, Stanford, California, 2000.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Algorithms for maximum-likelihood logistic regression",
"authors": [
{
"first": "T",
"middle": [
"P"
],
"last": "Minka",
"suffix": ""
}
],
"year": 2001,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. P. Minka. Algorithms for maximum-likelihood logistic re- gression. Technical Report 758, CMU Statistics Department, 2001.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Numerical Optimization",
"authors": [
{
"first": "J",
"middle": [],
"last": "Nocedal",
"suffix": ""
},
{
"first": "S",
"middle": [
"J"
],
"last": "Wright",
"suffix": ""
}
],
"year": 1999,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. Nocedal and S. J. Wright. Numerical Optimization. Springer, 1999.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "The use of classifiers in sequential inference",
"authors": [
{
"first": "V",
"middle": [],
"last": "Punyakanok",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Roth",
"suffix": ""
}
],
"year": 2001,
"venue": "NIPS 13",
"volume": "",
"issue": "",
"pages": "995--1001",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "V. Punyakanok and D. Roth. The use of classifiers in sequential inference. In NIPS 13, pages 995-1001. MIT Press, 2001.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Text chunking using transformation-based learning",
"authors": [
{
"first": "L",
"middle": [
"A"
],
"last": "Ramshaw",
"suffix": ""
},
{
"first": "M",
"middle": [
"P"
],
"last": "Marcus",
"suffix": ""
}
],
"year": 1995,
"venue": "Proc. Third Workshop on Very Large Corpora. ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "L. A. Ramshaw and M. P. Marcus. Text chunking using transformation-based learning. In Proc. Third Workshop on Very Large Corpora. ACL, 1995.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "A maximum entropy model for part-of-speech tagging",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1996,
"venue": "Proc. EMNLP",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ratnaparkhi. A maximum entropy model for part-of-speech tagging. In Proc. EMNLP, New Brunswick, New Jersey, 1996. ACL.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "A linear observed time statistical parser based on maximum entropy models",
"authors": [
{
"first": "A",
"middle": [],
"last": "Ratnaparkhi",
"suffix": ""
}
],
"year": 1997,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Ratnaparkhi. A linear observed time statistical parser based on maximum entropy models. In C. Cardie and R. Weischedel, editors, EMNLP-2. ACL, 1997.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Parsing the Wall Street Journal using a lexical-functional grammar and discriminative estimation techniques",
"authors": [
{
"first": "Iii",
"middle": [],
"last": "Maxwell",
"suffix": ""
},
{
"first": "M",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. 40th ACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maxwell III, and M. Johnson. Parsing the Wall Street Journal using a lexical-functional grammar and discriminative esti- mation techniques. In Proc. 40th ACL, 2002.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Memory-based shallow parsing",
"authors": [
{
"first": "E",
"middle": [
"F T K"
],
"last": "Sang",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "559--594",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. F. T. K. Sang. Memory-based shallow parsing. Journal of Machine Learning Research, 2:559-594, 2002.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "An introduction to the conjugate gradient method without the agonizing pain",
"authors": [
{
"first": "J",
"middle": [
"R"
],
"last": "Shewchuk",
"suffix": ""
}
],
"year": 1994,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "J. R. Shewchuk. An introduction to the conjugate gradient method without the agonizing pain, 1994. URL http:// www-2.cs.cmu.edu/\u02dcjrs/jrspapers.html#cg.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Discriminative probabilistic models for relational data",
"authors": [
{
"first": "B",
"middle": [],
"last": "Taskar",
"suffix": ""
},
{
"first": "P",
"middle": [],
"last": "Abbeel",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Koller",
"suffix": ""
}
],
"year": 2002,
"venue": "Eighteenth Conference on Uncertainty in Artificial Intelligence",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilis- tic models for relational data. In Eighteenth Conference on Uncertainty in Artificial Intelligence, 2002.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Introduction to the CoNLL-2000 shared task: Chunking",
"authors": [
{
"first": "E",
"middle": [
"F"
],
"last": "Tjong Kim Sang",
"suffix": ""
},
{
"first": "S",
"middle": [],
"last": "Buchholz",
"suffix": ""
}
],
"year": 2000,
"venue": "Proc. CoNLL-2000",
"volume": "",
"issue": "",
"pages": "127--132",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "E. F. Tjong Kim Sang and S. Buchholz. Introduction to the CoNLL-2000 shared task: Chunking. In Proc. CoNLL-2000, pages 127-132, 2000.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Efficient training of conditional random fields",
"authors": [
{
"first": "H",
"middle": [],
"last": "Wallach",
"suffix": ""
}
],
"year": 2002,
"venue": "Proc. 6th Annual CLUK Research Colloquium",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "H. Wallach. Efficient training of conditional random fields. In Proc. 6th Annual CLUK Research Colloquium, 2002.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "More accurate tests for the statistical significance of result differences",
"authors": [
{
"first": "A",
"middle": [],
"last": "Yeh",
"suffix": ""
}
],
"year": 2000,
"venue": "COLING-2000",
"volume": "",
"issue": "",
"pages": "947--953",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Yeh. More accurate tests for the statistical significance of result differences. In COLING-2000, pages 947-953, Saar- bruecken, Germany, 2000.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Text chunking based on a generalization of winnow",
"authors": [
{
"first": "T",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "F",
"middle": [],
"last": "Damerau",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Johnson",
"suffix": ""
}
],
"year": 2002,
"venue": "Journal of Machine Learning Research",
"volume": "2",
"issue": "",
"pages": "615--637",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "T. Zhang, F. Damerau, and D. Johnson. Text chunking based on a generalization of winnow. Journal of Machine Learning Research, 2:615-637, 2002.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Figure 1: NP chunks",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Test F scores vs. training time such test is McNemar test on paired observations",
"uris": null,
"num": null,
"type_str": "figure"
},
"TABREF0": {
"num": null,
"html": null,
"type_str": "table",
"text": "Boeing Co. to provide structural parts for Boeing 's 747 jetliners .",
"content": "<table><tr><td>Rockwell International Corp. 's Tulsa unit</td><td>said</td><td>it</td><td>signed</td><td>a tentative agreement</td><td>extending</td></tr><tr><td>its contract with</td><td/><td/><td/><td/><td/></tr></table>"
},
"TABREF2": {
"num": null,
"html": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td colspan=\"2\">: Runtime for various training methods</td></tr><tr><td>null hypothesis</td><td>p-value</td></tr><tr><td>CRF vs. SVM</td><td>0.469</td></tr><tr><td>CRF vs. MEMM</td><td>0.00109</td></tr><tr><td>CRF vs. voted perceptron</td><td>0.116</td></tr><tr><td colspan=\"2\">MEMM vs. voted perceptron 0.0734</td></tr></table>"
},
"TABREF3": {
"num": null,
"html": null,
"type_str": "table",
"text": "McNemar's tests on labeling disagreements",
"content": "<table/>"
}
}
}
}