| { |
| "paper_id": "N12-1013", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T14:04:26.957484Z" |
| }, |
| "title": "Minimum-Risk Training of Approximate CRF-Based NLP Systems", |
| "authors": [ |
| { |
| "first": "Veselin", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University Baltimore", |
| "location": { |
| "postCode": "21218", |
| "region": "MD" |
| } |
| }, |
| "email": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Johns Hopkins University Baltimore", |
| "location": { |
| "postCode": "21218", |
| "region": "MD" |
| } |
| }, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Conditional Random Fields (CRFs) are a popular formalism for structured prediction in NLP. It is well known how to train CRFs with certain topologies that admit exact inference, such as linear-chain CRFs. Some NLP phenomena, however, suggest CRFs with more complex topologies. Should such models be used, considering that they make exact inference intractable? Stoyanov et al. (2011) recently argued for training parameters to minimize the task-specific loss of whatever approximate inference and decoding methods will be used at test time. We apply their method to three NLP problems, showing that (i) using more complex CRFs leads to improved performance, and that (ii) minimum-risk training learns more accurate models.", |
| "pdf_parse": { |
| "paper_id": "N12-1013", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Conditional Random Fields (CRFs) are a popular formalism for structured prediction in NLP. It is well known how to train CRFs with certain topologies that admit exact inference, such as linear-chain CRFs. Some NLP phenomena, however, suggest CRFs with more complex topologies. Should such models be used, considering that they make exact inference intractable? Stoyanov et al. (2011) recently argued for training parameters to minimize the task-specific loss of whatever approximate inference and decoding methods will be used at test time. We apply their method to three NLP problems, showing that (i) using more complex CRFs leads to improved performance, and that (ii) minimum-risk training learns more accurate models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Conditional Random Fields (CRFs) (Lafferty et al., 2001) are often used to model dependencies among linguistic variables. CRF-based models have improved the state of the art in a number of natural language processing (NLP) tasks ranging from part-of-speech tagging to information extraction and sentiment analysis (Lafferty et al., 2001; Peng and McCallum, 2006; Choi et al., 2005) .", |
| "cite_spans": [ |
| { |
| "start": 33, |
| "end": 56, |
| "text": "(Lafferty et al., 2001)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 313, |
| "end": 336, |
| "text": "(Lafferty et al., 2001;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 337, |
| "end": 362, |
| "text": "Peng and McCallum, 2006;", |
| "ref_id": null |
| }, |
| { |
| "start": 363, |
| "end": 381, |
| "text": "Choi et al., 2005)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Robust and theoretically sound training procedures have been developed for CRFs when the model can be used with exact inference and decoding. 1 However, some NLP problems seem to call for higher-treewidth graphical models in which exact inference is expensive or intractable. These \"loopy\" CRFs have cyclic connections among the output and/or latent variables. Alas, standard learning procedures assume exact inference: they do not compensate for approximations that will be used at test time, and can go surprisingly awry if approximate inference is used at training time (Kulesza and Pereira, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 573, |
| "end": 600, |
| "text": "(Kulesza and Pereira, 2008)", |
| "ref_id": "BIBREF21" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While NLP research has been consistently evolving toward more richly structured models, one may hesitate to add dependencies to a graphical model if there is a danger that this will end up hurting performance through approximations. In this paper we illustrate how to address this problem, even for extremely interconnected models in which every pair of output variables is connected. Wainwright (2006) showed that if approximate inference will be used at test time, it may be beneficial to use a learning procedure that does not converge to the true model but to one that performs well under the approximations. Stoyanov et al. (2011) argue for minimizing a certain non-convex training objective, namely the empirical risk of the entire system comprising the CRF together with whatever approximate inference and decoding procedures will be used at test time. They regard this entire system as simply a complex decision rule, analogous to a neural network, and show how to use back-propagation to tune its parameters to locally minimize the empirical risk (i.e., the average task-specific loss on training data). They show that on certain synthetic-data problems, this frequentist training regimen significantly reduced test-data loss compared to approximate maximum likelihood estimation (MLE). However, this method has not been evaluated on real-world problems until now.", |
| "cite_spans": [ |
| { |
| "start": 385, |
| "end": 402, |
| "text": "Wainwright (2006)", |
| "ref_id": "BIBREF41" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We will refer to the approach as \"ERMA\"-Empirical Risk Minimization under Approximations. ERMA is attractive for NLP because the freedom to use arbitrarily structured graphical models makes it possible to include latent linguistic variables, predict complex structures such as parses (Smith and Eisner, 2008) , and do collective prediction in relational domains (Ji and Grishman, 2011; Benson et al., 2011; Dreyer and Eisner, 2009) . In training, ERMA considers not only the approximation method but also the task-specific loss function. This means that ERMA is careful to use the additional variables and dependencies only in ways that help training set performance. (Overfitting on the enlarged parameter set should be avoided through regularization.)", |
| "cite_spans": [ |
| { |
| "start": 284, |
| "end": 308, |
| "text": "(Smith and Eisner, 2008)", |
| "ref_id": "BIBREF32" |
| }, |
| { |
| "start": 362, |
| "end": 385, |
| "text": "(Ji and Grishman, 2011;", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 386, |
| "end": 406, |
| "text": "Benson et al., 2011;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 407, |
| "end": 431, |
| "text": "Dreyer and Eisner, 2009)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We have developed a simple syntax for specifying CRFs with complex structures, and a software package (available from http://www.clsp.jhu.edu/~ves/software.html) that allows ERMA training of these CRFs for several popular loss functions (e.g., accuracy, mean-squared error, F-measure). In this paper, we use these tools to revisit three previously studied NLP applications that can be modeled naturally with approximate CRFs (we will use approximate CRFs to refer to CRF-based systems that are used with approximations in inference or decoding). We show that (i) natural language can be modeled more effectively with CRFs that are not restricted to a linear structure and (ii) that ERMA training represents an improvement over previous learning methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The first application, predicting congressional votes, has not been previously modeled with CRFs. By using a more principled probabilistic approach, we are able to improve the state-of-the-art accuracy from 71.2% to 78.2% when training to maximize the approximate log-likelihood of the training data. By switching to ERMA training, we improve this result further to 85.1%.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The second application, information extraction from seminar announcements, has been modeled previously with skip-chain CRFs (Sutton and McCallum, 2005; Finkel et al., 2005) . The skip-chain CRF introduces loops and requires approximate inference, which motivates minimum risk training. Our results show that ERMA training improves F-measure from 89.5 to 90.9 (compared to 87.1 for the model without skip-chains).", |
| "cite_spans": [ |
| { |
| "start": 124, |
| "end": 152, |
| "text": "(Sutton and McCallum, 2005;", |
| "ref_id": null |
| }, |
| { |
| "start": 153, |
| "end": 173, |
| "text": "Finkel et al., 2005)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Finally, for our third application, we perform collective multi-label text classification. We follow previous work (Ghamrawi and McCallum, 2005; Finley and Joachims, 2008) and use a fully connected CRF to model all pairwise dependencies between labels. We observe similar trends for this task: switching from a maximum entropy model that does not model label dependencies to a loopy CRF leads to an improvement in F-measure from 81.6 to 84.0, and using ERMA leads to additional improvement (84.7).", |
| "cite_spans": [ |
| { |
| "start": 115, |
| "end": 144, |
| "text": "(Ghamrawi and McCallum, 2005;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 145, |
| "end": 171, |
| "text": "Finley and Joachims, 2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A conditional random field (CRF) is an undirected graphical model defined by a tuple (X , Y, F, f, \u03b8). X = (X 1 , X 2 , . . .) is a set of input random variables and Y = (Y 1 , Y 2 , . . .) is a set of output random variables. 2 We use x = (x 1 , x 2 , . . .) to denote a possible assignment of values to X , and similarly for y, with xy denoting the joint assignment. Each \u03b1 \u2208 F is a subset of the random variables, \u03b1 \u2286 X \u222a Y, and we write xy \u03b1 to denote the restriction of xy to \u03b1. Finally, for each \u03b1 \u2208 F, the CRF specifies a function f \u03b1 that extracts a feature vector \u2208 R d from the restricted assignment xy \u03b1 . We define the overall feature vector f (x, y)", |
| "cite_spans": [ |
| { |
| "start": 221, |
| "end": 222, |
| "text": "2", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries 2.1 Conditional Random Fields", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "= \u2211 \u03b1\u2208F f \u03b1 (xy \u03b1 ) \u2208 R d . The model defines conditional probabilities p \u03b8 (y|x) = exp \u03b8 \u2022 f (x, y) / \u2211 y' exp \u03b8 \u2022 f (x, y' )", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Preliminaries 2.1 Conditional Random Fields", |
| "sec_num": "2" |
| }, |
| { |
| "text": "where \u03b8 \u2208 R d is a global weight vector (to be learned). This is a log-linear model; the denominator (traditionally denoted Z x ) sums over all possible output assignments to normalize the distribution. Provided that all probabilities needed at training or test time are conditioned on an observation of the form X = x, CRFs can include arbitrary overlapping features of the input without having to explicitly model input feature dependencies.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Preliminaries 2.1 Conditional Random Fields", |
| "sec_num": "2" |
| }, |
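As an illustration of the log-linear model in Eq. (1) (not part of the original paper), the conditional probability of a toy CRF can be computed by brute-force enumeration of the partition function Z_x. All names and the tiny feature set below are invented for the example; real CRFs avoid this enumeration via dynamic programming or approximate inference.

```python
import itertools
import math

def crf_probability(theta, features, y, outputs):
    """p_theta(y | x) for a toy CRF. `features(y_)` plays the role of the
    global feature vector f(x, y_) with the input x held fixed; `outputs`
    enumerates every possible output assignment y'."""
    def score(y_):
        return sum(t * f for t, f in zip(theta, features(y_)))
    log_Z = math.log(sum(math.exp(score(y_)) for y_ in outputs))  # denominator Z_x
    return math.exp(score(y) - log_Z)

# Toy example: two binary output variables, features [y1, y2, y1*y2].
outputs = list(itertools.product([0, 1], repeat=2))
features = lambda y: [y[0], y[1], y[0] * y[1]]
theta = [0.0, 0.0, 0.0]          # all-zero weights -> uniform distribution
p = crf_probability(theta, features, (1, 1), outputs)  # 0.25
```

With nonzero weights the same routine still normalizes correctly; only the enumeration cost changes with the number of output variables.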
| { |
| "text": "Inference in general CRFs is intractable (Koller and Friedman, 2009) . Nevertheless, there exist several approximate algorithms that have theoretical motivation and tend to exhibit good performance in practice. Those include variational methods such as loopy belief propagation (BP) (Murphy et al., 1999) and mean-field, as well as Markov Chain Monte Carlo methods.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 68, |
| "text": "(Koller and Friedman, 2009)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 283, |
| "end": 304, |
| "text": "(Murphy et al., 1999)", |
| "ref_id": "BIBREF26" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference in CRFs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "ERMA training is applicable to any approximation that corresponds to a differentiable function, even if the function has no simple closed form but is computed by an iterative update algorithm. In this paper we select BP, which is exact when the factor graph is a tree, such as a linear-chain CRF, but whose results can be somewhat distorted by loops in the factor graph, as in our settings. BP computes beliefs about the marginal distribution of each random variable using iterative updates. We standardly approximate the posterior CRF marginals given the input observations by running BP over a CRF that enforces those observations.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Inference in CRFs", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Conditional random fields are models of probability. A decoder is a procedure for converting these probabilities into system outputs. Given x, the decoder would ideally choose y to minimize the loss \u2113(y, y * ), where \u2113 compares a candidate assignment y to the true assignment y * . But of course we do not know the truth at test time. Instead we can average over possible values y of the truth:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "argmin y' \u2211 y p(y | x) \u2022 \u2113(y, y' )", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Decoding", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "This is the minimum Bayes risk (MBR) principle from statistical decision theory: choose y to minimize the expected loss (i.e., the risk) according to the CRF's posterior beliefs given x.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In the NLP literature, CRFs are often decoded by choosing y to be the maximum posterior probability assignment (e.g., Sha and Pereira (2003) , Sutton et al. (2007) ). This is the MBR procedure for the 0-1 loss function that simply tests whether y = y * . For other loss functions, however, the corresponding MBR procedure is preferable. For some loss functions it is tractable given the posterior marginals of p, while in other cases approximations are needed.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 140, |
| "text": "Sha and Pereira (2003)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 143, |
| "end": 163, |
| "text": "Sutton et al. (2007)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "In our experiments we use MBR decoding (or a tractable approximation) but substitute the approximate posterior marginals of p as computed by BP. For example, if the loss of y is the number of incorrectly recovered output variables, MBR says to separately pick the most probable value for each output variable, according to its (approximate) marginal.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Decoding", |
| "sec_num": "2.3" |
| }, |
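The decoding rule described above (MBR under a Hamming-style loss that counts incorrect output variables) reduces to an independent argmax over each variable's marginal. A minimal sketch, with marginals represented as hypothetical dictionaries such as loopy BP might produce:

```python
def mbr_decode_hamming(marginals):
    """MBR decoding when the loss counts incorrectly recovered output
    variables: independently pick the most probable value of each output
    variable under its (possibly approximate) marginal distribution."""
    return [max(m, key=m.get) for m in marginals]

# Approximate marginals for three binary variables (e.g., from loopy BP):
marginals = [{0: 0.9, 1: 0.1}, {0: 0.4, 1: 0.6}, {0: 0.2, 1: 0.8}]
y_hat = mbr_decode_hamming(marginals)  # [0, 1, 1]
```

Note that the resulting assignment may have low joint probability; it is optimal only for this per-variable loss, which is exactly the point of matching the decoder to the loss function.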
| { |
| "text": "This section briefly describes the ERMA training algorithm from Stoyanov et al. (2011) and compares it to related structured learning methods. We assume a standard ML setting, with a set of training inputs x i and corresponding correct outputs y i * . All the methods below are regularized in practice, but we omit mention of regularizers for simplicity.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk CRF Training", |
| "sec_num": "3" |
| }, |
| { |
| "text": "When inference and decoding can be performed exactly, the CRF parameters \u03b8 are often trained by maximum likelihood estimation (MLE):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "argmax \u03b8 \u2211 i log p \u03b8 (y i * | x i )", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The gradient of each summand log p \u03b8 (y i * | x i ) can be computed by performing inference in two settings, one with x i , y i * observed and one with only the conditioning events x i observed. The gradient emerges as the difference between the feature expectations in the two cases. If exact inference is intractable, one can compute approximate feature expectations by loopy BP. Computing the approximate gradient in this way, and training the CRF with some gradient-based optimization method, has been shown to work relatively well in practice (Vishwanathan et al., 2006; Sutton and McCallum, 2005) .", |
| "cite_spans": [ |
| { |
| "start": 548, |
| "end": 575, |
| "text": "(Vishwanathan et al., 2006;", |
| "ref_id": "BIBREF40" |
| }, |
| { |
| "start": 576, |
| "end": 602, |
| "text": "Sutton and McCallum, 2005)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
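The MLE gradient described above (observed features minus expected features) can be sketched by exact enumeration on a toy model; loopy BP would replace the exact expectation with an approximate one. The helper names and toy feature set are invented for this illustration:

```python
import itertools

def mle_gradient(theta, features, y_star, outputs):
    """Gradient of log p_theta(y* | x) for one training example: the
    observed feature vector minus the expected feature vector under the
    model. Exact enumeration here; loopy BP approximates the expectation."""
    import math
    def score(y_):
        return sum(t * f for t, f in zip(theta, features(y_)))
    weights = [math.exp(score(y_)) for y_ in outputs]
    Z = sum(weights)
    d = len(theta)
    expected = [sum(w * features(y_)[k] for w, y_ in zip(weights, outputs)) / Z
                for k in range(d)]
    observed = features(y_star)
    return [o - e for o, e in zip(observed, expected)]

# Two binary outputs, features [y1, y2, y1*y2], zero weights (uniform model):
outputs = list(itertools.product([0, 1], repeat=2))
features = lambda y: [y[0], y[1], y[0] * y[1]]
g = mle_gradient([0.0, 0.0, 0.0], features, (1, 1), outputs)
# observed [1, 1, 1] minus uniform expectation [0.5, 0.5, 0.25] -> [0.5, 0.5, 0.75]
```

A gradient-based optimizer ascends this direction (plus a regularizer term) until the model's feature expectations match the empirical ones.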
| { |
| "text": "The above method takes into account neither the loss function that will be used for evaluation, nor the approximate algorithms that have been selected for inference and decoding at test time. Other structured learning methods do consider loss, though it is not obvious how to make them consider approximations. Those include maximum margin (Taskar et al., 2003; Finley and Joachims, 2008) and softmax-margin (Gimpel and Smith, 2010) . The idea of margin-based methods is to choose weights \u03b8 so that the correct alternative y i * always gets a better score than each possible alternative y i ' \u2208 Y. The loss is incorporated in these methods by requiring the margin", |
| "cite_spans": [ |
| { |
| "start": 339, |
| "end": 360, |
| "text": "(Taskar et al., 2003;", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 361, |
| "end": 387, |
| "text": "Finley and Joachims, 2008)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 406, |
| "end": 430, |
| "text": "(Gimpel and Smith, 2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "( \u03b8 \u2022 f (x i , y i * ) \u2212 \u03b8 \u2022 f (x i , y i ' )) \u2265 \u2113(y i ' , y i * )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": ", with penalized slack in these constraints. The softmax-margin method uses a different criterion: it resembles MLE but modifies the denominator of (1) to", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Z x = y \u2208Y exp( \u03b8 \u2022 f (x, y ) + (y , y * )).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "In our experiments we compare against MLE training (which is common) and softmax-margin, which incorporates loss and which Gimpel and Smith (2010) show is either better or competitive when compared to other margin methods on an NLP task. We adapt these methods to the loopy case in the obvious way, by replacing exact inference with loopy BP and keeping everything else the same.", |
| "cite_spans": [ |
| { |
| "start": 123, |
| "end": 146, |
| "text": "Gimpel and Smith (2010)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Structured Learning Methods", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "We wish to consider the approximate inference and decoding algorithms and the loss function that will be used during testing. Thus, we want \u03b8 to minimize the expected loss under the true data distribution P :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "argmin \u03b8 E xy\u223cP [\u2113(\u03b4 \u03b8 (x), y)]", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where \u03b4 \u03b8 is the decision rule (parameterized by \u03b8), which decodes the results of inference under p \u03b8 . In practice, we do not know the true data distribution, but we can do empirical risk minimization (ERM), instead averaging the loss over our sample of (x i , y i ) pairs. ERM for structured prediction was first introduced in the speech community (Bahl et al., 1988) and later used in NLP (Och, 2003; Kakade et al., 2002; Suzuki et al., 2006; Li and Eisner, 2009, etc.) . Previous applications of risk minimization assume exact inference, having defined the hypothesis space by a precomputed n-best list, lattice, or packed forest over which exact inference is possible.", |
| "cite_spans": [ |
| { |
| "start": 350, |
| "end": 369, |
| "text": "(Bahl et al., 1988)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 392, |
| "end": 403, |
| "text": "(Och, 2003;", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 404, |
| "end": 424, |
| "text": "Kakade et al., 2002;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 425, |
| "end": 445, |
| "text": "Suzuki et al., 2006;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 446, |
| "end": 472, |
| "text": "Li and Eisner, 2009, etc.)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The ERMA approach (Stoyanov et al., 2011) works with approximate inference and computes exact gradients of the output loss (or a differentiable surrogate) in the context of the approximate inference and decoding algorithms. To determine the gradient of \u2113(\u03b4 \u03b8 (x i ), y i ) with respect to \u03b8, the method relies on automatic differentiation in the reverse mode (Griewank and Corliss, 1991) , a general technique for sensitivity analysis in computations. The intuition behind automatic differentiation is that the entire computation is a sequence of elementary differentiable operations. For each elementary operation, given that we know the input and result values, and the partial derivative of the loss with respect to the result, we can compute the partial derivative of the loss with respect to the inputs to the step. Differentiating the whole complicated computation can be carried out in a backward pass in this step-by-step manner as long as we record intermediate results during the computation of the function (the forward pass). At the end, we accumulate the partials of the loss with respect to each parameter \u03b8 i .", |
| "cite_spans": [ |
| { |
| "start": 336, |
| "end": 364, |
| "text": "(Griewank and Corliss, 1991)", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
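The reverse-mode differentiation described above can be sketched on a stylized iterative computation. This is not the paper's BP implementation: the scalar update x_{t+1} = tanh(w * x_t) merely stands in for one message-passing step, but the record-intermediates-then-replay-backward structure is the same.

```python
import math

def loss_and_gradient(w, x0, target, steps=10):
    """Reverse-mode differentiation of an unrolled iterative update
    x_{t+1} = tanh(w * x_t), with loss (x_T - target)^2. The forward pass
    records every intermediate x_t; the backward pass replays them in
    reverse, accumulating dL/dw at each step."""
    xs = [x0]
    for _ in range(steps):                 # forward pass (record snapshots)
        xs.append(math.tanh(w * xs[-1]))
    loss = (xs[-1] - target) ** 2
    d_x = 2.0 * (xs[-1] - target)          # dL/dx_T
    d_w = 0.0
    for t in range(steps - 1, -1, -1):     # backward pass
        local = 1.0 - xs[t + 1] ** 2       # derivative of tanh at recorded value
        d_w += d_x * local * xs[t]         # direct contribution of w at step t
        d_x = d_x * local * w              # propagate sensitivity to x_t
    return loss, d_w

# Sanity check against a central finite-difference approximation of dL/dw:
w, x0, tgt = 0.7, 0.5, 0.3
_, g = loss_and_gradient(w, x0, tgt)
eps = 1e-6
num = (loss_and_gradient(w + eps, x0, tgt)[0] -
       loss_and_gradient(w - eps, x0, tgt)[0]) / (2 * eps)
```

As the text notes, the backward pass costs about as much as the forward pass, so differentiating through the approximate inference roughly doubles its runtime.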
| { |
| "text": "ERMA is similar to back-propagation used in recurrent neural networks, which involve cyclic updates like those in belief propagation (Williams and Zipser, 1989) . It considers an \"unrolled\" version of the forward pass, in which \"snapshots\" of a variable at times t and t + 1 are treated as distinct variables, with one perhaps influencing the other. The forward pass computes \u2113(\u03b4 \u03b8 (x i ), y i ) by performing approximate inference, then decoding, then evaluation. These steps convert (x i , \u03b8) \u2192 marginals \u2192 decision \u2192 loss. The backward pass rewinds the entire computation, differentiating each phase in turn. The total time required by this algorithm is roughly twice the time of the forward pass, so its complexity is comparable to approximate inference.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 160, |
| "text": "(Williams and Zipser, 1989)", |
| "ref_id": "BIBREF42" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In this paper, we do not advocate any particular test-time inference or decoding procedures. It is reasonable to experiment with several choices that may produce faster or more accurate systems. We simply recommend doing ERMA training to match each selected test-time condition. Stoyanov et al. (2011) specifically showed how to train a system that will use sum-product BP for inference at test time (unlike margin-based methods). This may be advantageous for some tasks because it marginalizes over latent variables. However, it is popular and sometimes faster to do 1-best decoding, so we also include experiments where the test-time system returns a 1-best value of y (or an approximation to this if the CRF is loopy), based on max-product BP inference. Although 1-best systems are not differentiable functions, we can approach their behavior during ERM training by annealing the training objective (Smith and Eisner, 2006) . In the annealed case we evaluate (4) and its gradient under sum-product BP, except that we perform inference under p (\u03b8/T ) instead of p \u03b8 .", |
| "cite_spans": [ |
| { |
| "start": 878, |
| "end": 902, |
| "text": "(Smith and Eisner, 2006)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We gradually reduce the temperature T \u2208 R from 1 to 0 as training proceeds, which turns sum-product inference into max-product by moving all the probability mass toward the highest-scoring assignment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Minimum-Risk Training", |
| "sec_num": "3.2" |
| }, |
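The annealing step above can be illustrated with a plain softmax: dividing scores by a temperature T before exponentiating leaves the distribution unchanged at T = 1 and concentrates all mass on the highest-scoring assignment as T approaches 0, which is how sum-product behavior is pushed toward max-product. A small sketch (invented scores, not from the paper):

```python
import math

def softmax_with_temperature(scores, T):
    """Distribution proportional to exp(score / T). T = 1 recovers the
    model's own distribution; T -> 0 approaches a point mass on the
    argmax (the max-product / 1-best limit)."""
    m = max(s / T for s in scores)                 # subtract max for stability
    exps = [math.exp(s / T - m) for s in scores]
    Z = sum(exps)
    return [e / Z for e in exps]

scores = [1.0, 2.0, 0.5]
for T in (1.0, 0.5, 0.1):
    p = softmax_with_temperature(scores, T)
# at T = 0.1 almost all mass sits on the argmax (index 1)
```

In ERMA's annealed training the same idea is applied to the whole CRF by running inference under p_(theta/T) while T is lowered toward 0.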
| { |
| "text": "This section describes three NLP problems that can be naturally modeled with approximate CRFs. The first problem, modeling congressional votes, has not been previously modeled with a CRF. We show that by switching to the principled CRF framework we can learn models that are much more accurate when evaluated on test data, though using the same (or less expressive) features as previous work. The other two problems, information extraction from semistructured text and collective multi-label classification, have been modeled with loopy CRFs before. For all three models, we show that ERMA training results in better test set performance. 3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Natural Language with CRFs", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The Congressional Vote (ConVote) corpus was created by Thomas et al. (2006) to study whether votes of U.S. congressional representatives can be predicted from the speeches they gave when debating a bill. The corpus consists of transcripts of congressional floor debates split into speech segments. Each speech segment is labeled with the representative who is speaking and the recorded vote of that representative on the bill. We aim to predict a high percentage of the recorded votes correctly.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 75, |
| "text": "Thomas et al. (2006)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Speakers often reference one another (e.g., \"I thank the gentleman from Utah\"), to indicate agreement or disagreement. The ConVote corpus manually annotates each phrase such as \"the gentleman from Utah\" with the representative that it denotes. Thomas et al. (2006) show that classification using the agreement/disagreement information in the local context of such references, together with the rest of the language in the speeches, can lead to significant improvement over using either of these two sources of information in isolation. The original approach of Thomas et al. (2006) is based on training two Support Vector Machine (SVM) classifiers: one for classifying speeches as supporting/opposing the legislation and another for classifying references as agreement/disagreement. Both classifiers rely on bag-of-word (unigram) features of the document and the context surrounding the link, respectively. The scores produced by the two SVMs are used to weight a global graph whose vertices are the representatives; then the min-cut algorithm is applied to partition the vertices into \"yea\" and \"nay\" voters.", |
| "cite_spans": [ |
| { |
| "start": 244, |
| "end": 264, |
| "text": "Thomas et al. (2006)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 561, |
| "end": 581, |
| "text": "Thomas et al. (2006)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "While the approach of Thomas et al. (2006) leads to significant improvement over using the first SVM alone, it does not admit a probabilistic interpretation and the two classifiers are not trained jointly. We also remark that the min-cut technique would not generalize beyond binary random variables (yea/nay).", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 42, |
| "text": "Thomas et al. (2006)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We observe that congressional votes together with references between speakers can be naturally modeled with a CRF. Figure 1 depicts the CRF constructed for one of the debates in the development part of the ConVote corpus. It contains a random variable for each representative's vote. In addition, each speech is an observed input random variable: it is connected by a factor to its speaker's vote and encourages it to be \"yea\" or \"nay\" according to features of the text of the speech. Finally, each reference in each speech is an observed input random variable connected by a factor to two votes-those of the speaker and the referent-which it encourages to agree or disagree according to features of the text surrounding the reference. Just as in (Thomas et al., 2006) , the score of a global assignment to all votes is defined by considering both kinds of factors. However, unlike min-cut, CRF inference finds a probability distribution over assignments, not just a single best assignment. This fact allows us to train the two kinds of factors jointly (on the set of training debates where the votes are known) to predict the correct votes accurately (as defined by accuracy).", |
| "cite_spans": [ |
| { |
| "start": 747, |
| "end": 768, |
| "text": "(Thomas et al., 2006)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 115, |
| "end": 123, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "As Figure 1 shows, the reference factors introduce arbitrary loops, making exact inference intractable and thus motivating ERMA. Our experiments described in section 5.2 show that switching to a CRF model (keeping the same features) leads to a sizable improvement over the previous state of the art- Figure 1 : An example of a debate structure from the Con-Vote corpus. Each black square node represents a factor and is connected to the variables in that factor, shown as round nodes. Unshaded variables correspond to the representatives' votes and depict the output variables that we learn to jointly predict. Shaded variables correspond to the observed input data-the text of all speeches of a representative (in dark gray) or all local contexts of references between two representatives (in light gray).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 1", |
| "ref_id": null |
| }, |
| { |
| "start": 300, |
| "end": 308, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "and that ERMA further significantly improves performance, particularly when it properly trains with the same inference algorithm (max-product vs. sum-product) to be used at test time.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Baseline. As an exact baseline, we compare against the results of Thomas et al. (2006) . Their test-time Min-Cut algorithm is exact in this case: binary variables and a two-way classification.", |
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 86, |
| "text": "Thomas et al. (2006)", |
| "ref_id": "BIBREF39" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Modeling Congressional Votes", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We utilize the CMU seminar announcement corpus of Freitag (2000) consisting of emails with seminar announcements. The task is to extract four fields that describe each seminar: speaker, location, start time and end time. The corpus annotates the document with all mentions of these four fields. Sequential CRFs have been used successfully for semi-structured information extraction (Sutton and McCallum, 2005; Finkel et al., 2005) . However, they cannot model non-local dependencies in the data. For example, in the seminar announcements corpus, if \"Sutner\" is mentioned once in an email in a context that identifies him as a speaker, it is likely that other occurrences of \"Sutner\" in the same email should be marked as speaker. Hence Finkel et al. (2005) and Sutton and McCallum (2005) propose adding non-local edges to a sequential CRF to represent soft consistency constraints. The model, called a \"skip-chain CRF\" and shown in Figure 2 , contains a factor linking each pair of capitalized words with the same lexical form. The skip-chain CRF model exhibits better empirical performance than its sequential counterpart (Sutton and McCallum, 2005; Finkel et al., 2005) .", |
| "cite_spans": [ |
| { |
| "start": 50, |
| "end": 64, |
| "text": "Freitag (2000)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 382, |
| "end": 409, |
| "text": "(Sutton and McCallum, 2005;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 410, |
| "end": 430, |
| "text": "Finkel et al., 2005)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 736, |
| "end": 756, |
| "text": "Finkel et al. (2005)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 761, |
| "end": 787, |
| "text": "Sutton and McCallum (2005)", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1123, |
| "end": 1150, |
| "text": "(Sutton and McCallum, 2005;", |
| "ref_id": "BIBREF35" |
| }, |
| { |
| "start": 1151, |
| "end": 1171, |
| "text": "Finkel et al., 2005)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 932, |
| "end": 940, |
| "text": "Figure 2", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Information Extraction from Semi-Structured Text", |
| "sec_num": "4.2" |
| }, |
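The skip-chain construction just described can be made concrete with a short sketch. This is our own illustration, not code from Sutton and McCallum (2005) or Finkel et al. (2005); the function name and the simple capitalization test are assumptions:

```python
def skip_chain_pairs(tokens):
    """Return index pairs (i, j), i < j, of capitalized tokens with the
    same lexical form -- the token pairs that a skip-chain CRF would
    connect with a soft label-consistency factor."""
    pairs = []
    for i in range(len(tokens)):
        if not tokens[i][:1].isupper():
            continue  # only capitalized words get skip links
        for j in range(i + 1, len(tokens)):
            if tokens[j] == tokens[i]:
                pairs.append((i, j))
    return pairs
```

On the seminar corpus this links repeated capitalized mentions such as "Sutner", so evidence that one mention is a speaker can propagate to the others through the skip factor.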
| { |
| "text": "The non-local skip links make exact inference intractable. To train the full model, Finkel et al. (2005) estimate the parameters of a sequential CRF and then manually select values for the weights of the non-local edges. At test time, they use Gibbs sampling to perform inference. Sutton and McCallum (2005) use max-product loopy belief propagation for test-time inference, and compare a training procedure that uses a piecewise approximation of the partition function against using sum-product loopy belief propagation to compute output variable marginals. They find that the two training regimens perform similarly on the overall task. All of these training procedures try to approximately maximize conditional likelihood, whereas we will aim to minimize the empirical loss of the approximate inference and decoding procedures.", |
| "cite_spans": [ |
| { |
| "start": 84, |
| "end": 104, |
| "text": "Finkel et al. (2005)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 281, |
| "end": 307, |
| "text": "Sutton and McCallum (2005)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Information Extraction from Semi-Structured Text", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Baseline. As an exact (non-loopy) baseline, we train a model without the skip chains. We give two baseline numbers in Table 1, for training the exact CRF with MLE and with ERM. The ERM setting resulted in a statistically significant improvement even in the exact case, thanks to the use of the loss function at training time.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 118, |
| "end": 125, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Information Extraction from Semi-Structured Text", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Multi-label classification is the problem of assigning multiple labels to a document. For example, a news article can be about both \"Libya\" and \"civil war.\" The most straightforward approach to multilabel classification employs a binary classifier for each class separately. However, previous work has shown that incorporating information about label dependencies can lead to improvement in performance (Elisseeff and Weston, 2001; Ghamrawi and McCallum, 2005; Finley and Joachims, 2008) .", |
| "cite_spans": [ |
| { |
| "start": 403, |
| "end": 431, |
| "text": "(Elisseeff and Weston, 2001;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 432, |
| "end": 460, |
| "text": "Ghamrawi and McCallum, 2005;", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 461, |
| "end": 487, |
| "text": "Finley and Joachims, 2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Label Classification", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For this task we follow Ghamrawi and McCallum (2005) and Finley and Joachims (2008) and model the label interactions by constructing a fully connected CRF between the output labels. That is, for every document, we construct a CRF that contains a binary random variable for each label (indicating that the corresponding label is on/off for the document) and one binary edge for every unique pair of labels. This architecture can represent dependencies between labels, but leads to a setting in which the output variables form one massive clique. The resulting intractability of inference (and decoding) motivates the use of ERMA training.", |
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 52, |
| "text": "Ghamrawi and McCallum (2005)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 57, |
| "end": 83, |
| "text": "Finley and Joachims (2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Label Classification", |
| "sec_num": "4.3" |
| }, |
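A minimal sketch of the fully connected label CRF just described. The helper names and the score convention (unnormalized log-scores over binary labels) are our assumptions, not the papers' code:

```python
import itertools

def label_pairs(num_labels):
    """All unique unordered label pairs -- one pairwise factor each,
    forming a fully connected CRF over the binary label variables."""
    return list(itertools.combinations(range(num_labels), 2))

def assignment_score(y, unary, pairwise):
    """Unnormalized log-score of a binary label assignment y, given
    unary scores unary[i][y_i] and pairwise scores
    pairwise[(i, j)][y_i][y_j] for every label pair (i, j)."""
    s = sum(unary[i][yi] for i, yi in enumerate(y))
    for (i, j) in label_pairs(len(y)):
        s += pairwise[(i, j)][y[i]][y[j]]
    return s
```

With L labels there are L(L-1)/2 pairwise factors, so the output variables form one clique, which is what makes exact inference intractable here.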
| { |
| "text": "Baseline. We train a model without any of the pairwise edges (i.e., a separate logistic regression model for each class). We report the single best baseline number, since MLE and ERM training resulted in statistically indistinguishable results.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Multi-Label Classification", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For all experiments we split the data into train/development/test sets using the standard splits when available. We tune optimization algorithm parameters (initial learning rate, batch size and metaparameters \u03bb and \u00b5 for stochastic meta descent) on the training set based on training objective convergence rates. We tune the regularization parameter \u03b2 (below) on development data when available, otherwise we use a default value of 0.1-performance was generally robust for small changes in the value of \u03b2. All statistical significance testing is performed using paired permutation tests (Good, 2000) .", |
| "cite_spans": [ |
| { |
| "start": 587, |
| "end": 599, |
| "text": "(Good, 2000)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Methodology", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Gradient-based Optimization. Gradient information from the back-propagation procedure can be used in a local optimization method to minimize empirical loss. In this paper we use stochastic meta descent (SMD) (Schraudolph, 1999) . SMD is a second-order method that requires vector-Hessian products. To compute those, we do not need to maintain the full Hessian matrix. Instead, we apply more automatic differentiation magic-this time in the forward mode. Computing the vector-Hessian product and utilizing it in SMD does not add to the asymptotic runtime; it requires only about twice as many arithmetic operations, and in our experience it leads to much faster convergence of the learner. See for details.", |
| "cite_spans": [ |
| { |
| "start": 208, |
| "end": 227, |
| "text": "(Schraudolph, 1999)", |
| "ref_id": "BIBREF29" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Methodology", |
| "sec_num": "5.1" |
| }, |
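The key point above is that a Hessian-vector product can be computed without ever forming the Hessian, at a small constant multiple of one gradient evaluation. The paper does this with forward-mode automatic differentiation; the sketch below substitutes a central finite-difference approximation, which shares that cost property but is our own stand-in, not the paper's method:

```python
def hessian_vector_product(grad_fn, theta, v, eps=1e-5):
    """Approximate H v, where H is the Hessian of the objective whose
    gradient is grad_fn, via a central finite difference of gradients:
    H v ~ (grad(theta + eps*v) - grad(theta - eps*v)) / (2*eps).
    Cost: two gradient evaluations, no Hessian matrix."""
    g_plus = grad_fn([t + eps * vi for t, vi in zip(theta, v)])
    g_minus = grad_fn([t - eps * vi for t, vi in zip(theta, v)])
    return [(gp - gm) / (2 * eps) for gp, gm in zip(g_plus, g_minus)]
```

SMD uses such products to adapt per-parameter learning rates; the forward-mode version the paper uses is exact rather than approximate, but has the same asymptotic cost.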
| { |
| "text": "Since the empirical risk objective could overfit the training data, we add an L2 regularizer \u03b2 \u2211_j \u03b8_j^2 that prefers parameter values close to 0. This improves generalization, like the margin constraints in margin-based methods.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Methodology", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Training Procedure Stoyanov et al. (2011) observed that the minimum-risk objective tends to be highly non-convex in practice. The usual approximate log likelihood training objective appeared to be smoother over the parameter space, but exhibited global maxima at parameter values that were relatively good, but sub-optimal for other loss functions. Mean-squared error (MSE) also gave a smoother objective than other loss functions. These observations motivated the use of a continuation method. They optimized approximate log likelihood for a few iterations to get to a good part of the parameter space, then switched to using the hybrid loss function \u03bb L(y, y') + (1 \u2212 \u03bb) MSE(y, y'), where L is the task-specific loss. The coefficient \u03bb changed gradually from 0 to 1 during training, morphing the objective from the smoother loss to the desired bumpy test loss. We follow the same procedure.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 41, |
| "text": "Procedure Stoyanov et al. (2011)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Learning Methodology", |
| "sec_num": "5.1" |
| }, |
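The continuation scheme can be sketched in a few lines. The hybrid combination is from Stoyanov et al. (2011); the function names and the linear annealing schedule are our illustrative choices (the paper only states that the coefficient moves gradually from 0 to 1):

```python
def hybrid_loss(task_loss, mse, lam):
    """Continuation objective: lam * task_loss + (1 - lam) * MSE.
    lam = 0 gives the smooth MSE surrogate; lam = 1 gives the
    bumpy task loss actually used at test time."""
    return lam * task_loss + (1.0 - lam) * mse

def lam_schedule(step, num_steps):
    """One simple linear annealing schedule (an assumption of ours)."""
    return min(1.0, step / float(num_steps))
```

Early in training the smooth MSE term dominates, steering the optimizer into a good region of parameter space before the non-convex task loss takes over.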
| { |
| "text": "Experiments in this paper use two evaluation metrics: percentage accuracy and F-measure. For both of these losses we decode by selecting the most probable value under the marginal distribution of each random variable. This is an exact MBR decode for accuracy but an approximate one for the F-measure; our ERMA training will try to compensate for this approximate decoder. This decoding procedure is not differentiable due to the use of the argmax function. To make the decoder differentiable, we replace argmax with a stochastic (softmax) version during training, averaging loss over all possible values v in proportion to their exponentiated probability p(y_i = v | x)^{1/T_decode}. This decoder loses smoothness and approaches an argmax decoder as T_decode decreases toward 0. For simplicity, our experiments just use a single fixed value of 0.1 for T_decode. Annealing the decoder slowly did not lead to significant differences in early experiments on development data. Table 1 lists results of our evaluation. For all three of our problems, using approximate CRFs results in statistically significant improvement over the exact baselines, for any of the training procedures. But among the training procedures for approximate CRFs, our ERMA procedure-minimizing empirical risk with the training setting matched to the test setting-improves over the two baselines, namely MLE and softmax-margin. MLE and softmax-margin training were statistically indistinguishable in our experiments, with the exception of semi-structured IE. ERMA's improvements over them are statistically significant at the p < .05 level for the Congressional Vote and Semi-Structured IE problems and at the p < .1 level for the Multi-label classification problem (comparing each matched min-risk setting shown in a gray cell in Table 1 vs. MLE).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 973, |
| "end": 980, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 1801, |
| "end": 1808, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Learning Methodology", |
| "sec_num": "5.1" |
| }, |
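The temperature-controlled softmax decoder described above can be sketched as follows; this is our own minimal implementation (the function name is an assumption), operating on per-variable marginal distributions:

```python
def soft_decode(marginals, T=0.1):
    """Differentiable surrogate for argmax decoding: renormalize each
    variable's marginal p(y_i = v | x) in proportion to p ** (1 / T).
    As T -> 0 this sharpens toward a hard argmax; at T = 1 it leaves
    the marginals unchanged."""
    out = []
    for p in marginals:
        w = [pv ** (1.0 / T) for pv in p]
        z = sum(w)
        out.append([wv / z for wv in w])
    return out
```

With the paper's fixed T_decode = 0.1, a marginal of (0.6, 0.4) is sharpened close to a one-hot decision while remaining smooth in the underlying probabilities, which is what lets the loss gradient flow through the decoder.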
| { |
| "text": "When minimizing risk, we also observe that matching training and test-time procedures can result in improved performance in one of the three problems, Congressional Vote. For this problem, the matched training condition performs better than the alternatives (accuracy of 85.1 vs. 83.6 for the annealed max-product test setting and 84.5 vs. 80.1 for the sum-product setting), significant at p < .01. We observe the same effect for semi-structured IE when testing using max-product inference. For the remaining problem settings, the two minimum-risk training regimens perform comparably.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Finally, we hypothesized that sum-product inference may produce more accurate results in certain cases, as it allows more information about different parts of the model to be exchanged. However, our results show that for these three problems, sum-product and max-product inference yield statistically indistinguishable results. This may be because the particular CRFs we used included no latent variables (in contrast to the synthetic CRFs in ). As expected, we found that max-product BP converges in fewer iterations; sum-product BP required as many as twice the number of iterations for some of the runs.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Results in this paper represent a new state-of-the-art for the first two of the problems, Congressional Vote and Semi-structured IE. For Multi-Label classification, comparing against the SVM-based method of Finley and Joachims (2008) goes beyond the scope of this paper.", |
| "cite_spans": [ |
| { |
| "start": 207, |
| "end": 233, |
| "text": "Finley and Joachims (2008)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Minimum-risk training has been used in speech recognition (Bahl et al., 1988) , machine translation (Och, 2003) , and energy-based models generally (LeCun et al., 2006) . In graphical models, methods have been proposed to directly minimize loss in tree-shaped or linear chain MRFs and CRFs (Kakade et al., 2002; Suzuki et al., 2006; Gross et al., 2007) .", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 77, |
| "text": "(Bahl et al., 1988)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 100, |
| "end": 111, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF27" |
| }, |
| { |
| "start": 148, |
| "end": 168, |
| "text": "(LeCun et al., 2006)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 290, |
| "end": 311, |
| "text": "(Kakade et al., 2002;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 312, |
| "end": 332, |
| "text": "Suzuki et al., 2006;", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 333, |
| "end": 352, |
| "text": "Gross et al., 2007)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "All of the above focus on exact inference. Our approach can be seen as generalizing these methods to arbitrary graph structures, arbitrary loss functions and approximate inference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Lacoste-Julien et al. (2011) also consider the effects of approximate inference on loss. However, they assume the parameters are given, and modify the approximate inference algorithm at test time to consider the loss function.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Using empirical risk minimization to train graphical models was independently proposed by Domke (2010; 2011) . Just as in our own paper , Domke took a decision-theoretic stance and proposed ERM as a way of calibrating the graphical model for use with approximate inference, or for use with data that do not quite match the modeling assumptions. 4 In particular, (Domke, 2011) is similar to (Stoyanov et al., 2011) in using ERMA to train model parameters to be used with \"truncated\" inference that will be run for only a fixed number of iterations. For a common pixel-labeling benchmark in computer vision, Domke (2011) shows that this procedure improves training time by orders of magnitude, and slightly improves accuracy if the same number of message-passing iterations is used at test time. extend the ERMA objective function by adding an explicit runtime term. This allows them to tune model parameters and stopping criteria to learn models that obtain a given speed-accuracy tradeoff. Their approach improves this hybrid objective over a range of coefficients when compared to the traditional way of inducing sparse structures through L 1 regularization. Eisner and Daum\u00e9 III (2011) propose the same linear combination of speed and accuracy as a reinforcement learning objective. In general, our proposed ERMA setting resembles the reinforcement learning problem of trying to directly learn a policy that minimizes loss or maximizes reward.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 102, |
| "text": "Domke (2010;", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 103, |
| "end": 108, |
| "text": "2011)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 345, |
| "end": 346, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 362, |
| "end": 375, |
| "text": "(Domke, 2011)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 606, |
| "end": 618, |
| "text": "Domke (2011)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 1160, |
| "end": 1187, |
| "text": "Eisner and Daum\u00e9 III (2011)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We have been concerned with the fact that ERMA training objectives may suffer from local optima and non-differentiability. studied several such settings, graphed the difficult objective, and identified some practical workarounds that are used in the present paper. Although these methods have enabled us to get strong results by reducing the empirical risk, we suspect that ERMA training objectives will benefit from more sophisticated optimization methods. This is true even when the approximate inference itself is restricted to be something as simple as a convex minimization. While that simplified setting can make it slightly more convenient to compute the gradient of the inference result with respect to the parameters (Domke, 2008; Domke, 2012) , there is still no guarantee that following that gradient will minimize the empirical risk. Convex inference does not imply convex training.", |
| "cite_spans": [ |
| { |
| "start": 726, |
| "end": 739, |
| "text": "(Domke, 2008;", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 740, |
| "end": 752, |
| "text": "Domke, 2012)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Motivated by the recently proposed method of Stoyanov et al. (2011) for minimum-risk training of CRF-based systems, we revisited three NLP domains that can naturally be modeled with approximate CRF-based systems. These include applications that have not been modeled with CRFs before (the ConVote corpus), as well as applications that have been modeled with loopy CRFs trained to minimize the approximate log-likelihood (semi-structured information extraction and collective multi-label classification). We show that (i) the NLP models are improved by moving to richer CRFs that require approximate inference, and (ii) empirical performance is always significantly improved by training to reduce the loss that would be achieved by approximate inference, even compared to another state-of-the-art training method (softmax-margin) that also considers loss and uses approximate inference. The general software package that implements the algorithms in this paper is available at http://www.clsp.jhu.edu/~ves/software.html.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "7" |
| }, |
| { |
| "text": "\"Inference\" typically refers to computing posterior marginal or max-marginal probability distributions of output random variables, given some evidence. \"Decoding\" derives a single structured output from the results of inference.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "distinguished some of the Y variables as latent (i.e., unsupervised and ignored by the loss function). We omit this possibility, to simplify the notation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We also experimented with a fourth application, joint POS tagging and shallow parsing (Sutton et al., 2007), and observed the same overall trend (i.e., minimum-risk training improved performance significantly). We do not include those experiments, however, because we were unable to make our baseline results replicate (Sutton et al., 2007).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "However, he is less focused than we are on matching training conditions to test conditions (by including the decoder and task loss in the ERMA objective).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This material is based upon work supported by the National Science Foundation under Grant #0937060 to the Computing Research Association for the CIFellows Project.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A new algorithm for the estimation of hidden Markov model parameters", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bahl", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Souza", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1988, |
| "venue": "Proceedings of ICASSP", |
| "volume": "", |
| "issue": "", |
| "pages": "493--496", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Bahl, P. Brown, P. de Souza, and R. Mercer. 1988. A new algorithm for the estimation of hidden Markov model parameters. In Proceedings of ICASSP, pages 493-496.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Event discovery in social media feeds", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Benson", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Haghighi", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Barzilay", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "389--398", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Benson, A. Haghighi, and R. Barzilay. 2011. Event discovery in social media feeds. In Proceedings of ACL-HLT, pages 389-398.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Identifying sources of opinions with conditional random fields and extraction patterns", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Choi", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cardie", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Riloff", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Patwardhan", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of HLT/EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "355--362", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Choi, C. Cardie, E. Riloff, and S. Patwardhan. 2005. Identifying sources of opinions with conditional ran- dom fields and extraction patterns. In Proceedings of HLT/EMNLP, pages 355-362.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Learning convex inference of marginals", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Domke", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of UAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Domke. 2008. Learning convex inference of marginals. In Proceedings of UAI.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Implicit differentiation by perturbation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Domke", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "523--531", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Domke. 2010. Implicit differentiation by perturba- tion. In Advances in Neural Information Processing Systems, pages 523-531.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Parameter learning with truncated message-passing", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Domke", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Domke. 2011. Parameter learning with truncated message-passing. In Proceedings of the IEEE Con- ference on Computer Vision and Pattern Recognition (CVPR).", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Generic methods for optimizationbased modeling", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Domke", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of AISTATS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Domke. 2012. Generic methods for optimization- based modeling. In Proceedings of AISTATS.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Graphical models over multiple strings", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dreyer", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "101--110", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Dreyer and J. Eisner. 2009. Graphical models over multiple strings. In Proceedings of EMNLP, pages 101-110.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Learning speedaccuracy tradeoffs in nondeterministic inference algorithms", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| }, |
| { |
| "first": "Hal", |
| "middle": [], |
| "last": "Daum\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "Iii", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "COST: NIPS 2011 Workshop on Computational Trade-offs in Statistical Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Eisner and Hal Daum\u00e9 III. 2011. Learning speed- accuracy tradeoffs in nondeterministic inference al- gorithms. In COST: NIPS 2011 Workshop on Com- putational Trade-offs in Statistical Learning, Sierra Nevada, Spain, December.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Kernel methods for multi-labelled classification and categorical regression problems", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Elisseeff", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "681--687", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Elisseeff and J. Weston. 2001. Kernel methods for multi-labelled classification and categorical regression problems. In Advances in Neural Information Pro- cessing Systems, pages 681-687.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Incorporating non-local information into information extraction systems by Gibbs sampling", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "R" |
| ], |
| "last": "Finkel", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Grenager", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "363--370", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J.R. Finkel, T. Grenager, and C. Manning. 2005. Incorporating non-local information into information extraction systems by Gibbs sampling. In Proceedings of ACL, pages 363-370.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Training structural SVMs when exact inference is intractable", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Finley", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Joachims", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "304--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Finley and T. Joachims. 2008. Training structural SVMs when exact inference is intractable. In Proceedings of ICML, pages 304-311.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Machine learning for information extraction in informal domains", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Freitag", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Machine learning", |
| "volume": "39", |
| "issue": "2", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Freitag. 2000. Machine learning for information extraction in informal domains. Machine learning, 39(2).", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Collective multilabel classification", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ghamrawi", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of CIKM", |
| "volume": "", |
| "issue": "", |
| "pages": "195--200", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Ghamrawi and A. McCallum. 2005. Collective multi-label classification. In Proceedings of CIKM, pages 195-200.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Softmax-margin CRFs: Training log-linear models with cost functions", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "733--736", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Gimpel and N.A. Smith. 2010. Softmax-margin CRFs: Training log-linear models with cost functions. In Proceedings of ACL, pages 733-736.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Permutation Tests", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "I" |
| ], |
| "last": "Good", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. I. Good. 2000. Permutation Tests. Springer.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic Differentiation of Algorithms", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Griewank", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Corliss", |
| "suffix": "" |
| } |
| ], |
| "year": 1991, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Griewank and G. Corliss, editors. 1991. Automatic Differentiation of Algorithms. SIAM, Philadelphia.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Training conditional random fields for maximum labelwise accuracy", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Gross", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Russakovsky", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Do", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Batzoglou", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "19", |
| "issue": "", |
| "pages": "529", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Gross, O. Russakovsky, C. Do, and S. Batzoglou. 2007. Training conditional random fields for maximum labelwise accuracy. Advances in Neural Information Processing Systems, 19:529.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Knowledge base population: Successful approaches and challenges", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ji", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Grishman", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of ACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "1148--1158", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Ji and R. Grishman. 2011. Knowledge base population: Successful approaches and challenges. In Proceedings of ACL-HLT, pages 1148-1158.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "An alternate objective function for Markovian fields", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kakade", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [ |
| "W" |
| ], |
| "last": "Teh", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roweis", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "275--282", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Kakade, Y.W. Teh, and S. Roweis. 2002. An alternate objective function for Markovian fields. In Proceedings of ICML, pages 275-282.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Probabilistic Graphical Models: Principles and Techniques", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Friedman", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Koller and N. Friedman. 2009. Probabilistic Graphical Models: Principles and Techniques. The MIT Press.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Structured learning with approximate inference", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kulesza", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "785--792", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Kulesza and F. Pereira. 2008. Structured learning with approximate inference. In Advances in Neural Information Processing Systems, pages 785-792.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Approximate inference for the loss-calibrated Bayesian", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Lacoste-Julien", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Husz\u00e1r", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Ghahramani", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of AISTATS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Lacoste-Julien, F. Husz\u00e1r, and Z. Ghahramani. 2011. Approximate inference for the loss-calibrated Bayesian. In Proceedings of AISTATS.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Conditional random fields: Probabilistic models for segmenting and labeling sequence data", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "282--289", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proceedings of ICML, pages 282-289.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "A tutorial on energy-based learning", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Lecun", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Hadsell", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "F.-J", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Predicting Structured Data", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. LeCun, S. Chopra, R. Hadsell, M.A. Ranzato, and F.-J. Huang. 2006. A tutorial on energy-based learning. In G. Bakir, T. Hofmann, B. Sch\u00f6lkopf, A. Smola, and B. Taskar, editors, Predicting Structured Data. MIT Press.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "First- and second-order expectation semirings with applications to minimum-risk training on translation forests", |
| "authors": [ |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "40--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Z. Li and J. Eisner. 2009. First- and second-order expectation semirings with applications to minimum-risk training on translation forests. In Proceedings of EMNLP, pages 40-51.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Loopy belief propagation for approximate inference: An empirical study", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "P" |
| ], |
| "last": "Murphy", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Weiss", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "I" |
| ], |
| "last": "Jordan", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of UAI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. P. Murphy, Y. Weiss, and M. I. Jordan. 1999. Loopy belief propagation for approximate inference: An empirical study. In Proceedings of UAI.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Minimum error rate training in statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "160--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of ACL, pages 160-167.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Information extraction from research papers using conditional random fields", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Peng", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Information Processing & Management", |
| "volume": "42", |
| "issue": "4", |
| "pages": "963--979", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Peng and A. McCallum. 2006. Information extraction from research papers using conditional random fields. Information Processing & Management, 42(4):963-979.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Local gain adaptation in stochastic gradient descent", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [ |
| "N" |
| ], |
| "last": "Schraudolph", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proceedings of ANN", |
| "volume": "", |
| "issue": "", |
| "pages": "569--574", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N.N. Schraudolph. 1999. Local gain adaptation in stochastic gradient descent. In Proceedings of ANN, pages 569-574.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Shallow parsing with conditional random fields", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Sha", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Pereira", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of ACL/HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "134--141", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Sha and F. Pereira. 2003. Shallow parsing with conditional random fields. In Proceedings of ACL/HLT, pages 134-141.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Minimum risk annealing for training log-linear models", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the COLING/ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "787--794", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D.A. Smith and J. Eisner. 2006. Minimum risk annealing for training log-linear models. In Proceedings of the COLING/ACL, pages 787-794.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Dependency parsing by belief propagation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "145--156", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Smith and J. Eisner. 2008. Dependency parsing by belief propagation. In Proceedings of EMNLP, pages 145-156.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Learning cost-aware, loss-aware approximate inference policies for probabilistic graphical models", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "COST: NIPS 2011 Workshop on Computational Trade-offs in Statistical Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Stoyanov and J. Eisner. 2011. Learning cost-aware, loss-aware approximate inference policies for probabilistic graphical models. In COST: NIPS 2011 Workshop on Computational Trade-offs in Statistical Learning, Sierra Nevada, Spain, December.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure", |
| "authors": [ |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Stoyanov", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ropson", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Eisner", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of AISTATS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "V. Stoyanov, A. Ropson, and J. Eisner. 2011. Empirical risk minimization of graphical model parameters given approximate inference, decoding, and model structure. In Proceedings of AISTATS.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Piecewise training of undirected models", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Sutton", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of UAI", |
| "volume": "", |
| "issue": "", |
| "pages": "568--575", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Sutton and A. McCallum. 2005. Piecewise training of undirected models. In Proceedings of UAI, pages 568-575.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Sutton", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Mccallum", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Rohanimanesh", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "The Journal of Machine Learning Research", |
| "volume": "8", |
| "issue": "", |
| "pages": "693--723", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Sutton, A. McCallum, and K. Rohanimanesh. 2007. Dynamic conditional random fields: Factorized probabilistic models for labeling and segmenting sequence data. The Journal of Machine Learning Research, 8:693-723.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Training conditional random fields with multivariate evaluation measures", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Mcdermott", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of COLING/ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "217--224", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Suzuki, E. McDermott, and H. Isozaki. 2006. Training conditional random fields with multivariate evaluation measures. In Proceedings of COLING/ACL, pages 217-224.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "Max-margin Markov networks", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Guestrin", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Koller", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "25--32", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Taskar, C. Guestrin, and D. Koller. 2003. Max-margin Markov networks. Proceedings of NIPS, pages 25-32.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Get out the vote: Determining support or opposition from congressional floor-debate transcripts", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Thomas", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Pang", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Lee", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "327--335", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Thomas, B. Pang, and L. Lee. 2006. Get out the vote: Determining support or opposition from congressional floor-debate transcripts. In Proceedings of EMNLP, pages 327-335.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "Accelerated training of conditional random fields with stochastic gradient methods", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vishwanathan", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Schraudolph", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Schmidt", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Murphy", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "969--976", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Vishwanathan, N. Schraudolph, M. Schmidt, and K. Murphy. 2006. Accelerated training of conditional random fields with stochastic gradient methods. In Proceedings of ICML, pages 969-976.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "Estimating the \"wrong\" graphical model: Benefits in the computation-limited setting", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Wainwright", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Journal of Machine Learning Research", |
| "volume": "7", |
| "issue": "", |
| "pages": "1829--1859", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Wainwright. 2006. Estimating the \"wrong\" graphi- cal model: Benefits in the computation-limited setting. Journal of Machine Learning Research, 7:1829-1859, September.", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "A learning algorithm for continually running fully recurrent neural networks", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Williams", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Zipser", |
| "suffix": "" |
| } |
| ], |
| "year": 1989, |
| "venue": "Neural Computation", |
| "volume": "1", |
| "issue": "2", |
| "pages": "270--280", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Williams and D. Zipser. 1989. A learning algorithm for continually running fully recurrent neural networks. Neural Computation, 1(2):270-280.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Skip-chain CRF for semi-structured information extraction.", |
| "num": null |
| }, |
| "TABREF0": { |
| "text": "Table 1: Results. The top of the table lists the loss function used for each problem and the score for the best exact baseline. The bottom lists results for the full models used with loopy BP. Models are tested with either sum-product BP (sumprod) or max-product BP (maxprod) and trained with MLE or the minimum-risk criterion. Min-risk training runs are either annealed (maxprod), which matches max-product test, or not (sumprod), which matches sum-product test; grey cells in the table indicate matched training and test settings. In each column, we boldface the best result as well as all results that are not significantly worse (paired permutation test, p < 0.05).", |
| "html": null, |
| "type_str": "table", |
| "num": null, |
| "content": "<table><tr><td>Problem</td><td colspan=\"2\">Congressional Vote</td><td colspan=\"2\">Semi-structured IE</td><td colspan=\"2\">Multi-label class.</td></tr><tr><td>Loss function</td><td colspan=\"2\">Accuracy</td><td colspan=\"2\">Token-wise F-score</td><td colspan=\"2\">F-score</td></tr><tr><td>Non-loopy Baseline</td><td colspan=\"2\">71.2</td><td colspan=\"2\">86.2 (87.1)</td><td colspan=\"2\">81.6</td></tr><tr><td>TRAINING: / INFERENCE:</td><td>maxprod</td><td>sumprod</td><td>maxprod</td><td>sumprod</td><td>maxprod</td><td>sumprod</td></tr><tr><td>MLE</td><td>78.2</td><td>78.2</td><td>89.0</td><td>89.5</td><td>84.2</td><td>84.0</td></tr><tr><td>Softmax-margin</td><td>79.0</td><td>79.0</td><td>90.1</td><td>90.2</td><td>84.3</td><td>83.8</td></tr><tr><td>Min-risk (maxprod)</td><td>85.1</td><td>80.1</td><td>90.9</td><td>90.7</td><td>84.5</td><td>84.4</td></tr><tr><td>Min-risk (sumprod)</td><td>83.6</td><td>84.5</td><td>90.3</td><td>90.9</td><td>84.7</td><td>84.6</td></tr></table>" |
| } |
| } |
| } |
| } |