| { |
| "paper_id": "P13-1031", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:32:40.448431Z" |
| }, |
| "title": "Fast and Adaptive Online Training of Feature-Rich Translation Models", |
| "authors": [ |
| { |
| "first": "Spence", |
| "middle": [], |
| "last": "Green", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "spenceg@stanford.edu" |
| }, |
| { |
| "first": "Sida", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "sidaw@stanford.edu" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "danielcer@stanford.edu" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "Stanford University", |
| "location": {} |
| }, |
| "email": "manning@stanford.edu" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "We present a fast and scalable online method for tuning statistical machine translation models with large feature sets. The standard tuning algorithm, MERT, only scales to tens of features. Recent discriminative algorithms that accommodate sparse features have produced smaller than expected translation quality gains in large systems. Our method, which is based on stochastic gradient descent with an adaptive learning rate, scales to millions of features and tuning sets with tens of thousands of sentences, while still converging after only a few epochs. Large-scale experiments on Arabic-English and Chinese-English show that our method produces significant translation quality gains by exploiting sparse features. Equally important is our analysis, which suggests techniques for mitigating overfitting and domain mismatch, and applies to other recent discriminative methods for machine translation.",
| "pdf_parse": { |
| "paper_id": "P13-1031", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "We present a fast and scalable online method for tuning statistical machine translation models with large feature sets. The standard tuning algorithm, MERT, only scales to tens of features. Recent discriminative algorithms that accommodate sparse features have produced smaller than expected translation quality gains in large systems. Our method, which is based on stochastic gradient descent with an adaptive learning rate, scales to millions of features and tuning sets with tens of thousands of sentences, while still converging after only a few epochs. Large-scale experiments on Arabic-English and Chinese-English show that our method produces significant translation quality gains by exploiting sparse features. Equally important is our analysis, which suggests techniques for mitigating overfitting and domain mismatch, and applies to other recent discriminative methods for machine translation.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Sparse, overlapping features such as words and ngram contexts improve many NLP systems such as parsers and taggers. Adaptation of discriminative learning methods for these types of features to statistical machine translation (MT) systems, which have historically used idiosyncratic learning techniques for a few dense features, has been an active research area for the past half-decade. However, despite some research successes, feature-rich models are rarely used in annual MT evaluations. For example, among all submissions to the WMT and IWSLT 2012 shared tasks, just one participant tuned more than 30 features (Hasler et al., 2012a) . Slow uptake of these methods may be due to implementation complexities, or to practical difficulties of configuring them for specific translation tasks (Gimpel and Smith, 2012; Simianer et al., 2012, inter alia) .", |
| "cite_spans": [ |
| { |
| "start": 615, |
| "end": 637, |
| "text": "(Hasler et al., 2012a)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 792, |
| "end": 816, |
| "text": "(Gimpel and Smith, 2012;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 817, |
| "end": 851, |
| "text": "Simianer et al., 2012, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
"text": "We introduce a new method for training feature-rich MT systems that is effective yet comparatively easy to implement. The algorithm scales to millions of features and large tuning sets. It optimizes a logistic objective identical to that of PRO (Hopkins and May, 2011) with stochastic gradient descent, although other objectives are possible. The learning rate is set adaptively using AdaGrad (Duchi et al., 2011) , which is particularly effective for the mixture of dense and sparse features present in MT models. Finally, feature selection is implemented as efficient L_1 regularization in the forward-backward splitting (FOBOS) framework (Duchi and Singer, 2009) . Experiments show that our algorithm converges faster than batch alternatives.",
"cite_spans": [
{
"start": 245,
"end": 268,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF24"
},
{
"start": 393,
"end": 413,
"text": "(Duchi et al., 2011)",
"ref_id": "BIBREF12"
},
{
"start": 641,
"end": 665,
"text": "(Duchi and Singer, 2009)",
"ref_id": "BIBREF11"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "To learn good weights for the sparse features, most algorithms-including ours-benefit from more tuning data, and the natural source is the training bitext. However, the bitext presents two problems. First, it has a single reference, sometimes of lower quality than the multiple references in tuning sets from MT competitions. Second, large bitexts often comprise many text genres (Haddow and Koehn, 2012) , a virtue for classical dense MT models but a curse for high dimensional models: bitext tuning can lead to a significant domain adaptation problem when evaluating on standard test sets. Our analysis separates and quantifies these two issues.", |
| "cite_spans": [ |
| { |
| "start": 380, |
| "end": 404, |
| "text": "(Haddow and Koehn, 2012)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We conduct large-scale translation quality experiments on Arabic-English and Chinese-English. As baselines we use MERT (Och, 2003) , PRO, and the Moses implementation of k-best MIRA, which Cherry and Foster (2012) recently showed to work as well as online MIRA (Chiang, 2012) for feature-rich models. The first experiment uses standard tuning and test sets from the NIST OpenMT competitions. The second experiment uses tuning and test sets sampled from the large bitexts. The new method yields significant improvements in both experiments. Our code is included in the Phrasal (Cer et al., 2010) toolkit, which is freely available.", |
| "cite_spans": [ |
| { |
| "start": 119, |
| "end": 130, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 189, |
| "end": 213, |
| "text": "Cherry and Foster (2012)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 261, |
| "end": 275, |
| "text": "(Chiang, 2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 576, |
| "end": 594, |
| "text": "(Cer et al., 2010)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Machine translation is an unusual machine learning setting because multiple correct translations exist and decoding is comparatively expensive. When we have a large feature set and therefore want to tune on a large data set, batch methods are infeasible. Online methods can converge faster, and in practice they often find better solutions (Liang and Klein, 2009; Bottou and Bousquet, 2011, inter alia) .", |
| "cite_spans": [ |
| { |
| "start": 340, |
| "end": 363, |
| "text": "(Liang and Klein, 2009;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 364, |
| "end": 402, |
| "text": "Bottou and Bousquet, 2011, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive Online Algorithms", |
| "sec_num": "2" |
| }, |
| { |
| "text": "Recall that stochastic gradient descent (SGD), a fundamental online method, updates weights w according to", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive Online Algorithms", |
| "sec_num": "2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "w_t = w_{t-1} - \\eta \\nabla \\ell_t(w_{t-1})",
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Adaptive Online Algorithms", |
| "sec_num": "2" |
| }, |
| { |
"text": "with loss function \u2113_t(w) of the t-th example, (sub)gradient of the loss with respect to the parameters \u2207\u2113_t(w_{t-1}), and learning rate \u03b7. SGD is sensitive to the learning rate \u03b7, which is difficult to set in an MT system that mixes frequent \"dense\" features (like the language model) with sparse features (e.g., for translation rules). Furthermore, the same \u03b7 applies to every coordinate of the gradient, an undesirable property in MT where good sparse features may fire very infrequently. We would instead like to take larger steps for sparse features and smaller steps for dense features.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive Online Algorithms", |
| "sec_num": "2" |
| }, |
| { |
| "text": "AdaGrad is a method for setting an adaptive learning rate that comes with good theoretical guarantees. The theoretical improvement over SGD is most significant for high-dimensional, sparse features. AdaGrad makes the following update:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "w_t = w_{t-1} - \\eta \\Sigma_t^{1/2} \\nabla \\ell_t(w_{t-1}) \\quad (2) \\qquad \\Sigma_t^{-1} = \\Sigma_{t-1}^{-1} + \\nabla \\ell_t(w_{t-1}) \\nabla \\ell_t(w_{t-1})^{\\top} = \\sum_{i=1}^{t} \\nabla \\ell_i(w_{i-1}) \\nabla \\ell_i(w_{i-1})^{\\top}",
| "eq_num": "(3)" |
| } |
| ], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
"text": "A diagonal approximation to \u03a3 can be used for a high-dimensional vector w_t. In this case, AdaGrad is simple to implement and computationally cheap. Consider a single dimension j, and let scalars",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "v_t = w_{t,j}, \\; g_t = \\nabla_j \\ell_t(w_{t-1}), \\; G_t = \\sum_{i=1}^{t} g_i^2; then the update rule is v_t = v_{t-1} - \\eta G_t^{-1/2} g_t \\quad (4) \\qquad G_t = G_{t-1} + g_t^2",
| "eq_num": "(5)" |
| } |
| ], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Compared to SGD, we just need to store", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "G t = \u03a3 \u22121 t,jj", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "for each dimension j. 1 We specify the loss function for MT in section 3.1.", |
| "cite_spans": [ |
| { |
| "start": 22, |
| "end": 23, |
| "text": "1", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "AdaGrad", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "AdaGrad is related to two previous online learning methods for MT.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "MIRA Chiang et al. (2008) described an adaptation of MIRA (Crammer et al., 2006) to MT. MIRA makes the following update:",
"cite_spans": [
{
"start": 5,
"end": 25,
"text": "Chiang et al. (2008)",
"ref_id": "BIBREF6"
},
{
"start": 58,
"end": 80,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF9"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "w_t = \\arg\\min_w \\frac{1}{2\\eta} \\|w - w_{t-1}\\|_2^2 + \\ell_t(w) \\quad (6)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The first term expresses conservativity: the weight should change as little as possible based on a single example, ensuring that it is never beneficial to overshoot the minimum. The relationship to SGD can be seen by linearizing the loss function", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "\\ell_t(w) \\approx \\ell_t(w_{t-1}) + (w - w_{t-1})^{\\top} \\nabla \\ell_t(w_{t-1})",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "and taking the derivative of (6). The result is exactly (1).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "AROW Chiang (2012) adapted AROW (Crammer et al., 2009) to MT. AROW models the current weight as a Gaussian centered at w_{t-1} with covariance \u03a3_{t-1}, and does the following update upon seeing training example x_t:",
| "cite_spans": [ |
| { |
| "start": 5, |
| "end": 18, |
| "text": "Chiang (2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 32, |
| "end": 54, |
| "text": "(Crammer et al., 2009)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "w_t, \\Sigma_t = \\arg\\min_{w,\\Sigma} \\frac{1}{\\eta} D_{KL}(\\mathcal{N}(w, \\Sigma) \\,\\|\\, \\mathcal{N}(w_{t-1}, \\Sigma_{t-1})) + \\ell_t(w) + \\frac{1}{2\\eta} x_t^{\\top} \\Sigma x_t",
| "eq_num": "(7)" |
| } |
| ], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "The KL-divergence term expresses a more general, directionally sensitive conservativity. Ignoring the third term, the \u03a3 that minimizes the KL is actually \u03a3_{t-1}. As a result, the first two terms of (7) generalize MIRA so that we may be more conservative in some directions specified by \u03a3. To see this, we can write out the KL-divergence between two Gaussians in closed form, and observe that the terms involving w do not interact with the terms involving \u03a3:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "w_t = \\arg\\min_w \\frac{1}{2\\eta} (w - w_{t-1})^{\\top} \\Sigma_{t-1}^{-1} (w - w_{t-1}) + \\ell_t(w) \\quad (8) \\qquad \\Sigma_t = \\arg\\min_{\\Sigma} \\frac{1}{2\\eta} \\log \\frac{|\\Sigma_{t-1}|}{|\\Sigma|} + \\frac{1}{2\\eta} \\mathrm{Tr}(\\Sigma_{t-1}^{-1} \\Sigma) + \\frac{1}{2\\eta} x_t^{\\top} \\Sigma x_t",
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "The third term in (7), called the confidence term, gives us adaptivity, the notion that we should have smaller variance in the direction v as more data x_t is seen in direction v. For example, if \u03a3 is diagonal and the x_t are indicator features, the confidence term then says that the weight for a rarer feature should have more variance and vice-versa. Recall that for generalized linear models \u2207\u2113_t(w) \u221d x_t; if we substitute x_t = \u03b1_t \u2207\u2113_t(w) into (9), differentiate and solve, we get:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "\\Sigma_t^{-1} = \\Sigma_{t-1}^{-1} + x_t x_t^{\\top} = \\Sigma_0^{-1} + \\sum_{i=1}^{t} \\alpha_i^2 \\nabla \\ell_i(w_{i-1}) \\nabla \\ell_i(w_{i-1})^{\\top}",
| "eq_num": "(10)" |
| } |
| ], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "The precision \u03a3_t^{-1} generally grows as more data is seen. Frequently updated features receive an especially high precision, whereas the model maintains large variance for rarely seen features.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "If we substitute (10) into (8), linearize the loss \u2113_t(w) as before, and solve, then we have the linearized AROW update",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "w_t = w_{t-1} - \\eta \\Sigma_t \\nabla \\ell_t(w_{t-1})",
| "eq_num": "(11)" |
| } |
| ], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "which is also an adaptive update with per-coordinate learning rates specified by \u03a3_t (as opposed to \u03a3_t^{1/2} in AdaGrad).",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Prior Online Algorithms in MT", |
| "sec_num": "2.2" |
| }, |
| { |
"text": "Compare (3) to (10) and observe that if we set \u03a3_0^{-1} = 0 and \u03b1_t = 1, then the only difference between the AROW update (11) and the AdaGrad update (2) is a square root. Under a constant gradient, AROW decays the step size more aggressively (1/t) than AdaGrad (1/\u221at), and it is sensitive to the specification of \u03a3_0^{-1}. Informally, SGD can be improved in two directions. First, it can be made more conservative using MIRA, so that updates do not overshoot. Second, it can be made adaptive using AdaGrad, whose decaying step size is more robust and whose per-coordinate step sizes allow better weight updates for features differing in sparsity and scale. Finally, AROW combines both adaptivity and conservativity. For MT, adaptivity allows us to deal with mixed dense/sparse features effectively without specific normalization.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
"text": "Why do we choose AdaGrad over AROW? MIRA/AROW requires selecting the loss function \u2113(w) so that w_t can be solved in closed form, by a quadratic program (QP), or in some other way that is better than linearizing. This usually means choosing a hinge loss. On the other hand, AdaGrad/linearized AROW only requires that the gradient of the loss function can be computed efficiently.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Algorithm 1 Adaptive online tuning for MT.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Require: Tuning set {fi, e 1:k i }i=1:M 1: Set w0 = 0 2: Set t = 1 3: repeat 4: for i in 1 . . . M in random order do 5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Decode n-best list Ni for fi 6:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Sample pairs {dj,+, dj,\u2212}j=1:s from Ni 7:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Compute Dt = {\u03c6(dj,+) \u2212 \u03c6(dj,\u2212)}j=1:s 8: Set gt = \u2207 (Dt; wt\u22121)} 9: Set \u03a3 \u22121 t = \u03a3 \u22121 t\u22121 + gtg t Eq.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "(3) 10:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Update wt = wt\u22121 \u2212 \u03b7\u03a3 1/2 t gt Eq. (2) 11:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Regularize wt Eq. (15) 12:Set t = t + 1 13: end for 14: until convergence Linearized AROW, however, is less robust than Ada-Grad empirically 2 and lacks known theoretical guarantees. Finally, by using AdaGrad, we separate adaptivity from conservativity. Our experiments suggest that adaptivity is actually more important.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Comparing AdaGrad, MIRA, AROW", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Algorithm 1 shows the full algorithm introduced in this paper. AdaGrad (lines 9-10) is a crucial piece, but the loss function, regularization technique, and parallelization strategy described in this section are equally important in the MT setting.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive Online MT", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Algorithm 1 lines 5-8 describe the gradient computation. We cast MT tuning as pairwise ranking (Herbrich et al., 1999, inter alia) , which Hopkins and May (2011) applied to MT. The pairwise approach results in simple, convex loss functions suitable for online learning. The idea is that for any two derivations, the ranking predicted by the model should be consistent with the ranking predicted by a gold sentence-level metric G like BLEU+1 (Lin and Och, 2004) .", |
| "cite_spans": [ |
| { |
| "start": 95, |
| "end": 130, |
| "text": "(Herbrich et al., 1999, inter alia)", |
| "ref_id": null |
| }, |
| { |
| "start": 441, |
| "end": 460, |
| "text": "(Lin and Och, 2004)", |
| "ref_id": "BIBREF32" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Consider a single source sentence f with associated references e 1:k . Let d be a derivation in an n-best list of f that has the target e = e(d) and the feature map", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u03c6(d). Let M (d) = w \u2022 \u03c6(d) be the model score. For any derivation d + that is better than d \u2212 under G, we desire pairwise agreement such that G e(d + ), e 1:k > G e(d \u2212 ), e 1:k \u21d0\u21d2 M (d + ) > M (d \u2212 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Ensuring pairwise agreement is the same as ensuring w", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "\u2022 [\u03c6(d + ) \u2212 \u03c6(d \u2212 )] > 0.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "For learning, we need to select derivation pairs", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(d + , d \u2212 ) to compute difference vectors x + = \u03c6(d + ) \u2212 \u03c6(d \u2212 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "Then we have a 1-class separation problem trying to ensure w \u00b7 x_+ > 0. The derivation pairs are sampled with the algorithm of Hopkins and May (2011). We compute difference vectors D_t = {x_+^{1:s}} (Algorithm 1 line 7) from s pairs (d_+, d_-) for source sentence f_t. We use the familiar logistic loss:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "\\ell_t(w) = \\ell(D_t, w) = -\\sum_{x_+ \\in D_t} \\log \\frac{1}{1 + e^{-w \\cdot x_+}}",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "(12) Choosing the hinge loss instead of the logistic loss results in the 1-class SVM problem. The 1class separation problem is equivalent to the binary classification problem with x", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "+ = \u03c6(d + ) \u2212 \u03c6(d \u2212 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "as positive data and x \u2212 = \u2212x + as negative data, which may be plugged into an existing logistic regression solver.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
"text": "We find that Algorithm 1 works best with mini-batches instead of single examples. In line 4 we simply partition the tuning set so that i becomes a mini-batch of examples.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pairwise Logistic Loss Function", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Algorithm 1 lines 9-11 compute the adaptive learning rate, update the weights, and apply regularization. Section 2.1 explained the AdaGrad learning rate computation. To update and regularize the weights we apply the Forward-Backward Splitting (FOBOS) (Duchi and Singer, 2009) framework, which separates the two operations. The two-step FOBOS update is", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 275, |
| "text": "(Duchi and Singer, 2009)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "w_{t-\\frac{1}{2}} = w_{t-1} - \\eta_{t-1} \\nabla \\ell_{t-1}(w_{t-1}) \\quad (13) \\qquad w_t = \\arg\\min_w \\frac{1}{2} \\|w - w_{t-\\frac{1}{2}}\\|_2^2 + \\eta_{t-1} r(w)",
| "eq_num": "(14)" |
| } |
| ], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "where (13) is just an unregularized gradient descent step and (14) balances the regularization term r(w) with staying close to the gradient step. Equation (14) permits efficient L 1 regularization, which is well-suited for selecting good features from exponentially many irrelevant features (Ng, 2004) . It is well-known that feature selection is very important for feature-rich MT. For example, simple indicator features like lexicalized re-ordering classes are potentially useful yet bloat the the feature set and, in the worst case, can negatively impact Algorithm 2 \"Stale gradient\" parallelization method for Algorithm 1.", |
| "cite_spans": [ |
| { |
| "start": 291, |
| "end": 301, |
| "text": "(Ng, 2004)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Require: Tuning set {fi, e 1:k i }i=1:M 1: Initialize threadpool p1, . . . , pj 2: Set t = 1 3: repeat 4: for i in 1 . . . M in random order do 5:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Wait until any thread p is idle 6:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Send (fi, e 1:k i , t) to p Alg. 1 lines 5-8 7:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "while \u2203 p done with gradient g t do t \u2264 t 8:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Update wt = wt\u22121 \u2212 \u03b7g t Alg. 1 lines 9-11 9:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Set t = t + 1 10: end while 11: end for 12: until convergence search. Some of the features generalize, but many do not. This was well understood in previous work, so heuristic filtering was usually applied (Chiang et al., 2009, inter alia) . In contrast, we need only select an appropriate regularization strength \u03bb.", |
| "cite_spans": [ |
| { |
| "start": 206, |
| "end": 239, |
| "text": "(Chiang et al., 2009, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "Specifically, when r(w) = \u03bb \u2016w\u2016_1, the closed-form solution to (14) is",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "w_t = \\mathrm{sign}(w_{t-\\frac{1}{2}}) \\left[ |w_{t-\\frac{1}{2}}| - \\eta_{t-1} \\lambda \\right]_+ \\quad (15), where [x]_+ = \\max(x, 0)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "is the clipping function that in this case sets a weight to 0 when it falls below the threshold \u03b7_{t-1}\u03bb. It is straightforward to adapt this to AdaGrad with diagonal \u03a3 by setting each dimension \u03b7_{t-1,j} = \u03b7 \u03a3_{t,jj}^{1/2} and by taking element-wise products.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
| { |
"text": "We find that \u2207\u2113_{t-1}(w_{t-1}) only involves several hundred active features for the current example (or mini-batch), whereas naively following the FOBOS framework requires updating millions of weights. A practical benefit of FOBOS is that we can do lazy updates on just the active dimensions without any approximations.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Updating and Regularization", |
| "sec_num": "3.2" |
| }, |
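The lazy-update idea can be made concrete: for a feature untouched since step `last[j]`, the gradient was zero at every skipped step, so each step only shrank |w_j| by `rate * lam` (Eq. 15 with a zero gradient), and the k identical soft-thresholds collapse into a single one with threshold `k * rate * lam`. A hedged sketch, with a hypothetical function name and data layout:

```python
import math

def lazy_l1_update(w, sum_sq, last, t, active, eta=0.02, lam=0.1, eps=1e-8):
    """Apply deferred L1 shrinkage to the active features only.

    Between touches a feature's gradient is zero, so k skipped
    soft-threshold steps collapse into one with threshold k * rate * lam.
    A sketch of the lazy-update idea, not the authors' exact code.
    """
    for j in active:
        k = t - last.get(j, t)          # steps skipped since last touch
        if k > 0 and j in w:
            rate = eta / (math.sqrt(sum_sq.get(j, 0.0)) + eps)
            shrunk = abs(w[j]) - k * rate * lam
            w[j] = math.copysign(max(shrunk, 0.0), w[j])
            if w[j] == 0.0:
                del w[j]                # keep the weight vector sparse
        last[j] = t
    return w
```

Because the AdaGrad accumulator for an untouched feature does not change between touches, the per-step threshold is constant over the skipped interval, so this deferral is exact rather than an approximation.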
| { |
| "text": "Algorithm 1 is inherently sequential like standard online learning. This is undesirable in MT, where decoding is costly. We therefore parallelize the algorithm with the \"stale gradient\" method of Langford et al. (2009) (Algorithm 2). A fixed threadpool of workers computes gradients in parallel and sends them to a master thread, which updates a central weight vector. Crucially, the weight updates need not be applied in order, so synchronization is unnecessary; the workers only idle at the end of an epoch. The consequence is that the update in line 8 of Algorithm 2 is with respect to a gradient g_{t\u2032} with t\u2032 \u2264 t. Langford et al. (2009) gave convergence results for stale updating, but the bounds do not apply to our setting since we use L1 regularization. Nevertheless, Gimpel et al. (2010) applied this framework to other non-convex objectives and obtained good empirical results.", |
| "cite_spans": [ |
| { |
| "start": 195, |
| "end": 217, |
| "text": "Langford et al. (2009)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 613, |
| "end": 635, |
| "text": "Langford et al. (2009)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 771, |
| "end": 791, |
| "text": "Gimpel et al. (2010)", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parallelization", |
| "sec_num": "3.3" |
| }, |
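The stale-gradient scheme can be sketched with a threadpool and a queue. Here `grad_fn` is a hypothetical stand-in for decode-and-compute-gradient; this illustrates the idea from Langford et al. (2009), not the Phrasal code:

```python
import queue
import threading

def async_sgd(examples, grad_fn, w, eta=0.02, workers=4):
    """'Stale gradient' parallel SGD in the style of Langford et al. (2009).

    Workers compute gradients against a snapshot of the weights; the master
    applies them in arrival order, which may lag the snapshot they used.
    Illustrative sketch only (grad_fn stands in for decode-and-gradient).
    """
    grads = queue.Queue()

    def worker(chunk):
        for x in chunk:
            # Snapshot w without locking: the resulting update may be
            # stale by the time the master applies it, by design.
            grads.put(grad_fn(dict(w), x))

    chunks = [examples[i::workers] for i in range(workers)]
    threads = [threading.Thread(target=worker, args=(c,)) for c in chunks]
    for th in threads:
        th.start()
    for _ in examples:                 # one (possibly stale) update per example
        g = grads.get()
        for j, gj in g.items():
            w[j] = w.get(j, 0.0) - eta * gj
    for th in threads:
        th.join()
    return w
```

Only the master thread writes to `w`, so no ordering guarantees are needed between workers; this mirrors the paper's observation that workers only idle at epoch boundaries.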
| { |
| "text": "Our asynchronous, stochastic method has practical appeal for MT. During a tuning run, the online method decodes the tuning set under many more weight vectors than a MERT-style batch method. This characteristic may result in broader exploration of the search space, and make the learner more robust to local optima (Liang and Klein, 2009; Bottou and Bousquet, 2011, inter alia). The adaptive algorithm identifies appropriate learning rates for the mixture of dense and sparse features. Finally, large data structures such as the language model (LM) and phrase table exist in shared memory, obviating the need for remote queries.", |
| "cite_spans": [ |
| { |
| "start": 327, |
| "end": 350, |
| "text": "(Liang and Klein, 2009;", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 351, |
| "end": 389, |
| "text": "Bottou and Bousquet, 2011, inter alia)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Parallelization", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We built Arabic-English and Chinese-English MT systems with Phrasal (Cer et al., 2010), a phrase-based system based on alignment templates (Och and Ney, 2004). The corpora 3 in our experiments (Table 1) derive from several LDC sources from 2012 and earlier. We de-duplicated each bitext according to exact string match, and ensured that no overlap existed with the test sets. We produced alignments with the Berkeley aligner (Liang et al., 2006b) with standard settings and symmetrized via the grow-diag heuristic.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 86, |
| "text": "(Cer et al., 2010)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 139, |
| "end": 158, |
| "text": "(Och and Ney, 2004)", |
| "ref_id": "BIBREF38" |
| }, |
| { |
| "start": 426, |
| "end": 446, |
| "text": "(Liang et al., 2006b", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 194, |
| "end": 203, |
| "text": "(Table 1)", |
| "ref_id": "TABREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "For each language we used SRILM (Stolcke, 2002) to estimate 5-gram LMs with modified Kneser-Ney smoothing. We included the monolingual English data and the respective target bitexts.", |
| "cite_spans": [ |
| { |
| "start": 32, |
| "end": 47, |
| "text": "(Stolcke, 2002)", |
| "ref_id": "BIBREF43" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "The baseline \"dense\" model contains 19 features: the nine Moses baseline features, the hierarchical lexicalized re-ordering model of Galley and Manning (2008), the (log) count of each rule, and an indicator for unique rules.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Templates", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "To the dense features we add three high dimensional \"sparse\" feature sets. Discrimina- 3 We tokenized the English with packages from the Stanford Parser (Klein and Manning, 2003) according to the Penn Treebank standard (Marcus et al., 1993) , the Arabic with the Stanford Arabic segmenter (Green and DeNero, 2012) according to the Penn Arabic Treebank standard (Maamouri et al., 2008) , and the Chinese with the Stanford Chinese segmenter (Chang et al., 2008) according to the Penn Chinese Treebank standard (Xue et al., 2005) .", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 88, |
| "text": "3", |
| "ref_id": null |
| }, |
| { |
| "start": 153, |
| "end": 178, |
| "text": "(Klein and Manning, 2003)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 219, |
| "end": 240, |
| "text": "(Marcus et al., 1993)", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 289, |
| "end": 313, |
| "text": "(Green and DeNero, 2012)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 361, |
| "end": 384, |
| "text": "(Maamouri et al., 2008)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 439, |
| "end": 459, |
| "text": "(Chang et al., 2008)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 508, |
| "end": 526, |
| "text": "(Xue et al., 2005)", |
| "ref_id": "BIBREF48" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature Templates", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Ar-En: 6.6M sentences, 375M tokens, 990M tokens; Zh-En: 9.3M sentences, 538M tokens", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sentences Tokens Tokens", |
| "sec_num": null |
| }, |
| { |
| "text": "The primary baseline is the dense feature set tuned with MERT (Och, 2003). The Phrasal implementation uses the line search algorithm of Cer et al. (2008), uniform initialization, and 20 random starting points. 4 We tuned according to BLEU-4 (Papineni et al., 2002). We built high-dimensional baselines with two different algorithms. First, we tuned with batch PRO using the default settings in Phrasal (L2 regularization with \u03c3=0.1). Second, we ran the k-best batch MIRA (kb-MIRA) (Cherry and Foster, 2012) implementation in Moses. We also implemented an online version of MIRA, and in small-scale experiments found that the batch variant worked just as well. Cherry and Foster (2012) reported the same result, and their implementation is available in Moses. We ran their code with standard settings.", |
| "cite_spans": [ |
| { |
| "start": 62, |
| "end": 73, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF39" |
| }, |
| { |
| "start": 137, |
| "end": 154, |
| "text": "Cer et al. (2008)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 212, |
| "end": 213, |
| "text": "4", |
| "ref_id": null |
| }, |
| { |
| "start": 236, |
| "end": 266, |
| "text": "BLEU-4 (Papineni et al., 2002)", |
| "ref_id": null |
| }, |
| { |
| "start": 661, |
| "end": 685, |
| "text": "Cherry and Foster (2012)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tuning Algorithms", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Moses 5 also contains the discriminative phrase table implementation of Hasler et al. (2012b), which is identical to our Phrasal implementation. Moses and Phrasal accept the same phrase table and LM formats, so we kept those data structures in common. The two decoders also use the same multi-stack beam search (Och and Ney, 2004).", |
| "cite_spans": [ |
| { |
| "start": 72, |
| "end": 94, |
| "text": "(Hasler et al., 2012b)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 319, |
| "end": 338, |
| "text": "(Och and Ney, 2004)", |
| "ref_id": "BIBREF38" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tuning Algorithms", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "For our method, we used uniform initialization, 16 threads, and a mini-batch size of 20. We found that \u03b7=0.02 and \u03bb=0.1 worked well on development sets for both languages. To compute the gradients we sampled 15 derivation pairs for each tuning example and scored them with BLEU+1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Tuning Algorithms", |
| "sec_num": "4.2" |
| }, |
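The pairwise sampling step described above can be illustrated as follows. The sentence-level metric scores (e.g., BLEU+1) are assumed to be precomputed for each derivation, and the logistic pairwise loss used here is one plausible PRO-style choice, not necessarily the paper's exact objective; the function name is hypothetical.

```python
import math
import random

def sampled_pairwise_gradient(nbest, w, n_pairs=15, seed=0):
    """Sampled pairwise ranking gradient for one tuning example.

    nbest: list of (features_dict, metric_score) derivations, where the
    metric is a sentence-level score such as BLEU+1 (precomputed).
    A hedged sketch using a logistic loss over each sampled pair.
    """
    rng = random.Random(seed)
    grad = {}
    for _ in range(n_pairs):
        (fa, sa), (fb, sb) = rng.sample(nbest, 2)
        if sa == sb:
            continue
        if sa < sb:                     # make `a` the higher-metric derivation
            (fa, sa), (fb, sb) = (fb, sb), (fa, sa)
        margin = sum(w.get(j, 0.0) * v for j, v in fa.items()) - \
                 sum(w.get(j, 0.0) * v for j, v in fb.items())
        p = 1.0 / (1.0 + math.exp(margin))   # P(model misranks the pair)
        for j, v in fa.items():
            grad[j] = grad.get(j, 0.0) - p * v
        for j, v in fb.items():
            grad[j] = grad.get(j, 0.0) + p * v
    return grad
```

Only features appearing in the sampled derivations receive gradient mass, which is why each mini-batch gradient touches just a few hundred active features.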
| { |
| "text": "The first experiment evaluates our algorithm when tuning and testing on standard test sets, each with four references. When we add features, our algorithm tends to overfit to a standard-sized tuning set like MT06. We thus concatenated MT05, MT06, and MT08 to create a larger tuning set. Table 2 shows the Ar-En results. Our algorithm is competitive with MERT in the low-dimensional \"dense\" setting, and compares favorably to PRO with the PT feature set. PRO does not benefit from additional features, whereas our algorithm improves with both additional features and data. The underperformance of kb-MIRA may result from a difference between Moses and Phrasal: Moses MERT achieves only 45.62 on MT09. Moses PRO with the PT feature set is slightly worse, e.g., 44.52 on MT09. Nevertheless, kb-MIRA does not improve significantly over MERT, and also selects an unnecessarily large model. (Table 2's best result is 48.56 BLEU on MT09.) For Ar-En, our algorithm thus has the desirable property of benefiting from more and better features, and more data. Table 3 shows the Zh-En results. Somewhat surprisingly, our algorithm improves over MERT in the dense setting. When we add the discriminative phrase table, our algorithm improves over kb-MIRA, and over batch PRO on two evaluation sets. With all features and the MT05/6/8 tuning set, we improve significantly over all other models. PRO learns a smaller model with the PT+AL+LO feature set, which is surprising given that it applies L2 regularization (AdaGrad uses L1). We speculate that this may be a consequence of stochastic learning. Our algorithm decodes each example with a new weight vector, thus exploring more of the search space for the same tuning set. Tables 2 and 3 show that adding tuning examples improves translation quality. Nevertheless, even the larger tuning set is small relative to the bitext from which rules were extracted. He and Deng (2012) and Simianer et al. (2012) showed significant translation quality gains by tuning on the bitext. However, their bitexts matched the genre of their test sets. Our bitexts, like those of most large-scale systems, do not. Domain mismatch matters for the dense feature set (Haddow and Koehn, 2012). We show that it also matters for feature-rich MT.", |
| "cite_spans": [ |
| { |
| "start": 1866, |
| "end": 1884, |
| "text": "He and Deng (2012)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 1889, |
| "end": 1911, |
| "text": "Simianer et al. (2012)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 2154, |
| "end": 2178, |
| "text": "(Haddow and Koehn, 2012)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 287, |
| "end": 294, |
| "text": "Table 2", |
| "ref_id": "TABREF2" |
| }, |
| { |
| "start": 1022, |
| "end": 1029, |
| "text": "Table 3", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 1682, |
| "end": 1696, |
| "text": "Tables 2 and 3", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "NIST OpenMT Experiment", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Before aligning each bitext, we randomly sampled and sequestered 5k and 15k sentence tuning sets, and a 5k test set. We prevented overlap between the tuning sets and the test set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bitext Tuning Experiment", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "D_A / D_B (|A|, |B|, |A \u2229 B|): MT04 / MT06 (70k, 72k, 5.9k); MT04 / MT568 (70k, 96k, 7.6k); MT04 / bitext5k (70k, 67k, 4.4k); MT04 / bitext15k (70k, 310k, 10.5k); 5ktest / bitext5k (82k, 67k, 5.6k); 5ktest / bitext15k (82k, 310k, 14k)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Bitext Tuning Experiment", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We then tuned a dense model with MERT on MT06, and feature-rich models on both MT05/6/8 and the bitext tuning set. Table 4 shows the Ar-En results. When tuned on bitext5k, the translation quality gains are significant for bitext5k-test relative to tuning on MT05/6/8, which has multiple references. However, the bitext5k models do not generalize as well to the NIST evaluation sets, as represented by the MT04 result. Table 5 shows similar trends for Zh-En.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 220, |
| "end": 227, |
| "text": "Table 4", |
| "ref_id": null |
| }, |
| { |
| "start": 521, |
| "end": 528, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Bitext Tuning Experiment", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "How many sparse features appear in both the tuning and test sets? In Table 6, A is the set of phrase table features that received a non-zero weight when tuned on dataset D_A (likewise for B). Column D_A lists several Zh-En test sets and column D_B lists tuning sets. Our experiments showed that tuning on MT06 generalizes better to MT04 than tuning on bitext5k, whereas tuning on bitext5k generalizes better to bitext5k-test than tuning on MT06. These trends are consistent with the level of feature overlap. Phrase table features in A \u2229 B are overwhelmingly short, simple, and correct phrases, suggesting that L1 regularization is effective for feature selection. It is also important to balance the number of features with how well weights can be learned for those features, as tuning on bitext15k produced higher coverage for MT04 but worse generalization than tuning on MT06.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 69, |
| "end": 76, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Feature Overlap Analysis", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "To understand the domain adaptation issue, we compared the non-zero weights in the discriminative phrase table (PT) for Ar-En models tuned on bitext5k and MT05/6/8. Table 7 illustrates a statistical idiosyncrasy in the data for the American and British spellings of program/programme. The mass is concentrated along the diagonal, probably because MT05/6/8 was prepared by NIST, an American agency, while the bitext was collected from many sources including Agence France Presse. Of course, this discrepancy is consequential for both dense and feature-rich models. However, we observe that the feature-rich models fit the tuning data more closely. For example, the MT05/6/8 model learns rules like \u2192 program includes, \u2192 program of, and \u2192 program window. Crucially, it does not learn the basic rule \u2192 program. In contrast, the bitext5k model contains basic rules such as \u2192 programme, \u2192 this programme, and \u2192 that programme. It also contains more elaborate rules such as \u2192 programme expenses were and \u2192 manned space flight programmes. We observed similar trends for 'defense/defence', 'analyze/analyse', etc. This particular genre problem could be addressed with language-specific pre-processing, but our system solves it in a data-driven manner.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 165, |
| "end": 172, |
| "text": "Table 7", |
| "ref_id": "TABREF8" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Domain Adaptation Analysis", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We also analyzed re-ordering differences. Arabic matrix clauses tend to be verb-initial, meaning that the subject and verb must be swapped when translating to English. To assess re-ordering differences, if any, between the dense and feature-rich models, we selected all MT09 segments that began with one of dhkr 'commented', aDaaf 'added', or a'ln 'announced'. We compared the output of the MERT Dense model to our method with the full feature set, both tuned on MT06. Of the 208 source segments, 32 of the translation pairs contained different word order in the matrix clause. Our feature-rich model was correct 18 times (56.3%), Dense was correct 4 times (12.5%), and neither method was correct 10 times (31.3%).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ordering Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "(1) ref: lebanese prime minister , fuad siniora , announced a. and lebanese prime minister fuad siniora that b. the lebanese prime minister fouad siniora announced", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ordering Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "(2) ref: the newspaper and television reported a. she said the newspaper and television b. television and newspaper said", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ordering Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In (1) the dense model (1a) drops the verb, while the feature-rich model correctly re-orders and inserts it after the subject (1b). The coordinated subject in (2) becomes an embedded subject in the dense output (2a). The feature-rich model (2b) performs the correct re-ordering. The core of our method is an inner product between the adaptive learning rate vector and the gradient. This is easy to implement and is very fast even for large feature sets. Since we apply lazy regularization, this inner product usually involves only a few hundred active dimensions. Finally, our method does not need to accumulate n-best lists, a practice that slows down the other algorithms.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Re-ordering Analysis", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Our work relates most closely to that of Hasler et al. (2012b), who tuned models containing both sparse and dense features with Moses. A discriminative phrase table helped them improve slightly over a dense, online MIRA baseline, but their best results required initialization with MERT-tuned weights and re-tuning a single, shared weight for the discriminative phrase table with MERT. In contrast, our algorithm learned good high dimensional models from a uniform starting point. Chiang (2012) adapted AROW to MT and extended previous work on online MIRA (Chiang et al., 2008; Watanabe et al., 2007). It was not clear whether his improvements came from the novel Hope/Fear search, the conservativity gained from MIRA/AROW by solving the QP exactly, adaptivity, or sophisticated parallelization. In contrast, we show that AdaGrad, which ignores conservativity and captures only adaptivity, is sufficient. Simianer et al. (2012) investigated SGD with a pairwise perceptron objective. Their best algorithm used iterative parameter mixing (McDonald et al., 2010), which we found to be slower than the stale gradient method in section 3.3. They regularized once at the end of each epoch, whereas we regularized each weight update. An empirical comparison of these two strategies would be an interesting future contribution.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 62, |
| "text": "Hasler et al. (2012b)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 482, |
| "end": 495, |
| "text": "Chiang (2012)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 557, |
| "end": 578, |
| "text": "(Chiang et al., 2008;", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 579, |
| "end": 601, |
| "text": "Watanabe et al., 2007)", |
| "ref_id": "BIBREF45" |
| }, |
| { |
| "start": 901, |
| "end": 923, |
| "text": "Simianer et al. (2012)", |
| "ref_id": "BIBREF42" |
| }, |
| { |
| "start": 1032, |
| "end": 1055, |
| "text": "(McDonald et al., 2010)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Watanabe (2012) investigated SGD and randomly selected pairwise samples, as we do. He considered both softmax and hinge losses, observing better results with the latter, which requires solving a QP. His parallelization strategy required a line search at the end of each epoch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Many other discriminative techniques have been proposed, based on: ramp loss (Gimpel, 2012); hinge loss (Cherry and Foster, 2012; Haddow et al., 2011; Arun and Koehn, 2007); maximum entropy (Xiang and Ittycheriah, 2011; Ittycheriah and Roukos, 2007; Och and Ney, 2002); the perceptron (Liang et al., 2006a); and structured SVM (Tillmann and Zhang, 2006). These works use radically different experimental setups, and to our knowledge only Cherry and Foster (2012) and this work compare to at least two high dimensional baselines. Broader comparisons, though time-intensive, could help differentiate these methods.", |
| "cite_spans": [ |
| { |
| "start": 76, |
| "end": 90, |
| "text": "(Gimpel, 2012)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 104, |
| "end": 129, |
| "text": "(Cherry and Foster, 2012;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 130, |
| "end": 150, |
| "text": "Haddow et al., 2011;", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 151, |
| "end": 172, |
| "text": "Arun and Koehn, 2007)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 191, |
| "end": 220, |
| "text": "(Xiang and Ittycheriah, 2011;", |
| "ref_id": "BIBREF47" |
| }, |
| { |
| "start": 221, |
| "end": 250, |
| "text": "Ittycheriah and Roukos, 2007;", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 251, |
| "end": 269, |
| "text": "Och and Ney, 2002)", |
| "ref_id": "BIBREF37" |
| }, |
| { |
| "start": 283, |
| "end": 304, |
| "text": "(Liang et al., 2006a)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 326, |
| "end": 352, |
| "text": "(Tillmann and Zhang, 2006)", |
| "ref_id": "BIBREF44" |
| }, |
| { |
| "start": 438, |
| "end": 463, |
| "text": "(Cherry and Foster, 2012)", |
| "ref_id": "BIBREF5" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "6" |
| }, |
| { |
| "text": "We introduced a new online method for tuning feature-rich translation models. The method is faster per epoch than MERT, scales to millions of features, and converges quickly. We used efficient L1 regularization for feature selection, obviating the need for the feature scaling and heuristic filtering common in prior work. Those comfortable with vanilla SGD should find our method easy to implement. Even basic discriminative features were effective, so we believe that our work enables fresh approaches to more sophisticated MT feature engineering.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion and Outlook", |
| "sec_num": "7" |
| }, |
| { |
| "text": "According to experiments not reported in this paper.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Other system settings for all experiments: a distortion limit of 5, a maximum phrase length of 7, and an n-best size of 200. 5 Moses v1.0 (28 January 2013)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "We thank John DeNero for helpful comments on an earlier draft. The first author is supported by a National Science Foundation Graduate Research Fellowship. We also acknowledge the support of the Defense Advanced Research Projects Agency (DARPA) Broad Operational Language Translation (BOLT) program through IBM. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the view of the DARPA or the US government.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Online learning methods for discriminative training of phrase based statistical machine translation", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Arun", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "MT Summit XI", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Arun and P. Koehn. 2007. Online learning methods for discriminative training of phrase based statistical machine translation. In MT Summit XI.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The tradeoffs of large scale learning", |
| "authors": [ |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Bousquet", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Optimization for Machine Learning", |
| "volume": "", |
| "issue": "", |
| "pages": "351--368", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "L. Bottou and O. Bousquet. 2011. The tradeoffs of large scale learning. In Optimization for Machine Learning, pages 351-368. MIT Press.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Regularization and search for minimum error rate training", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "WMT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Cer, D. Jurafsky, and C. D. Manning. 2008. Regu- larization and search for minimum error rate training. In WMT.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Phrasal: A statistical machine translation toolkit for exploring new model features", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Cer", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Jurafsky", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "HLT-NAACL, Demonstration Session", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Cer, M. Galley, D. Jurafsky, and C. D. Manning. 2010. Phrasal: A statistical machine translation toolkit for exploring new model features. In HLT- NAACL, Demonstration Session.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Optimizing Chinese word segmentation for machine translation performance", |
| "authors": [ |
| { |
| "first": "P-C", |
| "middle": [], |
| "last": "Chang", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "WMT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P-C. Chang, M. Galley, and C. D. Manning. 2008. Optimizing Chinese word segmentation for machine translation performance. In WMT.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Batch tuning strategies for statistical machine translation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Cherry", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Foster", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Cherry and G. Foster. 2012. Batch tuning strategies for statistical machine translation. In HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Online large-margin training of syntactic and structural translation features", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Marton", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Chiang, Y. Marton, and P. Resnik. 2008. On- line large-margin training of syntactic and structural translation features. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "11,001 new features for statistical machine translation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Chiang, K. Knight, and W. Wang. 2009. 11,001 new features for statistical machine translation. In HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Hope and fear for discriminative training of statistical translation models", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "JMLR", |
| "volume": "13", |
| "issue": "", |
| "pages": "1159--1187", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Chiang. 2012. Hope and fear for discrimina- tive training of statistical translation models. JMLR, 13:1159-1187.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Online passive-aggressive algorithms", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Dekel", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Keshet", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Shalev-Shwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "JMLR", |
| "volume": "7", |
| "issue": "", |
| "pages": "551--585", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Crammer, O. Dekel, J. Keshet, S. Shalev-Shwartz, and Y. Singer. 2006. Online passive-aggressive al- gorithms. JMLR, 7:551-585.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Adaptive regularization of weight vectors", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Crammer", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kulesza", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Dredze", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Crammer, A. Kulesza, and M. Dredze. 2009. Adaptive regularization of weight vectors. In NIPS.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Efficient online and batch learning using forward backward splitting", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Duchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "JMLR", |
| "volume": "10", |
| "issue": "", |
| "pages": "2899--2934", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Duchi and Y. Singer. 2009. Efficient online and batch learning using forward backward splitting. JMLR, 10:2899-2934.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Adaptive subgradient methods for online learning and stochastic optimization", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Duchi", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hazan", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Singer", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "JMLR", |
| "volume": "12", |
| "issue": "", |
| "pages": "2121--2159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Duchi, E. Hazan, and Y. Singer. 2011. Adaptive subgradient methods for online learning and stochastic optimization. JMLR, 12:2121-2159.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A simple and effective hierarchical phrase reordering model", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Galley", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Galley and C. D. Manning. 2008. A simple and effective hierarchical phrase reordering model. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Structured ramp loss minimization for machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Gimpel and N. A. Smith. 2012. Structured ramp loss minimization for machine translation. In HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Distributed asynchronous online learning for natural language processing", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Das", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [ |
| "A" |
| ], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Gimpel, D. Das, and N. A. Smith. 2010. Distributed asynchronous online learning for natural language processing. In CoNLL.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Discriminative Feature-Rich Modeling for Syntax-Based Machine Translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Gimpel", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Ph.D. thesis, Language Technologies Institute, Carnegie Mellon University", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Gimpel. 2012. Discriminative Feature-Rich Modeling for Syntax-Based Machine Translation. Ph.D. thesis, Language Technologies Institute, Carnegie Mellon University.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A class-based agreement model for generating accurately inflected translations", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Green", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Green and J. DeNero. 2012. A class-based agreement model for generating accurately inflected translations. In ACL.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Analysing the effect of out-of-domain data on SMT systems", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "WMT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Haddow and P. Koehn. 2012. Analysing the effect of out-of-domain data on SMT systems. In WMT.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "SampleRank training for phrase-based machine translation", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Arun", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "WMT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Haddow, A. Arun, and P. Koehn. 2011. SampleRank training for phrase-based machine translation. In WMT.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "The UEDIN systems for the IWSLT 2012 evaluation", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hasler", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Bell", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ghoshal", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Mcinnes", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "IWSLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Hasler, P. Bell, A. Ghoshal, B. Haddow, P. Koehn, F. McInnes, et al. 2012a. The UEDIN systems for the IWSLT 2012 evaluation. In IWSLT.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Sparse lexicalised features and topic adaptation for SMT", |
| "authors": [ |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Hasler", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "IWSLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "E. Hasler, B. Haddow, and P. Koehn. 2012b. Sparse lexicalised features and topic adaptation for SMT. In IWSLT.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Maximum expected BLEU training of phrase and lexicon translation models", |
| "authors": [ |
| { |
| "first": "X", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "X. He and L. Deng. 2012. Maximum expected BLEU training of phrase and lexicon translation models. In ACL.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Support vector learning for ordinal regression", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Herbrich", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Graepel", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Obermayer", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "ICANN", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Herbrich, T. Graepel, and K. Obermayer. 1999. Support vector learning for ordinal regression. In ICANN.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Tuning as ranking", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Hopkins", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "May", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Hopkins and J. May. 2011. Tuning as ranking. In EMNLP.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Direct translation model 2", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ittycheriah", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Ittycheriah and S. Roukos. 2007. Direct translation model 2. In HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Accurate unlexicalized parsing", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Klein and C. D. Manning. 2003. Accurate unlexicalized parsing. In ACL.", |
| "links": null |
| }, |
| "BIBREF27": { |
| "ref_id": "b27", |
| "title": "Moses: Open source toolkit for statistical machine translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "ACL, Demonstration Session", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Koehn, H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, et al. 2007. Moses: Open source toolkit for statistical machine translation. In ACL, Demonstration Session.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Slow learners are fast", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Langford", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "J" |
| ], |
| "last": "Smola", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Zinkevich", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. Langford, A. J. Smola, and M. Zinkevich. 2009. Slow learners are fast. In NIPS.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Online EM for unsupervised models", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Liang and D. Klein. 2009. Online EM for unsupervised models. In HLT-NAACL.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "An end-to-end discriminative approach to machine translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bouchard-C\u00f4t\u00e9", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Liang, A. Bouchard-C\u00f4t\u00e9, D. Klein, and B. Taskar. 2006a. An end-to-end discriminative approach to machine translation. In ACL.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "Alignment by agreement", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Liang, B. Taskar, and D. Klein. 2006b. Alignment by agreement. In NAACL.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "ORANGE: a method for evaluating automatic evaluation metrics for machine translation", |
| "authors": [ |
| { |
| "first": "C.-Y", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C.-Y. Lin and F. J. Och. 2004. ORANGE: a method for evaluating automatic evaluation metrics for machine translation. In COLING.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Enhancing the Arabic Treebank: A collaborative effort toward new annotation guidelines", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Maamouri", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Bies", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kulick", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "LREC", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Maamouri, A. Bies, and S. Kulick. 2008. Enhancing the Arabic Treebank: A collaborative effort toward new annotation guidelines. In LREC.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Building a large annotated corpus of English: The Penn Treebank", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Marcus", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [ |
| "A" |
| ], |
| "last": "Marcinkiewicz", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Santorini", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "", |
| "pages": "313--330", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "M. Marcus, M. A. Marcinkiewicz, and B. Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19:313-330.", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Distributed training strategies for the structured perceptron", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Mcdonald", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Hall", |
| "suffix": "" |
| }, |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Mann", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "NAACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. McDonald, K. Hall, and G. Mann. 2010. Distributed training strategies for the structured perceptron. In NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Feature selection, L1 vs. L2 regularization, and rotational invariance", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [ |
| "Y" |
| ], |
| "last": "Ng", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "ICML", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Y. Ng. 2004. Feature selection, L1 vs. L2 regularization, and rotational invariance. In ICML.", |
| "links": null |
| }, |
| "BIBREF37": { |
| "ref_id": "b37", |
| "title": "Discriminative training and maximum entropy models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och and H. Ney. 2002. Discriminative training and maximum entropy models for statistical machine translation. In ACL.", |
| "links": null |
| }, |
| "BIBREF38": { |
| "ref_id": "b38", |
| "title": "The alignment template approach to statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "4", |
| "pages": "417--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och and H. Ney. 2004. The alignment template approach to statistical machine translation. Computational Linguistics, 30(4):417-449.", |
| "links": null |
| }, |
| "BIBREF39": { |
| "ref_id": "b39", |
| "title": "Minimum error rate training for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och. 2003. Minimum error rate training for statistical machine translation. In ACL.", |
| "links": null |
| }, |
| "BIBREF40": { |
| "ref_id": "b40", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Papineni, S. Roukos, T. Ward, and W. Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL.", |
| "links": null |
| }, |
| "BIBREF41": { |
| "ref_id": "b41", |
| "title": "On some pitfalls in automatic evaluation and significance testing in MT", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "T" |
| ], |
| "last": "Maxwell", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (MTSE)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Riezler and J. T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing in MT. In ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation and/or Summarization (MTSE).", |
| "links": null |
| }, |
| "BIBREF42": { |
| "ref_id": "b42", |
| "title": "Joint feature selection in distributed stochastic learning for largescale discriminative training in SMT", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Simianer", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Simianer, S. Riezler, and C. Dyer. 2012. Joint feature selection in distributed stochastic learning for large-scale discriminative training in SMT. In ACL.", |
| "links": null |
| }, |
| "BIBREF43": { |
| "ref_id": "b43", |
| "title": "SRILM-an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "ICSLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Stolcke. 2002. SRILM-an extensible language modeling toolkit. In ICSLP.", |
| "links": null |
| }, |
| "BIBREF44": { |
| "ref_id": "b44", |
| "title": "A discriminative global training algorithm for statistical MT", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "ACL-COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Tillmann and T. Zhang. 2006. A discriminative global training algorithm for statistical MT. In ACL-COLING.", |
| "links": null |
| }, |
| "BIBREF45": { |
| "ref_id": "b45", |
| "title": "Online large-margin training for statistical machine translation", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Suzuki", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tsukada", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Isozaki", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "EMNLP-CoNLL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Watanabe, J. Suzuki, H. Tsukada, and H. Isozaki. 2007. Online large-margin training for statistical machine translation. In EMNLP-CoNLL.", |
| "links": null |
| }, |
| "BIBREF46": { |
| "ref_id": "b46", |
| "title": "Optimized online rank learning for machine translation", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Watanabe. 2012. Optimized online rank learning for machine translation. In HLT-NAACL. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF47": { |
| "ref_id": "b47", |
| "title": "Discriminative feature-tied mixture modeling for statistical machine translation", |
| "authors": [ |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Xiang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ittycheriah", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "ACL-HLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "B. Xiang and A. Ittycheriah. 2011. Discriminative feature-tied mixture modeling for statistical machine translation. In ACL-HLT.", |
| "links": null |
| }, |
| "BIBREF48": { |
| "ref_id": "b48", |
| "title": "The Penn Chinese Treebank: Phrase structure annotation of a large corpus", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Xue", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Xia", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Chiou", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Palmer", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Natural Language Engineering", |
| "volume": "11", |
| "issue": "2", |
| "pages": "207--238", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Xue, F. Xia, F. Chiou, and M. Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207-238.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "TABREF0": { |
| "content": "<table><tr><td>Discriminative phrase table (PT): indicators for each rule in the phrase table. Alignments (AL): indicators for phrase-internal alignments and deleted (unaligned) source words. Discriminative reordering (LO): indicators for eight lexicalized reordering classes, including the six standard monotone/swap/discontinuous classes plus the two simpler Moses monotone/non-monotone classes.</td></tr></table>", |
| "num": null, |
| "text": "Bilingual and monolingual corpora used in these experiments. The monolingual English data comes from the AFP and Xinhua sections of English Gigaword 4 (LDC2009T13).", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "content": "<table><tr><td colspan=\"7\">(*) Chiang (2012) used a similar-sized bitext, but two LMs trained on twice as much monolingual data.</td></tr><tr><td>Model</td><td colspan=\"2\">#features Algorithm</td><td colspan=\"2\">Tuning Set</td><td>MT02</td><td>MT03</td><td>MT04</td></tr><tr><td>Dense Dense</td><td>19 19</td><td colspan=\"2\">MERT This paper MT06 MT06</td><td colspan=\"2\">33.90 35.72 32.60 36.23</td><td>33.71 35.14</td><td>34.26 34.78</td></tr><tr><td>+PT +PT +PT</td><td>105k 26k 66k</td><td colspan=\"2\">kb-MIRA MT06 PRO MT06 This paper MT06</td><td colspan=\"2\">29.46 30.67 33.70 36.87 33.90 36.09</td><td>28.96 34.62 34.86</td><td>30.05 34.80 34.73</td></tr><tr><td>+PT+AL+LO +PT+AL+LO Dense +PT+AL+LO</td><td>148k 344k 19 487k</td><td colspan=\"4\">PRO This paper MT06 MT06 MERT MT05/6/8 32.36 35.69 34.81 36.31 38.99 36.40 This paper MT05/6/8 37.64 37.81</td><td>33.81 35.07 33.83 36.26</td><td>34.41 34.84 34.33 36.15</td></tr><tr><td/><td/><td/><td/><td colspan=\"2\">#sentences 878</td><td>919</td><td>1,597</td></tr></table>", |
| "num": null, |
| "text": "Ar-En results for the NIST tuning experiment. The tuning and test sets each have four references. MT06 has 1,717 sentences, while the concatenated MT05/6/8 set has 4,213 sentences. Bold indicates statistical significance relative to the best baseline in each block at p < 0.001; bold-italic at p < 0.05. We assessed significance with the permutation test ofRiezler and Maxwell (2005).", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF3": { |
| "content": "<table/>", |
| "num": null, |
| "text": "for the NIST tuning experiment. MT05/6/8 has 4,103 sentences. OpenMT 2009 did not include Zh-En, hence the asymmetry withTable 2.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF4": { |
| "content": "<table><tr><td>Model</td><td colspan=\"2\">#features Algorithm</td><td>Tuning Set</td><td>#refs</td><td>bitext5k-test MT04</td></tr><tr><td>Dense +PT +PT +PT+AL+LO</td><td>19 72k 79k 647k</td><td colspan=\"3\">MERT This paper MT05/6/8 51.29 4 MT06 45.08 4 This paper bitext5k 44.79 1 This paper bitext15k 45.68 1</td><td>39.28 39.50 43.85 43.93</td><td>51.42 50.60 45.73 45.24</td></tr><tr><td colspan=\"6\">Table 4: Ar-En results [BLEU-4 % uncased] for the bitext tuning experiment. Statistical significance is relative to the Dense baseline. We include MT04 for comparison to the NIST genre.</td></tr><tr><td>Model</td><td colspan=\"2\">#features Algorithm</td><td>Tuning Set</td><td>#refs</td><td>bitext5k-test MT04</td></tr><tr><td>Dense +PT +PT +PT+AL+LO</td><td>19 97k 67k 536k</td><td colspan=\"3\">MERT This paper MT05/6/8 34.45 4 MT06 33.90 4 This paper bitext5k 36.26 1 This paper bitext15k 37.57 1</td><td>33.44 35.08 36.01 36.30</td><td>34.26 35.19 33.76 34.05</td></tr></table>", |
| "num": null, |
| "text": "The full feature set PT+AL+LO does help. With the PT feature set alone, our algorithm tuned on MT05/6/8 scores well below the best model, e.g.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF5": { |
| "content": "<table/>", |
| "num": null, |
| "text": "Zh-En results [BLEU-4 % uncased] for the bitext tuning experiment.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF6": { |
| "content": "<table/>", |
| "num": null, |
| "text": "Number of overlapping phrase table (+PT) features on various Zh-En dataset pairs.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF8": { |
| "content": "<table><tr><td colspan=\"2\">of seven common verbs:</td><td colspan=\"2\">qaal 'said',</td><td>SrH</td></tr><tr><td>'declared',</td><td colspan=\"2\">ashaar 'indicated',</td><td>kaan 'was',</td></tr></table>", |
| "num": null, |
| "text": "Top: comparison of token counts in two Ar-En tuning sets for programme and program. Bottom: rule counts in the discriminative phrase table (PT) for models tuned on the two tuning sets. Both spellings correspond to the Arabic .", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF9": { |
| "content": "<table><tr><td>compares our method to standard implementations of the other algorithms. MERT parallelizes easily but runtime increases quadratically with n-best list size. PRO runs (single-threaded) L-BFGS to convergence on every epoch, a potentially slow procedure for the larger feature set. Moreover, both</td></tr></table>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF10": { |
| "content": "<table><tr><td>: Epochs to convergence (\"epochs\") and approximate runtime per epoch in minutes (\"min.\") for selected Zh-En experiments tuned on MT06. All runs executed on the same dedicated system with the same number of threads. (*) Moses and kb-MIRA are written in C++, while all other rows refer to Java implementations in Phrasal.</td></tr><tr><td>the Phrasal and Moses PRO implementations use L2 regularization, which regularizes every weight on every update. kb-MIRA makes multiple passes through the n-best lists during each epoch. The Moses implementation parallelizes decoding but weight updating is sequential.</td></tr></table>", |
| "num": null, |
| "text": "", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |