{
"paper_id": "N15-1041",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T14:35:46.950960Z"
},
"title": "The Geometry of Statistical Machine Translation",
"authors": [
{
"first": "Aurelien",
"middle": [],
"last": "Waite",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Cambridge",
"location": {
"country": "UK"
}
},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Most modern statistical machine translation systems are based on linear statistical models. One extremely effective method for estimating the model parameters is minimum error rate training (MERT), which is an efficient form of line optimisation adapted to the highly nonlinear objective functions used in machine translation. We describe a polynomial-time generalisation of line optimisation that computes the error surface over a plane embedded in parameter space. The description of this algorithm relies on convex geometry, which is the mathematics of polytopes and their faces. Using this geometric representation of MERT we investigate whether the optimisation of linear models is tractable in general. Previous work on finding optimal solutions in MERT (Galley and Quirk, 2011) established a worst-case complexity that was exponential in the number of sentences; in contrast, we show that the exponential dependence of the worst-case complexity is mainly on the number of features. Although our work is framed with respect to MERT, the convex geometric description is also applicable to other error-based training methods for linear models. We believe our analysis has important ramifications because it suggests that the current trend in building statistical machine translation systems by introducing a very large number of sparse features is inherently not robust.",
"pdf_parse": {
"paper_id": "N15-1041",
"_pdf_hash": "",
"abstract": [
{
"text": "Most modern statistical machine translation systems are based on linear statistical models. One extremely effective method for estimating the model parameters is minimum error rate training (MERT), which is an efficient form of line optimisation adapted to the highly nonlinear objective functions used in machine translation. We describe a polynomial-time generalisation of line optimisation that computes the error surface over a plane embedded in parameter space. The description of this algorithm relies on convex geometry, which is the mathematics of polytopes and their faces. Using this geometric representation of MERT we investigate whether the optimisation of linear models is tractable in general. Previous work on finding optimal solutions in MERT (Galley and Quirk, 2011) established a worst-case complexity that was exponential in the number of sentences; in contrast, we show that the exponential dependence of the worst-case complexity is mainly on the number of features. Although our work is framed with respect to MERT, the convex geometric description is also applicable to other error-based training methods for linear models. We believe our analysis has important ramifications because it suggests that the current trend in building statistical machine translation systems by introducing a very large number of sparse features is inherently not robust.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The linear model of Statistical Machine Translation (SMT) (Och and Ney, 2002) casts translation as a search for translation hypotheses under a linear combination of weighted features: a source language sentence f is translated as \u00ea(f ; w) = argmax e {wh(e, f )} (1)",
"cite_spans": [
{
"start": 58,
"end": 77,
"text": "(Och and Ney, 2002)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
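The decoder of Eqn. (1), restricted to a finite K-best list as in Eqn. (3), is just an argmax of a dot product. A minimal sketch, reusing the two-feature values of Table 1 as a stand-in K-best list (the weight vectors here are hypothetical):

```python
def decode(kbest_features, w):
    """Return the index of the highest-scoring hypothesis under w."""
    scores = [sum(wd * hd for wd, hd in zip(w, h)) for h in kbest_features]
    return max(range(len(scores)), key=scores.__getitem__)

# Feature vectors (h_LM, h_TM) for hypotheses e_1..e_5, as in Table 1.
kbest = [(-0.1, -1.2), (-1.2, -0.2), (-0.9, -1.6), (-0.9, -0.1), (-0.8, -0.9)]

print(decode(kbest, (1.0, 1.0)))  # → 3: e_4 wins under equal weights
print(decode(kbest, (1.0, 0.1)))  # → 0: down-weighting h_TM makes e_1 win
```

Varying w moves the argmax between a handful of hypotheses; which hypotheses are reachable at all is exactly what the convex-geometric view of Section 3 characterises.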
{
"text": "where translation scores are a linear combination of the D \u00d7 1 feature vector h(e, f ) \u2208 R D under the 1 \u00d7 D model parameter vector w. Convex geometry (Ziegler, 1995) is the mathematics of such linear equations presented as the study of convex polytopes. We use convex geometry to show that the behaviour of training methods such as MERT (Och, 2003; Macherey et al., 2008) , MIRA (Crammer et al., 2006) , PRO (Hopkins and May, 2011) , and others converges at high feature dimension. In particular, we analyse how robustness decreases in linear models as feature dimension increases. We believe that severe overtraining is a problem in many current linear model formulations due to this lack of robustness.",
"cite_spans": [
{
"start": 151,
"end": 166,
"text": "(Ziegler, 1995)",
"ref_id": "BIBREF33"
},
{
"start": 338,
"end": 349,
"text": "(Och, 2003;",
"ref_id": "BIBREF25"
},
{
"start": 350,
"end": 372,
"text": "Macherey et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 380,
"end": 402,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 409,
"end": 432,
"text": "(Hopkins and May, 2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the process of building this geometric representation of linear models we discuss algorithms such as the Minkowski sum algorithm (Fukuda, 2004) and projected MERT (Section 4.2) that could be useful for designing new and more robust training algorithms for SMT and other natural language processing problems.",
"cite_spans": [
{
"start": 132,
"end": 146,
"text": "(Fukuda, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Let f 1 . . . f S be a set of S source language sentences with reference translations r 1 . . . r S . The goal is to estimate the model parameter vector w so as to minimize an error count based on an automated metric, such as BLEU (Papineni et al., 2002) , assumed to be additive over sentences:",
"cite_spans": [
{
"start": 231,
"end": 254,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w = argmin w S s=1 E(\u00ea(f s ; w), r s )",
"eq_num": "(2)"
}
],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "Optimisation can be made tractable by restricting the search to rescoring of K-best lists of translation hypotheses, {e s,i , 1 \u2264 i \u2264 K} S s=1 . For f s , let h s,i = h(e s,i , f s ) be the feature vector associated with hypothesis e s,i . Restricted to these lists, the general decoder of Eqn. 1 becomes \u00ea(f s ; w) = argmax e s,i {wh(e s,i , f s )} (3)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "Although the objective function in Eqn. (2) cannot be solved analytically, MERT as described by Och (2003) can be performed over the K-best lists. The line optimisation procedure considers a subset of parameters defined by the line w (0) + \u03b3d, where w (0) corresponds to an initial point in parameter space and d is the direction along which to optimise. Eqn. (3) can be rewritten as:",
"cite_spans": [
{
"start": 96,
"end": 106,
"text": "Och (2003)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u00ea(f s ; \u03b3) = argmax e s,i {w (0) h s,i + \u03b3dh s,i }",
"eq_num": "(4)"
}
],
"section": "Training Linear Models",
"sec_num": "2"
},
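Along the line w (0) + \u03b3d, each hypothesis's score in Eqn. (4) is affine in \u03b3: a i + \u03b3b i with a i = w (0) h s,i and b i = dh s,i , so the maximising hypothesis as a function of \u03b3 is the upper envelope of K lines. A sketch of that envelope computation, which is the core of the geometric line optimisation; the input lines are hypothetical:

```python
def upper_envelope(lines):
    """lines[i] = (a, b): score a + b*gamma of hypothesis i along the line.
    Returns the indices of the maximising hypotheses in the order they win
    as gamma sweeps from -inf to +inf (a convex-hull-trick construction)."""
    # process in increasing slope; equal slopes sort the larger intercept last
    order = sorted(range(len(lines)), key=lambda i: (lines[i][1], lines[i][0]))
    hull = []
    for i in order:
        a, b = lines[i]
        if hull and lines[hull[-1]][1] == b:
            hull.pop()  # same slope, smaller-or-equal intercept: dominated
        while len(hull) >= 2:
            a1, b1 = lines[hull[-2]]
            a2, b2 = lines[hull[-1]]
            # hull[-1] never wins if line i overtakes hull[-2] no later
            # than hull[-1] does
            if (a - a1) * (b2 - b1) >= (a2 - a1) * (b - b1):
                hull.pop()
            else:
                break
        hull.append(i)
    return hull

def breakpoints(lines, hull):
    """gamma values where the maximising hypothesis changes."""
    return [(lines[u][0] - lines[v][0]) / (lines[v][1] - lines[u][1])
            for u, v in zip(hull, hull[1:])]

# (a_i, b_i) = (w0.h_i, d.h_i) for four hypothetical hypotheses
lines = [(0.0, 1.0), (1.0, 0.0), (0.0, -1.0), (-2.0, 2.0)]
hull = upper_envelope(lines)
print(hull, breakpoints(lines, hull))  # → [2, 1, 0, 3] [-1.0, 1.0, 2.0]
```

MERT then overlays the sentence-level error counts on the resulting intervals of \u03b3 and picks a point in the interior of the best interval.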
{
"text": "Line optimisation reduces the D-dimensional procedure in Eqn. (2) to a 1-dimensional problem that can be easily solved using a geometric algorithm for many source sentences (Macherey et al., 2008) . More recently, Galley and Quirk (2011) have introduced linear programming MERT (LP-MERT) as an exact search algorithm that reaches the global optimum of the training criterion. A hypothesis e s,i from the sth K-best list can be selected by the decoder only if",
"cite_spans": [
{
"start": 173,
"end": 196,
"text": "(Macherey et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 214,
"end": 237,
"text": "Galley and Quirk (2011)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w(h s,j \u2212 h s,i ) \u2264 0 for 1 \u2264 j \u2264 K",
"eq_num": "(5)"
}
],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "for some parameter vector w = 0. If such a solution exists then the system of inequalities is feasible, and defines a convex region in parameter space within which any parameter w will yield e s,i . Testing the system of inequalities in (5) and finding a parameter vector can be cast as a linear programming feasibility problem (Galley and Quirk, 2011) , and this can be extended to find a parameter vector that optimizes Eqn. 2 over a collection of K-best lists. We discuss the complexity of this operation in Section 4.1. Hopkins and May (2011) note that for the sth source sentence, the parameter w that correctly ranks its K-best list must satisfy the following set of constraints for 1 \u2264 i, j \u2264 K:",
"cite_spans": [
{
"start": 328,
"end": 352,
"text": "(Galley and Quirk, 2011)",
"ref_id": "BIBREF15"
},
{
"start": 524,
"end": 546,
"text": "Hopkins and May (2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "w(h s,j \u2212 h s,i ) \u2264 0 if \u2206(e s,i , e s,j ) \u2265 0 (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "where \u2206 computes the difference in error between two hypotheses. The difference vectors (h s,j \u2212 h s,i ) associated with each constraint can be used as input vectors for a binary classification problem in which the aim is to predict whether the difference in error \u2206(e s,i , e s,j ) is positive or negative. Hopkins and May (2011) call this algorithm Pairwise Ranking Optimisation (PRO). Because there are SK 2 difference vectors across all source sentences, a subset of constraints is sampled in the original formulation; with efficient calculation of rankings, sampling can be avoided (Dreyer and Dong, 2015) . The online error-based training algorithm MIRA (Crammer et al., 2006) is also used for SMT (Watanabe et al., 2007; Chiang et al., 2008; Chiang, 2012) . Using a sentence-level error function, a set of S oracle hypotheses is indexed with the vector \u00ee:",
"cite_spans": [
{
"start": 312,
"end": 334,
"text": "Hopkins and May (2011)",
"ref_id": "BIBREF20"
},
{
"start": 590,
"end": 613,
"text": "(Dreyer and Dong, 2015)",
"ref_id": "BIBREF12"
},
{
"start": 663,
"end": 685,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF9"
},
{
"start": 707,
"end": 730,
"text": "(Watanabe et al., 2007;",
"ref_id": "BIBREF31"
},
{
"start": 731,
"end": 751,
"text": "Chiang et al., 2008;",
"ref_id": "BIBREF4"
},
{
"start": 752,
"end": 765,
"text": "Chiang, 2012)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "\u00ee s = argmin i E(e s,i , r s ) for 1 \u2264 s \u2264 S",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "For a given s the objective at iteration n + 1 is:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "minimise over w (n+1) : (1/2) \u2225w (n+1) \u2212 w (n) \u2225 2 + C \u2211 K j=1 \u03be j (7)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "subject to \u03be j \u2265 0 and for 1 \u2264 j \u2264 K, \u00ee s \u2260 j :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "w (n+1) (h s,j \u2212 h s,\u00ee s ) + \u2206(e s,\u00ee s , e s,j ) \u2212 \u03be j \u2264 0",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
{
"text": "where {\u03be} are slack variables added to allow infeasible solutions, and C controls the trade-off between error minimisation and margin maximisation. The online nature of the optimiser results in complex implementations; therefore batch versions of MIRA have been proposed (Cherry and Foster, 2012; Gimpel and Smith, 2012) . Although MERT, LP-MERT, PRO, and MIRA carry out their search in very different ways, we can compare them in terms of the constraints they are attempting to satisfy. A feasible solution for LP-MERT is also an optimal solution for MERT, and vice versa. The constraints (Eqn. (5)) that define LP-MERT are a subset of the constraints (Eqn. (6)) that define PRO, and so a feasible solution for PRO will also be feasible for LP-MERT; however, the converse is not necessarily true. The constraints that define MIRA (Eqn. (7)) are similar to the LP-MERT constraints (5), although with the addition of slack variables and the \u2206 function to handle infeasible solutions. However, if a feasible solution is available for MIRA, then these extra quantities are unnecessary. With these quantities removed, we recover a 'hard-margin' optimiser, which utilises the same constraint set as in LP-MERT. In the feasible case, the solution found by MIRA is also a solution for LP-MERT.",
"cite_spans": [
{
"start": 271,
"end": 296,
"text": "(Cherry and Foster, 2012;",
"ref_id": "BIBREF3"
},
{
"start": 297,
"end": 320,
"text": "Gimpel and Smith, 2012)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Linear Models",
"sec_num": "2"
},
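The relationship between the PRO constraints (Eqn. 6) and the LP-MERT constraints (Eqn. 5) can be made concrete: build the labelled difference vectors for one sentence, then check that a parameter vector satisfying every PRO constraint also satisfies the LP-MERT subset. The feature values and per-hypothesis error counts below are hypothetical.

```python
def diff_vectors(features, errors):
    """PRO-style labelled difference vectors for one K-best list:
    x = h_j - h_i, labelled +1 when e_j has more error than e_i.
    Each record also keeps i, so the LP-MERT subset (pairs with i equal
    to the oracle index) can be recovered."""
    data = []
    K = len(features)
    for i in range(K):
        for j in range(K):
            if errors[i] == errors[j]:
                continue  # no ranking information in this pair
            x = tuple(fj - fi for fi, fj in zip(features[i], features[j]))
            data.append((x, 1 if errors[j] > errors[i] else -1, i))
    return data

def violations(w, data):
    """Count constraints w.x <= 0 (for pairs labelled +1) broken by w."""
    return sum(1 for x, y, _ in data
               if y == 1 and sum(wd * xd for wd, xd in zip(w, x)) > 0)

feats = [(-0.1, -1.2), (-1.2, -0.2), (-0.9, -1.6), (-0.9, -0.1)]
errs = [2.0, 1.0, 3.0, 0.5]   # e_4 (index 3) is the oracle: lowest error
data = diff_vectors(feats, errs)

# A w that satisfies all PRO constraints also satisfies the LP-MERT subset.
w = (0.2, 1.0)
lp_mert = [(x, y, i) for x, y, i in data if i == 3]
print(violations(w, data), violations(w, lp_mert))  # → 0 0
```

Here w ranks the whole list consistently with the error counts, so both constraint sets are satisfied; a w that merely puts the oracle on top could still violate PRO pairs among the non-oracle hypotheses.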
{
"text": "One avenue of SMT research has been to add as many features as possible to the linear model, especially in the form of sparse features (Chiang et al., 2009; Hopkins and May, 2011; Cherry and Foster, 2012; Gimpel and Smith, 2012; Flanigan et al., 2013; Galley et al., 2013; Green et al., 2013) . The assumption is that the addition of new features will improve translation performance. It is interesting to read the justification for many of these works as stated in their abstracts. For example Hopkins and May (2011) state that:",
"cite_spans": [
{
"start": 135,
"end": 156,
"text": "(Chiang et al., 2009;",
"ref_id": "BIBREF5"
},
{
"start": 157,
"end": 179,
"text": "Hopkins and May, 2011;",
"ref_id": "BIBREF20"
},
{
"start": 180,
"end": 204,
"text": "Cherry and Foster, 2012;",
"ref_id": "BIBREF3"
},
{
"start": 205,
"end": 228,
"text": "Gimpel and Smith, 2012;",
"ref_id": "BIBREF17"
},
{
"start": 229,
"end": 251,
"text": "Flanigan et al., 2013;",
"ref_id": "BIBREF13"
},
{
"start": 252,
"end": 272,
"text": "Galley et al., 2013;",
"ref_id": "BIBREF16"
},
{
"start": 273,
"end": 292,
"text": "Green et al., 2013)",
"ref_id": "BIBREF18"
},
{
"start": 495,
"end": 517,
"text": "Hopkins and May (2011)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Survey of Recent Work",
"sec_num": "2.1"
},
{
"text": "We establish PRO's scalability and effectiveness by comparing it to MERT and MIRA and demonstrate parity on both phrase-based and syntax-based systems. Cherry and Foster (2012) state:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Survey of Recent Work",
"sec_num": "2.1"
},
{
"text": "Among other results, we find that a simple and efficient batch version of MIRA performs at least as well as training online. Along similar lines, Gimpel and Smith (2012) state:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Survey of Recent Work",
"sec_num": "2.1"
},
{
"text": "[We] present a training algorithm that is easy to implement and that performs comparable to others. In defence of MERT, Galley et al. (2013) state: Experiments with up to 3600 features show that these extensions of MERT yield results comparable to PRO, a learner often used with large feature sets. Green et al. (2013) also note that feature-rich models are rarely used in annual MT evaluations, an observation they use to motivate an investigation into adaptive learning rate algorithms.",
"cite_spans": [
{
"start": 120,
"end": 140,
"text": "Galley et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 299,
"end": 318,
"text": "Green et al. (2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Survey of Recent Work",
"sec_num": "2.1"
},
{
"text": "Why do such different methods give such remarkably 'comparable' performance in research settings? And why is it so difficult to get general and unambiguous improvements through the use of high-dimensional, sparse features? We believe that the explanation lies in feasibility. If the oracle index vector \u00ee is feasible, then all training methods will find very similar solutions. Our belief is that as the feature dimension increases, the chance of an oracle index vector being feasible also increases.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "state:",
"sec_num": null
},
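This belief can be probed with a toy simulation; everything here is a sketch under strong assumptions (random Gaussian feature vectors, hypothetical sizes S and K, and a crude epoch-capped perceptron standing in for a proper LP feasibility test):

```python
import random

def feasible(diffs, epochs=100):
    """Heuristic test that some w satisfies w.x < 0 for every difference
    vector x (the system of Eqn. 5): run a perceptron, which terminates
    cleanly only if the system is strictly feasible; the epoch cap makes
    this a crude approximation."""
    w = [0.0] * len(diffs[0])
    for _ in range(epochs):
        clean = True
        for x in diffs:
            if sum(wi * xi for wi, xi in zip(w, x)) >= 0.0:
                w = [wi - xi for wi, xi in zip(w, x)]  # mistake-driven step
                clean = False
        if clean:
            return True
    return False

def oracle_feasibility_rate(D, S=8, K=5, trials=30, seed=0):
    """Fraction of random problems in which a randomly chosen oracle
    index vector is feasible."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        diffs = []
        for _ in range(S):
            hyps = [[rng.gauss(0.0, 1.0) for _ in range(D)] for _ in range(K)]
            oracle = rng.randrange(K)
            diffs += [[a - b for a, b in zip(h, hyps[oracle])]
                      for i, h in enumerate(hyps) if i != oracle]
        wins += feasible(diffs)
    return wins / trials

rates = [oracle_feasibility_rate(D) for D in (2, 8, 32)]
print(rates)  # feasibility of a random oracle rises with dimension D
```

In runs of this sketch the rate should increase with D (once D reaches the number of constraints, a generic system is always feasible); the exact numbers depend on the random draws.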
{
"text": "We now build on the description of LP-MERT to give a geometric interpretation to training linear models. We first give a concise summary of the fundamentals of convex geometry as presented by (Ziegler, 1995) after which we work through the example in Cer et al. (2008) to provide an intuition behind these concepts.",
"cite_spans": [
{
"start": 192,
"end": 207,
"text": "(Ziegler, 1995)",
"ref_id": "BIBREF33"
},
{
"start": 251,
"end": 268,
"text": "Cer et al. (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry",
"sec_num": "3"
},
{
"text": "In this section we reference definitions from convex geometry (Ziegler, 1995) in a form that allows us to describe SMT model parameter optimisation. Vector Space The real-valued vector space R D represents the space of all finite D-dimensional feature vectors. Dual Vector Space The dual vector space (R D ) * is the set of real linear functions R D \u2192 R. Polytope The polytope H s \u2286 R D is the convex hull of the finite set of feature vectors associated with the K hypotheses for the sth sentence, i.e.",
"cite_spans": [
{
"start": 62,
"end": 77,
"text": "(Ziegler, 1995)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "H s = conv(h s,1 , . . . , h s,K ). Faces in R D Suppose for w \u2208 (R D ) * that wh \u2264 max h \u2032 \u2208H s wh \u2032 , \u2200 h \u2208 H s . A face is defined as F = {h \u2208 H s : wh = max h \u2032 \u2208H s wh \u2032 } (8)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "Vertex A face consisting of a single point is called a vertex. The set of vertices of a polytope is denoted vert(H s ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "Edge An edge is a face in the form of a line segment between two vertices h s,i and h s,j in the polytope H s . The edge can be written as",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "[h s,i , h s,j ] = conv(h s,i , h s,j ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "If an edge exists, then the following holds:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "Table 1: Feature vectors for hypotheses e 1 \u2013e 5 , with h LM = log(P LM (e)) and h T M = log(P T M (f |e)): e 1 : (\u22120.1, \u22121.2); e 2 : (\u22121.2, \u22120.2); e 3 : (\u22120.9, \u22121.6); e 4 : (\u22120.9, \u22120.1); e 5 : (\u22120.8, \u22120.9).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "w(h j \u2212 h i ) = 0 (9) w(h k \u2212 h i ) < 0, 1 \u2264 k \u2264 K, k \u2260 i, k \u2260 j w(h l \u2212 h j ) < 0, 1 \u2264 l \u2264 K, l \u2260 i, l \u2260 j which implies that [h s,i , h s,j ] defines a decision boundary in (R D ) * between",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N F = {w : w(h s,j \u2212 h s,i ) \u2264 0, \u2200h s,i \u2208 vert(F ), \u2200h s,j \u2208 vert(H s )}",
"eq_num": "(10)"
}
],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "If the face is a vertex F = {h s,i } then its normal cone N {h s,i } is the set of feasible parameters that satisfy the system in (5).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "Normal Fan The set of all normal cones associated with the faces of H s is called the normal fan N (H s ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Convex Geometry Fundamentals",
"sec_num": "3.1"
},
{
"text": "Following the example in Cer et al. (2008) we analyse a system based on two features: the translation P T M (f |e) and language P LM (e) models. For brevity we omit the common sentence index, so that h i = h s,i . The system produces a set of four hypotheses which yield four feature vectors {h 1 , h 2 , h 3 , h 4 } (Table 1) . To this set of four hypotheses, we add a fifth hypothesis and feature vector h 5 to illustrate an infeasible solution. These feature vectors are plotted in Figure 1 . The feature vectors form a polytope H shaded in light blue. From Figure 1 we see that h 4 satisfies the conditions for a vertex in Eqn. 8, because we can draw a decision boundary that intersects the vertex and no other h \u2208 H. We also note h 5 is not a vertex, and is redundant to the description of H. Figure 1 of Cer et al. (2008) actually shows a normal fan, although it is not described as such. We now describe how this geometric object is constructed step by step in Figure 2 . In Part (a) we identify the edge [h 4 , h 1 ] in R 2 with a decision boundary represented by a dashed line. We have also drawn a vector w normal to the decision boundary that satisfies Eqn. (8). This parameter would result in a tied model score such that wh 4 = wh 1 . When moving to (R 2 ) * we see that the normal cone N [h 4 ,h 1 ] is a ray parallel to w. This ray can be considered as the set of parameter vectors that yield the edge",
"cite_spans": [
{
"start": 25,
"end": 42,
"text": "Cer et al. (2008)",
"ref_id": "BIBREF2"
},
{
"start": 809,
"end": 826,
"text": "Cer et al. (2008)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [
{
"start": 317,
"end": 326,
"text": "(Table 1)",
"ref_id": "TABREF0"
},
{
"start": 485,
"end": 493,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 561,
"end": 569,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 797,
"end": 805,
"text": "Figure 1",
"ref_id": "FIGREF0"
},
{
"start": 967,
"end": 975,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
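The vertex test used in this example can also be checked numerically: a feature vector is a vertex of H exactly when some w makes it the unique argmax, and in two dimensions we can approximate this by sweeping directions around the unit circle. Using the Table 1 values:

```python
import math

# Feature vectors (h_LM, h_TM) for e_1..e_5 from Table 1.
H = [(-0.1, -1.2), (-1.2, -0.2), (-0.9, -1.6), (-0.9, -0.1), (-0.8, -0.9)]

vertices = set()
for k in range(720):  # sample directions w on the unit circle
    theta = 2.0 * math.pi * k / 720
    w = (math.cos(theta), math.sin(theta))
    scores = [w[0] * h0 + w[1] * h1 for h0, h1 in H]
    best = max(scores)
    argmax = [i for i, s in enumerate(scores) if abs(s - best) < 1e-9]
    if len(argmax) == 1:         # unique maximiser: w lies in the interior
        vertices.add(argmax[0])  # of that point's normal cone
print(sorted(vertices))  # → [0, 1, 2, 3]: h_5 (index 4) is never optimal
```

The arcs of directions collected for each index are exactly the (sampled) normal cones, and h 5 never appears because it lies strictly inside the polytope.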
{
"text": "(Figure 1: the feature vectors h 1 , . . . , h 5 plotted in the (h LM , h T M ) plane.)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
{
"text": "[h 4 , h 1 ].",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
{
"text": "The ray is also a decision boundary in (R 2 ) * , with parameters on either side of the decision boundary maximising either h 4 or h 1 . Any vector parallel to the edge [h 4 , h 1 ], such as (h 1 \u2212 h 4 ), can be used to define this decision boundary in (R 2 ) * .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
{
"text": "Next in Part (b), with the same procedure we define the normal cone for the edge [h 3 , h 1 ]. Now both the edges from parts (a) and (b) share the vertex h 1 . This implies that any parameter vector that lies between the two decision boundaries (i.e. between the two rays N [h 3 ,h 1 ] and N [h 4 ,h 1 ] ) would maximise the vertex h 1 : this is the set of vectors that comprise the normal cone N {h 1 } . In Part (c) we have shaded and labelled N {h 1 } . Note that no other edges are needed to define this normal cone; these other edges are redundant to the normal cone's description.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
{
"text": "(Figure 2: step-by-step construction of the normal fan in (R 2 ) * , parts (a)\u2013(d), showing the cones N [h 4 ,h 1 ] , N [h 3 ,h 1 ] , and N {h 1 } , N {h 2 } , N {h 3 } , N {h 4 } .)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
{
"text": "Finally in Part (d) we draw the full fan. We have omitted the axes in (R 2 ) * for clarity. The normal cones for all 4 vertices have been identified.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Drawing a Normal Fan",
"sec_num": "3.2"
},
{
"text": "The previous discussion treated only a single sentence. For a training set of S input sentences, let i be an index vector that contains S elements. Each element is an index i s to a hypothesis and a feature vector for the sth sentence. A particular i specifies a set of hypotheses drawn from each of the K-best lists. LP-MERT builds a set of K S feature vectors associated with S-dimensional index vectors i of the form h i = h 1,i 1 + . . . + h S,i S . The polytope of these feature vectors is then constructed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "In convex geometry this operation is called the Minkowski sum and for the polytopes H s and H t , is defined as (Ziegler, 1995) ",
"cite_spans": [
{
"start": 112,
"end": 127,
"text": "(Ziegler, 1995)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "H s + H t := {h + h : h \u2208 H s , h \u2208 H t } (11)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "We illustrate this operation in the top part of Figure 3 . The Minkowski sum is commutative and associative and generalises to more than two polytopes (Gritzmann and Sturmfels, 1992) .",
"cite_spans": [
{
"start": 152,
"end": 183,
"text": "(Gritzmann and Sturmfels, 1992)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 48,
"end": 57,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
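Eqn. (11) is directly computable for small vertex sets. The coordinates below are hypothetical (the paper does not give Figure 3's numbers), but they are chosen so that, as described for the figure, the 4 \u00d7 3 = 12 pairwise sums contain exactly two coincident pairs, leaving 10 unique points:

```python
from itertools import product

def minkowski_sum(Hs, Ht):
    """All pairwise vertex sums h + h' (Eqn. 11); duplicates can occur."""
    return [tuple(a + b for a, b in zip(h, hp)) for h, hp in product(Hs, Ht)]

H1 = [(2.0, 0.0), (4.0, 0.0), (4.0, 1.0), (2.0, 1.0)]  # 4 vertices
H2 = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]              # 3 vertices

sums = minkowski_sum(H1, H2)
print(len(sums), len(set(sums)))  # → 12 10
```

The coincidences arise because an edge of H1 is parallel to an edge of H2 and of equal length. LP-MERT must consider all K^S such sums, which is why working with the common refinement of the normal fans instead is attractive.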
{
"text": "For the polytopes H s and H t the common refinement (Ziegler, 1995) is",
"cite_spans": [
{
"start": 52,
"end": 67,
"text": "(Ziegler, 1995)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "N (H s ) \u2227 N (H t ) := {N \u2229 N : N \u2208 N (H s ), N \u2208 N (H t )}",
"eq_num": "(12)"
}
],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "Each cone in the common refinement is the set of parameter vectors that maximise two faces in H s and H t . This operation is shown in the bottom part of Figure 3 . As suggested by Figure 3 the Minkowski sum and common refinement are linked by the following Proposition 1.",
"cite_spans": [],
"ref_spans": [
{
"start": 154,
"end": 162,
"text": "Figure 3",
"ref_id": null
},
{
"start": 181,
"end": 189,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "N (H s + H t ) = N (H s ) \u2227 N (H t )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "Proof. See Gritzmann and Sturmfels (1992) . This implies that, with h i defined for the index vector i, the Minkowski sum defines the parameter vectors that satisfy the following (Tsochantaridis et al., 2005, Eqn. 3) . Figure 3 : An example of the equivalence between the Minkowski sum and the common refinement.",
"cite_spans": [
{
"start": 11,
"end": 41,
"text": "Gritzmann and Sturmfels (1992)",
"ref_id": "BIBREF19"
},
{
"start": 177,
"end": 211,
"text": "(Tsochantaridis et al., 2005, Eqn.",
"ref_id": null
}
],
"ref_spans": [
{
"start": 215,
"end": 223,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "w(h s,j \u2212 h s,i s ) \u2264 0, 1 \u2264 s \u2264 S, 1 \u2264 j \u2264 K (13) (Figure 3: the polytopes H 1 , H 2 , and H 1 + H 2 in R 2 with their vertex sums h 1,i + h 2,j , and the normal fans N (H 1 ), N (H 2 ), and their common refinement N (H 1 ) \u2227 N (H 2 ) in (R 2 ) * .)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Set Geometry",
"sec_num": "4"
},
{
"text": "In the top part of Figure 3 we see that computing the Minkowski sum directly gives 12 feature vectors, 10 of which are unique. Each feature vector would have to be tested under LP-MERT. In general there are K S such feature vectors and exhaustive testing is impractical. LP-MERT performs a lazy enumeration of feature vectors as managed through a divide-and-conquer algorithm. We believe that in the worst case the complexity of this algorithm could be O(K S ). Figure 3 shows the computation of the common refinement. The common refinement appears as if one normal fan were superimposed on the other. We can see there are six decision boundaries associated with the six edges of the Minkowski sum. Even in this simple example, we can see that the common refinement is an easier quantity to compute than the Minkowski sum.",
"cite_spans": [],
"ref_spans": [
{
"start": 23,
"end": 31,
"text": "Figure 3",
"ref_id": null
},
{
"start": 466,
"end": 474,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Computing the Minkowski Sum",
"sec_num": "4.1"
},
{
"text": "We now briefly describe the algorithm of Fukuda (2004) that computes the common refinement. Consider the example in Figure 3 . For H 1 and H 2 we have drawn an edge in each polytope with a dashed line. The corresponding decision boundaries in their normal fans have also been drawn with dashed lines. Now consider the vertex h 1,3 + h 2,2 in H = H 1 + H 2 and note it has two incident edges. These edges are parallel to edges in the summand polytopes and correspond to decision boundaries in the normal cone N {h 1,3 +h 2,2 } .",
"cite_spans": [
{
"start": 41,
"end": 54,
"text": "Fukuda (2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [
{
"start": 116,
"end": 124,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "The lower part of",
"sec_num": null
},
{
"text": "We can find the redundant edges in the Minkowski sum by testing the edges suggested by the summand polytopes. If a decision boundary in (R D ) * is redundant, then we can ignore the feature vector that shares the decision boundary. For example h 1,4 + h 2,2 is redundant and the decision boundary N [h 1,3 ,h 1,4 ] is also redundant to the description of the normal cone N {h 1,3 +h 2,2 } . The test for redundant edges can be performed by a linear program.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The lower part of",
"sec_num": null
},
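A minimal sketch of such a redundancy test, using scipy's linprog (an illustrative reconstruction under assumed conventions, not the paper's implementation): a halfspace a_i \u00b7 x \u2264 0 is redundant for a cone if, after dropping it and capping a_i \u00b7 x at 1 to keep the LP bounded, the maximum of a_i \u00b7 x over the remaining constraints is still non-positive.

```python
import numpy as np
from scipy.optimize import linprog

def is_redundant(normals, i, eps=1e-9):
    """Test whether constraint a_i . x <= 0 is implied by the others.

    normals: (m, D) array of outward normals defining the cone a_j . x <= 0.
    """
    m, D = normals.shape
    others = np.delete(normals, i, axis=0)
    # Maximise a_i . x subject to a_j . x <= 0 (j != i) and a_i . x <= 1.
    A_ub = np.vstack([others, normals[i]])
    b_ub = np.append(np.zeros(m - 1), 1.0)
    res = linprog(-normals[i], A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] * D)
    return bool(res.status == 0 and -res.fun <= eps)

# Cone {x <= 0, y <= 0}; the third boundary x + y <= 0 adds nothing.
normals = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
red_xy = is_redundant(normals, 2)  # x + y <= 0 is redundant
red_x = is_redundant(normals, 0)   # x <= 0 is essential
```

Note that linprog's default variable bounds are x \u2265 0, so the explicit (None, None) bounds are required here.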
{
"text": "Given a Minkowski sum H we can define an undirected cyclic graph G(H) = (vert(H), E) where E is the set of edges. The degree of a vertex in G(H) is the number of edges incident to a vertex; \u03b4 is denoted as the maximum degree of the vertices.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The lower part of",
"sec_num": null
},
{
"text": "The linear program for testing redundancy of decision boundaries has a runtime of O(D 3.5 \u03b4) (Fukuda, 2004) . Enumerating the vertices of graph G(H) is not trivial due to it being an undirected and cyclic graph. The solution is to use a reverse search algorithm (Avis and Fukuda, 1993) . Essen- tially reverse search transforms the graph into a tree.",
"cite_spans": [
{
"start": 93,
"end": 107,
"text": "(Fukuda, 2004)",
"ref_id": "BIBREF14"
},
{
"start": 262,
"end": 285,
"text": "(Avis and Fukuda, 1993)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The lower part of",
"sec_num": null
},
{
"text": "The vertex associated with w (0) is denoted as the root of the tree, and from this root vertices are enumerated in reverse order of model score under w (0) . Each branch of the tree can be enumerated independently, which means that the enumeration can be parallelised. The complexity of the full algorithm is O(\u03b4(D 3.5 \u03b4)| vert(H)|) (Fukuda, 2004) . In comparison with the O(K S ) for LP-MERT the worst case complexity of the reverse search algorithm is linear with respect to the size of vert(H).",
"cite_spans": [
{
"start": 152,
"end": 155,
"text": "(0)",
"ref_id": null
},
{
"start": 333,
"end": 347,
"text": "(Fukuda, 2004)",
"ref_id": "BIBREF14"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The lower part of",
"sec_num": null
},
{
"text": "We now explore whether the reverse search algorithm is a practical method for performing MERT using an open source implementation of the algorithm (Weibel, 2010) . For reasons discussed in the next section, we wish to reduce the feature dimension. For M < D, we can define a projection ma-",
"cite_spans": [
{
"start": 147,
"end": 161,
"text": "(Weibel, 2010)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two Dimensional Projected MERT",
"sec_num": "4.2"
},
{
"text": "trix A M +1,D that maps h i \u2208 R D into R M +1 as A M +1,D h i =h i ,h i \u2208 R M +1",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Two Dimensional Projected MERT",
"sec_num": "4.2"
},
{
"text": ". There are technical constraints to be observed, discussed in Waite (2014) . We note that when M = 1 we obtain Eqn. (4).",
"cite_spans": [
{
"start": 63,
"end": 75,
"text": "Waite (2014)",
"ref_id": "BIBREF30"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Two Dimensional Projected MERT",
"sec_num": "4.2"
},
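A small numpy sketch of such a projection (the feature indices and weights are invented for illustration): the rows of A_{M+1,D} select M raw features and append the direction w^(0), so the fixed projected weight (0, \u2026, 0, 1) reproduces the original model score exactly.

```python
import numpy as np

rng = np.random.default_rng(0)
D, M = 12, 2
w0 = rng.normal(size=D)       # trusted initial parameter w^(0)
H = rng.normal(size=(5, D))   # 5 hypothetical feature vectors h_i

# Rows: unit vectors for the two kept features, plus w^(0) itself.
keep = [3, 7]                 # hypothetical indices, e.g. UtoV and WIP
A = np.zeros((M + 1, D))
for r, f in enumerate(keep):
    A[r, f] = 1.0
A[M] = w0

H_proj = H @ A.T              # h-bar_i = A h_i, now in R^{M+1}

# With projected weight (0, 0, 1) the original model scores are recovered.
w_fixed = np.zeros(M + 1)
w_fixed[M] = 1.0
scores_match = np.allclose(H_proj @ w_fixed, H @ w0)
```

Searching over the first two projected coordinates while holding the last at 1 then corresponds to perturbing w^(0) within a plane, which is the setting of the experiment below.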
{
"text": "For our demonstration, we plot the error count over a plane in (R D ) * . Using the CUED Russian-to-English (Pino et al., 2013) entry to WMT'13 (Bojar et al., 2013) we build a tune set of 1502 sentences. The system uses 12 features which we initially tune with lattice MERT (Macherey et al., 2008) to get a parameter w (0) . Using this parameter we generate 1000-best lists. We then project the feature functions in the 1000-best lists to a 3-dimensional representation that includes the source-to-target phrase probability (UtoV), the word insertion penalty (WIP), and the model score due to w (0) . We use the Minkowski sum algorithm to compute BLEU as \u03b3 \u2208 (R 2 ) * is applied to the parameters from w (0) . Figure 4 displays some of the characteristics of the algorithm 1 . This plot can be interpreted as a 3-dimensional version of Figure 3 in Macherey et al. (2008) where we represent the BLEU score as a heatmap instead of a third axis. Execution was on 12 CPU cores, leading to the distinct search regions, demonstrating the parallel nature of the algorithm. Weibel (2010) uses a depth-first enumeration order of G(H), hence the narrow and deep exploration of (R D ) * . A breadth-first ordering would focus on cones closer to w (0) . To our knowledge, this is the first description of a generalised line optimisation algorithm that can search all the parameters in a plane in polynomial time. Extensions to higher dimensional search are straightforward.",
"cite_spans": [
{
"start": 108,
"end": 127,
"text": "(Pino et al., 2013)",
"ref_id": "BIBREF27"
},
{
"start": 144,
"end": 164,
"text": "(Bojar et al., 2013)",
"ref_id": "BIBREF1"
},
{
"start": 274,
"end": 297,
"text": "(Macherey et al., 2008)",
"ref_id": "BIBREF22"
},
{
"start": 319,
"end": 322,
"text": "(0)",
"ref_id": null
},
{
"start": 595,
"end": 598,
"text": "(0)",
"ref_id": null
},
{
"start": 704,
"end": 707,
"text": "(0)",
"ref_id": null
},
{
"start": 848,
"end": 870,
"text": "Macherey et al. (2008)",
"ref_id": "BIBREF22"
},
{
"start": 1236,
"end": 1239,
"text": "(0)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 710,
"end": 718,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 836,
"end": 844,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Two Dimensional Projected MERT",
"sec_num": "4.2"
},
{
"text": "In the previous section we described the Minkowski sum polytope. Let us consider the following upper bound theorem Theorem 1. Let H 1 , . . . ., H S be polytopes in R D with at most N vertices each. Then for D > 2 the upper bound on number of vertices of",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "H 1 +. . .+H S is O(S D\u22121 K 2(D\u22121) ).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "Proof. See Gritzmann and Sturmfels (1992) Each vertex h i corresponds to a single index vector i, which itself corresponds to a single set of selected hypotheses. Therefore the number of distinct sets of hypotheses that can be drawn from the S K-best lists in bounded above by O(min(K S , S D\u22121 K 2(D\u22121) )).",
"cite_spans": [
{
"start": 11,
"end": 41,
"text": "Gritzmann and Sturmfels (1992)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "For low dimension features, i.e. for D :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "S D\u22121 K 2(D\u22121)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "K S , the optimiser is therefore tightly constrained. It cannot pick arbitrarily from the individual K-best lists to optimise the overall BLEU score. We believe this acts as an inherent form of regularisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "For example, in the system of Section 4.2 (D=12, S=1502, K=1000), only 10 \u22124403 percent of the K S possible index vectors are feasible. However, if the feature dimension D is increased to D = 493, then S D\u22121 K 2(D\u22121) K S and this inherent regularisation is no longer at work: any index vector is feasible, and sentence hypotheses can chosen arbitrarily to optimise the overall BLEU score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
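The arithmetic behind these claims can be checked directly in log space (a sketch; the constants follow the section's D = 12, S = 1502, K = 1000 example):

```python
import math

S, K, D = 1502, 1000, 12

log10_possible = S * math.log10(K)  # log10 of K^S
log10_feasible = (D - 1) * (math.log10(S) + 2 * math.log10(K))

# Feasible fraction of index vectors, expressed as a percentage exponent.
percent_exponent = round(log10_feasible - log10_possible + 2)

# Smallest D at which the vertex bound overtakes K^S (regularisation lost).
crossover_D = next(
    d for d in range(2, 2000)
    if (d - 1) * (math.log10(S) + 2 * math.log10(K)) >= log10_possible
)
```

percent_exponent comes out at -4403 and crossover_D at 493, matching the figures quoted in the text.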
{
"text": "This exponential relationship of feasible solutions with respect to feature dimension can be seen in Figure 6 of Galley and Quirk (2011) . At low feature dimension, they find that the LP-MERT algorithm can run to completion for a training set size of hundreds of sentences. As feature dimension increases, the runtime increases exponentially.",
"cite_spans": [
{
"start": 101,
"end": 136,
"text": "Figure 6 of Galley and Quirk (2011)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "PRO and other ranking methods are similarly constrained for low dimensional feature vectors. Proof. This is a special case of the upper bound theorem. See Ziegler (1995, Theorem 8.23 ).",
"cite_spans": [
{
"start": 155,
"end": 182,
"text": "Ziegler (1995, Theorem 8.23",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "Each feasible pairwise ranking of pairs of hypotheses corresponds to an edge in the Minkowski sum polytope. Therefore in low dimension ranking methods also benefit from this inherent regularisation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "For higher dimensional feature vectors, these upper bounds no longer guarantee that this inherent regularisation is at work. The analysis suggests -but does not imply -that index vectors, and their corresponding solutions, can be picked arbitrarily from the K-best lists. For MERT overtraining is clearly a risk.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "MIRA and related methods have a regularisation mechanism due to the margin maximisation term in",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "N[ h4 ,h 2 ] N [ h 4 , h 1 ] N {h 4 } N {h 2 } N {h 1 } w (0) w (2) w (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "w Figure 5 : We redraw the normal fan from Figure 2 with potential optimal parameters under the 2 regularisation scheme of Galley et al. (2013) marked. The thick red line is the subspace of (R 2 ) * optimised. The dashed lines mark the distances between the decision boundaries and w (0) . their objective functions. Although this form of regularisation may be helpful in practice, there is no guarantee that it will prevent overtraining due to the exponential increase in feasible solutions. For example the adaptive learning rate method of Green et al. (2013) finds gains of over 13 BLEU points in the training set with the addition of 390,000 features, yet only 2 to 3 BLEU points are found in the test set.",
"cite_spans": [
{
"start": 123,
"end": 143,
"text": "Galley et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 284,
"end": 287,
"text": "(0)",
"ref_id": null
},
{
"start": 542,
"end": 561,
"text": "Green et al. (2013)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [
{
"start": 2,
"end": 10,
"text": "Figure 5",
"ref_id": null
},
{
"start": 43,
"end": 51,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Robustness of Linear Models",
"sec_num": "5"
},
{
"text": "The above analysis suggest a need for regularisation in training with high dimensional feature vectors. Galley et al. (2013) note that regularisation is hard to apply to linear models due to the magnitude invariance of w in Eqn. (1). Figure 2 makes the difficulty clear: the normal cones are determined entirely by the feature vectors of the training samples, and within any particular normal cone a parameter vector can be chosen with arbitrary magnitude. This renders schemes such as L1 or L2 normalisation ineffective. To avoid this, Galley et al. (2013) describe a regularisation scheme for line optimisation that encourages the optimal parameter to be found close to w (0) . The motivation is that w (0) should be a trusted initial point, perhaps taken from a lowerdimensional model. We briefly discuss the challenges of doing this sort of regularisation in MERT. In Figure 5 we reproduce the normal fan from Figure 2 . In this diagram we represent the set of parameters considered by a line optimisation as a thick red line. Let us assume that both e 1 and e 2 have a similarly low error count. Under the regularisation scheme of Galley et al. (2013) we have a choice of w (1) or w (2) , which are equidistant from w (0) . In this affine projection of parameter space it is unclear which one is the optimum. However, if we consider the normal fan as a whole we can clearly see that w \u2208 N {h i } is the optimal point under the regularisation. However, it is not obvious in the projected parameter space that\u0175 is the better choice. This analysis suggests that direct intervention, e.g. monitoring BLEU on a held-out set, may be more effective in avoiding overtraining.",
"cite_spans": [
{
"start": 104,
"end": 124,
"text": "Galley et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 537,
"end": 557,
"text": "Galley et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 674,
"end": 677,
"text": "(0)",
"ref_id": null
},
{
"start": 1136,
"end": 1156,
"text": "Galley et al. (2013)",
"ref_id": "BIBREF16"
},
{
"start": 1179,
"end": 1182,
"text": "(1)",
"ref_id": null
},
{
"start": 1188,
"end": 1191,
"text": "(2)",
"ref_id": null
},
{
"start": 1223,
"end": 1226,
"text": "(0)",
"ref_id": null
}
],
"ref_spans": [
{
"start": 234,
"end": 242,
"text": "Figure 2",
"ref_id": "FIGREF1"
},
{
"start": 872,
"end": 880,
"text": "Figure 5",
"ref_id": null
},
{
"start": 914,
"end": 922,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "A Note on Regularisation",
"sec_num": "5.1"
},
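The magnitude invariance that defeats norm-based penalties is easy to verify: scaling w by any positive constant leaves every argmax decision, and hence the error count, unchanged. A toy check with invented feature vectors:

```python
import numpy as np

# Hypothetical feature vectors for one sentence's K-best list.
H = np.array([[0.2, 1.0],
              [0.9, 0.4],
              [0.5, 0.5]])
w = np.array([1.0, -0.3])

best = int(np.argmax(H @ w))
best_scaled = int(np.argmax(H @ (1000.0 * w)))  # same normal cone, same choice
same_choice = best == best_scaled
```

A penalty on the norm of w therefore cannot distinguish between parameters inside the same normal cone, which is why the scheme discussed above instead penalises distance from w^(0).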
{
"text": "The main contribution of this work is to present a novel geometric description of MERT. We show that it is possible to enumerate all the feasible solutions of a linear model in polynomial time using this description. The immediate conclusion from this work is that the current methods for estimating linear models as done in SMT works best for low dimensional feature vectors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We can consider the SMT linear model as a member of a family of linear models where the output values are highly structured, and where each input yields a candidate space of possible output values. We have already noted that the constraints in (13) are shared with the structured-SVM (Tsochantaridis et al., 2005) , and we can also see the same constraints in Eqn. 3 of Collins (2002) . It is our belief that our analysis is applicable to all models in this family and extends far beyond the discussion of SMT here.",
"cite_spans": [
{
"start": 284,
"end": 313,
"text": "(Tsochantaridis et al., 2005)",
"ref_id": "BIBREF29"
},
{
"start": 370,
"end": 384,
"text": "Collins (2002)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We note that the upper bound on feasible solutions increases polynomially in training set size S, whereas the number of possible solutions increases exponentially in S. The result is that the ratio of feasible to possible solutions decreases with S. Our analysis suggests that inherent regularisation should be improved by increasing training set size. This confirms most researchers intuition, with perhaps even larger training sets needed than previously believed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "Another avenue to prevent overtraining would be to project high-dimensional feature sets to low dimensional feature sets using the technique described in Section 4.1. We could then use existing training methods to optimise over the projected feature vec-tors.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "We also note that non-linear models methods, such as neural networks (Schwenk et al., 2006; Kalchbrenner and Blunsom, 2013; Devlin et al., 2014; Cho et al., 2014) and decision forests (Criminisi et al., 2011) are not bound by these analyses. In particular neural networks are non-linear functions of the features, and decision forests actively reduce the number of features for individual trees in the forrest. From the perspective of this paper, the recent improvements in SMT due to neural networks are well motivated.",
"cite_spans": [
{
"start": 69,
"end": 91,
"text": "(Schwenk et al., 2006;",
"ref_id": "BIBREF28"
},
{
"start": 92,
"end": 123,
"text": "Kalchbrenner and Blunsom, 2013;",
"ref_id": "BIBREF21"
},
{
"start": 124,
"end": 144,
"text": "Devlin et al., 2014;",
"ref_id": "BIBREF11"
},
{
"start": 145,
"end": 162,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF7"
},
{
"start": 184,
"end": 208,
"text": "(Criminisi et al., 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Discussion",
"sec_num": "6"
},
{
"text": "A replication of this experiment forms part of the UCAM-SMT tutorial at http://ucam-smt.github.io",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This research was supported by a doctoral training account from the Engineering and Physical Sciences Research Council.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Reverse search for enumeration",
"authors": [
{
"first": "David",
"middle": [],
"last": "Avis",
"suffix": ""
},
{
"first": "Komei",
"middle": [],
"last": "Fukuda",
"suffix": ""
}
],
"year": 1993,
"venue": "Discrete Applied Mathematics",
"volume": "65",
"issue": "",
"pages": "21--46",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Avis and Komei Fukuda. 1993. Reverse search for enumeration. Discrete Applied Mathematics, 65:21-46.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Findings of the 2013 Workshop on Statistical Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Radu",
"middle": [],
"last": "Soricut",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Eighth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "1--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Buck, Chris Callison-Burch, Christian Federmann, Barry Haddow, Philipp Koehn, Christof Monz, Matt Post, Radu Soricut, and Lucia Specia. 2013. Findings of the 2013 Workshop on Statistical Machine Translation. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 1-44, Sofia, Bulgaria, August. Association for Computational Linguistics.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Regularization and search for minimum error rate training",
"authors": [
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Dan",
"middle": [],
"last": "Jurafsky",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Third Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "26--34",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Daniel Cer, Dan Jurafsky, and Christopher D. Manning. 2008. Regularization and search for minimum error rate training. In Proceedings of the Third Workshop on Statistical Machine Translation, pages 26-34, Colum- bus, Ohio, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Batch tuning strategies for statistical machine translation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "427--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and George Foster. 2012. Batch tuning strategies for statistical machine translation. In Pro- ceedings of the 2012 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 427-436, Montr\u00e9al, Canada, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Online large-margin training of syntactic and structural translation features",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "224--233",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Yuval Marton, and Philip Resnik. 2008. Online large-margin training of syntactic and struc- tural translation features. In Proceedings of the 2008 Conference on Empirical Methods in Natural Lan- guage Processing, pages 224-233, Honolulu, Hawaii, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "001 new features for statistical machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the Association for Computational Linguistics",
"volume": "11",
"issue": "",
"pages": "218--226",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang, Kevin Knight, and Wei Wang. 2009. 11,001 new features for statistical machine transla- tion. In Proceedings of Human Language Technolo- gies: The 2009 Annual Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics, pages 218-226, Boulder, Colorado, June. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Hope and fear for discriminative training of statistical translation models",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2012,
"venue": "The Journal of Machine Learning Research",
"volume": "13",
"issue": "1",
"pages": "1159--1187",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2012. Hope and fear for discriminative training of statistical translation models. The Journal of Machine Learning Research, 13(1):1159-1187.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Learning phrase representations using RNN encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "1724--1734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gulcehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase repre- sentations using RNN encoder-decoder for statistical machine translation. In Proceedings of the 2014 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Discriminative training methods for hidden Markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden Markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the 2002 Conference on Empirical Methods in Natu- ral Language Processing, pages 1-8. Association for Computational Linguistics, July.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
}
],
"year": 2006,
"venue": "The Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. The Journal of Machine Learn- ing Research, 7:551-585.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Decision forests for classification, regression, density estimation, manifold learning and semi-supervised learning",
"authors": [
{
"first": "A",
"middle": [],
"last": "Criminisi",
"suffix": ""
},
{
"first": "J",
"middle": [],
"last": "Shotton",
"suffix": ""
},
{
"first": "E",
"middle": [],
"last": "Konukoglu",
"suffix": ""
}
],
"year": 2011,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "A. Criminisi, J. Shotton, and E. Konukoglu. 2011. Deci- sion forests for classification, regression, density esti- mation, manifold learning and semi-supervised learn- ing. Technical report, Microsoft Research.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fast and robust neural network joint models for statistical machine translation",
"authors": [
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Rabih",
"middle": [],
"last": "Zbib",
"suffix": ""
},
{
"first": "Zhongqiang",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Lamar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1370--1380",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jacob Devlin, Rabih Zbib, Zhongqiang Huang, Thomas Lamar, Richard Schwartz, and John Makhoul. 2014. Fast and robust neural network joint models for sta- tistical machine translation. In Proceedings of the 52nd Annual Meeting of the Association for Compu- tational Linguistics (Volume 1: Long Papers), pages 1370-1380, Baltimore, Maryland, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "APRO: Allpairs ranking optimization for MT tuning",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Dreyer",
"suffix": ""
},
{
"first": "Yuanzhe",
"middle": [],
"last": "Dong",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Dreyer and Yuanzhe Dong. 2015. APRO: All- pairs ranking optimization for MT tuning. In Proceed- ings of the 2015 Conference of the North American Chapter of the Association for Computational Linguis- tics: Human Language Technologies.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Large-scale discriminative training for statistical machine translation using held-out line search",
"authors": [
{
"first": "Jeffrey",
"middle": [],
"last": "Flanigan",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Jaime",
"middle": [],
"last": "Carbonell",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "248--258",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jeffrey Flanigan, Chris Dyer, and Jaime Carbonell. 2013. Large-scale discriminative training for statistical ma- chine translation using held-out line search. In Pro- ceedings of the 2013 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 248-258, Atlanta, Georgia, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "From the zonotope construction to the Minkowski addition of convex polytopes",
"authors": [
{
"first": "Komei",
"middle": [],
"last": "Fukuda",
"suffix": ""
}
],
"year": 2004,
"venue": "Journal of Symbolic Computation",
"volume": "38",
"issue": "4",
"pages": "1261--1272",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Komei Fukuda. 2004. From the zonotope construction to the Minkowski addition of convex polytopes. Journal of Symbolic Computation, 38(4):1261-1272.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Optimal search for minimum error rate training",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "38--49",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley and Chris Quirk. 2011. Optimal search for minimum error rate training. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 38-49, Edinburgh, Scot- land, UK., July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Regularized minimum error rate training",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Quirk",
"suffix": ""
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1948--1959",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley, Chris Quirk, Colin Cherry, and Kristina Toutanova. 2013. Regularized minimum error rate training. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, pages 1948-1959, Seattle, Washington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Structured ramp loss minimization for machine translation",
"authors": [
{
"first": "Kevin",
"middle": [],
"last": "Gimpel",
"suffix": ""
},
{
"first": "Noah",
"middle": [
"A"
],
"last": "Smith",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "221--231",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kevin Gimpel and Noah A. Smith. 2012. Structured ramp loss minimization for machine translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 221-231, Montr\u00e9al, Canada, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fast and adaptive online training of feature-rich translation models",
"authors": [
{
"first": "Spence",
"middle": [],
"last": "Green",
"suffix": ""
},
{
"first": "Sida",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Cer",
"suffix": ""
},
{
"first": "Christopher",
"middle": [
"D"
],
"last": "Manning",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "311--321",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Spence Green, Sida Wang, Daniel Cer, and Christo- pher D. Manning. 2013. Fast and adaptive online training of feature-rich translation models. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), pages 311-321, Sofia, Bulgaria, August. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Minkowski addition of polytopes: Computational complexity and applications to Gr\u00f6bner bases",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Gritzmann",
"suffix": ""
},
{
"first": "Bernd",
"middle": [],
"last": "Sturmfels",
"suffix": ""
}
],
"year": 1992,
"venue": "SIAM Journal on Discrete Mathematics",
"volume": "6",
"issue": "2",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Gritzmann and Bernd Sturmfels. 1992. Minkowski addition of polytopes: Computational complexity and applications to Gr\u00f6bner bases. SIAM Journal on Dis- crete Mathematics, 6(2).",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Association for Computational Linguistics",
"authors": [
{
"first": "Mark",
"middle": [],
"last": "Hopkins",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1352--1362",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mark Hopkins and Jonathan May. 2011. Tuning as rank- ing. In Proceedings of the 2011 Conference on Empir- ical Methods in Natural Language Processing, pages 1352-1362, Edinburgh, Scotland, UK., July. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Recurrent continuous translation models",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Phil",
"middle": [],
"last": "Blunsom",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1700--1709",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recur- rent continuous translation models. In Proceedings of the 2013 Conference on Empirical Methods in Natu- ral Language Processing, pages 1700-1709, Seattle, Washington, USA, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Lattice-based minimum error rate training for statistical machine translation",
"authors": [
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Ignacio",
"middle": [],
"last": "Thayer",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "725--734",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wolfgang Macherey, Franz Och, Ignacio Thayer, and Jakob Uszkoreit. 2008. Lattice-based minimum error rate training for statistical machine translation. In Pro- ceedings of the 2008 Conference on Empirical Meth- ods in Natural Language Processing, pages 725-734, Honolulu, Hawaii, October. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimi- native training and maximum entropy models for sta- tistical machine translation. In Proceedings of 40th",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 295-302, Philadelphia, Pennsylva- nia, USA, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting of the Association for Compu- tational Linguistics, pages 160-167, Sapporo, Japan, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of 40th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylva- nia, USA, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "The University of Cambridge Russian-English system at WMT13",
"authors": [
{
"first": "Juan",
"middle": [],
"last": "Pino",
"suffix": ""
},
{
"first": "Aurelien",
"middle": [],
"last": "Waite",
"suffix": ""
},
{
"first": "Tong",
"middle": [],
"last": "Xiao",
"suffix": ""
},
{
"first": "Adri\u00e0",
"middle": [],
"last": "De Gispert",
"suffix": ""
},
{
"first": "Federico",
"middle": [],
"last": "Flego",
"suffix": ""
},
{
"first": "William",
"middle": [],
"last": "Byrne",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the Eighth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "200--205",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Juan Pino, Aurelien Waite, Tong Xiao, Adri\u00e0 de Gis- pert, Federico Flego, and William Byrne. 2013. The University of Cambridge Russian-English system at WMT13. In Proceedings of the Eighth Workshop on Statistical Machine Translation, pages 200-205, Sofia, Bulgaria, August. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Continuous space language models for statistical machine translation",
"authors": [
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Dechelotte",
"suffix": ""
},
{
"first": "Jean-Luc",
"middle": [],
"last": "Gau",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions",
"volume": "",
"issue": "",
"pages": "723--730",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Holger Schwenk, Daniel Dechelotte, and Jean-Luc Gau- vain. 2006. Continuous space language models for statistical machine translation. In Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions, pages 723-730, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Large margin methods for structured and interdependent output variables",
"authors": [
{
"first": "Ioannis",
"middle": [],
"last": "Tsochantaridis",
"suffix": ""
},
{
"first": "Thorsten",
"middle": [],
"last": "Joachims",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
},
{
"first": "Yasemin",
"middle": [],
"last": "Altun",
"suffix": ""
}
],
"year": 2005,
"venue": "In Journal of Machine Learning Research",
"volume": "",
"issue": "",
"pages": "1453--1484",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ioannis Tsochantaridis, Thorsten Joachims, Thomas Hof- mann, and Yasemin Altun. 2005. Large margin methods for structured and interdependent output vari- ables. In Journal of Machine Learning Research, pages 1453-1484.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "The Geometry of Statistical Machine Translation",
"authors": [
{
"first": "Aurelien",
"middle": [],
"last": "Waite",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aurelien Waite. 2014. The Geometry of Statistical Ma- chine Translation. Ph.D. thesis, University of Cam- bridge, Cambridge, United Kingdom.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Online large-margin training for statistical machine translation",
"authors": [
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Jun",
"middle": [],
"last": "Suzuki",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "764--773",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Taro Watanabe, Jun Suzuki, Hajime Tsukada, and Hideki Isozaki. 2007. Online large-margin training for sta- tistical machine translation. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning, pages 764-773.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Implementation and parallelization of a reverse-search algorithm for minkowski sums",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Weibel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 12th Workshop on Algorithm Engineering and Experiments (ALENEX 2010)",
"volume": "",
"issue": "",
"pages": "34--42",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Weibel. 2010. Implementation and paral- lelization of a reverse-search algorithm for minkowski sums. In Proceedings of the 12th Workshop on Algo- rithm Engineering and Experiments (ALENEX 2010), pages 34-42. SIAM.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Lectures on Polytopes",
"authors": [
{
"first": "G",
"middle": [],
"last": "Ziegler",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "G Ziegler. 1995. Lectures on Polytopes. Springer- Verlag.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"uris": null,
"text": "A geometric interpretation of LP-MERT (afterCer et al. (2008) andGalley and Quirk (2011)). The decision boundary represented by the dashed line intersects the polytope at only h 4 , making it a vertex. No decision boundary intersects h 5 without intersecting other points in the polytope, making h 5 redundant.",
"type_str": "figure",
"num": null
},
"FIGREF1": {
"uris": null,
"text": "Drawing the Normal Fan. See the description in Section 3.2. The end result in the r.h.s. of Part (d) reproduces Figure 1 from Cer et al. (2008), identifying the normal cones for all vertices. normal cone of the vertex N {h 1 } .",
"type_str": "figure",
"num": null
},
"FIGREF2": {
"uris": null,
"text": "The BLEU score over a 1502 sentence tune set for the CUED Russian-to-English(Pino et al., 2013) system over two parameters. Enumerated vertices of the Minkowski sum are shown in the shaded regions.",
"type_str": "figure",
"num": null
},
"FIGREF3": {
"uris": null,
"text": "Theorem 2. If H is a D-dimensional polytope, then for D \u2265 3 the following is an upper bound on the number",
"type_str": "figure",
"num": null
},
"TABREF0": {
"text": "",
"type_str": "table",
"content": "<table><tr><td>: An example set of two dimensional fea-</td></tr><tr><td>ture vectors (after Cer et al. (2008), Table 1) with</td></tr><tr><td>language model (h LM ) and translation model (h T M )</td></tr><tr><td>components. A fifth feature vector has been added</td></tr><tr><td>to illustrate redundancy.</td></tr><tr><td>modified system from (5) is feasible</td></tr></table>",
"num": null,
"html": null
},
"TABREF1": {
"text": "the parameters that maximise h s,i and those that maximise h s,j . Normal Cone For the face F in polytope H s the normal cone N F takes the form.",
"type_str": "table",
"content": "<table/>",
"num": null,
"html": null
}
}
}
}