| { |
| "paper_id": "H05-1021", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T03:33:34.024637Z" |
| }, |
| "title": "Local Phrase Reordering Models for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Shankar", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "skumar@jhu.edu" |
| }, |
| { |
| "first": "William", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We describe stochastic models of local phrase movement that can be incorporated into a Statistical Machine Translation (SMT) system. These models provide properly formulated, non-deficient, probability distributions over reordered phrase sequences. They are implemented by Weighted Finite State Transducers. We describe EM-style parameter re-estimation procedures based on phrase alignment under the complete translation model incorporating reordering. Our experiments show that the reordering model yields substantial improvements in translation performance on Arabic-to-English and Chinese-to-English MT tasks. We also show that the procedure scales as the bitext size is increased.", |
| "pdf_parse": { |
| "paper_id": "H05-1021", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We describe stochastic models of local phrase movement that can be incorporated into a Statistical Machine Translation (SMT) system. These models provide properly formulated, non-deficient, probability distributions over reordered phrase sequences. They are implemented by Weighted Finite State Transducers. We describe EM-style parameter re-estimation procedures based on phrase alignment under the complete translation model incorporating reordering. Our experiments show that the reordering model yields substantial improvements in translation performance on Arabic-to-English and Chinese-to-English MT tasks. We also show that the procedure scales as the bitext size is increased.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Word and Phrase Reordering is a crucial component of Statistical Machine Translation (SMT) systems. However, allowing reordering in translation is computationally expensive and in some cases even provably NP-complete (Knight, 1999) . Therefore any translation scheme that incorporates reordering must necessarily balance model complexity against the ability to realize the model without approximation. In this paper our goal is to formulate models of local phrase reordering in such a way that they can be embedded inside a generative phrase-based model of translation (Kumar et al., 2005) . Although this model of reordering is somewhat limited and cannot capture all possible phrase movement, it forms a proper parameterized probability distribution over reorderings of phrase sequences. We show that with this model it is possible to perform Maximum A Posteriori (MAP) decoding (with pruning) and Expectation Maximization (EM) style re-estimation of model parameters over large bitext collections.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 230, |
| "text": "(Knight, 1999)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 568, |
| "end": 588, |
| "text": "(Kumar et al., 2005)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We now discuss prior work on word and phrase reordering in translation. We focus on SMT systems that do not require phrases to form syntactic constituents.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The IBM translation models (Brown et al., 1993) describe word reordering via a distortion model defined over word positions within sentence pairs. The Alignment Template Model (Och et al., 1999) uses phrases rather than words as the basis for translation, and defines movement at the level of phrases. Phrase reordering is modeled as a first order Markov process with a single parameter that controls the degree of movement.", |
| "cite_spans": [ |
| { |
| "start": 27, |
| "end": 47, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 176, |
| "end": 194, |
| "text": "(Och et al., 1999)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Our current work is inspired by the block (phrase-pair) orientation model introduced by Tillmann (2004) in which reordering allows neighboring blocks to swap. This is described as a sequence of orientations (left, right, neutral) relative to the monotone block order. Model parameters are block-specific and estimated over word-aligned training bitext using simple heuristics.", |
| "cite_spans": [ |
| { |
| "start": 97, |
| "end": 103, |
| "text": "(2004)", |
| "ref_id": null |
| }, |
| { |
| "start": 207, |
| "end": 229, |
| "text": "(left, right, neutral)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Other researchers (Vogel, 2003; Zens and Ney, 2003; Zens et al., 2004) have reported performance gains in translation by allowing deviations from monotone word and phrase order. In these cases, reordering is not governed by an explicit probabilistic model over reordered phrases; a language model is employed to select the translation hypothesis. We also note the prior work of Wu (1996) , closely related to Tillmann's model.", |
| "cite_spans": [ |
| { |
| "start": 18, |
| "end": 31, |
| "text": "(Vogel, 2003;", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 32, |
| "end": 51, |
| "text": "Zens and Ney, 2003;", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 52, |
| "end": 70, |
| "text": "Zens et al., 2004)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 567, |
| "end": 576, |
| "text": "Wu (1996)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The Translation Template Model (TTM) is a generative model of phrase-based translation (Brown et al., 1993) . Bitext is described via a stochastic process that generates source (English) sentences and transforms them into target (French) sentences (Fig 1 and Eqn 1) .", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 107, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 248, |
| "end": 266, |
| "text": "(Fig 1 and Eqn 1)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The WFST Reordering Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "P (f J 1 , v R 1 , d K 0 , c K 0 , y K 1 , x K 1 , u K 1 , K, e I 1 ) = P (e I 1 ) [Source Language Model G] \u00d7 P (u K 1 , K|e I 1 ) [Source Phrase Segmentation W ] \u00d7 P (x K 1 |u K 1 , K, e I 1 ) [Phrase Translation and Reordering R] \u00d7 P (v R 1 , d K 0 , c K 0 , y K 1 |x K 1 , u K 1 , K, e I 1 ) [Target Phrase Insertion \u03a6] \u00d7 P (f J 1 |v R 1 , d K 0 , c K 0 , y K 1 , x K 1 , u K 1 , K, e I 1 ) [Target Phrase Segmentation \u2126] (1)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The WFST Reordering Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The TTM relies on a Phrase-Pair Inventory (PPI) consisting of target language phrases and their source language translations. Translation is modeled via component distributions realized as WFSTs (Fig 1 and Eqn 1) : Source Language Model (G), Source Phrase Segmentation (W ), Phrase Translation and Reordering (R), Target Phrase Insertion (\u03a6), and Target Phrase Segmentation (\u2126) (Kumar et al., 2005) .", |
| "cite_spans": [ |
| { |
| "start": 378, |
| "end": 398, |
| "text": "(Kumar et al., 2005)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 195, |
| "end": 205, |
| "text": "(Fig 1 and", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "The WFST Reordering Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "TTM Reordering Previously, the TTM was formulated with reordering prior to translation; here, we perform reordering of phrase sequences following translation. Reordering prior to translation was found to be memory intensive and unwieldy (Kumar et al., 2005) . In contrast, we will show that the current model can be used for both phrase alignment and translation.", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 257, |
| "text": "(Kumar et al., 2005)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The WFST Reordering Model", |
| "sec_num": "2" |
| }, |
| { |
| "text": "We now describe two WFSTs that allow local reordering within phrase sequences. The simplest allows swapping of adjacent phrases. The second allows phrase movement within a three phrase window. Our formulation ensures that the overall model provides a proper parameterized probability distribution over reordered phrase sequences; we emphasize that the resulting distribution is not degenerate.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Phrase reordering (Fig 2) takes as its input a French phrase sequence in English phrase order x 1 , x 2 , ..., x K . This is then reordered into French phrase order y 1 , y 2 , ..., y K . Note that words within phrases are not affected.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 18, |
| "end": 25, |
| "text": "(Fig 2)", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We make the following conditional independence assumption:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (y K 1 |x K 1 , u K 1 , K, e I 1 ) = P (y K 1 |x K 1 , u K 1 ).", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Given an input phrase sequence x K 1 we now associate a unique jump sequence b K 1 with each permissible output phrase sequence y K 1 . The jump b k measures the displacement of the k th phrase x k , i.e.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "x k \u2192 y k+b k , k \u2208 {1, 2, ..., K}.", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "The jump sequence b K 1 is constructed such that y K 1 is a permutation of x K 1 . This is enforced by constructing all models so that K k=1 b k = 0. We now redefine the model in terms of the jump sequence; note that y K 1 is determined by x K 1 and b K 1 . Each jump b k depends on the phrase-pair (x k , u k ) and the preceding jumps b k\u22121 1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (y K 1 |x K 1 , u K 1 )", |
| "eq_num": "(4)" |
| } |
| ], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "= P (b K 1 |x K 1 , u K 1 ) if y k+b k = x k \u2200k, and 0 otherwise.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (b K 1 |x K 1 , u K 1 ) = K k=1 P (b k |x k , u k , \u03c6 k\u22121 ),", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where \u03c6 k\u22121 is an equivalence classification (state) of the jump sequence b k\u22121 1 . The jump sequence b K 1 can be described by a deterministic finite state machine. \u03c6(b k\u22121 1 ) is the state arrived at by b k\u22121 1 ; we will use \u03c6 k\u22121 to denote \u03c6(b k\u22121 1 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We will investigate phrase reordering by restricting the maximum allowable jump to 1 phrase and to 2 phrases; we will refer to these reordering models as MJ-1 and MJ-2. In the first case, b k \u2208 {0, +1, \u22121} while in the second case, b k \u2208 {0, +1, \u22121, +2, \u22122}.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The Phrase Reordering Model", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "We first present the Finite State Machine of the phrase reordering process (Fig 3) which has two equivalence classes (FSM states) for any given history b k\u22121 1 ; \u03c6(b k\u22121 1 ) \u2208 {1, 2}. A jump of +1 has to be followed by a jump of \u22121, and 1 is the start and end state; this ensures K k=1 b k = 0. Under this restriction, the probability of the jump b k (Eqn 5) can be simplified as", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 75, |
| "end": 82, |
| "text": "(Fig 3)", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "P (b k |x k , u k , \u03c6(b k\u22121 1 )) = \u03b2 1 (x k , u k ) if b k = +1, \u03c6 k\u22121 = 1; 1 \u2212 \u03b2 1 (x k , u k ) if b k = 0, \u03c6 k\u22121 = 1; 1 if b k = \u22121, \u03c6 k\u22121 = 2. (6)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "There is a single parameter jump probability \u03b2 1 (x, u) = P (b = +1|x, u) associated with each phrase-pair (x, u) in the phrase-pair inventory. This is the probability that the phrase-pair (x, u) appears out of order in the transformed phrase sequence. We now describe the MJ-1 WFST. In the presentation, we use upper-case letters to denote the English phrases (u k ) and lower-case letters to denote the French phrases (x k and y k ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The PPI for this example is given in Table 1 . Table 1 : Example phrase-pair inventory with translation and reordering probabilities.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 37, |
| "end": 44, |
| "text": "Table 1", |
| "ref_id": null |
| }, |
| { |
| "start": 47, |
| "end": 54, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "English (u) / French (x) / P (x|u) / \u03b2 1 (x, u): A / a / 0.5 / 0.2; A / d / 0.5 / 0.2; B / b / 1.0 / 0.4; C / c / 1.0 / 0.3; D / d / 1.0 / 0.8", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The input to the WFST (Fig 4) is a lattice of French phrase sequences derived from the French sentence to be translated. The outputs are the corresponding English phrase sequences. Note that the reordering is performed on the English side.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 22, |
| "end": 29, |
| "text": "(Fig 4)", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The WFST is constructed by adding a self-loop for each French phrase in the input lattice, and a 2-arc path for every pair of adjacent French phrases in the lattice. The WFST incorporates the translation model P (x|u) and the reordering model P (b|x, u). The score on a self-loop with labels (u, x) is P (x|u) \u00d7 (1 \u2212 \u03b2 1 (x, u)); on a 2-arc path with labels (u 1 , x 1 ) and (u 2 , x 2 ), the score on the 1st arc is P (x 2 |u 1 ) \u00d7 \u03b2 1 (x 2 , u 1 ) and on the 2nd arc is P (x 1 |u 2 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "In this example, the input to this transducer is a single French phrase sequence V : a, b, c. We perform the WFST composition R\u2022V , project the result on the input labels, and remove the epsilons to form the acceptor (R\u2022V ) 1 which contains the six English phrase sequences (Fig 4) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 274, |
| "end": 281, |
| "text": "(Fig 4)", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Translation Given a French sentence, a lattice of translations is obtained using the weighted finite state composition:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "T = G \u2022 W \u2022 R \u2022 \u03a6 \u2022 \u2126 \u2022 T .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "The most-likely translation is obtained as the path with the highest probability in T .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "Alignment Given a sentence-pair (E, F ), a lattice of phrase alignments is obtained by the finite state composition: S is an acceptor for the English sentence E, and T is an acceptor for the French sentence F . The Viterbi alignment is found as the path with the highest probability in B. The WFST composition gives the word-to-word alignments between the sentences. However, to obtain the phrase alignments, we need to construct additional FSTs not described here.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "B = S \u2022 W \u2022 R \u2022 \u03a6 \u2022 \u2126 \u2022 T ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-1", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "MJ-2 reordering restricts the maximum allowable jump to 2 phrases and also insists that the reordering take place within a window of 3 phrases. This latter condition implies that for an input sequence {a, b, c, d}, we disallow the three output sequences: The jump probability of Eqn 5 becomes", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-2", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "{b, d,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-2", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "P (b k |x k , u k , \u03c6 k\u22121 ) = \u03b2 1 (x k , u k ) if b k = 1, \u03c6 k\u22121 = 1; \u03b2 2 (x k , u k ) if b k = 2, \u03c6 k\u22121 = 1; 1 \u2212 \u03b2 1 (x k , u k ) \u2212 \u03b2 2 (x k , u k ) if b k = 0, \u03c6 k\u22121 = 1 (7); \u03b2 1 (x k , u k ) if b k = 1, \u03c6 k\u22121 = 2; 1 \u2212 \u03b2 1 (x k , u k ) if b k = \u22121, \u03c6 k\u22121 = 2 (8); 0.5 if b k = 0, \u03c6 k\u22121 = 3; 0.5 if b k = \u22121, \u03c6 k\u22121 = 3.", |
| "eq_num": "(9)" |
| } |
| ], |
| "section": "Reordering WFST for MJ-2", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "1 if b k = \u22122, \u03c6 k\u22121 = 4 (10); 1 if b k = \u22122, \u03c6 k\u22121 = 5 (11); 1 if b k = \u22121, \u03c6 k\u22121 = 6", |
| "eq_num": "(12)" |
| } |
| ], |
| "section": "Reordering WFST for MJ-2", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "We note that the distributions (Eqns 7 and 8) are based on two parameters \u03b2 1 (x, u) and \u03b2 2 (x, u) for each phrase-pair (x, u).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-2", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "Suppose the input is a phrase sequence a, b, c under the MJ-2 model. The distributions in Eqns 10-12 ensure that the maximum jump is 2 phrases and that the reordering happens within a window of 3 phrases. By insisting that the process start and end at state 1 (Fig 5) , we ensure that the model is not deficient. A WFST implementing the MJ-2 model can be easily constructed for both phrase alignment and translation, following the construction described for the MJ-1 model.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 251, |
| "end": 258, |
| "text": "(Fig 5)", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reordering WFST for MJ-2", |
| "sec_num": "2.3" |
| }, |
| { |
| "text": "The Translation Template Model relies on an inventory of target language phrases and their source language translations. Our goal is to estimate the reordering model parameters P (b|x, u) for each phrase-pair (x, u) in this inventory. However, when translating a given test set, only a subset of the phrase-pairs is needed. Although there may be an advantage in estimating the model parameters under an inventory that covers all the training bitext, we fix the phrase-pair inventory to cover only the phrases on the test set. Estimation of the reordering model parameters over the training bitext is then performed under this test-set specific inventory.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimation of the Reordering Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We employ the EM algorithm to obtain Maximum Likelihood (ML) estimates of the reordering model parameters. Applying EM to the MJ-1 reordering model gives the following ML parameter estimates for each phrase-pair (u, x).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimation of the Reordering Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\u03b2 1 (x, u) = C x,u (1, +1) / (C x,u (1, +1) + C x,u (1, 0)). (13) C x,u (\u03c6, b)", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimation of the Reordering Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "is defined for \u03c6 = 1, 2 and b = \u22121, 0, +1. Any permissible phrase alignment of a sentence pair corresponds to a b K 1 sequence, which in turn specifies a \u03c6 K 1 sequence. C x,u (\u03c6, b) is the expected number of times the phrase-pair x, u is aligned with a jump of b phrases when the jump history is \u03c6. We do not use full EM but a Viterbi training procedure that obtains the counts for the best (Viterbi) alignments. If a phrase-pair (x, u) is never seen in the Viterbi alignments, we back-off to a flat parameter \u03b2 1 (x, u) = 0.05.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Estimation of the Reordering Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The ML parameter estimates for the MJ-2 model are given in Table 2 , with C x,u (\u03c6, b) defined similarly. In our training scenario, we use WFST operations to obtain Viterbi phrase alignments of the training bitext, where the initial reordering model parameters (\u03b2 0 (x, u)) are set to a uniform value of 0.05. The counts C x,u (\u03c6, b) are then obtained over the phrase alignments. Finally the ML estimates of the parameters are computed using Eqn 13 (MJ-1) or Eqn 14 (MJ-2). We will refer to the Viterbi trained models as MJ-1 VT and MJ-2 VT. Table 3 shows the MJ-1 VT parameters for some example phrase-pairs in the Arabic-English (A-E) task. Table 3 : MJ-1 parameters for A-E phrase-pairs.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 59, |
| "end": 66, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| }, |
| { |
| "start": 541, |
| "end": 548, |
| "text": "Table 3", |
| "ref_id": null |
| }, |
| { |
| "start": 642, |
| "end": 649, |
| "text": "Table 3", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimation of the Reordering Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "To validate alignment under a PPI, we measure performance of the TTM word alignments on French-English (500 sent-pairs) and Chinese-English (124 sent-pairs) (Table 4 ). As desired, the Alignment Recall (AR) and Alignment Error Rate (AER) improve modestly while Alignment Precision (AP) remains constant. This suggests that the models allow more words to be aligned and thus improve the recall; MJ-2 gives a further improvement in AR and AER relative to MJ-1.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 157, |
| "end": 165, |
| "text": "(Table 4", |
| "ref_id": "TABREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Estimation of the Reordering Models", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We perform our translation experiments on the large data track of the NIST Arabic-to-English (A-E) and Chinese-to-English (C-E) MT tasks; we report results on the NIST 2002, 2003, and 2004 evaluation test sets.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "In these experiments the training data is restricted to FBIS bitext in C-E and the news bitexts in A-E. The bitext consists of chunk pairs aligned at sentence and sub-sentence level (Deng et al., 2004) . In A-E, the training bitext consists of 3.8M English words, 3.2M Arabic words and 137K chunk pairs. In C-E, the training bitext consists of 11.7M English words, 8.9M Chinese words and 674K chunk pairs. Our Chinese text processing consists of word segmentation (using the LDC segmenter) followed by grouping of numbers. For Arabic our text processing consists of a modified Buckwalter analysis (LDC2002L49) followed by post-processing to separate conjunctions, prepositions and pronouns, and Al-/w-deletion. The English text is processed using a simple tokenizer based on the text processing utility available in the NIST MT-eval toolkit.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 201, |
| "text": "(Deng et al., 2004)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The Language Model (LM) training data consists of approximately 400M words of English text derived from Xinhua and AFP (English Gigaword), the English side of FBIS, the UN and A-E News texts, and the online archives of The People's Daily. Table 5 gives the performance of the MJ-1 and MJ-2 reordering models when translation is performed using a 4-gram LM. We report performance on the 02, 03, 04 test sets and the combined test set (ALL=02+03+04). For the combined set (ALL), we also show the 95% BLEU confidence interval computed using bootstrap resampling (Och, 2003) . Row 1 gives the performance when no reordering model is used. The next two rows show the influence of the MJ-1 reordering model; in row 2, a flat probability of \u03b2 1 (x, u) = 0.05 is used for all phrase-pairs; in row 3, a reordering probability is estimated for each phrase-pair using Viterbi Training (Eqn 13). The last two rows show the effect of the MJ-2 reordering model; row 4 uses flat probabilities (\u03b2 1 (x, u) = 0.05, \u03b2 2 (x, u) = 0.01) for all phrase-pairs; row 5 applies reordering probabilities estimated with Viterbi Training for each phrase-pair (Table 2) .", |
| "cite_spans": [ |
| { |
| "start": 559, |
| "end": 570, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 239, |
| "end": 246, |
| "text": "Table 5", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 1132, |
| "end": 1141, |
| "text": "(Table 2)", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exploratory Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "\u03b2 1 (x, u) = (Cx,u(1, +1) + Cx,u(2, +1)) / (Cx,u(1, +1) + Cx,u(1, 0) + Cx,u(1, +2) + Cx,u(2, +1) + Cx,u(2, \u22121)); \u03b2 2 (x, u) = ((Cx,u(1, 0) + Cx,u(2, \u22121) + Cx,u(1, +2)) Cx,u(1, +2)) / ((Cx,u(1, +1) + Cx,u(1, 0) + Cx,u(1, +2) + Cx,u(2, +1) + Cx,u(2, \u22121)) (Cx,u(1, +2) + Cx,u(1, 0)))", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "On both language-pairs, we observe that reordering yields significant improvements. The gains from phrase reordering are much higher on A-E relative to C-E; this could be related to the fact that the word order differences between English and Arabic are much higher than the differences between English and Chinese. MJ-1 VT outperforms flat MJ-1 showing that there is value in estimating the reordering parameters from bitext. Finally, the MJ-2 VT model performs better than the flat MJ-2 model, but only marginally better than the MJ-1 VT model. Therefore estimation does improve the MJ-2 model but allowing reordering beyond a window of 1 phrase is not useful when translating either Arabic or Chinese into English in this framework.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "The flat MJ-1 model outperforms the no-reordering case and the flat MJ-2 model is better than the flat MJ-1 model; we hypothesize that phrase reordering increases the search space of translations, which allows the language model to select a higher-quality hypothesis. This suggests that these models of phrase reordering actually require strong language models to be effective. We now investigate the interaction between language models and reordering.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Exploratory Experiments", |
| "sec_num": "4.1" |
| }, |
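To see how MJ-1 enlarges the search space, note that under MJ-1 each phrase is either emitted in place or swapped with its immediate successor, and swaps cannot overlap. A small illustrative sketch (not the paper's WFST implementation) enumerating the phrase orders reachable under this constraint:

```python
def mj1_permutations(phrases):
    """All phrase orders reachable under MJ-1: each phrase either stays in
    place or swaps with its immediate successor; swaps cannot overlap."""
    if len(phrases) <= 1:
        return [list(phrases)]
    # Emit the first phrase in place, or swap it with the second.
    keep = [phrases[:1] + rest for rest in mj1_permutations(phrases[1:])]
    swap = [[phrases[1], phrases[0]] + rest
            for rest in mj1_permutations(phrases[2:])]
    return keep + swap
```

The number of reachable orders grows only as the Fibonacci numbers (3 orders for 3 phrases, 5 for 4), far slower than the n! of unrestricted permutation, which is why these local models avoid the explosive state-space growth of full permutation acceptors.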
| { |
| "text": "Our goal here is to measure translation performance of reordering models over variable span ngram LMs (Table 6 ). We observe that both MJ-1 and MJ-2 models yield higher improvements under higher order LMs: e.g. on A-E, gains under 3g (3.6 BLEU points on MJ-1, 0.2 points on MJ-2) are higher than the gains with 2g (2.4 BLEU points on MJ-1, 0.1 points on MJ-2). We now measure performance of the reordering models across the three test set genres used in the NIST 2004 evaluation: news, editorials, and speeches. On A-E, MJ-1 and MJ-2 yield larger improvements on News relative to the other genres; on C-E, the gains are larger on Speeches and Editorials relative to News. We hypothesize that the Phrase-Pair Inventory, reordering models and language models could all have been biased away from the test set due to the training data. There may also be less movement across these other genres. ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 102, |
| "end": 110, |
| "text": "(Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Exploratory Experiments", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We here describe the integration of the phrase reordering model in an MT system trained on large bitexts. The text processing and language models have been described in \u00a7 4.1. Alignment Models are trained on all available bitext (7.6M chunk pairs/207.4M English words/175.7M Chinese words on C-E and 5.1M chunk pairs/132.6M English words/123.0M Arabic words on A-E), and word alignments are obtained over the bitext. Phrase-pairs are then extracted from the word alignments (Koehn et al., 2003) . MJ-1 model parameters are estimated over all bitext on A-E and over the non-UN bitext on C-E. Finally we use Minimum Error Training (MET) (Och, 2003) to train log-linear scaling factors that are applied to the WFSTs in Equation 1. 04news (04n) is used as the MET training set. Table 8 reports the performance of the system. Row 1 gives the performance without phrase reordering and Row 2 shows the effect of the MJ-1 VT model. The MJ-1 VT model is used in an initial decoding pass with the four-gram LM to generate translation lattices. These lattices are then rescored under parameters obtained using MET (MET-basic), and 1000-best lists are generated. The 1000-best lists are augmented with IBM Model-1 (Brown et al., 1993) scores and then rescored with a second set of MET parameters. Rows 3 and 4 show the performance of the MET-basic and MET-IBM1 models.", |
| "cite_spans": [ |
| { |
| "start": 474, |
| "end": 494, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 635, |
| "end": 646, |
| "text": "(Och, 2003)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 1202, |
| "end": 1222, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 774, |
| "end": 781, |
| "text": "Table 8", |
| "ref_id": "TABREF9" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Scaling to Large Bitext Training Sets", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We observe that the maximum likelihood phrase reordering model (MJ-1 VT) yields significantly improved translation performance relative to the monotone phrase order translation baseline. This confirms the translation performance improvements found over smaller training bitexts.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scaling to Large Bitext Training Sets", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "We also find additional gains by applying MET to optimize the scaling parameters that are applied to the WFST component distributions within the TTM (Equation 1). In this procedure, the scale factor applied to the MJ-1 VT Phrase Translation and Reordering component is estimated along with scale factors applied to the other model components; in other words, the ML-estimated phrase reordering model itself is not affected by MET, but the likelihood that it assigns to a phrase sequence is scaled by a single, discriminatively optimized weight. The improvements from MET (see rows MET-Basic and MET-IBM1) demonstrate that the MJ-1 VT reordering models can be incorporated within a discriminative optimized translation system incorporating a variety of models and estimation procedures.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Scaling to Large Bitext Training Sets", |
| "sec_num": "4.2" |
| }, |
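The MET step only rescales each component's contribution within the log-linear combination. A hedged sketch of that combination; the feature names are illustrative placeholders, not the actual WFST components of Equation 1:

```python
def loglinear_score(feature_logprobs, scales):
    """Log-linear combination tuned by MET: each component model contributes
    a log-probability multiplied by one scale factor. The ML-trained
    reordering model itself is untouched; only its weight is optimized."""
    return sum(scales[name] * logprob
               for name, logprob in feature_logprobs.items())
```

MET searches over the scale factors to minimize translation error on a development set (here, 04news), leaving each component distribution fixed.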
| { |
| "text": "In this paper we have described local phrase reordering models developed for use in statistical machine translation. The models are carefully formulated so that they can be implemented as WFSTs, and we show how the models can be incorporated into the Translation Template Model to perform phrase alignment and translation using standard WFST operations. Previous approaches to WFST-based reordering (Knight and Al-Onaizan, 1998; Kumar and Byrne, 2003; Tsukada and Nagata, 2004) constructed permutation acceptors whose state spaces grow exponentially with the length of the sentence to be translated. As a result, these acceptors have to be pruned heavily for use in translation. In contrast, our models of local phrase movement do not grow explosively and do not require any pruning or approximation in their construction. In other related work, Bangalore and Ricardi (2001) have trained WF-STs for modeling reordering within translation; their WFST parses word sequences into trees containing reordering information, which are then checked for well-formed brackets. Unlike this approach, our model formulation does not use a tree representation and also ensures that the output sequences are valid permutations of input phrase sequences; we emphasize again that the probability distribution induced over reordered phrase sequences is not degenerate.", |
| "cite_spans": [ |
| { |
| "start": 399, |
| "end": 428, |
| "text": "(Knight and Al-Onaizan, 1998;", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 429, |
| "end": 451, |
| "text": "Kumar and Byrne, 2003;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 452, |
| "end": 477, |
| "text": "Tsukada and Nagata, 2004)", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Our reordering models do resemble those of (Tillmann, 2004; Tillmann and Zhang, 2005) in that we treat the reordering as a sequence of jumps relative to the original phrase sequence, and that the likelihood of the reordering is assigned through phrasepair specific parameterized models. We note that our implementation allows phrase reordering beyond simply a 1-phrase window, as was done by Tillmann. More importantly, our model implements a generative model of phrase reordering which can be incorporated directly into a generative model of the overall translation process. This allows us to perform 'embedded' EM-style parameter estimation, in which the parameters of the phrase reordering model are estimated using statistics gathered under the complete model that will actually be used in translation. We believe that this estimation of model parameters directly from phrase alignments obtained under the phrase translation model is a novel contribution; prior approaches derived the parameters of the reordering models from word aligned bitext, e.g. within the phrase pair extraction procedure.", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 85, |
| "text": "Tillmann and Zhang, 2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "We have shown that these models yield improvements in alignment and translation performance on Arabic-English and Chinese-English tasks, and that the reordering model can be integrated into large evaluation systems. Our experiments show that discriminative training procedures such Minimum Error Training also yield additive improvements by tuning TTM systems which incorporate ML-trained reordering models. This is essential for integrating our reordering model inside an evaluation system, where a variety of techniques are applied simultaneously.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The MJ-1 and MJ-2 models are extremely simple models of phrase reordering. Despite their simplicity, these models provide large improvements in BLEU score when incorporated into a monotone phrase order translation system. Moreover, they can be used to produced translation lattices for use by more sophisticated reordering models that allow longer phrase order movement. Future work will build on these simple structures to produce more powerful models of word and phrase movement in translation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Discussion", |
| "sec_num": "5" |
| }, |
| { |
| "text": "http://www.nist.gov/speech/tests/mt/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "The mathematics of statistical machine translation: Parameter estimation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. F. Brown, S. A. Della Pietra, V. J. Della Pietra, and R. L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computa- tional Linguistics, 19(2):263-311.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Bitext chunk alignment for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Research Note", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Deng, S. Kumar, and W. Byrne. 2004. Bitext chunk alignment for statistical machine translation. In Re- search Note, Center for Language and Speech Pro- cessing, Johns Hopkins University.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Translation with finite-state devices", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Al-Onaizan", |
| "suffix": "" |
| } |
| ], |
| "year": 1998, |
| "venue": "AMTA", |
| "volume": "", |
| "issue": "", |
| "pages": "421--437", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Knight and Y. Al-Onaizan. 1998. Translation with finite-state devices. In AMTA, pages 421-437, Langhorne, PA, USA.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Decoding complexity in wordreplacement translation models", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Knight", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Computational Linguistics, Squibs & Discussion", |
| "volume": "", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Knight. 1999. Decoding complexity in word- replacement translation models. Computational Lin- guistics, Squibs & Discussion, 25(4).", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Statistical phrasebased translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "127--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Koehn, F. Och, and D. Marcu. 2003. Statistical phrase- based translation. In HLT-NAACL, pages 127-133, Edmonton, Canada.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A weighted finite state transducer implementation of the alignment template model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "142--149", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Kumar and W. Byrne. 2003. A weighted finite state transducer implementation of the alignment template model for statistical machine translation. In HLT- NAACL, pages 142-149, Edmonton, Canada.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "A weighted finite state transducer translation template model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [], |
| "last": "Byrne", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Journal of Natural Language Engineering", |
| "volume": "11", |
| "issue": "4", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Kumar, Y. Deng, and W. Byrne. 2005. A weighted fi- nite state transducer translation template model for sta- tistical machine translation. Journal of Natural Lan- guage Engineering, 11(4).", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Improved alignment models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "EMNLP-VLC", |
| "volume": "", |
| "issue": "", |
| "pages": "20--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Och, C. Tillmann, and H. Ney. 1999. Improved align- ment models for statistical machine translation. In EMNLP-VLC, pages 20-28, College Park, MD, USA.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Minimum error rate training in statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. Och. 2003. Minimum error rate training in statistical machine translation. In ACL, Sapporo, Japan.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A localized prediction model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Tillmann and T. Zhang. 2005. A localized prediction model for statistical machine translation. In ACL, Ann Arbor, Michigan, USA.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "A block orientation model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Tillmann. 2004. A block orientation model for sta- tistical machine translation. In HLT-NAACL, Boston, MA, USA.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Efficient decoding for statistical machine translation with a fully expanded WFST model", |
| "authors": [ |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Tsukada", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagata", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "H. Tsukada and M. Nagata. 2004. Efficient decoding for statistical machine translation with a fully expanded WFST model. In EMNLP, Barcelona, Spain.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "SMT Decoder Dissected: Word Reordering", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "NLPKE", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Vogel. 2003. SMT Decoder Dissected: Word Reorder- ing. In NLPKE, Beijing, China.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "A polynomial-time algorithm for statistical machine translation", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "152--158", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Wu. 1996. A polynomial-time algorithm for sta- tistical machine translation. In ACL, pages 152-158, Santa Cruz, CA, USA.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "A comparative study on reordering constraints in statistical machine translation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "144--151", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Zens and H. Ney. 2003. A comparative study on re- ordering constraints in statistical machine translation. In ACL, pages 144-151, Sapporo, Japan.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Reordering constraints for phrase-based statistical machine translation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "COLING", |
| "volume": "", |
| "issue": "", |
| "pages": "205--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Zens, H. Ney, T. Watanabe, and E. Sumita. 2004. Reordering constraints for phrase-based statistical ma- chine translation. In COLING, pages 205-211, Boston, MA, USA.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF2": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "TTM generative translation process; here, I = 9, K = 5, R = 7, J = 9." |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Figure 2: Phrase reordering and jump sequence. -" |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Phrase reordering process for MJ-1." |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "WFST for the MJ-1 model." |
| }, |
| "FIGREF6": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "a, c; c, a, d, b; c, d, a, b; }. In the MJ-2 finite state machine, a given history b k\u22121 1 can lead to one of the six states inFig 5." |
| }, |
| "FIGREF7": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Phrase reordering process for MJ-2." |
| }, |
| "FIGREF8": { |
| "type_str": "figure", |
| "num": null, |
| "uris": null, |
| "text": "Fig 5) allows 6 possible reorderings: a, b, c; a, c, b; b, a, c; b, c, a; c, a, b; c, b, a. The distribution Eqn 9 ensures that the sequences b, c, a and c, b, a are assigned equal probability." |
| }, |
| "TABREF2": { |
| "type_str": "table", |
| "content": "<table><tr><td>sion depends on the quality of the word alignments</td></tr><tr><td>within the phrase-pairs and does not change much</td></tr><tr><td>by allowing phrase reordering. This experiment val-</td></tr><tr><td>idates the estimation procedure based on the phrase</td></tr><tr><td>alignments; however, we do not advocate the use of</td></tr><tr><td>TTM as an alternate word alignment technique.</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "Alignment Performance with Reordering." |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "content": "<table><tr><td>Reordering</td><td/><td/><td/><td colspan=\"2\">BLEU (%)</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Arabic-English</td><td/><td/><td colspan=\"2\">Chinese-English</td></tr><tr><td/><td>02</td><td>03</td><td>04</td><td>ALL</td><td>02</td><td>03</td><td>04</td><td>ALL</td></tr><tr><td>None</td><td colspan=\"4\">37.5 40.3 36.8 37.8 \u00b1 0.6</td><td colspan=\"4\">24.2 23.7 26.0 25.0 \u00b1 0.5</td></tr><tr><td>MJ-1 flat</td><td colspan=\"4\">40.4 43.9 39.4 40.7 \u00b1 0.6</td><td colspan=\"4\">25.7 24.5 27.4 26.2 \u00b1 0.5</td></tr><tr><td>MJ-1 VT</td><td colspan=\"4\">41.3 44.8 40.3 41.6 \u00b1 0.6</td><td colspan=\"4\">25.8 24.5 27.8 26.5 \u00b1 0.5</td></tr><tr><td>MJ-2 flat</td><td colspan=\"4\">41.0 44.4 39.7 41.1 \u00b1 0.6</td><td colspan=\"4\">26.4 24.9 27.7 26.7 \u00b1 0.5</td></tr><tr><td>MJ-2 VT</td><td colspan=\"4\">41.7 45.3 40.6 42.0 \u00b1 0.6</td><td colspan=\"4\">26.5 24.9 27.9 26.8 \u00b1 0.5</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "ML parameter estimates for MJ-2 model." |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Performance of MJ-1 and MJ-2 reordering models with a 4-gram LM." |
| }, |
| "TABREF6": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Reordering with variable span n-gram LMs on Eval02+03+04 set." |
| }, |
| "TABREF8": { |
| "type_str": "table", |
| "content": "<table><tr><td/><td/><td/><td colspan=\"2\">BLEU (%)</td><td/><td/></tr><tr><td/><td colspan=\"3\">Arabic-English</td><td colspan=\"3\">Chinese-English</td></tr><tr><td>Reordering</td><td>02</td><td>03</td><td>04n</td><td>02</td><td>03</td><td>04n</td></tr><tr><td>None</td><td colspan=\"3\">40.2 42.3 43.3</td><td colspan=\"3\">28.9 27.4 27.3</td></tr><tr><td>MJ-1 VT</td><td colspan=\"3\">43.1 45.0 45.6</td><td colspan=\"3\">30.2 28.2 28.9</td></tr><tr><td>MET-Basic</td><td colspan=\"3\">44.8 47.2 48.2</td><td colspan=\"3\">31.3 30.3 30.3</td></tr><tr><td>MET-IBM1</td><td colspan=\"3\">45.2 48.2 49.7</td><td colspan=\"3\">31.8 30.7 31.0</td></tr></table>", |
| "html": null, |
| "num": null, |
| "text": "Performance across Eval 04 test genres." |
| }, |
| "TABREF9": { |
| "type_str": "table", |
| "content": "<table/>", |
| "html": null, |
| "num": null, |
| "text": "Translation Performance on Large Bitexts." |
| } |
| } |
| } |
| } |