{
"paper_id": "Y17-1018",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:33:24.345465Z"
},
"title": "BTG-based Machine Translation with Simple Reordering Model using Structured Perceptron",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {}
},
"email": ""
},
{
"first": "Yves",
"middle": [],
"last": "Lepage",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {}
},
"email": "yves.lepage@waseda.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "In this paper, we present a novel statistical machine translation method which employs a BTG-based reordering model during decoding. BTG-based reordering models for preordering have been widely explored, aiming to improve the standard phrase-based statistical machine translation system. Less attention has been paid to incorporating such a reordering model into decoding directly. Our reordering model differs from previous models built using a syntactic parser or directly from annotated treebanks. Here, we train without using any syntactic information. The experimental results on an English-Japanese translation task show that our BTG-based decoder achieves performance comparable to or better than more complex state-of-the-art SMT decoders.",
"pdf_parse": {
"paper_id": "Y17-1018",
"_pdf_hash": "",
"abstract": [
{
"text": "In this paper, we present a novel statistical machine translation method which employs a BTG-based reordering model during decoding. BTG-based reordering models for preordering have been widely explored, aiming to improve the standard phrase-based statistical machine translation system. Less attention has been paid to incorporating such a reordering model into decoding directly. Our reordering model differs from previous models built using a syntactic parser or directly from annotated treebanks. Here, we train without using any syntactic information. The experimental results on an English-Japanese translation task show that our BTG-based decoder achieves performance comparable to or better than more complex state-of-the-art SMT decoders.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "The phrase-based method (Koehn et al., 2003) and the syntax-based method (Yamada and Knight, 2001) are two of the representative methods in statistical machine translation (SMT). On the one hand, in the phrase-based model, the lexical reordering model is a crucial component, but it is often criticized, especially when translating a language pair with widely divergent syntax like English-Japanese, as the na\u00efve distance-based lexical reordering model does not handle long-distance reorderings well. On the other hand, in the syntax-based SMT method, word reordering is implicitly addressed by translation rules. The performance is thus directly subject to the parsing errors of the syntactic parser.",
"cite_spans": [
{
"start": 24,
"end": 44,
"text": "(Koehn et al., 2003)",
"ref_id": null
},
{
"start": 73,
"end": 98,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Syntax-based translation models are usually built from annotated treebanks to extract grammar rules for reordering (Genzel, 2010) . Such reordering models are thus more difficult to train. Between these two models, some loose hierarchical structure models have been proposed: the hierarchical phrase-based model (Chiang, 2007) and the Bracketing Transduction Grammar (BTG) based model (Wu, 1997) . Compared with the hierarchical phrase-based model, the BTG model has several advantages, such as its simplicity. Also, its well-formed rules avoid extracting a large number of rare or useless translation rules, as is the case with the hierarchical phrase-based model.",
"cite_spans": [
{
"start": 115,
"end": 129,
"text": "(Genzel, 2010)",
"ref_id": "BIBREF8"
},
{
"start": 311,
"end": 325,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF1"
},
{
"start": 387,
"end": 397,
"text": "(Wu, 1997)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In recent proposals, phrase-based statistical machine translation has been shown to improve when BTG-based preordering is applied as a preprocessing step (DeNero and Uszkoreit, 2011; Neubig et al., 2012; Nakagawa, 2015) . The idea behind preordering is to reduce the structural complexity. It is preferable to apply the reordering operations in advance rather than during decoding, as this benefits the word alignment step.",
"cite_spans": [
{
"start": 149,
"end": 177,
"text": "(DeNero and Uszkoreit, 2011;",
"ref_id": "BIBREF5"
},
{
"start": 178,
"end": 198,
"text": "Neubig et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 199,
"end": 214,
"text": "Nakagawa, 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, following (Xiong et al., 2008) , we propose to incorporate the BTG-based reordering model directly into the decoding step of a BTG-based SMT system using a simple Structured Perceptron (Rosenblatt, 1958; Collins and Roark, 2004) . The rest of the paper is organized as follows. Section 2 briefly introduces previous BTG-based reordering methods, both for preordering and for determining reorderings during decoding. Section 3 describes the principal model used in BTG-based machine translation. Section 4 gives the details of the proposed method and the model combination in the system construction. Section 5 reports the results of the experiment on an English-to-Japanese translation task. We conclude in Section 7.",
"cite_spans": [
{
"start": 25,
"end": 45,
"text": "(Xiong et al., 2008)",
"ref_id": "BIBREF34"
},
{
"start": 200,
"end": 218,
"text": "(Rosenblatt, 1958;",
"ref_id": "BIBREF28"
},
{
"start": 219,
"end": 243,
"text": "Collins and Roark, 2004)",
"ref_id": "BIBREF2"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A common problem of the distortion reordering models (Tillmann, 2004; Koehn et al., 2005; Galley and Manning, 2008) used in the phrase-based SMT (PB-SMT) method is that they do not take contexts into account. Hence, we turn our attention to using linguistic-context information for reordering. Bracketing Transduction Grammar (BTG) (Wu, 1997) is a binary and simplified synchronous context-free grammar with only one non-terminal symbol. It has three types of right-hand side \u03b3 for its rules: S-straight keeps the order of the child nodes, I-inverted reverses the order, and T-terminal generates a terminal symbol.",
"cite_spans": [
{
"start": 53,
"end": 69,
"text": "(Tillmann, 2004;",
"ref_id": "BIBREF30"
},
{
"start": 70,
"end": 89,
"text": "Koehn et al., 2005;",
"ref_id": "BIBREF13"
},
{
"start": 90,
"end": 115,
"text": "Galley and Manning, 2008)",
"ref_id": "BIBREF7"
},
{
"start": 328,
"end": 338,
"text": "(Wu, 1997)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic Contexts for BTG-based Reordering",
"sec_num": "2"
},
{
"text": "X \u2192 \u03b3 = { [X 1 X 2 ] (straight) ; < X 1 X 2 > (inverted) ; f /e (terminal) } (1)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic Contexts for BTG-based Reordering",
"sec_num": "2"
},
{
"text": "where X, X 1 , X 2 are non-terminal symbols and f /e is a source/target phrase pair. BTG provides an easy and simple mechanism for modeling word permutation across languages. Figure 1 illustrates this mechanism. There exist some solutions for BTG grammar induction, which typically focus on unsupervised approaches, such as the inside-outside algorithm (Pereira and Schabes, 1992) for probabilistic context-free grammar (PCFG), monolingual bracketing representation (Klein and Manning, 2002) or bilingual bracketing grammar induction (Wu, 1995) . The common problem is that these models suffer from high computational complexity.",
"cite_spans": [
{
"start": 348,
"end": 375,
"text": "(Pereira and Schabes, 1992)",
"ref_id": "BIBREF27"
},
{
"start": 461,
"end": 486,
"text": "(Klein and Manning, 2002)",
"ref_id": "BIBREF12"
},
{
"start": 529,
"end": 539,
"text": "(Wu, 1995)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [
{
"start": 175,
"end": 183,
"text": "Figure 1",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Using Linguistic Contexts for BTG-based Reordering",
"sec_num": "2"
},
{
"text": "Supervised versions range from a simple flat reordering model (Wu, 1997) and maximum-entropy based models (Zens and Ney, 2006; Xiong et al., 2008) to Tree Kernel-based SVMs. Other approaches use pre-annotated treebanks to train a monolingual/synchronous parser (Collins and Roark, 2004; Genzel, 2010) . In this case, the rules are learned directly from the treebank. The majority of works (Zhang and Gildea, 2005; Xiong et al., 2008) rely on syntactic parsers available in the source or the target language.",
"cite_spans": [
{
"start": 97,
"end": 107,
"text": "(Wu, 1997)",
"ref_id": "BIBREF33"
},
{
"start": 138,
"end": 158,
"text": "(Zens and Ney, 2006;",
"ref_id": "BIBREF37"
},
{
"start": 159,
"end": 178,
"text": "Xiong et al., 2008)",
"ref_id": "BIBREF34"
},
{
"start": 295,
"end": 320,
"text": "(Collins and Roark, 2004;",
"ref_id": "BIBREF2"
},
{
"start": 321,
"end": 334,
"text": "Genzel, 2010)",
"ref_id": "BIBREF8"
},
{
"start": 423,
"end": 447,
"text": "(Zhang and Gildea, 2005;",
"ref_id": "BIBREF38"
},
{
"start": 448,
"end": 467,
"text": "Xiong et al., 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic Contexts for BTG-based Reordering",
"sec_num": "2"
},
{
"text": "However, bilingual parallel treebanks are not always available. As for building a bilingual synchronous parser using the BTG formalism, few works avoid the use of a constituency/dependency parser. Zens and Ney (2006) and DeNero and Uszkoreit (2011) proposed semi-supervised approaches for synchronous grammar induction based on source-side information only, given bilingual word alignments in advance, instead of training the parser in a supervised way on pre-annotated treebanks. This strategy does not require syntactic annotations in the training data, which makes training easier.",
"cite_spans": [
{
"start": 285,
"end": 304,
"text": "Zens and Ney (2006)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic Contexts for BTG-based Reordering",
"sec_num": "2"
},
{
"text": "Rather than developing a novel BTG decoder that incorporates a BTG-based reordering model, previous work has widely explored reordering models for preordering to improve the standard phrase-based statistical machine translation system. Neubig et al. (2012) present a bottom-up method for inducing a preorder for SMT by training a discriminative model to minimize the loss function on a hand-aligned corpus. Their method makes use of the general framework of large-margin online structured prediction (Crammer et al., 2006) . Lerner and Petrov (2013) present a simple classifier-based preordering approach using the source-side dependency tree. Nakagawa (2015) further develops a more efficient top-down incremental parser for preordering via online training using a simple structured perceptron algorithm. Unlike the aforementioned methods, which pre-reorder the sentence before the decoding phase, in this paper we propose to build a reordering model directly for a BTG-based decoder.",
"cite_spans": [
{
"start": 499,
"end": 521,
"text": "(Crammer et al., 2006)",
"ref_id": "BIBREF4"
},
{
"start": 524,
"end": 548,
"text": "Lerner and Petrov (2013)",
"ref_id": "BIBREF17"
},
{
"start": 643,
"end": 658,
"text": "Nakagawa (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Using Linguistic Contexts for BTG-based Reordering",
"sec_num": "2"
},
{
"text": "Given the three types of rules in Equation 1, we define a BTG derivation D as a sequence of independent operations d 1 , . . . , d K that apply bracketing rules X \u2192 \u03b3 at each stage when parsing a source-target sentence pair < f , e >. We write",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "D = [d 1 , . . . , d k , . . . , d K ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": ". Each given D accordingly produces a single BTG tree. The probability of a synchronous derivation (parse tree) under the framework of Probabilistic Synchronous Context-Free Grammar (PSCFG) is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "P (D) = \u220f d\u2208D P (d : X \u2192 \u03b3) (2)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "where d : X \u2192 \u03b3 stands for the derivation with the grammar rule X \u2192 \u03b3. Given an input sentence pair < f , e > and the word alignment a, the problem of finding the best derivation D\u0302 can be defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D = arg max D P (D|e, f , a)",
"eq_num": "(3)"
}
],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "In the real case of machine translation, we do not know the word alignment a when the training set is a parallel corpus. In order to find the best translation \u1ebd among all translation candidates, we assume two latent variables a, D are required, as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "e = arg max e P (e|f ) (4) \u221d arg max e P (e, D, a|f )",
"eq_num": "(5)"
}
],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "\u221d arg max e P (D|a, f , e) \u00d7 P (a|f , e) \u00d7 P (e) (6)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "In Equation 6, P (e) is the language model and a, D are latent variables that should be learnt from the training data. The generative story of Equation 6 is understood as follows: once we have found the hidden word alignment a with an alignment model P (a|f , e) and the hidden derivation D using the BTG-based reordering model P (D|a, f , e), we can translate the input source sentence f into the target translation \u1ebd.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "Figure 2: Example of preordering a source sentence given the target word order.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Machine Translation",
"sec_num": "3"
},
{
"text": "There are two sub-models in Equation 6: the alignment model P (a|f , e) and the reordering model P (D|a, f , e). Since state-of-the-art alignment methods yield high-quality word-to-word alignments, it is not necessary to design a new alignment model to obtain the intermediate variable a. We use the standard method to get word-to-word alignments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Alignment Model",
"sec_num": "3.1"
},
{
"text": "Recently, some research has also shown that treating the parse tree as a latent variable (Loehlin, 1998) can benefit BTG tree inference, albeit for preordering (see Figure 2 ). The reordering model is trained to maximize the conditional likelihood of trees that license the reorderings implied by observed word alignments in a given parallel corpus. For example, Neubig et al. (2012) proposed a BTG-based reordering model trained directly from word-aligned parallel text. They assume that there is an underlying derivation D that produces f \u2032 , where f \u2032 is the source sentence reordered into the corresponding target word order under the constraints of BTGs.",
"cite_spans": [
{
"start": 85,
"end": 100,
"text": "(Loehlin, 1998)",
"ref_id": "BIBREF19"
},
{
"start": 359,
"end": 379,
"text": "Neubig et al. (2012)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 161,
"end": 169,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f \u2212\u2212(preordering with D, given a)\u2212\u2192 f \u2032",
"eq_num": "(7)"
}
],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "To learn such a reordering model, they handle the derivation D as a latent variable inferred directly from the source-side linguistic contexts. The objective function in their work can be represented as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "f \u2032 = arg max f \u2032 Score(f \u2032 , D|f )",
"eq_num": "(8)"
}
],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "Since their model is based on reorderings f \u2032 licensed by BTG derivations D, denoted D \u2192 f \u2032 , the objective function can also be written as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "D = arg max D\u2192f \u2032 Score(D|f )",
"eq_num": "(9)"
}
],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "The learning problem defined here is fairly simple. Treating the derivation D as the latent variable, they want to find the derivation with the maximal score Score(D|f ). Furthermore, following (Collins, 2002; Collins and Roark, 2004) , they assume that Score(D|f ) is a linear combination of feature functions defined over D and f . Because it is also possible to apply the score function Score(D|f ) as a reordering model during BTG-based decoding, following (Neubig et al., 2012; Nakagawa, 2015) , we propose to build such a reordering model with latent derivations for decoding instead of preordering. The natural difference between their works and our work is as follows: in (Neubig et al., 2012; Nakagawa, 2015) , an incremental parser is trained for preordering, following the order of the target language before decoding, whereas we reorder while decoding. In other words, we adopt their model but make use of it as an online reordering heuristic during decoding.",
"cite_spans": [
{
"start": 198,
"end": 213,
"text": "(Collins, 2002;",
"ref_id": "BIBREF3"
},
{
"start": 214,
"end": 238,
"text": "Collins and Roark, 2004)",
"ref_id": "BIBREF2"
},
{
"start": 471,
"end": 492,
"text": "(Neubig et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 493,
"end": 508,
"text": "Nakagawa, 2015)",
"ref_id": "BIBREF20"
},
{
"start": 689,
"end": 710,
"text": "(Neubig et al., 2012;",
"ref_id": "BIBREF21"
},
{
"start": 711,
"end": 726,
"text": "Nakagawa, 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Training Reordering Model",
"sec_num": "3.2"
},
{
"text": "In our method, we propose to train and use a BTG-based reordering model in three steps. Firstly, we train the BTG parser on the source side with shallow annotations (only POS tags and word classes (Brown et al., 1992)) on word-aligned bilingual data. Then we select a large set of unigram, bigram, and trigram features to represent the current parser state, and we estimate feature weights using a Structured Perceptron (Nakagawa, 2015) . Finally, the log-linear combination score for the current state is computed again during decoding. This works as an additional heuristic score and helps the decoder to select the best candidates in sub-hypothesis combination.",
"cite_spans": [
{
"start": 426,
"end": 442,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Proposed Methods",
"sec_num": "4"
},
{
"text": "We define a reordering model \u03a6 RM as a model composed of a straight reordering model \u03a6 RM s and an inverted reordering model \u03a6 RM i . R stands for the composition of \u03a6 RM s and \u03a6 RM i .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R = {\u03a6 RM s , \u03a6 RM i }",
"eq_num": "(10)"
}
],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "Given a source sentence f , we define the score of R as the weighted sum of the scores P(d) of the sub-derivations d at each parse state defined over D given",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "1. f 7 1 \u2192 f 2 1 f 7 3 ; 2. f 2 1 \u2192 [f 1 f 2 ] ; 3. f 7 3 \u2192 f 5 3 f 7 6 ; 4. f 5 3 \u2192 f 4 3 f 5 ; 5. f 4 3 \u2192 f 3 f 4 ; 6. f 7 6 \u2192 [f 6 f 7 ]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "Derivations: Figure 3 : Example of step-by-step atomic derivations.",
"cite_spans": [],
"ref_spans": [
{
"start": 13,
"end": 21,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "a source sentence f .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "R(D|f ) = \u2211 d\u2208D P(d : X \u2192 \u03b3)",
"eq_num": "(11)"
}
],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "Each atomic derivation d belonging to D is weighted with various features in a log-linear form as in (Xiong et al., 2008; Duan et al., 2009) :",
"cite_spans": [
{
"start": 102,
"end": 122,
"text": "(Xiong et al., 2008;",
"ref_id": "BIBREF34"
},
{
"start": 123,
"end": 141,
"text": "Duan et al., 2009)",
"ref_id": "BIBREF6"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "P(d : X \u2192 \u03b3) = \u2211 \u03d5 i \u2208d \u03c0 i \u03d5 i (12)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "where \u03d5 i is the i-th feature function and \u03c0 i is the i-th weight, which can be trained on the training data. Suppose that we know the word alignment a. We want to train a parser which maximizes the number of times the source sentences in the training data are successfully parsed under the constraints of BTGs. Nakagawa (2015) proposes an efficient top-down parser for this problem, trained online using a simple structured perceptron algorithm.",
"cite_spans": [
{
"start": 303,
"end": 318,
"text": "Nakagawa (2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "We assume that the parser has an independent state at each step. We define the parse state as a triple \u27e8X, r, d\u27e9, where X is an unparsed span. For example, following the deductive proof system representations (Shieber et al., 1995; Goodman, 1999) ",
"cite_spans": [
{
"start": 209,
"end": 231,
"text": "(Shieber et al., 1995;",
"ref_id": "BIBREF29"
},
{
"start": 232,
"end": 246,
"text": "Goodman, 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": ", [X, p, q] covers f p , . . . , f q . d = \u27e8r, X \u2192 \u03b3\u27e9",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "is the derivation at the current state, where r is the splitting position between f r\u22121 and f r and X \u2192 \u03b3 is the applied BTG rule. To extract the features used to score the model, we assume that each word in a sentence has three types of features: lexical form, part-of-speech (POS) tag and word class (Brown et al., 1992) , as in (Nakagawa, 2015) . We extract the unigrams, bigrams, and trigrams at each parse state and compute the model score defined in Equation 12 1 . The training algorithm (see Algorithm 1) can be described briefly as follows: the parser first produces a system derivation D\u0302 with the maximum model score given f . If D\u0302 is not licensed by the BTG constraints given (e, a), we consider that the parser has entered a failure state and stop it. An oracle derivation D * is also selected which satisfies the constraints of BTGs (denoted Constraint(D, a, e, f ) = true). If the system derivation D\u0302 and the oracle derivation D * are not equivalent, we update the model weights \u03c0 towards D * . Like all structured prediction learning frameworks, the online Structured Perceptron is costly to train, as training complexity is proportional to inference, which is frequently non-linear in the length of the example. To train the reordering model, we employ an in-house parser 2 which uses a Batch Perceptron. It is a modified and boosted version of the original top-down parser (Nakagawa, 2015) , which allows us to train on the whole training set 3 .",
"cite_spans": [
{
"start": 300,
"end": 320,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF0"
},
{
"start": 324,
"end": 340,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF20"
},
{
"start": 1370,
"end": 1386,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Reordering",
"sec_num": "4.1"
},
{
"text": "In decoding, we follow (Och and Ney, 2002; Chiang, 2007) . That is, we remove the target side and use a more general linear model composition over derivations: \u1ebd = arg max e P (e, D|f ) (13) [Footnote 2: https://github.com/wang-h/HieraParser . Footnote 3: We skip the sentences which cannot be parsed under the constraints of BTGs.]",
"cite_spans": [
{
"start": 23,
"end": 42,
"text": "(Och and Ney, 2002;",
"ref_id": "BIBREF22"
},
{
"start": 43,
"end": 56,
"text": "Chiang, 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4.2"
},
{
"text": "\u221d arg max D\u2192e \u220f i \u03a6 i (D) \u03bb i (14)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4.2"
},
{
"text": "where each \u03a6 i is a sub-model score function and \u03bb is the corresponding weight. For each arbitrary score function \u03a6 i with a derivation D, we decompose it as a chain of independent derivations d with BTG rules X \u2192 \u03b3:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "\u03a6 i (D) = \u220f d\u2208D \u03a6 i (d : X \u2192 \u03b3)",
"eq_num": "(15)"
}
],
"section": "Decoding",
"sec_num": "4.2"
},
{
"text": "Therefore, given an input sentence f = f 1 , . . . , f n , denoted f n 1 , the task of translating an input source sentence can be solved by finding the derivation with the maximal score in Equation 14, which uniquely determines a target translation \u00ea (e m 1 ) with this latent derivation D.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4.2"
},
{
"text": "The decoder needs to generate all derivations for each segment spanning from f i to f j (0 \u2264 i < j \u2264 n). Since our goal is to find the best derivation D\u0302 that covers the whole input sentence [f 1 , . . . , f n ], we employ a CKY-style decoder to generate the best derivation D\u0302 for each source sentence. This yields the best translation \u00ea (e m 1 ) at the same time.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Decoding",
"sec_num": "4.2"
},
{
"text": "The integration of a standard n-gram-based language model into a CKY-style decoder is not as easy as in the standard phrase-based method (Koehn et al., 2003) . Following (Chiang, 2007) , we first introduce the -LM -RM model, in which the reordering and language models are removed from the decoding model:",
"cite_spans": [
{
"start": 134,
"end": 154,
"text": "(Koehn et al., 2003)",
"ref_id": null
},
{
"start": 167,
"end": 181,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The -LM -RM Decoder",
"sec_num": "4.2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w(D) = \u220f i \u2209 {RM,LM} \u03a6 i (D) \u03bb i",
"eq_num": "(16)"
}
],
"section": "The -LM -RM Decoder",
"sec_num": "4.2.1"
},
{
"text": "Using the deductive proof system (Shieber et al., 1995; Goodman, 1999) to describe our -LM -RM decoder, the inference rules are the following:",
"cite_spans": [
{
"start": 33,
"end": 55,
"text": "(Shieber et al., 1995;",
"ref_id": "BIBREF29"
},
{
"start": 56,
"end": 70,
"text": "Goodman, 1999)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The -LM -RM Decoder",
"sec_num": "4.2.1"
},
{
"text": "X \u2192 f /e \u21d2 [X, p, q] : w (17) ; X \u2192 \u27e8X 1 , X 2 \u27e9 with [X 1 , p, r] : w 1 and [X 2 , r + 1, q] : w 2 \u21d2 [X, p, q] : w 1 w 2 (18) ; X \u2192 [X 1 , X 2 ] with [X 1 , p, r] : w 1 and [X 2 , r + 1, q] : w 2 \u21d2 [X, p, q] : w 1 w 2 (19)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The -LM -RM Decoder",
"sec_num": "4.2.1"
},
{
"text": "where X \u2192 \u03b3 is the derivation rule, [X, p, q] is the subtree rooted in a non-terminal X (see Section 2), and w is the model score defined in Equation 16. When all terms on the top line are true, the item on the bottom line is derived. The final goal for the decoder is [f , 1, n], where f is the whole source sentence. During decoding, the -LM -RM decoder flexibly explores the derivations without taking reordering into account. This strategy is a simple way to build a CKY-style decoder, but the decoder requires a very large beam size to find the true best translation. Incorporating the LM and RM models directly into the translation construction improves efficiency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The -LM -RM Decoder",
"sec_num": "4.2.1"
},
{
"text": "The computational complexity of the online strategy is reduced by using dynamic programming and incorporating the language model and the reordering model into decoding. A similar method is described in (Chiang, 2007) . The decoder integrated with the n-gram language model is called the \"+LM decoder\". In our case, we also need to integrate the reordering model, so we call ours the \"+LM +RM decoder\". Given the inference rules described in Equations 17-19, we describe the +LM +RM decoding algorithm using Equations 20-23.",
"cite_spans": [
{
"start": 206,
"end": 220,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "In our case, the reordering model affects computing the language model score if the derivation requires to swap the target sub-charts. We can calculate \u03a6 RM (X) by just taking the model score as the product of two sub-charts \u03a6 RM (X 1 ) and \u03a6 RM (X 2 ) with current reordering score \u03a6 RM (X \u2192 \u03b3). Since R is a log-linear expression, we compute the reordering score R(X) for a given span X :",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "[X, p, q] that consists of X 1 : [X 1 , p, r] and X 2 : [X 2 , r+1, q]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "with a grammar rule X \u2192 \u03b3 as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "R(X) = R(X 1 ) + R(X 2 ) + P(X \u2192 \u03b3) (24)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "When we merge the chart X 1 : [X 1 , p, r] with X 2 : [X 2 , r + 1, q] using the rule X \u2192 \u03b3, we update the total score for the composition model after applying each rule dynamically, we call this the +RM strategy. The BTG terminal rule (T : X \u2192 f /e) is used to translate the source phrase f into the target phrase e while the straight and inverted rules (S : X \u2192 [X 1 X 2 ] and I : X \u2192< X 1 X 2 >) are used to concatenate two neighbouring phrases with a straight or inverted order as following:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "e y x = { e 1 \u2022 e 2 , X \u2192 [X 1 X 2 ] e 2 \u2022 e 1 , X \u2192 \u27e8X 1 X 2 \u27e9 (25)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
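As an illustration of how Equations 24 and 25 combine during chart merging, here is a minimal sketch; it is not the authors' implementation, and `Span`, `merge`, and the example scores are hypothetical names introduced only for illustration:

```python
# Illustrative sketch of the +RM chart merge (hypothetical names, not the
# paper's code): a chart item keeps its target string and its accumulated
# reordering score. A straight rule concatenates e1.e2, an inverted rule
# concatenates e2.e1 (Eq. 25), and the reordering score follows
# R(X) = R(X1) + R(X2) + P(X -> gamma) (Eq. 24).
from dataclasses import dataclass

@dataclass
class Span:
    translation: str   # target-side string e for this chart item
    r_score: float     # accumulated reordering model score R(X)

def merge(x1: Span, x2: Span, rule: str, rule_score: float) -> Span:
    """Combine two neighbouring sub-charts with a straight or inverted rule."""
    if rule == "straight":          # X -> [X1 X2]: keep source-side order
        e = x1.translation + " " + x2.translation
    elif rule == "inverted":        # X -> <X1 X2>: swap the target order
        e = x2.translation + " " + x1.translation
    else:
        raise ValueError("BTG allows only straight or inverted rules")
    return Span(e, x1.r_score + x2.r_score + rule_score)
```

For an English-Japanese pair, merging "ate" and "sushi" with an inverted rule yields the Japanese-order string "sushi ate", with the two sub-chart scores and the rule score summed in log space.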
{
"text": "where \u2022 stands for concatenation between strings. After having decided the word order on the target side, we compute the score in the language model, noted L(\u2022) 4 . The language model score P LM (e y x ) depends on the preceding N \u2212 1 words for any e y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "x (|e y x | \u2265 N, 1 \u2264 x < y \u2264 m). It is computed as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "P LM (e y x ) = \u220f x\u2264z\u2264y p(\u00ea z+N \u22121 |\u00ea z . . .\u00ea z+N \u22122 ) (26)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
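Equation 26 says a segment's language model score is the product of the conditional probabilities of each word given its N-1 predecessors. A minimal sketch of that n-gram product follows; `lm` and `ngram_score` are hypothetical names, and a real system would query KenLM rather than a dictionary:

```python
# Minimal sketch of the n-gram product in Eq. 26 (hypothetical helper, not
# the paper's code). `lm` maps an N-gram tuple to p(w_N | w_1 ... w_{N-1});
# unseen n-grams back off to a small floor probability.
from functools import reduce

def ngram_score(words, lm, n=3):
    """Product of conditional probabilities over all n-gram windows."""
    probs = [lm.get(tuple(words[i:i + n]), 1e-9)
             for i in range(len(words) - n + 1)]
    return reduce(lambda a, b: a * b, probs, 1.0)
```

With a trigram model, the sequence a b c d is scored as p(c | a b) * p(d | b c), matching the product over x <= z <= y in Equation 26.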
{
"text": "The language model score function L(e y x ) depends on the rule type \u03b3 as follows:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "L(e y x ) = \uf8f1 \uf8f2 \uf8f3 P LM (e y+1 x ), |e y x | = |e m 1 | 0, |e y x | < N P LM (e y x+N ), otherwise",
"eq_num": "(27)"
}
],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "To determine whether we have the case |e y x | = |e m 1 |, we assume that, if the span of X : [X, p, q] covers the entire source sentence f n 1 as X : [X, 1, n], then the target translation e y",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
{
"text": "x should also cover the entire target sentence. On the basis of +RM decoder, we add the +LM component into the decoder and build a +LM+RM decoder for CYK-style bottom-up decoding. cube pruning (Chiang, 2007) was also applied to speedup the decoder.",
"cite_spans": [
{
"start": 193,
"end": 207,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF1"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The +LM +RM Decoder",
"sec_num": "4.2.2"
},
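The +LM +RM decoder fills the chart bottom-up in CYK style: spans are processed by increasing width, every split point is tried, and the best-scoring item per span is kept. A minimal sketch of that loop follows; `cyk_best`, `unary_score`, and `merge_score` are hypothetical names, and a real decoder keeps a pruned beam of items per span (with cube pruning) rather than a single score:

```python
# Hedged sketch of a CYK-style bottom-up decoding loop (hypothetical names,
# not the paper's decoder): chart[(p, q)] holds the best score of a
# derivation covering source span [p, q]; wider spans are built by merging
# two adjacent sub-spans at every split point r.
def cyk_best(n, unary_score, merge_score):
    # width-1 spans come from terminal rules X -> f/e
    chart = {(p, p): unary_score(p) for p in range(1, n + 1)}
    for width in range(2, n + 1):
        for p in range(1, n - width + 2):
            q = p + width - 1
            # try all split points and keep the best combination
            chart[(p, q)] = max(
                merge_score(chart[(p, r)], chart[(r + 1, q)])
                for r in range(p, q))
    return chart[(1, n)]   # goal item covering the whole sentence
```

With additive scores and a score of 1.0 per word, a 3-word sentence yields a goal score of 3.0 regardless of the split, which matches the dynamic-programming recurrence.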
{
"text": "HieraTrans is our newly-developed in-house BTG-based SMT translation platform. It adopts the constraints of BTG in both phrase translation and reordering. We combine the models in a log-liner manner as shown in Equation 14. The feature functions employed by HieraTrans are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "\u2022 Phrase-based translation models (TM): direct and inverse phrase translation probabilities, direct and inverse lexical translation probabilities.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "\u2022 Language model (LM) 4 For the case of start-of-the sentence and end of the sentence, we wrap the target sentence e (e m 1 ) as\u00ea =\u00ea m+h",
"cite_spans": [
{
"start": 22,
"end": 23,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "1 = \u27e8s\u27e9 N \u22121 e m 1 \u27e8\\s\u27e9. X \u2192 f /e [X, p, q] : w[L(e)] \u03bb L (20) X \u2192 \u27e8X 1 , X 2 \u27e9 : [exp P(X \u2192 \u27e8X 1 , X 2 \u27e9)] \u03bb R [X 1 , p, r] : w 1 [X 2 , r + 1, q] : w 2 [X, p, q] : w 1 w 2 [exp R(X)] \u03bb R [L(e 2 + e 1 )] \u03bb L (21) X \u2192 [X 1 , X 2 ] : [exp P(X \u2192 [X 1 , X 2 ])] \u03bb R [X 1 , p, r] : w 1 [X 2 , r + 1, q] : w 2 [X, p, q] : w 1 w 2 [exp R(X)] \u03bb R [L(e 1 + e 2 )] \u03bb L (22) X 1 \u2192 f 1 /e 1 , X 2 \u2192 f 2 /e 2",
"eq_num": "(23)"
}
],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "\u2022 Reordering models (RM): straight and inverted scores combined within the log-linear framework.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "\u2022 Penalties (PM): word penalty, phrase penalty, unknown word penalty.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "The weights for each feature are tuned and estimated using the minimum error rate training (MERT) algorithm (Och, 2003) .",
"cite_spans": [
{
"start": 108,
"end": 119,
"text": "(Och, 2003)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model Combination",
"sec_num": "4.2.3"
},
{
"text": "To evaluate our system, we conducted translation experiments on the KFTT Corpus (English-Japanese) and compared our system with baseline phrasebased (PB) and hierarchical phrase-based (HPB) SMT implementations in Moses 5 (Koehn et al., 2007) . For each language, the training corpus is around 330,000 sentences. The development set contains nearly 1,235 sentences and nearly 1,160 sentences used for testing. We use the default training set for training translation model, and traditional lexical (Koehn et al., 2005) reordering model or our proposed BTG-based reordering model, and also target language model. We use the default tuning set for tuning the parameters and the default test set for evaluation. For word alignment, we train word alignments in both directions with the default settings, i.e., the standard bootstrap for IBM model 4 alignment in GIZA++ (1 5 H 5 3 3 4 3 ). We then symmetrize the word alignments using grow-diag-final-and (+gdfa) and the standard phrase extraction heuristic (Koehn et al., 2003) for all systems. In our experiment, the maximum length of phrases entered into phrase table is limited to 7, and we input only the top 20 translation candidates. The language model storage of target language uses the implementation in KenLM (Heafield, 2011) which is trained and queried as a 5-gram model. For distortion model in phrase-based SMT baseline, we set the distortion limit to 6.",
"cite_spans": [
{
"start": 221,
"end": 241,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF14"
},
{
"start": 497,
"end": 517,
"text": "(Koehn et al., 2005)",
"ref_id": "BIBREF13"
},
{
"start": 1002,
"end": 1022,
"text": "(Koehn et al., 2003)",
"ref_id": null
},
{
"start": 1264,
"end": 1280,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF10"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "Word alignments used for training the reordering model are the intersection of both asymmetrical alignments in each mono-direction output by GIZA++ 6 (Och and Ney, 2003) . For pos-tagging, we make use of the Stanford Log-linear POS Tagger 7 (Toutanova and Manning, 2000) . To produce word class tags for each source word, we use the implementation of (Liang, 2005) 8 of Brown's clustering algorithm (Brown et al., 1992) . The size of the class tags is fixed to 256.",
"cite_spans": [
{
"start": 150,
"end": 169,
"text": "(Och and Ney, 2003)",
"ref_id": "BIBREF23"
},
{
"start": 241,
"end": 270,
"text": "(Toutanova and Manning, 2000)",
"ref_id": "BIBREF31"
},
{
"start": 351,
"end": 364,
"text": "(Liang, 2005)",
"ref_id": "BIBREF18"
},
{
"start": 399,
"end": 419,
"text": "(Brown et al., 1992)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "For tuning, the optimal weights for each feature are estimated using the minimum error rate training (MERT) algorithm (Och, 2003) and parameter optimization with ZMERT 9 (Zaidan, 2009) .",
"cite_spans": [
{
"start": 118,
"end": 129,
"text": "(Och, 2003)",
"ref_id": "BIBREF24"
},
{
"start": 170,
"end": 184,
"text": "(Zaidan, 2009)",
"ref_id": "BIBREF36"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "5.1"
},
{
"text": "For evaluation of machine translation quality, standard automatic evaluation metrics are used, like BLEU (Papineni et al., 2002) and RIBES (Isozaki et al., 2010) in all experiments. BLEU is used as the default standard metric, RIBES takes more word order into consideration. Table 1 shows the performance of MT systems on the KFTT test data, which are (1) Moses, trained using the phrase-based model (PB-SMT). (2) Moses, trained using the hierarchical phrase-based model (HPB-SMT) and last one (3) HieraTrans, trained using the BTG-based model Table 1 : Results on phrase-based baseline system, hierarchical phrase-based system and our BTG-based system. Bold scores indicate no statistically significant difference at p < 0.05 from the best system (Koehn, 2004) .",
"cite_spans": [
{
"start": 105,
"end": 128,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF25"
},
{
"start": 139,
"end": 161,
"text": "(Isozaki et al., 2010)",
"ref_id": "BIBREF11"
},
{
"start": 748,
"end": 761,
"text": "(Koehn, 2004)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [
{
"start": 275,
"end": 282,
"text": "Table 1",
"ref_id": null
},
{
"start": 544,
"end": 551,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2"
},
{
"text": "(BTG-SMT).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiment Results",
"sec_num": "5.2"
},
{
"text": "Compared with the PB-SMT, BTG-based SMT uses weak linguistic annotations on the source side which provides additional information for reordering. We found that this strategy does help tree structure construction and finding final translations. However, our BTG-based method underperformed the HPB-SMT method. Increasing the beam size will gain improvement slightly. There are two explanations for the result: First, final machine translation performance is also related to the used tools, which is sensitive to parse errors, alignment errors or annotation errors. Inaccurate labeling hurts the performance. Second, strict constraints of BTGs makes the decoder difficult to find some discontinuous phrases (translations).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.3"
},
{
"text": "In this paper, we proposed a novel BTG-based translation approach using a BTG-based reordering model directly trained from the training data. Training such a reordering model does not require any syntactic annotations, hence no use of treebanks or parsers. This approach provides an alternative to building a BTG-based machine translation system using syntactic information. We also made several improvements over (Xiong et al., 2008) : First, we developed a novel BTG-based parser using Batch Perceptron. It allows training the reordering model on the whole training set. Second, we made the reordering model serve as a model which can be queried during decoding. We compared and validated our method can achieve the comparable per-formance with state-of-the-art SMT approaches. For further improvements, we will work on towards higher-speed decoder and make the decoder open available.",
"cite_spans": [
{
"start": 414,
"end": 434,
"text": "(Xiong et al., 2008)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "We use the same set of features described in(Nakagawa, 2015)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/moses/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.statmt.org/moses/giza/GIZA++.html 7 https://nlp.stanford.edu/software/tagger.shtml 8 https://github.com/percyliang/brown-cluster 9 http://www.cs.jhu.edu/ ozaidan/zmert/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "This work is supported in part by China Scholarship Council (CSC) under the CSC Grant No.201406890026. We also thank the anonymous reviewers for their insightful comments.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Classbased n-gram models of natural language",
"authors": [
{
"first": "",
"middle": [],
"last": "Peter F Brown",
"suffix": ""
},
{
"first": "V",
"middle": [],
"last": "Peter",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Desouza",
"suffix": ""
},
{
"first": "L",
"middle": [],
"last": "Robert",
"suffix": ""
},
{
"first": "Vincent J Della",
"middle": [],
"last": "Mercer",
"suffix": ""
},
{
"first": "Jenifer C",
"middle": [],
"last": "Pietra",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F Brown, Peter V Desouza, Robert L Mercer, Vin- cent J Della Pietra, and Jenifer C Lai. 1992. Class- based n-gram models of natural language. Computa- tional linguistics, 18(4):467-479.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Hierarchical phrase-based translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical phrase-based transla- tion. Computational Linguistics, 33(2):201-228.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Incremental parsing with the perceptron algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental parsing with the perceptron algorithm. In Proceed- ings of the 42nd Annual Meeting on Association for Computational Linguistics, page 111. Association for Computational Linguistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Discriminative training methods for hidden markov models: Theory and experiments with perceptron algorithms",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the ACL-02 conference on Empirical methods in natural language processing",
"volume": "10",
"issue": "",
"pages": "1--8",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins. 2002. Discriminative training meth- ods for hidden markov models: Theory and experi- ments with perceptron algorithms. In Proceedings of the ACL-02 conference on Empirical methods in natu- ral language processing-Volume 10, pages 1-8. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Shai Shalev-Shwartz, and Yoram Singer",
"authors": [
{
"first": "Koby",
"middle": [],
"last": "Crammer",
"suffix": ""
},
{
"first": "Ofer",
"middle": [],
"last": "Dekel",
"suffix": ""
},
{
"first": "Joseph",
"middle": [],
"last": "Keshet",
"suffix": ""
}
],
"year": 2006,
"venue": "Journal of Machine Learning Research",
"volume": "7",
"issue": "",
"pages": "551--585",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Koby Crammer, Ofer Dekel, Joseph Keshet, Shai Shalev- Shwartz, and Yoram Singer. 2006. Online passive- aggressive algorithms. Journal of Machine Learning Research, 7(Mar):551-585.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Inducing sentence structure from parallel corpora for reordering",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Jakob Uszkoreit. 2011. Inducing sen- tence structure from parallel corpora for reordering. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 193-203. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "I2r's machine translation system for iwslt",
"authors": [
{
"first": "Xiangyu",
"middle": [],
"last": "Duan",
"suffix": ""
},
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Hui",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "IWSLT",
"volume": "",
"issue": "",
"pages": "50--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xiangyu Duan, Deyi Xiong, Hui Zhang, Min Zhang, and Haizhou Li. 2009. I2r's machine translation system for iwslt 2009. In IWSLT, pages 50-54.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "A simple and effective hierarchical phrase reordering model",
"authors": [
{
"first": "Michel",
"middle": [],
"last": "Galley",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "848--856",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michel Galley and Christopher D Manning. 2008. A simple and effective hierarchical phrase reordering model. In Proceedings of the Conference on Empir- ical Methods in Natural Language Processing, pages 848-856. Association for Computational Linguistics.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Automatically learning sourceside reordering rules for large scale machine translation",
"authors": [
{
"first": "Dmitriy",
"middle": [],
"last": "Genzel",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 23rd international conference on computational linguistics",
"volume": "",
"issue": "",
"pages": "376--384",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dmitriy Genzel. 2010. Automatically learning source- side reordering rules for large scale machine transla- tion. In Proceedings of the 23rd international con- ference on computational linguistics, pages 376-384. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Semiring parsing",
"authors": [
{
"first": "Joshua",
"middle": [],
"last": "Goodman",
"suffix": ""
}
],
"year": 1999,
"venue": "Computational Linguistics",
"volume": "25",
"issue": "4",
"pages": "573--605",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Joshua Goodman. 1999. Semiring parsing. Computa- tional Linguistics, 25(4):573-605.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Kenlm: Faster and smaller language model queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. Kenlm: Faster and smaller language model queries. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 187-197. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic evaluation of translation quality for distant language pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010. Automatic evalu- ation of translation quality for distant language pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944- 952. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A generative constituent-context model for improved grammar induction",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Klein",
"suffix": ""
},
{
"first": "D",
"middle": [],
"last": "Christopher",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology",
"volume": "1",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Klein and Christopher D Manning. 2002. A genera- tive constituent-context model for improved grammar induction. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 128-135. Association for Computational Linguistics. Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceed- ings of the 2003 Conference of the North American Chapter of the Association for Computational Linguis- tics on Human Language Technology, volume 1, pages 48-54. Association for Computational Linguistics.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Edinburgh system description for the 2005 iwslt speech translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "White",
"suffix": ""
}
],
"year": 2005,
"venue": "IWSLT",
"volume": "",
"issue": "",
"pages": "68--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch, Chris Callison-Burch, Miles Osborne, David Talbot, and Michael White. 2005. Edinburgh system description for the 2005 iwslt speech translation evaluation. In IWSLT, pages 68-75.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, et al. 2007. Moses: Open source toolkit for sta- tistical machine translation. In Proceedings of the 45th",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the ACL on Interactive Poster and Demonstration Sessions, pages 177-180. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Statistical significance tests for machine translation evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2004,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "388--395",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In EMNLP, pages 388-395. Citeseer.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Source-side classifier preordering for machine translation",
"authors": [
{
"first": "Uri",
"middle": [],
"last": "Lerner",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrov",
"suffix": ""
}
],
"year": 2013,
"venue": "EMNLP",
"volume": "",
"issue": "",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uri Lerner and Slav Petrov. 2013. Source-side classifier preordering for machine translation. In EMNLP, pages 513-523.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semi-supervised learning for natural language",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2005. Semi-supervised learning for natu- ral language. Ph.D. thesis, Massachusetts Institute of Technology.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Latent variable models: An introduction to factor, path, and structural analysis",
"authors": [
{
"first": "",
"middle": [],
"last": "John C Loehlin",
"suffix": ""
}
],
"year": 1998,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John C Loehlin. 1998. Latent variable models: An introduction to factor, path, and structural analysis. Lawrence Erlbaum Associates Publishers.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Efficient top-down btg parsing for machine translation preordering",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2015,
"venue": "ACL (1)",
"volume": "",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Nakagawa. 2015. Efficient top-down btg parsing for machine translation preordering. In ACL (1), pages 208-218.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Inducing a discriminative parser to optimize machine translation reordering",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "843--853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a discriminative parser to optimize machine translation reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natu- ral Language Processing and Computational Natural Language Learning, pages 843-853. Association for Computational Linguistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Discriminative training and maximum entropy models for statistical machine translation",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "295--302",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2002. Discrimina- tive training and maximum entropy models for statisti- cal machine translation. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 295-302. Association for Compu- tational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A systematic comparison of various statistical alignment models",
"authors": [
{
"first": "Josef",
"middle": [],
"last": "Franz",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Och",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2003,
"venue": "Computational Linguistics",
"volume": "29",
"issue": "1",
"pages": "19--51",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och and Hermann Ney. 2003. A system- atic comparison of various statistical alignment mod- els. Computational Linguistics, 29(1):19-51.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st Annual Meeting on Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st Annual Meeting on Association for Computa- tional Linguistics, volume 1, pages 160-167. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic evalua- tion of machine translation. In Proceedings of the 40th",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Annual Meeting of the Association for Computational Linguistics",
"authors": [],
"year": null,
"venue": "",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Annual Meeting of the Association for Computational Linguistics, pages 311-318. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Insideoutside reestimation from partially bracketed corpora",
"authors": [
{
"first": "Fernando",
"middle": [],
"last": "Pereira",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
}
],
"year": 1992,
"venue": "Proceedings of the 30th annual meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "128--135",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fernando Pereira and Yves Schabes. 1992. Inside- outside reestimation from partially bracketed corpora. In Proceedings of the 30th annual meeting on Associ- ation for Computational Linguistics, pages 128-135. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "The perceptron: A probabilistic model for information storage and organization in the brain",
"authors": [
{
"first": "Frank",
"middle": [],
"last": "Rosenblatt",
"suffix": ""
}
],
"year": 1958,
"venue": "Psychological review",
"volume": "65",
"issue": "6",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Frank Rosenblatt. 1958. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological review, 65(6):386.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Principles and implementation of deductive parsing",
"authors": [
{
"first": "Stuart",
"middle": ["M"],
"last": "Shieber",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Schabes",
"suffix": ""
},
{
"first": "Fernando",
"middle": ["C", "N"],
"last": "Pereira",
"suffix": ""
}
],
"year": 1995,
"venue": "The Journal of logic programming",
"volume": "24",
"issue": "1",
"pages": "3--36",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stuart M Shieber, Yves Schabes, and Fernando CN Pereira. 1995. Principles and implementation of de- ductive parsing. The Journal of logic programming, 24(1):3-36.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A unigram orientation model for statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of HLT-NAACL 2004: Short Papers",
"volume": "",
"issue": "",
"pages": "101--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillmann. 2004. A unigram orientation model for statistical machine translation. In Proceedings of HLT-NAACL 2004: Short Papers, pages 101-104. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Enriching the knowledge sources used in a maximum entropy part-of-speech tagger",
"authors": [
{
"first": "Kristina",
"middle": [],
"last": "Toutanova",
"suffix": ""
},
{
"first": "Christopher",
"middle": ["D"],
"last": "Manning",
"suffix": ""
}
],
"year": 2000,
"venue": "Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large corpora: held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics",
"volume": "13",
"issue": "",
"pages": "63--70",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kristina Toutanova and Christopher D Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the 2000 Joint SIGDAT conference on Empirical methods in natural language processing and very large cor- pora: held in conjunction with the 38th Annual Meet- ing of the Association for Computational Linguistics- Volume 13, pages 63-70. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1995,
"venue": "Proceedings of the 14th International Joint Conference on Artificial Intelligence",
"volume": "95",
"issue": "",
"pages": "1328--1335",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1995. Stochastic inversion transduction grammars, with application to segmentation, bracket- ing, and alignment of parallel corpora. In Proceedings of the 14th International Joint Conference on Artificial Intelligence, volume 95, pages 1328-1335.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Linguistically annotated btg for statistical machine translation",
"authors": [
{
"first": "Deyi",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Aiti",
"middle": [],
"last": "Aw",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2008,
"venue": "Proceedings of the 22nd International Conference on Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1009--1016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Deyi Xiong, Min Zhang, Aiti Aw, and Haizhou Li. 2008. Linguistically annotated btg for statistical machine translation. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1, pages 1009-1016. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "A syntaxbased statistical translation model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A syntax- based statistical translation model. In Proceedings of the 39th Annual Meeting on Association for Compu- tational Linguistics, pages 523-530. Association for Computational Linguistics.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Z-mert: A fully configurable open source tool for minimum error rate training of machine translation systems",
"authors": [
{
"first": "Omar",
"middle": [],
"last": "Zaidan",
"suffix": ""
}
],
"year": 2009,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "91",
"issue": "",
"pages": "79--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Omar Zaidan. 2009. Z-mert: A fully configurable open source tool for minimum error rate training of machine translation systems. The Prague Bulletin of Mathe- matical Linguistics, 91:79-88.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Discriminative reordering models for statistical machine translation",
"authors": [
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Hermann",
"middle": [],
"last": "Ney",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Richard Zens and Hermann Ney. 2006. Discriminative reordering models for statistical machine translation. In Proceedings of the Workshop on Statistical Machine Translation, pages 55-63. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Stochastic lexicalized inversion transduction grammar for alignment",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Gildea",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "475--482",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hao Zhang and Daniel Gildea. 2005. Stochastic lexi- calized inversion transduction grammar for alignment. In Proceedings of the 43rd Annual Meeting on Associ- ation for Computational Linguistics, pages 475-482. Association for Computational Linguistics.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Tree kernel-based svm with structured syntactic knowledge for btg-based phrase reordering",
"authors": [
{
"first": "Min",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "698--707",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Min Zhang and Haizhou Li. 2009. Tree kernel-based svm with structured syntactic knowledge for btg-based phrase reordering. In Proceedings of the 2009 Confer- ence on Empirical Methods in Natural Language Pro- cessing: Volume 2-Volume 2, pages 698-707. Associ- ation for Computational Linguistics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example of translating a source sentence (English) into Japanese while reordering at the same time using a BTG tree.",
"uris": null,
"num": null,
"type_str": "figure"
},
"FIGREF1": {
"text": "Training the Reordering Model Input : Training data {\u27e8e, f , a\u27e9} L 0 Output: Feature weights \u03c0 for R 1 foreach iteration t do 2 foreach example \u27e8e, f , a\u27e9 do 3D \u2190 \u03c0 + R(D * , f ) \u2212 R(D, f );",
"uris": null,
"num": null,
"type_str": "figure"
}
}
}
}