{
"paper_id": "Y16-2010",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T13:46:53.328839Z"
},
"title": "HSSA Tree Structures for BTG-based Preordering in Machine Translation",
"authors": [
{
"first": "Yujia",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {
"postCode": "808-0135",
"settlement": "Kitakyushu",
"region": "Fukuoka",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Hao",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {
"postCode": "808-0135",
"settlement": "Kitakyushu",
"region": "Fukuoka",
"country": "Japan"
}
},
"email": ""
},
{
"first": "Yves",
"middle": [],
"last": "Lepage",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "Waseda University",
"location": {
"postCode": "808-0135",
"settlement": "Kitakyushu",
"region": "Fukuoka",
"country": "Japan"
}
},
"email": "yves.lepage@waseda.jp"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "The Hierarchical Sub-Sentential Alignment (HSSA) method is a method to obtain aligned binary tree structures for two aligned sentences in translation correspondence. We propose to use the binary aligned tree structures delivered by this method as training data for preordering prior to machine translation. For that, we learn a Bracketing Transduction Grammar (BTG) from these binary aligned tree structures. In two oracle experiments in English to Japanese and Japanese to English translation, we show that it is theoretically possible to outperform a baseline system with a default distortion limit of 6, by about 2.5 and 5 BLEU points and, 7 and 10 RIBES points respectively, when preordering the source sentences using the learnt preordering model and using a distortion limit of 0. An attempt at learning a preordering model and its results are also reported.",
"pdf_parse": {
"paper_id": "Y16-2010",
"_pdf_hash": "",
"abstract": [
{
"text": "The Hierarchical Sub-Sentential Alignment (HSSA) method is a method to obtain aligned binary tree structures for two aligned sentences in translation correspondence. We propose to use the binary aligned tree structures delivered by this method as training data for preordering prior to machine translation. For that, we learn a Bracketing Transduction Grammar (BTG) from these binary aligned tree structures. In two oracle experiments in English to Japanese and Japanese to English translation, we show that it is theoretically possible to outperform a baseline system with a default distortion limit of 6, by about 2.5 and 5 BLEU points and, 7 and 10 RIBES points respectively, when preordering the source sentences using the learnt preordering model and using a distortion limit of 0. An attempt at learning a preordering model and its results are also reported.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "One of the major common challenges for machine translation (MT) is the different order of the same conceptual units in the source and target languages. In order to get a fluent and adequate translation in the target language, the default phrase-based statistical machine translation (PB-SMT) system implemented in MOSES has a simple distortion model using position (Koehn et al., 2003) and lexical information (Tillmann, 2004) to allow reordering during decoding. Other solutions exist: e.g., the distortion model in (Al-Onaizan and Papineni, 2006) handles n-gram language model limitations; Setiawan et al. (2007) propose a function word centered syntaxbased (FWS) solution; Zhang et al. (2007) propose a reordering model integrating syntactic knowledge. Also, other models than the phrase-based model have been proposed to address the reordering problem, like hierarchical phrase-based SMT (Chiang, 2007) or syntax-based SMT (Yamada and Knight, 2001) .",
"cite_spans": [
{
"start": 365,
"end": 385,
"text": "(Koehn et al., 2003)",
"ref_id": "BIBREF13"
},
{
"start": 410,
"end": 426,
"text": "(Tillmann, 2004)",
"ref_id": "BIBREF24"
},
{
"start": 533,
"end": 548,
"text": "Papineni, 2006)",
"ref_id": "BIBREF0"
},
{
"start": 592,
"end": 614,
"text": "Setiawan et al. (2007)",
"ref_id": "BIBREF22"
},
{
"start": 676,
"end": 695,
"text": "Zhang et al. (2007)",
"ref_id": "BIBREF35"
},
{
"start": 892,
"end": 906,
"text": "(Chiang, 2007)",
"ref_id": "BIBREF3"
},
{
"start": 927,
"end": 952,
"text": "(Yamada and Knight, 2001)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Preordering (Xia and McCord, 2004; Collins et al., 2005) has been proposed primarily to solve the problems encountered when translating between languages with widely divergent syntax, for instance, from a subject-verb-object (SVO) language (like English and Mandarin Chinese) to a subjectobject-verb (SOV) language (like Japanese and Korean), Preordering is a pre-processing task that aims to rearrange the word order of a source sentence to fit the word order of the target language. It is separated from the core translation task. Recent approaches (DeNero and Uszkoreit, 2011; Neubig et al., 2012; Nakagawa, 2015 ) learn a preordering model based on Bracketing Transduction Grammar (BTG) (Wu, 1997) from parallel texts to score permutations by using tree structures as latent variables. They build the needed tree structures and the preordering model (i.e., a BTG) at the same time using word alignments. However it is needed to check whether a given sentence can fit the desired tree structures.",
"cite_spans": [
{
"start": 12,
"end": 34,
"text": "(Xia and McCord, 2004;",
"ref_id": "BIBREF31"
},
{
"start": 35,
"end": 56,
"text": "Collins et al., 2005)",
"ref_id": "BIBREF5"
},
{
"start": 551,
"end": 579,
"text": "(DeNero and Uszkoreit, 2011;",
"ref_id": "BIBREF6"
},
{
"start": 580,
"end": 600,
"text": "Neubig et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 601,
"end": 615,
"text": "Nakagawa, 2015",
"ref_id": "BIBREF19"
},
{
"start": 691,
"end": 701,
"text": "(Wu, 1997)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It seems of course more difficult to build both the tree structures and the preordering model at the same time than to build only a preordering model if the tree structures are given. In this paper, we rapidly obtain tree structures using word-to-word associations taking advantage of the hierarchical subsentential alignment (HSSA) method (Lardilleux et al., 2012) . This method computes a recursive binary segmentation in both languages at the same time, judging whether two spans with the same concepts in both languages are inverted or not. We conduct oracle experiments to show that these tree structures may be beneficial for PB-SMT. We then use these tree structures as the training data to build a preordering model without checking the validity by modifying the top-down BTG parsing method introduced in (Nakagawa, 2015) . Oracle experiments show that if we reorder source sentences exactly, translation scores can be improved by around 2.5 BLEU points and 7 RIBES points in English to Japanese) and 5 BLEU points and 10 RIBES points in Japanese to English. Experiments with our tree structures show that better RIBES scores can be easily obtained.",
"cite_spans": [
{
"start": 340,
"end": 365,
"text": "(Lardilleux et al., 2012)",
"ref_id": "BIBREF16"
},
{
"start": 813,
"end": 829,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The rest of this paper is organized as follows: Section 2 describes related work in preordering and BTG-based preordering. Section 3 shows how to obtain tree structures using word-to-word associations. Section 4 reports oracle preordering experiments. Section 5 gives a method to build a preordering model using tree structures. Section 6 presents the results of our experiments and their analysis.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Preordering in statistical machine translation (SMT) converts a source sentence S, before translation, into a reordered source sentence S , where the word order is similar to that of the target sentence T (Figure 1) .",
"cite_spans": [],
"ref_spans": [
{
"start": 205,
"end": 215,
"text": "(Figure 1)",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Preordering for SMT",
"sec_num": "2.1"
},
{
"text": "Preordering can be seen as an optimization problem, where we want to find the best reordered source sentence that maximizes the probability among all possible reordering of the sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Preordering for SMT",
"sec_num": "2.1"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S = argmax S \u2208\u03b3(S) P (S |S)",
"eq_num": "(1)"
}
],
"section": "Preordering for SMT",
"sec_num": "2.1"
},
{
"text": "S represents the best reordered source sentence, and \u03b3(S) stands for the set of all possible reordering of the source sentence. Syntax-based preordering based on the existence parsers has been proposed to pre-process the source sentences by using automatically learned rewriting patterns (Xia and McCord, 2004) . Several methods have been proposed methods, such as constituent parsing by automatically extracting preordering rules from a parallel corpus (Xia and Mc-Cord, 2004; Wu et al., 2011) or by creating rules manually (Wang et al., 2007; Han et al., 2012) , or dependency parsing with automatically created rules (Habash, 2012; Cai et al., 2014) or manually generated rules (Xu et al., 2009; Isozaki et al., 2010) .",
"cite_spans": [
{
"start": 288,
"end": 310,
"text": "(Xia and McCord, 2004)",
"ref_id": "BIBREF31"
},
{
"start": 454,
"end": 477,
"text": "(Xia and Mc-Cord, 2004;",
"ref_id": null
},
{
"start": 478,
"end": 494,
"text": "Wu et al., 2011)",
"ref_id": "BIBREF30"
},
{
"start": 525,
"end": 544,
"text": "(Wang et al., 2007;",
"ref_id": "BIBREF28"
},
{
"start": 545,
"end": 562,
"text": "Han et al., 2012)",
"ref_id": "BIBREF8"
},
{
"start": 620,
"end": 634,
"text": "(Habash, 2012;",
"ref_id": "BIBREF7"
},
{
"start": 635,
"end": 652,
"text": "Cai et al., 2014)",
"ref_id": "BIBREF2"
},
{
"start": 681,
"end": 698,
"text": "(Xu et al., 2009;",
"ref_id": "BIBREF32"
},
{
"start": 699,
"end": 720,
"text": "Isozaki et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preordering for SMT",
"sec_num": "2.1"
},
{
"text": "Another trend of research is to try to solve the preordering problem without relying on parsers. Tromble and Eisner (2009) propose sophisticated reordering models based on the Linear Ordering Problem. Visweswariah et al. (2011) learn a preordering model by similarity with the Traveling Salesman Problem. Lerner and Petrovs (2013) present a source-side classifier-based preordering model. Several pieces of research (DeNero and Uszkoreit, 2011; Neubig et al., 2012; Nakagawa, 2015) are mainly about using tree structures as latent variables for preordering models. This is detailed in the next subsection.",
"cite_spans": [
{
"start": 97,
"end": 122,
"text": "Tromble and Eisner (2009)",
"ref_id": "BIBREF25"
},
{
"start": 201,
"end": 227,
"text": "Visweswariah et al. (2011)",
"ref_id": "BIBREF27"
},
{
"start": 416,
"end": 444,
"text": "(DeNero and Uszkoreit, 2011;",
"ref_id": "BIBREF6"
},
{
"start": 445,
"end": 465,
"text": "Neubig et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 466,
"end": 481,
"text": "Nakagawa, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Preordering for SMT",
"sec_num": "2.1"
},
{
"text": "BTG-based preordering is based on Bracketing Transduction Grammar (BTG), also called Inversion Transduction Grammar (ITG) (Wu, 1997) . Whereas Chomsky Normal Form of context-free rules has two types of rules (X \u2192 X 1 X 2 and X \u2192 x) and the grammar is monolingual, BTG has three types of rules, Straight, Inverted and Terminal, to cope with the possible correspondences between a source language and a target language.",
"cite_spans": [
{
"start": 122,
"end": 132,
"text": "(Wu, 1997)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "BTG-based Preordering",
"sec_num": "2.2"
},
{
"text": "Straight keeps the same order in the source and the target languages; Inverted exchanges the order; Terminal just stands for the production of a nonterminal symbol both in the source and target languages. The corresponding tree structures are illustrated in Figure 3 from (a) to (c) in the same order. The parse tree obtained by applying a BTG to parse a pair of sentences, provides the necessary information to reorder the source sentence in conformity to the word order of the target sentence, as it suffices to Figure 2 : The difference between previous methods (Neubig et al., 2012; Nakagawa, 2015) and our proposed method when building a preordering model. In previous work, the tree structures and the preordering model should be deduced at the same time from the parallel text. Our work firstly produces the tree structures from parallel text, and then computes a preordering model. read the type of rules applied, straight or inverted. Neubig et al. (2012) present a discriminative parser using the derivations of tree structures as underlying variables from word alignment with the parallel corpus. However, the computation complexity is O(n 5 ) for a sentence length of n because the method guesses the tree structure using the Coke-Younger-Kasami (CYK) algorithm, which complexity is O(n 3 ). In order to reduce complexity, Nakagawa (2015) proposes a top-down BTG parsing approach instead of the bottom-up CYK algorithm. The computation complexity reduces to O(kn 2 ) for a sentence length of n and a beam width of k.",
"cite_spans": [
{
"start": 565,
"end": 586,
"text": "(Neubig et al., 2012;",
"ref_id": "BIBREF20"
},
{
"start": 587,
"end": 602,
"text": "Nakagawa, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 944,
"end": 964,
"text": "Neubig et al. (2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [
{
"start": 258,
"end": 266,
"text": "Figure 3",
"ref_id": "FIGREF1"
},
{
"start": 514,
"end": 522,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BTG-based Preordering",
"sec_num": "2.2"
},
{
"text": "Both methods need to predict the possible tree structures for each sentence when building the preordering model. Word alignments are used to check whether a pair of sentences can yield a valid tree structure. 1 Predicting tree structures while building the preordering model at the same time is difficult. In the present paper, we propose to directly generate the tree structures from the word-to-word association matrices, and to use these tree structures to build the preordering model afterwards. Figure 2 illustrates the differences between the two previous methods and our proposed method.",
"cite_spans": [],
"ref_spans": [
{
"start": 500,
"end": 508,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "BTG-based Preordering",
"sec_num": "2.2"
},
{
"text": "In our proposed method, the tree structures are obtained by using soft alignment matrices and recursively segmenting these matrices with Ncut scores (Zha et al., 2001 ) using the hierarchical subsentential alignment (HSSA) method (Lardilleux et al., 2012) .",
"cite_spans": [
{
"start": 149,
"end": 166,
"text": "(Zha et al., 2001",
"ref_id": "BIBREF34"
},
{
"start": 230,
"end": 255,
"text": "(Lardilleux et al., 2012)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "The HSSA method delivers tree structures which are similar to parse trees obtained by the application of a BTG. Figure 4 shows that segmenting along the second diagonal with the HSSA method corresponds to an Inverted rule in the BTG formalism and that segmenting according to the first diagonal corresponds to Straight. The column S p .S p 2 and the row T p .T p of the matrix in Figure 4 are related to part of the source sentence and part of the target sentence respectively.",
"cite_spans": [],
"ref_spans": [
{
"start": 112,
"end": 120,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 380,
"end": 388,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "The HSSA method uses soft alignment matrices structure is: B2D4A1C3 to A1B2C3D4. where each cell for a source word s and a target word t has a score w(s, t) computed as the geometric mean of the word-to-word translation probabilities in both directions (see Equation 2). In Figure 4, the saturation of the cells represents the score w(s, t): the darker the color, the higher the score.",
"cite_spans": [],
"ref_spans": [
{
"start": 274,
"end": 280,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "w(s, t) = p(s|t) \u00d7 p(t|s)",
"eq_num": "(2)"
}
],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "Each segmentation iteration segments the soft alignment matrix in both horizontal and vertical directions to decompose the matrix recursively into two corresponding sub-parts. There are two cases: the two sub-parts follow the main diagonal, (S p , T p ) and (S p , T p ), this is similar to the BTG rule Straight (see Figure 4 (b)); or they follow the second diagonal, (S p , T p ) and (S p , T p ), this is similar to the BTG rule Inverted (see Figure 4 (a)). In order to decide for the segmentation point and for the direction in a submatrix (X, (Zha et al., 2001 ) of crossing points in the matrix (",
"cite_spans": [
{
"start": 548,
"end": 565,
"text": "(Zha et al., 2001",
"ref_id": "BIBREF34"
}
],
"ref_spans": [
{
"start": 318,
"end": 326,
"text": "Figure 4",
"ref_id": "FIGREF2"
},
{
"start": 446,
"end": 454,
"text": "Figure 4",
"ref_id": "FIGREF2"
}
],
"eq_spans": [],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "Y ) \u2208 {S p , S p } \u00d7 {T p , T p }, Ncut scores",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "S p .S p , T p .T p ) are calculated in both direc- tions. W (X, Y ) = s\u2208X,t\u2208Y w(s, t) (3) cut(X, Y ) = W (X, Y ) + W (X, Y ) (4) Ncut(X, Y ) = cut(X, Y ) Ncut(X, Y ) + 2 \u00d7 W (X, Y ) + cut(X, Y ) Ncut(X, Y ) + 2 \u00d7 W (X, Y )",
"eq_num": "(5)"
}
],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "One tree structure for one sentence is generated with sub-sentential alignments at the same time by remembering the best segmentation point of each iteration in a sentence, using the HSSA method. In our proposed method, all the tree structures obtained from a training bilingual corpus become a training data set to learn a preordering model. The HSSA approach allows to get tree structures easily and rapidly, by using only a parallel corpus and the word-to-word associations obtained from it. No further annotation is needed.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Obtaining HSSA Tree Structures",
"sec_num": "3"
},
{
"text": "So as to check whether our proposed method is promising, in a first step, we perform oracle experiments. The purpose is to determine the upper bounds that can be obtained in translation evaluation scores. This will offer a judgment on the theoretical effectiveness of utilizing tree structures generated by the hierarchical sub-sentential alignment method.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Experiments: Upper Bounds",
"sec_num": "4"
},
{
"text": "In the oracle experiments, we apply the HSSA method on the sentence pairs of the test set to obtain their tree structures and then use these tree structures to reorder the source sentences of the test set. In a real experiment, this is impossible, because the target sentence, and hence the soft alignment matrices are unknown.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Experiments: Upper Bounds",
"sec_num": "4"
},
{
"text": "To reorder the words in a source sentence, as explained above, we recursively traverse the tree structure in a top-down manner. The order of the words in the source sentence is changed according to the types of nodes encountered in the tree structures. When the type of node is Straight, the two spans in the source sentence keep the original order; when it is Inverted, the two spans in the source sentence are inverted. After reordering, the alignment between the reordered source sentence and the target sentence follows the main diagonal, up to the cases where one word corresponds to several words. In the oracle experiment, this is applied on test data. In a real experiment, this is applied on test data and development data, while the scheme given in Figure 6 is applied on the test data.",
"cite_spans": [],
"ref_spans": [
{
"start": 759,
"end": 767,
"text": "Figure 6",
"ref_id": "FIGREF5"
}
],
"eq_spans": [],
"section": "Oracle Experiments: Upper Bounds",
"sec_num": "4"
},
{
"text": "After reordering all source sentences in the training, tuning, and test sets, a standard PB-SMT system is built as usual with the reordered source sentences in place of the original sources sentences, and with their corresponding target sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Oracle Experiments: Upper Bounds",
"sec_num": "4"
},
{
"text": "A preordering model is built by using the tree structures obtained on the parallel corpus used as training data for machine translation, as its training data. On test data, i.e., source sentences alone, the role of the pre-ordering model is to guess a new order for the words of the source sentences in the absence of corresponding target sentences. Figure 6 illustrates the process of building the preordering model with the tree structures obtained as explained in Figure 1 from the sentence pairs of the training data of a machine translation system. We now present a method to learn and apply a preordering model. This method is a modification of the top-down BTG parsing method presented in (Nakagawa, 2015) . The main difference is that, in our present configuration, tree structures are available from a parallel corpus. In Nakagawa's method, word alignments are used to predict the tree structures, so that, after segmenting one span into two, whether a word in one of two spans aligns to another word in the other span is checked in each iteration. However, in our configuration, we are able to directly get the separating points because we know the tree structure produced by the HSSA method.",
"cite_spans": [
{
"start": 696,
"end": 712,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 350,
"end": 358,
"text": "Figure 6",
"ref_id": "FIGREF5"
},
{
"start": 467,
"end": 473,
"text": "Figure",
"ref_id": null
}
],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "The best derivationd for a sentence is important for both learning and applying a preordering model. Because one derivation leads to one parse tree, finding the best derivation can be regarded as finding the best parse tree. To assess the quality of a parse tree, we compare it with the tree structure output by the HSSA method. The best parse tree is the tree with the maximal score defined by the following formula:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "d = argmax d\u2208D(T ) m\u2208Nodes(T ) \u03c3(m)",
"eq_num": "(6)"
}
],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "where d represents one derivation in the set of all possible derivations D(T ) for the tree structure T ; m represents one node in the set of nodes Nodes(T ) of the tree structure T , and \u03c3(m) represents the score of the node. The score of a node in a tree structure is computed by applying the perceptron algorithm (Collins and Roark, 2004) , i.e., by taking each node of trees as a latent variable (Nakagawa, 2015) . This algorithm is an online learning algorithm, and processes nodes in an available tree structure one by one, by using the following formula to calculate the score of each node \u03c3(m):",
"cite_spans": [
{
"start": 316,
"end": 341,
"text": "(Collins and Roark, 2004)",
"ref_id": "BIBREF4"
},
{
"start": 400,
"end": 416,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "\u03c3(m) = \u039b \u2022 \u03a6(m), m \u2208 Nodes(T )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "where \u03a6(m) represents the feature vector of this node, and \u039b represents the vector of feature weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "Due to iterated binary decomposition, an increasing number of iterations for one sentence results in many derivations that wait for being checked whether they are the best ones or not, both while building and while applying the preordering model. In order to control the size of the search space, a beam search is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "We need to enable the system to outputd to become as similar as possible as the derivation d found in the tree structure obtained by the HSSA model while building the preordering model. To do so, we learn the feature vectors and adjust their weight vectors by using the Expectation-Maximization (EM) algorithm on the training data. In the end, we obtain a preordering model with features and corresponding weights.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "We then apply the preordering model on all the source sentences of all three data sets, training, tuning, and test, to reorder their words. A standard PB-SMT system is then built as usual with reordered source sentences in place of the original sources sentences, and with their corresponding target sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Building and Applying a Preordering Model",
"sec_num": "5"
},
{
"text": "We build our PB-SMT systems in a standard way using the Moses system , KenLM for language modelling (Heafield, 2011) , and standard lexical reordering model . This lexical reordering model allows local reordering with a given distortion limit during decoding. The default of the distortion limit in Moses is 6. When set to 0, the system does not perform any lexical reordering. The language pair we work on is Japanese-English in both directions. The data sets are the training, tuning and test sets from the Kyoto Free Translation Task (KFTT) corpus. 3 In this corpus, Japanese sentences have been segmented and tokenized by KyTea. 4 Table 1 gives statistics on these data sets.",
"cite_spans": [
{
"start": 100,
"end": 116,
"text": "(Heafield, 2011)",
"ref_id": "BIBREF9"
},
{
"start": 552,
"end": 553,
"text": "3",
"ref_id": null
},
{
"start": 633,
"end": 634,
"text": "4",
"ref_id": null
}
],
"ref_spans": [
{
"start": 635,
"end": 642,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "6.1"
},
{
"text": "For the generation of tree structures, word-toword associations are extracted from the training set andused to the hierarchical sub-sentential alignment method, are extracted only from the training set. For our preordering model, we carried out experiments by following the experimental settings reported in (Nakagawa, 2015) with a beam search of 20, a number of iteration of 20 and 100,000 sentences pairs as preordering training extracted at random from the training set. We use three kinds of features, LEX, POS, and CLASS. LEX consists in the lexical items inside a given window around the current word in the source language. POS are the parts-of-speech of the lexical items of the LEX fea-ture words. The CLASS features are their semantic classes. The POS tagging information is provided by KyTea for Japanese, and the Lookahead Part-Of-Speech Tagger (Tsuruoka et al., 2011) for English. 5 We use the Brown clustering algorithm (Brown et al., 1992; Liang, 2005) for word class information in English and Japanese.",
"cite_spans": [
{
"start": 308,
"end": 324,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 857,
"end": 880,
"text": "(Tsuruoka et al., 2011)",
"ref_id": "BIBREF26"
},
{
"start": 894,
"end": 895,
"text": "5",
"ref_id": null
},
{
"start": 934,
"end": 954,
"text": "(Brown et al., 1992;",
"ref_id": "BIBREF1"
},
{
"start": 955,
"end": 967,
"text": "Liang, 2005)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "6.1"
},
{
"text": "In order to evaluate the efficiency of reordering, we use a modified version of the Fuzzy Reordering Score (FRS) (Talbot et al., 2011) and Kendall's \u03c4 (Kendall, 1938) as intrinsic evaluation metrics. The modified version of FRS (see Equation 7) is inspired by (Nakagawa, 2015) because only two words are considered and the indices of the first and the last words are also considered (Neubig et al., 2012) .",
"cite_spans": [
{
"start": 113,
"end": 134,
"text": "(Talbot et al., 2011)",
"ref_id": "BIBREF23"
},
{
"start": 139,
"end": 166,
"text": "Kendall's \u03c4 (Kendall, 1938)",
"ref_id": null
},
{
"start": 260,
"end": 276,
"text": "(Nakagawa, 2015)",
"ref_id": "BIBREF19"
},
{
"start": 383,
"end": 404,
"text": "(Neubig et al., 2012)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "mod FRS = B |S| + 1",
"eq_num": "(7)"
}
],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
{
"text": "B represents the number of word bigrams which appear in both the reordered sentence and the golden reference, and |S| represents the length of the source sentence S in words. We also change the formula for calculating Kendall's \u03c4 to a normalized Kendall's \u03c4 following (Isozaki et al., 2010) . Equation 8gives the definition.",
"cite_spans": [
{
"start": 268,
"end": 290,
"text": "(Isozaki et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
{
"text": "EQUATION",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [
{
"start": 0,
"end": 8,
"text": "EQUATION",
"ref_id": "EQREF",
"raw_str": "norm \u03c4 = 1 \u2212 E |S| \u00d7 (|S| \u2212 1)/2",
"eq_num": "(8)"
}
],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
{
"text": "E represents the number of not increasing word pairs and |S| \u00d7 (|S| \u2212 1)/2 is the total number of pairs. Being a metric to evaluate the quality of machine translation, RIBES (Isozaki et al., 2010) is an extrinsic metric in our work. However, given the fact that RIBES takes order into account, it can also be considered an intrinsic metric in our work. As a matter of fact, RIBES bases on the computation of FRS and \u03c4 .",
"cite_spans": [
{
"start": 174,
"end": 196,
"text": "(Isozaki et al., 2010)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
{
"text": "In addition, we of course use BLEU (Papineni et al., 2002) for the evaluation of machine translation quality as it is the de facto standard metric. Table 2 shows the evaluation results in all intrinsic evaluation metrics (modified FRS and normalized \u03c4 ), the intrinsic and extrinsic evaluation metric (RIBES) and in the extrinsic evaluation metric (BLEU). We use all these metrics in the language pair English-Japanese in both directions. In both directions, the seven other BLEU scores are all statistically significantly different (p-value < 0.05) from the BLEU score of the baseline system with a distortion limit of 6.",
"cite_spans": [
{
"start": 35,
"end": 58,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF21"
}
],
"ref_spans": [
{
"start": 148,
"end": 155,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": "6.2"
},
{
"text": "For the oracle experiments, all the scores are much higher than those of the baseline. The smallest improvement in extrinsic evaluation is in RIBES, around 6.5, when dl is equal to 6 in the language pair English to Japanese, but the difference is still statistically significant. The increase in BLEU scores is 4 points with a distortion limit of 0 and 3 points with a distortion limit of 6 in English to Japanese, 7 points with distortion limit of 0 and 5.5 points with distortion limit of 6 in Japanese to English, which is statistically significant. We also compare the results of the oracle experiments when the distortion limit is 0 to the baseline with a default distortion limit of 6. We get almost 2.5 BLEU point improvement in English to Japanese and 5 BLEU point improvement in Japanese to English. The oracle experiments outperform Nakagawa's top-down BTG parsing method, except in FRS and normalized \u03c4 scores for the language pair English to Japanese.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "6.3"
},
{
"text": "These results demonstrate the theoretical effectiveness of utilizing the tree structures generated by the HSSA method. In other words, the tree structures automatically generated using the HSSA method CAN benefit PB-SMT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "6.3"
},
{
"text": "Our preordering model tries to reproduce the results of the oracle experiments. The scores for intrinsic evaluation metrics in both directions are better than those of the baseline, with large improvement. We obtain slight but statistically significant increases in the extrinsic evaluation with the same distortion limit. However, when compared to the baseline system with a default distortion limit of 6, the PB-SMT systems with a distortion limit of 0 that were built with our preordering models still lag behind, by around 1 BLEU point in English to Japanese and less than 0.5 BLEU point in Japanese to English. However, the comparison is in favor of our system (preordering, distortion limit 0) in RIBES by 1 point. This seems natural as RIBES is a metric for machine translation which takes reordering into account. The reasons for these mitigated results are listed below. Firstly, our preordering models do not simulates the HSSA method so well, because this method considers all words in the two parts at hand, while the learning models we used rely only on the features of two words in the beginning and the ending position of each part. Secondly, there may be several segmentation points with similar Ncut values when building the tree structures. We choose only one. To memorize other alternatives, the use of forests instead of trees would be required. Memorizing these alternatives may lead to larger increases in evaluation scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results and Analysis",
"sec_num": "6.3"
},
{
"text": "In this paper, we firstly automatically generate tree structures using the hierarchical sub-sentential alignment (HSSA) method. These tree structures are equivalent to parse trees obtained by Bracketing Transduction Grammars (BTG). Secondly, based on these tree structures, we build a preordering model. Thirdly, using this preordering model, source sentences are reordered. In an oracle experiment, we show that we may expect to outperform a baseline system with the default distortion limit of 6 by 2.5 (English to Japanese) or 5 (Japanese to English) BLEU points if we are able to reorder the text sentences exactly, without the need of any distortion limit. Other experiments show that tree structures generated by the HSSA method help in getting better RIBES scores than a baseline system without preordering.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "In future work, we will try different features, times of iteration and sizes of beam. In addition, we would also like to try to the use of forest structures instead of tree structures.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "7"
},
{
"text": "30th Pacific Asia Conference on Language, Information and Computation (PACLIC 30)Seoul, Republic of Korea, October 28-30, 2016",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "A sentence pair which cannot be represented by a BTG tree",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "The symbol \".\" stands for the concatenation of word strings.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.phontron.com/kftt/index.html 4 http://www.phontron.com/kytea/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "http://www.logos.ic.i.u-tokyo.ac.jp/ tsuruoka/lapos/",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "The second author is supported in part by China Scholarship Council (CSC) under CSC Grant No. 201406890026. We would like to thank Tetsuji Nakagawa for his most helpful comments on the experiment setting details.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgements",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Distortion models for statistical machine translation",
"authors": [
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "529--536",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yaser Al-Onaizan and Kishore Papineni. 2006. Dis- tortion models for statistical machine translation. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meet- ing of the Association for Computational Linguistics, pages 529-536, Sydney, Australia, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Classbased n-gram models of natural language",
"authors": [
{
"first": "Peter",
"middle": [
"F."
],
"last": "Brown",
"suffix": ""
},
{
"first": "Peter",
"middle": [
"V."
],
"last": "deSouza",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L."
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J."
],
"last": "Della Pietra",
"suffix": ""
},
{
"first": "Jenifer",
"middle": [
"C."
],
"last": "Lai",
"suffix": ""
}
],
"year": 1992,
"venue": "Computational linguistics",
"volume": "18",
"issue": "4",
"pages": "467--479",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, Peter V. deSouza, Robert L. Mercer, Vin- cent J. Della Pietra, and Jenifer C. Lai. 1992. Class- based n-gram models of natural language. Computa- tional linguistics, 18(4): 467-479.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Dependency-based Pre-ordering for Chinese-English Machine Translation",
"authors": [
{
"first": "Jingsheng",
"middle": [],
"last": "Cai",
"suffix": ""
},
{
"first": "Masao",
"middle": [],
"last": "Utiyama",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
},
{
"first": "Yujie",
"middle": [],
"last": "Zhang",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "155--160",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jingsheng Cai, Masao Utiyama, Eiichiro Sumita, and Yu- jie Zhang. 2014. Dependency-based Pre-ordering for Chinese-English Machine Translation. In Proceed- ings of the 52nd Annual Meeting of the Association for Computational Linguistics, pages 155-160, Baltimore, MD, USA, June. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Hierarchical Phrase-Based Translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2007,
"venue": "Computational Linguistics",
"volume": "33",
"issue": "2",
"pages": "201--228",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Chiang. 2007. Hierarchical Phrase-Based Trans- lation. Computational Linguistics, 33(2): 201-228.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Incremental Parsing with the Perceptron Algorithm",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Brian",
"middle": [],
"last": "Roark",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "111--118",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins and Brian Roark. 2004. Incremental Parsing with the Perceptron Algorithm. In Proceed- ings of the 42nd Annual Meeting on Association for Computational Linguistics, pages 111-118, Barcelona, Spain, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Clause Restructuring for Statistical Machine Translation",
"authors": [
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Ivona",
"middle": [],
"last": "Ku\u010derov\u00e1",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the 43rd Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "531--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Michael Collins, Philipp Koehn, and Ivona Ku\u010derov\u00e1. 2005. Clause Restructuring for Statistical Machine Translation. In Proceedings of the 43rd Annual Meet- ing of the Association for Computational Linguistics, pages 531-540, Ann Arbor, MI, USA, June. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Inducing Sentence Structure from Parallel Corpora for Reordering",
"authors": [
{
"first": "John",
"middle": [],
"last": "Denero",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "193--203",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "John DeNero and Jakob Uszkoreit. 2011. Inducing Sen- tence Structure from Parallel Corpora for Reordering. In Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing, pages 193- 203, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Syntactic Preprocessing for Statistical Machine Translation",
"authors": [
{
"first": "Nizar",
"middle": [],
"last": "Habash",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 11th Machine Translation Summit (MT-Summit)",
"volume": "",
"issue": "",
"pages": "215--222",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nizar Habash. 2012. Syntactic Preprocessing for Sta- tistical Machine Translation. In Proceedings of the 11th Machine Translation Summit (MT-Summit), pages 215-222, Copenhagen, Denmark, September.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Head Finalization Reordering for Chinese-to-Japanese Machine Translation",
"authors": [
{
"first": "Dan",
"middle": [],
"last": "Han",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Xianchao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of SSST-6, Sixth Workshop on Syntax, Semantics and Structure in Statistical Translation",
"volume": "",
"issue": "",
"pages": "57--66",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dan Han, Katsuhito Sudoh, Xianchao Wu, Kevin Duh, Hajime Tsukada, and Masaaki Nagata. 2012. Head Finalization Reordering for Chinese-to-Japanese Ma- chine Translation. In Proceedings of SSST-6, Sixth Workshop on Syntax, Semantics and Structure in Sta- tistical Translation, pages 57-66, Jeju, Korea, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "KenLM: Faster and Smaller Language Model Queries",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 6th Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "187--197",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield. 2011. KenLM: Faster and Smaller Language Model Queries. In Proceedings of the 6th Workshop on Statistical Machine Translation, pages 187-197, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Head Finalization: A Simple Reordering Rule for SOV Languages",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the Joint 5th Workshop on Statistical Machine Translation and Metrics MATR",
"volume": "",
"issue": "",
"pages": "244--251",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Katsuhito Sudoh, Hajime Tsukada, and Kevin Duh. 2010a. Head Finalization: A Simple Re- ordering Rule for SOV Languages. In Proceedings of the Joint 5th Workshop on Statistical Machine Trans- lation and Metrics MATR, pages 244-251, Uppsala, Sweden, July. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Automatic Evaluation of Translation Quality for Distant Language Pairs",
"authors": [
{
"first": "Hideki",
"middle": [],
"last": "Isozaki",
"suffix": ""
},
{
"first": "Tsutomu",
"middle": [],
"last": "Hirao",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
}
],
"year": 2010,
"venue": "Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "944--952",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hideki Isozaki, Tsutomu Hirao, Kevin Duh, Katsuhito Sudoh, and Hajime Tsukada. 2010b. Automatic Eval- uation of Translation Quality for Distant Language Pairs. In Proceedings of the 2010 Conference on Empirical Methods in Natural Language Processing, pages 944-952, MIT, Massachusetts, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A new measure of rank correlation",
"authors": [
{
"first": "Maurice",
"middle": [
"G."
],
"last": "Kendall",
"suffix": ""
}
],
"year": 1938,
"venue": "Biometrika",
"volume": "30",
"issue": "2",
"pages": "81--93",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maurice G Kendall. 1938. A new measure of rank corre- lation. Biometrika 30(1/2): 81-93.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Statistical Phrase-Based Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language",
"volume": "",
"issue": "",
"pages": "48--54",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical Phrase-Based Translation. In Pro- ceedings of the 2003 Conference of the North Ameri- can Chapter of the Association for Computational Lin- guistics on Human Language, pages 48-54, Edmon- ton, Canada, May-June. Association for Computa- tional Linguistics.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Amittai",
"middle": [],
"last": "Axelrod",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [
"Birch"
],
"last": "Mayne",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Miles",
"middle": [],
"last": "Osborne",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
}
],
"year": 2005,
"venue": "International Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "68--75",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Amittai Axelrod, Alexandra Birch Mayne, Chris Callison-Burch, Miles Osborne, and David Talbot. 2005. Edinburgh System Description for the 2005 IWSLT Speech Translation Evaluation. In 2005 International Workshop on Spoken Language Translation, pages 68-75, Pittsburgh, PA, USA, Octo- ber.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th annual meeting of the ACL on interactive poster and demonstration sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the 45th annual meet- ing of the ACL on interactive poster and demonstra- tion sessions, pages 177-180, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Hierarchical Sub-sentential Alignment with Anymalign",
"authors": [
{
"first": "Adrien",
"middle": [],
"last": "Lardilleux",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Yvon",
"suffix": ""
},
{
"first": "Yves",
"middle": [],
"last": "Lepage",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 16th annual conference of the European Association for Machine Translation (EAMT 2012)",
"volume": "",
"issue": "",
"pages": "279--286",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Adrien Lardilleux, Fran\u00e7ois Yvon, and Yves Lepage. 2012. Hierarchical Sub-sentential Alignment with Anymalign. In Proceedings of the 16th annual confer- ence of the European Association for Machine Trans- lation (EAMT 2012), pages 279-286, Trento, Italy, May.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Efficient Top-Down BTG Parsing for Machine Translation Preordering",
"authors": [
{
"first": "Uri",
"middle": [],
"last": "Lerner",
"suffix": ""
},
{
"first": "Slav",
"middle": [],
"last": "Petrovs",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "513--523",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Uri Lerner and Slav Petrovs. 2013. Efficient Top- Down BTG Parsing for Machine Translation Preorder- ing. In Proceedings of the 2013 Conference on Empir- ical Methods in Natural Language Processing, 513- 523, Seattle, Washington, USA, October. Association for Computational Linguistics.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Semi-supervised learning for natural language",
"authors": [
{
"first": "Percy",
"middle": [],
"last": "Liang",
"suffix": ""
}
],
"year": 2005,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Percy Liang. 2005. Semi-supervised learning for natural language. Ph.D. Dissertation. Massachusetts Institute of Technology.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Efficient Top-Down BTG Parsing for Machine Translation Preordering",
"authors": [
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "208--218",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tetsuji Nakagawa. 2015. Efficient Top-Down BTG Pars- ing for Machine Translation Preordering. In Proceed- ings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing, pages 208-218, Beijing, China, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Inducing a Discriminative Parser to Optimize Machine Translation Reordering",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Shinsuke",
"middle": [],
"last": "Mori",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "843--853",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig, Taro Watanabe, and Shinsuke Mori. 2012. Inducing a Discriminative Parser to Opti- mize Machine Translation Reordering. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 843-853, Jeju Is- land, Korea, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a Method for Automatic Eval- uation of Machine Translation. In Proceedings of the 40th Annual Meeting of the Association for Computa- tional Linguistics (ACL), pages 311-318, Philadelphia, PA, USA, July. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Ordering Phrases with Function Words",
"authors": [
{
"first": "Hendra",
"middle": [],
"last": "Setiawan",
"suffix": ""
},
{
"first": "Min-Yen",
"middle": [],
"last": "Kan",
"suffix": ""
},
{
"first": "Haizhou",
"middle": [],
"last": "Li",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics",
"volume": "",
"issue": "",
"pages": "712--719",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hendra Setiawan, Min-Yen Kan and Haizhou Li. 2007. Ordering Phrases with Function Words. In Proceed- ings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 712-719, Prague, Czech Republic, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "A Lightweight Evaluation Framework for Machine Translation Reordering",
"authors": [
{
"first": "David",
"middle": [],
"last": "Talbot",
"suffix": ""
},
{
"first": "Hideto",
"middle": [],
"last": "Kazawa",
"suffix": ""
},
{
"first": "Hiroshi",
"middle": [],
"last": "Ichikawa",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Katz-Brown",
"suffix": ""
},
{
"first": "Masakazu",
"middle": [],
"last": "Seno",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"J"
],
"last": "Och",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 6th Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "12--21",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Talbot, Hideto Kazawa, Hiroshi Ichikawa, Jason Katz-Brown, Masakazu Seno, and Franz J Och. 2011. A Lightweight Evaluation Framework for Machine Translation Reordering. In Proceedings of the 6th Workshop on Statistical Machine Translation, pages 12-21, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "A unigram orientation model for statistical machine translation",
"authors": [
{
"first": "Christoph",
"middle": [],
"last": "Tillmann",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers)",
"volume": "",
"issue": "",
"pages": "101--104",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christoph Tillmann. 2004. A unigram orientation model for statistical machine translation. In Proceedings of the 2004 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (Short Papers), pages 101- 104, Boston, MA, USA, May. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Learning Linear Ordering Problems for Better Translation",
"authors": [
{
"first": "Roy",
"middle": [],
"last": "Tromble",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Eisner",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1007--1016",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Roy Tromble and Jason Eisner. 2009. Learning Linear Ordering Problems for Better Translation. In Proceed- ings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 1007-1016, Sin- gapore, August. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning with Lookahead: Can History-Based Models Rival Globally Optimized Models",
"authors": [
{
"first": "Yoshimasa",
"middle": [],
"last": "Tsuruoka",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Miyao",
"suffix": ""
},
{
"first": "Junichi",
"middle": [],
"last": "Kazama",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the Fifteenth Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "238--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yoshimasa Tsuruoka, Yusuke Miyao, and Junichi Kazama. 2011. Learning with Lookahead: Can History-Based Models Rival Globally Optimized Models?. In Proceedings of the Fifteenth Conference on Computational Natural Language Learning, pages 238-246, Portland, Oregon, USA, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "A Word Reordering Model for Improved Machine Translation",
"authors": [
{
"first": "Karthik",
"middle": [],
"last": "Visweswariah",
"suffix": ""
},
{
"first": "Rajakrishnan",
"middle": [],
"last": "Rajkumar",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Gandhe",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 2011 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "486--496",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karthik Visweswariah, Rajakrishnan Rajkumar, and Ankur Gandhe. 2011. A Word Reordering Model for Improved Machine Translation. In Proceedings of the 2011 Conference on Empirical Methods in Natu- ral Language Processing, pages 486-496, Edinburgh, Scotland, UK, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Chinese syntactic reordering for statistical machine translation",
"authors": [
{
"first": "Chao",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Collins",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "737--745",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chao Wang, Michael Collins, and Philipp Koehn. 2007. Chinese syntactic reordering for statistical machine translation. In Proceedings of the 2007 Joint Confer- ence on Empirical Methods in Natural Language Pro- cessing and Computational Natural Language Learn- ing, pages 737-745, Prague, June. Association for Computational Linguistics.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora",
"authors": [
{
"first": "Dekai",
"middle": [],
"last": "Wu",
"suffix": ""
}
],
"year": 1997,
"venue": "Computational Linguistics",
"volume": "23",
"issue": "3",
"pages": "377--403",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dekai Wu. 1997. Stochastic Inversion Transduction Grammars and Bilingual Parsing of Parallel Corpora. Computational Linguistics, 23(3): 377-403.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Extracting Preordering Rules from Predicate-Argument Structures",
"authors": [
{
"first": "Xianchao",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Duh",
"suffix": ""
},
{
"first": "Hajime",
"middle": [],
"last": "Tsukada",
"suffix": ""
},
{
"first": "Masaaki",
"middle": [],
"last": "Nagata",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 5th International Joint Conference on Natural Language Processing",
"volume": "",
"issue": "",
"pages": "29--37",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Xianchao Wu, Katsuhito Sudoh, Kevin Duh, Hajime Tsukada, and Masaaki Nagata. 2011. Extracting Pre- ordering Rules from Predicate-Argument Structures. In Proceedings of the 5th International Joint Confer- ence on Natural Language Processing, pages 29-37, Chiang Mai, Thailand, November.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Improving a statistical MT system with automatically learned rewrite patterns",
"authors": [
{
"first": "Fei",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Mccord",
"suffix": ""
}
],
"year": 2004,
"venue": "Proceedings of the 20th international conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "508--515",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Fei Xia and Michael McCord. 2004. Improving a sta- tistical MT system with automatically learned rewrite patterns. In Proceedings of the 20th international con- ference on Computational Linguistics, pages 508-515, Geneva, Switzerland, August. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages",
"authors": [
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Jaeho",
"middle": [],
"last": "Kang",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Ringgaard",
"suffix": ""
},
{
"first": "Franz",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2009,
"venue": "Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the ACL",
"volume": "",
"issue": "",
"pages": "245--253",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peng Xu, Jaeho Kang, Michael Ringgaard, and Franz Och. 2009. Using a Dependency Parser to Improve SMT for Subject-Object-Verb Languages. In Human Language Technologies: The 2009 Annual Conference of the North American Chapter of the ACL, pages 245- 253, Boulder, Colorado, June. Association for Com- putational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "A Syntaxbased Statistical Translation Model",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Yamada",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the 39th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "523--530",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Yamada and Kevin Knight. 2001. A Syntax- based Statistical Translation Model. In Proceedings of the 39th Annual Meeting on Association for Compu- tational Linguistics, pages 523-530, Toulouse, France, July. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Bipartite Graph Partitioning and Data Clustering",
"authors": [
{
"first": "Hongyuan",
"middle": [],
"last": "Zha",
"suffix": ""
},
{
"first": "Xiaofeng",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Ding",
"suffix": ""
},
{
"first": "Horst",
"middle": [],
"last": "Simon",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Gu",
"suffix": ""
}
],
"year": 2001,
"venue": "Proceedings of the tenth international conference on Information and knowledge management",
"volume": "",
"issue": "",
"pages": "25--32",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Hongyuan Zha, Xiaofeng He, Chris Ding, Horst Simon, and Ming Gu. 2001. Bipartite Graph Partitioning and Data Clustering. In Proceedings of the tenth in- ternational conference on Information and knowledge management, pages 25-32, Atlanta, Georgia, USA, November. Association for Computational Linguis- tics.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Phrase Reordering Model Integrating Syntactic Knowledge for SMT",
"authors": [
{
"first": "Dongdong",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Chi-Ho",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "533--540",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dongdong Zhang, Mu Li, Chi-Ho Li, and Ming Zhou. 2007. Phrase Reordering Model Integrating Syntactic Knowledge for SMT. In Proceedings of the 2007 Joint Conference on Empirical Methods in Natural Lan- guage Processing and Computational Natural Lan- guage Learning, pages 533-540, Prague, Czech Re- public, June. Association for Computational Linguis- tics.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"text": "Example of preordering.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF1": {
"text": "Tree structures related to bracketing transduction grammar.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF2": {
"text": "Hierarchical sub-sentential alignment and generation of tree structures. (a) a best segmentation according to the second diagonal in the soft alignment matrix using the HSSA method coresponds to an Inverted rule in the BTG formalism; (b) a best segmentation according to the main diagonal corresponds to a Straight rule. (b) is a sub-part in (a) to illustrate recursivity.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF3": {
"text": "Figure 5 shows an example.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF4": {
"text": "Example for oracle experiment. (a) a soft alignment matrix between a source sentence (left) and a target sentence (above); (b) a tree structure with Straight or Inverted nodes; (c) the alignment between the reordered source sentence and the target sentence. The arrow from (a) to (b) represents the generation of tree structures from word-toword associations by use of the HSSA method; the arrow from (b) to (c) is reordering.",
"uris": null,
"type_str": "figure",
"num": null
},
"FIGREF5": {
"text": "Example of building and applying preordering model using tree structures as the reference. (a), (b) and the arrow from (a) to (b) are the same with Figure 5. The difference is that both (a) and (b) generating from only a training set. (c) a sentence from test set becomes a target-like source sentence in the solid line and in dotted line it shows corresponding target sentence. The arrow from (b) to (c) represents building preordering model.",
"uris": null,
"type_str": "figure",
"num": null
},
"TABREF1": {
"text": "Number of sentences and words in the training, tuning and test sets of the KFTT corpus.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
},
"TABREF3": {
"text": "Intrinsic and extrinsic evaluation scores in English to Japanese and Japanese to English (mod FRS is the modified Fuzzy Reordering Score; norm \u03c4 is normalized Kendall's \u03c4 ; dl stands for distortion limits). Baseline is a default PB-SMT system; Tree-based is our proposed preordering model; Top-down is the top-down BTG parsingbased reordering model; Oracle is an oracle system that uses HSSA tree structures obtained for the test set. The gray cells indicate the results to compare in translation: systems with preordering methods and with a distortion limit of 0 should be compared with the corresponding baseline system with a default distortion limit of 6; other results are given for completeness.",
"html": null,
"content": "<table/>",
"num": null,
"type_str": "table"
}
}
}
}