| { |
| "paper_id": "2005", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:21:55.545825Z" |
| }, |
| "title": "NUT-NTT Statistical Machine Translation System for IWSLT 2005", |
| "authors": [ |
| { |
| "first": "Kazuteru", |
| "middle": [], |
| "last": "Ohashi", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ohashi@nlp.nagaokaut.ac.jp" |
| }, |
| { |
| "first": "Kazuhide", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Kuniko", |
| "middle": [], |
| "last": "Saito", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Masaaki", |
| "middle": [], |
| "last": "Nagata", |
| "suffix": "", |
| "affiliation": {}, |
"email": "masaaki.nagata@labs.ntt.co.jp"
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "In this paper, we present a novel distortion model for phrase-based statistical machine translation. Unlike the previous phrase distortion models whose role is to simply penalize nonmonotonic alignments [1, 2], the new model assigns the probability of relative position between two source language phrases aligned to the two adjacent target language phrases. The phrase translation probabilities and phrase distortion probabilities are calculated from the N-best phrase alignment of the training bilingual sentences. To obtain N-best phrase alignment, we devised a novel phrase alignment algorithm based on word translation probabilities and N-best search. Experiments show that the phrase distortion model and phrase translation model improve the BLEU and NIST scores over the baseline method.",
| "pdf_parse": { |
| "paper_id": "2005", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "In this paper, we present a novel distortion model for phrase-based statistical machine translation. Unlike the previous phrase distortion models whose role is to simply penalize nonmonotonic alignments [1, 2], the new model assigns the probability of relative position between two source language phrases aligned to the two adjacent target language phrases. The phrase translation probabilities and phrase distortion probabilities are calculated from the N-best phrase alignment of the training bilingual sentences. To obtain N-best phrase alignment, we devised a novel phrase alignment algorithm based on word translation probabilities and N-best search. Experiments show that the phrase distortion model and phrase translation model improve the BLEU and NIST scores over the baseline method.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "In recent years, phrase-based translation models have become the mainstream of statistical machine translation, because they can represent context-based word selection and local word reordering better than word-based translation models. Previous phrase-based translation models [1, 2], however, are not effective for global phrase reordering, because their distortion model is too simplistic. As it was designed simply to penalize nonmonotonic phrase alignment, it is difficult to handle translations that require complex word reordering, such as between Japanese and English.",
"cite_spans": [
{
"start": 278,
"end": 281,
"text": "[1,",
"ref_id": "BIBREF0"
},
{
"start": 282,
"end": 284,
"text": "2]",
"ref_id": "BIBREF1"
}
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In this paper, we present a novel distortion model for phrase-based statistical machine translation. It models the probability of relative position between two source language phrases aligned to the two adjacent target language phrases. To obtain the distortion model, we first make a phrase alignment of each sentence pair in the training corpus. We then calculate the phrase distortion probability from the relative frequency of respective events in the phrase aligned training corpus. In order to cope with the sparse data problem, word reordering is classified into four states: monotone, monotone-gap, reverse, and reverse-gap. Phrases are also classified based on the part of speech of the first and last word.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "We need phrase translation probabilities to get phrase alignment from the training corpus, but we need phrase alignment to get phrase translation probabilities. To solve this chicken and egg problem, we devised a novel phrase alignment algorithm using word translation probabilities and forward beam search. Phrase distortion probabilities mentioned above are calculated from the result of this phrase alignment.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "The phrase alignment algorithm can easily be extended to obtain N-best phrase alignment using backward A* search, such as [3] . We found that phrase translation probabilities calculated from the result of this N-best phrase alignment improve the translation accuracy significantly.", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 125, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "In the following sections, we first explain our translation model, including the phrase distortion model and the phrase alignment algorithm. We then report the experimental results and show the effectiveness of our phrase distortion model.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In the noisy channel approach to machine translation, we search for the target (English) sentence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Baseline Translation Model", |
| "sec_num": "2." |
| }, |
| { |
| "text": "that maximizes the probability of the target sentence \u00a3 given the source (foreign) sentence", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a2 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": ". By using Bayes rule, the posterior probability \u00a5 \u00a7 \u00a6 \u00a3 \u00a9 \u00a4 can be decomposed into the product of target sentence probability", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4", |
| "sec_num": null |
| }, |
| { |
| "text": "and source sentence probability given target sentence (1) Translation probability is calculated from the relative frequency of the respective source phrase given the target phrase.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a5 \u00a6 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a5 \u00a7 \u00a6 \u00a4 \u00a9\u00a3 . \u00a2 \u00a3 \" ! $ # % ' & ( \u00a5 \u00a6 \u00a3 \u00a9 \u00a4 ) ' 0 ! $ # % 1 & ( \u00a5 \u00a7 \u00a6 \u00a4 \u00a9\u00a3 \u00a5 \u00a6 \u00a3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a5 \u00a6 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "8 \u00a6 3 \u00a4 \u00a9 3 \u00a3 \u00a3 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9 \u00a6 3 \u00a4 3 \u00a3 \u00a3 \u00a4 \u00a7 \u00a6 \u00a6 3 \u00a4 3 \u00a3", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "\u00a5 \u00a6 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": "where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a5 \u00a6 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": "\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 \u00a9 \u00a6 3 \u00a4 3 \u00a3", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a5 \u00a6 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": "gives the frequency of the source phrase", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a5 \u00a6 \u00a3", |
| "sec_num": null |
| }, |
| { |
| "text": "aligned to the target phrase", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a4", |
| "sec_num": "3" |
| }, |
| { |
| "text": "in the parallel corpus. Note that, due to Bayes rule, the translation direction is inverted from a modeling standpoint.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The distortion model used in [1] is empirically defined as follows, with an appropriate value for parameter . Figure 1 illustrates the idea of relative distortion, using Japanese to English translation as an example. The target English sentence is generated from left to right by translating the source Japanese phrases in arbitrary order. Suppose we are generating target phrase \"help\" by translating the source phrase \"1 3 2 5 4", |
| "cite_spans": [ |
| { |
| "start": 29, |
| "end": 32, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 110, |
| "end": 118, |
| "text": "Figure 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "\u00a3", |
| "sec_num": "3" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "@ \u00a6 \u00c4 7 B D C 7 F 6 \" !# % $ F & $ ( ' 0 ) F 6 !", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "\u00a3", |
| "sec_num": "3" |
| }, |
| { |
| "text": "\". The source phrase translated into the previous target phrase \"disposed to\" is \" 6 8 7 @ 9 B A \". Since the start position of the source phrase for this target phrase A 7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3", |
| "sec_num": "3" |
| }, |
| { |
| "text": "is 4, and the end position of the source phrase for previous target phrase", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3", |
| "sec_num": "3" |
| }, |
| { |
| "text": "C 7 F 6 is 8, the relative distortion is C B E D B C .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "\u00a3", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The purpose of the distortion model in Equation 3 is simply to penalize nonmonotonic phrase alignment. It cannot represent the general tendency of global phrase reordering, in terms of the distance and direction of the movement, as well as their dependency on phrase type. For example, for English to Japanese translation, the verb phrase generally moves toward the end of the sentence. In the next section, we present a novel phrase distortion model that considers these aspects.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
"section": "Baseline Translation Model",
"sec_num": "2."
| }, |
| { |
"text": "We define our phrase distortion model as the probability of the relative distance between two source language phrases that are aligned to two adjacent target language phrases. We then classify each phrase by the part of speech of its head word. We define, somewhat arbitrarily, the first word of each phrase as the head word for English and Chinese, and the last word of each phrase as the head word for Japanese.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Finally, we consider a series of distortion models that have increasingly complex dependencies. represents the classification of each phrase. When we classify each phrase by the part of speech of its head word, we identify the above five distortion models as type 1, 2s, 3s, 4s and 5s, respectively. Figure 2 and Figure 3 show examples of phrase distortion models type 2s and type 3s, respectively, for Japanese to English translation. Here, monotone, monotone-gap, reverse, reverse-gap are represented by 1, 2, -1, -2, respectively. In Figure 3 , the first three elements are", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 300, |
| "end": 308, |
| "text": "Figure 2", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 313, |
| "end": 321, |
| "text": "Figure 3", |
| "ref_id": "FIGREF7" |
| }, |
| { |
| "start": 537, |
| "end": 545, |
| "text": "Figure 3", |
| "ref_id": "FIGREF7" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "@ , \u00a3 \u00a5 G A I H P H\u00a6 3 \u00a4 1 7 , \u00a3 G A I H \u00a7 H\u00a6 3 \u00a3 7F 6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": ", respectively. The fourth and fifth element are the distortion probability and frequency of this event in the training corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "Since we are not sure whether it is appropriate to define the head word of each phrase for each language a priori, we also tried \"dual\" distortion models, where", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "\u00a3 \u00a5 G A Q H \u00a7 H\u00a6 T R", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "of each phrase represented by both the first and the last word of each phrase. We call them type 2d, 3d, 4d, and 5d. An example of 3d is shown in Figure 4 , where ", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 146, |
| "end": 154, |
| "text": "Figure 4", |
| "ref_id": "FIGREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "\u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 WRB WRB|0.34|17 -1 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 PRP PRP|0.75|3 -1 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 \u00a1 - \u00a1 DT NNS|1|2 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 - ! NNP NNP|0.0526315789473684|1 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 - ! NNP TO|0.333333333333333|1 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 -\" # ! . NN|1|1 ...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "... -1 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 |0.456879958687386|9732 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 |0.380326288979142|5525 -1 \u00a1 - \u00a1 |0.0594823032223983|563 -2 $ & % ( ' 0 ) - $ 1 % 2 ' 0 ) |0.578082191780822|422 -2 3 5 4 \u00a1 -3 5 4 \u00a1 |0.159919507575758|1351 -2 - ! |0.00304719568373694|1020 ...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "... -1 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 WDT|0.676470588235294|69 -1 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 WP|0.360189573459716|152 -1 \u00a2 \u00a1 -\u00a3 \u00a5 \u00a4 \u00a7 \u00a6 WRB|0.309219858156028|218 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 ,|0.175824175824176|16 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 .|0.216|27 -1 \u00a2 \u00a1 -\u00a2\u00a1 \u00a9 CC|0.130434782608696|3 ...", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Distortion Model", |
| "sec_num": "3." |
| }, |
| { |
| "text": "The phrase distortion model in the previous section is computed from the Viterbi phrase alignment of the training corpus. In order to obtain this phrase alignment, we search for the segmentation of source and target sentences that maximizes the product of lexical translation probabilities", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u00a5 \u00a6 3 \u00a4 7 \u00a9 3 \u00a3 7 , \u00a6 \u00a2 3 \u00a4 4 6 \u00a2 3 \u00a3 4 6 ' 0 ! $ # % 1 & 7 6 ) 9 8 ( 6 ) 4 G 7H 6 \u00a5 \u00a7 \u00a6 3 \u00a4 1 7 \u00a9 3 \u00a3 7", |
| "eq_num": "(5)" |
| } |
| ], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Here, lexical translation probability [4] is an approximation of phrase translation probability based on the word translation probabilities estimated by using GIZA++ [5] ,", |
| "cite_spans": [ |
| { |
| "start": 38, |
| "end": 41, |
| "text": "[4]", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 166, |
| "end": 169, |
| "text": "[5]", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "\u00a5 \u00a7 \u00a6 3 \u00a4 \u00a9 3 \u00a3 G @ A 7 \u00a5 \u00a6 \u00a4 @ \u00a9\u00a3 7", |
| "eq_num": "(6)" |
| } |
| ], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "where \u00a4 @ and \u00a3 7", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "are words in the phrases. The phrase alignment is obtained by following these steps:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "1. All pairs of one word from the source sentence and one word from the target sentence are considered as the phrase translation candidates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "2. If the lexical translation probability of a phrase translation candidate is less than the threshold, it is deleted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "3. Each phrase translation candidate is expanded toward its neighbors as described in [1] .", |
| "cite_spans": [ |
| { |
| "start": 86, |
| "end": 89, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "4. If the lexical translation probability of the expanded phrase translation candidate is less than the threshold, it is deleted.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "5. This expansion and deletion is repeated until no further expansion is possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
| "text": "6. Search for consistent phrase alignment among all combinations of the above phrase translation candidates.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
"text": "We can obtain the Viterbi phrase alignment by using beam search from the beginning of the sentence to the end. We can also obtain the N-best phrase alignment by using A* search as described in [3] .",
| "cite_spans": [ |
| { |
| "start": 193, |
| "end": 196, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
"text": "Here, we must consider three parameters: the phrase translation candidate threshold, the beam width, and the number of N-best alignments. Preliminary tests have shown that appropriate values are 1e-15 for the phrase candidate threshold, 1000 for the beam width, and 20 for the number of N-best alignments. The N-best phrase alignments are used for computing the phrase translation model, and the Viterbi alignment is used for computing the phrase distortion model. Figure 5 shows an example of the best 3 phrase alignments for a Japanese-English bilingual sentence. Each line represents a phrase translation candidate: the first item is the source phrase, the second and third items are the start and end positions of the phrase in the source sentence, and the fourth and fifth items are the parts of speech of the first and last words in the source phrase. After that, the same information for the target phrase is listed.",
"cite_spans": [],
"ref_spans": [
{
"start": 465,
"end": 473,
"text": "Figure 5",
"ref_id": "FIGREF8"
}
],
| "eq_spans": [], |
| "section": "Phrase Alignment", |
| "sec_num": "4." |
| }, |
| { |
"text": "We participated in the Supplied Data + Tools Track for Japanese-English and Chinese-English translation because we needed a part-of-speech tagger to obtain part-of-speech information for our phrase distortion model. We did not use the word segmentation information for Japanese and Chinese provided in the supplied data because of the constraints of the POS tagger we used.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and Tools", |
| "sec_num": "5." |
| }, |
| { |
| "text": "\u00a1 \u00a2 \u00a3 \u00a5 \u00a4 9 |1|5| \u00a1 -\u00a6 \u00a7 |1 4 \u00a1 -1 5 4 \u00a1 |the light was red|1|4|DT|JJ \u00a9 |6|6| - \u00a2 ! | - \u00a2 ! |.|5|5|.|. 2.71232e-06 - \u00a1 |1|2| \u00a1 -\u00a6 \u00a7 |1 \u00a2 \u00a1 - 1 \u00a1 |the light|1|2|DT|NN \u00a2 \u00a3 \u00a4 9 |3|5| \u00a1 -\u00a6 \u00a7 |1 5 4 \u00a1 -1 5 4 \u00a2 \u00a1 |was red|3|4|VBD|JJ \u00a9 |6|6| - \u00a2 ! | - \u00a2 ! |.|5|5|.|. 2.4524e-06 - \u00a1 |1|2| \u00a1 -\u00a6 \u00a7 |1 \u00a2 \u00a1 - 1 \u00a1 |the light|1|2|DT|NN \u00a3 \u00a5 \u00a4 9 |4|5|1 4 \u00a1 -1 5 4 \u00a1 |1 5 4 \u00a1 -1 5 4 \u00a1 |was|3|3|VBD|VBD \u00a2 |3|3| 0 \u00a1 -\u00a6 \u00a7 | \u00a2 \u00a1 -\u00a6 \u00a7 |red|4|4|JJ|JJ \u00a9 |6|6| - \u00a2 ! | - \u00a2 ! |.|5|5|.|. 2.38498e-06", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and Tools", |
| "sec_num": "5." |
| }, |
| { |
"text": "Word segmentation and POS tagging for Japanese was done by ChaSen [6] . As ChaSen's part of speech has a hierarchy, we used the first two layers. Word segmentation and POS tagging for Chinese was done by our own tool [7] . English is tokenized by a tool provided by LDC (tokenizer.sed) [8] , and POS tagged by MXPOST [9] . Word translation probabilities are obtained by using GIZA++ [5] . The English text is lowercased for training.",
| "cite_spans": [ |
| { |
| "start": 66, |
| "end": 69, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 217, |
| "end": 220, |
| "text": "[7]", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 286, |
| "end": 289, |
| "text": "[8]", |
| "ref_id": null |
| }, |
| { |
| "start": 317, |
| "end": 320, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 383, |
| "end": 386, |
| "text": "[5]", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and Tools", |
| "sec_num": "5." |
| }, |
| { |
| "text": "We used a back-off word trigram model as the language model. It is trained from the lowercased English side of the parallel training corpus using Palmkit [10] .", |
| "cite_spans": [ |
| { |
| "start": 154, |
| "end": 158, |
| "text": "[10]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and Tools", |
| "sec_num": "5." |
| }, |
| { |
"text": "For Japanese-English translation, we used a minimum error rate training tool provided by CMU [11] . The features used were the following: - Phrase translation probability (both directions) [1] - Lexical translation probability (both directions) [4] - Word penalty [12] - Phrase distortion probability",
| "cite_spans": [ |
| { |
| "start": 93, |
| "end": 97, |
| "text": "[11]", |
| "ref_id": null |
| }, |
| { |
| "start": 189, |
| "end": 192, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 245, |
| "end": 248, |
| "text": "[4]", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 264, |
| "end": 268, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and Tools", |
| "sec_num": "5." |
| }, |
| { |
"text": "We did not apply minimum error rate training to Chinese-English translation because, for reasons we could not identify, it yielded no significant improvement.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Corpus and Tools", |
| "sec_num": "5." |
| }, |
| { |
| "text": "First, we compared our phrase extraction method with the conventional method described in [1] . Table 1 shows the NIST and BLEU scores for development set 2 in Japanese-English translation. We found that our phrase extraction method using N-best phrase alignment significantly improved the translation accuracy.", |
| "cite_spans": [ |
| { |
| "start": 90, |
| "end": 93, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 96, |
| "end": 103, |
| "text": "Table 1", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments and Discussions", |
| "sec_num": "6." |
| }, |
| { |
"text": "We then compared our phrase distortion model to the conventional distortion model [1] . Figure 6 shows the BLEU scores of the Japanese-English and Chinese-English translations created with various distortion models. Here, distortion model type 0 represents the conventional model [1] . Table 2 and Table 3 show NIST and BLEU scores for development set 2 of Japanese-English translation with various distortion models, before and after minimum error rate training. We found that, in general, distortion models type 2s and 3s yield a slight improvement in accuracy. In the experiments, the BLEU and NIST scores for distortion models 4d and 5d were generally very low. This is probably caused by data sparseness. The distortion model must consider 8 to 10 parts of speech using only the supplied data. The situation might be different if we had more training data. We could not get phrase alignment for 1095 (5.5%) of the 20000 training sentences. In general, if a parallel training sentence is too long, we cannot get its phrase alignment because of the large search space. As these sentences are not used for training at all, this probably hurt the performance significantly. Some countermeasure is needed, for example, limiting the search space for those long sentences by using the distortion model obtained from relatively short sentences.",
"cite_spans": [
{
"start": 82,
"end": 85,
"text": "[1]",
"ref_id": "BIBREF0"
},
{
"start": 280,
"end": 283,
"text": "[1]",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 88,
"end": 96,
"text": "Figure 6",
"ref_id": null
},
{
"start": 286,
"end": 293,
"text": "Table 2",
"ref_id": "TABREF5"
},
{
"start": 298,
"end": 305,
"text": "Table 3",
"ref_id": "TABREF6"
}
],
| "eq_spans": [], |
| "section": "Experiments and Discussions", |
| "sec_num": "6." |
| }, |
| { |
"text": "In this experiment, the number of N-best phrase alignments for a sentence is fixed. This strategy is not optimal because the number of plausible phrase alignments increases exponentially with sentence length; we should vary the number of alignments according to sentence length. It might also be worth investigating other representations of phrase alignments, such as a word graph.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments and Discussions", |
| "sec_num": "6." |
| }, |
| { |
"text": "In this paper, we present a novel phrase distortion model and a novel phrase alignment method for computing a more useful phrase distortion model. We show, by experiment, that the phrase distortion model described herein offers improved translation accuracy.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "7." |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Statistical phrasebased translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "HLT-NAACL 2003: Main Proceedings", |
| "volume": "", |
| "issue": "", |
| "pages": "127--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Koehn, F. J. Och, and D. Marcu, \"Statistical phrase- based translation,\" in HLT-NAACL 2003: Main Pro- ceedings, M. Hearst and M. Ostendorf, Eds. Edmon- ton, Alberta, Canada: Association for Computational Linguistics, May 27 -June 1 2003, pp. 127-133.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "The alignment template approach to statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Computational Linguistics", |
| "volume": "30", |
| "issue": "4", |
| "pages": "417--449", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och and H. Ney, \"The alignment template ap- proach to statistical machine translation,\" Computa- tional Linguistics, vol. 30, no. 4, pp. 417-449, 2004.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Generation of word graphs in statistical machine translation", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ueffing", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "156--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Ueffing, F. J. Och, and H. Ney, \"Generation of word graphs in statistical machine translation,\" in Proceed- ings of the Conference on Empirical Methods in Nat- ural Language Processing (EMNLP). Philadelphia: Association for Computational Linguistics, July 2002, pp. 156-163.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "The CMU statistical machine translation system", |
| "authors": [ |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Huang", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Tribble", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Venugopal", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [], |
| "last": "Zhao", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Waibel", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "23--27", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S. Vogel, Y. Zhang, F. Huang, A. Tribble, A. Venu- gopal, B. Zhao, and A. Waibel, \"The CMU statistical machine translation system,\" in MT Summit IX, New Orleans, USA, 23-27, September 2003.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "A systematic comparison of various statistical alignment models", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "29", |
| "issue": "", |
| "pages": "19--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och and H. Ney, \"A systematic comparison of various statistical alignment models,\" in Computational Linguistics, vol. 29, no. 1, 2003, pp. 19-51.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Morphological analysis system chasen", |
| "authors": [ |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Matsumoto", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Kitauchi", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Yamashita", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Hirano", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Matsuda", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Takaoka", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Asahara", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Y. Matsumoto, A. Kitauchi, T. Yamashita, Y. Hi- rano, H. Matsuda, K. Takaoka, and M. Asahara, \"Morphological analysis system chasen, ver.2.3.3,\" http://chasen.aist-nara.ac.jp/, 2003.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Multi-language named entity recognition system based on hmm, acl2003, workshop on multilingual and mixed-language named entity recognition", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Saito", |
| "suffix": "" |
| }, |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Nagata", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "41--48", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Saito and M. Nagata, \"Multi-language named en- tity recognition system based on hmm, acl2003, work- shop on multilingual and mixed-language named entity recognition,\" 2003, pp. 41-48.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Mxpost(maximum entropy pos tagger)", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ratnaparkhi", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Ratnaparkhi, \"Mxpost(maximum entropy pos tagger), ver.1.0,\" http://www.cis.upenn.edu/\u02dcadwait/statnlp.html, 1997.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Palmkit", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Ito", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Ito, \"Palmkit,\" http://palmkit.sourceforge.net/, 2002.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Pharaoh: a beam search decoder for phrasebased statistical machine models, user manual and description for version 1.2", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. Koehn, \"Pharaoh: a beam search decoder for phrase- based statistical machine models, user manual and de- scription for version 1.2,\" 2004.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Example of relative distortion whereA the start position of the source phrase that is translated into the -th target phrase, and", |
| "num": null |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "monotone: The two source phrases are adjacent, and are in the same order as the two target phrases.", |
| "num": null |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "monotone-gap: The two source phrases are not adjacent, but are in the same order as the two target phrases.", |
| "num": null |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "reverse: The two source phrases are adjacent, but are in reverse order of the two target phrases.", |
| "num": null |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "reverse-gap: The two source phrases are not adjacent, and are in reverse order as the two target phrases.", |
| "num": null |
| }, |
| "FIGREF5": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Example of phrase distortion model type 3d", |
| "num": null |
| }, |
| "FIGREF6": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Example of phrase distortion model in type 2s", |
| "num": null |
| }, |
| "FIGREF7": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Example of phrase distortion model type 3s", |
| "num": null |
| }, |
| "FIGREF8": { |
| "type_str": "figure", |
| "uris": null, |
| "text": "Example of N-best phrase alignment for Japanese-English bilingual sentence", |
| "num": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td>\u00a5 \u00a7 \u00a6 @ \u00a9 3 \u00a3 7 F 6</td><td>3 \u00a3</td><td>7</td><td>3 \u00a4</td><td>7F</td><td>6</td><td>3 \u00a4</td><td>7</td><td>(4)</td></tr><tr><td colspan=\"9\">where source phrases aligned to 3 \u00a3 7 F 6 and 3 \u00a3 7 are adjacent target phrases, 3 \u00a3 7 F 6 and 3 \u00a3 , and @ 3 \u00a4 7 F 6 and 3 \u00a4 7 are is the relative 7 distance between 3 \u00a4 7F 6 and 3 \u00a4 7</td></tr><tr><td/><td/><td/><td/><td/><td/><td/><td>into four states:</td><td/></tr></table>", |
| "text": ".Since the above distortion model involves too many parameters to estimate, we approximate it in several steps. First, we classify the relative distance @", |
| "html": null |
| }, |
| "TABREF4": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">Translation accuracy for development set 2 of</td></tr><tr><td colspan=\"3\">Japanese-English with different phrase extraction methods</td></tr><tr><td colspan=\"3\">phrase extraction NIST score BLEU score</td></tr><tr><td>conventional</td><td>7.6162</td><td>0.3375</td></tr><tr><td>our method</td><td>8.8159</td><td>0.4471</td></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF5": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">Translation accuracy for development set 2 of</td></tr><tr><td colspan=\"3\">Japanese-English with different distortion models (before</td></tr><tr><td>MER training)</td><td/><td/></tr><tr><td colspan=\"3\">distortion type NIST score BLEU score</td></tr><tr><td>0</td><td>8.7706</td><td>0.4050</td></tr><tr><td>1</td><td>8.9302</td><td>0.4219</td></tr><tr><td>2s</td><td>9.0435</td><td>0.4264</td></tr><tr><td>3s</td><td>8.9000</td><td>0.4179</td></tr><tr><td>4s</td><td>8.9419</td><td>0.4231</td></tr><tr><td>5s</td><td>8.8852</td><td>0.4168</td></tr><tr><td>2d</td><td>8.9904</td><td>0.4231</td></tr><tr><td>3d</td><td>8.9792</td><td>0.4214</td></tr><tr><td>4d</td><td>8.6711</td><td>0.3895</td></tr><tr><td>5d</td><td>8.7216</td><td>0.3959</td></tr></table>", |
| "text": "", |
| "html": null |
| }, |
| "TABREF6": { |
| "num": null, |
| "type_str": "table", |
| "content": "<table><tr><td colspan=\"3\">Translation accuracy for development set 2 of</td></tr><tr><td colspan=\"3\">Japanese-English with different distortion models (after</td></tr><tr><td>MER training)</td><td/><td/></tr><tr><td colspan=\"3\">distortion type NIST score BLEU score</td></tr><tr><td>0</td><td>8.9551</td><td>0.4593</td></tr><tr><td>1</td><td>8.8916</td><td>0.4549</td></tr><tr><td>2s</td><td>8.9454</td><td>0.4581</td></tr><tr><td>3s</td><td>8.9846</td><td>0.4588</td></tr><tr><td>4s</td><td>8.9489</td><td>0.4539</td></tr><tr><td>5s</td><td>8.9995</td><td>0.4586</td></tr><tr><td>2d</td><td>8.8941</td><td>0.4500</td></tr><tr><td>3d</td><td>8.9219</td><td>0.4466</td></tr><tr><td>4d</td><td>8.8263</td><td>0.4181</td></tr><tr><td>5d</td><td>8.8829</td><td>0.4298</td></tr></table>", |
| "text": "", |
| "html": null |
| } |
| } |
| } |
| } |