| { |
| "paper_id": "2004", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T07:16:07.565081Z" |
| }, |
| "title": "Alignment Templates: the RWTH SMT System", |
| "authors": [ |
| { |
| "first": "Oliver", |
| "middle": [], |
| "last": "Bender", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "RWTH Aachen University", |
| "location": { |
| "postCode": "D-52056", |
| "settlement": "Aachen", |
| "country": "Germany" |
| } |
| }, |
| "email": "bender@cs.rwth-aachen.de" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "RWTH Aachen University", |
| "location": { |
| "postCode": "D-52056", |
| "settlement": "Aachen", |
| "country": "Germany" |
| } |
| }, |
| "email": "zens@cs.rwth-aachen.de" |
| }, |
| { |
| "first": "Evgeny", |
| "middle": [], |
| "last": "Matusov", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "RWTH Aachen University", |
| "location": { |
| "postCode": "D-52056", |
| "settlement": "Aachen", |
| "country": "Germany" |
| } |
| }, |
| "email": "matusov@cs.rwth-aachen.de" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "RWTH Aachen University", |
| "location": { |
| "postCode": "D-52056", |
| "settlement": "Aachen", |
| "country": "Germany" |
| } |
| }, |
| "email": "ney@cs.rwth-aachen.de" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
"abstract": "In this paper, we describe the RWTH statistical machine translation (SMT) system which is based on log-linear model combination. All knowledge sources are treated as feature functions which depend on the source language sentence, the target language sentence and possible hidden variables. The main feature of our approach is the alignment templates, which take shallow phrase structures into account: a phrase level alignment between phrases and a word level alignment between single words within the phrases. Thereby, we directly consider word contexts and local reorderings. In order to incorporate additional models (the IBM-1 statistical lexicon model, a word deletion model, and higher order language models), we perform n-best list rescoring. Participating in the International Workshop on Spoken Language Translation (IWSLT 2004), we evaluate our system on the Basic Travel Expression Corpus (BTEC) Chinese-to-English and Japanese-to-English tasks.",
| "pdf_parse": { |
| "paper_id": "2004", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
"text": "In this paper, we describe the RWTH statistical machine translation (SMT) system which is based on log-linear model combination. All knowledge sources are treated as feature functions which depend on the source language sentence, the target language sentence and possible hidden variables. The main feature of our approach is the alignment templates, which take shallow phrase structures into account: a phrase level alignment between phrases and a word level alignment between single words within the phrases. Thereby, we directly consider word contexts and local reorderings. In order to incorporate additional models (the IBM-1 statistical lexicon model, a word deletion model, and higher order language models), we perform n-best list rescoring. Participating in the International Workshop on Spoken Language Translation (IWSLT 2004), we evaluate our system on the Basic Travel Expression Corpus (BTEC) Chinese-to-English and Japanese-to-English tasks.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
"text": "The goal of machine translation is the translation of a text given in some source language into a target language. We are given a source string f_1^J = f_1 ... f_j ... f_J, which is to be translated into a target string e_1^I = e_1 ... e_i ... e_I. Among all possible target strings, we will choose the string with the highest probability: ",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "ê_1^I = argmax_{e_1^I} { Pr(e_1^I) · Pr(f_1^J | e_1^I) }   (1)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "The decomposition into two knowledge sources in Equation 1 is known as the source-channel approach to statistical machine translation [1] . It allows independent modeling of the target language model Pr(e_1^I) and the translation model Pr(f_1^J | e_1^I). The target language model describes the well-formedness of the target language sentence. The translation model links the source language sentence to the target language sentence. It can be further decomposed into alignment and lexicon models. An alternative to the classical source-channel approach is the direct modeling of the posterior probability Pr(e_1^I | f_1^J). Using a log-linear model [2] , we obtain:",
| "cite_spans": [ |
| { |
| "start": 134, |
| "end": 137, |
| "text": "[1]", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 648, |
| "end": 651, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "Pr(e_1^I | f_1^J) = p_{λ_1^M}(e_1^I | f_1^J) = exp( Σ_{m=1}^{M} λ_m h_m(e_1^I, f_1^J) ) / Σ_{e'_1^{I'}} exp( Σ_{m=1}^{M} λ_m h_m(e'_1^{I'}, f_1^J) )",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "The h_m denote the feature functions. As a decision rule, we obtain:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "ê_1^I = argmax_{e_1^I} { Σ_{m=1}^{M} λ_m h_m(e_1^I, f_1^J) }",
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "This approach is a generalization of the source-channel approach. It has the advantage that additional models or feature functions can be easily integrated into the overall system. The overall architecture of the log-linear model combination is summarized in Figure 1. The argmax operation denotes the search problem, i.e. the generation of the output sentence in the target language. We have to maximize over all possible target language sentences.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 259, |
| "end": 267, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
| "text": "In a way similar to [3] , we train the model scaling factors \u03bb M 1 with respect to the final translation quality measured by some error criterion, e.g. the NIST score [4] , the BLEU score [5] or the word error rate (WER) [6] . The remainder of the paper is organized as follows: in section 2, we will outline the RWTH statistical machine translation system which introduces the alignment templates [7, 2] . We will describe the training and search procedure of our approach. For the Japanese-English task, we will show that reordering constraints improve translation quality compared to an unconstrained search. We will describe the additional features we integrate into our system. Section 3 will present experimental details and will show the translation results obtained for the Chinese-to-English and Japanese-to-English evaluation tasks. Finally, section 4 will conclude.", |
| "cite_spans": [ |
| { |
| "start": 20, |
| "end": 23, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 167, |
| "end": 170, |
| "text": "[4]", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 188, |
| "end": 191, |
| "text": "[5]", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 221, |
| "end": 224, |
| "text": "[6]", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 398, |
| "end": 401, |
| "text": "[7,", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 402, |
| "end": 404, |
| "text": "2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1." |
| }, |
| { |
"text": "A general deficiency of single-word based approaches is that contextual information is not taken into account because they are only able to model correspondences between single words. A countermeasure is to consider word phrases rather than single words as the basis for the translation models. In other words, a whole group of adjacent words in the source sentence may be aligned with a whole group of adjacent words in the target language. As a result, the context of words has a greater influence and local reorderings can be learned implicitly.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "The RWTH SMT System", |
| "sec_num": "2." |
| }, |
| { |
"text": "The main feature of our translation model is the alignment templates. An alignment template z is a triple (F̃, Ẽ, Ã) which describes the alignment Ã between a source class sequence F̃ and a target class sequence Ẽ. The classes used in F̃ and Ẽ are automatically trained bilingual classes using the method described in [8] . The use of classes instead of the words themselves has the advantage of better generalization. E.g., if a class \"town\" is used in both source and target language and alignment templates are learned for specific towns, it is possible to generalize these alignment templates to all towns.",
| "cite_spans": [ |
| { |
| "start": 311, |
| "end": 314, |
| "text": "[8]", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word level alignments", |
| "sec_num": "2.1." |
| }, |
| { |
| "text": "? Preprocessing ?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Source Language Text", |
| "sec_num": null |
| }, |
| { |
| "text": "Global Searc\u0125 e I 1 = argmax", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Source Language Text", |
| "sec_num": null |
| }, |
| { |
| "text": "e I 1 \u00a1 M P m=1 \u03bb m h m (e I 1 , f J 1 ) \u00bf \u03bb1 \u2022 h1(e I 1 , f J 1 ) \u03bb 2 \u2022 h 2 (e I 1 , f J 1 ) p p p \u03bb M \u2022 h M (e I 1 , f J 1 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Source Language Text", |
| "sec_num": null |
| }, |
| { |
| "text": "?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Source Language Text", |
| "sec_num": null |
| }, |
| { |
| "text": "Postprocessing ?", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Source Language Text", |
| "sec_num": null |
| }, |
| { |
"text": "An alignment template is applicable to a sequence of source words if the alignment template classes and the classes of the source words are equal, and it constrains the target words to correspond to the target class sequence. For the selection of words from classes, we use a statistical model p(f̃ | z, ẽ) based on the lexicon probabilities of a statistical lexicon p(f|e). Figure 2 shows an example of a word-aligned sentence pair. The word alignment is represented by the black boxes. The figure also includes some of the possible alignment templates, represented as the larger, unfilled rectangles. Note that the extraction algorithm would extract many more alignment templates from this sentence pair. In this example, the system input was the sequence of Chinese characters without any word segmentation.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 397, |
| "end": 405, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Source Language Text", |
| "sec_num": null |
| }, |
| { |
"text": "In order to describe the phrase level alignments in a formal way, we first decompose both the source sentence f_1^J and the target sentence e_1^I into a sequence of phrases (k = 1, ..., K). For the alignment a_1^K between the phrases, we obtain the following equation:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
"text": "Pr(f_1^J | e_1^I) = Σ_{a_1^K} Pr(a_1^K, f_1^J | e_1^I) = Σ_{a_1^K} Pr(a_1^K | e_1^I) · Pr(f_1^J | a_1^K, e_1^I)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "Further, we introduce the alignment templates as hidden variables for the translation of the K phrases:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "Pr(f_1^J | e_1^I) = Σ_{a_1^K, z_1^K} Pr(a_1^K | e_1^I) · Pr(z_1^K | a_1^K, e_1^I) · Pr(f_1^J | z_1^K, a_1^K, e_1^I)",
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "Hence, we obtain three different probability distributions: the phrase alignment probability P r(a K 1 |e I 1 ), the probability to apply an alignment template P r(z K 1 |a K 1 , e I 1 ), and the phrase translation", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 167, |
| "text": "K", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "probability P r(f J 1 |z K 1 , a K 1 , e I 1 )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": ". The phrase translation probability is discussed in section 2.1. For a detailed description of modeling, training and search, see [7] .", |
| "cite_spans": [ |
| { |
| "start": 131, |
| "end": 134, |
| "text": "[7]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Phrase level alignments", |
| "sec_num": "2.2." |
| }, |
| { |
| "text": "To use the three component models of Equation 3 in a log-linear approach, we define three different feature functions taking the logarithm for each component of the translation model instead of one feature function for the whole translation model p(f J 1 |e I 1 ). The feature functions have then not only a dependence on f J", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "1 and e I 1 but also on z K 1 , a K", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 27, |
| "text": "K", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "1 . Yet, we are not limited to train only the alignment model scaling factors, the RWTH SMT system consists of the following base models:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
"text": "• a phrase translation model, • a word penalty model. These features allow a straightforward integration into the dynamic programming search algorithm [7] . In addition, we extract n-best candidate translations using A* search [9] and perform rescoring, for which we make use of the following extended models:",
| "cite_spans": [ |
| { |
| "start": 156, |
| "end": 159, |
| "text": "[7]", |
| "ref_id": "BIBREF6" |
| }, |
| { |
| "start": 232, |
| "end": 235, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "\u2022 the IBM-1 lexicon model as suggested by [10] ,", |
| "cite_spans": [ |
| { |
| "start": 42, |
| "end": 46, |
| "text": "[10]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
"text": "• a deletion model: for each source word, we check whether there exists a target translation with a probability higher than a given threshold. If not, this word is considered a deletion, and the feature simply counts the number of deletions,",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "\u2022 additional language models: applying the SRI Language Modeling Toolkit [11] , we train n-gram language models of increasing order.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 77, |
| "text": "[11]", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "We combine these different features in a log-linear model [2] .", |
| "cite_spans": [ |
| { |
| "start": 58, |
| "end": 61, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Feature functions", |
| "sec_num": "2.3." |
| }, |
| { |
| "text": "As training criterion, we use the maximum class posterior probability criterion:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimization of model scaling factors", |
| "sec_num": "2.4." |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
"raw_str": "λ̂_1^M = argmax_{λ_1^M} { Σ_{s=1}^{S} log p_{λ_1^M}(e_s | f_s) }",
| "eq_num": "(4)" |
| } |
| ], |
| "section": "Optimization of model scaling factors", |
| "sec_num": "2.4." |
| }, |
| { |
"text": "on a parallel training corpus of sentence pairs (f_s, e_s), s = 1, ..., S. This criterion allows for only one reference translation, but for our tasks there exist multiple reference translations. Hence, we change the criterion to allow R_s reference translations e_{s,1}, ..., e_{s,R_s} for the sentence f_s:",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimization of model scaling factors", |
| "sec_num": "2.4." |
| }, |
| { |
"text": "λ̂_1^M = argmax_{λ_1^M} { Σ_{s=1}^{S} (1/R_s) Σ_{r=1}^{R_s} log p_{λ_1^M}(e_{s,r} | f_s) }",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimization of model scaling factors", |
| "sec_num": "2.4." |
| }, |
| { |
"text": "We use this optimization criterion instead of the one shown in Equation 4. The model scaling factors are optimized on the development corpus with respect to the NIST score, in a way similar to [3] . We use the downhill simplex algorithm from [12] . We do not perform the optimization on n-best lists; instead, we retranslate the whole development corpus for each iteration of the optimization algorithm. In the experiments, the downhill simplex algorithm converged after about 200 iterations. Unlike the method described in [3] , this method has the advantage that it is not limited to the model scaling factors.",
| "cite_spans": [ |
| { |
| "start": 211, |
| "end": 214, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 260, |
| "end": 264, |
| "text": "[12]", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 614, |
| "end": 617, |
| "text": "[3]", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimization of model scaling factors", |
| "sec_num": "2.4." |
| }, |
| { |
| "text": "The base models described in section 2.3 are integrated into the used dynamic programming search algorithm [7] . Instead of Equation 1, we use the following search criterion:", |
| "cite_spans": [ |
| { |
| "start": 107, |
| "end": 110, |
| "text": "[7]", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.5." |
| }, |
| { |
"text": "ê_1^I = argmax_{e_1^I} { p(e_1^I) · p(e_1^I | f_1^J) }   (5)",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.5." |
| }, |
| { |
"text": "This simplifies the search process, as shown in [2] . As experiments have shown, this approximation does not affect the quality of the translation results. The memory requirements of the alignment templates approach are quite large. To reduce them for offline experiments, we apply a special method that works as follows. For each observed source word group (typically two to twelve words long) in the test data, we check whether the same word group has occurred in the training data. If so, we calculate an alignment template model for this specific word group. In other words, we compute alignment template models only for those word groups that occur in the test data.",
| "cite_spans": [ |
| { |
| "start": 47, |
| "end": 50, |
| "text": "[2]", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.5." |
| }, |
| { |
"text": "Subsequently, the actual translation process begins. The search is organized along the positions of the target language string. During the search, we produce partial hypotheses, each of which is extended by appending one target word. The set of all partial hypotheses can be structured as a graph, with a source node representing the sentence start, leaf nodes representing full translations, and intermediate nodes representing partial hypotheses. We recombine partial hypotheses which need not be distinguished by either the language model or the translation model. We also use beam search in order to handle the huge search space.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.5." |
| }, |
| { |
| "text": "Furthermore, we compute n-best lists [9] and rescore the candidate translations with the additional models described in section 2.3.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 40, |
| "text": "[9]", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Search", |
| "sec_num": "2.5." |
| }, |
| { |
"text": "Within the alignment templates, the reordering is learned in training and kept fixed during the search process. There are no constraints on the reorderings within the alignment templates.",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Reordering constraints", |
| "sec_num": "2.6." |
| }, |
| { |
"text": "Although unconstrained reordering looks perfect from a theoretical point of view, we found in [13] that constrained reordering shows better performance, at least for the Japanese-to-English task. We used constraints based on inversion transduction grammars (ITG) [14, 15] . Here, we interpret the input sentence as a sequence of blocks. In the beginning, each alignment template is a block of its own. The reordering process can then be interpreted as follows: we select two consecutive blocks and merge them into a single block by choosing between two options: either keep the target phrases in monotone order or invert the order. This idea is illustrated in Figure 3 . The dark boxes represent the two blocks to be merged. Once two blocks are merged, they are treated as a single block and can only be merged further as a whole; it is not allowed to merge one of the sub-blocks again.",
| "cite_spans": [ |
| { |
| "start": 94, |
| "end": 98, |
| "text": "[13]", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 262, |
| "end": 266, |
| "text": "[14,", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 267, |
| "end": 270, |
| "text": "15]", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 658, |
| "end": 666, |
| "text": "Figure 3", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Reordering constraints", |
| "sec_num": "2.6." |
| }, |
| { |
"text": "Experiments were carried out on the Basic Travel Expression Corpus (BTEC) task [16] . This is a multilingual speech corpus which contains tourism-related sentences similar to those that are usually found in phrase books for tourists going abroad. In particular, the participants of the International Workshop on Spoken Language Translation (IWSLT 2004) were asked to test their systems on the Chinese-to-English and the Japanese-to-English task. For both translation directions, different tracks were specified depending on the amount of training data that participants were allowed to use. We took part in the following tracks: • Supplied Data Track: The training data of the MT systems was limited to the supplied corpora only. Here, we evaluated our system for both language pairs.",
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 83, |
| "text": "[16]", |
| "ref_id": "BIBREF15" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Results", |
| "sec_num": "3." |
| }, |
| { |
| "text": "\u2022 Unrestricted Data Track: There were no limitations on the linguistic resources used to train the MT systems. We only worked on the Japanese-to-English translation direction.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Results", |
| "sec_num": "3." |
| }, |
| { |
"text": "The corpus statistics for these tracks are shown in Tables 1 to 3.",
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 52, |
| "end": 59, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Translation Results", |
| "sec_num": "3." |
| }, |
| { |
"text": "For both language pairs, 20 000 sentences randomly selected from the full BTEC corpus were supplied for training purposes, plus the CSTAR 2003 test set (506 sentence pairs) as a development corpus and the official 500-sentence IWSLT 2004 test set. As additional training resources for the unrestricted data track, we included the full BTEC Japanese-to-English corpus and the Spoken Language DataBase (SLDB) [17] , which consists of transcriptions of spoken dialogs in the domain of hotel reservations.",
| "cite_spans": [ |
| { |
| "start": 420, |
| "end": 424, |
| "text": "[17]", |
| "ref_id": "BIBREF16" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Translation Results", |
| "sec_num": "3." |
| }, |
| { |
| "text": "So far, no generally accepted, automatic criterion exists in machine translation for the evaluation of the experimental results. Therefore, the evaluation of the translation quality was twofold:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation specifications", |
| "sec_num": "3.1." |
| }, |
| { |
"text": "1. Subjective Evaluation as specified by the IWSLT 2004 consortium: \u2022 Human assessments of translation quality with respect to the \"fluency\" and \"adequacy\" of the translation results. \u2022 \"Fluency\" indicates how the evaluation segment sounds to a native speaker of English. The evaluator graded the level of English used in the translation from 1 (\"Incomprehensible\") to 5 (\"Flawless English\"). \u2022 The \"adequacy\" assessment was carried out after the fluency judgement. The evaluator was presented with the \"gold standard\" translation and had to judge how much of the information from the original translation was expressed in the translation, selecting one of the grades from 1 (\"None of it\") to 5 (\"All of the information\").",
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation specifications", |
| "sec_num": "3.1." |
| }, |
| { |
| "text": "In all experiments, the following error criteria were used:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "\u2022 WER (word error rate):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The WER is computed as the minimum number of substitution, insertion and deletion operations that have to be performed to convert the generated sentence into the reference sentence. \u2022 PER (position-independent word error rate):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "A shortcoming of the WER is that it requires a perfect word order. The word order of an acceptable sentence can be different from that of the target sentence, so that the WER measure alone could be misleading. The PER compares the words in the two sentences ignoring the word order. \u2022 BLEU score:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
"text": "This score measures the precision of unigrams, bigrams, trigrams, and fourgrams with respect to a whole set of reference translations, with a penalty for too short sentences [5]. \u2022 NIST score: This score is similar to BLEU. It is a weighted n-gram precision in combination with a penalty for too short sentences [4] . NIST measures accuracy, i.e. large NIST scores are better. \u2022 GTM score:",
| "cite_spans": [ |
| { |
| "start": 170, |
| "end": 173, |
| "text": "[4]", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "The General Text Matcher (GTM) [19] is a tool which measures the similarity between texts in terms of precision and recall. GTM measures accuracy, i.e. large GTM scores are better.", |
| "cite_spans": [ |
| { |
| "start": 31, |
| "end": 35, |
| "text": "[19]", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "For the BTEC tasks, we had multiple references available. Therefore, we computed all the preceding criteria with respect to multiple references. To indicate this, we will precede the acronyms with an m (multiple) if multiple references are used.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Automatic Evaluation:", |
| "sec_num": "2." |
| }, |
| { |
| "text": "We start with the official IWSLT 2004 evaluation results for our system. Multiple system submissions for each data track were permitted, but each participant had to mark a primary system and that was going to be evaluated by humans. The results are summarized in Table 4 . We see, that the subjective scores are very similar for both language pairs although the performance according to the automatic error criteria is better for the Japanese-to-English task. If we train our system on the full BTEC corpus extended by the SLDB corpus, we observe that the overall quality increases significantly and is rather high on this task. In practice, we found that the subjective accuracy measures seem to be mostly correlated with the NIST score. Hence, we optimized the model scaling factors according to the translation quality with respect to the NIST score. We also did some experiments in which we optimized the model scaling factors with respect to other error criteria, but we found out that the best overall performance is achieved by optimizing our system with respect to the NIST score. E.g., if we optimize our system on the Japanese-to-English small data track for the BLEU score, we are able to increase this score on the development set from 45.3 % to 46.7 %. Further, the mWER decreases from 41.9 % to 40.9 %, but the other error criteria deteriorate (mPER from 33.8 % to 34.4 %, NIST from 9.49 to 9.06, and GTM from 76.4 % to 74.7 %).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 263, |
| "end": 270, |
| "text": "Table 4", |
| "ref_id": "TABREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation results", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "To investigate the effect of n-best list rescoring, we compared the performance of our system based on single-best translations with the performance on n-best lists which were successively enhanced by the models described in section 2.3. The results for the two small data tracks on the corresponding development sets are shown in Table 5 and 6. Again, all the systems have been optimized with respect to the NIST score, which serves as primary score. We see that the performance of the single-best system and that of the initial n-best list can differ due to different parameter settings for the beam search algorithm. Furthermore, we achieve a gain in performance with every model we add to the n-best list, not only in the NIST score but also in the other error criteria.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 331, |
| "end": 338, |
| "text": "Table 5", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation results", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "\u2022 The IBM-1 lexicon is probably helpful because it captures lexical co-occurrences due to its bag-of-words characteristic [10] .", |
| "cite_spans": [ |
| { |
| "start": 122, |
| "end": 126, |
| "text": "[10]", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation results", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "\u2022 The deletion model protects the system from producing too short sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation results", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "\u2022 The additional language model enriches the system with knowledge about larger phrases.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation results", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "Finally, to demonstrate the benefit of the ITG reordering constraints for the Japanese-to-English task we distinguish the performance of the unconstrained single-best system from the ITG constrained one in Table 6 . Obviously, the unconstrained reorderings are significantly inferior to the ITG reorderings. This is not true for the Chinese-to-English task. Here, no performance gain has been achieved by constraining the reorderings.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 206, |
| "end": 213, |
| "text": "Table 6", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation results", |
| "sec_num": "3.2." |
| }, |
| { |
| "text": "We have presented the RWTH statistical machine translation system which is based on log-linear model combination. The main advantage comes from the large number of knowledge sources which can easily be integrated into our system in terms of feature functions. Using the alignment templates as main model, we incorporate shallow phrase structures: a phrase level alignment between phrases and a word level alignment between single words within the phrases. In this way, our system is able to learn word contexts and local reorderings. Due to the fact that the alignment templates do not provide constrained reorderings and that unconstrained reordering may adversely affect the translation quality, we extended our system to cover reordering constraints. For the Japanese-to-English task the ITG constraints showed the best performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4." |
| }, |
| { |
| "text": "We included the IBM-1 lexicon, a deletion model and higher order language models as additional feature functions and applied n-best list rescoring because a straightforward integration into the dynamic programming search algorithm is not always possible.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4." |
| }, |
| { |
| "text": "The optimization of the model scaling factors was performed with respect to the translation quality measured by the NIST score, as this score was found out to correspond best to subjective evaluation criteria.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4." |
| }, |
| { |
| "text": "Participating in the International Workshop on Spoken Language Translation (IWSLT 2004), we evaluated our system on the Basic Travel Expression Corpus (BTEC) Chinese-to-English and Japanese-to-English tasks. On both tasks, our system produces translations of good quality. This is true especially for the unrestricted data track, for which we extended the training resources by additional corpora and obtained a rather high overall performance.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusions", |
| "sec_num": "4." |
| }, |
| { |
| "text": "All corpora (BTEC, SLDB, and the CSTAR test sets) were kindly provided by ATR Spoken Language Translation Research Laboratories Kyoto, Japan.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "This work has been partially funded by the European Commission under the projects PF-Star, IST-2001-37599, and LC-Star, IST-2001-32216.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": "5." |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "A statistical approach to machine translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [], |
| "last": "Cocke", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "S" |
| ], |
| "last": "Roossin", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "16", |
| "issue": "2", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P. F. Brown, J. Cocke, S. A. Della Pietra, V. J. Della Pietra, F. Jelinek, J. D. Lafferty, R. L. Mercer, and P. S. Roossin, \"A statistical approach to machine translation,\" Computational Linguistics, vol. 16, no. 2, pp. 79-85, June 1990.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Discriminative training and maximum entropy models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "295--302", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och and H. Ney, \"Discriminative training and maximum entropy models for statistical machine translation,\" in Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL), Philadelphia, PA, July 2002, pp. 295-302.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Minimum error rate training in statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proc. of the 41th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "160--167", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och, \"Minimum error rate training in statistical machine translation,\" in Proc. of the 41th Annual Meeting of the Asso- ciation for Computational Linguistics (ACL), Sapporo, Japan, July 2003, pp. 160-167.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", |
| "authors": [ |
| { |
| "first": "G", |
| "middle": [], |
| "last": "Doddington", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. ARPA Workshop on Human Language Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "G. Doddington, \"Automatic evaluation of machine translation quality using n-gram co-occurrence statistics,\" in Proc. ARPA Workshop on Human Language Technology, 2002.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "W.-J", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of the 40th Annual Meeting of the Association for Computational Linguistics (ACL)", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, \"Bleu: a method for automatic evaluation of machine translation,\" in Proc. of the 40th Annual Meeting of the Association for Com- putational Linguistics (ACL), Philadelphia, PA, July 2002, pp. 311-318.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Accelerated DP based search for statistical translation", |
| "authors": [ |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Zubiaga", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Sawaf", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "European Conf. on Speech Communication and Technology", |
| "volume": "", |
| "issue": "", |
| "pages": "2667--2670", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "C. Tillmann, S. Vogel, H. Ney, A. Zubiaga, and H. Sawaf, \"Accelerated DP based search for statistical translation.\" in European Conf. on Speech Communication and Technology, Rhodes, Greece, September 1997, pp. 2667-2670.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Improved alignment models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "C", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Language Processing and Very Large Corpora", |
| "volume": "", |
| "issue": "", |
| "pages": "20--28", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och, C. Tillmann, and H. Ney, \"Improved alignment models for statistical machine translation,\" in Proc. of the Joint SIGDAT Conf. on Empirical Methods in Natural Lan- guage Processing and Very Large Corpora, University of Maryland, College Park, MD, June 1999, pp. 20-28.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "An efficient method for determining bilingual word classes", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| } |
| ], |
| "year": 1999, |
| "venue": "EACL '99: Ninth Conf. of the Europ. Chapter of the Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "71--76", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och, \"An efficient method for determining bilingual word classes,\" in EACL '99: Ninth Conf. of the Europ. Chapter of the Association for Computational Linguistics, Bergen, Nor- way, June 1999, pp. 71-76.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Generation of word graphs in statistical machine translation", |
| "authors": [ |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Ueffing", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. Conf. on Empirical Methods for Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "156--163", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "N. Ueffing, F. J. Och, and H. Ney, \"Generation of word graphs in statistical machine translation,\" in Proc. Conf. on Empirical Methods for Natural Language Processing, Philadelphia, PA, July 2002, pp. 156-163.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "A smorgasbord of features for statistical machine translation", |
| "authors": [ |
| { |
| "first": "F", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Gildea", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Sarkar", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Yamada", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Fraser", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Kumar", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| }, |
| { |
| "first": "K", |
| "middle": [], |
| "last": "Eng", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Jain", |
| "suffix": "" |
| }, |
| { |
| "first": "Z", |
| "middle": [], |
| "last": "Jin", |
| "suffix": "" |
| }, |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Radev", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics: HLT-NAACL", |
| "volume": "", |
| "issue": "", |
| "pages": "161--168", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "F. J. Och, D. Gildea, S. Khudanpur, A. Sarkar, K. Yamada, A. Fraser, S. Kumar, L. Shen, D. Smith, K. Eng, V. Jain, Z. Jin, and D. Radev, \"A smorgasbord of features for statis- tical machine translation,\" in Proceedings of the Human Lan- guage Technology Conference of the North American Chap- ter of the Association for Computational Linguistics: HLT- NAACL 2004, Boston, MA, May 2004, pp. 161-168.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "SRILM -an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. Intl. Conf. Spoken Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "901--904", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "A. Stolcke, \"SRILM -an extensible language modeling toolkit,\" in Proc. Intl. Conf. Spoken Language Processing, Denver, CO, September 2002, pp. 901-904.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Flannery, Numerical Recipes in C++", |
| "authors": [ |
| { |
| "first": "W", |
| "middle": [ |
| "H" |
| ], |
| "last": "Press", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Teukolsky", |
| "suffix": "" |
| }, |
| { |
| "first": "W", |
| "middle": [ |
| "T" |
| ], |
| "last": "Vetterling", |
| "suffix": "" |
| }, |
| { |
| "first": "B", |
| "middle": [ |
| "P" |
| ], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flan- nery, Numerical Recipes in C++. Cambridge, UK: Cam- bridge University Press, 2002.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Reordering constraints for phrase-based statistical machine translation", |
| "authors": [ |
| { |
| "first": "R", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Watanabe", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| } |
| ], |
| "year": 2004, |
| "venue": "COLING '04: The 20th Int. Conf. on Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "205--211", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "R. Zens, H. Ney, T. Watanabe, and E. Sumita, \"Reordering constraints for phrase-based statistical machine translation,\" in COLING '04: The 20th Int. Conf. on Computational Lin- guistics, Geneva, Switzerland, August 2004, pp. 205-211.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Stochastic inversion transduction grammars, with application to segmentation, bracketing, and alignment of parallel corpora", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1995, |
| "venue": "Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI)", |
| "volume": "", |
| "issue": "", |
| "pages": "1328--1334", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Wu, \"Stochastic inversion transduction grammars, with ap- plication to segmentation, bracketing, and alignment of par- allel corpora,\" in Proc. of the 14th International Joint Conf. on Artificial Intelligence (IJCAI), Montreal, August 1995, pp. 1328-1334.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Stochastic inversion transduction grammars and bilingual parsing of parallel corpora", |
| "authors": [ |
| { |
| "first": "D", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Computational Linguistics", |
| "volume": "23", |
| "issue": "3", |
| "pages": "377--403", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "D. Wu, \"Stochastic inversion transduction grammars and bilingual parsing of parallel corpora,\" Computational Linguis- tics, vol. 23, no. 3, pp. 377-403, September 1997.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Takezawa", |
| "suffix": "" |
| }, |
| { |
| "first": "E", |
| "middle": [], |
| "last": "Sumita", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Sugaya", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Yamamoto", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proc. of the Third Int. Conf. on Language Resources and Evaluation (LREC)", |
| "volume": "", |
| "issue": "", |
| "pages": "147--152", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Takezawa, E. Sumita, F. Sugaya, H. Yamamoto, and S. Yamamoto, \"Toward a broad-coverage bilingual corpus for speech translation of travel conversations in the real world,\" in Proc. of the Third Int. Conf. on Language Resources and Evaluation (LREC), Las Palmas, Spain, May 2002, pp. 147- 152.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "A speech and language database for speech translation research", |
| "authors": [ |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Morimoto", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Uratani", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Takezawa", |
| "suffix": "" |
| }, |
| { |
| "first": "O", |
| "middle": [], |
| "last": "Furuse", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Sobashima", |
| "suffix": "" |
| }, |
| { |
| "first": "H", |
| "middle": [], |
| "last": "Iida", |
| "suffix": "" |
| }, |
| { |
| "first": "A", |
| "middle": [], |
| "last": "Nakamura", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Sagisaka", |
| "suffix": "" |
| }, |
| { |
| "first": "N", |
| "middle": [], |
| "last": "Higuchi", |
| "suffix": "" |
| }, |
| { |
| "first": "Y", |
| "middle": [], |
| "last": "Yamazaki", |
| "suffix": "" |
| } |
| ], |
| "year": 1994, |
| "venue": "Proc. of the 3rd Int. Conf. on Spoken Language Processing (ICSLP'94)", |
| "volume": "", |
| "issue": "", |
| "pages": "1791--1794", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "T. Morimoto, N. Uratani, T. Takezawa, O. Furuse, Y. Sobashima, H. Iida, A. Nakamura, Y. Sagisaka, N. Higuchi, and Y. Yamazaki, \"A speech and language database for speech translation research,\" in Proc. of the 3rd Int. Conf. on Spo- ken Language Processing (ICSLP'94), Yokohama, Japan, September 1994, pp. 1791-1794.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Bleu: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "K", |
| "middle": [ |
| "A" |
| ], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "T", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "W.-J", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2001, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "K. A. Papineni, S. Roukos, T. Ward, and W.-J. Zhu, \"Bleu: a method for automatic evaluation of machine translation,\" IBM Research Division, Thomas J. Watson Research Center, Tech. Rep. RC22176 (W0109-022), September 2001.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Evaluation of machine translation and its evaluation", |
| "authors": [ |
| { |
| "first": "J", |
| "middle": [ |
| "P" |
| ], |
| "last": "Turian", |
| "suffix": "" |
| }, |
| { |
| "first": "L", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "I", |
| "middle": [ |
| "D" |
| ], |
| "last": "Melamed", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "J. P. Turian, L. Shen, and I. D. Melamed, \"Evaluation of ma- chine translation and its evaluation,\" Computer Science De- partment, New York University, Tech. Rep. Proteus technical report 03-005, 2003.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "uris": null, |
| "num": null, |
| "text": "Architecture of the translation approach based on log-linear model combination.", |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "uris": null, |
| "num": null, |
| "text": "Example of a word aligned sentence pair and some possible alignment templates.", |
| "type_str": "figure" |
| }, |
| "FIGREF2": { |
| "uris": null, |
| "num": null, |
| "text": "a phrase alignment model, \u2022 a word translation model, \u2022 a word-based trigram language model, \u2022 a class-based five-gram language model, and", |
| "type_str": "figure" |
| }, |
| "FIGREF3": { |
| "uris": null, |
| "num": null, |
| "text": "Illustration of monotone and inverted concatenation of two consecutive blocks.", |
| "type_str": "figure" |
| }, |
| "TABREF1": { |
| "text": "Statistics of the BTEC corpus for the Chinese-to-English", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>Small Data Track</td><td/><td/></tr><tr><td/><td/><td colspan=\"2\">Chinese English</td></tr><tr><td colspan=\"2\">train sentences</td><td>20 000</td></tr><tr><td/><td>words</td><td colspan=\"2\">182 904 160 523</td></tr><tr><td/><td>singletons</td><td>3 525</td><td>2948</td></tr><tr><td/><td>vocabulary</td><td>7 643</td><td>6 982</td></tr><tr><td>dev</td><td>sentences</td><td>506</td></tr><tr><td/><td>words</td><td>3 515</td><td>3 595</td></tr><tr><td>test</td><td>sentences</td><td>500</td></tr><tr><td/><td>words</td><td>3 794</td><td>-</td></tr><tr><td colspan=\"2\">\u2022 Small Data Track:</td><td/></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF2": { |
| "text": "Statistics of the BTEC corpus for the Japanese-to-English Small Data Track", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td/><td/><td colspan=\"2\">Japanese English</td></tr><tr><td colspan=\"2\">train sentences</td><td>20 000</td></tr><tr><td/><td>words</td><td colspan=\"2\">209 012 160 427</td></tr><tr><td/><td>singletons</td><td>4 108</td><td>2 956</td></tr><tr><td/><td>vocabulary</td><td>9 277</td><td>6 932</td></tr><tr><td>dev</td><td>sentences</td><td>506</td></tr><tr><td/><td>words</td><td>4 374</td><td>3 595</td></tr><tr><td>test</td><td>sentences</td><td>500</td></tr><tr><td/><td>words</td><td>4 370</td><td>-</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "Statistics of the BTEC corpus for the Japanese-to-English", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td colspan=\"2\">Unrestricted Data Track</td><td/></tr><tr><td/><td/><td>Japanese</td><td>English</td></tr><tr><td colspan=\"2\">train sentences</td><td colspan=\"2\">240 672</td></tr><tr><td/><td>words</td><td colspan=\"2\">1 974 407 1 770 190</td></tr><tr><td/><td>singletons</td><td>8 975</td><td>3 658</td></tr><tr><td/><td>vocabulary</td><td>26 037</td><td>14 301</td></tr><tr><td>dev</td><td>sentences</td><td>506</td></tr><tr><td/><td>words</td><td>3 515</td><td>3 595</td></tr><tr><td>test</td><td>sentences</td><td>500</td></tr><tr><td/><td>words</td><td>3 794</td><td>-</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF4": { |
| "text": "Translation performance of the official run submissions for the BTEC task (500 sentences).", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>Language Pair</td><td>Data Track</td><td colspan=\"4\">Automatic Evaluation</td><td/><td colspan=\"2\">Subjective Evaluation</td></tr><tr><td/><td/><td colspan=\"5\">mWER mPER BLEU NIST GTM</td><td colspan=\"2\">Fluency Adequacy</td></tr><tr><td/><td/><td>[%]</td><td>[%]</td><td>[%]</td><td/><td>[%]</td><td/><td/></tr><tr><td>Chinese-to-English</td><td>Small</td><td>45.6</td><td>39.0</td><td>40.9</td><td>8.55</td><td>72.1</td><td>3.36</td><td>3.34</td></tr><tr><td colspan=\"2\">Japanese-to-English Small</td><td>41.9</td><td>33.8</td><td>45.3</td><td>9.49</td><td>76.4</td><td>3.48</td><td>3.41</td></tr><tr><td/><td>Unrestricted</td><td>30.6</td><td>24.9</td><td colspan=\"2\">61.9 10.72</td><td>79.7</td><td>4.04</td><td>4.07</td></tr><tr><td colspan=\"3\">grams, trigrams and fourgrams with respect to a refer-</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">ence translation with a penalty for too short sentences</td><td/><td/><td/><td/><td/><td/></tr><tr><td colspan=\"3\">[18]. BLEU measures accuracy, i.e. large BLEU</td><td/><td/><td/><td/><td/><td/></tr><tr><td>scores are better.</td><td/><td/><td/><td/><td/><td/><td/><td/></tr><tr><td>\u2022 NIST score:</td><td/><td/><td/><td/><td/><td/><td/><td/></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "text": "Translation performance on the Chinese-to-English CSTAR 2003 test set (506 sentences).", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>System</td><td/><td colspan=\"2\">Error Criteria</td><td/></tr><tr><td/><td colspan=\"4\">mWER mPER BLEU NIST</td></tr><tr><td/><td>[%]</td><td>[%]</td><td>[%]</td><td/></tr><tr><td>single-best</td><td>55.2</td><td>45.6</td><td>34.8</td><td>7.76</td></tr><tr><td>n-best list</td><td>53.4</td><td>45.3</td><td>33.6</td><td>7.63</td></tr><tr><td>+ IBM-1 lexicon</td><td>50.9</td><td>42.1</td><td>36.4</td><td>8.06</td></tr><tr><td>+ deletion model</td><td>50.6</td><td>42.2</td><td>37.1</td><td>8.07</td></tr><tr><td>+ 9-gram LM</td><td>50.6</td><td>42.2</td><td>38.0</td><td>8.14</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF6": { |
| "text": "Translation performance on the Japanese-to-English CSTAR 2003 test set (506 sentences).", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>System</td><td/><td colspan=\"2\">Error Criteria</td><td/></tr><tr><td/><td colspan=\"4\">mWER mPER BLEU NIST</td></tr><tr><td/><td>[%]</td><td>[%]</td><td>[%]</td><td/></tr><tr><td>single-best</td><td>48.7</td><td>38.6</td><td>44.3</td><td>9.10</td></tr><tr><td>+ ITG constraints</td><td>45.1</td><td>36.0</td><td>47.3</td><td>9.32</td></tr><tr><td>n-best list</td><td>49.5</td><td>37.3</td><td>45.0</td><td>9.32</td></tr><tr><td>+ IBM-1 lexicon</td><td>44.6</td><td>35.7</td><td>48.9</td><td>9.71</td></tr><tr><td>+ deletion model</td><td>43.2</td><td>34.7</td><td>50.1</td><td>9.80</td></tr><tr><td>+ 5-gram LM</td><td>42.6</td><td>34.2</td><td>51.5</td><td>9.92</td></tr></table>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |