| { |
| "paper_id": "Y10-1043", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T13:40:56.662403Z" |
| }, |
| "title": "Simpler Is Better: Re-evaluation of Default Word Alignment Models in Statistical MT", |
| "authors": [ |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Fishel", |
| "suffix": "", |
| "affiliation": { |
| "laboratory": "", |
| "institution": "University of Tartu", |
| "location": { |
| "addrLine": "Liivi 2 -307", |
| "postCode": "50606", |
| "settlement": "Tartu", |
| "country": "Estonia" |
| } |
| }, |
| "email": "fishel@ut.ee" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Although several recent studies have shown that alignment quality is a poor indicator of the resulting translation quality, the word alignment models currently considered to be default (the so-called IBM models and HMM-based alignment) have been evaluated using the alignment error rate. We argue that from a machine translation perspective it makes sense to use simpler alignment models. Here we show that not only do the sequential models result in the same or better translation quality, but even from the set of sequential alignment models simpler ones can match the performance of the HMM-based model, whereas using computationally less expensive and faster algorithms to train and align new sentence pairs. Empirical evaluation is performed on a phrase-based and a parsing-based translation system.", |
| "pdf_parse": { |
| "paper_id": "Y10-1043", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Although several recent studies have shown that alignment quality is a poor indicator of the resulting translation quality, the word alignment models currently considered to be default (the so-called IBM models and HMM-based alignment) have been evaluated using the alignment error rate. We argue that from a machine translation perspective it makes sense to use simpler alignment models. Here we show that not only do the sequential models result in the same or better translation quality, but even from the set of sequential alignment models simpler ones can match the performance of the HMM-based model, whereas using computationally less expensive and faster algorithms to train and align new sentence pairs. Empirical evaluation is performed on a phrase-based and a parsing-based translation system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "A majority of state-of-the-art statistical machine translation (SMT) systems operate with multiword units but still use word alignment as an intermediate step for learning the translation models. Such is the case for two wide-spread machine translation frameworks: phrase-based SMT of (Koehn et al., 2003) and hierarchical phrase-based SMT of (Chiang, 2005) . In phrase-based systems word alignment is used to construct phrase tables and in hierarchical phrase-based systemsto extract the synchronous grammar rules.", |
| "cite_spans": [ |
| { |
| "start": 285, |
| "end": 305, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 343, |
| "end": 357, |
| "text": "(Chiang, 2005)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The word alignment models that are currently considered as default are the so-called IBM models 1 to 5 (Brown et al., 1993) and the HMM-based alignment model (Vogel et al., 1996) . The main work evaluating them is (Och and Ney, 2003) where they are compared in the context of word alignment only (i.e. based on the alignment error rate). Specifically, the default setup of the well known implementation of the models, GIZA++, is derived from (Och and Ney, 2003) and involves training the following models in sequence: IBM model 1, HMM-based model, IBM model 3 and IBM model 4.", |
| "cite_spans": [ |
| { |
| "start": 103, |
| "end": 123, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 158, |
| "end": 178, |
| "text": "(Vogel et al., 1996)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 214, |
| "end": 233, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 442, |
| "end": 461, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Although Och and Ney (2003) mention briefly that \"improved alignment quality yields an improved subjective quality of the statistical machine translation system as well\", a number of recent studies suggest otherwise -namely, that the correlation between the alignment quality and the quality of the resulting translation is rather weak (see the following section on related work). This suggests that the best default word alignment models are not necessarily optimal in terms of the resulting translation quality.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 27, |
| "text": "Och and Ney (2003)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Some recent works (e.g. (Liang et al., 2006) or (DeNero and Klein, 2007) ) are already based on the sequential HMM word alignment model, rather than the fertility-based models 3 or 4 . The former is computationally less complex and at the same time still includes the essential parts of the word alignment (i.e. lexical correspondence and changes in word order).", |
| "cite_spans": [ |
| { |
| "start": 24, |
| "end": 44, |
| "text": "(Liang et al., 2006)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 48, |
| "end": 72, |
| "text": "(DeNero and Klein, 2007)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work we show that a simpler alignment model introduced together with HMM-based alignment in (Vogel et al., 1996) , but discarded due to worse alignment error rate, results in essentially the same translation quality like HMM-based alignment in almost all cases -i.e. the relative-distortion IBM model 2. At the same time, it does not include a first-order dependency of the alignment, which means that it is much simpler to implement and train it. The experiment results also support the common knowledge that HMM-based alignment works just as well or sometimes better than IBM model 4, usually used by default.", |
| "cite_spans": [ |
| { |
| "start": 100, |
| "end": 120, |
| "text": "(Vogel et al., 1996)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In the following section we review the related work on the link between the word alignment quality and translation quality. Section 3 consists of a theoretical overview of different aspects of the word alignment task in the models in question. In section 4 we present the experiments with simpler default and alternative models on translations from Chinese, Czech, Estonian, Finnish, German and Korean into English and back.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Several papers point out that the scores designed for word alignment (alignment error-rate, Fscore) and translation (BLEU, NIST), are not heavily correlated. In particular (Fraser and Marcu, 2007) and (Ayan and Dorr, 2006) distinctly state that alignment error rate is a poor indicator of translation quality. (Lopez and Resnik, 2006) artificially degrade the alignment quality in order to show that it does not cause a significant drop in translation quality. They further show that with careful feature engineering the flaws of the underlying word alignment can be compensated. (Vilar et al., 2006) give two examples of word alignment modifications which cause worse alignment quality and nevertheless better translation quality. This is achieved by adapting the alignments to the specific requirements of translation. (Guzman et al., 2009) inspect word alignments and their characteristics, especially the number of unaligned words, and their influence on phrase pair extraction. They show that an increased number of unaligned words causes degraded translation quality. Analyzing manually evaluated phrase pairs they come up with translation model features that account for the number of unaligned words and improve the translation quality. (Lambert et al., 2009) tune alignment for the F-score and the BLEU score. They show that the two objectives are not the same and produce different translation models. (Ganchev et al., 2008) use agreement-driven training of alignment models and replace Viterbi decoding with posterior decoding. This results in improvements both in the alignment quality as well as translation quality.", |
| "cite_spans": [ |
| { |
| "start": 172, |
| "end": 196, |
| "text": "(Fraser and Marcu, 2007)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 201, |
| "end": 222, |
| "text": "(Ayan and Dorr, 2006)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 310, |
| "end": 334, |
| "text": "(Lopez and Resnik, 2006)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 580, |
| "end": 600, |
| "text": "(Vilar et al., 2006)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 821, |
| "end": 842, |
| "text": "(Guzman et al., 2009)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 1245, |
| "end": 1267, |
| "text": "(Lambert et al., 2009)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 1412, |
| "end": 1434, |
| "text": "(Ganchev et al., 2008)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A brief comparison of the IBM models in SMT context is performed in (Koehn et al., 2003) . The comparison is based on the BLEU scores and covers IBM models 1 to 4. The given brief conclusions are that using different alignment models does not cause significant changes in translation quality. Model 1 is noted for lower scores and models 2 and 4 are said to produce similar results. Our results suggest the contrary for the latter point. (He, 2007) introduce word dependent HMM-based word alignment. They apply fully lexicalized transition modeling by additionally conditioning the first-order dependency of the alignment on the corresponding output word. They show that this modified alignment model can match the performance of IBM model 4. As it will be shown later in this work, usual HMM-based alignment models also achieve the same result.", |
| "cite_spans": [ |
| { |
| "start": 68, |
| "end": 88, |
| "text": "(Koehn et al., 2003)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 438, |
| "end": 448, |
| "text": "(He, 2007)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The task of word alignment is to find matching words in a pair of sentences that have the same meaning in two different languages. Although in reality sometimes whole phrases are translated with no direct correspondence in meaning between the used words (e.g. idioms), the classic approach is to align single words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Alignment", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Although the task is essentially symmetrical, the IBM and HMM-based models focus on aligning every word in one sentence to at most one word in the other one. For simplicity's sake we will use the standard notation of f and e for the two sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Alignment", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The aim is therefore to find an alignment a, which is a vector of indexes indicating which words in e the words in f are aligned to; in other words, the word f j is aligned to the word e i if a j = i. Also any a j can be equal to 0, in which case the word f j is said to be unaligned.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Alignment", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Here we will focus on the IBM models (Brown et al., 1993) , the HMM-based alignment model (Vogel et al., 1996) and a modification of the original IBM model 2 introduced in (Och and Ney, 2000) and referred to as diagonal-oriented model 2.", |
| "cite_spans": [ |
| { |
| "start": 37, |
| "end": 57, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 90, |
| "end": 110, |
| "text": "(Vogel et al., 1996)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 172, |
| "end": 191, |
| "text": "(Och and Ney, 2000)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Word Alignment", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Lexical (or translational) correspondence of the single words in the two sentences is perhaps the main aspect of word alignment and is present in all of the described models. In all the models considered here lexical correspondence is treated as independent of the word positions in the sentences or any context of either words; it is modeled via a probability distribution p(f |e).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Correspondence", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Although lexical correspondence is a very important aspect, constricting an alignment model to it (as it is the case with model 1) results in serious model flaws: in case of any lexical ambiguity the model will select the most probable word pair in all cases, and the model would not be able to resolve a conflict between repeated items, like punctuation marks or same words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Lexical Correspondence", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "Modeling the different word order in the two sentences is also referred to as distortion (Brown et al., 1993) . Just like the lexical correspondence aspect, in the IBM models it is considered to be independent of the words themselves or their context. The models 2 and 3 include a distortion component based only on the absolute word positions and the sentence lengths: p(a j |j, J, I), where J = |f | and I = |e|. The problem with absolute word positions is that, simply put, same words can occur at different positions in sentences of different length. This means that a separate parameter subset models each different position and sentence length which, in addition to unnecessary treating of the same words differently, can easily suffer from the sparse data effect.", |
| "cite_spans": [ |
| { |
| "start": 89, |
| "end": 109, |
| "text": "(Brown et al., 1993)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distortion", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "A modification of the original model 2, introduced originally in (Vogel et al., 1996) and developed further in (Och and Ney, 2000) , is instead based on the distance between a j and a scaled j:", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 85, |
| "text": "(Vogel et al., 1996)", |
| "ref_id": "BIBREF25" |
| }, |
| { |
| "start": 111, |
| "end": 130, |
| "text": "(Och and Ney, 2000)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distortion", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "d = a j \u2212 j\u2022I J .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distortion", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "Finally, (Och and Ney, 2000) introduce lexicalized reordering, which is additionally conditioned on some general classes of the words (C(f j ), C(e a j )). (Vogel et al., 1996 ) make a step further from independent single pair distortions. They treat alignment as a Markov process with the source words as the observed and the alignments -as hidden variables. With first-order Markov dependency assumption the alignment pairs are not any more independent. This makes the training/aligning algorithms more complicated. Still with the help of dynamic programming these can be solved with no approximations.", |
| "cite_spans": [ |
| { |
| "start": 9, |
| "end": 28, |
| "text": "(Och and Ney, 2000)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 156, |
| "end": 175, |
| "text": "(Vogel et al., 1996", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distortion", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "First-order dependency is a simple way to take into consideration the context of the word pair. If a neighboring (unambiguous) word pair is aligned with an atypical relative distortion, an HMMbased model is capable of deciding to align the current word similarly.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Distortion First-order Dependency", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "The aspect of fertility aims at modeling the fact that a single word can either be unconnected with the other sentence or be aligned with more than one word. In models 3 and 4 this is modeled explicitly: p(\u03c6|e), where \u03c6 denotes the number of words in f that e is connected to. Since this aspect influences the first-order dependencies as well as distortion, it makes learning and applying of such models even more complicated and is solved with approximations in practice. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Fertility", |
| "sec_num": "3.4" |
| }, |
| { |
| "text": "In this section we compare the influence of the word alignments on the resulting translation quality. The main aim is to test, whether it is necessary to use the complex default models or not -i.e., whether simpler models can match their performance. The default way of training the IBM models, as proposed by (Och and Ney, 2003) , is IBM model 1, HMM-based model, IBM model 3 and finally model 4, whereas the resulting parameters of the simpler models are used as the initial values of the more elaborate models. We first evaluate, how stopping at an earlier stage of word alignment influences translation quality. The hypothesis here, dictated by common knowledge, is that HMM-based alignment can perform just as well or even better than IBM model 4.", |
| "cite_spans": [ |
| { |
| "start": 310, |
| "end": 329, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "As a next step we compare the default alignment model training sequence to an alternative, which uses different variants of IBM model 2 as a final step. Our aim is to see whether some variant can match the performance of the HMM-based model and IBM model 4.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We evaluate the influence of word alignment on two translation systems -a phrase-based system trained with Moses (Koehn et al., 2007) and a hierarchical phrase-based system trained with Joshua (Li et al., 2009) . In both cases we used 5-gram language models from SRI LM (Stolcke, 2002) and minimum error rate training included in the toolkits.", |
| "cite_spans": [ |
| { |
| "start": 113, |
| "end": 133, |
| "text": "(Koehn et al., 2007)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 193, |
| "end": 210, |
| "text": "(Li et al., 2009)", |
| "ref_id": "BIBREF13" |
| }, |
| { |
| "start": 270, |
| "end": 285, |
| "text": "(Stolcke, 2002)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Word alignment was done with GIZA++ (Och and Ney, 2003) for both systems. We modified its implementation to support three kinds of IBM2-based models: the absolute reordering-based model already included in GIZA++ (abbreviated as IBM2), the relative reordering-based model (abbreviated as IBM2(r)) and the latter, augmented with lexicalization, as suggested by (Och and Ney, 2000) (abbreviated as IBM2(r-l)).", |
| "cite_spans": [ |
| { |
| "start": 36, |
| "end": 55, |
| "text": "(Och and Ney, 2003)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 360, |
| "end": 379, |
| "text": "(Och and Ney, 2000)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We performed all of the experiments on the following language pairs and corpora: the Chinese-English and Korean-English parts of the OPUS KDE4 corpus (Tiedemann, 2009) , Czech-English technical documentations from CzEng (Bojar and\u017dabokrtsk\u00fd, 2009) , the Estonian-English part of the JRC-Acquis (Steinberger et al., 2006) and Finnish-English and German-English parts of Europarl (Koehn, 2005) ; all experiments included both translation directions.", |
| "cite_spans": [ |
| { |
| "start": 150, |
| "end": 167, |
| "text": "(Tiedemann, 2009)", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 220, |
| "end": 247, |
| "text": "(Bojar and\u017dabokrtsk\u00fd, 2009)", |
| "ref_id": null |
| }, |
| { |
| "start": 294, |
| "end": 320, |
| "text": "(Steinberger et al., 2006)", |
| "ref_id": "BIBREF21" |
| }, |
| { |
| "start": 378, |
| "end": 391, |
| "text": "(Koehn, 2005)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "Two independent held-out sets, each 2500 sentence pairs, were reserved for minimum error-rate training and validation; the resulting sizes of the training parts after preprocessing and separating the dev and test sets are summarized in table 1.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiment Setup", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We used the BLEU (Papieni et al., 2001) and NIST (NIST, 2002) scores for evaluation and paired bootstrap resampling (Riezler and Maxwell, 2005) for significance testing.", |
| "cite_spans": [ |
| { |
| "start": 17, |
| "end": 39, |
| "text": "(Papieni et al., 2001)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 44, |
| "end": 61, |
| "text": "NIST (NIST, 2002)", |
| "ref_id": "BIBREF16" |
| }, |
| { |
| "start": 116, |
| "end": 143, |
| "text": "(Riezler and Maxwell, 2005)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The results of the first part of the experiments, which is stopping midway in training the word alignment models, are presented on table 1. The scores of translations based on IBM model 1 are noticeably lower than all other models; also after the HMM-based model there is a noticeable drop at the IBM model 3 for almost every language pair, after which the IBM4 model scores rise to the level of the HMM model again.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Significance testing reveals that only in case of Korean-English translation the NIST score of the HMM model is significantly lower (p-value 0.009) than the score of the IBM model 4; in all other cases both scores of the HMM model is either insignificantly different or significantly higher.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The IBM2 models were trained in the same way as the HMM-based model: starting with the IBM model 1. The resulting scores of the IBM2 models are compared to the HMM-based model in table 2. As expected, the absolute reordering-based IBM2 model results are considerably lower than all other models in all experiments.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "The only case where an IBM2-based model outperforms an HMM-based model is English-Czech translation with IBM2(r) (based on the BLEU score). Also Czech-English translation results are essentially the same for IBM2(r) based on both scores with Joshua, and English-Czech, English-Korean and English-Chinese results for IBM2(r-l) -with Moses.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Most other examples of IBM2-based models having insignificantly different scores from the HMM-based models are IBM2(r) with Joshua (German-English and English-German NIST, Chinese-English and English-Chinese BLEU) and IBM2(r-l) with Moses (Czech-English BLEU, German-English NIST, Chinese-English NIST).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "In all other cases HMM-based alignment outperforms IBM2-based alignments significantly. However the score differences are relatively small (0.5-0.6 BLEU and 0.04-0.06 NIST points) in many cases. The main translation directions where the difference between the HMM-based alignment and the best IBM2-based model is high with both phrase-based and parsing-based translation are Estonian-English, Korean-English, Finnish-English and English-Estonian.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "To conclude, it seems that the IBM4 model can be safely replaced with the HMM-based model. Also, although the IBM2-based models did not outperform HMM entirely, for many language pairs the difference was estimated as insignificant and for some others -significant but relatively small. Thus the relative distortion-based IBM2 models can serve as an efficient trade-off between efficiency and quality. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Results", |
| "sec_num": "4.2" |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Going beyond AER: An extensive analysis of word alignments and their impact on mt", |
| "authors": [ |
| { |
| "first": "Necip", |
| "middle": [], |
| "last": "Ayan", |
| "suffix": "" |
| }, |
| { |
| "first": "Bonnie", |
| "middle": [ |
| "J" |
| ], |
| "last": "Fazil", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Dorr", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of ACL/COLING'06", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ayan, Necip Fazil and Bonnie J. Dorr. 2006. Going beyond AER: An extensive analysis of word alignments and their impact on mt. In Proceedings of ACL/COLING'06.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "CzEng0.9: Large Parallel Treebank with Rich Annotation", |
| "authors": [ |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Zden\u011bk\u017eabokrtsk\u00fd", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Prague Bulletin of Mathematical Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Bojar, Ond\u0159ej and Zden\u011bk\u017dabokrtsk\u00fd. 2009. CzEng0.9: Large Parallel Treebank with Rich Annotation. Prague Bulletin of Mathematical Linguistics, 92. in print.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "The mathematics of statistical machine translation: Parameter estimation", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "Vincent", |
| "middle": [ |
| "J" |
| ], |
| "last": "Stephen Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "Robert", |
| "middle": [ |
| "L" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Mercer", |
| "suffix": "" |
| } |
| ], |
| "year": 1993, |
| "venue": "Computational Linguistics", |
| "volume": "19", |
| "issue": "2", |
| "pages": "263--311", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Brown, Peter F., Stephen Della Pietra, Vincent J. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguis- tics, 19(2), 263-311.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A hierarchical phrase-based model for statistical machine translation", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Chiang", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of ACL'05", |
| "volume": "", |
| "issue": "", |
| "pages": "263--270", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chiang, David. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL'05, pp. 263-270, Ann Arbor, USA.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Tailoring word alignments to syntactic machine translation", |
| "authors": [ |
| { |
| "first": "John", |
| "middle": [], |
| "last": "Denero", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL'07", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "DeNero, John and Dan Klein. 2007. Tailoring word alignments to syntactic machine translation. In Proceedings of ACL'07, p. 17, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Measuring word alignment quality for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Alexander", |
| "middle": [], |
| "last": "Fraser", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Computational Linguistics", |
| "volume": "33", |
| "issue": "3", |
| "pages": "293--303", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fraser, Alexander and Daniel Marcu. 2007. Measuring word alignment quality for statistical machine translation. Computational Linguistics, 33(3), 293-303.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Better alignments = better translations?", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ganchev", |
| "suffix": "" |
| }, |
| { |
| "first": "Jo\u01ceo", |
| "middle": [ |
| "V" |
| ], |
| "last": "Kuzman", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Gra\u00e7a", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "Proceedings of ACL'08", |
| "volume": "", |
| "issue": "", |
| "pages": "986--993", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ganchev, Kuzman, Jo\u01ceo V. Gra\u00e7a, and Ben Taskar. 2008. Better alignments = better translations? In Proceedings of ACL'08, pp. 986-993, Columbus, USA.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Reassessment of the role of phrase extraction in PBSMT", |
| "authors": [ |
| { |
| "first": "Francisco", |
| "middle": [], |
| "last": "Guzman", |
| "suffix": "" |
| }, |
| { |
| "first": "Qin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of MT Summit XII", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Guzman, Francisco, Qin Gao, and Stephan Vogel. 2009. Reassessment of the role of phrase extraction in PBSMT. In Proceedings of MT Summit XII, Ottawa, Canada.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "Using word-dependent transition models in HMM-based word alignment for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Xiaodong", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the Second Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "80--87", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "He, Xiaodong. 2007. Using word-dependent transition models in HMM-based word alignment for statistical machine translation. In Proceedings of the Second Workshop on Statistical Machine Translation, pp. 80-87, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Europarl: A parallel corpus for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of MT Summit X", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koehn, Philipp. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of MT Summit X, Phuket, Thailand.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Moses: Open source toolkit for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| }, |
| { |
| "first": "Brooke", |
| "middle": [], |
| "last": "Cowan", |
| "suffix": "" |
| }, |
| { |
| "first": "Wade", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Moran", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ondrej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Constantin", |
| "suffix": "" |
| }, |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Herbst", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of ACL'07", |
| "volume": "", |
| "issue": "", |
| "pages": "177--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koehn, Philipp, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of ACL'07, pp. 177-180, Prague, Czech Republic.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "Statistical phrase-based translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Franz", |
| "middle": [ |
| "Joseph" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Daniel", |
| "middle": [], |
| "last": "Marcu", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Proceedings of NAACL-HLT'03", |
| "volume": "", |
| "issue": "", |
| "pages": "48--54", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Koehn, Philipp, Franz Joseph Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of NAACL-HLT'03, pp. 48-54, Edmonton, Canada.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Tracking relevant alignment characteristics for machine translation", |
| "authors": [ |
| { |
| "first": "Patrik", |
| "middle": [], |
| "last": "Lambert", |
| "suffix": "" |
| }, |
| { |
| "first": "Yanjun", |
| "middle": [], |
| "last": "Ma", |
| "suffix": "" |
| }, |
| { |
| "first": "Sylwia", |
| "middle": [], |
| "last": "Ozdowska", |
| "suffix": "" |
| }, |
| { |
| "first": "Andy", |
| "middle": [], |
| "last": "Way", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of MT Summit XII", |
| "volume": "", |
| "issue": "", |
| "pages": "268--275", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lambert, Patrik, Yanjun Ma, Sylwia Ozdowska, and Andy Way. 2009. Tracking relevant alignment characteristics for machine translation. In Proceedings of MT Summit XII, pp. 268-275, Ottawa, Canada.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Joshua: An open source toolkit for parsing-based machine translation", |
| "authors": [ |
| { |
| "first": "Zhifei", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Sanjeev", |
| "middle": [], |
| "last": "Khudanpur", |
| "suffix": "" |
| }, |
| { |
| "first": "Lane", |
| "middle": [], |
| "last": "Schwartz", |
| "suffix": "" |
| }, |
| { |
| "first": "Wren", |
| "middle": [], |
| "last": "Thornton", |
| "suffix": "" |
| }, |
| { |
| "first": "Jonathan", |
| "middle": [], |
| "last": "Weese", |
| "suffix": "" |
| }, |
| { |
| "first": "Omar", |
| "middle": [], |
| "last": "Zaidan", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "135--139", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Li, Zhifei, Chris Callison-Burch, Chris Dyer, Sanjeev Khudanpur, Lane Schwartz, Wren Thornton, Jonathan Weese, and Omar Zaidan. 2009. Joshua: An open source toolkit for parsing-based machine translation. In Proceedings of the Fourth Workshop on Statistical Machine Translation, pp. 135-139, Athens, Greece.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Alignment by agreement", |
| "authors": [ |
| { |
| "first": "Percy", |
| "middle": [], |
| "last": "Liang", |
| "suffix": "" |
| }, |
| { |
| "first": "Ben", |
| "middle": [], |
| "last": "Taskar", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Klein", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of NAACL-HLT'06", |
| "volume": "", |
| "issue": "", |
| "pages": "104--111", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Liang, Percy, Ben Taskar, and Dan Klein. 2006. Alignment by agreement. In Proceedings of NAACL-HLT'06, pp. 104-111, New York, USA.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Word-based alignment, phrase-based translation: what's the link?", |
| "authors": [ |
| { |
| "first": "Adam", |
| "middle": [], |
| "last": "Lopez", |
| "suffix": "" |
| }, |
| { |
| "first": "Philip", |
| "middle": [], |
| "last": "Resnik", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of AMTA'06", |
| "volume": "", |
| "issue": "", |
| "pages": "90--99", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Lopez, Adam and Philip Resnik. 2006. Word-based alignment, phrase-based translation: what's the link? In Proceedings of AMTA'06, pp. 90-99.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Automatic evaluation of machine translation quality using n-gram co-occurrence statistics", |
| "authors": [ |
| { |
| "first": "", |
| "middle": [], |
| "last": "NIST", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "NIST. 2002. Automatic evaluation of machine translation quality using n-gram co-occurrence statistics. Technical report.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "A systematic comparison of various statistical alignment models", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [ |
| "J" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2003, |
| "venue": "Computational Linguistics", |
| "volume": "29", |
| "issue": "1", |
| "pages": "19--51", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Och, Franz J. and Hermann Ney. 2003. A systematic comparison of various statistical alignment models. Computational Linguistics, 29(1), 19-51.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "A comparison of alignment models for statistical machine translation", |
| "authors": [ |
| { |
| "first": "Franz", |
| "middle": [ |
| "Joseph" |
| ], |
| "last": "Och", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2000, |
| "venue": "Proceedings of COLING'2000", |
| "volume": "", |
| "issue": "", |
| "pages": "1086--1090", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Och, Franz Joseph and Hermann Ney. 2000. A comparison of alignment models for statistical machine translation. In Proceedings of COLING'2000, pp. 1086-1090, Saarbr\u00fccken, Germany.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "BLEU: a method for automatic evaluation of machine translation", |
| "authors": [ |
| { |
| "first": "Kishore", |
| "middle": [], |
| "last": "Papineni", |
| "suffix": "" |
| }, |
| { |
| "first": "Salim", |
| "middle": [], |
| "last": "Roukos", |
| "suffix": "" |
| }, |
| { |
| "first": "Todd", |
| "middle": [], |
| "last": "Ward", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei-Jing", |
| "middle": [], |
| "last": "Zhu", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ACL'02", |
| "volume": "", |
| "issue": "", |
| "pages": "311--318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Papineni, Kishore, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of ACL'02, pp. 311-318, Philadelphia, PA, USA.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "On some pitfalls in automatic evaluation and significance testing for MT", |
| "authors": [ |
| { |
| "first": "Stefan", |
| "middle": [], |
| "last": "Riezler", |
| "suffix": "" |
| }, |
| { |
| "first": "John", |
| "middle": [ |
| "T" |
| ], |
| "last": "Maxwell", |
| "suffix": "" |
| } |
| ], |
| "year": 2005, |
| "venue": "Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "57--64", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Riezler, Stefan and John T. Maxwell. 2005. On some pitfalls in automatic evaluation and significance testing for MT. In Proceedings of the ACL Workshop on Intrinsic and Extrinsic Evaluation Measures for Machine Translation, pp. 57-64, Ann Arbor, USA.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages", |
| "authors": [ |
| { |
| "first": "Ralf", |
| "middle": [], |
| "last": "Steinberger", |
| "suffix": "" |
| }, |
| { |
| "first": "Bruno", |
| "middle": [], |
| "last": "Pouliquen", |
| "suffix": "" |
| }, |
| { |
| "first": "Anna", |
| "middle": [], |
| "last": "Widiger", |
| "suffix": "" |
| }, |
| { |
| "first": "Camelia", |
| "middle": [], |
| "last": "Ignat", |
| "suffix": "" |
| }, |
| { |
| "first": "Toma\u017e", |
| "middle": [], |
| "last": "Erjavec", |
| "suffix": "" |
| }, |
| { |
| "first": "Dan", |
| "middle": [], |
| "last": "Tufi\u015f", |
| "suffix": "" |
| }, |
| { |
| "first": "D\u00e1niel", |
| "middle": [], |
| "last": "Varga", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of LREC'06", |
| "volume": "", |
| "issue": "", |
| "pages": "2142--2147", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Steinberger, Ralf, Bruno Pouliquen, Anna Widiger, Camelia Ignat, Toma\u017e Erjavec, Dan Tufi\u015f, and D\u00e1niel Varga. 2006. The JRC-Acquis: A multilingual aligned parallel corpus with 20+ languages. In Proceedings of LREC'06, pp. 2142-2147, Genoa, Italy.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "SRILM - an extensible language modeling toolkit", |
| "authors": [ |
| { |
| "first": "Andreas", |
| "middle": [], |
| "last": "Stolcke", |
| "suffix": "" |
| } |
| ], |
| "year": 2002, |
| "venue": "Proceedings of ICSLP'02", |
| "volume": "2", |
| "issue": "", |
| "pages": "901--904", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Stolcke, Andreas. 2002. SRILM - an extensible language modeling toolkit. In Proceedings of ICSLP'02, volume 2, pp. 901-904, Denver, Colorado, USA.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "News from OPUS - a collection of multilingual parallel corpora with tools and interfaces", |
| "authors": [ |
| { |
| "first": "J\u00f6rg", |
| "middle": [], |
| "last": "Tiedemann", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of RANLP'09", |
| "volume": "", |
| "issue": "", |
| "pages": "237--248", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Tiedemann, J\u00f6rg. 2009. News from OPUS - a collection of multilingual parallel corpora with tools and interfaces. In Proceedings of RANLP'09, pp. 237-248, Borovets, Bulgaria.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "AER: Do we need to \"improve\" our alignments?", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Vilar", |
| "suffix": "" |
| }, |
| { |
| "first": "Maja", |
| "middle": [], |
| "last": "Popovic", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of IWSLT'06", |
| "volume": "", |
| "issue": "", |
| "pages": "205--212", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vilar, David, Maja Popovic, and Hermann Ney. 2006. AER: Do we need to \"improve\" our alignments? In Proceedings of IWSLT'06, pp. 205-212.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "HMM-based word alignment in statistical translation", |
| "authors": [ |
| { |
| "first": "Stephan", |
| "middle": [], |
| "last": "Vogel", |
| "suffix": "" |
| }, |
| { |
| "first": "Hermann", |
| "middle": [], |
| "last": "Ney", |
| "suffix": "" |
| }, |
| { |
| "first": "Christoph", |
| "middle": [], |
| "last": "Tillmann", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Proceedings of COLING'96", |
| "volume": "", |
| "issue": "", |
| "pages": "836--841", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Vogel, Stephan, Hermann Ney, and Christoph Tillmann. 1996. HMM-based word alignment in statistical translation. In Proceedings of COLING'96, pp. 836-841, Copenhagen, Denmark.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Results of early-stopping experiments: comparison of the influence of word alignment models IBM1, HMM-based, IBM3 and IBM4 on the resulting translation quality. Results of comparing the HMM-based model to IBM2-based models.", |
| "uris": null, |
| "num": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>Corpus</td><td colspan=\"3\">Number of sentence Number of words Number of words</td></tr><tr><td/><td>pairs</td><td>(English)</td><td>(Foreign)</td></tr><tr><td>OPUS KDE4 (Korean-English)</td><td>64.1 \u2022 10 3</td><td>0.32 \u2022 10 6</td><td>0.33 \u2022 10 6</td></tr><tr><td>OPUS KDE4 (Chinese-English)</td><td>103.7 \u2022 10 3</td><td>0.57 \u2022 10 6</td><td>0.78 \u2022 10 6</td></tr><tr><td>CzEng, tech. docs (Czech-English)</td><td>0.97 \u2022 10 6</td><td>7.27 \u2022 10 6</td><td>6.59 \u2022 10 6</td></tr><tr><td>JRC-Acquis (Estonian-English)</td><td>1.09 \u2022 10 6</td><td>27.91 \u2022 10 6</td><td>20.18 \u2022 10 6</td></tr><tr><td>Europarl (German-English)</td><td>1.52 \u2022 10 6</td><td>41.98 \u2022 10 6</td><td>39.81 \u2022 10 6</td></tr><tr><td>Europarl (Finnish-English)</td><td>1.59 \u2022 10 6</td><td>43.94 \u2022 10 6</td><td>31.58 \u2022 10 6</td></tr></table>", |
| "text": "The size of the training parts of the used parallel corpora", |
| "type_str": "table" |
| } |
| } |
| } |
| } |