{
"paper_id": "2020",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T03:41:00.983614Z"
},
"title": "Human-Paraphrased References Improve Neural Machine Translation",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": "",
"affiliation": {},
"email": "freitag@google.com"
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": "",
"affiliation": {},
"email": "fosterg@google.com"
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": "",
"affiliation": {},
"email": "grangier@google.com"
},
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": "",
"affiliation": {},
"email": "colincherry@google.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Automatic evaluation comparing candidate translations to human-generated paraphrases of reference translations has recently been proposed by Freitag et al. (2020). When used in place of original references, the paraphrased versions produce metric scores that correlate better with human judgment. This effect holds for a variety of different automatic metrics, and tends to favor natural formulations over more literal (translationese) ones. In this paper we compare the results of performing endto-end system development using standard and paraphrased references. With state-of-the-art English-German NMT components, we show that tuning to paraphrased references produces a system that is significantly better according to human judgment, but 5 BLEU points worse when tested on standard references. Our work confirms the finding that paraphrased references yield metric scores that correlate better with human judgment, and demonstrates for the first time that using these scores for system development can lead to significant improvements.",
"pdf_parse": {
"paper_id": "2020",
"_pdf_hash": "",
"abstract": [
{
"text": "Automatic evaluation comparing candidate translations to human-generated paraphrases of reference translations has recently been proposed by Freitag et al. (2020). When used in place of original references, the paraphrased versions produce metric scores that correlate better with human judgment. This effect holds for a variety of different automatic metrics, and tends to favor natural formulations over more literal (translationese) ones. In this paper we compare the results of performing endto-end system development using standard and paraphrased references. With state-of-the-art English-German NMT components, we show that tuning to paraphrased references produces a system that is significantly better according to human judgment, but 5 BLEU points worse when tested on standard references. Our work confirms the finding that paraphrased references yield metric scores that correlate better with human judgment, and demonstrates for the first time that using these scores for system development can lead to significant improvements.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine Translation (MT) has shown impressive progress in recent years. Neural architectures (Bahdanau et al., 2015; Gehring et al., 2017; Vaswani et al., 2017) have greatly contributed to this improvement, especially for languages with abundant training data (Bojar et al., 2016 (Bojar et al., , 2018 Barrault et al., 2019) . This progress creates novel challenges for the evaluation of machine translation, both for human (Toral, 2020; L\u00e4ubli et al., 2020) and automated evaluation protocols (Lo, 2019; .",
"cite_spans": [
{
"start": 93,
"end": 116,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF0"
},
{
"start": 117,
"end": 138,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF11"
},
{
"start": 139,
"end": 160,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 260,
"end": 279,
"text": "(Bojar et al., 2016",
"ref_id": "BIBREF5"
},
{
"start": 280,
"end": 301,
"text": "(Bojar et al., , 2018",
"ref_id": "BIBREF6"
},
{
"start": 302,
"end": 324,
"text": "Barrault et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 424,
"end": 437,
"text": "(Toral, 2020;",
"ref_id": "BIBREF39"
},
{
"start": 438,
"end": 458,
"text": "L\u00e4ubli et al., 2020)",
"ref_id": null
},
{
"start": 494,
"end": 504,
"text": "(Lo, 2019;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Both types of evaluation play an important role in machine translation (Koehn, 2010) . While human evaluations provide a gold standard evaluation, they involve a fair amount of careful and hence expensive work by human assessors. Cost therefore limits the scale of their application. On the other hand, automated evaluations are much less expensive. They typically only involve human labor when collecting human reference translations and can hence be run at scale to compare a wide range of systems or validate design decisions. The value of automatic evaluations therefore resides in their capacity to be used as a proxy for human evaluations for large scale comparisons and system development.",
"cite_spans": [
{
"start": 71,
"end": 84,
"text": "(Koehn, 2010)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The recent progress in MT has raised concerns about whether automated evaluation methodologies reliably reflect human ratings in high accuracy ranges. In particular, it has been observed that the best systems according to humans might fare less well with automated metrics (Barrault et al., 2019) . Most metrics such as BLEU (Papineni et al., 2002) and TER (Snover et al., 2006) measure overlap between a system output and a human reference translation. More refined ways to compute such overlap have consequently been proposed (Banerjee and Lavie, 2005; Lo, 2019; .",
"cite_spans": [
{
"start": 273,
"end": 296,
"text": "(Barrault et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 325,
"end": 348,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
},
{
"start": 357,
"end": 378,
"text": "(Snover et al., 2006)",
"ref_id": "BIBREF37"
},
{
"start": 528,
"end": 554,
"text": "(Banerjee and Lavie, 2005;",
"ref_id": "BIBREF2"
},
{
"start": 555,
"end": 564,
"text": "Lo, 2019;",
"ref_id": "BIBREF21"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Orthogonal to the work of building improved metrics, hypothesized that human references are also an important factor in the reliability of automated evaluations. In particular, they observed that standard references exhibit simple, monotonic language due to human 'translationese' effects. These standard references might favor systems which excel at reproducing these effects, independent of the underlying translation quality. They showed that better correlation between human and automated evaluations could be obtained when replacing standard references with paraphrased references, even when still using surface overlap metrics such as BLEU (Papineni et al., 2002) . The novel references, collected by asking linguists to paraphrase standard references, were shown to steer evaluation away from rewarding translation artifacts. This improves the assessment of alternative, but equally good translations.",
"cite_spans": [
{
"start": 646,
"end": 669,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our work builds on the success of paraphrased translations for evaluating existing systems, and asks if different design choices could have been made when designing a system with such an evaluation protocol in mind. This examination has several potential benefits: it can help identify choices which improve BLEU on standard references but have limited impact on final human evaluations; or those that result in better translations for the human reader, but worse in terms of standard reference BLEU. Conversely, it might turn out that paraphrased references are not robust enough to support system development due to the presence of 'metric honeypots': settings that produce poor translations, but which are nevertheless assigned high BLEU scores.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "To address these points, we revisit the major design choices of the best English\u2192German system from WMT2019 step-by-step, and measure their impact on standard reference BLEU as well as on paraphrased BLEU. This allows us to measure the extent to which steps such as data cleaning, back-translation, fine-tuning, ensemble decoding and reranking benefit standard reference BLEU more than paraphrase BLEU. Revisiting these development choices with the two metrics results in two systems with quite different behaviors. We conduct a human evaluation for adequacy and fluency to assess the overall impact of designing a system using paraphrased BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Our main findings show that optimizing for paraphrased BLEU is advantageous for human evaluation when compared to an identical system optimized for standard BLEU. The system optimized for paraphrased BLEU significantly improves WMT newstest19 adequacy ratings (4.72 vs 4.27 on a six-point scale) and fluency ratings (63.8% vs 27.2% on side-by-side preference) despite scoring 5 BLEU points lower on standard references.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Collecting human paraphrases of existing references has recently been shown to be useful for system evaluation . Our work considers applying the same methodology for system tuning. There is some earlier work relying on automated paraphrases for system tuning, especially for Statistical Machine Translation (SMT). Madnani et al. (2007) introduced an automatic paraphrasing technique based on English-to-English translation of full sentences using a statistical MT system, and showed that this permitted reliable system tuning using half as much data. Similar automatic paraphrasing has also been used to augment training data, e.g. (Marton et al., 2009) , but relying on standard references for evaluation. In contrast to human paraphrases, the quality of current machine generated paraphrases degrades significantly as overlap with the input decreases (Mallinson et al., 2017; Roy and Grangier, 2019) . This makes their use difficult for evaluation since suggests that substantial paraphrasing -'paraphrase as much as possible' -is necessary for evaluation.",
"cite_spans": [
{
"start": 314,
"end": 335,
"text": "Madnani et al. (2007)",
"ref_id": "BIBREF23"
},
{
"start": 632,
"end": 653,
"text": "(Marton et al., 2009)",
"ref_id": "BIBREF25"
},
{
"start": 853,
"end": 877,
"text": "(Mallinson et al., 2017;",
"ref_id": "BIBREF24"
},
{
"start": 878,
"end": 901,
"text": "Roy and Grangier, 2019)",
"ref_id": "BIBREF33"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work can be seen as replacing the regular BLEU metric with a new paraphrase BLEU metric for system tuning. Different alternative automatic evaluation metric have also been considered for system tuning (He and Way, 2010; Servan and Schwenk, 2011) with Minimum Error Rate Training, MERT (Och, 2003) . This work showed some specific cases where Translation Error Rate (TER) was superior to BLEU.",
"cite_spans": [
{
"start": 205,
"end": 223,
"text": "(He and Way, 2010;",
"ref_id": "BIBREF13"
},
{
"start": 224,
"end": 249,
"text": "Servan and Schwenk, 2011)",
"ref_id": "BIBREF35"
},
{
"start": 289,
"end": 300,
"text": "(Och, 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Our work is also related to the bias that the human translation process introduces in the references, including source language artifacts-Translationese (Koppel and Ordan, 2011)-as well as source-independent artifacts-Translation Universals (Mauranen and Kujam\u00e4ki, 2004) . The professional translation community studies both systematic biases inherent to translated texts (Baker, 1993; Selinker, 1972) , as well as biases resulting specifically from interference from the source text (Toury, 1995) . For MT, Freitag et al. (2019) point at Translationese as a source of mismatch between BLEU and human evaluation, raising concerns that overlap-based metrics might reward hypotheses with translationese language more than hypotheses using more natural language. The impact of Translationese on human evaluation of MT has recently received attention as well (Toral et al., 2018; Zhang and Toral, 2019; . More generally, the question of bias to a specific reference has also been raised, in the case of monolingual manual evaluation (Fomicheva and Specia, 2016; Ma et al., 2017) . Different from the impact of Translationese on evaluation, the impact of Translationese in the training data has also been studied (Kurokawa et al., 2009; Lembersky et al., 2012a; Bogoychev and Sennrich, 2019; Riley et al., 2020) .",
"cite_spans": [
{
"start": 241,
"end": 270,
"text": "(Mauranen and Kujam\u00e4ki, 2004)",
"ref_id": "BIBREF26"
},
{
"start": 372,
"end": 385,
"text": "(Baker, 1993;",
"ref_id": "BIBREF1"
},
{
"start": 386,
"end": 401,
"text": "Selinker, 1972)",
"ref_id": "BIBREF34"
},
{
"start": 484,
"end": 497,
"text": "(Toury, 1995)",
"ref_id": "BIBREF41"
},
{
"start": 508,
"end": 529,
"text": "Freitag et al. (2019)",
"ref_id": "BIBREF9"
},
{
"start": 855,
"end": 875,
"text": "(Toral et al., 2018;",
"ref_id": "BIBREF40"
},
{
"start": 876,
"end": 898,
"text": "Zhang and Toral, 2019;",
"ref_id": "BIBREF45"
},
{
"start": 1029,
"end": 1057,
"text": "(Fomicheva and Specia, 2016;",
"ref_id": "BIBREF8"
},
{
"start": 1058,
"end": 1074,
"text": "Ma et al., 2017)",
"ref_id": "BIBREF22"
},
{
"start": 1208,
"end": 1231,
"text": "(Kurokawa et al., 2009;",
"ref_id": "BIBREF17"
},
{
"start": 1232,
"end": 1256,
"text": "Lembersky et al., 2012a;",
"ref_id": "BIBREF19"
},
{
"start": 1257,
"end": 1286,
"text": "Bogoychev and Sennrich, 2019;",
"ref_id": "BIBREF4"
},
{
"start": 1287,
"end": 1306,
"text": "Riley et al., 2020)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Finally, our work is also related to studies measuring the importance of the test data quality, looking specifically at the test set translation direction. For SMT evaluation, Lembersky et al. (2012b) and Stymne (2017) explored how the translation direction affects translation results. Holmqvist et al. (2009) noted that the original language of the test sentences influences the BLEU score of translations. They showed that the BLEU scores for targetoriginal sentences are on average higher than sentences that have their original source in a different language. Recently, a similar study was conducted for neural MT (Bogoychev and Sennrich, 2019) .",
"cite_spans": [
{
"start": 176,
"end": 200,
"text": "Lembersky et al. (2012b)",
"ref_id": "BIBREF20"
},
{
"start": 205,
"end": 218,
"text": "Stymne (2017)",
"ref_id": "BIBREF38"
},
{
"start": 287,
"end": 310,
"text": "Holmqvist et al. (2009)",
"ref_id": "BIBREF14"
},
{
"start": 619,
"end": 649,
"text": "(Bogoychev and Sennrich, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "We first describe data and models, then present our human evaluation protocol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Setup",
"sec_num": "3"
},
{
"text": "We ran all experiments on the WMT 2019 English\u2192German news translation task (Barrault et al., 2019) . The task provides \u223c38M parallel sentences. As German monolingual data, we concatenate all News Crawl data from 2007 to 2018, comprising \u223c264M sentences after removing duplicates.",
"cite_spans": [
{
"start": 76,
"end": 99,
"text": "(Barrault et al., 2019)",
"ref_id": "BIBREF3"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "In addition to the training data, we use new-stest2018 for development and newstest2019 for evaluation only. There is an important difference between these two test sets. Newstest2018 was created from monolingual news data from both English and German online sources. Half of the data consists of English text translated into German, while the other half consists of German text translated into English. This results in a joint test set of 2,998 sentences. Newstest2019, on the other hand, consists only of 1,997 sentences translated from English into German (see Figure 1 ). To provide a joint test set similar to newstest2018, we took newstest2019 from the reverse translation direction German\u2192English, swapped source and target, and concatenated it with the original test sets. This results in a new joint newstest2019 test set of 3,997 sentences.",
"cite_spans": [],
"ref_spans": [
{
"start": 564,
"end": 572,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "In addition to reporting overall BLEU scores on the different test sets, we also report results on the two subsets (based on the original language) of each newstest20XX, which we call the orig-en and the orig-de halves of the test set. provided an alternative reference translation for the orig-en half of new-(a) Forward-translated, i.e. source original (b) Backward-translated, i.e. target original Figure 1 : Sentences in a test set are either natural in the source and forward-translated into the target language, or vice-versa. If a test set consists of both kinds of sentences, we call it a joint test set. WMT English\u2192German newstest2018 is a joint test set with half of the sentences being forward-translated. WMT English\u2192German newstest2019 is a forwardtranslated test set.",
"cite_spans": [],
"ref_spans": [
{
"start": 401,
"end": 409,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "stest2019. For both standard and alternative references, they provided an additional paraphrased 'as much as possible' version (four different references in all). In order to enable our parameter tuning experiments, we created a paraphrased version of the reference for the orig-en half of newstest2018 (1,500 sentences) following the instructions from . We will release this new paraphrased reference, newstest2018.orig-en.p, as part of our work.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data",
"sec_num": "3.1"
},
{
"text": "For our translation models, we adopt the transformer implementation from Lingvo (Shen et al., 2019) , using the transformer-big model size (Vaswani et al., 2017) . We use a vocabulary of 32k subword units and exponentially moving averaging of checkpoints (EMA decay) with the weight decrease parameter set to \u03b1 = 0.999 (Buduma and Locascio, 2017) . We used a batch size of around 32k sentences in all our experiments.",
"cite_spans": [
{
"start": 80,
"end": 99,
"text": "(Shen et al., 2019)",
"ref_id": "BIBREF36"
},
{
"start": 139,
"end": 161,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF42"
},
{
"start": 319,
"end": 346,
"text": "(Buduma and Locascio, 2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "We report BLEU (Papineni et al., 2002) in addition to human evaluation. All BLEU scores are calculated with sacreBLEU (Post, 2018) 1 .",
"cite_spans": [
{
"start": 15,
"end": 38,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF29"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Models",
"sec_num": "3.2"
},
{
"text": "To collect human rankings, we ran side-by-side evaluation for overall quality and fluency. We hired 20 linguists and divided them equally between the two evaluations. Each evaluation included 1,000 items with each item being rated exactly once. We acquired only a single rating per sentence from the professional linguists as we found that they were more reliable than crowd workers (Toral, 2020) . We evaluated the orig-en sentences corresponding to the official WMT-19 English\u2192German test set (Barrault et al., 2019) . Results in this natural translation direction are more meaningful as pointed out by Zhang and Toral (2019) , who show that translating a 'translationese' source is simpler and should not be used for human evaluation. Our human evaluation followed the protocol:",
"cite_spans": [
{
"start": 383,
"end": 396,
"text": "(Toral, 2020)",
"ref_id": "BIBREF39"
},
{
"start": 495,
"end": 518,
"text": "(Barrault et al., 2019)",
"ref_id": "BIBREF3"
},
{
"start": 605,
"end": 627,
"text": "Zhang and Toral (2019)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "3.3"
},
{
"text": "\u2022 Fluency: We present two translations of the same source sentence to professional linguists without showing the actual source sentence.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "3.3"
},
{
"text": "We then ask the rater wether they prefer one of the outputs or rate them equally based on fluency.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "3.3"
},
{
"text": "\u2022 Overall Quality: We present two translations along with the source and ask the raters to evaluate each translation on a 6-point scale. A score of 6 will be assigned to translations with 'perfect meaning and grammar', while a score of 0 will be assigned to 'nonsense/ no meaning preserved' translations. The average over all ratings yields the system's final quality score.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Human Evaluation",
"sec_num": "3.3"
},
{
"text": "This section first presents our main result comparing the same system tuned with BLEU on standard versus paraphrased references. We then break down how system design choices impact each metric differently. Throughout, we refer to scores computed with standard references as BLEU, and those computed with paraphrased references as BLEUP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Results",
"sec_num": "4"
},
{
"text": "We compare the performance of a system optimized on newstest2018 with standard references (opt-on-BLEU) with one optimized on newstest2018.origen with paraphrased references (opt-on-BLEUP). Both systems were developed using only new-stest2018 data, keeping newstest2019 as a blind test set. Table 1 summarizes the results on new-stest2019. Details of how these two systems were developed and how they differ are given in Section 4.2.",
"cite_spans": [],
"ref_spans": [
{
"start": 291,
"end": 298,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.1"
},
{
"text": "The opt-on-BLEU system outperforms opt-on-BLEUP by 5.2 BLEU points. Normally this would lead us to discard opt-on-BLEUP. However, the BLEUP scores tell a different story: opt-on-BLEUP outperforms by 0.3 points, a potentially large improvement given the smaller natural range of this metric. Under a significance test with random approximation (Riezler and Maxwell III, 2005) . We optimized the system to perform best on either newstest2018 with standard reference translations (opt-on-BLEU) or newstest2018.origen with paraphrased reference translations (opt-on-BLEUP). BLEU differences are significant according to random approximation (Riezler and Maxwell III, 2005) with p<5e-18. Human score differences are significant according to a Wilcoxon rank-sum test with p<5e-18. showed that BLEU scores calculated on paraphrased references have higher correlation with human judgment than those calculated on standard references. To verify their findings, we ran a human evaluation for the two different outputs on 1,000 sentences randomly drawn from newstest2019 (orig-en), as described above. As shown in Table 1 , opt-on-BLEUP is consistently evaluated as better for both quality and fluency. To measure the significance between the two ratings, we ran a Wilcoxon rank sum test on the human ratings and found that both improvements are significant with p<e-18.",
"cite_spans": [
{
"start": 343,
"end": 374,
"text": "(Riezler and Maxwell III, 2005)",
"ref_id": "BIBREF31"
},
{
"start": 637,
"end": 668,
"text": "(Riezler and Maxwell III, 2005)",
"ref_id": "BIBREF31"
}
],
"ref_spans": [
{
"start": 1103,
"end": 1110,
"text": "Table 1",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.1"
},
{
"text": "This experiment demonstrates that we can actually tune our MT system on paraphrased references to yield higher translation quality when compared to a typical system tuned on standard BLEU. Interestingly, the BLEU score for the better system is much lower, supporting our contention that BLEU rewards spurious translation features (e.g. monotonicity and common translations) that are filtered out by BLEUP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Overall Performance",
"sec_num": "4.1"
},
{
"text": "We now describe the individual model decisions that went into the two final systems of Section 4.1. To build a classical system optimized on BLEU with standard references, we replicate the WMT 2019 winning submission and examine the effect of each of its major design decisions. 2 In particular, we are looking into the effect of data cleaning, back-translation, fine tuning, ensembling and noisy channel reranking. We examine the impact of each method on BLEU and BLEUP. For our experiments, we used newstest2018 as our development set and newstest2019 as our held-out test set. All model decisions (checkpoint, variants) are solely made on newstest2018.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysing Performance",
"sec_num": "4.2"
},
{
"text": "Experimental results are presented in Table 2 . As described in Section 3.1, we report 4 different BLEU scores for newstest2018 (dev) and new-stest2019 (test). In addition to reporting BLEU score on the joint or the orig-de/orig-en halves of the test sets, we also report BLEU scores that are calculated on paraphrased references (BLEUP).",
"cite_spans": [],
"ref_spans": [
{
"start": 38,
"end": 45,
"text": "Table 2",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Analysing Performance",
"sec_num": "4.2"
},
{
"text": "For data cleaning, we used CDS (Wang et al., 2018) . We trained a CDS model for English\u2192German taking news-commentary as the in-domain/clean data set. We scored all parallel sentences with our trained CDS model and kept the 70% highest scoring sentences. Our experimental results suggest that data cleaning is useful for all four types of test sets and consistently improves over a baseline system that is trained on raw parallel data. We conclude that data cleaning is useful for all systems independently of which test set it will be optimized for.",
"cite_spans": [
{
"start": 31,
"end": 50,
"text": "(Wang et al., 2018)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data Cleaning",
"sec_num": "4.2.1"
},
{
"text": "We trained a strong German\u2192English model on the same parallel data (with flipped source/target) and used that model to (back-)translate (BT) all deduped German monolingual sentences from NewsCrawl 2007-2018 into English. We filtered sentences with a source-target ratio lower than 0.5 or higher than 1.5. We further run language identification and filtered out all backtranslations going into the wrong language. We then oversample our bitext data to match the size of the backtranslation data and train a NMT model on the concatenation of both datasets.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Translation",
"sec_num": "4.2.2"
},
{
"text": "As previously reported by (Freitag et al., 2019; Bogoychev and Sennrich, 2019) , the original language of the sentences within a test is crucial and can lead to very different conclusions, in particular for back-translation systems. This difference is visible when looking at the BLEU scores on the standard references. While the BLEU score on origde does improve by 7.5 points, the BLEU score drops by 2.9 points on the orig-en half. Due to the big gain on the orig-de half, BT also improves the BLEU score on the joint set. The paraphrased references were designed to overcome these kinds of mismatches and they show a gain of 0.5 BLEU points. We can conclude that back-translation helps improve BLEU and BLEUP and we include BT for systems that are optimized for both standard or paraphrased BLEU scores.",
"cite_spans": [
{
"start": 26,
"end": 48,
"text": "(Freitag et al., 2019;",
"ref_id": "BIBREF9"
},
{
"start": 49,
"end": 78,
"text": "Bogoychev and Sennrich, 2019)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Back-Translation",
"sec_num": "4.2.2"
},
{
"text": "Similar to , we fine-tuned our backtranslated model on a concatenation of previous WMT testsets (newstest{2013,2015 WMT testsets (newstest{2013, ,2016 and the clean in-domain news-commentary corpus. In total, we fine-tuned the model on 330k sentences. We kept all model parameters the same (batch size, learning rate) and continued training on the finetuned data for one epoch. The BLEU scores on the standard references suggest a small improvement of 0.3 BLEU on the joint test set. Interestingly, the improvement is visible on the orig-en half by 0.7 points while the BLEU scores on orig-de actually drop by 1.7 points. Nevertheless, BLEUP does improve by 0.5 points, suggesting that fine-tuning is especially helpful when measuring scores with paraphrased references. Despite the small gain on standard references, we include fine-tuning in both our optimized systems.",
"cite_spans": [
{
"start": 83,
"end": 115,
"text": "WMT testsets (newstest{2013,2015",
"ref_id": null
},
{
"start": 116,
"end": 150,
"text": "WMT testsets (newstest{2013, ,2016",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Fine-Tuning",
"sec_num": "4.2.3"
},
{
"text": "Combining different predictions is a standard approach in MT to boost BLEU scores. We run ensemble decoding with 4 previously built models. In addition to using the 3 models described in Section 4.2.1, 4.2.2, and 4.2.3, we build a second finetuned model with the same approach, but different initialization.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.2.4"
},
{
"text": "Although ensemble decoding improves the performance on our standard references by up to 1.9 BLEU points, the quality is rated as lower by 0.3 BLEU points on the paraphrased references. We suspect that using an ensemble for decoding favors common, average language by promoting target spans where all systems agree. Paraphrase translations actually downweight the importance of this language, which seems important for agreeing with human judgments . This promotion of average language and monotonic translation may explain the effectiveness of ensembling newstest2018 (dev) newstest2019 (test) joint orig-de orig-en orig-en.p joint orig-de orig-en orig-en.p 1 only for standard reference BLEU. Similar to the WMT 2019 winning submission, we include the ensemble approach in our system that is optimized on the joint BLEU scores. However, we do not include it in our system optimized on BLEUP.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Ensemble",
"sec_num": "4.2.4"
},
{
"text": "Finally, we extend the noisy-channel approach which consists of re-ranking the top-50 beam search output of either the ensemble model (when tuned for BLEU) or the fine-tuned model (when tuned for BLEUP). Instead of using 4 features-forward probability, backward probability, language model and word penalty-we use 11 forward probabilities, 10 backward probabilities and 2 language model scores. Different to , we did not pick the re-ranking weights through random search, but used MERT (Och, 2003) for efficient tuning. The 11 different forward translation scores come from different English\u2192German NMT models that are replicas of the previous described models (Section 4.2.1, 4.2.2, and 4.2. 3). The 10 backward translation scores come from the same approaches, but trained in the reverse direction. These 21 NMT model scores are combined with 2 language model (LM) scores. The first LM is trained on the German monolingual NewsCrawl data, while the second LM is trained on forward-translated English NewsCrawl data. The first LM should assign high scores to genuine German text, while the second LM should assign high scores to translationese German originating from English.",
"cite_spans": [
{
"start": 486,
"end": 497,
"text": "(Och, 2003)",
"ref_id": "BIBREF28"
}
],
"ref_spans": [
{
"start": 661,
"end": 692,
"text": "(Section 4.2.1, 4.2.2, and 4.2.",
"ref_id": null
}
],
"eq_spans": [],
"section": "Reranking",
"sec_num": "4.3"
},
{
"text": "We first reranked the 50-best list generated by the ensemble model with MERT on newstest2018. As with the original WMT 2019 submission, the BLEU scores on the joint and orig-en sets increase. This reranked output corresponds to our opt-on-BLEU model. Next, we reranked the 50-best list generated by the fine-tuned model with MERT on newstest2018.orig-en with paraphrased references. This led to further small increases in BLEUP and corresponds to our opt-on-BLEUP model. In summary, optimizing on BLEUP leads us to keep back-translation, even though evaluation with standard English-original references would have us drop it, and also leads us to drop the ensembling step. Rescoring with MERT weights learned against BLEU or BLEUP further separates the systems according to these metrics.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Reranking",
"sec_num": "4.3"
},
{
"text": "This section confirms the results from the previous section with additional references for newstest2019 and illustrates the behaviour of our systems on individual sentences.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5"
},
{
"text": "Freitag et al. (2020) released an additional standard reference translation (AR) and two 'paraphrase as-much-as-possible' reference translations for newstest2019 (WMT.p and AR.p). We used WMT.p in all our experiments above; here we report BLEU scores for all four available reference translations in Table 3. The BLEU improvements on the two standard reference translations agree perfectly, and the BLEUP improvements on the two paraphrased references likewise coincide. This indicates that by optimizing on BLEU or BLEUP we have not overfit to a specific set of reference translations or their paraphrases, but have instead molded our model to better match a style of reference translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Alternative Reference Translations",
"sec_num": "5.1"
},
{
"text": "This section presents translation examples from our two differently optimized systems in Table 4. The first 3 examples show sentences where opt-on-BLEUP has higher translation quality than opt-on-BLEU. It has previously been observed that BLEU scores calculated on standard references prefer monotonic translations. This is visible in our first example, where opt-on-BLEU incorrectly translates the saying Tomorrow's a different beast into Morgen ist ein anderes Biest, using an inappropriately monotonic strategy. The opt-on-BLEUP system, on the other hand, captures the meaning of the source sentence and generates a valid translation.",
"cite_spans": [],
"ref_spans": [
{
"start": 89,
"end": 96,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Translation Examples",
"sec_num": "5.2"
},
{
"text": "Another drawback of standard reference BLEU is its preference for literal translation. This is visible in our second example, where the word cap is translated into Kappe and tip into kippen. Both are valid word-by-word translations, but make little sense in this context. The third example again shows the monotonic translation style of a conventionally tuned system: the opt-on-BLEU output is an incorrect word-by-word translation, while the opt-on-BLEUP system introduces a natural German sentence structure and generates a flawless translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Examples",
"sec_num": "5.2"
},
{
"text": "The last example is a loss for the paraphrase-tuned system and demonstrates that a more literal translation can sometimes be better. Although the word run can be translated into Ansturm, that rendering is not appropriate in this context, and the simpler translation Lauf is correct.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Translation Examples",
"sec_num": "5.2"
},
{
"text": "The BLEU scores calculated on the two different reference sets yield different conclusions: BLEU on standard references rates opt-on-BLEU higher by more than 5 points, whereas BLEUP gives a higher score to opt-on-BLEUP. In this section, we look at the n-grams that contributed most to these divergent outcomes. Those that contribute most to the difference in BLEU across the two systems are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "\u2022 Er sagte, dass (He said that)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "\u2022 , sagte er der (, he said the)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "\u2022 stellte fest, dass (noted that)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "These are all generic, high-frequency n-grams. They are crucial for attaining high BLEU scores, and tend to appear in translations that employ the same structure as the source sentence. In contrast, the n-grams that contribute most to the difference in BLEUP are:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "\u2022 Menschen ums Leben kamen (people lost their lives)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "\u2022 Grossbritanien keine Steuern zahlen (pay no taxes in Great Britain)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "\u2022 von BBC Scottland (from BBC Scotland)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "These are much less frequent sequences with more semantic content.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Matched n-grams",
"sec_num": "5.3"
},
{
"text": "Prior work has shown that BLEU measured on paraphrased references (BLEUP) correlates better with human evaluation than BLEU measured on regular references when comparing existing systems (Freitag et al., 2019). Motivated by this finding, we collected a development set of paraphrased references and assessed BLEUP for system development. This allowed us to evaluate whether the design choices of a modern neural MT system impact BLEU and BLEUP differently, including tuning a re-ranking noisy-channel model to these metrics. Our experiments followed the setup of the winning newstest19 English\u2192German entry at WMT19. Regarding design choices, we observe that BLEUP emphasizes the importance of back-translation even when test sets are source-original. Conversely, BLEUP de-emphasizes the importance of ensembles, as the reliable prediction of common language by ensembles is less rewarded by this metric.",
"cite_spans": [
{
"start": 206,
"end": 228,
"text": "(Freitag et al., 2019)",
"ref_id": "BIBREF9"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Our tuning experiments led to positive results. In human evaluation, the system tuned on BLEUP showed significant improvements in adequacy and even greater gains in fluency compared to the system tuned on BLEU. Example translations indicate that the model tuned on BLEUP produces noticeably less literal translations. Our experiments also highlight a disconnect between regular BLEU and human evaluation: the system tuned on BLEUP degrades standard BLEU scores by over 5 points, while faring significantly better in human evaluation. Paraphrased automatic evaluation therefore seems to be a promising proxy\nTable 3: BLEU scores for English\u2192German newstest2019 on the additional reference sets.\nnewstest2019: WMT (orig-en) | AR (orig-en) | WMT.p (orig-en.p) | AR.p (orig-en.p)\n(1) bitext: 40.9 | 32.2 | 12.1 | 12.0\n(2) + CDS: 42.3 | 34.2 | 12.6 | 12.3\n(3) + BT: 39.4 | 33.6 | 13.1 | 13.0\n(4) + fine-tuning: 41.1 | 35.5 | 13.6 | 13.4\n(5) + ensemble of 4: 43.6 | 36.0 | 13.3 | 13.0\n+ reranking of (5) (opt-on-BLEU): 45.0 | 36.7 | 13.4 | 13.1\n+ reranking of (4) (opt-on-BLEUP): 39.8 | 34.4 | 13.7 | 13.5\nTable 4: Example output for English\u2192German for systems optimized on standard BLEU or BLEUP. Translations for opt-on-BLEU tend to be more literal and adhere closely to the source sentence structure.\nsource: Tomorrow's a different beast. | opt on BLEU: Morgen ist ein anderes Biest. | opt on BLEUP: Morgen ist alles anders.\nsource: You have to tip your cap. | opt on BLEU: Sie m\u00fcssen Ihre Kappe kippen. | opt on BLEUP: Man muss den Hut ziehen.\nsource: He averaged 5.6 points and 2.6 rebounds a game last season. | opt on BLEU: Er durchschnittlich 5,6 Punkte und 2,6 Rebounds ein Spiel in der vergangenen Saison. | opt on BLEUP: In der vergangenen Saison erzielte er im Schnitt 5,6 Punkte und 2,6 Rebounds pro Spiel.\nsource: Thirty-two percent supported such a run. | opt on BLEU: 32 Prozent unterst\u00fctzten einen solchen Lauf. | opt on BLEUP: 32 Prozent sprachen sich f\u00fcr einen solchen Ansturm aus.",
"cite_spans": [],
"ref_spans": [
{
"start": 975,
"end": 982,
"text": "Table 3",
"ref_id": null
},
{
"start": 1735,
"end": 1742,
"text": "Table 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "for human evaluation when making design choices for MT systems.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "This research raises the question of whether these results can be confirmed across a wider range of language pairs. We also hope to achieve further improvements by refining the paraphrased evaluation protocol.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "BLEU+case.mixed+lang.ende+numrefs.1+smooth.exp+SET+tok.13a+version.1.4.12, where SET \u2208 {wmt18, wmt19, wmt19/google/ar, wmt19/google/arp, wmt19/google/wmtp}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "Our replication achieves 45.0 BLEU on newstest19, compared to 42.7 BLEU for the reference system.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "3rd International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In 3rd Inter- national Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Corpus Linguistics and Translation Studies: Implications and Applications. Text and technology: in honour of John Sinclair",
"authors": [
{
"first": "Mona",
"middle": [],
"last": "Baker",
"suffix": ""
}
],
"year": 1993,
"venue": "",
"volume": "",
"issue": "",
"pages": "233--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mona Baker. 1993. Corpus Linguistics and Transla- tion Studies: Implications and Applications. Text and technology: in honour of John Sinclair, pages 233-252.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "METEOR: An Automatic Metric for MT Evaluation with Improved Correlation with Human Judgments",
"authors": [
{
"first": "Satanjeev",
"middle": [],
"last": "Banerjee",
"suffix": ""
},
{
"first": "Alon",
"middle": [],
"last": "Lavie",
"suffix": ""
}
],
"year": 2005,
"venue": "Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "65--72",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Satanjeev Banerjee and Alon Lavie. 2005. METEOR: An Automatic Metric for MT Evaluation with Im- proved Correlation with Human Judgments. In Pro- ceedings of the ACL workshop on intrinsic and ex- trinsic evaluation measures for machine translation and/or summarization, pages 65-72.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2019 Conference on Machine Translation (WMT19)",
"authors": [
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Marta",
"middle": [
"R"
],
"last": "Costa-Juss\u00e0",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Shervin",
"middle": [],
"last": "Malmasi",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Mathias",
"middle": [],
"last": "M\u00fcller",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1)",
"volume": "2",
"issue": "",
"pages": "1--61",
"other_ids": {
"DOI": [
"10.18653/v1/W19-5301"
]
},
"num": null,
"urls": [],
"raw_text": "Lo\u00efc Barrault, Ond\u0159ej Bojar, Marta R. Costa-juss\u00e0, Christian Federmann, Mark Fishel, Yvette Gra- ham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias M\u00fcller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 Conference on Machine Trans- lation (WMT19). In Proceedings of the Fourth Con- ference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. As- sociation for Computational Linguistics.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Domain, Translationese and Noise in Synthetic Data for Neural Machine Translation",
"authors": [
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1911.03362"
]
},
"num": null,
"urls": [],
"raw_text": "Nikolay Bogoychev and Rico Sennrich. 2019. Do- main, Translationese and Noise in Synthetic Data for Neural Machine Translation. arXiv preprint arXiv:1911.03362.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Findings of the 2016 Conference on Machine Translation",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Rajen",
"middle": [],
"last": "Chatterjee",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Matthias",
"middle": [],
"last": "Huck",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Jimeno"
],
"last": "Yepes",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Varvara",
"middle": [],
"last": "Logacheva",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Matteo",
"middle": [],
"last": "Negri",
"suffix": ""
},
{
"first": "Aur\u00e9lie",
"middle": [],
"last": "N\u00e9v\u00e9ol",
"suffix": ""
},
{
"first": "Mariana",
"middle": [],
"last": "Neves",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Popel",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Raphael",
"middle": [],
"last": "Rubino",
"suffix": ""
},
{
"first": "Carolina",
"middle": [],
"last": "Scarton",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
},
{
"first": "Marco",
"middle": [],
"last": "Turchi",
"suffix": ""
},
{
"first": "Karin",
"middle": [],
"last": "Verspoor",
"suffix": ""
},
{
"first": "Marcos",
"middle": [],
"last": "Zampieri",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "131--198",
"other_ids": {
"DOI": [
"10.18653/v1/W16-2301"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Yvette Graham, Barry Haddow, Matthias Huck, An- tonio Jimeno Yepes, Philipp Koehn, Varvara Lo- gacheva, Christof Monz, Matteo Negri, Aur\u00e9lie N\u00e9v\u00e9ol, Mariana Neves, Martin Popel, Matt Post, Raphael Rubino, Carolina Scarton, Lucia Spe- cia, Marco Turchi, Karin Verspoor, and Marcos Zampieri. 2016. Findings of the 2016 Conference on Machine Translation. In Proceedings of the First Conference on Machine Translation: Volume 2, Shared Task Papers, pages 131-198, Berlin, Ger- many. Association for Computational Linguistics.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Findings of the 2018 Conference on Machine Translation (WMT18)",
"authors": [
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Mark",
"middle": [],
"last": "Fishel",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Shared Task Papers",
"volume": "",
"issue": "",
"pages": "272--303",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6401"
]
},
"num": null,
"urls": [],
"raw_text": "Ond\u0159ej Bojar, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Philipp Koehn, and Christof Monz. 2018. Findings of the 2018 Con- ference on Machine Translation (WMT18). In Pro- ceedings of the Third Conference on Machine Trans- lation: Shared Task Papers, pages 272-303, Bel- gium, Brussels. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Fundamentals of deep learning: Designing nextgeneration machine intelligence algorithms",
"authors": [
{
"first": "Nikhil",
"middle": [],
"last": "Buduma",
"suffix": ""
},
{
"first": "Nicholas",
"middle": [],
"last": "Locascio",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nikhil Buduma and Nicholas Locascio. 2017. Fun- damentals of deep learning: Designing next- generation machine intelligence algorithms. \"O'Reilly Media, Inc.\".",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Reference bias in monolingual machine translation evaluation",
"authors": [
{
"first": "Marina",
"middle": [],
"last": "Fomicheva",
"suffix": ""
},
{
"first": "Lucia",
"middle": [],
"last": "Specia",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "2",
"issue": "",
"pages": "77--82",
"other_ids": {
"DOI": [
"10.18653/v1/P16-2013"
]
},
"num": null,
"urls": [],
"raw_text": "Marina Fomicheva and Lucia Specia. 2016. Reference bias in monolingual machine translation evaluation. In Proceedings of the 54th Annual Meeting of the As- sociation for Computational Linguistics (Volume 2: Short Papers), pages 77-82, Berlin, Germany. Asso- ciation for Computational Linguistics.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "APE at Scale and Its Implications on MT Evaluation Biases",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
},
{
"first": "Scott",
"middle": [],
"last": "Roy",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "34--44",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, Isaac Caswell, and Scott Roy. 2019. APE at Scale and Its Implications on MT Evaluation Biases. In Proceedings of the Fourth Conference on Machine Translation, pages 34-44, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "BLEU might be Guilty but References are not Innocent",
"authors": [
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Markus Freitag, David Grangier, and Isaac Caswell. 2020. BLEU might be Guilty but References are not Innocent. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "A convolutional encoder model for neural machine translation",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "123--135",
"other_ids": {
"DOI": [
"10.18653/v1/P17-1012"
]
},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, and Yann Dauphin. 2017. A convolutional encoder model for neural machine translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 123-135, Vancouver, Canada. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Translationese in machine translation evaluation",
"authors": [
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yvette Graham, Barry Haddow, and Philipp Koehn. 2019. Translationese in machine translation evalu- ation.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Metric and reference factors in minimum error rate training. Machine Translation",
"authors": [
{
"first": "Yifan",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "24",
"issue": "",
"pages": "27--38",
"other_ids": {
"DOI": [
"https://link.springer.com/article/10.1007/s10590-010-9072-7"
]
},
"num": null,
"urls": [],
"raw_text": "Yifan He and Andy Way. 2010. Metric and reference factors in minimum error rate training. Machine Translation, 24(1):27-38.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Improving Alignment for SMT by Reordering and Augmenting the Training Corpus",
"authors": [
{
"first": "Maria",
"middle": [],
"last": "Holmqvist",
"suffix": ""
},
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
},
{
"first": "Jody",
"middle": [],
"last": "Foo",
"suffix": ""
},
{
"first": "Lars",
"middle": [],
"last": "Ahrenberg",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "120--124",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maria Holmqvist, Sara Stymne, Jody Foo, and Lars Ahrenberg. 2009. Improving Alignment for SMT by Reordering and Augmenting the Training Corpus. In Proceedings of the Fourth Workshop on Statisti- cal Machine Translation, pages 120-124. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2010. Statistical Machine Translation. Cambridge University Press.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Translationese and Its Dialects",
"authors": [
{
"first": "Moshe",
"middle": [],
"last": "Koppel",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
}
],
"year": 2011,
"venue": "Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "1318--1326",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Moshe Koppel and Noam Ordan. 2011. Translationese and Its Dialects. In Proceedings of the 49th Annual Meeting of the Association for Computational Lin- guistics: Human Language Technologies -Volume 1, pages 1318-1326.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Automatic detection of translated text and its impact on machine translation",
"authors": [
{
"first": "David",
"middle": [],
"last": "Kurokawa",
"suffix": ""
},
{
"first": "Cyril",
"middle": [],
"last": "Goutte",
"suffix": ""
},
{
"first": "Pierre",
"middle": [],
"last": "Isabelle",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of MT-Summit XII",
"volume": "",
"issue": "",
"pages": "81--88",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "David Kurokawa, Cyril Goutte, and Pierre Isabelle. 2009. Automatic detection of translated text and its impact on machine translation. In Proceedings of MT-Summit XII, pages 81-88.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "A set of recommendations for assessing human-machine parity in language translation",
"authors": [
{
"first": "Samuel",
"middle": [],
"last": "L\u00e4ubli",
"suffix": ""
},
{
"first": "Sheila",
"middle": [],
"last": "Castilho",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Qinlan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
}
],
"year": 2020,
"venue": "Journal of Artificial Intelligence Research",
"volume": "67",
"issue": "",
"pages": "653--672",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Samuel L\u00e4ubli, Sheila Castilho, Graham Neubig, Rico Sennrich, Qinlan Shen, and Antonio Toral. 2020. A set of recommendations for assessing human- machine parity in language translation. Journal of Artificial Intelligence Research, 67:653-672.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Adapting Translation Models to Translationese Improves SMT",
"authors": [
{
"first": "Gennadi",
"middle": [],
"last": "Lembersky",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 13th Conference of the European Chapter of the Association for Computational Linguistics, EACL '12",
"volume": "",
"issue": "",
"pages": "255--265",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gennadi Lembersky, Noam Ordan, and Shuly Wint- ner. 2012a. Adapting Translation Models to Transla- tionese Improves SMT. In Proceedings of the 13th Conference of the European Chapter of the Asso- ciation for Computational Linguistics, EACL '12, pages 255-265, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Language models for machine translation: Original vs. translated texts",
"authors": [
{
"first": "Gennadi",
"middle": [],
"last": "Lembersky",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Ordan",
"suffix": ""
},
{
"first": "Shuly",
"middle": [],
"last": "Wintner",
"suffix": ""
}
],
"year": 2012,
"venue": "",
"volume": "38",
"issue": "",
"pages": "799--825",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gennadi Lembersky, Noam Ordan, and Shuly Wintner. 2012b. Language models for machine translation: Original vs. translated texts. Computational Linguis- tics, 38(4):799-825.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "YiSi - a unified semantic MT quality evaluation and estimation metric for languages with different levels of available resources",
"authors": [
{
"first": "Chi-Kiu",
"middle": [],
"last": "Lo",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "507--513",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Chi-kiu Lo. 2019. Yisi-a unified semantic mt quality evaluation and estimation metric for languages with different levels of available resources. In Proceed- ings of the Fourth Conference on Machine Transla- tion (Volume 2: Shared Task Papers, Day 1), pages 507-513.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Further investigation into reference bias in monolingual evaluation of machine translation",
"authors": [
{
"first": "Qingsong",
"middle": [],
"last": "Ma",
"suffix": ""
},
{
"first": "Yvette",
"middle": [],
"last": "Graham",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
},
{
"first": "Qun",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "2476--2485",
"other_ids": {
"DOI": [
"10.18653/v1/D17-1262"
]
},
"num": null,
"urls": [],
"raw_text": "Qingsong Ma, Yvette Graham, Timothy Baldwin, and Qun Liu. 2017. Further investigation into reference bias in monolingual evaluation of machine transla- tion. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2476-2485, Copenhagen, Denmark. Associa- tion for Computational Linguistics.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Using paraphrases for parameter tuning in statistical machine translation",
"authors": [
{
"first": "Nitin",
"middle": [],
"last": "Madnani",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Necip Fazil Ayan",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [
"J"
],
"last": "Resnik",
"suffix": ""
},
{
"first": "",
"middle": [],
"last": "Dorr",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the Second Workshop on Statistical Machine Translation",
"volume": "",
"issue": "",
"pages": "120--127",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitin Madnani, Necip Fazil Ayan, Philip Resnik, and Bonnie J Dorr. 2007. Using paraphrases for param- eter tuning in statistical machine translation. In Pro- ceedings of the Second Workshop on Statistical Ma- chine Translation, pages 120-127. Association for Computational Linguistics.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Paraphrasing revisited with neural machine translation",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Mallinson",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Mirella",
"middle": [],
"last": "Lapata",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Mallinson, Rico Sennrich, and Mirella Lapata. 2017. Paraphrasing revisited with neural machine translation. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 1, Long Papers, Valencia, Spain.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Improved statistical machine translation using monolingually-derived paraphrases",
"authors": [
{
"first": "Yuval",
"middle": [],
"last": "Marton",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Resnik",
"suffix": ""
}
],
"year": 2009,
"venue": "Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing",
"volume": "1",
"issue": "",
"pages": "381--390",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yuval Marton, Chris Callison-Burch, and Philip Resnik. 2009. Improved statistical machine translation using monolingually-derived paraphrases. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing: Volume 1, EMNLP '09, pages 381-390, Stroudsburg, PA, USA. Association for Computational Linguistics.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Translation universals: Do they exist?",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Mauranen",
"suffix": ""
},
{
"first": "Pekka",
"middle": [],
"last": "Kujam\u00e4ki",
"suffix": ""
}
],
"year": 2004,
"venue": "",
"volume": "48",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Mauranen and Pekka Kujam\u00e4ki. 2004. Translation universals: Do they exist?, volume 48. John Benjamins Publishing.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Facebook FAIR's WMT19 news translation task submission",
"authors": [
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Kyra",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the Fourth Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "314--319",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nathan Ng, Kyra Yee, Alexei Baevski, Myle Ott, Michael Auli, and Sergey Edunov. 2019. Facebook FAIR's WMT19 news translation task submission. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 314-319, Florence, Italy. Association for Computational Linguistics.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Minimum error rate training in statistical machine translation",
"authors": [
{
"first": "Franz Josef",
"middle": [],
"last": "Och",
"suffix": ""
}
],
"year": 2003,
"venue": "Proceedings of the 41st annual meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "160--167",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Franz Josef Och. 2003. Minimum error rate training in statistical machine translation. In Proceedings of the 41st annual meeting of the Association for Computational Linguistics, pages 160-167.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "BLEU: a Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th annual meeting on association for computational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A Call for Clarity in Reporting BLEU Scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1804.08771"
]
},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A Call for Clarity in Reporting BLEU Scores. arXiv preprint arXiv:1804.08771.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "On some pitfalls in automatic evaluation and significance testing for MT",
"authors": [
{
"first": "Stefan",
"middle": [],
"last": "Riezler",
"suffix": ""
},
{
"first": "John",
"middle": [
"T"
],
"last": "Maxwell",
"suffix": "III"
}
],
"year": 2005,
"venue": "Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization",
"volume": "",
"issue": "",
"pages": "57--64",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Stefan Riezler and John T Maxwell III. 2005. On some pitfalls in automatic evaluation and significance testing for MT. In Proceedings of the ACL workshop on intrinsic and extrinsic evaluation measures for machine translation and/or summarization, pages 57-64.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Translationese as a language in \"multilingual\" NMT",
"authors": [
{
"first": "Parker",
"middle": [],
"last": "Riley",
"suffix": ""
},
{
"first": "Isaac",
"middle": [],
"last": "Caswell",
"suffix": ""
},
{
"first": "Markus",
"middle": [],
"last": "Freitag",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2020,
"venue": "Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "7737--7746",
"other_ids": {
"DOI": [
"10.18653/v1/2020.acl-main.691"
]
},
"num": null,
"urls": [],
"raw_text": "Parker Riley, Isaac Caswell, Markus Freitag, and David Grangier. 2020. Translationese as a language in \"multilingual\" NMT. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7737-7746, Online. Association for Computational Linguistics.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Unsupervised Paraphrasing without Translation",
"authors": [
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
}
],
"year": 2019,
"venue": "Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "6033--6039",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Aurko Roy and David Grangier. 2019. Unsupervised Paraphrasing without Translation. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 6033-6039. Association for Computational Linguistics.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Interlanguage",
"authors": [
{
"first": "Larry",
"middle": [],
"last": "Selinker",
"suffix": ""
}
],
"year": 1972,
"venue": "International Review of Applied Linguistics",
"volume": "",
"issue": "",
"pages": "209--241",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Larry Selinker. 1972. Interlanguage. International Review of Applied Linguistics, pages 209-241.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Optimising multiple metrics with MERT",
"authors": [
{
"first": "Christophe",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
}
],
"year": 2011,
"venue": "The Prague Bulletin of Mathematical Linguistics",
"volume": "96",
"issue": "",
"pages": "109--117",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christophe Servan and Holger Schwenk. 2011. Optimising multiple metrics with MERT. The Prague Bulletin of Mathematical Linguistics, 96(1):109-117.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling",
"authors": [
{
"first": "Jonathan",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Patrick",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Mia",
"middle": [
"X"
],
"last": "Chen",
"suffix": ""
},
{
"first": "Ye",
"middle": [],
"last": "Jia",
"suffix": ""
},
{
"first": "Anjuli",
"middle": [],
"last": "Kannan",
"suffix": ""
},
{
"first": "Tara",
"middle": [
"N"
],
"last": "Sainath",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonathan Shen, Patrick Nguyen, Yonghui Wu, Zhifeng Chen, Mia X. Chen, Ye Jia, Anjuli Kannan, Tara N. Sainath, and Yuan Cao et al. 2019. Lingvo: a Modular and Scalable Framework for Sequence-to-Sequence Modeling. CoRR, abs/1902.08295.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "A study of translation edit rate with targeted human annotation",
"authors": [
{
"first": "Matthew",
"middle": [],
"last": "Snover",
"suffix": ""
},
{
"first": "Bonnie",
"middle": [],
"last": "Dorr",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Schwartz",
"suffix": ""
},
{
"first": "Linnea",
"middle": [],
"last": "Micciulla",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Makhoul",
"suffix": ""
}
],
"year": 2006,
"venue": "Proceedings of association for machine translation in the Americas",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matthew Snover, Bonnie Dorr, Richard Schwartz, Linnea Micciulla, and John Makhoul. 2006. A study of translation edit rate with targeted human annotation. In Proceedings of association for machine translation in the Americas. Cambridge, MA.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "The Effect of Translationese on Tuning for Statistical Machine Translation",
"authors": [
{
"first": "Sara",
"middle": [],
"last": "Stymne",
"suffix": ""
}
],
"year": 2017,
"venue": "The 21st Nordic Conference on Computational Linguistics",
"volume": "",
"issue": "",
"pages": "241--246",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sara Stymne. 2017. The Effect of Translationese on Tuning for Statistical Machine Translation. In The 21st Nordic Conference on Computational Linguistics, pages 241-246.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Reassessing Claims of Human Parity and Super-Human Performance in Machine Translation at WMT 2019",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
}
],
"year": 2020,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:2005.05738"
]
},
"num": null,
"urls": [],
"raw_text": "Antonio Toral. 2020. Reassessing Claims of Human Parity and Super-Human Performance in Machine Translation at WMT 2019. arXiv preprint arXiv:2005.05738.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
},
{
"first": "Sheila",
"middle": [],
"last": "Castilho",
"suffix": ""
},
{
"first": "Ke",
"middle": [],
"last": "Hu",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "113--123",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6312"
]
},
"num": null,
"urls": [],
"raw_text": "Antonio Toral, Sheila Castilho, Ke Hu, and Andy Way. 2018. Attaining the Unattainable? Reassessing Claims of Human Parity in Neural Machine Translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 113-123, Belgium, Brussels. Association for Computational Linguistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Descriptive Translation Studies and Beyond. Benjamins translation library",
"authors": [
{
"first": "Gideon",
"middle": [],
"last": "Toury",
"suffix": ""
}
],
"year": 1995,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gideon Toury. 1995. Descriptive Translation Studies and Beyond. Benjamins translation library. John Benjamins Publishing Company.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Attention Is All You Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention Is All You Need. In Advances in Neural Information Processing Systems, pages 5998-6008.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Denoising neural machine translation training with trusted data and online data selection",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Taro",
"middle": [],
"last": "Watanabe",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Tetsuji",
"middle": [],
"last": "Nakagawa",
"suffix": ""
},
{
"first": "Ciprian",
"middle": [],
"last": "Chelba",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "133--143",
"other_ids": {
"DOI": [
"10.18653/v1/W18-6314"
]
},
"num": null,
"urls": [],
"raw_text": "Wei Wang, Taro Watanabe, Macduff Hughes, Tetsuji Nakagawa, and Ciprian Chelba. 2018. Denoising neural machine translation training with trusted data and online data selection. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 133-143, Belgium, Brussels. Association for Computational Linguistics.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Simple and effective noisy channel modeling for neural machine translation",
"authors": [
{
"first": "Kyra",
"middle": [],
"last": "Yee",
"suffix": ""
},
{
"first": "Nathan",
"middle": [],
"last": "Ng",
"suffix": ""
},
{
"first": "Yann",
"middle": [
"N"
],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1908.05731"
]
},
"num": null,
"urls": [],
"raw_text": "Kyra Yee, Nathan Ng, Yann N Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. arXiv preprint arXiv:1908.05731.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "The effect of translationese in machine translation test sets",
"authors": [
{
"first": "Mike",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Antonio",
"middle": [],
"last": "Toral",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mike Zhang and Antonio Toral. 2019. The effect of translationese in machine translation test sets. CoRR, abs/1906.08069.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "BERTScore: Evaluating text generation with BERT",
"authors": [
{
"first": "Tianyi",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Varsha",
"middle": [],
"last": "Kishore",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Kilian",
"middle": [
"Q"
],
"last": "Weinberger",
"suffix": ""
},
{
"first": "Yoav",
"middle": [],
"last": "Artzi",
"suffix": ""
}
],
"year": 2019,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. Arxiv, 1904.09675.",
"links": null
}
},
"ref_entries": {
"TABREF0": {
"type_str": "table",
"text": ", both the BLEU and BLEUP differences are significant at p<5e-18.",
"num": null,
"content": "<table><tr><td/><td colspan=\"2\">opt-on-BLEU opt-on-BLEUP</td></tr><tr><td>BLEU</td><td>45.0</td><td>39.8</td></tr><tr><td>BLEUP</td><td>13.4</td><td>13.7</td></tr><tr><td>human quality</td><td>4.27</td><td>4.72</td></tr><tr><td>human fluency</td><td>27.2%</td><td>63.8%</td></tr></table>",
"html": null
},
"TABREF1": {
"type_str": "table",
"text": "",
"num": null,
"content": "<table><tr><td>: BLEU scores and human ratings for</td></tr><tr><td>WMT newstest2019 English\u2192German (original En-</td></tr><tr><td>glish sources)</td></tr></table>",
"html": null
},
"TABREF3": {
"type_str": "table",
"text": "BLEU scores for WMT 2019 English\u2192German. The joint sets combine orig-en and orig-de subsets. The orig-en.p sets use paraphrased references instead of standard references. Our experiments compared new-stest2018.joint and newstest2018.orig-en.p for system tuning. The standard newstest2018 and newstest2019 sets are newstest2018.joint and newstest2019.orig-en, respectively.",
"num": null,
"content": "<table/>",
"html": null
}
}
}
}