{
"paper_id": "P19-1021",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T08:28:06.554524Z"
},
"title": "Revisiting Low-Resource Neural Machine Translation: A Case Study",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "rico.sennrich@ed.ac.uk"
},
{
"first": "Biao",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "",
"institution": "University of Edinburgh",
"location": {}
},
"email": "b.zhang@ed.ac.uk"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we reassess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German-English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean-English dataset, surpassing previously reported results by 4 BLEU.",
"pdf_parse": {
"paper_id": "P19-1021",
"_pdf_hash": "",
"abstract": [
{
"text": "It has been shown that the performance of neural machine translation (NMT) drops starkly in low-resource conditions, underperforming phrase-based statistical machine translation (PBSMT) and requiring large amounts of auxiliary data to achieve competitive results. In this paper, we reassess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. We discuss some pitfalls to be aware of when training low-resource NMT systems, and recent techniques that have shown to be especially helpful in low-resource settings, resulting in a set of best practices for low-resource NMT. In our experiments on German-English with different amounts of IWSLT14 training data, we show that, without the use of any auxiliary monolingual or multilingual data, an optimized NMT system can outperform PBSMT with far less data than previously claimed. We also apply these techniques to a low-resource Korean-English dataset, surpassing previously reported results by 4 BLEU.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "While neural machine translation (NMT) has achieved impressive performance in high-resource data conditions, becoming dominant in the field Bahdanau et al., 2015; Vaswani et al., 2017) , recent research has argued that these models are highly data-inefficient, and underperform phrase-based statistical machine translation (PBSMT) or unsupervised methods in low-data conditions (Koehn and Knowles, 2017; Lample et al., 2018b) . In this paper, we re-assess the validity of these results, arguing that they are the result of lack of system adaptation to low-resource settings. Our main contributions are as follows:",
"cite_spans": [
{
"start": 140,
"end": 162,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF3"
},
{
"start": 163,
"end": 184,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF49"
},
{
"start": 378,
"end": 403,
"text": "(Koehn and Knowles, 2017;",
"ref_id": "BIBREF24"
},
{
"start": 404,
"end": 425,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we explore best practices for low-resource NMT, evaluating their importance with ablation studies.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Figure 1: quality of PBSMT and NMT in low-resource conditions according to (Koehn and Knowles, 2017).",
"cite_spans": [
{
"start": 75,
"end": 100,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "\u2022 we reproduce a comparison of NMT and PB-SMT in different data conditions, showing that when following our best practices, NMT outperforms PBSMT with as little as 100 000 words of parallel training data. Figure 1 reproduces a plot by Koehn and Knowles (2017) which shows that their NMT system only outperforms their PBSMT system when more than 100 million words (approx. 5 million sentences) of parallel training data are available. Results shown by Lample et al. (2018b) are similar, showing that unsupervised NMT outperforms supervised systems if few parallel resources are available. In both papers, NMT systems are trained with hyperparameters that are typical for high-resource set-tings, and the authors did not tune hyperparameters, or change network architectures, to optimize NMT for low-resource conditions.",
"cite_spans": [
{
"start": 235,
"end": 259,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF24"
},
{
"start": 451,
"end": 472,
"text": "Lample et al. (2018b)",
"ref_id": "BIBREF26"
}
],
"ref_spans": [
{
"start": 205,
"end": 213,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "31",
"sec_num": null
},
{
"text": "The bulk of research on low-resource NMT has focused on exploiting monolingual data, or parallel data involving other language pairs. Methods to improve NMT with monolingual data range from the integration of a separately trained language model (G\u00fcl\u00e7ehre et al., 2015) to the training of parts of the NMT model with additional objectives, including a language modelling objective (G\u00fcl\u00e7ehre et al., 2015; Sennrich et al., 2016b; Ramachandran et al., 2017) , an autoencoding objective (Luong et al., 2016; Currey et al., 2017) , or a round-trip objective, where the model is trained to predict monolingual (target-side) training data that has been back-translated into the source language (Sennrich et al., 2016b; Cheng et al., 2016) . As an extreme case, models that rely exclusively on monolingual data have been shown to work (Artetxe et al., 2018b; Lample et al., 2018a; Artetxe et al., 2018a; Lample et al., 2018b) . Similarly, parallel data from other language pairs can be used to pre-train the network or jointly learn representations (Zoph et al., 2016; Chen et al., 2017; Nguyen and Chiang, 2017; Neubig and Hu, 2018; Gu et al., 2018a,b; Kocmi and Bojar, 2018) .",
"cite_spans": [
{
"start": 245,
"end": 268,
"text": "(G\u00fcl\u00e7ehre et al., 2015)",
"ref_id": "BIBREF16"
},
{
"start": 380,
"end": 403,
"text": "(G\u00fcl\u00e7ehre et al., 2015;",
"ref_id": "BIBREF16"
},
{
"start": 404,
"end": 427,
"text": "Sennrich et al., 2016b;",
"ref_id": "BIBREF43"
},
{
"start": 428,
"end": 454,
"text": "Ramachandran et al., 2017)",
"ref_id": "BIBREF38"
},
{
"start": 483,
"end": 503,
"text": "(Luong et al., 2016;",
"ref_id": "BIBREF27"
},
{
"start": 504,
"end": 524,
"text": "Currey et al., 2017)",
"ref_id": "BIBREF9"
},
{
"start": 687,
"end": 711,
"text": "(Sennrich et al., 2016b;",
"ref_id": "BIBREF43"
},
{
"start": 712,
"end": 731,
"text": "Cheng et al., 2016)",
"ref_id": "BIBREF6"
},
{
"start": 827,
"end": 850,
"text": "(Artetxe et al., 2018b;",
"ref_id": "BIBREF1"
},
{
"start": 851,
"end": 872,
"text": "Lample et al., 2018a;",
"ref_id": "BIBREF25"
},
{
"start": 873,
"end": 895,
"text": "Artetxe et al., 2018a;",
"ref_id": "BIBREF0"
},
{
"start": 896,
"end": 917,
"text": "Lample et al., 2018b)",
"ref_id": "BIBREF26"
},
{
"start": 1041,
"end": 1060,
"text": "(Zoph et al., 2016;",
"ref_id": "BIBREF52"
},
{
"start": 1061,
"end": 1079,
"text": "Chen et al., 2017;",
"ref_id": "BIBREF5"
},
{
"start": 1080,
"end": 1104,
"text": "Nguyen and Chiang, 2017;",
"ref_id": "BIBREF33"
},
{
"start": 1105,
"end": 1125,
"text": "Neubig and Hu, 2018;",
"ref_id": "BIBREF31"
},
{
"start": 1126,
"end": 1145,
"text": "Gu et al., 2018a,b;",
"ref_id": null
},
{
"start": 1146,
"end": 1168,
"text": "Kocmi and Bojar, 2018)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Low-Resource Neural Machine Translation",
"sec_num": "2.2"
},
{
"text": "While semi-supervised and unsupervised approaches have been shown to be very effective for some language pairs, their effectiveness depends on the availability of large amounts of suitable auxiliary data, and other conditions being met. For example, the effectiveness of unsupervised methods is impaired when languages are morphologically different, or when training domains do not match (S\u00f8gaard et al., 2018) More broadly, this line of research still accepts the premise that NMT models are data-inefficient and require large amounts of auxiliary data to train. In this work, we want to re-visit this point, and will focus on techniques to make more efficient use of small amounts of parallel training data. Low-resource NMT without auxiliary data has received less attention; work in this direction includes (\u00d6stling and Tiedemann, 2017; Nguyen and Chiang, 2018).",
"cite_spans": [
{
"start": 388,
"end": 410,
"text": "(S\u00f8gaard et al., 2018)",
"ref_id": "BIBREF45"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Low-Resource Neural Machine Translation",
"sec_num": "2.2"
},
{
"text": "3 Methods for Low-Resource Neural Machine Translation",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Improving Low-Resource Neural Machine Translation",
"sec_num": "2.2"
},
{
"text": "We consider the hyperparameters used by Koehn and Knowles (2017) to be our baseline. This baseline does not make use of various advances in NMT architectures and training tricks. In contrast to the baseline, we use a BiDeep RNN architecture (Miceli Barone et al., 2017), label smoothing (Szegedy et al., 2016) , dropout (Srivastava et al., 2014) , word dropout (Sennrich et al., 2016a) , layer normalization (Ba et al., 2016) and tied embeddings (Press and Wolf, 2017) .",
"cite_spans": [
{
"start": 40,
"end": 64,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF24"
},
{
"start": 287,
"end": 309,
"text": "(Szegedy et al., 2016)",
"ref_id": "BIBREF48"
},
{
"start": 320,
"end": 345,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF46"
},
{
"start": 361,
"end": 385,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF42"
},
{
"start": 446,
"end": 468,
"text": "(Press and Wolf, 2017)",
"ref_id": "BIBREF37"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Mainstream Improvements",
"sec_num": "3.1"
},
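{
"text": "To make one of these mainstream improvements concrete, the following sketch shows label smoothing as we understand it from Szegedy et al. (2016): the one-hot target distribution is mixed with a uniform distribution over the vocabulary. This is our own illustrative NumPy code, not the authors' implementation, and the smoothing weight eps = 0.1 is a placeholder rather than a value taken from the paper.\n\nimport numpy as np\n\ndef label_smoothed_nll(log_probs, target, eps=0.1):\n    # log_probs: (vocab_size,) log-softmax scores for one time step\n    # target: index of the gold token\n    # loss = -(1 - eps) * log p(gold) - eps * mean_v log p(v)\n    nll_gold = -log_probs[target]\n    nll_uniform = -log_probs.mean()\n    return (1.0 - eps) * nll_gold + eps * nll_uniform",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Mainstream Improvements",
"sec_num": "3.1"
},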
{
"text": "Subword representations such as BPE (Sennrich et al., 2016c) have become a popular choice to achieve open-vocabulary translation. BPE has one hyperparameter, the number of merge operations, which determines the size of the final vocabulary. For high-resource settings, the effect of vocabulary size on translation quality is relatively small; Haddow et al. 2018report mixed results when comparing vocabularies of 30k and 90k subwords. In low-resource settings, large vocabularies result in low-frequency (sub)words being represented as atomic units at training time, and the ability to learn good high-dimensional representations of these is doubtful. Sennrich et al. (2017a) propose a minimum frequency threshold for subword units, and splitting any less frequent subword into smaller units or characters. We expect that such a threshold reduces the need to carefully tune the vocabulary size to the dataset, leading to more aggressive segmentation on smaller datasets. 1",
"cite_spans": [
{
"start": 36,
"end": 60,
"text": "(Sennrich et al., 2016c)",
"ref_id": "BIBREF44"
},
{
"start": 652,
"end": 675,
"text": "Sennrich et al. (2017a)",
"ref_id": "BIBREF40"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Language Representation",
"sec_num": "3.2"
},
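{
"text": "As an illustration of this segmentation strategy, the sketch below uses the subword-nmt package; it is our own example, not the paper's scripts, and the file names, merge count, and threshold are placeholders (the threshold of 10 mirrors the value used in section 4.1). The vocabulary file is assumed to contain 'subword count' lines, as produced by subword-nmt's get-vocab utility.\n\nfrom subword_nmt.learn_bpe import learn_bpe\nfrom subword_nmt.apply_bpe import BPE, read_vocabulary\n\n# Learn BPE merge operations on the tokenized training corpus.\nwith open('train.tok', encoding='utf-8') as fin, open('bpe.codes', 'w', encoding='utf-8') as fout:\n    learn_bpe(fin, fout, num_symbols=30000)\n\n# Apply BPE, but split any subword occurring fewer than 10 times\n# in the training data into smaller units or characters.\nwith open('vocab.txt', encoding='utf-8') as vf:\n    vocab = read_vocabulary(vf, threshold=10)\nwith open('bpe.codes', encoding='utf-8') as cf:\n    bpe = BPE(cf, vocab=vocab)\nprint(bpe.process_line('a tokenized sentence goes here'))",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Language Representation",
"sec_num": "3.2"
},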
{
"text": "Due to long training times, hyperparameters are hard to optimize by grid search, and are often re-used across experiments. However, best practices differ between high-resource and lowresource settings. While the trend in high-resource settings is towards using larger and deeper models, Nguyen and Chiang (2018) use smaller and fewer layers for smaller datasets. Previous work has argued for larger batch sizes in NMT (Morishita et al., 2017; Neishi et al., 2017 ), but we find that using smaller batches is beneficial in lowresource settings. More aggressive dropout, including dropping whole words at random (Gal and Ghahramani, 2016) , is also likely to be more important. We report results on a narrow hyperparameter search guided by previous work and our own intuition.",
"cite_spans": [
{
"start": 418,
"end": 442,
"text": "(Morishita et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 443,
"end": 462,
"text": "Neishi et al., 2017",
"ref_id": "BIBREF30"
},
{
"start": 610,
"end": 636,
"text": "(Gal and Ghahramani, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Hyperparameter Tuning",
"sec_num": "3.3"
},
{
"text": "Finally, we implement and test the lexical model by Nguyen and Chiang (2018) , which has been shown to be beneficial in low-data conditions. The core idea is to train a simple feed-forward network, the lexical model, jointly with the original attentional NMT model. The input of the lexical model at time step t is the weighted average of source embeddings f (the attention weights a are shared with the main model). After a feedforward layer (with skip connection), the lexical model's output h l t is combined with the original model's hidden state h o t before softmax computation.",
"cite_spans": [
{
"start": 52,
"end": 76,
"text": "Nguyen and Chiang (2018)",
"ref_id": "BIBREF32"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Model",
"sec_num": "3.4"
},
{
"text": "f l t = tanh s a t (s)f s h l t = tanh(W f l t ) + f l t p(y t |y <t , x) = softmax(W o h o t + b o + W l h l t + b l )",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Model",
"sec_num": "3.4"
},
{
"text": "Our implementation adds dropout and layer normalization to the lexical model. 2",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Model",
"sec_num": "3.4"
},
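{
"text": "To spell the equations out, here is a minimal NumPy sketch of the lexical model's forward pass for a single time step. This is our own illustration under simplifying assumptions: random placeholder weights, and without the dropout and layer normalization mentioned above.\n\nimport numpy as np\n\ndef softmax(z):\n    e = np.exp(z - z.max())\n    return e / e.sum()\n\nd, V, S = 64, 1000, 7            # hidden size, vocab size, source length\nf = np.random.randn(S, d)        # source embeddings f_s\na = softmax(np.random.randn(S))  # attention weights a_t(s), shared with the main model\nh_o = np.random.randn(d)         # main model's hidden state h^o_t\nW = np.random.randn(d, d)\nW_o, W_l = np.random.randn(V, d), np.random.randn(V, d)\nb_o, b_l = np.zeros(V), np.zeros(V)\n\nf_l = np.tanh(a @ f)             # f^l_t = tanh(sum_s a_t(s) f_s)\nh_l = np.tanh(W @ f_l) + f_l     # feed-forward layer with skip connection\np = softmax(W_o @ h_o + b_o + W_l @ h_l + b_l)  # p(y_t | y_<t, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Lexical Model",
"sec_num": "3.4"
},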
{
"text": "We use the TED data from the IWSLT 2014 German\u2192English shared translation task (Cettolo et al., 2014) . We use the same data cleanup and train/dev split as Ranzato et al. (2016), resulting in 159 000 parallel sentences of training data, and 7584 for development.",
"cite_spans": [
{
"start": 79,
"end": 101,
"text": "(Cettolo et al., 2014)",
"ref_id": "BIBREF4"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.1"
},
{
"text": "As a second language pair, we evaluate our systems on a Korean-English dataset 3 with around 90 000 parallel sentences of training data, 1000 for development, and 2000 for testing.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.1"
},
{
"text": "For both PBSMT and NMT, we apply the same tokenization and truecasing using Moses scripts. For NMT, we also learn BPE subword segmentation with 30 000 merge operations, shared between German and English, and independently for Korean\u2192English. To simulate different amounts of training resources, we randomly subsample the IWSLT training corpus 5 times, discarding half of the data at each step. Truecaser and BPE segmentation are learned on the full training corpus; as one of our experiments, we set the frequency threshold for subword units to 10 in each subcorpus (see 3.2). Table 1 shows statistics for each subcorpus, including the subword vocabulary.",
"cite_spans": [],
"ref_spans": [
{
"start": 577,
"end": 584,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.1"
},
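{
"text": "The subsampling scheme can be sketched as follows (our own illustration; corpus loading and the Truecaser/BPE steps are omitted): each subcorpus keeps a random half of the previous one, giving the 5 progressively smaller training sets.\n\nimport random\n\ndef halving_subsets(corpus, steps=5, seed=1):\n    # corpus: list of (src, tgt) sentence pairs; returns the full corpus\n    # plus `steps` subsets, each a random half of the previous one.\n    rng = random.Random(seed)\n    subsets = [list(corpus)]\n    for _ in range(steps):\n        prev = subsets[-1]\n        subsets.append(rng.sample(prev, len(prev) // 2))\n    return subsets",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.1"
},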
{
"text": "Translation outputs are detruecased, detokenized, and compared against the reference with cased BLEU using sacreBLEU (Papineni et al., 2002; Post, 2018) . 4 Like Ranzato et al. (2016), we report BLEU on the concatenated dev sets for IWSLT 2014 (tst2010, tst2011, tst2012, dev2010, dev2012).",
"cite_spans": [
{
"start": 117,
"end": 140,
"text": "(Papineni et al., 2002;",
"ref_id": "BIBREF35"
},
{
"start": 141,
"end": 152,
"text": "Post, 2018)",
"ref_id": "BIBREF36"
},
{
"start": 155,
"end": 156,
"text": "4",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.1"
},
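{
"text": "For reference, scoring with the sacreBLEU Python API looks roughly like the sketch below; this is our own usage example with placeholder strings, not the paper's evaluation script, and both sides are assumed to be detruecased and detokenized as described above.\n\nimport sacrebleu\n\nhyps = ['the cat is on the mat']     # detokenized system outputs (placeholders)\nrefs = [['the cat sat on the mat']]  # one reference stream, aligned with hyps\nbleu = sacrebleu.corpus_bleu(hyps, refs)  # cased BLEU by default\nprint(bleu.score)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Data and Preprocessing",
"sec_num": "4.1"
},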
{
"text": "We use Moses (Koehn et al., 2007) to train a PBSMT system. We use MGIZA (Gao and Vogel, 2008) to train word alignments, and lmplz (Heafield et al., 2013 ) for a 5-gram LM. Feature weights are optimized on the dev set to maximize BLEU with batch MIRA (Cherry and Foster, 2012) -we perform multiple runs where indicated. Unlike Koehn and Knowles (2017), we do not use extra data for the LM. Both PBSMT and NMT can benefit from monolingual data, so the availability of monolingual data is no longer an exclusive advantage of PBSMT (see 2.2). ",
"cite_spans": [
{
"start": 13,
"end": 33,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF23"
},
{
"start": 72,
"end": 93,
"text": "(Gao and Vogel, 2008)",
"ref_id": "BIBREF13"
},
{
"start": 130,
"end": 152,
"text": "(Heafield et al., 2013",
"ref_id": "BIBREF19"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "PBSMT Baseline",
"sec_num": "4.2"
},
{
"text": "We train neural systems with Nematus (Sennrich et al., 2017b) . Our baseline mostly follows the settings in (Koehn and Knowles, 2017) ; we use adam (Kingma and Ba, 2015) and perform early stopping based on dev set BLEU. We express our batch size in number of tokens, and set it to 4000 in the baseline (comparable to a batch size of 80 sentences used in previous work).",
"cite_spans": [
{
"start": 37,
"end": 61,
"text": "(Sennrich et al., 2017b)",
"ref_id": "BIBREF41"
},
{
"start": 108,
"end": 133,
"text": "(Koehn and Knowles, 2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": "4.3"
},
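{
"text": "To illustrate what a batch size expressed in tokens means, a minimal bucketing sketch (our own code, not Nematus internals): sentences are grouped greedily until the token budget is reached, so a 4000-token budget with sentences of around 50 tokens yields batches of roughly 80 sentences.\n\ndef token_batches(sentences, max_tokens=4000):\n    # Yield batches whose total token count stays within max_tokens.\n    batch, batch_tokens = [], 0\n    for sent in sentences:\n        n = len(sent.split())\n        if batch and batch_tokens + n > max_tokens:\n            yield batch\n            batch, batch_tokens = [], 0\n        batch.append(sent)\n        batch_tokens += n\n    if batch:\n        yield batch",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": "4.3"
},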
{
"text": "We subsequently add the methods described in section 3, namely the bideep RNN, label smoothing, dropout, tied embeddings, layer normalization, changes to the BPE vocabulary size, batch Table 2 shows the effect of adding different methods to the baseline NMT system, on the ultra-low data condition (100k words of training data) and the full IWSLT 14 training corpus (3.2M words). Our \"mainstream improvements\" add around 6-7 BLEU in both data conditions.",
"cite_spans": [],
"ref_spans": [
{
"start": 185,
"end": 192,
"text": "Table 2",
"ref_id": "TABREF6"
}
],
"eq_spans": [],
"section": "NMT Systems",
"sec_num": "4.3"
},
{
"text": "In the ultra-low data condition, reducing the BPE vocabulary size is very effective (+4.9 BLEU). Reducing the batch size to 1000 tokens results in a BLEU gain of 0.3, and the lexical model yields an additional +0.6 BLEU. However, aggressive (word) dropout 6 (+3.4 BLEU) and tuning other hyperparameters (+0.7 BLEU) have a stronger effect than the lexical model, and adding the lexical model (9) on top of the optimized configuration (8) does not improve performance. Together, the adaptations to the ultra-low data setting yield 9.4 BLEU (7.2\u219216.6). The model trained on full IWSLT data is less sensitive to our changes (31.9\u219232.8 BLEU), and optimal hyperparameters differ depending on the data condition. Subsequently, we still apply the hyperparameters that were optimized to the ultra-low data condition (8) to other data conditions, and Korean\u2192English, for simplicity.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "5 beam search results reported by Wiseman and Rush (2016).",
"cite_spans": [
{
"start": 34,
"end": 57,
"text": "Wiseman and Rush (2016)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "6 p = 0.3 for dropping words; p = 0.5 for other dropout.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
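{
"text": "A sketch of word dropout with these values (our own illustration, applied at training time only): entire embedding vectors are zeroed with probability p = 0.3, independently per token, rather than dropping individual units.\n\nimport numpy as np\n\ndef word_dropout(emb, p=0.3, rng=None):\n    # emb: (seq_len, dim) token embeddings; zero out whole words with probability p.\n    rng = rng or np.random.default_rng()\n    keep = rng.random(emb.shape[0]) >= p\n    return emb * keep[:, None]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},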
{
"text": "Table 3 (German\u2192English): system / BLEU: MIXER (Ranzato et al., 2016) 5 21.8; BSO (Wiseman and Rush, 2016) 25.5; NPMT+LM (Huang et al., 2018) 30.1; MRT (Edunov et al., 2018) 32.84 \u00b1 0.08; Pervasive Attention (Elbayad et al., 2018) 33.8; Transformer Baseline (Wu et al., 2019) 34.4; Dynamic Convolution (Wu et al., 2019) 35.2; our PBSMT (1) 28.19 \u00b1 0.01; our NMT baseline (2) 27.16 \u00b1 0.38; our NMT best (7) 35.27 \u00b1 0.14.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 3",
"ref_id": "TABREF8"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Table 4 (Korean\u2192English): system / BLEU: (Gu et al., 2018b) (supervised Transformer) 5.97; phrase-based SMT 6.57 \u00b1 0.17; NMT baseline (2) 2.93 \u00b1 0.34; NMT optimized (8) 10.37 \u00b1 0.29.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "For a comparison with PBSMT, and across different data settings, consider Figure 2 , which shows the result of PBSMT, our NMT baseline, and our optimized NMT system. Our NMT baseline still performs worse than the PBSMT system for 3.2M words of training data, which is consistent with the results by Koehn and Knowles (2017) . However, our optimized NMT system shows strong improvements, and outperforms the PBSMT system across all data settings. Some sample translations are shown in Appendix B.",
"cite_spans": [
{
"start": 299,
"end": 323,
"text": "Koehn and Knowles (2017)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [
{
"start": 74,
"end": 82,
"text": "Figure 2",
"ref_id": "FIGREF0"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "For comparison to previous work, we report lowercased and tokenized results on the full IWSLT 14 training set in Table 3 . Our results far outperform the RNN-based results reported by Wiseman and Rush (2016) , and are on par with the best reported results on this dataset. Table 4 shows results for Korean\u2192English, using the same configurations (1, 2 and 8) as for German-English. Our results confirm that the techniques we apply are successful across datasets, and result in stronger systems than previously reported on this dataset, achieving 10.37 BLEU as compared to 5.97 BLEU reported by Gu et al. (2018b) .",
"cite_spans": [
{
"start": 184,
"end": 207,
"text": "Wiseman and Rush (2016)",
"ref_id": "BIBREF50"
},
{
"start": 593,
"end": 610,
"text": "Gu et al. (2018b)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [
{
"start": 113,
"end": 120,
"text": "Table 3",
"ref_id": "TABREF8"
},
{
"start": 273,
"end": 280,
"text": "Table 4",
"ref_id": "TABREF9"
}
],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "Our results demonstrate that NMT is in fact a suitable choice in low-data settings, and can outperform PBSMT with far less parallel training data than previously claimed. Recently, the main trend in low-resource MT research has been the better exploitation of monolingual and multilingual resources. Our results show that low-resource NMT is very sensitive to hyperparameters such as BPE vocabulary size, word dropout, and others, and by following a set of best practices, we can train competitive NMT systems without relying on auxiliary resources. This has practical relevance for languages where large amounts of monolingual data, or multilingual data involving related languages, are not available. Even though we focused on only using parallel data, our results are also relevant for work on using auxiliary data to improve low-resource MT. Supervised systems serve as an important baseline to judge the effectiveness of semi-supervised or unsupervised approaches, and the quality of supervised systems trained on little data can directly impact semi-supervised workflows, for instance for the back-translation of monolingual data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions",
"sec_num": "6"
},
{
"text": "Table 5 lists hyperparameters used for the different experiments in the ablation study (Table 2) . Hyperparameters were kept constant across different data settings, except for the validation interval and subword vocabulary size (see Table 1 ).",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 5",
"ref_id": "TABREF11"
},
{
"start": 87,
"end": 96,
"text": "(Table 2)",
"ref_id": "TABREF6"
},
{
"start": 234,
"end": 241,
"text": "Table 1",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
},
{
"text": "Table 6 shows some sample translations that represent typical errors of our PBSMT and NMT systems, trained with ultra-low (100k words) and low (3.2M words) amounts of data. For unknown words such as blutbefleckten ('bloodstained') or Spaniern ('Spaniards', 'Spanish'), PBSMT systems default to copying, while NMT systems produce translations on a subword-level, with varying success (blue-flect, bleed; spaniers, Spanians). NMT systems learn some syntactic disambiguation even with very little data, for example the translation of das and die as relative pronouns ('that', 'which', 'who'), while PBSMT produces less grammatical translation. On the flip side, the ultra low-resource NMT system ignores some unknown words in favour of a more-or-less fluent, but semantically inadequate translation: erobert ('conquered') is translated into doing, and richtig aufgezeichnet ('registered correctly', 'recorded correctly') into really the first thing.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
},
{
"text": "Table 6: German\u2192English translation examples with phrase-based SMT and NMT systems trained on 100k/3.2M words of parallel data. Example 1. reference: In a bloodstained continent, these people alone were never conquered by the Spanish. PBSMT 100k: In a blutbefleckten continent, were these people the only, the never of the Spaniern erobert were. PBSMT 3.2M: In a blutbefleckten continent, these people were the only ones that were never of the Spaniern conquered. NMT 3.2M (baseline): In a blinging tree continent, these people were the only ones that never had been conquered by the Spanians. NMT 100k (optimized): In a blue-flect continent, these people were the only one that has never been doing by the spaniers. NMT 3.2M (optimized): In a bleed continent, these people were the only ones who had never been conquered by the Spanians. Example 2. source: Dies ist tats\u00e4chlich ein Poster von Notre Dame, das richtig aufgezeichnet wurde. reference: This is actually a poster of Notre Dame that registered correctly. PBSMT 100k: This is actually poster of Notre lady, the right aufgezeichnet was. PBSMT 3.2M: This is actually a poster of Notre Dame, the right recorded. NMT 3.2M (baseline): This is actually a poster of emergency lady who was just recorded properly. NMT 100k (optimized): This is actually a poster of Notre Dame, that was really the first thing. NMT 3.2M (optimized): This is actually a poster from Notre Dame, which has been recorded right.",
"cite_spans": [],
"ref_spans": [
{
"start": 0,
"end": 7,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Appendix",
"sec_num": null
},
{
"text": "In related work,Cherry et al. (2018) have shown that, given deep encoders and decoders, character-level models can outperform other subword segmentations. In preliminary experiments, a character-level model performed poorly in our low-resource setting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "Rico Sennrich has received funding from the Swiss National Science Foundation in the project CoNTra (grant number 105212 169888). Biao Zhang acknowledges the support of the Baidu Scholarship.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Unsupervised Statistical Machine Translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3632--3642",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, and Eneko Agirre. 2018a. Unsupervised Statistical Machine Transla- tion. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 3632-3642, Brussels, Belgium.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Unsupervised Neural Machine Translation",
"authors": [
{
"first": "Mikel",
"middle": [],
"last": "Artetxe",
"suffix": ""
},
{
"first": "Gorka",
"middle": [],
"last": "Labaka",
"suffix": ""
},
{
"first": "Eneko",
"middle": [],
"last": "Agirre",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018b. Unsupervised Neural Ma- chine Translation. In International Conference on Learning Representations.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Hinton",
"authors": [
{
"first": "Jimmy",
"middle": [],
"last": "Lei",
"suffix": ""
},
{
"first": "Ryan",
"middle": [],
"last": "Ba",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Kiros",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lei Jimmy Ba, Ryan Kiros, and Geoffrey E. Hin- ton. 2016. Layer Normalization. CoRR, abs/1607.06450.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Neural Machine Translation by Jointly Learning to Align and Translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "Proceedings of the International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Represen- tations (ICLR).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Report on the 11th IWSLT Evaluation Campaign, IWSLT",
"authors": [
{
"first": "Mauro",
"middle": [],
"last": "Cettolo",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "St\u00fcker",
"suffix": ""
},
{
"first": "Luisa",
"middle": [],
"last": "Bentivogli",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the 11th Workshop on Spoken Language Translation",
"volume": "",
"issue": "",
"pages": "2--16",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mauro Cettolo, Jan Niehues, Sebastian St\u00fcker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT Evaluation Campaign, IWSLT 2014. In Proceedings of the 11th Workshop on Spo- ken Language Translation, pages 2-16, Lake Tahoe, CA, USA.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A Teacher-Student Framework for Zero-Resource Neural Machine Translation",
"authors": [
{
"first": "Yun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1925--1935",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yun Chen, Yang Liu, Yong Cheng, and Victor O.K. Li. 2017. A Teacher-Student Framework for Zero- Resource Neural Machine Translation. In Proceed- ings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Pa- pers), pages 1925-1935, Vancouver, Canada.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Semi-Supervised Learning for Neural Machine Translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "1965--1974",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- Supervised Learning for Neural Machine Transla- tion. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 1965-1974, Berlin, Germany.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Batch Tuning Strategies for Statistical Machine Translation",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
}
],
"year": 2012,
"venue": "Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL HLT '12",
"volume": "",
"issue": "",
"pages": "427--436",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry and George Foster. 2012. Batch Tuning Strategies for Statistical Machine Translation. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Compu- tational Linguistics: Human Language Technolo- gies, NAACL HLT '12, pages 427-436, Montreal, Canada.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Revisiting Character-Based Neural Machine Translation with Capacity and Compression",
"authors": [
{
"first": "Colin",
"middle": [],
"last": "Cherry",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Foster",
"suffix": ""
},
{
"first": "Ankur",
"middle": [],
"last": "Bapna",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "4295--4305",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Colin Cherry, George Foster, Ankur Bapna, Orhan Firat, and Wolfgang Macherey. 2018. Revisiting Character-Based Neural Machine Translation with Capacity and Compression. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 4295-4305, Brussels, Belgium.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Copied Monolingual Data Improves Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Antonio Valerio",
"middle": [],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "148--156",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Currey, Antonio Valerio Miceli Barone, and Ken- neth Heafield. 2017. Copied Monolingual Data Improves Low-Resource Neural Machine Transla- tion. In Proceedings of the Second Conference on Machine Translation, pages 148-156, Copenhagen, Denmark.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Classical structured prediction losses for sequence to sequence learning",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "355--364",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sergey Edunov, Myle Ott, Michael Auli, David Grang- ier, and Marc'Aurelio Ranzato. 2018. Classical structured prediction losses for sequence to se- quence learning. In Proceedings of the 2018 Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics: Human Lan- guage Technologies, Volume 1 (Long Papers), pages 355-364, New Orleans, Louisiana.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Pervasive Attention: 2D Convolutional Neural Networks for Sequence-to-Sequence Prediction",
"authors": [
{
"first": "Maha",
"middle": [],
"last": "Elbayad",
"suffix": ""
},
{
"first": "Laurent",
"middle": [],
"last": "Besacier",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Verbeek",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 22nd Conference on Computational Natural Language Learning",
"volume": "",
"issue": "",
"pages": "97--107",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Maha Elbayad, Laurent Besacier, and Jakob Verbeek. 2018. Pervasive Attention: 2D Convolutional Neu- ral Networks for Sequence-to-Sequence Prediction. In Proceedings of the 22nd Conference on Compu- tational Natural Language Learning, pages 97-107, Brussels, Belgium.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "A theoretically grounded application of dropout in recurrent neural networks",
"authors": [
{
"first": "Yarin",
"middle": [],
"last": "Gal",
"suffix": ""
},
{
"first": "Zoubin",
"middle": [],
"last": "Ghahramani",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "1019--1027",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yarin Gal and Zoubin Ghahramani. 2016. A theoret- ically grounded application of dropout in recurrent neural networks. In Advances in Neural Information Processing Systems 29, pages 1019-1027.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Parallel Implementations of Word Alignment Tool",
"authors": [
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Stephan",
"middle": [],
"last": "Vogel",
"suffix": ""
}
],
"year": 2008,
"venue": "Software Engineering, Testing, and Quality Assurance for Natural Language Processing",
"volume": "",
"issue": "",
"pages": "49--57",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Qin Gao and Stephan Vogel. 2008. Parallel Implemen- tations of Word Alignment Tool. In Software En- gineering, Testing, and Quality Assurance for Natu- ral Language Processing, pages 49-57, Columbus, Ohio.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Universal Neural Machine Translation for Extremely Low Resource Languages",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "",
"issue": "",
"pages": "344--354",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O.K. Li. 2018a. Universal Neural Machine Translation for Extremely Low Resource Languages. In Pro- ceedings of the 2018 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, Vol- ume 1 (Long Papers), pages 344-354, New Orleans, Louisiana.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Meta-Learning for Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Yong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Yun",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "3622--3631",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Yong Wang, Yun Chen, Victor O. K. Li, and Kyunghyun Cho. 2018b. Meta-Learning for Low- Resource Neural Machine Translation. In Proceed- ings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3622-3631, Brussels, Belgium.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "On Using Monolingual Corpora in Neural Machine Translation",
"authors": [
{
"first": "\u00c7aglar",
"middle": [],
"last": "G\u00fcl\u00e7ehre",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Lo\u00efc",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Huei-Chi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "\u00c7 aglar G\u00fcl\u00e7ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo\u00efc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On Us- ing Monolingual Corpora in Neural Machine Trans- lation. CoRR, abs/1503.03535.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "The University of Edinburgh's Submissions to the WMT18 News Translation Task",
"authors": [
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Nikolay",
"middle": [],
"last": "Bogoychev",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Emelin",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Roman",
"middle": [],
"last": "Grundkiewicz",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Antonio Valerio",
"middle": [],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "403--413",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barry Haddow, Nikolay Bogoychev, Denis Emelin, Ulrich Germann, Roman Grundkiewicz, Kenneth Heafield, Antonio Valerio Miceli Barone, and Rico Sennrich. 2018. The University of Edinburgh's Sub- missions to the WMT18 News Translation Task. In Proceedings of the Third Conference on Machine Translation, pages 403-413, Belgium, Brussels.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Dual Learning for Machine Translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tieyan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Advances in Neural Information Processing Systems",
"volume": "29",
"issue": "",
"pages": "820--828",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016. Dual Learn- ing for Machine Translation. In Advances in Neural Information Processing Systems 29, pages 820-828.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Scalable Modified Kneser-Ney Language Model Estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "690--696",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable Modified Kneser-Ney Language Model Estimation. In Pro- ceedings of the 51st Annual Meeting of the Associa- tion for Computational Linguistics, pages 690-696, Sofia, Bulgaria.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Towards Neural Phrasebased Machine Translation",
"authors": [
{
"first": "Po-Sen",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Chong",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Sitao",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Dengyong",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Li",
"middle": [],
"last": "Deng",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Po-Sen Huang, Chong Wang, Sitao Huang, Dengyong Zhou, and Li Deng. 2018. Towards Neural Phrase- based Machine Translation. In International Con- ference on Learning Representations.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Adam: A Method for Stochastic Optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "The International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In The Inter- national Conference on Learning Representations, San Diego, California, USA.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Trivial Transfer Learning for Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Tom",
"middle": [],
"last": "Kocmi",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation",
"volume": "",
"issue": "",
"pages": "244--252",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tom Kocmi and Ond\u0159ej Bojar. 2018. Trivial Transfer Learning for Low-Resource Neural Machine Trans- lation. In Proceedings of the Third Conference on Machine Translation, pages 244-252, Belgium, Brussels.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Moses: Open Source Toolkit for Statistical Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ond\u0159ej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "Proceedings of the ACL-2007 Demo and Poster Sessions",
"volume": "",
"issue": "",
"pages": "177--180",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177-180, Prague, Czech Republic.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Six Challenges for Neural Machine Translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Rebecca",
"middle": [],
"last": "Knowles",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the First Workshop on Neural Machine Translation",
"volume": "",
"issue": "",
"pages": "28--39",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn and Rebecca Knowles. 2017. Six Chal- lenges for Neural Machine Translation. In Pro- ceedings of the First Workshop on Neural Machine Translation, pages 28-39, Vancouver.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Unsupervised Machine Translation Using Monolingual Corpora Only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised Machine Translation Using Monolingual Corpora Only. In International Conference on Learning Representations.",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Phrase-Based & Neural Unsupervised Machine Translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "5039--5049",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-Based & Neural Unsupervised Machine Translation. In Proceedings of the 2018 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 5039-5049, Brussels, Belgium.",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Multi-task Sequence to Sequence Learning",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
}
],
"year": 2016,
"venue": "The International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Quoc V. Le, Ilya Sutskever, Oriol Vinyals, and Lukasz Kaiser. 2016. Multi-task Se- quence to Sequence Learning. In The International Conference on Learning Representations.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Deep Architectures for Neural Machine Translation",
"authors": [
{
"first": "Antonio",
"middle": [],
"last": "Valerio Miceli Barone",
"suffix": ""
},
{
"first": "Jind\u0159ich",
"middle": [],
"last": "Helcl",
"suffix": ""
},
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "1",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antonio Valerio Miceli Barone, Jind\u0159ich Helcl, Rico Sennrich, Barry Haddow, and Alexandra Birch. 2017. Deep Architectures for Neural Machine Translation. In Proceedings of the Second Confer- ence on Machine Translation, Volume 1: Research Papers, Copenhagen, Denmark.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "An Empirical Study of Mini-Batch Creation Strategies for Neural Machine Translation",
"authors": [
{
"first": "Makoto",
"middle": [],
"last": "Morishita",
"suffix": ""
},
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Koichiro",
"middle": [],
"last": "Yoshino",
"suffix": ""
},
{
"first": "Katsuhito",
"middle": [],
"last": "Sudoh",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2017,
"venue": "The First Workshop on Neural Machine Translation (NMT)",
"volume": "",
"issue": "",
"pages": "61--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Makoto Morishita, Yusuke Oda, Graham Neubig, Koichiro Yoshino, Katsuhito Sudoh, and Satoshi Nakamura. 2017. An Empirical Study of Mini- Batch Creation Strategies for Neural Machine Trans- lation. In The First Workshop on Neural Ma- chine Translation (NMT), pages 61-68, Vancouver, Canada.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "A Bag of Useful Tricks for Practical Neural Machine Translation: Embedding Layer Initialization and Large Batch Size",
"authors": [
{
"first": "Masato",
"middle": [],
"last": "Neishi",
"suffix": ""
},
{
"first": "Jin",
"middle": [],
"last": "Sakuma",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Tohda",
"suffix": ""
},
{
"first": "Shonosuke",
"middle": [],
"last": "Ishiwatari",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Yoshinaga",
"suffix": ""
},
{
"first": "Masashi",
"middle": [],
"last": "Toyoda",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 4th Workshop on Asian Translation (WAT2017)",
"volume": "",
"issue": "",
"pages": "99--109",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Masato Neishi, Jin Sakuma, Satoshi Tohda, Shonosuke Ishiwatari, Naoki Yoshinaga, and Masashi Toyoda. 2017. A Bag of Useful Tricks for Practical Neural Machine Translation: Embedding Layer Initializa- tion and Large Batch Size. In Proceedings of the 4th Workshop on Asian Translation (WAT2017), pages 99-109, Taipei, Taiwan.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "Rapid Adaptation of Neural Machine Translation to New Languages",
"authors": [
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Junjie",
"middle": [],
"last": "Hu",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "875--880",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Graham Neubig and Junjie Hu. 2018. Rapid Adap- tation of Neural Machine Translation to New Lan- guages. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Process- ing, pages 875-880, Brussels, Belgium.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Improving Lexical Choice in Neural Machine Translation",
"authors": [
{
"first": "Toan",
"middle": [],
"last": "Nguyen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies",
"volume": "1",
"issue": "",
"pages": "334--343",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Nguyen and David Chiang. 2018. Improving Lexical Choice in Neural Machine Translation. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 334-343, New Or- leans, Louisiana.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation",
"authors": [
{
"first": "Toan",
"middle": [
"Q."
],
"last": "Nguyen",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Chiang",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Eighth International Joint Conference on Natural Language Processing",
"volume": "2",
"issue": "",
"pages": "296--301",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Toan Q. Nguyen and David Chiang. 2017. Transfer Learning across Low-Resource, Related Languages for Neural Machine Translation. In Proceedings of the Eighth International Joint Conference on Natu- ral Language Processing (Volume 2: Short Papers), pages 296-301, Taipei, Taiwan.",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Neural machine translation for low-resource languages",
"authors": [
{
"first": "Robert",
"middle": [],
"last": "\u00d6stling",
"suffix": ""
},
{
"first": "J\u00f6rg",
"middle": [],
"last": "Tiedemann",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Robert\u00d6stling and J\u00f6rg Tiedemann. 2017. Neural ma- chine translation for low-resource languages. CoRR, abs/1708.05729.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "BLEU: A Method for Automatic Evaluation of Machine Translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th Annual Meeting on Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: A Method for Automatic Evaluation of Machine Translation. In Proceedings of the 40th Annual Meeting on Association for Com- putational Linguistics, pages 311-318, Philadelphia, PA.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "A call for clarity in reporting BLEU scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the Third Conference on Machine Translation: Research Papers",
"volume": "",
"issue": "",
"pages": "186--191",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186- 191, Belgium, Brussels.",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Using the Output Embedding to Improve Language Models",
"authors": [
{
"first": "Ofir",
"middle": [],
"last": "Press",
"suffix": ""
},
{
"first": "Lior",
"middle": [],
"last": "Wolf",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics (EACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ofir Press and Lior Wolf. 2017. Using the Output Em- bedding to Improve Language Models. In Proceed- ings of the 15th Conference of the European Chap- ter of the Association for Computational Linguistics (EACL), Valencia, Spain.",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Unsupervised pretraining for sequence to sequence learning",
"authors": [
{
"first": "Prajit",
"middle": [],
"last": "Ramachandran",
"suffix": ""
},
{
"first": "Peter",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Quoc",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "383--391",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Prajit Ramachandran, Peter Liu, and Quoc Le. 2017. Unsupervised pretraining for sequence to sequence learning. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Process- ing, pages 383-391, Copenhagen, Denmark.",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Sequence Level Training with Recurrent Neural Networks",
"authors": [
{
"first": "Marc'Aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
},
{
"first": "Sumit",
"middle": [],
"last": "Chopra",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "Wojciech",
"middle": [],
"last": "Zaremba",
"suffix": ""
}
],
"year": 2016,
"venue": "The International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2016. Sequence Level Training with Recurrent Neural Networks. In The International Conference on Learning Representa- tions.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "The University of Edinburgh's Neural MT Systems for WMT17",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Ulrich",
"middle": [],
"last": "Germann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Antonio Valerio",
"middle": [],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Williams",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Second Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Alexandra Birch, Anna Currey, Ulrich Germann, Barry Haddow, Kenneth Heafield, An- tonio Valerio Miceli Barone, and Philip Williams. 2017a. The University of Edinburgh's Neural MT Systems for WMT17. In Proceedings of the Sec- ond Conference on Machine Translation, Volume 2: Shared Task Papers, Copenhagen, Denmark.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Nematus: a Toolkit for Neural Machine Translation",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Julian",
"middle": [],
"last": "Hitschler",
"suffix": ""
},
{
"first": "Marcin",
"middle": [],
"last": "Junczys-Dowmunt",
"suffix": ""
},
{
"first": "Samuel",
"middle": [],
"last": "L\u00e4ubli",
"suffix": ""
},
{
"first": "Antonio Valerio",
"middle": [],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Jozef",
"middle": [],
"last": "Mokry",
"suffix": ""
},
{
"first": "Maria",
"middle": [],
"last": "Nadejde",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the Software Demonstrations of the 15th Conference of the European Chapter of the Association for Computational Linguistics",
"volume": "",
"issue": "",
"pages": "65--68",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Orhan Firat, Kyunghyun Cho, Alexan- dra Birch, Barry Haddow, Julian Hitschler, Marcin Junczys-Dowmunt, Samuel L\u00e4ubli, Antonio Vale- rio Miceli Barone, Jozef Mokry, and Maria Nade- jde. 2017b. Nematus: a Toolkit for Neural Machine Translation. In Proceedings of the Software Demon- strations of the 15th Conference of the European Chapter of the Association for Computational Lin- guistics, pages 65-68, Valencia, Spain.",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Edinburgh Neural Machine Translation Systems for WMT 16",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the First Conference on Machine Translation",
"volume": "2",
"issue": "",
"pages": "368--373",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh Neural Machine Translation Sys- tems for WMT 16. In Proceedings of the First Con- ference on Machine Translation, Volume 2: Shared Task Papers, pages 368-373, Berlin, Germany.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Improving Neural Machine Translation Models with Monolingual Data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "86--96",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Improving Neural Machine Translation Models with Monolingual Data. In Proceedings of the 54th Annual Meeting of the Association for Com- putational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Neural Machine Translation of Rare Words with Subword Units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1715--1725",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016c. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715- 1725, Berlin, Germany.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "On the limitations of unsupervised bilingual dictionary induction",
"authors": [
{
"first": "Anders",
"middle": [],
"last": "S\u00f8gaard",
"suffix": ""
},
{
"first": "Sebastian",
"middle": [],
"last": "Ruder",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Vulic",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "778--788",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anders S\u00f8gaard, Sebastian Ruder, and Ivan Vulic. 2018. On the limitations of unsupervised bilingual dictionary induction. In Proceedings of the 56th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 778- 788, Melbourne, Australia.",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "Dropout: A Simple Way to Prevent Neural Networks from Overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "Journal of Machine Learning Research",
"volume": "15",
"issue": "",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Re- search, 15:1929-1958.",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Sequence to Sequence Learning with Neural Networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V."
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Net- works. In Advances in Neural Information Process- ing Systems 27: Annual Conference on Neural Infor- mation Processing Systems 2014, pages 3104-3112, Montreal, Quebec, Canada.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "Rethinking the Inception Architecture for Computer Vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Z",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2016,
"venue": "2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)",
"volume": "",
"issue": "",
"pages": "2818--2826",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Z. Wojna. 2016. Rethinking the Inception Architecture for Computer Vision. In 2016 IEEE Conference on Computer Vision and Pat- tern Recognition (CVPR), pages 2818-2826.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Attention is All you Need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Advances in Neural Information Processing Systems",
"volume": "30",
"issue": "",
"pages": "5998--6008",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, \u0141ukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Pro- cessing Systems 30, pages 5998-6008.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "Sequence-to-sequence learning as beam-search optimization",
"authors": [
{
"first": "Sam",
"middle": [],
"last": "Wiseman",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"M"
],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1296--1306",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Sam Wiseman and Alexander M. Rush. 2016. Sequence-to-sequence learning as beam-search opti- mization. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Process- ing, pages 1296-1306, Austin, Texas.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Pay less attention with lightweight and dynamic convolutions",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Alexei",
"middle": [],
"last": "Baevski",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2019,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Wu, Angela Fan, Alexei Baevski, Yann Dauphin, and Michael Auli. 2019. Pay less attention with lightweight and dynamic convolutions. In Interna- tional Conference on Learning Representations.",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Transfer Learning for Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Barret",
"middle": [],
"last": "Zoph",
"suffix": ""
},
{
"first": "Deniz",
"middle": [],
"last": "Yuret",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [],
"last": "May",
"suffix": ""
},
{
"first": "Kevin",
"middle": [],
"last": "Knight",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "1568--1575",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Barret Zoph, Deniz Yuret, Jonathan May, and Kevin Knight. 2016. Transfer Learning for Low-Resource Neural Machine Translation. In Proceedings of the 2016 Conference on Empirical Methods in Natu- ral Language Processing, pages 1568-1575, Austin, Texas.",
"links": null
}
},
"ref_entries": {
"FIGREF0": {
"type_str": "figure",
"text": "German\u2192English learning curve, showing BLEU as a function of the amount of parallel training data, for PBSMT and NMT.",
"uris": null,
"num": null
},
"TABREF4": {
"type_str": "table",
"html": null,
"text": "Training corpus size and subword vocabulary size for different subsets of IWSLT14 DE\u2192EN data, and for KO\u2192EN data.",
"num": null,
"content": "<table/>"
},
"TABREF6": {
"type_str": "table",
"html": null,
"text": "German\u2192English IWSLT results for training corpus size of 100k words and 3.2M words (full corpus). Mean and standard deviation of three training runs reported.",
"num": null,
"content": "<table><tr><td/><td/><td/><td/><td/><td/><td>32.8</td></tr><tr><td/><td/><td/><td/><td/><td/><td>30.8</td></tr><tr><td/><td>30</td><td/><td/><td/><td colspan=\"2\">28.7</td></tr><tr><td/><td/><td/><td/><td/><td/><td>26.6</td></tr><tr><td/><td/><td/><td/><td/><td>24.4</td><td>24.9</td></tr><tr><td/><td/><td/><td/><td/><td>23</td><td>25.7</td></tr><tr><td>BLEU</td><td>20</td><td colspan=\"2\">16.6 16</td><td>20.6 18.3</td><td>20.5</td><td>18.5</td></tr><tr><td/><td>10</td><td/><td/><td/><td colspan=\"2\">11.6</td></tr><tr><td/><td/><td/><td/><td/><td/><td>neural MT optimized</td></tr><tr><td/><td/><td/><td/><td/><td/><td>phrase-based SMT</td></tr><tr><td/><td/><td/><td/><td/><td/><td>neural MT baseline</td></tr><tr><td/><td>0</td><td>10 5</td><td>0</td><td>1.3</td><td>1.8</td><td>10 6</td></tr><tr><td/><td/><td/><td/><td colspan=\"3\">corpus size (English words)</td></tr></table>"
},
"TABREF7": {
"type_str": "table",
"html": null,
"text": "Signature BLEU+c.mixed+#.1+s.exp+tok.13a+v.1.3.2.",
"num": null,
"content": "<table><tr><td>size, model depth, regularization parameters and</td></tr><tr><td>learning rate. Detailed hyperparameters are re-</td></tr><tr><td>ported in Appendix A.</td></tr></table>"
},
"TABREF8": {
"type_str": "table",
"html": null,
"text": "Results on full IWSLT14 German\u2192English data on tokenized and lowercased test set with multi-bleu.perl.",
"num": null,
"content": "<table/>"
},
"TABREF9": {
"type_str": "table",
"html": null,
"text": "Korean\u2192English results. Mean and standard deviation of three training runs reported.",
"num": null,
"content": "<table/>"
},
"TABREF11": {
"type_str": "table",
"html": null,
"text": "Configurations of NMT systems reported inTable 2. Empty fields indicate that hyperparameter was unchanged compared to previous systems.sourceIn einem blutbefleckten Kontinent, waren diese Menschen die einzigen, die nie von den Spaniern erobert wurden. reference",
"num": null,
"content": "<table/>"
}
}
}
}