{
"paper_id": "D18-1045",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:51:56.225915Z"
},
"title": "Understanding Back-Translation at Scale",
"authors": [
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": "",
"affiliation": {},
"email": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": "",
"affiliation": {},
"email": ""
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.",
"pdf_parse": {
"paper_id": "D18-1045",
"_pdf_hash": "",
"abstract": [
{
"text": "An effective method to improve neural machine translation with monolingual data is to augment the parallel training corpus with back-translations of target language sentences. This work broadens the understanding of back-translation and investigates a number of methods to generate synthetic source sentences. We find that in all but resource poor settings back-translations obtained via sampling or noised beam outputs are most effective. Our analysis shows that sampling or noisy synthetic data gives a much stronger training signal than data generated by beam or greedy search. We also compare how synthetic data compares to genuine bitext and study various domain effects. Finally, we scale to hundreds of millions of monolingual sentences and achieve a new state of the art of 35 BLEU on the WMT'14 English-German test set.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Machine translation relies on the statistics of large parallel corpora, i.e. datasets of paired sentences in both the source and target language. However, bitext is limited and there is a much larger amount of monolingual data available. Monolingual data has been traditionally used to train language models which improved the fluency of statistical machine translation (Koehn, 2010) .",
"cite_spans": [
{
"start": 370,
"end": 383,
"text": "(Koehn, 2010)",
"ref_id": "BIBREF34"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In the context of neural machine translation (NMT; Bahdanau et al. 2015; Gehring et al. 2017; Vaswani et al. 2017) , there has been extensive work to improve models with monolingual data, including language model fusion (Gulcehre et al., 2015 (Gulcehre et al., , 2017 , back-translation (Sennrich et al., 2016a) and dual learning (Cheng et al., 2016; He et al., 2016a) . These methods have different advantages and can be combined to reach high accuracy (Hassan et al., 2018) . *Work done while at Facebook AI Research.",
"cite_spans": [
{
"start": 51,
"end": 72,
"text": "Bahdanau et al. 2015;",
"ref_id": "BIBREF2"
},
{
"start": 73,
"end": 93,
"text": "Gehring et al. 2017;",
"ref_id": "BIBREF15"
},
{
"start": 94,
"end": 114,
"text": "Vaswani et al. 2017)",
"ref_id": "BIBREF57"
},
{
"start": 220,
"end": 242,
"text": "(Gulcehre et al., 2015",
"ref_id": null
},
{
"start": 243,
"end": 267,
"text": "(Gulcehre et al., , 2017",
"ref_id": "BIBREF19"
},
{
"start": 287,
"end": 311,
"text": "(Sennrich et al., 2016a)",
"ref_id": "BIBREF51"
},
{
"start": 330,
"end": 350,
"text": "(Cheng et al., 2016;",
"ref_id": "BIBREF7"
},
{
"start": 351,
"end": 368,
"text": "He et al., 2016a)",
"ref_id": "BIBREF23"
},
{
"start": 454,
"end": 475,
"text": "(Hassan et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "We focus on back-translation (BT) which operates in a semi-supervised setup where both bilingual and monolingual data in the target language are available. Back-translation first trains an intermediate system on the parallel data which is used to translate the target monolingual data into the source language. The result is a parallel corpus where the source side is synthetic machine translation output while the target is genuine text written by humans. The synthetic parallel corpus is then simply added to the real bitext in order to train a final system that will translate from the source to the target language. Although simple, this method has been shown to be helpful for phrase-based translation (Bojar and Tamchyna, 2011) , NMT (Sennrich et al., 2016a; Poncelas et al., 2018) as well as unsupervised MT (Lample et al., 2018a) .",
"cite_spans": [
{
"start": 707,
"end": 733,
"text": "(Bojar and Tamchyna, 2011)",
"ref_id": "BIBREF4"
},
{
"start": 740,
"end": 764,
"text": "(Sennrich et al., 2016a;",
"ref_id": "BIBREF51"
},
{
"start": 765,
"end": 787,
"text": "Poncelas et al., 2018)",
"ref_id": "BIBREF49"
},
{
"start": 815,
"end": 837,
"text": "(Lample et al., 2018a)",
"ref_id": "BIBREF38"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "In this paper, we investigate back-translation for neural machine translation at a large scale by adding hundreds of millions of back-translated sentences to the bitext. Our experiments are based on strong baseline models trained on the public bitext of the WMT competition. We extend previous analysis (Sennrich et al., 2016a; Poncelas et al., 2018) of back-translation in several ways. We provide a comprehensive analysis of different methods to generate synthetic source sentences and we show that this choice matters: sampling from the model distribution or noising beam outputs outperforms pure beam search, which is typically used, by 1.7 BLEU on average across several test sets. Our analysis shows that synthetic data based on sampling and noised beam search provides a stronger training signal than synthetic data based on argmax inference. We also study how adding synthetic data compares to adding real bitext in a controlled setup with the surprising finding that synthetic data can sometimes match the accuracy of real bitext. Our best setup achieves 35 BLEU on the WMT'14 English-German test set by rely-ing only on public WMT bitext as well as 226M monolingual sentences. This outperforms the system of DeepL by 1.7 BLEU who train on large amounts of high quality non-benchmark data. On WMT'14 English-French we achieve 45.6 BLEU.",
"cite_spans": [
{
"start": 303,
"end": 327,
"text": "(Sennrich et al., 2016a;",
"ref_id": "BIBREF51"
},
{
"start": 328,
"end": 350,
"text": "Poncelas et al., 2018)",
"ref_id": "BIBREF49"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "This section describes prior work in machine translation with neural networks as well as semisupervised machine translation.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Related work",
"sec_num": "2"
},
{
"text": "We build upon recent work on neural machine translation which is typically a neural network with an encoder/decoder architecture. The encoder infers a continuous space representation of the source sentence, while the decoder is a neural language model conditioned on the encoder output. The parameters of both models are learned jointly to maximize the likelihood of the target sentences given the corresponding source sentences from a parallel corpus (Sutskever et al., 2014; Cho et al., 2014) . At inference, a target sentence is generated by left-to-right decoding.",
"cite_spans": [
{
"start": 452,
"end": 476,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF55"
},
{
"start": 477,
"end": 494,
"text": "Cho et al., 2014)",
"ref_id": "BIBREF8"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural machine translation",
"sec_num": "2.1"
},
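The maximum-likelihood training just described corresponds to the standard left-to-right factorization; a sketch of the objective in LaTeX (a textbook formulation, not copied from the paper):

```latex
% Log-likelihood over a parallel corpus D of source/target pairs (x, y),
% factorized left-to-right over target tokens y_t:
\mathcal{L}(\theta) = \sum_{(x, y) \in D} \sum_{t=1}^{|y|} \log p_\theta\!\left(y_t \mid y_{<t}, x\right)
```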
{
"text": "Different neural architectures have been proposed with the goal of improving efficiency and/or effectiveness. This includes recurrent networks (Sutskever et al., 2014; Bahdanau et al., 2015; Luong et al., 2015) , convolutional networks (Kalchbrenner et al., 2016; Gehring et al., 2017; and transformer networks (Vaswani et al., 2017) . Recent work relies on attention mechanisms where the encoder produces a sequence of vectors and, for each target token, the decoder attends to the most relevant part of the source through a contextdependent weighted-sum of the encoder vectors (Bahdanau et al., 2015; Luong et al., 2015) . Attention has been refined with multi-hop attention (Gehring et al., 2017) , self-attention (Vaswani et al., 2017; Paulus et al., 2018) and multi-head attention (Vaswani et al., 2017) . We use a transformer architecture (Vaswani et al., 2017) .",
"cite_spans": [
{
"start": 143,
"end": 167,
"text": "(Sutskever et al., 2014;",
"ref_id": "BIBREF55"
},
{
"start": 168,
"end": 190,
"text": "Bahdanau et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 191,
"end": 210,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF41"
},
{
"start": 236,
"end": 263,
"text": "(Kalchbrenner et al., 2016;",
"ref_id": "BIBREF31"
},
{
"start": 264,
"end": 285,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF15"
},
{
"start": 311,
"end": 333,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF57"
},
{
"start": 579,
"end": 602,
"text": "(Bahdanau et al., 2015;",
"ref_id": "BIBREF2"
},
{
"start": 603,
"end": 622,
"text": "Luong et al., 2015)",
"ref_id": "BIBREF41"
},
{
"start": 677,
"end": 699,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF15"
},
{
"start": 717,
"end": 739,
"text": "(Vaswani et al., 2017;",
"ref_id": "BIBREF57"
},
{
"start": 740,
"end": 760,
"text": "Paulus et al., 2018)",
"ref_id": "BIBREF46"
},
{
"start": 786,
"end": 808,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF57"
},
{
"start": 845,
"end": 867,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF57"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Neural machine translation",
"sec_num": "2.1"
},
{
"text": "Monolingual target data has been used to improve the fluency of machine translations since the early IBM models (Brown et al., 1990) . In phrase-based systems, language models (LM) in the target language increase the score of fluent outputs during decoding (Koehn et al., 2003; Brants et al., 2007) .",
"cite_spans": [
{
"start": 112,
"end": 132,
"text": "(Brown et al., 1990)",
"ref_id": "BIBREF6"
},
{
"start": 257,
"end": 277,
"text": "(Koehn et al., 2003;",
"ref_id": "BIBREF36"
},
{
"start": 278,
"end": 298,
"text": "Brants et al., 2007)",
"ref_id": "BIBREF5"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised NMT",
"sec_num": "2.2"
},
{
"text": "A similar strategy can be applied to NMT (He et al., 2016b) . Besides improving accuracy during decoding, neural LM and NMT can benefit from deeper integration, e.g. by combining the hidden states of both models (Gulcehre et al., 2017) . Neural architecture also allows multi-task learning and parameter sharing between MT and target-side LM (Domhan and Hieber, 2017) .",
"cite_spans": [
{
"start": 41,
"end": 59,
"text": "(He et al., 2016b)",
"ref_id": "BIBREF24"
},
{
"start": 212,
"end": 235,
"text": "(Gulcehre et al., 2017)",
"ref_id": "BIBREF19"
},
{
"start": 342,
"end": 367,
"text": "(Domhan and Hieber, 2017)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised NMT",
"sec_num": "2.2"
},
{
"text": "Back-translation (BT) is an alternative to leverage monolingual data. BT is simple and easy to apply as it does not require modification to the MT training algorithms. It requires training a targetto-source system in order to generate additional synthetic parallel data from the monolingual target data. This data complements human bitext to train the desired source-to-target system. BT has been applied earlier to phrase-base systems (Bojar and Tamchyna, 2011). For these systems, BT has also been successful in leveraging monolingual data for domain adaptation (Bertoldi and Federico, 2009; Lambert et al., 2011) . Recently, BT has been shown beneficial for NMT (Sennrich et al., 2016a; Poncelas et al., 2018) . It has been found to be particularly useful when parallel data is scarce (Karakanta et al., 2017) . Currey et al. (2017) show that low resource language pairs can also be improved with synthetic data where the source is simply a copy of the monolingual target data. Concurrently to our work, Imamura et al. (2018) show that sampling synthetic sources is more effective than beam search. Specifically, they sample multiple sources for each target whereas we draw only a single sample, opting to train on a larger number of target sentences instead. Hoang et al. (2018) and Cotterell and Kreutzer (2018) suggest an iterative procedure which continuously improves the quality of the back-translation and final systems. Niu et al. (2018) experiment with a multilingual model that does both the forward and backward translation which is continuously trained with new synthetic data.",
"cite_spans": [
{
"start": 564,
"end": 593,
"text": "(Bertoldi and Federico, 2009;",
"ref_id": "BIBREF3"
},
{
"start": 594,
"end": 615,
"text": "Lambert et al., 2011)",
"ref_id": "BIBREF37"
},
{
"start": 665,
"end": 689,
"text": "(Sennrich et al., 2016a;",
"ref_id": "BIBREF51"
},
{
"start": 690,
"end": 712,
"text": "Poncelas et al., 2018)",
"ref_id": "BIBREF49"
},
{
"start": 788,
"end": 812,
"text": "(Karakanta et al., 2017)",
"ref_id": "BIBREF32"
},
{
"start": 815,
"end": 835,
"text": "Currey et al. (2017)",
"ref_id": "BIBREF10"
},
{
"start": 1007,
"end": 1028,
"text": "Imamura et al. (2018)",
"ref_id": "BIBREF28"
},
{
"start": 1431,
"end": 1448,
"text": "Niu et al. (2018)",
"ref_id": "BIBREF42"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised NMT",
"sec_num": "2.2"
},
{
"text": "There has also been work using source-side monolingual data (Zhang and Zong, 2016) . Furthermore, Cheng et al. (2016) ; He et al. (2016a) ; Xia et al. (2017) show how monolingual text from both languages can be leveraged by extending back-translation to dual learning: when training both source-to-target and target-to-source models jointly, one can use back-translation in both directions and perform multiple rounds of BT. A simi-lar idea is applied in unsupervised NMT (Lample et al., 2018a,b) . Besides monolingual data, various approaches have been introduced to benefit from parallel data in other language pairs (Johnson et al., 2017; Firat et al., 2016a,b; Ha et al., 2016; Gu et al., 2018) .",
"cite_spans": [
{
"start": 60,
"end": 82,
"text": "(Zhang and Zong, 2016)",
"ref_id": "BIBREF60"
},
{
"start": 98,
"end": 117,
"text": "Cheng et al. (2016)",
"ref_id": "BIBREF7"
},
{
"start": 120,
"end": 137,
"text": "He et al. (2016a)",
"ref_id": "BIBREF23"
},
{
"start": 140,
"end": 157,
"text": "Xia et al. (2017)",
"ref_id": "BIBREF59"
},
{
"start": 472,
"end": 496,
"text": "(Lample et al., 2018a,b)",
"ref_id": null
},
{
"start": 619,
"end": 641,
"text": "(Johnson et al., 2017;",
"ref_id": "BIBREF29"
},
{
"start": 642,
"end": 664,
"text": "Firat et al., 2016a,b;",
"ref_id": null
},
{
"start": 665,
"end": 681,
"text": "Ha et al., 2016;",
"ref_id": "BIBREF20"
},
{
"start": 682,
"end": 698,
"text": "Gu et al., 2018)",
"ref_id": "BIBREF17"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised NMT",
"sec_num": "2.2"
},
{
"text": "Data augmentation is an established technique in computer vision where a labeled dataset is supplemented with cropped or rotated input images. Recently, generative adversarial networks (GANs) have been successfully used to the same end (Antoniou et al., 2017; Perez and Wang, 2017) as well as models that learn distributions over image transformations (Hauberg et al., 2016) .",
"cite_spans": [
{
"start": 236,
"end": 259,
"text": "(Antoniou et al., 2017;",
"ref_id": "BIBREF1"
},
{
"start": 260,
"end": 281,
"text": "Perez and Wang, 2017)",
"ref_id": "BIBREF48"
},
{
"start": 352,
"end": 374,
"text": "(Hauberg et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Semi-supervised NMT",
"sec_num": "2.2"
},
{
"text": "Back-translation typically uses beam search (Sennrich et al., 2016a) or just greedy search (Lample et al., 2018a,b) to generate synthetic source sentences. Both are approximate algorithms to identify the maximum a-posteriori (MAP) output, i.e. the sentence with the largest estimated probability given an input. Beam is generally successful in finding high probability outputs (Ott et al., 2018a) .",
"cite_spans": [
{
"start": 91,
"end": 115,
"text": "(Lample et al., 2018a,b)",
"ref_id": null
},
{
"start": 377,
"end": 396,
"text": "(Ott et al., 2018a)",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating synthetic sources",
"sec_num": "3"
},
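For illustration, a minimal greedy decoding loop approximating the MAP output; `next_token_logits` is a hypothetical stand-in for the NMT decoder, not the paper's code (beam search generalizes this by keeping the k best partial hypotheses):

```python
import torch

def greedy_decode(next_token_logits, bos: int, eos: int, max_len: int = 250) -> list:
    """Greedy search sketch: at every step keep only the single most likely token."""
    prefix = [bos]
    for _ in range(max_len):
        tok = int(torch.argmax(next_token_logits(prefix)))  # argmax = head of the distribution
        prefix.append(tok)
        if tok == eos:
            break
    return prefix
```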
{
"text": "However, MAP prediction can lead to less rich translations (Ott et al., 2018a ) since it always favors the most likely alternative in case of ambiguity. This is particularly problematic in tasks where there is a high level of uncertainty such as dialog (Serban et al., 2016) and story generation (Fan et al., 2018) . We argue that this is also problematic for a data augmentation scheme such as backtranslation. Beam and greedy focus on the head of the model distribution which results in very regular synthetic source sentences that do not properly cover the true data distribution.",
"cite_spans": [
{
"start": 59,
"end": 77,
"text": "(Ott et al., 2018a",
"ref_id": "BIBREF43"
},
{
"start": 253,
"end": 274,
"text": "(Serban et al., 2016)",
"ref_id": "BIBREF53"
},
{
"start": 296,
"end": 314,
"text": "(Fan et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating synthetic sources",
"sec_num": "3"
},
{
"text": "As alternative, we consider sampling from the model distribution as well as adding noise to beam search outputs. First, we explore unrestricted sampling which generates outputs that are very diverse but sometimes highly unlikely. Second, we investigate sampling restricted to the most likely words (Graves, 2013; Ott et al., 2018a; Fan et al., 2018) . At each time step, we select the k most likely tokens from the output distribution, renormalize and then sample from this restricted set. This is a middle ground between MAP and unrestricted sampling.",
"cite_spans": [
{
"start": 298,
"end": 312,
"text": "(Graves, 2013;",
"ref_id": "BIBREF16"
},
{
"start": 313,
"end": 331,
"text": "Ott et al., 2018a;",
"ref_id": "BIBREF43"
},
{
"start": 332,
"end": 349,
"text": "Fan et al., 2018)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating synthetic sources",
"sec_num": "3"
},
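A minimal sketch of the restricted (top-k) sampling step described above, assuming `logits` is the model's next-token distribution at one decoding step; names are illustrative:

```python
import torch

def sample_top_k(logits: torch.Tensor, k: int = 10) -> int:
    """Keep the k highest-scoring tokens, renormalize, and sample from that set."""
    topk_logits, topk_indices = torch.topk(logits, k)   # k most likely tokens
    probs = torch.softmax(topk_logits, dim=-1)          # renormalize over the restricted set
    choice = torch.multinomial(probs, num_samples=1)    # sample within the top k
    return int(topk_indices[choice])
```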
{
"text": "As a third alternative, we apply noising Lample et al. (2018a) to beam search outputs. Adding noise to input sentences has been very benefi-cial for the autoencoder setups of (Lample et al., 2018a; Hill et al., 2016) which is inspired by denoising autoencoders (Vincent et al., 2008) . In particular, we transform source sentences with three types of noise: deleting words with probability 0.1, replacing words by a filler token with probability 0.1, and swapping words which is implemented as a random permutation over the tokens, drawn from the uniform distribution but restricted to swapping words no further than three positions apart.",
"cite_spans": [
{
"start": 175,
"end": 197,
"text": "(Lample et al., 2018a;",
"ref_id": "BIBREF38"
},
{
"start": 198,
"end": 216,
"text": "Hill et al., 2016)",
"ref_id": "BIBREF26"
},
{
"start": 261,
"end": 283,
"text": "(Vincent et al., 2008)",
"ref_id": "BIBREF58"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Generating synthetic sources",
"sec_num": "3"
},
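A sketch of the three noise types, assuming whitespace-tokenized input and a hypothetical `<BLANK>` filler token; the local shuffle follows the index-plus-uniform-offset trick of Lample et al. (2018a), so this is an approximation rather than the paper's exact code:

```python
import random

FILLER = "<BLANK>"  # assumed name for the filler token

def add_noise(tokens, p_drop=0.1, p_blank=0.1, max_dist=3):
    """Delete, blank out, and locally shuffle tokens as described above."""
    # 1) delete words with probability 0.1
    tokens = [t for t in tokens if random.random() >= p_drop]
    # 2) replace words by a filler token with probability 0.1
    tokens = [FILLER if random.random() < p_blank else t for t in tokens]
    # 3) swap words: sort by index plus a uniform offset, which keeps every
    #    token within roughly max_dist positions of its original place
    keys = [i + random.uniform(0, max_dist) for i in range(len(tokens))]
    return [t for _, t in sorted(zip(keys, tokens), key=lambda p: p[0])]
```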
{
"text": "The majority of our experiments are based on data from the WMT'18 English-German news translation task. We train on all available bitext excluding the ParaCrawl corpus and remove sentences longer than 250 words as well as sentence-pairs with a source/target length ratio exceeding 1.5. This results in 5.18M sentence pairs. For the backtranslation experiments we use the German monolingual newscrawl data distributed with WMT'18 comprising 226M sentences after removing duplicates. We tokenize all data with the Moses tokenizer (Koehn et al., 2007) and learn a joint source and target Byte-Pair-Encoding (BPE; Sennrich et al., 2016) with 35K types. We develop on new-stest2012 and report final results on newstest2013-2017; additionally we consider a held-out set from the training data of 52K sentence-pairs.",
"cite_spans": [
{
"start": 528,
"end": 548,
"text": "(Koehn et al., 2007)",
"ref_id": "BIBREF35"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup 4.1 Datasets",
"sec_num": "4"
},
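A sketch of the length-based filtering rule, under the assumption that the length ratio is checked in both directions (the text does not spell this out):

```python
def keep_pair(src: str, tgt: str, max_len: int = 250, max_ratio: float = 1.5) -> bool:
    """Drop pairs with a side longer than 250 words or a length ratio above 1.5."""
    n_src, n_tgt = len(src.split()), len(tgt.split())
    if n_src == 0 or n_tgt == 0 or max(n_src, n_tgt) > max_len:
        return False
    return max(n_src, n_tgt) / min(n_src, n_tgt) <= max_ratio
```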
{
"text": "We also experiment on the larger WMT'14 English-French task which we filter in the same way as WMT'18 English-German. This results in 35.7M sentence-pairs for training and we learn a joint BPE vocabulary of 44K types. As monolingual data we use newscrawl2010-2014, comprising 31M sentences after language identification (Lui and Baldwin, 2012) . We use newstest2012 as development set and report final results on newstest2013-2015.",
"cite_spans": [
{
"start": 320,
"end": 343,
"text": "(Lui and Baldwin, 2012)",
"ref_id": null
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup 4.1 Datasets",
"sec_num": "4"
},
{
"text": "The majority of results in this paper are in terms of case-sensitive tokenized BLEU (Papineni et al., 2002) but we also report test accuracy with detokenized BLEU using sacreBLEU (Post, 2018) .",
"cite_spans": [
{
"start": 84,
"end": 107,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF45"
},
{
"start": 179,
"end": 191,
"text": "(Post, 2018)",
"ref_id": "BIBREF50"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental setup 4.1 Datasets",
"sec_num": "4"
},
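For reference, detokenized BLEU with sacreBLEU can be computed along these lines (illustrative strings; the corpus_bleu helper is from the sacrebleu package):

```python
import sacrebleu  # pip install sacrebleu

hypotheses = ["The cat sat on the mat."]      # detokenized system outputs
references = [["The cat sat on the mat."]]    # one reference stream
print(sacrebleu.corpus_bleu(hypotheses, references).score)
```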
{
"text": "We re-implemented the Transformer model in pytorch using the fairseq toolkit. 1 All experiments 1 Code available at https://github.com/ pytorch/fairseq are based on the Big Transformer architecture with 6 blocks in the encoder and decoder. We use the same hyper-parameters for all experiments, i.e., word representations of size 1024, feed-forward layers with inner dimension 4096. Dropout is set to 0.3 for En-De and 0.1 for En-Fr, we use 16 attention heads, and we average the checkpoints of the last ten epochs. Models are optimized with Adam (Kingma and Ba, 2015) using \u03b2 1 = 0.9, \u03b2 2 = 0.98, and = 1e \u2212 8 and we use the same learning rate schedule as Vaswani et al. (2017) . All models use label smoothing with a uniform prior distribution over the vocabulary = 0.1 (Szegedy et al., 2015; Pereyra et al., 2017) . We run experiments on DGX-1 machines with 8 Nvidia V100 GPUs and machines are interconnected by Infiniband. Experiments are run on 16 machines and we perform 30K synchronous updates. We also use the NCCL2 library and the torch distributed package for inter-GPU communication. We train models with 16-bit floating point operations, following . For final evaluation, we generate translations with a beam of size 5 and with no length penalty.",
"cite_spans": [
{
"start": 96,
"end": 97,
"text": "1",
"ref_id": null
},
{
"start": 656,
"end": 677,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF57"
},
{
"start": 771,
"end": 793,
"text": "(Szegedy et al., 2015;",
"ref_id": "BIBREF56"
},
{
"start": 794,
"end": 815,
"text": "Pereyra et al., 2017)",
"ref_id": "BIBREF47"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Model and hyperparameters",
"sec_num": "4.2"
},
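A minimal sketch of the optimizer settings in PyTorch, with the inverse square-root learning-rate schedule of Vaswani et al. (2017); `model` and the warmup value are placeholders, not the paper's configuration:

```python
import torch

model = torch.nn.Linear(1024, 1024)  # stand-in for the Big Transformer
optimizer = torch.optim.Adam(model.parameters(), lr=1e-7,
                             betas=(0.9, 0.98), eps=1e-8)

def inverse_sqrt_lr(step: int, warmup: int = 4000, d_model: int = 1024) -> float:
    # Vaswani et al. (2017): linear warmup, then decay proportional to step^-0.5
    step = max(step, 1)
    return d_model ** -0.5 * min(step ** -0.5, step * warmup ** -1.5)
```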
{
"text": "Our evaluation first compares the accuracy of back-translation generation methods ( \u00a75.1) and analyzes the results ( \u00a75.2). Next, we simulate a low-resource setup to experiment further with different generation methods ( \u00a75.3). We also compare synthetic bitext to genuine parallel data and examine domain effects arising in back-translation ( \u00a75.4). We also measure the effect of upsampling bitext during training ( \u00a75.5). Finally, we scale to a very large setup of up to 226M monolingual sentences and compare to previous research ( \u00a75.6).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Results",
"sec_num": "5"
},
{
"text": "We first investigate different methods to generate synthetic source translations given a backtranslation model, i.e., a model trained in the reverse language direction (Section 5.1). We consider two types of MAP prediction: greedy search (greedy) and beam search with beam size 5 (beam). Non-MAP methods include unrestricted sampling from the model distribution (sampling), restricting sampling to the k highest scoring outputs at every time step with k = 10 (top10) as well as adding noise to the beam outputs (beam+noise). Restricted sampling is a middle-ground between Total training data BLEU (newstest2012) greedy beam top10 sampling beam+noise Figure 1 : Accuracy of models trained on different amounts of back-translated data obtained with greedy search, beam search (k = 5), randomly sampling from the model distribution, restricting sampling over the ten most likely words (top10), and by adding noise to the beam outputs (beam+noise). Results based on newstest2012 of WMT English-German translation.",
"cite_spans": [],
"ref_spans": [
{
"start": 650,
"end": 658,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic data generation methods",
"sec_num": "5.1"
},
{
"text": "beam search and unrestricted sampling, it is less likely to pick very low scoring outputs but still preserves some randomness. Preliminary experiments with top5, top20, top50 gave similar results to top10.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Synthetic data generation methods",
"sec_num": "5.1"
},
{
"text": "We also vary the amount of synthetic data and perform 30K updates during training for the bitext only, 50K updates when adding 3M synthetic sentences, 75K updates for 6M and 12M sentences and 100K updates for 24M sentences. For each setting, this corresponds to enough updates to reach convergence in terms of held-out loss. In our 128 GPU setup, training of the final models takes 3h 20min for the bitext only model, 7h 30min for 6M and 12M synthetic sentences, and 10h 15min for 24M sentences. During training we also sample the bitext more frequently than the synthetic data and we analyze the effect of this in more detail in \u00a75.5. Figure 1 shows that sampling and beam+noise outperform the MAP methods (pure beam search and greedy) by 0.8-1.1 BLEU. Sampling and beam+noise improve over bitext-only (5M) by between 1.7-2 BLEU in the largest data setting. Restricted sampling (top10) performs better than beam and greedy but is not as effective as unrestricted sampling (sampling) or beam+noise. test sets (newstest2013-2017). Sampling and beam+noise perform roughly equal and we adopt sampling for the remaining experiments.",
"cite_spans": [],
"ref_spans": [
{
"start": 636,
"end": 644,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Synthetic data generation methods",
"sec_num": "5.1"
},
{
"text": "The previous experiment showed that synthetic source sentences generated via sampling and beam with noise perform significantly better than those obtained by pure MAP methods. Why is this? Beam search focuses on very likely outputs which reduces the diversity and richness of the generated source translations. Adding noise to beam outputs and sampling do not have this problem: Noisy source sentences make it harder to predict the target translations which may help learning, similar to denoising autoencoders (Vincent et al., 2008) . Sampling is known to better approximate the data distribution which is richer than the argmax model outputs (Ott et al., 2018a fore, sampling is also more likely to provide a richer training signal than argmax sequences.",
"cite_spans": [
{
"start": 511,
"end": 533,
"text": "(Vincent et al., 2008)",
"ref_id": "BIBREF58"
},
{
"start": 644,
"end": 662,
"text": "(Ott et al., 2018a",
"ref_id": "BIBREF43"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
{
"text": "To get a better sense of the training signal provided by each method, we compare the loss on the training data for each method. We report the cross entropy loss averaged over all tokens and separate the loss over the synthetic data and the real bitext data. Specifically, we choose the setup with 24M synthetic sentences. At the end of each epoch we measure the loss over 500K sentence pairs sub-sampled from the synthetic data as well as an equally sized subset of the bitext. For each generation method we choose the same sentences except for the bitext which is disjoint from the synthetic data. This means that losses over the synthetic data are measured over the same target tokens because the generation methods only differ in the source sentences. We found it helpful to upsample the frequency with which we observe the bitext compared to the synthetic data ( \u00a75.5) but we do not upsample for this experiment to keep conditions as similar as possible. We assume that when the training loss is low, then the model can easily fit the training data without extracting much learning signal compared to data which is harder to fit. Figure 2 shows that synthetic data based on source Diese gegenstzlichen Auffassungen von Fairness liegen nicht nur der politischen Debatte zugrunde. reference These competing principles of fairness underlie not only the political debate. beam These conflicting interpretations of fairness are not solely based on the political debate. sample",
"cite_spans": [],
"ref_spans": [
{
"start": 1134,
"end": 1142,
"text": "Figure 2",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
{
"text": "greedy or beam is much easier to fit compared to data from sampling, top10, beam+noise and the bitext. In fact, the perplexity on beam data falls below 2 after only 5 epochs. Except for sampling, we find that the perplexity on the training data is somewhat correlated to the end-model accuracy (cf. Figure 1 ) and that all methods except sampling have a lower loss than real bitext. These results suggest that synthetic data obtained with argmax inference does not provide as rich a training signal as sampling or adding noise. We conjecture that the regularity of synthetic data obtained with argmax inference is not optimal. Sampling and noised argmax both expose the model to a wider range of source sentences which makes the model more robust to reordering and substitutions that happen naturally, even if the model of reordering and substitution through noising is not very realistic.",
"cite_spans": [],
"ref_spans": [
{
"start": 299,
"end": 307,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
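The per-token training loss used in this comparison can be computed roughly as follows (a sketch; `pad_idx = 1` assumes fairseq's default padding index):

```python
import torch.nn.functional as F

def avg_token_loss(logits, targets, pad_idx: int = 1) -> float:
    """Cross entropy averaged over non-padding target tokens.
    logits: (batch, time, vocab); targets: (batch, time)."""
    loss = F.cross_entropy(logits.view(-1, logits.size(-1)), targets.view(-1),
                           ignore_index=pad_idx, reduction="mean")
    return loss.item()
```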
{
"text": "Next we analyze the richness of synthetic outputs and train a language model on real human text and score synthetic source sentences generated by beam search, sampling, top10 and beam+noise. We hypothesize that data that is very regular should be more predictable by the language model and therefore receive low perplexity. We eliminate a possible domain mismatch effect between the language model training data and the synthetic data by splitting the parallel corpus into three nonoverlapping parts:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
{
"text": "1. On 640K sentences pairs, we train a backtranslation model, 2. On 4.1M sentence pairs, we take the source side and train a 5-gram Kneser-Ney language model (Heafield et al., 2013) , 3. On the remaining 450K sentences, we apply the back-translation system using beam, sampling and top10 generation.",
"cite_spans": [
{
"start": 158,
"end": 181,
"text": "(Heafield et al., 2013)",
"ref_id": "BIBREF25"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
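Scoring the synthetic sources with such a language model might look like this, using the KenLM Python bindings; the model path and token conventions are assumptions:

```python
import kenlm  # Python bindings for KenLM (Heafield et al., 2013)

lm = kenlm.Model("source_lm.arpa")  # hypothetical 5-gram Kneser-Ney model

def corpus_perplexity(sentences) -> float:
    """Perplexity over tokenized sentences; score() returns total log10 prob."""
    log10_prob, n_tokens = 0.0, 0
    for s in sentences:
        log10_prob += lm.score(s, bos=True, eos=True)
        n_tokens += len(s.split()) + 1  # +1 for the end-of-sentence token
    return 10 ** (-log10_prob / n_tokens)
```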
{
"text": "For the last set, we have genuine source sentences as well as synthetic sources from different generation techniques. We report the perplexity of our language model on all versions of the source data in Table 2 . The results show that beam outputs receive higher probability by the language model compared to sampling, beam+noise and real source sentences. This indicates that beam search outputs are not as rich as sampling outputs or beam+noise. This lack of variability probably explains in part why back-translations from pure beam search provide a weaker training signal than alternatives.",
"cite_spans": [],
"ref_spans": [
{
"start": 203,
"end": 210,
"text": "Table 2",
"ref_id": "TABREF2"
}
],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
{
"text": "Closer inspection of the synthetic sources (Table 3) reveals that sampled and noised beam outputs are sometimes not very adequate, much more so than MAP outputs, e.g., sampling often introduces target words which have no counterpart in the source. This happens because sampling sometimes picks highly unlikely outputs which are harder to fit (cf. Figure 2) .",
"cite_spans": [],
"ref_spans": [
{
"start": 347,
"end": 356,
"text": "Figure 2)",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Analysis of generation methods",
"sec_num": "5.2"
},
{
"text": "The experiments so far are based on a setup with a large bilingual corpus. However, in resource poor settings the back-translation model is of much lower quality. Are non-MAP methods still more effective in such a setup? To answer this question, we simulate such setups by sub-sampling the training data to either 80K sentence-pairs or 640K sentence-pairs and then add synthetic data from sampling and beam search. We compare these smaller setups to our original 5.2M sentence bitext configuration. The accuracy of the beam 5M sampling 5M Figure 3 : BLEU when adding synthetic data from beam and sampling to bitext systems with 80K, 640K and 5M sentence pairs.",
"cite_spans": [],
"ref_spans": [
{
"start": 539,
"end": 547,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Low resource vs. high resource setup",
"sec_num": "5.3"
},
{
"text": "German-English back-translation systems steadily increases with more training data: On new-stest2012 we measure 13.5 BLEU for 80K bitext, 24.3 BLEU for 640K and 28.3 BLEU for 5M. Figure 3 shows that sampling is more effective than beam for larger setups (640K and 5.2M bitexts) while the opposite is true for resource poor settings (80K bitext). This is likely because the back-translations in the 80K setup are of very poor quality and the noise of sampling and beam+noise is too detrimental for this brittle low-resource setting. When the setup is very small the very regular MAP outputs still provide useful training signal while the noise from sampling becomes harmful.",
"cite_spans": [],
"ref_spans": [
{
"start": 179,
"end": 187,
"text": "Figure 3",
"ref_id": null
}
],
"eq_spans": [],
"section": "Low resource vs. high resource setup",
"sec_num": "5.3"
},
{
"text": "Next, we turn to two different questions: How does real human bitext compare to synthetic data in terms of final model accuracy? And how does the domain of the monolingual data affect results?",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "To answer these questions, we subsample 640K sentence-pairs of the bitext and train a backtranslation system on this set. To train a forward model, we consider three alternative types of data to add to this 640K training set. We either add:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "\u2022 the remaining parallel data (bitext),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "\u2022 the back-translated target side of the remaining parallel data (BT-bitext),",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "\u2022 back-translated newscrawl data (BT-news).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "The back-translated data is generated via sampling. This setup allows us to compare synthetic data to genuine data since BT-bitext and bitext share the same target side. It also allows us to estimate the value of BT data for domain adaptation since the newscrawl corpus (BT-news) is pure news whereas the bitext is a mixture of europarl and commoncrawl with only a small newscommentary portion. To assess domain adaptation effects, we measure accuracy on two held-out sets:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "\u2022 newstest2012, i.e. pure newswire data.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "\u2022 a held-out set of the WMT training data (valid-mixed), which is a mixture of europarl, commoncrawl and the small newscommentary portion. Figure 4 shows the results on both validation sets. Most strikingly, BT-news performs almost as well as bitext on newstest2012 ( Figure 4a ) and improves the baseline (640K) by 2.6 BLEU. BTbitext improves by 2.2 BLEU, achieving 83% of the improvement with real bitext. This shows that synthetic data can be nearly as effective as real human translated data when the domains match. Figure 4b shows the accuracy on valid-mixed, the mixed domain valid set. The accuracy of BTnews is not as good as before since the domain of the BT data and the test set do not match. However, BT-news still improves the baseline by up to 1.2 BLEU. On the other hand, BT-bitext matches the domain of valid-mixed and improves by 2.7 BLEU. This trails the real bitext by only 1.3 BLEU and corresponds to 67% of the gain achieved with real human bitext.",
"cite_spans": [],
"ref_spans": [
{
"start": 139,
"end": 147,
"text": "Figure 4",
"ref_id": null
},
{
"start": 268,
"end": 277,
"text": "Figure 4a",
"ref_id": null
},
{
"start": 520,
"end": 529,
"text": "Figure 4b",
"ref_id": null
}
],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "In summary, synthetic data performs remarkably well, coming close to the improvements achieved with real bitext for newswire test data, or trailing real bitext by only 1.3 BLEU for validmixed. In absence of a large parallel corpus for news, back-translation therefore offers a simple, yet very effective domain adaptation technique.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Domain of synthetic data",
"sec_num": "5.4"
},
{
"text": "We found it beneficial to adjust the ratio of bitext to synthetic data observed during training. In particular, we tuned the rate at which we sample data from the bitext compared to synthetic data. For example, in a setup of 5M bitext sentences and 10M synthetic sentences, an upsampling rate of 2 means that we double the frequency at which we Figure 4 : Accuracy on (a) newstest2012 and (b) a mixed domain valid set when growing a 640K bitext corpus with (i) real parallel data (bitext), (ii) a back-translated version of the target side of the bitext (BT-bitext), (iii) or back-translated newscrawl data (BT-news).",
"cite_spans": [],
"ref_spans": [
{
"start": 345,
"end": 353,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Upsampling the bitext",
"sec_num": "5.5"
},
{
"text": "visit bitext, i.e. training batches contain on average an equal amount of bitext and synthetic data as opposed to 1/3 bitext and 2/3 synthetic data. Figure 5 shows the accuracy of various upsampling rates for different generation methods in a setup with 5M bitext sentences and 24M synthetic sentences. Beam and greedy benefit a lot from higher rates which results in training more on the bitext data. This is likely because synthetic beam and greedy data does not provide as much training signal as the bitext which has more variation and is harder to fit. On the other hand, sampling and beam+noise require no upsampling of the bitext, which is likely because the synthetic data is already hard enough to fit and thus provides a strong training signal ( \u00a75.2).",
"cite_spans": [],
"ref_spans": [
{
"start": 149,
"end": 157,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Upsampling the bitext",
"sec_num": "5.5"
},
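A sketch of what an upsampling rate means operationally, with hypothetical corpus sizes; the actual sampling lives inside the training data loader:

```python
import random

def pick_corpus(n_bitext: int, n_synth: int, rate: float) -> str:
    """Choose which corpus the next example comes from: `rate` multiplies
    the bitext's natural share of the combined training corpus."""
    w_bitext = rate * n_bitext
    return "bitext" if random.random() < w_bitext / (w_bitext + n_synth) else "synthetic"

# With 5M bitext, 10M synthetic and rate=2, batches average an equal mix of
# bitext and synthetic data instead of 1/3 bitext and 2/3 synthetic.
```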
{
"text": "To confirm our findings we experiment on WMT'14 English-French translation where we show results on newstest2013-2015. We augment the large bitext of 35.7M sentence pairs by 31M newscrawl sentences generated by sampling. To train this system we perform 300K training updates in 27h 40min on 128 GPUs; we do not upsample the bitext for this experiment. Table 4 shows tokenized BLEU and Figure 5 : Accuracy when changing the rate at which the bitext is upsampled during training. Rates larger than one mean that the bitext is observed more often than actually present in the combined bitext and synthetic training corpus.",
"cite_spans": [],
"ref_spans": [
{
"start": 352,
"end": 359,
"text": "Table 4",
"ref_id": "TABREF6"
},
{
"start": 385,
"end": 393,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Large scale results",
"sec_num": "5.6"
},
{
"text": "is the best reported result in the literature for new-stest2014, and back-translation further improves upon this by 2.6 BLEU (tokenized).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Large scale results",
"sec_num": "5.6"
},
{
"text": "Finally, for WMT English-German we train on all 226M available monolingual training sentences and perform 250K updates in 22.5 hours on 128 GPUs. We upsample the bitext with a rate of 16 so that we observe every bitext sentence 16 times more often than each monolingual sentence. This results in a new state of the art of 35 BLEU on newstest2014 by using only WMT benchmark data. For comparison, DeepL, a commercial translation engine relying on high quality bilingual training data, achieves 33.3 tokenized BLEU . 4 Table 6 summarizes our results and compares to other work in the literature. This shows that back-translation with sampling can result in high-quality translation models based on benchmark data only.",
"cite_spans": [],
"ref_spans": [
{
"start": 517,
"end": 524,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Large scale results",
"sec_num": "5.6"
},
{
"text": "Back-translation is a very effective data augmentation technique for neural machine translation. Generating synthetic sources by sampling or by adding noise to beam outputs leads to higher accuracy than argmax inference which is typically used. In particular, sampling and noised beam outperforms pure beam by 1.7 BLEU on average on newstest2013-2017 for WMT English-German translation. Both methods provide a richer training signal for all but resource poor setups. We also find that synthetic data can achieve up to 83% of the performance attainable with real bitext. Finally, we achieve a new state of the art result of 35 BLEU on the WMT'14 English-German test set by using publicly available benchmark data only.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "6"
},
{
"text": "In future work, we would like to investigate an end-to-end approach where the back-translation model is optimized to output synthetic sources that are most helpful to the final forward model. En-De En-Fr a. Gehring et al. (2017) 25.2 40.5 b. Vaswani et al. (2017) 28.4 41.0 c. Ahmed et al. (2017) 28.9 41.4 d. Shaw et al. (2018) 29.2 41.5 DeepL 33.3 45.9 Our result 35.0 45.6 detok. sacreBLEU 3 33.8 43.8 Table 6 : BLEU on newstest2014 for WMT English-German (En-De) and English-French (En-Fr). The first four results use only WMT bitext (WMT'14, except for b, c, d in En-De which train on WMT'16). DeepL uses proprietary high-quality bitext and our result relies on back-translation with 226M newscrawl sentences for En-De and 31M for En-Fr. We also show detokenized BLEU (SacreBLEU).",
"cite_spans": [
{
"start": 207,
"end": 228,
"text": "Gehring et al. (2017)",
"ref_id": "BIBREF15"
},
{
"start": 242,
"end": 263,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF57"
},
{
"start": 277,
"end": 296,
"text": "Ahmed et al. (2017)",
"ref_id": "BIBREF0"
},
{
"start": 310,
"end": 328,
"text": "Shaw et al. (2018)",
"ref_id": "BIBREF54"
}
],
"ref_spans": [
{
"start": 405,
"end": 412,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Conclusions and future work",
"sec_num": "6"
},
{
"text": "sacreBLEU signatures: BLEU+case.mixed+lang.en-LANG+numrefs.1+smooth.exp+test.wmt14/full+ tok.13a+version.1.2.7 with LANG \u2208 {de,fr}",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
{
"text": "https://www.deepl.com/press.html",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Weighted transformer network for machine translation",
"authors": [
{
"first": "Karim",
"middle": [],
"last": "Ahmed",
"suffix": ""
},
{
"first": "Nitish",
"middle": [],
"last": "Shirish Keskar",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Karim Ahmed, Nitish Shirish Keskar, and Richard Socher. 2017. Weighted transformer network for machine translation. arxiv, 1711.02132.",
"links": null
},
"BIBREF1": {
"ref_id": "b1",
"title": "Data augmentation generative adversarial networks. arXiv",
"authors": [
{
"first": "Antreas",
"middle": [],
"last": "Antoniou",
"suffix": ""
},
{
"first": "Amos",
"middle": [
"J"
],
"last": "Storkey",
"suffix": ""
},
{
"first": "Harrison",
"middle": [],
"last": "",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Antreas Antoniou, Amos J. Storkey, and Harrison Ed- wards. 2017. Data augmentation generative adver- sarial networks. arXiv, abs/1711.04340.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In International Con- ference on Learning Representations (ICLR).",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Domain adaptation for statistical machine translation with monolingual resources",
"authors": [
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
}
],
"year": 2009,
"venue": "Workshop on Statistical Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nicola Bertoldi and Marcello Federico. 2009. Domain adaptation for statistical machine translation with monolingual resources. In Workshop on Statistical Machine Translation (WMT).",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Improving translation model by monolingual data",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Ales",
"middle": [],
"last": "Tamchyna",
"suffix": ""
}
],
"year": 2011,
"venue": "Workshop on Statistical Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bojar and Ales Tamchyna. 2011. Improving translation model by monolingual data. In Workshop on Statistical Machine Translation (WMT).",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "Large language models in machine translation",
"authors": [
{
"first": "Thorsten",
"middle": [],
"last": "Brants",
"suffix": ""
},
{
"first": "Ashok",
"middle": [
"C"
],
"last": "Popat",
"suffix": ""
},
{
"first": "Peng",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2007,
"venue": "Conference on Natural Language Learning (CoNLL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thorsten Brants, Ashok C. Popat, Peng Xu, Franz Josef Och, and Jeffrey Dean. 2007. Large language mod- els in machine translation. In Conference on Natural Language Learning (CoNLL).",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "A statistical approach to machine translation",
"authors": [
{
"first": "Peter",
"middle": [
"F"
],
"last": "Brown",
"suffix": ""
},
{
"first": "John",
"middle": [],
"last": "Cocke",
"suffix": ""
},
{
"first": "Stephen",
"middle": [
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Vincent",
"middle": [
"J",
"Della"
],
"last": "Pietra",
"suffix": ""
},
{
"first": "Frederick",
"middle": [],
"last": "Jelinek",
"suffix": ""
},
{
"first": "John",
"middle": [
"D"
],
"last": "Lafferty",
"suffix": ""
},
{
"first": "Robert",
"middle": [
"L"
],
"last": "Mercer",
"suffix": ""
},
{
"first": "Paul",
"middle": [
"S"
],
"last": "Roossin",
"suffix": ""
}
],
"year": 1990,
"venue": "Computational Linguistics",
"volume": "16",
"issue": "",
"pages": "79--85",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter F. Brown, John Cocke, Stephen Della Pietra, Vin- cent J. Della Pietra, Frederick Jelinek, John D. Laf- ferty, Robert L. Mercer, and Paul S. Roossin. 1990. A statistical approach to machine translation. Com- putational Linguistics, 16:79-85.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Semisupervised learning for neural machine translation",
"authors": [
{
"first": "Yong",
"middle": [],
"last": "Cheng",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Maosong",
"middle": [],
"last": "Sun",
"suffix": ""
},
{
"first": "Yang",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yong Cheng, Wei Xu, Zhongjun He, Wei He, Hua Wu, Maosong Sun, and Yang Liu. 2016. Semi- supervised learning for neural machine translation. In Conference of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merrienboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Explaining and generalizing back-translation through wakesleep",
"authors": [
{
"first": "Ryan",
"middle": [],
"last": "Cotterell",
"suffix": ""
},
{
"first": "Julia",
"middle": [],
"last": "Kreutzer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1806.04402"
]
},
"num": null,
"urls": [],
"raw_text": "Ryan Cotterell and Julia Kreutzer. 2018. Explain- ing and generalizing back-translation through wake- sleep. arXiv preprint arXiv:1806.04402.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Copied Monolingual Data Improves Low-Resource Neural Machine Translation",
"authors": [
{
"first": "Anna",
"middle": [],
"last": "Currey",
"suffix": ""
},
{
"first": "Antonio",
"middle": [
"Valerio"
],
"last": "Miceli Barone",
"suffix": ""
},
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
}
],
"year": 2017,
"venue": "Proc. of WMT",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Anna Currey, Antonio Valerio Miceli Barone, and Ken- neth Heafield. 2017. Copied Monolingual Data Im- proves Low-Resource Neural Machine Translation. In Proc. of WMT.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Using targetside monolingual data for neural machine translation through multi-task learning",
"authors": [
{
"first": "Tobias",
"middle": [],
"last": "Domhan",
"suffix": ""
},
{
"first": "Felix",
"middle": [],
"last": "Hieber",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Tobias Domhan and Felix Hieber. 2017. Using target- side monolingual data for neural machine transla- tion through multi-task learning. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Hierarchical neural story generation",
"authors": [
{
"first": "Angela",
"middle": [],
"last": "Fan",
"suffix": ""
},
{
"first": "Yann",
"middle": [],
"last": "Dauphin",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Lewis",
"suffix": ""
}
],
"year": 2018,
"venue": "Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Angela Fan, Yann Dauphin, and Mike Lewis. 2018. Hierarchical neural story generation. In Confer- ence of the Association for Computational Linguis- tics (ACL).",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Multi-way, multilingual neural machine translation with a shared attention mechanism",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Kyunghyun Cho, and Yoshua Ben- gio. 2016a. Multi-way, multilingual neural ma- chine translation with a shared attention mecha- nism. In Conference of the North American Chap- ter of the Association for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Zero-resource translation with multi-lingual neural machine translation",
"authors": [
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Baskaran",
"middle": [],
"last": "Sankaran",
"suffix": ""
},
{
"first": "Yaser",
"middle": [],
"last": "Al-Onaizan",
"suffix": ""
},
{
"first": "Fatos",
"middle": [
"T"
],
"last": "Yarman-Vural",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Empirical Methods in Natural Language Processing",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Orhan Firat, Baskaran Sankaran, Yaser Al-Onaizan, Fatos T. Yarman-Vural, and Kyunghyun Cho. 2016b. Zero-resource translation with multi-lingual neu- ral machine translation. In Conference on Em- pirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference of Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning. In International Conference of Machine Learning (ICML).",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Generating sequences with recurrent neural networks",
"authors": [
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
}
],
"year": 2013,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alex Graves. 2013. Generating sequences with recur- rent neural networks. arXiv, 1308.0850.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Universal neural machine translation for extremely low resource languages",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "Hany",
"middle": [],
"last": "Hassan",
"suffix": ""
},
{
"first": "Jacob",
"middle": [],
"last": "Devlin",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, Hany Hassan, Jacob Devlin, and Victor O. K. Li. 2018. Universal neural machine transla- tion for extremely low resource languages. arXiv, 1802.05368.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On using monolingual corpora in neural machine translation. arXiv",
"authors": [
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Loic",
"middle": [],
"last": "Barrault",
"suffix": ""
},
{
"first": "Huei-Chi",
"middle": [],
"last": "Lin",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Loic Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On us- ing monolingual corpora in neural machine transla- tion. arXiv, 1503.03535.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "On integrating a language model into neural machine translation",
"authors": [
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Orhan",
"middle": [],
"last": "Firat",
"suffix": ""
},
{
"first": "Kelvin",
"middle": [],
"last": "Xu",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2017,
"venue": "Computer Speech & Language",
"volume": "45",
"issue": "",
"pages": "137--148",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Caglar Gulcehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, and Yoshua Bengio. 2017. On integrating a language model into neural machine translation. Computer Speech & Language, 45:137-148.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Toward multilingual neural machine translation with universal encoder and decoder",
"authors": [
{
"first": "Thanh-Le",
"middle": [],
"last": "Ha",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Niehues",
"suffix": ""
},
{
"first": "Alexander",
"middle": [
"H"
],
"last": "Waibel",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Thanh-Le Ha, Jan Niehues, and Alexander H. Waibel. 2016. Toward multilingual neural machine trans- lation with universal encoder and decoder. arXiv, 1611.04798.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmentation",
"authors": [
{
"first": "Soren",
"middle": [],
"last": "Hauberg",
"suffix": ""
},
{
"first": "Oren",
"middle": [],
"last": "Freifeld",
"suffix": ""
},
{
"first": "Anders",
"middle": [
"Boesen",
"Lindbo"
],
"last": "Larsen",
"suffix": ""
},
{
"first": "John",
"middle": [
"W"
],
"last": "Fisher",
"suffix": ""
},
{
"first": "Lars",
"middle": [
"Kai"
],
"last": "Hansen",
"suffix": ""
}
],
"year": 2016,
"venue": "AISTATS",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Soren Hauberg, Oren Freifeld, Anders Boesen Lindbo Larsen, John W. Fisher, and Lars Kai Hansen. 2016. Dreaming more data: Class-dependent distributions over diffeomorphisms for learned data augmenta- tion. In AISTATS.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Dual learning for machine translation",
"authors": [
{
"first": "Di",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Liwei",
"middle": [],
"last": "Wang",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tieyan",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Wei-Ying",
"middle": [],
"last": "Ma",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Di He, Yingce Xia, Tao Qin, Liwei Wang, Nenghai Yu, Tieyan Liu, and Wei-Ying Ma. 2016a. Dual learning for machine translation. In Conference on Advances in Neural Information Processing Systems (NIPS).",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Improved neural machine translation with smt features",
"authors": [
{
"first": "Wei",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Zhongjun",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Hua",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Haifeng",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the Association for the Advancement of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "151--157",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Wei He, Zhongjun He, Hua Wu, and Haifeng Wang. 2016b. Improved neural machine translation with smt features. In Conference of the Association for the Advancement of Artificial Intelligence (AAAI), pages 151-157.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Scalable Modified Kneser-Ney Language Model Estimation",
"authors": [
{
"first": "Kenneth",
"middle": [],
"last": "Heafield",
"suffix": ""
},
{
"first": "Ivan",
"middle": [],
"last": "Pouzyrevsky",
"suffix": ""
},
{
"first": "Jonathan",
"middle": [
"H"
],
"last": "Clark",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2013,
"venue": "Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenneth Heafield, Ivan Pouzyrevsky, Jonathan H. Clark, and Philipp Koehn. 2013. Scalable Modified Kneser-Ney Language Model Estimation. In Con- ference of the Association for Computational Lin- guistics (ACL).",
"links": null
},
"BIBREF26": {
"ref_id": "b26",
"title": "Learning distributed representations of sentences from unlabelled data",
"authors": [
{
"first": "Felix",
"middle": [],
"last": "Hill",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Anna",
"middle": [],
"last": "Korhonen",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Felix Hill, Kyunghyun Cho, and Anna Korhonen. 2016. Learning distributed representations of sentences from unlabelled data. In Conference of the North American Chapter of the Association for Computa- tional Linguistics (NAACL).",
"links": null
},
"BIBREF27": {
"ref_id": "b27",
"title": "Iterative backtranslation for neural machine translation",
"authors": [
{
"first": "Vu",
"middle": [
"Cong",
"Duy"
],
"last": "Hoang",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Gholamreza",
"middle": [],
"last": "Haffari",
"suffix": ""
},
{
"first": "Trevor",
"middle": [],
"last": "Cohn",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "18--24",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Vu Cong Duy Hoang, Philipp Koehn, Gholamreza Haffari, and Trevor Cohn. 2018. Iterative back- translation for neural machine translation. In Pro- ceedings of the 2nd Workshop on Neural Machine Translation and Generation, pages 18-24.",
"links": null
},
"BIBREF28": {
"ref_id": "b28",
"title": "Enhancement of encoder and attention using target monolingual corpora in neural machine translation",
"authors": [
{
"first": "Kenji",
"middle": [],
"last": "Imamura",
"suffix": ""
},
{
"first": "Atsushi",
"middle": [],
"last": "Fujita",
"suffix": ""
},
{
"first": "Eiichiro",
"middle": [],
"last": "Sumita",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 2nd Workshop on Neural Machine Translation and Generation",
"volume": "",
"issue": "",
"pages": "55--63",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kenji Imamura, Atsushi Fujita, and Eiichiro Sumita. 2018. Enhancement of encoder and attention using target monolingual corpora in neural machine trans- lation. In Proceedings of the 2nd Workshop on Neu- ral Machine Translation and Generation, pages 55- 63.",
"links": null
},
"BIBREF29": {
"ref_id": "b29",
"title": "Google's multilingual neural machine translation system: Enabling zero-shot translation",
"authors": [
{
"first": "Melvin",
"middle": [],
"last": "Johnson",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Nikhil",
"middle": [],
"last": "Thorat",
"suffix": ""
},
{
"first": "Fernanda",
"middle": [
"B"
],
"last": "Vi\u00e9gas",
"suffix": ""
},
{
"first": "Martin",
"middle": [],
"last": "Wattenberg",
"suffix": ""
},
{
"first": "Gregory",
"middle": [
"S"
],
"last": "Corrado",
"suffix": ""
},
{
"first": "Macduff",
"middle": [],
"last": "Hughes",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2017,
"venue": "Transactions of the Association for Computational Linguistics (TACL)",
"volume": "5",
"issue": "",
"pages": "339--351",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Melvin Johnson, Mike Schuster, Quoc V. Le, Maxim Krikun, Yonghui Wu, Zhifeng Chen, Nikhil Tho- rat, Fernanda B. Vi\u00e9gas, Martin Wattenberg, Gre- gory S. Corrado, Macduff Hughes, and Jeffrey Dean. 2017. Google's multilingual neural machine transla- tion system: Enabling zero-shot translation. Trans- actions of the Association for Computational Lin- guistics (TACL), 5:339-351.",
"links": null
},
"BIBREF30": {
"ref_id": "b30",
"title": "Depthwise separable convolutions for neural machine translation",
"authors": [
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Fran\u00e7ois",
"middle": [],
"last": "Chollet",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Lukasz Kaiser, Aidan N. Gomez, and Fran\u00e7ois Chollet. 2017. Depthwise separable convolutions for neural machine translation. CoRR, abs/1706.03059.",
"links": null
},
"BIBREF31": {
"ref_id": "b31",
"title": "A\u00e4ron van den Oord, Alex Graves, and Koray Kavukcuoglu",
"authors": [
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Lasse",
"middle": [],
"last": "Espeholt",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "A\u00e4ron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "CoRR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, A\u00e4ron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural machine translation in linear time. CoRR, abs/1610.10099.",
"links": null
},
"BIBREF32": {
"ref_id": "b32",
"title": "Neural machine translation for low-resource languages without parallel corpora",
"authors": [
{
"first": "Alina",
"middle": [],
"last": "Karakanta",
"suffix": ""
},
{
"first": "Jon",
"middle": [],
"last": "Dehdari",
"suffix": ""
},
{
"first": "Josef",
"middle": [],
"last": "Van Genabith",
"suffix": ""
}
],
"year": 2017,
"venue": "Machine Translation",
"volume": "",
"issue": "",
"pages": "1--23",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alina Karakanta, Jon Dehdari, and Josef van Genabith. 2017. Neural machine translation for low-resource languages without parallel corpora. Machine Trans- lation, pages 1-23.",
"links": null
},
"BIBREF33": {
"ref_id": "b33",
"title": "Adam: A Method for Stochastic Optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "International Conference on Learning Representations (ICLR",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In Inter- national Conference on Learning Representations (ICLR).",
"links": null
},
"BIBREF34": {
"ref_id": "b34",
"title": "Statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
}
],
"year": 2010,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn. 2010. Statistical machine translation. Cambridge University Press.",
"links": null
},
"BIBREF35": {
"ref_id": "b35",
"title": "Moses: Open source toolkit for statistical machine translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Hoang",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Callison-Burch",
"suffix": ""
},
{
"first": "Marcello",
"middle": [],
"last": "Federico",
"suffix": ""
},
{
"first": "Nicola",
"middle": [],
"last": "Bertoldi",
"suffix": ""
},
{
"first": "Brooke",
"middle": [],
"last": "Cowan",
"suffix": ""
},
{
"first": "Wade",
"middle": [],
"last": "Shen",
"suffix": ""
},
{
"first": "Christine",
"middle": [],
"last": "Moran",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Zens",
"suffix": ""
},
{
"first": "Chris",
"middle": [],
"last": "Dyer",
"suffix": ""
},
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Constantin",
"suffix": ""
},
{
"first": "Evan",
"middle": [],
"last": "Herbst",
"suffix": ""
}
],
"year": 2007,
"venue": "ACL Demo Session",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondrej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In ACL Demo Session.",
"links": null
},
"BIBREF36": {
"ref_id": "b36",
"title": "Statistical phrase-based translation",
"authors": [
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Franz",
"middle": [
"Josef"
],
"last": "Och",
"suffix": ""
},
{
"first": "Daniel",
"middle": [],
"last": "Marcu",
"suffix": ""
}
],
"year": 2003,
"venue": "Conference of the North American Chapter of the Association for Computational Linguistics (NAACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Philipp Koehn, Franz Josef Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Con- ference of the North American Chapter of the Asso- ciation for Computational Linguistics (NAACL).",
"links": null
},
"BIBREF37": {
"ref_id": "b37",
"title": "Investigations on translation model adaptation using monolingual data",
"authors": [
{
"first": "Patrik",
"middle": [],
"last": "Lambert",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Christophe",
"middle": [],
"last": "Servan",
"suffix": ""
},
{
"first": "Sadaf",
"middle": [],
"last": "Abdul-Rauf",
"suffix": ""
}
],
"year": 2011,
"venue": "Workshop on Statistical Machine Translation (WMT)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Patrik Lambert, Holger Schwenk, Christophe Ser- van, and Sadaf Abdul-Rauf. 2011. Investigations on translation model adaptation using monolingual data. In Workshop on Statistical Machine Transla- tion (WMT).",
"links": null
},
"BIBREF38": {
"ref_id": "b38",
"title": "Unsupervised machine translation using monolingual corpora only",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018a. Unsupervised machine translation using monolingual corpora only. In International Conference on Learning Represen- tations (ICLR).",
"links": null
},
"BIBREF39": {
"ref_id": "b39",
"title": "Phrase-based & neural unsupervised machine translation",
"authors": [
{
"first": "Guillaume",
"middle": [],
"last": "Lample",
"suffix": ""
},
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Alexis",
"middle": [],
"last": "Conneau",
"suffix": ""
},
{
"first": "Ludovic",
"middle": [],
"last": "Denoyer",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Guillaume Lample, Myle Ott, Alexis Conneau, Lu- dovic Denoyer, and Marc'Aurelio Ranzato. 2018b. Phrase-based & neural unsupervised machine trans- lation. arXiv, 1803.05567.",
"links": null
},
"BIBREF40": {
"ref_id": "b40",
"title": "2012. langid. py: An off-the-shelf language identification tool",
"authors": [
{
"first": "Marco",
"middle": [],
"last": "Lui",
"suffix": ""
},
{
"first": "Timothy",
"middle": [],
"last": "Baldwin",
"suffix": ""
}
],
"year": null,
"venue": "Proceedings of the ACL 2012 system demonstrations",
"volume": "",
"issue": "",
"pages": "25--30",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Marco Lui and Timothy Baldwin. 2012. langid. py: An off-the-shelf language identification tool. In Pro- ceedings of the ACL 2012 system demonstrations, pages 25-30. Association for Computational Lin- guistics.",
"links": null
},
"BIBREF41": {
"ref_id": "b41",
"title": "Effective approaches to attentionbased neural machine translation",
"authors": [
{
"first": "Minh-Thang",
"middle": [],
"last": "Luong",
"suffix": ""
},
{
"first": "Hieu",
"middle": [],
"last": "Pham",
"suffix": ""
},
{
"first": "Christopher D",
"middle": [],
"last": "Manning",
"suffix": ""
}
],
"year": 2015,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015. Effective approaches to attention- based neural machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
},
"BIBREF42": {
"ref_id": "b42",
"title": "Bi-directional neural machine translation with synthetic parallel data",
"authors": [
{
"first": "Xing",
"middle": [],
"last": "Niu",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Denkowski",
"suffix": ""
},
{
"first": "Marine",
"middle": [],
"last": "Carpuat",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1805.11213"
]
},
"num": null,
"urls": [],
"raw_text": "Xing Niu, Michael Denkowski, and Marine Carpuat. 2018. Bi-directional neural machine transla- tion with synthetic parallel data. arXiv preprint arXiv:1805.11213.",
"links": null
},
"BIBREF43": {
"ref_id": "b43",
"title": "Analyzing uncertainty in neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Marc'aurelio",
"middle": [],
"last": "Ranzato",
"suffix": ""
}
],
"year": 2018,
"venue": "Proceedings of the 35th International Conference on Machine Learning",
"volume": "80",
"issue": "",
"pages": "3956--3965",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Michael Auli, David Grangier, and Marc'Aurelio Ranzato. 2018a. Analyzing uncer- tainty in neural machine translation. In Proceed- ings of the 35th International Conference on Ma- chine Learning, volume 80, pages 3956-3965.",
"links": null
},
"BIBREF44": {
"ref_id": "b44",
"title": "Scaling neural machine translation",
"authors": [
{
"first": "Myle",
"middle": [],
"last": "Ott",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Edunov",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Myle Ott, Sergey Edunov, David Grangier, and Michael Auli. 2018b. Scaling neural machine trans- lation.",
"links": null
},
"BIBREF45": {
"ref_id": "b45",
"title": "BLEU: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Conference of the Association for Computational Linguistics (ACL).",
"links": null
},
"BIBREF46": {
"ref_id": "b46",
"title": "A deep reinforced model for abstractive summarization",
"authors": [
{
"first": "Romain",
"middle": [],
"last": "Paulus",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2018,
"venue": "International Conference on Learning Representations (ICLR)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Romain Paulus, Caiming Xiong, and Richard Socher. 2018. A deep reinforced model for abstractive sum- marization. In International Conference on Learn- ing Representations (ICLR).",
"links": null
},
"BIBREF47": {
"ref_id": "b47",
"title": "Regularizing neural networks by penalizing confident output distributions",
"authors": [
{
"first": "Gabriel",
"middle": [],
"last": "Pereyra",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Tucker",
"suffix": ""
},
{
"first": "Jan",
"middle": [],
"last": "Chorowski",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [
"E"
],
"last": "Hinton",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Learning Representations (ICLR) Workshop",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Gabriel Pereyra, George Tucker, Jan Chorowski, Lukasz Kaiser, and Geoffrey E. Hinton. 2017. Reg- ularizing neural networks by penalizing confident output distributions. In International Conference on Learning Representations (ICLR) Workshop.",
"links": null
},
"BIBREF48": {
"ref_id": "b48",
"title": "The effectiveness of data augmentation in image classification using deep learning",
"authors": [
{
"first": "Luis",
"middle": [],
"last": "Perez",
"suffix": ""
},
{
"first": "Jason",
"middle": [],
"last": "Wang",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Luis Perez and Jason Wang. 2017. The effectiveness of data augmentation in image classification using deep learning. arxiv, 1712.04621.",
"links": null
},
"BIBREF49": {
"ref_id": "b49",
"title": "Gideon Maillette de Buy Wenniger, and Peyman Passban",
"authors": [
{
"first": "Alberto",
"middle": [],
"last": "Poncelas",
"suffix": ""
},
{
"first": "Dimitar",
"middle": [
"Sht."
],
"last": "Shterionov",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Way",
"suffix": ""
},
{
"first": "Gideon",
"middle": [],
"last": "Maillette De Buy Wenniger",
"suffix": ""
},
{
"first": "Peyman",
"middle": [],
"last": "Passban",
"suffix": ""
}
],
"year": 2018,
"venue": "Investigating backtranslation in neural machine translation",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Alberto Poncelas, Dimitar Sht. Shterionov, Andy Way, Gideon Maillette de Buy Wenniger, and Peyman Passban. 2018. Investigating backtranslation in neu- ral machine translation. arXiv, 1804.06189.",
"links": null
},
"BIBREF50": {
"ref_id": "b50",
"title": "A call for clarity in reporting bleu scores",
"authors": [
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Matt Post. 2018. A call for clarity in reporting bleu scores. arXiv, 1804.08771.",
"links": null
},
"BIBREF51": {
"ref_id": "b51",
"title": "Improving neural machine translation models with monolingual data",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation mod- els with monolingual data. Conference of the Asso- ciation for Computational Linguistics (ACL).",
"links": null
},
"BIBREF52": {
"ref_id": "b52",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the Association for Computational Linguistics (ACL)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Conference of the Associa- tion for Computational Linguistics (ACL).",
"links": null
},
"BIBREF53": {
"ref_id": "b53",
"title": "Building end-to-end dialogue systems using generative hierarchical neural network models",
"authors": [
{
"first": "Iulian",
"middle": [],
"last": "Serban",
"suffix": ""
},
{
"first": "Alessandro",
"middle": [],
"last": "Sordoni",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Aaron",
"middle": [
"C"
],
"last": "Courville",
"suffix": ""
},
{
"first": "Joelle",
"middle": [],
"last": "Pineau",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference of the Association for the Advancement of Artificial Intelligence (AAAI)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Iulian Serban, Alessandro Sordoni, Yoshua Bengio, Aaron C. Courville, and Joelle Pineau. 2016. Build- ing end-to-end dialogue systems using generative hi- erarchical neural network models. In Conference of the Association for the Advancement of Artificial In- telligence (AAAI).",
"links": null
},
"BIBREF54": {
"ref_id": "b54",
"title": "Self-attention with relative position representations",
"authors": [
{
"first": "Peter",
"middle": [],
"last": "Shaw",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
}
],
"year": 2018,
"venue": "Proc. of NAACL",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. In Proc. of NAACL.",
"links": null
},
"BIBREF55": {
"ref_id": "b55",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Conference on Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural net- works. In Conference on Advances in Neural In- formation Processing Systems (NIPS).",
"links": null
},
"BIBREF56": {
"ref_id": "b56",
"title": "Rethinking the Inception Architecture for Computer Vision",
"authors": [
{
"first": "Christian",
"middle": [],
"last": "Szegedy",
"suffix": ""
},
{
"first": "Vincent",
"middle": [],
"last": "Vanhoucke",
"suffix": ""
},
{
"first": "Sergey",
"middle": [],
"last": "Ioffe",
"suffix": ""
},
{
"first": "Jonathon",
"middle": [],
"last": "Shlens",
"suffix": ""
},
{
"first": "Zbigniew",
"middle": [],
"last": "Wojna",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1512.00567"
]
},
"num": null,
"urls": [],
"raw_text": "Christian Szegedy, Vincent Vanhoucke, Sergey Ioffe, Jonathon Shlens, and Zbigniew Wojna. 2015. Re- thinking the Inception Architecture for Computer Vision. arXiv preprint arXiv:1512.00567.",
"links": null
},
"BIBREF57": {
"ref_id": "b57",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "Conference on Advances in Neural Information Processing Systems (NIPS)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Conference on Advances in Neural In- formation Processing Systems (NIPS).",
"links": null
},
"BIBREF58": {
"ref_id": "b58",
"title": "Extracting and composing robust features with denoising autoencoders",
"authors": [
{
"first": "Pascal",
"middle": [],
"last": "Vincent",
"suffix": ""
},
{
"first": "Hugo",
"middle": [],
"last": "Larochelle",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Pierre-Antoine",
"middle": [],
"last": "Manzagol",
"suffix": ""
}
],
"year": 2008,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Pascal Vincent, Hugo Larochelle, Yoshua Bengio, , and Pierre-Antoine Manzagol. 2008. Extracting and composing robust features with denoising autoen- coders. In International Conference on Machine Learning (ICML).",
"links": null
},
"BIBREF59": {
"ref_id": "b59",
"title": "Dual supervised learning",
"authors": [
{
"first": "Yingce",
"middle": [],
"last": "Xia",
"suffix": ""
},
{
"first": "Tao",
"middle": [],
"last": "Qin",
"suffix": ""
},
{
"first": "Wei",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Jiang",
"middle": [],
"last": "Bian",
"suffix": ""
},
{
"first": "Nenghai",
"middle": [],
"last": "Yu",
"suffix": ""
},
{
"first": "Tie-Yan",
"middle": [],
"last": "Liu",
"suffix": ""
}
],
"year": 2017,
"venue": "International Conference on Machine Learning (ICML)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yingce Xia, Tao Qin, Wei Chen, Jiang Bian, Nenghai Yu, and Tie-Yan Liu. 2017. Dual supervised learn- ing. In International Conference on Machine Learn- ing (ICML).",
"links": null
},
"BIBREF60": {
"ref_id": "b60",
"title": "Exploiting source-side monolingual data in neural machine translation",
"authors": [
{
"first": "Jiajun",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Chengqing",
"middle": [],
"last": "Zong",
"suffix": ""
}
],
"year": 2016,
"venue": "Conference on Empirical Methods in Natural Language Processing (EMNLP)",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jiajun Zhang and Chengqing Zong. 2016. Exploit- ing source-side monolingual data in neural machine translation. In Conference on Empirical Methods in Natural Language Processing (EMNLP).",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"num": null,
"uris": null,
"text": "Training perplexity (PPL) per epoch for different synthetic data. We separately report PPL on the synthetic data and the bitext. Bitext PPL is averaged over all generation methods.",
"type_str": "figure"
},
"TABREF0": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>shows results on a wider range of</td></tr></table>"
},
"TABREF2": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table/>"
},
"TABREF4": {
"html": null,
"num": null,
"type_str": "table",
"text": "",
"content": "<table><tr><td>BLEU (newstest2012)</td><td>23 24 25</td><td/><td>greedy beam top10</td></tr><tr><td/><td>22</td><td/><td>sampling beam+noise</td></tr><tr><td/><td>1</td><td>2</td><td>4</td><td>8</td></tr><tr><td/><td/><td colspan=\"2\">bitext upsample rate</td></tr><tr><td>shows deto-</td><td/><td/><td/></tr><tr><td>kenized BLEU. 2 To our knowledge, our baseline</td><td/><td/><td/></tr><tr><td>2 sacreBLEU signatures: BLEU+case.mixed+lang.en-</td><td/><td/><td/></tr><tr><td>fr+numrefs.1+smooth.exp+test.SET+tok.13a+version.1.2.7</td><td/><td/><td/></tr><tr><td>with SET \u2208 {wmt13, wmt14/full, wmt15}</td><td/><td/><td/></tr></table>"
},
"TABREF6": {
"html": null,
"num": null,
"type_str": "table",
"text": "Tokenized BLEU on various test sets for WMT English-French translation.",
"content": "<table><tr><td/><td colspan=\"3\">news13 news14 news15</td></tr><tr><td>bitext</td><td>35.30</td><td>41.03</td><td>38.31</td></tr><tr><td>+sampling</td><td>36.13</td><td>43.84</td><td>40.91</td></tr></table>"
},
"TABREF7": {
"html": null,
"num": null,
"type_str": "table",
"text": "De-tokenized BLEU (sacreBLEU) on various test sets for WMT English-French.",
"content": "<table/>"
}
}
}
}