| { |
| "paper_id": "P16-1009", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T09:00:08.326486Z" |
| }, |
| "title": "Improving Neural Machine Translation Models with Monolingual Data", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "rico.sennrich@ed.ac.uk" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "bhaddow@inf.ed.ac.uk" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "a.birch@ed.ac.uk" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Targetside monolingual data plays an important role in boosting fluency for phrasebased statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic backtranslation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English\u2194German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish\u2192English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English\u2192German.", |
| "pdf_parse": { |
| "paper_id": "P16-1009", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Targetside monolingual data plays an important role in boosting fluency for phrasebased statistical machine translation, and we investigate the use of monolingual data for NMT. In contrast to previous work, which combines NMT models with separately trained language models, we note that encoder-decoder NMT architectures already have the capacity to learn the same information as a language model, and we explore strategies to train with monolingual data without changing the neural network architecture. By pairing monolingual training data with an automatic backtranslation, we can treat it as additional parallel training data, and we obtain substantial improvements on the WMT 15 task English\u2194German (+2.8-3.7 BLEU), and for the low-resourced IWSLT 14 task Turkish\u2192English (+2.1-3.4 BLEU), obtaining new state-of-the-art results. We also show that fine-tuning on in-domain monolingual and parallel data gives substantial improvements for the IWSLT 15 task English\u2192German.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Neural Machine Translation (NMT) has obtained state-of-the art performance for several language pairs, while only using parallel data for training. Target-side monolingual data plays an important role in boosting fluency for phrase-based statisti-cal machine translation, and we investigate the use of monolingual data for NMT.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Language models trained on monolingual data have played a central role in statistical machine translation since the first IBM models (Brown et al., 1990 ). There are two major reasons for their importance. Firstly, word-based and phrase-based translation models make strong independence assumptions, with the probability of translation units estimated independently from context, and language models, by making different independence assumptions, can model how well these translation units fit together. Secondly, the amount of available monolingual data in the target language typically far exceeds the amount of parallel data, and models typically improve when trained on more data, or data more similar to the translation task.", |
| "cite_spans": [ |
| { |
| "start": 133, |
| "end": 152, |
| "text": "(Brown et al., 1990", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In (attentional) encoder-decoder architectures for neural machine translation (Sutskever et al., 2014; Bahdanau et al., 2015) , the decoder is essentially an RNN language model that is also conditioned on source context, so the first rationale, adding a language model to compensate for the independence assumptions of the translation model, does not apply. However, the data argument is still valid in NMT, and we expect monolingual data to be especially helpful if parallel data is sparse, or a poor fit for the translation task, for instance because of a domain mismatch.", |
| "cite_spans": [ |
| { |
| "start": 78, |
| "end": 102, |
| "text": "(Sutskever et al., 2014;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 103, |
| "end": 125, |
| "text": "Bahdanau et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In contrast to previous work, which integrates a separately trained RNN language model into the NMT model (G\u00fcl\u00e7ehre et al., 2015) , we explore strategies to include monolingual training data in the training process without changing the neural network architecture. This makes our approach applicable to different NMT architectures.", |
| "cite_spans": [ |
| { |
| "start": 106, |
| "end": 129, |
| "text": "(G\u00fcl\u00e7ehre et al., 2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The main contributions of this paper are as follows:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 we show that we can improve the machine translation quality of NMT systems by mixing monolingual target sentences into the training set.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 we investigate two different methods to fill the source side of monolingual training instances: using a dummy source sentence, and using a source sentence obtained via backtranslation, which we call synthetic. We find that the latter is more effective.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "\u2022 we successfully adapt NMT models to a new domain by fine-tuning with either monolingual or parallel in-domain data.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We follow the neural machine translation architecture by Bahdanau et al. (2015) , which we will briefly summarize here. However, we note that our approach is not specific to this architecture. The neural machine translation system is implemented as an encoder-decoder network with recurrent neural networks.", |
| "cite_spans": [ |
| { |
| "start": 57, |
| "end": 79, |
| "text": "Bahdanau et al. (2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The encoder is a bidirectional neural network with gated recurrent units (Cho et al., 2014) that reads an input sequence x = (x_1, ..., x_m) and calculates a forward sequence of hidden states (\u2192h_1, ..., \u2192h_m), and a backward sequence (\u2190h_1, ..., \u2190h_m). The hidden states \u2192h_j and \u2190h_j are concatenated to obtain the annotation vector h_j.", |
| "cite_spans": [ |
| { |
| "start": 73, |
| "end": 91, |
| "text": "(Cho et al., 2014)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The decoder is a recurrent neural network that predicts a target sequence y = (y 1 , ..., y n ). Each word y i is predicted based on a recurrent hidden state s i , the previously predicted word y i\u22121 , and a context vector c i . c i is computed as a weighted sum of the annotations h j . The weight of each annotation h j is computed through an alignment model \u03b1 ij , which models the probability that y i is aligned to x j . The alignment model is a singlelayer feedforward neural network that is learned jointly with the rest of the network through backpropagation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A detailed description can be found in (Bahdanau et al., 2015) . Training is performed on a parallel corpus with stochastic gradient descent. For translation, a beam search with small beam size is employed.", |
| "cite_spans": [ |
| { |
| "start": 39, |
| "end": 62, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In machine translation, more monolingual data (or monolingual data more similar to the test set) serves to improve the estimate of the prior probability p(T ) of the target sentence T , before taking the source sentence S into account. In contrast to (G\u00fcl\u00e7ehre et al., 2015) , who train separate language models on monolingual training data and incorporate them into the neural network through shallow or deep fusion, we propose techniques to train the main NMT model with monolingual data, exploiting the fact that encoder-decoder neural networks already condition the probability distribution of the next target word on the previous target words. We describe two strategies to do this: providing monolingual training examples with an empty (or dummy) source sentence, or providing monolingual training data with a synthetic source sentence that is obtained from automatically translating the target sentence into the source language, which we will refer to as back-translation.", |
| "cite_spans": [ |
| { |
| "start": 251, |
| "end": 274, |
| "text": "(G\u00fcl\u00e7ehre et al., 2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "NMT Training with Monolingual Training Data", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The first technique we employ is to treat monolingual training examples as parallel examples with empty source side, essentially adding training examples whose context vector c i is uninformative, and for which the network has to fully rely on the previous target words for its prediction. This could be conceived as a form of dropout (Hinton et al., 2012) , with the difference that the training instances that have the context vector dropped out constitute novel training data. We can also conceive of this setup as multi-task learning, with the two tasks being translation when the source is known, and language modelling when it is unknown. During training, we use both parallel and monolingual training examples in the ratio 1-to-1, and randomly shuffle them. We define an epoch as one iteration through the parallel data set, and resample from the monolingual data set for every epoch. We pair monolingual sentences with a single-word dummy source side <null> to allow processing of both parallel and monolingual training examples with the same network graph. 1 For monolingual minibatches 2 , we freeze the network parameters of the encoder and the attention model. One problem with this integration of monolin-gual data is that we cannot arbitrarily increase the ratio of monolingual training instances, or finetune a model with only monolingual training data, because different output layer parameters are optimal for the two tasks, and the network 'unlearns' its conditioning on the source context if the ratio of monolingual training instances is too high.", |
| "cite_spans": [ |
| { |
| "start": 335, |
| "end": 356, |
| "text": "(Hinton et al., 2012)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dummy Source Sentences", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "To ensure that the output layer remains sensitive to the source context, and that good parameters are not unlearned from monolingual data, we propose to pair monolingual training instances with a synthetic source sentence from which a context vector can be approximated. We obtain these through back-translation, i.e. an automatic translation of the monolingual target text into the source language.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic Source Sentences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "During training, we mix synthetic parallel text into the original (human-translated) parallel text and do not distinguish between the two: no network parameters are frozen. Importantly, only the source side of these additional training examples is synthetic, and the target side comes from the monolingual corpus.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Synthetic Source Sentences", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "We evaluate NMT training on parallel text, and with additional monolingual data, on English\u2194German and Turkish\u2192English, using training and test data from WMT 15 for English\u2194German, IWSLT 15 for English\u2192German, and IWSLT 14 for Turkish\u2192English.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4" |
| }, |
| { |
| "text": "We use Groundhog 3 as the implementation of the NMT system for all experiments (Bahdanau et al., 2015; Jean et al., 2015a) . We generally follow the settings and training procedure described by Sennrich et al. (2016) .", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 102, |
| "text": "(Bahdanau et al., 2015;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 103, |
| "end": 122, |
| "text": "Jean et al., 2015a)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 194, |
| "end": 216, |
| "text": "Sennrich et al. (2016)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Methods", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "For English\u2194German, we report case-sensitive BLEU on detokenized text with mteval-v13a.pl for comparison to official WMT and IWSLT results. For Turkish\u2192English, we report case-sensitive BLEU on tokenized text with multi-bleu.perl for comparison to results by G\u00fcl\u00e7ehre et al. (2015) . G\u00fcl\u00e7ehre et al. (2015) determine the network vocabulary based on the parallel training data, 3 github.com/sebastien-j/LV_groundhog dataset sentences WMT parallel 4 200 000 WIT parallel 200 000 WMT mono_de 160 000 000 WMT synth_de 3 600 000 WMT mono_en 118 000 000 WMT synth_en 4 200 000 Table 1 : English\u2194German training data.", |
| "cite_spans": [ |
| { |
| "start": 259, |
| "end": 281, |
| "text": "G\u00fcl\u00e7ehre et al. (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 284, |
| "end": 306, |
| "text": "G\u00fcl\u00e7ehre et al. (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 377, |
| "end": 378, |
| "text": "3", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 571, |
| "end": 578, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Data and Methods", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "and replace out-of-vocabulary words with a special UNK symbol. They remove monolingual sentences with more than 10% UNK symbols. In contrast, we represent unseen words as sequences of subword units (Sennrich et al., 2016) , and can represent any additional training data with the existing network vocabulary that was learned on the parallel data. In all experiments, the network vocabulary remains fixed.", |
| "cite_spans": [ |
| { |
| "start": 198, |
| "end": 221, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Data and Methods", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We use all parallel training data provided by WMT 2015 (Bojar et al., 2015) 4 . We use the News Crawl corpora as additional training data for the experiments with monolingual data. The amount of training data is shown in Table 1 . Baseline models are trained for a week. Ensembles are sampled from the last 4 saved models of training (saved at 12h-intervals). Each model is fine-tuned with fixed embeddings for 12 hours.", |
| "cite_spans": [ |
| { |
| "start": 55, |
| "end": 77, |
| "text": "(Bojar et al., 2015) 4", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 221, |
| "end": 228, |
| "text": "Table 1", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "English\u2194German", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "For the experiments with synthetic parallel data, we back-translate a random sample of 3 600 000 sentences from the German monolingual data set into English. The German\u2192English system used for this is the baseline system (parallel). Translation took about a week on an NVIDIA Titan Black GPU. For experiments in German\u2192English, we back-translate 4 200 000 monolingual English sentences into German, using the English\u2192German system +synthetic. Note that we always use single models for backtranslation, not ensembles. We leave it to future work to explore how sensitive NMT training with synthetic data is to the quality of the backtranslation.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "English\u2194German", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "We tokenize and truecase the training data, and represent rare words via BPE (Sennrich et al., 2016) . Specifically, we follow Sennrich et al. (2016) in performing BPE on the joint vocabulary with 89 500 merge operations. The network vo-dataset sentences WIT 160 000 SETimes 160 000 Gigaword mono 177 000 000 Gigaword synth 3 200 000 Table 2 : Turkish\u2192English training data.", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 100, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 127, |
| "end": 149, |
| "text": "Sennrich et al. (2016)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 334, |
| "end": 341, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "English\u2194German", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "cabulary size is 90 000. We also perform experiments on the IWSLT 15 test sets to investigate a cross-domain setting. 5 The test sets consist of TED talk transcripts. As indomain training data, IWSLT provides the WIT 3 parallel corpus (Cettolo et al., 2012) , which also consists of TED talks.", |
| "cite_spans": [ |
| { |
| "start": 118, |
| "end": 119, |
| "text": "5", |
| "ref_id": null |
| }, |
| { |
| "start": 235, |
| "end": 257, |
| "text": "(Cettolo et al., 2012)", |
| "ref_id": "BIBREF4" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "English\u2194German", |
| "sec_num": "4.1.1" |
| }, |
| { |
| "text": "We use data provided for the IWSLT 14 machine translation track (Cettolo et al., 2014) , namely the WIT 3 parallel corpus (Cettolo et al., 2012) , which consists of TED talks, and the SETimes corpus (Tyers and Alperen, 2010). 6 After removal of sentence pairs which contain empty lines or lines with a length ratio above 9, we retain 320 000 sentence pairs of training data. For the experiments with monolingual training data, we use the English LDC Gigaword corpus (Fifth Edition). The amount of training data is shown in Table 2 . With only 320 000 sentences of parallel data available for training, this is a much lower-resourced translation setting than English\u2194German. G\u00fcl\u00e7ehre et al. (2015) segment the Turkish text with the morphology tool Zemberek, followed by a disambiguation of the morphological analysis (Sak et al., 2007) , and removal of non-surface tokens produced by the analysis. We use the same preprocessing 7 . For both Turkish and English, we represent rare words (or morphemes in the case of Turkish) as character bigram sequences (Sennrich et al., 2016) . The 20 000 most frequent words (morphemes) are left unsegmented. The networks have a vocabulary size of 23 000 symbols.", |
| "cite_spans": [ |
| { |
| "start": 64, |
| "end": 86, |
| "text": "(Cettolo et al., 2014)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 122, |
| "end": 144, |
| "text": "(Cettolo et al., 2012)", |
| "ref_id": "BIBREF4" |
| }, |
| { |
| "start": 226, |
| "end": 227, |
| "text": "6", |
| "ref_id": null |
| }, |
| { |
| "start": 674, |
| "end": 696, |
| "text": "G\u00fcl\u00e7ehre et al. (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 816, |
| "end": 834, |
| "text": "(Sak et al., 2007)", |
| "ref_id": "BIBREF19" |
| }, |
| { |
| "start": 1053, |
| "end": 1076, |
| "text": "(Sennrich et al., 2016)", |
| "ref_id": "BIBREF22" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 523, |
| "end": 530, |
| "text": "Table 2", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Turkish\u2192English", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "To obtain a synthetic parallel training set, we back-translate a random sample of 3 200 000 sentences from Gigaword. We use an English\u2192Turkish NMT system trained with the same settings as the Turkish\u2192English baseline system.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Turkish\u2192English", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "We found overfitting to be a bigger problem than with the larger English\u2194German data set, and follow G\u00fcl\u00e7ehre et al. (2015) in using Gaussian noise (stddev 0.01) (Graves, 2011) , and dropout on the output layer (p=0.5) (Hinton et al., 2012) . We also use early stopping, based on BLEU measured every three hours on tst2010, which we treat as development set. For Turkish\u2192English, we use gradient clipping with threshold 5, following G\u00fcl\u00e7ehre et al. (2015) , in contrast to the threshold 1 that we use for English\u2194German, following Jean et al. (2015a). Table 3 shows English\u2192German results with WMT training and test data. We find that mixing parallel training data with monolingual data with a dummy source side in a ratio of 1-1 improves quality by 0.4-0.5 BLEU for the single system, 1 BLEU for the ensemble. We train the system for twice as long as the baseline to provide the training algorithm with a similar amount of parallel training instances. To ensure that the quality improvement is due to the monolingual training instances, and not just increased training time, we also continued training our baseline system for another week, but saw no improvements in BLEU.", |
| "cite_spans": [ |
| { |
| "start": 101, |
| "end": 123, |
| "text": "G\u00fcl\u00e7ehre et al. (2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 162, |
| "end": 176, |
| "text": "(Graves, 2011)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 219, |
| "end": 240, |
| "text": "(Hinton et al., 2012)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 433, |
| "end": 455, |
| "text": "G\u00fcl\u00e7ehre et al. (2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 552, |
| "end": 559, |
| "text": "Table 3", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Turkish\u2192English", |
| "sec_num": "4.1.2" |
| }, |
| { |
| "text": "Including synthetic data during training is very effective, and yields an improvement over our baseline by 2.8-3.4 BLEU. Our best ensemble system also outperforms a syntax-based baseline (Sennrich and Haddow, 2015) by 1.2-2.1 BLEU. We also substantially outperform NMT results reported by Jean et al. (2015a) and , who previously reported SOTA result. 8 We note that the difference is particularly large for single systems, since our ensemble is not as diverse as that of , who used 8 independently trained ensemble components, whereas we sampled 4 ensemble components from the same training run. test sets, which are news texts. We investigate if monolingual training data is especially valuable if it can be used to adapt a model to a new genre or domain, specifically adapting a system trained on WMT data to translating TED talks. Systems 1 and 2 correspond to systems in Table 3 , trained only on WMT data. System 2, trained on parallel and synthetic WMT data, obtains a BLEU score of 25.5 on tst2015. We observe that even a small amount of fine-tuning 9 , i.e. continued training of an existing model, on WIT data can adapt a system trained on WMT data to the TED domain. By back-translating the monolingual WIT corpus (using a German\u2192English system trained on WMT data, i.e. without in-domain knowledge), we obtain the synthetic data set WIT synth . A single epoch of fine-tuning on WIT synth (system 4) results in a BLEU score of 26.7 on tst2015, or an improvement of 1.2 BLEU. We observed no improvement from fine-tuning on WIT mono , the monolingual TED corpus with dummy input (system 3).", |
| "cite_spans": [ |
| { |
| "start": 289, |
| "end": 308, |
| "text": "Jean et al. (2015a)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 352, |
| "end": 353, |
| "text": "8", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 876, |
| "end": 884, |
| "text": "Table 3", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "English\u2192German WMT 15", |
| "sec_num": "4.2.1" |
| }, |
| { |
| "text": "These adaptation experiments with monolingual data are slightly artificial in that parallel training data is available. System 5, which is finetuned with the original WIT training data, obtains a BLEU of 28.4 on tst2015, which is an improve- 9 We leave the word embeddings fixed for fine-tuning. BLEU name 2014 2015 PBSMT 28.8 29.3 NMT (G\u00fcl\u00e7ehre et al., 2015) 23.6 -+shallow fusion 23.7 -+deep fusion 24.0 parallel 25.9 26.7 +synthetic 29.5 30.4 +synthetic (ensemble of 4) 30.8 31.6 Table 5 : German\u2192English translation performance (BLEU) on WMT training/test sets (new-stest2014; newstest2015). ment of 2.9 BLEU. While it is unsurprising that in-domain parallel data is most valuable, we find it encouraging that NMT domain adaptation with monolingual data is also possible, and effective, since there are settings where only monolingual in-domain data is available.", |
| "cite_spans": [ |
| { |
| "start": 242, |
| "end": 243, |
| "text": "9", |
| "ref_id": null |
| }, |
| { |
| "start": 336, |
| "end": 359, |
| "text": "(G\u00fcl\u00e7ehre et al., 2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 483, |
| "end": 490, |
| "text": "Table 5", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "English\u2192German IWSLT 15", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "The best results published on this dataset are by , obtained with an ensemble of 8 independently trained models. In a comparison of single-model results, we outperform their model on tst2013 by 1 BLEU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "English\u2192German IWSLT 15", |
| "sec_num": "4.2.2" |
| }, |
| { |
| "text": "Results for German\u2192English on the WMT 15 data sets are shown in Table 5 . Like for the reverse translation direction, we see substantial improvements (3.6-3.7 BLEU) from adding monolingual training data with synthetic source sentences, which is substantially bigger than the improvement observed with deep fusion (G\u00fcl\u00e7ehre et al., 2015) ; our ensemble outperforms the previous state of the art on newstest2015 by 2.3 BLEU. Table 6 shows results for Turkish\u2192English. On average, we see an improvement of 0.6 BLEU on the test sets from adding monolingual data with a dummy source side in a 1-1 ratio 10 , although we note a high variance between different test sets.", |
| "cite_spans": [ |
| { |
| "start": 313, |
| "end": 336, |
| "text": "(G\u00fcl\u00e7ehre et al., 2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 64, |
| "end": 71, |
| "text": "Table 5", |
| "ref_id": null |
| }, |
| { |
| "start": 423, |
| "end": 430, |
| "text": "Table 6", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "German\u2192English WMT 15", |
| "sec_num": "4.2.3" |
| }, |
| { |
| "text": "With synthetic training data (Gigaword synth ), we outperform the baseline by 2.7 BLEU on average, and also outperform results obtained via shallow or deep fusion by G\u00fcl\u00e7ehre et al. (2015) by 0.5 BLEU on average. To compare to what extent synthetic data has a regularization effect, even without novel training data, we also back-translate the target side of the parallel training text to obtain the training corpus parallel synth . Mixing the original parallel corpus with parallel synth (ratio 1-1) gives some improvement over the baseline (1.7 BLEU on average), but the novel monolingual training data (Gigaword mono ) gives higher improvements, despite being out-of-domain in relation to the test sets. We speculate that novel in-domain monolingual data would lead to even higher improvements.", |
| "cite_spans": [ |
| { |
| "start": 166, |
| "end": 188, |
| "text": "G\u00fcl\u00e7ehre et al. (2015)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Turkish\u2192English IWSLT 14", |
| "sec_num": "4.2.4" |
| }, |
| { |
| "text": "One question that our previous experiments leave open is how the quality of the automatic backtranslation affects training with synthetic data. To investigate this question, we back-translate the same German monolingual corpus with three different German\u2192English systems:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Back-translation Quality for Synthetic Data", |
| "sec_num": "4.2.5" |
| }, |
| { |
| "text": "\u2022 with our baseline system and greedy decoding", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Back-translation Quality for Synthetic Data", |
| "sec_num": "4.2.5" |
| }, |
| { |
| "text": "\u2022 with our baseline system and beam search (beam size 12). This is the same system used for the experiments in \u2022 with the German\u2192English system that was itself trained with synthetic data (beam size 12).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Back-translation Quality for Synthetic Data", |
| "sec_num": "4.2.5" |
| }, |
| { |
| "text": "BLEU scores of the German\u2192English systems, and of the resulting English\u2192German systems that are trained on the different back-translations, are shown in Table 7. The quality of the German\u2192English back-translation differs substantially, with a difference of 6 BLEU on newstest2015. Regarding the English\u2192German systems trained on the different synthetic corpora, we find that the 6 BLEU difference in back-translation quality leads to a 0.6-0.7 BLEU difference in translation quality. This is balanced by the fact that we can increase the speed of back-translation by trading off some quality, for instance by reducing beam size, and we leave it to future research to explore how much the amount of synthetic data affects translation quality.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 153, |
| "end": 160, |
| "text": "Table 7", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Back-translation Quality for Synthetic Data", |
| "sec_num": "4.2.5" |
| }, |
| { |
| "text": "We also show results for an ensemble of 3 models (the best single model of each training run), and 12 models (all 4 models of each training run). Thanks to the increased diversity of the ensemble components, these ensembles outperform the ensembles of 4 models that were all sampled from the same training run, and we obtain another improvement of 0.8-1.0 BLEU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Back-translation Quality for Synthetic Data", |
| "sec_num": "4.2.5" |
| }, |
| { |
| "text": "The back-translation of monolingual target data into the source language to produce synthetic parallel text has been previously explored for phrase-based SMT (Bertoldi and Federico, 2009; Lambert et al., 2011). While our approach is technically similar, synthetic parallel data fulfills novel roles in NMT. Table 8: Phrase-based SMT results (English\u2192German) on WMT test sets (average of newstest201{4,5}), and IWSLT test sets (average of tst201{3,4,5}), and average BLEU gain from adding synthetic data for both PBSMT and NMT.", |
| "cite_spans": [ |
| { |
| "start": 158, |
| "end": 187, |
| "text": "(Bertoldi and Federico, 2009;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 188, |
| "end": 209, |
| "text": "Lambert et al., 2011)", |
| "ref_id": "BIBREF14" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 307, |
| "end": 314, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Contrast to Phrase-based SMT", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "To explore the relative effectiveness of back-translated data for phrase-based SMT and NMT, we train two phrase-based SMT systems with Moses (Koehn et al., 2007), using only WMT parallel, or both WMT parallel and WMT synth_de for training the translation and reordering model. Both systems contain the same language model, a 5-gram Kneser-Ney model trained on all available WMT data. We use the baseline features described by .", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 161, |
| "text": "(Koehn et al., 2007)", |
| "ref_id": "BIBREF13" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contrast to Phrase-based SMT", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "Results are shown in Table 8 . In phrase-based SMT, we find that the use of back-translated training data has a moderate positive effect on the WMT test sets (+0.7 BLEU), but not on the IWSLT test sets. This is in line with the expectation that the main effect of back-translated data for phrasebased SMT is domain adaptation (Bertoldi and Federico, 2009) . Both the WMT test sets and the News Crawl corpora which we used as monolingual data come from the same source, a web crawl of newspaper articles. 11 In contrast, News Crawl is out-of-domain for the IWSLT test sets.", |
| "cite_spans": [ |
| { |
| "start": 326, |
| "end": 355, |
| "text": "(Bertoldi and Federico, 2009)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 21, |
| "end": 28, |
| "text": "Table 8", |
| "ref_id": null |
| } |
| ], |
| "eq_spans": [], |
| "section": "Contrast to Phrase-based SMT", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "(Footnote 11: The WMT test sets are held-out from News Crawl.) In contrast to phrase-based SMT, which can make use of monolingual data via the language model, NMT has so far not been able to use monolingual data to great effect without requiring architectural changes. We find that the effect of synthetic parallel data is not limited to domain adaptation, and that even out-of-domain synthetic data improves NMT quality, as in our evaluation on IWSLT. The fact that the synthetic data is more effective on the WMT test sets (+2.9 BLEU) than on the IWSLT test sets (+1.2 BLEU) supports the hypothesis that domain adaptation contributes to the effectiveness of adding synthetic data to NMT training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contrast to Phrase-based SMT", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "It is an important finding that back-translated data, which is mainly effective for domain adaptation in phrase-based SMT, is more generally useful in NMT, and has positive effects that go beyond domain adaptation. In the next section, we will investigate further reasons for its effectiveness. ", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Contrast to Phrase-based SMT", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We previously indicated that overfitting is a concern with our baseline system, especially on small data sets of several hundred thousand training sentences, despite the regularization employed. This overfitting is illustrated in Figure 1, which plots training and development set cross-entropy by training time for Turkish\u2192English models. For comparability, we measure training set cross-entropy for all models on the same random sample of the parallel training set. We can see that the model trained on only parallel training data quickly overfits, while all three monolingual data sets (parallel synth, Gigaword mono, or Gigaword synth) delay overfitting, and give better perplexity on the development set. The best development set cross-entropy is reached by Gigaword synth. Figure 2 shows cross-entropy for English\u2192German, comparing the system trained on only parallel data and the system that includes synthetic training data. Since more training data is available for English\u2192German, there is no indication that overfitting happens during the first 40 million training instances (or 7 days of training); while both systems obtain comparable training set cross-entropies, the system with synthetic data reaches a lower cross-entropy on the development set. One explanation for this is the domain effect discussed in the previous section.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 230, |
| "end": 238, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| }, |
| { |
| "start": 779, |
| "end": 788, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "A central theoretical expectation is that monolingual target-side data improves the model's fluency, its ability to produce natural target-language sentences. As a proxy for sentence-level fluency, we investigate word-level fluency, specifically words produced as sequences of subword units, and whether NMT systems trained with additional monolingual data produce more natural words. For instance, the English\u2192German systems translate the English phrase civil rights protections as a single compound, composed of three subword units:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "B\u00fcrger|rechts|schutzes 12 , and we analyze how many of these multi-unit words that the translation systems produce are well-formed German words.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We compare the number of words in the system output for the newstest2015 test set that are produced via subword units and do not occur in the parallel training corpus. We also count how many of them are attested in the full monolingual corpus or the reference translation, which we all consider 'natural'. Additionally, the main author, a native speaker of German, annotated a random subset (n = 100) of unattested words of each system according to their naturalness 13 , distinguishing between natural German words (or names) such as Literatur|klassen 'literature classes', and nonsensical ones such as *As|best|atten (a misspelling of Asbestmatten 'asbestos mats').", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "In the results (Table 9) , we see that the systems trained with additional monolingual or synthetic data have a higher proportion of novel words attested in the non-parallel data, and a higher proportion that is deemed natural by our annotator. This supports our expectation that additional monolingual data improves the (word-level) fluency of the NMT system.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 15, |
| "end": 24, |
| "text": "(Table 9)", |
| "ref_id": "TABREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Analysis", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "To our knowledge, the integration of monolingual data for pure neural machine translation architectures was first investigated by (G\u00fcl\u00e7ehre et al., 2015), who train monolingual language models independently, and then integrate them during decoding through rescoring of the beam (shallow fusion), or by adding the recurrent hidden state of the language model to the decoder state of the encoder-decoder network, with an additional controller mechanism that controls the magnitude of the LM signal (deep fusion). In deep fusion, the controller parameters and output parameters are tuned on further parallel training data, but the language model parameters are fixed during the fine-tuning stage. Jean et al. (2015b) also report on experiments with reranking of NMT output with a 5-gram language model, but improvements are small (0.1-0.5 BLEU).", |
| "cite_spans": [ |
| { |
| "start": 130, |
| "end": 153, |
| "text": "(G\u00fcl\u00e7ehre et al., 2015)", |
| "ref_id": "BIBREF8" |
| }, |
| { |
| "start": 694, |
| "end": 713, |
| "text": "Jean et al. (2015b)", |
| "ref_id": "BIBREF12" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "The production of synthetic parallel texts bears resemblance to data augmentation techniques used in computer vision, where datasets are often augmented with rotated, scaled, or otherwise distorted variants of the (limited) training set (Rowley et al., 1996) .", |
| "cite_spans": [ |
| { |
| "start": 237, |
| "end": 258, |
| "text": "(Rowley et al., 1996)", |
| "ref_id": "BIBREF18" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Another similar avenue of research is self-training (McClosky et al., 2006; Schwenk, 2008). The main difference is that self-training typically refers to a scenario where the training set is enhanced with training instances with artificially produced output labels, whereas we start with human-produced output (i.e. the translation), and artificially produce an input. We expect that this is more robust towards noise in the automatic translation. Improving NMT with monolingual source data, following similar work on phrase-based SMT (Schwenk, 2008), remains possible future work.", |
| "cite_spans": [ |
| { |
| "start": 52, |
| "end": 75, |
| "text": "(McClosky et al., 2006;", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 76, |
| "end": 90, |
| "text": "Schwenk, 2008)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 535, |
| "end": 550, |
| "text": "(Schwenk, 2008)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "Domain adaptation of neural networks via continued training has been shown to be effective for neural language models by (Ter-Sarkisov et al., 2015), and in work parallel to ours, for neural translation models. We are the first to show that we can effectively adapt neural translation models with monolingual data.", |
| "cite_spans": [ |
| { |
| "start": 121, |
| "end": 148, |
| "text": "(Ter-Sarkisov et al., 2015)", |
| "ref_id": "BIBREF24" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "5" |
| }, |
| { |
| "text": "In this paper, we propose two simple methods to use monolingual training data during training of NMT systems, with no changes to the network architecture. Providing training examples with dummy source context was successful to some extent, but we achieve substantial gains in all tasks, and new SOTA results, via back-translation of monolingual target data into the source language, and treating this synthetic data as additional training data. We also show that small amounts of in-domain monolingual data, back-translated into the source language, can be effectively used for domain adaptation. In our analysis, we identified domain adaptation effects, a reduction of overfitting, and improved fluency as reasons for the effectiveness of using monolingual data for training.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "While our experiments did make use of monolingual training data, we only used a small random sample of the available data, especially for the experiments with synthetic parallel data. It is conceivable that larger synthetic data sets, or data sets obtained via data selection, will provide bigger performance benefits.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Because we do not change the neural network architecture to integrate monolingual training data, our approach can be easily applied to other NMT systems. We expect that the effectiveness of our approach not only varies with the quality of the MT system used for back-translation, but also depends on the amount (and similarity to the test set) of available parallel and monolingual data, and the extent of overfitting of the baseline model. Future work will explore the effectiveness of our approach in more settings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "One could force the context vector c_i to be 0 for monolingual training instances, but we found that this does not solve the main problem with this approach, discussed below. 2 For efficiency, Bahdanau et al. (2015) sort sets of 20 minibatches according to length. This also groups monolingual training instances together.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://www.statmt.org/wmt15/", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "http://workshop2015.iwslt.org/ 6 http://workshop2014.iwslt.org/ 7 github.com/orhanf/zemberekMorphTR", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "report 20.9 BLEU (tokenized) on newstest2014 with a single model, and 23.0 BLEU with an ensemble of 8 models. Our best single system achieves a tokenized BLEU (as opposed to untokenized scores reported in Table 3) of 23.8, and our ensemble reaches 25.0 BLEU.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "We also experimented with higher ratios of monolingual data, but this led to decreased BLEU scores.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Subword boundaries are marked with '|'. 13 For the annotation, the words were blinded regarding the system that produced them.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [ |
| { |
| "text": "The research presented in this publication was conducted in cooperation with Samsung Electronics Polska sp. z o.o. -Samsung R&D Institute Poland. This project received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement 645452 (QT21).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Acknowledgments", |
| "sec_num": null |
| } |
| ], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Neural Machine Translation by Jointly Learning to Align and Translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the International Conference on Learning Represen- tations (ICLR).", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Domain adaptation for statistical machine translation with monolingual resources", |
| "authors": [ |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| } |
| ], |
| "year": 2009, |
| "venue": "Proceedings of the Fourth Workshop on Statistical Machine Translation StatMT 09. Association for Computational Linguistics", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nicola Bertoldi and Marcello Federico. 2009. Do- main adaptation for statistical machine translation with monolingual resources. In Proceedings of the Fourth Workshop on Statistical Machine Translation StatMT 09. Association for Computational Linguis- tics.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Rajen", |
| "middle": [], |
| "last": "Chatterjee", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Federmann", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Huck", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Hokamp", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Varvara", |
| "middle": [], |
| "last": "Logacheva", |
| "suffix": "" |
| }, |
| { |
| "first": "Christof", |
| "middle": [], |
| "last": "Monz", |
| "suffix": "" |
| }, |
| { |
| "first": "Matteo", |
| "middle": [], |
| "last": "Negri", |
| "suffix": "" |
| }, |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Post", |
| "suffix": "" |
| }, |
| { |
| "first": "Carolina", |
| "middle": [], |
| "last": "Scarton", |
| "suffix": "" |
| }, |
| { |
| "first": "Lucia", |
| "middle": [], |
| "last": "Specia", |
| "suffix": "" |
| }, |
| { |
| "first": "Marco", |
| "middle": [], |
| "last": "Turchi", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "1--46", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ond\u0159ej Bojar, Rajen Chatterjee, Christian Federmann, Barry Haddow, Matthias Huck, Chris Hokamp, Philipp Koehn, Varvara Logacheva, Christof Monz, Matteo Negri, Matt Post, Carolina Scarton, Lucia Specia, and Marco Turchi. 2015. Findings of the 2015 Workshop on Statistical Machine Translation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 1-46, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "A Statistical Approach to Machine Translation", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [ |
| "F" |
| ], |
| "last": "Brown", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [ |
| "A" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [ |
| "J" |
| ], |
| "last": "Della Pietra", |
| "suffix": "" |
| }, |
| { |
| "first": "F", |
| "middle": [], |
| "last": "Jelinek", |
| "suffix": "" |
| }, |
| { |
| "first": "J", |
| "middle": [ |
| "D" |
| ], |
| "last": "Lafferty", |
| "suffix": "" |
| }, |
| { |
| "first": "R", |
| "middle": [ |
| "L" |
| ], |
| "last": "Mercer", |
| "suffix": "" |
| }, |
| { |
| "first": "P", |
| "middle": [ |
| "S" |
| ], |
| "last": "Roossin", |
| "suffix": "" |
| } |
| ], |
| "year": 1990, |
| "venue": "Computational Linguistics", |
| "volume": "16", |
| "issue": "2", |
| "pages": "79--85", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "P.F. Brown, S.A. Della Pietra, V.J. Della Pietra, F. Je- linek, J.D. Lafferty, R.L. Mercer, and P.S. Roossin. 1990. A Statistical Approach to Machine Transla- tion. Computational Linguistics, 16(2):79-85.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "WIT 3 : Web Inventory of Transcribed and Translated Talks", |
| "authors": [ |
| { |
| "first": "Mauro", |
| "middle": [], |
| "last": "Cettolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Christian", |
| "middle": [], |
| "last": "Girardi", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "Proceedings of the 16 th Conference of the European Association for Machine Translation (EAMT)", |
| "volume": "", |
| "issue": "", |
| "pages": "261--268", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mauro Cettolo, Christian Girardi, and Marcello Fed- erico. 2012. WIT 3 : Web Inventory of Transcribed and Translated Talks. In Proceedings of the 16 th Conference of the European Association for Ma- chine Translation (EAMT), pages 261-268, Trento, Italy.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Report on the 11th IWSLT Evaluation Campaign, IWSLT", |
| "authors": [ |
| { |
| "first": "Mauro", |
| "middle": [], |
| "last": "Cettolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Niehues", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "St\u00fcker", |
| "suffix": "" |
| }, |
| { |
| "first": "Luisa", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 11th Workshop on Spoken Language Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "2--16", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mauro Cettolo, Jan Niehues, Sebastian St\u00fcker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT Evaluation Campaign, IWSLT 2014. In Proceedings of the 11th Workshop on Spo- ken Language Translation, pages 2-16, Lake Tahoe, CA, USA.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merrienboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)", |
| "volume": "", |
| "issue": "", |
| "pages": "1724--1734", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart van Merrienboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Hol- ger Schwenk, and Yoshua Bengio. 2014. Learn- ing Phrase Representations using RNN Encoder- Decoder for Statistical Machine Translation. In Pro- ceedings of the 2014 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 1724-1734, Doha, Qatar. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Practical Variational Inference for Neural Networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "24", |
| "issue": "", |
| "pages": "2348--2356", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves. 2011. Practical Variational Inference for Neural Networks. In J. Shawe-Taylor, R.S. Zemel, P.L. Bartlett, F. Pereira, and K.Q. Weinberger, ed- itors, Advances in Neural Information Processing Systems 24, pages 2348-2356. Curran Associates, Inc.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "On Using Monolingual Corpora in Neural Machine Translation. CoRR", |
| "authors": [ |
| { |
| "first": "\u00c7aglar", |
| "middle": [], |
| "last": "G\u00fcl\u00e7ehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Orhan", |
| "middle": [], |
| "last": "Firat", |
| "suffix": "" |
| }, |
| { |
| "first": "Kelvin", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| }, |
| { |
| "first": "Huei-Chi", |
| "middle": [], |
| "last": "Lin", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "\u00c7aglar G\u00fcl\u00e7ehre, Orhan Firat, Kelvin Xu, Kyunghyun Cho, Lo\u00efc Barrault, Huei-Chi Lin, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2015. On Using Monolingual Corpora in Neural Machine Translation. CoRR, abs/1503.03535.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015", |
| "authors": [ |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Matthias", |
| "middle": [], |
| "last": "Huck", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Nikolay", |
| "middle": [], |
| "last": "Bogoychev", |
| "suffix": "" |
| }, |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "126--133", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Barry Haddow, Matthias Huck, Alexandra Birch, Niko- lay Bogoychev, and Philipp Koehn. 2015. The Edinburgh/JHU Phrase-based Machine Translation Systems for WMT 2015. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 126-133, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Improving neural networks by preventing co-adaptation of feature detectors", |
| "authors": [ |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Nitish", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2012, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Geoffrey E. Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhut- dinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. CoRR, abs/1207.0580.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "On Using Very Large Target Vocabulary for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "S\u00e9bastien", |
| "middle": [], |
| "last": "Jean", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Memisevic", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing", |
| "volume": "1", |
| "issue": "", |
| "pages": "1--10", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015a. On Using Very Large Target Vocabulary for Neural Machine Translation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Lan- guage Processing (Volume 1: Long Papers), pages 1-10, Beijing, China. Association for Computa- tional Linguistics.", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Montreal Neural Machine Translation Systems for WMT'15", |
| "authors": [ |
| { |
| "first": "S\u00e9bastien", |
| "middle": [], |
| "last": "Jean", |
| "suffix": "" |
| }, |
| { |
| "first": "Orhan", |
| "middle": [], |
| "last": "Firat", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Memisevic", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the Tenth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "134--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00e9bastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015b. Montreal Neural Machine Translation Systems for WMT'15. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 134-140, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Moses: Open Source Toolkit for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| }, |
| { |
| "first": "Brooke", |
| "middle": [], |
| "last": "Cowan", |
| "suffix": "" |
| }, |
| { |
| "first": "Wade", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| }, |
| { |
| "first": "Christine", |
| "middle": [], |
| "last": "Moran", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Zens", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Ond\u0159ej", |
| "middle": [], |
| "last": "Bojar", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Constantin", |
| "suffix": "" |
| }, |
| { |
| "first": "Evan", |
| "middle": [], |
| "last": "Herbst", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proceedings of the ACL-2007 Demo and Poster Sessions", |
| "volume": "", |
| "issue": "", |
| "pages": "177--180", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ond\u0159ej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proceedings of the ACL-2007 Demo and Poster Sessions, pages 177-180, Prague, Czech Republic. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Investigations on Translation Model Adaptation Using Monolingual Data", |
| "authors": [ |
| { |
| "first": "Patrik", |
| "middle": [], |
| "last": "Lambert", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Christophe", |
| "middle": [], |
| "last": "Servan", |
| "suffix": "" |
| }, |
| { |
| "first": "Sadaf", |
| "middle": [], |
| "last": "Abdul-Rauf", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "Proceedings of the Sixth Workshop on Statistical Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "284--293", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Patrik Lambert, Holger Schwenk, Christophe Servan, and Sadaf Abdul-Rauf. 2011. Investigations on Translation Model Adaptation Using Monolingual Data. In Proceedings of the Sixth Workshop on Statistical Machine Translation, pages 284-293, Edinburgh, Scotland. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Stanford Neural Machine Translation Systems for Spoken Language Domains", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the International Workshop on Spoken Language Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong and Christopher D. Manning. 2015. Stanford Neural Machine Translation Systems for Spoken Language Domains. In Proceedings of the International Workshop on Spoken Language Translation 2015, Da Nang, Vietnam.", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Effective Approaches to Attention-based Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "1412--1421", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective Approaches to Attention-based Neural Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1412-1421, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Effective Self-training for Parsing", |
| "authors": [ |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Mcclosky", |
| "suffix": "" |
| }, |
| { |
| "first": "Eugene", |
| "middle": [], |
| "last": "Charniak", |
| "suffix": "" |
| }, |
| { |
| "first": "Mark", |
| "middle": [], |
| "last": "Johnson", |
| "suffix": "" |
| } |
| ], |
| "year": 2006, |
| "venue": "Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06", |
| "volume": "", |
| "issue": "", |
| "pages": "152--159", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "David McClosky, Eugene Charniak, and Mark Johnson. 2006. Effective Self-training for Parsing. In Proceedings of the Main Conference on Human Language Technology Conference of the North American Chapter of the Association of Computational Linguistics, HLT-NAACL '06, pages 152-159, New York. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Neural Network-Based Face Detection", |
| "authors": [ |
| { |
| "first": "Henry", |
| "middle": [], |
| "last": "Rowley", |
| "suffix": "" |
| }, |
| { |
| "first": "Shumeet", |
| "middle": [], |
| "last": "Baluja", |
| "suffix": "" |
| }, |
| { |
| "first": "Takeo", |
| "middle": [], |
| "last": "Kanade", |
| "suffix": "" |
| } |
| ], |
| "year": 1996, |
| "venue": "Computer Vision and Pattern Recognition '96", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Henry Rowley, Shumeet Baluja, and Takeo Kanade. 1996. Neural Network-Based Face Detection. In Computer Vision and Pattern Recognition '96.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Morphological Disambiguation of Turkish Text with Perceptron Algorithm", |
| "authors": [ |
| { |
| "first": "Ha\u015fim", |
| "middle": [], |
| "last": "Sak", |
| "suffix": "" |
| }, |
| { |
| "first": "Tunga", |
| "middle": [], |
| "last": "G\u00fcng\u00f6r", |
| "suffix": "" |
| }, |
| { |
| "first": "Murat", |
| "middle": [], |
| "last": "Sara\u00e7lar", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "CICLing", |
| "volume": "", |
| "issue": "", |
| "pages": "107--118", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ha\u015fim Sak, Tunga G\u00fcng\u00f6r, and Murat Sara\u00e7lar. 2007. Morphological Disambiguation of Turkish Text with Perceptron Algorithm. In CICLing 2007, pages 107-118.", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Investigations on Large-Scale Lightly-Supervised Training for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| } |
| ], |
| "year": 2008, |
| "venue": "International Workshop on Spoken Language Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "182--189", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Holger Schwenk. 2008. Investigations on Large-Scale Lightly-Supervised Training for Statistical Machine Translation. In International Workshop on Spoken Language Translation, pages 182-189.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "A Joint Dependency Model of Morphological and Syntactic Structure for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "2081--2087", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich and Barry Haddow. 2015. A Joint Dependency Model of Morphological and Syntactic Structure for Statistical Machine Translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 2081-2087, Lisbon, Portugal. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Neural Machine Translation of Rare Words with Subword Units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016)", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural Machine Translation of Rare Words with Subword Units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (ACL 2016), Berlin, Germany.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Sequence to Sequence Learning with Neural Networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "V", |
| "middle": [], |
| "last": "Quoc", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to Sequence Learning with Neural Networks. In Advances in Neural Information Processing Systems 27: Annual Conference on Neural Information Processing Systems 2014, pages 3104-3112, Montreal, Quebec, Canada.", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "Incremental Adaptation Strategies for Neural Network Language Models", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Ter-Sarkisov", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "Lo\u00efc", |
| "middle": [], |
| "last": "Barrault", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality", |
| "volume": "", |
| "issue": "", |
| "pages": "48--56", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Ter-Sarkisov, Holger Schwenk, Fethi Bougares, and Lo\u00efc Barrault. 2015. Incremental Adaptation Strategies for Neural Network Language Models. In Proceedings of the 3rd Workshop on Continuous Vector Space Models and their Compositionality, pages 48-56, Beijing, China. Association for Computational Linguistics.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "SETimes: A parallel corpus of Balkan languages", |
| "authors": [ |
| { |
| "first": "M", |
| "middle": [], |
| "last": "Francis", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Tyers", |
| "suffix": "" |
| }, |
| { |
| "first": "S", |
| "middle": [], |
| "last": "Murat", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Alperen", |
| "suffix": "" |
| } |
| ], |
| "year": 2010, |
| "venue": "Workshop on Exploitation of multilingual resources and tools for Central and (South) Eastern European Languages at the Language Resources and Evaluation Conference", |
| "volume": "", |
| "issue": "", |
| "pages": "1--5", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Francis M. Tyers and Murat S. Alperen. 2010. SETimes: A parallel corpus of Balkan languages. In Workshop on Exploitation of multilingual resources and tools for Central and (South) Eastern European Languages at the Language Resources and Evaluation Conference, pages 1-5.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "text": "Turkish\u2192English training and development set (tst2010) cross-entropy as a function of training time (number of training instances) for different systems.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "FIGREF1": { |
| "text": "English\u2192German training and development set (newstest2013) cross-entropy as a function of training time (number of training instances) for different systems.", |
| "num": null, |
| "uris": null, |
| "type_str": "figure" |
| }, |
| "TABREF0": { |
| "num": null, |
| "content": "<table><tr><td>BLEU</td></tr></table>", |
| "text": "English\u2192German results on IWSLT test sets. IWSLT test sets consist of TED talks, and are thus very dissimilar from the WMT test sets", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF1": { |
| "num": null, |
| "content": "<table><tr><td>name</td><td>fine-tuning</td><td/><td/><td>BLEU</td><td/></tr><tr><td/><td>data</td><td colspan=\"4\">instances tst2013 tst2014 tst2015</td></tr><tr><td colspan=\"3\">NMT (Luong and Manning, 2015) (single model)</td><td>29.4</td><td>-</td><td>-</td></tr><tr><td colspan=\"3\">NMT (Luong and Manning, 2015) (ensemble of 8)</td><td>31.4</td><td>27.6</td><td>30.1</td></tr><tr><td>1 parallel</td><td>-</td><td>-</td><td>25.2</td><td>22.6</td><td>24.0</td></tr><tr><td>2 +synthetic</td><td>-</td><td>-</td><td>26.5</td><td>23.5</td><td>25.5</td></tr><tr><td colspan=\"3\">3 2+WIT mono_de WMT parallel / WIT mono 200k/200k</td><td>26.6</td><td>23.6</td><td>25.4</td></tr><tr><td colspan=\"2\">4 2+WIT synth_de WIT synth</td><td>200k</td><td>28.2</td><td>24.4</td><td>26.7</td></tr><tr><td>5 2+WIT parallel</td><td>WIT</td><td>200k</td><td>30.4</td><td>25.9</td><td>28.4</td></tr></table>", |
| "text": "English\u2192German translation performance (BLEU) on WMT training/test sets. Ens-4: ensemble of 4 models. Number of training instances varies due to differences in training time and speed.", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF2": { |
| "num": null, |
| "content": "<table/>", |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF3": { |
| "num": null, |
| "content": "<table><tr><td>.</td></tr></table>", |
| "text": "", |
| "type_str": "table", |
| "html": null |
| }, |
| "TABREF6": { |
| "num": null, |
| "content": "<table><tr><td>: Number of words in system out-</td></tr><tr><td>put that do not occur in parallel training data</td></tr><tr><td>(count ref = 1168), and proportion that is attested</td></tr><tr><td>in data, or natural according to native speaker.</td></tr><tr><td>English\u2192German; newstest2015; ensemble sys-</td></tr><tr><td>tems.</td></tr></table>", |
| "text": "", |
| "type_str": "table", |
| "html": null |
| } |
| } |
| } |
| } |