| { |
| "paper_id": "P17-1012", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:16:24.686695Z" |
| }, |
| "title": "A Convolutional Encoder Model for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Jonas", |
| "middle": [], |
| "last": "Gehring", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Grangier", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Yann", |
| "middle": [ |
| "N" |
| ], |
| "last": "Dauphin Facebook", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "A", |
| "middle": [ |
| "I" |
| ], |
| "last": "Research", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. We present a faster and simpler architecture based on a succession of convolutional layers. This allows to encode the source sentence simultaneously compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art and on WMT'15 English-German we outperform several recently published results. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. We speed up CPU decoding by more than two times at the same or higher accuracy as a strong bidirectional LSTM. 1", |
| "pdf_parse": { |
| "paper_id": "P17-1012", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "The prevalent approach to neural machine translation relies on bi-directional LSTMs to encode the source sentence. We present a faster and simpler architecture based on a succession of convolutional layers. This allows to encode the source sentence simultaneously compared to recurrent networks for which computation is constrained by temporal dependencies. On WMT'16 English-Romanian translation we achieve competitive accuracy to the state-of-the-art and on WMT'15 English-German we outperform several recently published results. Our models obtain almost the same accuracy as a very deep LSTM setup on WMT'14 English-French translation. We speed up CPU decoding by more than two times at the same or higher accuracy as a strong bidirectional LSTM. 1", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Neural machine translation (NMT) is an end-to-end approach to machine translation . The most successful approach to date encodes the source sentence with a bi-directional recurrent neural network (RNN) into a variable length representation and then generates the translation left-to-right with another RNN where both components interface via a soft-attention mechanism (Bahdanau et al., 2015; Luong et al., 2015a; Bradbury and Socher, 2016; Sennrich et al., 2016a) . Recurrent networks are typically parameterized as long short term memory networks (LSTM; Hochreiter et al. 1997) or gated recurrent units (GRU; Cho et al. 2014) , often with residual or skip connections (Wu et al., 2016; Zhou et al., 2016) to enable stacking of several layers ( \u00a72).", |
| "cite_spans": [ |
| { |
| "start": 369, |
| "end": 392, |
| "text": "(Bahdanau et al., 2015;", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 393, |
| "end": 413, |
| "text": "Luong et al., 2015a;", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 414, |
| "end": 440, |
| "text": "Bradbury and Socher, 2016;", |
| "ref_id": "BIBREF1" |
| }, |
| { |
| "start": 441, |
| "end": 464, |
| "text": "Sennrich et al., 2016a)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 549, |
| "end": 555, |
| "text": "(LSTM;", |
| "ref_id": null |
| }, |
| { |
| "start": 556, |
| "end": 579, |
| "text": "Hochreiter et al. 1997)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 605, |
| "end": 610, |
| "text": "(GRU;", |
| "ref_id": null |
| }, |
| { |
| "start": 611, |
| "end": 627, |
| "text": "Cho et al. 2014)", |
| "ref_id": "BIBREF11" |
| }, |
| { |
| "start": 670, |
| "end": 687, |
| "text": "(Wu et al., 2016;", |
| "ref_id": "BIBREF34" |
| }, |
| { |
| "start": 688, |
| "end": 706, |
| "text": "Zhou et al., 2016)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "There have been several attempts to use convolutional encoder models for neural machine trans-lation in the past but they were either only applied to rescoring n-best lists of classical systems (Kalchbrenner and Blunsom, 2013) or were not competitive to recurrent alternatives (Cho et al., 2014a) . This is despite several attractive properties of convolutional networks. For example, convolutional networks operate over a fixed-size window of the input sequence which enables the simultaneous computation of all features for a source sentence. This contrasts to RNNs which maintain a hidden state of the entire past that prevents parallel computation within a sequence.", |
| "cite_spans": [ |
| { |
| "start": 194, |
| "end": 226, |
| "text": "(Kalchbrenner and Blunsom, 2013)", |
| "ref_id": "BIBREF14" |
| }, |
| { |
| "start": 277, |
| "end": 296, |
| "text": "(Cho et al., 2014a)", |
| "ref_id": "BIBREF3" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "A succession of convolutional layers provides a shorter path to capture relationships between elements of a sequence compared to RNNs. 2 This also eases learning because the resulting tree-structure applies a fixed number of non-linearities compared to a recurrent neural network for which the number of non-linearities vary depending on the time-step. Because processing is bottom-up, all words undergo the same number of transformations, whereas for RNNs the first word is over-processed and the last word is transformed only once.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this paper we show that an architecture based on convolutional layers is very competitive to recurrent encoders. We investigate simple average pooling as well as parameterized convolutions as an alternative to recurrent encoders and enable very deep convolutional encoders by using residual connections (He et al., 2015; \u00a73) .", |
| "cite_spans": [ |
| { |
| "start": 306, |
| "end": 323, |
| "text": "(He et al., 2015;", |
| "ref_id": "BIBREF9" |
| }, |
| { |
| "start": 324, |
| "end": 327, |
| "text": "\u00a73)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "We experiment on several standard datasets and compare our approach to variants of recurrent encoders such as uni-directional and bi-directional LSTMs. On WMT'16 English-Romanian translation we achieve accuracy that is very competitive to the current state-of-the-art result. We perform competitively on WMT'15 English-German, and nearly match the performance of the best WMT'14 English-French system based on a deep LSTM setup when comparing on a commonly used subset of the training data (Zhou et al. 2016; \u00a74, \u00a75) .", |
| "cite_spans": [ |
| { |
| "start": 490, |
| "end": 508, |
| "text": "(Zhou et al. 2016;", |
| "ref_id": "BIBREF36" |
| }, |
| { |
| "start": 509, |
| "end": 516, |
| "text": "\u00a74, \u00a75)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "The general architecture of the models in this work follows the encoder-decoder approach with soft attention first introduced in (Bahdanau et al., 2015) . A source sentence x = (x 1 , . . . , x m ) of m words is processed by an encoder which outputs a sequence of states z = (z 1 . . . . , z m ).", |
| "cite_spans": [ |
| { |
| "start": 129, |
| "end": 152, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The decoder is an RNN network that computes a new hidden state s i+1 based on the previous state s i , an embedding g i of the previous target language word y i , as well as a conditional input c i derived from the encoder output z. We use LSTMs (Hochreiter and Schmidhuber, 1997) for all decoder networks whose state s i comprises of a cell vector and a hidden vector h i which is output by the LSTM at each time step. We input c i into the LSTM by concatenating it to g i .", |
| "cite_spans": [ |
| { |
| "start": 246, |
| "end": 280, |
| "text": "(Hochreiter and Schmidhuber, 1997)", |
| "ref_id": "BIBREF10" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The translation model computes a distribution over the V possible target words y i+1 by transforming the LSTM output h i via a linear layer with weights W o and bias b o :", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "p(y i+1 |y 1 , . . . , y i , x) = softmax(W o h i+1 + b o )", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "The conditional input c i at time i is computed via a simple dot-product style attention mechanism (Luong et al., 2015a) . Specifically, we transform the decoder hidden state h i by a linear layer with weights W d and b d to match the size of the embedding of the previous target word g i and then sum the two representations to yield d i . Conditional input c i is a weighted sum of attention scores a i \u2208 R m and encoder outputs z. The attention scores a i are determined by a dot product between h i with each z j , followed by a softmax over the source sequence:", |
| "cite_spans": [ |
| { |
| "start": 99, |
| "end": 120, |
| "text": "(Luong et al., 2015a)", |
| "ref_id": "BIBREF20" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "d i = W d h i + b d + g i , a ij = exp d T i z j m t=1 exp d T i z t , c i = m j=1 a ij z j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "In preliminary experiments, we did not find the MLP attention of (Bahdanau et al., 2015) to perform significantly better in terms of BLEU nor perplexity. However, we found the dot-product attention to be more favorable in terms of training and evaluation speed. We use bi-directional LSTMs to implement recurrent encoders similar to (Zhou et al., 2016) which achieved some of the best WMT14 English-French results reported to date. First, each word of the input sequence x is embedded in distributional space resulting in e = (e 1 , . . . , e m ). The embeddings are input to two stacks of uni-directional RNNs where the output of each layer is reversed before being fed into the next layer. The first stack takes the original sequence while the second takes the reversed input sequence; the output of the second stack is reversed so that the final outputs of the stacks align. Finally, the top-level hidden states of the two stacks are concatenated and fed into a linear layer to yield z. We denote this encoder architecture as BiLSTM.", |
| "cite_spans": [ |
| { |
| "start": 65, |
| "end": 88, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 333, |
| "end": 352, |
| "text": "(Zhou et al., 2016)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "3 Non-recurrent Encoders", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Recurrent Neural Machine Translation", |
| "sec_num": "2" |
| }, |
| { |
| "text": "A simple baseline for non-recurrent encoders is the pooling model described in (Ranzato et al., 2015) which simply averages the embeddings of k consecutive words. Averaging word embeddings does not convey positional information besides that the words in the input are somewhat close to each other. As a remedy, we add position embeddings to encode the absolute position of each source word within a sentence. Each source embedding e j therefore contains a position embedding l j as well as the word embedding w j . Position embeddings have also been found helpful in memory networks for question-answering and language modeling (Sukhbaatar et al., 2015) . Similar to the recurrent encoder ( \u00a72), the attention scores a ij are computed from the pooled representations z j , however, the conditional input c i is a weighted sum of the embeddings e j , not z j , i.e.,", |
| "cite_spans": [ |
| { |
| "start": 79, |
| "end": 101, |
| "text": "(Ranzato et al., 2015)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 628, |
| "end": 653, |
| "text": "(Sukhbaatar et al., 2015)", |
| "ref_id": "BIBREF31" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pooling Encoder", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "e j = w j + l j , z j = 1 k k/2 t=\u2212 k/2 e j+t , c i = m j=1 a ij e j", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pooling Encoder", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "The input sequence is padded prior to pooling such that the encoder output matches the input length |z| = |x|. We set k to 5 in all experiments as (Ranzato et al., 2015).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Pooling Encoder", |
| "sec_num": "3.1" |
| }, |
| { |
| "text": "A straightforward extension of pooling is to learn the kernel in a convolutional neural network (CNN). The encoder output z j contains information about a fixed-sized context depending on the kernel width k but the desired context width may vary. This can be addressed by stacking several layers of convolutions followed by non-linearities: additional layers increase the total context size while non-linearities can modulate the effective size of the context as needed. For instance, stacking 5 convolutions with kernel width k = 3 results in an input field of 11 words, i.e., each output depends on 11 input words, and the non-linearities allow the encoder to exploit the full input field, or to concentrate on fewer words as needed. To ease learning for deep encoders, we add residual connections from the input of each convolution to the output and then apply the non-linear activation function to the output (tanh; He et al., 2015) ; the non-linearities are therefore not 'bypassed'. Multi-layer CNNs are constructed by stacking several blocks on top of each other. The CNNs do not contain pooling layers which are commonly used for down-sampling, i.e., the full source sequence length will be retained after the network has been applied. Similar to the pooling model, the convolutional encoder uses position embeddings.", |
| "cite_spans": [ |
| { |
| "start": 920, |
| "end": 936, |
| "text": "He et al., 2015)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convolutional Encoder", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "The final encoder consists of two stacked convolutional networks (Figure 1 ): CNN-a produces the encoder output z j to compute the attention scores a i , while the conditional input c i to the decoder is computed by summing the outputs of CNN-c,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 65, |
| "end": 74, |
| "text": "(Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Convolutional Encoder", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "z j = CNN-a(e) j , c i = m j=1 a ij CNN-c(e) j .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convolutional Encoder", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "In practice, we found that two different CNNs resulted in better perplexity as well as BLEU compared to using a single one ( \u00a75.3). We also found this to perform better than directly summing the e i without transformation as for the pooling model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convolutional Encoder", |
| "sec_num": "3.2" |
| }, |
| { |
| "text": "There are several past attempts to use convolutional encoders for neural machine translation, however, to our knowledge none of them were able to match the performance of recurrent encoders. (Kalchbrenner and Blunsom, 2013) introduce a convolutional sentence encoder in which a multi-layer CNN generates a fixed sized embedding for a source sentence, or an n-gram representation followed by transposed convolutions for directly generating a per-token decoder input. The latter requires the length of the translation prior to generation and both models were evaluated by rescoring the output of an existing translation system. (Cho et al., 2014a) propose a gated recursive CNN which is repeatedly applied until a fixed-size representation is ob-tained but the recurrent encoder achieves higher accuracy. In follow-up work, the authors improved the model via a soft-attention mechanism but did not reconsider convolutional encoder models (Bahdanau et al., 2015) .", |
| "cite_spans": [ |
| { |
| "start": 626, |
| "end": 645, |
| "text": "(Cho et al., 2014a)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 936, |
| "end": 959, |
| "text": "(Bahdanau et al., 2015)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "Concurrently to our work, (Kalchbrenner et al., 2016) have introduced convolutional translation models without an explicit attention mechanism but their approach does not yet result in state-ofthe-art accuracy. (Lamb and Xie, 2016) also proposed a multi-layer CNN to generate a fixed-size encoder representation but their work lacks quantitative evaluation in terms of BLEU. Meng et al. (2015) and (Tu et al., 2015) applied convolutional models to score phrase-pairs of traditional phrasebased and dependency-based translation models. Convolutional architectures have also been successful in language modeling but so far failed to outperform LSTMs (Pham et al., 2016) .", |
| "cite_spans": [ |
| { |
| "start": 26, |
| "end": 53, |
| "text": "(Kalchbrenner et al., 2016)", |
| "ref_id": "BIBREF15" |
| }, |
| { |
| "start": 211, |
| "end": 231, |
| "text": "(Lamb and Xie, 2016)", |
| "ref_id": "BIBREF18" |
| }, |
| { |
| "start": 375, |
| "end": 393, |
| "text": "Meng et al. (2015)", |
| "ref_id": "BIBREF22" |
| }, |
| { |
| "start": 398, |
| "end": 415, |
| "text": "(Tu et al., 2015)", |
| "ref_id": "BIBREF33" |
| }, |
| { |
| "start": 648, |
| "end": 667, |
| "text": "(Pham et al., 2016)", |
| "ref_id": "BIBREF25" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Related Work", |
| "sec_num": "3.3" |
| }, |
| { |
| "text": "We evaluate different encoders and ablate architectural choices on a small dataset from the German-English machine translation track of IWSLT 2014 (Cettolo et al., 2014 ) with a similar setting to (Ranzato et al., 2015) . Unless otherwise stated, we restrict training sentences to have no more than 175 words; test sentences are not filtered. This is a higher threshold compared to other publications but ensures proper training of the position embeddings for non-recurrent encoders; the length threshold did not significantly effect recurrent encoders. Length filtering results in 167K sentence pairs and we test on the concatenation of tst2010, tst2011, tst2012, tst2013 and dev2010 comprising 6948 sentence pairs. 3 Our final results are on three major WMT tasks: WMT'16 English-Romanian. We use the same data and pre-processing as (Sennrich et al., 2016a) and train on 2.8M sentence pairs. 4 Our model is word-based instead of relying on byte-pair encoding (Sennrich et al., 2016b) . We evaluate on new-stest2016. WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Com- mon Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007) . We report results on newstest2015. WMT'14 English-French. We use a commonly used subset of 12M sentence pairs (Schwenk, 2014) , and remove sentences longer than 150 words. This results in 10.7M sentence-pairs for training. Results are reported on ntst14.", |
| "cite_spans": [ |
| { |
| "start": 147, |
| "end": 168, |
| "text": "(Cettolo et al., 2014", |
| "ref_id": "BIBREF2" |
| }, |
| { |
| "start": 197, |
| "end": 219, |
| "text": "(Ranzato et al., 2015)", |
| "ref_id": "BIBREF26" |
| }, |
| { |
| "start": 835, |
| "end": 859, |
| "text": "(Sennrich et al., 2016a)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 961, |
| "end": 985, |
| "text": "(Sennrich et al., 2016b)", |
| "ref_id": "BIBREF29" |
| }, |
| { |
| "start": 1217, |
| "end": 1237, |
| "text": "(Koehn et al., 2007)", |
| "ref_id": "BIBREF17" |
| }, |
| { |
| "start": 1350, |
| "end": 1365, |
| "text": "(Schwenk, 2014)", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "A small subset of the training data serves as validation set (5% for IWSLT'14 and 1% for WMT) for early stopping and learning rate annealing ( \u00a74.3). For IWSLT'14, we replace words that occur fewer than 3 times with a <unk> symbol, which results in a vocabulary of 24158 English and 35882 German word types. For WMT datasets, we retain 200K source and 80K target words. For English-French only, we set the target vocabulary to 30K types to be comparable with previous work.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Datasets", |
| "sec_num": "4.1" |
| }, |
| { |
| "text": "We use 512 hidden units for both recurrent encoders and decoders. We reset the decoder hidden states to zero between sentences. For the convolutional encoder, 512 hidden units are used for each layer in CNN-a, while layers in CNN-c contain 256 units each. All embeddings, including the output produced by the decoder before the final linear layer, are of 256 dimensions. On the WMT corpora, we find that we can improve the performance of the bidirectional LSTM models (BiLSTM) by using 512dimensional word embeddings.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model parameters", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Model weights are initialized from a uniform distribution within [\u22120.05, 0.05]. For convolutional layers, we use a uniform distribution of \u2212kd \u22120.5 , kd \u22120.5 , where k is the kernel width (we use 3 throughout this work) and d is the input size for the first layer and the number of hidden units for subsequent layers (Collobert et al., 2011b) . For CNN-c, we transform the input and output with a linear layer each to match the smaller embedding size. The model parameters were tuned on IWSLT'14 and cross-validated on the larger WMT corpora.", |
| "cite_spans": [ |
| { |
| "start": 317, |
| "end": 342, |
| "text": "(Collobert et al., 2011b)", |
| "ref_id": "BIBREF7" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Model parameters", |
| "sec_num": "4.2" |
| }, |
| { |
| "text": "Recurrent models are trained with Adam as we found them to benefit from aggressive optimization. We use a step width of 3.125 \u2022 10 \u22124 and early stopping based on validation perplexity (Kingma and Ba, 2014). For non-recurrent encoders, we obtain best results with stochastic gradient descent (SGD) and annealing: we use a learning rate of 0.1 and once the validation perplexity stops improving, we reduce the learning rate by an order of magnitude each epoch until it falls below 10 \u22124 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimization", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "For all models, we use mini-batches of 32 sentences for IWSLT'14 and 64 for WMT. We use truncated back-propagation through time to limit the length of target sequences per mini-batch to 25 words. Gradients are normalized by the mini-batch size. We re-normalize the gradients if their norm exceeds 25 (Pascanu et al., 2013) . Gradients of convolutional layers are scaled by sqrt(dim(input)) \u22121 similar to (Collobert et al., 2011b) . We use dropout on the embeddings and decoder outputs h i with a rate of 0.2 for IWSLT'14 and 0.1 for WMT (Srivastava et al., 2014) . All models are implemented in Torch (Collobert et al., 2011a) and trained on a single GPU.", |
| "cite_spans": [ |
| { |
| "start": 300, |
| "end": 322, |
| "text": "(Pascanu et al., 2013)", |
| "ref_id": "BIBREF24" |
| }, |
| { |
| "start": 404, |
| "end": 429, |
| "text": "(Collobert et al., 2011b)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 537, |
| "end": 562, |
| "text": "(Srivastava et al., 2014)", |
| "ref_id": "BIBREF30" |
| }, |
| { |
| "start": 601, |
| "end": 626, |
| "text": "(Collobert et al., 2011a)", |
| "ref_id": "BIBREF6" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Optimization", |
| "sec_num": "4.3" |
| }, |
| { |
| "text": "We report accuracy of single systems by training several identical models with different ran-dom seeds (5 for IWSLT'14, 3 for WMT) and pick the one with the best validation perplexity for final BLEU evaluation. Translations are generated by a beam search and we normalize log-likelihood scores by sentence length. On IWSLT'14 we use a beam width of 10 and for WMT models we tune beam width and word penalty on a separate test set, that is newsdev2016 for WMT'16 English-Romanian, newstest2014 for WMT'15 English-German and ntst1213 for WMT'14 English-French. 5 The word penalty adds a constant factor to log-likelihoods, except for the end-of-sentence token.", |
| "cite_spans": [ |
| { |
| "start": 559, |
| "end": 560, |
| "text": "5", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "Prior to scoring the generated translations against the respective references, we perform unknown word replacement based on attention scores (Jean et al., 2015) . Unknown words are replaced by looking up the source word with the maximum attention score in a pre-computed dictionary. If the dictionary contains no translation, then we simply copy the source word. Dictionaries were extracted from the aligned training data that was aligned with fast align (Dyer et al., 2013) . Each source word is mapped to the target word it is most frequently aligned to.", |
| "cite_spans": [ |
| { |
| "start": 141, |
| "end": 160, |
| "text": "(Jean et al., 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 455, |
| "end": 474, |
| "text": "(Dyer et al., 2013)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "For convolutional encoders with stacked CNN-c layers we noticed for some models that the attention maxima were consistently shifted by one word. We determine this per-model offset on the abovementioned development sets and correct for it. Finally, we compute case-sensitive tokenized BLEU, except for WMT'16 English-Romanian where we use detokenized BLEU to be comparable with Sennrich et al. (2016a). 6", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation", |
| "sec_num": "4.4" |
| }, |
| { |
| "text": "We first compare recurrent and non-recurrent encoders in terms of perplexity and BLEU on IWSLT'14 with and without position embeddings ( \u00a73.1) and include a phrase-based system (Koehn et al., 2007) . Table 1 shows that a single-layer convolutional model with position embeddings (Convolutional) can outperform both a uni-directional LSTM encoder (LSTM) as well as a bi-directional LSTM encoder (BiLSTM). Next, we increase the depth of the convolutional encoder. We choose a good setting by independently varying the number of layers in CNN-a and CNN-c between 1 and 10 and obtained best validation set perplexity with six layers for CNN-a and three layers for CNN-c. This configuration outperforms BiLSTM by 0.7 BLEU (Deep Convolutional 6/3). We investigate depth in the convolutional encoder more in \u00a75.3. Among recurrent encoders, the BiLSTM is 2.3 BLEU better than the uni-directional version. The simple pooling encoder which does not contain any parameters is only 1.3 BLEU lower than a unidirectional LSTM encoder and 3.6 BLEU lower than BiLSTM. The results without position embeddings (words) show that position information is crucial for convolutional encoders. In particular for shallow models (Pooling and Convolutional), whereas deeper models are less effected. Recurrent encoders do not benefit from explicit position information because this information can be naturally extracted through the sequential computation.", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 197, |
| "text": "(Koehn et al., 2007)", |
| "ref_id": "BIBREF17" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 200, |
| "end": 207, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Recurrent vs. Non-recurrent Encoders", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "When tuning model settings, we generally observe good correlation between perplexity and BLEU. However, for convolutional encoders perplexity gains translate to smaller BLEU improvements compared to recurrent counterparts (Table 1) . We observe a similar trend on larger datasets.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 222, |
| "end": 231, |
| "text": "(Table 1)", |
| "ref_id": "TABREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Recurrent vs. Non-recurrent Encoders", |
| "sec_num": "5.1" |
| }, |
| { |
| "text": "Next, we evaluate the BiLSTM encoder and the convolutional encoder architecture on three larger tasks and compare against previously published results. On WMT'16 English-Romanian translation we compare to (Sennrich et al., 2016a) , the winning single system entry for this language pair. Their model consists of a bi-directional GRU encoder, a GRU decoder and MLP-based attention. They use byte pair encoding (BPE) to achieve openvocabulary translation and dropout in all components of the neural network to achieve 28.1 BLEU; we use the same pre-processing but no BPE ( \u00a74). The results (Table 2) show that a deep convolutional encoder can perform competitively to the state of the art on this dataset (Sennrich et al., 2016a) . Our bi-directional LSTM encoder baseline is 0.6 BLEU lower than the state of the art but uses only 512 hidden units compared to 1024. A singlelayer convolutional encoder with embedding size 256 performs at 27.1 BLEU. Increasing the number of convolutional layers to 8 in CNN-a and 4 in CNN-c achieves 27.8 BLEU which outperforms our baseline and is competitive to the state of the art.", |
| "cite_spans": [ |
| { |
| "start": 205, |
| "end": 229, |
| "text": "(Sennrich et al., 2016a)", |
| "ref_id": "BIBREF28" |
| }, |
| { |
| "start": 703, |
| "end": 727, |
| "text": "(Sennrich et al., 2016a)", |
| "ref_id": "BIBREF28" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 588, |
| "end": 597, |
| "text": "(Table 2)", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Evaluation on WMT Corpora", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "On WMT'15 English to German, we compare to a BiLSTM baseline and to prior work: (Jean et al., 2015) introduce a large output vocabulary; the decoder of (Chung et al., 2016) operates at the character level; (Yang et al., 2016) use LSTMs instead of GRUs and feed the conditional input to the output layer as well as to the decoder.", |
| "cite_spans": [ |
| { |
| "start": 77, |
| "end": 96, |
| "text": "(Jean et al., 2015)", |
| "ref_id": "BIBREF12" |
| }, |
| { |
| "start": 203, |
| "end": 222, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation on WMT Corpora", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Our single-layer BiLSTM baseline is competitive with prior work, and a two-layer BiLSTM encoder performs 0.6 BLEU better at 24.1 BLEU. Previous work also used multi-layer setups: e.g., (Chung et al., 2016) have two layers in both the encoder and the decoder with 1024 hidden units, and (Yang et al., 2016) use 1000 hidden units per LSTM. We use 512 hidden units for both LSTM and convolutional encoders. Our convolutional model with either 8 or 15 layers in CNN-a outperforms the BiLSTM encoder with both a single-layer and a two-layer decoder.", |
| "cite_spans": [ |
| { |
| "start": 182, |
| "end": 202, |
| "text": "(Chung et al., 2016)", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 282, |
| "end": 301, |
| "text": "(Yang et al., 2016)", |
| "ref_id": "BIBREF35" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation on WMT Corpora", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "Finally, we evaluate on the larger WMT'14 English-French corpus. On this dataset the recurrent architectures benefit from an additional layer in both the encoder and the decoder. For a single-layer decoder, a deep convolutional encoder outperforms the BiLSTM by 0.3 BLEU, and for a two-layer decoder, our very deep convolutional encoder with up to 20 layers outperforms the BiLSTM by 0.4 BLEU. It has 40% fewer parameters than the BiLSTM due to the smaller embedding sizes. We also outperform several previous systems, including the very deep encoder-decoder model proposed by (Luong et al., 2015a). Our best result is just 0.2 BLEU below (Zhou et al., 2016), who use a very deep LSTM setup with a 9-layer encoder, a 7-layer decoder, shortcut connections and extensive dropout and L2 regularization.", |
| "cite_spans": [ |
| { |
| "start": 584, |
| "end": 605, |
| "text": "(Luong et al., 2015a)", |
| "ref_id": "BIBREF20" |
| }, |
| { |
| "start": 647, |
| "end": 666, |
| "text": "(Zhou et al., 2016)", |
| "ref_id": "BIBREF36" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Evaluation on WMT Corpora", |
| "sec_num": "5.2" |
| }, |
| { |
| "text": "We next motivate our design of the convolutional encoder (\u00a73.2). We use the smaller IWSLT'14 German-English setup without unknown word replacement to enable fast experimental turn-around. BLEU results are averaged over three training runs initialized with different seeds. Figure 2 shows accuracy for different numbers of layers in both CNNs, with and without residual connections. Our first observation is that computing the conditional input c i directly over embeddings e (line \"without CNN-c\") already works well, at 28.3 BLEU with a single CNN-a layer and at 29.1 BLEU for CNN-a with 7 layers (Figure 2a). Increasing the number of CNN-c layers is beneficial up to three layers; beyond this we did not observe further improvements. Similarly, increasing the number of layers in CNN-a beyond six does not increase accuracy on this relatively small dataset. In general, choosing two to three times as many layers in CNN-a as in CNN-c is a good rule of thumb. Without residual connections, the model fails to utilize the increase in modeling power from additional layers, and performance drops significantly for deeper encoders (Figure 2b).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 274, |
| "end": 282, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 603, |
| "end": 614, |
| "text": "(Figure 2a)", |
| "ref_id": "FIGREF1" |
| }, |
| { |
| "start": 1138, |
| "end": 1149, |
| "text": "(Figure 2b)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Convolutional Encoder Architecture Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "Our convolutional architecture relies on two sets of networks: CNN-a for the attention score computation a i and CNN-c for the conditional input c i fed to the decoder. We found that using the same network for both tasks, similar to recurrent encoders, resulted in poor accuracy of 22.9 BLEU. This compares to 28.5 BLEU for separate single-layer networks, or 28.3 BLEU when aggregating embeddings for c i . Increasing the number of layers in the single-network setup did not help. Figure 2(a) suggests that the attention weights (CNN-a) need to integrate information from a wide context, which can be done with a deep stack. At the same time, the vectors which are averaged (CNN-c) seem to benefit from a shallower, more local representation closer to the input words. Two stacks are an easy way to satisfy these conflicting requirements.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 482, |
| "end": 493, |
| "text": "Figure 2(a)", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Convolutional Encoder Architecture Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "In Appendix A we visualize attention scores and find that alignments for CNN encoders are less sharp than for BiLSTMs; however, this does not affect the effectiveness of unknown word replacement once we adjust for shifted maxima. In Appendix B we investigate whether deep convolutional encoders are required for translating long sentences and observe that even relatively shallow encoders perform well on long sentences.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Convolutional Encoder Architecture Details", |
| "sec_num": "5.3" |
| }, |
| { |
| "text": "For training, we use the fast cuDNN LSTM implementation for layers without attention and experiment on IWSLT'14 with batch size 32. The single-layer BiLSTM model trains at 4,300 target words/second, while the 6/3 deep convolutional encoder trains at 6,400 words/second on an NVidia Tesla M40 GPU. We do not observe shorter overall training time since SGD converges more slowly than Adam, which we use for the BiLSTM models.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Generation Speed", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We measure generation speed on an Intel Haswell CPU clocked at 2.50GHz with a single thread for BLAS operations. We use vocabulary selection, which can speed up generation by up to a factor of ten at no cost in accuracy by making the time to compute the final output layer negligible (Mi et al., 2016; L'Hostis et al., 2016). This shifts the focus from the efficiency of the encoder to the efficiency of the decoder. On IWSLT'14 (Table 3a) the convolutional encoder increases the speed of the overall model by a factor of 1.35 compared to the BiLSTM encoder while improving accuracy by 0.7 BLEU. In this setup both encoder models have the same hidden layer and embedding sizes.", |
| "cite_spans": [ |
| { |
| "start": 284, |
| "end": 301, |
| "text": "(Mi et al., 2016;", |
| "ref_id": "BIBREF23" |
| }, |
| { |
| "start": 302, |
| "end": 324, |
| "text": "L'Hostis et al., 2016)", |
| "ref_id": "BIBREF19" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 430, |
| "end": 440, |
| "text": "(Table 3a)", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training and Generation Speed", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "On the larger WMT'15 English-German task (Table 3b) the convolutional encoder speeds up generation by 2.1 times compared to a two-layer BiLSTM. This corresponds to 231 source words/second with beam size 5. Our best model on this dataset generates 203 words/second, but at slightly lower accuracy compared to the full-vocabulary setting in Table 2. The recurrent encoder uses larger embeddings than the convolutional encoder; these were required for the models to match in accuracy.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 41, |
| "end": 50, |
| "text": "(Table 3b", |
| "ref_id": "TABREF5" |
| }, |
| { |
| "start": 340, |
| "end": 347, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training and Generation Speed", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "The smaller embedding size is not the only reason for the speed-up. In Table 3a, we compare a Conv 6/3 encoder and a BiLSTM with equal embedding sizes. The convolutional encoder is still 1.34x faster (at 0.7 higher BLEU) although it requires roughly 1.6x as many FLOPs. We believe that this is likely due to better cache locality for convolutional layers on CPUs: an LSTM with fused gates 7 requires two large matrix multiplications with different weights, as well as additions, multiplications and non-linearities for each source word, while the output of each convolutional layer can be computed as a whole with a single matrix multiplication.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 71, |
| "end": 79, |
| "text": "Table 3a", |
| "ref_id": "TABREF5" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Training and Generation Speed", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "For comparison, the quantized deep LSTM- 7 Our bi-directional LSTM implementation is based on torch rnnlib which uses fused LSTM gates (https://github.com/facebookresearch/torch-rnnlib/) and which we consider an efficient implementation.", |
| "cite_spans": [ |
| { |
| "start": 41, |
| "end": 42, |
| "text": "7", |
| "ref_id": null |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Training and Generation Speed", |
| "sec_num": "5.4" |
| }, |
| { |
| "text": "We introduced a simple encoder model for neural machine translation based on convolutional networks. This approach is more parallelizable than recurrent networks and provides a shorter path to capture long-range dependencies in the source. We find it essential to use source position embeddings as well as different CNNs for attention score computation and conditional input aggregation. Our experiments show that convolutional encoders perform on par with or better than baselines based on bi-directional LSTM encoders. In comparison to other recent work, our deep convolutional encoder is competitive with the best published results to date (WMT'16 English-Romanian), which are obtained with significantly more complex models (WMT'14 English-French) or stem from improvements that are orthogonal to our work (WMT'15 English-German). Our architecture also leads to large generation speed improvements: translation models with our convolutional encoder can translate twice as fast as strong baselines with bi-directional recurrent encoders.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "Future work includes better training to enable faster convergence with the convolutional encoder, so as to better leverage its higher processing speed. Our fast architecture is interesting for character-level encoders, where the input is significantly longer than for words. We also plan to investigate the effectiveness of our architecture on other sequence-to-sequence tasks, e.g., summarization, constituency parsing and dialogue modeling.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "6" |
| }, |
| { |
| "text": "In Figure 4 and Figure 5, we plot attention scores for a sample WMT'15 English-German and WMT'14 English-French translation with BiLSTM and deep convolutional encoders. The translation is on the x-axis and the source sentence on the y-axis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 4", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 16, |
| "end": 24, |
| "text": "Figure 5", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Alignment Visualization", |
| "sec_num": null |
| }, |
| { |
| "text": "The attention scores of the BiLSTM output are sharp but do not necessarily represent a correct alignment. For CNN encoders the scores are less focused but still indicate an approximate source location, e.g., in Figure 4b, when moving the clause \"over 1,000 people were taken hostage\" to the back of the translation. For some models, attention maxima are consistently shifted by one token, as in both Figure 4b and Figure 5b.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 211, |
| "end": 220, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 400, |
| "end": 409, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 414, |
| "end": 423, |
| "text": "Figure 5b", |
| "ref_id": "FIGREF6" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Alignment Visualization", |
| "sec_num": null |
| }, |
| { |
| "text": "Interestingly, convolutional encoders tend to focus on the last token (Figure 4b) or on both the first and last tokens (Figure 5b). Motivated by the hypothesis that this may be due to the decoder depending on the length of the source sentence (which it cannot determine without position embeddings), we explicitly provided a distributed representation of the input length to the decoder and attention module. However, this neither changed the attention patterns nor improved translation accuracy. One characteristic of our convolutional encoder architecture is that the context over which outputs are computed depends on the number of layers. With bi-directional RNNs, every encoder output depends on the entire source sentence. In Figure 3, we evaluate whether this limited context affects translation quality on the longer sentences of WMT'15 English-German, which often require moving verbs over long distances. We sort the newstest2015 test set by source length, partition it into 15 equally-sized buckets, and compare the BLEU scores of the models listed in Table 2 on a per-bucket basis.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 72, |
| "end": 81, |
| "text": "Figure 4b", |
| "ref_id": "FIGREF4" |
| }, |
| { |
| "start": 118, |
| "end": 129, |
| "text": "(Figure 5b)", |
| "ref_id": "FIGREF6" |
| }, |
| { |
| "start": 748, |
| "end": 756, |
| "text": "Figure 3", |
| "ref_id": null |
| }, |
| { |
| "start": 1070, |
| "end": 1077, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "A Alignment Visualization", |
| "sec_num": null |
| }, |
| { |
| "text": "There is no clear evidence of sub-par translations on sentences that are longer than the observable context per encoder output. We include a small encoder with a 6-layer CNN-c and a 3-layer CNN-a in the comparison, which performs worse than a 2-layer BiLSTM (23.3 BLEU vs. 24.1). With 6 convolutional layers at kernel width 3, each encoder output contains information from 13 adjacent source words. Looking at the accuracy for sentences with 15 words or more, this relatively shallow CNN is on par with or better than the BiLSTM for 5 out of 10 buckets, even though the BiLSTM has access to the entire source context. Similar observations can be made for the deeper convolutional encoders.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "B Performance by Sentence Length", |
| "sec_num": null |
| }, |
| { |
| "text": "The source code will be available at https://github.com/facebookresearch/fairseq", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "For kernel width k and sequence length n, we require max(1, \u2308(n\u22121)/(k\u22121)\u2309) forward passes over a succession of stacked convolutional layers, compared to n forward passes with an RNN.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "In contrast to the other datasets, we lowercase the training data and evaluate with case-insensitive BLEU. 4 We followed the pre-processing of https://github.com/rsennrich/wmt16-scripts/blob/master/sample/preprocess.sh and added the back-translated data from http://data.statmt.org/rsennrich/wmt16_backtranslations/en-ro.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| }, |
| { |
| "text": "Specifically, we select a beam from {5, 10} and a word penalty from {0, \u22120.5, \u22121, \u22121.5}. 6 https://github.com/moses-smt/mosesdecoder/blob/617e8c8ed1630fb1d1/scripts/generic/{multi-bleu.perl, mteval-v13a.pl}", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In Proc. of ICLR.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "MetaMind Neural Machine Translation System for WMT 2016", |
| "authors": [ |
| { |
| "first": "James", |
| "middle": [], |
| "last": "Bradbury", |
| "suffix": "" |
| }, |
| { |
| "first": "Richard", |
| "middle": [], |
| "last": "Socher", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of WMT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "James Bradbury and Richard Socher. 2016. MetaMind Neural Machine Translation System for WMT 2016. In Proc. of WMT.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Report on the 11th IWSLT evaluation campaign", |
| "authors": [ |
| { |
| "first": "Mauro", |
| "middle": [], |
| "last": "Cettolo", |
| "suffix": "" |
| }, |
| { |
| "first": "Jan", |
| "middle": [], |
| "last": "Niehues", |
| "suffix": "" |
| }, |
| { |
| "first": "Sebastian", |
| "middle": [], |
| "last": "St\u00fcker", |
| "suffix": "" |
| }, |
| { |
| "first": "Luisa", |
| "middle": [], |
| "last": "Bentivogli", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of IWSLT", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Mauro Cettolo, Jan Niehues, Sebastian St\u00fcker, Luisa Bentivogli, and Marcello Federico. 2014. Report on the 11th IWSLT evaluation campaign. In Proc. of IWSLT.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "On the Properties of Neural Machine Translation: Encoder-decoder Approaches", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merri\u00ebnboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of SSST", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Dzmitry Bah- danau, and Yoshua Bengio. 2014a. On the Properties of Neural Machine Translation: Encoder-decoder Ap- proaches. In Proc. of SSST.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Bart", |
| "middle": [], |
| "last": "Van Merri\u00ebnboer", |
| "suffix": "" |
| }, |
| { |
| "first": "Caglar", |
| "middle": [], |
| "last": "Gulcehre", |
| "suffix": "" |
| }, |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Fethi", |
| "middle": [], |
| "last": "Bougares", |
| "suffix": "" |
| }, |
| { |
| "first": "Holger", |
| "middle": [], |
| "last": "Schwenk", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014b. Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "A Character-level Decoder without Explicit Segmentation for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Junyoung", |
| "middle": [], |
| "last": "Chung", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1603.06147" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Junyoung Chung, Kyunghyun Cho, and Yoshua Bengio. 2016. A Character-level Decoder without Explicit Segmentation for Neural Machine Translation. arXiv preprint arXiv:1603.06147 .", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Torch7: A Matlab-like Environment for Machine Learning", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Clement", |
| "middle": [], |
| "last": "Farabet", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "BigLearn, NIPS Workshop", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Koray Kavukcuoglu, and Clement Farabet. 2011a. Torch7: A Matlab-like Environment for Machine Learning. In BigLearn, NIPS Workshop. http://torch.ch.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Natural Language Processing (almost) from scratch", |
| "authors": [ |
| { |
| "first": "Ronan", |
| "middle": [], |
| "last": "Collobert", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "L\u00e9on", |
| "middle": [], |
| "last": "Bottou", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Karlen", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| }, |
| { |
| "first": "Pavel", |
| "middle": [], |
| "last": "Kuksa", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "JMLR", |
| "volume": "12", |
| "issue": "", |
| "pages": "2493--2537", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ronan Collobert, Jason Weston, L\u00e9on Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011b. Natural Language Processing (almost) from scratch. JMLR 12(Aug):2493-2537.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "A Simple, Fast, and Effective Reparameterization of IBM Model 2", |
| "authors": [ |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Victor", |
| "middle": [], |
| "last": "Chahuneau", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah A", |
| "middle": [], |
| "last": "Smith", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Chris Dyer, Victor Chahuneau, and Noah A Smith. 2013. A Simple, Fast, and Effective Reparameterization of IBM Model 2. Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "Deep Residual Learning for Image Recognition", |
| "authors": [ |
| { |
| "first": "Kaiming", |
| "middle": [], |
| "last": "He", |
| "suffix": "" |
| }, |
| { |
| "first": "Xiangyu", |
| "middle": [], |
| "last": "Zhang", |
| "suffix": "" |
| }, |
| { |
| "first": "Shaoqing", |
| "middle": [], |
| "last": "Ren", |
| "suffix": "" |
| }, |
| { |
| "first": "Jian", |
| "middle": [], |
| "last": "Sun", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of CVPR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2015. Deep Residual Learning for Image Recog- nition. In Proc. of CVPR.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Long short-term memory", |
| "authors": [ |
| { |
| "first": "Sepp", |
| "middle": [], |
| "last": "Hochreiter", |
| "suffix": "" |
| }, |
| { |
| "first": "J\u00fcrgen", |
| "middle": [], |
| "last": "Schmidhuber", |
| "suffix": "" |
| } |
| ], |
| "year": 1997, |
| "venue": "Neural computation", |
| "volume": "9", |
| "issue": "8", |
| "pages": "1735--1780", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sepp Hochreiter and J\u00fcrgen Schmidhuber. 1997. Long short-term memory. Neural computation 9(8):1735- 1780.", |
| "links": null |
| }, |
| "BIBREF11": { |
| "ref_id": "b11", |
| "title": "On Using Very Large Target Vocabulary for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "S\u00e9bastien", |
| "middle": [], |
| "last": "Jean", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Memisevic", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1412.2007v2" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00e9bastien Jean, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2014. On Using Very Large Target Vocabulary for Neural Machine Translation. arXiv preprint arXiv:1412.2007v2 .", |
| "links": null |
| }, |
| "BIBREF12": { |
| "ref_id": "b12", |
| "title": "Montreal Neural Machine Translation systems for WMT15", |
| "authors": [ |
| { |
| "first": "S\u00e9bastien", |
| "middle": [], |
| "last": "Jean", |
| "suffix": "" |
| }, |
| { |
| "first": "Orhan", |
| "middle": [], |
| "last": "Firat", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Roland", |
| "middle": [], |
| "last": "Memisevic", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of WMT", |
| "volume": "", |
| "issue": "", |
| "pages": "134--140", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "S\u00e9bastien Jean, Orhan Firat, Kyunghyun Cho, Roland Memisevic, and Yoshua Bengio. 2015. Montreal Neural Machine Translation systems for WMT15. In Proc. of WMT. pages 134-140.", |
| "links": null |
| }, |
| "BIBREF13": { |
| "ref_id": "b13", |
| "title": "Is Neural Machine Translation Ready for Deployment? A Case Study on 30 Translation Directions", |
| "authors": [ |
| { |
| "first": "Marcin", |
| "middle": [], |
| "last": "Junczys-Dowmunt", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomasz", |
| "middle": [], |
| "last": "Dwojak", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1610.01108" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marcin Junczys-Dowmunt, Tomasz Dwojak, and Hieu Hoang. 2016. Is Neural Machine Translation Ready for Deployment? A Case Study on 30 Translation Di- rections. arXiv preprint arXiv:1610.01108 .", |
| "links": null |
| }, |
| "BIBREF14": { |
| "ref_id": "b14", |
| "title": "Recurrent Continuous Translation Models", |
| "authors": [ |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Phil", |
| "middle": [], |
| "last": "Blunsom", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nal Kalchbrenner and Phil Blunsom. 2013. Recurrent Continuous Translation Models. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF15": { |
| "ref_id": "b15", |
| "title": "Neural Machine Translation in Linear Time. arXiv", |
| "authors": [ |
| { |
| "first": "Nal", |
| "middle": [], |
| "last": "Kalchbrenner", |
| "suffix": "" |
| }, |
| { |
| "first": "Lasse", |
| "middle": [], |
| "last": "Espeholt", |
| "suffix": "" |
| }, |
| { |
| "first": "Karen", |
| "middle": [], |
| "last": "Simonyan", |
| "suffix": "" |
| }, |
| { |
| "first": "Aaron", |
| "middle": [], |
| "last": "Van Den Oord", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| }, |
| { |
| "first": "Koray", |
| "middle": [], |
| "last": "Kavukcuoglu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nal Kalchbrenner, Lasse Espeholt, Karen Simonyan, Aaron van den Oord, Alex Graves, and Koray Kavukcuoglu. 2016. Neural Machine Translation in Linear Time. arXiv .", |
| "links": null |
| }, |
| "BIBREF16": { |
| "ref_id": "b16", |
| "title": "Adam: A Method for Stochastic Optimization", |
| "authors": [ |
| { |
| "first": "P", |
| "middle": [], |
| "last": "Diederik", |
| "suffix": "" |
| }, |
| { |
| "first": "Jimmy", |
| "middle": [], |
| "last": "Kingma", |
| "suffix": "" |
| }, |
| { |
| "first": "", |
| "middle": [], |
| "last": "Ba", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Diederik P. Kingma and Jimmy Ba. 2014. Adam: A Method for Stochastic Optimization. Proc. of ICLR .", |
| "links": null |
| }, |
| "BIBREF17": { |
| "ref_id": "b17", |
| "title": "Moses: Open Source Toolkit for Statistical Machine Translation", |
| "authors": [ |
| { |
| "first": "Philipp", |
| "middle": [], |
| "last": "Koehn", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Hoang", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Callison-Burch", |
| "suffix": "" |
| }, |
| { |
| "first": "Marcello", |
| "middle": [], |
| "last": "Federico", |
| "suffix": "" |
| }, |
| { |
| "first": "Nicola", |
| "middle": [], |
| "last": "Bertoldi", |
| "suffix": "" |
| }, |
| { |
| "first": "Brooke", |
| "middle": [], |
| "last": "Cowan", |
| "suffix": "" |
| }, |
| { |
| "first": "Wade", |
| "middle": [], |
| "last": "Shen", |
| "suffix": "" |
| } |
| ], |
| "year": 2007, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Philipp Koehn, Hieu Hoang, Alexandra Birch, Chris Callison-Burch, Marcello Federico, Nicola Bertoldi, Brooke Cowan, Wade Shen, Christine Moran, Richard Zens, Chris Dyer, Ondej Bojar, Alexandra Constantin, and Evan Herbst. 2007. Moses: Open Source Toolkit for Statistical Machine Translation. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF18": { |
| "ref_id": "b18", |
| "title": "Convolutional Encoders for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Andrew", |
| "middle": [], |
| "last": "Lamb", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Xie", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "2010--2020", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Andrew Lamb and Michael Xie. 2016. Con- volutional Encoders for Neural Machine Trans- lation. https://cs224d.stanford.edu/ reports/LambAndrew.pdf. Accessed: 2010- 10-31.", |
| "links": null |
| }, |
| "BIBREF19": { |
| "ref_id": "b19", |
| "title": "Vocabulary Selection Strategies for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Gurvan", |
| "middle": [], |
| "last": "L'Hostis", |
| "suffix": "" |
| }, |
| { |
| "first": "David", |
| "middle": [], |
| "last": "Grangier", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1610.00072" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Gurvan L'Hostis, David Grangier, and Michael Auli. 2016. Vocabulary Selection Strategies for Neural Ma- chine Translation. arXiv preprint arXiv:1610.00072 .", |
| "links": null |
| }, |
| "BIBREF20": { |
| "ref_id": "b20", |
| "title": "Effective approaches to attentionbased neural machine translation", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher D", |
| "middle": [], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Hieu Pham, and Christopher D Manning. 2015a. Effective approaches to attention- based neural machine translation. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF21": { |
| "ref_id": "b21", |
| "title": "Addressing the Rare Word Problem in Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Minh-Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Zaremba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Minh-Thang Luong, Ilya Sutskever, Quoc V Le, Oriol Vinyals, and Wojciech Zaremba. 2015b. Addressing the Rare Word Problem in Neural Machine Transla- tion. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF22": { |
| "ref_id": "b22", |
| "title": "Encoding Source Language with Convolutional Neural Network for Machine Translation", |
| "authors": [ |
| { |
| "first": "Fandong", |
| "middle": [], |
| "last": "Meng", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mingxuan", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Wenbin", |
| "middle": [], |
| "last": "Jiang", |
| "suffix": "" |
| }, |
| { |
| "first": "Qun", |
| "middle": [], |
| "last": "Liu", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Fandong Meng, Zhengdong Lu, Mingxuan Wang, Hang Li, Wenbin Jiang, and Qun Liu. 2015. Encoding Source Language with Convolutional Neural Network for Machine Translation. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF23": { |
| "ref_id": "b23", |
| "title": "Vocabulary Manipulation for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Haitao", |
| "middle": [], |
| "last": "Mi", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiguo", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Abe", |
| "middle": [], |
| "last": "Ittycheriah", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1605.03209" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Haitao Mi, Zhiguo Wang, and Abe Ittycheriah. 2016. Vocabulary Manipulation for Neural Machine Trans- lation. arXiv preprint arXiv:1605.03209 .", |
| "links": null |
| }, |
| "BIBREF24": { |
| "ref_id": "b24", |
| "title": "On the Difficulty of Training Recurrent Neural Networks. ICML (3)", |
| "authors": [ |
| { |
| "first": "Razvan", |
| "middle": [], |
| "last": "Pascanu", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2013, |
| "venue": "", |
| "volume": "28", |
| "issue": "", |
| "pages": "1310--1318", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Razvan Pascanu, Tomas Mikolov, and Yoshua Bengio. 2013. On the Difficulty of Training Recurrent Neural Networks. ICML (3) 28:1310-1318.", |
| "links": null |
| }, |
| "BIBREF25": { |
| "ref_id": "b25", |
| "title": "Convolutional Neural Network Language Models", |
| "authors": [ |
| { |
| "first": "Ngoc-Quan", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Germn", |
| "middle": [], |
| "last": "Kruszewski", |
| "suffix": "" |
| }, |
| { |
| "first": "Gemma", |
| "middle": [], |
| "last": "Boleda", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of EMNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ngoc-Quan Pham, Germn Kruszewski, and Gemma Boleda. 2016. Convolutional Neural Network Lan- guage Models. In Proc. of EMNLP.", |
| "links": null |
| }, |
| "BIBREF26": { |
| "ref_id": "b26", |
| "title": "Sequence level Training with Recurrent Neural Networks", |
| "authors": [ |
| { |
| "first": "Marc'Aurelio", |
| "middle": [], |
| "last": "Ranzato", |
| "suffix": "" |
| }, |
| { |
| "first": "Sumit", |
| "middle": [], |
| "last": "Chopra", |
| "suffix": "" |
| }, |
| { |
| "first": "Michael", |
| "middle": [], |
| "last": "Auli", |
| "suffix": "" |
| }, |
| { |
| "first": "Wojciech", |
| "middle": [], |
| "last": "Zaremba", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of ICLR", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Marc'Aurelio Ranzato, Sumit Chopra, Michael Auli, and Wojciech Zaremba. 2015. Sequence level Train- ing with Recurrent Neural Networks. In Proc. of ICLR.", |
| "links": null |
| }, |
| "BIBREF28": { |
| "ref_id": "b28", |
| "title": "Edinburgh neural machine translation systems for wmt 16", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Edinburgh neural machine translation systems for wmt 16.", |
| "links": null |
| }, |
| "BIBREF29": { |
| "ref_id": "b29", |
| "title": "Neural Machine Translation of Rare Words with Subword Units", |
| "authors": [ |
| { |
| "first": "Rico", |
| "middle": [], |
| "last": "Sennrich", |
| "suffix": "" |
| }, |
| { |
| "first": "Barry", |
| "middle": [], |
| "last": "Haddow", |
| "suffix": "" |
| }, |
| { |
| "first": "Alexandra", |
| "middle": [], |
| "last": "Birch", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "Proc. of ACL", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural Machine Translation of Rare Words with Subword Units. In Proc. of ACL.", |
| "links": null |
| }, |
| "BIBREF30": { |
| "ref_id": "b30", |
| "title": "Dropout: a simple way to prevent Neural Networks from overfitting", |
| "authors": [ |
| { |
| "first": "Nitish", |
| "middle": [], |
| "last": "Srivastava", |
| "suffix": "" |
| }, |
| { |
| "first": "Geoffrey", |
| "middle": [ |
| "E" |
| ], |
| "last": "Hinton", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Krizhevsky", |
| "suffix": "" |
| }, |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "Salakhutdinov", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "JMLR", |
| "volume": "15", |
| "issue": "", |
| "pages": "1929--1958", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Nitish Srivastava, Geoffrey E. Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: a simple way to prevent Neural Networks from overfitting. JMLR 15:1929-1958.", |
| "links": null |
| }, |
| "BIBREF31": { |
| "ref_id": "b31", |
| "title": "End-to-end Memory Networks", |
| "authors": [ |
| { |
| "first": "Sainbayar", |
| "middle": [], |
| "last": "Sukhbaatar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| }, |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Szlam", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "2440--2448", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sainbayar Sukhbaatar, Jason Weston, Rob Fergus, and Arthur Szlam. 2015. End-to-end Memory Networks. In Proc. of NIPS. pages 2440-2448.", |
| "links": null |
| }, |
| "BIBREF32": { |
| "ref_id": "b32", |
| "title": "Sequence to Sequence Learning with Neural Networks", |
| "authors": [ |
| { |
| "first": "Ilya", |
| "middle": [], |
| "last": "Sutskever", |
| "suffix": "" |
| }, |
| { |
| "first": "Oriol", |
| "middle": [], |
| "last": "Vinyals", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc V", |
| "middle": [], |
| "last": "Le", |
| "suffix": "" |
| } |
| ], |
| "year": 2014, |
| "venue": "Proc. of NIPS", |
| "volume": "", |
| "issue": "", |
| "pages": "3104--3112", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Se- quence to Sequence Learning with Neural Networks. In Proc. of NIPS. pages 3104-3112.", |
| "links": null |
| }, |
| "BIBREF33": { |
| "ref_id": "b33", |
| "title": "Context-dependent Translation selection using Convolutional Neural Network", |
| "authors": [ |
| { |
| "first": "Zhaopeng", |
| "middle": [], |
| "last": "Tu", |
| "suffix": "" |
| }, |
| { |
| "first": "Baotian", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhengdong", |
| "middle": [], |
| "last": "Lu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hang", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proc. of ACL-IJCNLP", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zhaopeng Tu, Baotian Hu, Zhengdong Lu, and Hang Li. 2015. Context-dependent Translation selection us- ing Convolutional Neural Network. In Proc. of ACL- IJCNLP.", |
| "links": null |
| }, |
| "BIBREF34": { |
| "ref_id": "b34", |
| "title": "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation", |
| "authors": [ |
| { |
| "first": "Yonghui", |
| "middle": [], |
| "last": "Wu", |
| "suffix": "" |
| }, |
| { |
| "first": "Mike", |
| "middle": [], |
| "last": "Schuster", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhifeng", |
| "middle": [], |
| "last": "Chen", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Mohammad", |
| "middle": [], |
| "last": "Norouzi", |
| "suffix": "" |
| }, |
| { |
| "first": "Wolfgang", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| }, |
| { |
| "first": "Maxim", |
| "middle": [], |
| "last": "Krikun", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuan", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Qin", |
| "middle": [], |
| "last": "Gao", |
| "suffix": "" |
| }, |
| { |
| "first": "Klaus", |
| "middle": [], |
| "last": "Macherey", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1609.08144" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, Klaus Macherey, et al. 2016. Google's Neural Machine Translation Sys- tem: Bridging the Gap between Human and Machine Translation. arXiv preprint arXiv:1609.08144 .", |
| "links": null |
| }, |
| "BIBREF35": { |
| "ref_id": "b35", |
| "title": "Neural Machine Translation with Recurrent Attention Modeling", |
| "authors": [ |
| { |
| "first": "Zichao", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhiting", |
| "middle": [], |
| "last": "Hu", |
| "suffix": "" |
| }, |
| { |
| "first": "Yuntian", |
| "middle": [], |
| "last": "Deng", |
| "suffix": "" |
| }, |
| { |
| "first": "Chris", |
| "middle": [], |
| "last": "Dyer", |
| "suffix": "" |
| }, |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Smola", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1607.05108" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zichao Yang, Zhiting Hu, Yuntian Deng, Chris Dyer, and Alex Smola. 2016. Neural Machine Translation with Recurrent Attention Modeling. arXiv preprint arXiv:1607.05108 .", |
| "links": null |
| }, |
| "BIBREF36": { |
| "ref_id": "b36", |
| "title": "Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation", |
| "authors": [ |
| { |
| "first": "Jie", |
| "middle": [], |
| "last": "Zhou", |
| "suffix": "" |
| }, |
| { |
| "first": "Ying", |
| "middle": [], |
| "last": "Cao", |
| "suffix": "" |
| }, |
| { |
| "first": "Xuguang", |
| "middle": [], |
| "last": "Wang", |
| "suffix": "" |
| }, |
| { |
| "first": "Peng", |
| "middle": [], |
| "last": "Li", |
| "suffix": "" |
| }, |
| { |
| "first": "Wei", |
| "middle": [], |
| "last": "Xu", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": { |
| "arXiv": [ |
| "arXiv:1606.04199.132" |
| ] |
| }, |
| "num": null, |
| "urls": [], |
| "raw_text": "Jie Zhou, Ying Cao, Xuguang Wang, Peng Li, and Wei Xu. 2016. Deep Recurrent Models with Fast-Forward Connections for Neural Machine Translation. arXiv preprint arXiv:1606.04199 . 132", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Neural machine translation model with single-layer convolutional encoder networks. CNN-a is on the left and CNN-c is at the right. Embedding layers are not shown.", |
| "uris": null |
| }, |
| "FIGREF1": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Effect of encoder depth on IWSLT'14 with and without residual connections. The x-axis varies the number of layers in CNN-a and curves show different CNN-c settings.", |
| "uris": null |
| }, |
| "FIGREF3": { |
| "num": null, |
| "type_str": "figure", |
| "text": "convolutional encoder with 15-layer CNN-a and 5-layer CNN-c.", |
| "uris": null |
| }, |
| "FIGREF4": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Attention scores for WMT'15 English-German translation for a sentence of newstest2015.", |
| "uris": null |
| }, |
| "FIGREF5": { |
| "num": null, |
| "type_str": "figure", |
| "text": "convolutional encoder with 20-layer CNN-a and 5-layer CNN-c.", |
| "uris": null |
| }, |
| "FIGREF6": { |
| "num": null, |
| "type_str": "figure", |
| "text": "Attention scores for WMT'14 English-French translation for a sentence of ntst14.", |
| "uris": null |
| }, |
| "TABREF1": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>: Accuracy of encoders with position fea-</td></tr><tr><td>tures (wrd+pos) and without (wrd) in terms of</td></tr><tr><td>BLEU and perplexity (PPL) on IWSLT'14 Ger-</td></tr><tr><td>man to English translation; results include unknown</td></tr><tr><td>word replacement. Deep Convolutional 6/3 is the</td></tr><tr><td>only multi-layer configuration, more layers for the</td></tr><tr><td>LSTMs did not improve accuracy on this dataset.</td></tr></table>", |
| "type_str": "table" |
| }, |
| "TABREF3": { |
| "text": "", |
| "html": null, |
| "num": null, |
| "content": "<table/>", |
| "type_str": "table" |
| }, |
| "TABREF5": { |
| "text": "Generation speed in source words per second on a single CPU core using vocabulary selection.", |
| "html": null, |
| "num": null, |
| "content": "<table><tr><td>based model in (Wu et al., 2016) processes 106.4</td></tr><tr><td>words/second for English-French on a CPU with</td></tr><tr><td>88 cores and 358.8 words/second on a custom TPU</td></tr><tr><td>chip. The optimized RNNsearch model and C++</td></tr><tr><td>decoder described by (Junczys-Dowmunt et al.,</td></tr><tr><td>2016) translates 265.3 words/s on a CPU with a</td></tr><tr><td>similar vocabulary selection technique, computing</td></tr><tr><td>16 sentences in parallel, i.e., 16.6 words/s on a sin-</td></tr><tr><td>gle core.</td></tr></table>", |
| "type_str": "table" |
| } |
| } |
| } |
| } |