{
"paper_id": "D18-1044",
"header": {
"generated_with": "S2ORC 1.0.0",
"date_generated": "2023-01-19T16:45:58.143068Z"
},
"title": "Semi-Autoregressive Neural Machine Translation",
"authors": [
{
"first": "Chunqi",
"middle": [],
"last": "Wang",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Ji",
"middle": [],
"last": "Zhang",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {}
},
"email": ""
},
{
"first": "Haiqing",
"middle": [],
"last": "Chen",
"suffix": "",
"affiliation": {
"laboratory": "Alibaba Group",
"institution": "",
"location": {}
},
"email": "haiqing.chenhq@alibaba-inc.com"
}
],
"year": "",
"venue": null,
"identifiers": {},
"abstract": "Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they suffer from low parallelizability and are thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation: the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property globally but relaxes it locally, and is thus able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves a 5.58\u00d7 speedup while maintaining 88% of the translation quality, significantly better than previous non-autoregressive methods. When producing two words at each time step, the SAT is almost lossless (only a 1% drop in BLEU score).",
"pdf_parse": {
"paper_id": "D18-1044",
"_pdf_hash": "",
"abstract": [
{
"text": "Existing approaches to neural machine translation are typically autoregressive models. While these models attain state-of-the-art translation quality, they suffer from low parallelizability and are thus slow at decoding long sequences. In this paper, we propose a novel model for fast sequence generation: the semi-autoregressive Transformer (SAT). The SAT keeps the autoregressive property globally but relaxes it locally, and is thus able to produce multiple successive words in parallel at each time step. Experiments conducted on English-German and Chinese-English translation tasks show that the SAT achieves a good balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves a 5.58\u00d7 speedup while maintaining 88% of the translation quality, significantly better than previous non-autoregressive methods. When producing two words at each time step, the SAT is almost lossless (only a 1% drop in BLEU score).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Abstract",
"sec_num": null
}
],
"body_text": [
{
"text": "Neural networks have been successfully applied to a variety of tasks, including machine translation. The encoder-decoder architecture is the central idea of neural machine translation (NMT). The encoder first encodes a source-side sentence x = x_1 ... x_m into hidden states, and then the decoder generates the target-side sentence y = y_1 ... y_n from the hidden states according to an autoregressive model p(y_t | y_1 ... y_{t\u22121}, x).",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Recurrent neural networks (RNNs) are inherently good at processing sequential data. Sutskever et al. (2014) successfully applied RNNs to machine translation. The attention mechanism was then introduced into the encoder-decoder architecture and greatly improved NMT. GNMT (Wu et al., 2016) further improved NMT by a bunch of tricks including residual connections and reinforcement learning. [Footnote: Part of this work was done when the author was at the Institute of Automation, Chinese Academy of Sciences.] [Figure 1: The different levels of autoregressive properties. Lines with arrows indicate dependencies. We mark the longest dependency path with bold red lines. The length of the longest dependency path decreases as we relax the autoregressive property. An extreme case is non-autoregressive, where there is no dependency at all.]",
"cite_spans": [
{
"start": 692,
"end": 709,
"text": "(Wu et al., 2016)",
"ref_id": "BIBREF24"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "The sequential property of RNNs leads to their wide application in language processing. However, this property also hinders their parallelizability, so RNNs are slow to execute on modern hardware optimized for parallel execution. As a result, a number of more parallelizable sequence models were proposed, such as ConvS2S (Gehring et al., 2017) and the Transformer (Vaswani et al., 2017). These models avoid dependencies between different positions in each layer and thus can be trained much faster than RNN-based models. At inference time, however, these models are still slow because of the autoregressive property.",
"cite_spans": [
{
"start": 317,
"end": 339,
"text": "(Gehring et al., 2017)",
"ref_id": "BIBREF6"
},
{
"start": 360,
"end": 382,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "A recent work (Gu et al., 2017) proposed a non-autoregressive NMT model that generates all target-side words in parallel. While the parallelizability is greatly improved, the translation quality drops considerably. In this paper, we propose the semi-autoregressive Transformer (SAT) for faster sequence generation. Unlike Gu et al. (2017), the SAT is semi-autoregressive, which means it keeps the autoregressive property globally but relaxes it locally. As a result, the SAT can produce multiple successive words in parallel at each time step. Figure 1 gives an illustration of the different levels of autoregressive properties.",
"cite_spans": [
{
"start": 14,
"end": 31,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 327,
"end": 343,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 552,
"end": 560,
"text": "Figure 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Experiments conducted on English-German and Chinese-English translation show that compared with non-autoregressive methods, the SAT achieves a better balance between translation quality and decoding speed. On WMT'14 English-German translation, the proposed SAT is 5.58\u00d7 faster than the Transformer while maintaining 88% of translation quality. Besides, when producing two words at each time step, the SAT is almost lossless.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "It is worth noting that although we apply the SAT to machine translation, it is not designed specifically for translation, unlike Gu et al. (2017) and Lee et al. (2018). The SAT can also be applied to any other sequence generation task, such as summary generation and image caption generation.",
"cite_spans": [
{
"start": 125,
"end": 141,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 144,
"end": 161,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Introduction",
"sec_num": "1"
},
{
"text": "Almost all state-of-the-art NMT models are autoregressive (Wu et al., 2016; Gehring et al., 2017; Vaswani et al., 2017), meaning that the model generates words one by one, which is not friendly to modern hardware optimized for parallel execution. A recent work (Gu et al., 2017) attempts to accelerate generation by introducing a non-autoregressive model. Based on the Transformer (Vaswani et al., 2017), they made many modifications. The most significant one is that they avoid feeding the previously generated target words to the decoder, and instead feed the source words to predict the next target word. They also introduced a set of latent variables to model the fertilities of source words to tackle the multimodality problem in translation. Lee et al. (2018) proposed another non-autoregressive sequence model based on iterative refinement. The model can be viewed as both a latent variable model and a conditional denoising autoencoder. They also proposed a learning algorithm that is a hybrid of lower-bound maximization and reconstruction error minimization.",
"cite_spans": [
{
"start": 58,
"end": 74,
"text": "Wu et al., 2016;",
"ref_id": "BIBREF24"
},
{
"start": 75,
"end": 96,
"text": "Gehring et al., 2017;",
"ref_id": "BIBREF6"
},
{
"start": 97,
"end": 118,
"text": "Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 258,
"end": 275,
"text": "(Gu et al., 2017)",
"ref_id": "BIBREF7"
},
{
"start": 378,
"end": 400,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 762,
"end": 779,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "The work most relevant to our proposed semi-autoregressive model is (Kaiser et al., 2018). They first autoencode the target sequence into a shorter sequence of discrete latent variables, which at inference time is generated autoregressively, and finally decode the output sequence from this shorter latent sequence in parallel. What we have in common with their idea is that we have not entirely abandoned autoregressive decoding, but rather shortened the autoregressive path.",
"cite_spans": [
{
"start": 62,
"end": 83,
"text": "(Kaiser et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "A related study on realistic speech synthesis is the parallel WaveNet (Oord et al., 2017). The paper introduced probability density distillation, a new method for training a parallel feed-forward network from a trained WaveNet (Van Den Oord et al., 2016) with no significant difference in quality.",
"cite_spans": [
{
"start": 227,
"end": 254,
"text": "(Van Den Oord et al., 2016)",
"ref_id": "BIBREF22"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "There is also some work that shares a somewhat similar idea with ours: character-level NMT (Chung et al., 2016; Lee et al., 2016) and chunk-based NMT (Ishiwatari et al., 2017). Unlike the SAT, these models are not able to produce multiple tokens (characters or words) at each time step. Oda et al. (2017) proposed a bit-level decoder, where a word is represented by a binary code and each bit of the code can be predicted in parallel.",
"cite_spans": [
{
"start": 90,
"end": 110,
"text": "(Chung et al., 2016;",
"ref_id": "BIBREF5"
},
{
"start": 111,
"end": 128,
"text": "Lee et al., 2016)",
"ref_id": "BIBREF14"
},
{
"start": 148,
"end": 172,
"text": "Ishiwatari et al., 2017)",
"ref_id": "BIBREF10"
},
{
"start": 282,
"end": 299,
"text": "Oda et al. (2017)",
"ref_id": "BIBREF16"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Related Work",
"sec_num": "2"
},
{
"text": "Since our proposed model is built upon the Transformer (Vaswani et al., 2017) , we will briefly introduce the Transformer. The Transformer uses an encoder-decoder architecture. We describe the encoder and decoder below.",
"cite_spans": [
{
"start": 55,
"end": 77,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Transformer",
"sec_num": "3"
},
{
"text": "From the source tokens, learned embeddings of dimension d_model are generated, which are then modified by an additive positional encoding. The positional encoding is necessary since the network does not otherwise leverage the order of the sequence through recurrence or convolution. The authors use an additive encoding defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "PE(pos, 2i) = sin(pos / 10000^{2i/d_model}), PE(pos, 2i+1) = cos(pos / 10000^{2i/d_model})",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
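The sinusoidal encoding above can be sketched in NumPy as follows. This is an illustrative sketch, not the paper's code; the function name `positional_encoding` is our own. Even dimensions get the sine term and odd dimensions the cosine term, exactly as in the formula.

```python
import numpy as np

def positional_encoding(max_len, d_model):
    # Sinusoidal positional encoding: PE(pos, 2i)   = sin(pos / 10000^(2i/d_model))
    #                                 PE(pos, 2i+1) = cos(pos / 10000^(2i/d_model))
    pos = np.arange(max_len)[:, None]          # (max_len, 1)
    i = np.arange(d_model // 2)[None, :]       # (1, d_model/2)
    angles = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)               # even dimensions
    pe[:, 1::2] = np.cos(angles)               # odd dimensions
    return pe
```

The resulting matrix is simply added to the token embeddings before the first encoder block.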
{
"text": "where pos is the position of a word in the sentence and i is the dimension. The authors chose this function because they hypothesized it would allow the model to easily learn to attend by relative positions. The encoded word embeddings are then used as input to the encoder, which consists of N blocks, each containing two layers: (1) a multi-head attention layer, and (2) a position-wise feed-forward layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "Multi-head attention builds upon scaled dot-product attention, which operates on a query Q, key K and value V:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "Attention(Q, K, V) = softmax(QK^T / \u221ad_k) V",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
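Scaled dot-product attention as defined above can be written in a few lines of NumPy. This is an illustrative sketch under our own naming (`attention`), not the paper's code, and it omits masking and batching for clarity.

```python
import numpy as np

def attention(Q, K, V):
    # softmax(Q K^T / sqrt(d_k)) V for 2-D Q, K, V.
    d_k = K.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                        # (n_q, n_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)         # softmax over keys
    return weights @ V
```

When the scores are strongly peaked (e.g. each query matches exactly one key), the output approaches a hard lookup into V.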
{
"text": "where d_k is the dimension of the key. The authors scale the dot product by 1/\u221ad_k to prevent the inputs to the softmax function from growing too large in magnitude. Multi-head attention computes h different queries, keys and values with h linear projections, computes scaled dot-product attention for each query, key and value, concatenates the results, and projects the concatenation with another linear projection:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "H_i = Attention(Q W_i^Q, K W_i^K, V W_i^V), MultiHead(Q, K, V) = Concat(H_1, ..., H_h) W^O, in which W_i^Q, W_i^K \u2208 R^{d_model \u00d7 d_k} and W_i^V \u2208 R^{d_model \u00d7 d_v}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "The attention mechanism in the encoder performs attention over itself (Q = K = V ), so it is also called self-attention.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "The second component in each encoder block is a position-wise feed-forward layer defined as:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "FFN(x) = max(0, xW_1 + b_1) W_2 + b_2, where W_1 \u2208 R^{d_model \u00d7 d_ff}, W_2 \u2208 R^{d_ff \u00d7 d_model}, b_1 \u2208 R^{d_ff}, b_2 \u2208 R^{d_model}.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "For more stable and faster convergence, a residual connection (He et al., 2016) is applied around each layer, followed by layer normalization (Ba et al., 2016). For regularization, dropout (Srivastava et al., 2014) is applied before residual connections.",
"cite_spans": [
{
"start": 60,
"end": 77,
"text": "(He et al., 2016)",
"ref_id": "BIBREF8"
},
{
"start": 136,
"end": 153,
"text": "(Ba et al., 2016)",
"ref_id": null
},
{
"start": 184,
"end": 209,
"text": "(Srivastava et al., 2014)",
"ref_id": "BIBREF20"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "[Figure 2 diagram: encoder-decoder architecture showing input/output embeddings, positional encodings, N stacked blocks of multi-head (self-)attention and feed-forward layers with add & norm, and a final linear + softmax layer producing output probabilities.]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "Figure 2: The architecture of the Transformer, and also of the SAT, where the red dashed boxes point out the parts in which the two models differ.",
"cite_spans": [],
"ref_spans": [
{
"start": 16,
"end": 24,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "The Encoder",
"sec_num": "3.1"
},
{
"text": "The decoder is similar to the encoder and is also composed of N blocks. In addition to the two layers in each encoder block, the decoder inserts a third layer, which performs multi-head attention over the output of the encoder. It is worth noting that, unlike in the encoder, the self-attention layer in the decoder must be masked with a causal mask, which is a lower triangular matrix, to ensure that the prediction for position i can depend only on the known outputs at positions less than i during training.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The Decoder",
"sec_num": "3.2"
},
{
"text": "Standard NMT models usually factorize the joint probability of a word sequence y 1 . . . y n according to the word-level chain rule",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "p(y_1 ... y_n | x) = \u220f_{t=1}^{n} p(y_t | y_1 ... y_{t\u22121}, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "so that decoding each word depends on all previous decoding results, which hinders parallelizability. In the SAT, we extend the standard word-level chain rule to a group-level chain rule.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "We first divide the word sequence y 1 . . . y n into consecutive groups",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "G_1, G_2, ..., G_{[(n\u22121)/K]+1} = y_1 ... y_K, y_{K+1} ... y_{2K}, ..., y_{[(n\u22121)/K] \u00d7 K + 1} ... y_n",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "where [\u2022] denotes the floor operation and K is the group size, which also indicates the degree of parallelizability: the larger K is, the higher the parallelizability. Except for the last group, all groups must contain exactly K words. Then comes the group-level chain rule:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "p(y_1 ... y_n | x) = \u220f_{t=1}^{[(n\u22121)/K]+1} p(G_t | G_1 ... G_{t\u22121}, x)",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
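The grouping above can be sketched in plain Python. This is an illustrative sketch only, not the paper's code; the function name `split_into_groups` is our own. It produces the [(n\u22121)/K]+1 consecutive groups of the group-level chain rule, with only the last group possibly shorter than K.

```python
def split_into_groups(y, K):
    # Divide y_1 ... y_n into consecutive groups of K words each;
    # only the last group may contain fewer than K words.
    return [y[i:i + K] for i in range(0, len(y), K)]
```

For n = 7 and K = 3 this yields (7 \u2212 1) // 3 + 1 = 3 groups, matching the floor-based count in the text.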
{
"text": "This group-level chain rule avoids dependencies between consecutive words as long as they are in the same group. With the group-level chain rule, the model no longer produces words one by one as the Transformer does, but rather group by group. In the next subsections, we will show how to implement the model in detail.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Group-Level Chain Rule",
"sec_num": "4.1"
},
{
"text": "In autoregressive models, to predict y_t, the model is fed with the previous word y_{t\u22121}. We refer to this as short-distance prediction. In the SAT, however, we feed y_{t\u2212K} to predict y_t, which we refer to as long-distance prediction. At the beginning of decoding, we feed the model with K special symbols <s> to predict y_1 ... y_K in parallel. Then y_1 ... y_K are fed to the model to predict y_{K+1} ... y_{2K} in parallel. This process continues until a terminator </s> is generated. Figure 3 gives illustrations of both short- and long-distance prediction.",
"cite_spans": [],
"ref_spans": [
{
"start": 498,
"end": 506,
"text": "Figure 3",
"ref_id": "FIGREF1"
}
],
"eq_spans": [],
"section": "Long-Distance Prediction",
"sec_num": "4.2"
},
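The group-by-group decoding loop just described can be sketched as follows. This is a minimal greedy-search sketch under our own assumptions, not the paper's implementation: `predict_group` is a hypothetical callable standing in for the SAT decoder, mapping the tokens generated so far (and the previous group) to the next K tokens in parallel.

```python
def sat_greedy_decode(predict_group, K, bos="<s>", eos="</s>", max_len=100):
    # Greedy SAT decoding sketch: feed the previously produced group and
    # obtain the next K words in one parallel step.
    out = []
    prev = [bos] * K                      # K start symbols predict y_1 ... y_K
    while len(out) < max_len:
        group = predict_group(out, prev)  # K words produced at this time step
        for w in group:
            out.append(w)
            if w == eos:                  # stop once the terminator appears
                return out[:-1]
        prev = group
    return out
```

With K = 1 this loop degenerates to ordinary word-by-word greedy decoding.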
{
"text": "In the Transformer decoder, the causal mask is a lower triangular matrix, which strictly prevents earlier decoding steps from peeping at information from later steps. We denote it as the strict causal mask. In the SAT decoder, however, the strict causal mask is not a good choice. As described in the previous subsection, in long-distance prediction the model predicts y_{K+1} by feeding it y_1. With the strict causal mask, the model can only access y_1 when predicting y_{K+1}, which is not reasonable since y_1 ... y_K have already been produced. It is better to allow the model to access y_1 ... y_K rather than only y_1 when predicting y_{K+1}. Therefore, we use a coarse-grained lower triangular matrix as the causal mask, which allows peeping at later information within the same group. We refer to it as the relaxed causal mask. Given the target length n and the group size K, the relaxed causal mask M \u2208 R^{n\u00d7n} and its elements are defined below:",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxed Causal Mask",
"sec_num": "4.3"
},
{
"text": "M[i][j] = 1 if j < ([(i \u2212 1)/K] + 1) \u00d7 K, and 0 otherwise",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Relaxed Causal Mask",
"sec_num": "4.3"
},
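Both masks can be built in a few lines of NumPy. This is an illustrative sketch, not the paper's code: we use 0-indexed positions (so the condition becomes j < (i // K + 1) \u00d7 K), and the function names are our own. The relaxed mask allows each position to attend to every position in its own and all earlier groups; with K = 1 it reduces to the strict lower triangular mask.

```python
import numpy as np

def strict_causal_mask(n):
    # Standard lower triangular mask used by the Transformer decoder.
    return np.tril(np.ones((n, n), dtype=int))

def relaxed_causal_mask(n, K):
    # 0-indexed variant of the paper's definition:
    # M[i][j] = 1 iff j < (i // K + 1) * K, i.e. j lies in the same
    # group as i or an earlier one.
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return (j < (i // K + 1) * K).astype(int)
```

For n = 6 and K = 2 this reproduces the relaxed mask shown in Figure 4.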
{
"text": "For a more intuitive understanding, Figure 4 gives a comparison between strict and relaxed causal mask.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 44,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "Relaxed Causal Mask",
"sec_num": "4.3"
},
{
"text": "Using the group-level chain rule instead of the word-level chain rule, long-distance prediction instead of short-distance prediction, and relaxed causal [Figure 4: Strict causal mask (left) and relaxed causal mask (right) when the target length n = 6 and the group size K = 2. We mark their differences in bold.]",
"cite_spans": [],
"ref_spans": [
{
"start": 144,
"end": 152,
"text": "Figure 4",
"ref_id": null
}
],
"eq_spans": [],
"section": "The SAT",
"sec_num": "4.4"
},
{
"text": "Strict: [[1,0,0,0,0,0], [1,1,0,0,0,0], [1,1,1,0,0,0], [1,1,1,1,0,0], [1,1,1,1,1,0], [1,1,1,1,1,1]]; Relaxed: [[1,1,0,0,0,0], [1,1,0,0,0,0], [1,1,1,1,0,0], [1,1,1,1,0,0], [1,1,1,1,1,1], [1,1,1,1,1,1]]",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "The SAT",
"sec_num": "4.4"
},
{
"text": "Table 1 (columns: Model, Complexity, Acceleration): Theoretical complexity and acceleration of the SAT. a denotes the time consumed by the decoder network (calculating a distribution over the target vocabulary) at each time step, and b denotes the time consumed by search (searching for top scores, expanding nodes and pruning). In practice, a is usually much larger than b since the network is deep.",
"cite_spans": [],
"ref_spans": [
{
"start": 36,
"end": 43,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "Transformer: N(a + b), 1; SAT (beam search): (N/K)a + Nb, K(a+b)/(a+Kb); SAT (greedy search): (N/K)(a + b), K",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "mask instead of the strict causal mask, we successfully extended the Transformer to the SAT. The Transformer can be viewed as a special case of the SAT where the group size K = 1. The non-autoregressive Transformer (NAT) described in Gu et al. (2017) can also be viewed as a special case of the SAT, where the group size K is not less than the maximum target length. Table 1 gives the theoretical complexity and acceleration of the model. We list two search strategies separately: beam search and greedy search. Beam search is the most prevailing search strategy. However, it requires the decoder states to be updated once every word is generated, which hinders decoding parallelizability. When decoding with greedy search, there is no such concern, so the parallelizability of the SAT can be maximized.",
"cite_spans": [
{
"start": 229,
"end": 245,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [
{
"start": 357,
"end": 364,
"text": "Table 1",
"ref_id": null
}
],
"eq_spans": [],
"section": "Model",
"sec_num": null
},
{
"text": "We evaluate the proposed SAT on English-German and Chinese-English translation tasks.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experiments",
"sec_num": "5"
},
{
"text": "Datasets For English-German translation, we choose the corpora provided by WMT 2014 (Bojar et al., 2014). We use the newstest2013 dataset for development and the newstest2014 dataset for test. For Chinese-English translation, the corpora we use are extracted from LDC 1 . We chose the NIST02 dataset for development, and the NIST03, NIST04 and NIST05 datasets for test. For English and German, we tokenized the sentences and segmented them into subword symbols using byte-pair encoding (BPE) (Sennrich et al., 2015) to restrict the vocabulary size. As for Chinese, we segmented sentences into characters. For English-German translation, we use a shared source and target vocabulary. Table 2 summarizes the two corpora.",
"cite_spans": [
{
"start": 84,
"end": 104,
"text": "(Bojar et al., 2014)",
"ref_id": "BIBREF3"
},
{
"start": 479,
"end": 502,
"text": "(Sennrich et al., 2015)",
"ref_id": "BIBREF19"
}
],
"ref_spans": [
{
"start": 670,
"end": 677,
"text": "Table 2",
"ref_id": "TABREF1"
}
],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Baseline We use the base Transformer model described in Vaswani et al. (2017) as the baseline, where d model = 512 and N = 6. In addition, for comparison, we also prepared a lighter Transformer model, in which two encoder/decoder blocks are used (N = 2), and other hyper-parameters remain the same.",
"cite_spans": [
{
"start": 56,
"end": 77,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "Hyperparameters Unless otherwise specified, all hyperparameters are inherited from the base Transformer model. We try three different settings of the group size K: K = 2, K = 4, and K = 6. For English-German translation, we share the same weight matrix between the source and target embedding layers and the pre-softmax linear layer. For Chinese-English translation, we only share weights of the target embedding layer and the pre-softmax linear layer.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Experimental Settings",
"sec_num": "5.1"
},
{
"text": "We use two search strategies: beam search and greedy search. As mentioned in Section 4.4, these two strategies lead to different parallelizability. When beam size is set to 1, greedy search is used, otherwise, beam search is used.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Strategies",
"sec_num": null
},
{
"text": "Knowledge Distillation Knowledge distillation (Hinton et al., 2015; Kim and Rush, 2016) describes a class of methods for training a smaller student network to perform better by learning from a larger teacher network. For NMT, Kim and Rush (2016) proposed a sequence-level knowledge distillation method. In this work, we apply this method to train the SAT using a pre-trained [Note on comparisons with Gu et al. (2017); Kaiser et al. (2018); Lee et al. (2018): Gu et al. (2017) and Lee et al. (2018) used PyTorch as their platform, while we and Kaiser et al. (2018) used TensorFlow. Even on the same platform, implementation and hardware may not be exactly the same. Therefore, it is not fair to directly compare BLEU and latency; a fairer way is to compare performance degradation and speedup, each calculated against the corresponding baseline.]",
"cite_spans": [
{
"start": 46,
"end": 67,
"text": "(Hinton et al., 2015;",
"ref_id": "BIBREF9"
},
{
"start": 68,
"end": 87,
"text": "Kim and Rush, 2016)",
"ref_id": "BIBREF12"
},
{
"start": 226,
"end": 245,
"text": "Kim and Rush (2016)",
"ref_id": "BIBREF12"
},
{
"start": 375,
"end": 391,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 394,
"end": 414,
"text": "Kaiser et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 417,
"end": 434,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 447,
"end": 463,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 466,
"end": 483,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 527,
"end": 547,
"text": "Kaiser et al. (2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Search Strategies",
"sec_num": null
},
{
"text": "autoregressive Transformer network. This method consists of three steps: (1) train an autoregressive Transformer network (the teacher), (2) run beam search over the training set with this model, and (3) train the SAT (the student) on this newly created corpus.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Search Strategies",
"sec_num": null
},
{
"text": "Since the SAT and the Transformer have only slight differences in their architecture (see Figure 2 ), in order to accelerate convergence, we use a pre-trained Transformer model to initialize some parameters in the SAT. These parameters include all parameters in the encoder, source and target word embeddings, and pre-softmax weights. Other parameters are initialized randomly. In addition to accelerating convergence, we find this method also slightly improves the translation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 90,
"end": 98,
"text": "Figure 2",
"ref_id": null
}
],
"eq_spans": [],
"section": "Initialization",
"sec_num": null
},
{
"text": "Training As in Vaswani et al. (2017), we train the SAT by minimizing cross-entropy with label smoothing. The optimizer we use is Adam (Kingma and Ba, 2015) with \u03b2_1 = 0.9, \u03b2_2 = 0.98 and \u03b5 = 10^{\u22129}. We change the learning rate during training using the learning rate function described in Vaswani et al. (2017). All models are trained for 10K steps on 8 NVIDIA TITAN Xp GPUs, with each minibatch consisting of about 30k tokens. For evaluation, we average the last five checkpoints, saved at an interval of 1000 training steps.",
"cite_spans": [
{
"start": 17,
"end": 38,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF23"
},
{
"start": 289,
"end": 310,
"text": "Vaswani et al. (2017)",
"ref_id": "BIBREF23"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Initialization",
"sec_num": null
},
{
"text": "We evaluate the translation quality of the model using BLEU score (Papineni et al., 2002) .",
"cite_spans": [
{
"start": 66,
"end": 89,
"text": "(Papineni et al., 2002)",
"ref_id": "BIBREF18"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "Implementation We implement the proposed SAT with TensorFlow (Abadi et al., 2016). The code and resources needed for reproducing the results are released at https://github.com/chqiwang/sa-nmt. Table 3 summarizes the results of English-German translation. According to the results, the translation quality of the SAT gradually decreases as K increases, which is consistent with intuition. When K = 2, the SAT decodes 1.51\u00d7 faster than the Transformer and is almost lossless in translation quality (it only drops 0.21 BLEU points). With K = 6, the SAT achieves a 2.98\u00d7 speedup while the performance degradation is only 8%.",
"cite_spans": [
{
"start": 61,
"end": 81,
"text": "(Abadi et al., 2016)",
"ref_id": "BIBREF0"
}
],
"ref_spans": [
{
"start": 195,
"end": 202,
"text": "Table 3",
"ref_id": "TABREF3"
}
],
"eq_spans": [],
"section": "Evaluation Metrics",
"sec_num": null
},
{
"text": "When using greedy search, the acceleration becomes much more significant. When K = 6, the decoding speed of the SAT reaches about 5.58\u00d7 that of the Transformer while maintaining 88% of the translation quality. [Table 4 (decoding latency per sentence at batch sizes b = 1, 16, 32, 64): Transformer: 346ms, 58ms, 53ms, 56ms; SAT, K=2: 229ms, 38ms, 32ms, 32ms; SAT, K=4: 149ms, 24ms, 21ms, 20ms; SAT, K=6: 116ms, 20ms, 17ms, 16ms.] Compared with Gu et al. (2017); Kaiser et al. (2018); Lee et al. (2018), the SAT achieves a better balance between translation quality and decoding speed. Compared to the lighter Transformer (N = 2), with K = 4, the SAT achieves a higher speedup with significantly better translation quality.",
"cite_spans": [
{
"start": 60,
"end": 76,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 79,
"end": 99,
"text": "Kaiser et al. (2018)",
"ref_id": "BIBREF11"
},
{
"start": 102,
"end": 119,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF15"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Results on English-German",
"sec_num": "5.2"
},
{
"text": "In a real production environment, it is often not to decode sentences one by one, but batch by batch. To investigate whether the SAT can accelerate decoding when decoding in batches, we test the decoding latency under different batch size settings. As shown in Table 4 , the SAT significantly accelerates decoding even with a large batch size.",
"cite_spans": [],
"ref_spans": [
{
"start": 261,
"end": 268,
"text": "Table 4",
"ref_id": "TABREF4"
}
],
"eq_spans": [],
"section": "Results on English-German",
"sec_num": "5.2"
},
{
"text": "It is also good to know if the SAT can still accelerate decoding on CPU device that does not support parallel execution as well as GPU. Results in Table 5 show that even on CPU device, the SAT can still accelerate decoding significantly. Table 6 summaries results on Chinese-English translation. With K = 2, the SAT decodes 1.69\u00d7 while maintaining 97% of the translation quality. In an extreme setting where K = 6 and beam size = 1, the SAT can achieve 6.41\u00d7 speedup while maintaining 83% of the translation quality.",
"cite_spans": [],
"ref_spans": [
{
"start": 147,
"end": 154,
"text": "Table 5",
"ref_id": "TABREF5"
},
{
"start": 238,
"end": 245,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Results on English-German",
"sec_num": "5.2"
},
{
"text": "Effects of Knowledge Distillation As shown in Figure 5 , sequence-level knowledge distillation is very effective for training the SAT. For larger K, the effect is more significant. This phenomenon is echoing with observations by Gu et al. (2017) ; Oord et al. 2017; Lee et al. (2018) . In addition, we tried word-level knowledge distillation (Kim and Rush, 2016) but only a slight improvement was observed.",
"cite_spans": [
{
"start": 229,
"end": 245,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
},
{
"start": 266,
"end": 283,
"text": "Lee et al. (2018)",
"ref_id": "BIBREF15"
},
{
"start": 342,
"end": 362,
"text": "(Kim and Rush, 2016)",
"ref_id": "BIBREF12"
}
],
"ref_spans": [
{
"start": 46,
"end": 54,
"text": "Figure 5",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "Position-Wise Cross-Entropy In Figure 6 , we plot position-wise cross-entropy for various models. To compare with the baseline model, the results in the figure are from models trained on the original corpora, i.e., without knowledge distillation. As shown in the figure, positionwise cross-entropy has an apparent periodicity with a period of K. For positions in the same group, the position-wise cross-entropy increase monotonously, which indicates that the longdistance dependencies are always more difficult to model than short ones. It suggests the key to further improve the SAT is to improve the ability of modeling long-distance dependencies.",
"cite_spans": [],
"ref_spans": [
{
"start": 31,
"end": 39,
"text": "Figure 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
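The position-wise analysis above can be sketched in a few lines. This is our illustrative reconstruction, not the authors' released code; it assumes the per-token cross-entropy values for each target sentence are already available, and averages them by position modulo K:

```python
import numpy as np

def positionwise_ce(token_losses, K):
    """Average cross-entropy grouped by position within a K-word group.

    `token_losses` is a list of per-sentence sequences holding the
    model's cross-entropy at each target position. Entry r of the
    result averages the loss over all positions p with p % K == r, so
    a curve that rises with r within each group reflects the periodic
    difficulty pattern described in the text.
    """
    sums = np.zeros(K)
    counts = np.zeros(K)
    for losses in token_losses:
        for p, loss in enumerate(losses):
            sums[p % K] += loss
            counts[p % K] += 1
    return sums / np.maximum(counts, 1)  # avoid division by zero
```

A curve that repeats with period K and increases within each group would match the periodicity described above.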
{
"text": "Case Study Table 7 lists three sample Chinese-English translations from the development set. As shown in the table, even when producing K = 6 words at each time step, the model can still gen- Table 6 : Results on Chinese-English translation. Latency is calculated on NIST02.",
"cite_spans": [],
"ref_spans": [
{
"start": 11,
"end": 18,
"text": "Table 7",
"ref_id": null
},
{
"start": 192,
"end": 199,
"text": "Table 6",
"ref_id": null
}
],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "Source Transformer the international football federation will severely punish the fraud on the football field SAT, k=2 fifa will severely punish the deception on the football field SAT, k=4 fifa a will severely punish the fraud on the football court SAT, k=6 fifa a will severely punish the fraud on the football football court Reference federation international football association to mete out severe punishment for fraud on the football field Source Transformer the largescale exhibition of campus culture will also be held during the meeting .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "SAT, k=2 the largescale cultural cultural exhibition on campus will also be held during the meeting .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "SAT, k=4 the campus campus exhibition will also be held during the meeting .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "SAT, k=6 a largescale campus culture exhibition will also be held on the sidelines of the meeting .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "Reference there will also be a large -scale campus culture show during the conference .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Analysis",
"sec_num": "5.4"
},
{
"text": "Transformer this is the second time mr koizumi has visited the yasukuni shrine since he came to power .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "SAT, k=2 this is the second time that mr koizumi has visited the yasukuni shrine since he took office .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "SAT, k=4 this is the second time that koizumi has visited the yasukuni shrine since he came into power .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "SAT, k=6 this is the second visit to the yasukuni shrine since mr koizumi came office power .",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "Reference this is the second time that junichiro koizumi has paid a visit to the yasukuni shrine since he became prime minister . Table 7 : Three sample Chinese-English translations by the SAT and the Transformer. We mark repeated words or phrases by red font and underline.",
"cite_spans": [],
"ref_spans": [
{
"start": 130,
"end": 137,
"text": "Table 7",
"ref_id": null
}
],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "erate fluent sentences. As reported by Gu et al. (2017) , instances of repeated words or phrases are most prevalent in their non-autoregressive model. In the SAT, this is also the case. This suggests that we may be able to improve the translation quality of the SAT by reducing the similarity of the output distribution of adjacent positions.",
"cite_spans": [
{
"start": 39,
"end": 55,
"text": "Gu et al. (2017)",
"ref_id": "BIBREF7"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Source",
"sec_num": null
},
{
"text": "In this work, we have introduced a novel model for faster sequence generation based on the Transformer (Vaswani et al., 2017) , which we refer to as the semi-autoregressive Transformer (SAT). Com-bining the original Transformer with group-level chain rule, long-distance prediction and relaxed causal mask, the SAT can produce multiple consecutive words at each time step, thus speedup decoding significantly. We conducted experiments on English-German and Chinese-English translation. Compared with previously proposed nonautoregressive models (Gu et al., 2017; Lee et al., 2018; Kaiser et al., 2018) , the SAT achieves a better balance between translation quality and decoding speed. On WMT'14 English-German translation, the SAT achieves 5.58\u00d7 speedup while maintaining 88% translation quality, significantly bet-ter than previous methods. When producing two words at each time step, the SAT is almost lossless (only 1% degeneration in BLEU score).",
"cite_spans": [
{
"start": 103,
"end": 125,
"text": "(Vaswani et al., 2017)",
"ref_id": "BIBREF23"
},
{
"start": 545,
"end": 562,
"text": "(Gu et al., 2017;",
"ref_id": "BIBREF7"
},
{
"start": 563,
"end": 580,
"text": "Lee et al., 2018;",
"ref_id": "BIBREF15"
},
{
"start": 581,
"end": 601,
"text": "Kaiser et al., 2018)",
"ref_id": "BIBREF11"
}
],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
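The group-wise generation described above can be illustrated with a toy decoding loop. This is our sketch, not the authors' released implementation; `step_fn` is a hypothetical stand-in for the SAT decoder, which returns K tokens in parallel at each call:

```python
def sat_greedy_decode(step_fn, bos, eos, K, max_len):
    """Toy group-wise greedy decoding loop.

    `step_fn(prefix)` stands in for the SAT decoder: given the tokens
    generated so far, it returns the next K tokens in parallel. A
    standard autoregressive decoder corresponds to K = 1, so the number
    of decoder calls shrinks by roughly a factor of K.
    """
    out = [bos]
    while len(out) < max_len:
        for token in step_fn(out)[:K]:
            out.append(token)
            if token == eos:
                return out
    return out

# Dummy decoder that just counts upward; 5 plays the role of the
# end-of-sequence id. Three decoder calls produce five tokens.
result = sat_greedy_decode(lambda prefix: [len(prefix), len(prefix) + 1],
                           bos=0, eos=5, K=2, max_len=10)
```

With K = 2 the dummy decoder is called three times instead of five, which is the source of the speedups reported above.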
{
"text": "In the future, we plan to investigate better methods for training the SAT to further shrink the performance gap between the SAT and the Transformer. Specifically, we believe that the following two directions are worth study. First, use object function beyond maximum likelihood to improve the modeling of long-distance dependencies. Second, explore new method for knowledge distillation. We also plan to extend the SAT to allow the use of different group sizes K at different positions, instead of using a fixed value.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Conclusion",
"sec_num": "6"
},
{
"text": "The Semi-Autoregressive TransformerWe propose a novel NMT model-the Semi-Autoregressive Transformer (SAT)-that can produce multiple successive words in parallel. As shown inFigure 2, the architecture of the SAT is almost the same as the Transformer, except some modifications in the decoder.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
},
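The relaxed causal mask mentioned in the conclusion can be made concrete with a short sketch. This is our illustration under the stated definition (each position may attend to every position in its own group of K successive words and in all earlier groups), not the authors' implementation:

```python
import numpy as np

def relaxed_causal_mask(T, K):
    """Boolean attention mask for a semi-autoregressive decoder.

    Position i may attend to all positions j in its own group of K
    successive words and in earlier groups, i.e. j < (i // K + 1) * K.
    With K = 1 this reduces to the standard causal mask (lower
    triangle including the diagonal).
    """
    i = np.arange(T)[:, None]      # query positions
    j = np.arange(T)[None, :]      # key positions
    return j < (i // K + 1) * K    # True where attention is allowed

# K = 1 recovers the ordinary autoregressive mask.
assert np.array_equal(relaxed_causal_mask(4, 1),
                      np.tril(np.ones((4, 4), dtype=bool)))
```

With K > 1 the mask lets the K positions of a group attend to one another, which is what allows them to be predicted in parallel at one time step.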
{
"text": "The corpora include LDC2002E18, LDC2003E14, LDC2004T08 and LDC2005T0.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "",
"sec_num": null
}
],
"back_matter": [
{
"text": "We would like to thank the anonymous reviewers for their valuable comments. We also thank Wenfu Wang, Hao Wang for helpful discussion and Linhao Dong, Jinghao Niu for their help in paper writting.",
"cite_spans": [],
"ref_spans": [],
"eq_spans": [],
"section": "Acknowledgments",
"sec_num": null
}
],
"bib_entries": {
"BIBREF0": {
"ref_id": "b0",
"title": "Tensorflow: A system for large-scale machine learning",
"authors": [
{
"first": "Mart\u00edn",
"middle": [],
"last": "Abadi",
"suffix": ""
},
{
"first": "Paul",
"middle": [],
"last": "Barham",
"suffix": ""
},
{
"first": "Jianmin",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Andy",
"middle": [],
"last": "Davis",
"suffix": ""
},
{
"first": "Jeffrey",
"middle": [],
"last": "Dean",
"suffix": ""
},
{
"first": "Matthieu",
"middle": [],
"last": "Devin",
"suffix": ""
},
{
"first": "Sanjay",
"middle": [],
"last": "Ghemawat",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Irving",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Isard",
"suffix": ""
}
],
"year": 2016,
"venue": "OSDI",
"volume": "16",
"issue": "",
"pages": "265--283",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Mart\u00edn Abadi, Paul Barham, Jianmin Chen, Zhifeng Chen, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Geoffrey Irving, Michael Isard, et al. 2016. Tensorflow: A system for large-scale machine learning. In OSDI, volume 16, pages 265- 283.",
"links": null
},
"BIBREF2": {
"ref_id": "b2",
"title": "Neural machine translation by jointly learning to align and translate",
"authors": [
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1409.0473"
]
},
"num": null,
"urls": [],
"raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2014. Neural machine translation by jointly learning to align and translate. arXiv preprint arXiv:1409.0473.",
"links": null
},
"BIBREF3": {
"ref_id": "b3",
"title": "Findings of the 2014 workshop on statistical machine translation",
"authors": [
{
"first": "Ondrej",
"middle": [],
"last": "Bojar",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Buck",
"suffix": ""
},
{
"first": "Christian",
"middle": [],
"last": "Federmann",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Philipp",
"middle": [],
"last": "Koehn",
"suffix": ""
},
{
"first": "Johannes",
"middle": [],
"last": "Leveling",
"suffix": ""
},
{
"first": "Christof",
"middle": [],
"last": "Monz",
"suffix": ""
},
{
"first": "Pavel",
"middle": [],
"last": "Pecina",
"suffix": ""
},
{
"first": "Matt",
"middle": [],
"last": "Post",
"suffix": ""
},
{
"first": "Herve",
"middle": [],
"last": "Saint-Amand",
"suffix": ""
}
],
"year": 2014,
"venue": "Proceedings of the ninth workshop on statistical machine translation",
"volume": "",
"issue": "",
"pages": "12--58",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ondrej Bojar, Christian Buck, Christian Federmann, Barry Haddow, Philipp Koehn, Johannes Leveling, Christof Monz, Pavel Pecina, Matt Post, Herve Saint-Amand, et al. 2014. Findings of the 2014 workshop on statistical machine translation. In Pro- ceedings of the ninth workshop on statistical ma- chine translation, pages 12-58.",
"links": null
},
"BIBREF4": {
"ref_id": "b4",
"title": "Learning phrase representations using rnn encoder-decoder for statistical machine translation",
"authors": [
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Bart",
"middle": [],
"last": "Van Merri\u00ebnboer",
"suffix": ""
},
{
"first": "Caglar",
"middle": [],
"last": "Gulcehre",
"suffix": ""
},
{
"first": "Dzmitry",
"middle": [],
"last": "Bahdanau",
"suffix": ""
},
{
"first": "Fethi",
"middle": [],
"last": "Bougares",
"suffix": ""
},
{
"first": "Holger",
"middle": [],
"last": "Schwenk",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2014,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1406.1078"
]
},
"num": null,
"urls": [],
"raw_text": "Kyunghyun Cho, Bart Van Merri\u00ebnboer, Caglar Gul- cehre, Dzmitry Bahdanau, Fethi Bougares, Holger Schwenk, and Yoshua Bengio. 2014. Learning phrase representations using rnn encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.",
"links": null
},
"BIBREF5": {
"ref_id": "b5",
"title": "A character-level decoder without explicit segmentation for neural machine translation",
"authors": [
{
"first": "Junyoung",
"middle": [],
"last": "Chung",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Yoshua",
"middle": [],
"last": "Bengio",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1603.06147"
]
},
"num": null,
"urls": [],
"raw_text": "Junyoung Chung, Kyunghyun Cho, and Yoshua Ben- gio. 2016. A character-level decoder without ex- plicit segmentation for neural machine translation. arXiv preprint arXiv:1603.06147.",
"links": null
},
"BIBREF6": {
"ref_id": "b6",
"title": "Convolutional sequence to sequence learning",
"authors": [
{
"first": "Jonas",
"middle": [],
"last": "Gehring",
"suffix": ""
},
{
"first": "Michael",
"middle": [],
"last": "Auli",
"suffix": ""
},
{
"first": "David",
"middle": [],
"last": "Grangier",
"suffix": ""
},
{
"first": "Denis",
"middle": [],
"last": "Yarats",
"suffix": ""
},
{
"first": "Yann N",
"middle": [],
"last": "Dauphin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, and Yann N Dauphin. 2017. Convolutional sequence to sequence learning.",
"links": null
},
"BIBREF7": {
"ref_id": "b7",
"title": "Nonautoregressive neural machine translation",
"authors": [
{
"first": "Jiatao",
"middle": [],
"last": "Gu",
"suffix": ""
},
{
"first": "James",
"middle": [],
"last": "Bradbury",
"suffix": ""
},
{
"first": "Caiming",
"middle": [],
"last": "Xiong",
"suffix": ""
},
{
"first": "Victor",
"middle": [
"O",
"K"
],
"last": "Li",
"suffix": ""
},
{
"first": "Richard",
"middle": [],
"last": "Socher",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.02281"
]
},
"num": null,
"urls": [],
"raw_text": "Jiatao Gu, James Bradbury, Caiming Xiong, Vic- tor OK Li, and Richard Socher. 2017. Non- autoregressive neural machine translation. arXiv preprint arXiv:1711.02281.",
"links": null
},
"BIBREF8": {
"ref_id": "b8",
"title": "Deep residual learning for image recognition",
"authors": [
{
"first": "Kaiming",
"middle": [],
"last": "He",
"suffix": ""
},
{
"first": "Xiangyu",
"middle": [],
"last": "Zhang",
"suffix": ""
},
{
"first": "Shaoqing",
"middle": [],
"last": "Ren",
"suffix": ""
},
{
"first": "Jian",
"middle": [],
"last": "Sun",
"suffix": ""
}
],
"year": 2016,
"venue": "Proceedings of the IEEE conference on computer vision and pattern recognition",
"volume": "",
"issue": "",
"pages": "770--778",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. 2016. Deep residual learning for image recog- nition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770- 778.",
"links": null
},
"BIBREF9": {
"ref_id": "b9",
"title": "Distilling the knowledge in a neural network",
"authors": [
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Jeff",
"middle": [],
"last": "Dean",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1503.02531"
]
},
"num": null,
"urls": [],
"raw_text": "Geoffrey Hinton, Oriol Vinyals, and Jeff Dean. 2015. Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531.",
"links": null
},
"BIBREF10": {
"ref_id": "b10",
"title": "Chunk-based decoder for neural machine translation",
"authors": [
{
"first": "Shonosuke",
"middle": [],
"last": "Ishiwatari",
"suffix": ""
},
{
"first": "Jingtao",
"middle": [],
"last": "Yao",
"suffix": ""
},
{
"first": "Shujie",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Mu",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Ming",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Naoki",
"middle": [],
"last": "Yoshinaga",
"suffix": ""
},
{
"first": "Masaru",
"middle": [],
"last": "Kitsuregawa",
"suffix": ""
},
{
"first": "Weijia",
"middle": [],
"last": "Jia",
"suffix": ""
}
],
"year": 2017,
"venue": "Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics",
"volume": "1",
"issue": "",
"pages": "1901--1912",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Shonosuke Ishiwatari, Jingtao Yao, Shujie Liu, Mu Li, Ming Zhou, Naoki Yoshinaga, Masaru Kitsuregawa, and Weijia Jia. 2017. Chunk-based decoder for neu- ral machine translation. In Proceedings of the 55th Annual Meeting of the Association for Computa- tional Linguistics (Volume 1: Long Papers), vol- ume 1, pages 1901-1912.",
"links": null
},
"BIBREF11": {
"ref_id": "b11",
"title": "Fast decoding in sequence models using discrete latent variables",
"authors": [
{
"first": "\u0141ukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Aurko",
"middle": [],
"last": "Roy",
"suffix": ""
},
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Pamar",
"suffix": ""
},
{
"first": "Samy",
"middle": [],
"last": "Bengio",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1803.03382"
]
},
"num": null,
"urls": [],
"raw_text": "\u0141ukasz Kaiser, Aurko Roy, Ashish Vaswani, Niki Pa- mar, Samy Bengio, Jakob Uszkoreit, and Noam Shazeer. 2018. Fast decoding in sequence mod- els using discrete latent variables. arXiv preprint arXiv:1803.03382.",
"links": null
},
"BIBREF12": {
"ref_id": "b12",
"title": "Sequencelevel knowledge distillation",
"authors": [
{
"first": "Yoon",
"middle": [],
"last": "Kim",
"suffix": ""
},
{
"first": "Alexander M",
"middle": [],
"last": "Rush",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1606.07947"
]
},
"num": null,
"urls": [],
"raw_text": "Yoon Kim and Alexander M Rush. 2016. Sequence- level knowledge distillation. arXiv preprint arXiv:1606.07947.",
"links": null
},
"BIBREF13": {
"ref_id": "b13",
"title": "Adam: A method for stochastic optimization",
"authors": [
{
"first": "Diederik",
"middle": [
"P"
],
"last": "Kingma",
"suffix": ""
},
{
"first": "Jimmy",
"middle": [
"Lei"
],
"last": "Ba",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Diederik P Kingma and Jimmy Lei Ba. 2015. Adam: A method for stochastic optimization. international conference on learning representations.",
"links": null
},
"BIBREF14": {
"ref_id": "b14",
"title": "Fully character-level neural machine translation without explicit segmentation",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
},
{
"first": "Thomas",
"middle": [],
"last": "Hofmann",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1610.03017"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Kyunghyun Cho, and Thomas Hofmann. 2016. Fully character-level neural machine trans- lation without explicit segmentation. arXiv preprint arXiv:1610.03017.",
"links": null
},
"BIBREF15": {
"ref_id": "b15",
"title": "Deterministic non-autoregressive neural sequence modeling by iterative refinement",
"authors": [
{
"first": "Jason",
"middle": [],
"last": "Lee",
"suffix": ""
},
{
"first": "Elman",
"middle": [],
"last": "Mansimov",
"suffix": ""
},
{
"first": "Kyunghyun",
"middle": [],
"last": "Cho",
"suffix": ""
}
],
"year": 2018,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1802.06901"
]
},
"num": null,
"urls": [],
"raw_text": "Jason Lee, Elman Mansimov, and Kyunghyun Cho. 2018. Deterministic non-autoregressive neural se- quence modeling by iterative refinement. arXiv preprint arXiv:1802.06901.",
"links": null
},
"BIBREF16": {
"ref_id": "b16",
"title": "Neural machine translation via binary code prediction",
"authors": [
{
"first": "Yusuke",
"middle": [],
"last": "Oda",
"suffix": ""
},
{
"first": "Philip",
"middle": [],
"last": "Arthur",
"suffix": ""
},
{
"first": "Graham",
"middle": [],
"last": "Neubig",
"suffix": ""
},
{
"first": "Koichiro",
"middle": [],
"last": "Yoshino",
"suffix": ""
},
{
"first": "Satoshi",
"middle": [],
"last": "Nakamura",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1704.06918"
]
},
"num": null,
"urls": [],
"raw_text": "Yusuke Oda, Philip Arthur, Graham Neubig, Koichiro Yoshino, and Satoshi Nakamura. 2017. Neural ma- chine translation via binary code prediction. arXiv preprint arXiv:1704.06918.",
"links": null
},
"BIBREF17": {
"ref_id": "b17",
"title": "Parallel wavenet: Fast high-fidelity speech synthesis",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Yazhe",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Igor",
"middle": [],
"last": "Babuschkin",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
},
{
"first": "George",
"middle": [],
"last": "Van Den Driessche",
"suffix": ""
},
{
"first": "Edward",
"middle": [],
"last": "Lockhart",
"suffix": ""
},
{
"first": "Luis",
"middle": [
"C"
],
"last": "Cobo",
"suffix": ""
},
{
"first": "Florian",
"middle": [],
"last": "Stimberg",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1711.10433"
]
},
"num": null,
"urls": [],
"raw_text": "Aaron van den Oord, Yazhe Li, Igor Babuschkin, Karen Simonyan, Oriol Vinyals, Koray Kavukcuoglu, George van den Driessche, Edward Lockhart, Luis C Cobo, Florian Stimberg, et al. 2017. Parallel wavenet: Fast high-fidelity speech synthesis. arXiv preprint arXiv:1711.10433.",
"links": null
},
"BIBREF18": {
"ref_id": "b18",
"title": "Bleu: a method for automatic evaluation of machine translation",
"authors": [
{
"first": "Kishore",
"middle": [],
"last": "Papineni",
"suffix": ""
},
{
"first": "Salim",
"middle": [],
"last": "Roukos",
"suffix": ""
},
{
"first": "Todd",
"middle": [],
"last": "Ward",
"suffix": ""
},
{
"first": "Wei-Jing",
"middle": [],
"last": "Zhu",
"suffix": ""
}
],
"year": 2002,
"venue": "Proceedings of the 40th annual meeting on association for computational linguistics",
"volume": "",
"issue": "",
"pages": "311--318",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Kishore Papineni, Salim Roukos, Todd Ward, and Wei- Jing Zhu. 2002. Bleu: a method for automatic eval- uation of machine translation. In Proceedings of the 40th annual meeting on association for compu- tational linguistics, pages 311-318. Association for Computational Linguistics.",
"links": null
},
"BIBREF19": {
"ref_id": "b19",
"title": "Neural machine translation of rare words with subword units",
"authors": [
{
"first": "Rico",
"middle": [],
"last": "Sennrich",
"suffix": ""
},
{
"first": "Barry",
"middle": [],
"last": "Haddow",
"suffix": ""
},
{
"first": "Alexandra",
"middle": [],
"last": "Birch",
"suffix": ""
}
],
"year": 2015,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1508.07909"
]
},
"num": null,
"urls": [],
"raw_text": "Rico Sennrich, Barry Haddow, and Alexandra Birch. 2015. Neural machine translation of rare words with subword units. arXiv preprint arXiv:1508.07909.",
"links": null
},
"BIBREF20": {
"ref_id": "b20",
"title": "Dropout: A simple way to prevent neural networks from overfitting",
"authors": [
{
"first": "Nitish",
"middle": [],
"last": "Srivastava",
"suffix": ""
},
{
"first": "Geoffrey",
"middle": [],
"last": "Hinton",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Krizhevsky",
"suffix": ""
},
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Ruslan",
"middle": [],
"last": "Salakhutdinov",
"suffix": ""
}
],
"year": 2014,
"venue": "The Journal of Machine Learning Research",
"volume": "15",
"issue": "1",
"pages": "1929--1958",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. The Journal of Machine Learning Research, 15(1):1929-1958.",
"links": null
},
"BIBREF21": {
"ref_id": "b21",
"title": "Sequence to sequence learning with neural networks",
"authors": [
{
"first": "Ilya",
"middle": [],
"last": "Sutskever",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Quoc V",
"middle": [],
"last": "Le",
"suffix": ""
}
],
"year": 2014,
"venue": "Advances in neural information processing systems",
"volume": "",
"issue": "",
"pages": "3104--3112",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Ilya Sutskever, Oriol Vinyals, and Quoc V Le. 2014. Sequence to sequence learning with neural net- works. In Advances in neural information process- ing systems, pages 3104-3112.",
"links": null
},
"BIBREF22": {
"ref_id": "b22",
"title": "Wavenet: A generative model for raw audio",
"authors": [
{
"first": "Aaron",
"middle": [],
"last": "Van Den Oord",
"suffix": ""
},
{
"first": "Sander",
"middle": [],
"last": "Dieleman",
"suffix": ""
},
{
"first": "Heiga",
"middle": [],
"last": "Zen",
"suffix": ""
},
{
"first": "Karen",
"middle": [],
"last": "Simonyan",
"suffix": ""
},
{
"first": "Oriol",
"middle": [],
"last": "Vinyals",
"suffix": ""
},
{
"first": "Alex",
"middle": [],
"last": "Graves",
"suffix": ""
},
{
"first": "Nal",
"middle": [],
"last": "Kalchbrenner",
"suffix": ""
},
{
"first": "Andrew",
"middle": [],
"last": "Senior",
"suffix": ""
},
{
"first": "Koray",
"middle": [],
"last": "Kavukcuoglu",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1609.03499"
]
},
"num": null,
"urls": [],
"raw_text": "Aaron Van Den Oord, Sander Dieleman, Heiga Zen, Karen Simonyan, Oriol Vinyals, Alex Graves, Nal Kalchbrenner, Andrew Senior, and Koray Kavukcuoglu. 2016. Wavenet: A generative model for raw audio. arXiv preprint arXiv:1609.03499.",
"links": null
},
"BIBREF23": {
"ref_id": "b23",
"title": "Attention is all you need",
"authors": [
{
"first": "Ashish",
"middle": [],
"last": "Vaswani",
"suffix": ""
},
{
"first": "Noam",
"middle": [],
"last": "Shazeer",
"suffix": ""
},
{
"first": "Niki",
"middle": [],
"last": "Parmar",
"suffix": ""
},
{
"first": "Jakob",
"middle": [],
"last": "Uszkoreit",
"suffix": ""
},
{
"first": "Llion",
"middle": [],
"last": "Jones",
"suffix": ""
},
{
"first": "Aidan",
"middle": [
"N"
],
"last": "Gomez",
"suffix": ""
},
{
"first": "Lukasz",
"middle": [],
"last": "Kaiser",
"suffix": ""
},
{
"first": "Illia",
"middle": [],
"last": "Polosukhin",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1706.03762"
]
},
"num": null,
"urls": [],
"raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.",
"links": null
},
"BIBREF24": {
"ref_id": "b24",
"title": "Google's neural machine translation system: Bridging the gap between human and machine translation",
"authors": [
{
"first": "Yonghui",
"middle": [],
"last": "Wu",
"suffix": ""
},
{
"first": "Mike",
"middle": [],
"last": "Schuster",
"suffix": ""
},
{
"first": "Zhifeng",
"middle": [],
"last": "Chen",
"suffix": ""
},
{
"first": "Quoc",
"middle": [
"V"
],
"last": "Le",
"suffix": ""
},
{
"first": "Mohammad",
"middle": [],
"last": "Norouzi",
"suffix": ""
},
{
"first": "Wolfgang",
"middle": [],
"last": "Macherey",
"suffix": ""
},
{
"first": "Maxim",
"middle": [],
"last": "Krikun",
"suffix": ""
},
{
"first": "Yuan",
"middle": [],
"last": "Cao",
"suffix": ""
},
{
"first": "Qin",
"middle": [],
"last": "Gao",
"suffix": ""
},
{
"first": "Klaus",
"middle": [],
"last": "Macherey",
"suffix": ""
}
],
"year": 2016,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {},
"num": null,
"urls": [],
"raw_text": "Yonghui Wu, Mike Schuster, Zhifeng Chen, Quoc V Le, Mohammad Norouzi, Wolfgang Macherey, Maxim Krikun, Yuan Cao, Qin Gao, and Klaus Macherey. 2016. Google's neural machine transla- tion system: Bridging the gap between human and machine translation.",
"links": null
},
"BIBREF25": {
"ref_id": "b25",
"title": "Chunk-based biscale decoder for neural machine translation",
"authors": [
{
"first": "Hao",
"middle": [],
"last": "Zhou",
"suffix": ""
},
{
"first": "Zhaopeng",
"middle": [],
"last": "Tu",
"suffix": ""
},
{
"first": "Shujian",
"middle": [],
"last": "Huang",
"suffix": ""
},
{
"first": "Xiaohua",
"middle": [],
"last": "Liu",
"suffix": ""
},
{
"first": "Hang",
"middle": [],
"last": "Li",
"suffix": ""
},
{
"first": "Jiajun",
"middle": [],
"last": "Chen",
"suffix": ""
}
],
"year": 2017,
"venue": "",
"volume": "",
"issue": "",
"pages": "",
"other_ids": {
"arXiv": [
"arXiv:1705.01452"
]
},
"num": null,
"urls": [],
"raw_text": "Hao Zhou, Zhaopeng Tu, Shujian Huang, Xiaohua Liu, Hang Li, and Jiajun Chen. 2017. Chunk-based bi- scale decoder for neural machine translation. arXiv preprint arXiv:1705.01452.",
"links": null
}
},
"ref_entries": {
"FIGREF1": {
"text": "Short-distance prediction (top) and longdistance prediction (bottom).",
"num": null,
"type_str": "figure",
"uris": null
},
"FIGREF2": {
"text": "Performance of the SAT with and without sequence-level knowledge distillation. Position-wise cross-entropy for various models on English-German translation.",
"num": null,
"type_str": "figure",
"uris": null
},
"TABREF1": {
"num": null,
"text": "Summary of the two corpora.",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF3": {
"num": null,
"text": "",
"html": null,
"type_str": "table",
"content": "<table/>"
},
"TABREF4": {
"num": null,
"text": "Time needed to decode one sentence under various batch size settings. A single NVIDIA TIAN Xp is used in this test.",
"html": null,
"type_str": "table",
"content": "<table><tr><td>Model</td><td>K=1</td><td>K=2</td><td>K=4</td><td>K=6</td></tr><tr><td colspan=\"5\">Latency 1384ms 607ms 502ms 372ms</td></tr></table>"
},
"TABREF5": {
"num": null,
"text": "Time needed to decode one sentence on CPU device. Sentences are decoded one by one without batching. K=1 denotes the Transformer.",
"html": null,
"type_str": "table",
"content": "<table/>"
}
}
}
}