| { |
| "paper_id": "P19-1032", |
| "header": { |
| "generated_with": "S2ORC 1.0.0", |
| "date_generated": "2023-01-19T08:21:21.324785Z" |
| }, |
| "title": "Adaptive Attention Span in Transformers", |
| "authors": [ |
| { |
| "first": "Sainbayar", |
| "middle": [], |
| "last": "Sukhbaatar", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "egrave@fb.com" |
| }, |
| { |
| "first": "Piotr", |
| "middle": [], |
| "last": "Bojanowski", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "bojanowski@fb.com" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "", |
| "affiliation": {}, |
| "email": "ajoulin@fb.com" |
| } |
| ], |
| "year": "", |
| "venue": null, |
| "identifiers": {}, |
| "abstract": "We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to extend significantly the maximum context size used in Transformer, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character level language modeling, where we achieve state-of-the-art performances on text8 and enwiki8 by using a maximum context of 8k characters.", |
| "pdf_parse": { |
| "paper_id": "P19-1032", |
| "_pdf_hash": "", |
| "abstract": [ |
| { |
| "text": "We propose a novel self-attention mechanism that can learn its optimal attention span. This allows us to extend significantly the maximum context size used in Transformer, while maintaining control over their memory footprint and computational time. We show the effectiveness of our approach on the task of character level language modeling, where we achieve state-of-the-art performances on text8 and enwiki8 by using a maximum context of 8k characters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Abstract", |
| "sec_num": null |
| } |
| ], |
| "body_text": [ |
| { |
| "text": "Language models are at the core of many NLP applications, like machine translation or dialogue. Recently, much progress has been made by a new neural network called Transformer (Vaswani et al., 2017) . Part of its success is due to its ability to capture long term dependencies. This is achieved by taking long sequences as inputs and explicitly compute the relations between every token via a mechanism called the \"self-attention\" layer (Al-Rfou et al., 2019) .", |
| "cite_spans": [ |
| { |
| "start": 177, |
| "end": 199, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 438, |
| "end": 460, |
| "text": "(Al-Rfou et al., 2019)", |
| "ref_id": "BIBREF0" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "While this layer allows for information to propagate across long distances, it has a computational and memory cost that scales quadratically with the size of the input sequence. As a consequence, Transformers hardly scale to sequences of more than a thousand tokens. This is particularly problematic in the case of character level language modeling where dependencies are often spread over a few thousands time steps.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "In this work, we propose an alternative to the self-attention layer to reduce the computational burden of a Transformer. Our layer learns its optimal context size, resulting in a network where each attention layer gathers information on their own context. In practice, we observe that this leads to Transformer with small context in the low-level layers and very large ones for the last layers. With this modification, we are able to scale input sequences to more than 8k tokens with no loss of performance, nor additional computational or memory cost. We validate our approach on the task of character level language modeling where we reach state-of-the-art performances while reducing the number of FLOPS. The code to reproduce our results is publicly available 1 .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Introduction", |
| "sec_num": "1" |
| }, |
| { |
| "text": "Language modeling is the problem of assigning a probability to a sequence of tokens (w 1 , . . . , w T ):", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "P (w 1 , . . . , w T ) = T t=1 P (w t | w t\u22121 , . . . , w 1 ).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Recent progress was made with a new autoregressive model called Sequential Transformer (Vaswani et al., 2017) . A Transformer is made of a sequence of layers that are composed of a block of parallel self-attention layers followed by a feedforward network. We refer to Vaswani et al. (2017) for the details on the structure. In this paper, we make a couple of modifications to the Transformer model: we use the relative position embeddings of Shaw et al. (2018) and the caching mechanism of Dai et al. (2019) to speed up the train and test time.", |
| "cite_spans": [ |
| { |
| "start": 87, |
| "end": 109, |
| "text": "(Vaswani et al., 2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 268, |
| "end": 289, |
| "text": "Vaswani et al. (2017)", |
| "ref_id": "BIBREF10" |
| }, |
| { |
| "start": 442, |
| "end": 460, |
| "text": "Shaw et al. (2018)", |
| "ref_id": "BIBREF7" |
| }, |
| { |
| "start": 490, |
| "end": 507, |
| "text": "Dai et al. (2019)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Self-attention layer. A core mechanism of a transformer network is the self-attention layer, which consists of multiple attention heads working in parallel. Each attention head applies the attention mechanism of Bahdanau et al. (2015) to its own input. Given a token t in a sequence, the head ", |
| "cite_spans": [ |
| { |
| "start": 212, |
| "end": 234, |
| "text": "Bahdanau et al. (2015)", |
| "ref_id": "BIBREF1" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "s tr = x t W q (W k x r + p t\u2212r ) ,", |
| "eq_num": "(1)" |
| } |
| ], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where W k and W q are the \"key\" and \"query\" matrices, and p t\u2212r is the relative position embedding. The attention weights are then obtained by applying a softmax function on these similarities:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "a tr = exp (s tr ) t\u22121 q=t\u2212S exp (s tq ) ,", |
| "eq_num": "(2)" |
| } |
| ], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Finally, the head outputs a vector y t by taking the average of the past representations weighted by their attention weights:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "EQUATION", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [ |
| { |
| "start": 0, |
| "end": 8, |
| "text": "EQUATION", |
| "ref_id": "EQREF", |
| "raw_str": "y t = t\u22121 r=t\u2212S a tr W v x r ,", |
| "eq_num": "(3)" |
| } |
| ], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "where W v is called the \"value\" matrix. Outputs from different heads are then concatenated together and multiplied by an output matrix W o before feeding to the next layer. Similar to the memory access mechanisms of Sukhbaatar et al. (2015) , it pulls information from the past to update the current token representation. Repeating this mechanism in consecutive layers allows for information to flow over long distances. However, for each input token, each attention head scales linearly in memory and time in the context size, or attention span. There are typically 12 layers with 8 heads each that processes 512 tokens simultaneously. This drastically limits the maximum attention span used in Transformers.", |
| "cite_spans": [ |
| { |
| "start": 216, |
| "end": 240, |
| "text": "Sukhbaatar et al. (2015)", |
| "ref_id": "BIBREF9" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Sequential transformer network", |
| "sec_num": "2.1" |
| }, |
| { |
| "text": "Each attention head of a Transformer shares the same attention span S. This assumes that every head requires the same span to form its representation. As shown in Figure 1 modeling: some heads (e.g., Head A) focus on the recent history, while others take information from the whole available context (e.g., Head B). In this section, we propose to learn the attention span of each head independently to reduce their computational and memory cost. For each head, we add a masking function to control for the span of the attention. A masking function is a non-increasing function that maps a distance to a value in [0, 1]. We take the following soft masking function m z parametrized by a real value z in [0, S]:", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 163, |
| "end": 171, |
| "text": "Figure 1", |
| "ref_id": "FIGREF0" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "m z (x) = min max 1 R (R + z \u2212 x) , 0 , 1 ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where R is a hyper-parameter that controls its softness. This soft masking function is inspired by Jernite et al. (2017). In Figure 2 , we show the shape of this piecewise function as a function of the distance. The attention weights from Eq. 2 are then computed on the masked span, i.e.,", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 125, |
| "end": 133, |
| "text": "Figure 2", |
| "ref_id": "FIGREF1" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "a tr = m z (t \u2212 r) exp (s tr ) t\u22121 q=t\u2212S m z (t \u2212 q) exp (s tq ) .", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "We add a 1 penalization on the parameters z i for each attention head i of the model to the loss function:", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "L = \u2212 log P (w 1 , . . . , w T ) + \u03bb M i z i ,", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "where \u03bb > 0 is the regularization hyperparameter, and M is the number of heads in each layer. Our formulation is differentiable in the parameters z i and we learn them jointly with the rest of the model.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Adaptive attention span", |
| "sec_num": "2.2" |
| }, |
| { |
| "text": "As an extension, we consider a dynamic computation approach (Graves, 2016) where the attention span dynamically change based on the current input (Luong et al., 2015; Shu and Nakayama, 2017) . At a time step t, the span parameter z t of an attention head is then a function of the input parametrized by a vector v and a scalar b, i.e., z t = S\u03c3(v T x t + b). We penalize z t in the same way as before and learn the parameters v, b jointly with the rest of the parameters.", |
| "cite_spans": [ |
| { |
| "start": 60, |
| "end": 74, |
| "text": "(Graves, 2016)", |
| "ref_id": "BIBREF3" |
| }, |
| { |
| "start": 146, |
| "end": 166, |
| "text": "(Luong et al., 2015;", |
| "ref_id": "BIBREF5" |
| }, |
| { |
| "start": 167, |
| "end": 190, |
| "text": "Shu and Nakayama, 2017)", |
| "ref_id": "BIBREF8" |
| } |
| ], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Dynamic attention span.", |
| "sec_num": null |
| }, |
| { |
| "text": "In this section, we evaluate the impact of our adaptive attention mechanism in the experimental setting of Al-Rfou et al. (2019). Embedding vectors are initialized from N(0, 1), and the projection matrices W_{q,k,v,o} are initialized from U(-1/\\sqrt{d_h}, 1/\\sqrt{d_h}).", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "A single set of position embeddings p t is shared across all the heads.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In adaptive-span models, we reprameterized the span parameter z by z = Sz , where z \u2208 [0, 1] is initialized to 0. In dynamic-span models, the bias term b is initialized \u22124 to make initial spans small. We set the hyperparameters \u03bb = 2 \u00d7 10 \u22126 and R = 32 for the both type of models, except \u03bb is reduced to 0.5 \u00d7 10 \u22126 when S = 8192 because z was not growing longer than 4000.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "We use Adagrad with a batch size of 64 and fixed learning rate of 0.07 and 32k warm-up steps. Our warm-up strategy differs from Vaswani et al. (2017): we linearly increase learning rate from zero to the final learning rate. Gradients of each module are clipped at 0.03 for better stability. At train time, we use a block of 512 consecutive characters and compute the loss and gradient for each of those 512 characters.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In small models, we apply dropout with a rate of 0.3 to the attention and the feedforward ReLU activations. We train small models for 600K steps (900K steps when S = 8192), which takes about 2 \u223c 3 days on 8 V100 GPUs depending on the attention span limit. Large models are trained with a dropout rate of 0.4 until the validation performance stopped improving (250K steps for text8 and 150K steps for enwik8), and then further trained for 20K steps with a learning rate divided by 10.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Results. In Table 1 , we compare our sequential Transformer with the adaptive spans (\"Adaptive-Span\") of Sec. 2.2 to models of Al-Rfou et al. (2019) and Dai et al. (2019) . For small models, our model outperforms the other Transformers by 0.07 bcp while significantly reducing the memory usage for large attention span. Interestingly, even with a limit on span sets to 8192, the average span is only 314. Similar results are obtained on enwik8 as shown in Table 2 , where the adaptive-span model outperformed similar sized models with a significantly smaller average span. Our large models achieved state-of-the-art performances on both datasets with fewer parameters and FLOPS.", |
| "cite_spans": [ |
| { |
| "start": 127, |
| "end": 148, |
| "text": "Al-Rfou et al. (2019)", |
| "ref_id": "BIBREF0" |
| }, |
| { |
| "start": 153, |
| "end": 170, |
| "text": "Dai et al. (2019)", |
| "ref_id": "BIBREF2" |
| } |
| ], |
| "ref_spans": [ |
| { |
| "start": 12, |
| "end": 19, |
| "text": "Table 1", |
| "ref_id": "TABREF1" |
| }, |
| { |
| "start": 456, |
| "end": 463, |
| "text": "Table 2", |
| "ref_id": "TABREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In Figure 3 , we compare the fixed and adaptive span small Transformers as we increase the attention span limit S. The performance of both models improve as the limit increase (see Figure 3(left) ), but the adaptive-span model benefits more from longer span. As shown on the Figure 3(center) , a Transformer with adaptive spans controls its average spans, leading to reduction of up to 70% in the number of FLOPS for the inference with large spans (see Figure 3(right) ).", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 3, |
| "end": 11, |
| "text": "Figure 3", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 181, |
| "end": 195, |
| "text": "Figure 3(left)", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 275, |
| "end": 291, |
| "text": "Figure 3(center)", |
| "ref_id": "FIGREF2" |
| }, |
| { |
| "start": 453, |
| "end": 468, |
| "text": "Figure 3(right)", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Impact on the attention span. In Figure 4 , we show the final attention spans of every attention heads of our small adaptive-span model with S = 4096. Even though all the span sizes are initialized to the same value, we see large varieties in their final values. We can see that the lowest 5 layers have the smallest possible attention span, which is R = 32 of the masking function. This indicates that lower layers in a Transformer model do not really require a long attention span in this particular task. In contrast, few attention heads in the higher layers have very long spans, exceeding several thousand. Although there is a general tendency of higher layers having longer attention spans, it is not a simple monotonic function of the layer height.", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 33, |
| "end": 41, |
| "text": "Figure 4", |
| "ref_id": "FIGREF3" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Impact on the number of FLOPS. Having a smaller attention span has a direct impact on the total number of FLOPS necessary for computing one-step prediction. In a standard fixed-span model, the total number of FLOPS is mostly controlled by the feed-forward layer (accounting for 62% of FLOPS when S = 256). However, as the span increase, the attention layer dominates the computation (82% of FLOPS when S = 8192), making it hard to scale to longer sequences. In contrast, the learning of an attention span keeps computation at a relatively constant level even as 1 2 3 4 5 6 7 8 9 10 11 12 Layers 10 1 10 2 10 3 Attention span S increase as shown in Figure 3(right) .", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 649, |
| "end": 664, |
| "text": "Figure 3(right)", |
| "ref_id": "FIGREF2" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "The memory usage is also dominated by the attention layer as the attention span increase. Thus, reducing the average span will also reduce the memory usage. However, because all heads in a single layer attend to common state vectors, the maximum span within each layer will determine the memory usage. The same is true for the number of FLOPS if all heads of a layer are computed together, as often done for better efficiency.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "In practice, the largest fixed-span model that could fit in memory for training had a span of S = 2048 (batches had to be split when S = 4096), and it took about 550ms per batch. In contrast, an adaptive-span model with a 4 times longer span limit of S = 8192 fit in memory and took about the same time per batch.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Experiments", |
| "sec_num": "3" |
| }, |
| { |
| "text": "Dynamic span. In Table 3 , we show the adaptive and dynamic spans achieved the same performance with comparable average spans on text8. Figure 5 shows how the average dynamic span adapts to the input sequence. The span increases at the beginning of words and in the middle of composed words, e.g., to predict the \"l\" in \"overlook\".", |
| "cite_spans": [], |
| "ref_spans": [ |
| { |
| "start": 17, |
| "end": 24, |
| "text": "Table 3", |
| "ref_id": "TABREF4" |
| }, |
| { |
| "start": 136, |
| "end": 144, |
| "text": "Figure 5", |
| "ref_id": "FIGREF4" |
| } |
| ], |
| "eq_spans": [], |
| "section": "Model", |
| "sec_num": null |
| }, |
| { |
| "text": "In this work, we present a novel self-attention layer with an adaptive span. This mechanism allows for models with longer context, and thus with the capability to catch longer dependencies. We have shown the importantce of this feature in the context of character level modeling where information is spread over great distances.", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "Conclusion", |
| "sec_num": "4" |
| }, |
| { |
| "text": "https://github.com/facebookresearch/ adaptive-span", |
| "cite_spans": [], |
| "ref_spans": [], |
| "eq_spans": [], |
| "section": "", |
| "sec_num": null |
| } |
| ], |
| "back_matter": [], |
| "bib_entries": { |
| "BIBREF0": { |
| "ref_id": "b0", |
| "title": "Character-level language modeling with deeper self-attention", |
| "authors": [ |
| { |
| "first": "Rami", |
| "middle": [], |
| "last": "Al-Rfou", |
| "suffix": "" |
| }, |
| { |
| "first": "Dokook", |
| "middle": [], |
| "last": "Choe", |
| "suffix": "" |
| }, |
| { |
| "first": "Noah", |
| "middle": [], |
| "last": "Constant", |
| "suffix": "" |
| }, |
| { |
| "first": "Mandy", |
| "middle": [], |
| "last": "Guo", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "Proceedings of the 33rd AAAI Conference on Artificial Intelligence", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Rami Al-Rfou, Dokook Choe, Noah Constant, Mandy Guo, and Llion Jones. 2019. Character-level lan- guage modeling with deeper self-attention. In Pro- ceedings of the 33rd AAAI Conference on Artificial Intelligence.", |
| "links": null |
| }, |
| "BIBREF1": { |
| "ref_id": "b1", |
| "title": "Neural machine translation by jointly learning to align and translate", |
| "authors": [ |
| { |
| "first": "Dzmitry", |
| "middle": [], |
| "last": "Bahdanau", |
| "suffix": "" |
| }, |
| { |
| "first": "Kyunghyun", |
| "middle": [], |
| "last": "Cho", |
| "suffix": "" |
| }, |
| { |
| "first": "Yoshua", |
| "middle": [], |
| "last": "Bengio", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "3rd International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Ben- gio. 2015. Neural machine translation by jointly learning to align and translate. In 3rd International Conference on Learning Representations, ICLR.", |
| "links": null |
| }, |
| "BIBREF2": { |
| "ref_id": "b2", |
| "title": "Transformer-xl: Attentive language models beyond a fixed", |
| "authors": [ |
| { |
| "first": "Zihang", |
| "middle": [], |
| "last": "Dai", |
| "suffix": "" |
| }, |
| { |
| "first": "Zhilin", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Yiming", |
| "middle": [], |
| "last": "Yang", |
| "suffix": "" |
| }, |
| { |
| "first": "Jaime", |
| "middle": [ |
| "G" |
| ], |
| "last": "Carbonell", |
| "suffix": "" |
| }, |
| { |
| "first": "Quoc", |
| "middle": [ |
| "V" |
| ], |
| "last": "Le", |
| "suffix": "" |
| }, |
| { |
| "first": "Ruslan", |
| "middle": [], |
| "last": "", |
| "suffix": "" |
| } |
| ], |
| "year": 2019, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Zihang Dai, Zhilin Yang, Yiming Yang, Jaime G. Carbonell, Quoc V. Le, and Ruslan Salakhutdi- nov. 2019. Transformer-xl: Attentive language models beyond a fixed-length context. CoRR, abs/1901.02860.", |
| "links": null |
| }, |
| "BIBREF3": { |
| "ref_id": "b3", |
| "title": "Adaptive computation time for recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Alex", |
| "middle": [], |
| "last": "Graves", |
| "suffix": "" |
| } |
| ], |
| "year": 2016, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Alex Graves. 2016. Adaptive computation time for re- current neural networks. CoRR, abs/1603.08983.", |
| "links": null |
| }, |
| "BIBREF4": { |
| "ref_id": "b4", |
| "title": "Variable computation in recurrent neural networks", |
| "authors": [ |
| { |
| "first": "Yacine", |
| "middle": [], |
| "last": "Jernite", |
| "suffix": "" |
| }, |
| { |
| "first": "Edouard", |
| "middle": [], |
| "last": "Grave", |
| "suffix": "" |
| }, |
| { |
| "first": "Armand", |
| "middle": [], |
| "last": "Joulin", |
| "suffix": "" |
| }, |
| { |
| "first": "Tomas", |
| "middle": [], |
| "last": "Mikolov", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "5th International Conference on Learning Representations", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Yacine Jernite, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Variable computation in re- current neural networks. In 5th International Con- ference on Learning Representations, ICLR.", |
| "links": null |
| }, |
| "BIBREF5": { |
| "ref_id": "b5", |
| "title": "Effective approaches to attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Thang", |
| "middle": [], |
| "last": "Luong", |
| "suffix": "" |
| }, |
| { |
| "first": "Hieu", |
| "middle": [], |
| "last": "Pham", |
| "suffix": "" |
| }, |
| { |
| "first": "Christopher", |
| "middle": [ |
| "D" |
| ], |
| "last": "Manning", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Thang Luong, Hieu Pham, and Christopher D. Man- ning. 2015. Effective approaches to attention-based neural machine translation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, EMNLP.", |
| "links": null |
| }, |
| "BIBREF6": { |
| "ref_id": "b6", |
| "title": "Large text compression benchmark", |
| "authors": [ |
| { |
| "first": "Matt", |
| "middle": [], |
| "last": "Mahoney", |
| "suffix": "" |
| } |
| ], |
| "year": 2011, |
| "venue": "", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Matt Mahoney. 2011. Large text compression bench- mark. URL: http://www. mattmahoney. net/text/text. html.", |
| "links": null |
| }, |
| "BIBREF7": { |
| "ref_id": "b7", |
| "title": "Self-attention with relative position representations", |
| "authors": [ |
| { |
| "first": "Peter", |
| "middle": [], |
| "last": "Shaw", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| } |
| ], |
| "year": 2018, |
| "venue": "Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Peter Shaw, Jakob Uszkoreit, and Ashish Vaswani. 2018. Self-attention with relative position represen- tations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Tech- nologies, NAACL-HLT.", |
| "links": null |
| }, |
| "BIBREF8": { |
| "ref_id": "b8", |
| "title": "An empirical study of adequate vision span for attention-based neural machine translation", |
| "authors": [ |
| { |
| "first": "Raphael", |
| "middle": [], |
| "last": "Shu", |
| "suffix": "" |
| }, |
| { |
| "first": "Hideki", |
| "middle": [], |
| "last": "Nakayama", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Proceedings of the First Workshop on Neural Machine Translation", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Raphael Shu and Hideki Nakayama. 2017. An empiri- cal study of adequate vision span for attention-based neural machine translation. In Proceedings of the First Workshop on Neural Machine Translation.", |
| "links": null |
| }, |
| "BIBREF9": { |
| "ref_id": "b9", |
| "title": "End-to-end memory networks", |
| "authors": [ |
| { |
| "first": "Sainbayar", |
| "middle": [], |
| "last": "Sukhbaatar", |
| "suffix": "" |
| }, |
| { |
| "first": "Arthur", |
| "middle": [], |
| "last": "Szlam", |
| "suffix": "" |
| }, |
| { |
| "first": "Jason", |
| "middle": [], |
| "last": "Weston", |
| "suffix": "" |
| }, |
| { |
| "first": "Rob", |
| "middle": [], |
| "last": "Fergus", |
| "suffix": "" |
| } |
| ], |
| "year": 2015, |
| "venue": "Advances in Neural Information Processing Systems 28", |
| "volume": "", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Sainbayar Sukhbaatar, Arthur Szlam, Jason Weston, and Rob Fergus. 2015. End-to-end memory net- works. In Advances in Neural Information Process- ing Systems 28.", |
| "links": null |
| }, |
| "BIBREF10": { |
| "ref_id": "b10", |
| "title": "Attention is all you need", |
| "authors": [ |
| { |
| "first": "Ashish", |
| "middle": [], |
| "last": "Vaswani", |
| "suffix": "" |
| }, |
| { |
| "first": "Noam", |
| "middle": [], |
| "last": "Shazeer", |
| "suffix": "" |
| }, |
| { |
| "first": "Niki", |
| "middle": [], |
| "last": "Parmar", |
| "suffix": "" |
| }, |
| { |
| "first": "Jakob", |
| "middle": [], |
| "last": "Uszkoreit", |
| "suffix": "" |
| }, |
| { |
| "first": "Llion", |
| "middle": [], |
| "last": "Jones", |
| "suffix": "" |
| }, |
| { |
| "first": "Aidan", |
| "middle": [ |
| "N" |
| ], |
| "last": "Gomez", |
| "suffix": "" |
| }, |
| { |
| "first": "Lukasz", |
| "middle": [], |
| "last": "Kaiser", |
| "suffix": "" |
| }, |
| { |
| "first": "Illia", |
| "middle": [], |
| "last": "Polosukhin", |
| "suffix": "" |
| } |
| ], |
| "year": 2017, |
| "venue": "Advances in Neural Information Processing Systems", |
| "volume": "30", |
| "issue": "", |
| "pages": "", |
| "other_ids": {}, |
| "num": null, |
| "urls": [], |
| "raw_text": "Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Pro- cessing Systems 30.", |
| "links": null |
| } |
| }, |
| "ref_entries": { |
| "FIGREF0": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Attention patterns of two different heads of a standard Transformer. The two patterns are qualitatively different: Head A utilizes recent steps, while Head B has uniform attention over the context. first computes similarities with its past, i.e., any token r in the span [t \u2212 S, t):" |
| }, |
| "FIGREF1": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": ", this assumption does not hold in the context of character level language The soft mask as a function of the distance." |
| }, |
| "FIGREF2": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Left: validation performances improve as the attention span limit S increase (we did not train a fixedspan model with S = 8192 due to memory limitation). Center: average attention span of trained models. Learning attention spans significantly reduces the average attention span. Right: the number of FLOPS during inference time grows almost linearly with S for the fixed span models. The adaptive-span models do not have this growth in #FLOPS because they have a very small attention span on average." |
| }, |
| "FIGREF3": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "Adaptive spans (in log-scale) of every attention heads in a 12-layer model with span limit S = 4096. Few attention heads require long attention spans." |
| }, |
| "FIGREF4": { |
| "type_str": "figure", |
| "uris": null, |
| "num": null, |
| "text": "e r l o o k s t h e p a r k a nd i t s nume r o Example of average dynamic attention span as a function of the input sequence. The span is averaged over the layers and heads." |
| }, |
| "TABREF1": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table><tr><td/><td>Model</td><td/><td/><td colspan=\"4\">#layers Avg. span #Params #FLOPS dev test</td></tr><tr><td/><td colspan=\"2\">Small models</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"3\">T12 (Al-Rfou et al., 2019)</td><td>12</td><td>512</td><td>44M</td><td>22G</td><td>-</td><td>1.18</td></tr><tr><td/><td colspan=\"3\">Adaptive-Span (S = 8192)</td><td>12</td><td>314</td><td>38M</td><td>42M</td><td>1.05 1.11</td></tr><tr><td/><td colspan=\"2\">Large models</td><td/><td/><td/><td/></tr><tr><td/><td colspan=\"3\">T64 (Al-Rfou et al., 2019)</td><td>64</td><td>512</td><td>235M</td><td>120G</td><td>1.06 1.13</td></tr><tr><td/><td colspan=\"3\">T-XL (Dai et al., 2019)</td><td>24</td><td>3800</td><td>277M</td><td>438M</td><td>-</td><td>1.08</td></tr><tr><td/><td colspan=\"3\">Adaptive-Span (S = 8192)</td><td>24</td><td>245</td><td>209M</td><td>179M</td><td>1.01 1.07</td></tr><tr><td/><td>1.10</td><td/><td/><td/><td/><td/></tr><tr><td>Dev. (bpc)</td><td>1.06 1.08</td><td/><td/><td/><td/><td/></tr><tr><td/><td>256</td><td>1024</td><td>4096</td><td/><td/><td/></tr><tr><td/><td colspan=\"3\">Span limit (S)</td><td/><td/><td/></tr></table>", |
| "text": "Character level language modeling on text8. We report bpc for the dev and test sets, as well as, the number of parameters, the average attention spans and total number of FLOPS (an estimate of the number of FLOPS necessary for computing one step prediction)." |
| }, |
| "TABREF3": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Results on enwik8. The span limit is S = 8192 for the adaptive-span models." |
| }, |
| "TABREF4": { |
| "type_str": "table", |
| "num": null, |
| "html": null, |
| "content": "<table/>", |
| "text": "Comparison between adaptive and dynamic attention span on text8." |
| } |
| } |
| } |
| } |