Here the softmax is applied over each of the rows of the matrix. The number of dimensions in a query vector is the query size, and similarly for the key size and value size. The output dimension of an attention head is its head dimension. The attention mechanism requires three equalities to hold: the query dimension must equal the key dimension (so that the dot products are defined), the number of key vectors must equal the number of value vectors (so that the attention weights can be applied), and the value dimension must equal the head dimension (since the output is a weighted sum of value vectors); it is otherwise unconstrained. If the attention head is used in a self-attention fashion, the queries, keys, and values are all computed from the same input sequence. If the attention head is used in a cross-attention fashion, then usually the queries are computed from one sequence while the keys and values are computed from another. It is theoretically possible for all three to be different, but that is rarely the case in practice.

Multiheaded attention

One set of (query, key, value) projection matrices is called an attention head, and each layer in a transformer model has multiple attention heads. While each attention head attends to the tokens that are relevant to each token, multiple attention heads allow the model to do this for different definitions of "relevance". Specifically, the query and key projection matrices, W^Q and W^K, which are involved in the attention score computation, define the "relevance". Meanwhile, the value projection matrix W^V, in combination with the corresponding part of the output projection matrix W^O, determines how the attended tokens influence what information is passed to subsequent layers and ultimately the output logits. In addition, the scope of attention, or the range of token relationships captured by each attention head, can expand as tokens pass through successive layers. This allows the model to capture more complex and long-range dependencies in deeper layers.

Many transformer attention heads encode relevance relations that are meaningful to humans. For example, some attention heads can attend mostly to the next word, while others mainly attend from verbs to their direct objects.[56] The computations for each attention head can be performed in parallel, which allows for fast processing. The outputs of the attention layer are concatenated to pass into the feed-forward neural network layers. Concretely, let the attention heads be indexed by i; then

MultiheadedAttention(X) = Concat_i( Attention(X W_i^Q, X W_i^K, X W_i^V) ) W^O,

where the matrix X is the concatenation of word embeddings, the matrices W_i^Q, W_i^K, W_i^V are "projection matrices" owned by the individual attention head i, and W^O is a final projection matrix owned by the whole multi-headed attention layer. It is theoretically possible for each attention head to have a different head dimension, but that is rarely the case in practice.

As an example, the smallest GPT-2 model has only self-attention mechanisms, with an embedding size of 768 and 12 attention heads of head dimension 64 each. Since 12 x 64 = 768, its output projection matrix W^O is a square matrix.
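The following is a minimal NumPy sketch of the multi-headed attention computation just described. The function and array names are illustrative, and the sizes (12 heads of dimension 64 over an embedding size of 768, as in the smallest GPT-2 configuration) are chosen only for the example; this is a schematic, not a reference implementation.

import numpy as np

def softmax(x):
    # softmax over each row, as in the attention formula above
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def multiheaded_attention(X, Wq, Wk, Wv, Wo, n_heads=12):
    # X: (seq_len, d_emb); Wq, Wk, Wv: (d_emb, n_heads * d_head); Wo: (n_heads * d_head, d_emb)
    seq_len, d_emb = X.shape
    d_head = Wq.shape[1] // n_heads
    def heads(W):
        # project, then split into per-head matrices of shape (n_heads, seq_len, d_head)
        return (X @ W).reshape(seq_len, n_heads, d_head).transpose(1, 0, 2)
    Q, K, V = heads(Wq), heads(Wk), heads(Wv)
    scores = Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)   # (n_heads, seq_len, seq_len)
    out = softmax(scores) @ V                             # (n_heads, seq_len, d_head)
    concat = out.transpose(1, 0, 2).reshape(seq_len, n_heads * d_head)
    return concat @ Wo                                    # final output projection

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 768))                         # 5 tokens, embedding size 768
Wq, Wk, Wv, Wo = (0.02 * rng.standard_normal((768, 768)) for _ in range(4))
print(multiheaded_attention(X, Wq, Wk, Wv, Wo).shape)     # (5, 768)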
Masked attention

The Transformer architecture is constructed to calculate output tokens iteratively: once an output token has been calculated at some step, it shall remain constant at all later steps. This ensures properties of the model similar to autoregressive models.[1] Therefore, at every time step, the calculation for the output at a given position should not have access to tokens at later positions (as is naturally the case at generation time, when those tokens have not yet been calculated). This behavior may be accomplished before the softmax stage by adding a mask matrix M that is -∞ at entries where the attention link must be cut, and 0 at other places:

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k) + M) V.

The following matrix is commonly used in decoder self-attention modules and is called "causal masking": M_causal has entries equal to 0 for j ≤ i and -∞ for j > i, where i indexes the query position and j the key position. In words, it means that each token can pay attention to itself, and every token before it, but not any after it. A non-masked attention module can be thought of as a masked attention module where the mask has all entries zero. As an example of an uncommon use of mask matrices, XLNet considers causal masks reordered by a random permutation matrix.[57]
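As a concrete illustration of the mask described above, the sketch below builds the causal mask and adds it to the attention scores before the softmax. The shapes and names are illustrative only.

import numpy as np

def causal_mask(n):
    # 0 on and below the diagonal (each token sees itself and earlier tokens),
    # -inf above the diagonal (attention links to future tokens are cut)
    mask = np.zeros((n, n))
    mask[np.triu_indices(n, k=1)] = -np.inf
    return mask

def masked_attention(Q, K, V, mask):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k) + mask                # masked entries become -inf
    scores = scores - scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)                              # exp(-inf) = 0: no attention paid
    weights = weights / weights.sum(axis=-1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
Q = K = V = rng.standard_normal((4, 8))
print(np.round(masked_attention(Q, K, V, causal_mask(4)), 3))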
Encoder

An encoder consists of an embedding layer, followed by multiple encoder layers. Each encoder layer consists of two major components: a self-attention mechanism and a feed-forward layer. It takes as input a sequence of input vectors, applies the self-attention mechanism to produce an intermediate sequence of vectors, then applies the feed-forward layer to each vector individually. Schematically, the layer first applies self-attention to the whole sequence, then applies the feed-forward network, FFN, to each resulting vector individually; more succinctly,

EncoderLayer(H) = FFN(MultiheadedAttention(H, H, H)),

with the implicit convention that the FFN is applied to each row of the matrix individually. The encoder layers are stacked. The first encoder layer takes the sequence of input vectors from the embedding layer, producing a sequence of vectors. This sequence of vectors is processed by the second encoder layer, and so on. The output from the final encoder layer is then used by the decoder. As the encoder processes the entire input all at once, every token can attend to every other token (all-to-all attention), so there is no need for causal masking.
Decoder

A decoder consists of an embedding layer, followed by multiple decoder layers, followed by an un-embedding layer. Each decoder layer consists of three major components: a causally masked self-attention mechanism, a cross-attention mechanism, and a feed-forward neural network. The decoder functions in a similar fashion to the encoder, but an additional attention mechanism is inserted which instead draws relevant information from the encodings generated by the encoders. This mechanism can also be called the encoder-decoder attention.[1][54]

Like the first encoder, the first decoder takes positional information and embeddings of the output sequence as its input, rather than encodings. The transformer must not use the current or future output to predict an output, so the output sequence must be partially masked to prevent this reverse information flow.[1] This allows for autoregressive text generation. For decoding, all-to-all attention is inappropriate, because a token cannot attend to tokens not yet generated. Thus, the self-attention module in the decoder is causally masked. In contrast, the cross-attention mechanism attends to the output vectors of the encoder, which are computed before the decoder starts decoding. Consequently, there is no need for masking in the cross-attention mechanism. Schematically,

DecoderLayer(H) = FFN(MultiheadedAttention(MaskedMultiheadedAttention(H, H, H), H^E, H^E)),

where H^E is the matrix with rows being the output vectors from the encoder. The last decoder layer is followed by a final un-embedding layer to produce the output probabilities over the vocabulary. Then, one of the tokens is sampled according to the probabilities, and the decoder can be run again to produce the next token, etc., autoregressively generating output text.
Adapted architectures

Many large language models, since they do not need to predict a whole new sequence from an input sequence, use only the encoder or the decoder of the original transformer architecture. Early GPT models are decoder-only models trained to predict the next token in a sequence.[58] BERT, another language model, makes use of only an encoder, and is trained to predict a randomly masked token in a sequence.[35]
Full transformer architecture

Sublayers

Each encoder layer contains 2 sublayers: the self-attention and the feedforward network. Each decoder layer contains 3 sublayers: the causally masked self-attention, the cross-attention, and the feedforward network. The final points of detail are the residual connections and layer normalization (LayerNorm, or LN), which, while conceptually unnecessary, are necessary for numerical stability and convergence.

The residual connection, which is introduced to avoid vanishing gradient issues and stabilize training, can be expressed as y = F(x) + x: an output y is the sum of the transformation of the input x (that is, F(x)) and the input itself (x). Adding the input x preserves the input information and avoids issues when the gradient of F(x) is close to zero. Similarly to how the feedforward network modules are applied individually to each vector, the LayerNorm is also applied individually to each vector.

There are two common conventions in use: the post-LN and the pre-LN convention. In the post-LN convention, the output of each sublayer is

LayerNorm(x + Sublayer(x)),

where Sublayer(x) is the function implemented by the sublayer itself. In the pre-LN convention, the output of each sublayer is

x + Sublayer(LayerNorm(x)).

The original 2017 Transformer used the post-LN convention. It was difficult to train and required careful hyperparameter tuning and a learning-rate "warm-up", where the learning rate starts small and is gradually increased. The pre-LN convention, proposed several times in 2018,[59] was found to be easier to train, requiring no warm-up and leading to faster convergence.[46]
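The difference between the two conventions can be made concrete with a short sketch. The stand-in sublayer below is an arbitrary toy function and the learned gain and bias of LayerNorm are omitted; this illustrates where the normalization sits, not a full implementation.

import numpy as np

def layer_norm(x, eps=1e-5):
    # normalize each vector (row) to zero mean and unit variance
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def post_ln_sublayer(x, sublayer):
    # post-LN (original 2017 convention): normalize after the residual addition
    return layer_norm(x + sublayer(x))

def pre_ln_sublayer(x, sublayer):
    # pre-LN: normalize the sublayer input, add the residual afterwards
    return x + sublayer(layer_norm(x))

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
W = rng.standard_normal((8, 8))
ffn = lambda h: np.maximum(h @ W, 0)   # toy stand-in for a feedforward sublayer
print(post_ln_sublayer(x, ffn).shape, pre_ln_sublayer(x, ffn).shape)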
Pseudocode

The following is pseudocode for a standard pre-LN encoder-decoder Transformer, adapted from:[60]

input: Encoder input t_e
       Decoder input t_d
output: Array of probability distributions, with shape (decoder vocabulary size x length(decoder output sequence))

/* encoder */
z_e ← encoder.tokenizer(t_e)

for each t in 1:length(z_e) do
    z_e[t] ← encoder.embedding(z_e[t]) + encoder.positional_embedding(t)

for each l in 1:length(encoder.layers) do
    layer ← encoder.layers[l]

    /* first sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.multiheaded_attention(z_e, z_e, z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]

    /* second sublayer */
    z_e_copy ← copy(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← layer.layer_norm(z_e[t])
    z_e ← layer.feedforward(z_e)
    for each t in 1:length(z_e) do
        z_e[t] ← z_e[t] + z_e_copy[t]

for each t in 1:length(z_e) do
    z_e[t] ← encoder.final_layer_norm(z_e[t])

/* decoder */
z_d ← decoder.tokenizer(t_d)

for each t in 1:length(z_d) do
    z_d[t] ← decoder.embedding(z_d[t]) + decoder.positional_embedding(t)

for each l in 1:length(decoder.layers) do
    layer ← decoder.layers[l]

    /* first sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.masked_multiheaded_attention(z_d, z_d, z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]

    /* second sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.multiheaded_attention(z_d, z_e, z_e)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]

    /* third sublayer */
    z_d_copy ← copy(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← layer.layer_norm(z_d[t])
    z_d ← layer.feedforward(z_d)
    for each t in 1:length(z_d) do
        z_d[t] ← z_d[t] + z_d_copy[t]

z_d ← decoder.final_layer_norm(z_d)

output_distributions ← []
for each t in 1:length(z_d) do
    output_distributions.append(decoder.unembed(z_d[t]))

return output_distributions
Terminology

The Transformer architecture, being modular, allows variations. Several common variations are described here.[61]

An "encoder-only" Transformer applies the encoder to map an input text into a sequence of vectors that represent the input text. This is usually used for text embedding and representation learning for downstream applications. BERT is encoder-only. Encoder-only models are less often used currently, as they were found to be not significantly better than training an encoder-decoder Transformer and then taking just the encoder.[51]

A "decoder-only" Transformer is not literally decoder-only, since without an encoder the cross-attention mechanism has nothing to attend to. Thus, the decoder layers in a decoder-only Transformer are composed of just two sublayers: the causally masked self-attention and the feedforward network. This is usually used for text generation and instruction following. The models in the GPT series and the Chinchilla series are decoder-only.

An "encoder-decoder" Transformer is generally the same as the original Transformer, with 2 sublayers per encoder layer and 3 sublayers per decoder layer. Such models might have minor architectural improvements, such as alternative activation functions or a changed location of normalization. This is also usually used for text generation and instruction following. The models in the T5 series are encoder-decoder.[61]

A "prefixLM" (prefix language model) is a decoder-only architecture, but with prefix masking, which is different from causal masking. Specifically, it has a mask of the following form:[61]: Figure 3  every token can attend to all of the first columns, which correspond to the "prefix", while attention over the subsequent columns, which correspond to the autoregressively generated text based on the prefix, remains causal (see the sketch after this section). PrefixLMs resemble encoder-decoder models, but have less "sparsity". Such models are rarely used, though they are cited as theoretical possibilities and in benchmarked comparisons.[51]

There are also mixed seq2seq models. For example, in 2020, Google Translate replaced the previous RNN-encoder–RNN-decoder model with a Transformer-encoder–RNN-decoder model, on the argument that an RNN decoder runs much faster than a Transformer decoder when run autoregressively.[62]
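The sketch below contrasts the causal mask with the prefix mask described above. The prefix length and sequence length are arbitrary illustrative values.

import numpy as np

def causal_mask(n):
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

def prefix_lm_mask(n, prefix_len):
    # tokens inside the prefix attend to each other freely (all-to-all),
    # while tokens after the prefix are still generated causally
    m = causal_mask(n)
    m[:, :prefix_len] = 0.0   # every token may attend to the whole prefix
    return m

print(prefix_lm_mask(5, prefix_len=2))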
Subsequent work

Alternative activation functions

The original transformer uses the ReLU activation function. Other activation functions were developed later. The Llama series and PaLM used SwiGLU;[63] both GPT-1 and BERT[35] used GELU.[64] Alternative activation functions are often used in combination with Gated Linear Units in the feedforward module.[63]

Alternative normalizations

The normalization used in the Transformer can be different from LayerNorm. One example is RMSNorm,[65] which is used in the Llama series. Other examples include CapsuleNorm,[66] ScaleNorm,[67] and FixNorm.[67]
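As an illustration of such a replacement, the sketch below implements RMSNorm next to standard LayerNorm. The learned gain vector that normally multiplies the result is omitted, and the epsilon value is an arbitrary choice.

import numpy as np

def layer_norm(x, eps=1e-5):
    # LayerNorm: subtract the mean, then divide by the standard deviation of each vector
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def rms_norm(x, eps=1e-5):
    # RMSNorm: rescale by the root mean square only, with no mean subtraction
    rms = np.sqrt((x ** 2).mean(axis=-1, keepdims=True) + eps)
    return x / rms

x = np.array([[1.0, 2.0, 3.0, 4.0]])
print(layer_norm(x))
print(rms_norm(x))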
Alternative positional encodings

Transformers may use positional encoding methods other than the sinusoidal one.[68] The original Transformer paper reported using a learned positional encoding,[69] but found it not superior to the sinusoidal one.[1] Later work[70] found that causal masking itself provides enough signal for a Transformer decoder to learn to implicitly perform absolute positional encoding without a positional encoding module.

RoPE

RoPE (rotary positional embedding)[71] is best explained by considering a list of 2-dimensional vectors. Pick some angle θ. Then the RoPE encoding of the vector (x_m, y_m) at position m is a rotation by the angle mθ:

RoPE(x_m, y_m, m) = (x_m cos mθ - y_m sin mθ, x_m sin mθ + y_m cos mθ).

Equivalently, if we write the 2-dimensional vectors as complex numbers z_m = x_m + i y_m, then the RoPE encoding is just multiplication by an angle: RoPE(z_m, m) = e^{i m θ} z_m. For a list of d-dimensional vectors, a RoPE encoder is defined by a sequence of angles θ^(1), ..., θ^(d/2), and the RoPE encoding is applied to each pair of coordinates. The benefit of RoPE is that the dot product between two RoPE-encoded vectors depends only on their relative location: shifting both positions by the same integer leaves the dot product unchanged.
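A minimal sketch of RoPE applied to a sequence of vectors follows, together with a numerical check of the relative-position property. The base of 10000 for the angle schedule is the common convention of the RoPE paper; the array names and sizes are illustrative.

import numpy as np

def rope(x, base=10000.0):
    # x: (seq_len, d) with d even; rotate each coordinate pair of the vector at
    # position m by the angle m * theta_j, one angle theta_j per pair
    seq_len, d = x.shape
    theta = base ** (-np.arange(0, d, 2) / d)             # (d/2,) angles, one per pair
    angles = np.outer(np.arange(seq_len), theta)          # (seq_len, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = np.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin                    # 2-D rotation of each pair
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

rng = np.random.default_rng(0)
q, k = rng.standard_normal((8, 8)), rng.standard_normal((8, 8))
# relative-position property: the dot product depends only on the offset between positions
a = rope(q)[2] @ rope(k)[5]
b = rope(np.roll(q, 1, axis=0))[3] @ rope(np.roll(k, 1, axis=0))[6]
print(np.allclose(a, b))   # True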
ALiBi

ALiBi (Attention with Linear Biases)[72] is not a replacement for the positional encoder of the original transformer. Instead, it is an additional positional encoder that is directly plugged into the attention mechanism. Specifically, the ALiBi attention mechanism is

Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k) + s B) V,

where s is a real number (a "scalar") and B is the linear bias matrix with entries B_ij = j - i. The idea is that the linear bias matrix is a softened mask: just as an entry of 0 represents full attention paid and an entry of -∞ represents no attention paid, the linear bias matrix increases the attention paid in one direction and decreases the attention paid in the other direction.

ALiBi allows pretraining on short context windows, then fine-tuning on longer context windows. Since it is directly plugged into the attention mechanism, it can be combined with any positional encoder that is plugged into the "bottom" of the entire network (which is where the sinusoidal encoder of the original transformer, as well as RoPE and many others, are located).
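The sketch below builds such a linear bias matrix and adds it to causal attention scores. The single slope value is an arbitrary illustrative choice; the ALiBi paper assigns each head its own slope from a fixed geometric sequence, which is not reproduced here.

import numpy as np

def alibi_bias(n, slope):
    # B[i, j] = j - i, scaled by a per-head slope; combined with a causal mask,
    # more distant past positions receive a larger negative bias
    i = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return slope * (j - i)

def causal_mask(n):
    m = np.zeros((n, n))
    m[np.triu_indices(n, k=1)] = -np.inf
    return m

n, d = 5, 8
rng = np.random.default_rng(0)
Q, K = rng.standard_normal((n, d)), rng.standard_normal((n, d))
scores = Q @ K.T / np.sqrt(d) + alibi_bias(n, slope=0.5) + causal_mask(n)
print(np.round(scores, 2))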
Relative Position Encodings

Relative Position Encodings[73] are similar to ALiBi, but more generic: the bias added to the attention scores is an arbitrary Toeplitz matrix B, that is, B_ij = B_kl whenever i - j = k - l. This is contrasted with the original sinusoidal positional encoding, which is an "absolute positional encoding".[74]

Efficient implementation

The transformer model has been implemented in standard deep learning frameworks such as TensorFlow and PyTorch. Transformers is a library produced by Hugging Face that supplies transformer-based architectures and pretrained models.[11]
KV caching

When an autoregressive transformer is used for inference, such as generating text, the query vector is different at each step, but the already-computed key and value vectors are always the same. The KV caching method saves the computed key and value vectors at each attention block, so that they are not recomputed at each new token. PagedAttention applies memory paging to KV caching.[75][76][77]

If a transformer is used with a baked-in prompt, such as "You are a customer support agent...", then the key and value vectors can be computed for the prompt and saved on disk. The saving in compute is significant when the model is used for many short interactions, such as in online chatbots.
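A minimal sketch of a KV cache for a single attention head follows. The identity "projections" and the decode loop are illustrative placeholders rather than any particular library's API.

import numpy as np

class KVCache:
    # stores the keys and values of already-processed tokens for one attention head
    def __init__(self):
        self.keys, self.values = [], []

    def step(self, q, k, v):
        # append only the new token's key and value; reuse everything cached so far
        self.keys.append(k)
        self.values.append(v)
        K = np.stack(self.keys)              # (t, d)
        V = np.stack(self.values)            # (t, d)
        scores = K @ q / np.sqrt(len(q))     # attention of the new query over all past tokens
        w = np.exp(scores - scores.max())
        w = w / w.sum()
        return w @ V                         # attention output for the newest position

rng = np.random.default_rng(0)
cache = KVCache()
for _ in range(4):                           # pretend to decode 4 tokens one by one
    x = rng.standard_normal(8)               # hidden vector of the new token
    out = cache.step(q=x, k=x, v=x)          # identity projections, for brevity
print(out.shape)                             # (8,)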
FlashAttention

FlashAttention[78] is an algorithm that implements the transformer attention mechanism efficiently on a GPU. It is a communication-avoiding algorithm that performs matrix multiplications in blocks, such that each block fits within the cache of a GPU, and by careful management of the blocks it minimizes data copying between GPU caches (as data movement is slow). See the page on softmax for details.

An improved version, FlashAttention-2,[79][80][81] was developed to cater to the rising demand for language models capable of handling longer context lengths. It offers enhancements in work partitioning and parallelism, enabling it to achieve up to 230 TFLOPs/s on A100 GPUs (FP16/BF16), a 2x speed increase over the original FlashAttention. Key advancements in FlashAttention-2 include the reduction of non-matmul FLOPs, improved parallelism over the sequence-length dimension, better work partitioning between GPU warps, and added support for head dimensions up to 256 and for multi-query attention (MQA) and grouped-query attention (GQA).[82] Benchmarks revealed FlashAttention-2 to be up to 2x faster than FlashAttention and up to 9x faster than a standard attention implementation in PyTorch. Future developments include optimization for new hardware like H100 GPUs and new data types like FP8.
Multi-Query Attention

Multi-Query Attention changes the multiheaded attention mechanism.[83] Whereas normally each attention head has its own key and value projection matrices, with Multi-Query Attention there is just one key projection and one value projection shared by all heads:

MultiQueryAttention(X) = Concat_i( Attention(X W_i^Q, X W^K, X W^V) ) W^O.

This has a neutral effect on model quality and training speed, but increases inference speed. More generally, grouped-query attention (GQA) partitions the attention heads into groups, each of which shares one pair of key and value projections. MQA is GQA with one group, while standard multiheaded attention is GQA with the maximal number of groups.[84]

Multihead Latent Attention (MLA) is a low-rank approximation to standard MHA. Specifically, each hidden vector, before entering the attention mechanism, is first projected to two low-dimensional spaces ("latent space"), one for the query and one for the key-value (KV) vector. This design minimizes the KV cache, as only the low-dimensional KV vector needs to be cached.[85]
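A sketch of grouped-query attention follows, with multi-query attention and standard multi-head attention as the two extreme settings of the number of key-value heads. The sizes are illustrative.

import numpy as np

def softmax(x):
    x = x - x.max(axis=-1, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=-1, keepdims=True)

def grouped_query_attention(X, Wq, Wk, Wv, Wo, n_q_heads, n_kv_heads):
    # n_kv_heads = 1 gives multi-query attention; n_kv_heads = n_q_heads gives standard MHA
    seq, d = X.shape
    d_head = Wq.shape[1] // n_q_heads
    Q = (X @ Wq).reshape(seq, n_q_heads, d_head).transpose(1, 0, 2)
    K = (X @ Wk).reshape(seq, n_kv_heads, d_head).transpose(1, 0, 2)
    V = (X @ Wv).reshape(seq, n_kv_heads, d_head).transpose(1, 0, 2)
    group = n_q_heads // n_kv_heads
    K, V = np.repeat(K, group, axis=0), np.repeat(V, group, axis=0)  # each KV head serves `group` query heads
    heads = softmax(Q @ K.transpose(0, 2, 1) / np.sqrt(d_head)) @ V
    return heads.transpose(1, 0, 2).reshape(seq, -1) @ Wo

rng = np.random.default_rng(0)
d, n_q, n_kv, d_head = 64, 8, 2, 8
X = rng.standard_normal((5, d))
Wq = rng.standard_normal((d, n_q * d_head))
Wk, Wv = (rng.standard_normal((d, n_kv * d_head)) for _ in range(2))
Wo = rng.standard_normal((n_q * d_head, d))
print(grouped_query_attention(X, Wq, Wk, Wv, Wo, n_q, n_kv).shape)   # (5, 64)

Only the n_kv_heads sets of keys and values have to be cached at inference time, which is where the speed gain over standard multi-head attention comes from.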
Speculative decoding

Speculative decoding[86][87] is a method to accelerate token decoding. Similarly to speculative execution in CPUs, future tokens are computed quickly, then verified. If the quickly computed tokens are incorrect, they are discarded and computed slowly. The key factor in speculative decoding is that a Transformer decoder can verify faster than it can decode, in the following sense.

Suppose we have two transformer models, a large one such as GPT-3 and a small one such as GPT-3-small, both with a context window size of 512, and suppose one forward pass of the large model takes time t_large while one pass of the small model takes time t_small. To generate an entire context window autoregressively with greedy decoding with GPT-3, it must be run 512 times, each time generating a token, for a total time of 512 t_large. However, if we had some educated guess for the values of these tokens, we could verify all of them in parallel, in one run of the model, by checking that each guessed token is indeed the token with the largest log-likelihood at the corresponding output position.

In speculative decoding, a smaller model or some other simple heuristic is used to generate a few speculative tokens that are subsequently verified by the larger model. For example, suppose GPT-3-small generates four speculative tokens; this only takes 4 t_small. These tokens are then run through the larger GPT-3 in one go. Suppose the first two speculative tokens are verified by GPT-3 as what it would have picked, so they are kept, but the third is not; then the third and fourth are discarded, and the token GPT-3 itself predicts at the third position (available from the same verification run) is used instead. Generating these three tokens took 4 t_small + t_large, which might be shorter than 3 t_large. For non-greedy decoding, similar ideas apply, except the speculative tokens are accepted or rejected stochastically, in a way that guarantees the final output distribution is the same as if speculative decoding were not used.[86][88]
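A toy sketch of the greedy accept/reject rule follows. The two "models" are stand-in functions over a ten-token vocabulary, and only the verification logic is modeled, not the timing.

import numpy as np

def greedy_speculative_step(prefix, draft_model, target_model, n_draft=4):
    # draft n_draft tokens with the small model, then verify them with the large model;
    # keep the longest agreeing prefix plus the large model's own token at the first mismatch
    draft = list(prefix)
    for _ in range(n_draft):
        draft.append(int(np.argmax(draft_model(draft))))
    proposed = draft[len(prefix):]
    # in a real system this is a single batched pass of the target model; sequential here for clarity
    targets = [int(np.argmax(target_model(draft[:len(prefix) + i]))) for i in range(n_draft)]
    accepted = []
    for prop, tgt in zip(proposed, targets):
        if prop == tgt:
            accepted.append(prop)     # verified: the large model would have picked the same token
        else:
            accepted.append(tgt)      # rejected: fall back to the large model's token and stop
            break
    return prefix + accepted

# stand-in "models" returning next-token scores over a vocabulary of 10 tokens
def draft_model(seq):  return np.roll(np.eye(10)[seq[-1]], 1)                       # always guesses token + 1
def target_model(seq): return np.roll(np.eye(10)[seq[-1]], 1 if len(seq) < 4 else 2)

print(greedy_speculative_step([0], draft_model, target_model))   # [0, 1, 2, 3, 5]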
In Multi-Token Prediction, a single forward pass creates a final embedding vector, which is then un-embedded into a token probability. However, that vector can then be further processed by another Transformer block to predict the next token, and so on for arbitrarily many steps into the future. This trades off accuracy for speed, since each new token costs just one more Transformer block, rather than the entire stack.[89][90]

Sub-quadratic transformers

Training transformer-based architectures can be expensive, especially for long inputs.[91] Many methods have been developed to attempt to address the issue. In the image domain, Swin Transformer is an efficient architecture that performs attention inside shifting windows.[92] In the audio domain, SepTr decouples the attention in time and frequency domains.[93] Long Range Arena (2020)[94] is a standard benchmark for comparing the behavior of transformer architectures over long inputs.

Alternative attention graphs

The standard attention graph is either all-to-all or causal, both of which scale as O(N^2), where N is the number of tokens in a sequence. Reformer (2020)[91][95] reduces the computational load from O(N^2) to O(N ln N) by using locality-sensitive hashing and reversible layers.[96] Sparse attention[97] uses attention graphs that grow more slowly than O(N^2); for example, BigBird (2020)[98] uses random small-world networks whose size grows as O(N). Ordinary transformers require a memory size that is quadratic in the size of the context window. Attention-free transformers[99] reduce this to a linear dependence while still retaining the advantages of a transformer by linking the key to the value.
Random Feature Attention

Random Feature Attention (2021)[100] uses Fourier random features: each query or key vector x is mapped to a feature vector φ(x) built from cosines and sines of random projections ⟨w_1, x⟩, ..., ⟨w_D, x⟩, where w_1, ..., w_D are independent samples from a normal distribution. This choice of parameters makes the inner product ⟨φ(x), φ(y)⟩ an unbiased estimate of a Gaussian kernel in x - y, so the exponential dot products that appear in softmax attention can be approximated by ⟨φ(x), φ(y)⟩ up to known scaling factors. Consequently, the one-headed attention with one query can be written as a ratio of two sums over the keys in which the query enters only through φ(q); similarly for multiple queries, and for multiheaded attention. This approximation can be computed in linear time, as the matrix built from the features of the keys and the values can be computed first and then multiplied with the query. In essence, this yields a linear-time approximation of softmax attention. Performer (2022)[101] uses the same Random Feature Attention, but the random samples w_1, ..., w_D are first independently drawn from the normal distribution and then Gram-Schmidt processed.
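The sketch below illustrates the underlying random-feature idea in its simplest form: random Fourier features give an unbiased estimate of a Gaussian kernel, which Random Feature Attention and Performer then rescale to approximate the exponential dot products of softmax attention. The feature count and dimensions are arbitrary illustrative choices.

import numpy as np

def random_fourier_features(x, W):
    # phi(x) = sqrt(1/D) * [cos(w_1 . x), ..., cos(w_D . x), sin(w_1 . x), ..., sin(w_D . x)]
    proj = W @ x
    return np.concatenate([np.cos(proj), np.sin(proj)]) / np.sqrt(W.shape[0])

rng = np.random.default_rng(0)
d, D = 8, 20000                          # vector dimension, number of random features
W = rng.standard_normal((D, d))          # rows w_i drawn independently from N(0, I)
x = rng.standard_normal(d)
y = x + 0.2 * rng.standard_normal(d)     # a nearby vector, so the kernel value is not tiny

approx = random_fourier_features(x, W) @ random_fourier_features(y, W)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)   # Gaussian kernel value
print(approx, exact)                     # the two numbers should be close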
Multimodality

Transformers can also be used or adapted for modalities (input or output) beyond just text, usually by finding a way to "tokenize" the modality. Multimodal models can either be trained from scratch or by finetuning. A 2022 study found that Transformers pretrained only on natural language can be finetuned on only 0.03% of their parameters and become competitive with LSTMs on a variety of logical and visual tasks, demonstrating transfer learning.[102] LLaVA is a vision-language model composed of a language model (Vicuna-13B)[103] and a vision model (ViT-L/14), connected by a linear layer; only the linear layer is finetuned.[104]

Vision transformers[41] adapt the transformer to computer vision by breaking down input images into a series of patches, turning them into vectors, and treating them like tokens in a standard transformer. Conformer[42] and later Whisper[105] follow the same pattern for speech recognition, first turning the speech signal into a spectrogram, which is then treated like an image, i.e. broken down into a series of patches, turned into vectors, and treated like tokens in a standard transformer.
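A minimal sketch of the patch-to-token step used by vision transformers follows. The image size, patch size, and embedding dimension are illustrative, and the learned projection is just a random matrix here.

import numpy as np

def patchify(image, patch=16):
    # split an (H, W, C) image into non-overlapping patches, each flattened into one vector
    H, W, C = image.shape
    rows, cols = H // patch, W // patch
    return (image[:rows * patch, :cols * patch]
            .reshape(rows, patch, cols, patch, C)
            .transpose(0, 2, 1, 3, 4)
            .reshape(rows * cols, patch * patch * C))

rng = np.random.default_rng(0)
image = rng.standard_normal((224, 224, 3))
tokens = patchify(image)                     # (196, 768): 14 x 14 patches of 16 x 16 x 3 values
W_embed = 0.02 * rng.standard_normal((768, 768))
embeddings = tokens @ W_embed                # patch embeddings, fed to a standard transformer
print(tokens.shape, embeddings.shape)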
Perceivers[106][107] are a variant of Transformers designed for multimodality.

For image generation, notable architectures are DALL-E 1 (2021), Parti (2022),[108] Phenaki (2023),[109] and Muse (2023).[110] Unlike later models, DALL-E is not a diffusion model. Instead, it uses a decoder-only Transformer that autoregressively generates a text, followed by the token representation of an image, which is then converted by a variational autoencoder to an image.[111] Parti is an encoder-decoder Transformer, where the encoder processes a text prompt, and the decoder generates a token representation of an image.[112] Muse is an encoder-only Transformer that is trained to predict masked image tokens from unmasked image tokens. During generation, all input tokens are masked, and the highest-confidence predictions are included for the next iteration, until all tokens are predicted.[110] Phenaki is a text-to-video model. It is a bidirectional masked transformer conditioned on pre-computed text tokens. The generated tokens are then decoded to a video.[109]
Applications

The transformer has had great success in natural language processing (NLP). Many large language models such as GPT-2, GPT-3, GPT-4, Gemini, AlbertAGPT, Claude, BERT, Grok, XLNet, RoBERTa and ChatGPT demonstrate the ability of transformers to perform a wide variety of NLP-related subtasks and their related real-world applications, including:
- machine translation
- time series prediction
- document summarization
- document generation
- named entity recognition (NER)[113]
- writing computer code based on requirements expressed in natural language
- speech-to-text

Beyond traditional NLP, the transformer architecture has had success in other applications, such as:
- biological sequence analysis
- video understanding
- protein folding (such as AlphaFold)
- evaluating chess board positions. Using static evaluation alone (that is, with no Minimax search), a transformer achieved an Elo of 2895, putting it at grandmaster level.[10]

See also

- seq2seq – Family of machine learning approaches
- Perceiver – Variant of Transformer designed for multimodal data
- Vision transformer – Machine learning model for vision processing
- Large language model – Type of machine learning model
- BERT (language model) – Series of language models developed by Google AI
- Generative pre-trained transformer – Type of large language model
- T5 (language model) – Series of large language models developed by Google AI

Notes

- ^ Gated recurrent units (2014) further reduced its complexity.
- ^ Some architectures, such as RWKV or state space models, avoid the issue.
References

- ^ a b c d e f g h i j k l Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
- ^ Hochreiter, Sepp; Schmidhuber, Jürgen (1 November 1997). "Long Short-Term Memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. ISSN 0899-7667. PMID 9377276. S2CID 1915014.
- ^ a b "Better Language Models and Their Implications". OpenAI. 2019-02-14. Archived from the original on 2020-12-19. Retrieved 2019-08-25.
- ^ a b Bahdanau; Cho, Kyunghyun; Bengio, Yoshua (September 1, 2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
- ^ Luong, Minh-Thang; Pham, Hieu; Manning, Christopher D. (August 17, 2015). "Effective Approaches to Attention-based Neural Machine Translation". arXiv:1508.04025 [cs.CL].
- ^ a b Chen, Lili; Lu, Kevin; Rajeswaran, Aravind; Lee, Kimin; Grover, Aditya; Laskin, Michael; Abbeel, Pieter; Srinivas, Aravind; Mordatch, Igor (2021-06-24), Decision Transformer: Reinforcement Learning via Sequence Modeling, arXiv:2106.01345
- ^ Parisotto, Emilio; Song, Francis; Rae, Jack; Pascanu, Razvan; Gulcehre, Caglar; Jayakumar, Siddhant; Jaderberg, Max; Kaufman, Raphaël Lopez; Clark, Aidan; Noury, Seb; Botvinick, Matthew; Heess, Nicolas; Hadsell, Raia (2020-11-21). "Stabilizing Transformers for Reinforcement Learning". Proceedings of the 37th International Conference on Machine Learning. PMLR: 7487–7498.
- ^ Radford, Alec; Jong Wook Kim; Xu, Tao; Brockman, Greg; McLeavey, Christine; Sutskever, Ilya (2022). "Robust Speech Recognition via Large-Scale Weak Supervision". arXiv:2212.04356 [eess.AS].
- ^ Monastirsky, Maxim; Azulay, Osher; Sintov, Avishai (February 2023). "Learning to Throw With a Handful of Samples Using Decision Transformers". IEEE Robotics and Automation Letters. 8 (2): 576–583. doi:10.1109/LRA.2022.3229266. ISSN 2377-3766.
- ^ a b Ruoss, Anian; Delétang, Grégoire; Medapati, Sourabh; Grau-Moya, Jordi; Wenliang, Li; Catt, Elliot; Reid, John; Genewein, Tim (2024-02-07). "Grandmaster-Level Chess Without Search". arXiv:2402.04494v1 [cs.LG].
- ^ a b Wolf, Thomas; Debut, Lysandre; Sanh, Victor; Chaumond, Julien; Delangue, Clement; Moi, Anthony; Cistac, Pierric; Rault, Tim; Louf, Remi; Funtowicz, Morgan; Davison, Joe; Shleifer, Sam; von Platen, Patrick; Ma, Clara; Jernite, Yacine; Plu, Julien; Xu, Canwen; Le Scao, Teven; Gugger, Sylvain; Drame, Mariama; Lhoest, Quentin; Rush, Alexander (2020). "Transformers: State-of-the-Art Natural Language Processing". Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. pp. 38–45. doi:10.18653/v1/2020.emnlp-demos.6. S2CID 208117506.
- ^ a b c "Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing". Google AI Blog. 2 November 2018. Archived from the original on 2021-01-13. Retrieved 2019-08-25.
- ^ Feldman, J. A.; Ballard, D. H. (1982-07-01). "Connectionist models and their properties". Cognitive Science. 6 (3): 205–254. doi:10.1016/S0364-0213(82)80001-3. ISSN 0364-0213.
- ^ Rumelhart, David E.; McClelland, James L.; Hinton, Geoffrey E. (1987-07-29). Parallel Distributed Processing, Volume 1: Explorations in the Microstructure of Cognition: Foundations, Chapter 2 (PDF). Cambridge, Mass: Bradford Books. ISBN 978-0-262-68053-0.
- ^ Giles, C. Lee; Maxwell, Tom (1987-12-01). "Learning, invariance, and generalization in high-order neural networks". Applied Optics. 26 (23): 4972–4978. doi:10.1364/AO.26.004972. ISSN 0003-6935. PMID 20523475.
- ^ a b Schmidhuber, Jürgen (1992). "Learning to control fast-weight memories: an alternative to recurrent nets" (PDF). Neural Computation. 4 (1): 131–139. doi:10.1162/neco.1992.4.1.131. S2CID 16683347.
- ^ Christoph von der Malsburg: The correlation theory of brain function. Internal Report 81-2, MPI Biophysical Chemistry, 1981. http://cogprints.org/1380/1/vdM_correlation.pdf See Reprint in Models of Neural Networks II, chapter 2, pages 95–119. Springer, Berlin, 1994.
- ^ Jerome A. Feldman, "Dynamic connections in neural networks," Biological Cybernetics, vol. 46, no. 1, pp. 27–39, Dec. 1982.
- ^ Hinton, Geoffrey E.; Plaut, David C. (1987). "Using Fast Weights to Deblur Old Memories". Proceedings of the Annual Meeting of the Cognitive Science Society. 9.
- ^ Katharopoulos, Angelos; Vyas, Apoorv; Pappas, Nikolaos; Fleuret, François (2020). "Transformers are RNNs: Fast autoregressive Transformers with linear attention". ICML 2020. PMLR. pp. 5156–5165.
- ^ Schlag, Imanol; Irie, Kazuki; Schmidhuber, Jürgen (2021). "Linear Transformers Are Secretly Fast Weight Programmers". ICML 2021. Springer. pp. 9355–9366.
- ^ a b c Cho, Kyunghyun; van Merriënboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (October 2014). "Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation". In Moschitti, Alessandro; Pang, Bo; Daelemans, Walter (eds.). Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics. pp. 1724–1734. arXiv:1406.1078. doi:10.3115/v1/D14-1179.
- ^ a b c Sutskever, Ilya; Vinyals, Oriol; Le, Quoc Viet (14 Dec 2014). "Sequence to sequence learning with neural networks". arXiv:1409.3215 [cs.CL]. [first version posted to arXiv on 10 Sep 2014]
- ^ Chung, Junyoung; Gulcehre, Caglar; Cho, KyungHyun; Bengio, Yoshua (2014). "Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling". arXiv:1412.3555 [cs.NE].
- ^ Gruber, N.; Jockisch, A. (2020), "Are GRU cells more specific and LSTM cells more sensitive in motive classification of text?", Frontiers in Artificial Intelligence, 3: 40, doi:10.3389/frai.2020.00040, PMC 7861254, PMID 33733157, S2CID 220252321
- ^ Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V (2014). "Sequence to Sequence Learning with Neural Networks". Advances in Neural Information Processing Systems. 27. Curran Associates, Inc. arXiv:1409.3215.
- ^ Luong, Minh-Thang; Pham, Hieu; Manning, Christopher D. (2015). "Effective Approaches to Attention-based Neural Machine Translation". arXiv:1508.04025 [cs.CL].
- ^ Wu, Yonghui; et al. (2016-09-01). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
- ^ Lewis-Kraus, Gideon (2016-12-14). "The Great A.I. Awakening". The New York Times. ISSN 0362-4331. Archived from the original on 24 May 2023. Retrieved 2023-06-22.
- ^ Parikh, Ankur P.; Täckström, Oscar; Das, Dipanjan; Uszkoreit, Jakob (2016-09-25). "A Decomposable Attention Model for Natural Language Inference". arXiv:1606.01933 [cs.CL].
- ^ a b Levy, Steven. "8 Google Employees Invented Modern AI. Here's the Inside Story". Wired. ISSN 1059-1028. Archived from the original on 20 Mar 2024. Retrieved 2024-08-06.
- ^ Cheng, Jianpeng; Dong, Li; Lapata, Mirella (November 2016). "Long Short-Term Memory-Networks for Machine Reading". In Su, Jian; Duh, Kevin; Carreras, Xavier (eds.). Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. Austin, Texas: Association for Computational Linguistics. pp. 551–561. doi:10.18653/v1/D16-1053.
- ^ Peng, Bo; Alcaide, Eric; Anthony, Quentin; Albalak, Alon; Arcadinho, Samuel; Biderman, Stella; Cao, Huanqi; Cheng, Xin; Chung, Michael (2023-12-10), RWKV: Reinventing RNNs for the Transformer Era, arXiv:2305.13048
- ^ Marche, Stephen (2024-08-23). "Was Linguistic A.I. Created by Accident?". The New Yorker. ISSN 0028-792X. Retrieved 2024-08-27.
- ^ a b c d e f Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (11 October 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805v2 [cs.CL].
- ^ "Google: BERT now used on almost every English query". Search Engine Land. 2020-10-15. Retrieved 2020-11-24.
- ^ "Recent Advances in Google Translate". research.google. Retrieved 2024-05-08.
- ^ "The inside story of how ChatGPT was built from the people who made it". MIT Technology Review. Retrieved 2024-08-06.
- ^ "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on 2023-03-18. Retrieved 2023-03-18.
- ^ finetune-transformer-lm, OpenAI, June 11, 2018, retrieved 2023-05-01
- ^ a b Dosovitskiy, Alexey; Beyer, Lucas; Kolesnikov, Alexander; Weissenborn, Dirk; Zhai, Xiaohua; Unterthiner, Thomas; Dehghani, Mostafa; Minderer, Matthias; Heigold, Georg; Gelly, Sylvain; Uszkoreit, Jakob (2021-06-03). "An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale". arXiv:2010.11929 [cs.CV].
- ^ a b Gulati, Anmol; Qin, James; Chiu, Chung-Cheng; Parmar, Niki; Zhang, Yu; Yu, Jiahui; Han, Wei; Wang, Shibo; Zhang, Zhengdong; Wu, Yonghui; Pang, Ruoming (2020). "Conformer: Convolution-augmented Transformer for Speech Recognition". arXiv:2005.08100 [eess.AS].
- ^ Choromanski, Krzysztof; Likhosherstov, Valerii; Dohan, David; Song, Xingyou; Gane, Andreea; Sarlos, Tamas; Hawkins, Peter; Davis, Jared; Mohiuddin, Afroz (2022-11-19), Rethinking Attention with Performers, arXiv:2009.14794
- ^ Liu, Zhuang; Mao, Hanzi; Wu, Chao-Yuan; Feichtenhofer, Christoph; Darrell, Trevor; Xie, Saining (2022). A ConvNet for the 2020s. Conference on Computer Vision and Pattern Recognition. pp. 11976–11986.
- ^ Esser, Patrick; Kulal, Sumith; Blattmann, Andreas; Entezari, Rahim; Müller, Jonas; Saini, Harry; Levi, Yam; Lorenz, Dominik; Sauer, Axel (2024-03-05), Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, arXiv:2403.03206
- ^ a b Xiong, Ruibin; Yang, Yunchang; He, Di; Zheng, Kai; Zheng, Shuxin; Xing, Chen; Zhang, Huishuai; Lan, Yanyan; Wang, Liwei; Liu, Tie-Yan (2020-06-29). "On Layer Normalization in the Transformer Architecture". arXiv:2002.04745 [cs.LG].
- ^ Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020-01-01). "Exploring the limits of transfer learning with a unified text-to-text transformer". The Journal of Machine Learning Research. 21 (1): 140:5485–140:5551. arXiv:1910.10683. ISSN 1532-4435.
- ^ Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2019). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". arXiv:1910.10683 [cs.LG].
- ^ a b "Masked language modeling". huggingface.co. Retrieved 2023-10-05.
- ^ a b "Causal language modeling". huggingface.co. Retrieved 2023-10-05.
- ^ a b c d Tay, Yi; Dehghani, Mostafa; Tran, Vinh Q.; Garcia, Xavier; Wei, Jason; Wang, Xuezhi; Chung, Hyung Won; Shakeri, Siamak; Bahri, Dara (2023-02-28), UL2: Unifying Language Learning Paradigms, arXiv:2205.05131
- ^ Press, Ofir; Wolf, Lior (2017-02-21), Using the Output Embedding to Improve Language Models, arXiv:1608.05859
- ^ Lintz, Nathan (2016-04-18). "Sequence Modeling with Neural Networks (Part 2): Attention Models". Indico. Archived from the original on 2020-10-21. Retrieved 2019-10-15.
- ^ a b c Alammar, Jay. "The Illustrated Transformer". jalammar.github.io. Archived from the original on 2020-10-18. Retrieved 2019-10-15.
- ^ Team, Keras. "Keras documentation: GPT2Backbone model". keras.io. Retrieved 2024-08-08.
- ^ Clark, Kevin; Khandelwal, Urvashi; Levy, Omer; Manning, Christopher D. (August 2019). "What Does BERT Look at? An Analysis of BERT's Attention". Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Florence, Italy: Association for Computational Linguistics: 276–286. arXiv:1906.04341. doi:10.18653/v1/W19-4828. Archived from the original on 2020-10-21. Retrieved 2020-05-20.
- ^ Yang, Zhilin; Dai, Zihang; Yang, Yiming; Carbonell, Jaime; Salakhutdinov, Russ R; Le, Quoc V (2019). "XLNet: Generalized Autoregressive Pretraining for Language Understanding". Advances in Neural Information Processing Systems. 32. Curran Associates, Inc. arXiv:1906.08237.
- ^ Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
- ^ Wang, Qiang; Li, Bei; Xiao, Tong; Zhu, Jingbo; Li, Changliang; Wong, Derek F.; Chao, Lidia S. (2019-06-04), Learning Deep Transformer Models for Machine Translation, arXiv:1906.01787
- ^ Phuong, Mary; Hutter, Marcus (2022-07-19), Formal Algorithms for Transformers, arXiv:2207.09238
- ^ a b c Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". Journal of Machine Learning Research. 21 (140): 1–67. arXiv:1910.10683. ISSN 1533-7928.
- ^ "Recent Advances in Google Translate". Google Research. June 8, 2020. Archived from the original on 4 Jul 2024. Retrieved 2024-08-07.
- ^ a b Shazeer, Noam (2020-02-01). "GLU Variants Improve Transformer". arXiv:2002.05202 [cs.LG].
- ^ Hendrycks, Dan; Gimpel, Kevin (2016-06-27). "Gaussian Error Linear Units (GELUs)". arXiv:1606.08415v5 [cs.LG].
- ^ Zhang, Biao; Sennrich, Rico (2019). "Root Mean Square Layer Normalization". Advances in Neural Information Processing Systems. 32. Curran Associates, Inc. arXiv:1910.07467.
- ^ Tembine, Hamidou; Khan, Manzoor Ahmed; Bamia, Issa (2024). "Mean-Field-Type Transformers". Mathematics. 12 (22): 3506. https://doi.org/10.3390/math12223506
- ^ a b Nguyen, Toan Q.; Salazar, Julian (2019-11-02). "Transformers without Tears: Improving the Normalization of Self-Attention". In Niehues, Jan; Cattoni, Rolando; Stüker, Sebastian; Negri, Matteo; Turchi, Marco; Ha, Thanh-Le; Salesky, Elizabeth; Sanabria, Ramon; Barrault, Loic (eds.). Proceedings of the 16th International Conference on Spoken Language Translation. Hong Kong: Association for Computational Linguistics. arXiv:1910.05895. doi:10.5281/zenodo.3525484.
- ^ Dufter, Philipp; Schmitt, Martin; Schütze, Hinrich (2022-06-06). "Position Information in Transformers: An Overview". Computational Linguistics. 48 (3): 733–763. arXiv:2102.11090. doi:10.1162/coli_a_00445. ISSN 0891-2017. S2CID 231986066.
- ^ Gehring, Jonas; Auli, Michael; Grangier, David; Yarats, Denis; Dauphin, Yann N. (2017-07-17). "Convolutional Sequence to Sequence Learning". Proceedings of the 34th International Conference on Machine Learning. PMLR: 1243–1252.
- ^ Haviv, Adi; Ram, Ori; Press, Ofir; Izsak, Peter; Levy, Omer (2022-12-05), Transformer Language Models without Positional Encodings Still Learn Positional Information, arXiv:2203.16634
- ^ Su, Jianlin; Lu, Yu; Pan, Shengfeng; Murtadha, Ahmed; Wen, Bo; Liu, Yunfeng (2021-04-01). "RoFormer: Enhanced Transformer with Rotary Position Embedding". arXiv:2104.09864 [cs.CL].
- ^ Press, Ofir; Smith, Noah A.; Lewis, Mike (2021-08-01). "Train Short, Test Long: Attention with Linear Biases Enables Input Length Extrapolation". arXiv:2108.12409 [cs.CL].
- ^ Shaw, Peter; Uszkoreit, Jakob; Vaswani, Ashish (2018). "Self-Attention with Relative Position Representations". arXiv:1803.02155 [cs.CL].
- ^ Ke, Guolin; He, Di; Liu, Tie-Yan (2021-03-15), Rethinking Positional Encoding in Language Pre-training, arXiv:2006.15595
- ^ Kwon, Woosuk; Li, Zhuohan; Zhuang, Siyuan; Sheng, Ying; Zheng, Lianmin; Yu, Cody Hao; Gonzalez, Joseph; Zhang, Hao; Stoica, Ion (2023-10-23). "Efficient Memory Management for Large Language Model Serving with PagedAttention". Proceedings of the 29th Symposium on Operating Systems Principles. SOSP '23. New York, NY, USA: Association for Computing Machinery. pp. 611–626. arXiv:2309.06180. doi:10.1145/3600006.3613165. ISBN 979-8-4007-0229-7.
- ^ vllm-project/vllm, vLLM, 2024-06-20, retrieved 2024-06-20
- ^ Kwon, Woosuk; Li, Zhuohan; Zhuang, Siyuan; Sheng, Ying; Zheng, Lianmin; Yu, Cody; Gonzalez, Joey; Zhang, Hao; Stoica, Ion (2023-06-20). "vLLM: Easy, Fast, and Cheap LLM Serving with PagedAttention". vLLM Blog. Retrieved 2024-06-20.
- ^ Dao, Tri; Fu, Dan; Ermon, Stefano; Rudra, Atri; Ré, Christopher (2022-12-06). "FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness". Advances in Neural Information Processing Systems. 35: 16344–16359. arXiv:2205.14135.
- ^ "Stanford CRFM". crfm.stanford.edu. Retrieved 2023-07-18.
- ^ "FlashAttention-2: Faster Attention with Better Parallelism and Work Partitioning". Princeton NLP. 2023-06-17. Retrieved 2023-07-18.
- ^ "Introducing Together AI Chief Scientist Tri Dao, as he releases FlashAttention-2 to speed up model training and inference". TOGETHER. Retrieved 2023-07-18.
- ^ Ainslie, Joshua; Lee-Thorp, James; de Jong, Michiel; Zemlyanskiy, Yury; Lebrón, Federico; Sanghai, Sumit (2023-12-23). "GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints". arXiv:2305.13245 [cs.CL].
- ^ Chowdhery, Aakanksha; Narang, Sharan; Devlin, Jacob; Bosma, Maarten; Mishra, Gaurav; Roberts, Adam; Barham, Paul; Chung, Hyung Won; Sutton, Charles; Gehrmann, Sebastian; Schuh, Parker; Shi, Kensen; Tsvyashchenko, Sasha; Maynez, Joshua; Rao, Abhishek (2022-04-01). "PaLM: Scaling Language Modeling with Pathways". arXiv:2204.02311 [cs.CL].
- ^ Ainslie, Joshua; Lee-Thorp, James; de Jong, Michiel; Zemlyanskiy, Yury; Lebrón, Federico; Sanghai, Sumit (2023-12-23), GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints, arXiv:2305.13245
- ^ a b DeepSeek-AI; Liu, Aixin; Feng, Bei; Wang, Bin; Wang, Bingxuan; Liu, Bo; Zhao, Chenggang; Dengr, Chengqi; Ruan, Chong (19 June 2024), DeepSeek-V2: A Strong, Economical, and Efficient Mixture-of-Experts Language Model, arXiv:2405.04434
- ^ a b Leviathan, Yaniv; Kalman, Matan; Matias, Yossi (2023-05-18), Fast Inference from Transformers via Speculative Decoding, arXiv:2211.17192
- ^ Fu, Yao (2023-12-13). "Towards 100x Speedup: Full Stack Transformer Inference Optimization".
- ^ Chen, Charlie; Borgeaud, Sebastian; Irving, Geoffrey; Lespiau, Jean-Baptiste; Sifre, Laurent; Jumper, John (2023-02-02), Accelerating Large Language Model Decoding with Speculative Sampling, arXiv:2302.01318
- ^ Gloeckle, Fabian; Badr Youbi Idrissi; Rozière, Baptiste; Lopez-Paz, David; Synnaeve, Gabriel (2024). "Better & Faster Large L