https://en.wikipedia.org/wiki/Llama_%28language_model%29#0
Llama (language model) Llama (Large Language Model Meta AI, formerly stylized as LLaMA) is a family of large language models (LLMs) released by Meta AI starting in February 2023.[2] The latest version is Llama 4, released in April 2025.[3] Llama models come in different sizes, ranging from 1 billion to 2 trillion parameters. The first release was a foundation model only;[4] starting with Llama 2, Meta AI has released instruction fine-tuned versions alongside the foundation models.[5]
https://en.wikipedia.org/wiki/Llama_%28language_model%29#1
Model weights for the first version of Llama were only available to researchers on a case-by-case basis, under a non-commercial license.[6][7] Unauthorized copies of the first model were shared via BitTorrent.[8] Subsequent versions of Llama were made accessible outside academia and released under licenses that permitted some commercial use.[9][5] Alongside the release of Llama 3, Meta added virtual assistant features to Facebook and WhatsApp in select regions, as well as a standalone website.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#2
Both services use a Llama 3 model.[10] Background: After the release of large language models such as GPT-3, a focus of research was up-scaling models, which in some instances showed major increases in emergent capabilities.[11] The release of ChatGPT and its surprise success increased attention to large language models.[12] In contrast with other responses to ChatGPT, Meta's Chief AI scientist Yann LeCun stated that large
https://en.wikipedia.org/wiki/Llama_%28language_model%29#3
language models are best for aiding with writing.[13][14][15][16] The Llama series also served as an empirical investigation of scaling laws: the Llama 3 models showed that when a model is trained on more data than the "Chinchilla-optimal" amount, performance continues to scale log-linearly. For example, the Chinchilla-optimal dataset for Llama 3 8B is 200 billion tokens, but performance continued to scale log-linearly
https://en.wikipedia.org/wiki/Llama_%28language_model%29#4
to the 75-times larger dataset of 15 trillion tokens.[17] Initial release: LLaMA was announced on February 24, 2023, via a blog post and a paper describing the model's training, architecture, and performance.[18][7] The inference code used to run the model was publicly released under the open-source GPLv3 license.[19] Access to the model's weights was managed by an application process, with access to be granted "on a case-by-case basis to ac
https://en.wikipedia.org/wiki/Llama_%28language_model%29#5
ccess to be granted "on a case-by-case basis to academic researchers; those affiliated with organizations in government, civil society, and academia; and industry research laboratories around the world".[7] Llama was trained on only publicly available information, and was trained at various model sizes, with the intention to make it more accessible to different hardware. The model was exclusively a foundation model,[4] although the paper contained examples of instruction fine-tuned versions of t
https://en.wikipedia.org/wiki/Llama_%28language_model%29#6
d examples of instruction fine-tuned versions of the model.[18] Meta AI reported that the 13B parameter model's performance on most NLP benchmarks exceeded that of the much larger GPT-3 (with 175B parameters), and that the largest 65B model was competitive with state-of-the-art models such as PaLM and Chinchilla.[18] Leak: On March 3, 2023, a torrent containing LLaMA's weights was uploaded, with a link to the torrent shared on the 4chan imageboard and subsequently spread through online AI communities.[
https://en.wikipedia.org/wiki/Llama_%28language_model%29#7
ubsequently spread through online AI communities.[20] That same day, a pull request on the main LLaMA repository was opened, requesting to add the magnet link to the official documentation.[21][22] On March 4, a pull request was opened to add links to HuggingFace repositories containing the model.[23][21] On March 6, Meta filed takedown requests to remove the HuggingFace repositories linked in the pull request, characterizing it as "unauthorized distribution" of the model. HuggingFace complied w
https://en.wikipedia.org/wiki/Llama_%28language_model%29#8
distribution" of the model. HuggingFace complied with the requests.[24] On March 20, Meta filed a DMCA takedown request for copyright infringement against a repository containing a script that downloaded LLaMA from a mirror, and GitHub complied the next day.[25] Reactions to the leak varied. Some speculated that the model would be used for malicious purposes, such as more sophisticated spam. Some have celebrated the model's accessibility, as well as the fact that smaller versions of the model ca
https://en.wikipedia.org/wiki/Llama_%28language_model%29#9
as the fact that smaller versions of the model can be run relatively cheaply, suggesting that this will promote the flourishing of additional research developments.[20] Multiple commentators, such as Simon Willison, compared LLaMA to Stable Diffusion, a text-to-image model which, unlike comparably sophisticated models that preceded it, was openly distributed, leading to a rapid proliferation of associated tools, techniques, and software.[20][26] Llama 2: On July 18, 2023, in partnership with Microsoft, Meta announced Llama 2, the next generation of Llama.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#10
Meta trained and released Llama 2 in three model sizes: 7, 13, and 70 billion parameters.[5] The model architecture remains largely unchanged from that of LLaMA-1 models, but 40% more data was used to train the foundational models.[27] The accompanying preprint[27] also mentions a model with 34B parameters that might be released in the future upon satisfying safety targets. LLaM
https://en.wikipedia.org/wiki/Llama_%28language_model%29#11
in the future upon satisfying safety targets. LLaMa 2 includes foundation models and models fine-tuned for chat. In a further departure from the original version of LLaMa, all models are released with weights and may be used for many commercial use cases. However, because LLaMa's license enforces an acceptable use policy that prohibits Llama from being used for some purposes, Meta's use of the term open source to describe Llama has been disputed by the Open Source Initiative (which maintains The
https://en.wikipedia.org/wiki/Llama_%28language_model%29#12
by the Open Source Initiative (which maintains The Open Source Definition) and others.[28][29] Code Llama is a fine-tune of Llama 2 trained on code-specific datasets. 7B, 13B, and 34B versions were released on August 24, 2023, with the 70B version following on January 29, 2024.[30] Starting from the Llama 2 foundation models, Meta AI trained on an additional 500B tokens of code data, followed by 20B tokens of long-context data, creating the Code Llama foundation models.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#13
This foundation model was further trained on 5B tokens of instruction-following data to create the instruct fine-tune. Another foundation model was created for Python code, trained on 100B tokens of Python-only code before the long-context data.[31] Llama 3: On April 18, 2024, Meta released Llama 3 in two sizes: 8B and 70B parameters.[17] The models have been pre-trained on approximately 15 trillion tokens of text gathered from “publicly available sources” w
https://en.wikipedia.org/wiki/Llama_%28language_model%29#14
text gathered from “publicly available sources” with the instruct models fine-tuned on “publicly available instruction datasets, as well as over 10M human-annotated examples". Meta AI's testing showed in April 2024 that Llama 3 70B was beating Gemini Pro 1.5 and Claude 3 Sonnet on most benchmarks. Meta also announced plans to make Llama 3 multilingual and multimodal, better at coding and reasoning, and to increase its context window.[32][33] During an interview with Dwarkesh Patel, Mark Zuckerb
https://en.wikipedia.org/wiki/Llama_%28language_model%29#15
ing an interview with Dwarkesh Patel, Mark Zuckerberg said that the 8B version of Llama 3 was nearly as powerful as the largest Llama 2. Zuckerberg also stated that the team was surprised that the 70B model was still learning even at the end of the 15T-token training run; the decision was made to end training in order to focus GPU power elsewhere.[34] Llama 3.1 was released on July 23, 2024, in three sizes: 8B, 70B, and 405B parameters.[35][36]
https://en.wikipedia.org/wiki/Llama_%28language_model%29#16
Llama 4: The Llama 4 series was released in 2025. The architecture was changed to a mixture of experts (a generic sketch follows the list below). The models are multimodal (text and image input, text output) and multilingual (12 languages).[37] Specifically, on 5 April 2025, the following were released both as base and instruction-tuned versions:[38] - Scout: 17 billion active parameter model with 16 experts, a context window of 10M tokens, and 109B parameters in total. - Maverick: 17 billion active parameter model with 128 experts, a context window of 1M tokens, and 400B parameters in total.
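In a mixture-of-experts model, only a small subset of the total parameters (the "active" parameters above) is used for any given token. The article does not describe Meta's implementation; the following is a minimal, generic sketch of top-1 expert routing with made-up dimensions, not Llama 4's actual code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyMoELayer(nn.Module):
    """Toy mixture-of-experts feed-forward layer: a learned router picks one
    expert per token, so only that expert's weights are used for that token."""

    def __init__(self, d_model=64, d_ff=256, n_experts=16):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])

    def forward(self, x):                              # x: (tokens, d_model)
        gate = F.softmax(self.router(x), dim=-1)       # routing probabilities
        top_p, top_idx = gate.max(dim=-1)              # top-1 routing
        out = torch.zeros_like(x)
        for i, expert in enumerate(self.experts):
            chosen = top_idx == i
            if chosen.any():                           # run each expert only on its tokens
                out[chosen] = top_p[chosen, None] * expert(x[chosen])
        return out

tokens = torch.randn(10, 64)                           # 10 token embeddings
print(TinyMoELayer()(tokens).shape)                    # torch.Size([10, 64])
```

Only the selected expert's feed-forward weights are touched for each token, which is why the active parameter count per token can be far smaller than the total parameter count.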
https://en.wikipedia.org/wiki/Llama_%28language_model%29#17
Meta also announced Behemoth (not yet released): a 288 billion active parameter model with 16 experts and around 2T parameters in total, which was still in training at that time. Scout was trained from scratch, while Maverick was "codistilled" from Behemoth; Scout was trained for longer and has a longer context window than Maverick. The training data included publicly avai
https://en.wikipedia.org/wiki/Llama_%28language_model%29#18
Maverick. The training data included publicly available data, licensed data, and Meta-proprietary data such as publicly shared posts from Instagram and Facebook and people’s interactions with Meta AI. The data cutoff was August 2024.[37] Meta claimed in its release announcement that Llama 4 bested GPT-4o's score on the LMArena AI benchmark.[39] The company also stated that Llama 4's benchmark score was achieved using an unreleased "experimental chat version" of the model that was "optimized for
https://en.wikipedia.org/wiki/Llama_%28language_model%29#19
hat version" of the model that was "optimized for conversationality", which differed from the version of Llama 4 released to the public.[40] LMArena indicated that it would change its policies to prevent this incident from reoccurring, and responded, "Meta's interpretation of our policy did not match what we expect from model providers. Meta should have made it clearer that 'Llama-4-Maverick-03-26-Experimental' was a customized model to optimize for human preference."[39] Some users criticized M
https://en.wikipedia.org/wiki/Llama_%28language_model%29#20
for human preference."[39] Some users criticized Meta on social media for its use of a separate model version tailored for benchmarking, and some additionally accused Meta of training Llama 4 on test sets to further boost its benchmark scores, which Meta denied.[41] Comparison of models: In the training cost column, only the largest model's cost is listed. For example, "21,000" is the training cost of Llama 2 70B in units of petaFLOP-days, where 1 petaFLOP-day = 1 petaFLOP/sec × 1 day = 8.64×10^19 FLOP (checked below). "T" means "trillion" and "B" means "billion".
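The unit conversion can be verified directly; the snippet below only restates the arithmetic above, using the 21,000 petaFLOP-day example as an illustration.

```python
PFLOP = 1e15                      # one petaFLOP = 10^15 floating-point operations
SECONDS_PER_DAY = 24 * 60 * 60    # 86,400 seconds

pflop_day = PFLOP * SECONDS_PER_DAY            # FLOP in one petaFLOP-day
example_cost = 21_000 * pflop_day              # the example table entry, in raw FLOP
print(f"1 petaFLOP-day = {pflop_day:.2e} FLOP")            # 8.64e+19
print(f"21,000 petaFLOP-days = {example_cost:.2e} FLOP")   # ~1.81e+24
```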
https://en.wikipedia.org/wiki/Llama_%28language_model%29#21
The following table lists the main model versions of Llama, describing the significant changes included with each version:[42] Architecture and training: An example of text generated by LLaMA given a writing prompt: "Here is the recommendation letter that I wrote for an application to a dragon feeder position at the Magic Unicorn Corporation: Dear recruiter, I have known ___ for two years, and I believe that she would be an excellent
https://en.wikipedia.org/wiki/Llama_%28language_model%29#22
ears, and I believe that she would be an excellent dragon feeder for the Magic Unicorn Corporation. ___ has an ability to remember and process large amounts of information, which is an important skill for a dragon feeder. ___, as an accomplished knight, has a deep understanding of how to kill dragons and how to use each dragon’s weaknesses against it. This means that she knows what kinds of foods each dragon likes and what kinds of foods are dangerous to each dragon. This knowledge and experienc
https://en.wikipedia.org/wiki/Llama_%28language_model%29#23
erous to each dragon. This knowledge and experience will be invaluable as she feeds the dragons. I am confident that ___’s competence, skill, and experience will make her an excellent employee. Please contact me at (___) ___-___ if you have any questions. I look forward to hearing from you. Best regards, Honorable Knight Sir George" Architecture: Like GPT-3, the Llama series of models are autoregressive decoder-only Transformers, but with some minor differences (a minimal sketch of these components follows the list):
https://en.wikipedia.org/wiki/Llama_%28language_model%29#24
- SwiGLU[51] activation function instead of GeLU; - rotary positional embeddings (RoPE)[52] instead of absolute positional embedding; - RMSNorm[53] instead of layer normalization.[54]
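A minimal sketch of the three components named in the list, written against PyTorch; the default sizes and the exact RoPE pairing convention here are illustrative assumptions, not Meta's reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    """Root-mean-square normalization: rescale by the RMS of the activations,
    with a learned gain but (unlike LayerNorm) no mean subtraction and no bias."""
    def __init__(self, dim, eps=1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        return self.weight * x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps)

class SwiGLU(nn.Module):
    """Gated feed-forward block: silu(W1 x) * (W3 x), projected back down by W2."""
    def __init__(self, dim, hidden):
        super().__init__()
        self.w1 = nn.Linear(dim, hidden, bias=False)   # gate projection
        self.w3 = nn.Linear(dim, hidden, bias=False)   # value projection
        self.w2 = nn.Linear(hidden, dim, bias=False)   # down projection

    def forward(self, x):
        return self.w2(F.silu(self.w1(x)) * self.w3(x))

def rope(x, base=10000.0):
    """Rotary positional embedding: rotate (even, odd) channel pairs of the
    query/key vectors by an angle that grows with the token position."""
    seq, dim = x.shape[-2], x.shape[-1]
    pos = torch.arange(seq, dtype=torch.float32)[:, None]              # (seq, 1)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)
    ang = pos * freqs                                                  # (seq, dim/2)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    return torch.cat([x1 * ang.cos() - x2 * ang.sin(),
                      x1 * ang.sin() + x2 * ang.cos()], dim=-1)

x = torch.randn(4, 16)          # a 4-token sequence with a 16-dimensional head
print(RMSNorm(16)(x).shape, SwiGLU(16, 64)(x).shape, rope(x).shape)
```

In the actual models these pieces sit inside each Transformer block: RMSNorm is applied before the attention and feed-forward sublayers, and RoPE is applied to the query and key vectors inside attention.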
https://en.wikipedia.org/wiki/Llama_%28language_model%29#25
Training datasets: LLaMA's developers focused their effort on scaling the model's performance by increasing the volume of training data, rather than the number of parameters, reasoning that the dominating cost for LLMs is from doing inference on the trained model rather than the computational cost of the training process. LLaMA 1 foundational models were trained on a data set with 1.4 trillion tokens, drawn from publicly available data sources, including:[18] - Webpages scraped by CommonCrawl - Open source repositories of source code from GitHub - Wikipedia in 20 languages - Public domain books from Project Gutenberg - Books3 books dataset - The LaTeX source code for scientific papers uploaded to ArXiv - Questions and answers
https://en.wikipedia.org/wiki/Llama_%28language_model%29#26
c papers uploaded to ArXiv - Questions and answers from Stack Exchange websites On April 17, 2023, TogetherAI launched a project named RedPajama to reproduce and distribute an open source version of the LLaMA dataset.[55] The dataset has approximately 1.2 trillion tokens and is publicly available for download.[56] Llama 2 foundational models were trained on a data set with 2 trillion tokens. This data set was curated to remove Web sites that often disclose personal data of people. It also upsamp
https://en.wikipedia.org/wiki/Llama_%28language_model%29#27
n disclose personal data of people. It also upsamples sources considered trustworthy.[27] Llama 2 - Chat was additionally fine-tuned on 27,540 prompt-response pairs created for this project, which performed better than larger but lower-quality third-party datasets. For AI alignment, reinforcement learning with human feedback (RLHF) was used with a combination of 1,418,091 Meta examples and seven smaller datasets. The average dialog depth was 3.9 in the Meta examples, 3.0 for Anthropic Helpful an
https://en.wikipedia.org/wiki/Llama_%28language_model%29#28
in the Meta examples, 3.0 for Anthropic Helpful and Anthropic Harmless sets, and 1.0 for five other sets, including OpenAI Summarize, StackExchange, etc. Llama 3's training data consists mainly of English text, with over 5% of it in more than 30 other languages. The dataset was filtered by a text-quality classifier, and the classifier was itself trained on text synthesized by Llama 2.[17] In a lawsuit brought by Richard Kadrey and others against Meta Platforms, CEO Mark Zuckerberg was alleged to have authorized the use of copyri
https://en.wikipedia.org/wiki/Llama_%28language_model%29#29
g was alleged to have authorized the use of copyrighted content from Library Genesis to train Llama AI models and to conceal its actions by removing copyright markers from the data.[57] Fine-tuning: Llama 1 models are only available as foundational models trained with self-supervised learning and without fine-tuning. Llama 2 – Chat models were derived from foundational Llama 2 models. Unlike GPT-4, which increased context length during fine-tuning, Llama 2 and Code Llama - Chat have the same context length of 4K tokens.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#30
Supervised fine-tuning used an autoregressive loss function with the token loss on user prompts zeroed out, and a batch size of 64. (A generic sketch of this loss masking follows.)
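Zeroing out the token loss on the user prompt is typically implemented by masking those positions in the cross-entropy loss; the snippet below is a generic illustration of that idea with toy tensors and a hypothetical prompt length, not Meta's training code.

```python
import torch
import torch.nn.functional as F

IGNORE = -100   # positions labelled with ignore_index contribute no loss

# Toy batch: one sequence of 8 token ids, of which the first 5 are the user prompt.
labels = torch.tensor([[11, 12, 13, 14, 15, 21, 22, 23]])
prompt_len = 5
masked = labels.clone()
masked[:, :prompt_len] = IGNORE                    # loss is computed only on the reply

logits = torch.randn(1, 8, 100)                    # (batch, seq, vocab) model outputs
loss = F.cross_entropy(logits.transpose(1, 2),     # cross_entropy expects (batch, vocab, seq)
                       masked, ignore_index=IGNORE)
print(loss)
# The usual one-position shift between logits and labels for next-token
# prediction is omitted here to keep the illustration short.
```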
https://en.wikipedia.org/wiki/Llama_%28language_model%29#31
For AI alignment, human annotators wrote prompts and then compared two model outputs (a binary protocol), giving confidence levels and separate safety labels with veto power. Two separate reward models were trained from these preferences for safety and helpfulness using reinforcement learning from human feedback (RLHF). A major technical contribution is the departure from the exclusive use of Proximal Policy Optimization (PPO) for RLHF: a new technique based on rejection sampling was used, followed by PPO. Multi-turn consistency in dialogs was targeted for improvement, to make sure that "system messages" (initial instructions, such as "speak in French" and "act like Napoleon") are respected during the dialog.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#32
This was accomplished using the new "Ghost attention" technique during training, which concatenates relevant instructions to each new user message but zeros out the loss function for tokens in the prompt (earlier parts of the dialog). Applications: The Stanford University Institute for Human-Centered Artificial Intelligence (HAI) Center for Research on Foundation Models (CRFM) released Alpaca, a training recipe based on the LLaMA 7B model that uses the "Self-Instruct" method
https://en.wikipedia.org/wiki/Llama_%28language_model%29#33
LaMA 7B model that uses the "Self-Instruct" method of instruction tuning to acquire capabilities comparable to the OpenAI GPT-3 series text-davinci-003 model at a modest cost.[58][59][60] The model files were officially removed on March 21, 2023, over hosting costs and safety concerns, though the code and paper remain online for reference.[61][62][63] Meditron is a family of Llama-based models fine-tuned on a corpus of clinical guidelines, PubMed papers, and articles. It was created by researchers at Éc
https://en.wikipedia.org/wiki/Llama_%28language_model%29#34
and articles. It was created by researchers at École Polytechnique Fédérale de Lausanne School of Computer and Communication Sciences, and the Yale School of Medicine. It shows increased performance on medical-related benchmarks such as MedQA and MedMCQA.[64][65][66] Zoom used Meta Llama 2 to create an AI Companion that can summarize meetings, provide helpful presentation tips, and assist with message responses. This AI Companion is powered by multiple models, including Meta Llama 2.[67] Reuter
https://en.wikipedia.org/wiki/Llama_%28language_model%29#35
ultiple models, including Meta Llama 2.[67] Reuters reported in 2024 that many Chinese foundation models relied on Llama models for their training.[68] llama.cpp: Software developer Georgi Gerganov released llama.cpp as open source on March 10, 2023. It is a re-implementation of LLaMA in C++, allowing systems without a powerful GPU to run the model locally.[69] The llama.cpp project introduced the GGUF file format, a binary format that stores both tensors and metadata.[70]
https://en.wikipedia.org/wiki/Llama_%28language_model%29#36
The format focuses on supporting different quantization types, which can reduce memory usage and increase speed at the expense of lower model precision.[71] (A rough illustration follows.)
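As a rough illustration of the memory/precision trade-off, here is block-wise symmetric 8-bit quantization of a group of weights; GGUF's actual quantization schemes are more elaborate, so treat this purely as a sketch.

```python
import numpy as np

def quantize_q8(block):
    """Symmetric 8-bit quantization of one block of float32 weights:
    store int8 values plus a single float32 scale for the whole block."""
    scale = float(np.abs(block).max()) / 127.0
    if scale == 0.0:
        scale = 1.0
    q = np.clip(np.round(block / scale), -127, 127).astype(np.int8)
    return q, np.float32(scale)

def dequantize(q, scale):
    return q.astype(np.float32) * scale

weights = np.random.randn(32).astype(np.float32)        # one 32-element block
q, scale = quantize_q8(weights)
print("max abs error:", np.abs(weights - dequantize(q, scale)).max())
print("bytes:", weights.nbytes, "->", q.nbytes + 4)      # 128 -> 36 per block
```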
https://en.wikipedia.org/wiki/Llama_%28language_model%29#37
llamafile, created by Justine Tunney, is an open-source tool that bundles llama.cpp with the model into a single executable file. Tunney et al. introduced new optimized matrix multiplication kernels for x86 and ARM CPUs, improving prompt evaluation performance for FP16 and 8-bit quantized data types.[72] Military: In 2024, researchers from the People's Liberation Army Academy of Military Sciences (the top military academy of China) were reported to have developed a military tool using Llama, which Meta Platforms stated was unauthorized because Llama's license prohibits use of the model for military purposes.[73][74] Meta granted the US government and US military contractors permission to use Llama in November 2024, but continued to prohibit milit
https://en.wikipedia.org/wiki/Llama_%28language_model%29#38
in November 2024, but continued to prohibit military use by non-US entities.[29][75] Reception: Wired describes the 8B parameter version of Llama 3 as "surprisingly capable" given its size.[76] The response to Meta's integration of Llama into Facebook was mixed, with some users confused after Meta AI told a parental group that it had a child.[77]
https://en.wikipedia.org/wiki/Llama_%28language_model%29#39
According to the Q4 2023 earnings transcript, Meta adopted the strategy of open weights in order to improve model safety and iteration speed, increase adoption among developers and researchers, and become the industry standard. Llama 5, 6, and 7 are planned for the future.[78] The release of Llama models has sparked significant debate on the benefits and misuse risks of open-weight models. Such models can be fine-tuned, notably by cybercriminals, to remove safeguards so that they comply with harmful requests. Some experts contend that future models may facilitate causing damage more than
https://en.wikipedia.org/wiki/Llama_%28language_model%29#40
re models may facilitate causing damage more than defending against it, for example by making it relatively easy to engineer advanced bioweapons without specialized knowledge. Conversely, open-weight models can be useful for a wide variety of purposes, including for safety research.[79] Open Source Initiative head Stefano Maffulli criticized Meta for describing Llama as open source, saying that it was causing confusion among users and "polluting" the term.[80] See also [edit]- GPT-4o - IBM Grani
https://en.wikipedia.org/wiki/Llama_%28language_model%29#41
the term.[80] See also [edit]- GPT-4o - IBM Granite, an open-source LLM made by IBM - Mistral AI, a French open-source AI company References [edit]- ^ "llama-models/models/llama3_2/LICENSE at main · meta-llama/llama-models · GitHub". GitHub. Archived from the original on 2024-09-29. Retrieved 2024-10-20. - ^ Leswing, Kif (2023-02-24). "Mark Zuckerberg announces Meta's new large language model as A.I. race heats up". CNBC. Retrieved 2025-04-10. - ^ Franzen, Carl (2025-04-08). "Meta defends Llama
https://en.wikipedia.org/wiki/Llama_%28language_model%29#42
^ Franzen, Carl (2025-04-08). "Meta defends Llama 4 release against 'reports of mixed quality,' blames bugs". VentureBeat. Retrieved 2025-04-10. - ^ a b Peters, Jay; Vincent, James (24 February 2023). "Meta has a new machine learning language model to remind you it does AI too". The Verge. - ^ a b c "Meta and Microsoft Introduce the Next Generation of LLaMA". Meta. 18 July 2023. Archived from the original on 14 September 2023. Retrieved 21 July 2023. - ^ Malik, Yuvraj; Paul, Katie (25 February
https://en.wikipedia.org/wiki/Llama_%28language_model%29#43
2023. - ^ Malik, Yuvraj; Paul, Katie (25 February 2023). "Meta heats up Big Tech's AI arms race with new language model". Reuters. - ^ a b c "Introducing LLaMA: A foundational, 65-billion-parameter large language model". Meta AI. 24 February 2023. Archived from the original on 3 March 2023. Retrieved 16 March 2023. - ^ Hern, Alex (2023-03-07). "TechScape: Will Meta's massive leak democratise AI – and at what cost?". The Guardian. ISSN 0261-3077. Retrieved 2025-04-10. - ^ David, Emilia (30 Octobe
https://en.wikipedia.org/wiki/Llama_%28language_model%29#44
Retrieved 2025-04-10. - ^ David, Emilia (30 October 2023). "Meta's AI research head wants open source licensing to change". The Verge. Archived from the original on 14 September 2024. Retrieved 20 October 2024. - ^ Heath, Alex (2024-04-18). "Meta's battle with ChatGPT begins now". The Verge. Retrieved 2025-04-10. - ^ "Examining Emergent Abilities in Large Language Models". hai.stanford.edu. 13 September 2022. - ^ "The inside story of how ChatGPT was built from the people who made it". MIT Techno
https://en.wikipedia.org/wiki/Llama_%28language_model%29#45
was built from the people who made it". MIT Technology Review. Archived from the original on 2023-03-03. Retrieved 2024-10-20. - ^ Ray, Tiernan (23 January 2023). "ChatGPT is 'not particularly innovative,' and 'nothing revolutionary', says Meta's chief AI scientist". ZDNET. Archived from the original on 2023-02-17. - ^ Badminton, Nik (13 February 2023). "Meta's Yann LeCun on auto-regressive Large Language Models (LLMs)". Futurist.com. Archived from the original on 22 July 2024. Retrieved 20 Octo
https://en.wikipedia.org/wiki/Llama_%28language_model%29#46
om the original on 22 July 2024. Retrieved 20 October 2024. - ^ "Yann LeCun on LinkedIn: My unwavering opinion on current (auto-regressive) LLMs". LinkedIn. Archived from the original on 2024-09-17. Retrieved 2024-10-20. - ^ "Meta's Yann LeCun Asks How AIs will Match — and Exceed — Human-level Intelligence". 23 October 2024. - ^ a b c "Introducing Meta Llama 3: The most capable openly available LLM to date". ai.meta.com. April 18, 2024. Archived from the original on 2024-05-15. Retrieved 2024-04
https://en.wikipedia.org/wiki/Llama_%28language_model%29#47
from the original on 2024-05-15. Retrieved 2024-04-21. - ^ a b c d e Touvron, Hugo; Lavril, Thibaut; Izacard, Gautier; Martinet, Xavier; Lachaux, Marie-Anne; Lacroix, Timothée; Rozière, Baptiste; Goyal, Naman; Hambro, Eric; Azhar, Faisal; Rodriguez, Aurelien; Joulin, Armand; Grave, Edouard; Lample, Guillaume (2023). "LLaMA: Open and Efficient Foundation Language Models". arXiv:2302.13971 [cs.CL]. - ^ "llama". GitHub. Archived from the original on 15 March 2023. Retrieved 16 March 2023. - ^ a b c
https://en.wikipedia.org/wiki/Llama_%28language_model%29#48
15 March 2023. Retrieved 16 March 2023. - ^ a b c Vincent, James (8 March 2023). "Meta's powerful AI language model has leaked online — what happens now?". The Verge. Archived from the original on 3 November 2023. Retrieved 16 March 2023. - ^ a b VK, Anirudh (6 March 2023). "Meta's LLaMA Leaked to the Public, Thanks To 4chan". Analytics India Magazine. Archived from the original on 26 March 2023. Retrieved 17 March 2023. - ^ "Save bandwidth by using a torrent to distribute more efficiently by C
https://en.wikipedia.org/wiki/Llama_%28language_model%29#49
sing a torrent to distribute more efficiently by ChristopherKing42 · Pull Request #73 · facebookresearch/llama". GitHub. Archived from the original on 10 April 2023. Retrieved 25 March 2023. - ^ "Download weights from hugging face to help us save bandwidth by Jainam213 · Pull Request #109 · facebookresearch/llama". GitHub. Archived from the original on 21 March 2023. Retrieved 17 March 2023. - ^ Cox, Joseph (7 March 2023). "Facebook's Powerful Large Language Model Leaks Online". Vice. Archived f
https://en.wikipedia.org/wiki/Llama_%28language_model%29#50
rge Language Model Leaks Online". Vice. Archived from the original on 6 April 2023. Retrieved 17 March 2023. - ^ OpSec Online LLC (21 March 2023). "github/dmca - Notice of Claimed Infringement via Email". GitHub. Archived from the original on 10 April 2023. Retrieved 25 March 2023. - ^ Willison, Simon (11 March 2023). "Large language models are having their Stable Diffusion moment". Simon Willison's Weblog. Archived from the original on 16 March 2023. Retrieved 16 March 2023. - ^ a b c Touvron,
https://en.wikipedia.org/wiki/Llama_%28language_model%29#51
2023. Retrieved 16 March 2023. - ^ a b c Touvron, Hugo; Martin, Louis; et al. (18 Jul 2023). "LLaMA-2: Open Foundation and Fine-Tuned Chat Models". arXiv:2307.09288 [cs.CL]. - ^ Edwards, Benj (2023-07-18). "Meta launches LLaMA-2, a source-available AI model that allows commercial applications [Updated]". Ars Technica. Archived from the original on 2023-11-07. Retrieved 2023-08-08. - ^ a b Thomas, Prasanth Aby (5 November 2024). "Meta offers Llama AI to US government for national security". CIO.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#52
AI to US government for national security". CIO. Retrieved 9 December 2024. - ^ "Introducing Code Llama, a state-of-the-art large language model for coding". ai.meta.com. Archived from the original on 2024-09-27. Retrieved 2024-10-20. - ^ Rozière, Baptiste; Gehring, Jonas; Gloeckle, Fabian; Sootla, Sten; Gat, Itai; Tan, Xiaoqing Ellen; Adi, Yossi; Liu, Jingyu; Sauvestre, Romain (2024-01-31). "Code Llama: Open Foundation Models for Code". arXiv:2308.12950 [cs.CL]. - ^ Wiggers, Kyle (18 April 202
https://en.wikipedia.org/wiki/Llama_%28language_model%29#53
308.12950 [cs.CL]. - ^ Wiggers, Kyle (18 April 2024). "Meta releases Llama 3, claims it's among the best open models available". TechCrunch. Archived from the original on 18 September 2024. Retrieved 20 October 2024. - ^ Mann, Tobias (April 19, 2024). "Meta debuts third-generation Llama large language model". The Register. Archived from the original on August 25, 2024. Retrieved October 20, 2024. - ^ Patel, Dwarkesh (2024-07-24). "Mark Zuckerberg - Llama 3, Open Sourcing $10b Models, & Caesar Au
https://en.wikipedia.org/wiki/Llama_%28language_model%29#54
- Llama 3, Open Sourcing $10b Models, & Caesar Augustus". www.dwarkeshpatel.com. Archived from the original on 2024-07-16. Retrieved 2024-08-01. the 8 billion is nearly as powerful as the biggest version of Llama 2 that we released [...] even by the end, it was... still learning right it's like we probably could have fed it more tokens and it would have gotten somewhat better but i mean at some point you know you're running a company you need to do these meta reasoning questions of [...] how do
https://en.wikipedia.org/wiki/Llama_%28language_model%29#55
do these meta reasoning questions of [...] how do I want to spend our GPUs - ^ "Introducing Llama 3.1: Our most capable models to date". ai.meta.com. July 23, 2024. Archived from the original on 2024-07-23. Retrieved 2024-07-23. - ^ a b Dubey, Abhimanyu; Jauhri, Abhinav; Pandey, Abhinav; Kadian, Abhishek; Al-Dahle, Ahmad; Letman, Aiesha; Mathur, Akhil; Schelten, Alan; Yang, Amy (2024-07-31), The Llama 3 Herd of Models, arXiv:2407.21783 - ^ a b c "meta-llama/Llama-4-Maverick-17B-128E · Hugging F
https://en.wikipedia.org/wiki/Llama_%28language_model%29#56
"meta-llama/Llama-4-Maverick-17B-128E · Hugging Face". huggingface.co. 2025-04-05. Retrieved 2025-04-06. - ^ "The Llama 4 herd: The beginning of a new era of natively multimodal AI innovation". ai.meta.com. Archived from the original on 2025-04-05. Retrieved 2025-04-05. - ^ a b Robison, Kylie (8 April 2025). "Meta got caught gaming AI benchmarks". The Verge. Retrieved 8 April 2025. - ^ Wiggers, Kyle (6 April 2025). "Meta's benchmarks for its new AI models are a bit misleading". TechCrunch. Retr
https://en.wikipedia.org/wiki/Llama_%28language_model%29#57
AI models are a bit misleading". TechCrunch. Retrieved 8 April 2025. - ^ Franzen, Carl (8 April 2025). "Meta defends Llama 4 release against 'reports of mixed quality,' blames bugs". VentureBeat. Retrieved 8 April 2025. - ^ "Llama Models". www.llama.com. Archived from the original on April 9, 2025. Retrieved April 20, 2025. - ^ "The Falcon has landed in the Hugging Face ecosystem". huggingface.co. Archived from the original on 2023-06-20. Retrieved 2023-06-20. - ^ "llama/MODEL_CARD.md at main ·
https://en.wikipedia.org/wiki/Llama_%28language_model%29#58
ved 2023-06-20. - ^ "llama/MODEL_CARD.md at main · meta-llama/llama". GitHub. Archived from the original on 2024-05-28. Retrieved 2024-05-28. - ^ "Andrej Karpathy (Apr 18, 2024), The model card has some more interesting info too". X (formerly Twitter). Archived from the original on August 17, 2024. Retrieved October 20, 2024. - ^ "llama3/MODEL_CARD.md at main · meta-llama/llama3". GitHub. Archived from the original on 2024-05-21. Retrieved 2024-05-28. - ^ "llama-models/models/llama3_1/MODEL_CARD
https://en.wikipedia.org/wiki/Llama_%28language_model%29#59
5-28. - ^ "llama-models/models/llama3_1/MODEL_CARD.md at main · meta-llama/llama-models". GitHub. Archived from the original on 2024-07-23. Retrieved 2024-07-23. - ^ Robison, Kylie (2024-09-25). "Meta releases its first open AI model that can process images". The Verge. Retrieved 2024-09-25. - ^ Wiggers, Kyle (2024-09-25). "Meta's Llama AI models get multimodal". TechCrunch. Archived from the original on 2024-09-25. Retrieved 2024-09-25. - ^ "Llama 3.2: Revolutionizing edge AI and vision with op
https://en.wikipedia.org/wiki/Llama_%28language_model%29#60
ma 3.2: Revolutionizing edge AI and vision with open, customizable models". ai.meta.com. Archived from the original on 2024-09-25. Retrieved 2024-09-26. - ^ Shazeer, Noam (2020-02-01). "GLU Variants Improve Transformer". arXiv:2002.05202 [cs.CL]. - ^ Su, Jianlin; Lu, Yu; Pan, Shengfeng; Murtadha, Ahmed; Wen, Bo; Liu, Yunfeng (2021-04-01). "RoFormer: Enhanced Transformer with Rotary Position Embedding". arXiv:2104.09864 [cs.CL]. - ^ Zhang, Biao; Sennrich, Rico (2019-10-01). "Root Mean Square Laye
https://en.wikipedia.org/wiki/Llama_%28language_model%29#61
ennrich, Rico (2019-10-01). "Root Mean Square Layer Normalization". arXiv:1910.07467 [cs.LG]. - ^ Lei Ba, Jimmy; Kiros, Jamie Ryan; Hinton, Geoffrey E. (2016-07-01). "Layer Normalization". arXiv:1607.06450 [stat.ML]. - ^ "RedPajama-Data: An Open Source Recipe to Reproduce LLaMA training dataset". GitHub. Together. Archived from the original on 7 November 2023. Retrieved 4 May 2023. - ^ "RedPajama-Data-1T". Hugging Face. Together. Archived from the original on 3 November 2023. Retrieved 4 May 202
https://en.wikipedia.org/wiki/Llama_%28language_model%29#62
e original on 3 November 2023. Retrieved 4 May 2023. - ^ Wiggers, Kyle (January 9, 2025). "Mark Zuckerberg gave Meta's Llama team the OK to train on copyrighted works, filing claims". Techcrunch. Retrieved January 12, 2025. - ^ Taori, Rohan; Gulrajani, Ishaan; Zhang, Tianyi; Dubois, Yann; Li, Xuechen; Guestrin, Carlos; Liang, Percy; Hashimoto, Tatsunori B. (13 March 2023). "Alpaca: A Strong, Replicable Instruction-Following Model". Stanford Center for Research on Foundation Models. Archived from
https://en.wikipedia.org/wiki/Llama_%28language_model%29#63
r for Research on Foundation Models. Archived from the original on 6 April 2023. - ^ Wang, Yizhong; Kordi, Yeganeh; Mishra, Swaroop; Liu, Alisa; Smith, Noah A.; Khashabi, Daniel; Hajishirzi, Hannaneh (2022). "Self-Instruct: Aligning Language Models with Self-Generated Instructions". arXiv:2212.10560 [cs.CL]. - ^ "Stanford CRFM". crfm.stanford.edu. Archived from the original on 2023-04-06. Retrieved 2023-03-20. - ^ Quach, Katyanna. "Stanford takes costly, risky Alpaca AI model offline". www.there
https://en.wikipedia.org/wiki/Llama_%28language_model%29#64
costly, risky Alpaca AI model offline". www.theregister.com. - ^ "Stanford Researchers Take Down Alpaca AI Over Cost and Hallucinations". Gizmodo. 21 March 2023. Archived from the original on 12 May 2024. Retrieved 20 October 2024. - ^ "alpaca-lora". GitHub. Archived from the original on 4 April 2023. Retrieved 5 April 2023. - ^ "Meditron: An LLM suite for low-resource medical settings leveraging Meta Llama". ai.meta.com. - ^ Petersen, Tanya (28 November 2023). "EPFL's new Large Language Model
https://en.wikipedia.org/wiki/Llama_%28language_model%29#65
November 2023). "EPFL's new Large Language Model for Medical Knowledge". Archived from the original on 17 September 2024. Retrieved 20 October 2024. - ^ "epfLLM/meditron". epfLLM. 11 May 2024. Archived from the original on 27 September 2024. Retrieved 20 October 2024. - ^ "How Companies Are Using Meta Llama". Meta. 7 May 2024. Archived from the original on 27 September 2024. Retrieved 20 October 2024. - ^ "How dependent is China on US artificial intelligence technology?". Reuters. May 9, 2024.
https://en.wikipedia.org/wiki/Llama_%28language_model%29#66
intelligence technology?". Reuters. May 9, 2024. - ^ Edwards, Benj (2023-03-13). "You can now run a GPT-3-level AI model on your laptop, phone, and Raspberry Pi". Ars Technica. Archived from the original on 2024-01-09. Retrieved 2024-01-04. - ^ "GGUF". huggingface.co. Retrieved 9 May 2024. - ^ Labonne, Maxime (29 November 2023). "Quantize Llama models with GGUF and llama.cpp". Medium. Towards Data Science. Archived from the original on 9 May 2024. Retrieved 9 May 2024. - ^ Connatser, Matthew. "
https://en.wikipedia.org/wiki/Llama_%28language_model%29#67
4. Retrieved 9 May 2024. - ^ Connatser, Matthew. "Llamafile LLM driver project boosts performance on CPU cores". www.theregister.com. Archived from the original on 10 May 2024. Retrieved 10 May 2024. - ^ Cheung, Sunny (October 31, 2024). "PRC Adapts Meta's Llama for Military and Security AI Applications". Jamestown Foundation. Retrieved 2024-11-03. - ^ Pomfret, James; Pang, Jessie (November 1, 2024). "Chinese researchers develop AI model for military use on back of Meta's Llama". Reuters. Retrie
https://en.wikipedia.org/wiki/Llama_%28language_model%29#68
tary use on back of Meta's Llama". Reuters. Retrieved November 1, 2024. - ^ Smith, Matthew S. (17 November 2024). "Meta Opens Its AI Model for the U.S. Military - IEEE Spectrum". IEEE Spectrum. Retrieved 9 December 2024. - ^ Knight, Will. "Meta's Open Source Llama 3 Is Already Nipping at OpenAI's Heels". Wired. Archived from the original on 2024-09-27. Retrieved 2024-10-20. - ^ "Meta's amped-up AI agents confusing Facebook users". ABC News. 19 April 2024. Archived from the original on 2024-09-17
https://en.wikipedia.org/wiki/Llama_%28language_model%29#69
ril 2024. Archived from the original on 2024-09-17. Retrieved 2024-10-20. - ^ "Archived copy" (PDF). Archived (PDF) from the original on 2024-09-17. Retrieved 2024-10-20. {{cite web}} : CS1 maint: archived copy as title (link) - ^ Knight, Will. "Meta's New Llama 3.1 AI Model Is Free, Powerful, and Risky". Wired. ISSN 1059-1028. Archived from the original on 2024-08-03. Retrieved 2024-08-04. - ^ Waters, Richard (October 17, 2024). "Meta under fire for 'polluting' open-source". Financial Times. Fu
https://en.wikipedia.org/wiki/Llama_%28language_model%29#70
for 'polluting' open-source". Financial Times. Further reading [edit]- Huang, Kalley; O'Regan, Sylvia Varnham (September 5, 2023). "Inside Meta's AI Drama: Internal Feuds Over Compute Power". The Information. Archived from the original on September 5, 2023. Retrieved September 6, 2023.
https://en.wikipedia.org/wiki/T5_%28language_model%29#0
T5 (language model) T5 (Text-to-Text Transfer Transformer) is a series of large language models developed by Google AI introduced in 2019.[1][2] Like the original Transformer model,[3] T5 models are encoder-decoder Transformers, where the encoder processes the input text, and the decoder generates the output text. T5 models are usually pretrained on a massive dataset of text and code, after which they can perform the text-based tasks that are similar to their pretrained tasks. They can also be f
https://en.wikipedia.org/wiki/T5_%28language_model%29#1
ilar to their pretrained tasks. They can also be fine-tuned to perform other tasks. T5 models have been employed in various applications, including chatbots, machine translation systems, text summarization tools, code generation, and robotics.[4] Training: The original T5 models are pre-trained on the Colossal Clean Crawled Corpus (C4), containing text and code scraped from the internet. This pre-training process enables the models to learn general language understanding and generation abilities.
https://en.wikipedia.org/wiki/T5_%28language_model%29#2
T5 models can then be fine-tuned on specific downstream tasks, adapting their knowledge to perform well in various applications. The T5 models were pretrained on many tasks, all in the format of <input text> -> <output text> (a sketch of how such training examples can be constructed follows the list). Some examples are: - restoring corrupted text: "Thank you <X> me to your party <Y> week." -> "<X> for inviting <Y> last <Z>", where <Z> means "end of output", and <X> and <Y> denote blanks to be filled, called "sentinels" in the original report. - translation: "translate English to German: That is good." -> "Das ist gut." - judging the grammatical acceptability of a sentence (CoLA): "The course is jumping well." -> "not acceptable".
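Every pretraining and fine-tuning task reduces to a pair of strings, so building examples is simple string manipulation. The helper below is a hypothetical sketch of span corruption; the <extra_id_N> sentinel spelling follows the tokens used in released T5 vocabularies (the report's <X>, <Y>, <Z> placeholders), but the function itself is not from the T5 codebase.

```python
def corrupt_spans(tokens, spans):
    """Build one span-corruption example: each masked span in the input is
    replaced by a sentinel, and the target lists the sentinels followed by
    the original spans."""
    sentinels = [f"<extra_id_{i}>" for i in range(len(spans) + 1)]
    src, tgt, prev_end = [], [], 0
    for i, (start, end) in enumerate(spans):
        src += tokens[prev_end:start] + [sentinels[i]]
        tgt += [sentinels[i]] + tokens[start:end]
        prev_end = end
    src += tokens[prev_end:]
    tgt += [sentinels[len(spans)]]          # final sentinel marks the end of output
    return " ".join(src), " ".join(tgt)

words = "Thank you for inviting me to your party last week .".split()
src, tgt = corrupt_spans(words, [(2, 4), (8, 9)])
print(src)   # Thank you <extra_id_0> me to your party <extra_id_1> week .
print(tgt)   # <extra_id_0> for inviting <extra_id_1> last <extra_id_2>

# Supervised tasks use the same string-to-string format, just with a task prefix:
pair = ("translate English to German: That is good.", "Das ist gut.")
```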
https://en.wikipedia.org/wiki/T5_%28language_model%29#3
Architecture: The T5 series encompasses several models with varying sizes and capabilities, all encoder-decoder Transformers, where the encoder processes the input text and the decoder generates the output text.
https://en.wikipedia.org/wiki/T5_%28language_model%29#4
These models are often distinguished by their parameter count, which indicates the complexity and potential capacity of the model. The original paper[1] reported five models (Small, Base, Large, 3B, and 11B). The encoder and the decoder have the same shape; for example, T5-Small has 6 layers in the encoder and 6 layers in the decoder. The models' shapes are described by the following hyperparameters: - n_layer: the number of layers in the encoder, and equally the number of layers in the decoder (they always have the same number of layers).
https://en.wikipedia.org/wiki/T5_%28language_model%29#5
- n_head: the number of attention heads in each attention block. - d_model: the dimension of the embedding vectors. - d_ff: the dimension of the feedforward network within each encoder and decoder layer. - d_kv: the dimension of the key and value vectors used in the self-attention mechanism. Note that, unlike typical Transformers, the 3B and 11B models do not satisfy d_kv = d_model / n_head.[6] (A quick numeric check follows.)
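As a concrete check of that note, the shape of T5-11B can be compared with the conventional per-head dimension; the specific numbers below are quoted from memory of the published model configuration and should be treated as illustrative.

```python
# Shape of T5-11B (illustrative; verify against the published model config).
d_model, n_head, d_kv = 1024, 128, 128

conventional_d_kv = d_model // n_head   # what a typical Transformer head would use
print(conventional_d_kv)                # 8
print(d_kv == conventional_d_kv)        # False: T5-11B keeps d_kv fixed at 128
```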
https://en.wikipedia.org/wiki/T5_%28language_model%29#6
Compared to the original Transformer, T5 uses a few minor modifications: layer normalization with no additive bias; placing the layer normalization outside the residual path; and relative positional embedding.[7] For all experiments, they used a WordPiece tokenizer with a vocabulary size of 32,000. The tokenizer is shared across both the input and output of each model. It was trained on a mixture of English, German, French, and Romanian data from the C4 dataset, at a ratio of 10:1:1:1. Variants: Several subsequent models used the T5 architecture, with non-standardized naming conventions used to differentiate them.
https://en.wikipedia.org/wiki/T5_%28language_model%29#7
This section attempts to collect the main ones; an exhaustive list of the variants released by Google Brain is on the GitHub repo for T5X.[8] Some models are trained from scratch while others are trained by starting from a previously trained model. By default, each model is trained from scratch, except where otherwise noted. - T5 small, base, large, 3B, 11B (2019): The original models.[1] - T5 1.
https://en.wikipedia.org/wiki/T5_%28language_model%29#8
e, 3B, 11B (2019): The original models.[1] - T5 1.1 small, base, large, XL, XXL: Improved versions of the original T5 series with roughly the same parameter counts. The activation function is GEGLU[9] instead of ReLU. The 3B and 11B models were renamed "XL" and "XXL", and their shapes were changed.[8][10][11] - LM-adapted T5 (2021): a series of models (from small to XXL) that started from checkpoints of the T5 series, but trained further on 100B additional tokens from C4.[12] - Switch Transformer (
https://en.wikipedia.org/wiki/T5_%28language_model%29#9
itional tokens from C4.[12] - Switch Transformer (2021): a mixture-of-experts variant of T5, obtained by replacing the feedforward layers in the encoder and decoder blocks with mixture-of-experts feedforward layers.[13][14] - T0 3B, 11B (2021): a series of models that started from checkpoints of LM-adapted T5 and were further trained to perform tasks based only on the task instruction (zero-shot).[15] Different entries in the series use different finetuning data.[16] - ByT5 (2021): a byte-level version of T5, tr
https://en.wikipedia.org/wiki/T5_%28language_model%29#10
[16] - ByT5 (2021): a byte-level version of T5, trained on mC4 (multilingual C4) dataset.[17] It operates on text encoded as UTF-8 bytes, without tokenizers. - Flan-T5-XL (2022): a model that started with a checkpoint of T5 XL, then instruction-tuned on the FLAN dataset.[18][19][20][21] - T5X (2022): a JAX-based re-implementation of the original T5 codebase. It is not a model.[22] The original T5 codebase was implemented in TensorFlow with MeshTF.[2] - UL2 20B (2022): a model with the same archi
https://en.wikipedia.org/wiki/T5_%28language_model%29#11
.[2] - UL2 20B (2022): a model with the same architecture as the T5 series, but scaled up to 20B and trained with a "mixture of denoisers" objective on C4.[23] It was trained on a TPU cluster, in a training run that was accidentally left running for a month.[24] - Flan-UL2 20B (2022): UL2 20B instruction-finetuned on the FLAN dataset.[23][20] - Pile-T5 (2024): has the same architecture as T5, except it used the Llama tokenizer. It was trained on The Pile.
https://en.wikipedia.org/wiki/T5_%28language_model%29#12
It came in sizes of base, large, XL, and XXL.[25] Applications: The T5 model itself is an encoder-decoder model, allowing it to be used for instruction following: the encoder encodes the instruction, and the decoder autoregressively generates the reply. The T5 encoder can also be used on its own as a text encoder, much like BERT; it encodes a text into a sequence of real-number vectors, which can be used for downstream applications (see the sketch below).
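Using just the encoder as a text embedder can be done with the Hugging Face transformers library roughly as follows; the checkpoint name and exact calls are assumptions of this sketch rather than anything specified by the article.

```python
# pip install transformers torch
from transformers import AutoTokenizer, T5EncoderModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")       # any T5 checkpoint
encoder = T5EncoderModel.from_pretrained("t5-small")

inputs = tokenizer("A photo of a corgi riding a bicycle", return_tensors="pt")
vectors = encoder(**inputs).last_hidden_state    # (1, seq_len, d_model) text vectors
print(vectors.shape)
```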
https://en.wikipedia.org/wiki/T5_%28language_model%29#13
For example, Google Imagen[26] uses T5-XXL as its text encoder, and the encoded text vectors are used as conditioning for a diffusion model. As another example, the AuraFlow diffusion model[27] uses Pile-T5-XL. References: - ^ a b c Raffel, Colin; Shazeer, Noam; Roberts, Adam; Lee, Katherine; Narang, Sharan; Matena, Michael; Zhou, Yanqi; Li, Wei; Liu, Peter J. (2020). "Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer". Journal of Machine Learning Research. 21 (140): 1–
https://en.wikipedia.org/wiki/T5_%28language_model%29#14
Journal of Machine Learning Research. 21 (140): 1–67. arXiv:1910.10683. ISSN 1533-7928. - ^ a b google-research/text-to-text-transfer-transformer, Google Research, 2024-08-21, retrieved 2024-08-21 - ^ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need". Advances in Neural Information Processing Systems. 30. Curran Associates, Inc. - ^ Jiang, Yunfan; Gupta, Agrim; Zhang, Zichen; Wang, G
https://en.wikipedia.org/wiki/T5_%28language_model%29#15
iang, Yunfan; Gupta, Agrim; Zhang, Zichen; Wang, Guanzhi; Dou, Yongqiang; Chen, Yanjun; Fei-Fei, Li; Anandkumar, Anima; Zhu, Yuke (2022-10-06). "VIMA: General Robot Manipulation with Multimodal Prompts". arXiv:2210.03094 [cs.RO]. - ^ a b Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "11.9. Large-Scale Pretraining with Transformers". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3. - ^ "config.