Meta AI (formerly Facebook) also has a generative transformer-based foundational large language model, known as LLaMA.[48]
Foundational GPTs can also employ modalities other than text, for input and/or output. GPT-4 is a multi-modal LLM that is capable of processing text and image input (though its output is limited to text).[49] Regarding multimodal output, some generative transformer-based models are used for text-to-image technologies such as diffusion[50] and parallel decoding.[51] Such kinds of models can serve as visual foundation models (VFMs) for developing downstream systems that can work with images.[52]
Task-specific models
A foundational GPT model can be further adapted to produce more targeted systems directed to specific tasks and/or subject-matter domains. Methods for such adaptation can include additional fine-tuning (beyond that done for the foundation model) as well as certain forms of prompt engineering.[53]
An important example of this is fine-tuning models to follow instructions, which is a fairly broad task but more targeted than a foundation model. In January 2022, OpenAI introduced "InstructGPT", a series of models fine-tuned to follow instructions using a combination of supervised training and reinforcement learning from human feedback (RLHF) on base GPT-3 language models.[54][55] Advantages this had over the bare foundational models included higher accuracy, less negative/toxic sentiment, and generally better alignment with user needs. Hence, OpenAI began using this as the basis for its API service offerings.[56] Other instruction-tuned models have been released by others, including a fully open version.[57][58]
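The reward-modeling step at the heart of RLHF can be illustrated compactly. The sketch below shows the pairwise preference loss described in the InstructGPT report, in which a reward model is trained to score human-preferred responses higher; the function and tensor names are illustrative, not OpenAI's actual implementation.

```python
import torch
import torch.nn.functional as F

def reward_model_loss(chosen_rewards: torch.Tensor,
                      rejected_rewards: torch.Tensor) -> torch.Tensor:
    """Pairwise preference loss for training a reward model.

    For each pair, a human labeler preferred the response scored by
    `chosen_rewards` over the one scored by `rejected_rewards`; the loss
    -log(sigmoid(r_chosen - r_rejected)) pushes the model to rank the
    preferred response higher.
    """
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()

# Toy example: scalar rewards for three preference pairs.
chosen = torch.tensor([1.2, 0.3, 2.0])
rejected = torch.tensor([0.7, 0.9, -0.5])
print(reward_model_loss(chosen, rejected))  # single scalar loss value
```

The fine-tuned policy is then optimized (e.g. with PPO) to maximize this learned reward, which is what distinguishes RLHF from plain supervised fine-tuning.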
Another (related) kind of task-specific model is the chatbot, which engages in human-like conversation. In November 2022, OpenAI launched ChatGPT, an online chat interface powered by an instruction-tuned language model trained in a similar fashion to InstructGPT.[59] They trained this model using RLHF, with human AI trainers providing conversations in which they played both the user and the AI, and mixed this new dialogue dataset with the InstructGPT dataset to produce a conversational format suitable for a chatbot. Other major chatbots currently include Microsoft's Bing Chat, which uses OpenAI's GPT-4 (as part of a broader close collaboration between OpenAI and Microsoft),[60] and Google's competing chatbot Gemini (initially based on their LaMDA family of conversation-trained language models, with plans to switch to PaLM).[61]
Yet another kind of task that a GPT can be used for is the meta-task of generating its own instructions, such as developing a series of prompts for 'itself' to be able to effectuate a more general goal given by a human user.[62] This is known as an AI agent, and more specifically a recursive one, because it uses results from its previous self-instructions to help it form its subsequent prompts; the first major example of this was Auto-GPT (which uses OpenAI's GPT models), and others have since been developed as well.[63]
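The recursive self-prompting idea can be sketched in a few lines. The loop below is a hypothetical skeleton rather than Auto-GPT's actual code; `llm` stands in for any call to a GPT-style completion API.

```python
def run_agent(llm, goal: str, max_steps: int = 10) -> list[str]:
    """Minimal recursive agent loop: the model proposes its own next
    instruction, carries it out (here, also via the model), and feeds
    the result back into the next prompt."""
    history: list[str] = []
    for _ in range(max_steps):
        context = "\n".join(history)
        instruction = llm(
            f"Goal: {goal}\nProgress so far:\n{context}\n"
            "Propose the single next instruction, or say DONE."
        )
        if "DONE" in instruction:
            break
        result = llm(f"Carry out this instruction: {instruction}")
        # Results of earlier self-instructions shape later prompts,
        # which is what makes the agent 'recursive'.
        history.append(f"Instruction: {instruction}\nResult: {result}")
    return history
```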
Multimodality
Generative transformer-based systems can also be targeted for tasks involving modalities beyond text. For example, Microsoft's "Visual ChatGPT" combines ChatGPT with visual foundation models (VFMs) to enable input or output comprising images as well as text.[64] Also, advances in text-to-speech technology offer tools for audio content creation when used in conjunction with foundational GPT language models.[65]
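As a rough illustration of how a text-only chat model can be combined with a visual foundation model, the sketch below captions an image with an off-the-shelf captioner and folds the caption into a text prompt. This is a simplified stand-in for Visual ChatGPT's prompt-management design, not its actual architecture; the model names are public examples from the Hugging Face Hub.

```python
from transformers import pipeline

# A visual foundation model (here an image captioner) turns the image
# into text that a text-only GPT model can then reason about.
captioner = pipeline("image-to-text",
                     model="Salesforce/blip-image-captioning-base")
chat = pipeline("text-generation", model="gpt2")

caption = captioner("photo.jpg")[0]["generated_text"]
prompt = (f"The user uploaded an image described as: '{caption}'. "
          "Describe what might be happening in it.")
print(chat(prompt, max_new_tokens=50)[0]["generated_text"])
```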
Domain-specificity
GPT systems can be directed toward particular fields or domains. Some reported examples of such models and apps are as follows:
- EinsteinGPT – for sales and marketing domains, to aid with customer relationship management (uses GPT-3.5)[66][67]
- BloombergGPT – for the financial domain, to aid with financial news and information (uses "freely available" AI methods, combined with their proprietary data)[68]
- Khanmigo – described as a GPT version for tutoring in the education domain; it aids students using Khan Academy by guiding them through their studies without directly providing answers (powered by GPT-4)[69][70]
- SlackGPT – for the Slack instant-messaging service, to aid with navigating and summarizing discussions on it (uses OpenAI's API)[71]
- BioGPT – for the biomedical domain, to aid with biomedical literature text generation and mining (uses GPT-2)[72]
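Several of these domain-adapted models are published openly and can be tried directly. As a hedged sketch, the snippet below loads BioGPT through the Hugging Face transformers library; the Hub identifier microsoft/biogpt matches the public BioGPT release, though the prompt and generation arguments here are purely illustrative.

```python
from transformers import pipeline

# BioGPT is a GPT-2-style model pre-trained on biomedical literature.
generator = pipeline("text-generation", model="microsoft/biogpt")
print(generator("COVID-19 is", max_new_tokens=30)[0]["generated_text"])
```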
Sometimes domain-specificity is accomplished via software plug-ins or add-ons. For example, several different companies have developed particular plugins that interact directly with OpenAI's ChatGPT interface,[73][74] and Google Workspace has available add-ons such as "GPT for Sheets and Docs", which is reported to aid use of spreadsheet functionality in Google Sheets.[75][76]
Brand issues
OpenAI, which created the first generative pre-trained transformer (GPT) in 2018, asserted in 2023 that "GPT" should be regarded as a brand of OpenAI.[77] In April 2023, OpenAI revised the brand guidelines in its terms of service to indicate that other businesses using its API to run their artificial intelligence (AI) services would no longer be able to include "GPT" in such names or branding.[78] In May 2023, OpenAI engaged a brand management service to notify its API customers of this policy, although these notifications stopped short of making overt legal claims (such as allegations of trademark infringement or demands to cease and desist).[77] As of November 2023, OpenAI still prohibits its API licensees from naming their own products with "GPT",[79] but it has begun enabling its ChatGPT Plus subscribers to make "custom versions of ChatGPT" that are called GPTs on the OpenAI site.[80] OpenAI's terms of service say that its subscribers may use "GPT" in the names of these, although this is "discouraged".[79]
Relatedly, OpenAI has applied to the United States Patent and Trademark Office (USPTO) to seek domestic trademark registration for the term "GPT" in the field of AI.[77] OpenAI sought to expedite handling of its application, but the USPTO declined that request in April 2023.[81] In May 2023, the USPTO responded to the application with a determination that "GPT" was both descriptive and generic.[82] As of November 2023, OpenAI continues to pursue its argument through the available processes. Regardless, failure to obtain a registered U.S. trademark does not preclude some level of common-law trademark rights in the U.S.[83] or trademark rights in other countries.[84]
For any given type or scope of trademark protection in the U.S., OpenAI would need to establish that the term is actually "distinctive" to their specific offerings, beyond being a broader technical term for the kind of technology. Some media reports suggested that OpenAI may be able to obtain trademark registration based indirectly on the fame of its GPT-based chatbot product, ChatGPT,[81][85] for which OpenAI has separately sought protection (and which it has sought to enforce more strongly).[86] Other reports have indicated that registration for the bare term "GPT" seems unlikely to be granted,[77][87] as it is used frequently as a common term to refer simply to AI systems that involve generative pre-trained transformers.[3][88][89][90] In any event, to whatever extent exclusive rights in the term may arise in the U.S., others would need to avoid using it for similar products or services in ways likely to cause confusion.[87][91] If such rights ever became broad enough to implicate other well-established uses in the field, the trademark doctrine of descriptive fair use could still allow continued non-brand-related usage.[92]
Selected bibliography
This section lists the main official publications from OpenAI and Microsoft on their GPT models.
- GPT-1: report,[9] GitHub release.[93]
- GPT-2: blog announcement,[94] report on its decision of "staged release",[95] GitHub release.[96]
- GPT-3: report.[41] No GitHub or any other form of code release has followed.
- WebGPT: blog announcement,[97] report.[98]
- InstructGPT: blog announcement,[54] report.[55]
- ChatGPT: blog announcement (no report).[59]
- GPT-4: blog announcement,[99] reports,[100][101] model card.[102]
- GPT-4o: blog announcement.[103]
- GPT-4.5: blog announcement.[104]
- GPT-4.1: blog announcement.[105]
References
- ^ a b Haddad, Mohammed. "How does GPT-4 work and how can you start using it in ChatGPT?". www.aljazeera.com. Archived from the original on July 5, 2023. Retrieved April 10, 2023.
- ^ a b "Generative AI: a game-changer society needs to be ready for". World Economic Forum. January 9, 2023.
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#34
|
eady for". World Economic Forum. January 9, 2023. Archived from the original on April 25, 2023. Retrieved April 8, 2023.
- ^ a b c "The A to Z of Artificial Intelligence". Time. April 13, 2023. Archived from the original on June 16, 2023. Retrieved April 14, 2023.
- ^ Hu, Luhui (November 15, 2022). "Generative AI and Future". Medium. Archived from the original on June 5, 2023. Retrieved April 29, 2023.
- ^ "CSDL | IEEE Computer Society". www.computer.org. Archived from the original on April 28,
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#35
|
uter.org. Archived from the original on April 28, 2023. Retrieved April 29, 2023.
- ^ "LibGuides: Using AI Language Models : ChatGPT". Archived from the original on December 8, 2023. Retrieved December 7, 2023.
- ^ Toews, Rob. "The Next Generation Of Large Language Models". Forbes. Archived from the original on April 14, 2023. Retrieved April 9, 2023.
- ^ Mckendrick, Joe (March 13, 2023). "Most Jobs Soon To Be 'Influenced' By Artificial Intelligence, Research Out Of OpenAI And University Of Pennsylvania Suggests". Forbes. Archived from the original on April 16, 2023. Retrieved April 16, 2023.
- ^ a b c d "Improving language understanding with unsupervised learning". openai.com. June 11, 2018. Archived from the original on March 18, 2023. Retrieved March 18, 2023.
- ^ "GPT-1 to GPT-4: Each of OpenAI's GPT Models Explained and Compared". MUO. April 11, 2023. Archived from the original on April 15, 2023. Retrieved May 3, 2023.
- ^ "GPT-4".
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#37
|
pril 15, 2023. Retrieved May 3, 2023.
- ^ "GPT-4". openai.com. Archived from the original on March 14, 2023. Retrieved December 8, 2023.
- ^ a b Alford, Anthony (July 13, 2021). "EleutherAI Open-Sources Six Billion Parameter GPT-3 Clone GPT-J". InfoQ. Archived from the original on February 10, 2023. Retrieved April 3, 2023.
- ^ a b "News" (Press release). Archived from the original on April 5, 2023. Retrieved April 5, 2023.
- ^ Morrison, Ryan (March 7, 2023). "Salesforce launches EinsteinGPT built with OpenAI technology". Tech Monitor. Archived from the original on April 15, 2023. Retrieved April 10, 2023.
- ^ "The ChatGPT of Finance is Here, Bloomberg is Combining AI and Fintech". Forbes. Archived from the original on April 6, 2023. Retrieved April 6, 2023.
- ^ Hinton, Geoffrey; et al. (October 15, 2012). "Deep neural networks for acoustic modeling in speech recognition" (PDF). IEEE Signal Processing Magazine. doi:10.1109/MSP.2012.2205597. S2CID 206485943. Archived (PDF) from the original on March 18, 2023. Retrieved April 27, 2023.
- ^ Deng, Li (January 22, 2014). "A tutorial survey of architectures, algorithms, and applications for deep learning". APSIPA Transactions on Signal and Information Processing. 3. Cambridge.org: e2. doi:10.1017/atsip.2013.9. S2CID 9928823.
- ^ Erhan, Dumitru; Courville, Aaron; Bengio, Yoshua; Vincent, Pascal (March 31, 2010). "Why Does Unsupervised Pre-training Help Deep Learning?". Proceedings of the Thirteenth International Conference on Artificial Intelligence and Statistics. JMLR Workshop and Conference Proceedings: 201–208. Archived from the original on January 24, 2024. Retrieved January 24, 2024.
- ^ "First-Hand:The Hidden Markov Model – Engineering and Tec
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#41
|
Hand:The Hidden Markov Model – Engineering and Technology History Wiki". ethw.org. January 12, 2015. Archived from the original on April 3, 2018. Retrieved May 1, 2018.
- ^ Juang, B. H.; Rabiner, L. R. (1991). "Hidden Markov Models for Speech Recognition". Technometrics. 33 (3): 251–272. doi:10.2307/1268779. ISSN 0040-1706. JSTOR 1268779. Archived from the original on October 8, 2024. Retrieved October 4, 2024.
- ^ Cottrell, Garrison W.; Munro, Paul; Zipser, David (1987). "Learning Internal Representation From Gray-Scale Images: An Example of Extensional Programming". Proceedings of the Annual Meeting of the Cognitive Science Society. 9. Archived from the original on October 7, 2024. Retrieved October 4, 2024.
- ^ Cottrell, Garrison W. (January 1, 1991), Touretzky, David S.; Elman, Jeffrey L.; Sejnowski, Terrence J.; Hinton, Geoffrey E. (eds.), "Extracting features from faces using compression networks: Face, identity, emotion, and gender recognition using holons", Connectionist Models, Morgan Kaufmann, pp. 328–337, ISBN 978-1-4832-1448-1, archived from the original on October 7, 2024, retrieved October 4, 2024
- ^ Schmidhuber, Jürgen (1992). "Learning complex, extended sequences using the principle of history compression" (PDF). Neural Computation. 4 (2): 234–242. doi:10.1162/neco.1992.4.2.234. S2CID 18271205.
- ^ Elman, Jeffrey L.; Zipser, David (April 1, 1988). "Learning the hidden structure of speech". The Journal of the Acoustical Society of America. 83 (4): 1615–1626. Bibcode:1988ASAJ...83.1615E. doi:10.1121/1.395916. ISSN 0001-4966. PMID 3372872. Archived from the original on October 7, 2024. Retrieved October 4, 2024.
- ^ Bourlard, H.; Kamp, Y. (1988). "Auto-association by multilayer perceptrons and singular value decomposition". Biological Cybernetics. 59 (4–5): 291–294. doi:10.1007/BF00332918. PMID 3196773. S2CID 206775335. Archived from the original on June 27, 2021. Retrieved October 4, 2024.
- ^ Hinton, Geoffrey E; Zemel, Richard (1993). "Autoencoders, Minimum Description Length and Helmholtz Free Energy". Advances in Neural Information Processing Systems. 6. Morgan-Kaufmann. Archived from the original on August 14, 2024. Retrieved October 4, 2024.
- ^ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc. Archived (PDF) from the original on February 21, 2024. Retrieved January 28, 2024.
- ^ Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (May 24, 2019). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". Association for Computational Linguistics. arXiv:1810.04805.
- ^ a b c Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (June 11, 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on January 26, 2021. Retrieved January 23, 2021.
- ^ Radford, Alec; Jozefowicz, Rafal; Sutskever, Ilya (April 6, 2017). "Learning to Generate Reviews and Discovering Sentiment". arXiv:1704.01444 [cs.LG].
- ^ Chen, Mark; Tworek, Jerry; Jun, Heewoo; Yuan, Qiming; Ponde de Oliveira Pinto, Henrique; Kaplan, Jared; Edwards, Harri; Burda, Yuri; Joseph, Nicholas; Brockman, Greg; Ray, Alex; Puri, Raul; Krueger, Gretchen; Petrov, Michael; Khlaaf, Heidy (July 1, 2021). "Evaluating Large Language Models Trained on Code". Association for Computational Linguistics. arXiv:2107.03374.
- ^ Ouyang, Long; Wu, Jeffrey; Jiang, Xu; Almeida, Diogo; Wainwright, Carroll; Mishkin, Pamela; Zhang, Chong; Agarwal, Sandhini; Slama, Katarina; Ray, Alex; Schulman, John; Hilton, Jacob; Kelton, Fraser; Miller, Luke; Simens, Maddie (December 6, 2022). "Training language models to follow instructions with human feedback". Advances in Neural Information Processing Systems. 35: 27730–27744. arXiv:2203.02155. Archived from the original on June 28, 2023. Retrieved June 24, 2023.
- ^ "New GPT-3 capabilities: Edit & insert". openai.com. Archived
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#50
|
capabilities: Edit & insert". openai.com. Archived from the original on June 29, 2023. Retrieved June 24, 2023.
- ^ Fu, Yao; Peng, Hao; Khot, Tushar (2022). "How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources". Yao Fu's Notion. Archived from the original on April 19, 2023. Retrieved June 24, 2023.
- ^ "Model index for researchers". OpenAI API. Archived from the original on June 23, 2023. Retrieved June 23, 2023.
- ^ "Introducing the Center for Researc
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#51
|
23, 2023.
- ^ "Introducing the Center for Research on Foundation Models (CRFM)". Stanford HAI. August 18, 2021. Archived from the original on June 4, 2023. Retrieved April 26, 2023.
- ^ "Reflections on Foundation Models". hai.stanford.edu. October 18, 2021. Archived from the original on August 15, 2024. Retrieved August 15, 2024.
- ^ a b OpenAI (2023). "GPT-4 Technical Report" (PDF). Archived (PDF) from the original on March 14, 2023. Retrieved March 16, 2023.
- ^ Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. IEEE International Conference on Computer Vision (ICCV) 2015. pp. 19–27. arXiv:1506.06724. Archived from the original on February 5, 2023. Retrieved February 7, 2023.
- ^ Vincent, James (November 7, 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on June 11, 2020. Retrieved April 28, 2023.
- ^ a b c d Brown, Tom B.; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon; Ramesh, Aditya; Ziegler, Daniel M.; Wu, Jeffrey; Winter, Clemens; Hesse, Christopher; Chen, Mark; Sigler, Eric; Litwin, Mateusz; Gray, Scott; Chess, Benjamin; Clark, Jack; Berner, Christopher; McCandlish, Sam; Radford, Alec; Sutskever, Ilya; Amodei, Dario (May 28, 2020). "Language Models are Few-Shot Learners". NeurIPS. arXiv:2005.14165v4.
- ^ a b c "ML input trends visualization". Epoch. Archived from the original on July 16, 2023. Retrieved May 2, 2023.
- ^ a b Ver Meer, Dave (June 1, 2023). "ChatGPT Statistics". NamePepper. Archived from the original on June 5, 2023. Retrieved June 9, 2023.
- ^ "GPT-4 has more than a trillion parameters – Report". March 25, 2023. Archived from the original on March 4, 2024. Retrieved October 23, 2023.
- ^ Vincent, James (March 14, 2023). "Google opens up its AI language model PaLM to challenge OpenAI and GPT-3". The Verge. Archived from the original on March 14, 2023. Retrieved April 29, 2023.
- ^ "Google Opens Access to
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#56
|
ieved April 29, 2023.
- ^ "Google Opens Access to PaLM Language Model". Archived from the original on May 31, 2023. Retrieved April 29, 2023.
- ^ Iyer, Aparna (November 30, 2022). "Meet GPT-JT, the Closest Open Source Alternative to GPT-3". Analytics India Magazine. Archived from the original on June 2, 2023. Retrieved April 29, 2023.
- ^ "Meta Debuts AI Language Model, But It's Only for Researchers". PCMAG. February 24, 2023. Archived from the original on July 19, 2023. Retrieved May 21, 2023.
- ^ Islam, Arham (March 27, 2023). "Multimodal Language Models: The Future of Artificial Intelligence (AI)". Archived from the original on May 15, 2023. Retrieved May 15, 2023.
- ^ Islam, Arham (November 14, 2022). "How Do DALL·E 2, Stable Diffusion, and Midjourney Work?". Archived from the original on July 18, 2023. Retrieved May 21, 2023.
- ^ Saha, Shritama (January 4, 2023). "Google Launches Muse, A New Text-to-Image Transformer Model". Analytics India Magazine. Archived from the original on May 15, 2023. Retrieved May 15, 2023.
- ^ Wu, Chenfei; et al. (March 8, 2023). "Visual ChatGPT". arXiv:2303.04671 [cs.CV].
- ^ Bommasani, Rishi; et al. (July 12, 2022). "On the Opportunities and Risks of Foundation Models". arXiv:2108.07258 [cs.LG].
- ^ a b "Aligning language models to follow instructions". openai.com. Archived from the original on March 23, 2023. Retrieved March 23, 2023.
- ^ a b Ouyang, Long; Wu, Jeff; Jiang, Xu; et al. (November 4, 2022). "Training language models to follow instructions with human feedback". NeurIPS. arXiv:2203.02155.
- ^ Ramnani, Meeta (January 28, 2022). "OpenAI dumps its own GPT-3 for something called InstructGPT, and for right reason". Analytics India Magazine. Archived from the original on June 4, 2023. Retrieved April 29, 2023.
- ^ "Stanford CRFM". crfm.stanford.edu. Archived from the original on
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#60
|
. crfm.stanford.edu. Archived from the original on April 6, 2023. Retrieved May 15, 2023.
- ^ "Free Dolly: Introducing the World's First Truly Open Instruction-Tuned LLM". Databricks. April 12, 2023. Archived from the original on July 14, 2023. Retrieved May 15, 2023.
- ^ a b "Introducing ChatGPT". openai.com. Archived from the original on March 16, 2023. Retrieved March 16, 2023.
- ^ Wiggers, Kyle (May 4, 2023). "Microsoft doubles down on AI with new Bing features". Archived from the original on December 7, 2023. Retrieved May 4, 2023.
- ^ "ChatGPT vs. Bing vs. Google Bard: Which AI Is the Most Helpful?". CNET. Archived from the original on July 24, 2023. Retrieved April 30, 2023.
- ^ "Auto-GPT, BabyAGI, and AgentGPT: How to use AI agents". Mashable. April 19, 2023. Archived from the original on July 22, 2023. Retrieved May 15, 2023.
- ^ Marr, Bernard. "Auto-GPT May Be The Strong AI Tool That Surpasses ChatGPT". Forbes. Archived from the original on May 21, 2023. Retrieved May 15, 2023.
- ^ "Microsoft Open-Sources Multimodal Chatbot Visual ChatGPT". InfoQ. Archived from the original on June 3, 2023. Retrieved May 15, 2023.
- ^ Edwards, Benj (January 9, 2023). "Microsoft's new AI can simulate anyone's voice with 3 seconds of audio". Ars Technica. Archived from the original on July 18, 2023. Retrieved May 15, 2023.
- ^ Morrison, Ryan (March 7, 2023). "Salesforce launches Einstei
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#63
|
Ryan (March 7, 2023). "Salesforce launches EinsteinGPT built with OpenAI technology". Archived from the original on April 15, 2023. Retrieved April 10, 2023.
- ^ Sharma, Animesh K.; Sharma, Rahul (2023). "The role of generative pretrained transformers (GPTs) in revolutionising digital marketing: A conceptual model". Journal of Cultural Marketing Strategy. 8 (1): 80–90. doi:10.69554/TLVQ2275.
- ^ Leswing, Kif (April 13, 2023). "Bloomberg plans to integrate GPT-style A.I. into its terminal". CNBC. Archived from the original on May 19, 2023. Retrieved May 4, 2023.
- ^ Melendez, Steven (May 4, 2023). "Learning nonprofit Khan Academy is piloting a version of GPT called Khanmigo". Fast Company. Archived from the original on May 11, 2023. Retrieved May 22, 2023.
- ^ "Khan Academy Pilots GPT-4 Powered Tool Khanmigo for Teachers". THE Journal. Archived from the original on May 7, 2023. Retrieved May 7, 2023.
- ^ Hachman, Mark (May 4, 2023). "Slack GPT will bring AI chatbots to your conversations". PCWorld. Archived from the original on June 9, 2023. Retrieved May 4, 2023.
- ^ Luo, Renqian; et al. (April 3, 2023). "BioGPT: Generative pre-trained transformer for biomedical text generation and mining". Briefings in Bioinformatics. 23 (6). arXiv:2210.10341. doi:10.1093/bib/bbac409. PMID 36156661.
- ^ John, Amy Sarah (May 5, 2023). "Know about ChatGPT's 13 best plugins, designed to improve your overall user experience". Wire19. Archived from the original on May 9, 2023. Retrieved May 7, 2023.
- ^ "ChatGPT plugins". openai.com. March 13, 2024. Archived from the original on March 23, 2023. Retrieved May 7, 2023.
- ^ "How to Use ChatGPT on Google Sheets With GPT for Sheets and Docs". MUO. March 12, 2023. Archived from the original on June 19, 2023. Retrieved May 7, 2023.
- ^ Asay, Matt (February 27, 2023). "Embrace and extend Excel for AI data prep". InfoWorld. Archived from the original on June 2, 2023. Retrieved May 7, 2023.
- ^ a b c d Hicks, William (May 10, 2023). "ChatGPT creator OpenAI is asking startups to remove 'GPT' from their names". The Business Journal. Archived from the original on June 28, 2023. Retrieved May 21, 2023.
- ^ OpenAI (April 24, 2023). "Brand Guidelines". Archived from the original on July 18, 2023. Retrieved May 21, 2023.
- ^ a b "Brand guidelines". Archived from the original on July 18, 2023. Retrieved November 28, 2023.
- ^ "Introducing GPTS". March 13, 2024. Archived from the original on March 20, 2024. Retrieved November 28, 2023.
- ^ a b Heah, Alexa (April 26, 2023). "OpenAI Unsuccessful At Speeding Up Its Attempt To Trademark 'GPT'". DesignTAXI. Archived from the original on April 26, 2023. Retrieved May 21, 2023.
- ^ "NONFINAL OFFI
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#69
|
, 2023. Retrieved May 21, 2023.
- ^ "NONFINAL OFFICE ACTION". USPTO. May 25, 2023. Archived from the original on December 3, 2023. Retrieved December 30, 2023.
- ^ "U.S. Trademark Law". December 2015. Archived from the original on January 17, 2024. Retrieved November 29, 2023.
- ^ "International Trademark Rights". Archived from the original on March 11, 2024. Retrieved November 29, 2023.
- ^ "OpenAI Wants to Trademark 'GPT' Amid Rise of AI Chatbots". Tech Times. April 25, 2023. Archived from the
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#70
|
ts". Tech Times. April 25, 2023. Archived from the original on April 25, 2023. Retrieved May 21, 2023.
- ^ Louise, Nickie (April 3, 2023). "OpenAI files a UDRP case against the current owner of ChatGPT.com". Archived from the original on June 5, 2023. Retrieved May 21, 2023.
- ^ a b Demcak, Tramatm-Igor (April 26, 2023). "OpenAI's Battle for Brand Protection: Can GPT be trademarked?". Lexology. Archived from the original on May 5, 2023. Retrieved May 22, 2023.
- ^ Lawton, George (April 20, 2023). "ChatGPT vs. GPT: How are they different?". TechTarget Enterprise AI. Archived from the original on May 9, 2023. Retrieved May 21, 2023.
- ^ Robb, Drew (April 12, 2023). "GPT-4 vs. ChatGPT: AI Chatbot Comparison". eWEEK. Archived from the original on July 27, 2023. Retrieved May 21, 2023.
- ^ Russo, Philip (August 22, 2023). "The Genesis of Generative AI for Everything Everywhere All at Once in CRE". Commercial Observer. Archived from the original on August 24, 2023.
- ^ "Trademark infringement". Archived from the original on November 30, 2023. Retrieved November 29, 2023.
- ^ Rheintgen, Kathleen A. (Husch Blackwell LLP) (August 16, 2013). "Branding 101: trademark descriptive fair use". Lexology. Archived from the original on May 21, 2023. Retrieved May 21, 2023.
- ^ finetune-transformer-lm, OpenAI, June 11, 2018, archived from the original on May 19, 2023, retrieved May 1, 2023
- ^ "G
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#73
|
inal on May 19, 2023, retrieved May 1, 2023
- ^ "GPT-2: 1.5B release". openai.com. Archived from the original on March 31, 2023. Retrieved May 1, 2023.
- ^ Solaiman, Irene; Brundage, Miles; Clark, Jack; Askell, Amanda; Herbert-Voss, Ariel; Wu, Jeff; Radford, Alec; Krueger, Gretchen; Kim, Jong Wook; Kreps, Sarah; McCain, Miles; Newhouse, Alex; Blazakis, Jason; McGuffie, Kris; Wang, Jasmine (November 12, 2019). "Release Strategies and the Social Impacts of Language Models". arXiv:1908.09203 [cs.CL].
- ^ gpt-2, OpenAI, May 1, 2023, archived from the original on March 11, 2023, retrieved May 1, 2023
- ^ "WebGPT: Improving the factual accuracy of language models through web browsing". openai.com. Archived from the original on June 21, 2023. Retrieved July 2, 2023.
- ^ Nakano, Reiichiro; Hilton, Jacob; Balaji, Suchir; Wu, Jeff; Ouyang, Long; Kim, Christina; Hesse, Christopher; Jain, Shantanu; Kosaraju, Vineet; Saunders, William; Jiang, Xu; Cobbe, Karl; Eloundou, Tyna; Krueger, Gretchen; Button, Kevin (December 1, 2021). "WebGPT: Browser-assisted question-answering with human feedback". CoRR. arXiv:2112.09332. Archived from the original on July 2, 2023. Retrieved July 2, 2023.
- ^ "GPT-4". openai.com. Archived from the original on March 14, 2023. Retrieved May 1, 2023.
- ^ OpenAI (March 27, 2023). "GPT-4 Technical Report". arXiv:2303.08774 [cs.CL].
- ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (April 13, 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].
- ^ GPT-4 System Card Archived April 7, 2023, at the Wayback Machine, OpenAI, March 23, 2023 (Accessed May 22, 2023).
- ^ "Hello GPT-4o". OpenAI
|
https://en.wikipedia.org/wiki/Generative_pre-trained_transformer#77
|
Accessed May 22, 2023).
- ^ "Hello GPT-4o". OpenAI. May 13, 2024. Archived from the original on May 14, 2024. Retrieved August 8, 2024.
- ^ "Introducing GPT-4.5". OpenAI. February 27, 2025. Archived from the original on March 19, 2025. Retrieved March 18, 2025.
- ^ "Introducing GPT-4.1 in the API". OpenAI. April 14, 2025. Archived from the original on May 17, 2025. Retrieved April 14, 2025.
GPT-2
Generative Pre-trained Transformer 2 (GPT-2) is a large language model by OpenAI and the second in their foundational series of GPT models. GPT-2 was pre-trained on a dataset of 8 million web pages.[2] It was partially released in February 2019, followed by full release of the 1.5-billion-parameter model on November 5, 2019.[3][4][5]
GPT-2 was created as a "direct scale-up" of GPT-1[6] with a ten-fold increase in both its parameter count and the size of its training dataset.[5] It is a general-purpose learner, and its ability to perform various tasks was a consequence of its general ability to accurately predict the next item in a sequence,[2][7] which enabled it to translate texts, answer questions about a topic from a text, summarize passages from a larger text,[7] and generate text output on a level sometimes indistinguishable from that of humans; however, it could become repetitive or nonsensical when generating long passages.[8] It was superseded by the GPT-3 and GPT-4 models, which are not open source.
GPT-2 has, like its predecessor GPT-1 and its successors GPT-3 and GPT-4, a generative pre-trained transformer architecture, implementing a deep neural network, specifically a transformer model,[6] which uses attention instead of older recurrence- and convolution-based architectures.[9][10] Attention mechanisms allow the model to selectively focus on segments of input text it predicts to be the most relevant.[11][12] This architecture allows for greatly increased parallelization, and outperforms previous benchmarks for RNN/CNN/LSTM-based models.[6]
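The attention operation referred to here can be written compactly. The function below is a minimal NumPy sketch of scaled dot-product attention as defined in the transformer literature; it omits the multi-head projections and the causal mask that GPT-2 adds in practice.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Each output position is a weighted average of the value vectors V,
    with weights given by the similarity of its query to every key."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # (seq_q, seq_k) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # (seq_q, d_v)

# Toy example: 4 positions with 8-dimensional queries/keys/values.
rng = np.random.default_rng(0)
Q = K = V = rng.normal(size=(4, 8))
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 8)
```

Because every position attends to every other position in a single matrix product, the whole sequence can be processed in parallel, which is the parallelization advantage noted above.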
Training
Since the transformer architecture enabled massive parallelization, GPT models could be trained on larger corpora than previous NLP (natural language processing) models. While the GPT-1 model demonstrated that the approach was viable, GPT-2 would further explore the emergent properties of networks trained on extremely large corpora. CommonCrawl, a large corpus produced by web crawling and previously used in training NLP systems,[13] was considered due to its large size, but was rejected after further review revealed large amounts of unintelligible content.[2][13] Instead, OpenAI developed a new corpus, known as WebText; rather than scraping content indiscriminately from the World Wide Web, WebText was generated by scraping only pages linked to by Reddit posts that had received at least 3 karma prior to December 2017. The corpus was subsequently cleaned: HTML documents were parsed into plain text, duplicate pages were eliminated, and Wikipedia pages were removed (since their presence in many other datasets could have induced overfitting).[2]
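The WebText construction described above is essentially a filter-and-clean pipeline. The sketch below mirrors those reported steps in simplified form; the helper functions and record layout are hypothetical, and the real pipeline (which also used heuristic content-level de-duplication) was never released in full.

```python
def build_webtext(reddit_posts, fetch_html, to_plain_text):
    """Simplified sketch of the reported WebText filtering steps:
    keep outbound links from Reddit posts with >= 3 karma, parse HTML
    to plain text, drop duplicates, and drop Wikipedia pages."""
    seen = set()
    corpus = []
    for post in reddit_posts:
        if post["karma"] < 3:           # karma threshold as a quality proxy
            continue
        url = post["link"]
        if "wikipedia.org" in url:      # avoid overlap with other datasets
            continue
        text = to_plain_text(fetch_html(url))
        if text in seen:                # crude de-duplication
            continue
        seen.add(text)
        corpus.append(text)
    return corpus
```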
While the cost of training GPT-2 is known to have been $256 per hour,[14][15] the number of hours it took to complete training is unknown; therefore, the overall training cost cannot be estimated accurately.[16] However, comparable large language models using transformer architectures have had their costs documented in more detail; the training processes for BERT and XLNet consumed, respectively, $6,912 and $245,000 of resources.[15]
Release
GPT-2 was first announced on 14 February 2019. A February 2019 article in The Verge by James Vincent said that, while "[the] writing it produces is usually easily identifiable as non-human", it remained "one of the most exciting examples yet" of language generation programs:[17]
Give it a fake headline, and it’ll write the rest of the article, complete with fake quotations and statistics. Feed it the first line of a short story, and it’ll tell you what happens to your character next. It can even write fan fiction, given the right prompt.[17]
The Guardian described this output as "plausible newspaper prose";[8] Kelsey Piper of Vox said "one of the coolest AI systems I’ve ever seen may also be the one that will kick me out of my job".[18] GPT-2's flexibility was described as "impressive" by The Verge; specifically, its ability to translate text between languages, summarize long articles, and answer trivia questions was noted.[17]
A study by the University of Amsterdam employing a modified Turing test found that at least in some scenarios, participants were unable to distinguish poems generated by GPT-2 from those written by humans.[19]
The GPT-2 series comprised four models of increasing size, as reported in the paper; they were released in stages rather than all at once.
Restrictions and partial release
While previous OpenAI models had been made immediately available to the public, OpenAI initially refused to make a public release of GPT-2's source code when announcing it in February, citing the risk of malicious use;[8] limited access to the model (i.e. an interface that allowed input and provided output, not the source code itself) was allowed for selected press outlets on announcement.[8] One commonly-cited justification was that, since generated text was usually completely novel, it could be used by spammers to evade automated filters; OpenAI demonstrated a version of GPT-2 fine-tuned to "generate infinite positive – or negative – reviews of products".[8]
Another justification was that GPT-2 could be used to generate text that was obscene or racist. Researchers such as Jeremy Howard warned of "the technology to totally fill Twitter, email, and the web up with reasonable-sounding, context-appropriate prose, which would drown out all other speech and be impossible to filter".[17] The Allen Institute for Artificial Intelligence, in response to GPT-2, announced a tool to detect "neural fake news".[20]
However, opinion was divided. A February 2019 article in The Verge argued that the threat posed by GPT-2 had been exaggerated;[21] Anima Anandkumar, a professor at Caltech and director of machine learning research at Nvidia, said that there was no evidence that GPT-2 had the capabilities to pose the threats described by OpenAI, and that what they did was the "opposite of open", characterizing their refusal to release the full model as "malicious BS".[21] The Gradient published an open letter to OpenAI requesting that they release the model publicly, comparing the threat posed by text-generation AI to the threat posed by the printing press, and giving Photoshop as an example of "a technology that has (thankfully) not destroyed modern society despite its potential for chaos":[22]
Thirty years later, society has emerged relatively unscathed despite Photoshop being simple enough for high school students to use and ubiquitous enough to commandeer its own verb. Why? Precisely because everyone knows about Photoshop.[22]
774M release
While OpenAI did not release the fully-trained model or the corpora it was trained on, the description of its methods in prior publications (and the free availability of the underlying technology) made it possible for GPT-2 to be replicated by others as free software; one such replication, OpenGPT-2, was released in August 2019, in conjunction with a freely licensed version of WebText called OpenWebText. The cloud compute costs for OpenGPT-2 were given as approximately $50,000.[23]
On August 20, 2019, OpenAI released a partial version of GPT-2, with 774 million parameters (roughly half the size of the full 1.5 billion parameter model).[24]
Full 1.5B release
Initial concerns that GPT-2 would lend itself to widespread misuse did not come to pass; The Verge said that "there are reasons to be skeptical about claims that AI technology will usher in some sort of ‘infopocalypse.’ For a start, we already have programs that can generate plausible text at high volume for little cost: humans."[25] By November 2019, OpenAI said that they had "seen no strong evidence of misuse so far", and the full version, with 1.5 billion parameters trained with forty gigabytes of data, "about eight thousand times larger than the collected works of Shakespeare",[26] was released on November 5, 2019.[3][4]
Small and Medium Releases
Two other, smaller releases of GPT-2 are available: the small version with 124M parameters and the medium version with 355M parameters. Both can be downloaded from Hugging Face.[27][28]
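Both checkpoints can be loaded directly with the Hugging Face transformers library; the sketch below uses the Hub identifiers gpt2 (the 124M model) and gpt2-medium (the 355M model), with an illustrative prompt.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "gpt2" is the 124M-parameter model; swap in "gpt2-medium" for 355M.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

inputs = tokenizer("GPT-2 was pre-trained on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```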
Limitations
While GPT-2's ability to generate plausible passages of natural language text was generally remarked on positively, its shortcomings were noted as well, especially when generating texts longer than a couple of paragraphs; Vox said "the prose is pretty rough, there’s the occasional non-sequitur, and the articles get less coherent the longer they get".[18] The Verge similarly noted that longer samples of GPT-2 writing tended to "stray off topic" and lack overall coherence;[17] The Register opined that "a human reading it should, after a short while, realize something's up", and noted that "GPT-2 doesn't answer questions as well as other systems that rely on algorithms to extract and retrieve information."[14]
GPT-2 deployment is resource-intensive; the full version of the model is larger than five gigabytes, making it difficult to embed locally into applications, and consumes large amounts of RAM. In addition, performing a single prediction "can occupy a CPU at 100% utilization for several minutes", and even with GPU processing, "a single prediction can take seconds". To alleviate these issues, the company Hugging Face created DistilGPT2, using knowledge distillation to produce a smaller model that "scores a few points lower on some quality benchmarks", but is "33% smaller and twice as fast".[citation needed]
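Knowledge distillation of the kind used for DistilGPT2 trains a small "student" model to match a large "teacher". The loss below is a generic distillation objective (KL divergence against temperature-softened teacher logits), shown as an illustration rather than Hugging Face's exact recipe, which also keeps the ordinary language-modeling loss.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL divergence between temperature-softened teacher and student
    next-token distributions; the student learns to mimic the teacher."""
    t = temperature
    soft_teacher = F.softmax(teacher_logits / t, dim=-1)
    log_student = F.log_softmax(student_logits / t, dim=-1)
    # Scale by t**2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(log_student, soft_teacher, reduction="batchmean") * t * t

# Toy example: a batch of 4 positions over a 10-token vocabulary.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
print(distillation_loss(student, teacher))
```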
Application and subsequent research
Even before the release of the full version, GPT-2 was used for a variety of applications and services, as well as for entertainment. In June 2019, a subreddit named r/SubSimulatorGPT2 was created in which a variety of GPT-2 instances trained on different subreddits made posts and replied to each other's comments, creating a situation where one could observe "an AI personification of r/Bitcoin argue with the machine learning-derived spirit of r/ShittyFoodPorn";[25] by July of that year, a GPT-2-based software program released to autocomplete lines of code in a variety of programming languages was described by users as a "game-changer".[29]
In 2019, AI Dungeon was launched, which used GPT-2 to generate dynamic text adventures based on user input.[30] AI Dungeon now offers access to the largest release of GPT-3 through its API as an optional paid upgrade, while the free version of the site uses the second-largest release of GPT-3.[31] Latitude, the company formed around AI Dungeon, raised $3.3 million in seed funding in 2021.[32] Several websites host interactive demonstrations of different instances of GPT-2 and other transformer models.[33][34][35]
In February 2021, a crisis center for troubled teens announced that they would begin using a GPT-2-derived chatbot to help train counselors by allowing them to have conversations with simulated teens (this use was purely for internal purposes, and did not involve having GPT-2 communicate with the teens themselves).[36]
On May 9, 2023, OpenAI released a mapped version of GPT-2, using its successor model GPT-4 to map each neuron of GPT-2 and determine its function.[37]
Performance and evaluation
GPT-2 became capable of performing a variety of tasks beyond simple text production due to the breadth of its dataset and technique: answering questions, summarizing, and even translating between languages in a variety of specific domains, without being instructed in anything beyond how to predict the next word in a sequence.[17][18]
One example of generalized learning is GPT-2's ability to perform machine translation between French and English, for which GPT-2's performance was assessed using WMT-14 translation tasks. GPT-2's training corpus included virtually no French text; non-English text was deliberately removed while cleaning the dataset prior to training, and as a consequence, only 10 MB of French text remained of the 40,000 MB corpus for the model to learn from (mostly from foreign-language quotations in English posts and articles).[2]
Despite this, GPT-2 achieved 5 BLEU on the WMT-14 English-to-French test set (slightly below the score of a translation via word-for-word substitution). It was also able to outperform several contemporary (2017) unsupervised machine translation baselines on the French-to-English test set, where GPT-2 achieved 11.5 BLEU. This remained below the highest-performing contemporary unsupervised approach (2019), which had achieved 33.5 BLEU.[2] However, other models used large amounts of French text to achieve these results; GPT-2 was estimated to have used a monolingual French corpus approximately 1/500 the size of comparable approaches.[2]
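This translation behaviour was elicited purely through prompting. The snippet below sketches the few-shot conditioning format reported in the GPT-2 paper, in which example pairs are followed by an unfinished pair for the model to complete; the example sentences themselves are made up.

```python
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# Few-shot prompt in the "english sentence = french sentence" format
# reported in the GPT-2 paper; the model continues the pattern after '='.
prompt = (
    "good morning = bonjour\n"
    "thank you very much = merci beaucoup\n"
    "how are you? ="
)
print(generator(prompt, max_new_tokens=10)[0]["generated_text"])
```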
GPT-2 was to be followed by the 175-billion-parameter GPT-3,[39] revealed to the public in 2020[40] (whose source code has never been made available); access to GPT-3 is provided exclusively through APIs offered by OpenAI and Microsoft.[41] GPT-3 was in turn followed by GPT-4.
References
[edit]- ^ "gpt-2". GitHub. Archived from the original on 11 March 2023. Retrieved 13 March 2023.
- ^ a b c d e f g Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (14 February 2019). "Language models are unsupervised multitask learners" (PDF). OpenAI. 1 (8). Archived (PDF) from the original on 6 February 2021. Retrieved 19 December 2020.
- ^ a b Vincent, James (7 November 2019). "OpenAI has published the text-generating AI it said was too dangerous to share". The Verge. Archived from the original on 11 June 2020. Retrieved 19 December 2020.
- ^ a b "GPT-2: 1
|
https://en.wikipedia.org/wiki/GPT-2#30
|
020. Retrieved 19 December 2020.
- ^ a b "GPT-2: 1.5B Release". OpenAI. 2019-11-05. Archived from the original on 2019-11-14. Retrieved 2019-11-14.
- ^ a b "Better Language Models and Their Implications". OpenAI. 14 February 2019. Archived from the original on 19 December 2020. Retrieved 19 December 2020.
- ^ a b c Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (11 June 2018). "Improving Language Understanding by Generative Pre-Training" (PDF). OpenAI. p. 12. Archived (PDF) from the original on 26 January 2021. Retrieved 23 January 2021.
- ^ a b Hegde, Chaitra; Patil, Shrikumar (9 June 2020). "Unsupervised Paraphrase Generation using Pre-trained Language Models". arXiv:2006.05477 [cs.CL].
- ^ a b c d e Hern, Alex (14 February 2019). "New AI fake text generator may be too dangerous to release, say creators". The Guardian. Archived from the original on 14 February 2019. Retrieved 19 December 2020.
- ^ Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Jones, Llion; Gomez, Aidan N; Kaiser, Łukasz; Polosukhin, Illia (2017). "Attention is All you Need" (PDF). Advances in Neural Information Processing Systems. 30. Curran Associates, Inc.
- ^ Olah, Chris; Carter, Shan (8 September 2016). "Attention and Augmented Recurrent Neural Networks". Distill. 1 (9). doi:10.23915/distill.00001. Archived from the original on 22 December 2020. Retrieved 22 January 2021.
- ^ Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (1 September 2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
- ^ Luong, Minh-Thang; Pham, Hieu; Manning, Christopher D. (17 August 2015). "Effective Approaches to Attention-based Neural Machine Translation". arXiv:1508.04025 [cs.CL].
- ^ a b Trinh, Trieu H.; Le, Quoc V. (7 June 2018). "A Simple Method for Commonsense Reasoning". arXiv:1806.02847 [cs.CL].
- ^ a b Quach, Katyanna (14 February 2019). "Roses are red, this is sublime: We fed OpenAI's latest chat bot a classic Reg headline". The Register. Archived from the original on 9 March 2021. Retrieved 27 February 2021.
- ^ a b "The Staggering Cost of Training SOTA AI Models". Synced. 27 June 2019. Archived from the original on 24 November 2020. Retrieved 27 February 2021.
- ^ Wiggers, Kyle (23 March 2020). "Google open-sources framework that reduces AI training costs by up to 80%". VentureBeat. Archived from the original on 26 November 2020. Retrieved 27 February 2021.
- ^ a b c d e f Vincent, James (14 February 2019). "OpenAI's new multitalented AI writes, translates, and slanders". The Verge. Archived from the original on 18 December 2020. Retrieved 19 December 2020.
- ^ a b c Piper, Kelsey (14 February 2019). "An AI helped us write this article". Vox. Archived from the original on 8 November 2020. Retrieved 19 December 2020.
- ^ Köbis, Nils; Mossink, Luca D. (1 January 2021). "Artificial intelligence versus Maya Angelou: Experimental evidence that people cannot differentiate AI-generated from human-written poetry". Computers in Human Behavior. 114: 106553. doi:10.1016/j.chb.2020.106553. hdl:21.11116/0000-0007-13E5-1.
- ^ Schwartz, Oscar (4 July 2019). "Could 'fake text' be the next global political threat?". The Guardian.