benchmark LLMs, to prevent contamination.[128]

Programming

- APPS: 10,000 problems from Codewars, AtCoder, Kattis, and Codeforces.[129]
- MBPP (Mostly Basic Programming Problems): 974 short Python functions designed to be solved by entry-level programmers. Each comes with a text description and unit tests. They were written by an internal pool of crowdworkers who have basic knowledge of Python.[115]
- DS-1000: 1,000 data science problems obtained by reformulating 451 unique StackOverflow problems, requiring the use of 7 Python libraries such as NumPy and Pandas. Responses are scored by running test cases and comparing outputs, and by checking for the presence or absence of specific APIs or keywords.[130][131]
- HumanEval: 164 problems where the solution is always a Python function, often just a few lines long.[8] (A minimal sketch of this kind of test-based scoring appears after this list.)
- CodeElo: 387 contest problems from Codeforces during 2024, annotated with metadata such as contest divisions, problem difficulty ratings, and problem algorithm tags. Benchmarking is run by submitting directly to Codeforces, resulting in an Elo rating. Limited to 8 submissions per problem.[132]
- Aider Polyglot: 225 of the hardest coding exercises from Exercism, in C++, Go, Java, JavaScript, Python, and Rust.[133]
- BigCodeBench: 1,140 tasks that require multiple function calls. The benchmark involves 139 libraries and 7 domains. BigCodeBench-Hard is a 148-task subset of the full benchmark.[134][135]
- SWE-bench: 2,294 software engineering problems drawn from real GitHub issues and corresponding pull requests across 12 popular Python repositories. Given a codebase and an issue, the task is to edit the codebase to solve the issue.[136] There are two subsets: Lite (300 problems that are faster to run) and Verified (a human-validated subset of 500 problems reviewed by software engineers).[137]
- Multi-SWE-bench: 1,632 problems across 7 languages: Java, TypeScript, JavaScript, Go, Rust, C, and C++. Similar to SWE-bench.[138]
- SWE-bench Multimodal: a variant of SWE-bench, with 619 task instances from 17 popular JavaScript repositories, each featuring images that are required for solving the task.[139]
- SWE-Lancer: 1,488 freelance software engineering tasks from Upwork. Includes implementation tasks (from $50 bug fixes to $32,000 feature implementations) and managerial tasks, where the model must choose between technical implementation proposals.[140][141]
- KernelBench: 250 PyTorch machine learning tasks, for which a CUDA kernel must be written.[142]
- Cybench (cybersecurity bench): 40 professional-level Capture the Flag (CTF) tasks from 4 competitions. Tasks are broken down into subtasks for more fine-grained scoring. At least one professional-level human team at each competition was able to solve each of the tasks. The time it took the fastest team to solve each task ranged from 2 minutes to 25 hours.[143]
- HCAST (Human-Calibrated Autonomy Software Tasks): 189 tasks in machine learning, cybersecurity, software engineering, and general reasoning. Each task has a "baseline", the measured average time required for a human skilled in the task domains, working under the same conditions as the AI agents. The baselines range from 1 minute to 8+ hours.[144]
- PaperBench: 8,316 individually gradable tasks that would be necessary for replicating 20 Spotlight and Oral papers from ICML 2024 from scratch. The human baseline of ML PhDs (best of 3 attempts) at 48 hours of effort is 41.4%.[145]
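The test-execution benchmarks above (MBPP, HumanEval, DS-1000, BigCodeBench) are typically scored by running the model's completion against reference unit tests and counting a task as solved only if every test passes; a metric such as pass@1 is then the fraction of tasks solved by a single sampled completion. The sketch below illustrates that scoring loop; the task text, the function name min_max, and the asserts are invented for illustration and are not items from any of these benchmarks:

    # Illustrative only: a made-up MBPP/HumanEval-style task, not an actual benchmark item.
    task = {
        "prompt": "Write a function min_max(nums) that returns a (minimum, maximum) tuple.",
        "tests": [
            "assert min_max([3, 1, 2]) == (1, 3)",
            "assert min_max([7]) == (7, 7)",
            "assert min_max([-5, 0, 5]) == (-5, 5)",
        ],
    }

    # A completion as it might be returned by a model under evaluation.
    completion = """
    def min_max(nums):
        return (min(nums), max(nums))
    """

    def passes_all_tests(code: str, tests: list[str]) -> bool:
        """Execute the candidate code, then run each assert; any failure means the task is unsolved."""
        namespace: dict = {}
        try:
            exec(code, namespace)      # define the candidate function
            for test in tests:
                exec(test, namespace)  # raises AssertionError on a wrong answer
        except Exception:
            return False
        return True

    # pass@1 over a whole benchmark is simply the fraction of tasks whose single
    # sampled completion passes all reference tests.
    print(passes_all_tests(completion, task["tests"]))  # True for this toy example

Real harnesses additionally sandbox the execution and enforce timeouts, since the candidate code is untrusted.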
General

- GPQA (Google-Proof Q&A): 448 multiple-choice questions written by domain experts in biology, physics, and chemistry, designed to be PhD-level. The "Diamond" subset contains the 198 hardest questions.[146] OpenAI found that human experts achieve an average score of 69.7% on the Diamond subset.[147]
- SuperGPQA: 26,529 multiple-choice questions collected by domain experts in 285 graduate-level disciplines. The questions were collected by individuals with or pursuing a PhD and then refined and inspected with the help of large language models.[148]
- MathVista: 6,141 questions involving quantitative reasoning that require reading a picture to solve.[149]
- AGIEval: questions from 20 official, public, and high-standard admission and qualification exams, such as the SAT, Gaokao, law school admission tests, math competitions, lawyer qualification tests, and national civil service exams.[150]
- OlympicArena: 11,163 problems from 62 distinct Olympic competitions.[151]
- OlympiadBench: 8,476 math and physics problems in English and Chinese, sourced from International Olympiads, Chinese Olympiads, and Gaokao.[152]
- ARC-AGI (Abstraction and Reasoning Corpus for Artificial General Intelligence): Given three pairs of before-and-after diagrams of applying a rule, apply the same rule to the fourth before-diagram. It is similar to a Raven's Progressive Matrices test.[153] (A toy illustration of this task format appears after this list.)
- LiveBench: A series of benchmarks released monthly, including high school math competition questions, competitive coding questions, logic puzzles, and other tasks.[154]
- Humanity's Last Exam: 3,000 multimodal questions across over a hundred academic subjects, with a held-out private dataset left unreleased to prevent contamination. 10% of the questions require both image and text comprehension and the rest are fully text-based. 80% of questions are scored by exact string matching, and the rest are multiple-choice.[155]
- SimpleBench: A multiple-choice text benchmark with over 200 questions covering spatio-temporal reasoning, social intelligence, and linguistic adversarial robustness (or trick questions). It is designed to test "everyday human reasoning".[156]
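As referenced in the ARC-AGI item above, each task supplies a few input-output grid pairs generated by a hidden rule, and the solver must apply the inferred rule to a new input, scored by exact match against the hidden target grid. The toy example below uses a deliberately simple invented rule (mirror each row left-to-right); real ARC-AGI grids and rules are considerably more varied:

    # A made-up ARC-AGI-style task: the grids and the rule are invented for
    # illustration and are far simpler than real ARC-AGI items.
    demonstrations = [
        ([[1, 0, 0], [2, 2, 0]], [[0, 0, 1], [0, 2, 2]]),
        ([[3, 3, 0], [0, 0, 4]], [[0, 3, 3], [4, 0, 0]]),
        ([[5, 0], [0, 6]], [[0, 5], [6, 0]]),
    ]
    test_input = [[7, 0, 0], [0, 8, 0]]

    def apply_inferred_rule(grid):
        # The rule a solver would have to infer from the three demonstrations:
        # mirror each row left-to-right.
        return [list(reversed(row)) for row in grid]

    # The inferred rule must reproduce every demonstration pair, and is then
    # applied to the held-out test input.
    assert all(apply_inferred_rule(src) == dst for src, dst in demonstrations)
    print(apply_inferred_rule(test_input))  # [[0, 0, 7], [0, 8, 0]]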
References

- ^ Chen, Danqi; Yih, Wen-tau (July 2020). Savary, Agata; Zhang, Yue (eds.). "Open-Domain Question Answering". Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: Tutorial Abstracts. Online: Association for Computational Linguistics: 34–37. doi:10.18653/v1/2020.acl-tutorials.8.
- ^ Weng, Lilian (2020-10-29). "How to Build an Open-Domain Question Answering System?". lilianweng.github.io. Retrieved 2025-03-05.
- ^ a b Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (February 14, 2019). "Language Models are Unsupervised Multitask Learners" (PDF). OpenAI.
- ^ "English Gigaword Fifth Edition". Linguistic Data Consortium. June 17, 2011. Retrieved 2025-05-17.
- ^ a b Chelba, Ciprian; Mikolov, Tomas; Schuster, Mike; Ge, Qi; Brants, Thorsten; Koehn, Phillipp; Robinson, Tony (2014-03-04), One Billion Word Benchmark for Measuring Progress in Statistical Language Modeling, arXiv:1312.3005
- ^ a b Dehghani, Mostafa; Tay, Yi; Gritsenko, Alexey A.; Zhao, Zhe; Houlsby, Neil; Diaz, Fernando; Metzler, Donald; Vinyals, Oriol (2021-07-14), The Benchmark Lottery, arXiv:2107.07002
- ^ DeepSeek-AI; Guo, Daya; Yang, Dejian; Zhang, Haowei; Song, Junxiao; Zhang, Ruoyu; Xu, Runxin; Zhu, Qihao; Ma, Shirong (2025-01-22), DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, arXiv:2501.12948
- ^ a b Chen, Mark; Tworek, Jerry; Jun, Heewoo; Yuan, Qiming; Pinto, Henrique Ponde de Oliveira; Kaplan, Jared; Edwards, Harri; Burda, Yuri; Joseph, Nicholas (2021-07-14), Evaluating Large Language Models Trained on Code, arXiv:2107.03374
- ^ Vedantam, Ramakrishna; Lawrence Zitnick, C.; Parikh, Devi (2015). "CIDEr: Consensus-Based Image Description Evaluation". Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR): 4566–4575.
- ^ Anderson, Peter; Fernando, Basura; Johnson, Mark; Gould, Stephen (2016). "SPICE: Semantic Propositional Image Caption Evaluation". In Leibe, Bastian; Matas, Jiri; Sebe, Nicu; Welling, Max (eds.). Computer Vision – ECCV 2016. Lecture Notes in Computer Science. Vol. 9909. Cham: Springer International Publishing. pp. 382–398. doi:10.1007/978-3-319-46454-1_24. ISBN 978-3-319-46454-1.
- ^ Northcutt, Curtis G.; Athalye, Anish; Mueller, Jonas (2021-11-07), Pervasive Label Errors in Test Sets Destabilize Machine Learning Benchmarks, arXiv:2103.14749
- ^ Richie, Russell; Grover, Sachin; Tsui, Fuchiang (Rich) (May 2022). Demner-Fushman, Dina; Cohen, Kevin Bretonnel; Ananiadou, Sophia; Tsujii, Junichi (eds.). "Inter-annotator agreement is not the ceiling of machine learning performance: Evidence from a comprehensive set of simulations". Proceedings of the 21st Workshop on Biomedical Language Processing. Dublin, Ireland: Association for Computational Linguistics: 275–284. doi:10.18653/v1/2022.bionlp-1.26.
- ^ Artstein, Ron (2017), Ide, Nancy; Pustejovsky, James (eds.), "Inter-annotator Agreement", Handbook of Linguistic Annotation, Dordrecht: Springer Netherlands, pp. 297–313, doi:10.1007/978-94-024-0881-2_11, ISBN 978-94-024-0881-2, retrieved 2025-02-22
- ^ Nie, Yixin; Zhou, Xiang; Bansal, Mohit (November 2020). "What Can We Learn from Collective Human Opinions on Natural Language Inference Data?". In Webber, Bonnie; Cohn, Trevor; He, Yulan; Liu, Yang (eds.). Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). Online: Association for Computational Linguistics. pp. 9131–9143. doi:10.18653/v1/2020.emnlp-main.734.
- ^ Pavlick, Ellie; Kwiatkowski, Tom (November 2019). "Inherent Disagreements in Human Textual Inferences". Transactions of the Association for Computational Linguistics. 7: 677–694. doi:10.1162/tacl_a_00293. ISSN 2307-387X.
- ^ Gururangan, Suchin; Swayamdipta, Swabha; Levy, Omer; Schwartz, Roy; Bowman, Samuel R.; Smith, Noah A. (2018-04-16), Annotation Artifacts in Natural Language Inference Data, arXiv:1803.02324
- ^ Deng, Chunyuan; Zhao, Yilun; Tang, Xiangru; Gerstein, Mark; Cohan, Arman (June 2024). "Investigating Data Contamination in Modern Benchmarks for Large Language Models". In Duh, Kevin; Gomez, Helena; Bethard, Steven (eds.). Proceedings of the 2024 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (Volume 1: Long Papers). Mexico City, Mexico: Association for Computational Linguistics. pp. 8706–8719. arXiv:2311.09783. doi:10.18653/v1/2024.naacl-long.482.
- ^ LI, Yanyang (2025-02-17), lyy1994/awesome-data-contamination, retrieved 2025-02-22
- ^ Shannon, C. E. (1951). "Prediction and Entropy of Printed English". Bell System Technical Journal. 30 (1): 50–64. doi:10.1002/j.1538-7305.1951.tb01366.x. ISSN 1538-7305.
- ^ Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (February 14, 2019). "Better language models and their implications". OpenAI.
- ^ Magnusson, Ian; Bhagia, Akshita; Hofmann, Valentin; Soldaini, Luca; Jha, Ananya Harsh; Tafjord, Oyvind; Schwenk, Dustin; Walsh, Evan Pete; Elazar, Yanai (2024-12-07), Paloma: A Benchmark for Evaluating Language Model Fit, arXiv:2312.10523
- ^ Davis, Ernest (2023-10-23). "Benchmarks for Automated Commonsense Reasoning: A Survey". ACM Comput. Surv. 56 (4): 81:1–81:41. arXiv:2302.04752. doi:10.1145/3615355. ISSN 0360-0300.
- ^ Levesque, Hector; Davis, Ernest; Morgenstern, Leora (2012). The Winograd Schema Challenge. Proceedings of the Thirteenth International Conference on Principles of Knowledge Representation and Reasoning.
- ^ Kocijan, Vid; Davis, Ernest; Lukasiewicz, Thomas; Marcus, Gary; Morgenstern, Leora (2023-07-11). "The defeat of the Winograd Schema Challenge". Artificial Intelligence. 325: 103971. arXiv:2201.02387. doi:10.1016/j.artint.2023.103971. ISSN 0004-3702. S2CID 245827747.
- ^ Sakaguchi, Keisuke; Le Bras, Ronan; Bhagavatula, Chandra; Choi, Yejin (2019). "WinoGrande: An Adversarial Winograd Schema Challenge at Scale". arXiv:1907.10641 [cs.CL].
- ^ "The Corpus of Linguistic Acceptability (CoLA)". nyu-mll.github.io. Archived from the original on 2025-03-11. Retrieved 2025-04-19.
- ^ Warstadt, Alex; Singh, Amanpreet; Bowman, Samuel R. (November 2019). "Neural Network Acceptability Judgments". Transactions of the Association for Computational Linguistics. 7: 625–641. arXiv:1805.12471. doi:10.1162/tacl_a_00290. ISSN 2307-387X.
- ^ Bowman, Samuel R.; Angeli, Gabor; Potts, Christopher; Manning, Christopher D. (September 2015). "A large annotated corpus for learning natural language inference". In Màrquez, Lluís; Callison-Burch, Chris; Su, Jian (eds.). Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. Lisbon, Portugal: Association for Computational Linguistics. pp. 632–642. arXiv:1508.05326. doi:10.18653/v1/D15-1075.
- ^ "The Stanford Natural Language Processing Group". nlp.stanford.edu. Retrieved 2025-02-22.
- ^ Bojar, Ondřej; Buck, Christian; Federmann, Christian; Haddow, Barry; Koehn, Philipp; Leveling, Johannes; Monz, Christof; Pecina, Pavel; Post, Matt; Saint-Amand, Herve; Soricut, Radu; Specia, Lucia; Tamchyna, Aleš (June 2014). Bojar, Ondřej; Buck, Christian; Federmann, Christian; Haddow, Barry; Koehn, Philipp; Monz, Christof; Post, Matt; Specia, Lucia (eds.). "Findings of the 2014 Workshop on Statistical Machine Translation". Proceedings of the Ninth Workshop on Statistical Machine Translation. Baltimore, Maryland, USA: Association for Computational Linguistics: 12–58. doi:10.3115/v1/W14-3302. hdl:20.500.11820/789fbc29-61e0-4529-af4a-819461c57a8f.
- ^ Williams, Adina; Nangia, Nikita; Bowman, Samuel R. (2018-02-19), A Broad-Coverage Challenge Corpus for Sentence Understanding through Inference, arXiv:1704.05426
- ^ Chen, Danqi; Bolton, Jason; Manning, Christopher D. (2016-08-08), A Thorough Examination of the CNN/Daily Mail Reading Comprehension Task, arXiv:1606.02858
- ^ Zellers, Rowan; Bisk, Yonatan; Schwartz, Roy; Choi, Yejin (2018-08-16), SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference, arXiv:1808.05326
- ^ Zellers, Rowan; Holtzman, Ari; Bisk, Yonatan; Farhadi, Ali; Choi, Yejin (2019-05-19), HellaSwag: Can a Machine Really Finish Your Sentence?, arXiv:1905.07830
- ^ "HellaSwag". rowanzellers.com. Retrieved 2025-02-06.
- ^ Lai, Guokun; Xie, Qizhe; Liu, Hanxiao; Yang, Yiming; Hovy, Eduard (2017-12-05), RACE: Large-scale ReAding Comprehension Dataset From Examinations, arXiv:1704.04683
- ^ Paperno, Denis; Kruszewski, Germán; Lazaridou, Angeliki; Pham, Quan Ngoc; Bernardi, Raffaella; Pezzelle, Sandro; Baroni, Marco; Boleda, Gemma; Fernández, Raquel (2016-06-20), The LAMBADA dataset: Word prediction requiring a broad discourse context, arXiv:1606.06031
- ^ Mishra, Swaroop; Khashabi, Daniel; Baral, Chitta; Hajishirzi, Hannaneh (2022-03-14), Cross-Task Generalization via Natural Language Crowdsourcing Instructions, arXiv:2104.08773
- ^ Wang, Yizhong; Mishra, Swaroop; Alipoormolabashi, Pegah; Kordi, Yeganeh; Mirzaei, Amirreza; Arunkumar, Anjana; Ashok, Arjun; Dhanasekaran, Arut Selvan; Naik, Atharva (2022-10-24), Super-NaturalInstructions: Generalization via Declarative Instructions on 1600+ NLP Tasks, arXiv:2204.07705
- ^ Zhou, Jeffrey; Lu, Tianjian; Mishra, Swaroop; Brahma, Siddhartha; Basu, Sujoy; Luan, Yi; Zhou, Denny; Hou, Le (2023-11-14), Instruction-Following Evaluation for Large Language Models, arXiv:2311.07911
- ^ a b Zheng, Lianmin; Chiang, Wei-Lin; Sheng, Ying; Zhuang, Siyuan; Wu, Zhanghao; Zhuang, Yonghao; Lin, Zi; Li, Zhuohan; Li, Dacheng (2023-12-24), Judging LLM-as-a-Judge with MT-Bench and Chatbot Arena, arXiv:2306.05685
- ^ Richardson, Matthew; Burges, Christopher J.C.; Renshaw, Erin (October 2013). "MCTest: A Challenge Dataset for the Open-Domain Machine Comprehension of Text". In Yarowsky, David; Baldwin, Timothy; Korhonen, Anna; Livescu, Karen; Bethard, Steven (eds.). Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, Washington, USA: Association for Computational Linguistics. pp. 193–203. doi:10.18653/v1/D13-1020.
- ^ Rajpurkar, Pranav; Zhang, Jian; Lopyrev, Konstantin; Liang, Percy (2016-10-11), SQuAD: 100,000+ Questions for Machine Comprehension of Text, arXiv:1606.05250
- ^ Rajpurkar, Pranav; Jia, Robin; Liang, Percy (2018-06-11), Know What You Don't Know: Unanswerable Questions for SQuAD, arXiv:1806.03822
- ^ Clark, Peter; Cowhey, Isaac; Etzioni, Oren; Khot, Tushar; Sabharwal, Ashish; Schoenick, Carissa; Tafjord, Oyvind (2018-03-14), Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge, arXiv:1803.05457
- ^ Reddy, Siva; Chen, Danqi; Manning, Christopher D. (2019-05-01). "CoQA: A Conversational Question Answering Challenge". Transactions of the Association for Computational Linguistics. 7: 249–266. arXiv:1808.07042. doi:10.1162/tacl_a_00266. ISSN 2307-387X.
- ^ Berant, Jonathan; Chou, Andrew; Frostig, Roy; Liang, Percy (October 2013). "Semantic Parsing on Freebase from Question-Answer Pairs". In Yarowsky, David; Baldwin, Timothy; Korhonen, Anna; Livescu, Karen; Bethard, Steven (eds.). Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing. Seattle, Washington, USA: Association for Computational Linguistics. pp. 1533–1544. doi:10.18653/v1/D13-1160.
- ^ Kwiatkowski, Tom; Palomaki, Jennimaria; Redfield, Olivia; Collins, Michael; Parikh, Ankur; Alberti, Chris; Epstein, Danielle; Polosukhin, Illia; Devlin, Jacob; Lee, Kenton; Toutanova, Kristina; Jones, Llion; Kelcey, Matthew; Chang, Ming-Wei; Dai, Andrew M. (2019-08-01). "Natural Questions: A Benchmark for Question Answering Research". Transactions of the Association for Computational Linguistics. 7: 453–466. doi:10.1162/tacl_a_00276. ISSN 2307-387X.
- ^ Joshi, Mandar; Choi, Eunsol; Weld, Daniel S.; Zettlemoyer, Luke (2017-05-13), TriviaQA: A Large Scale Distantly Supervised Challenge Dataset for Reading Comprehension, arXiv:1705.03551
- ^ Mihaylov, Todor; Clark, Peter; Khot, Tushar; Sabharwal, Ashish (2018-09-08), Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering, arXiv:1809.02789
- ^ Dunn, Matthew; Sagun, Levent; Higgins, Mike; Guney, V. Ugur; Cirik, Volkan; Cho, Kyunghyun (2017-06-11), SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine, arXiv:1704.05179
- ^ Yang, Zhilin; Qi, Peng; Zhang, Saizheng; Bengio, Yoshua; Cohen, William W.; Salakhutdinov, Ruslan; Manning, Christopher D. (2018-09-25), HotpotQA: A Dataset for Diverse, Explainable Multi-hop Question Answering, arXiv:1809.09600
- ^ Geva, Mor; Khashabi, Daniel; Segal, Elad; Khot, Tushar; Roth, Dan; Berant, Jonathan (2021-04-26). "Did Aristotle Use a Laptop? A Question Answering Benchmark with Implicit Reasoning Strategies". Transactions of the Association for Computational Linguistics. 9: 346–361. doi:10.1162/tacl_a_00370. ISSN 2307-387X.
- ^ Dua, Dheeru; Wang, Yizhong; Dasigi, Pradeep; Stanovsky, Gabriel; Singh, Sameer; Gardner, Matt (2019-04-16), DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs, arXiv:1903.00161
- ^ Pahilajani, Anish; Trivedi, Devasha; Shuai, Jincen; Yone, Khin S.; Jain, Samyak Rajesh; Park, Namyong; Rossi, Ryan A.; Ahmed, Nesreen K.; Dernoncourt, Franck; Wang, Yu (2024-11-01), GRS-QA: Graph Reasoning-Structured Question Answering Dataset, arXiv:2411.00369
- ^ Masry, Ahmed; Do, Xuan Long; Tan, Jia Qing; Joty, Shafiq; Hoque, Enamul (May 2022). Muresan, Smaranda; Nakov, Preslav; Villavicencio, Aline (eds.). "ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning". Findings of the Association for Computational Linguistics: ACL 2022. Dublin, Ireland: Association for Computational Linguistics: 2263–2279. arXiv:2203.10244. doi:10.18653/v1/2022.findings-acl.177.
- ^ "Industry Documents Library". industrydocuments.ucsf.edu. Retrieved 2025-04-05.
- ^ "DocVQA". www.docvqa.org. Retrieved 2025-04-05.
- ^ Mathew, Minesh; Karatzas, Dimosthenis; Jawahar, C. V. (2021). "DocVQA: A Dataset for VQA on Document Images": 2200–2209.
- ^ "C-Eval: 一个适用于大语言模型的多层次多学科中文评估套件" [C-Eval: a multi-level, multi-discipline Chinese evaluation suite for large language models]. cevalbenchmark.com. Retrieved 2025-02-25.
- ^ Lin, Stephanie; Hilton, Jacob; Evans, Owain (2022-05-08), TruthfulQA: Measuring How Models Mimic Human Falsehoods, arXiv:2109.07958
- ^ Bisk, Yonatan; Zellers, Rowan; Bras, Ronan Le; Gao, Jianfeng; Choi, Yejin (2020-04-03). "PIQA: Reasoning about Physical Commonsense in Natural Language". Proceedings of the AAAI Conference on Artificial Intelligence. 34 (5): 7432–7439. arXiv:1911.11641. doi:10.1609/aaai.v34i05.6239. ISSN 2374-3468.
- ^ Jin, Di; Pan, Eileen; Oufattole, Nassim; Weng, Wei-Hung; Fang, Hanyi; Szolovits, Peter (January 2021). "What Disease Does This Patient Have? A Large-Scale Open Domain Question Answering Dataset from Medical Exams". Applied Sciences. 11 (14): 6421. doi:10.3390/app11146421. hdl:1721.1/136684.2. ISSN 2076-3417.
- ^ Lu, Pan; Mishra, Swaroop; Xia, Tanglin; Qiu, Liang; Chang, Kai-Wei; Zhu, Song-Chun; Tafjord, Oyvind; Clark, Peter; Kalyan, Ashwin (2022-12-06). "Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering". Advances in Neural Information Processing Systems. 35: 2507–2521. arXiv:2209.09513.
- ^ Wei, Jason; Karina, Nguyen; Chung, Hyung Won; Jiao, Yunxin Joy; Papay, Spencer; Glaese, Amelia; Schulman, John; Fedus, William (2024-11-07), Measuring short-form factuality in large language models, arXiv:2411.04368
- ^ "Grok-1.5 Vision Preview | xAI". x.ai. Retrieved 2025-03-12.
- ^ Majumdar, Arjun; Ajay, Anurag; Zhang, Xiaohan; Putta, Pranav; Yenamandra, Sriram; Henaff, Mikael; Silwal, Sneha; Mcvay, Paul; Maksymets, Oleksandr; Arnaud, Sergio; Yadav, Karmesh; Li, Qiyang; Newman, Ben; Sharma, Mohit; Berges, Vincent (2024). "OpenEQA: Embodied Question Answering in the Era of Foundation Models". Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR): 16488–16498.
- ^ Wang, Alex; Singh, Amanpreet; Michael, Julian; Hill, Felix; Levy, Omer; Bowman, Samuel R. (2018). "GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding". arXiv:1804.07461 [cs.CL].
- ^ "GLUE Benchmark". gluebenchmark.com. Retrieved 2019-02-25.
- ^ Wang, Alex; Pruksachatkun, Yada; Nangia, Nikita; Singh, Amanpreet; Michael, Julian; Hill, Felix; Levy, Omer; Bowman, Samuel R. (2020-02-13), SuperGLUE: A Stickier Benchmark for General-Purpose Language Understanding Systems, arXiv:1905.00537
- ^ Srivastava, Aarohi; Rastogi, Abhinav; Rao, Abhishek; Shoeb, Abu Awal Md; Abid, Abubakar; Fisch, Adam; Brown, Adam R.; Santoro, Adam; Gupta, Aditya (2023-06-12), Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models, arXiv:2206.04615
- ^ Suzgun, Mirac; Scales, Nathan; Schärli, Nathanael; Gehrmann, Sebastian; Tay, Yi; Chung, Hyung Won; Chowdhery, Aakanksha; Le, Quoc V.; Chi, Ed H. (2022-10-17), Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them, arXiv:2210.09261
- ^ Kazemi, Mehran; Fatemi, Bahare; Bansal, Hritik; Palowitch, John; Anastasiou, Chrysovalantis; Sanket Vaibhav Mehta; Jain, Lalit K.; Aglietti, Virginia; Jindal, Disha; Chen, Peter; Dikkala, Nishanth; Tyen, Gladys; Liu, Xin; Shalit, Uri; Chiappa, Silvia; Olszewska, Kate; Tay, Yi; Tran, Vinh Q.; Le, Quoc V.; Firat, Orhan (2025). "BIG-Bench Extra Hard". arXiv:2502.19187 [cs.CL].
- ^ Hendrycks, Dan; Burns, Collin; Basart, Steven; Zou, Andy; Mazeika, Mantas; Song, Dawn; Steinhardt, Jacob (2021-01-12), Measuring Massive Multitask Language Understanding, arXiv:2009.03300
- ^ Wang, Yubo; Ma, Xueguang; Zhang, Ge; Ni, Yuansheng; Chandra, Abhranil; Guo, Shiguang; Ren, Weiming; Arulraj, Aaran; He, Xuan (2024-11-06), MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark, arXiv:2406.01574
- ^ "openai/MMMLU · Datasets at Hugging Face". huggingface.co. 2024-10-22. Retrieved 2025-02-28.
- ^ Li, Haonan; Zhang, Yixuan; Koto, Fajri; Yang, Yifei; Zhao, Hai; Gong, Yeyun; Duan, Nan; Baldwin, Timothy (2024-01-17), CMMLU: Measuring massive multitask language understanding in Chinese, arXiv:2306.09212
- ^ "MMMU". mmmu-benchmark.github.io. Retrieved 2025-02-28.
- ^ Yue, Xiang; Ni, Yuansheng; Zhang, Kai; Zheng, Tianyu; Liu, Ruoqi; Zhang, Ge; Stevens, Samuel; Jiang, Dongfu; Ren, Weiming (2024-06-13), MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI, arXiv:2311.16502
- ^ Yue, Xiang; Zheng, Tianyu; Ni, Yuansheng; Wang, Yubo; Zhang, Kai; Tong, Shengbang; Sun, Yuxuan; Yu, Botao; Zhang, Ge (2024-09-10), MMMU-Pro: A More Robust Multi-discipline Multimodal Understanding Benchmark, arXiv:2409.02813
- ^ "MMT-Bench". mmt-bench.github.io. Retrieved 2025-05-04.
- ^ Mialon, Grégoire; Fourrier, Clémentine; Swift, Craig; Wolf, Thomas; LeCun, Yann; Scialom, Thomas (2023-11-21), GAIA: a benchmark for General AI Assistants, arXiv:2311.12983
- ^ Zhou, Shuyan; Xu, Frank F.; Zhu, Hao; Zhou, Xuhui; Lo, Robert; Sridhar, Abishek; Cheng, Xianyi; Ou, Tianyue; Bisk, Yonatan (2024-04-16), WebArena: A Realistic Web Environment for Building Autonomous Agents, arXiv:2307.13854
- ^ Deng, Xiang; Gu, Yu; Zheng, Boyuan; Chen, Shijie; Stevens, Sam; Wang, Boshi; Sun, Huan; Su, Yu (2023-12-15). "Mind2Web: Towards a Generalist Agent for the Web". Advances in Neural Information Processing Systems. 36: 28091–28114. arXiv:2306.06070.
- ^ "OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments". os-world.github.io. Retrieved 2025-02-24.
- ^ "Windows Agent Arena: Evaluating Multi-modal OS Agents at Scale". microsoft.github.io. Retrieved 2025-02-24.
- ^ He, Hongliang; Yao, Wenlin; Ma, Kaixin; Yu, Wenhao; Dai, Yong; Zhang, Hongming; Lan, Zhenzhong; Yu, Dong (2024-06-06), WebVoyager: Building an End-to-End Web Agent with Large Multimodal Models, arXiv:2401.13919
- ^ "Berkeley Function Calling Leaderboard". gorilla.cs.berkeley.edu. Retrieved 2025-03-11.
- ^ Yao, Shunyu; Shinn, Noah; Razavi, Pedram; Narasimhan, Karthik (2024-06-17), TAU-bench: A Benchmark for Tool-Agent-User Interaction in Real-World Domains, arXiv:2406.12045
- ^ "Terminal-Bench". Terminal-Bench. Retrieved 2025-05-25.
- ^ https://x.com/GregKamradt/status/1722386725635580292
- ^ Tay, Yi; Dehghani, Mostafa; Abnar, Samira; Shen, Yikang; Bahri, Dara; Pham, Philip; Rao, Jinfeng; Yang, Liu; Ruder, Sebastian (2020-11-08), Long Range Arena: A Benchmark for Efficient Transformers, arXiv:2011.04006
- ^ Modarressi, Ali; Deilamsalehy, Hanieh; Dernoncourt, Franck; Bui, Trung; Rossi, Ryan A.; Yoon, Seunghyun; Schütze, Hinrich (2025-02-11), NoLiMa: Long-Context Evaluation Beyond Literal Matching, arXiv:2502.05167
- ^ An, Chenxin; Gong, Shansan; Zhong, Ming; Zhao, Xingjian; Li, Mukai; Zhang, Jun; Kong, Lingpeng; Qiu, Xipeng (August 2024). Ku, Lun-Wei; Martins, Andre; Srikumar, Vivek (eds.). "L-Eval: Instituting Standardized Evaluation for Long Context Language Models". Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Bangkok, Thailand: Association for Computational Linguistics: 14388–14411. arXiv:2307.11088. doi:10.18653/v1/2024.acl-long.776.
- ^ Zhang, Xinrong; Chen, Yingfa; Hu, Shengding; Xu, Zihang; Chen, Junhao; Hao, Moo Khai; Han, Xu; Thai, Zhen Leng; Wang, Shuo (2024-02-24), ∞Bench: Extending Long Context Evaluation Beyond 100K Tokens, arXiv:2402.13718
- ^ Shaham, Uri; Ivgi, Maor; Efrat, Avia; Berant, Jonathan; Levy, Omer (2023-12-17), ZeroSCROLLS: A Zero-Shot Benchmark for Long Text Understanding, arXiv:2305.14196
- ^ Li, Tianle; Zhang, Ge; Do, Quy Duc; Yue, Xiang; Chen, Wenhu (2024-06-12), Long-context LLMs Struggle with Long In-context Learning, arXiv:2404.02060
- ^ "LongBench v2". longbench2.github.io. Retrieved 2025-02-21.
- ^ Bai, Yushi; Tu, Shangqing; Zhang, Jiajie; Peng, Hao; Wang, Xiaozhi; Lv, Xin; Cao, Shulin; Xu, Jiazheng; Hou, Lei (2025-01-03), LongBench v2: Towards Deeper Understanding and Reasoning on Realistic Long-context Multitasks, arXiv:2412.15204
- ^ Hsieh, Cheng-Ping; Sun, Simeng; Kriman, Samuel; Acharya, Shantanu; Rekesh, Dima; Jia, Fei; Zhang, Yang; Ginsburg, Boris (2024-08-06), RULER: What's the Real Context Size of Your Long-Context Language Models?, arXiv:2404.06654
- ^ Lee, Jinhyuk; Chen, Anthony; Dai, Zhuyun; Dua, Dheeru; Sachan, Devendra Singh; Boratko, Michael; Luan, Yi; Arnold, Sébastien M. R.; Perot, Vincent (2024-06-19), Can Long-Context Language Models Subsume Retrieval, RAG, SQL, and More?, arXiv:2406.13121
- ^ Visser, Eline (2022). A grammar of Kalamang. Language Science Press. ISBN 978-3-96110-343-0.
- ^ Visser, Eline (2021-09-24), dictionaria/kalamang: Kalamang dictionary, doi:10.5281/ZENODO.5526419, retrieved 2025-04-05
- ^ Tanzer, Garrett; Suzgun, Mirac; Visser, Eline; Jurafsky, Dan; Melas-Kyriazi, Luke (2023). "A Benchmark for Learning to Translate a New Language from One Grammar Book". arXiv:2309.16575 [cs.CL].
- ^ Kushman, Nate; Artzi, Yoav; Zettlemoyer, Luke; Barzilay, Regina (June 2014). Toutanova, Kristina; Wu, Hua (eds.). "Learning to Automatically Solve Algebra Word Problems". Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Baltimore, Maryland: Association for Computational Linguistics: 271–281. doi:10.3115/v1/P14-1026.
- ^ Huang, Danqing; Shi, Shuming; Lin, Chin-Yew; Yin, Jian; Ma, Wei-Ying (August 2016). Erk, Katrin; Smith, Noah A. (eds.). "How well do Computers Solve Math Word Problems? Large-Scale Dataset Construction and Evaluation". Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Berlin, Germany: Association for Computational Linguistics: 887–896. doi:10.18653/v1/P16-1084.
- ^ Wang, Yan; Liu, Xiaojiang; Shi, Shuming (September 2017). "Deep Neural Solver for Math Word Problems". In Palmer, Martha; Hwa, Rebecca; Riedel, Sebastian (eds.). Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. Copenhagen, Denmark: Association for Computational Linguistics. pp. 845–854. doi:10.18653/v1/D17-1088.
- ^ Ling, Wang; Yogatama, Dani; Dyer, Chris; Blunsom, Phil (July 2017). Barzilay, Regina; Kan, Min-Yen (eds.). "Program Induction by Rationale Generation: Learning to Solve and Explain Algebraic Word Problems". Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vancouver, Canada: Association for Computational Linguistics: 158–167. arXiv:1705.04146. doi:10.18653/v1/P17-1015.
- ^ Cobbe, Karl; Kosaraju, Vineet; Bavarian, Mohammad; Chen, Mark; Jun, Heewoo; Kaiser, Lukasz; Plappert, Matthias; Tworek, Jerry; Hilton, Jacob (2021-11-18), Training Verifiers to Solve Math Word Problems, arXiv:2110.14168
- ^ "madrylab/gsm8k-platinum · Datasets at Hugging Face". huggingface.co. Retrieved 2025-03-07.
- ^ Zhang, Hugh; Da, Jeff; Lee, Dean; Robinson, Vaughn; Wu, Catherine; Song, Will; Zhao, Tiffany; Raja, Pranav; Zhuang, Charlotte (2024-11-22), A Careful Examination of Large Language Model Performance on Grade School Arithmetic, arXiv:2405.00332
- ^ Hendrycks, Dan; Burns, Collin; Kadavath, Saurav; Arora, Akul; Basart, Steven; Tang, Eric; Song, Dawn; Steinhardt, Jacob (2021-11-08), Measuring Mathematical Problem Solving With the MATH Dataset, arXiv:2103.03874
- ^ "MATH-Perturb". math-perturb.github.io. Retrieved 2025-04-09.
- ^ Amini, Aida; Gabriel, Saadia; Lin, Peter; Koncel-Kedziorski, Rik; Choi, Yejin; Hajishirzi, Hannaneh (2019), MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms, arXiv:1905.13319
- ^ a b Austin, Jacob; Odena, Augustus; Nye, Maxwell; Bosma, Maarten; Michalewski, Henryk; Dohan, David; Jiang, Ellen; Cai, Carrie; Terry, Michael (2021-08-16), Program Synthesis with Large Language Models, arXiv:2108.07732
- ^ math-eval (2025-01-26), math-eval/MathEval, retrieved 2025-01-27
- ^ Chen, Wenhu; Yin, Ming; Ku, Max; Lu, Pan; Wan, Yixin; Ma, Xueguang; Xu, Jianyu; Wang, Xinyi; Xia, Tony (December 2023). "TheoremQA: A Theorem-driven Question Answering Dataset". In Bouamor, Houda; Pino, Juan; Bali, Kalika (eds.). Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing. Singapore: Association for Computational Linguistics. pp. 7889–7901. arXiv:2305.12524. doi:10.18653/v1/2023.emnlp-main.489.
- ^ Azerbayev, Zhangir; Piotrowski, Bartosz; Schoelkopf, Hailey; Ayers, Edward W.; Radev, Dragomir; Avigad, Jeremy (2023). "ProofNet: Autoformalizing and Formally Proving Undergraduate-Level Mathematics". arXiv:2302.12433 [cs.CL].
- ^ Azerbayev, Zhangir (2025-04-02), zhangir-azerbayev/ProofNet, retrieved 2025-04-03
- ^ deepseek-ai/DeepSeek-Prover-V1.5, DeepSeek, 2025-04-01, retrieved 2025-04-03
- ^ openai/miniF2F, OpenAI, 2025-02-01, retrieved 2025-02-03
- ^ Chernyshev, Konstantin; Polshkov, Vitaliy; Artemova, Ekaterina; Myasnikov, Alex; Stepanov, Vlad; Miasnikov, Alexei; Tilga, Sergei (2024-12-04), U-MATH: A University-Level Benchmark for Evaluating Mathematical Skills in LLMs, arXiv:2412.03205
- ^ Liu, Hongwei; Zheng, Zilong; Qiao, Yuxuan; Duan, Haodong; Fei, Zhiwei; Zhou, Fengzhe; Zhang, Wenwei; Zhang, Songyang; Lin, Dahua; Chen, Kai (2024). "MathBench: Evaluating the Theory and Application Proficiency of LLMs with a Hierarchical Mathematics Benchmark". arXiv:2405.12209 [cs.CL].
- ^ Tsoukalas, George; Lee, Jasper; Jennings, John; Xin, Jimmy; Ding, Michelle; Jennings, Michael; Thakur, Amitayush; Chaudhuri, Swarat (2024). "PutnamBench: Evaluating Neural Theorem-Provers on the Putnam Mathematical Competition". arXiv:2407.11214 [cs.AI].
- ^ "PutnamBench: A Multilingual Mathematics Benchmark for Formal Theorem-Proving". trishullab.github.io. Retrieved 2025-04-02.
- ^ Gao, Bofei; Song, Feifan; Yang, Zhe; Cai, Zefan; Miao, Yibo; Dong, Qingxiu; Li, Lei; Ma, Chenghao; Chen, Liang (2024-12-24), Omni-MATH: A Universal Olympiad Level Mathematic Benchmark For Large Language Models, arXiv:2410.07985
- ^ Glazer, Elliot; Erdil, Ege; Besiroglu, Tamay; Chicharro, Diego; Chen, Evan; Gunning, Alex; Olsson, Caroline Falkman; Denain, Jean-Stanislas; Ho, Anson (2024-12-20), FrontierMath: A Benchmark for Evaluating Advanced Mathematical Reasoning in AI, arXiv:2411.04872
- ^ "MathArena.ai". matharena.ai. Retrieved 2025-02-22.
- ^ Hendrycks, Dan; Basart, Steven; Kadavath, Saurav; Mazeika, Mantas; Arora, Akul; Guo, Ethan; Burns, Collin; Puranik, Samir; He, Horace (2021-11-08), Measuring Coding Challenge Competence With APPS, arXiv:2105.09938
- ^ Lai, Yuhang; Li, Chengxi; Wang, Yiming; Zhang, Tianyi; Zhong, Ruiqi; Zettlemoyer, Luke; Yih, Scott Wen-tau; Fried, Daniel; Wang, Sida (2022-11-18), DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation, arXiv:2211.11501
- ^ "DS-1000: A Natural and Reliable Benchmark for Data Science Code Generation". ds1000-code-gen.github.io. Retrieved 2025-03-11.
- ^ "CodeElo". codeelo-bench.github.io. Retrieved 2025-02-13.
- ^ Aider-AI/polyglot-benchmark, Aider AI, 2025-03-29, retrieved 2025-03-30
- ^ Zhuo, Terry Yue; Chien, Vu Minh; Chim, Jenny; Hu, Han; Yu, Wenhao; Widyasari, Ratnadira; Yusuf, Imam Nur Bani; Zhan, Haolan; He, Junda; Paul, Indraneil; Brunner, Simon; Gong, Chen; Hoang, James; Zebaze, Armel Randy; Hong, Xiaoheng (2024-10-04). "BigCodeBench: Benchmarking Code Generation with Diverse Function Calls and Complex Instructions". ICLR 2025. arXiv:2406.15877.
- ^ "BigCodeBench Leaderboard". bigcode-bench.github.io. Retrieved 2025-04-09.
- ^ Jimenez, Carlos E.; Yang, John; Wettig, Alexander; Yao, Shunyu; Pei, Kexin; Press, Ofir; Narasimhan, Karthik (2024-11-11), SWE-bench: Can Language Models Resolve Real-World GitHub Issues?, arXiv:2310.06770
- ^ "Introducing SWE-bench Verified". openai.com.
- ^ Zan, Daoguang; Huang, Zhirong; Liu, Wei; Chen, Hanwu; Zhang, Linhao; Xin, Shulin; Chen, Lu; Liu, Qi; Zhong, Xiaojian; Li, Aoyan; Liu, Siyao; Xiao, Yongsheng; Chen, Liangqiang; Zhang, Yuyu; Su, Jing; Liu, Tianyu; Long, Rui; Shen, Kai; Xiang, Liang (2025). "Multi-SWE-bench: A Multilingual Benchmark for Issue Resolving". arXiv:2504.02605 [cs.SE].
- ^ "SWE-bench". www.swebench.com. Retrieved 2025-02-11.
- ^ openai/SWELancer-Benchmark, OpenAI, 2025-02-21, retrieved 2025-02-21
- ^ Miserendino, Samuel; Wang, Michele; Patwardhan, Tejal; Heidecke, Johannes (2025-02-19), SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?, arXiv:2502.12115
- ^ Ouyang, Anne; Guo, Simon; Arora, Simran; Zhang, Alex L.; Hu, William; Ré, Christopher; Mirhoseini, Azalia (2025-02-18), KernelBench: Can LLMs Write Efficient GPU Kernels?, arXiv:2502.10517
- ^ "Cybench". cybench.github.io. Retrieved 2025-04-10.
- ^ Rein, David; Becker, Joel; Deng, Amy; Nix, Seraphina; Canal, Chris; O'Connel, Daniel; Arnott, Pip; Bloom, Ryan; Broadley, Thomas; Garcia, Katharyn; Goodrich, Brian; Hasin, Max; Jawhar, Sami; Kinniment, Megan; Kwa, Thomas; Lajko, Aron; Rush, Nate; Lucas Jun Koba Sato; Sydney Von Arx; West, Ben; Chan, Lawrence; Barnes, Elizabeth (2025). "HCAST: Human-Calibrated Autonomy Software Tasks". arXiv:2503.17354 [cs.AI].
- ^ "PaperBench: Evaluating AI's Ability to Replicate AI Research". openai.com. Retrieved 2025-04-02.
- ^ Rein, David; Hou, Betty Li; Stickland, Asa Cooper; Petty, Jackson; Pang, Richard Yuanzhe; Dirani, Julien; Michael, Julian; Bowman, Samuel R. (2023-11-20), GPQA: A Graduate-Level Google-Proof Q&A Benchmark, arXiv:2311.12022
- ^ "Learning to reason with LLMs". openai.com. September 12, 2024. Retrieved 2025-02-27.
- ^ Team, M.-A.-P.; Du, Xinrun; Yao, Yifan; Ma, Kaijing; Wang, Bingli; Zheng, Tianyu; Zhu, Kang; Liu, Minghao; Liang, Yiming (2025-02-20), SuperGPQA: Scaling LLM Evaluation across 285 Graduate Disciplines, arXiv:2502.14739
- ^ "MathVista: Evaluating Math Reasoning in Visual Contexts". mathvista.github.io. Retrieved 2025-03-07.
- ^ Cui, Ruixiang (2025-02-03), ruixiangcui/AGIEval, retrieved 2025-02-03
- ^ "OlympicArena: Benchmarking Multi-discipline Cognitive Reasoning for Superintelligent AI". gair-nlp.github.io. Retrieved 2025-02-03.
- ^ He, Chaoqun; Luo, Renjie; Bai, Yuzhuo; Hu, Shengding; Thai, Zhen Leng; Shen, Junhao; Hu, Jinyi; Han, Xu; Huang, Yujie (2024-06-06), OlympiadBench: A Challenging Benchmark for Promoting AGI with Olympiad-Level Bilingual Multimodal Scientific Problems, arXiv:2402.14008
- ^ "ARC Prize". ARC Prize. Retrieved 2025-01-27.
- ^ "LiveBench". livebench.ai. Retrieved 2025-01-27.
- ^ "Humanity's Last Exam". lastexam.ai. Retrieved 2025-02-02.
- ^ "SimpleBench". simple-bench.com. Retrieved 2025-04-09.
Generative pre-trained transformer

A generative pre-trained transformer (GPT) is a type of large language model (LLM)[1][2][3] and a prominent framework for generative artificial intelligence.[4][5] It is an artificial neural network that is used in natural language processing by machines.[6] It is based on the transformer deep learning architecture, pre-trained on large data sets of unlabeled text, and able to generate novel human-like content.[2][3] As of 2023, most LLMs had these characteristics[7] and are sometimes referred to broadly as GPTs.[8] The first GPT was introduced in 2018 by OpenAI.[9] OpenAI has released significant GPT foundation models that have been sequentially numbered, to comprise its "GPT-n" series.[10] Each of these was significantly more capable than the previous one, due to increased size (number of trainable parameters) and training. The most recent of these, GPT-4o, was released in May 2024.[11] Such models have been the basis for their more task-specific GPT systems, including models fine-tuned for instruction following, which in turn power the ChatGPT chatbot service.[1] The term "GPT" is also used in the names and descriptions of such models developed by others. For example, other GPT foundation models include a series of models created by EleutherAI,[12] and seven models created by Cerebras in 2023.[13] Companies in different industries have developed task-specific GPTs in their respective fields, such as Salesforce's "EinsteinGPT" (for CRM)[14] and Bloomberg's "BloombergGPT" (for finance).[15]

History

Initial developments

Generative pretraining (GP) was a long-established concept in machine learning applications.[16][17] It was originally used as a form of semi-supervised learning, as the model is trained first on an unlabeled dataset (pretraining step) by learning to generate datapoints in the dataset, and then it is trained to classify a labeled dataset.[18] There were three main types of early GP. Hidden Markov models learn a generative model of sequences for downstream applications. For example, in speech recognition, a trained HMM infers the most likely hidden sequence for a speech signal, and the hidden sequence is taken as the phonemes of the speech signal. These were developed in the 1970s and became widely applied in speech recognition in the 1980s.[19][20] Compressors learn to compress data such as images and textual sequences, and the compressed data serves as a good representation for downstream applications such as facial recognition.[21][22][23] Autoencoders similarly learn a latent representation of data for later downstream applications such as speech recognition.[24][25] The connection between autoencoders and algorithmic compressors was noted in 1993.[26]
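The HMM example above hinges on recovering the most likely hidden state sequence, which is classically done with the Viterbi algorithm. The sketch below runs the Viterbi recursion on a tiny two-state model; the states, observations, and probabilities are invented for illustration and are not drawn from any actual speech system:

    # Toy Viterbi decoding for a two-state HMM. All numbers here are invented;
    # in speech recognition the hidden states would correspond to phonemes and
    # the observations to acoustic features.
    states = ["A", "B"]
    start_p = {"A": 0.6, "B": 0.4}
    trans_p = {"A": {"A": 0.7, "B": 0.3}, "B": {"A": 0.4, "B": 0.6}}
    emit_p = {"A": {"x": 0.5, "y": 0.5}, "B": {"x": 0.1, "y": 0.9}}
    observations = ["x", "y", "y"]

    def viterbi(obs):
        # best[t][s] = probability of the most likely path ending in state s at time t
        best = [{s: start_p[s] * emit_p[s][obs[0]] for s in states}]
        back = []
        for t in range(1, len(obs)):
            best.append({})
            back.append({})
            for s in states:
                prob, prev = max(
                    (best[t - 1][p] * trans_p[p][s] * emit_p[s][obs[t]], p) for p in states
                )
                best[t][s] = prob
                back[t - 1][s] = prev
        # Trace back from the most probable final state.
        last = max(states, key=lambda s: best[-1][s])
        path = [last]
        for ptr in reversed(back):
            path.append(ptr[path[-1]])
        return list(reversed(path))

    print(viterbi(observations))  # the most likely hidden sequence: ['A', 'B', 'B']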
During the 2010s, the problem of machine translation was solved[citation needed] by recurrent neural networks, with an attention mechanism added. This was optimized into the transformer architecture, published by Google researchers in Attention Is All You Need (2017).[27] That development led to the emergence of large language models such as BERT (2018),[28] which was a pre-trained transformer (PT) but not designed to be generative (BERT was an "encoder-only" model). Also in 2018, OpenAI published Improving Language Understanding by Generative Pre-Training, which introduced GPT-1, the first in its GPT series.[29] Previously, in 2017, some of the authors who would later work on GPT-1 had worked on generative pre-training of language with LSTMs, which resulted in a model that could represent text with vectors that could easily be fine-tuned for downstream applications.[30]

Prior to transformer-based architectures, the best-performing neural NLP (natural language processing) models commonly employed supervised learning from large amounts of manually labeled data. The reliance on supervised learning limited their use on datasets that were not well annotated, and also made it prohibitively expensive and time-consuming to train extremely large language models.[29] The semi-supervised approach OpenAI employed to make a large-scale generative system, which it was the first to do with a transformer model, involved two stages: an unsupervised generative "pretraining" stage to set initial parameters using a language modeling objective, and a supervised discriminative "fine-tuning" stage to adapt these parameters to a target task.[29]
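The "language modeling objective" of the pretraining stage is next-token prediction on unlabeled text: the model is rewarded for assigning high probability to each token that actually follows its context. The sketch below makes that objective concrete with a counting-based bigram model; the corpus is invented, and counting stands in for the gradient-based training of a transformer, while the supervised fine-tuning stage is only noted in a comment rather than implemented:

    # A minimal, framework-free illustration of the pretraining objective only.
    import math
    from collections import Counter, defaultdict

    unlabeled_corpus = "the cat sat on the mat . the dog sat on the rug .".split()

    # Fit bigram next-token probabilities by counting (the counting analogue of
    # minimizing the language-modeling loss).
    counts = defaultdict(Counter)
    for prev, nxt in zip(unlabeled_corpus, unlabeled_corpus[1:]):
        counts[prev][nxt] += 1
    prob = {p: {n: c / sum(cs.values()) for n, c in cs.items()} for p, cs in counts.items()}

    # The language-modeling loss that pretraining drives down: the average
    # negative log-likelihood of each observed next token.
    nll = [-math.log(prob[p][n]) for p, n in zip(unlabeled_corpus, unlabeled_corpus[1:])]
    print(f"pretraining loss: {sum(nll) / len(nll):.3f}")

    # In the GPT recipe, the second (fine-tuning) stage then continues training
    # these same parameters, with supervision, on the labeled target task.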
Later developments

Regarding more recent GPT foundation models, OpenAI published its first versions of GPT-3 in July 2020. There were three models, with 1B, 6.7B, and 175B parameters, respectively named babbage, curie, and davinci (giving initials B, C, and D).[citation needed] In July 2021, OpenAI published Codex, a task-specific GPT model targeted for programming applications. This was developed by fine-tuning a 12B parameter version of GPT-3 (different from previous GPT-3 models) using code from GitHub.[31] In March 2022, OpenAI published two versions of GPT-3 that were fine-tuned for instruction-following (instruction-tuned), named davinci-instruct-beta (175B) and text-davinci-001,[32] and then started beta testing code-davinci-002.[33] text-davinci-002 was instruction-tuned from code-davinci-002. Both text-davinci-003 and ChatGPT were released in November 2022, with both building upon text-davinci-002 via reinforcement learning from human feedback (RLHF). text-davinci-003 is trained for following instructions (like its predecessors), whereas ChatGPT is further trained for conversational interaction with a human user.[34][35] OpenAI's most recent GPT foundation model, GPT-4, was released on March 14, 2023. It can be accessed directly by users via a premium version of ChatGPT, and is available to developers for incorporation into other products and services via OpenAI's API. Other producers of GPT foundation models include EleutherAI (with a series of models starting in March 2021)[12] and Cerebras (with seven models released in March 2023).[13]

Foundation models

A foundation model is an AI model trained on broad data at scale such that it can be adapted to a wide range of downstream tasks.[36][37] Thus far, the most notable GPT foundation models have been from OpenAI's GPT-n series. The most recent from that is GPT-4, for which OpenAI declined to publish the size or training details (citing "the competitive landscape and the safety implications of large-scale models").[38] Other such models include Google's PaLM, a broad foundation model that has been compared to GPT-3 and has been made available to developers via an API,[45][46] and Together's GPT-JT, which has been reported as the closest-performing open-source alternative to GPT-3 (and is derived from earlier open-source GPTs).[47] Meta AI (formerly Facebook) also has a generative transformer