d language generation tasks such as question answering and conversational response generation.[12]
The original BERT paper published results demonstrating that a small amount of finetuning (for BERT-Large, 1 hour on 1 Cloud TPU) allowed it to achieve state-of-the-art performance on a number of natural language understanding tasks:[1]
- GLUE (General Language Understanding Evaluation) task set (consisting of 9 tasks);
- SQuAD (Stanford Question Answering Dataset[13]) v1.1 and v2.0;
- SWAG (Situations With Adversarial Generations[14]).
In the original paper, all parameters of BERT are finetuned, and it is recommended that, for downstream applications that are text classifications, the output vector at the position of the [CLS] input token be fed into a linear-softmax layer to produce the label outputs.[1]
The original code base defined the final linear layer as a "pooler layer", in analogy with global pooling in computer vision, even though it simply discards all output tokens except the one corresponding to [CLS].[15]
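A minimal sketch of this classification setup, using PyTorch and the Hugging Face transformers library (the checkpoint name, label count, and example inputs are illustrative assumptions, not details from the text):

```python
# Sketch: feed the output vector at the [CLS] position into a linear-softmax
# layer, as described above. "bert-base-uncased" and the 2-label setup are
# illustrative choices, not from the source.
import torch
from torch import nn
from transformers import AutoModel, AutoTokenizer

class BertClsClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden, num_labels)   # the "linear" part

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls_vector = out.last_hidden_state[:, 0]           # vector at the [CLS] position
        return self.classifier(cls_vector).softmax(dim=-1) # the "softmax" part

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertClsClassifier()
batch = tokenizer(["a great movie", "a dull movie"], padding=True, return_tensors="pt")
probs = model(batch["input_ids"], batch["attention_mask"])
print(probs.shape)  # (2, num_labels); during finetuning, all encoder parameters are updated
```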
Cost
BERT was trained on the BookCorpus (800M words) and a filtered version of English Wikipedia (2,500M words) without lists, tables, and headers.
Training BERT-Base on 4 cloud TPUs (16 TPU chips total) took 4 days, at an estimated cost of 500 USD.[7] Training BERT-Large on 16 cloud TPUs (64 TPU chips total) took 4 days.[1]
Interpretation
Language models like ELMo, GPT-2, and BERT spawned the study of "BERTology", which attempts to interpret what is learned by these models. Their performance on these natural language understanding tasks is not yet well understood.[3][16][17] Several research publications in 2018 and 2019 focused on investigating the relationship behind BERT's output as a result of carefully chosen input sequences,[18][19] analysis of internal vector representations through probing classifiers,[20][21] and the relationships represented by attention weights.[16][17]
The high performance of the BERT model could also be attributed to the fact that it is bidirectionally trained.[22] This means that BERT, based on the Transformer model architecture, applies its self-attention mechanism to learn information from a text from the left and right side during training, and consequently gains a deep understanding of the context. For example, the word "fine" can have two different meanings depending on the context (I feel fine today, She has fine blond hair). BERT considers the words surrounding the target word "fine" from the left and right side.
However, it comes at a cost: due to its encoder-only architecture lacking a decoder, BERT can't be prompted and can't generate text, while bidirectional models in general do not work effectively without the right side, thus being difficult to prompt. As an illustrative example, if one wishes to use BERT to continue a sentence fragment "Today, I went to", then naively one would mask out all the tokens as "Today, I went to [MASK] [MASK] [MASK] ... [MASK] .", where the number of [MASK] tokens is the length of the sentence one wishes to extend to. However, this constitutes a dataset shift, as during training, BERT has never seen sentences with that many tokens masked out. Consequently, its performance degrades. More sophisticated techniques allow text generation, but at a high computational cost.[23]
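As a toy illustration of the mask-filling idea just described, the sketch below fills a single [MASK] at a time rather than masking the whole tail at once; the checkpoint and extension length are illustrative, and this is not one of the sophisticated generation methods cited above:

```python
# Toy sketch: greedily extend a fragment by repeatedly appending one [MASK]
# token and letting BERT fill it. Appending a single mask per step avoids the
# "many masks at once" dataset shift described above, but it is still a crude
# way to generate text. "bert-base-uncased" is an illustrative model choice.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")
text = "Today, I went to"

for _ in range(5):                        # extend by 5 tokens
    candidates = fill(text + " [MASK].")  # exactly one mask per call
    best = candidates[0]["token_str"]     # take the most probable filler
    text = text + " " + best

print(text)
```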
History
BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins in pre-training contextual representations, including semi-supervised sequence learning,[24] generative pre-training, ELMo,[25] and ULMFit.[26] Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context for each occurrence of a given word. For instance, whereas the vector for "running" will have the same word2vec vector representation for both of its occurrences in the sentences "He is running a company" and "He is running a marathon", BERT will provide a contextualized embedding that will be different according to the sentence.[4]
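A minimal sketch of this contrast using the Hugging Face transformers library (the checkpoint name and the cosine-similarity comparison are illustrative choices, not from the text):

```python
# Sketch: the contextualized vector for "running" differs between the two
# sentences from the text, unlike a static word2vec embedding.
# "bert-base-uncased" is an illustrative model choice.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def vector_for(sentence, word="running"):
    enc = tokenizer(sentence, return_tensors="pt")
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
    idx = tokens.index(word)                        # "running" is a single wordpiece here
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]  # (seq_len, hidden_size)
    return hidden[idx]

v1 = vector_for("He is running a company")
v2 = vector_for("He is running a marathon")
sim = torch.nn.functional.cosine_similarity(v1, v2, dim=0)
print(f"cosine similarity: {sim.item():.3f}")       # below 1.0: the vector depends on context
```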
On October 25, 2019, Google announced that they had started applying BERT models for English language search queries within the US.[27] On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages.[28][29] In October 2020, almost every single English-based query was processed by a BERT model.[30]
Variants
The BERT models were influential and inspired many variants.
RoBERTa (2019)[31] was an engineering improvement. It preserves BERT's architecture (slightly larger, at 355M parameters), but improves its training, changing key hyperparameters, removing the next-sentence prediction task, and using much larger mini-batch sizes.
DistilBERT (2019) distills BERT-Base to a model with just 60% of its parameters (66M), while preserving 95% of its benchmark scores.[32][33] Similarly, TinyBERT (2019)[34] is a distilled model with just 28% of its parameters.
ALBERT (2019)[35] shared parameters across layers, and experimented with independently varying the hidden size and the word-embedding layer's output size as two hyperparameters. It also replaced the next sentence prediction task with the sentence-order prediction (SOP) task, where the model must distinguish the correct order of two consecutive text segments from their reversed order.
ELECTRA (2020)[36] applied the idea of generative adversarial networks to the MLM task. Instead of masking out tokens, a small language model generates random plausible substitutions, and a larger network identifies these replaced tokens. The small model aims to fool the large model.
DeBERTa (2020)[37] is a significant architectural variant, with disentangled attention. Its key idea is to treat the positional and token encodings separately throughout the attention mechanism. Instead of combining the positional encoding and the token encoding into a single input vector, DeBERTa keeps them separate as a tuple (token encoding, positional encoding). Then, at each self-attention layer, DeBERTa computes three distinct attention matrices, rather than the single attention matrix used in BERT: content-to-content, content-to-position, and position-to-content.[note 1]
The three attention matrices are added together element-wise, then passed through a softmax layer and multiplied by a projection matrix.
Absolute position encoding is included in the final self-attention layer as additional input.
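For reference, the disentangled attention score just described can be written out as follows. This is a sketch following the DeBERTa paper's formulation, with notation assumed here rather than taken from the text: content (token) vectors H, relative-position embeddings P, the clipped relative distance delta(i, j), and learned projection matrices W.

```latex
% Disentangled attention score between positions i and j (sketch, DeBERTa-style).
% Q^c = H W_{q,c} and K^c = H W_{k,c} are content projections;
% Q^r = P W_{q,r} and K^r = P W_{k,r} are relative-position projections.
A_{ij} =
    \underbrace{Q^{c}_{i}\,(K^{c}_{j})^{\top}}_{\text{content-to-content}}
  + \underbrace{Q^{c}_{i}\,\bigl(K^{r}_{\delta(i,j)}\bigr)^{\top}}_{\text{content-to-position}}
  + \underbrace{K^{c}_{j}\,\bigl(Q^{r}_{\delta(j,i)}\bigr)^{\top}}_{\text{position-to-content}}
```

The three terms correspond to the three attention matrices mentioned above; their element-wise sum is what gets normalized by the softmax.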
Notes
- ^ The position-to-position type was omitted by the authors for being useless.
References
- ^ a b c d e f Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (October 11, 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805v2 [cs.CL].
- ^ "Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing". Google AI Blog. November 2, 2018. Retrieved November 27, 2019.
- ^ a b c Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What We Know About How BERT Works". Transactions of the Association for Computational Linguistics. 8: 842–866. arXiv:2002.12327. doi:10.1162/tacl_a_00349. S2CID 211532403.
- ^ a b Ethayarajh, Kawin (September 1, 2019), How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings, arXiv:1909.00512
- ^ Anderson, Dawn (November 5, 2019). "A deep dive into BERT: How BERT launched a rocket into natural language understanding". Search Engine Land. Retrieved August 6, 2024.
- ^ name="bookcorpus"Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov,
|
https://en.wikipedia.org/wiki/BERT_%28language_model%29#37
|
u, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). "Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books". pp. 19–27. arXiv:1506.06724 [cs.CV].
- ^ a b c "BERT". GitHub. Retrieved March 28, 2023.
- ^ Zhang, Tianyi; Wu, Felix; Katiyar, Arzoo; Weinberger, Kilian Q.; Artzi, Yoav (March 11, 2021), Revisiting Few-sample BERT Fine-tuning, arXiv:2006.05987
- ^ Turc, Iulia; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (September 25, 2019), Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, arXiv:1908.08962
- ^ "Summary of the models — transformers 3.4.0 documentation". huggingface.co. Retrieved February 16, 2023.
- ^ Tay, Yi; Dehghani, Mostafa; Tran, Vinh Q.; Garcia, Xavier; Wei, Jason; Wang, Xuezhi; Chung, Hyung Won; Shakeri, Siamak; Bahri, Dara (February 28, 2023), UL2: Unifying Language Learning Paradigms, arXiv:2205.05131
- ^ a b Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "11.9. Large-Scale Pretraining with Transformers". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
- ^ Rajpurkar, Pranav; Zhang, Jian; Lopyrev, Konstantin; Liang, Percy (October 10, 2016). "SQuAD: 100,000+ Questions for Machine Comprehension of Text". arXiv:1606.05250 [cs.CL].
- ^ Zellers, Rowan; Bisk, Yonatan; Schwartz, Roy; Choi, Yejin (August 15, 2018). "SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference". arXiv:1808.05326 [cs.CL].
- ^ "bert/modeling.py at master · google-research/bert". GitHub. Retrieved September 16, 2024.
- ^ a b Kovaleva, Olga; Romanov, Alexey; Rogers, Anna; Rumshisky, Anna (November 2019). "Revealing the Dark Secrets of BERT". Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 4364–4373. doi:10.18653/v1/D19-1445. S2CID 201645145.
- ^ a b Clark, Kevin; Khandelwal, Urvashi; Levy, Omer; Manning, Christopher D. (2019). "What Does BERT Look at? An Analysis of BERT's Attention". Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics: 276–286. arXiv:1906.04341. doi:10.18653/v1/w19-4828.
- ^ Khandelwal, Urvashi; He, He; Qi, Peng; Jurafsky, Dan (2018). "Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context". Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics: 284–294. arXiv:1805.04623. doi:10.18653/v1/p18-1027. S2CID 21700944.
- ^ Gulordava, Kristina; Bojanowski, Piotr; Grave, Edouard; Linzen, Tal; Baroni, Marco (2018). "Colorless Green Recurrent Networks Dream Hierarchically". Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics. pp. 1195–1205. arXiv:1803.11138. doi:10.18653/v1/n18-1108. S2CID 4460159.
- ^ Giulianelli, Mario; Harding, Jack; Mohnert, Florian; Hupkes, Dieuwke; Zuidema, Willem (2018). "Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information". Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics: 240–248. arXiv:1808.08079. doi:10.18653/v1/w18-5426. S2CID 52090220.
- ^ Zhang, Kelly; Bowman, Samuel (2018). "Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis". Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics: 359–361. doi:10.18653/v1/w18-5448.
- ^ Sur, Chiranjib (January 2020). "RBN: enhancement in language attribute prediction using global representation of natural language transfer learning technology like Google BERT". SN Applied Sciences. 2 (1). doi:10.1007/s42452-019-1765-9.
- ^ Patel, Ajay; Li, Bryan; Mohammad Sadegh Rasooli; Constant, Noah; Raffel, Colin; Callison-Burch, Chris (2022). "Bidirectional Language Models Are Also Few-shot Learners". arXiv:2209.14500 [cs.LG].
- ^ Dai, Andrew; Le, Quoc (November 4, 2015). "Semi-supervised Sequence Learning". arXiv:1511.01432 [cs.LG].
- ^ Peters, Matthew; Neumann, Mark; Iyyer, Mohit; Gardner, Matt; Clark, Christopher; Lee, Kenton; Zettlemoyer, Luke (February 15, 2018). "Deep contextualized word representations". arXiv:1802.05365v2 [cs.CL].
- ^ Howard, Jeremy; Ruder, Sebastian (January 18, 2018). "Universal Language Model Fine-tuning for Text Classification". arXiv:1801.06146v5 [cs.CL].
- ^ Nayak, Pandu (October 25, 2019). "Understanding searches better than ever before". Google Blog. Retrieved December 10, 2019.
- ^ "Understanding searches better than ever before". Google. October 25, 2019. Retrieved August 6, 2024.
- ^ Montti, Roger (December 10, 2019). "Google's BERT Rolls Out Worldwide". Search Engine Journal. Retrieved December 10, 2019.
- ^ "Google: BERT now used on almost every English query". Search Engine Land. Octob
|
https://en.wikipedia.org/wiki/BERT_%28language_model%29#49
|
st every English query". Search Engine Land. October 15, 2020. Retrieved November 24, 2020.
- ^ Liu, Yinhan; Ott, Myle; Goyal, Naman; Du, Jingfei; Joshi, Mandar; Chen, Danqi; Levy, Omer; Lewis, Mike; Zettlemoyer, Luke; Stoyanov, Veselin (2019). "RoBERTa: A Robustly Optimized BERT Pretraining Approach". arXiv:1907.11692 [cs.CL].
- ^ Sanh, Victor; Debut, Lysandre; Chaumond, Julien; Wolf, Thomas (February 29, 2020), DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, arXiv:1910.01108
- ^ "DistilBERT". huggingface.co. Retrieved August 5, 2024.
- ^ Jiao, Xiaoqi; Yin, Yichun; Shang, Lifeng; Jiang, Xin; Chen, Xiao; Li, Linlin; Wang, Fang; Liu, Qun (October 15, 2020), TinyBERT: Distilling BERT for Natural Language Understanding, arXiv:1909.10351
- ^ Lan, Zhenzhong; Chen, Mingda; Goodman, Sebastian; Gimpel, Kevin; Sharma, Piyush; Soricut, Radu (February 8, 2020), ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, arXiv:1909.11942
- ^ Clark, Kevin; Luong, Minh-Thang; Le, Quoc V.; Manning, Christopher D. (March 23, 2020), ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, arXiv:2003.10555
- ^ He, Pengcheng; Liu, Xiaodong; Gao, Jianfeng; Chen, Weizhu (October 6, 2021), DeBERTa: Decoding-enhanced BERT with Disentangled Attention, arXiv:2006.03654
Further reading
- Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What we know about how BERT works". arXiv:2002.12327 [cs.CL].
Language model benchmark
Language model benchmarks are standardized tests designed to evaluate the performance of language models on various natural language processing tasks. These tests are intended for comparing different models' capabilities in areas such as language understanding, generation, and reasoning.
Benchmarks generally consist of a dataset and corresponding evaluation metrics. The dataset provides text samples and annotations, while the metrics measure a model's performance on tasks like question answering, text classification, and machine translation. These benchmarks are developed and maintained by academic institutions, research organizations, and industry players to track progress in the field.
Overview
Types
Benchmarks may be described by the following adjectives, not mutually exclusive:
- Classical: These tasks are studied in natural language processing, even before the advent of deep learning. Examples include the Penn Treebank for testing syntactic and semantic parsing, as well as bilingual translation benchmarked by BLEU scores.
- Question answering: These tasks have a text question and a text answer, often multiple-choice. They can be open-book or closed-book. Open-book QA resembles reading comprehension questions, with relevant passages included as annotation in the question, in which the answer appears. Closed-book QA includes no relevant passages. Closed-book QA is also called open-domain question-answering.[1][2] Before the era of large language models, open-book QA was more common, and understood as testing information retrieval methods. Closed-book QA has been common since GPT-2, as a method to measure knowledge stored within model parameters.[3]
- Omnibus: An omnibus benchmark combines many benchmarks, often previously published. It is intended as an all-in-one benchmarking solution.
- Reasoning: These tasks are usually in the question-answering format, but are intended to be more difficult than standard question answering.
- Multimodal: These tasks require processing not only text, but also other modalities, such as images and sound. Examples include OCR and transcription.
- Agency: These tasks are for a language-model-based software agent that operates a computer for a user, such as editing images, browsing the web, etc.
- Adversarial: A benchmark is "adversarial" if the items in the benchmark are picked specifically so that certain models do badly on them. Adversarial benchmarks are often constructed after SOTA models have saturated a benchmark, to renew the benchmark. A benchmark is "adversarial" only at a certain moment in time, since what is adversarial may cease to be adversarial as newer SOTA models appear.
The boundary between a benchmark and a dataset is not sharp. Generally, a dataset contains three "splits": training, test, validation. Both the test and validation splits are essentially benchmarks. In general, a benchmark is distinguished from a test/validation dataset in that a benchmark is typically intended to be used to measure the performance of many different models that are not trained specifically for doing well on the benchmark, while a test/validation set is intended to be used to measure the performance of models trained specifically on the corresponding training set. In other words, a benchmark may be thought of as a test/validation set without a corresponding training set.
Conversely, certain benchmarks may be used as a training set, such as the English Gigaword[4] or the One Billion Word Benchmark, which in modern language is just the negative log likelihood loss on a pretraining set with 1 billion words.[5] Indeed, the distinction between benchmark and dataset in language models became sharper after the rise of the pretraining paradigm.
Lifecycle
Generally, the life cycle of a benchmark consists of the following steps:[6]
- Inception: A benchmark is published. It can be simply given as a demonstration of the power of a new model (implicitly), which others then pick up as a benchmark, or as a benchmark that others are encouraged to use (explicitly).
- Growth: More papers and models use the benchmark, and the performance on the benchmark grows.
- Maturity, degeneration or deprecation: A benchmark may be saturated, after which researchers move on to other benchmarks. Progress on the benchmark may also be neglected as the field moves to focus on other benchmarks.
- Renewal: A saturated benchmark can be upgraded to make it no longer saturated, allowing further progress.
Construction
Like datasets, benchmarks are typically constructed by several methods, individually or in combination:
- Web scraping: Ready-made question-answer pairs may be scraped online, such as from websites that teach mathematics and programming.
- Conversion: Items may be constructed programmatically from scraped web content, such as by blanking out named entities from sentences, and asking the model to fill in the blank. This was used for making the CNN/Daily Mail Reading Comprehension Task (a toy sketch of this conversion follows this list).
- Crowd sourcing: Items may be constructed by paying people to write them, such as on Amazon Mechanical Turk. This was used for making the MCTest.
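As a toy sketch of the conversion method above (the example sentence, entity, and "@placeholder" convention are illustrative assumptions, not the actual CNN/Daily Mail pipeline):

```python
# Toy sketch of the "conversion" construction method: turn a scraped sentence
# into a cloze-style item by blanking out one named entity.
# The sentence, entity, and "@placeholder" convention are illustrative.
def make_cloze(sentence: str, entity: str, placeholder: str = "@placeholder"):
    """Return (question, answer) with the entity blanked out."""
    if entity not in sentence:
        raise ValueError("entity not found in sentence")
    return sentence.replace(entity, placeholder), entity

bullet = "Acme Corp announced a new factory in Springfield."
question, answer = make_cloze(bullet, "Acme Corp")
print(question)  # "@placeholder announced a new factory in Springfield."
print(answer)    # "Acme Corp"
```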
Evaluation
Generally, benchmarks are fully automated. This limits the questions that can be asked. For example, with mathematical questions, "proving a claim" would be difficult to automatically check, while "calculate an answer with a unique integer answer" would be automatically checkable. With programming tasks, the answer can generally be checked by running unit tests, with an upper limit on runtime.
The benchmark scores are of the following kinds:
- For multiple choice or cloze questions, common scores are accuracy (frequency of correct answer), precision, recall, F1 score, etc.
- pass@n: The model is given n attempts to solve each problem. If any attempt is correct, the model earns a point. The pass@n score is the model's average score over all problems.
- k@n: The model makes n attempts to solve each problem, but only k attempts out of them are selected for submission. If any submission is correct, the model earns a point. The k@n score is the model's average score over all problems.
- cons@n: The model is given n attempts to solve each problem. If the most common answer is correct, the model earns a point. The cons@n score is the model's average score over all problems. Here "cons" stands for "consensus" or "majority voting".[7]
The pass@n score can be estimated more accurately by making $N \ge n$ attempts and using the unbiased estimator $1 - \binom{N-c}{n} / \binom{N}{n}$, where $c$ is the number of correct attempts.[8]
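A small sketch of this estimator (the formulation popularized by the Codex paper; the function and variable names are illustrative):

```python
# Unbiased estimator for pass@n given N sampled attempts, c of them correct.
# Names are illustrative; the formula follows the commonly used Codex-style
# estimator 1 - C(N-c, n) / C(N, n).
from math import comb

def pass_at_n(N: int, c: int, n: int) -> float:
    """Estimate pass@n from N total attempts with c correct ones."""
    if N - c < n:          # every size-n subset must contain a correct attempt
        return 1.0
    return 1.0 - comb(N - c, n) / comb(N, n)

# Example: 20 attempts, 3 correct, estimate pass@5
print(round(pass_at_n(20, 3, 5), 3))
```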
For less well-formed tasks, where the output can be any sentence, there are the following commonly used scores: BLEU, ROUGE, METEOR, NIST, word error rate, LEPOR, CIDEr,[9] SPICE,[10] etc.
Issues
- error: Some benchmark answers may be wrong.[11]
- ambiguity: Some benchmark questions may be ambiguously worded.
- subjective: Some benchmark questions may not have an objective answer at all. This problem generally prevents creative writing benchmarks. Similarly, this prevents benchmarking writing proofs in natural language, though benchmarking proofs in a formal language is possible.
- open-ended: Some benchmark questions may not have a single answer of a fixed size. This problem generally prevents programming benchmarks from using more natural tasks such as "write a program for X", forcing them to instead use tasks such as "write a function that implements specification X".
- inter-annotator agreement: Some benchmark questions may not be fully objective, such that even people would not agree 100% on what the answer should be. This is common in natural language processing tasks, such as syntactic annotation.[12][13][14][15]
- shortcut: Some benchmark questions may be easily solved by an "unintended" shortcut. For example, in the SNLI benchmark, having a negative word like "not" in the second sentence is a strong signal for the "Contradiction" category, regardless of what the sentences actually say.[16]
- contamination/leakage: Some benchmark questions may have answers already present in the training set. Also called "training on the test set".[17][18] Some benchmarks (such as Big-Bench) may use a "canary string", so that documents containing the canary string can be voluntarily removed from the training set.
- saturation: As time goes on, many models reach the highest performance level practically possible, and so the benchmark can no longer differentiate these models. For example, GLUE had been saturated, necessitating SuperGLUE.
- Goodhart's law: If new models are designed or selected to score highly on a benchmark, the benchmark may cease to be a good indicator for model quality.[6]
- cherry picking: New model publications may only point to benchmark scores on which the new model performed well, avoiding benchmark scores that it did badly on.
List of benchmarks
General language modeling
Essentially any dataset can be used as a benchmark for statistical language modeling, with the perplexity (or, near-equivalently, negative log-likelihood and bits per character, as in Shannon's original test of the entropy of the English language[19]) being used as the benchmark score. For example, the original GPT-2 announcement included the model's scores on WikiText-2, enwik8, text8, and WikiText-103 (all being standard language datasets made from the English Wikipedia).[3][20]
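To make the near-equivalence explicit, the standard definitions can be written as follows (the notation is assumed here, not taken from the text):

```latex
% Per-token negative log-likelihood, perplexity, and bits per character
% for a held-out token sequence w_1, ..., w_T with N_chars characters
% (standard definitions; notation is assumed, not from the source).
\mathrm{NLL} = -\frac{1}{T}\sum_{t=1}^{T} \ln p(w_t \mid w_{<t}),
\qquad
\mathrm{PPL} = \exp(\mathrm{NLL}),
\qquad
\mathrm{BPC} = -\frac{1}{N_{\text{chars}}}\sum_{t=1}^{T} \log_2 p(w_t \mid w_{<t})
```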
However, there have been datasets more commonly used, or specifically designed, for use as a benchmark.
- One Billion Word Benchmark: The negative log likelihood loss on a dataset of 1 billion words.[5]
- Penn Treebank: The error or negative log likelihood loss for part-of-speech tags on a dataset of text.
- Paloma (Perplexity Analysis for Language Model Assessment): A collection of English and code texts, divided into 546 domains. Used to measure the perplexity of a model on specific domains.[21]
General language understanding
See [22] for a review of over 100 such benchmarks.
- WSC (Winograd schema challenge): 273 sentences with ambiguous pronouns. The task is to determine what the pronoun refers to.[23]
- WinoGrande: A larger version of WSC with 44,000 items. Designed to be adversarial to 2019 SOTA, since the original had been saturated. This dataset consists of fill-in-the-blank style sentences, as opposed to the pronoun format of previous datasets.[24][25]
- CoLA (Corpus of Linguistic Acceptability): 10,657 English sentences from published linguistics literature that were manually labeled either as grammatical or ungrammatical.[26][27]
- SNLI (Stanford Natural Language Inference): 570K human-written English sentence pairs manually labeled for balanced classification with 3 labels "entailment", "contradiction", and "neutral".[28][29]
- WMT 2014 (Workshop on Statistical Machine Translation): a collection of 4 machine translation benchmarks at the Ninth Workshop on Statistical Machine Translation. The Attention Is All You Need paper used it as a benchmark.[30]
- MultiNLI (Multi-Genre Natural Language Inference): Similar to SNLI, with 433K English sentence pairs from ten distinct genres of written and spoken English.[31]
- CNN/Daily Mail Reading Comprehension Task: Articles from CNN (380K training, 3.9K development, 3.2K test) and Daily Mail (879K training, 64.8K development, 53.2K test) were scraped. The bullet point summaries accompanying the news articles were used. One entity in a bullet point was replaced with a placeholder, creating a cloze-style question. The goal is to identify the masked entity from the article.[32]
- SWAG (Situations With Adversarial Generations): 113K descriptions of activities or events, each with 4 candidate endings; the model must choose the most plausible ending. Adversarial against a few shallow language models (MLP, bag of words, one-layer CNN, etc.).[33]
- HellaSwag (Harder Endings, Longer contexts, and Low-shot Activities for SWAG): A harder version of SWAG. Contains 10K items.[34][35]
- RACE (ReAding Comprehension Examinations): 100,000 reading comprehension problems in 28,000 passages, collected from the English exams for middle and high school Chinese students aged 12 to 18.[36]
- LAMBADA: 10,000 narrative passages from books, each with a missing last word that humans can guess if given the full passage but not from the last sentence alone.[37]
General language generation
- NaturalInstructions: 61 distinct tasks with human-authored instructions, and 193k task instances (input-output pairs). The instructions are obtained from crowdsourcing instructions used to create existing NLP datasets and mapped to a unified schema.[38]
- Super-NaturalInstructions: 1,616 diverse NLP tasks and their expert-written instructions, and 5M task instances.[39]
- IFEval (Instruction-Following Eval): 541 instructions to be followed, each containing at least one verifiable constraint, such as "mention the keyword of AI at least 3 times".[40]
- Chatbot Arena: Human users vote between two outputs from two language models. An Elo rating for each language model is computed based on these human votes (a minimal sketch of such a rating update follows this list).[41]
- MT-Bench (multi-turn benchmark): An automated version of Chatbot Arena where LLMs replace humans in generating votes.[41]
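A minimal sketch of an Elo-style update from a single pairwise vote (the K-factor and starting ratings are illustrative assumptions; the actual leaderboard has also used Bradley–Terry-style fitting over all votes rather than this simple online rule):

```python
# Minimal sketch of an Elo-style update from one pairwise vote between two
# models. K and the starting ratings are illustrative assumptions; the real
# leaderboard has also used Bradley-Terry style fitting over all votes.
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Return updated (winner, loser) ratings after one human vote."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    r_winner += k * (1.0 - expected_win)
    r_loser -= k * (1.0 - expected_win)
    return r_winner, r_loser

ratings = {"model_a": 1000.0, "model_b": 1000.0}
ratings["model_a"], ratings["model_b"] = elo_update(ratings["model_a"], ratings["model_b"])
print(ratings)  # model_a gains exactly what model_b loses
```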
Open-book question-answering
- MCTest (Machine Comprehension Test): 500 fictional stories, each with 4 multiple-choice questions (with at least 2 requiring multi-sentence understanding), designed to be understandable by a 7-year-old. The vocabulary was limited to approximately 8,000 words probably known by a 7-year-old. The stories were written by workers on Amazon Mechanical Turk.[42]
- SQuAD (Stanford Question Answering Dataset): 100,000+ questions posed by crowd workers on 500+ Wikipedia articles. The task is, given a passage from Wikipedia and a question, to find a span of text in the passage that answers the question.[43]
- SQuAD 2.0: 50,000 unanswerable questions that look similar to SQuAD questions. Every such unanswerable question must be answered with an empty string. Written by crowd workers.[44]
- ARC (AI2 Reasoning Challenge): Multiple choice questions, with a Challenge Set (2590 questions) and an Easy Set (5197 questions). Designed specifically to be adversarial against models that had saturated SNLI and SQuAD.[45]
- CoQA (Conversational QA): 127k questions with answers, obtained from 8k conversations about text passages from seven diverse domains.[46]
- WebQuestions: 6,642 question-answer pairs designed to be answerable with knowledge present in the 2013 version of Freebase.[47]
- Natural Questions: 323,045 items, each containing a question that had been searched on Google, a Wikipedia page relevant for answering the question, a long answer (typically a paragraph) and a short answer (one or more entities) if present on the page, or "null" if no long/short answer is present.[48]
- TriviaQA: 650K question-answer-evidence triples. Includes 95K question-answer pairs scraped from 14 trivia and quiz-league websites, and (on average 6) evidence documents for each pair, gathered by searching with Bing and Wikipedia.[49]
- OpenBookQA: 5960 multiple choice questions, each coming with an elementary level science fact (the "open book"). There are 1329 such facts in total.[50]
- SearchQA: 140,461 question-answer pairs from the J! Archive, with each pair augmented with (on average 50) snippets and URLs obtained by searching the question on Google.[51]
- HotpotQA: 113K multi-hop questions that require reading multiple Wikipedia-based passages to answer. They were produced by showing crowd workers multiple supporting context documents and asking them to produce questions that require reasoning about all of the documents.[52]
- StrategyQA: 2,780 questions annotated with relevant passages from Wikipedia, such that the questions require multi-hop reasoning over the passages to answer. For example, "Did Aristotle use a laptop?" is annotated with passages from the Wikipedia pages for "laptop" and "Aristotle".[53]
- DROP (Discrete Reasoning Over the content of Paragraphs): 96,567 questions along with Wikipedia passages, especially from narratives rich in numerical information (like sports summaries and history), often involving multi-step numerical reasoning over several text spans. Adversarial against 2019 SOTA.[54]
- GRS-QA: Graph Reasoning-Structured Question Answering Dataset. A dataset designed to evaluate question answering models on graph-based reasoning tasks.[55]
- ChartQA: 32,719 questions about 20,882 charts crawled from four diverse online sources (Statista, Pew Research Center, Our World In Data, OECD). Of these, 9,608 were human-written (in ChartQA-H), and 23,111 were machine-generated (in ChartQA-M). The answers are either verbatim texts from the chart or integers calculated based on the chart's data.[56]
- DocVQA: multimodal, 50,000 questions on 12,767 document images, sectioned from 6,071 distinct documents. The documents were sourced from 5 industries (tobacco, food, drug, fossil fuel, chemical) of the UCSF Industry Documents Library, mostly from the 1940–2010 period. Documents with structured elements like tables, forms, lists, and figures were prioritized. The answers are verbatim extracts from the document text.[57][58][59]
Closed-book question-answering
- C-Eval (Chinese Eval): 13,948 multiple choice questions in 52 subjects at 4 levels of difficulty. In Chinese.[60]
- TruthfulQA: 817 questions in health, law, finance and politics with common misconceptions. Adversarial against GPT-3 and T5.[61]
- PIQA (Physical Interaction QA): 17,951 two-choice questions. Each question gives a goal (like separating egg yolk from egg white with a water bottle), and 2 choices for accomplishing it.[62]
- MedQA: 61,097 questions from professional medical board exams, in English, Simplified Chinese, and Traditional Chinese.[63]
- ScienceQA: 21,208 multiple choice questions in natural science, social science, and linguistics, with difficulty levels from grade 1 to grade 12, sourced from elementary and high school science curricula. Some questions require reading a diagram. Most questions are annotated with textual lectures and explanations.[64]
- SimpleQA: 4,326 short questions that are answerable with knowledge as of 2023. Each answer is graded as either "correct", "incorrect", or "not attempted". Adversarial against GPT-4 specifically.[65]
- RealWorldQA: 765 multimodal multiple-choice questions, each containing an image and a question. Designed to test spatial understanding. Images are drawn from various real-world scenarios, including those captured from vehicles.[66]
- OpenEQA (Open Embodied QA): over 1,600 questions about videos, scans of real-world environments, and simulations.[67]
Omnibus
Some benchmarks are "omnibus", meaning they are made by combining several previous benchmarks.
- GLUE (General Language Understanding Evaluation): a collection of 9 benchmarks designed for testing general language understanding. The tasks are in single-sentence or sentence-pair format. There are over 1M items.[68][69]
- SuperGLUE: An update to GLUE. Designed to be still challenging to the SOTA models of the time (2019) since the original had been saturated. Includes 8 additional tasks (e.g. logical reasoning, commonsense inference, coreference resolution).[70]
- Big-Bench (Beyond the Imitation Game): A benchmark collection of 204 tasks.[71] A particular subset of 23 tasks is called BBH (Big-Bench Hard).[72] An adversarial variant of BBH is called BBEH (Big-Bench Extra Hard), made by replacing each of the 23 tasks from BBH with a similar but adversarial variant.[73]
- MMLU (Measuring Massive Multitask Language Understanding): 16,000 multiple-choice questions spanning 57 academic subjects including mathematics, philosophy, law, and medicine.[74] Upgraded to MMLU-Pro, which increases the number of choices from 4 to 10, eliminates the trivial and noisy questions from MMLU, and adds harder problems.[75]
- MMMLU (Multilingual MMLU): The test set of MMLU, translated into 14 languages by professional human translators.[76]
- CMMLU (Chinese MMLU): 1,528 multiple-choice questions across 67 subjects, 16 of which are "China-specific", like Classical Chinese. Some data were collected from non-publicly available materials, mock exam questions, and questions from quiz shows to avoid contamination. More than 80% of the data was crawled from PDFs after OCR.[77]
- MMMU (Massive Multi-discipline Multimodal Understanding): A vision-language version of MMLU. 11,550 questions collected from college exams, quizzes, and textbooks, covering 30 subjects. The questions require image understanding to solve. Includes multiple-choice questions and open-ended QA (which are scored by regex extraction). The human expert baseline is 89%.[78][79]
- MMMU-Pro: 1,730 multiple-choice multimodal questions in the same format as MMMU, designed to be adversarial against text-only models. Some problems in MMMU turned out to be answerable without looking at the images, necessitating MMMU-Pro. Each question has 10 choices and is presented in both a text-image format and a screenshot/photo format.[80]
- MMT-Bench: A comprehensive benchmark designed to assess LVLMs across massive multimodal tasks requiring expert knowledge and deliberate visual recognition, localization, reasoning, and planning. Comprises 31,325 meticulously curated multi-choice visual questions from various multimodal scenarios such as vehicle driving and embodied navigation, covering 32 core meta-tasks and 162 subtasks in multimodal understanding.[81]
Agency
- GAIA: 450 questions with unambiguous answers that require information that can be obtained by browsing the Internet, requiring different levels of tooling and autonomy to solve. Divided into 3 difficulty levels.[82]
- WebArena: 241 mock-up websites based on real-world websites (Reddit, GitLab, Magento's admin portal, etc), and 812 tasks to be performed on the websites. The tasks include information-seeking, site navigation, and content and configuration operation.[83]
- Mind2Web: 2,350 tasks collected from 137 websites, and crowdsourced action sequences. The task is to reproduce the action sequence.[84]
- OSWorld: 369 multimodal computer-using tasks, involving multiple real web and desktop apps and OS file I/O. In both Windows and Ubuntu. Each task includes an initial state setup configuration, and is tested by an execution-based evaluation script.[85]
- Windows Agent Arena: 154 multimodal tasks with the same format as OSWorld. Only in Windows.[86]
- WebVoyager: 643 multimodal tasks based on 15 popular websites. Evaluation is by screenshotting the action sequence and asking a vision language model to judge.[87]
- BFCL (Berkeley Function-Calling Leaderboard): The task is to write API calls according to a specification. Released in 3 versions, with 1,760, 2,251, and 1,000 items respectively. Some calls are evaluated by parsing into an AST and comparing against the reference answer, while others are evaluated by calling and comparing the response against the reference response. Includes Python, Java, JavaScript, SQL, and REST API.[88]
- TAU-bench (Tool-Agent-User benchmark, also written as τ-bench): Two environments (retail, airline booking) that test for an agent to fulfill user instructions, interactively over multiple turns of dialogue. The user is simulated by a language model.[89]
- terminal-bench: A collection of complex tasks in the Linux terminal.[90]
Context length
Some benchmarks were designed specifically to test for processing continuous text that is very long.
- Needle in a haystack tests (NIH): This is not a specific benchmark, but a method for benchmarking context lengths. In this method, a long context window is filled with text, such as Paul Graham's essays, and a random statement is inserted. The task is to answer a question about the inserted statement.[91]
- Long Range Arena: 6 synthetic tasks that required 1K to 16K tokens of context length to solve.[92]
- NoLiMa: Long-Context Evaluation Beyond Literal Matching. The benchmark assesses long-context models beyond simple keyword matching. Specifically, the words in the question have minimal or no direct lexical overlap with the words in the "needle" sentence. The "haystacks" are 10 open-licensed books.[93]
- L-Eval: 2,000+ human-labeled query-response pairs over 508 long documents in 20 tasks, including diverse task types, domains, and input lengths (3K–200K tokens).[94]
- InfiniteBench: 3946 items in 12 tasks from 5 domains (retrieval, code, math, novels, and dialogue) with context lengths exceeding 100K tokens.[95]
- ZeroSCROLLS: 4,378 items in 10 tasks. Includes 6 tasks from SCROLLS and introduces 4 new datasets. Named "zero" because it was designed for zero-shot learning during the early days of the pretraining paradigm, back when zero-shot capability was uncommon.[96]
- LongBench: 4,750 tasks on 21 datasets across 6 task categories in both English and Chinese, with an average length of 6,711 words (English) and 13,386 characters (Chinese).[97] Updated with LongBench v2, which contains 503 more tasks that require a context length ranging from 8K to 2M words, with the majority under 128K.[98][99]
- RULER: 13 tasks in 4 categories (retrieval, multi-hop, aggregation, question answering). Each task is specified by a program which can generate arbitrarily long instances of each task on demand.[100]
- LOFT (Long-Context Frontiers): 6 long-context task categories (text retrieval, visual retrieval, audio retrieval, retrieval-augmented generation, SQL-like dataset query, many-shot in-context learning) in 35 datasets and 4 modalities. Up to 1 million tokens.[101]
- MTOB (Machine Translation from One Book): translate sentences between English and Kalamang after reading a grammar book of Kalamang (~570 pages),[102] a bilingual word list (2,531 entries, with part-of-speech tags) and a small parallel corpus of sentence pairs (~400 train sentences, 100 test sentences, filtered to exclude examples from the book), both published on Dictionaria.[103][104]
Reasoning
Mathematics
- Alg514: 514 algebra word problems and associated equation systems gathered from Algebra.com.[105][106]
- Math23K: 23,164 elementary school Chinese mathematical word problems, collected from various online educational websites.[107]
- AQuA-RAT (Algebra Question Answering with Rationales): Also known as just "AQuA". 100,000 algebraic word problems with 5 choices per problem, and an annotation for the correct choice with natural language rationales. 34,202 "seed problems" were collected from many sources, such as GMAT and GRE, which were then expanded to the full dataset with Amazon Mechanical Turk.[108]
- GSM8K (Grade School Math): 8.5K linguistically diverse elementary school math word problems that require 2 to 8 basic arithmetic operations to solve.[109] Contains errors that had been corrected with GSM8K-Platinum.[110]
- GSM1K: 1,205 items with the same format and difficulty as GSM8K. More securely contained to avoid the data contamination concerns with the previous GSM8K.[111]
- MATH: 12,500 competition-level math problems divided into difficulty levels 1 to 5 (as the Art of Problem Solving), with AIME problems being level 5. There are 1,324 level 5 items.[112] An adversarial version is MATH-P, obtained by modifying a few characters in the original questions.[113]
- MathQA: 37,200 word problems in English. Each problem comes from AQuA-RAT and is annotated with an "operation program" which exactly specifies the mathematical operations required to solve the problem, written in a domain-specific language with 58 operators.[114] Has a variant, MathQA-Python, consisting of 23,914 problems, produced by taking the solutions to a subset of the MathQA dataset and rewriting them into Python.[115]
- MathEval: An omnibus benchmark that contains 20 other benchmarks, such as GSM8K, MATH, and the math subsection of MMLU. Over 20,000 math problems. Difficulty ranges from elementary school to high school competition.[116]
- TheoremQA: 800 questions that test for the use of 350 theorems from math, physics, electrical engineering, computer science, and finance.[117]
- ProofNet: 371 theorems in undergraduate-level mathematics, each consisting of a formal statement in Lean, a natural language statement, and a natural language proof. There are two tasks: given an informal (formal) statement, produce a corresponding formal (informal) statement; given an informal theorem statement, its informal proof, and its formal statement, produce a formal proof.[118] It was originally in Lean 3,[119] but the original authors deprecated it in favor of the Lean 4 version.[120]
- miniF2F (mini formal-to-formal): 488 Olympiad-level mathematics problems from AIME, AMC, and IMO, stated in formal languages (Metamath, Lean, Isabelle (partially) and HOL Light (partially)). The task is to formally prove the formal statement, which can be verified automatically.[121]
- U-MATH: 1100 math problems sourced from real-world university curricula, balanced across six subjects with 20% of problems including visual elements.[122]
- MathBench: 3,709 questions in English and Chinese, divided into 5 difficulty levels (basic arithmetic, primary school, middle school, high school, college). Divided into 2,209 questions of MathBench-T (theoretical) and 1,500 questions of MathBench-A (applied).[123]
- PutnamBench: 1,709 formalized versions of Putnam competition questions from 1962 to 2023. The task is to compute the numerical answer (if there is a numerical answer) and to provide a formal proof. The formalizations are in Lean 4, Isabelle, and Coq.[124][125]
- Omni-MATH: 4,428 competition-level math problems with human annotation.[126]
- FrontierMath: Several hundred questions from areas of modern math that are difficult for professional mathematicians to solve. Many questions have integer answers, so that answers can be verified automatically. Held out to prevent contamination.[127]
- MathArena: Instead of being a purpose-built benchmark, MathArena simply takes the latest math competitions (AIME and HMMT) as soon as possible and uses those to benchmark LLMs, to prevent contamination.[128]
Pro