| text (string) | metadata (dict) |
|---|---|
# User Guide
This document details the interface exposed by `lm-eval` and provides details on what flags are available to users.
## Command-line Interface
A majority of users run the library by cloning it from GitHub, installing the package as editable, and running the `python -m lm_eval` script.
Equivalently, runn... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/interface.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/interface.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 12830
} |
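The CLI described above also has a programmatic entry point. Below is a minimal sketch assuming the v0.4-era `lm_eval.simple_evaluate` API; the model and task names are illustrative placeholders, not values prescribed by the excerpt.
```python
# Minimal sketch: programmatic equivalent of the `python -m lm_eval` CLI.
# Model and task names here are illustrative placeholders.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # HuggingFace backend
    model_args="pretrained=EleutherAI/pythia-160m",  # any HF causal LM
    tasks=["hellaswag"],
    num_fewshot=0,
)
print(results["results"])  # per-task metric dict
```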
# New Model Guide
This guide may be of special interest to users who are using the library outside of the repository, by installing it from PyPI and calling `lm_eval.evaluator.evaluate()` to evaluate an existing model.
In order to properly evaluate a given LM, we require implementation of a wrapper class sub... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/model_guide.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/model_guide.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 11342
... |
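The wrapper class mentioned in the excerpt above subclasses the harness's `LM` interface. A minimal sketch, assuming the v0.4 `lm_eval.api.model.LM` method names; the class and registry name are hypothetical:
```python
# Hedged sketch of a custom model wrapper for lm-eval (v0.4-style interface assumed).
from lm_eval.api.model import LM
from lm_eval.api.registry import register_model

@register_model("my_custom_lm")  # hypothetical name, usable as --model my_custom_lm
class MyCustomLM(LM):
    def loglikelihood(self, requests):
        # Return a (logprob, is_greedy) pair per (context, continuation) request.
        raise NotImplementedError

    def loglikelihood_rolling(self, requests):
        # Return full-sequence loglikelihoods for perplexity-style tasks.
        raise NotImplementedError

    def generate_until(self, requests):
        # Return one generated string per request, honoring its stop sequences.
        raise NotImplementedError
```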
# New Task Guide
`lm-evaluation-harness` is a framework that strives to support a wide range of zero- and few-shot evaluation tasks on autoregressive language models (LMs).
This documentation page provides a walkthrough to get started creating your own task, in `lm-eval` versions v0.4.0 and later.
A more interactive... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/new_task_guide.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/new_task_guide.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": ... |
# Task Configuration
The `lm-evaluation-harness` is meant to be an extensible and flexible framework within which many different evaluation tasks can be defined. All tasks in the new version of the harness are built around a YAML configuration file format.
These YAML configuration files, along with the current codeba... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/docs/task_guide.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/docs/task_guide.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 20188
} |
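To make the YAML-based format concrete, here is an illustrative Python mirror of a minimal task configuration; the keys correspond to common v0.4 config fields, and every value is a placeholder rather than a setting taken from the excerpt above:
```python
# Illustrative mirror of a minimal task YAML; each key maps 1:1 to a YAML field.
task_config = {
    "task": "my_new_task",                # unique task name
    "dataset_path": "my_org/my_dataset",  # hypothetical HF datasets path
    "output_type": "multiple_choice",     # scoring mode, e.g. loglikelihood-based MC
    "doc_to_text": "{{question}}",        # Jinja template building the prompt
    "doc_to_target": "{{answer}}",        # Jinja template for the gold target
    "metric_list": [{"metric": "acc"}],
}
```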
# Code Repo
[**Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for Problem-Solving with Language Models**](https://arxiv.org/abs/2408.00724).
## Clone
git clone --recurse-submodules git@github.com:thu-wyz/rebase.git
This command will clone our repository with the [sglang](https://github.... | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 2704
} |
<div align="center">
<img src="assets/logo.png" alt="logo" width="400"></img>
</div>
--------------------------------------------------------------------------------
| [**Blog**](https://lmsys.org/blog/2024-01-17-sglang/) | [**Paper**](https://arxiv.org/abs/2312.07104) |
SGLang is a structured generation language de... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 14251
} |
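As a taste of the structured generation language described above, a minimal sketch using SGLang's Python frontend; the endpoint URL assumes a locally launched server:
```python
# Minimal SGLang frontend sketch; assumes a runtime already serving on port 30000.
import sglang as sgl

@sgl.function
def qa(s, question):
    s += sgl.user(question)
    s += sgl.assistant(sgl.gen("answer", max_tokens=64))

sgl.set_default_backend(sgl.RuntimeEndpoint("http://localhost:30000"))
state = qa.run(question="What is the capital of France?")
print(state["answer"])
```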
# Tasks
A list of supported tasks and task groupings can be viewed with `lm-eval --tasks list`.
For more information, including a full list of task names and their precise meanings or sources, follow the links provided to the individual README.md files for each subfolder.
| Task Family | Description | Language(s) ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size"... |
janitor.py contains a script to remove benchmark data contamination from training data sets.
It uses the approach described in the [GPT-3 paper](https://arxiv.org/abs/2005.14165).
## Algorithm
1) Collects all contamination text files that are to be removed from training data
2) Filters training data by finding `N`-gram... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/scripts/clean_training_data/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/scripts/clean_training_data/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-... |
# Task-name
### Paper
Title: `paper title goes here`
Abstract: `link to paper PDF or arXiv abstract goes here`
`Short description of paper / benchmark goes here:`
Homepage: `homepage to the benchmark's website goes here, if applicable`
### Citation
```
BibTeX-formatted citation goes here
```
### Groups, Tags,... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/templates/new_yaml_task/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/templates/new_yaml_task/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
# Finetune
## gpt-accelera
Using gpt-accelera, first download the HF model and convert it to checkpoints:
bash ./scripts_finetune/prepare*.sh
Then finetune the reward model or policy model:
bash ./scripts_finetune/finetune_rm.sh
bash ./scripts_finetune/finetune_sft.sh
Finally, convert the checkpoints back to an HF model:
ba... | {
"source": "simplescaling/s1",
"title": "eval/rebase/inference_scaling/finetune/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/inference_scaling/finetune/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 46... |
## Benchmark Results
We tested our system on the following common LLM workloads and reported the achieved throughput:
- **[MMLU](https://arxiv.org/abs/2009.03300)**: A 5-shot, multi-choice, multi-task benchmark.
- **[HellaSwag](https://arxiv.org/abs/1905.07830)**: A 20-shot, multi-choice sentence completion benchmark.... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/docs/benchmark_results.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/benchmark_results.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 1671
} |
## Flashinfer Mode
[flashinfer](https://github.com/flashinfer-ai/flashinfer) is a kernel library for LLM serving.
It can be used in SGLang runtime to accelerate attention computation.
### Install flashinfer
See https://docs.flashinfer.ai/installation.html.
### Run a Server With Flashinfer Mode
Add `--enable-flashi... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/docs/flashinfer.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/flashinfer.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 510
} |
## How to Support a New Model
To support a new model in SGLang, you only need to add a single file under [SGLang Models Directory](https://github.com/sgl-project/sglang/tree/main/python/sglang/srt/models).
You can learn from existing model implementations and create new files for the new models. Most models are based... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/docs/model_support.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/model_support.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 1253
} |
## Sampling Parameters of SGLang Runtime
This doc describes the sampling parameters of the SGLang Runtime.
The `/generate` endpoint accepts the following arguments in JSON format.
```python
@dataclass
class GenerateReqInput:
# The input prompt
text: Union[List[str], str]
# The image input
image_da... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/docs/sampling_params.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/sampling_params.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 2521
} |
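A hedged sketch of calling the `/generate` endpoint with the arguments above; port 30000 matches the launch commands elsewhere in these docs, and the exact response fields may vary by version:
```python
# Hedged sketch: POST a prompt to a running SGLang server's /generate endpoint.
import requests

resp = requests.post(
    "http://localhost:30000/generate",
    json={
        "text": "Once upon a time,",
        "sampling_params": {"temperature": 0.7, "max_new_tokens": 64},
    },
)
print(resp.json())  # generated text plus metadata
```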
## SRT Unit Tests
### Low-level API
```
cd sglang/test/srt/model
python3 test_llama_low_api.py
python3 test_llama_extend.py
python3 test_llava_low_api.py
python3 bench_llama_low_api.py
```
### High-level API
```
python -m sglang.launch_server --model-path meta-llama/Llama-2-7b-chat-hf --port 30000
```
```
cd test/... | {
"source": "simplescaling/s1",
"title": "eval/rebase/sglang/docs/test_process.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/rebase/sglang/docs/test_process.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"file_size": 1325
} |
# ACLUE
### Paper
Can Large Language Model Comprehend Ancient Chinese? A Preliminary Test on ACLUE
https://arxiv.org/abs/2310.09550
The Ancient Chinese Language Understanding Evaluation (ACLUE) is an evaluation benchmark focused on ancient Chinese language comprehension. It aims to assess the performance of large-sc... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/aclue/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aclue/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# Arabic EXAMS
### Paper
EXAMS: a resource specialized in multilingual high school exam questions.
The original paper [EXAMS](https://aclanthology.org/2020.emnlp-main.438/)
The Arabic EXAMS dataset includes five subjects:
- Islamic studies
- Biology
- Physics
- Science
- Social
The original dataset [EXAMS... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/aexams/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aexams/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# AfriMGSM
### Paper
IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
https://arxiv.org/pdf/2406.03368
IrokoBench is a human-translated benchmark dataset for 16 typologically diverse
low-resource African languages covering three tasks: natural language inference (AfriXNLI),
mathema... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/afrimgsm/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrimgsm/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# AfriMMLU
### Paper
IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
https://arxiv.org/pdf/2406.03368
IrokoBench is a human-translated benchmark dataset for 16 typologically diverse
low-resource African languages covering three tasks: natural language inference (AfriXNLI),
mathema... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/afrimmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrimmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# IrokoBench
### Paper
IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
https://arxiv.org/pdf/2406.03368
IrokoBench is a human-translated benchmark dataset for 16 typologically diverse
low-resource African languages covering three tasks: natural language inference (AfriXNLI),
mat... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/afrixnli/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/afrixnli/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# AGIEval
### Paper
Title: AGIEval: A Human-Centric Benchmark for Evaluating Foundation Models
Abstract: https://arxiv.org/abs/2304.06364.pdf
AGIEval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving.
T... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/agieval/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/agieval/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# GSM8k
## Paper
Training Verifiers to Solve Math Word Problems
https://arxiv.org/abs/2110.14168
State-of-the-art language models can match human performance on many tasks, but
they still struggle to robustly perform multi-step mathematical reasoning. To
diagnose the failures of current models and support research, w... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/aime/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/aime/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# ANLI
### Paper
Title: `Adversarial NLI: A New Benchmark for Natural Language Understanding`
Paper Link: https://arxiv.org/abs/1910.14599
Adversarial NLI (ANLI) is a dataset collected via an iterative, adversarial
human-and-model-in-the-loop procedure. It consists of three rounds that progressively
increase in dif... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/anli/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/anli/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# Arabic Leaderboard
Title: Open Arabic LLM Leaderboard
The Open Arabic LLM Leaderboard evaluates language models on a large number of different evaluation tasks that reflect the characteristics of the Arabic language and culture.
The benchmark uses several datasets, most of them translated to Arabic, and validated ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_complete/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_complete/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"des... |
# Arabic Leaderboard Light
Title: Open Arabic LLM Leaderboard Light
This leaderboard follows all the details as in [`arabic_leaderboard_complete`](../arabic_leaderboard_complete), except that a light version (a 10% random sample of the test set of each benchmark) is used to test the language models.
NOTE: In ACVA be... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_light/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabic_leaderboard_light/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"descripti... |
# ArabicMMLU
### Paper
Title: ArabicMMLU: Assessing Massive Multitask Language Understanding in Arabic
Abstract: https://arxiv.org/abs/2402.12840
The focus of language model evaluation has
transitioned towards reasoning and knowledge-intensive tasks, driven by advancements in pretraining large models. While state-o... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arabicmmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# ARC
### Paper
Title: Think you have Solved Question Answering? Try ARC, the AI2 Reasoning Challenge
Abstract: https://arxiv.org/abs/1803.05457
The ARC dataset consists of 7,787 science exam questions drawn from a variety
of sources, including science questions provided under license by a research
partner affiliat... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/arc/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arc/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"fi... |
# arc mt
ARC MT implements tasks supporting machine-translated ARC Challenge evals, improving eval coverage across a number of additional languages.
The main page for the effort is
[here](https://huggingface.co/datasets/LumiOpen/arc_challenge_mt) and we will
include more data and analysis there.
Initial... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/arc_mt/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arc_mt/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# Arithmetic
### Paper
Title: `Language Models are Few-Shot Learners`
Abstract: https://arxiv.org/abs/2005.14165
A small battery of 10 tests that involve asking language models a simple arithmetic
problem in natural language.
Homepage: https://github.com/openai/gpt-3/tree/master/data
### Citation
```
@inproceedi... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/arithmetic/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/arithmetic/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# ASDiv
### Paper
Title: `ASDiv: A Diverse Corpus for Evaluating and Developing English Math Word Problem Solvers`
Abstract: https://arxiv.org/abs/2106.15772
ASDiv (Academia Sinica Diverse MWP Dataset) is a diverse (in terms of both language
patterns and problem types) English math word problem (MWP) corpus for eva... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/asdiv/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# bAbI
### Paper
Title: Towards AI-Complete Question Answering: A Set of Prerequisite Toy Tasks
Abstract: https://arxiv.org/abs/1502.05698
One long-term goal of machine learning research is to produce methods that are applicable to reasoning and natural language, in particular building an intelligent dialogue agent.... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/babi/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/babi/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# BasqueBench
### Paper
BasqueBench is a benchmark for evaluating language models in Basque tasks. That is, it evaluates the ability of a language model to understand and generate Basque text. BasqueBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. All t... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/basque_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/basque_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-ti... |
# BasqueGLUE
### Paper
Title: `BasqueGLUE: A Natural Language Understanding Benchmark for Basque`
Abstract: `https://aclanthology.org/2022.lrec-1.172/`
Natural Language Understanding (NLU) technology has improved significantly over the last few years and multitask benchmarks such as GLUE are key to evaluate this im... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/basqueglue/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# BigBenchHard
## Paper
Title: `Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them`
Abstract: https://arxiv.org/abs/2210.09261
A suite of 23 challenging BIG-Bench tasks which we call BIG-Bench Hard (BBH).
These are the tasks for which prior language model evaluations did not outperform
the average... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/bbh/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bbh/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"fi... |
# Belebele
### Paper
The Belebele Benchmark for Massively Multilingual NLU Evaluation
https://arxiv.org/abs/2308.16884
Belebele is a multiple-choice machine reading comprehension (MRC) dataset spanning 122 language variants. This dataset enables the evaluation of mono- and multi-lingual models in high-, medium-, and... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/belebele/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/belebele/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# BertaQA
### Paper
Title: BertaQA: How Much Do Language Models Know About Local Culture?
Abstract: https://arxiv.org/abs/2406.07302
Large Language Models (LLMs) exhibit extensive knowledge about the world, but most evaluations have been limited to global or anglocentric subjects. This raises the question of how we... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/bertaqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bertaqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# BigBench
### Paper
Title: `Beyond the Imitation Game: Quantifying and extrapolating the capabilities of language models`
Abstract: https://arxiv.org/abs/2206.04615
The Beyond the Imitation Game Benchmark (BIG-bench) is a collaborative benchmark intended to probe large language models and extrapolate their future ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/bigbench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/bigbench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# BLiMP
### Paper
Title: `BLiMP: A Benchmark of Linguistic Minimal Pairs for English`
Abstract: `https://arxiv.org/abs/1912.00582`
BLiMP is a challenge set for evaluating what language models (LMs) know about
major grammatical phenomena in English. BLiMP consists of 67 sub-datasets, each
containing 1000 minimal ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/blimp/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/blimp/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# CatalanBench
### Paper
CatalanBench is a benchmark for evaluating language models in Catalan tasks. That is, it evaluates the ability of a language model to understand and generate Catalan text. CatalanBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchmark. ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/catalan_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/catalan_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-... |
# C-Eval (Validation)
### Paper
C-Eval: A Multi-Level Multi-Discipline Chinese Evaluation Suite for Foundation Models
https://arxiv.org/pdf/2305.08322.pdf
C-Eval is a comprehensive Chinese evaluation suite for foundation models.
It consists of 13948 multi-choice questions spanning 52 diverse disciplines
and four diff... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/ceval/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/ceval/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# CMMLU
### Paper
CMMLU: Measuring massive multitask language understanding in Chinese
https://arxiv.org/abs/2306.09212
CMMLU is a comprehensive evaluation benchmark specifically designed to evaluate the knowledge and reasoning abilities of LLMs within the context of Chinese language and culture.
CMMLU covers a wide... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/cmmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/cmmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# CommonsenseQA
### Paper
Title: `COMMONSENSEQA: A Question Answering Challenge Targeting Commonsense Knowledge`
Abstract: https://arxiv.org/pdf/1811.00937.pdf
CommonsenseQA is a multiple-choice question answering dataset that requires different types of commonsense knowledge to predict the correct answers.
It contains... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/commonsense_qa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/commonsense_qa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple tes... |
# COPAL
### Paper
Title: `COPAL-ID: Indonesian Language Reasoning with Local Culture and Nuances`
Abstract: `https://arxiv.org/abs/2311.01012`
`COPAL-ID is an Indonesian causal commonsense reasoning dataset that captures local nuances. It provides a more natural portrayal of day-to-day causal reasoning within the I... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/copal_id/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/copal_id/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# CoQA
### Paper
Title: `CoQA: A Conversational Question Answering Challenge`
Abstract: https://arxiv.org/pdf/1808.07042.pdf
CoQA is a large-scale dataset for building Conversational Question Answering
systems. The goal of the CoQA challenge is to measure the ability of machines to
understand a text passage and ans... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/coqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/coqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# CrowS-Pairs
### Paper
CrowS-Pairs: A Challenge Dataset for Measuring Social Biases in Masked Language Models
https://aclanthology.org/2020.emnlp-main.154/
French CrowS-Pairs: Extending a challenge dataset for measuring social bias in masked
language models to a language other than English
https://aclanthology.org/2... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/crows_pairs/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/crows_pairs/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time... |
# DROP
### Paper
Title: `DROP: A Reading Comprehension Benchmark Requiring Discrete Reasoning Over Paragraphs`
Abstract: https://aclanthology.org/attachments/N19-1246.Supplementary.pdf
DROP is a QA dataset which tests comprehensive understanding of paragraphs. In
this crowdsourced, adversarially-created, 96k questi... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/drop/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/drop/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# EQ-Bench
Title: `EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models`
Abstract: https://arxiv.org/abs/2312.06281
EQ-Bench is a benchmark for language models designed to assess emotional intelligence.
Why emotional intelligence? One reason is that it represents a subset of abilities that are im... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eq_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# EusExams
### Paper
Title: Latxa: An Open Language Model and Evaluation Suite for Basque
Abstract: https://arxiv.org/abs/2403.20266
EusExams is a collection of tests designed to prepare individuals for Public Service examinations conducted by several Basque institutions, including the public health system Osakidet... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_exams/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_exams/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
# EusProficiency
### Paper
Title: Latxa: An Open Language Model and Evaluation Suite for Basque
Abstract: https://arxiv.org/abs/2403.20266
EusProficiency comprises 5,169 exercises on different topics from past EGA exams, the official C1-level certificate of proficiency in Basque. We collected the atarikoa exercises... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_proficiency/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple t... |
# EusReading
### Paper
Title: Latxa: An Open Language Model and Evaluation Suite for Basque
Abstract: https://arxiv.org/abs/2403.20266
EusReading consists of 352 reading comprehension exercises (irakurmena) sourced from the set of past EGA exams from 1998 to 2008. Each test generally has 10 multiple-choice question... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_reading/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_reading/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time... |
# EusTrivia
### Paper
Title: Latxa: An Open Language Model and Evaluation Suite for Basque
Abstract: https://arxiv.org/abs/2403.20266
EusTrivia consists of 1,715 trivia questions from multiple online sources. 56.3% of the questions are elementary level (grades 3-6), while the rest are considered challenging. A sig... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/eus_trivia/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# FDA
### Paper
Title: Language Models Enable Simple Systems for Generating Structured Views of Heterogeneous Data Lakes
Abstract: A long-standing goal of the data management community is to develop general, automated systems
that ingest semi-structured documents and output queryable tables without human effort or do... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/fda/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/fda/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"fi... |
# FLD
### Paper
Title: Learning Deductive Reasoning from Synthetic Corpus based on Formal Logic
Abstract: https://arxiv.org/abs/2308.07336
**FLD** (**F**ormal **L**ogic **D**eduction) is a deductive reasoning benchmark.
Given a set of facts and a hypothesis, an LLM is required to generate (i) proof steps to (dis-)p... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/fld/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/fld/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"fi... |
# FrenchBench
### Paper
FrenchBench is a benchmark for evaluating French language models, introduced in the paper
[CroissantLLM: A Truly Bilingual French-English Language Model](https://arxiv.org/abs/2402.00786).
It is a collection of tasks that evaluate the ability of a language model to understand and generate Fren... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/french_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-ti... |
# GalicianBench
### Paper
GalicianBench is a benchmark for evaluating language models in Galician tasks. That is, it evaluates the ability of a language model to understand and generate Galician text. GalicianBench offers a combination of pre-existing, open datasets and datasets developed exclusively for this benchm... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/galician_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/galician_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple tes... |
# Glianorex
The goal of this benchmark is to isolate test-answering capabilities from content knowledge.
### Paper
Title: Multiple Choice Questions and Large Languages Models: A Case Study with Fictional Medical Data
Abstract: https://arxiv.org/abs/2406.02394
To test the relevance of MCQs to assess LLM per... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/glianorex/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/glianorex/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
# GLUE
**NOTE**: GLUE benchmark tasks do not provide publicly accessible labels for their test sets, so we default to the validation sets for all sub-tasks.
### Paper
Title: `GLUE: A Multi-Task Benchmark and Analysis Platform for Natural Language Understanding`
Abstract: https://openreview.net/pdf?id=rJ4km2R5t7
The... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/glue/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/glue/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# GPQA
### Paper
Title: GPQA: A Graduate-Level Google-Proof Q&A Benchmark
Abstract: https://arxiv.org/abs/2311.12022
We present GPQA, a challenging dataset of 448 multiple-choice questions written by domain experts in biology, physics, and chemistry. We ensure that the questions are high-quality and extremely diffi... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/gpqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gpqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# GSM8k
## Paper
Training Verifiers to Solve Math Word Problems
https://arxiv.org/abs/2110.14168
State-of-the-art language models can match human performance on many tasks, but
they still struggle to robustly perform multi-step mathematical reasoning. To
diagnose the failures of current models and support research, w... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gsm8k/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# gsm_plus
### Paper
Title: `GSM-PLUS: A Comprehensive Benchmark for Evaluating the Robustness of LLMs as Mathematical Problem Solvers`
Abstract: `Large language models (LLMs) have achieved impressive performance across various mathematical reasoning benchmarks. However, there are increasing debates regarding whethe... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/gsm_plus/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# HAE-RAE BENCH
### Paper
Title: `HAE-RAE Bench: Evaluation of Korean Knowledge in Language Models`
Abstract: `Large Language Models (LLMs) trained on massive corpora demonstrate impressive capabilities in a wide range of tasks. While there are ongoing efforts to adapt these models to languages beyond English, the a... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/haerae/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/haerae/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# HEAD-QA
### Paper
HEAD-QA: A Healthcare Dataset for Complex Reasoning
https://arxiv.org/pdf/1906.04701.pdf
HEAD-QA is a multi-choice HEAlthcare Dataset. The questions come from exams granting access to a specialized position in the
Spanish healthcare system, and are challenging even for highly specialized humans. They are ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/headqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/headqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# HellaSwag
### Paper
Title: `HellaSwag: Can a Machine Really Finish Your Sentence?`
Abstract: https://arxiv.org/abs/1905.07830
Recent work by Zellers et al. (2018) introduced a new task of commonsense natural language inference: given an event description such as "A woman sits at a piano," a machine must select th... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hellaswag/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time sca... |
# ETHICS Dataset
### Paper
Aligning AI With Shared Human Values
https://arxiv.org/abs/2008.02275
The ETHICS dataset is a benchmark that spans concepts in justice, well-being,
duties, virtues, and commonsense morality. Models predict widespread moral
judgments about diverse text scenarios. This requires connecting phy... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_ethics/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_ethics/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple... |
# MATH
## Paper
Measuring Mathematical Problem Solving With the MATH Dataset
https://arxiv.org/abs/2103.03874
Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models, we introduce MATH, a new data... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_math/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/hendrycks_math/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple tes... |
# IFEval
### Paper
Title: Instruction-Following Evaluation for Large Language Models
Abstract: https://arxiv.org/abs/2311.07911
One core capability of Large Language Models (LLMs) is to follow natural language instructions. However, the evaluation of such abilities is not standardized: Human evaluations are expensiv... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/ifeval/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/ifeval/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# inverse_scaling
### Paper
Title: `Inverse Scaling: When Bigger Isn't Better`
Abstract: `Work on scaling laws has found that large language models (LMs) show predictable improvements to overall loss with increased scale (model size, training data, and compute). Here, we present evidence for the claim that LMs may s... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/inverse_scaling/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/inverse_scaling/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple t... |
# KMMLU
### Paper
Title: `KMMLU : Measuring Massive Multitask Language Understanding in Korean`
Abstract: `We propose KMMLU, a new Korean benchmark with 35,030 expert-level multiple-choice questions across 45 subjects ranging from humanities to STEM. Unlike previous Korean benchmarks that are translated from existi... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/kmmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kmmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |
# KoBEST
### Paper
Title: `KOBEST: Korean Balanced Evaluation of Significant Tasks`
Abstract: https://arxiv.org/abs/2204.04541
A well-formulated benchmark plays a critical role in spurring advancements in the natural language processing (NLP) field, as it allows objective and precise evaluation of diverse models. A... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/kobest/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kobest/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# KorMedMCQA
### Paper
Title: `KorMedMCQA: Multi-Choice Question Answering Benchmark for Korean Healthcare Professional Licensing Examinations`
Abstract: `We introduce KorMedMCQA, the first Korean multiple-choice question answering (MCQA) benchmark derived from Korean healthcare professional licensing examinations, ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/kormedmcqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/kormedmcqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# LAMBADA
### Paper
Title: `The LAMBADA dataset: Word prediction requiring a broad discourse context`
Abstract: https://arxiv.org/pdf/1606.06031.pdf
LAMBADA is a dataset to evaluate the capabilities of computational models for text
understanding by means of a word prediction task. LAMBADA is a collection of narrativ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# LAMBADA Cloze
### Paper
Title: `The LAMBADA dataset: Word prediction requiring a broad discourse context`
Abstract: https://arxiv.org/abs/1606.06031
Cloze-style LAMBADA dataset.
LAMBADA is a dataset to evaluate the capabilities of computational models for text
understanding by means of a word prediction task. LAM... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_cloze/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-... |
# LAMBADA
### Paper
The LAMBADA dataset: Word prediction requiring a broad discourse context
https://arxiv.org/pdf/1606.06031.pdf
LAMBADA is a dataset to evaluate the capabilities of computational models for text
understanding by means of a word prediction task. LAMBADA is a collection of narrative
passages sharing t... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1... |
# LAMBADA
### Paper
The LAMBADA dataset: Word prediction requiring a broad discourse context
https://arxiv.org/pdf/1606.06031.pdf
LAMBADA is a dataset to evaluate the capabilities of computational models for text
understanding by means of a word prediction task. LAMBADA is a collection of narrative
passages sharing t... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual_stablelm/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lambada_multilingual_stablelm/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
... |
# Leaderboard evaluations
Our goal with this group is to create a version of these evaluations that is unchanging through time and will power the Open LLM Leaderboard on HuggingFace.
As we want to evaluate models across capabilities, the list currently contains:
- BBH (3-shots, multichoice)
- GPQA (0-shot, multichoice)
- mmlu-pro... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/leaderboard/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/leaderboard/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time... |
# LingOly
### Paper
Title: `LINGOLY: A Benchmark of Olympiad-Level Linguistic Reasoning Puzzles in Low-Resource and Extinct Languages`
Abstract: `https://arxiv.org/abs/2406.06196`
`In this paper, we present the LingOly benchmark, a novel benchmark for advanced reasoning abilities in large language models. Using ch... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/lingoly/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/lingoly/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# LogiQA
### Paper
Title: `LogiQA: A Challenge Dataset for Machine Reading Comprehension with Logical Reasoning`
Abstract: https://arxiv.org/abs/2007.08124
LogiQA is a dataset for testing human logical reasoning. It consists of 8,678 QA
instances, covering multiple types of deductive reasoning. Results show that st... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/logiqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/logiqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# LogiQA 2.0
### Paper
LogiQA 2.0 — An Improved Dataset for Logical Reasoning in Natural Language Understanding https://ieeexplore.ieee.org/document/10174688
The dataset is an amendment and re-annotation of LogiQA in 2020, a large-scale logical reasoning reading comprehension dataset adapted from the Chinese Civil ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/logiqa2/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# MathQA
### Paper
MathQA: Towards Interpretable Math Word Problem Solving with Operation-Based Formalisms
https://arxiv.org/pdf/1905.13319.pdf
MathQA is a large-scale dataset of 37k English multiple-choice math word problems
covering multiple math domain categories by modeling operation programs corresponding
to wo... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mathqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mathqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# MC Taco
### Paper
Title: `"Going on a vacation" takes longer than "Going for a walk": A Study of Temporal Commonsense Understanding`
Abstract: https://arxiv.org/abs/1909.03065
MC-TACO is a dataset of 13k question-answer pairs that require temporal commonsense
comprehension. The dataset contains five temporal prope... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mc_taco/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# MedConceptsQA
### Paper
Title: `MedConceptsQA: Open Source Medical Concepts QA Benchmark`
Abstract: https://arxiv.org/abs/2405.07348
MedConceptsQA is a dedicated open source benchmark for medical concepts question answering. The benchmark comprises questions of various medical concepts across different vocabul... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/med_concepts_qa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/med_concepts_qa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple t... |
# MELA
### Paper
Title: [MELA: Multilingual Evaluation of Linguistic Acceptability](https://arxiv.org/abs/2311.09033)
**Abstract**: In this work, we present the largest benchmark to date on linguistic acceptability: Multilingual Evaluation of Linguistic Acceptability -- MELA, with 46K samples covering 10 langua... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mela/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mela/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# MGSM
### Paper
Title: `Language Models are Multilingual Chain-of-Thought Reasoners`
Abstract: https://arxiv.org/abs/2210.03057
Multilingual Grade School Math Benchmark (MGSM) is a benchmark of grade-school math problems, proposed in the paper [Language models are multilingual chain-of-thought reasoners](http://ar... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mgsm/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mgsm/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# MATH
ℹ️ This is the 4-shot variant!
## Paper
Measuring Mathematical Problem Solving With the MATH Dataset
https://arxiv.org/abs/2103.03874
Many intellectual endeavors require mathematical problem solving, but this skill remains beyond the capabilities of computers. To measure this ability in machine learning models,... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/minerva_math/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-ti... |
# MMLU
### Paper
Title: `Measuring Massive Multitask Language Understanding`
Abstract: `https://arxiv.org/abs/2009.03300`
`The test covers 57 tasks including elementary mathematics, US history, computer science, law, and more.`
Homepage: `https://github.com/hendrycks/test`
Note: The `Flan` variants are deriv... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# mmlu_pro
### Paper
Title: `MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark`
Abstract: `In the age of large-scale language models, benchmarks like the Massive Multitask Language Understanding (MMLU) have been pivotal in pushing the boundaries of what AI can achieve in language co... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlu_pro/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlu_pro/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# MMLU-SR
## Paper
Title: [Reasoning or Simply Next Token Prediction? A Benchmark for Stress-Testing Large Language Models](https://arxiv.org/abs/2406.15468v1)
We propose MMLU-SR, a novel dataset designed to measure the true comprehension abilities of Large Language Models (LLMs) by challenging their performance in ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmlusr/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmlusr/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# MMMU Benchmark
### Paper
Title: `MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI`
Abstract: `MMMU is a new benchmark designed to evaluate multimodal models on massive multi-discipline tasks demanding college-level subject knowledge and deliberate reasoning.`
`The be... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mmmu/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mmmu/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# MuTual
### Paper
Title: `MuTual: A Dataset for Multi-Turn Dialogue Reasoning`
Abstract: https://www.aclweb.org/anthology/2020.acl-main.130/
MuTual is a retrieval-based dataset for multi-turn dialogue reasoning, which is
modified from Chinese high school English listening comprehension test data.
Homepage: https:... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/mutual/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/mutual/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# NoticIA
### Paper
Title: `NoticIA: A Clickbait Article Summarization Dataset in Spanish`
Abstract: https://arxiv.org/abs/2404.07611
We present NoticIA, a dataset consisting of 850 Spanish news articles featuring prominent clickbait headlines, each paired with high-quality, single-sentence generative summarization... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/noticia/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/noticia/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
### Paper
Question Answering dataset based on aggregated user queries from Google Search.
Homepage: https://research.google/pubs/natural-questions-a-benchmark-for-question-answering-research/
Repository: [google-research-datasets/natural-questions@master/nq_open](https://github.com/google-research-datasets/natural-que... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/nq_open/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/nq_open/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# OpenBookQA
### Paper
Title: `Can a Suit of Armor Conduct Electricity? A New Dataset for Open Book Question Answering`
Abstract: https://arxiv.org/abs/1809.02789
OpenBookQA is a question-answering dataset modeled after open book exams for
assessing human understanding of a subject. It consists of 5,957 multiple-ch... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/openbookqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/openbookqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time s... |
# Paloma
### Paper
Title: Paloma: A Benchmark for Evaluating Language Model Fit
Abstract: https://arxiv.org/abs/2312.10523v1
Paloma is a comprehensive benchmark designed to evaluate open language models across a wide range of domains, ranging from niche artist communities to mental health forums on Reddit. It assess... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/paloma/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/paloma/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# PAWS-X
### Paper
Title: `PAWS-X: A Cross-lingual Adversarial Dataset for Paraphrase Identification`
Abstract: https://arxiv.org/abs/1908.11828
The dataset consists of 23,659 human translated PAWS evaluation pairs and
296,406 machine translated training pairs in 6 typologically distinct languages.
Examples are ada... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/paws-x/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/paws-x/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",... |
# The Pile
### Paper
Title: The Pile: An 800GB Dataset of Diverse Text for Language Modeling
Abstract: https://arxiv.org/abs/2101.00027
The Pile is an 825 GiB diverse, open source language modelling data set that consists
of 22 smaller, high-quality datasets combined together. To score well on Pile
BPB (bits per byte... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/pile/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pile/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# Pile-10k
### Paper
Title: `NeelNanda/pile-10k`
Abstract: The first 10K elements of [The Pile](https://pile.eleuther.ai/), useful for debugging models trained on it. See the [HuggingFace page for the full Pile](https://huggingface.co/datasets/the_pile) for more info. Inspired by [stas' great resource](https://huggi... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/pile_10k/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/pile_10k/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scali... |
# PIQA
### Paper
Title: `PIQA: Reasoning about Physical Commonsense in Natural Language`
Abstract: https://arxiv.org/abs/1911.11641
Physical Interaction: Question Answering (PIQA) is a physical commonsense
reasoning and a corresponding benchmark dataset. PIQA was designed to investigate
the physical knowledge of ex... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/piqa/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/piqa/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
"... |
# PolEmo 2.0
### Paper
Title: `Multi-Level Sentiment Analysis of PolEmo 2.0: Extended Corpus of Multi-Domain Consumer Reviews`
Abstract: https://aclanthology.org/K19-1092/
PolEmo 2.0 is a dataset of online consumer reviews in Polish from four domains: medicine, hotels, products, and university. It is human-anno... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/polemo2/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/polemo2/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling... |
# PortugueseBench
### Paper
PortugueseBench is a benchmark for evaluating language models in Portuguese tasks. That is, it evaluates the ability of a language model to understand and generate Portuguese text. PortugueseBench offers a combination of pre-existing, open datasets. All the details of PortugueseBench will ... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/portuguese_bench/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/portuguese_bench/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple... |
# PROST
### Paper
Title: `PROST: Physical Reasoning about Objects Through Space and Time`
Abstract: https://arxiv.org/abs/2106.03634
PROST, Physical Reasoning about Objects Through Space and Time, is a dataset
consisting of 18,736 multiple-choice questions made from 14 manually curated
templates, covering 10 physic... | {
"source": "simplescaling/s1",
"title": "eval/lm-evaluation-harness/lm_eval/tasks/prost/README.md",
"url": "https://github.com/simplescaling/s1/blob/main/eval/lm-evaluation-harness/lm_eval/tasks/prost/README.md",
"date": "2025-02-01T02:38:16",
"stars": 5696,
"description": "s1: Simple test-time scaling",
... |