---
annotations_creators:
- expert-generated
language:
- en
license: mit
pretty_name: "ALL Bench Leaderboard 2026"
size_categories:
- n<1K
source_datasets:
- original
tags:
- benchmark
- leaderboard
- llm
- vlm
- ai-evaluation
- gpt-5
- claude
- gemini
- final-bench
- metacognition
- multimodal
- ai-agent
- image-generation
- video-generation
- music-generation
- union-eval
task_categories:
- text-generation
- visual-question-answering
- text-to-image
- text-to-video
- text-to-audio
configs:
- config_name: llm
  data_files:
  - split: train
    path: data/llm.jsonl
- config_name: vlm_flagship
  data_files:
  - split: train
    path: data/vlm_flagship.jsonl
- config_name: agent
  data_files:
  - split: train
    path: data/agent.jsonl
- config_name: image
  data_files:
  - split: train
    path: data/image.jsonl
- config_name: video
  data_files:
  - split: train
    path: data/video.jsonl
- config_name: music
  data_files:
  - split: train
    path: data/music.jsonl
models:
# LLM - Open Source
- Qwen/Qwen3.5-122B-A10B
- Qwen/Qwen3.5-27B
- Qwen/Qwen3.5-35B-A3B
- Qwen/Qwen3.5-9B
- Qwen/Qwen3.5-4B
- Qwen/Qwen3-Next-80B-A3B-Thinking
- deepseek-ai/DeepSeek-V3
- deepseek-ai/DeepSeek-R1
- zai-org/GLM-5
- meta-llama/Llama-4-Scout-17B-16E-Instruct
- meta-llama/Llama-4-Maverick-17B-128E-Instruct
- microsoft/phi-4
- upstage/Solar-Open-100B
- K-intelligence/Midm-2.0-Base-Instruct
- Nanbeige/Nanbeige4.1-3B
- MiniMaxAI/MiniMax-M2.5
- stepfun-ai/Step-3.5-Flash
# VLM - Open Source
- OpenGVLab/InternVL3-78B
- Qwen/Qwen2.5-VL-72B-Instruct
- Qwen/Qwen3-VL-30B-A3B
# Image Generation
- black-forest-labs/FLUX.1-dev
- stabilityai/stable-diffusion-3.5-large
# Video Generation
- Lightricks/LTX-Video
# Music Generation
- facebook/musicgen-large
- facebook/jasco-chords-drums-melody-1B
---
# 🏆 ALL Bench Leaderboard 2026
**The only AI benchmark dataset covering LLM · VLM · Agent · Image · Video · Music in a single unified dataset.**
<p align="center">
<a href="https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard"><img src="https://img.shields.io/badge/🏆_Live_Leaderboard-ALL_Bench-6366f1?style=for-the-badge" alt="Live Leaderboard"></a>
</p>
<p align="center">
<a href="https://github.com/final-bench/ALL-Bench-Leaderboard"><img src="https://img.shields.io/badge/GitHub-Repo-black?style=flat-square&logo=github" alt="GitHub"></a>
<a href="https://huggingface.co/datasets/FINAL-Bench/Metacognitive"><img src="https://img.shields.io/badge/🧬_FINAL_Bench-Dataset-blueviolet?style=flat-square" alt="FINAL Bench"></a>
<a href="https://huggingface.co/spaces/FINAL-Bench/Leaderboard"><img src="https://img.shields.io/badge/🧬_FINAL_Bench-Leaderboard-teal?style=flat-square" alt="FINAL Leaderboard"></a>
</p>


## Dataset Summary
ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **90+ AI models** across 6 modalities. Every numerical score is tagged with a confidence level (`cross-verified`, `single-source`, or `self-reported`) and its original source. The dataset is designed for researchers, developers, and decision-makers who need a trustworthy, unified view of the AI model landscape.
| Category | Models | Benchmarks | Description |
|----------|--------|------------|-------------|
| **LLM** | 41 | 32 fields | MMLU-Pro, GPQA, AIME, HLE, ARC-AGI-2, Metacog, SWE-Pro, IFEval, LCB, **Union Eval**, etc. |
| **VLM Flagship** | 11 | 10 fields | MMMU, MMMU-Pro, MathVista, AI2D, OCRBench, MMStar, HallusionBench, etc. |
| **Agent** | 10 | 8 fields | OSWorld, τ²-bench, BrowseComp, Terminal-Bench 2.0, GDPval-AA, SWE-Pro |
| **Image Gen** | 10 | 7 fields | Photo realism, text rendering, instruction following, style, aesthetics |
| **Video Gen** | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
| **Music Gen** | 8 | 6 fields | Quality, vocals, instrumental, lyrics, duration |



## What's New — v2.2.1
### 🏅 Union Eval ★NEW
**ALL Bench's proprietary integrated benchmark.** Fuses the discriminative core of 10 existing benchmarks (GPQA, AIME, HLE, MMLU-Pro, IFEval, LiveCodeBench, BFCL, ARC-AGI, SWE, FINAL Bench) into a single 1000-question pool with a season-based rotation system.
**Key features:**
- **100% JSON auto-graded** — every question requires mandatory JSON output with verifiable fields. Zero keyword matching.
- **Fuzzy JSON matching** — tolerates key-name variants and fraction formats, and falls back to extracting JSON from free text when strict parsing fails (see the sketch after this list).
- **Season rotation** — 70% new questions each season, 30% anchor questions for cross-season IRT calibration.
- **8 rounds of empirical testing** — v2 (82.4%) → v3 (82.0%) → Final (79.5%) → S2 (81.8%) → S3 (75.0%) → Fuzzy (69.9/69.3%).
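The fuzzy grader itself is not published. A minimal sketch of the matching behavior described above, with every function name hypothetical, might look like:
```python
import json
import re

def parse_json_answer(raw: str):
    """Strict JSON first; on failure, fall back to the first {...} block
    embedded in free text (the 'text fallback' described above)."""
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        match = re.search(r"\{.*\}", raw, re.DOTALL)
        if match:
            try:
                return json.loads(match.group(0))
            except json.JSONDecodeError:
                pass
    return None

def normalize_key(key: str) -> str:
    """Collapse key-name variants: final_answer / finalAnswer / 'Final Answer'."""
    return re.sub(r"[_\-\s]", "", key).lower()

def fuzzy_get(answer: dict, key: str):
    """Field lookup that tolerates key-name variants."""
    target = normalize_key(key)
    for k, v in answer.items():
        if normalize_key(k) == target:
            return v
    return None
```
Fraction-format tolerance (e.g. treating `1/2` and `0.5` as the same value) would sit in a value comparator layered on top of `fuzzy_get`.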
**Key discovery:** *"The bottleneck in benchmarking is not question difficulty — it's grading methodology."*
**Empirically confirmed LLM weakness map:**
- 🔴 Poetry + code cross-constraints: 18-28%
- 🔴 Complex JSON structure (10+ constraints): 0%
- 🔴 Pure series computation (Σk²/3ᵏ): 0%
- 🟢 Metacognitive reasoning (Bayes, proof errors): 95%
- 🟢 Revised science detection: 86%
**Current scores (S3, 20Q sample, Fuzzy JSON):**
| Model | Union Eval |
|-------|-----------|
| Claude Sonnet 4.6 | **69.9** |
| Claude Opus 4.6 | **69.3** |
### Other v2.2 changes
- Fair Coverage Correction: composite scoring ^0.5 → ^0.7
- +7 FINAL Bench scores (15 total)
- Columns sorted by fill rate
- Model Card popup (click model name) · FINAL Bench detail popup (click Metacog score)
- 🔥 Heatmap, 💰 Price vs Performance scatter tools
## Live Leaderboard
👉 **[https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard](https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard)**
Interactive features: composite ranking, dark mode, advanced search (`GPQA > 90 open`, `price < 1`), Model Finder, Head-to-Head comparison, Trust Map heatmap, Bar Race animation, Model Card popup, FINAL Bench detail popup, and downloadable Intelligence Report (PDF/DOCX).
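Those search queries run inside the Space UI; a rough pandas equivalent against this dataset, using the column names from the schema below, would be:
```python
from datasets import load_dataset

df = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")["train"].to_pandas()

# Equivalent of the leaderboard query `GPQA > 90 open`
print(df.query("gpqa > 90 and type == 'open'")[["name", "gpqa"]])

# Equivalent of `price < 1`, assuming it filters on input price
print(df.query("priceIn < 1")[["name", "priceIn", "priceOut"]])
```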
## Data Structure
```
data/
├── llm.jsonl # 41 LLMs × 32 fields (incl. unionEval ★NEW)
├── vlm_flagship.jsonl # 11 flagship VLMs × 10 benchmarks
├── agent.jsonl # 10 agent models × 8 benchmarks
├── image.jsonl # 10 image gen models × S/A/B/C ratings
├── video.jsonl # 10 video gen models × S/A/B/C ratings
└── music.jsonl # 8 music gen models × S/A/B/C ratings
```
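Each file maps to a config declared in the card metadata, so every table can be loaded by name:
```python
from datasets import load_dataset

CONFIGS = ["llm", "vlm_flagship", "agent", "image", "video", "music"]
for name in CONFIGS:
    ds = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", name)
    print(f"{name:15s} {ds['train'].num_rows} rows")
```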
## LLM Field Schema
| Field | Type | Description |
|-------|------|-------------|
| `name` | string | Model name |
| `provider` | string | Organization |
| `type` | string | `open` or `closed` |
| `group` | string | `flagship`, `open`, `korean`, etc. |
| `released` | string | Release date (YYYY.MM) |
| `mmluPro` | float \| null | MMLU-Pro score (%) |
| `gpqa` | float \| null | GPQA Diamond (%) |
| `aime` | float \| null | AIME 2025 (%) |
| `hle` | float \| null | Humanity's Last Exam (%) |
| `arcAgi2` | float \| null | ARC-AGI-2 (%) |
| `metacog` | float \| null | FINAL Bench Metacognitive score |
| `swePro` | float \| null | SWE-bench Pro (%) |
| `bfcl` | float \| null | Berkeley Function Calling (%) |
| `ifeval` | float \| null | IFEval instruction following (%) |
| `lcb` | float \| null | LiveCodeBench (%) |
| `sweV` | float \| null | SWE-bench Verified (%) — deprecated |
| `mmmlu` | float \| null | Multilingual MMLU (%) |
| `termBench` | float \| null | Terminal-Bench 2.0 (%) |
| `sciCode` | float \| null | SciCode (%) |
| `unionEval` | float \| null | **★NEW** Union Eval S3 — ALL Bench integrated benchmark (100% JSON auto-graded) |
| `priceIn` / `priceOut` | float \| null | USD per 1M tokens |
| `elo` | int \| null | Arena Elo rating |
| `license` | string | `Prop`, `Apache2`, `MIT`, `Open`, etc. |
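For illustration, a record in `llm.jsonl` shaped by this schema might look like the following; only the field names come from the table above, and the model and all values are invented:
```python
import json

# Hypothetical llm.jsonl record -- every value is made up for illustration.
line = '''{
  "name": "Example-Model-70B", "provider": "ExampleAI",
  "type": "open", "group": "open", "released": "2026.01",
  "mmluPro": 81.2, "gpqa": 72.5, "aime": 88.0, "hle": 21.4,
  "metacog": 64.0, "unionEval": null,
  "priceIn": 0.5, "priceOut": 1.5, "elo": 1305, "license": "Apache2"
}'''
record = json.loads(line)
print(record["name"], record["gpqa"])
```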



## Composite Score
```
Score = Avg(confirmed benchmarks) × (N/10)^0.7
```
10 core benchmarks across the **5-Axis Intelligence Framework**: Knowledge · Expert Reasoning · Abstract Reasoning · Metacognition · Execution.
**v2.2 change:** Exponent adjusted from 0.5 to 0.7 for fairer coverage weighting. Models with 7/10 benchmarks now receive ×0.78 (was ×0.84), while 4/10 receives ×0.53 (was ×0.63).
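The coverage multipliers are easy to reproduce. A minimal sketch of the formula, assuming `scores` holds the 10 core-benchmark values with `None` for missing entries:
```python
def composite_score(scores: list[float | None]) -> float:
    """Score = Avg(confirmed benchmarks) * (N/10)^0.7  (v2.2 formula)."""
    confirmed = [s for s in scores if s is not None]
    n = len(confirmed)
    if n == 0:
        return 0.0
    return sum(confirmed) / n * (n / 10) ** 0.7

# Coverage multipliers quoted above:
print(round((7 / 10) ** 0.7, 2))  # 0.78
print(round((4 / 10) ** 0.7, 2))  # 0.53
```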
## Confidence System
Each benchmark score in the `confidence` object is tagged:
| Level | Badge | Meaning |
|-------|-------|---------|
| `cross-verified` | ✓✓ | Confirmed by 2+ independent sources |
| `single-source` | ✓ | One official or third-party source |
| `self-reported` | ~ | Provider's own claim, unverified |
Example:
```json
"Claude Opus 4.6": {
"gpqa": { "level": "cross-verified", "source": "Anthropic + Vellum + DataCamp" },
"arcAgi2": { "level": "cross-verified", "source": "Vellum + llm-stats + NxCode + DataCamp" },
"metacog": { "level": "single-source", "source": "FINAL Bench dataset" },
"unionEval": { "level": "single-source", "source": "Union Eval S3 — ALL Bench official" }
}
```
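The exact JSONL layout of the `confidence` object is not specified here, but assuming each record carries a mapping shaped like the example above, cross-verified scores can be filtered like so:
```python
from datasets import load_dataset

ds = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")["train"]

# Keep only GPQA scores confirmed by 2+ independent sources.
for row in ds:
    conf = (row.get("confidence") or {}).get("gpqa") or {}
    if row.get("gpqa") is not None and conf.get("level") == "cross-verified":
        print(f"{row['name']:25s} GPQA={row['gpqa']}  ({conf.get('source')})")
```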
## Usage
```python
from datasets import load_dataset
# Load LLM data
ds = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")
df = ds["train"].to_pandas()
# Top 5 LLMs by GPQA
ranked = df.dropna(subset=["gpqa"]).sort_values("gpqa", ascending=False)
for _, m in ranked.head(5).iterrows():
print(f"{m['name']:25s} GPQA={m['gpqa']}")
# Union Eval scores
union = df.dropna(subset=["unionEval"]).sort_values("unionEval", ascending=False)
for _, m in union.iterrows():
print(f"{m['name']:25s} Union Eval={m['unionEval']}")
```
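The Space's 💰 Price vs Performance view can be approximated offline. This sketch treats GPQA points per output-token dollar as a rough value metric; the Space's exact formula is not documented here:
```python
from datasets import load_dataset

df = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")["train"].to_pandas()

# Rough price-vs-performance ranking: GPQA points per $1 of output tokens.
perf = df.dropna(subset=["gpqa", "priceOut"]).copy()
perf["gpqa_per_dollar"] = perf["gpqa"] / perf["priceOut"]
cols = ["name", "gpqa", "priceOut", "gpqa_per_dollar"]
print(perf.sort_values("gpqa_per_dollar", ascending=False)[cols].head(5))
```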



## Union Eval — Integrated AI Assessment
Union Eval is ALL Bench's proprietary benchmark designed to address three fundamental problems with existing AI evaluations:
1. **Contamination** — Public benchmarks leak into training data. Union Eval rotates 70% of questions each season.
2. **Single-axis measurement** — AIME tests only math, IFEval only instruction-following. Union Eval integrates arithmetic, poetry constraints, metacognition, coding, calibration, and myth detection.
3. **Score inflation via keyword matching** — Traditional rubric grading gives 100% to "well-written" answers even if content is wrong. Union Eval enforces mandatory JSON output with zero keyword matching.
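The actual items are withheld (see the note below the table), but the grading contract can be illustrated with an invented example. For the series Σk²/3ᵏ mentioned above, which sums to 3/2, a passing answer must carry the result in a machine-checkable field rather than in prose:
```python
import json

# Invented example of the mandatory-JSON contract -- not a real Union Eval item.
# Sum over k >= 1 of k^2 / 3^k equals 3/2; the grader compares field values only.
expected = {"final_answer": "3/2"}
model_output = '{"final_answer": "3/2", "working": "x(1+x)/(1-x)^3 at x=1/3"}'

answer = json.loads(model_output)
passed = all(answer.get(k) == v for k, v in expected.items())
print("pass" if passed else "fail")
```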
**Structure (S3 — 100 Questions from 1000 Pool):**
| Category | Questions | Role | Expected Score |
|----------|-----------|------|---------------|
| Pure Arithmetic | 10 | Confirmed Killer #1 | 0-57% |
| Poetry/Verse IFEval | 8 | Confirmed Killer #2 | 18-28% |
| Structured Data IFEval | 7 | JSON/CSV verification | 0-70% |
| FINAL Bench Metacognition | 20 | Core brand | 50-95% |
| Union Complex Synthesis | 15 | Extreme multi-domain | 40-73% |
| Revised Science / Myths | 5 | Calibration traps | 50-86% |
| Code I/O, GPQA, HLE | 19 | Expert + execution | 50-100% |
| BFCL Tool Use, Anchors | 16 | Cross-season calibration | varies |
Note: The 100-question dataset is **not publicly released** to prevent contamination. Only scores are published.
## FINAL Bench — Metacognitive Benchmark
FINAL Bench measures AI self-correction ability. Its Error Recovery (ER) metric explains 94.8% of the variance in metacognitive performance across the 15 frontier models evaluated.
- 🧬 [FINAL-Bench/Metacognitive Dataset](https://huggingface.co/datasets/FINAL-Bench/Metacognitive)
- 🏆 [FINAL-Bench/Leaderboard](https://huggingface.co/spaces/FINAL-Bench/Leaderboard)
## Changelog
| Version | Date | Changes |
|---------|------|---------|
| **v2.2.1** | 2026-03-10 | 🏅 **Union Eval ★NEW** — integrated benchmark column (`unionEval` field). Claude Opus 4.6: 69.3 · Sonnet 4.6: 69.9 |
| v2.2 | 2026-03-10 | Fair Coverage (^0.7), +7 Metacog scores, Model Cards, FINAL Bench popup, Heatmap, Price-Perf |
| v2.1 | 2026-03-08 | Confidence badges, Intelligence Report, source tracking |
| v2.0 | 2026-03-07 | All blanks filled, Korean AI data, 42 LLMs cross-verified |
| v1.9 | 2026-03-05 | +3 LLMs, dark mode, mobile responsive |
## Citation
```bibtex
@misc{allbench2026,
  title={ALL Bench Leaderboard 2026: Unified Multi-Modal AI Evaluation},
  author={ALL Bench Team},
  year={2026},
  url={https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard}
}
```
---
`#AIBenchmark` `#LLMLeaderboard` `#GPT5` `#Claude` `#Gemini` `#ALLBench` `#FINALBench` `#Metacognition` `#UnionEval` `#VLM` `#AIAgent` `#MultiModal` `#HuggingFace` `#ARC-AGI` `#AIEvaluation` `#VIDRAFT.net` |