<a href="https://huggingface.co/spaces/FINAL-Bench/Leaderboard"><img src="https://img.shields.io/badge/🧬_FINAL_Bench-Leaderboard-teal?style=flat-square" alt="FINAL Leaderboard"></a>
</p>

## Dataset Summary

ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **91 AI models** across 6 modalities. Every numerical score is tagged with a confidence level (`cross-verified`, `single-source`, or `self-reported`) and its original source. The dataset is designed for researchers, developers, and decision-makers who need a trustworthy, unified view of the AI model landscape.
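The confidence tags are what make the dataset filterable by trust level. A minimal sketch of such a filter, assuming the nested `data["confidence"][model][benchmark] → {"level", "source"}` layout shown in the quickstart excerpt later in this README; the sample records below are hypothetical, not real leaderboard entries:

```python
# Hypothetical stand-in for the loaded leaderboard JSON; the nested layout
# follows the quickstart excerpt (data["confidence"][model][bench]).
sample = {
    "confidence": {
        "Model A": {
            "gpqa": {"level": "cross-verified", "source": "vendor card + arena"},
            "mmlu": {"level": "self-reported", "source": "vendor card"},
        },
        "Model B": {
            "gpqa": {"level": "single-source", "source": "vendor card"},
        },
    }
}

def entries_at_level(data, level):
    """Return sorted (model, benchmark) pairs whose score carries the given tag."""
    return sorted(
        (model, bench)
        for model, benches in data["confidence"].items()
        for bench, meta in benches.items()
        if meta["level"] == level
    )

print(entries_at_level(sample, "cross-verified"))
# → [('Model A', 'gpqa')]
```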
| **Video Gen** | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
| **Music Gen** | 8 | 6 fields | Quality, vocals, instrumental, lyrics, duration |

## Live Leaderboard

👉 **[https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard](https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard)**
| `elo` | int \| null | Arena Elo rating |
| `license` | string | `Prop`, `Apache2`, `MIT`, `Open`, etc. |
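Consumers of the records need to handle the nullable `elo` field and the license shorthand. A hedged sketch: only the field names (`elo`, `license`) and the shorthand values come from the schema table above; the record layout and the `LICENSE_NAMES` expansion map are assumptions for illustration.

```python
# Hypothetical expansion of the license shorthand used in the schema table.
LICENSE_NAMES = {
    "Prop": "proprietary",
    "Apache2": "Apache-2.0",
    "MIT": "MIT",
    "Open": "open (unspecified)",
}

def describe(record):
    """Render one model record, tolerating elo=null (int | null in the schema)."""
    elo = record.get("elo")
    elo_text = str(elo) if elo is not None else "unrated"
    lic = LICENSE_NAMES.get(record.get("license"), record.get("license"))
    return f"elo={elo_text}, license={lic}"

print(describe({"elo": 1287, "license": "Prop"}))    # → elo=1287, license=proprietary
print(describe({"elo": None, "license": "Apache2"})) # → elo=unrated, license=Apache-2.0
```

Unknown shorthands fall through unchanged rather than raising, which keeps the sketch robust to license values not listed in the table.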

## Composite Score

```
print(data["confidence"]["Gemini 3.1 Pro"]["gpqa"])
# → {"level": "single-source", "source": "Google DeepMind model card"}
```
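The same nested layout supports dataset-wide summaries, not just single lookups. A minimal sketch tallying scores per confidence tag, using a small hypothetical stand-in for the loaded JSON (the file-loading code is not shown in this excerpt):

```python
from collections import Counter

# Hypothetical stand-in for the loaded leaderboard JSON.
data = {
    "confidence": {
        "Model A": {"gpqa": {"level": "cross-verified", "source": "arena"},
                    "mmlu": {"level": "self-reported", "source": "card"}},
        "Model B": {"gpqa": {"level": "single-source", "source": "card"}},
    }
}

# Count how many scores carry each confidence tag across all models.
tally = Counter(
    meta["level"]
    for benches in data["confidence"].values()
    for meta in benches.values()
)
print(dict(tally))
# → {'cross-verified': 1, 'self-reported': 1, 'single-source': 1}
```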

## FINAL Bench — Metacognitive Benchmark

FINAL Bench measures AI self-correction ability. Across the 9 frontier models evaluated, Error Recovery (ER) explains 94.8% of the variance in metacognitive performance.