SeaWolf-AI committed
Commit 76e5a0f · verified · 1 Parent(s): 1180951

Update README.md

Files changed (1):
  1. README.md +25 -0
README.md CHANGED
@@ -65,6 +65,11 @@ dataset_info:
   <a href="https://huggingface.co/spaces/FINAL-Bench/Leaderboard"><img src="https://img.shields.io/badge/🧬_FINAL_Bench-Leaderboard-teal?style=flat-square" alt="FINAL Leaderboard"></a>
   </p>
 
+ ![ALL Bench Leaderboard](./1.png)
+
+ ![ALL Bench Leaderboard](./2.png)
+
+
  ## Dataset Summary
 
  ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **91 AI models** across 6 modalities. Every numerical score is tagged with a confidence level (`cross-verified`, `single-source`, or `self-reported`) and its original source. The dataset is designed for researchers, developers, and decision-makers who need a trustworthy, unified view of the AI model landscape.
@@ -79,6 +84,13 @@ ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **91 AI
  | **Video Gen** | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
  | **Music Gen** | 8 | 6 fields | Quality, vocals, instrumental, lyrics, duration |
 
+ ![ALL Bench Leaderboard](./3.png)
+
+ ![ALL Bench Leaderboard](./4.png)
+
+ ![ALL Bench Leaderboard](./5.png)
+
+
  ## Live Leaderboard
 
  👉 **[https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard](https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard)**
@@ -128,6 +140,12 @@ all_bench_leaderboard_v2.1.json
  | `elo` | int \| null | Arena Elo rating |
  | `license` | string | `Prop`, `Apache2`, `MIT`, `Open`, etc. |
 
+ ![ALL Bench Leaderboard](./6.png)
+
+ ![ALL Bench Leaderboard](./7.png)
+
+ ![ALL Bench Leaderboard](./8.png)
+
  ## Composite Score
 
  ```
@@ -178,6 +196,13 @@ print(data["confidence"]["Gemini 3.1 Pro"]["gpqa"])
  # → {"level": "single-source", "source": "Google DeepMind model card"}
  ```
 
+ ![ALL Bench Leaderboard](./9.png)
+
+ ![ALL Bench Leaderboard](./10.png)
+
+ ![ALL Bench Leaderboard](./11.png)
+
+
  ## FINAL Bench — Metacognitive Benchmark
 
  FINAL Bench measures AI self-correction ability. Error Recovery (ER) explains 94.8% of metacognitive performance variance. 9 frontier models evaluated.
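The confidence tags described in the Dataset Summary can be queried programmatically. Below is a minimal sketch of filtering a model's benchmarks by confidence level, assuming the nested `confidence → model → benchmark` layout implied by the README's own usage snippet; the `sample` dict and the `scores_at_level` helper are hypothetical illustrations, not part of the dataset's API.

```python
# Hypothetical excerpt mirroring the structure implied by the README snippet:
# data["confidence"][model][benchmark] -> {"level": ..., "source": ...}
sample = {
    "confidence": {
        "Gemini 3.1 Pro": {
            "gpqa": {"level": "single-source",
                     "source": "Google DeepMind model card"},
            "mmlu": {"level": "cross-verified",
                     "source": "two independent leaderboards"},
        }
    }
}

def scores_at_level(data, model, level):
    """Return the benchmarks for `model` whose confidence tag equals `level`."""
    tags = data.get("confidence", {}).get(model, {})
    return sorted(bench for bench, meta in tags.items() if meta["level"] == level)

print(scores_at_level(sample, "Gemini 3.1 Pro", "cross-verified"))
# → ['mmlu']
```

In practice one would replace `sample` with `json.load(open("all_bench_leaderboard_v2.1.json"))`, the file named in the README, and filter for `cross-verified` entries when only independently confirmed scores should drive a comparison.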