Update README.md
README.md CHANGED
@@ -1,4 +1,5 @@
 ---
 annotations_creators:
 - expert-generated
 language:
@@ -25,6 +26,7 @@ tags:
 - image-generation
 - video-generation
 - music-generation
 task_categories:
 - text-generation
 - visual-question-answering
@@ -56,6 +58,37 @@ configs:
   data_files:
   - split: train
     path: data/music.jsonl
 ---
 
 # 🏆 ALL Bench Leaderboard 2026
@@ -79,13 +112,12 @@ configs:
 
 ## Dataset Summary
 
-ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **
 
 | Category | Models | Benchmarks | Description |
 |----------|--------|------------|-------------|
-| **LLM** |
 | **VLM Flagship** | 11 | 10 fields | MMMU, MMMU-Pro, MathVista, AI2D, OCRBench, MMStar, HallusionBench, etc. |
-| **VLM Lightweight** | 5 | 34 fields | Detailed Qwen-series edge model comparison across 3 sub-categories |
 | **Agent** | 10 | 8 fields | OSWorld, τ²-bench, BrowseComp, Terminal-Bench 2.0, GDPval-AA, SWE-Pro |
 | **Image Gen** | 10 | 7 fields | Photo realism, text rendering, instruction following, style, aesthetics |
 | **Video Gen** | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
@@ -98,26 +130,58 @@ ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **91 AI
 
 
 
 ## Live Leaderboard
 
 👉 **[https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard](https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard)**
 
-Interactive features: composite ranking, dark mode, advanced search (`GPQA > 90 open`, `price < 1`), Model Finder, Head-to-Head comparison, Trust Map heatmap, Bar Race animation, and downloadable Intelligence Report (PDF/DOCX).
 
 ## Data Structure
 
 ```
-
-├──
-├──
-├──
-
-
-
-├── image[10]       # 10 image gen models × S/A/B/C ratings
-├── video[10]       # 10 video gen models × S/A/B/C ratings
-├── music[8]        # 8 music gen models × S/A/B/C ratings
-└── confidence{42}  # per-model, per-benchmark source & trust level
 ```
 
 ## LLM Field Schema
@@ -143,6 +207,7 @@ all_bench_leaderboard_v2.1.json
 | `mmmlu` | float \| null | Multilingual MMLU (%) |
 | `termBench` | float \| null | Terminal-Bench 2.0 (%) |
 | `sciCode` | float \| null | SciCode (%) |
 | `priceIn` / `priceOut` | float \| null | USD per 1M tokens |
 | `elo` | int \| null | Arena Elo rating |
 | `license` | string | `Prop`, `Apache2`, `MIT`, `Open`, etc. |
@@ -156,11 +221,13 @@ all_bench_leaderboard_v2.1.json
 ## Composite Score
 
 ```
-Score = Avg(confirmed benchmarks) ×
 ```
 
 10 core benchmarks across the **5-Axis Intelligence Framework**: Knowledge · Expert Reasoning · Abstract Reasoning · Metacognition · Execution.
 
 ## Confidence System
 
 Each benchmark score in the `confidence` object is tagged:
@@ -176,31 +243,29 @@ Example:
 "Claude Opus 4.6": {
   "gpqa": { "level": "cross-verified", "source": "Anthropic + Vellum + DataCamp" },
   "arcAgi2": { "level": "cross-verified", "source": "Vellum + llm-stats + NxCode + DataCamp" },
-  "metacog": { "level": "single-source", "source": "FINAL Bench dataset" }
 }
 ```
 
 ## Usage
 
 ```python
-import
-from huggingface_hub import hf_hub_download
 
-
-
-  repo_type="dataset"
-)
-data = json.load(open(path))
 
 # Top 5 LLMs by GPQA
-ranked =
-for m in ranked
     print(f"{m['name']:25s} GPQA={m['gpqa']}")
 
-#
-
-
 ```
 
 
@@ -210,13 +275,48 @@ print(data["confidence"]["Gemini 3.1 Pro"]["gpqa"])
 
 
 
 ## FINAL Bench — Metacognitive Benchmark
 
-FINAL Bench measures AI self-correction ability. Error Recovery (ER) explains 94.8% of metacognitive performance variance.
 
 - 🧬 [FINAL-Bench/Metacognitive Dataset](https://huggingface.co/datasets/FINAL-Bench/Metacognitive)
 - 🏆 [FINAL-Bench/Leaderboard](https://huggingface.co/spaces/FINAL-Bench/Leaderboard)
 
 ## Citation
 
 ```bibtex
 
@@ -230,4 +330,4 @@
 
 ---
 
-`#AIBenchmark` `#LLMLeaderboard` `#GPT5` `#Claude` `#Gemini` `#ALLBench` `#FINALBench` `#Metacognition` `#VLM` `#AIAgent` `#MultiModal` `#HuggingFace` `#ARC-AGI` `#AIEvaluation` `#VIDRAFT.net`
 ---
+---
 annotations_creators:
 - expert-generated
 language:

 - image-generation
 - video-generation
 - music-generation
+- union-eval
 task_categories:
 - text-generation
 - visual-question-answering
 
   data_files:
   - split: train
     path: data/music.jsonl
+models:
+# LLM - Open Source
+- Qwen/Qwen3.5-122B-A10B
+- Qwen/Qwen3.5-27B
+- Qwen/Qwen3.5-35B-A3B
+- Qwen/Qwen3.5-9B
+- Qwen/Qwen3.5-4B
+- Qwen/Qwen3-Next-80B-A3B-Thinking
+- deepseek-ai/DeepSeek-V3
+- deepseek-ai/DeepSeek-R1
+- zai-org/GLM-5
+- meta-llama/Llama-4-Scout-17B-16E-Instruct
+- meta-llama/Llama-4-Maverick-17B-128E-Instruct
+- microsoft/phi-4
+- upstage/Solar-Open-100B
+- K-intelligence/Midm-2.0-Base-Instruct
+- Nanbeige/Nanbeige4.1-3B
+- MiniMaxAI/MiniMax-M2.5
+- stepfun-ai/Step-3.5-Flash
+# VLM - Open Source
+- OpenGVLab/InternVL3-78B
+- Qwen/Qwen2.5-VL-72B-Instruct
+- Qwen/Qwen3-VL-30B-A3B
+# Image Generation
+- black-forest-labs/FLUX.1-dev
+- stabilityai/stable-diffusion-3.5-large
+# Video Generation
+- Lightricks/LTX-Video
+# Music Generation
+- facebook/musicgen-large
+- facebook/jasco-chords-drums-melody-1B
 ---
 
 # 🏆 ALL Bench Leaderboard 2026
 
 ## Dataset Summary
 
+ALL Bench Leaderboard aggregates and cross-verifies benchmark scores for **90+ AI models** across 6 modalities. Every numerical score is tagged with a confidence level (`cross-verified`, `single-source`, or `self-reported`) and its original source. The dataset is designed for researchers, developers, and decision-makers who need a trustworthy, unified view of the AI model landscape.
 
 | Category | Models | Benchmarks | Description |
 |----------|--------|------------|-------------|
+| **LLM** | 41 | 32 fields | MMLU-Pro, GPQA, AIME, HLE, ARC-AGI-2, Metacog, SWE-Pro, IFEval, LCB, **Union Eval**, etc. |
 | **VLM Flagship** | 11 | 10 fields | MMMU, MMMU-Pro, MathVista, AI2D, OCRBench, MMStar, HallusionBench, etc. |
 | **Agent** | 10 | 8 fields | OSWorld, τ²-bench, BrowseComp, Terminal-Bench 2.0, GDPval-AA, SWE-Pro |
 | **Image Gen** | 10 | 7 fields | Photo realism, text rendering, instruction following, style, aesthetics |
 | **Video Gen** | 10 | 7 fields | Quality, motion, consistency, text rendering, duration, resolution |
 
 
 
+## What's New — v2.2.1
+
+### 🏅 Union Eval ★NEW
+
+**ALL Bench's proprietary integrated benchmark.** Fuses the discriminative core of 10 existing benchmarks (GPQA, AIME, HLE, MMLU-Pro, IFEval, LiveCodeBench, BFCL, ARC-AGI, SWE, FINAL Bench) into a single 1000-question pool with a season-based rotation system.
+
+**Key features:**
+- **100% JSON auto-graded** — every question requires mandatory JSON output with verifiable fields. Zero keyword matching.
+- **Fuzzy JSON matching** — tolerates key-name variants and fraction formats, with a plain-text fallback when JSON parsing fails.
+- **Season rotation** — 70% new questions each season, 30% anchor questions for cross-season IRT calibration.
+- **8 rounds of empirical testing** — v2 (82.4%) → v3 (82.0%) → Final (79.5%) → S2 (81.8%) → S3 (75.0%) → Fuzzy (69.9/69.3%).
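The card does not publish the grader itself. As a rough illustration of the fuzzy-matching rules listed above (key-name variants, fraction formats, plain-text fallback), here is a minimal sketch; all function names and normalization choices are our assumptions, not the official implementation:

```python
import json
import re
from fractions import Fraction

def _norm_key(key: str) -> str:
    # Treat "final_answer", "FinalAnswer" and "final-answer" as the same key.
    return re.sub(r"[\s_\-]+", "", key.lower())

def _norm_val(val):
    # Accept "3/4", "0.75" and 0.75 as the same numeric answer.
    if isinstance(val, str):
        try:
            return float(Fraction(val.strip()))
        except (ValueError, ZeroDivisionError):
            return val.strip().lower()
    if isinstance(val, (int, float)):
        return float(val)
    return val

def fuzzy_grade(response: str, expected: dict) -> bool:
    """True if the model response contains every expected field with the expected value."""
    try:
        parsed = json.loads(response)
    except json.JSONDecodeError:
        # Text fallback: salvage the first {...} block embedded in prose.
        match = re.search(r"\{.*\}", response, re.DOTALL)
        if not match:
            return False
        try:
            parsed = json.loads(match.group(0))
        except json.JSONDecodeError:
            return False
    got = {_norm_key(k): _norm_val(v) for k, v in parsed.items()}
    return all(got.get(_norm_key(k)) == _norm_val(v) for k, v in expected.items())

ok = fuzzy_grade('Answer: {"final_answer": "3/4"}', {"finalAnswer": 0.75})  # True
```

Note how the design matches the bullet points: grading compares normalized values, never keywords, and a response that fails strict `json.loads` still gets one salvage attempt.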
+
+**Key discovery:** *"The bottleneck in benchmarking is not question difficulty — it's grading methodology."*
+
+**Empirically confirmed LLM weakness map:**
+- 🔴 Poetry + code cross-constraints: 18-28%
+- 🔴 Complex JSON structure (10+ constraints): 0%
+- 🔴 Pure series computation (Σk²/3ᵏ): 0%
+- 🟢 Metacognitive reasoning (Bayes, proof errors): 95%
+- 🟢 Revised science detection: 86%
+
+**Current scores (S3, 20Q sample, Fuzzy JSON):**
+
+| Model | Union Eval |
+|-------|-----------|
+| Claude Sonnet 4.6 | **69.9** |
+| Claude Opus 4.6 | **69.3** |
+
+### Other v2.2 changes
+- Fair Coverage Correction: composite scoring ^0.5 → ^0.7
+- +7 FINAL Bench scores (15 total)
+- Columns sorted by fill rate
+- Model Card popup (click model name) · FINAL Bench detail popup (click Metacog score)
+- 🔥 Heatmap, 💰 Price vs Performance scatter tools
+
+
 ## Live Leaderboard
 
 👉 **[https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard](https://huggingface.co/spaces/FINAL-Bench/all-bench-leaderboard)**
 
+Interactive features: composite ranking, dark mode, advanced search (`GPQA > 90 open`, `price < 1`), Model Finder, Head-to-Head comparison, Trust Map heatmap, Bar Race animation, Model Card popup, FINAL Bench detail popup, and downloadable Intelligence Report (PDF/DOCX).
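The Space's actual search implementation is not published; a sketch of how queries like `GPQA > 90 open` and `price < 1` could be tokenized, with hypothetical semantics (trailing bare words treated as extra filter flags):

```python
import re

# Hypothetical parser for leaderboard search queries like "GPQA > 90 open".
QUERY_RE = re.compile(
    r"^\s*(?P<field>[A-Za-z\-]+)\s*(?P<op>[<>]=?|=)\s*(?P<value>[\d.]+)\s*(?P<flags>.*)$"
)

def parse_query(query: str) -> dict:
    m = QUERY_RE.match(query)
    if not m:
        raise ValueError(f"unrecognized query: {query!r}")
    return {
        "field": m.group("field").lower(),   # benchmark/column name
        "op": m.group("op"),                 # comparison operator
        "value": float(m.group("value")),    # numeric threshold
        "flags": m.group("flags").split(),   # e.g. ["open"] for open-weight only
    }

q = parse_query("GPQA > 90 open")
# q == {'field': 'gpqa', 'op': '>', 'value': 90.0, 'flags': ['open']}
```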
 
 ## Data Structure
 
 ```
+data/
+├── llm.jsonl            # 41 LLMs × 32 fields (incl. unionEval ★NEW)
+├── vlm_flagship.jsonl   # 11 flagship VLMs × 10 benchmarks
+├── agent.jsonl          # 10 agent models × 8 benchmarks
+├── image.jsonl          # 10 image gen models × S/A/B/C ratings
+├── video.jsonl          # 10 video gen models × S/A/B/C ratings
+└── music.jsonl          # 8 music gen models × S/A/B/C ratings
 ```
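Each file in the tree above is JSON Lines: one model record per line. A minimal reader, using made-up placeholder rows (the record layout is an assumption based on the field schema; the values are illustrative, not real scores):

```python
import json

# Hypothetical rows in the style of data/llm.jsonl; JSON `null` maps to Python None.
sample = "\n".join([
    '{"name": "Model A", "gpqa": 88.0, "unionEval": 69.3}',
    '{"name": "Model B", "gpqa": 91.0, "unionEval": null}',
])

def read_jsonl(text: str) -> list:
    # One JSON object per non-empty line.
    return [json.loads(line) for line in text.splitlines() if line.strip()]

models = read_jsonl(sample)
scored = [m["name"] for m in models if m.get("unionEval") is not None]
```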
 
 ## LLM Field Schema
 
 | `mmmlu` | float \| null | Multilingual MMLU (%) |
 | `termBench` | float \| null | Terminal-Bench 2.0 (%) |
 | `sciCode` | float \| null | SciCode (%) |
+| `unionEval` | float \| null | **★NEW** Union Eval S3 — ALL Bench integrated benchmark (100% JSON auto-graded) |
 | `priceIn` / `priceOut` | float \| null | USD per 1M tokens |
 | `elo` | int \| null | Arena Elo rating |
 | `license` | string | `Prop`, `Apache2`, `MIT`, `Open`, etc. |
 ## Composite Score
 
 ```
+Score = Avg(confirmed benchmarks) × (N/10)^0.7
 ```
 
 10 core benchmarks across the **5-Axis Intelligence Framework**: Knowledge · Expert Reasoning · Abstract Reasoning · Metacognition · Execution.
 
+**v2.2 change:** Exponent adjusted from 0.5 to 0.7 for fairer coverage weighting. Models with 7/10 benchmarks receive ×0.78 (was ×0.84), while 4/10 receives ×0.53 (was ×0.63).
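The coverage penalties quoted above can be checked numerically from the formula; a small sketch (the function names are ours, not from the dataset):

```python
def coverage_factor(n_confirmed: int, total: int = 10, exponent: float = 0.7) -> float:
    # Fair Coverage Correction: (N/10)^0.7 as of v2.2 (was ^0.5).
    return (n_confirmed / total) ** exponent

def composite_score(scores: list, exponent: float = 0.7) -> float:
    # Score = Avg(confirmed benchmarks) × (N/10)^exponent
    return (sum(scores) / len(scores)) * coverage_factor(len(scores), exponent=exponent)

penalty_7_of_10 = round(coverage_factor(7), 2)  # 0.78 under ^0.7 (0.84 under ^0.5)
penalty_4_of_10 = round(coverage_factor(4), 2)  # 0.53 under ^0.7 (0.63 under ^0.5)
```

The lower exponent gap between 7/10 and 4/10 coverage under ^0.7 is what makes partially benchmarked models pay a steeper, fairer penalty.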
 
 ## Confidence System
 
 Each benchmark score in the `confidence` object is tagged:
 
 "Claude Opus 4.6": {
   "gpqa": { "level": "cross-verified", "source": "Anthropic + Vellum + DataCamp" },
   "arcAgi2": { "level": "cross-verified", "source": "Vellum + llm-stats + NxCode + DataCamp" },
+  "metacog": { "level": "single-source", "source": "FINAL Bench dataset" },
+  "unionEval": { "level": "single-source", "source": "Union Eval S3 — ALL Bench official" }
 }
 ```
 
 ## Usage
 
 ```python
+from datasets import load_dataset
+
+# Load LLM data
+ds = load_dataset("FINAL-Bench/ALL-Bench-Leaderboard", "llm")
+df = ds["train"].to_pandas()
 
 # Top 5 LLMs by GPQA
+ranked = df.dropna(subset=["gpqa"]).sort_values("gpqa", ascending=False)
+for _, m in ranked.head(5).iterrows():
     print(f"{m['name']:25s} GPQA={m['gpqa']}")
 
+# Union Eval scores
+union = df.dropna(subset=["unionEval"]).sort_values("unionEval", ascending=False)
+for _, m in union.iterrows():
+    print(f"{m['name']:25s} Union Eval={m['unionEval']}")
 ```
 
 
 
 
 
 
+## Union Eval — Integrated AI Assessment
+
+Union Eval is ALL Bench's proprietary benchmark designed to address three fundamental problems with existing AI evaluations:
+
+1. **Contamination** — Public benchmarks leak into training data. Union Eval rotates 70% of questions each season.
+2. **Single-axis measurement** — AIME tests only math, IFEval only instruction-following. Union Eval integrates arithmetic, poetry constraints, metacognition, coding, calibration, and myth detection.
+3. **Score inflation via keyword matching** — Traditional rubric grading gives 100% to "well-written" answers even if content is wrong. Union Eval enforces mandatory JSON output with zero keyword matching.
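The 70/30 season rotation from point 1 can be sketched as a seeded draw; the selection policy below is our illustration, not the official procedure (the real pool and anchors are unreleased):

```python
import random

def build_season(pool: list, prev_season: list, size: int = 100,
                 anchor_ratio: float = 0.3, seed: int = 0) -> list:
    """Sketch of a 70/30 season draw: 30% anchor questions reused from the
    previous season for cross-season calibration, 70% fresh from the pool."""
    rng = random.Random(seed)
    n_anchor = int(size * anchor_ratio)
    anchors = rng.sample(prev_season, n_anchor)            # kept for IRT anchoring
    fresh_pool = [q for q in pool if q not in prev_season]
    fresh = rng.sample(fresh_pool, size - n_anchor)        # unseen questions
    return anchors + fresh

pool = [f"q{i:04d}" for i in range(1000)]   # stand-in for the 1000-question pool
season_2 = build_season(pool, prev_season=pool[:100])
```

Because the anchors overlap across seasons, scores on the rotated 70% can be rescaled against the stable 30% before cross-season comparison.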
 
+**Structure (S3 — 100 Questions from 1000 Pool):**
+
+| Category | Questions | Role | Expected Score |
+|----------|-----------|------|---------------|
+| Pure Arithmetic | 10 | Confirmed Killer #1 | 0-57% |
+| Poetry/Verse IFEval | 8 | Confirmed Killer #2 | 18-28% |
+| Structured Data IFEval | 7 | JSON/CSV verification | 0-70% |
+| FINAL Bench Metacognition | 20 | Core brand | 50-95% |
+| Union Complex Synthesis | 15 | Extreme multi-domain | 40-73% |
+| Revised Science / Myths | 5 | Calibration traps | 50-86% |
+| Code I/O, GPQA, HLE | 19 | Expert + execution | 50-100% |
+| BFCL Tool Use, Anchors | 16 | Cross-season calibration | varies |
+
+Note: The 100-question dataset is **not publicly released** to prevent contamination. Only scores are published.
+
+
 ## FINAL Bench — Metacognitive Benchmark
 
+FINAL Bench measures AI self-correction ability. Error Recovery (ER) explains 94.8% of metacognitive performance variance. 15 frontier models evaluated.
 
 - 🧬 [FINAL-Bench/Metacognitive Dataset](https://huggingface.co/datasets/FINAL-Bench/Metacognitive)
 - 🏆 [FINAL-Bench/Leaderboard](https://huggingface.co/spaces/FINAL-Bench/Leaderboard)
 
+
+## Changelog
+
+| Version | Date | Changes |
+|---------|------|---------|
+| **v2.2.1** | 2026-03-10 | 🏅 **Union Eval ★NEW** — integrated benchmark column (`unionEval` field). Claude Opus 4.6: 69.3 · Sonnet 4.6: 69.9 |
+| v2.2 | 2026-03-10 | Fair Coverage (^0.7), +7 Metacog scores, Model Cards, FINAL Bench popup, Heatmap, Price-Perf |
+| v2.1 | 2026-03-08 | Confidence badges, Intelligence Report, source tracking |
+| v2.0 | 2026-03-07 | All blanks filled, Korean AI data, 42 LLMs cross-verified |
+| v1.9 | 2026-03-05 | +3 LLMs, dark mode, mobile responsive |
+
 ## Citation
 
 ```bibtex
 
 ---
 
+`#AIBenchmark` `#LLMLeaderboard` `#GPT5` `#Claude` `#Gemini` `#ALLBench` `#FINALBench` `#Metacognition` `#UnionEval` `#VLM` `#AIAgent` `#MultiModal` `#HuggingFace` `#ARC-AGI` `#AIEvaluation` `#VIDRAFT.net`