---
license: apache-2.0
task_categories:
- other
language:
- en
- ko
tags:
- world-model
- embodied-ai
- benchmark
- agi
- cognitive-evaluation
- vidraft
- prometheus
- wm-bench
- final-bench-family
pretty_name: World Model Bench (WM Bench)
size_categories:
- n<1K
---

# World Model Bench (WM Bench) v1.0

> **Beyond FID: Measuring Intelligence, Not Just Motion**

**WM Bench** is the world's first benchmark for evaluating the **cognitive capabilities** of World Models and Embodied AI systems.

[Leaderboard](https://huggingface.co/spaces/FINAL-Bench/worldmodel-bench)
[World Model Space](https://huggingface.co/spaces/FINAL-Bench/World-Model)
[FINAL-Bench Dataset](https://huggingface.co/datasets/VIDraft/FINAL-Bench)
[License](LICENSE)

---

## Why WM Bench?

Existing world model evaluations focus on:
- **FID / FVD**: image and video quality ("Does it look real?")
- **Atari scores**: performance in fixed game environments

**WM Bench measures something different: Does the model *think* correctly?**

| Existing Benchmarks | WM Bench |
|---|---|
| FID: "Does it look real?" | "Does it understand the scene?" |
| FVD: "Is the video smooth?" | "Does it predict threats correctly?" |
| Atari: Fixed game environment | Any environment via JSON input |
| No emotion modeling | Emotion escalation measurement |
| No memory testing | Contextual memory utilization |

---

## Benchmark Structure

### 3 Pillars · 10 Categories · 100 Scenarios

```
WM Score (0–1000)
├── P1: Perception   250 pts → C01, C02
├── P2: Cognition    450 pts → C03, C04, C05, C06, C07
└── P3: Embodiment   300 pts → C08, C09, C10
```

**Why Cognition is 45%:** Existing world models measure perception and motion, but not **judgment**. WM Bench is the only benchmark that measures the quality of a model's decisions.

| Cat | Name | World First? |
|-----|------|-------------|
| C01 | Environmental Awareness | |
| C02 | Entity Recognition & Classification | |
| C03 | Prediction-Based Reasoning | ★ |
| C04 | Threat-Type Differentiated Response | ★ |
| C05 | Autonomous Emotion Escalation | ★★ |
| C06 | Contextual Memory Utilization | ★ |
| C07 | Post-Threat Adaptive Recovery | ★ |
| C08 | Motion-Emotion Expression | ★ |
| C09 | Real-Time Cognitive-Action Performance | |
| C10 | Body-Swap Extensibility | ★★ |

★ = First defined in this benchmark
★★ = No prior research exists

### Grade Scale

| Grade | Score | Label |
|-------|-------|-------|
| S | 900+ | Superhuman |
| A | 750+ | Advanced |
| B | 600+ | Baseline |
| C | 400+ | Capable |
| D | 200+ | Developing |
| F | <200 | Failing |
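The grade boundaries above are a simple threshold lookup. A minimal sketch (the function name is illustrative, not part of the official scoring code):

```python
def wm_grade(score: int) -> str:
    """Map a WM Score (0-1000) to its letter grade per the scale above."""
    for grade, cutoff in [("S", 900), ("A", 750), ("B", 600), ("C", 400), ("D", 200)]:
        if score >= cutoff:
            return grade
    return "F"

print(wm_grade(726))  # the current leaderboard entry's score -> "B"
```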

---

## How to Participate

**No 3D environment needed.** WM Bench evaluates via text I/O only:

```
INPUT:  scene_context JSON
OUTPUT: PREDICT: left=danger(wall), right=safe(open), fwd=danger(beast), back=safe
        MOTION: a person sprinting right in desperate terror
```
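The exact response grammar is defined by the scoring scripts in the dataset repo; as a rough sketch, assuming the `PREDICT:` line uses `direction=label(reason)` pairs as in the example above, it can be parsed like this:

```python
import re

def parse_prediction(line: str) -> dict:
    """Parse a 'PREDICT:' line of the form 'dir=label(reason)' or 'dir=label'
    into {direction: (label, reason)}. The grammar here is inferred from the
    example above, not an official spec."""
    body = line.split("PREDICT:", 1)[1]
    out = {}
    for direction, label, reason in re.findall(r"(\w+)=(\w+)(?:\(([^)]*)\))?", body):
        out[direction] = (label, reason or None)  # reason may be omitted
    return out

print(parse_prediction(
    "PREDICT: left=danger(wall), right=safe(open), fwd=danger(beast), back=safe"
))
```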

### Participation Tracks

| Track | Description | Max Score |
|-------|-------------|-----------|
| **A** | Text-only (API) | 750 / 1000 |
| **B** | Text + performance metrics | 1000 / 1000 |
| **C** | Text + performance + live demo | 1000 / 1000 + ✓ Verified |

### Quick Start

```bash
git clone https://huggingface.co/datasets/VIDraft/wm-bench-dataset
cd wm-bench-dataset

python example_submission.py \
  --api_url https://api.openai.com/v1/chat/completions \
  --api_key YOUR_KEY \
  --model YOUR_MODEL \
  --output my_submission.json
```

Then upload `my_submission.json` to the [WM Bench Leaderboard](https://huggingface.co/spaces/FINAL-Bench/worldmodel-bench).
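If you collect model responses yourself rather than through the template script, the submission file is assembled along these lines. The field names below are assumptions for illustration; `example_submission.py` in the dataset repo is the authoritative template:

```python
import json

def build_submission(model_name: str, responses: dict, path: str) -> None:
    """Write a submission JSON pairing each scenario id with the model's
    raw PREDICT/MOTION response. Structure is hypothetical."""
    submission = {
        "model": model_name,
        "results": [
            {"scenario_id": sid, "response": text}
            for sid, text in sorted(responses.items())
        ],
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(submission, f, ensure_ascii=False, indent=2)

build_submission(
    "my-model",
    {"C01-001": "PREDICT: fwd=safe(open)\nMOTION: a person walking forward calmly"},
    "my_submission.json",
)
```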

---

## Dataset Files

```
wm-bench-dataset/
├── wm_bench_dataset.json    # 100 scenarios + ground truth
├── example_submission.py    # Participation template
├── wm_bench_scoring.py      # Scoring engine (fully open)
├── wm_bench_eval.py         # Evaluation runner
└── README.md
```

---

## Current Leaderboard

| Rank | Model | Org | WM Score | Grade | Track |
|------|-------|-----|----------|-------|-------|
| 1 | VIDRAFT PROMETHEUS v1.0 | VIDRAFT | 726 | B | C ✓ |

*Submit your model at the [WM Bench Leaderboard](https://huggingface.co/spaces/FINAL-Bench/worldmodel-bench)*

---

## FINAL Bench Family

WM Bench is part of the **FINAL Bench Family**, a suite of AGI evaluation benchmarks by VIDRAFT:

| Benchmark | Measures | Status |
|-----------|----------|--------|
| [FINAL Bench](https://huggingface.co/datasets/VIDraft/FINAL-Bench) | Text AGI (metacognition) | HF Global Top 5 · 4 press features |
| **WM Bench** | **Embodied AGI (world models)** | **Live** |