---
language:
- en
pretty_name: WoW-1 Benchmark Samples
tags:
- robotics
- physical-reasoning
- causal-reasoning
- action-understanding
- video-understanding
- embodied-ai
- wow
- arxiv:2509.22642
license: mit
task_categories:
- video-classification
- action-generation
dataset_type: benchmark
size_categories:
- 1K<n<10K
---
# WoW-1 Benchmark Samples

**WoW-1 Benchmark Samples** is the official evaluation dataset released as part of the [WoW (World-Omniscient World Model)](https://github.com/wow-world-model/wow-world-model) project. This benchmark is designed to assess the physical consistency and causal reasoning capabilities of generative world models for robotics and embodied AI.

## Dataset Overview

This dataset contains **612** natural language prompts representing real-world robot interaction tasks. These instructions are used to evaluate world models on their ability to understand and generate plausible, physically grounded responses in video or action space.

Each sample describes a short-term or long-horizon task involving:

- Object manipulation (e.g., _"Put the screw driver into the drawer"_)
- Physical causality (e.g., _"Pick up an egg and crack it into the bowl"_)
- Spatial reasoning (e.g., _"Move the lid from the black pot to the blue pan"_)
- State transitions (e.g., _"Turn off the light switch"_)
## Use Cases

This dataset is intended for:

- Evaluating generative video models on **physical realism**
- Testing embodied agents on **causal reasoning**
- Benchmarking **language-to-action** and **planning** models
- Training or fine-tuning **robotic manipulation** systems
## Format

- **Modality**: Text (natural language commands)
- **Format**: Plain text / JSON / Parquet
- **Example**:

```json
{
  "text": "Put the apples on the table into the basket."
}
```
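Since each record is a single JSON object with a `text` field, extracting the prompts is straightforward. A minimal sketch, assuming the samples are stored one JSON object per line (the exact file layout is not specified in this card); the two records here are taken from this card's own examples:

```python
import json

# Two records in the format shown above; in practice these lines
# would be read from the dataset's JSON/JSONL file.
raw_lines = [
    '{"text": "Put the apples on the table into the basket."}',
    '{"text": "Turn off the light switch"}',
]

# Parse each line and keep only the instruction text.
prompts = [json.loads(line)["text"] for line in raw_lines]
print(prompts)
```

The same list comprehension works unchanged over all 612 lines of the full benchmark file.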
## Dataset Stats

- Number of samples: 612
- Text lengths: 11 to 230 characters
- Language: English
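The character-length figures above are easy to reproduce once the prompts are loaded. A small sketch, run here over two of this card's sample prompts in place of the full 612-prompt set:

```python
# Two sample prompts from this card; the real computation would
# iterate over all 612 prompts in the benchmark.
prompts = [
    "Clean the table surface",
    "Open the door of the red microwave",
]

# Min/max character counts, matching how the stats above are defined.
lengths = [len(p) for p in prompts]
print(min(lengths), max(lengths))  # 23 34 for these two samples
```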
## Example Samples

- `Clean the table surface`
- `Use the right arm to grab the pearl and give it to the left arm`
- `Open the door of the red microwave`
- `Place the tennis ball in the brown object`
## Related Models

This dataset is used for evaluating models such as:

- `WoW-1-DiT-2B`, `WoW-1-DiT-7B`
- `WoW-1-Wan-14B`
- `SOPHIA`-guided generative models
## Related Paper

> **[WoW: Towards a World omniscient World model Through Embodied Interaction](https://arxiv.org/abs/2509.22642)**
> *Xiaowei Chi et al., 2025, arXiv:2509.22642*

Please cite this paper if you use the dataset:

```bibtex
@article{chi2025wow,
  title={WoW: Towards a World omniscient World model Through Embodied Interaction},
  author={Chi, Xiaowei and Jia, Peidong and Fan, Chun-Kai and Ju, Xiaozhu and Mi, Weishi and Qin, Zhiyuan and Zhang, Kevin and Tian, Wanxin and Ge, Kuangzhi and Li, Hao and others},
  journal={arXiv preprint arXiv:2509.22642},
  year={2025}
}
```
## Project Links

- Project site: [wow-world-model.github.io](https://wow-world-model.github.io/)
- GitHub: [github.com/wow-world-model/wow-world-model](https://github.com/wow-world-model/wow-world-model)
- arXiv: [arxiv.org/abs/2509.22642](https://arxiv.org/abs/2509.22642)

## License

This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).
---

We encourage the community to explore, evaluate, and extend this benchmark. Contributions and feedback are welcome via GitHub or the project website.