---
license: cc-by-4.0
task_categories:
- image-to-video
- text-to-video
language:
- en
size_categories:
- n<1K
pretty_name: VBVR-MultiStep-Bench
tags:
- video-reasoning
- multi-step
- long-horizon
- image-to-video
- evaluation
- benchmark
---

# VBVR-MultiStep-Bench

The frozen **180-instance public evaluation split** released alongside the [VBVR-MultiStep](https://huggingface.co/datasets/Video-Reason/VBVR-MultiStep) training corpus, designed for evaluating long-horizon multi-step image-to-video (I2V) reasoning.

This dataset is part of the **VBVR (Very Big Video Reasoning Suite)** project. See the parent suite and the suite paper [VBVR: A Very Big Video Reasoning Suite (Wang et al., ICML 2026)](https://icml.cc/virtual/2026/poster/65709).

## At a glance

| Property | Value |
|---|---|
| Tasks | **36** parameterized tasks (`Multi-01` … `Multi-36`) |
| Reasoning families | Navigation, Planning, CSP, Execution, Geometry, Physics |
| Instances | **180** (5 per task × 36 tasks) |
| Per-instance artifacts | 5 (see below) |
| License | CC-BY-4.0 |

## Five-artifact data contract

Every instance lives at:

```
Multi-XX_<task_name>_data-generator/Multi-XX_<task_name>_data-generator_task/Multi-XX_<task_name>_data-generator_<instance_id>/
```

and contains exactly:

| File | Role |
|---|---|
| `first_frame.png` | Model conditioning image (the only visual input the model receives at inference) |
| `prompt.txt` | Natural-language task contract |
| `final_frame.png` | Target endpoint (held out from the model) |
| `ground_truth.mp4` | Reference rollout demonstrating the correct trajectory |
| `question_metadata.json` | Seed, version, tolerances, task-specific fields |

A top-level `metadata.parquet` indexes every instance with the task id, family, seed, and per-instance metadata for fast filtering.
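The directory pattern above can be assembled programmatically. The helper below is an illustrative sketch, not part of the release — the function and argument names are hypothetical; it simply encodes the naming convention and matches the concrete `Multi-01` path used in the download example later in this card:

```python
# Illustrative sketch: this helper and its argument names are NOT part of
# the dataset release; it just encodes the directory pattern shown above.
ARTIFACTS = (
    "first_frame.png",         # conditioning image (model input)
    "prompt.txt",              # natural-language task contract
    "final_frame.png",         # held-out target endpoint
    "ground_truth.mp4",        # reference rollout
    "question_metadata.json",  # seed, version, tolerances
)

def instance_paths(task_id: str, task_name: str, instance_id: str) -> list:
    """Return the five artifact paths for one benchmark instance."""
    stem = f"{task_id}_{task_name}_data-generator"
    base = f"{stem}/{stem}_task/{stem}_{instance_id}"
    return [f"{base}/{name}" for name in ARTIFACTS]

paths = instance_paths("Multi-01", "maze_shortest_path", "00000000")
```

Passing each returned path to `hf_hub_download` (as in the Loading section) retrieves the corresponding artifact.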
## Reasoning families

| Family | Characteristic | Released tasks |
|---|---|---|
| Navigation | Discrete motion under adjacency / obstacle constraints | 6 |
| Planning | Operator-based state transformation | 6 |
| CSP | Incremental labeling under global consistency | 6 (3 used for human judging) |
| Execution | Clocked deterministic update rules | 6 |
| Geometry | Ordered constructive geometry | 6 |
| Physics | Continuous dynamics with contact / conservation | 6 |

Tasks `Multi-13`, `Multi-14`, `Multi-15` (CSP) are excluded from the human-judging pool described in the paper but are included in this release for completeness.

## Intended use

- **Primary use**: trajectory-level evaluation of I2V systems under a fixed five-artifact contract.
- **Comparison protocol**: blind human pairwise judging on three independent axes — process correctness, reference fidelity, render quality.
- **Companion training corpus**: [Video-Reason/VBVR-MultiStep](https://huggingface.co/datasets/Video-Reason/VBVR-MultiStep) (~360k samples).

## Loading

```python
import pandas as pd

meta = pd.read_parquet("hf://datasets/Video-Reason/VBVR-MultiStep-Bench/metadata.parquet")
```

Or pull a single instance:

```python
from huggingface_hub import hf_hub_download

prompt_path = hf_hub_download(
    "Video-Reason/VBVR-MultiStep-Bench",
    "Multi-01_maze_shortest_path_data-generator/Multi-01_maze_shortest_path_data-generator_task/Multi-01_maze_shortest_path_data-generator_00000000/prompt.txt",
    repo_type="dataset",
)
```

## License

Released under **CC-BY-4.0**. The reference rollouts are produced from generators that consume only released task definitions; no third-party copyrighted content is embedded. Wan2.2-I2V-A14B (Apache-2.0) is referenced as a baseline model and a fine-tuning ancestor for `VBVR-Wan2.2`; this dataset does not redistribute Wan2.2 weights.

## Responsible AI

This dataset is fully synthetic — generators produce every instance from controlled parameters.
There are no human subjects, no scraped media, and no personal information. See the [Croissant file](./croissant.json) for the complete RAI metadata.