---
license: cc-by-4.0
task_categories:
- image-to-video
- text-to-video
language:
- en
size_categories:
- 100K<n<1M
pretty_name: VBVR-MultiStep
tags:
- video-reasoning
- multi-step
- long-horizon
- image-to-video
- training
---
# VBVR-MultiStep
The **~360k-sample programmatic training corpus** for long-horizon multi-step image-to-video (I2V) reasoning. Companion to the frozen [VBVR-MultiStep-Bench](https://huggingface.co/datasets/Video-Reason/VBVR-MultiStep-Bench) (180-instance evaluation split).
Part of the **VBVR (Very Big Video Reasoning Suite)** project: <https://video-reason.com>. See [Wang et al., ICML 2026](https://icml.cc/virtual/2026/poster/65709) for the parent suite.
## At a glance
| Property | Value |
|---|---|
| Tasks | **36** parameterized tasks (`Multi-01` … `Multi-36`) |
| Reasoning families | Navigation, Planning, CSP, Execution, Geometry, Physics |
| Total samples | **~360,000** (≈10k per task) |
| Total size | **~164 GB** |
| Format | Tar.gz shards (nested per-sample folders) + Parquet metadata |
| Shards | 7,200 (≈50 samples per shard) |
| License | CC-BY-4.0 |
## Repository layout
```
.
├── README.md
├── croissant.json # Croissant + RAI metadata
├── data/
│ ├── metadata.parquet # global index of all 360k samples
│ └── metadata_shards/
│ └── Multi-XX_<name>.parquet # per-task metadata (36 files)
├── questions/ # WebDataset shards
│   └── Multi-XX_<name>_NNNNN-NNNNN.tar.gz   # 50 samples per shard, 5 files per sample; see "Sample format" below
└── sample/ # ~5 GB representative subset for quick inspection
├── data/metadata_shards/...
└── questions/ # 6 shards × 36 tasks = 216 shards
```
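Shard filenames encode the task id, task name, and the inclusive sample-index range. A minimal parser for that naming pattern, inferred from the layout above (the helper `parse_shard_name` and the exact regex are illustrative, not part of the release):

```python
import re

# Pattern inferred from "Multi-XX_<name>_NNNNN-NNNNN.tar.gz"; illustrative only.
SHARD_RE = re.compile(
    r"Multi-(?P<task>\d{2})_(?P<name>.+)_(?P<lo>\d{5})-(?P<hi>\d{5})\.tar\.gz$"
)

def parse_shard_name(filename):
    """Split a shard filename into task id, task name, and sample-index range."""
    m = SHARD_RE.search(filename)
    if m is None:
        raise ValueError(f"unrecognized shard name: {filename}")
    return {
        "task": int(m.group("task")),
        "name": m.group("name"),
        "lo": int(m.group("lo")),
        "hi": int(m.group("hi")),
    }
```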
The `sample/` subdirectory is a ~5 GB curated subset (the first 300 samples of every task) intended for reviewers and quick experimentation. To pull it:
```bash
huggingface-cli download Video-Reason/VBVR-MultiStep \
--repo-type dataset \
--include "sample/**" \
--local-dir ./vbvr-multistep-sample
```
## Sample format (inside each `.tar.gz` shard)
Each shard expands to a nested folder tree, identical in shape to the evaluation split:
```
Multi-XX_<name>_data-generator/
└── Multi-XX_<name>_data-generator_task/
└── Multi-XX_<name>_data-generator_<id>/
├── first_frame.png # conditioning frame
├── prompt.txt # natural-language task contract
├── final_frame.png # target endpoint (held-out at inference)
├── ground_truth.mp4 # reference rollout
└── question_metadata.json # seed, version, tolerances, task fields
```
Each shard contains 50 such instance folders. The five-artifact contract is identical to the evaluation split.
To extract:
```bash
tar xzf Multi-01_maze_shortest_path_data-generator_00000-00049.tar.gz
```
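After extraction, instance folders can be enumerated by walking the nested tree for `question_metadata.json` files. A standard-library sketch (the helper name `iter_samples` is illustrative):

```python
import json
from pathlib import Path

# The five per-sample artifacts named in the layout above.
ARTIFACTS = {
    "first_frame.png", "prompt.txt", "final_frame.png",
    "ground_truth.mp4", "question_metadata.json",
}

def iter_samples(root):
    """Yield (sample_dir, metadata dict) for every complete instance folder under root."""
    for meta_path in Path(root).rglob("question_metadata.json"):
        sample_dir = meta_path.parent
        present = {p.name for p in sample_dir.iterdir()}
        if ARTIFACTS <= present:  # keep only folders with all five artifacts
            yield sample_dir, json.loads(meta_path.read_text())
```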
## Loading
### Per-task metadata (recommended entry point)
```python
import pandas as pd

# reading hf:// paths requires `huggingface_hub` (which provides the fsspec filesystem) and `pyarrow`
m = pd.read_parquet(
    "hf://datasets/Video-Reason/VBVR-MultiStep/data/metadata_shards/Multi-01_maze_shortest_path_data-generator.parquet"
)
print(m.head())
```
### Direct shard download
```python
from huggingface_hub import hf_hub_download
import tarfile
shard = hf_hub_download(
"Video-Reason/VBVR-MultiStep",
"questions/Multi-01_maze_shortest_path_data-generator_00000-00049.tar.gz",
repo_type="dataset",
)
with tarfile.open(shard) as t:
    t.extractall("./extracted", filter="data")  # "data" filter (Python 3.12+) rejects unsafe member paths
```
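For quick audits (e.g. scanning prompts across a shard), members can also be read in memory without extracting the whole tree. A standard-library sketch (the helper name `read_prompts` is illustrative):

```python
import tarfile

def read_prompts(shard_path):
    """Collect every prompt.txt in a shard without extracting it to disk."""
    prompts = {}
    with tarfile.open(shard_path, "r:gz") as t:
        for member in t.getmembers():
            if member.isfile() and member.name.endswith("prompt.txt"):
                prompts[member.name] = t.extractfile(member).read().decode("utf-8")
    return prompts
```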
## Splits and seeds
The training corpus is partitioned into disjoint seed bands:
| Band | Seed range | Samples per task | Total samples |
|---|---|---|---|
| First half | 1–5,000 | 5,000 | ~180k over all 36 tasks (~170k over the 34 trained tasks) |
| Second half | 5,001–10,000 | 5,000 | ~180k over all 36 tasks (~170k over the 34 trained tasks) |
Both bands are disjoint from the **180-instance evaluation seeds** in `VBVR-MultiStep-Bench`. The submitted paper trains on 34 of 36 tasks; the released corpus contains all 36 task families.
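Selecting one band from the metadata then reduces to a range filter. A sketch assuming the parquet metadata exposes a `seed` column (the seed is recorded per sample; the exact column name is an assumption, so adjust it to whatever the released files use):

```python
import pandas as pd

def seed_band(meta, lo, hi):
    """Rows whose seed falls in the inclusive band [lo, hi].

    Assumes a `seed` column in the metadata; rename if the released
    parquet files use a different field.
    """
    return meta[meta["seed"].between(lo, hi)]

# first_half  = seed_band(m, 1, 5_000)
# second_half = seed_band(m, 5_001, 10_000)
```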
## Reasoning families
See the [bench dataset card](https://huggingface.co/datasets/Video-Reason/VBVR-MultiStep-Bench) for the family taxonomy. Each family contributes 6 tasks, for 36 total.
## Intended use and out-of-scope
- **Primary use**: training I2V systems on long-horizon multi-step reasoning under explicit per-step rules.
- **Out-of-scope**: this corpus is fully synthetic and stylized; transfer to unconstrained open-world video is not validated by this release.
- **Not validated for**: production VLM pretraining at scale, real-world video generation, or any safety-critical use.
## License
Released under **CC-BY-4.0**. Generators consume only released task definitions; no third-party copyrighted content is embedded.
Derivatives of `Wan2.2-I2V-A14B` (Apache-2.0) referenced in the companion paper comply with the upstream license. This dataset does not redistribute model weights.
## Responsible AI
The dataset is fully synthetic. There are no human subjects, no scraped media, and no personal or sensitive information. Known biases inherit from the deterministic generators — every task family covers a deliberately narrow conceptual slice, and visual style is controlled by a fixed renderer family (no demographic content). See [`croissant.json`](./croissant.json) for the complete RAI metadata.