---
license: cc-by-4.0
task_categories:
  - image-to-video
  - text-to-video
language:
  - en
size_categories:
  - 100K<n<1M
pretty_name: VBVR-MultiStep
tags:
  - video-reasoning
  - multi-step
  - long-horizon
  - image-to-video
  - training
---

# VBVR-MultiStep

VBVR-MultiStep is a ~360k-sample programmatic training corpus for long-horizon, multi-step image-to-video (I2V) reasoning. It is the companion to the frozen VBVR-MultiStep-Bench, a 180-instance evaluation split.

Part of the VBVR (Very Big Video Reasoning) suite: https://video-reason.com. See Wang et al. (ICML 2026) for the parent suite.

## At a glance

| Property | Value |
| --- | --- |
| Tasks | 36 parameterized tasks (Multi-01 through Multi-36) |
| Reasoning families | Navigation, Planning, CSP, Execution, Geometry, Physics |
| Total samples | ~360,000 (≈10k per task) |
| Total size | ~164 GB |
| Format | Tar.gz shards (nested per-sample folders) + Parquet metadata |
| Shards | 7,200 (50 samples per shard) |
| License | CC-BY-4.0 |

## Repository layout

```
.
├── README.md
├── croissant.json                       # Croissant + RAI metadata
├── data/
│   ├── metadata.parquet                 # global index of all 360k samples
│   └── metadata_shards/
│       └── Multi-XX_<name>.parquet      # per-task metadata (36 files)
├── questions/                           # WebDataset shards
│   └── Multi-XX_<name>_NNNNN-NNNNN.tar.gz
│       └── (50 samples per shard, 5 files per sample; see "Sample format" below)
└── sample/                              # ~5 GB representative subset for quick inspection
    ├── data/metadata_shards/...
    └── questions/                       # 6 shards × 36 tasks = 216 shards
```
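To pull a single task family instead of the full ~164 GB corpus, the flat naming scheme supports pattern filtering. A minimal sketch using `huggingface_hub.snapshot_download`; the `Multi-01` prefix is illustrative:

```python
from huggingface_hub import snapshot_download

# Fetch only the metadata shard and question shards for one task family;
# the glob patterns follow the naming scheme in the tree above.
local_dir = snapshot_download(
    "Video-Reason/VBVR-MultiStep",
    repo_type="dataset",
    allow_patterns=[
        "data/metadata_shards/Multi-01_*",
        "questions/Multi-01_*",
    ],
)
print(local_dir)
```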

The `sample/` subdirectory is a ~5 GB pre-curated subset (the first 300 samples of every task) for reviewers and quick experimentation. To pull it, use the command under "Pull only the 5 GB sample" in the Loading section below.

## Sample format (inside each `.tar.gz` shard)

Each shard expands to a nested folder tree, identical in shape to the evaluation split:

```
Multi-XX_<name>_data-generator/
└── Multi-XX_<name>_data-generator_task/
    └── Multi-XX_<name>_data-generator_<id>/
        ├── first_frame.png         # conditioning frame
        ├── prompt.txt              # natural-language task contract
        ├── final_frame.png         # target endpoint (held-out at inference)
        ├── ground_truth.mp4        # reference rollout
        └── question_metadata.json  # seed, version, tolerances, task fields
```

Each shard contains 50 such instance folders. The five-artifact contract is identical to the evaluation split.

To extract:

```bash
tar xzf Multi-01_maze_shortest_path_data-generator_00000-00049.tar.gz
```
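Once extracted, the five artifacts of each instance can be walked directly. A minimal sketch; Pillow for PNG decoding is an assumption, and the `seed` key follows the `question_metadata.json` fields listed above:

```python
import json
from pathlib import Path

from PIL import Image  # assumption: Pillow is installed for PNG decoding

# Iterate every instance folder in the extracted shard
# (<task>_data-generator/<task>_data-generator_task/<task>_data-generator_<id>/).
root = Path("Multi-01_maze_shortest_path_data-generator")
for inst in sorted(root.glob("*_task/*")):
    prompt = (inst / "prompt.txt").read_text()
    meta = json.loads((inst / "question_metadata.json").read_text())
    first_frame = Image.open(inst / "first_frame.png")  # conditioning frame
    final_frame = Image.open(inst / "final_frame.png")  # held-out endpoint
    video_path = inst / "ground_truth.mp4"              # reference rollout
    print(inst.name, meta.get("seed"), first_frame.size)
```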

## Loading

### Per-task metadata (recommended entry point)

```python
import pandas as pd

m = pd.read_parquet(
    "hf://datasets/Video-Reason/VBVR-MultiStep/data/metadata_shards/Multi-01_maze_shortest_path_data-generator.parquet"
)
print(m.head())
```
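The global index at `data/metadata.parquet` covers all ~360k samples in one table and loads the same way. Its column schema is not documented here, so inspect it before filtering:

```python
import pandas as pd

# Global index: one row per sample across all 36 tasks.
idx = pd.read_parquet(
    "hf://datasets/Video-Reason/VBVR-MultiStep/data/metadata.parquet"
)
print(len(idx), list(idx.columns))
```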

### Direct shard download

```python
from huggingface_hub import hf_hub_download
import tarfile

shard = hf_hub_download(
    "Video-Reason/VBVR-MultiStep",
    "questions/Multi-01_maze_shortest_path_data-generator_00000-00049.tar.gz",
    repo_type="dataset",
)
with tarfile.open(shard) as t:
    t.extractall("./extracted")
```
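For larger pipelines it can be preferable to stream samples out of a shard instead of extracting it. A minimal `tarfile` sketch, reusing `shard` from the snippet above; the `seed` key follows the sample-format description:

```python
import json
import tarfile

# Read per-sample metadata straight from the shard, without extracting.
with tarfile.open(shard) as t:
    for member in t.getmembers():
        if member.name.endswith("question_metadata.json"):
            with t.extractfile(member) as f:
                meta = json.load(f)
            print(member.name, meta.get("seed"))
```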

### Pull only the 5 GB sample

```bash
huggingface-cli download Video-Reason/VBVR-MultiStep \
    --repo-type dataset \
    --include "sample/**" \
    --local-dir ./vbvr-multistep-sample
```

## Splits and seeds

The training corpus is partitioned into disjoint seed bands:

| Band | Seed range | Samples per task | Total samples |
| --- | --- | --- | --- |
| First-half | 1–5,000 | 5,000 | ~170k (across 34 trained tasks) |
| Second-half | 5,001–10,000 | 5,000 | ~170k (across 34 trained tasks) |

Both bands are disjoint from the 180-instance evaluation seeds in VBVR-MultiStep-Bench. The submitted paper trains on 34 of 36 tasks; the released corpus contains all 36 task families.
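To reproduce a band, filter the per-task metadata by seed. A minimal sketch; it assumes the Parquet files expose a `seed` column matching the `question_metadata.json` field:

```python
import pandas as pd

m = pd.read_parquet(
    "hf://datasets/Video-Reason/VBVR-MultiStep/data/metadata_shards/"
    "Multi-01_maze_shortest_path_data-generator.parquet"
)
# First-half band: seeds 1-5,000 (assumed `seed` column).
first_half = m[m["seed"].between(1, 5000)]
print(len(first_half))  # expected: 5,000 rows for this task
```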

## Reasoning families

See the bench dataset card for the family taxonomy. Each family contributes 6 tasks, for 36 total.

## Intended use and out-of-scope

- Primary use: training I2V systems on long-horizon, multi-step reasoning under explicit per-step rules.
- Out-of-scope: this corpus is fully synthetic and stylized; transfer to unconstrained open-world video is not validated by this release.
- Not validated for: production VLM pretraining at scale, real-world video generation, or any safety-critical use.

## License

Released under CC-BY-4.0. Generators consume only released task definitions; no third-party copyrighted content is embedded.

Derivatives of Wan2.2-I2V-A14B (Apache-2.0) referenced in the companion paper comply with the upstream license. This dataset does not redistribute model weights.

## Responsible AI

The dataset is fully synthetic. There are no human subjects, no scraped media, and no personal or sensitive information. Known biases inherit from the deterministic generators: every task family covers a deliberately narrow conceptual slice, and visual style is controlled by a fixed renderer family (no demographic content). See `croissant.json` for the complete RAI metadata.