# VR-MultiStep

A ~360k-sample programmatic training corpus for long-horizon, multi-step image-to-video (I2V) reasoning. It is the companion to the frozen VR-MultiStep-Bench (180-instance evaluation split).
## At a glance
| Property | Value |
|---|---|
| Tasks | 36 parameterized tasks (Multi-01 … Multi-36) |
| Reasoning families | Navigation, Planning, CSP, Execution, Geometry, Physics |
| Total samples | ~360,000 (≈10k per task) |
| Total size | ~164 GB |
| Format | Tar.gz shards (nested per-sample folders) + Parquet metadata |
| Shards | 7,200 (≈50 samples per shard) |
| License | CC-BY-4.0 |
## Repository layout
```
.
├── README.md
├── croissant.json                        # Croissant + RAI metadata
├── data/
│   ├── metadata.parquet                  # global index of all 360k samples
│   └── metadata_shards/
│       └── Multi-XX_<name>.parquet       # per-task metadata (36 files)
├── questions/                            # tar.gz shards
│   └── Multi-XX_<name>_NNNNN-NNNNN.tar.gz
│       └── (50 samples per shard, 5 files per sample; see "Sample format" below)
└── sample/                               # ~5 GB representative subset for quick inspection
    ├── data/metadata_shards/...
    └── questions/                        # 6 shards × 36 tasks = 216 shards
```
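To sanity-check this layout against what is actually on the Hub, the repository files can be listed without downloading anything; a minimal sketch using `huggingface_hub.list_repo_files`:

```python
from huggingface_hub import list_repo_files

# Enumerate every file in the dataset repo and count the question shards.
files = list_repo_files("Mark7121983123/VR-MultiStep", repo_type="dataset")
shards = [f for f in files if f.startswith("questions/") and f.endswith(".tar.gz")]
print(f"{len(files)} files total, {len(shards)} question shards")
```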
The `sample/` subdirectory is a ~5 GB pre-curated subset (the first 300 samples of every task) for reviewers and quick experimentation. To pull it:
```bash
huggingface-cli download Mark7121983123/VR-MultiStep \
  --repo-type dataset \
  --include "sample/**" \
  --local-dir ./vr-multistep-sample
```
## Sample format (inside each `.tar.gz` shard)
Each shard expands to a nested folder tree, identical in shape to the evaluation split:
```
Multi-XX_<name>_data-generator/
└── Multi-XX_<name>_data-generator_task/
    └── Multi-XX_<name>_data-generator_<id>/
        ├── first_frame.png            # conditioning frame
        ├── prompt.txt                 # natural-language task contract
        ├── final_frame.png            # target endpoint (held-out at inference)
        ├── ground_truth.mp4           # reference rollout
        └── question_metadata.json     # seed, version, tolerances, task fields
```
Each shard contains 50 such instance folders. The five-artifact contract is identical to the evaluation split.
To extract:
```bash
tar xzf Multi-01_maze_shortest_path_data-generator_00000-00049.tar.gz
```
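After extraction, the instance folders can be walked and their artifacts loaded in a few lines; a minimal sketch that follows the tree above (it does not assume any particular fields inside `question_metadata.json`):

```python
import json
from pathlib import Path

root = Path("./")  # directory where the shard was extracted
# Each instance folder holds the five artifacts listed above.
for meta_path in sorted(root.glob("**/question_metadata.json")):
    instance_dir = meta_path.parent
    metadata = json.loads(meta_path.read_text())
    prompt = (instance_dir / "prompt.txt").read_text()
    first_frame = instance_dir / "first_frame.png"    # conditioning frame
    ground_truth = instance_dir / "ground_truth.mp4"  # reference rollout
    print(instance_dir.name, len(prompt), sorted(metadata.keys()))
```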
## Loading
### Per-task metadata (recommended entry point)
```python
import pandas as pd

m = pd.read_parquet(
    "hf://datasets/Mark7121983123/VR-MultiStep/data/metadata_shards/Multi-01_maze_shortest_path_data-generator.parquet"
)
print(m.head())
```
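The global index at `data/metadata.parquet` covers all ~360k samples and can be read the same way (the `hf://` path style relies on `huggingface_hub` being installed). A minimal sketch that only inspects its shape and schema, since column names are not documented here:

```python
import pandas as pd

# Global index of all samples; larger than the per-task shards.
idx = pd.read_parquet(
    "hf://datasets/Mark7121983123/VR-MultiStep/data/metadata.parquet"
)
print(idx.shape)
print(idx.columns.tolist())
```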
### Direct shard download
```python
from huggingface_hub import hf_hub_download
import tarfile

shard = hf_hub_download(
    "Mark7121983123/VR-MultiStep",
    "questions/Multi-01_maze_shortest_path_data-generator_00000-00049.tar.gz",
    repo_type="dataset",
)
with tarfile.open(shard) as t:
    t.extractall("./extracted")
```
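Individual artifacts can also be read straight from the archive without extracting everything; a minimal sketch that reuses the `shard` path returned above and pulls out every `prompt.txt`:

```python
import tarfile

# `shard` is the local path returned by hf_hub_download in the block above.
with tarfile.open(shard) as t:
    for member in t.getmembers():
        if member.name.endswith("prompt.txt"):
            prompt = t.extractfile(member).read().decode("utf-8")
            print(member.name, prompt[:80])
```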
### Pull only the ~5 GB sample
```bash
huggingface-cli download Mark7121983123/VR-MultiStep \
  --repo-type dataset \
  --include "sample/**" \
  --local-dir ./vr-multistep-sample
```
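The same selective pull can be scripted from Python; a minimal sketch using `huggingface_hub.snapshot_download` with the pattern and target directory from the CLI call above:

```python
from huggingface_hub import snapshot_download

# Download only the sample/ subtree of the dataset repo.
snapshot_download(
    repo_id="Mark7121983123/VR-MultiStep",
    repo_type="dataset",
    allow_patterns=["sample/**"],
    local_dir="./vr-multistep-sample",
)
```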
## Splits and seeds
The training corpus is partitioned into disjoint seed bands:
| Band | Seed range | Samples per task | Total samples |
|---|---|---|---|
| First-half | 1–5,000 | 5,000 | ~170k (across 34 trained tasks) |
| Second-half | 5,001–10,000 | 5,000 | ~170k (across 34 trained tasks) |
Both bands are disjoint from the 180-instance evaluation seeds in VR-MultiStep-Bench. The submitted paper trains on 34 of 36 tasks; the released corpus contains all 36 task families.
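To reproduce the bands from the metadata, filter on the generation seed; a minimal sketch, assuming the per-task parquet exposes it under a column named `seed` (a hypothetical name; check the actual schema first):

```python
import pandas as pd

m = pd.read_parquet(
    "hf://datasets/Mark7121983123/VR-MultiStep/data/metadata_shards/Multi-01_maze_shortest_path_data-generator.parquet"
)
# "seed" is a hypothetical column name; bands are 1-5,000 and 5,001-10,000.
first_half = m[m["seed"].between(1, 5_000)]
second_half = m[m["seed"].between(5_001, 10_000)]
print(len(first_half), len(second_half))
```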
## Reasoning families
See the evaluation split dataset card for the family taxonomy. Each family contributes 6 tasks, for 36 total.
## Intended use and out-of-scope
- Primary use: training I2V systems on long-horizon multi-step reasoning under explicit per-step rules.
- Out-of-scope: this corpus is fully synthetic and stylized; transfer to unconstrained open-world video is not validated by this release.
- Not validated for: production VLM pretraining at scale, real-world video generation, or any safety-critical use.
## License
Released under CC-BY-4.0. Generators consume only released task definitions; no third-party copyrighted content is embedded.
Derivatives of Wan2.2-I2V-A14B (Apache-2.0) referenced in the companion paper comply with the upstream license. This dataset does not redistribute model weights.
## Responsible AI
The dataset is fully synthetic. There are no human subjects, no scraped media, and no personal or sensitive information. Known biases inherit from the deterministic generators — every task family covers a deliberately narrow conceptual slice, and visual style is controlled by a fixed renderer family (no demographic content). See croissant.json for the complete RAI metadata.