---
license: apache-2.0
task_categories:
- video-text-to-text
- robotics
language:
- en
tags:
- video
- reasoning
- perception
- embodied
- simulation
- webdataset
size_categories:
- 100K<n<1M
---
# VideoReason Training Dataset

A large-scale video reasoning training dataset spanning perception, simulation, and embodied tasks.

## Dataset Overview
| Subset | Samples | Description |
|---|---|---|
| perception | 177,407 | Visual perception tasks |
| simulation | 105,818 | 3D scene navigation with camera motion sequences in simulated environments |
| embodied | 188,845 | Robotic manipulation tasks |
| **Total** | **472,070** | |
Each sample consists of a video (.mp4) paired with a text prompt describing the task.
### Perception Breakdown
| Task | Samples | Description |
|---|---|---|
| Segmentation | 40,000 | Instance segmentation with color-fill visualization |
| Denoising | 32,093 | Image denoising from noisy inputs |
| Low-light Enhancement | 32,103 | Enhancing images captured in low-light conditions |
| Super-resolution | 32,090 | Single-image super-resolution |
| Edge Detection | 30,000 | Boundary and edge extraction |
| Keypoint Detection | 11,121 | Detecting structural keypoints |
### Embodied Breakdown
| Source | Samples | Description |
|---|---|---|
| DROID | 94,237 | Real-world robotic manipulation from the DROID dataset |
| RoboTwin | 94,608 | Simulated bimanual robotic manipulation from RoboTwin |
## Repository Structure

```
README.md
{subset}/prompts.jsonl            # metadata: one JSON object per line
{subset}/shards/shard-NNNNNN.tar  # WebDataset tar shards (~1 GB each)
```
### prompts.jsonl format

```json
{"video_path": "video/seg/example.mp4", "prompt": "Identify a dining table in this image..."}
```

- `video_path`: original relative path of the video within the subset
- `prompt`: text prompt associated with the video
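As a quick sanity check, the metadata file can be parsed line by line with only the standard library. This is a minimal sketch; the paths and prompts below are illustrative placeholders, not real samples from the dataset:

```python
import json
from pathlib import Path

# Build a tiny prompts.jsonl in the documented format (illustrative entries).
sample_lines = [
    {"video_path": "video/seg/000000.mp4", "prompt": "Identify a dining table in this image..."},
    {"video_path": "video/seg/000001.mp4", "prompt": "Segment the chair in this scene..."},
]
path = Path("prompts.jsonl")
path.write_text("\n".join(json.dumps(r) for r in sample_lines) + "\n")

# Parse: one JSON object per line, in shard order.
records = [json.loads(line) for line in path.read_text().splitlines()]
print(records[0]["video_path"])  # video/seg/000000.mp4
```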
### Tar shard contents

Each shard is a standard `.tar` archive containing paired files:

```
000000.mp4   # video file
000000.json  # {"video_path": "...", "prompt": "..."}
000001.mp4
000001.json
...
```

The 6-digit key is the global sample index within the subset, matching the (0-indexed) line number in `prompts.jsonl`.
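The key-pairing convention above can be exercised with the standard `tarfile` module. The following sketch writes a one-sample shard with placeholder bytes (the video content and prompt are illustrative) and then groups members back into `(video, metadata)` pairs by their 6-digit key:

```python
import io
import json
import tarfile

def add_member(tar, name, data):
    """Add an in-memory file to an open tar archive."""
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Write a minimal shard with one mp4/json pair (placeholder contents).
with tarfile.open("shard-000000.tar", "w") as tar:
    add_member(tar, "000000.mp4", b"\x00fake-video-bytes")
    add_member(tar, "000000.json", json.dumps(
        {"video_path": "video/seg/000000.mp4", "prompt": "Identify a dining table..."}
    ).encode())

# Read it back: group members by their 6-digit key to recover paired files.
pairs = {}
with tarfile.open("shard-000000.tar") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        pairs.setdefault(key, {})[ext] = tar.extractfile(member).read()

meta = json.loads(pairs["000000"]["json"])
print(meta["video_path"])  # video/seg/000000.mp4
```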
## Download

### From ModelScope

```bash
pip install modelscope

# Download the entire dataset
modelscope download --dataset ZaneQiu/XVReason --local_dir videoreason-training

# Download a specific subset
modelscope download --dataset ZaneQiu/XVReason --local_dir videoreason-training --include "perception/*"
```
### From HuggingFace

```bash
pip install huggingface_hub
huggingface-cli download Zane-QIU/videoreason-training --repo-type dataset --local-dir videoreason-training
```
## Extracting Tar Shards

**Option 1: Extract all shards to a directory**

```bash
# Extract all shards for a subset
mkdir -p extracted/perception
for f in videoreason-training/perception/shards/shard-*.tar; do
    tar xf "$f" -C extracted/perception
done
```
**Option 2: Use WebDataset for streaming (recommended for training)**

```python
import json

import webdataset as wds

dataset = wds.WebDataset("videoreason-training/perception/shards/shard-{000000..000028}.tar")
for sample in dataset:
    video_bytes = sample[".mp4"]
    metadata = json.loads(sample[".json"])
    print(metadata["prompt"])
```
**Option 3: Extract a single shard**

```bash
tar xf perception/shards/shard-000000.tar -C output_dir/
```
## License
Please refer to the original dataset licenses for each subset:
- DROID: DROID Dataset
- RoboTwin: RoboTwin