---
license: mit
task_categories:
- visual-question-answering
- video-classification
tags:
- spatial-reasoning
- vision-language
- video-generation
size_categories:
- 10K<n<100K
---
# VR-Bench Dataset
VR-Bench is a benchmark dataset for evaluating spatial reasoning capabilities of Vision-Language Models (VLMs) and Video Generation Models.
## Dataset Structure
The dataset is split into two subsets:
```
dataset_VR_split/
├── train/ # Training set (96 cases)
│ ├── maze/
│ ├── maze3d/
│ ├── pathfinder/
│ ├── sokoban/
│ └── trapfield/
└── eval/ # Evaluation set (24 cases)
├── maze/
├── maze3d/
├── pathfinder/
├── sokoban/
└── trapfield/
```
Each game directory contains:
- `images/`: Initial state images (PNG)
- `states/`: Game state metadata (JSON)
- `videos/`: Solution trajectory videos (MP4)
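If you work with the raw files directly rather than through `load_dataset`, each case's image, state, and video can be located by joining the split, game, and case identifier. The helper below is a minimal sketch; the per-case file names (`case_0001.png` etc.) are an assumption and should be checked against the actual directory contents.

```python
from pathlib import Path

def case_paths(root: str, split: str, game: str, case_id: str) -> dict:
    """Build the expected image/state/video paths for one case.

    NOTE: the per-case file naming scheme ("<case_id>.png" etc.) is an
    assumption; verify it against the actual directory listing.
    """
    base = Path(root) / split / game
    return {
        "image": base / "images" / f"{case_id}.png",
        "state": base / "states" / f"{case_id}.json",
        "video": base / "videos" / f"{case_id}.mp4",
    }

# Example: the maze game in the training split
paths = case_paths("dataset_VR_split", "train", "maze", "case_0001")
```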
## Games
- **Maze**: 2D grid-based navigation with walls
- **TrapField**: 2D grid-based navigation with traps
- **Sokoban**: Box-pushing puzzle game
- **PathFinder**: Irregular maze with curved paths
- **Maze3D**: 3D maze with vertical navigation
## Usage
### For Video Model Evaluation
```python
from datasets import load_dataset

# Replace "your-username/VR-Bench" with the actual hub repository id
dataset = load_dataset("your-username/VR-Bench")
train_data = dataset["train"]
eval_data = dataset["eval"]
```
Each video file shows the optimal solution trajectory for the corresponding game state.
## Baseline Model Outputs
We have uploaded the output videos from all baseline models evaluated in our paper. These outputs are available in the `output_video.tar.gz` file and can be used with the VR-Bench evaluation infrastructure for reproduction and testing.
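Once downloaded, the archive can be unpacked with the standard library. The sketch below assumes the archive expands to a top-level `output_video/` directory, as shown in the layout that follows.

```python
import tarfile
from pathlib import Path

def extract_baselines(archive: str = "output_video.tar.gz", dest: str = ".") -> list[str]:
    """Extract the baseline-output archive and return the model directory names.

    Assumes the archive unpacks to a top-level "output_video/" directory
    with one subdirectory per baseline model.
    """
    with tarfile.open(archive, "r:gz") as tar:
        tar.extractall(dest)
    out = Path(dest) / "output_video"
    return sorted(p.name for p in out.iterdir() if p.is_dir())
```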
### Available Baseline Models
The baseline outputs are organized by model name:
```
output_video/
├── doubao-seedance-1-0-pro-250528/
├── kling-v1/
├── MiniMax-Hailuo-2_3/
├── sora-2/
├── veo3_1/
├── veo3_1-pro/
├── wan2.2/
└── wan2.5/
```
## Citation
If you use this dataset, please cite:
```bibtex
@article{yang2025vrbench,
  title={Reasoning via Video: The First Evaluation of Video Models' Reasoning Abilities through Maze-Solving Tasks},
  author={Cheng Yang and Haiyuan Wan and Yiran Peng and Xin Cheng and Zhaoyang Yu and Jiayi Zhang and Junchi Yu and Xinlei Yu and Xiawu Zheng and Dongzhan Zhou and Chenglin Wu},
  journal={arXiv preprint arXiv:2511.15065},
  year={2025}
}
```
## License
MIT License