---
license: mit
language:
- en
size_categories:
- n<1K
---

# VIPER Benchmark

## Dataset Description

VIPER (VIdeo Process Evaluation for Reasoning tasks) is a comprehensive benchmark for Generative Video Reasoning (GVR) tasks.

### Dataset Summary

This dataset contains **309 samples** spanning **6 distinct reasoning domains**:

- **Physics**: Understanding of physical laws and dynamics (32 samples)
- **Planning**: Spatial navigation and object manipulation (44 samples)
- **Spatial**: 3D spatial reasoning and transformations (60 samples)
- **Structural**: Pattern recognition and constraint satisfaction (70 samples)
- **Symbolic**: Knowledge reasoning and mathematical problem-solving (60 samples)
- **Temporal**: Temporal dynamics and object tracking (43 samples)

## Dataset Structure

### Data Instances

Each instance contains:

```json
{
  "domain": "Spatial",
  "id": "block_rotate_0",
  "image": "Spatial/images/block_rotate_0.jpg",
  "reference": {
    "frames": ["Spatial/references/block_rotate_0_0.jpg"],
    "text": []
  },
  "prompt": "A structure composed of 9 cubes viewed from...",
  "protocol": ["Maintain the block structure..."],
  "task_type": "block_rotate"
}
```

### Data Fields

- `domain`: The reasoning domain (Physics, Planning, Spatial, Structural, Symbolic, Temporal)
- `id`: Unique identifier for the sample
- `image`: Path to the input image
- `reference`: Reference materials for evaluation
  - `frames`: Reference image frames
  - `text`: Reference text descriptions
- `prompt`: Text prompt describing the task or desired output
- `protocol`: Evaluation criteria or constraints
- `task_type`: Specific task category within the domain

## Dataset Statistics

### Domain Distribution

| Domain | Total Samples | Task Types |
|--------|---------------|------------|
| Physics | 32 | experiment, game |
| Planning | 44 | navigation, obj_manipulation |
| Spatial | 60 | block_rotate, dice, image_restore |
| Structural | 70 | chess, maze, sudoku, ttt |
| Symbolic | 60 | knowledge, math, multimodal |
| Temporal | 43 | obj_move, zoom |

### Task Distribution by Domain

**Physics** (32 samples):
- experiment: 18 samples
- game: 14 samples

**Planning** (44 samples):
- navigation: 25 samples
- obj_manipulation: 19 samples

**Spatial** (60 samples):
- block_rotate: 25 samples
- dice: 20 samples
- image_restore: 15 samples

**Structural** (70 samples):
- chess: 20 samples
- maze: 20 samples
- sudoku: 20 samples
- ttt: 10 samples

**Symbolic** (60 samples):
- knowledge: 20 samples
- math: 20 samples
- multimodal: 20 samples

**Temporal** (43 samples):
- obj_move: 25 samples
- zoom: 18 samples
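
The task-level counts above agree with the per-domain totals; a quick sanity check in plain Python (counts copied from the tables above):

```python
# Per-task sample counts copied from the tables above.
task_counts = {
    "Physics": {"experiment": 18, "game": 14},
    "Planning": {"navigation": 25, "obj_manipulation": 19},
    "Spatial": {"block_rotate": 25, "dice": 20, "image_restore": 15},
    "Structural": {"chess": 20, "maze": 20, "sudoku": 20, "ttt": 10},
    "Symbolic": {"knowledge": 20, "math": 20, "multimodal": 20},
    "Temporal": {"obj_move": 25, "zoom": 18},
}

# Each domain total and the overall total match the figures reported above.
domain_totals = {domain: sum(tasks.values()) for domain, tasks in task_counts.items()}
print(domain_totals["Structural"])  # 70
print(sum(domain_totals.values()))  # 309
```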

## Usage

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Monosail/VIPER")
```
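
Once loaded, samples can be grouped or filtered by the `domain` and `task_type` fields. A minimal sketch of the pattern, using toy records in place of real samples (real samples carry the full set of fields listed above):

```python
from collections import Counter

# Toy records standing in for loaded samples.
samples = [
    {"domain": "Spatial", "task_type": "block_rotate"},
    {"domain": "Spatial", "task_type": "dice"},
    {"domain": "Physics", "task_type": "experiment"},
]

# Per-domain counts, mirroring the Domain Distribution table.
by_domain = Counter(s["domain"] for s in samples)

# Keep only one domain, e.g. all Spatial samples.
spatial = [s for s in samples if s["domain"] == "Spatial"]

print(by_domain)    # Counter({'Spatial': 2, 'Physics': 1})
print(len(spatial)) # 2
```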

## Citation Information

```bibtex
@article{li2026viper,
  title={Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning},
  author={Li, Yifan and Gu, Yukai and Min, Yingqian and Liu, Zikang and Du, Yifan and Zhou, Kun and Yang, Min and Zhao, Wayne Xin and Qiu, Minghui},
  journal={arXiv preprint arXiv:2512.24952},
  year={2025}
}
```