---
dataset_info:
  features:
  - name: id
    dtype: string
  - name: domain
    dtype: string
  - name: task_type
    dtype: string
  - name: prompt
    dtype: string
  - name: image
    dtype: image
  - name: reference_frames
    sequence: image
  - name: reference_text
    sequence: string
  - name: protocol
    sequence: string
configs:
- config_name: default
  data_files:
  - split: train
    path: dataset.parquet
license: mit
language:
- en
---
# Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning

## About VIPER
- Overview: Process-aware evaluation for Generative Video Reasoning tasks.
- Statistics: 309 carefully curated samples spanning 6 distinct domains (temporal, structural, symbolic, spatial, physics, and planning reasoning).
- New Metric: Process-outcome Consistency (POC@r). POC@r evaluates video correctness at both the process and outcome level, using multiple frames uniformly sampled from the whole video at rate r instead of the last frame only.
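As an illustration, the uniform sampling behind POC@r might be sketched as follows. This is a hypothetical helper, not the benchmark's implementation; the exact sampling scheme (and the `sample_frame_indices` name) are assumptions for illustration:

```python
def sample_frame_indices(num_frames: int, rate: float) -> list[int]:
    """Return evenly spaced frame indices covering the whole video.

    Hypothetical sketch of POC@r's sampling idea: roughly
    num_frames * rate frames are drawn uniformly across the video, so
    the metric sees the process rather than only the final outcome.
    When the rate keeps just one frame, this degenerates to the usual
    last-frame-only evaluation.
    """
    k = max(1, round(num_frames * rate))
    if k == 1:
        return [num_frames - 1]          # last-frame-only baseline
    if k >= num_frames:
        return list(range(num_frames))   # keep every frame
    step = (num_frames - 1) / (k - 1)    # even spacing, endpoints included
    return [round(i * step) for i in range(k)]
```

Sampled frames would then each be scored against the process-level constraints in `protocol`, with the final frame scored against the outcome.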
## Dataset Statistics

### Domain Distribution
| Domain | Total Samples | Task Types |
|---|---|---|
| Physics | 32 | experiment, game |
| Planning | 44 | navigation, obj_manipulation |
| Spatial | 60 | block_rotate, dice, image_restore |
| Structural | 70 | chess, maze, sudoku, ttt |
| Symbolic | 60 | knowledge, math, multimodal |
| Temporal | 43 | obj_move, zoom |
## Dataset Usage

### Download
```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Monosail/VIPER")
```
### Data Fields

- `id`: Unique identifier for the sample
- `domain`: The reasoning domain (Physics, Planning, Spatial, Structural, Symbolic, Temporal)
- `task_type`: Specific task category within the domain
- `prompt`: Text prompt describing the task
- `image`: The input image
- `reference_frames`: Ground-truth image frames
- `reference_text`: Ground-truth text descriptions
- `protocol`: Process-level task constraints
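For a quick sanity check after download, the schema above can be expressed as a small validator. This is a sketch, not part of the released tooling: the field names follow this card, image-typed fields are treated as opaque objects, and `validate_record` is a hypothetical name:

```python
# Expected fields of one VIPER record, per this card's Data Fields section.
# Image-typed fields are checked only for presence (object matches anything).
REQUIRED_FIELDS = {
    "id": str,
    "domain": str,
    "task_type": str,
    "prompt": str,
    "image": object,
    "reference_frames": list,
    "reference_text": list,
    "protocol": list,
}

def validate_record(record: dict) -> bool:
    """Return True if the record carries every field with a plausible type."""
    return all(
        name in record and isinstance(record[name], expected_type)
        for name, expected_type in REQUIRED_FIELDS.items()
    )
```

A record loaded via `load_dataset` could then be passed through `validate_record` before evaluation to catch missing or misnamed fields early.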
## Citation
If you find our benchmark useful, please consider citing us:
```bibtex
@article{li2026viper,
  title={Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning},
  author={Li, Yifan and Gu, Yukai and Min, Yingqian and Liu, Zikang and Du, Yifan and Zhou, Kun and Yang, Min and Zhao, Wayne Xin and Qiu, Minghui},
  journal={arXiv preprint arXiv:2512.24952},
  year={2025}
}
```