---
dataset_info:
  features:
    - name: id
      dtype: string
    - name: domain
      dtype: string
    - name: task_type
      dtype: string
    - name: prompt
      dtype: string
    - name: image
      dtype: image
    - name: reference_frames
      sequence: image
    - name: reference_text
      sequence: string
    - name: protocol
      sequence: string
configs:
  - config_name: default
    data_files:
      - split: train
        path: dataset.parquet
license: mit
language:
  - en
---

# Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning

## 👀 About VIPER

- **Overview**: A process-aware evaluation benchmark for generative video reasoning tasks.
- **Statistics**: 309 carefully curated samples spanning 6 distinct domains (temporal, structural, symbolic, spatial, physics, and planning reasoning).
- **New metric**: Process-outcome Consistency (POC@r). POC@r evaluates video correctness at both the process and the outcome level, using multiple frames uniformly sampled from the whole video at rate r, instead of the last frame only.
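The exact POC@r scoring procedure is defined in the paper, not in this card. As a rough, illustrative sketch (names and per-frame judgments here are hypothetical inputs, not part of the dataset API), the idea of checking uniformly sampled frames against the process protocol, plus the final outcome, can be written as:

```python
def poc_at_r(frame_ok, outcome_ok, r):
    """Illustrative sketch of Process-outcome Consistency (POC@r).

    frame_ok:   list of booleans, one per video frame, saying whether that
                frame satisfies the process-level protocol (hypothetical input,
                e.g. produced by a judge model).
    outcome_ok: whether the final frame matches the reference outcome.
    r:          sampling rate in (0, 1]; the fraction of frames checked.
    """
    n = len(frame_ok)
    k = max(1, round(n * r))  # number of frames to sample
    # Uniformly spaced indices over the whole video, not just the last frame.
    idx = [min(n - 1, round(i * (n - 1) / max(1, k - 1))) for i in range(k)]
    process_ok = all(frame_ok[i] for i in idx)
    # The video counts as correct only if both process and outcome pass.
    return process_ok and outcome_ok
```

This is only a sketch of the sampling-then-conjunction idea; consult the paper for the actual metric definition.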


## Dataset Statistics

### Domain Distribution

| Domain | Total Samples | Task Types |
|---|---|---|
| Physics | 32 | experiment, game |
| Planning | 44 | navigation, obj_manipulation |
| Spatial | 60 | block_rotate, dice, image_restore |
| Structural | 70 | chess, maze, sudoku, ttt |
| Symbolic | 60 | knowledge, math, multimodal |
| Temporal | 43 | obj_move, zoom |

## 📦 Dataset Usage

### Download

```python
from datasets import load_dataset

# Load the full dataset
dataset = load_dataset("Monosail/VIPER")
```

### Data Fields

- `id`: Unique identifier for the sample
- `domain`: The reasoning domain (Physics, Planning, Spatial, Structural, Symbolic, Temporal)
- `task_type`: Specific task category within the domain
- `prompt`: Text prompt describing the task
- `image`: The input image
- `reference_frames`: Ground-truth image frames
- `reference_text`: Ground-truth text descriptions
- `protocol`: Process-level task constraints
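To illustrate how these fields can be used, the snippet below works over a few toy records that mimic the schema (the `id` values and counts are made up, not actual dataset content); on the real data the same selection would be `dataset["train"].filter(...)`:

```python
from collections import Counter

# Toy records mimicking the VIPER schema (values are illustrative only).
records = [
    {"id": "phys_001", "domain": "Physics", "task_type": "experiment"},
    {"id": "phys_002", "domain": "Physics", "task_type": "game"},
    {"id": "plan_001", "domain": "Planning", "task_type": "navigation"},
]

# Per-domain counts, as in the domain-distribution table above.
by_domain = Counter(r["domain"] for r in records)
print(by_domain["Physics"])  # 2

# Select a single domain; on the real dataset:
# physics = dataset["train"].filter(lambda x: x["domain"] == "Physics")
physics = [r for r in records if r["domain"] == "Physics"]
```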

πŸ“ Citation

If you find our benchmark useful, please consider citing us:

```bibtex
@article{li2026viper,
  title={Beyond the Last Frame: Process-aware Evaluation for Generative Video Reasoning},
  author={Li, Yifan and Gu, Yukai and Min, Yingqian and Liu, Zikang and Du, Yifan and Zhou, Kun and Yang, Min and Zhao, Wayne Xin and Qiu, Minghui},
  journal={arXiv preprint arXiv:2512.24952},
  year={2025}
}
```