---
license: apache-2.0
language:
- en
tags:
- video-generation
pretty_name: VBVR-Bench-Data
size_categories:
- n<1K
configs:
- config_name: VBVR-Bench-Data
  data_files:
  - split: test
    path: VBVR-Bench.json
---

# VBVR: A Very Big Video Reasoning Suite

<a href="https://video-reason.com" target="_blank">
    <img alt="Homepage" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
</a>
<a href="https://github.com/orgs/Video-Reason/repositories" target="_blank">
    <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://arxiv.org/abs/2602.20159" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
</a>
<a href="https://huggingface.co/Video-Reason/VBVR-Wan2.2" target="_blank">
    <img alt="Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Wan2.2-Model-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
    <img alt="Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
    <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
</a>

## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, 
enabling intuitive reasoning over motion, interaction, and causality. However, rapid progress in video models has focused primarily on visual quality, 
and the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning (training) data. 
To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks 
and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, 
a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, 
enabling reproducible and interpretable diagnosis of video reasoning capabilities. 
Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization 
to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**


## Release Information
We are pleased to release the official **VBVR-Bench** test dataset, designed for standardized and rigorous evaluation of video-based visual reasoning models. 
The test split is intended to be used with the evaluation toolkit provided by Video-Reason at [VBVR-Bench evaluation code](https://github.com/Video-Reason/VBVR-Bench).

After running evaluation, you can compare your model’s performance on the public leaderboard at [VBVR-Bench Leaderboard](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).

In this release, we present 
[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2), 
[**VBVR-Dataset**](https://huggingface.co/datasets/Video-Reason/VBVR-Dataset),
[**VBVR-Bench-Data**](https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data), and 
[**VBVR-Bench-Leaderboard**](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).

## Data Structure
The dataset is organized by domain and task generator. For example:

```bash
In-Domain_50/
  G-31_directed_graph_navigation_data-generator/
    00000/
      first_frame.png
      final_frame.png
      ground_truth.mp4
      prompt.txt
```
### Structure Description

- `In-Domain_50` / `Out-of-Domain_50`: Evaluation splits indicating whether samples belong to the in-domain or out-of-domain setting.
- `G-XXX_task-name_data-generator`: A specific reasoning task category and its corresponding data generator.
- `00000`–`00004`: Individual sample instances.

Each sample directory contains:

- `first_frame.png`: the initial frame of the video
- `final_frame.png`: the final frame of the video
- `ground_truth.mp4`: the full video sequence
- `prompt.txt`: the textual reasoning question or instruction

## 🖊️ Citation

```bib
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Maijunxian Wang and Ruisi Wang and Juyi Lin and Ran Ji and Thaddäus Wiedemer and Qingying Gao and Dezhi Luo and Yaoyao Qian and Lianyu Huang and Zelong Hong and Jiahui Ge and Qianli Ma and Hang He and Yifan Zhou and Lingzi Guo and Lantao Mei and Jiachen Li and Hanwen Xing and Tianqi Zhao and Fengyuan Yu and Weihang Xiao and Yizheng Jiao and Jianheng Hou and Danyang Zhang and Pengcheng Xu and Boyang Zhong and Zehong Zhao and Gaoyun Fang and John Kitaoka and Yile Xu and Hua Xu and Kenton Blacutt and Tin Nguyen and Siyuan Song and Haoran Sun and Shaoyue Wen and Linyang He and Runming Wang and Yanzhi Wang and Mengyue Yang and Ziqiao Ma and Raphaël Millière and Freda Shi and Nuno Vasconcelos and Daniel Khashabi and Alan Yuille and Yilun Du and Ziming Liu and Bo Li and Dahua Lin and Ziwei Liu and Vikash Kumar and Yijiang Li and Lei Yang and Zhongang Cai and Hokin Deng},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026}
}
```