---
base_model:
- Wan-AI/Wan2.2-I2V-A14B-Diffusers
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-video
---

# VBVR: A Very Big Video Reasoning Suite

<a href="https://video-reason.com" target="_blank">
    <img alt="Project Page" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
</a>
<a href="https://github.com/Video-Reason/VBVR-EvalKit" target="_blank">
    <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/papers/2602.20159" target="_blank">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
    <img alt="Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data" target="_blank">
    <img alt="Bench Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
    <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
</a>


## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, 
enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models, however, has focused primarily on visual quality,
and the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning (training) data.

To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks 
and over one million video clips—approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, 
a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, 
enabling reproducible and interpretable diagnosis of video reasoning capabilities. 

Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization 
to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**

The model was presented in the paper [A Very Big Video Reasoning Suite](https://huggingface.co/papers/2602.20159).

<table>
    <tr>
      <th>Model</th>
      <th>Overall</th>
      <th>ID</th>
      <th>ID-Abst.</th>
      <th>ID-Know.</th>
      <th>ID-Perc.</th>
      <th>ID-Spat.</th>
      <th>ID-Trans.</th>
      <th>OOD</th>
      <th>OOD-Abst.</th>
      <th>OOD-Know.</th>
      <th>OOD-Perc.</th>
      <th>OOD-Spat.</th>
      <th>OOD-Trans.</th>
    </tr>
  <tbody>
    <tr>
      <td><strong>Human</strong></td>
      <td>0.974</td><td>0.960</td><td>0.919</td><td>0.956</td><td>1.00</td><td>0.95</td><td>1.00</td>
      <td>0.988</td><td>1.00</td><td>1.00</td><td>0.990</td><td>1.00</td><td>0.970</td>
    </tr>
    <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
      <td colspan="14"><em>Open-source Models</em></td>
    </tr>
    <tr>
      <td>CogVideoX1.5-5B-I2V</td>
      <td>0.273</td><td>0.283</td><td>0.241</td><td>0.328</td><td>0.257</td><td>0.328</td><td>0.305</td>
      <td>0.262</td><td><u>0.281</u></td><td>0.235</td><td>0.250</td><td><strong>0.254</strong></td><td>0.282</td>
    </tr>
    <tr>
      <td>HunyuanVideo-I2V</td>
      <td>0.273</td><td>0.280</td><td>0.207</td><td>0.357</td><td>0.293</td><td>0.280</td><td><u>0.316</u></td>
      <td>0.265</td><td>0.175</td><td><strong>0.369</strong></td><td>0.290</td><td><u>0.253</u></td><td>0.250</td>
    </tr>
    <tr>
      <td><strong>Wan2.2-I2V-A14B</strong></td>
      <td><strong>0.371</strong></td><td><strong>0.412</strong></td><td><strong>0.430</strong></td>
      <td><strong>0.382</strong></td><td><strong>0.415</strong></td><td><strong>0.404</strong></td>
      <td><strong>0.419</strong></td><td><strong>0.329</strong></td>
      <td><strong>0.405</strong></td><td>0.308</td><td><strong>0.343</strong></td>
      <td>0.236</td><td><u>0.307</u></td>
    </tr>
    <tr>
      <td><u>LTX-2</u></td>
      <td><u>0.313</u></td><td><u>0.329</u></td><td><u>0.316</u></td>
      <td><u>0.362</u></td><td><u>0.326</u></td><td><u>0.340</u></td>
      <td>0.306</td><td><u>0.297</u></td>
      <td>0.244</td><td><u>0.337</u></td><td><u>0.317</u></td>
      <td>0.231</td><td><strong>0.311</strong></td>
    </tr>
    <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
      <td colspan="14"><em>Proprietary Models</em></td>
    </tr>
    <tr>
      <td>Runway Gen-4 Turbo</td>
      <td>0.403</td><td>0.392</td><td>0.396</td><td>0.409</td><td>0.429</td><td>0.341</td><td>0.363</td>
      <td>0.414</td><td>0.515</td><td><u>0.429</u></td><td>0.419</td><td>0.327</td><td>0.373</td>
    </tr>
    <tr>
      <td><strong>Sora 2</strong></td>
      <td><strong>0.546</strong></td><td><strong>0.569</strong></td><td><u>0.602</u></td>
      <td><u>0.477</u></td><td><strong>0.581</strong></td><td><strong>0.572</strong></td>
      <td><strong>0.597</strong></td><td><strong>0.523</strong></td>
      <td><u>0.546</u></td><td><strong>0.472</strong></td><td><strong>0.525</strong></td>
      <td><strong>0.462</strong></td><td><strong>0.546</strong></td>
    </tr>
    <tr>
      <td>Kling 2.6</td>
      <td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td><u>0.519</u></td>
      <td>0.330</td><td>0.528</td><td>0.135</td><td>0.272</td><td>0.356</td><td>0.359</td>
    </tr>
    <tr>
      <td><u>Veo 3.1</u></td>
      <td><u>0.480</u></td><td><u>0.531</u></td><td><strong>0.611</strong></td>
      <td><strong>0.503</strong></td><td><u>0.520</u></td><td><u>0.444</u></td>
      <td>0.510</td><td><u>0.429</u></td>
      <td><strong>0.577</strong></td><td>0.277</td><td><u>0.420</u></td>
      <td><u>0.441</u></td><td><u>0.404</u></td>
    </tr>
    <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
      <td colspan="14"><em>Data Scaling Strong Baseline</em></td>
    </tr>
    <tr>
      <td><strong>VBVR-Wan2.2</strong></td>
      <td><strong>0.685</strong></td><td><strong>0.760</strong></td><td><strong>0.724</strong></td>
      <td><strong>0.750</strong></td><td><strong>0.782</strong></td><td><strong>0.745</strong></td>
      <td><strong>0.833</strong></td><td><strong>0.610</strong></td>
      <td><strong>0.768</strong></td><td><strong>0.572</strong></td><td><strong>0.547</strong></td>
      <td><strong>0.618</strong></td><td><strong>0.615</strong></td>
    </tr>
  </tbody>
</table>

## Release Information
VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications, as the goal of VBVR-Wan2.2 is to *investigate data scaling behavior* and provide a *strong baseline model* for the video reasoning research community. Leveraging the VBVR-Dataset, one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieves the highest score on VBVR-Bench.

In this release, we present 
[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2), 
[**VBVR-Dataset**](https://huggingface.co/datasets/Video-Reason/VBVR-Dataset),
[**VBVR-Bench-Data**](https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data) and 
[**VBVR-Bench-Leaderboard**](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).


## 🛠️ QuickStart

### Installation

We recommend using [uv](https://docs.astral.sh/uv/) to manage the environment.

> uv installation guide: <https://docs.astral.sh/uv/getting-started/installation/#installing-uv>
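
For instance, a fresh environment can be created and activated with uv before installing the dependencies. The environment path and Python version below are illustrative choices, not requirements stated by the project:

```bash
# Create and activate a dedicated virtual environment (illustrative settings)
uv venv .venv --python 3.10
source .venv/bin/activate
```

Then install the dependencies: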

```bash
uv pip install "torch>=2.4.0" "torchvision>=0.19.0" transformers Pillow "huggingface_hub[cli]"
uv pip install git+https://github.com/huggingface/diffusers
```

### Example Code

```bash
# Download the VBVR-Wan2.2 checkpoint from the Hugging Face Hub
huggingface-cli download Video-Reason/VBVR-Wan2.2 --local-dir ./VBVR-Wan2.2

# Run the example inference script against the downloaded checkpoint
python example.py \
  --model_path ./VBVR-Wan2.2
```

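If you prefer to call the model directly from Python instead of the example script, the sketch below shows one way to do so with diffusers. It is a minimal, untested example that assumes VBVR-Wan2.2 keeps the Wan2.2-I2V-A14B-Diffusers layout and therefore loads with `WanImageToVideoPipeline`; the conditioning image, prompt, and sampling settings are illustrative placeholders, not values from the paper.

```python
# Minimal, untested sketch (assumption): VBVR-Wan2.2 keeps the
# Wan2.2-I2V-A14B-Diffusers layout and loads with WanImageToVideoPipeline.
import numpy as np
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

# Load the locally downloaded checkpoint (path from the download step above).
pipe = WanImageToVideoPipeline.from_pretrained(
    "./VBVR-Wan2.2",
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Placeholder conditioning image and prompt; replace with your own task input.
image = load_image("first_frame.png")
prompt = "The red block is pushed off the table and falls to the floor."

# Resize the image so height/width are multiples of the pipeline's spatial
# compression factor, following the standard Wan2.2 I2V preprocessing.
max_area = 480 * 832
aspect_ratio = image.height / image.width
mod_value = pipe.vae_scale_factor_spatial * pipe.transformer.config.patch_size[1]
height = round(np.sqrt(max_area * aspect_ratio)) // mod_value * mod_value
width = round(np.sqrt(max_area / aspect_ratio)) // mod_value * mod_value
image = image.resize((width, height))

# Generate a short clip conditioned on the image and prompt.
frames = pipe(
    image=image,
    prompt=prompt,
    height=height,
    width=width,
    num_frames=81,
    num_inference_steps=40,
    guidance_scale=3.5,
).frames[0]

export_to_video(frames, "vbvr_output.mp4", fps=16)
```

On a GPU with limited memory, `pipe.enable_model_cpu_offload()` can be used in place of `pipe.to("cuda")` to trade speed for a smaller memory footprint.
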
## 🖊️ Citation

```bibtex
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"a}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua bureau and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"e}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
             Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}
```