---
base_model:
- Wan-AI/Wan2.2-I2V-A14B-Diffusers
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-video
---
# VBVR: A Very Big Video Reasoning Suite

## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models, however, has focused primarily on visual quality, and the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning (training) data.
To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks and over one million video clips—approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities.
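To make the contrast with model-based judging concrete, the hypothetical sketch below illustrates what a rule-based, verifiable scorer can look like in spirit; the task, detector output, and exact-match rule are illustrative assumptions, not the actual VBVR-Bench scorers.

```python
# Purely illustrative, hypothetical sketch of a rule-based scorer (NOT the
# VBVR-Bench implementation): the score is a deterministic rule applied to a
# property extracted from the generated video, so re-running it on the same
# video always yields the same number, unlike a model-based judge.
from dataclasses import dataclass

@dataclass
class CountingTask:
    """Hypothetical task: the final frame should contain exactly N objects."""
    expected_count: int

def score(task: CountingTask, detected_count: int) -> float:
    """Exact-match rule: 1.0 if the detected object count equals the target."""
    return 1.0 if detected_count == task.expected_count else 0.0

# Example: a perception module (not shown) detected 3 objects in the final frame.
print(score(CountingTask(expected_count=3), detected_count=3))  # -> 1.0
```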
Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.
The model was presented in the paper *A Very Big Video Reasoning Suite*. The table below reports VBVR-Bench scores, where ID and OOD denote the in-distribution and out-of-distribution (held-out) task splits, each broken down by reasoning category.
| Model | Overall | ID | ID-Abst. | ID-Know. | ID-Perc. | ID-Spat. | ID-Trans. | OOD | OOD-Abst. | OOD-Know. | OOD-Perc. | OOD-Spat. | OOD-Trans. |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Human | 0.974 | 0.960 | 0.919 | 0.956 | 1.00 | 0.95 | 1.00 | 0.988 | 1.00 | 1.00 | 0.990 | 1.00 | 0.970 |
| **Open-source Models** | | | | | | | | | | | | | |
| CogVideoX1.5-5B-I2V | 0.273 | 0.283 | 0.241 | 0.328 | 0.257 | 0.328 | 0.305 | 0.262 | 0.281 | 0.235 | 0.250 | 0.254 | 0.282 |
| HunyuanVideo-I2V | 0.273 | 0.280 | 0.207 | 0.357 | 0.293 | 0.280 | 0.316 | 0.265 | 0.175 | 0.369 | 0.290 | 0.253 | 0.250 |
| Wan2.2-I2V-A14B | 0.371 | 0.412 | 0.430 | 0.382 | 0.415 | 0.404 | 0.419 | 0.329 | 0.405 | 0.308 | 0.343 | 0.236 | 0.307 |
| LTX-2 | 0.313 | 0.329 | 0.316 | 0.362 | 0.326 | 0.340 | 0.306 | 0.297 | 0.244 | 0.337 | 0.317 | 0.231 | 0.311 |
| **Proprietary Models** | | | | | | | | | | | | | |
| Runway Gen-4 Turbo | 0.403 | 0.392 | 0.396 | 0.409 | 0.429 | 0.341 | 0.363 | 0.414 | 0.515 | 0.429 | 0.419 | 0.327 | 0.373 |
| Sora 2 | 0.546 | 0.569 | 0.602 | 0.477 | 0.581 | 0.572 | 0.597 | 0.523 | 0.546 | 0.472 | 0.525 | 0.462 | 0.546 |
| Kling 2.6 | 0.369 | 0.408 | 0.465 | 0.323 | 0.375 | 0.347 | 0.519 | 0.330 | 0.528 | 0.135 | 0.272 | 0.356 | 0.359 |
| Veo 3.1 | 0.480 | 0.531 | 0.611 | 0.503 | 0.520 | 0.444 | 0.510 | 0.429 | 0.577 | 0.277 | 0.420 | 0.441 | 0.404 |
| **Data Scaling Strong Baseline** | | | | | | | | | | | | | |
| VBVR-Wan2.2 | 0.685 | 0.760 | 0.724 | 0.750 | 0.782 | 0.745 | 0.833 | 0.610 | 0.768 | 0.572 | 0.547 | 0.618 | 0.615 |
## Release Information
VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications, as the goal of VBVR-Wan2.2 is to investigate data scaling behavior and to provide a strong baseline model for the video reasoning research community. Leveraging the VBVR-Dataset, one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieves the highest score on VBVR-Bench.
In this release, we present VBVR-Wan2.2, VBVR-Dataset, VBVR-Bench-Data and VBVR-Bench-Leaderboard.
## 🛠️ QuickStart

### Installation
We recommend using uv to manage the environment (installation guide: https://docs.astral.sh/uv/getting-started/installation/#installing-uv).

```bash
pip install "torch>=2.4.0" "torchvision>=0.19.0" transformers Pillow "huggingface_hub[cli]"
uv pip install git+https://github.com/huggingface/diffusers
```
### Example Code
```bash
huggingface-cli download Video-Reason/VBVR-Wan2.2 --local-dir ./VBVR-Wan2.2
python example.py \
    --model_path ./VBVR-Wan2.2
```
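`example.py` is the repository's entry point; for orientation, here is a minimal, hedged sketch of what inference can look like through Diffusers, assuming the checkpoint keeps the Diffusers layout of its Wan2.2-I2V-A14B base. The conditioning image, prompt, and generation settings are placeholders, not values prescribed by the paper.

```python
# Hedged inference sketch: assumes VBVR-Wan2.2 loads through WanImageToVideoPipeline,
# like its Wan2.2-I2V-A14B-Diffusers base. Image path, prompt, and settings are placeholders.
import torch
from diffusers import WanImageToVideoPipeline
from diffusers.utils import export_to_video, load_image

pipe = WanImageToVideoPipeline.from_pretrained(
    "./VBVR-Wan2.2",               # local path from the huggingface-cli download step
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")                    # or pipe.enable_model_cpu_offload() on smaller GPUs

image = load_image("first_frame.png")  # conditioning frame for image-to-video
prompt = "The red ball rolls off the table and falls onto the floor."  # example reasoning prompt

output = pipe(
    image=image,
    prompt=prompt,
    height=480,
    width=832,                     # adjust resolution to your GPU budget
    num_frames=81,
    guidance_scale=3.5,
    num_inference_steps=40,
)
export_to_video(output.frames[0], "vbvr_sample.mp4", fps=16)
```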
## 🖊️ Citation
```bibtex
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"a}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua bureau and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"e}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
             Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}
```