---
base_model:
- Wan-AI/Wan2.2-I2V-A14B-Diffusers
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-video
---

# VBVR: A Very Big Video Reasoning Suite

<a href="https://video-reason.com" target="_blank">
  <img alt="Project Page" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
</a>
<a href="https://github.com/Video-Reason/VBVR-EvalKit" target="_blank">
  <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/papers/2602.20159" target="_blank">
  <img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
  <img alt="Dataset" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data" target="_blank">
  <img alt="Bench Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
</a>

## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models, however, has focused primarily on visual quality, and the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning training data.

To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks and over one million video clips, roughly three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities.

Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**

The model was presented in the paper [A Very Big Video Reasoning Suite](https://huggingface.co/papers/2602.20159).

<table>
  <tr>
    <th>Model</th>
    <th>Overall</th>
    <th>ID</th>
    <th>ID-Abst.</th>
    <th>ID-Know.</th>
    <th>ID-Perc.</th>
    <th>ID-Spat.</th>
    <th>ID-Trans.</th>
    <th>OOD</th>
    <th>OOD-Abst.</th>
    <th>OOD-Know.</th>
    <th>OOD-Perc.</th>
    <th>OOD-Spat.</th>
    <th>OOD-Trans.</th>
  </tr>
  <tbody>
  <tr>
    <td><strong>Human</strong></td>
    <td>0.974</td><td>0.960</td><td>0.919</td><td>0.956</td><td>1.000</td><td>0.950</td><td>1.000</td>
    <td>0.988</td><td>1.000</td><td>1.000</td><td>0.990</td><td>1.000</td><td>0.970</td>
  </tr>
  <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
    <td colspan="14"><em>Open-source Models</em></td>
  </tr>
  <tr>
    <td>CogVideoX1.5-5B-I2V</td>
    <td>0.273</td><td>0.283</td><td>0.241</td><td>0.328</td><td>0.257</td><td>0.328</td><td>0.305</td>
    <td>0.262</td><td><u>0.281</u></td><td>0.235</td><td>0.250</td><td><strong>0.254</strong></td><td>0.282</td>
  </tr>
  <tr>
    <td>HunyuanVideo-I2V</td>
    <td>0.273</td><td>0.280</td><td>0.207</td><td>0.357</td><td>0.293</td><td>0.280</td><td><u>0.316</u></td>
    <td>0.265</td><td>0.175</td><td><strong>0.369</strong></td><td>0.290</td><td><u>0.253</u></td><td>0.250</td>
  </tr>
  <tr>
    <td><strong>Wan2.2-I2V-A14B</strong></td>
    <td><strong>0.371</strong></td><td><strong>0.412</strong></td><td><strong>0.430</strong></td>
    <td><strong>0.382</strong></td><td><strong>0.415</strong></td><td><strong>0.404</strong></td>
    <td><strong>0.419</strong></td><td><strong>0.329</strong></td>
    <td><strong>0.405</strong></td><td>0.308</td><td><strong>0.343</strong></td>
    <td>0.236</td><td><u>0.307</u></td>
  </tr>
  <tr>
    <td><u>LTX-2</u></td>
    <td><u>0.313</u></td><td><u>0.329</u></td><td><u>0.316</u></td>
    <td><u>0.362</u></td><td><u>0.326</u></td><td><u>0.340</u></td>
    <td>0.306</td><td><u>0.297</u></td>
    <td>0.244</td><td><u>0.337</u></td><td><u>0.317</u></td>
    <td>0.231</td><td><strong>0.311</strong></td>
  </tr>
  <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
    <td colspan="14"><em>Proprietary Models</em></td>
  </tr>
  <tr>
    <td>Runway Gen-4 Turbo</td>
    <td>0.403</td><td>0.392</td><td>0.396</td><td>0.409</td><td>0.429</td><td>0.341</td><td>0.363</td>
    <td>0.414</td><td>0.515</td><td><u>0.429</u></td><td>0.419</td><td>0.327</td><td>0.373</td>
  </tr>
  <tr>
    <td><strong>Sora 2</strong></td>
    <td><strong>0.546</strong></td><td><strong>0.569</strong></td><td><u>0.602</u></td>
    <td><u>0.477</u></td><td><strong>0.581</strong></td><td><strong>0.572</strong></td>
    <td><strong>0.597</strong></td><td><strong>0.523</strong></td>
    <td><u>0.546</u></td><td><strong>0.472</strong></td><td><strong>0.525</strong></td>
    <td><strong>0.462</strong></td><td><strong>0.546</strong></td>
  </tr>
  <tr>
    <td>Kling 2.6</td>
    <td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td><u>0.519</u></td>
    <td>0.330</td><td>0.528</td><td>0.135</td><td>0.272</td><td>0.356</td><td>0.359</td>
  </tr>
  <tr>
    <td><u>Veo 3.1</u></td>
    <td><u>0.480</u></td><td><u>0.531</u></td><td><strong>0.611</strong></td>
    <td><strong>0.503</strong></td><td><u>0.520</u></td><td><u>0.444</u></td>
    <td>0.510</td><td><u>0.429</u></td>
    <td><strong>0.577</strong></td><td>0.277</td><td><u>0.420</u></td>
    <td><u>0.441</u></td><td><u>0.404</u></td>
  </tr>
  <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
    <td colspan="14"><em>Data Scaling Strong Baseline</em></td>
  </tr>
  <tr>
    <td><strong>VBVR-Wan2.2</strong></td>
    <td><strong>0.685</strong></td><td><strong>0.760</strong></td><td><strong>0.724</strong></td>
    <td><strong>0.750</strong></td><td><strong>0.782</strong></td><td><strong>0.745</strong></td>
    <td><strong>0.833</strong></td><td><strong>0.610</strong></td>
    <td><strong>0.768</strong></td><td><strong>0.572</strong></td><td><strong>0.547</strong></td>
    <td><strong>0.618</strong></td><td><strong>0.615</strong></td>
  </tr>
  </tbody>
</table>

*ID = in-distribution, OOD = out-of-distribution; within each group, **bold** marks the best and <u>underline</u> the second-best score per column.*
## Release Information
VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications: the goal of VBVR-Wan2.2 is to *investigate data scaling behavior* and provide a *strong baseline model* for the video reasoning research community. Trained on the VBVR-Dataset, one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieves the highest score on VBVR-Bench.

In this release, we present
[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2),
[**VBVR-Dataset**](https://huggingface.co/datasets/Video-Reason/VBVR-Dataset),
[**VBVR-Bench-Data**](https://huggingface.co/datasets/Video-Reason/VBVR-Bench-Data), and
[**VBVR-Bench-Leaderboard**](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).

## 🛠️ QuickStart

### Installation

We recommend using [uv](https://docs.astral.sh/uv/) to manage the environment.

> uv installation guide: <https://docs.astral.sh/uv/getting-started/installation/#installing-uv>

```bash
uv pip install "torch>=2.4.0" "torchvision>=0.19.0" transformers Pillow "huggingface_hub[cli]"
uv pip install git+https://github.com/huggingface/diffusers
```

(The version specifiers are quoted so the shell does not interpret `>=` as a redirection.)

#### Example Code

```bash
huggingface-cli download Video-Reason/VBVR-Wan2.2 --local-dir ./VBVR-Wan2.2

python example.py \
    --model_path ./VBVR-Wan2.2
```

## 🖊️ Citation

```bibtex
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"a}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua bureau and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"e}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and
             Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}
```