---
language:
- en
license: apache-2.0
size_categories:
- n<1K
task_categories:
- image-to-video
pretty_name: VBVR-Bench-Data
tags:
- video-generation
- video-reasoning
configs:
- config_name: VBVR-Bench-Data
  data_files:
  - split: test
    path: VBVR-Bench.json
---

# VBVR: A Very Big Video Reasoning Suite

<a href="https://video-reason.com" target="_blank">
  <img alt="Project Page" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
</a>
<a href="https://github.com/Video-Reason/VBVR-EvalKit" target="_blank">
  <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/papers/2602.20159" target="_blank">
  <img alt="Paper" src="https://img.shields.io/badge/Paper-HF-red?logo=huggingface" height="20" />
</a>
<a href="https://huggingface.co/Video-Reason/VBVR-Wan2.2" target="_blank">
  <img alt="Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Wan2.2-Model-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/datasets/Video-Reason/VBVR-Dataset" target="_blank">
  <img alt="Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
</a>
<a href="https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
</a>

## Overview
Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture,
enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models has focused primarily on visual quality,
while the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning training data.
To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks
and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench,
a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers,
enabling reproducible and interpretable diagnosis of video reasoning capabilities.

For more details, please refer to the paper: [A Very Big Video Reasoning Suite](https://huggingface.co/papers/2602.20159).

## Sample Usage

To evaluate a model with the VBVR suite, use the official evaluation toolkit, [VBVR-EvalKit](https://github.com/Video-Reason/VBVR-EvalKit):

```bash
# Install the toolkit
git clone https://github.com/Video-Reason/VBVR-EvalKit.git && cd VBVR-EvalKit
python -m venv venv && source venv/bin/activate
pip install -e .

# Set up a model (example: SVD)
bash setup/install_model.sh --model svd --validate

# Inference
python examples/generate_videos.py --questions-dir /path/to/VBVR-Bench-Data --output-dir ./outputs --model svd

# Evaluation (VBVR-Bench)
python examples/score_videos.py --inference-dir ./outputs
```
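
If you want to inspect the benchmark entries directly, the released `VBVR-Bench.json` file can be read with the standard library. The sketch below is a minimal example, not part of VBVR-EvalKit; the exact schema of the JSON file is not documented in this card, so it treats entries as opaque dicts whose keys you should inspect before use.

```python
import json
from pathlib import Path

def load_benchmark(json_path):
    """Load VBVR-Bench entries from the released JSON file.

    The schema of VBVR-Bench.json is not specified here, so entries
    are returned as plain dicts; inspect their keys before relying
    on any particular field.
    """
    with Path(json_path).open(encoding="utf-8") as f:
        data = json.load(f)
    # Normalize: the file may hold a flat list of samples or a
    # mapping from split name to sample list.
    if isinstance(data, dict):
        return [entry for split in data.values() for entry in split]
    return data
```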

## Release Information
We are pleased to release the official **VBVR-Bench** test set for standardized, rigorous evaluation of video-based visual reasoning models.
The test split is designed to be used with the evaluation toolkit provided by Video-Reason, [VBVR-EvalKit](https://github.com/Video-Reason/VBVR-EvalKit).

After running evaluation, you can compare your model's performance on the public [VBVR-Bench Leaderboard](https://huggingface.co/spaces/Video-Reason/VBVR-Bench-Leaderboard).

## Data Structure
The dataset is organized by domain and task generator. For example:

```
In-Domain_50/
  G-31_directed_graph_navigation_data-generator/
    00000/
      first_frame.png
      final_frame.png
      ground_truth.mp4
      prompt.txt
```
### Structure Description

- **In-Domain_50 / Out-of-Domain_50**: Evaluation splits indicating whether samples belong to the in-domain or out-of-domain setting.
- **G-XXX_task-name_data-generator**: A specific reasoning task category and its corresponding data generator.
- **00000-00004**: Individual sample instances.

Each sample directory contains:
- `first_frame.png`: the initial frame of the video
- `final_frame.png`: the final frame of the video
- `ground_truth.mp4`: the full video sequence
- `prompt.txt`: the textual reasoning question or instruction
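
Given this layout, samples can be enumerated programmatically. The snippet below is a minimal stdlib sketch; the helper name `iter_samples` and the completeness check are our own illustration, not part of VBVR-EvalKit.

```python
from pathlib import Path

# The four files this card lists for every sample directory.
EXPECTED_FILES = {"first_frame.png", "final_frame.png",
                  "ground_truth.mp4", "prompt.txt"}

def iter_samples(split_dir):
    """Yield (task_name, sample_id, prompt) for each complete sample
    under a split directory such as In-Domain_50/."""
    for task_dir in sorted(Path(split_dir).iterdir()):
        if not task_dir.is_dir():
            continue
        for sample_dir in sorted(task_dir.iterdir()):
            if not sample_dir.is_dir():
                continue
            names = {p.name for p in sample_dir.iterdir()}
            if EXPECTED_FILES <= names:  # skip incomplete samples
                prompt = (sample_dir / "prompt.txt").read_text(encoding="utf-8")
                yield task_dir.name, sample_dir.name, prompt
```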

## 🖊️ Citation

```bibtex
@article{vbvr2026,
  title   = {A Very Big Video Reasoning Suite},
  author  = {Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and
             Wiedemer, Thadd{\"{a}}us and Gao, Qingying and Luo, Dezhi and
             Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and
             Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and
             Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and
             Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and
             Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and
             Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and
             Xu, Yile and Xu, Hua and Blacutt, Kenton and Nguyen, Tin and
             Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and
             Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and
             Milli{\`e}re, Rapha{\"{e}}l and Shi, Freda and Vasconcelos, Nuno and
             Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and
             Li, Bo and Lin, Dahua and Liu, Ziwei and Kumar, Vikash and
             Li, Yijiang and Yang, Lei and Cai, Zhongang and Deng, Hokin},
  journal = {arXiv preprint arXiv:2602.20159},
  year    = {2026},
  url     = {https://arxiv.org/abs/2602.20159}
}
```