wruisi committed · Commit edaad7b · verified · 1 Parent(s): 9ee7266

Update README.md

Files changed (1)
  1. README.md +16 -4
README.md CHANGED
@@ -8,10 +8,10 @@ library_name: transformers
 # VBVR: A Very Big Video Reasoning Suite
 
 <a href="" target="_blank">
-<img alt="Code" src="https://img.shields.io/badge/SenseNova_SI-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
+<img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
 </a>
 <a href="" target="_blank">
-<img alt="arXiv" src="https://img.shields.io/badge/arXiv-SenseNova_SI-red?logo=arxiv" height="20" />
+<img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
 </a>
 
 <a href="" target="_blank">
@@ -20,7 +20,15 @@ library_name: transformers
 
 
 ## Overview
-Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models has focused primarily on visual quality. Systematically studying video reasoning and its scaling behavior suffers from a lack of video reasoning (training) data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks and over one million video clips—approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. *Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.*
+Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture,
+enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models has focused primarily on visual quality.
+Systematically studying video reasoning and its scaling behavior suffers from a lack of video reasoning (training) data.
+To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks
+and over one million video clips—approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench,
+a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers,
+enabling reproducible and interpretable diagnosis of video reasoning capabilities.
+Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization
+to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.**
 
 
 <table>
@@ -152,5 +160,9 @@ python example.py \
 ## 🖊️ Citation
 
 ```bib
-
+@article{vbvr2026,
+  title={A Very Big Video Reasoning Suite},
+  author={Wang, Maijunxian and Wang, Ruisi and Lin, Juyi and Ji, Ran and Wiedemer, Thaddäus and Gao, Qingying and Luo, Dezhi and Qian, Yaoyao and Huang, Lianyu and Hong, Zelong and Ge, Jiahui and Ma, Qianli and He, Hang and Zhou, Yifan and Guo, Lingzi and Mei, Lantao and Li, Jiachen and Xing, Hanwen and Zhao, Tianqi and Yu, Fengyuan and Xiao, Weihang and Jiao, Yizheng and Hou, Jianheng and Zhang, Danyang and Xu, Pengcheng and Zhong, Boyang and Zhao, Zehong and Fang, Gaoyun and Kitaoka, John and Xu, Yile and Xu, Hua and Blacutt, Kenton and Nguyen, Tin and Song, Siyuan and Sun, Haoran and Wen, Shaoyue and He, Linyang and Wang, Runming and Wang, Yanzhi and Yang, Mengyue and Ma, Ziqiao and Millière, Raphaël and Shi, Freda and Vasconcelos, Nuno and Khashabi, Daniel and Yuille, Alan and Du, Yilun and Liu, Ziming and Lin, Dahua and Liu, Ziwei and Kumar, Vikash and Li, Yijiang and Yang, Lei and Cai, Zhongang and Deng, Hokin},
+  year={2026}
+}
 ```