Transformers · Diffusers · Safetensors

wruisi committed · verified
Commit 1dd0231 · Parent(s): 7374bcd

Update README.md

Files changed (1): README.md (+1, -16)
README.md CHANGED

@@ -22,8 +22,8 @@ library_name: transformers
  ## Overview
  Video reasoning grounds intelligence in spatiotemporally consistent visual environments that go beyond what text can naturally capture, enabling intuitive reasoning over motion, interaction, and causality. Rapid progress in video models has focused primarily on visual quality, while the systematic study of video reasoning and its scaling behavior has been held back by a lack of video reasoning (training) data. To address this gap, we introduce the Very Big Video Reasoning (VBVR) Dataset, an unprecedentedly large-scale resource spanning 200 curated reasoning tasks and over one million video clips, approximately three orders of magnitude larger than existing datasets. We further present VBVR-Bench, a verifiable evaluation framework that moves beyond model-based judging by incorporating rule-based, human-aligned scorers, enabling reproducible and interpretable diagnosis of video reasoning capabilities. Leveraging the VBVR suite, we conduct one of the first large-scale scaling studies of video reasoning and observe early signs of emergent generalization to unseen reasoning tasks. *Together, VBVR lays a foundation for the next stage of research in generalizable video reasoning.*
 
  <table>
- <thead>
  <tr>
  <th>Model</th>
  <th>Overall</th>
@@ -40,31 +40,25 @@ Video reasoning grounds intelligence in spatiotemporally consistent visual envir
  <th>OOD-Spat.</th>
  <th>OOD-Trans.</th>
  </tr>
- </thead>
  <tbody>
-
  <tr>
  <td><strong>Human</strong></td>
  <td>0.974</td><td>0.960</td><td>0.919</td><td>0.956</td><td>1.00</td><td>0.95</td><td>1.00</td>
  <td>0.988</td><td>1.00</td><td>1.00</td><td>0.990</td><td>1.00</td><td>0.970</td>
  </tr>
-
  <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
  <td colspan="14"><em>Open-source Models</em></td>
  </tr>
-
  <tr>
  <td>CogVideoX1.5-5B-I2V</td>
  <td>0.273</td><td>0.283</td><td>0.241</td><td>0.328</td><td>0.257</td><td>0.328</td><td>0.305</td>
  <td>0.262</td><td><u>0.281</u></td><td>0.235</td><td>0.250</td><td><strong>0.254</strong></td><td>0.282</td>
  </tr>
-
  <tr>
  <td>HunyuanVideo-I2V</td>
  <td>0.273</td><td>0.280</td><td>0.207</td><td>0.357</td><td>0.293</td><td>0.280</td><td><u>0.316</u></td>
  <td>0.265</td><td>0.175</td><td><strong>0.369</strong></td><td>0.290</td><td><u>0.253</u></td><td>0.250</td>
  </tr>
-
  <tr>
  <td><strong>Wan2.2-I2V-A14B</strong></td>
  <td><strong>0.371</strong></td><td><strong>0.412</strong></td><td><strong>0.430</strong></td>
@@ -73,7 +67,6 @@ Video reasoning grounds intelligence in spatiotemporally consistent visual envir
  <td><strong>0.405</strong></td><td>0.308</td><td><strong>0.343</strong></td>
  <td>0.236</td><td><u>0.307</u></td>
  </tr>
-
  <tr>
  <td><u>LTX-2</u></td>
  <td><u>0.313</u></td><td><u>0.329</u></td><td><u>0.316</u></td>
@@ -82,17 +75,14 @@ Video reasoning grounds intelligence in spatiotemporally consistent visual envir
  <td>0.244</td><td><u>0.337</u></td><td><u>0.317</u></td>
  <td>0.231</td><td><strong>0.311</strong></td>
  </tr>
-
  <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
  <td colspan="14"><em>Proprietary Models</em></td>
  </tr>
-
  <tr>
  <td>Runway Gen-4 Turbo</td>
  <td>0.403</td><td>0.392</td><td>0.396</td><td>0.409</td><td>0.429</td><td>0.341</td><td>0.363</td>
  <td>0.414</td><td>0.515</td><td><u>0.429</u></td><td>0.419</td><td>0.327</td><td>0.373</td>
  </tr>
-
  <tr>
  <td><strong>Sora 2</strong></td>
  <td><strong>0.546</strong></td><td><strong>0.569</strong></td><td><u>0.602</u></td>
@@ -101,13 +91,11 @@ Video reasoning grounds intelligence in spatiotemporally consistent visual envir
  <td><u>0.546</u></td><td><strong>0.472</strong></td><td><strong>0.525</strong></td>
  <td><strong>0.462</strong></td><td><strong>0.546</strong></td>
  </tr>
-
  <tr>
  <td>Kling 2.6</td>
  <td>0.369</td><td>0.408</td><td>0.465</td><td>0.323</td><td>0.375</td><td>0.347</td><td><u>0.519</u></td>
  <td>0.330</td><td>0.528</td><td>0.135</td><td>0.272</td><td>0.356</td><td>0.359</td>
  </tr>
-
  <tr>
  <td><u>Veo 3.1</u></td>
  <td><u>0.480</u></td><td><u>0.531</u></td><td><strong>0.611</strong></td>
@@ -116,11 +104,9 @@ Video reasoning grounds intelligence in spatiotemporally consistent visual envir
  <td><strong>0.577</strong></td><td>0.277</td><td><u>0.420</u></td>
  <td><u>0.441</u></td><td><u>0.404</u></td>
  </tr>
-
  <tr style="background:#F2F0EF;font-weight:700;text-align:center;">
  <td colspan="14"><em>Data Scaling Strong Baseline</em></td>
  </tr>
-
  <tr>
  <td><strong>VBVR-Wan2.2</strong></td>
  <td><strong>0.685</strong></td><td><strong>0.760</strong></td><td><strong>0.724</strong></td>
@@ -129,7 +115,6 @@ Video reasoning grounds intelligence in spatiotemporally consistent visual envir
  <td><strong>0.768</strong></td><td><strong>0.572</strong></td><td><strong>0.547</strong></td>
  <td><strong>0.618</strong></td><td><strong>0.615</strong></td>
  </tr>
-
  </tbody>
  </table>
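The overview describes VBVR-Bench as replacing model-based judging with rule-based, human-aligned scorers. As a minimal sketch of what such a verifiable check can look like: a spatial-reasoning rule scores a clip 1.0 when a predicted object position lands within a tolerance of the ground truth, and per-clip scores are averaged into a task score. The function names, the distance rule, and the tolerance below are illustrative assumptions, not the actual VBVR-Bench implementation.

```python
# Hypothetical sketch of a rule-based scorer in the spirit of VBVR-Bench.
# Names, the spatial rule, and the tolerance are illustrative assumptions.
from statistics import mean


def spatial_rule_score(pred_xy, gold_xy, tol=0.05):
    """Return 1.0 if the predicted (x, y) position is within a normalized
    Euclidean tolerance of the ground truth, else 0.0 -- a deterministic,
    verifiable check rather than a model-based judgment."""
    dx = pred_xy[0] - gold_xy[0]
    dy = pred_xy[1] - gold_xy[1]
    return 1.0 if (dx * dx + dy * dy) ** 0.5 <= tol else 0.0


def task_score(per_clip_scores):
    """Aggregate per-clip rule scores into a single task score by averaging."""
    return mean(per_clip_scores)


# Two toy clips: (predicted position, ground-truth position).
clips = [((0.50, 0.50), (0.52, 0.49)),  # within tolerance -> 1.0
         ((0.10, 0.90), (0.40, 0.20))]  # far off          -> 0.0
scores = [spatial_rule_score(p, g) for p, g in clips]
print(task_score(scores))  # 0.5
```

Because each rule is deterministic, a score like those in the table above can be recomputed exactly from the outputs, which is what makes the evaluation reproducible and interpretable.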
 
 