Libraries: Transformers · Diffusers · Safetensors
wruisi committed (verified) · commit 838ded2 · 1 parent: d32bfcc

Update README.md
Files changed (1): README.md (+16 -7)
```diff
@@ -7,15 +7,23 @@ library_name: transformers
 
 # VBVR: A Very Big Video Reasoning Suite
 
-<a href="" target="_blank">
+<a href="https://video-reason.com" target="_blank">
+  <img alt="Code" src="https://img.shields.io/badge/Project%20-%20Homepage-4285F4" height="20" />
+</a>
+<a href="https://github.com/orgs/Video-Reason/repositories" target="_blank">
   <img alt="Code" src="https://img.shields.io/badge/VBVR-Code-100000?style=flat-square&logo=github&logoColor=white" height="20" />
 </a>
 <a href="" target="_blank">
   <img alt="arXiv" src="https://img.shields.io/badge/arXiv-VBVR-red?logo=arxiv" height="20" />
 </a>
-
-<a href="" target="_blank">
-  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
+<a href="https://huggingface.co/Video-Reason/VBVR-Dataset" target="_blank">
+  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Dataset-Data-ffc107?color=ffc107&logoColor=white" height="20" />
+</a>
+<a href="https://huggingface.co/Video-Reason/VBVR-Bench-Data" target="_blank">
+  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Data-ffc107?color=ffc107&logoColor=white" height="20" />
+</a>
+<a href="https://huggingface.co/Video-Reason/VBVR-Bench-Leaderboard" target="_blank">
+  <img alt="Leaderboard" src="https://img.shields.io/badge/%F0%9F%A4%97%20_VBVR_Bench-Leaderboard-ffc107?color=ffc107&logoColor=white" height="20" />
 </a>
 
 
@@ -130,9 +138,10 @@ to unseen reasoning tasks. **Together, VBVR lays a foundation for the next stage
 VBVR-Wan2.2 is trained from Wan2.2-I2V-A14B without architectural modifications, as the goal of VBVR-Wan2.2 is to *investigate data scaling behavior* and provide a *strong baseline model* for the video reasoning research community. Leveraging the VBVR-Dataset, which to our knowledge constitutes one of the largest video reasoning datasets to date, VBVR-Wan2.2 achieved highest score on VBVR-Bench.
 
 In this release, we present
-[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2) and
-[**VBVR-Bench**](https://huggingface.co/Video-Reason/VBVR-Bench),
-[**VBVR-Dataset**](https://huggingface.co/Video-Reason/VBVR-Dataset).
+[**VBVR-Wan2.2**](https://huggingface.co/Video-Reason/VBVR-Wan2.2),
+[**VBVR-Dataset**](https://huggingface.co/Video-Reason/VBVR-Dataset),
+[**VBVR-Bench-Data**](https://huggingface.co/Video-Reason/VBVR-Bench-Data) and
+[**VBVR-Bench-Leaderboard**](https://huggingface.co/Video-Reason/VBVR-Bench-Leaderboard).
 
 
 ## 🛠️ QuickStart
```