---
license: mit
---
# How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective

# About SIBench
Numerous open-source benchmarks for visual-spatial reasoning already exist, but each typically covers only a subset of tasks. We collected, categorized, and filtered them to construct **SIBench**.

*Figure: radar chart.*

## 💡 Key Features

1. **Hierarchical Evaluation**

We categorize visual spatial reasoning tasks into three types by reasoning level: **Foundational Perception**, **Spatial Understanding**, and **Planning**. Furthermore, each category contains a rich set of evaluation tasks to comprehensively assess the visuospatial reasoning capabilities of existing VLMs.

2. **Comprehensive Evaluation**

The evaluation data in SIBench cover diverse input formats, including **single images**, **multi-view images**, and **videos**, as well as various question formats, such as true/false judgment, multiple choice, and numerical question answering. The data are derived from **23** relevant tasks across nearly **20** open-source benchmarks.
19
+
20
+ 3. **High Quality**
21
+
22
+ SIBench prioritizes datasets with human annotations, filters out excessively long videos to avoid unreasonable task settings, and adds timestamps to videos requiring temporal information, thereby ensuring high data quality.
23
+
24
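The quality-control steps above can be sketched in a few lines. This is an illustration only: the duration cutoff and the field names (`duration_s`, `frame_times_s`) are assumptions, not the actual SIBench schema or thresholds.

```python
# Hypothetical sketch of SIBench-style quality filtering: drop overly long
# videos and attach MM:SS timestamps to frames that need temporal context.
# Field names and the cutoff below are illustrative assumptions.

MAX_DURATION_S = 120  # assumed threshold; the real cutoff is not specified

def fmt_timestamp(seconds: float) -> str:
    """Format seconds as MM:SS for injection into a video question."""
    m, s = divmod(int(seconds), 60)
    return f"{m:02d}:{s:02d}"

def filter_and_annotate(samples: list[dict]) -> list[dict]:
    kept = []
    for s in samples:
        if s.get("duration_s", 0) > MAX_DURATION_S:
            continue  # excessively long video: unreasonable task setting
        out = dict(s)
        out["timestamps"] = [fmt_timestamp(t) for t in s.get("frame_times_s", [])]
        kept.append(out)
    return kept

samples = [
    {"id": "a", "duration_s": 45, "frame_times_s": [0, 30]},
    {"id": "b", "duration_s": 600, "frame_times_s": [0]},  # filtered out
]
print(filter_and_annotate(samples))
```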
## 👨‍💻 Code

We offer a comprehensive evaluation methodology. For more details, please refer to our evaluation [code](https://github.com/song2yu/SIBench-VSR) and [project page](https://sibench.github.io/Awesome-Visual-Spatial-Reasoning/).
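As a rough sketch of how the three answer formats could be scored, the snippet below checks true/false and multiple-choice answers by exact (case-insensitive) match and numerical answers within a relative tolerance. The parsing and the 5% tolerance are assumptions for illustration; the official evaluation code may differ.

```python
# Illustrative scorer for SIBench's three answer formats (true/false,
# multiple choice, numerical). The 5% relative tolerance is an assumption,
# not the benchmark's official setting.

def score(pred: str, gold: str, qtype: str, rel_tol: float = 0.05) -> bool:
    p, g = pred.strip().lower(), gold.strip().lower()
    if qtype in ("true_false", "multiple_choice"):
        return p == g  # exact, case-insensitive match
    if qtype == "numerical":
        try:
            pv, gv = float(p), float(g)
        except ValueError:
            return False  # unparsable prediction counts as wrong
        # correct if within a relative tolerance of the gold value
        return abs(pv - gv) <= rel_tol * max(abs(gv), 1e-9)
    raise ValueError(f"unknown question type: {qtype}")

print(score("B", "b", "multiple_choice"))  # True
print(score("3.1", "3.0", "numerical"))    # within 5% of 3.0 -> True
```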

## 📊 Dataset

SIBench contains a total of **8.8K** data points. Input formats include single images, multiple images, and videos, and question types include true/false, multiple-choice, and numerical questions.

Additionally, we provide a streamlined version for evaluation called SIBench-mini, whose data are randomly sampled from SIBench. SIBench-mini maintains the same comprehensive task settings as the full version, but with a more uniform data distribution.

*Figure: SIBench data card.*
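A mini split with the same task coverage but a more uniform distribution can be drawn by sampling a fixed number of items per task. The sketch below illustrates that idea under assumed field names; it is not the actual SIBench-mini selection script.

```python
# Sketch of a SIBench-mini-style subset: sample the same number of items
# per task so the subset is uniform across tasks while keeping every task.
# The "task" field name is an illustrative assumption.
import random
from collections import defaultdict

def make_mini(samples: list[dict], per_task: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)  # fixed seed -> reproducible split
    by_task = defaultdict(list)
    for s in samples:
        by_task[s["task"]].append(s)
    mini = []
    for task, items in sorted(by_task.items()):
        k = min(per_task, len(items))  # keep every task, even small ones
        mini.extend(rng.sample(items, k))
    return mini

full = [{"task": t, "id": i} for t in ("depth", "count", "route") for i in range(50)]
mini = make_mini(full, per_task=10)
print(len(mini))  # 30: 10 items from each of the 3 tasks
```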

## 🎯 Evaluation Results

We provide a [leaderboard](https://sibench.github.io/Awesome-Visual-Spatial-Reasoning/), and we welcome you to add your evaluation results. Please feel free to contact us directly at sduyusong@gmail.com.

*Table: evaluation results on SIBench.*

## 📖 Citation

If you find *SIBench* useful in your research, please consider citing the following paper:

```
@article{sibench2025,
  title   = {How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective},
  author  = {Songsongyu and Yuxin Chen and Hao Ju and Lianjie Jia and Shaofei Huang and Rundi Cui and Yuhan Wu and Binghao Ran and Zaibin Zhang and Zhedong Zheng and Zhipeng Zhang and Yifan Wang and Lin Song and Lijun Wang and Yanwei Li and Ying Shan and Huchuan Lu},
  journal = {arXiv preprint},
  year    = {2025}
}
```