---
license: mit
task_categories:
- image-text-to-text
- video-text-to-text
---

# How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective

# About SIBench
At present, numerous open-source benchmarks for visual-spatial reasoning already exist; however, each benchmark typically covers only a subset of tasks. We collected, categorized, and filtered them to construct **SIBench**.

![teaser](radar2.6_calibri.png)

## 💡 Key Features

1. **Hierarchical Evaluation**

   We categorize Visual Spatial Reasoning tasks into three types according to reasoning level: **Foundational Perception**, **Spatial Understanding**, and **Planning**. Each category contains a rich set of evaluation tasks to comprehensively assess the visuospatial reasoning capabilities of existing VLMs.

2. **Comprehensive Evaluation**

   The evaluation data in SIBench cover diverse input formats, including **single images**, **multi-view images**, and **videos**, as well as various question formats, such as true/false judgment, multiple-choice, and numerical question answering. The data are derived from **23** relevant tasks across nearly **20** open-source benchmarks.

3. **High Quality**

   SIBench prioritizes datasets with human annotations, filters out excessively long videos to avoid unreasonable task settings, and adds timestamps to videos requiring temporal information, thereby ensuring high data quality.

## 👨‍💻 Code

We offer a comprehensive evaluation methodology. For more details, please refer to our evaluation [code](https://github.com/song2yu/SIBench-VSR) and [project page](https://sibench.github.io/Awesome-Visual-Spatial-Reasoning/).

## 📊 Dataset

SIBench contains a total of **8.8K** data points. The data formats include single images, multiple images, and videos, while the question types include true/false, multiple-choice, and numerical questions.

Additionally, we provide a streamlined version for evaluation called SIBench-mini, whose data are randomly sampled from SIBench. SIBench-mini maintains the same comprehensive task settings as the full version, but with a more uniform data distribution.
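
To make the combination of input and question formats concrete, the snippet below sketches what a single entry could look like. This is purely illustrative: the field names are hypothetical and do not reflect the actual SIBench schema (only the task name `relative_distance` and the `data_sampled_video` folder appear elsewhere in this card).

```python
# Hypothetical illustration only -- field names do NOT reflect the actual SIBench schema.
example_entry = {
    "task": "relative_distance",           # one of the 23 source tasks
    "level": "Spatial Understanding",      # Foundational Perception / Spatial Understanding / Planning
    "input_type": "video",                 # single image, multi-view images, or video
    "media_path": "data_sampled_video/example.mp4",
    "question": "Which object is closer to the camera, the chair or the table?",
    "question_type": "multiple-choice",    # true/false, multiple-choice, or numerical
    "choices": ["chair", "table"],
    "answer": "chair",
}
```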

![data](cognitive_levels.png)

## 🎯 Evaluation Results
We've provided a [leaderboard](https://sibench.github.io/Awesome-Visual-Spatial-Reasoning/), and we welcome you to add your evaluation results. Please feel free to contact us directly at: sduyusong@gmail.com.

![table2](table2.png)

![table1](table1.png)

## 📖 Citation

If you find *SIBench* useful in your research, please consider citing the following related papers:

```
@article{sibench2025,
  title   = {How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective},
  author  = {Songsongyu and Yuxin Chen and Hao Ju and Lianjie Jia and Shaofei Huang and Rundi Cui and Yuhan Wu and Binghao Ran and Zaibin Zhang and Zhedong Zheng and Zhipeng Zhang and Yifan Wang and Lin Song and Lijun Wang and Yanwei Li and Ying Shan and Huchuan Lu},
  journal = {arXiv preprint},
  year    = {2025}
}
```

## 🚀 Quick Start

**1. Clone this repo:**

```
git clone https://github.com/song2yu/SIBench-VSR.git

cd SIBench-VSR

conda create -n sibench python=3.10.6 -y

conda activate sibench

pip install -e .

pip install transformers==4.49.0 accelerate==0.26.0 flash-attn==2.7.3 # the specific packages that are prone to issues 
```
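
Since the pinned packages above are the ones most prone to installation issues, a quick sanity check can save time. The snippet below is a minimal sketch that only reads installed versions; the expected pins are copied from the install command above.

```python
# Minimal environment sanity check (sketch); version pins follow the install command above.
from importlib.metadata import version, PackageNotFoundError

expected = {"transformers": "4.49.0", "accelerate": "0.26.0", "flash-attn": "2.7.3"}
for pkg, want in expected.items():
    try:
        have = version(pkg)
        status = "OK" if have == want else f"expected {want}"
        print(f"{pkg}: {have} ({status})")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```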

**2. Prepare the test data:**

Obtain the data from the following sources:

```html
https://huggingface.co/datasets/Two-hot/SIBench
```

or run download.py:
```
cd Spatial_Intelligence_Benchmark

python download.py
```
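
If you prefer pulling the data from the Hugging Face Hub programmatically, the sketch below uses `huggingface_hub.snapshot_download` with the repo id from the URL above; the local directory is an arbitrary example and should match your `LMUData` path.

```python
# Sketch: download the SIBench dataset repo from the Hugging Face Hub.
# Repo id is taken from the URL above; local_dir is an example path -- adjust to your LMUData location.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Two-hot/SIBench",
    repo_type="dataset",
    local_dir="Spatial_Intelligence_Benchmark/data",
)
```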

For convenience, we pre-sampled the videos and retained only **30 frames** for each one. The processed data are stored in **data_sampled_video**. We recommend replacing the original videos with these and setting the total number of sampled frames to 30, which is consistent with the experimental setup in our paper. If you do not need to change the sampling rate, you can use these videos directly.
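
For reference, uniform frame sampling of this kind can be reproduced in a few lines with OpenCV. The snippet below is a minimal sketch (file paths are placeholders) and is not the exact preprocessing script used to produce **data_sampled_video**.

```python
# Sketch: uniformly sample N frames from a video with OpenCV (paths are placeholders).
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 30):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames

frames = sample_frames("example_video.mp4", num_frames=30)
print(f"sampled {len(frames)} frames")
```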

**3. Run Examples**

To test a particular task separately, run the following commands:

```
export LMUData=/your/path/to/SIBench-VSR/Spatial_Intelligence_Benchmark/data

python run.py --data <setting_name> --model <model_name> --verbose

# e.g.

python run.py --data relative_distance --model InternVL2_5-2B --verbose
```
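
To sweep several settings with one model, you can wrap `run.py` in a small loop. The sketch below uses Python's `subprocess`; apart from `relative_distance`, the setting names are placeholders, so substitute the ones you actually want to evaluate.

```python
# Sketch: run several SIBench settings for one model via run.py.
# Only "relative_distance" is taken from the example above; the other names are placeholders.
import os
import subprocess

os.environ["LMUData"] = "/your/path/to/SIBench-VSR/Spatial_Intelligence_Benchmark/data"

model = "InternVL2_5-2B"
settings = ["relative_distance", "<setting_name_2>", "<setting_name_3>"]  # placeholders

for setting in settings:
    subprocess.run(
        ["python", "run.py", "--data", setting, "--model", model, "--verbose"],
        check=True,
    )
```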