Add task categories and sample usage
#2
by nielsr (HF Staff) - opened
README.md CHANGED

---
license: mit
task_categories:
- image-text-to-text
- video-text-to-text
---

# How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective

# About SIBench

At present, numerous open-source benchmarks for visual-spatial reasoning already exist; however, each typically covers only a subset of tasks. We collected, categorized, and filtered them to construct **SIBench**.

<img src="https://cdn-uploads.huggingface.co/production/uploads/67a62bcbcd0c22fe78c5b00e/VlHJGwFjgNR45y1Sq2LIs.jpeg" alt="image" width="800">

## 💡 Key Features

1. **Hierarchical Evaluation**

We offer a comprehensive evaluation methodology. For more details, please refer to our evaluation [code](https://github.com/song2yu/SIBench-VSR) and [project page](https://sibench.github.io/Awesome-Visual-Spatial-Reasoning/).

## 📊 Dataset

SIBench contains a total of **8.8K** data points. The data formats include single images, multiple images, and videos, while the question types include true/false, multiple-choice, and numerical questions. Additionally, we provide a streamlined version for evaluation called SIBench-mini.
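For orientation, here is what one data point might look like in Python form. The field names below are illustrative assumptions, not the actual SIBench schema; inspect the dataset files on the Hub for the real layout:

```python
# Illustrative only -- these field names are assumptions, not the SIBench schema.
sample = {
    "media_type": "video",          # single image, multiple images, or video
    "media_paths": ["videos/0001.mp4"],
    "question": "Is the chair closer to the camera than the table?",
    "question_type": "true_false",  # true/false, multiple-choice, or numerical
    "answer": "True",
}
```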

<img src="https://cdn-uploads.huggingface.co/production/uploads/67a62bcbcd0c22fe78c5b00e/fDuxKXB2qs-NBTRnqLvQo.jpeg" alt="image" width="800">

## 🎯 Evaluation Results

We provide a [leaderboard](https://sibench.github.io/Awesome-Visual-Spatial-Reasoning/) and welcome you to add your evaluation results. Please feel free to contact us directly at sduyusong@gmail.com.

<img src="https://cdn-uploads.huggingface.co/production/uploads/67a62bcbcd0c22fe78c5b00e/8ZxPd4rzJUJZ_fbUgSt8-.png" alt="image" width="800">

## 📖 Citation

If you find *SIBench* useful in your research, please consider citing the following paper:

```
@article{...,
  title = {How Far are VLMs from Visual Spatial Intelligence? A Benchmark-Driven Perspective},
  author = {Songsongyu and Yuxin Chen and Hao Ju and Lianjie Jia and Shaofei Huang and Rundi Cui and Yuhan Wu and Binghao Ran and Zaibin Zhang and Zhedong Zheng and Zhipeng Zhang and Yifan Wang and Lin Song and Lijun Wang and Yanwei Li and Ying Shan and Huchuan Lu},
  journal = {arXiv preprint},
  year = {2025}
}
```

## 🚀 Quick Start

**1. Clone this repo and set up the environment:**

```
git clone https://github.com/song2yu/SIBench-VSR.git
cd SIBench-VSR

conda create -n sibench python=3.10.6 -y
conda activate sibench

pip install -e .
pip install transformers==4.49.0 accelerate==0.26.0 flash-attn==2.7.3  # the specific packages that are prone to issues
```
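After installation, a minimal sanity check (not part of the repo's documented steps) can catch version mismatches in the pinned packages early:

```python
# Sanity-check the pinned packages installed above.
import transformers, accelerate
print(transformers.__version__)  # expect 4.49.0
print(accelerate.__version__)    # expect 0.26.0

import flash_attn                # fails loudly if the CUDA build is broken
print(flash_attn.__version__)    # expect 2.7.3
```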

**2. Prepare the test data:**

Download the data from the Hugging Face Hub:

```
https://huggingface.co/datasets/Two-hot/SIBench
```

or run `download.py`:

```
cd Spatial_Intelligence_Benchmark
python download.py
```
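Alternatively, the dataset can be fetched programmatically with the `huggingface_hub` library. A minimal sketch; the `local_dir` value is an assumption about your layout, and this is not necessarily what `download.py` does internally:

```python
from huggingface_hub import snapshot_download

# Pull the full SIBench dataset repo from the Hub.
# local_dir is illustrative; point it at your LMUData root.
snapshot_download(
    repo_id="Two-hot/SIBench",
    repo_type="dataset",
    local_dir="Spatial_Intelligence_Benchmark/data",
)
```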

For convenience, we pre-sampled the videos and retained only **30 frames** for each one; the processed data are stored in **data_sampled_video**. We recommend replacing the original videos with these and setting the total number of sampled frames to 30, which is consistent with the experimental setup in our paper. If you need to change the sampling rate, use the original videos directly.
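To illustrate what the 30-frame preprocessing amounts to, here is a sketch of uniform frame sampling with OpenCV; this is an assumption about the procedure, not the exact script used to build data_sampled_video:

```python
import cv2
import numpy as np

def sample_frames(video_path: str, num_frames: int = 30):
    """Uniformly sample num_frames frames from a video."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    indices = np.linspace(0, total - 1, num=num_frames).astype(int)
    frames = []
    for idx in indices:
        cap.set(cv2.CAP_PROP_POS_FRAMES, int(idx))
        ok, frame = cap.read()
        if ok:
            frames.append(frame)
    cap.release()
    return frames
```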

**3. Run Examples**

To test a particular task separately, run:

```
export LMUData=/your/path/to/SIBench-VSR/Spatial_Intelligence_Benchmark/data
python run.py --data <setting_name> --model <model_name> --verbose

# e.g.
python run.py --data relative_distance --model InternVL2_5-2B --verbose
```
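To sweep several settings or models in one go, a small driver script can shell out to `run.py`. A sketch: only `relative_distance` and `InternVL2_5-2B` are confirmed names above, so extend the lists with the actual setting and model names from the repo:

```python
import subprocess

# Placeholder lists -- consult SIBench-VSR for the full set of names.
settings = ["relative_distance"]
models = ["InternVL2_5-2B"]

for model in models:
    for setting in settings:
        subprocess.run(
            ["python", "run.py", "--data", setting, "--model", model, "--verbose"],
            check=True,
        )
```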