YuyouZhang committed
Commit dae1eea · verified · 1 Parent(s): bf825db

Update README.md

Files changed (1)
  1. README.md +66 -3
README.md CHANGED
@@ -1,3 +1,66 @@
- ---
- license: apache-2.0
- ---
+ ---
+ license: apache-2.0
+ task_categories:
+ - question-answering
+ language:
+ - en
+ arxiv_id: 2509.25390
+ pretty_name: SpinBench
+ ---
+
+ <div align="center">
+ <h1><img src="assets/spinbench_logo.png" width="50" /> SpinBench: Perspective and Rotation as a Lens on Spatial Reasoning in VLMs </h1>
+ </div>
+
+ <h5 align="center">
+ <a href="https://spinbench25.github.io/">🌐 Website</a> |
+ <a href="https://huggingface.co/datasets/YuyouZhang/SpinBench">🤗 Dataset</a> |
+ <a href="https://arxiv.org/abs/2509.25390">📑 Paper</a> |
+ <a href="">💻 Code</a>
+ </h5>
+
+ [Project page](https://spinbench25.github.io/) • [arXiv:2509.25390](https://arxiv.org/abs/2509.25390)
+
+ **SpinBench** is a cognitively grounded diagnostic benchmark for evaluating **spatial reasoning** in vision-language models (VLMs).
+ SpinBench is designed around the core challenge of spatial reasoning: perspective taking, the ability to reason about how scenes and object relations change under viewpoint transformation. Since perspective taking requires multiple cognitive capabilities, such as recognizing objects across views, grounding relative positions, and mentally simulating transformations, SpinBench introduces a set of fine-grained diagnostic categories. Our categories target translation, rotation, object relative pose, and viewpoint change, and are progressively structured so that simpler single-object tasks scaffold toward the most demanding multi-object perspective-taking setting. We evaluate 37 state-of-the-art VLMs, both proprietary and open source. Results reveal systematic weaknesses: strong egocentric bias, poor rotational understanding, and inconsistencies under symmetrical and syntactic reformulations. Scaling analysis shows both smooth improvements and emergent capabilities. While human subjects achieve high accuracy (91.2%), task difficulty as measured by human response time correlates strongly with VLM accuracy, indicating that SpinBench captures spatial reasoning challenges shared by humans and VLMs. Together, our findings highlight the need for structured, cognitively inspired diagnostic tools to advance spatial reasoning in multimodal foundation models.
+
+ ---
+
+ ## 📁 Dataset structure
+
+ <details>
+ <summary><strong>Click to expand folder structure</strong></summary>
+
+ &nbsp;
+
+ ```
+ SpinBench/
+ ├── test.jsonl
+ ├── test_small.jsonl
+ ├── images/
+ │   ├── cars_rotation_c187650a7b.jpg
+ │   ├── face_rotation_2b4fd309cf.png
+ │   ├── infinigen_d3f202e7a1.png
+ │   ├── original_01bce239aa.jpg
+ │   └── ...
+ ├── LICENSE
+ ├── README.md
+ └── ...
+ ```
+
+ </details>
+
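+ A minimal loading sketch in plain Python: it assumes each line of `test.jsonl` is a JSON object that references an image in `images/` together with a question/answer pair. The field names `image`, `question`, and `answer` below are illustrative assumptions, so verify them against the released files before relying on them.
+
+ ```python
+ import json
+ from pathlib import Path
+
+ root = Path("SpinBench")  # local copy of the dataset repository
+
+ # Read the QA records from test.jsonl (one JSON object per line).
+ with open(root / "test.jsonl", encoding="utf-8") as f:
+     records = [json.loads(line) for line in f if line.strip()]
+
+ # Inspect the first record; "image", "question", and "answer" are
+ # assumed field names, not confirmed by this README.
+ example = records[0]
+ image_path = root / "images" / example.get("image", "")
+ print(image_path, example.get("question"), example.get("answer"))
+ ```
+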
+ ## Citation
+
+ **BibTeX:**
+ ```bibtex
+ @misc{zhang2025spinbenchperspectiverotationlens,
+       title={SpinBench: Perspective and Rotation as a Lens on Spatial Reasoning in VLMs},
+       author={Yuyou Zhang and Radu Corcodel and Chiori Hori and Anoop Cherian and Ding Zhao},
+       year={2025},
+       eprint={2509.25390},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2509.25390},
+ }
+ ```