---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- visual-reasoning
- VQA
- MCQ
pretty_name: 'SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?'
size_categories:
- 1K<n<10K
---

# SpatiaLab: Can Vision–Language Models Perform Spatial Reasoning in the Wild?

<div align=center>

[![ICLR 2026](https://img.shields.io/badge/ICLR-2026-blue)](https://openreview.net/forum?id=fWWUPOb0CT)
[![Project-Website](https://img.shields.io/badge/Project-Website-red)](https://spatialab-reasoning.github.io/)
[![arxiv](https://img.shields.io/badge/-arXiv-blue?style=flat-square&logo=arXiv&color=1f1f15)](http://arxiv.org/abs/2602.03916)
[![Kaggle](https://img.shields.io/badge/Kaggle-%2320beff?style=flat-square&logo=kaggle&color=1f1f18)](#)
[![GitHub](https://img.shields.io/badge/GitHub-%2320beff?style=flat-square&logo=github&color=360893)](https://github.com/SpatiaLab-Reasoning/SpatiaLab)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-%2320beff?style=flat-square&logo=huggingface&color=360893)](https://huggingface.co/datasets/ciol-research/SpatiaLab)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-Paper-%2320beff?style=flat-square&logo=huggingface&color=360893)](https://huggingface.co/papers/2602.03916)

***Azmine Toushik Wasi, Wahid Faisal, Abdur Rahman, Mahfuz Ahmed Anik, Munem Shahriar, Mohsin Mahmud Topu, Sadia Tasnim Meem, Rahatun Nesa Priti, Sabrina Afroz Mitu, Md. Iqramul Hoque, Shahriyar Zaman Ridoy, Mohammed Eunus Ali, Majd Hawasly, Mohammad Raza, Md Rizwan Parvez***

Computational Intelligence and Operations Laboratory (CIOL) • Shahjalal University of Science and Technology (SUST) • Monash University • Qatar Computing Research Institute (QCRI)

*The Fourteenth International Conference on Learning Representations (**ICLR 2026**)*

</div>

---

**SpatiaLab** is a benchmark for evaluating **spatial reasoning** in vision–language models (VLMs) under **real-world, in-the-wild** visual conditions.
It includes **1,400 visual question–answer pairs** across **6 core spatial categories** and **30 subcategories**, supporting both **multiple-choice (MCQ)** and **open-ended** evaluation formats.
SpatiaLab exposes substantial gaps between state-of-the-art VLMs and human performance.
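
To get started, the data can be loaded directly from the Hub with the Hugging Face `datasets` library. A minimal loading sketch; the split name and column names are assumptions, so check `ds.column_names` against the actual schema:

```python
from datasets import load_dataset

# Load SpatiaLab from the Hugging Face Hub.
# The split name "train" is an assumption; list available splits if it differs.
ds = load_dataset("ciol-research/SpatiaLab", split="train")

# Inspect the schema and one example before writing any evaluation code.
print(ds.column_names)
print(ds[0])
```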

---

## Overview

**Spatial reasoning** is fundamental to human intelligence and real-world embodied AI.
SpatiaLab provides a comprehensive evaluation suite (1,400 QA pairs) across six core spatial categories:

- Relative Positioning
- Depth & Occlusion
- Orientation
- Size & Scale
- Spatial Navigation
- 3D Geometry

It is designed to test VLMs in **realistic, unconstrained scenes** and highlights large performance gaps between models and humans.

---

## Benchmark Structure and Categorization

SpatiaLab comprises **1,400** validated QA items organized into **6 main categories** and **30 subcategories** (**5** per category).

| Category | Example sub-tasks (5 each) |
|---|---|
| **Relative Positioning** | Left/Right, Above/Below, Between, Adjacency, Corner/Angle |
| **Depth & Occlusion** | Partial occlusion, Complete occlusion, Layer order, Reflection/visibility, Hidden feature |
| **Orientation** | Rotation angle, Facing, Tilt, Tool handedness, Mirror |
| **Size & Scale** | Relative size, Scale ratio, Big/Small, Proportion, Size consistency |
| **Spatial Navigation** | Path existence, Obstacle avoidance, Turn sequence, Viewpoint visibility, Accessibility |
| **3D Geometry** | 3D containment, Intersection, Volume ordering, Pose matching, Stability |
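
Since each item belongs to one of these categories, results can be sliced per category. A hypothetical sketch, assuming a `category` column with the names from the table above (verify against the real schema first):

```python
from datasets import load_dataset

ds = load_dataset("ciol-research/SpatiaLab", split="train")  # split name assumed

# 'category' is an assumed column name; confirm with ds.column_names.
depth_subset = ds.filter(lambda x: x["category"] == "Depth & Occlusion")
print(f"{len(depth_subset)} Depth & Occlusion items")
```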

---

## Key Takeaways

- **Large human–model gap.** On MCQ, top models score ~55% vs. 87.6% for humans; on open-ended, the best model reaches ~41% vs. ~65% for humans.
- **Open-ended is much harder.** The average drop from MCQ to open-ended is substantial across models.
- **Scale alone is not sufficient.** Some large models remain weak, and small models often cluster near the bottom.
- **Spatial “specialists” don’t necessarily generalize.** Specialized spatial models can underperform broadly, especially in open-ended settings.

---

## Error Analysis Summary

Common failure modes observed across models:

- Spatial mislocalization in cluttered scenes (wrong referents)
- Perspective/scale mistakes (over-reliance on size priors)
- Occlusion and ordering failures (thin/partially hidden structures)
- Fluent but visually ungrounded open-ended answers
- Multi-cue integration failures (depth + size + ordering)
- Poor confidence calibration in open-ended generation

---

## Methods

- **Image sources:** web crawling, targeted retrieval, manual capture
- **Annotation:** trained annotators + 3-stage review/QC
- **Evaluation:**
  - MCQ: option selection + exact match (a scoring sketch follows this list)
  - Open-ended: free-form generation + judge scoring (validated against human agreement)
- **Metrics:** accuracy + agreement measures (e.g., Cohen’s / Fleiss’ kappa reported in the paper)
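
As a concrete illustration of the MCQ protocol, exact match reduces to comparing the option letter a model selects against the gold letter. A minimal sketch, not the paper's official scorer, and the single-letter answer format is an assumption:

```python
import re

def extract_choice(response: str) -> str:
    """Pull the first standalone option letter (A-D) out of a model response.

    Assumes MCQ answers are letters A-D; real model output may need
    more robust parsing than this regex.
    """
    match = re.search(r"\b([A-D])\b", response.strip().upper())
    return match.group(1) if match else ""

def mcq_exact_match(predictions: list[str], gold: list[str]) -> float:
    """Exact-match accuracy over parallel lists of responses and gold letters."""
    hits = sum(extract_choice(p) == g.strip().upper() for p, g in zip(predictions, gold))
    return hits / len(gold)

print(mcq_exact_match(["The answer is B.", "C"], ["B", "A"]))  # 0.5
```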

### Performance Improvement Approaches (Explored)

- Inherent reasoning-enabled models
- Chain-of-Thought (CoT) prompting (a template sketch follows this list)
- CoT + self-reflection
- Supervised fine-tuning (SFT)
- Multi-agent system (SpatioXolver)
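
For reference, CoT prompting in this setting amounts to asking the model to lay out the spatial relations before committing to an option. A hypothetical template, not the exact prompt used in the paper:

```python
COT_TEMPLATE = """You are answering a spatial reasoning question about the image.

Question: {question}
Options:
{options}

First, describe the relevant objects and their spatial relations step by step.
Then give your final answer as a single option letter, e.g. "Answer: B"."""

def build_cot_prompt(question: str, options: list[str]) -> str:
    """Fill the template; A-D option labels are an assumed convention."""
    labeled = "\n".join(f"{chr(65 + i)}. {opt}" for i, opt in enumerate(options))
    return COT_TEMPLATE.format(question=question, options=labeled)

print(build_cot_prompt("Which object is closer to the camera?", ["the mug", "the lamp"]))
```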

---

## Citation

```bibtex
@inproceedings{wasi2026spatialab,
  title={SpatiaLab: Can Vision{\textendash}Language Models Perform Spatial Reasoning in the Wild?},
  author={Azmine Toushik Wasi and Wahid Faisal and Abdur Rahman and Mahfuz Ahmed Anik and Munem Shahriar and Mohsin Mahmud Topu and Sadia Tasnim Meem and Rahatun Nesa Priti and Sabrina Afroz Mitu and Md. Iqramul Hoque and Shahriyar Zaman Ridoy and Mohammed Eunus Ali and Majd Hawasly and Mohammad Raza and Md Rizwan Parvez},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=fWWUPOb0CT}
}
```