---
license: mit
task_categories:
- question-answering
language:
- en
tags:
- visual-reasoning
- VQA
- MCQ
pretty_name: 'SpatiaLab: Can Vision-Language Models Perform Spatial Reasoning in the Wild?'
size_categories:
- 1K<n<10K
---
# SpatiaLab: Can Vision–Language Models Perform Spatial Reasoning in the Wild?
<div align="center">
[![ICLR 2026](https://img.shields.io/badge/ICLR-2026-blue)](https://openreview.net/forum?id=fWWUPOb0CT)
[![Project-Website](https://img.shields.io/badge/Project-Website-red)](https://spatialab-reasoning.github.io/)
[![arxiv](https://img.shields.io/badge/-arXiv-blue?style=flat-square&logo=arXiv&color=1f1f15)](http://arxiv.org/abs/2602.03916)
[![Kaggle](https://img.shields.io/badge/Kaggle-%2320beff?style=flat-square&logo=kaggle&color=1f1f18)](#)
[![GitHub](https://img.shields.io/badge/GitHub-%2320beff?style=flat-square&logo=github&color=360893)](https://github.com/SpatiaLab-Reasoning/SpatiaLab)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-%2320beff?style=flat-square&logo=huggingface&color=360893)](https://huggingface.co/datasets/ciol-research/SpatiaLab)
[![HuggingFace](https://img.shields.io/badge/HuggingFace-Paper-%2320beff?style=flat-square&logo=huggingface&color=360893)](https://huggingface.co/papers/2602.03916)
***Azmine Toushik Wasi, Wahid Faisal, Abdur Rahman, Mahfuz Ahmed Anik, Munem Shahriar, Mohsin Mahmud Topu, Sadia Tasnim Meem, Rahatun Nesa Priti, Sabrina Afroz Mitu, Md. Iqramul Hoque, Shahriyar Zaman Ridoy, Mohammed Eunus Ali, Majd Hawasly, Mohammad Raza, Md Rizwan Parvez***
Computational Intelligence and Operations Laboratory (CIOL) • Shahjalal University of Science and Technology (SUST) • Monash University • Qatar Computing Research Institute (QCRI)
*The Fourteenth International Conference on Learning Representations (**ICLR 2026**)*
</div>
---
**SpatiaLab** is a benchmark for evaluating **spatial reasoning** in vision–language models (VLMs) under **real-world, in-the-wild** visual conditions.
It includes **1,400 visual question–answer pairs** across **6 core spatial categories** and **30 subcategories**, supporting both **multiple-choice (MCQ)** and **open-ended** evaluation formats.
SpatiaLab exposes substantial gaps between state-of-the-art VLMs and human performance.
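A minimal usage sketch for working with the dataset. The field names (`question`, `options`) and the prompt wording are assumptions for illustration, not the card's confirmed schema; only the repository id `ciol-research/SpatiaLab` comes from the links above.

```python
# Sketch: load SpatiaLab and format one MCQ item as a prompt.
# Field names ("question", "options") are ASSUMED, not confirmed by the card.

def format_mcq_prompt(question, options):
    """Render a question and its candidate answers as one MCQ prompt string."""
    lines = [question]
    for letter, option in zip("ABCDEFGH", options):
        lines.append(f"{letter}. {option}")
    return "\n".join(lines)

def load_spatialab():
    """Fetch the dataset from the Hub (requires `pip install datasets` and network)."""
    from datasets import load_dataset  # local import keeps the sketch testable offline
    return load_dataset("ciol-research/SpatiaLab")
```

Calling `format_mcq_prompt("Which object is closer?", ["the cup", "the lamp"])` yields a question followed by lettered options, which can be fed directly to a VLM alongside the image.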
---
## Overview
**Spatial reasoning** is fundamental to human intelligence and real-world embodied AI.
SpatiaLab provides a comprehensive evaluation suite (1,400 QA pairs) across six core spatial categories:
- Relative Positioning
- Depth & Occlusion
- Orientation
- Size & Scale
- Spatial Navigation
- 3D Geometry
It is designed to test VLMs in **realistic, unconstrained scenes** and highlights large performance gaps between models and humans.
---
## Benchmark Structure and Categorization
SpatiaLab comprises **1,400** validated QA items organized into **6 main categories** and **30 subcategories** (**5** subcategories each).
| Category | Example sub-tasks (5 each) |
|---|---|
| **Relative Positioning** | Left/Right, Above/Below, Between, Adjacency, Corner/Angle |
| **Depth & Occlusion** | Partial occlusion, Complete occlusion, Layer order, Reflection/visibility, Hidden feature |
| **Orientation** | Rotation angle, Facing, Tilt, Tool handedness, Mirror |
| **Size & Scale** | Relative size, Scale ratio, Big/Small, Proportion, Size consistency |
| **Spatial Navigation** | Path existence, Obstacle avoidance, Turn sequence, Viewpoint visibility, Accessibility |
| **3D Geometry** | 3D containment, Intersection, Volume ordering, Pose matching, Stability |
---
## Key Takeaways
- **Large human–model gap.**
MCQ: top models ~55% vs humans 87.6%.
Open-ended: best ~41% vs humans ~65%.
- **Open-ended is much harder.**
Average MCQ → Open-ended drop is substantial across models.
- **Scale alone is not sufficient.**
Some large models remain weak; small models often cluster near the bottom.
- **Spatial “specialists” don’t necessarily generalize.**
Specialized spatial models can underperform broadly, especially in open-ended settings.
---
## Error Analysis Summary
Common failure modes observed across models:
- Spatial mislocalization in cluttered scenes (wrong referents)
- Perspective/scale mistakes (over-reliance on size priors)
- Occlusion and ordering failures (thin/partially hidden structures)
- Fluent but visually ungrounded open-ended answers
- Multi-cue integration failures (depth + size + ordering)
- Poor confidence calibration in open-ended generation
---
## Methods
- **Image sources:** web crawling, targeted retrieval, manual capture
- **Annotation:** trained annotators + 3-stage review/QC
- **Evaluation:**
- MCQ: option selection + exact match
- Open-ended: free-form generation + judge scoring (validated against human agreement)
- **Metrics:** accuracy plus agreement measures (e.g., Cohen's/Fleiss' kappa, reported in the paper)
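The MCQ scoring and agreement measures above can be sketched as follows; this is a generic implementation of exact-match accuracy and Cohen's kappa, not the paper's exact evaluation code.

```python
# Sketch: exact-match MCQ scoring and Cohen's kappa for rater agreement.
from collections import Counter

def exact_match_accuracy(preds, golds):
    """Fraction of predictions that match the gold option letter (case-insensitive)."""
    assert len(preds) == len(golds)
    hits = sum(p.strip().upper() == g.strip().upper() for p, g in zip(preds, golds))
    return hits / len(golds)

def cohens_kappa(a, b):
    """Chance-corrected agreement between two label sequences."""
    assert len(a) == len(b)
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    counts_a, counts_b = Counter(a), Counter(b)
    p_expected = sum(counts_a[l] * counts_b[l] for l in set(a) | set(b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected) if p_expected < 1 else 1.0
```

Kappa discounts the agreement two raters would reach by chance, which is why it is preferred over raw accuracy for validating judge scoring against human annotators.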
### Performance Improvement Approaches (Explored)
- Inherent reasoning-enabled models
- Chain-of-Thought (CoT) prompting
- CoT + self-reflection
- Supervised fine-tuning (SFT)
- Multi-agent system (SpatioXolver)
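Of the approaches above, CoT prompting is the simplest to reproduce. A minimal wrapper is sketched below; the instruction wording is illustrative and not the paper's actual prompt.

```python
# Sketch: a simple chain-of-thought wrapper for a spatial MCQ question.
# The prompt text is an ASSUMPTION, not the paper's prompt.

def cot_prompt(question):
    """Wrap a spatial question with a step-by-step reasoning instruction."""
    return (
        "Look at the image carefully.\n"
        f"Question: {question}\n"
        "Reason step by step about the spatial layout (positions, depth, "
        "orientation) before answering.\n"
        "End with 'Final answer: <option letter>'."
    )
```

A self-reflection variant would append the model's first answer and ask it to re-examine the image before committing to a final choice.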
---
## Citation
```bibtex
@inproceedings{
wasi2026spatialab,
title={SpatiaLab: Can Vision{\textendash}Language Models Perform Spatial Reasoning in the Wild?},
author={Azmine Toushik Wasi and Wahid Faisal and Abdur Rahman and Mahfuz Ahmed Anik and Munem Shahriar and Mohsin Mahmud Topu and Sadia Tasnim Meem and Rahatun Nesa Priti and Sabrina Afroz Mitu and Md. Iqramul Hoque and Shahriyar Zaman Ridoy and Mohammed Eunus Ali and Majd Hawasly and Mohammad Raza and Md Rizwan Parvez},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=fWWUPOb0CT}
}
```