
SpatiaLab: Can Vision–Language Models Perform Spatial Reasoning in the Wild?

ICLR 2026 · Project Website · arXiv · Kaggle · GitHub · Hugging Face

Azmine Toushik Wasi, Wahid Faisal, Abdur Rahman, Mahfuz Ahmed Anik, Munem Shahriar, Mohsin Mahmud Topu, Sadia Tasnim Meem, Rahatun Nesa Priti, Sabrina Afroz Mitu, Md. Iqramul Hoque, Shahriyar Zaman Ridoy, Mohammed Eunus Ali, Majd Hawasly, Mohammad Raza, Md Rizwan Parvez

Computational Intelligence and Operations Laboratory (CIOL) • Shahjalal University of Science and Technology (SUST) • Monash University • Qatar Computing Research Institute (QCRI)

The Fourteenth International Conference on Learning Representations (ICLR 2026)


SpatiaLab is a benchmark for evaluating spatial reasoning in vision–language models (VLMs) under real-world, in-the-wild visual conditions.
It includes 1,400 visual question–answer pairs across 6 core spatial categories and 30 subcategories, supporting both multiple-choice (MCQ) and open-ended evaluation formats.
SpatiaLab exposes substantial gaps between state-of-the-art VLMs and human performance.
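The dataset can be loaded with the Hugging Face datasets library. The snippet below is a minimal loading sketch: the split name ("train") and the field names printed are assumptions, not confirmed by this card, and should be checked against the dataset viewer.

```python
# Minimal loading sketch (assumes the standard `datasets` API; the split
# name "train" and the column names are assumptions used for illustration).
from datasets import load_dataset

ds = load_dataset("ciol-research/SpatiaLab", split="train")

sample = ds[0]
print(sample.keys())              # inspect which fields are actually present
print(type(sample.get("image")))  # the card describes an image modality
```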


Overview

Spatial reasoning is fundamental to human intelligence and real-world embodied AI.
SpatiaLab provides a comprehensive evaluation suite (1,400 QA pairs) across six core spatial categories:

  • Relative Positioning
  • Depth & Occlusion
  • Orientation
  • Size & Scale
  • Spatial Navigation
  • 3D Geometry

It is designed to test VLMs in realistic, unconstrained scenes and highlights large performance gaps between models and humans.


Benchmark Structure and Categorization

SpatiaLab comprises 1,400 validated QA items organized into 6 main categories and 30 subcategories (5 subcategories each).

Example sub-tasks (5 per category):

  • Relative Positioning: Left/Right, Above/Below, Between, Adjacency, Corner/Angle
  • Depth & Occlusion: Partial occlusion, Complete occlusion, Layer order, Reflection/visibility, Hidden feature
  • Orientation: Rotation angle, Facing, Tilt, Tool handedness, Mirror
  • Size & Scale: Relative size, Scale ratio, Big/Small, Proportion, Size consistency
  • Spatial Navigation: Path existence, Obstacle avoidance, Turn sequence, Viewpoint visibility, Accessibility
  • 3D Geometry: 3D containment, Intersection, Volume ordering, Pose matching, Stability
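For per-category analysis, items can be filtered by category. This is a hypothetical sketch: the column names "category" and "question" are assumptions about the schema, used only for illustration.

```python
# Hypothetical per-category filtering sketch; "category" and "question"
# are assumed column names, not confirmed by this card.
from datasets import load_dataset

ds = load_dataset("ciol-research/SpatiaLab", split="train")

depth_items = ds.filter(lambda ex: ex["category"] == "Depth & Occlusion")
print(len(depth_items), "Depth & Occlusion items")
print(depth_items[0]["question"])
```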

Key Takeaways

  • Large human–model gap.
    MCQ: top models ~55% vs humans 87.6%.
    Open-ended: best ~41% vs humans ~65%.

  • Open-ended is much harder.
    Average MCQ → Open-ended drop is substantial across models.

  • Scale alone is not sufficient.
    Some large models remain weak; small models often cluster near the bottom.

  • Spatial “specialists” don’t necessarily generalize.
    Specialized spatial models can underperform broadly, especially in open-ended settings.


Error Analysis Summary

Common failure modes observed across models:

  • Spatial mislocalization in cluttered scenes (wrong referents)
  • Perspective/scale mistakes (over-reliance on size priors)
  • Occlusion and ordering failures (thin/partially hidden structures)
  • Fluent but visually ungrounded open-ended answers
  • Multi-cue integration failures (depth + size + ordering)
  • Poor confidence calibration in open-ended generation

Methods

  • Image sources: web crawling, targeted retrieval, manual capture
  • Annotation: trained annotators + 3-stage review/QC
  • Evaluation:
    • MCQ: option selection + exact match (a scoring sketch follows this list)
    • Open-ended: free-form generation + judge scoring (validated against human agreement)
  • Metrics: accuracy + agreement measures (e.g., Cohen’s / Fleiss’ kappa reported in paper)
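The MCQ protocol scores the selected option by exact match against the gold answer. Below is a minimal scoring sketch; the normalization step (trimming and upper-casing the option letter) is an assumption, not the paper's exact implementation.

```python
# Minimal MCQ exact-match accuracy sketch. Mirrors the "option selection +
# exact match" protocol above; the normalization rule is an assumption.
def exact_match_accuracy(predictions, references):
    """Fraction of predictions matching the gold option letter after
    stripping whitespace and upper-casing (e.g. " b" -> "B")."""
    assert len(predictions) == len(references)
    norm = lambda s: s.strip().upper()
    correct = sum(norm(p) == norm(r) for p, r in zip(predictions, references))
    return correct / len(references)

print(exact_match_accuracy(["A", "c", "B"], ["A", "C", "D"]))  # ~0.67
```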

Performance Improvement Approaches (Explored)

  • Inherent reasoning-enabled models
  • Chain-of-Thought (CoT) prompting (an illustrative prompt sketch follows this list)
  • CoT + self-reflection
  • Supervised fine-tuning (SFT)
  • Multi-agent system (SpatioXolver)
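To make the CoT variant concrete, here is an illustrative prompt template for a spatial MCQ item. It is a hypothetical sketch, not the exact prompt used in the paper; the question and options are invented examples.

```python
# Illustrative Chain-of-Thought prompt template for a spatial MCQ item.
# Hypothetical sketch of the CoT variant listed above, not the paper's prompt.
COT_TEMPLATE = (
    "You are answering a spatial reasoning question about the given image.\n"
    "Question: {question}\n"
    "Options: {options}\n"
    "Think step by step about the spatial relations in the image, then end\n"
    "with a line of the form 'Answer: <option letter>'."
)

prompt = COT_TEMPLATE.format(
    question="Which object is closer to the camera?",
    options="A) the red mug  B) the laptop  C) the window  D) cannot tell",
)
print(prompt)
```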

Citation

@inproceedings{
  wasi2026spatialab,
  title={SpatiaLab: Can Vision{\textendash}Language Models Perform Spatial Reasoning in the Wild?},
  author={Azmine Toushik Wasi and Wahid Faisal and Abdur Rahman and Mahfuz Ahmed Anik and Munem Shahriar and Mohsin Mahmud Topu and Sadia Tasnim Meem and Rahatun Nesa Priti and Sabrina Afroz Mitu and Md. Iqramul Hoque and Shahriyar Zaman Ridoy and Mohammed Eunus Ali and Majd Hawasly and Mohammad Raza and Md Rizwan Parvez},
  booktitle={The Fourteenth International Conference on Learning Representations},
  year={2026},
  url={https://openreview.net/forum?id=fWWUPOb0CT}
}