|
|
--- |
|
|
license: apache-2.0 |
|
|
task_categories: |
|
|
- question-answering |
|
|
size_categories: |
|
|
- 1K<n<10K |
|
|
|
|
|
configs: |
|
|
- config_name: SPBench-SI |
|
|
data_files: |
|
|
- split: test |
|
|
path: SPBench-SI.parquet |
|
|
- config_name: SPBench-MV |
|
|
data_files: |
|
|
- split: test |
|
|
path: SPBench-MV.parquet |
|
|
--- |
|
|
|
|
|
<a href="https://arxiv.org/pdf/2510.08531" target="_blank"> |
|
|
<img alt="arXiv" src="https://img.shields.io/badge/arXiv-SpatialLadder-red?logo=arxiv" height="20" /> |
|
|
</a> |
|
|
<a href="https://zju-real.github.io/SpatialLadder/" target="_blank"> |
|
|
<img alt="Website" src="https://img.shields.io/badge/🌎_Website-SpaitalLadder-blue.svg" height="20" /> |
|
|
</a> |
|
|
<a href="https://github.com/ZJU-REAL/SpatialLadder" target="_blank"> |
|
|
<img alt="Code" src="https://img.shields.io/badge/Code-SpaitalLadder-white?logo=github" height="20" /> |
|
|
</a> |
|
|
|
|
|
<a href="https://huggingface.co/hongxingli/SpatialLadder-3B" target="_blank"> |
|
|
<img alt="Model" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Model-SpatialLadder--3B-ffc107?color=ffc107&logoColor=white" height="20" /> |
|
|
</a> |
|
|
<a href="https://huggingface.co/datasets/hongxingli/SpatialLadder-26k" target="_blank"> |
|
|
<img alt="Data" src="https://img.shields.io/badge/%F0%9F%A4%97%20_Data-SpatialLadder--26k-ffc107?color=ffc107&logoColor=white" height="20" /> |
|
|
</a> |
|
|
|
|
|
</div> |
|
|
|
|
|
# Spatial Perception and Reasoning Benchmark (SPBench) |
|
|
|
|
|
This repository contains the Spatial Perception and Reasoning Benchmark (SPBench), introduced in [SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models](https://arxiv.org/abs/2510.08531).
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
SPBench is a comprehensive evaluation suite designed to assess the spatial perception and reasoning capabilities of Vision-Language Models (VLMs). SPBench consists of two complementary benchmarks, SPBench-SI and SPBench-MV, corresponding to single-image and multi-view modalities, respectively. Both benchmarks are constructed with a standardized pipeline applied to the ScanNet validation set, ensuring systematic coverage of diverse spatial reasoning tasks.
|
|
|
|
|
- SPBench-SI serves as a single-image evaluation benchmark that measures models’ ability to perform spatial understanding and reasoning from individual viewpoints. It covers four task categories (absolute distance, object size, relative distance, and relative direction) and contains a total of 1,009 samples.
|
|
- SPBench-MV focuses on multi-view spatial reasoning, requiring models to jointly reason about spatial relationships across multiple viewpoints. In addition, it includes object counting tasks that evaluate the ability to identify and enumerate objects in multi-view scenarios, and contains a total of 319 samples.
|
|
|
|
|
Both benchmarks undergo rigorous quality control through a combination of standardized pipeline filtering strategies and manual curation, ensuring unambiguous questions and high-quality annotations suitable for reliable evaluation.
|
|
|
|
|
## Usage |
|
|
|
|
|
You can directly load the dataset from Hugging Face using the `datasets` library. |
|
|
SPBench can be loaded as a whole or as either of its two configurations:
|
|
|
|
|
```python |
|
|
from datasets import load_dataset |
|
|
|
|
|
# Load both benchmarks
|
|
dataset = load_dataset("hongxingli/SPBench") |
|
|
|
|
|
# Load SPBench-SI only
|
|
dataset = load_dataset("hongxingli/SPBench", name="SPBench-SI") |
|
|
|
|
|
# Load SPBench-MV only
|
|
dataset = load_dataset("hongxingli/SPBench", name="SPBench-MV") |
|
|
``` |
|
|
|
|
|
The image resources required by the benchmarks are provided in `SPBench-SI-images.zip`
and `SPBench-MV-images.zip`, which contain the complete image sets for SPBench-SI and SPBench-MV, respectively.
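
A minimal sketch for fetching and unpacking these archives with `huggingface_hub`, assuming the archives sit at the top level of this dataset repository and extracting into a local `images/` directory:

```python
from huggingface_hub import hf_hub_download
import zipfile

# Download both image archives from the dataset repository and unpack them locally.
for archive in ["SPBench-SI-images.zip", "SPBench-MV-images.zip"]:
    local_path = hf_hub_download(
        repo_id="hongxingli/SPBench",  # same repository as in the loading example above
        filename=archive,
        repo_type="dataset",
    )
    with zipfile.ZipFile(local_path) as zf:
        zf.extractall("images")  # adjust the target directory to your setup
```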
|
|
|
|
|
## Evaluation |
|
|
|
|
|
SPBench evaluates performance with two metrics. For multiple-choice questions, we report `Accuracy`, computed by exact match. For numerical questions, we report `MRA` (Mean Relative Accuracy), introduced by [Thinking in Space](https://github.com/vision-x-nyu/thinking-in-space), which measures how closely model predictions align with ground-truth values.
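
For reference, a minimal sketch of both metrics, assuming the standard MRA threshold set {0.50, 0.55, ..., 0.95} from Thinking in Space; the evaluation code in our GitHub repository remains the reference implementation:

```python
import numpy as np

def exact_match_accuracy(predictions, answers):
    # Accuracy for multiple-choice questions: exact match on the chosen option.
    matches = [p.strip().lower() == a.strip().lower() for p, a in zip(predictions, answers)]
    return sum(matches) / len(matches)

def mean_relative_accuracy(pred, gt, thresholds=np.arange(0.50, 1.00, 0.05)):
    # MRA for a single numerical prediction: the prediction counts as correct at
    # threshold theta when |pred - gt| / gt < 1 - theta; the score averages this
    # indicator over theta in {0.50, 0.55, ..., 0.95}.
    rel_err = abs(pred - gt) / abs(gt)
    return float(np.mean([rel_err < 1.0 - t for t in thresholds]))
```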
|
|
|
|
|
The evaluation code and usage guidelines are available in our [GitHub repository](https://github.com/ZJU-REAL/SpatialLadder). For comprehensive details, please refer to our paper and the repository documentation. |
|
|
|
|
|
## Citation |
|
|
|
|
|
```bibtex |
|
|
@misc{li2025spatialladderprogressivetrainingspatial, |
|
|
title={SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models}, |
|
|
author={Hongxing Li and Dingming Li and Zixuan Wang and Yuchen Yan and Hang Wu and Wenqi Zhang and Yongliang Shen and Weiming Lu and Jun Xiao and Yueting Zhuang}, |
|
|
year={2025}, |
|
|
eprint={2510.08531}, |
|
|
archivePrefix={arXiv}, |
|
|
primaryClass={cs.CV}, |
|
|
url={https://arxiv.org/abs/2510.08531}, |
|
|
} |
|
|
``` |
|
|
|
|
|
|
|
|
|
|
|
|