This repository contains the Spatial Perception and Reasoning Benchmark (SPBench).
SPBench is a comprehensive evaluation suite designed to assess the spatial perception and reasoning capabilities of Vision-Language Models (VLMs). SPBench consists of two complementary benchmarks, SPBench-SI and SPBench-MV, corresponding to single-image and multi-view modalities, respectively. Both benchmarks are constructed using the standardized pipeline applied to the ScanNet validation set, ensuring systematic coverage across diverse spatial reasoning tasks.
- SPBench-SI serves as a single-image evaluation benchmark that measures models’ ability to perform spatial understanding and reasoning from individual viewpoints. It covers four task categories (absolute distance, object size, relative distance, and relative direction), with a total of 1,009 samples.
- SPBench-MV focuses on multi-view spatial reasoning, requiring models to jointly reason about spatial relationships across multiple viewpoints. It also includes object counting tasks to evaluate models’ ability to identify and enumerate objects in multi-view scenarios, with a total of 319 samples.
Both benchmarks undergo rigorous quality control through a combination of standardized pipeline filtering strategies and manual curation, ensuring unambiguous data and high-quality annotations suitable for reliable evaluation.
## Usage
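Evaluation on SPBench reduces to comparing model predictions against ground-truth answers within each task category. The sketch below shows a per-category exact-match accuracy computation; the record fields (`category`, `answer`, `prediction`) and the sample values are illustrative assumptions, not the dataset's actual schema — consult the dataset files for the real field names.

```python
from collections import defaultdict

# Hypothetical SPBench-SI records: field names and values are assumptions
# for illustration only, not the dataset's real schema.
samples = [
    {"category": "absolute_distance",  "answer": "2.5m", "prediction": "2.5m"},
    {"category": "object_size",        "answer": "0.8m", "prediction": "1.0m"},
    {"category": "relative_distance",  "answer": "A",    "prediction": "A"},
    {"category": "relative_direction", "answer": "left", "prediction": "left"},
]

def per_category_accuracy(records):
    """Exact-match accuracy, broken down by task category."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["category"]] += 1
        correct[r["category"]] += int(r["prediction"] == r["answer"])
    return {c: correct[c] / total[c] for c in total}

print(per_category_accuracy(samples))
```

Reporting accuracy per category (rather than one pooled number) keeps the four SPBench-SI task types separately comparable, since category sizes differ.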