Commit cb02c9f (verified) by Hongxing Li, parent b2b52aa: Update README.md
Files changed (1): README.md (+37 -4)
</div>

# Spatial Perception and Reasoning Benchmark (SPBench)

This repository contains the Spatial Perception and Reasoning Benchmark (SPBench), introduced in [SpatialLadder: Progressive Training for Spatial Reasoning in Vision-Language Models]().

## Dataset Description

SPBench is a comprehensive evaluation suite designed to assess the spatial perception and reasoning capabilities of Vision-Language Models (VLMs). It consists of two complementary benchmarks, SPBench-SI and SPBench-MV, covering single-image and multi-view modalities, respectively. Both benchmarks are constructed with a standardized pipeline applied to the ScanNet validation set, ensuring systematic coverage of diverse spatial reasoning tasks.

SPBench-SI is a single-image benchmark that measures a model's ability to perform spatial understanding and reasoning from an individual viewpoint. It spans four task categories (absolute distance, object size, relative distance, and relative direction) and contains 1,009 samples in total. In contrast, SPBench-MV focuses on multi-view spatial reasoning, requiring models to jointly reason about spatial relationships across multiple viewpoints; it additionally includes object counting tasks that evaluate a model's ability to identify and enumerate objects across views, for a total of 319 samples. Both benchmarks undergo rigorous quality control through a combination of pipeline filtering strategies and manual curation, yielding unambiguous, high-quality annotations suitable for reliable evaluation.

## Usage

You can load the dataset directly from Hugging Face using the `datasets` library.
SPBench can be accessed in three configurations:

```python
from datasets import load_dataset

# Load both benchmarks at once
dataset = load_dataset("Gino319/SPBench")

# Load SPBench-SI only
dataset = load_dataset("Gino319/SPBench", data_files="SPBench-SI.parquet")

# Load SPBench-MV only
dataset = load_dataset("Gino319/SPBench", data_files="SPBench-MV.parquet")
```

The image resources required by the benchmarks are provided in `SPBench-SI-images.zip` and `SPBench-MV-images.zip`, which contain the complete image sets for SPBench-SI and SPBench-MV, respectively.
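The archives can be unpacked with the Python standard library; a minimal sketch (the archive paths and the `images/` output directory below are just illustrative defaults, not names mandated by the benchmark):

```python
import zipfile
from pathlib import Path


def extract_images(archives=("SPBench-SI-images.zip", "SPBench-MV-images.zip"),
                   out_dir="images"):
    """Unpack the benchmark image archives into out_dir and return its path."""
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)  # create the target directory if needed
    for archive in archives:
        with zipfile.ZipFile(archive) as zf:
            zf.extractall(out)  # preserves the archive's internal folder layout
    return out
```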

## Evaluation

SPBench reports two metrics: for multiple-choice questions, `Accuracy`, computed by exact match; for numerical questions, `MRA` (Mean Relative Accuracy), introduced by [Thinking in Space](https://github.com/vision-x-nyu/thinking-in-space), which assesses how closely model predictions align with ground-truth values.
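As an illustration, MRA averages a correctness indicator over a sweep of relative-error thresholds. The sketch below follows the formulation in the Thinking in Space paper (the 0.50–0.95 threshold grid is that paper's choice); treat the official evaluation code in the GitHub repository as authoritative:

```python
def mean_relative_accuracy(pred: float, gt: float, thresholds=None) -> float:
    """Sketch of Mean Relative Accuracy (MRA) for one numerical answer.

    A prediction counts as correct at confidence threshold t when its
    relative error |pred - gt| / |gt| stays below 1 - t; MRA averages
    this indicator over a grid of thresholds (0.50, 0.55, ..., 0.95).
    """
    if thresholds is None:
        thresholds = [0.50 + 0.05 * i for i in range(10)]
    rel_err = abs(pred - gt) / abs(gt)
    return sum(rel_err < 1.0 - t for t in thresholds) / len(thresholds)
```

An exact prediction scores 1.0, a wildly wrong one 0.0, and near-misses fall in between, which makes MRA far less brittle than exact matching on continuous quantities such as distances.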

The evaluation code and usage guidelines are available in our [GitHub repository](https://github.com/ZJU-REAL/SpatialLadder). For comprehensive details, please refer to our paper and the repository documentation.

## Citation

```bibtex
```