  - split: test
    path: data/test-*
---

<p align="left">
  <a href="https://github.com/fudan-zvg/spar.git">
    <img alt="GitHub Code" src="https://img.shields.io/badge/Code-spar-black?&logo=github&logoColor=white" />
  </a>
  <a href="https://arxiv.org/abs/xxx">
    <img alt="arXiv" src="https://img.shields.io/badge/arXiv-spar-red?logo=arxiv" />
  </a>
  <a href="https://fudan-zvg.github.io/spar">
    <img alt="Website" src="https://img.shields.io/badge/🌎_Website-spar-blue" />
  </a>
</p>

# 🎯 Spatial Perception And Reasoning Benchmark (SPAR-Bench)

> A benchmark to evaluate **spatial perception and reasoning** in vision-language models (VLMs), with high-quality QA across 20 diverse tasks.

**SPAR-Bench** is a high-quality benchmark for evaluating spatial perception and reasoning in vision-language models (VLMs). It covers 20 diverse spatial tasks across single-view, multi-view, and video settings, with a total of **7,207 manually verified QA pairs**.

SPAR-Bench is derived from the large-scale [SPAR-7M](https://huggingface.co/datasets/jasonzhango/SPAR-7M) dataset and is specifically designed to support **zero-shot evaluation** and **task-specific analysis**.

> 📌 SPAR-Bench at a glance:
> - ✅ 7,207 manually verified QA pairs
> - 🧠 20 spatial tasks (depth, distance, relation, imagination, etc.)
> - 🎥 Supports single-view, multi-view, and video inputs
> - 📏 Two evaluation metrics: Accuracy & MRA
> - 📷 Available in RGB-only and RGB-D versions

## 🧱 Available Variants

We provide **four versions of SPAR-Bench**, covering both RGB-only and RGB-D settings, as well as full-size and lightweight variants:

| Dataset Name | Description |
|--------------|-------------|
| [`SPAR-Bench`](https://huggingface.co/datasets/jasonzhango/SPAR-Bench) | Full benchmark (7,207 QA pairs) with RGB images |
| [`SPAR-Bench-RGBD`](https://huggingface.co/datasets/jasonzhango/SPAR-Bench-RGBD) | Full benchmark with depth maps, camera poses, and intrinsics |
| [`SPAR-Bench-Tiny`](https://huggingface.co/datasets/jasonzhango/SPAR-Bench-Tiny) | 1,000-sample subset (50 QA pairs per task) for fast evaluation or API use |
| [`SPAR-Bench-Tiny-RGBD`](https://huggingface.co/datasets/jasonzhango/SPAR-Bench-Tiny-RGBD) | Tiny version with RGB-D inputs |

> 🔎 Tiny versions are designed for quick evaluation (e.g., APIs, human studies).
> 💡 RGB-D versions include depth maps, camera poses, and intrinsics, making them suitable for 3D-aware models.

To load a different version via `datasets`, simply change the dataset name:

```python
from datasets import load_dataset

# Full benchmark (7,207 QA pairs) with RGB images
spar = load_dataset("jasonzhango/SPAR-Bench")
# Full benchmark with depth maps, camera poses, and intrinsics
spar_rgbd = load_dataset("jasonzhango/SPAR-Bench-RGBD")
# 1,000-sample subsets for quick evaluation
spar_tiny = load_dataset("jasonzhango/SPAR-Bench-Tiny")
spar_tiny_rgbd = load_dataset("jasonzhango/SPAR-Bench-Tiny-RGBD")
```
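
After loading, a quick way to sanity-check the data is to print the split layout and the fields of one example. This is a minimal sketch; it assumes only the `test` split declared in the dataset config above and makes no assumptions about field names:

```python
from datasets import load_dataset

spar = load_dataset("jasonzhango/SPAR-Bench")

print(spar)                     # DatasetDict showing the available splits and sizes
example = spar["test"][0]       # first QA pair of the test split
print(sorted(example.keys()))   # inspect the schema rather than assuming field names
```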

## 🕹️ Evaluation

SPAR-Bench supports two evaluation metrics, depending on the question type:

- **Accuracy** – for multiple-choice questions (exact match)
- **Mean Relative Accuracy (MRA)** – for numerical-answer questions (e.g., depth, distance); see the sketch below

> 🧠 The MRA metric is inspired by the design in [Thinking in Space](https://github.com/vision-x-nyu/thinking-in-space), and is tailored for spatial reasoning tasks involving quantities like distance and depth.
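
To make the two metrics concrete, here is a minimal scoring sketch. It is an illustration, not the official evaluation code: the exact-match rule for Accuracy and the threshold set {0.50, 0.55, ..., 0.95} for MRA follow the common formulation from Thinking in Space, and the function names are ours:

```python
import numpy as np

def accuracy(pred: str, gt: str) -> float:
    """Exact match for multiple-choice questions (illustrative)."""
    return float(pred.strip().lower() == gt.strip().lower())

def mean_relative_accuracy(pred: float, gt: float,
                           thresholds=np.arange(0.50, 1.00, 0.05)) -> float:
    """Fraction of confidence thresholds t for which the relative error
    |pred - gt| / |gt| stays below 1 - t (numerical-answer questions)."""
    rel_err = abs(pred - gt) / abs(gt)
    return float(np.mean([rel_err < (1.0 - t) for t in thresholds]))

print(accuracy("B", "B"))                # 1.0
print(mean_relative_accuracy(2.1, 2.0))  # 0.9 -- small relative error passes 9 of 10 thresholds
```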

We provide an **evaluation pipeline** in our [GitHub repository](https://github.com/hutchinsonian/spar), built on top of [lmms-eval](https://github.com/EvolvingLMMs-Lab/lmms-eval).

## 📚 BibTeX

If you find this project or dataset helpful, please consider citing our paper:

```bibtex
@article{zhang2025from,
  title={From Flatland to Space: Teaching Vision-Language Models to Perceive and Reason in 3D},
  author={Zhang, Jiahui and Chen, Yurui and Zhou, Yanpeng and Xu, Yueming and Huang, Ze and Mei, Jilin and Chen, Junhui and Yuan, Yujie and Cai, Xinyue and Huang, Guowei and Quan, Xingyue and Xu, Hang and Zhang, Li},
  year={2025},
  journal={arXiv preprint arXiv:xx},
}
```

<!-- ## 📄 License

This dataset is licensed under the **Creative Commons Attribution 4.0 International (CC BY 4.0)** license.

You may use, share, modify, and redistribute this dataset **for any purpose**, including commercial use, as long as proper attribution is given.

[Learn more](https://creativecommons.org/licenses/by/4.0/) -->