---
license: mit
task_categories:
- image-text-to-text
tags:
- multimodal
- benchmark
- spatial-reasoning
- indoor-scenes
---

This repository contains the dataset for the paper [SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence](https://huggingface.co/papers/2506.07966).
<div align="center">
<h1><img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/space-10-logo.png" width="8%"> SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence</h1>
</div>

**GitHub Repository:** [https://github.com/Cuzyoung/SpaCE-10](https://github.com/Cuzyoung/SpaCE-10)
---
# 🧠 What is SpaCE-10?

**SpaCE-10** is a **compositional spatial intelligence benchmark** for evaluating **Multimodal Large Language Models (MLLMs)** in indoor environments. Our contributions are as follows:
- 🧬 We define an **Atomic Capability Pool**, proposing 10 **atomic spatial capabilities**.
- Based on compositions of different atomic capabilities, we design **8 compositional QA types** (see the illustrative record sketch below).
- The SpaCE-10 benchmark contains 5,000+ QA pairs.
- All QA pairs come from 811 indoor scenes (ScanNet++, ScanNet, 3RScan, ARKitScene).
- SpaCE-10 spans both 2D and 3D MLLM evaluations and can be seamlessly adapted to MLLMs that accept 3D scan input.
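To make the QA format concrete, here is a purely illustrative sketch of what a single-choice record might look like. The field names and values below are hypothetical, not the actual schema; inspect the released data to confirm the real layout:

```python
# Hypothetical SpaCE-10 QA record -- field names are illustrative only,
# not the actual dataset schema.
example_qa = {
    "scene_id": "scannetpp_0001",  # hypothetical identifier of the source indoor scene
    "question": "Which object is closest to the sofa?",
    "options": ["A. table", "B. lamp", "C. chair", "D. rug"],
    "answer": "C",                 # ground-truth option for single-choice evaluation
    "qa_type": "compositional",    # one of the 8 compositional QA types
}
```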
<div align="center">
<br><br>
<img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/space-10-teaser.png" width="100%">
<br><br>
</div>
---
# 🔥🔥🔥 News
- [2025/07/12] Adjusted some QAs of SpaCE-10 and added RemyxAI models' performance to the leaderboard.
- [2025/06/11] Scans for 3D MLLMs and our manually collected 3D snapshots are coming soon.
- [2025/06/10] Evaluation code is released (see the Environment and Evaluation sections below).
- [2025/06/09] We have released the benchmark for 2D MLLMs at [Hugging Face](https://huggingface.co/datasets/Cusyoung/SpaCE-10).
- [2025/06/09] The SpaCE-10 paper is released on [arXiv](https://arxiv.org/abs/2506.07966v1)!
---
# Performance Leaderboard - Single-Choice
LLaVA-OneVision-72B ranks first among all tested models.

GPT-4o achieves the best score among the tested closed-source models.

A large gap still exists between humans and models in compositional spatial intelligence.
<div align="center">
<img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/Perfomance_Leader_Board.png" width="100%">
<br>
</div>
# Single-Choice vs. Double-Choice
<div align="center">
<img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/single-double.png" width="100%">
<br>
</div>
# Capability Score Ranking - Single-Choice
<div align="center">
<img src="https://raw.githubusercontent.com/Cuzyoung/SpaCE-10/main/assets/Capability_Score_Matrix.png" width="100%">
<br>
</div>
# Environment
The evaluation of SpaCE-10 is built on lmms-eval, so we follow the environment settings of lmms-eval.
```bash
# Clone the SpaCE-10 repository and install it into a fresh virtual environment.
git clone https://github.com/Cuzyoung/SpaCE-10.git
cd SpaCE-10
uv venv dev --python=3.10
source dev/bin/activate
uv pip install -e .
```
# Evaluation
Take InternVL2.5-8B as an example:
```bash
# Run the prepared evaluation script for the chosen model.
cd lmms-eval/run_bash
bash internvl2.5-8b.sh
```
Note that each time you test a new model, you need to install that model's corresponding environment first.
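Scoring is handled by lmms-eval during the run. If you want a quick sanity check of your own predictions outside the harness, here is a minimal, hypothetical sketch (not the official metric implementation) that computes overall and per-capability single-choice accuracy from (capability, prediction, answer) tuples:

```python
from collections import defaultdict

def single_choice_accuracy(records):
    """Overall and per-capability accuracy for single-choice QA.

    `records` is an iterable of (capability, prediction, answer) tuples,
    where prediction and answer are option letters such as "A".
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for capability, prediction, answer in records:
        totals[capability] += 1
        # Count a hit when the predicted letter matches the ground truth.
        hits[capability] += int(prediction.strip().upper() == answer.strip().upper())
    per_capability = {cap: hits[cap] / totals[cap] for cap in totals}
    overall = sum(hits.values()) / max(sum(totals.values()), 1)
    return overall, per_capability

# Toy example: two of three answers are correct.
overall, by_capability = single_choice_accuracy([
    ("counting", "A", "A"),
    ("distance", "B", "C"),
    ("size", "D", "D"),
])
print(f"overall={overall:.2f}", by_capability)
```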
---
# Sample Usage

You can load the dataset using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

dataset = load_dataset("Cusyoung/SpaCE-10")

# Explore the dataset splits:
print(dataset)

# Example of accessing a split (assuming a 'train' split exists):
# train_split = dataset["train"]
# print(train_split[0])
```
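If you prefer not to download everything up front, the `datasets` library can also stream the benchmark. This sketch makes no assumption about split names beyond taking the first one available:

```python
from itertools import islice
from datasets import load_dataset

# Stream the dataset instead of downloading it all at once.
streamed = load_dataset("Cusyoung/SpaCE-10", streaming=True)

# Peek at a few records from the first available split.
split_name = next(iter(streamed))
for record in islice(streamed[split_name], 3):
    print(record)
```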
---
# Citation
If you use this dataset, please cite the original paper:
```bibtex
@article{gong2025space10,
  title={SpaCE-10: A Comprehensive Benchmark for Multimodal Large Language Models in Compositional Spatial Intelligence},
  author={Ziyang Gong and Wenhao Li and Oliver Ma and Songyuan Li and Jiayi Ji and Xue Yang and Gen Luo and Junchi Yan and Rongrong Ji},
  journal={arXiv preprint arXiv:2506.07966},
  year={2025}
}
```