Upload folder using huggingface_hub
- .gitattributes +1 -0
- MMSI_Bench.parquet +3 -0
- MMSI_bench.tsv +3 -0
- README.md +117 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 # Video files - compressed
 *.mp4 filter=lfs diff=lfs merge=lfs -text
 *.webm filter=lfs diff=lfs merge=lfs -text
+MMSI_bench.tsv filter=lfs diff=lfs merge=lfs -text
MMSI_Bench.parquet ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c762def08ff9875455672f1ace2c44a9705b963d2e8f806b186a250399dc9017
+size 704663038
MMSI_bench.tsv ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5435242d20a11c0097afc1e67c4ddc6c55daa7cb5f2478240165c33e4bb07fed
+size 1058571597
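Both files above are Git LFS pointer files: the repository stores only `version`, `oid`, and `size` key-value lines, while the actual bytes live in LFS storage. A minimal sketch of parsing such a pointer, using the `MMSI_bench.tsv` pointer contents as input:

```python
# Parse a Git LFS pointer file into a dict of its key-value fields.
# Pointer text copied from the MMSI_bench.tsv entry above.
pointer_text = """\
version https://git-lfs.github.com/spec/v1
oid sha256:5435242d20a11c0097afc1e67c4ddc6c55daa7cb5f2478240165c33e4bb07fed
size 1058571597
"""

def parse_lfs_pointer(text: str) -> dict:
    """Each non-empty line is '<key> <value>'; split on the first space."""
    fields = {}
    for line in text.splitlines():
        if line.strip():
            key, _, value = line.partition(" ")
            fields[key] = value
    return fields

pointer = parse_lfs_pointer(pointer_text)
print(pointer["oid"])        # the sha256:... object id
print(int(pointer["size"]))  # payload size in bytes (~1.06 GB)
```

Note that cloning without `git lfs` installed yields only these small pointer files, not the parquet/tsv payloads.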
README.md ADDED
@@ -0,0 +1,117 @@
---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: MMSI-Bench
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: images
    sequence: image
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: thought
    dtype: string
  splits:
  - name: test
    num_examples: 1000
configs:
- config_name: default
  data_files:
  - split: test
    path: MMSI_Bench.parquet
---

# MMSI-Bench

This repo contains the evaluation data for the paper "[MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence](https://arxiv.org/abs/2505.23764)".

[**Homepage**](https://runsenxu.com/projects/MMSI_Bench/) | [**Dataset**](https://huggingface.co/datasets/RunsenXu/MMSI-Bench) | [**Paper**](https://arxiv.org/pdf/2505.23764) | [**Code**](https://github.com/OpenRobotLab/MMSI-Bench) | [**arXiv**](https://arxiv.org/abs/2505.23764)

## News

<!-- **[2025-05-31]: MMSI-Bench has been supported in the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repository.** -->

**[2025-05-30]: We released the arXiv paper.**
## Load Dataset

```python
from datasets import load_dataset

mmsi_bench = load_dataset("RunsenXu/MMSI-Bench")
print(mmsi_bench)
```
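Each example follows the schema declared in the front matter (`id`, `images`, `question_type`, `question`, `answer`, `thought`). A minimal sketch of turning one sample into an evaluation prompt; the sample values below are invented for illustration, not taken from the dataset:

```python
# Hypothetical sample mirroring the declared features. Real samples come from
# load_dataset("RunsenXu/MMSI-Bench")["test"]; their images are decoded PIL images.
sample = {
    "id": 0,
    "images": ["<image 1>", "<image 2>"],  # placeholders for the image sequence
    "question_type": "camera_motion",      # illustrative value, not a real category
    "question": (
        "Which direction did the camera move between the two images?\n"
        "A. Left\nB. Right\nC. Up\nD. Down"
    ),
    "answer": "B",
    "thought": "Illustrative reasoning trace.",
}

def build_prompt(sample: dict) -> str:
    """Compose the text part of a query; images are passed to the model separately."""
    return (
        f"{sample['question']}\n"
        "Answer with the option's letter from the given choices directly."
    )

prompt = build_prompt(sample)
print(prompt)
```

Helper name and prompt wording are assumptions for this sketch; the benchmark's official prompting is defined in VLMEvalKit.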

## Evaluation

Please refer to the [evaluation guidelines](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) of [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
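A typical VLMEvalKit run looks like the sketch below; the `--data` and `--model` identifiers are assumptions and should be checked against the VLMEvalKit docs:

```shell
# Hypothetical invocation from a VLMEvalKit checkout; verify the exact
# dataset and model names supported by your VLMEvalKit version.
python run.py --data MMSI_Bench --model GPT4o --verbose
```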

<!-- <img src="assets/radar_v1.png" width="400" /> -->

## MMSI-Bench Leaderboard

| Model | Avg. (%) | Type |
|------------------------------|:--------:|:-------------|
| **Human Level** | 97.2 | Baseline |
| o3 | 41.0 | Proprietary |
| GPT-4.5 | 40.3 | Proprietary |
| Gemini-2.5-Pro--Thinking | 37.0 | Proprietary |
| Gemini-2.5-Pro | 36.9 | Proprietary |
| Doubao-1.5-pro | 33.0 | Proprietary |
| GPT-4.1 | 30.9 | Proprietary |
| Qwen2.5-VL-72B | 30.7 | Open-source |
| NVILA-15B | 30.5 | Open-source |
| GPT-4o | 30.3 | Proprietary |
| Claude-3.7-Sonnet--Thinking | 30.2 | Proprietary |
| Seed1.5-VL | 29.7 | Proprietary |
| InternVL2.5-2B | 29.0 | Open-source |
| InternVL2.5-8B | 28.7 | Open-source |
| DeepSeek-VL2-Small | 28.6 | Open-source |
| InternVL3-78B | 28.5 | Open-source |
| InternVL2.5-78B | 28.5 | Open-source |
| LLaVA-OneVision-72B | 28.4 | Open-source |
| NVILA-8B | 28.1 | Open-source |
| InternVL2.5-26B | 28.0 | Open-source |
| DeepSeek-VL2 | 27.1 | Open-source |
| InternVL3-1B | 27.0 | Open-source |
| InternVL3-9B | 26.7 | Open-source |
| Qwen2.5-VL-3B | 26.5 | Open-source |
| InternVL2.5-4B | 26.3 | Open-source |
| InternVL2.5-1B | 26.1 | Open-source |
| Qwen2.5-VL-7B | 25.9 | Open-source |
| InternVL3-8B | 25.7 | Open-source |
| Llama-3.2-11B-Vision | 25.4 | Open-source |
| InternVL3-2B | 25.3 | Open-source |
| **Random Guessing** | 25.0 | Baseline |
| LLaVA-OneVision-7B | 24.5 | Open-source |
| DeepSeek-VL2-Tiny | 24.0 | Open-source |
| Blind GPT-4o | 22.7 | Baseline |

## Acknowledgment

MMSI-Bench makes use of data from existing image datasets: [ScanNet](http://www.scan-net.org/), [nuScenes](https://www.nuscenes.org/), [Matterport3D](https://niessner.github.io/Matterport/), [Ego4D](https://ego4d-data.org/), [AgiBot-World](https://agibot-world.cn/), [DTU](https://roboimagedata.compute.dtu.dk/?page_id=36), [DAVIS-2017](https://davischallenge.org/), and [Waymo](https://waymo.com/open/). We thank these teams for their open-source contributions.

## Contact

- Sihan Yang: sihany077@gmail.com
- Runsen Xu: runsxu@gmail.com

## Citation

```bibtex
@article{yang2025mmsi,
  title={MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence},
  author={Yang, Sihan and Xu, Runsen and Xie, Yiman and Yang, Sizhe and Li, Mo and Lin, Jingli and Zhu, Chenming and Chen, Xiaochen and Duan, Haodong and Yue, Xiangyu and Lin, Dahua and Wang, Tai and Pang, Jiangmiao},
  journal={arXiv preprint arXiv:2505.23764},
  year={2025}
}
```