---
language:
- en
license: cc-by-4.0
size_categories:
- 1K<n<10K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
pretty_name: MMSI-Bench
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: images
    sequence: image
  - name: question_type
    dtype: string
  - name: question
    dtype: string
  - name: answer
    dtype: string
  - name: thought
    dtype: string
  splits:
  - name: test
    num_examples: 1000
configs:
- config_name: default
  data_files:
  - split: test
    path: MMSI_Bench.parquet
---
|
|
|
|
|
# MMSI-Bench |
|
|
This repo contains evaluation code for the paper "[MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence](https://arxiv.org/abs/2505.23764)".
|
|
|
|
|
[**🌐 Homepage**](https://runsenxu.com/projects/MMSI_Bench/) | [**🤗 Dataset**](https://huggingface.co/datasets/RunsenXu/MMSI-Bench) | [**📑 Paper**](https://arxiv.org/pdf/2505.23764) | [**💻 Code**](https://github.com/OpenRobotLab/MMSI-Bench) | [**📖 arXiv**](https://arxiv.org/abs/2505.23764)
|
|
|
|
|
|
|
|
|
|
|
## 🔥 News
|
|
<!-- **🔥[2025-05-31]: MMSI-Bench is now supported in the [VLMEvalKit](https://github.com/open-compass/VLMEvalKit) repository.** -->
|
|
|
|
|
**🔥[2025-05-30]: We released the arXiv paper.**
|
|
|
|
|
## Load Dataset |
|
|
```python
from datasets import load_dataset

# Load MMSI-Bench from the Hugging Face Hub
mmsi_bench = load_dataset("RunsenXu/MMSI-Bench")
print(mmsi_bench)
```
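
To peek at an individual record without touching the raw parquet, a minimal sketch is shown below; it assumes the default config exposes a `test` split and that the `images` feature is decoded to PIL images by the `datasets` library.

```python
from datasets import load_dataset

# Minimal sketch: inspect a single record.
# Assumes the default config exposes a "test" split and that the "images"
# feature is decoded to PIL.Image objects by the datasets library.
mmsi_bench = load_dataset("RunsenXu/MMSI-Bench")
sample = mmsi_bench["test"][0]

print(sample["question_type"])
print(sample["question"])
print(sample["answer"])
for i, img in enumerate(sample["images"]):
    print(f"image {i}: size={img.size}, mode={img.mode}")
```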
|
|
## Decode Images from the Parquet File

After downloading the parquet file, read each record, decode the images from binary, and save them as JPG files:
|
|
```python
import os

import pandas as pd

# Read the raw parquet file
df = pd.read_parquet('MMSI_Bench.parquet')

output_dir = './images'
os.makedirs(output_dir, exist_ok=True)

for idx, row in df.iterrows():
    id_val = row['id']
    images = row['images']
    question_type = row['question_type']
    question = row['question']
    answer = row['answer']
    thought = row['thought']

    # Decode each image and save it as <id>_<n>.jpg
    image_paths = []
    if images is not None:
        for n, img_data in enumerate(images):
            # Depending on how the images are stored, an entry is either raw
            # bytes or a struct/dict with a 'bytes' field.
            img_bytes = img_data['bytes'] if isinstance(img_data, dict) else img_data
            image_path = f"{output_dir}/{id_val}_{n}.jpg"
            with open(image_path, "wb") as f:
                f.write(img_bytes)
            image_paths.append(image_path)

    print(f"id: {id_val}")
    print(f"images: {image_paths}")
    print(f"question_type: {question_type}")
    print(f"question: {question}")
    print(f"answer: {answer}")
    print(f"thought: {thought}")
    print("-" * 50)
```
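
Continuing from the snippet above, here is an illustrative sketch of turning one record into a model query. The prompt wording and the `query_model` call are assumptions for illustration, not the official MMSI-Bench evaluation protocol (use VLMEvalKit for that, see below).

```python
# Illustrative only: assemble one record into a multiple-choice query.
# The prompt wording is an assumption, and `query_model` is a hypothetical
# placeholder for whatever multimodal API you use.
def build_query(question: str, image_paths: list) -> dict:
    prompt = f"{question}\nAnswer with the letter of the correct option."
    return {"images": image_paths, "prompt": prompt}

row = df.iloc[0]                          # `df` comes from the decoding snippet above
paths = [f"./images/{row['id']}_0.jpg"]   # first image of this record
query = build_query(row["question"], paths)
# response = query_model(**query)         # hypothetical call to your model
```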
|
|
|
|
|
|
|
|
## Evaluation |
|
|
Please refer to the [evaluation guidelines](https://github.com/open-compass/VLMEvalKit/blob/main/docs/en/Quickstart.md) of [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
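
VLMEvalKit is the recommended way to reproduce the numbers below. If you only want a rough sanity-check score outside of it, a minimal sketch follows; it assumes you have collected predictions as a mapping from record `id` to a single option letter matching the multiple-choice `answer` field, and it is not the official scorer.

```python
import pandas as pd

# Rough accuracy sketch, not the official VLMEvalKit scorer.
# Assumes `predictions` maps each record id to a single option letter and
# that the ground-truth `answer` field stores an option letter as well.
def simple_accuracy(parquet_path: str, predictions: dict) -> float:
    df = pd.read_parquet(parquet_path)
    correct = sum(
        predictions.get(row["id"], "").strip().upper() == str(row["answer"]).strip().upper()
        for _, row in df.iterrows()
    )
    return correct / len(df)

# Example usage with dummy predictions:
# print(simple_accuracy("MMSI_Bench.parquet", {0: "A", 1: "C"}))
```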
|
|
|
|
|
<!-- <img src="assets/radar_v1.png" width="400" /> --> |
|
|
|
|
|
## 🏆 MMSI-Bench Leaderboard
|
|
|
|
|
| Model | Avg. (%) | Type |
|------------------------------|:--------:|:-------------|
| 🥇 **Human Level** | 97.2 | Baseline |
| 🥈 o3 | 41.0 | Proprietary |
| 🥉 GPT-4.5 | 40.3 | Proprietary |
| Gemini-2.5-Pro--Thinking | 37.0 | Proprietary |
| Gemini-2.5-Pro | 36.9 | Proprietary |
| Doubao-1.5-pro | 33.0 | Proprietary |
| GPT-4.1 | 30.9 | Proprietary |
| Qwen2.5-VL-72B | 30.7 | Open-source |
| NVILA-15B | 30.5 | Open-source |
| GPT-4o | 30.3 | Proprietary |
| Claude-3.7-Sonnet--Thinking | 30.2 | Proprietary |
| Seed1.5-VL | 29.7 | Proprietary |
| InternVL2.5-2B | 29.0 | Open-source |
| InternVL2.5-8B | 28.7 | Open-source |
| DeepSeek-VL2-Small | 28.6 | Open-source |
| InternVL3-78B | 28.5 | Open-source |
| InternVL2.5-78B | 28.5 | Open-source |
| LLaVA-OneVision-72B | 28.4 | Open-source |
| NVILA-8B | 28.1 | Open-source |
| InternVL2.5-26B | 28.0 | Open-source |
| DeepSeek-VL2 | 27.1 | Open-source |
| InternVL3-1B | 27.0 | Open-source |
| InternVL3-9B | 26.7 | Open-source |
| Qwen2.5-VL-3B | 26.5 | Open-source |
| InternVL2.5-4B | 26.3 | Open-source |
| InternVL2.5-1B | 26.1 | Open-source |
| Qwen2.5-VL-7B | 25.9 | Open-source |
| InternVL3-8B | 25.7 | Open-source |
| Llama-3.2-11B-Vision | 25.4 | Open-source |
| InternVL3-2B | 25.3 | Open-source |
| **Random Guessing** | 25.0 | Baseline |
| LLaVA-OneVision-7B | 24.5 | Open-source |
| DeepSeek-VL2-Tiny | 24.0 | Open-source |
| Blind GPT-4o | 22.7 | Baseline |
|
|
|
|
## Acknowledgment |
|
|
MMSI-Bench makes use of data from existing image datasets: [ScanNet](http://www.scan-net.org/), [nuScenes](https://www.nuscenes.org/), [Matterport3D](https://niessner.github.io/Matterport/), [Ego4D](https://ego4d-data.org/), [AgiBot-World](https://agibot-world.cn/), [DTU](https://roboimagedata.compute.dtu.dk/?page_id=36), [DAVIS-2017](https://davischallenge.org/), and [Waymo](https://waymo.com/open/). We thank these teams for their open-source contributions.
|
|
|
|
|
## Contact |
|
|
- Sihan Yang: sihany077@gmail.com |
|
|
- Runsen Xu: runsxu@gmail.com |
|
|
|
|
|
## Citation |
|
|
```bibtex
@article{yang2025mmsi,
  title={MMSI-Bench: A Benchmark for Multi-Image Spatial Intelligence},
  author={Yang, Sihan and Xu, Runsen and Xie, Yiman and Yang, Sizhe and Li, Mo and Lin, Jingli and Zhu, Chenming and Chen, Xiaochen and Duan, Haodong and Yue, Xiangyu and Lin, Dahua and Wang, Tai and Pang, Jiangmiao},
  journal={arXiv preprint arXiv:2505.23764},
  year={2025}
}
```