---
license: cc
task_categories:
- multiple-choice
- visual-question-answering
- video-text-to-text
language:
- en
size_categories:
- 1K<n<10K
---
# MMSI-Video-Bench: A Holistic Benchmark for Video-Based Spatial Intelligence
[**🌐 Homepage**](https://rbler1234.github.io/MMSI-VIdeo-Bench.github.io/) | [**📑 Paper**](https://arxiv.org/abs/2512.10863) | [**📖 Code**](https://github.com/InternRobotics/MMSI-Video-Bench)
## 🔔 News
🔥[2025-12]: Our MMSI-Video-Bench has been integrated into [VLMEvalKit](https://github.com/open-compass/VLMEvalKit).
🔥[2025-12]: We released our paper, benchmark, and evaluation code.
## 📊 Data Details
All of our data is available on [Hugging Face](https://huggingface.co/datasets/rbler/MMSI-Video-Bench) and includes the following components:
🎥 **Video Data** (`videos.zip`): Contains the video clip file (.mp4) corresponding to each sample. These files are generally not required for most models.
🎥 **Frame Data** (`frames.zip`): Contains the frames (.jpg) extracted from each sample's video at the **base sampling rate**. This rate ensures no key information loss during sampling. Each frame file is named using the format `{timestamp}_frame_{base_interval}_{image_id}` (e.g., 00:06.00_frame_1.50_4), where the timestamp, also shown on the **top-left corner** of the frame, indicates its **capture time in the original recording**.
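The frame naming scheme above can be parsed mechanically. Below is a minimal sketch (the helper name `parse_frame_name` is hypothetical, not part of the dataset tooling) that splits a filename like `00:06.00_frame_1.50_4` into its timestamp, base sampling interval, and image id:

```python
# Hypothetical helper: parse a frame filename of the form
# {timestamp}_frame_{base_interval}_{image_id}, e.g. "00:06.00_frame_1.50_4".
def parse_frame_name(name: str) -> dict:
    stem = name.removesuffix(".jpg")
    timestamp, rest = stem.split("_frame_")
    interval, image_id = rest.rsplit("_", 1)
    minutes, seconds = timestamp.split(":")
    # Capture time in seconds relative to the original recording.
    capture_time = 60 * int(minutes) + float(seconds)
    return {
        "timestamp": timestamp,
        "capture_time_s": capture_time,
        "base_interval_s": float(interval),
        "image_id": int(image_id),
    }

print(parse_frame_name("00:06.00_frame_1.50_4"))
# → {'timestamp': '00:06.00', 'capture_time_s': 6.0, 'base_interval_s': 1.5, 'image_id': 4}
```

The recovered `capture_time_s` matches the timestamp burned into the top-left corner of each frame, which is what the questions' time references align with.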
🖼️ **Reference Image Data** (`ref_images.zip`): Contains the auxiliary images referenced in the questions for each sample.
📝 **Text Annotation** (`mmsivideo.json`): This file contains the annotation information for MMSI-Video-Bench. All time references in the questions correspond to the capture time in the original recording and **align with** the timestamp flag on each frame. Key fields include:
```
{
  "ref_images": [Paths to auxiliary images referenced in the question, ...],
  "video_list": [
    {
      "path": Video clip file path,
      "start": Timestamp (in seconds) of the first frame of the video clip in the original recording,
      "end": Timestamp (in seconds) of the last frame of the video clip in the original recording,
      "base_fps": Base sampling rate
    },
    ...
  ],
  "frames_list": [[Paths to frames sampled at the base sampling rate, ...], ...],
  "system_prompt": "...",
  "task_prompt": Task-specific prompt,
  "user_prompt": Question text, with <video> as a placeholder for video and <image> for auxiliary images,
  "format_prompt": Output format requirements,
  "ground_truth": Correct answer
}
```
Unless otherwise specified, the model input generally consists of:
`system_prompt + task_prompt + user_prompt + format_prompt`.
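The default input above can be assembled with a few lines of Python. This is a minimal sketch assuming the field names documented in `mmsivideo.json`; the sample dict here is illustrative, and the actual evaluation pipeline lives in the GitHub repo:

```python
# Minimal sketch: build the default model input for one sample,
# assuming the annotation fields documented above.
def build_prompt(sample: dict) -> str:
    # Default input: system_prompt + task_prompt + user_prompt + format_prompt.
    return (
        sample["system_prompt"]
        + sample["task_prompt"]
        + sample["user_prompt"]   # contains <video>/<image> placeholders
        + sample["format_prompt"]
    )

# Illustrative sample (not taken from the dataset).
sample = {
    "system_prompt": "You are a helpful assistant. ",
    "task_prompt": "Answer the multiple-choice question. ",
    "user_prompt": "<video> Where is the chair relative to the table? ",
    "format_prompt": "Answer with a single letter.",
}
print(build_prompt(sample))
```

When feeding a vision-language model, the `<video>` placeholder would typically be replaced by the frames listed in `frames_list`, and each `<image>` by the corresponding entry in `ref_images`.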
## 🚀 Evaluation
Please refer to the evaluation guidelines in our [github repo](https://github.com/InternRobotics/MMSI-Video-Bench).
## 🏆 Leaderboard
<details> <summary>📦 Uniform-50 Setting</summary>

| Model | Avg.(%) | Type |
|----------------------------|---------|-------------|
| Human | 96.40 | Baseline |
| 🥇 Gemini 3 Pro | 37.97 | Proprietary |
| 🥈 O3 | 36.98 | Proprietary |
| 🥉 GPT-5 | 36.80 | Proprietary |
| Gemini 2.5 Flash | 35.44 | Proprietary |
| Gemini 2.5 Flash (Thinking) | 35.17 | Proprietary |
| Seed-1.6-vision | 34.87 | Proprietary |
| Claude-haiku-4.5 | 34.27 | Proprietary |
| O4-mini | 34.18 | Proprietary |
| QwenVL2.5-72B | 32.73 | Open-Source |
| InternVL3-78B | 32.55 | Open-Source |
| Doubao-1.5-thinking | 31.65 | Proprietary |
| GPT-4o | 31.56 | Proprietary |
| InternVL2.5-78B | 31.37 | Open-Source |
| InternVL2.5-38B | 31.01 | Open-Source |
| QwenVL3-30B (Thinking) | 30.83 | Open-Source |
| LLaVA-Video-72B | 30.38 | Open-Source |
| InternVL3-8B | 30.38 | Open-Source |
| QwenVL2.5-7B | 29.66 | Open-Source |
| InternVL2.5-8B | 29.11 | Open-Source |
| InternVL3-38B | 28.84 | Open-Source |
| QwenVL3-30B | 28.75 | Open-Source |
| QwenVL2.5-32B | 28.57 | Open-Source |
| LLaVA-Video-7B | 28.48 | Open-Source |
| QwenVL3-8B | 27.58 | Open-Source |
| InternVideo2.5-8B | 27.40 | Open-Source |
| Random Guessing | 24.10 | Baseline |
</details>
<details> <summary>📦 Sufficient-Coverage Setting</summary>

| Model | Avg.(%) | Type |
|----------------------------|---------|-------------|
| Human | 96.40 | Baseline |
| 🥇 O3 | 37.34 | Proprietary |
| 🥈 Gemini 2.5 Flash (Thinking) | 36.71 | Proprietary |
| 🥉 Gemini 2.5 Flash | 36.62 | Proprietary |
| O4-mini | 35.08 | Proprietary |
| QwenVL2.5-32B | 32.37 | Open-Source |
| QwenVL2.5-72B | 31.83 | Open-Source |
| InternVL3-8B | 29.57 | Open-Source |
| QwenVL3-30B | 29.11 | Open-Source |
| QwenVL3-8B | 29.09 | Open-Source |
| QwenVL2.5-7B | 28.84 | Open-Source |
| InternVL2.5-8B | 28.66 | Open-Source |
| GPT-4o | 28.12 | Proprietary |
| QwenVL3-30B (Thinking) | 28.03 | Open-Source |
| InternVideo2.5-8B | 26.85 | Open-Source |
| Random Guessing | 24.10 | Baseline |
</details>
<details> <summary>🤖 Robot Sub-bench</summary>

| Model | Avg.(%) | Type |
|----------------------------|---------|-------------|
| 🥇 Gemini 3 Pro | 40.20 | Proprietary |
| 🥈 Gemini 2.5 Flash (Thinking) | 39.71 | Proprietary |
| 🥉 Seed-1.6-vision | 39.34 | Proprietary |
| O3 | 39.22 | Proprietary |
| QwenVL2.5-72B | 37.75 | Open-Source |
| InternVL3-8B | 37.75 | Open-Source |
| GPT-5 | 37.75 | Proprietary |
| InternVL2.5-38B | 36.27 | Open-Source |
| Doubao-1.5-thinking | 36.07 | Proprietary |
| Gemini 2.5 Flash | 35.78 | Proprietary |
| O4-mini | 35.29 | Proprietary |
| QwenVL2.5-7B | 34.80 | Open-Source |
| InternVL2.5-78B | 34.80 | Open-Source |
| Claude-haiku-4.5 | 34.80 | Proprietary |
| InternVL3-78B | 34.31 | Open-Source |
| LLaVA-Video-72B | 34.31 | Open-Source |
| QwenVL3-30B | 32.84 | Open-Source |
| QwenVL2.5-32B | 32.84 | Open-Source |
| QwenVL3-8B | 32.12 | Open-Source |
| InternVideo2.5-8B | 29.90 | Open-Source |
| GPT-4o | 29.90 | Proprietary |
| InternVL2.5-8B | 28.43 | Open-Source |
| InternVL3-38B | 27.94 | Open-Source |
| QwenVL3-30B (Thinking) | 27.94 | Open-Source |
| LLaVA-Video-7B | 24.51 | Open-Source |
</details>
<details> <summary>🏠 Indoor Scene Perception Sub-bench</summary>

| Model | Avg.(%) | Type |
|----------------------------|---------|-------------|
| 🥇 GPT-5 | 41.68 | Proprietary |
| 🥈 O3 | 40.73 | Proprietary |
| 🥉 Gemini 2.5 Flash | 39.39 | Proprietary |
| Gemini 3 Pro | 39.39 | Proprietary |
| Gemini 2.5 Flash (Thinking) | 37.86 | Proprietary |
| O4-mini | 37.48 | Proprietary |
| Seed-1.6-vision | 34.20 | Proprietary |
| Claude-haiku-4.5 | 33.46 | Proprietary |
| Doubao-1.5-thinking | 33.04 | Proprietary |
| InternVL3-78B | 32.50 | Open-Source |
| QwenVL3-30B (Thinking) | 32.31 | Open-Source |
| GPT-4o | 31.74 | Proprietary |
| QwenVL2.5-72B | 30.78 | Open-Source |
| InternVL2.5-78B | 30.40 | Open-Source |
| QwenVL3-30B | 30.02 | Open-Source |
| QwenVL2.5-32B | 29.64 | Open-Source |
| InternVL2.5-8B | 29.45 | Open-Source |
| InternVL3-38B | 29.06 | Open-Source |
| QwenVL3-8B | 28.68 | Open-Source |
| InternVL2.5-38B | 28.30 | Open-Source |
| LLaVA-Video-72B | 28.11 | Open-Source |
| InternVL3-8B | 27.72 | Open-Source |
| LLaVA-Video-7B | 27.53 | Open-Source |
| QwenVL2.5-7B | 27.15 | Open-Source |
| InternVideo2.5-8B | 26.77 | Open-Source |
</details>
<details> <summary>📍 Grounding Sub-bench</summary>

| Model | Avg.(%) | Type |
|----------------------------|---------|-------------|
| 🥇 Gemini 2.5 Flash | 38.81 | Proprietary |
| 🥈 Gemini 2.5 Flash (Thinking) | 38.21 | Proprietary |
| 🥉 O3 | 37.61 | Proprietary |
| Doubao-1.5-thinking | 37.05 | Proprietary |
| InternVL3-78B | 35.52 | Open-Source |
| GPT-5 | 35.22 | Proprietary |
| Gemini 3 Pro | 35.22 | Proprietary |
| O4-mini | 34.33 | Proprietary |
| QwenVL2.5-72B | 34.33 | Open-Source |
| Seed-1.6-vision | 33.04 | Proprietary |
| Claude-haiku-4.5 | 32.84 | Proprietary |
| InternVL2.5-38B | 31.94 | Open-Source |
| InternVL3-8B | 31.94 | Open-Source |
| GPT-4o | 31.94 | Proprietary |
| QwenVL3-30B (Thinking) | 31.64 | Open-Source |
| QwenVL2.5-32B | 31.04 | Open-Source |
| LLaVA-Video-72B | 31.04 | Open-Source |
| InternVL3-38B | 30.45 | Open-Source |
| InternVL2.5-8B | 30.15 | Open-Source |
| InternVL2.5-78B | 29.85 | Open-Source |
| QwenVL3-30B | 29.25 | Open-Source |
| QwenVL2.5-7B | 28.66 | Open-Source |
| QwenVL3-8B | 28.66 | Open-Source |
| InternVideo2.5-8B | 27.76 | Open-Source |
| LLaVA-Video-7B | 27.16 | Open-Source |
</details>
*Note: For the three sub-benchmarks, we report the higher score of each model across the two settings for clarity of presentation.*