rbler committed · Commit f211c7e · verified · 1 Parent(s): 67881db

Update README.md

Files changed (1): README.md (+43 -43)

README.md CHANGED
@@ -11,7 +11,7 @@ size_categories:
 ---
 
 # MMSI-Video-Bench: A Holistic Benchmark for Video-Based Spatial Intelligence
-[**🌐 Homepage**]() | [**📑 Paper**]() | [**📖 arXiv**]()
+[**🌐 Homepage**](https://rbler1234.github.io/MMSI-VIdeo-Bench.github.io/) | [**📑 Paper**]() | [**📖 Code**](https://github.com/InternRobotics/MMSI-Video-Bench)
 </div>
 
 
@@ -68,7 +68,7 @@ and `mmsivideo.json` corresponds to the MMSI-Video-Bench annotations. Each sampl
 ```
 
 ## Evaluation
-Please refer to the evaluation guidelines in our github repo.
+Please refer to the evaluation guidelines in our [github repo](https://github.com/InternRobotics/MMSI-Video-Bench).
 
 ## 🏆 Leaderboard
 
@@ -76,22 +76,22 @@ Please refer to the evaluation guidelines in our github repo.
 
 | Model | Avg.(%) | Type |
 |----------------------------|---------|-------------|
-| human | 96.40 | Baseline |
-|🥇gemini-3-pro | 37.97 | Proprietary |
-|🥈 o3 | 36.98 | Proprietary |
-|🥉gpt-5 | 36.80 | Proprietary |
-| gemini-2.5-flash | 35.44 | Proprietary |
-| gemini-2.5-flash(thinking) | 35.17 | Proprietary |
-| seed-1-6-vision | 34.87 | Proprietary |
-| claude-haiku-4-5 | 34.27 | Proprietary |
-| o4-mini | 34.18 | Proprietary |
+| Human | 96.40 | Baseline |
+|🥇Gemini 3 pro | 37.97 | Proprietary |
+|🥈 O3 | 36.98 | Proprietary |
+|🥉GPT-5 | 36.80 | Proprietary |
+| Gemini 2.5 Flash | 35.44 | Proprietary |
+| Gemini 2.5 Flash (Thinking) | 35.17 | Proprietary |
+| Seed-1.6-vision | 34.87 | Proprietary |
+| Claude-haiku-4.5 | 34.27 | Proprietary |
+| O4-mini | 34.18 | Proprietary |
 | QwenVL2.5-72B | 32.73 | Open-Source |
 | InternVL3-78B | 32.55 | Open-Source |
-| doubao-1-5-thinking | 31.65 | Proprietary |
-| gpt-4o | 31.56 | Proprietary |
+| Doubao-1.5-thinking | 31.65 | Proprietary |
+| GPT-4o | 31.56 | Proprietary |
 | InternVL2.5-78B | 31.37 | Open-Source |
 | InternVL2.5-38B | 31.01 | Open-Source |
-| QwenVL3-30B(Thinking) | 30.83 | Open-Source |
+| QwenVL3-30B (Thinking) | 30.83 | Open-Source |
 | LLaVA-Video-72B | 30.38 | Open-Source |
 | InternVL3-8B | 30.38 | Open-Source |
 | QwenVL2.5-VL-7B-Instruct | 29.66 | Open-Source |
@@ -110,11 +110,11 @@ Please refer to the evaluation guidelines in our github repo.
 
 | Model | Avg.(%) | Type |
 |----------------------------|---------|-------------|
-| human | 96.4 | Baseline |
-| 🥇o3 | 37.34 | Proprietary |
-| 🥈gemini-2.5-flash(thinking) | 36.71 | Proprietary |
-| 🥉gemini-2.5-flash | 36.62 | Proprietary |
-| o4-mini | 35.08 | Proprietary |
+| Human | 96.4 | Baseline |
+| 🥇O3 | 37.34 | Proprietary |
+| 🥈Gemini 2.5 Flash (Thinking) | 36.71 | Proprietary |
+| 🥉Gemini 2.5 Flash | 36.62 | Proprietary |
+| O4-mini | 35.08 | Proprietary |
 | QwenVL2.5-32B | 32.37 | Open-Source |
 | QwenVL2.5-72B | 31.83 | Open-Source |
 | InternVL3-8B | 29.57 | Open-Source |
@@ -122,8 +122,8 @@ Please refer to the evaluation guidelines in our github repo.
 | QwenVL3-8B | 29.09 | Open-Source |
 | QwenVL2.5-7B | 28.84 | Open-Source |
 | InternVL2.5-8B | 28.66 | Open-Source |
-| gpt-4o | 28.12 | Proprietary |
-| QwenVL3-30B(thinking) | 28.03 | Open-Source |
+| GPT-4o | 28.12 | Proprietary |
+| QwenVL3-30B (Thinking) | 28.03 | Open-Source |
 | InternVideo2.5-8B | 26.85 | Open-Source |
 | Random Guessing | 24.10 | Baseline |
 
@@ -133,30 +133,30 @@ Please refer to the evaluation guidelines in our github repo.
 
 | Model | Avg.(%) | Type |
 |----------------------------|---------|-------------|
-| 🥇gemini-3-pro | 40.20 | Proprietary |
-| 🥈gemini-2.5-flash(thinking) | 39.71 | Proprietary |
-| 🥉seed-1-6-vision | 39.34 | Proprietary |
-| o3 | 39.22 | Proprietary |
+| 🥇Gemini 3 Pro | 40.20 | Proprietary |
+| 🥈Gemini 2.5 Flash (Thinking) | 39.71 | Proprietary |
+| 🥉Seed-1.6-vision | 39.34 | Proprietary |
+| O3 | 39.22 | Proprietary |
 | QwenVL2.5-72B | 37.75 | Open-Source |
 | InternVL3-8B | 37.75 | Open-Source |
-| gpt-5 | 37.75 | Proprietary |
+| GPT-5 | 37.75 | Proprietary |
 | InternVL2.5-38B | 36.27 | Open-Source |
-| doubao-1-5-thinking | 36.07 | Proprietary |
-| gemini-2.5-flash | 35.78 | Proprietary |
-| o4-mini | 35.29 | Proprietary |
+| Doubao-1.5-thinking | 36.07 | Proprietary |
+| Gemini 2.5 Flash | 35.78 | Proprietary |
+| O4-mini | 35.29 | Proprietary |
 | QwenVL2.5-7B | 34.8 | Open-Source |
 | InternVL2.5-78B | 34.8 | Open-Source |
-| claude-haiku-4-5 | 34.8 | Proprietary |
+| Claude-haiku-4.5 | 34.8 | Proprietary |
 | InternVL3-78B | 34.31 | Open-Source |
 | LLaVA-Video-72B | 34.31 | Open-Source |
 | QwenVL3-30B | 32.84 | Open-Source |
 | QwenVL2.5-32B | 32.84 | Open-Source |
 | QwenVL3-8B | 32.12 | Open-Source |
 | InternVideo2.5-8B | 29.90 | Open-Source |
-| gpt-4o | 29.90 | Proprietary |
+| GPT-4o | 29.90 | Proprietary |
 | InternVL2.5-8B | 28.43 | Open-Source |
 | InternVL3-38B | 27.94 | Open-Source |
-| QwenVL3-30B(Thinking) | 27.94 | Open-Source |
+| QwenVL3-30B (Thinking) | 27.94 | Open-Source |
 | LLaVA-Video-7B | 24.51 | Open-Source |
 
 
@@ -166,18 +166,18 @@ Please refer to the evaluation guidelines in our github repo.
 
 | Model | Avg.(%) | Type |
 |----------------------------|---------|-------------|
-| 🥇gpt-5 | 41.68 | Proprietary |
-| 🥈o3 | 40.73 | Proprietary |
-| 🥉gemini-2.5-flash | 39.39 | Proprietary |
-| gemini-3-pro | 39.39 | Proprietary |
-| gemini-2.5-flash(thinking) | 37.86 | Proprietary |
-| o4-mini | 37.48 | Proprietary |
-| seed-1-6-vision | 34.2 | Proprietary |
-| claude-haiku-4-5 | 33.46 | Proprietary |
-| doubao-1-5-thinking | 33.04 | Proprietary |
+| 🥇GPT-5 | 41.68 | Proprietary |
+| 🥈O3 | 40.73 | Proprietary |
+| 🥉Gemini 2.5 Flash | 39.39 | Proprietary |
+| Gemini 3 Pro | 39.39 | Proprietary |
+| Gemini 2.5 Flash (Thinking) | 37.86 | Proprietary |
+| O4-mini | 37.48 | Proprietary |
+| Seed-1.6-vision | 34.2 | Proprietary |
+| Claude-haiku-4.5 | 33.46 | Proprietary |
+| Doubao-1.5-thinking | 33.04 | Proprietary |
 | InternVL3-78B | 32.5 | Open-Source |
-| QwenVL3-30B(Thinking) | 32.31 | Open-Source |
-| gpt-4o | 31.74 | Proprietary |
+| QwenVL3-30B (Thinking) | 32.31 | Open-Source |
+| GPT-4o | 31.74 | Proprietary |
 | QwenVL2.5-72B | 30.78 | Open-Source |
 | InternVL2.5-78B | 30.4 | Open-Source |
 | QwenVL3-30B | 30.02 | Open-Source |