AI4Industry committed (verified) commit 91e981e, parent 845bd58: Update README.md

Files changed (1): README.md (+24 −24)
README.md CHANGED
@@ -105,31 +105,31 @@ The final evaluation reports the absolute accuracy of the benchmark in both Engl

  We evaluated several of the latest popular MLLMs, including both closed-source and open-source models.

- | Model | Weight | UpdateTime | RxnBench-En | RxnBench-Zh | Mean Score |
- | ---- |:----:|:----:|:----:|:----:|:----:|
- | GPT-5 (high) | Proprietary | 20250807 | **0.9279** | 0.9246 | **0.9263** |
- | Gemini-2.5-Pro | Proprietary | 20250617 | 0.9095 | **0.9423** | 0.9259 |
- | GPT-5-mini | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
- | Seed1.5-VL-Think | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
- | GPT o3 | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
- | GPT o4 mini | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
- | InternVL3.5-241B-A28B | Open | - | 0.9003 | 0.9062 | 0.9033 |
- | Intern-S1 | Open | - | 0.8938 | 0.8944 | 0.8941 |
- | Seed1.5-VL | Proprietary | 20250328 | 0.8518 | 0.8669 | 0.8594 |
- | InternVL3-78b | Open | - | 0.8531 | 0.8308 | 0.8420 |
- | Intern-S1-mini | Open | - | 0.8521 | 0.8282 | 0.8402 |
- | GLM-4.1V-9B-Thinking | Open | - | 0.8392 | 0.8341 | 0.8367 |
- | Qwen2.5-VL-72B | Open | - | 0.8341 | 0.8308 | 0.8325 |
- | Qwen2.5-VL-Max | Proprietary | 20250813 | 0.8192 | 0.8262 | 0.8227 |
- | GPT-5-nano | Proprietary | 20250807 | 0.7980 | 0.7941 | 0.7961 |
- | Qwen2.5-VL-32B | Open | - | 0.7980 | 0.7908 | 0.7944 |
- | Gemini-2.5-Flash | Proprietary | 20250617 | 0.6925 | 0.8557 | 0.7741 |
- | GPT-4o | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
- | Qwen2.5-VL-7b | Open | - | 0.7082 | 0.7233 | 0.7158 |
- | Qwen2.5-VL-3b | Open | - | 0.6748 | 0.6643 | 0.6696 |
- | GPT-4o mini | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
- | *Choice longest answer* | - | - | 0.4262 | 0.4525 | 0.4394 |
- | Deepseek-VL2 | Open | - | 0.4426 | 0.4216 | 0.4321 |
- | *Random* | - | - | 0.2500 | 0.2500 | 0.2500 |
+ | Model | Think | Weight | UpdateTime | RxnBench-En | RxnBench-Zh | Mean Score |
+ | ---- |:----:|:----:|:----:|:----:|:----:|:----:|
+ | GPT-5 (high) | high | Proprietary | 20250807 | **0.9279** | 0.9246 | **0.9263** |
+ | Gemini-2.5-Pro | default | Proprietary | 20250617 | 0.9095 | **0.9423** | 0.9259 |
+ | GPT-5-mini | default | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
+ | Seed1.5-VL-Think | default | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
+ | GPT o3 | default | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
+ | GPT o4 mini | default | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
+ | InternVL3.5-241B-A28B | default | Open | - | 0.9003 | 0.9062 | 0.9033 |
+ | Intern-S1 | default | Open | - | 0.8938 | 0.8944 | 0.8941 |
+ | Seed1.5-VL | off | Proprietary | 20250328 | 0.8518 | 0.8669 | 0.8594 |
+ | InternVL3-78b | unsupported | Open | - | 0.8531 | 0.8308 | 0.8420 |
+ | Intern-S1-mini | default | Open | - | 0.8521 | 0.8282 | 0.8402 |
+ | GLM-4.1V-9B-Thinking | default | Open | - | 0.8392 | 0.8341 | 0.8367 |
+ | Qwen2.5-VL-72B | unsupported | Open | - | 0.8341 | 0.8308 | 0.8325 |
+ | Qwen2.5-VL-Max | unsupported | Proprietary | 20250813 | 0.8192 | 0.8262 | 0.8227 |
+ | GPT-5-nano | default | Proprietary | 20250807 | 0.7980 | 0.7941 | 0.7961 |
+ | Qwen2.5-VL-32B | unsupported | Open | - | 0.7980 | 0.7908 | 0.7944 |
+ | Gemini-2.5-Flash | default | Proprietary | 20250617 | 0.6925 | 0.8557 | 0.7741 |
+ | GPT-4o | unsupported | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
+ | Qwen2.5-VL-7b | unsupported | Open | - | 0.7082 | 0.7233 | 0.7158 |
+ | Qwen2.5-VL-3b | unsupported | Open | - | 0.6748 | 0.6643 | 0.6696 |
+ | GPT-4o mini | unsupported | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
+ | *Choice longest answer* | - | - | - | 0.4262 | 0.4525 | 0.4394 |
+ | Deepseek-VL2 | unsupported | Open | - | 0.4426 | 0.4216 | 0.4321 |
+ | *Random* | - | - | - | 0.2500 | 0.2500 | 0.2500 |
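The Mean Score column appears to be the unweighted average of the RxnBench-En and RxnBench-Zh accuracies (e.g. (0.9095 + 0.9423) / 2 = 0.9259 for Gemini-2.5-Pro). A minimal sketch of that aggregation; the `mean_score` helper and the hard-coded rows below are illustrative only, not part of the dataset's tooling:

```python
# Reproduce the "Mean Score" column from the per-language accuracies.
# Scores are copied from the table above; only a few rows are shown.
rows = {
    "GPT-5 (high)": (0.9279, 0.9246),
    "Gemini-2.5-Pro": (0.9095, 0.9423),
    "GPT-4o mini": (0.6636, 0.6066),
}

def mean_score(en: float, zh: float) -> float:
    """Unweighted average of the English and Chinese split accuracies."""
    return (en + zh) / 2

for model, (en, zh) in rows.items():
    print(f"{model}: {mean_score(en, zh):.4f}")
```

Note that with an unweighted average, each language split contributes equally regardless of how many questions it contains.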