Update README.md (README.md changed)
The benchmark is released in both English and Chinese versions.

## 🧮 Benchmark Evaluation
This benchmark evaluates model performance on multiple-choice question answering (MCQ) tasks.

We provide two versions of the prompt template, depending on the language setting.
**English Prompt**
```
Question: {question}
Choices:
A. {choice_A}
B. {choice_B}
C. {choice_C}
D. {choice_D}
Based on the image and the question, choose the most appropriate answer.
**Only output a single letter (A, B, C, or D)**. Do NOT output any other text or explanation.
```

**Chinese Prompt**
```
问题: {question}
选项:
A. {choice_A}
B. {choice_B}
C. {choice_C}
D. {choice_D}

请根据图像和问题,从以上四个选项中选择最合适的答案。
只输出单个字母 (A, B, C 或 D),不要输出选项内容,也不要输出任何解释。
```
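
Either template can be rendered with standard string formatting. The sketch below uses the English template; the helper name `build_prompt` and the example sample are illustrative, not part of the benchmark's official tooling (the sample fields are only assumed to match the `{question}`/`{choice_X}` placeholders):

```python
# Illustrative sketch: render one benchmark sample into the English MCQ prompt.
# The template text mirrors the English prompt shown above.
EN_TEMPLATE = (
    "Question: {question}\n"
    "Choices:\n"
    "A. {choice_A}\n"
    "B. {choice_B}\n"
    "C. {choice_C}\n"
    "D. {choice_D}\n"
    "Based on the image and the question, choose the most appropriate answer.\n"
    "**Only output a single letter (A, B, C, or D)**. "
    "Do NOT output any other text or explanation."
)

def build_prompt(sample: dict) -> str:
    """Fill the template placeholders from one sample's fields."""
    return EN_TEMPLATE.format(**sample)

# Hypothetical sample for illustration only.
sample = {
    "question": "Which reagent is used in step 2 of the reaction?",
    "choice_A": "NaOH",
    "choice_B": "HCl",
    "choice_C": "H2O2",
    "choice_D": "NH3",
}
print(build_prompt(sample))
```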

**Evaluation Protocol**

If the model's output is not exactly one of A, B, C, or D, we use GPT-4o to map the output to A–D based on the option content.

The final evaluation reports absolute accuracy on both the English and Chinese versions of the benchmark.

## 📊 Benchmark Leaderboard

We evaluated several of the latest popular MLLMs, including both closed-source and open-source models.

| Model | Weights | Update Time | RxnBench-En | RxnBench-Zh | Mean Score |
| ---- |:----:|:----:|:----:|:----:|:----:|
| GPT-5 (high) | Proprietary | 20250807 | **0.9279** | 0.9246 | **0.9263** |
| Gemini-2.5-Pro | Proprietary | 20250617 | 0.9095 | **0.9423** | 0.9259 |
| GPT-5-mini | Proprietary | 20250807 | 0.9108 | 0.9128 | 0.9118 |
| Seed1.5-VL-Think | Proprietary | 20250428 | 0.9056 | 0.9161 | 0.9109 |
| GPT o3 | Proprietary | 20250416 | 0.9056 | 0.9115 | 0.9086 |
| GPT o4 mini | Proprietary | 20250416 | 0.9062 | 0.9075 | 0.9069 |
| InternVL3.5-241B-A28B | Open | - | 0.9003 | 0.9062 | 0.9033 |
| Seed1.5-VL | Proprietary | 20250328 | 0.8518 | 0.8669 | 0.8594 |
| InternVL3-78B | Open | - | 0.8531 | 0.8308 | 0.8420 |
| Intern-S1-mini | Open | - | 0.8521 | 0.8282 | 0.8402 |
| GLM-4.1V-9B-Thinking | Open | - | 0.8392 | 0.8341 | 0.8367 |
| Qwen2.5-VL-72B | Open | - | 0.8341 | 0.8308 | 0.8325 |
| Qwen2.5-VL-Max | Proprietary | 20250813 | 0.8192 | 0.8262 | 0.8227 |
| GPT-5-nano | Proprietary | 20250807 | 0.7980 | 0.7941 | 0.7961 |
| Qwen2.5-VL-32B | Open | - | 0.7980 | 0.7908 | 0.7944 |
| Gemini-2.5-Flash | Proprietary | 20250617 | 0.6925 | 0.8557 | 0.7741 |
| GPT-4o | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
| Qwen2.5-VL-7B | Open | - | 0.7082 | 0.7233 | 0.7158 |
| Qwen2.5-VL-3B | Open | - | 0.6748 | 0.6643 | 0.6696 |
| GPT-4o mini | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
| *Longest-answer baseline* | - | - | 0.4262 | 0.4525 | 0.4394 |
| DeepSeek-VL2 | Open | - | 0.4426 | 0.4216 | 0.4321 |
| *Random* | - | - | 0.2500 | 0.2500 | 0.2500 |
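
The Mean Score column is the unweighted average of the two language scores, rounded to four decimals; for example, for GPT-5 (high):

```python
# Mean Score = (RxnBench-En + RxnBench-Zh) / 2, e.g. GPT-5 (high).
en, zh = 0.9279, 0.9246
mean = (en + zh) / 2
print(mean)  # 0.92625, reported in the table as 0.9263
```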

## 📖 Citation