Dataset card — Modalities: Image, Text · Formats: parquet · Libraries: Datasets, pandas
AI4Industry committed · commit b2a9cd4 (verified) · 1 parent: 4ff7fd1

Add Qwen3-VL-4B-Instruct and Qwen3-VL-2B-Instruct

Files changed (1):

README.md (+8 −4):
@@ -155,9 +155,11 @@ We evaluated several of the latest popular MLLMs, including both closed-source a
 | Qwen3-VL-8B-Instruct | × | Open | - | 0.7548 | 0.7495 | 0.7521 |
 | Qwen3-VL-30BA3B-Instruct | × | Open | - | 0.7456 | 0.7436 | 0.7456 |
 | GPT-4o | × | Proprietary | 20240806 | 0.7462 | 0.7436 | 0.7449 |
-| Qwen2.5-VL-7b | × | Open | - | 0.7082 | 0.7233 | 0.7158 |
-| Qwen2.5-VL-3b | × | Open | - | 0.6748 | 0.6643 | 0.6696 |
+| Qwen2.5-VL-7B | × | Open | - | 0.7082 | 0.7233 | 0.7158 |
+| Qwen3-VL-4B-Instruct | × | Open | - | 0.7023 | 0.7023 | 0.7023 |
+| Qwen2.5-VL-3B | × | Open | - | 0.6748 | 0.6643 | 0.6696 |
 | GPT-4o mini | × | Proprietary | 20240718 | 0.6636 | 0.6066 | 0.6351 |
+| Qwen3-VL-2B-Instruct | × | Open | - | 0.5711 | 0.5928 | 0.5820 |
 | *Choice longest answer* | - | - | - | 0.4262 | 0.4525 | 0.4394 |
 | Deepseek-VL2 | × | Open | - | 0.4426 | 0.4216 | 0.4321 |
 | *Random* | - | - | - | 0.2500 | 0.2500 | 0.2500 |
@@ -198,9 +200,11 @@ We also conducted separate evaluations for different task types (in RxnBench-en)
 | Qwen3-VL-8B-Instruct | × | Open | - | 0.7113 | 0.8175 | 0.8446 | 0.8241 | 0.7627 | 0.5075 |
 | Qwen3-VL-30BA3B-Instruct | × | Open | - | 0.7042 | 0.7937 | 0.8311 | 0.7824 | 0.7119 | 0.5970 |
 | GPT-4o | × | Proprietary | 20240806 | 0.7359 | 0.8175 | 0.7973 | 0.7500 | 0.7627 | 0.5224 |
-| Qwen2.5-VL-7b | × | Open | - | 0.6678 | 0.7659 | 0.8041 | 0.7130 | 0.6441 | 0.5373 |
-| Qwen2.5-VL-3b | × | Open | - | 0.6426 | 0.7381 | 0.7635 | 0.6898 | 0.6610 | 0.4776 |
+| Qwen2.5-VL-7B | × | Open | - | 0.6678 | 0.7659 | 0.8041 | 0.7130 | 0.6441 | 0.5373 |
+| Qwen3-VL-4B-Instruct | × | Open | - | 0.6708 | 0.7302 | 0.7804 | 0.7222 | 0.6610 | 0.5970 |
+| Qwen2.5-VL-3B | × | Open | - | 0.6426 | 0.7381 | 0.7635 | 0.6898 | 0.6610 | 0.4776 |
 | GPT-4o mini | × | Proprietary | 20240718 | 0.6391 | 0.7302 | 0.7500 | 0.6667 | 0.6271 | 0.4627 |
+| Qwen3-VL-2B-Instruct | × | Open | - | 0.5405 | 0.6190 | 0.6318 | 0.6250 | 0.6102 | 0.3731 |
 | Deepseek-VL2 | × | Open | - | 0.4120 | 0.5040 | 0.4899 | 0.4907 | 0.3729 | 0.3060 |
 
 
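The page lists `pandas` among the libraries supported for this dataset. As a minimal sketch of working with the benchmark numbers, the markdown rows added by this commit can be parsed into a DataFrame; note the column names below are assumptions, since the actual table headers fall outside these diff hunks.

```python
import pandas as pd

# Two rows copied verbatim from the first table in the diff above.
rows = """
| Qwen3-VL-4B-Instruct | × | Open | - | 0.7023 | 0.7023 | 0.7023 |
| Qwen3-VL-2B-Instruct | × | Open | - | 0.5711 | 0.5928 | 0.5820 |
"""

# Split each markdown table row on "|" and strip the cell whitespace.
records = [
    [cell.strip() for cell in line.strip().strip("|").split("|")]
    for line in rows.strip().splitlines()
]

# Column names are placeholders, not the table's real headers.
df = pd.DataFrame(
    records,
    columns=["model", "thinking", "access", "version", "score_1", "score_2", "overall"],
)
score_cols = ["score_1", "score_2", "overall"]
df[score_cols] = df[score_cols].astype(float)
print(df)
```

The same parsing works for the second, per-task-type table by swapping in its row text and widening the column list to match.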