# 🖼️ MULTI-Benchmark: Multimodal Understanding Leaderboard with Text and Images

<div align="center">

![MULTI](./overview.png)

🌐 [Website](https://OpenDFM.github.io/MULTI-Benchmark/) | 📃 [Paper](https://arxiv.org/abs/2402.03173/) | 🤗 [Dataset](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark) |
🏆 [Leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) | 📮 [Submit](https://wj.sjtu.edu.cn/q/89UmRAJn)

[简体中文](./README_zh.md) | English

</div>

## 🔥 News

- **[2025.10.16]** We have released the ground-truth answers for all questions in MULTI, as the human expert baseline has been surpassed by several models. You can now run the evaluation and get the final scores locally.
- **[2025.9.28]** MULTI is now available online at [https://doi.org/10.1007/s11432-024-4602-x](https://doi.org/10.1007/s11432-024-4602-x).
- **[2025.6.22]** MULTI has been accepted by Science China Information Sciences (Special Topic on Large Multimodal Models).
- **[2025.1.7]** We have updated our [leaderboard](https://opendfm.github.io/MULTI-Benchmark/#leaderboard) with the latest results.
- **[2025.1.2]** We have updated MULTI to v1.3.1.
- **[2024.3.4]** We have released the [evaluation page](https://OpenDFM.github.io/MULTI-Benchmark/static/pages/submit.html) (no longer maintained).
- **[2024.2.19]** We have released the [HuggingFace Page](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/).
- **[2024.2.6]** We have published our [paper](https://arxiv.org/abs/2402.03173/) on arXiv.
- **[2023.12.7]** We have released the [code](https://github.com/OpenDFM/MULTI-Benchmark/tree/main/eval) for our benchmark evaluation.
- **[2023.12.5]** We have released the [GitHub Page](https://OpenDFM.github.io/MULTI-Benchmark/).

## 📖 Overview

The rapid development of multimodal large language models (MLLMs) raises the question of how they compare to human performance. While existing datasets often feature synthetic or overly simplistic tasks, some models have already surpassed human expert baselines. In this paper, we present **MULTI**, a Chinese multimodal dataset derived from authentic examination questions. Comprising over 18,000 carefully selected and refined questions, **MULTI** evaluates models against real-world examination standards, encompassing image-text comprehension, complex reasoning, and knowledge recall. We also introduce **MULTI-Elite**, a hard subset of 500 hand-selected questions, and **MULTI-Extend**, a collection of more than 4,500 external knowledge context pieces for testing in-context learning capabilities. **MULTI** serves not only as a robust evaluation platform but also paves the way for the development of expert-level AI.

## ⬇️ Download

You can download the data with the following commands:

```shell
cd eval
python download_data.py
```

Alternatively, download the [zip file](https://huggingface.co/datasets/OpenDFM/MULTI-Benchmark/blob/main/MULTI_v1.3.1_20251016_release.zip) directly from the Huggingface repository and unzip it.

The structure of `./data` should look like this:

```
./data
├── images                                   # folder containing images
├── problem_v1.3.1_20241210.json             # MULTI (with answers)
├── problem_v1.3.1_20241210_release.json     # MULTI
├── knowledge_v1.2.2_20240212_release.json   # MULTI-Extend
├── hard_list_v1.3.0_20241203.json           # MULTI-Elite
├── captions_v1.3.1_20241210_blip.csv        # image captions generated by BLIP-6.7B
├── captions_v1.3.1_20241210_points.csv      # image captions generated by POINTS-1-5
├── ocr_v1.3.1_20241210_easyocr.csv          # OCR data generated by EasyOCR
└── ocr_v1.3.1_20241210_points.csv           # OCR data generated by POINTS-1-5
```
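
As a quick sanity check after downloading, here is a minimal sketch that loads the release problem file and filters it down to the MULTI-Elite subset. It assumes the problem file is a JSON object keyed by question ID and that the hard list is a JSON array of question IDs; check the files themselves for the exact schema.

```python
import json
from pathlib import Path

DATA_DIR = Path("./data")

# Assumption: the release problem file maps question IDs to question dicts,
# and the hard list is a JSON array of question IDs.
with open(DATA_DIR / "problem_v1.3.1_20241210_release.json", encoding="utf-8") as f:
    problems = json.load(f)
with open(DATA_DIR / "hard_list_v1.3.0_20241203.json", encoding="utf-8") as f:
    elite_ids = set(json.load(f))

# Keep only the MULTI-Elite questions and report the counts.
elite_problems = {qid: q for qid, q in problems.items() if qid in elite_ids}
print(f"MULTI: {len(problems)} questions, MULTI-Elite: {len(elite_problems)} questions")
```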

## 📝 How to Evaluate

We provide a unified evaluation framework in `eval`. Each file in `eval/models` contains an evaluator dedicated to one MLLM or LLM and implements a `generate_answer` method that takes a question as input and returns the model's answer.

```shell
cd eval
python eval.py -h # to list all supported arguments
python eval.py -l # to list all supported models
```

### Environment Preparation Before Usage

Each evaluator requires its own environment setup, and a single universal environment may not work for all of them. **Just follow the official guide for each model.** If the model runs well on its own, it should also work within our framework.

You only need to install a few additional packages to run the evaluation code:

```shell
pip install tiktoken tqdm rouge_chinese jieba matplotlib
```

If you only want to generate the input data for a specific setting (using the `--debug` argument), the line above is all you need.

### Running Evaluation

For a quick start, see these examples.

Test GPT-4o on the whole MULTI benchmark with multimodal input, using MULTI-Extend as external knowledge:

```shell
python eval.py \
    --problem_file ../data/problem_v1.3.1_20241210_release.json \
    --knowledge_file ../data/knowledge_v1.2.2_20240212_release.json \
    --questions_type 0,1,2,3 \
    --image_type 0,1,2 \
    --input_type 2 \
    --model gpt-4o \
    --model_version gpt-4o-latest \
    --api_key sk-************************************************
```

Test Qwen-VL on MULTI-Elite with image-caption input, skipping all questions that contain no images, evaluating only multiple-choice questions, and setting the CUDA device automatically:

```shell
python eval.py \
    --problem_file ../data/problem_v1.3.1_20241210_release.json \
    --subset ../data/hard_list_v1.3.0_20241203.json \
    --caption_file ../data/captions_v1.3.1_20241210_points.csv \
    --questions_type 0,1 \
    --image_type 1,2 \
    --input_type 1 \
    --model qwen-vl \
    --model_dir ../models/Qwen-VL-Chat
```

The evaluation script creates a folder named `results` under the root directory, and the results are saved in `../results/{EXPERIMENT_NAME}`. During the evaluation, the script saves checkpoints in `../results/{EXPERIMENT_NAME}/checkpoints`; you can delete them after the evaluation is done. If the evaluation is interrupted, you can continue from the last checkpoint:

```shell
python eval.py \
    --checkpoint_dir ../results/{EXPERIMENT_NAME}
```

Most of the arguments are saved in `../results/{EXPERIMENT_NAME}/args.json`, so you can continue the evaluation without specifying all the arguments again. Please note that `--api_key` is not saved in `args.json` for security reasons, so you need to specify it again.

```shell
python eval.py \
    --checkpoint_dir ../results/{EXPERIMENT_NAME} \
    --api_key sk-************************************************
```

For more details on the arguments, run `python eval.py -h` and refer to `args.py` and `eval.py`.

You can score the answer sheets directly with the standard answers we provide:

```shell
python metrics.py \
    --label_file ../data/problem_v1.3.1_20241210.json \
    --detail \
    --answer_position end \
    --prediction_file ../results/{EXPERIMENT_NAME}/prediction.json
```

You will find the final scoring data in `../results/{EXPERIMENT_NAME}`.

### Add Support for Your Models

It is recommended to read the code of the existing evaluators in `eval/models` before writing your own implementation.

Create a `class YourModelEvaluator` and implement `generate_answer(self, question: dict)` to match the design used in `eval.py` and `eval.sh`; this should greatly ease the coding process. A minimal sketch is shown below.

**Do not forget to add a reference to your evaluator in `args.py` for convenient usage.**
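
As a rough illustration (not the exact interface), an evaluator skeleton might look like the sketch below. The constructor arguments, the `question_text` and `image_list` keys, and the `run_model` helper are placeholders chosen for this example; refer to the existing evaluators in `eval/models` for the real conventions expected by `eval.py`.

```python
# A minimal, illustrative evaluator skeleton. The constructor arguments, the
# "question_text"/"image_list" keys, and the run_model helper are assumptions
# made for this sketch; see the evaluators in eval/models for the real interface.

class YourModelEvaluator:
    def __init__(self, model_dir: str, model_version: str = "default"):
        self.model_dir = model_dir
        self.model_version = model_version
        # Load your model and tokenizer here, following the model's official guide.
        self.model = None  # placeholder

    def generate_answer(self, question: dict) -> str:
        """Take one question dict and return the model's answer as a string."""
        prompt = question.get("question_text", "")  # assumed key
        images = question.get("image_list", [])     # assumed key
        # Replace this stub with an actual inference call to your model.
        return self.run_model(prompt, images)

    def run_model(self, prompt: str, images: list) -> str:
        # Hypothetical helper; implement the real model call here.
        return "A"
```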

You can run `model_tester.py` in the `eval` folder to check the correctness of your implementation. Various problems, including implementation errors, small bugs in the code, and even wrong environment settings, may cause the evaluation to fail. The examples provided in the file cover most kinds of cases present in our benchmark. Feel free to change the code in it to debug your implementation 😊

```shell
python model_tester.py <args> # args are similar to the default settings above
```

### Create Captions and OCR Data for Images

Generate captions or OCR data for the images and save them in a CSV file with the format below:

```
../data/images/czls/502_1.png,a cartoon drawing of a man standing in front of a large block
../data/images/czls/525_1.png,a chinese newspaper with the headline, china's new year
...
```

We provide two example scripts to generate captions (`image_caption.py`) and OCR data (`image_ocr.py`) for images.
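
For reference, a minimal sketch of producing a caption CSV in this format is shown below; `caption_image` is a hypothetical stand-in for whichever captioning model you use, and the output filename is arbitrary.

```python
import csv
from pathlib import Path

def caption_image(image_path: Path) -> str:
    # Hypothetical stand-in for your captioning model's inference call.
    return "a placeholder caption"

image_dir = Path("../data/images")
with open("../data/captions_custom.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f)
    for image_path in sorted(image_dir.rglob("*.png")):
        rel = image_path.relative_to(image_dir).as_posix()
        # One row per image: path followed by the generated caption.
        writer.writerow([f"../data/images/{rel}", caption_image(image_path)])
```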

## 📮 How to Submit

<details>
<summary>You can now do the evaluation locally</summary>

You first need to prepare a UTF-8 encoded JSON file with the following format:

```
{
    "czsx_0_0": {
        "question_id": "czsx_0_0",
        "question_image_number": 1,
        "image_list": [...], # optional
        "input_message": ..., # optional
        "prediction": "C"
    },
    ...
}
```

If you evaluate the model with our official code, you can simply zip the prediction file `prediction.json` and the configuration file `args.json` from the experiment results folder `./results/{EXPERIMENT_NAME}` into a `.zip` archive.
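
For illustration, a small sketch that bundles the two files is shown below; the folder path and the `submission.zip` filename are placeholders chosen for the example.

```python
import zipfile
from pathlib import Path

exp_dir = Path("./results/EXPERIMENT_NAME")  # replace with your experiment folder

# Bundle the prediction and configuration files into one archive for submission.
with zipfile.ZipFile(exp_dir / "submission.zip", "w", zipfile.ZIP_DEFLATED) as zf:
    zf.write(exp_dir / "prediction.json", arcname="prediction.json")
    zf.write(exp_dir / "args.json", arcname="args.json")
```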

Then, you can submit your result to our [evaluation page](https://opendfm.github.io/MULTI-Benchmark/static/pages/submit.html).

</details>

You are also welcome to open a pull request and contribute your code to our evaluation framework. We will be very grateful for your contribution!

**[Notice]** Thank you for being so interested in the **MULTI** dataset! If you want to add your model to our leaderboard, please fill in [this questionnaire](https://wj.sjtu.edu.cn/q/89UmRAJn). Your information will be kept strictly confidential, so please feel free to fill it out. 🤗

## 📑 Citation

If you find our work useful, please cite us!

```bibtex
@article{zhu2025multi,
    title   = {{MULTI}: Multimodal Understanding Leaderboard with Text and Images},
    author  = {Zichen Zhu and Yang Xu and Lu Chen and Jingkai Yang and Yichuan Ma and Yiming Sun and Hailin Wen and Jiaqi Liu and Jinyu Cai and Yingzi Ma and Situo Zhang and Zihan Zhao and Liangtai Sun and Kai Yu},
    journal = {SCIENCE CHINA Information Sciences},
    year    = {2025},
    volume  = {68},
    number  = {10},
    pages   = {200107.1--200107.26},
    doi     = {https://doi.org/10.1007/s11432-024-4602-x}
}
```

## 📧 Contact Us

If you have any questions, please feel free to contact us via email at `JamesZhutheThird@sjtu.edu.cn`.