introvoyz041 and luckychao committed
Commit e35a9b9 · verified · 0 parent(s)

Duplicate from luckychao/EMMA-mini

Co-authored-by: Yunzhuo Hao <luckychao@users.noreply.huggingface.co>

.gitattributes ADDED
@@ -0,0 +1,59 @@
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.lz4 filter=lfs diff=lfs merge=lfs -text
*.mds filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
# Audio files - uncompressed
*.pcm filter=lfs diff=lfs merge=lfs -text
*.sam filter=lfs diff=lfs merge=lfs -text
*.raw filter=lfs diff=lfs merge=lfs -text
# Audio files - compressed
*.aac filter=lfs diff=lfs merge=lfs -text
*.flac filter=lfs diff=lfs merge=lfs -text
*.mp3 filter=lfs diff=lfs merge=lfs -text
*.ogg filter=lfs diff=lfs merge=lfs -text
*.wav filter=lfs diff=lfs merge=lfs -text
# Image files - uncompressed
*.bmp filter=lfs diff=lfs merge=lfs -text
*.gif filter=lfs diff=lfs merge=lfs -text
*.png filter=lfs diff=lfs merge=lfs -text
*.tiff filter=lfs diff=lfs merge=lfs -text
# Image files - compressed
*.jpg filter=lfs diff=lfs merge=lfs -text
*.jpeg filter=lfs diff=lfs merge=lfs -text
*.webp filter=lfs diff=lfs merge=lfs -text
# Video files - compressed
*.mp4 filter=lfs diff=lfs merge=lfs -text
*.webm filter=lfs diff=lfs merge=lfs -text
Chemistry/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:b5849a452f8114d114f9d355e64a4c89c4c87f6a759d30a7be6d16650996872b
size 8503247
Coding/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f161007210600bf6f254d0cff8f5007bdb386529cea350ed7880f1c1305ddec0
size 25725653
Math/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:929bee1405aaf98ab25ba082ef5dd9a67537977286ac108a757bdc7bb3110490
size 7317246
Physics/test-00000-of-00001.parquet ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:8663a7e70267110f16be9609923d97e8be903a5315b6a16397f78d212ee676c5
size 9067825
README.md ADDED
@@ -0,0 +1,265 @@
---
language:
- en
size_categories:
- n<1K
task_categories:
- question-answering
- visual-question-answering
- multiple-choice
dataset_info:
- config_name: Chemistry
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 12977708.0
    num_examples: 100
  download_size: 8503247
  dataset_size: 12977708.0
- config_name: Coding
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 29637568.0
    num_examples: 100
  download_size: 25725653
  dataset_size: 29637568.0
- config_name: Math
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 7774412.0
    num_examples: 100
  download_size: 7317246
  dataset_size: 7774412.0
- config_name: Physics
  features:
  - name: pid
    dtype: string
  - name: question
    dtype: string
  - name: options
    sequence: string
  - name: answer
    dtype: string
  - name: image_1
    dtype: image
  - name: image_2
    dtype: image
  - name: image_3
    dtype: image
  - name: image_4
    dtype: image
  - name: image_5
    dtype: image
  - name: solution
    dtype: string
  - name: subject
    dtype: string
  - name: task
    dtype: string
  - name: category
    dtype: string
  - name: source
    dtype: string
  - name: type
    dtype: string
  - name: context
    dtype: string
  splits:
  - name: test
    num_bytes: 14273511.0
    num_examples: 100
  download_size: 9067825
  dataset_size: 14273511.0
configs:
- config_name: Chemistry
  data_files:
  - split: test
    path: Chemistry/test-*
- config_name: Coding
  data_files:
  - split: test
    path: Coding/test-*
- config_name: Math
  data_files:
  - split: test
    path: Math/test-*
- config_name: Physics
  data_files:
  - split: test
    path: Physics/test-*
tags:
- chemistry
- physics
- math
- coding
---

## Dataset Description

We introduce **EMMA (Enhanced MultiModal reAsoning)**, a benchmark targeting organic multimodal reasoning across mathematics, physics, chemistry, and coding. EMMA tasks demand advanced cross-modal reasoning that cannot be solved by reasoning separately within each modality, offering an enhanced test suite for MLLMs' reasoning capabilities.

EMMA comprises 2,788 problems across the four domains, of which 1,796 are newly constructed. Within each subject, we further provide fine-grained labels for each question based on the specific skills it measures.

To create a balanced subset of EMMA, we randomly sample 400 questions (100 per subject) from the benchmark to obtain **EMMA-mini**.
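
The balanced sampling described above can be sketched as follows. This is a minimal illustration, not the authors' actual sampling script; the pool sizes and seed are hypothetical:

```python
import random

def sample_balanced_subset(problems_by_subject, per_subject=100, seed=0):
    """Draw `per_subject` problems from each subject pool, deterministically."""
    rng = random.Random(seed)
    subset = []
    for subject, problems in sorted(problems_by_subject.items()):
        subset.extend(rng.sample(problems, per_subject))
    return subset

# Toy pools standing in for the four EMMA subject splits.
pools = {s: [f"{s}_{i}" for i in range(500)]
         for s in ["Math", "Physics", "Chemistry", "Coding"]}
mini = sample_balanced_subset(pools, per_subject=100)
print(len(mini))  # 400 questions, 100 per subject
```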

## Paper Information

- Paper: https://www.arxiv.org/abs/2501.05444
- EMMA Dataset: https://huggingface.co/datasets/luckychao/EMMA
- Code: https://github.com/hychaochao/EMMA
- Project: https://emma-benchmark.github.io/

## Dataset Usage

### Data Downloading

You can download the dataset with the following command (taking the Math subset as an example):

```python
from datasets import load_dataset

dataset = load_dataset("luckychao/EMMA-mini", "Math", split="test")
```

### Data Format

The dataset is provided in Parquet format; each record contains the following attributes:

```
{
    "pid": [string] Problem ID, e.g., "math_1",
    "question": [string] The question text,
    "options": [list] Choice options for multiple-choice problems. For free-form problems, this can be a 'none' value,
    "answer": [string] The correct answer to the problem,
    "image_1": [image],
    "image_2": [image],
    "image_3": [image],
    "image_4": [image],
    "image_5": [image],
    "solution": [string] The detailed reasoning steps required to solve the problem,
    "subject": [string] The subject of the data, e.g., "Math", "Physics", ...,
    "task": [string] The task of the problem, e.g., "Code Choose Vis",
    "category": [string] The category of the problem, e.g., "2D Transformation",
    "source": [string] The original source dataset of the data, e.g., "math-vista". For handmade data, this can be "Newly annotated",
    "type": [string] The type of question, e.g., "Multiple Choice", "Open-ended",
    "context": [string] Background knowledge required for the question. For problems without context, this can be a 'none' value,
}
```
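
Because `options` and `context` may hold a 'none' placeholder and only some of the five image slots are populated, downstream code typically normalizes a record before use. A minimal sketch under those assumptions (the helper itself is hypothetical; the field names follow the schema above):

```python
def normalize_record(rec):
    """Collapse 'none' placeholders and gather the populated image slots."""
    options = rec.get("options")
    is_multiple_choice = bool(options) and options != "none"
    images = [rec[f"image_{i}"] for i in range(1, 6)
              if rec.get(f"image_{i}") is not None]
    return {
        "pid": rec["pid"],
        "question": rec["question"],
        "options": options if is_multiple_choice else None,
        "answer": rec["answer"],
        "images": images,
        "type": rec["type"],
    }

# Toy record mimicking the schema (images are PIL objects in the real dataset).
rec = {"pid": "math_1", "question": "What is 2+2?", "options": ["A", "B"],
       "answer": "A", "image_1": "img", "image_2": None, "image_3": None,
       "image_4": None, "image_5": None, "type": "Multiple Choice"}
print(normalize_record(rec)["images"])  # ['img']
```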

### Automatic Evaluation

To automatically evaluate a model on the dataset, please refer to our GitHub repository [here](https://github.com/hychaochao/EMMA).
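
The repository implements the full evaluation pipeline; purely as a rough illustration of the multiple-choice scoring step, an answer-matching sketch might look like the following (the regex and function names are assumptions, not the repo's actual code):

```python
import re

def extract_choice(response):
    """Pull the last standalone option letter (A-E) from a model response."""
    matches = re.findall(r"\b([A-E])\b", response.upper())
    return matches[-1] if matches else None

def score(predictions, answers):
    """Fraction of responses whose extracted letter matches the gold answer."""
    correct = sum(extract_choice(p) == a for p, a in zip(predictions, answers))
    return correct / len(answers)

preds = ["The answer is B", "I choose (A).", "Not sure"]
golds = ["B", "A", "C"]
print(score(preds, golds))  # 2 of 3 correct
```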

## Citation

```
@misc{hao2025mllmsreasonmultimodalityemma,
    title={Can MLLMs Reason in Multimodality? EMMA: An Enhanced MultiModal ReAsoning Benchmark},
    author={Yunzhuo Hao and Jiawei Gu and Huichen Will Wang and Linjie Li and Zhengyuan Yang and Lijuan Wang and Yu Cheng},
    year={2025},
    eprint={2501.05444},
    archivePrefix={arXiv},
    primaryClass={cs.CV},
    url={https://arxiv.org/abs/2501.05444},
}
```