---
language:
- en
license: apache-2.0
size_categories:
- 100B<n<1T
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
- image-text-to-text
---

* **`2024.08.20`** 🌟 We are proud to open-source MME-Unify, a comprehensive evaluation framework for systematically assessing unified multimodal large language models (U-MLLMs). The benchmark covers 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.


Paper: https://arxiv.org/abs/2504.03641

Code: https://github.com/MME-Benchmarks/MME-Unify

Project page: https://mme-unify.github.io/



![Leaderboard](leaderboard.png)


## How to use?

You can download the images from this repository; after extraction, the directory structure should look like this:

```
MME-Unify
├── CommonSense_Questions
├── Conditional_Image_to_Video_Generation
├── Fine-Grained_Image_Reconstruction
├── Math_Reasoning
├── Multiple_Images_and_Text_Interlaced
├── Single_Image_Perception_and_Understanding
├── Spot_Diff
├── Text-Image_Editing
├── Text-Image_Generation
├── Text-to-Video_Generation
├── Video_Perception_and_Understanding
└── Visual_CoT
```
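
If you prefer a scripted download, the snippet below is a minimal sketch using the `huggingface_hub` Python client. The `repo_id` is an assumption (it is not stated on this card) and should be replaced with the dataset's actual identifier on the Hub.

```python
from huggingface_hub import snapshot_download

# Download the full dataset snapshot to a local directory.
# NOTE: the repo_id below is an assumed placeholder -- substitute the
# dataset's real Hugging Face identifier before running.
local_dir = snapshot_download(
    repo_id="MME-Benchmarks/MME-Unify",  # assumed identifier
    repo_type="dataset",
)
print(f"MME-Unify downloaded to: {local_dir}")
```

After downloading, check that the task folders listed above are present before running any evaluation.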

## Dataset details


We present MME-Unify, a comprehensive evaluation framework designed to assess U-MLLMs systematically. Our benchmark includes:

1. **Standardized Traditional Task Evaluation.** We sample from 12 datasets, covering 10 tasks with 30 subtasks, to ensure consistent and fair comparisons across studies.

2. **Unified Task Assessment.** We introduce five novel tasks that test multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning.

3. **Comprehensive Model Benchmarking.** We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, and VILA-U, alongside specialized understanding models (e.g., Claude-3.5) and generation models (e.g., DALL-E-3).

Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively.


![](Bin.png)