---
license: apache-2.0
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 100B<n<1T
---
* **`2024.08.20`** We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark covers 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.
  - Paper: https://arxiv.org/abs/2504.03641
  - Code: https://github.com/MME-Benchmarks/MME-Unify
  - Project page: https://mme-unify.github.io/

## How to use?
You can download the images from this repository; after extraction, the directory structure should look like this:
```
MME-Unify
├── CommonSense_Questions
├── Conditional_Image_to_Video_Generation
├── Fine-Grained_Image_Reconstruction
├── Math_Reasoning
├── Multiple_Images_and_Text_Interlaced
├── Single_Image_Perception_and_Understanding
├── Spot_Diff
├── Text-Image_Editing
├── Text-Image_Generation
├── Text-to-Video_Generation
├── Video_Perception_and_Understanding
└── Visual_CoT
```
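
If the dataset is hosted on the Hugging Face Hub, a snapshot download is the simplest way to fetch all files at once. Below is a minimal sketch using `huggingface_hub.snapshot_download`; the repo id `MME-Benchmarks/MME-Unify` is an assumption here, so substitute the actual dataset id shown on this page:

```python
# A minimal sketch of fetching the dataset and checking the folder layout.
# Assumptions: the repo id "MME-Benchmarks/MME-Unify" (substitute the actual
# dataset id) and that `huggingface_hub` is installed.
from pathlib import Path

from huggingface_hub import snapshot_download

# Download a full snapshot of the dataset repository (images included).
local_dir = snapshot_download(
    repo_id="MME-Benchmarks/MME-Unify",  # assumed repo id
    repo_type="dataset",
)

# Verify that the per-task folders match the tree shown above.
root = Path(local_dir)
for task_dir in sorted(p for p in root.iterdir() if p.is_dir()):
    n_files = sum(1 for f in task_dir.rglob("*") if f.is_file())
    print(f"{task_dir.name}: {n_files} files")
```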
## Dataset details
We present MME-Unify, a comprehensive evaluation framework designed to assess U-MLLMs systematically. Our benchmark includes:
1. **Standardized Traditional Task Evaluation**: We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.
2. **Unified Task Assessment**: We introduce five novel tasks testing multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning.
3. **Comprehensive Model Benchmarking**: We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, and VILA-U, alongside specialized understanding (e.g., Claude-3.5) and generation models (e.g., DALL-E-3).
Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively.
