---
license: apache-2.0
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 100B<n<1T
---

* **`2024.08.20`** 🔥 We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark covers 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.

Paper: https://arxiv.org/abs/2504.03641

Code: https://github.com/MME-Benchmarks/MME-Unify

Project page: https://mme-unify.github.io/

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/pdVCUz1zPQdcBqVZfmG1R.png)

## How to use?

You can download the images in this repository; the final directory structure should look like this:

```
MME-Unify
├── CommonSense_Questions
├── Conditional_Image_to_Video_Generation
├── Fine-Grained_Image_Reconstruction
├── Math_Reasoning
├── Multiple_Images_and_Text_Interlaced
├── Single_Image_Perception_and_Understanding
├── Spot_Diff
├── Text-Image_Editing
├── Text-Image_Generation
├── Text-to-Video_Generation
├── Video_Perception_and_Understanding
└── Visual_CoT
```
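
If you prefer a programmatic download, the snippet below is a minimal sketch using `huggingface_hub`'s `snapshot_download`. The `repo_id` shown is an assumption inferred from this card's project links; substitute the actual dataset id if it differs.

```python
# Minimal sketch: fetch the full benchmark with huggingface_hub.
# NOTE: the repo_id below is an assumption based on this card's links;
# replace it with the actual dataset repo id if different.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="MME-Benchmarks/MME-Unify",  # assumed dataset id
    repo_type="dataset",                 # a dataset repo, not a model repo
    local_dir="MME-Unify",               # reproduces the layout shown above
)
print(f"Benchmark files are under: {local_dir}")
```

Cloning the repository with `git` (with Git LFS installed) should produce the same directory layout.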

## Dataset details

We present MME-Unify, a comprehensive evaluation framework designed to assess U-MLLMs systematically. Our benchmark includes:

1. **Standardized Traditional Task Evaluation.** We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.

2. **Unified Task Assessment.** We introduce five novel tasks that test multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning.

3. **Comprehensive Model Benchmarking.** We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, and VILA-U, alongside specialized understanding models (e.g., Claude-3.5) and generation models (e.g., DALL-E-3).

Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/9Zzyo3HzO1F_vWvu_Cm9N.png)