---
license: apache-2.0
task_categories:
  - multiple-choice
  - question-answering
  - visual-question-answering
language:
  - en
size_categories:
  - 100B<n<1T
---

# MME-Unify
- **2024.08.20** 🌟 We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark covers 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.

Paper: https://arxiv.org/abs/2504.03641

Code: https://github.com/MME-Benchmarks/MME-Unify

Project page: https://mme-unify.github.io/

## How to use?

You can download the images from this repository; the final directory structure should look like this:

```
MME-Unify
├── CommonSense_Questions
├── Conditional_Image_to_Video_Generation
├── Fine-Grained_Image_Reconstruction
├── Math_Reasoning
├── Multiple_Images_and_Text_Interlaced
├── Single_Image_Perception_and_Understanding
├── Spot_Diff
├── Text-Image_Editing
├── Text-Image_Generation
├── Text-to-Video_Generation
├── Video_Perception_and_Understanding
└── Visual_CoT
```
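To fetch everything in one step, the sketch below uses the `huggingface_hub` library. The `repo_id` shown is an assumption; substitute this dataset's actual identifier on the Hub.

```python
# Minimal download sketch using huggingface_hub.
# NOTE: the repo_id below is an assumed placeholder; replace it with this
# dataset's actual identifier on the Hugging Face Hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="wulin222/MME-Unify",  # assumed identifier
    repo_type="dataset",
    local_dir="MME-Unify",         # yields the directory layout shown above
)
```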

## Dataset details

We present MME-Unify, a comprehensive evaluation framework designed to assess U-MLLMs systematically. Our benchmark includes:

1. **Standardized Traditional Task Evaluation.** We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.

2. **Unified Task Assessment.** We introduce five novel tasks that test multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning.

3. **Comprehensive Model Benchmarking.** We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, and VILA-U, alongside specialized understanding models (e.g., Claude-3.5) and generation models (e.g., DALL-E-3).

Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively.