Update README.md

README.md (changed)
```diff
@@ -9,7 +9,7 @@ language:
 size_categories:
 - 100B<n<1T
 ---
-* **`2024.08.20`** We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our Benchmark
 
 
 Paper: arxiv.org/abs/2408.13257
```
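For context, the block being edited in this hunk is the dataset card's YAML frontmatter. A minimal sketch of its shape is below; only the `size_categories` value comes from the diff, while the `language` value is an illustrative placeholder, not taken from this repository:

```yaml
---
language:
- en            # illustrative placeholder; the actual value is not visible in the diff
size_categories:
- 100B<n<1T     # from the diff above
---
```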
````diff
@@ -20,69 +20,44 @@ Project page: https://aba122.github.io/MME-Unify.github.io/
 
 ## How to use?
 
 ```
-tar -xzf "${base_name}.tar.gz"
-
-# Remove the individual split files
-rm -rf "${base_name}".tar.gz.part_*
-
-rm -rf "${base_name}.tar.gz"
-}
-
-export -f process_files
-
-# Find all .tar.gz.part_aa files and process them in parallel
-find . -name '*.tar.gz.part_aa' | parallel process_files
-
-# Wait for all background jobs to finish
-wait
-
-# nohup bash unzip_file.sh >> unfold.log 2>&1 &
 ```
 
-# MME-RealWorld Data Card
 
 ## Dataset details
 
-1) small data scale leads to a large performance variance;
-2) reliance on model-based annotations results in restricted data quality;
-3) insufficient task difficulty, especially caused by the limited image resolution.
 
-4. **MME-RealWord-CN.**: Existing Chinese benchmark is usually translated from its English version. This has two limitations: 1) Question-image mismatch. The image may relate to an English scenario, which is not intuitively connected to a Chinese question. 2) Translation mismatch [58]. The machine translation is not always precise and perfect enough. We collect additional images that focus on Chinese scenarios, asking Chinese volunteers for annotation. This results in 5,917 QA pairs.
 
-
````
````diff
 ## How to use?
 
+You can download images in this repository and the final structure should look like this:
 
 ```
+MME-Unify
+├── CommonSense_Questions
+├── Conditional_Image_to_Video_Generation
+├── Fine-Grained_Image_Reconstruction
+├── Math_Reasoning
+├── Multiple_Images_and_Text_Interlaced
+├── Single_Image_Perception_and_Understanding
+├── Spot_Diff
+├── Text-Image_Editing
+├── Text-Image_Generation
+├── Text-to-Video_Generation
+├── Video_Perception_and_Understanding
+└── Visual_CoT
 ```
````
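To confirm a download matches the folder layout above, a small check can help. This is a sketch, not part of the dataset tooling: `check_layout` is a hypothetical helper, and only the twelve folder names are taken from the tree:

```shell
#!/usr/bin/env bash
# Hypothetical helper: report any expected subtask folder missing
# under the given root. Folder names are from the layout above.
check_layout() {
  local root="$1" missing=0
  for name in \
      CommonSense_Questions \
      Conditional_Image_to_Video_Generation \
      Fine-Grained_Image_Reconstruction \
      Math_Reasoning \
      Multiple_Images_and_Text_Interlaced \
      Single_Image_Perception_and_Understanding \
      Spot_Diff \
      Text-Image_Editing \
      Text-Image_Generation \
      Text-to-Video_Generation \
      Video_Perception_and_Understanding \
      Visual_CoT; do
    if [ ! -d "$root/$name" ]; then
      echo "missing: $name"
      missing=1
    fi
  done
  return "$missing"
}

# Usage: check_layout ./MME-Unify
```

The repository itself can be fetched with, e.g., `huggingface-cli download <repo_id> --repo-type dataset --local-dir MME-Unify` (substitute the actual repo id).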
````diff
 ## Dataset details
 
+We present MME-Unify, a comprehensive evaluation framework designed to assess U-MLLMs systematically. Our benchmark includes:
 
+1. **Standardized Traditional Task Evaluation.** We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.
 
+2. **Unified Task Assessment.** We introduce five novel tasks testing multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning.
 
+3. **Comprehensive Model Benchmarking.** We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, and VILA-U, alongside specialized understanding models (e.g., Claude-3.5) and generation models (e.g., DALL-E-3).
 
+Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively.
 
+
````