wulin222 committed on
Commit 07959f3
· verified ·
1 Parent(s): e318556

Update README.md

Files changed (1)
  1. README.md +22 -47
README.md CHANGED
@@ -9,7 +9,7 @@ language:
 size_categories:
 - 100B<n<1T
 ---
- * **`2024.08.20`** 🌟 We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our Benchmark covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.
 
 
 Paper: arxiv.org/abs/2408.13257
@@ -20,69 +20,44 @@ Project page: https://aba122.github.io/MME-Unify.github.io/
 
 
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/ZnczJh10NHm0u03p7kjm_.png)
 
 
 ## How to use?
 
- Since the image files are large and have been split into multiple compressed parts, please first merge the compressed files with the same name and then extract them together.
 
 ```
- #!/bin/bash
-
- # Function to process each set of split files
- process_files() {
-   local part="$1"
-
-   # Extract the base name of the file
-   local base_name=$(basename "$part" .tar.gz.part_aa)
-
-   # Merge the split files into a single archive
-   cat "${base_name}".tar.gz.part_* > "${base_name}.tar.gz"
-
-   # Extract the merged archive
-   tar -xzf "${base_name}.tar.gz"
-
-   # Remove the individual split files
-   rm -rf "${base_name}".tar.gz.part_*
-
-   rm -rf "${base_name}.tar.gz"
- }
-
- export -f process_files
-
- # Find all .tar.gz.part_aa files and process them in parallel
- find . -name '*.tar.gz.part_aa' | parallel process_files
-
- # Wait for all background jobs to finish
- wait
-
- # nohup bash unzip_file.sh >> unfold.log 2>&1 &
-
-
 ```
 
- # MME-RealWorld Data Card
-
 ## Dataset details
 
 
- Existing Multimodal Large Language Model benchmarks present several common barriers that make it difficult to measure the significant challenges that models face in the real world, including:
- 1) small data scale leads to a large performance variance;
- 2) reliance on model-based annotations results in restricted data quality;
- 3) insufficient task difficulty, especially caused by the limited image resolution.
 
- We present MME-RealWord, a benchmark meticulously designed to address real-world applications with practical relevance. Featuring 13,366 high-resolution images averaging 2,000 × 1,500 pixels, MME-RealWord poses substantial recognition challenges. Our dataset encompasses 29,429 annotations across 43 tasks, all expertly curated by a team of 25 crowdsource workers and 7 MLLM experts. The main advantages of MME-RealWorld compared to existing MLLM benchmarks as follows:
 
- 1. **Data Scale**: with the efforts of a total of 32 volunteers, we have manually annotated 29,429 QA pairs focused on real-world scenarios, making this the largest fully human-annotated benchmark known to date.
 
- 2. **Data Quality**: 1) Resolution: Many image details, such as a scoreboard in a sports event, carry critical information. These details can only be properly interpreted with high-resolution images, which are essential for providing meaningful assistance to humans. To the best of our knowledge, MME-RealWorld features the highest average image resolution among existing competitors. 2) Annotation: All annotations are manually completed, with a professional team cross-checking the results to ensure data quality.
 
- 3. **Task Difficulty and Real-World Utility.**: We can see that even the most advanced models have not surpassed 60% accuracy. Additionally, many real-world tasks are significantly more difficult than those in traditional benchmarks. For example, in video monitoring, a model needs to count the presence of 133 vehicles, or in remote sensing, it must identify and count small objects on a map with an average resolution exceeding 5000×5000.
 
- 4. **MME-RealWord-CN.**: Existing Chinese benchmark is usually translated from its English version. This has two limitations: 1) Question-image mismatch. The image may relate to an English scenario, which is not intuitively connected to a Chinese question. 2) Translation mismatch [58]. The machine translation is not always precise and perfect enough. We collect additional images that focus on Chinese scenarios, asking Chinese volunteers for annotation. This results in 5,917 QA pairs.
 
- ![image/png](https://cdn-uploads.huggingface.co/production/uploads/623d8ca4c29adf5ef6175615/Do69D0sNlG9eqr9cyE7bm.png)
 
 
 size_categories:
 - 100B<n<1T
 ---
+ * **`2024.08.20`** 🌟 We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark covers 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.
 
 
 Paper: arxiv.org/abs/2408.13257
 
 
 
+ ![](leaderboard.png)
 
 
 ## How to use?
 
+ You can download the images in this repository; the final structure should look like this:
 
 ```
+ MME-Unify
+ ├── CommonSense_Questions
+ ├── Conditional_Image_to_Video_Generation
+ ├── Fine-Grained_Image_Reconstruction
+ ├── Math_Reasoning
+ ├── Multiple_Images_and_Text_Interlaced
+ ├── Single_Image_Perception_and_Understanding
+ ├── Spot_Diff
+ ├── Text-Image_Editing
+ ├── Text-Image_Generation
+ ├── Text-to-Video_Generation
+ ├── Video_Perception_and_Understanding
+ └── Visual_CoT
 ```
 
 ## Dataset details
 
 
+ We present MME-Unify, a comprehensive evaluation framework designed to assess U-MLLMs systematically. Our benchmark includes:
 
+ 1. **Standardized Traditional Task Evaluation**: We sample from 12 datasets, covering 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.
 
+ 2. **Unified Task Assessment**: We introduce five novel tasks testing multimodal reasoning, including image editing, commonsense QA with image generation, and geometric reasoning.
 
+ 3. **Comprehensive Model Benchmarking**: We evaluate 12 leading U-MLLMs, such as Janus-Pro, EMU3, and VILA-U, alongside specialized understanding models (e.g., Claude-3.5) and generation models (e.g., DALL-E-3).
 
+ Our findings reveal substantial performance gaps in existing U-MLLMs, highlighting the need for more robust models capable of handling mixed-modality tasks effectively.
 
 
+ ![](Bin.png)
 
 
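As a quick post-download check of the new layout added by this commit, the following is an illustrative sketch (not part of the repository; `EXPECTED_DIRS` and `missing_dirs` are hypothetical names) that reports which of the twelve task folders listed in the README tree are absent from a local copy:

```python
from pathlib import Path

# Top-level task folders of MME-Unify, as listed in the README tree above.
EXPECTED_DIRS = [
    "CommonSense_Questions",
    "Conditional_Image_to_Video_Generation",
    "Fine-Grained_Image_Reconstruction",
    "Math_Reasoning",
    "Multiple_Images_and_Text_Interlaced",
    "Single_Image_Perception_and_Understanding",
    "Spot_Diff",
    "Text-Image_Editing",
    "Text-Image_Generation",
    "Text-to-Video_Generation",
    "Video_Perception_and_Understanding",
    "Visual_CoT",
]

def missing_dirs(root: str) -> list:
    """Return the expected task folders that are not present under `root`."""
    base = Path(root)
    return [name for name in EXPECTED_DIRS if not (base / name).is_dir()]

if __name__ == "__main__":
    # Point this at the local MME-Unify root after downloading; an empty
    # list means every expected task folder is in place.
    print(missing_dirs("MME-Unify"))
```

One way to obtain a local copy before running the check is `huggingface_hub.snapshot_download(repo_id=..., repo_type="dataset")`, with the repo id shown on this dataset page.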