---
license: apache-2.0
task_categories:
- multiple-choice
- question-answering
- visual-question-answering
language:
- en
size_categories:
- 100B<n<1T
---
* **`2024.08.20`** 🌟 We are proud to open-source MME-Unify, a comprehensive evaluation framework designed to systematically assess U-MLLMs. Our benchmark covers 10 tasks with 30 subtasks, ensuring consistent and fair comparisons across studies.

Paper: arxiv.org/abs/2408.13257

Code: https://github.com/aba122/MME-Unify

Project page: https://aba122.github.io/MME-Unify.github.io/

## How to use?

Since the image files are large, they have been split into multiple compressed parts. First merge the parts that share the same base name, then extract the merged archives:

```bash
#!/bin/bash

# Process one set of split files: merge the parts, extract, then clean up
process_files() {
    local part="$1"

    # Directory holding the parts, and the archive's base name
    local dir base_name
    dir=$(dirname "$part")
    base_name=$(basename "$part" .tar.gz.part_aa)

    # Merge the split parts into a single archive
    cat "${dir}/${base_name}".tar.gz.part_* > "${dir}/${base_name}.tar.gz"

    # Extract the merged archive next to the parts
    tar -xzf "${dir}/${base_name}.tar.gz" -C "${dir}"

    # Remove the split parts and the merged archive
    rm -f "${dir}/${base_name}".tar.gz.part_* "${dir}/${base_name}.tar.gz"
}
export -f process_files

# Find all .tar.gz.part_aa files and process them in parallel (requires GNU parallel)
find . -name '*.tar.gz.part_aa' | parallel process_files

# To run in the background, e.g.:
# nohup bash unzip_file.sh >> unfold.log 2>&1 &
```
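If GNU parallel is not installed, the parts can also be processed sequentially. The sketch below assumes the same `*.tar.gz.part_*` naming as the script above; the `merge_and_extract` helper name is ours, not part of the dataset tooling:

```bash
#!/bin/bash
# Sequential fallback when GNU parallel is unavailable (illustrative sketch).
merge_and_extract() {
    local first="$1"
    local base="${first%.part_aa}"        # e.g. ./images.tar.gz
    cat "${base}".part_* > "${base}"      # merge the split parts in lexical order
    tar -xzf "${base}" -C "$(dirname "${base}")"
    rm -f "${base}".part_* "${base}"      # clean up parts and merged archive
}

# Handle every split archive under the current directory, one at a time
find . -name '*.tar.gz.part_aa' -print0 |
while IFS= read -r -d '' part; do
    merge_and_extract "$part"
done
```

This trades the speed of parallel extraction for having no dependency beyond `tar` itself.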

# MME-RealWorld Data Card

## Dataset details

Existing Multimodal Large Language Model (MLLM) benchmarks share several shortcomings that make it hard to measure the challenges models face in the real world:
1) their small data scale leads to large performance variance;
2) their reliance on model-based annotations limits data quality;
3) their task difficulty is insufficient, largely because of limited image resolution.

We present MME-RealWorld, a benchmark meticulously designed to address real-world applications with practical relevance. Featuring 13,366 high-resolution images averaging 2,000 × 1,500 pixels, MME-RealWorld poses substantial recognition challenges. Our dataset encompasses 29,429 annotations across 43 tasks, all expertly curated by a team of 25 crowdsource workers and 7 MLLM experts. The main advantages of MME-RealWorld over existing MLLM benchmarks are as follows:

1. **Data Scale**: With the efforts of 32 volunteers in total, we have manually annotated 29,429 QA pairs focused on real-world scenarios, making this the largest fully human-annotated benchmark known to date.

2. **Data Quality**: 1) Resolution: many image details, such as a scoreboard at a sports event, carry critical information. These details can only be properly interpreted with high-resolution images, which are essential for providing meaningful assistance to humans. To the best of our knowledge, MME-RealWorld features the highest average image resolution among existing competitors. 2) Annotation: all annotations are completed manually, with a professional team cross-checking the results to ensure data quality.

3. **Task Difficulty and Real-World Utility**: Even the most advanced models have yet to surpass 60% accuracy. Moreover, many real-world tasks are significantly harder than those in traditional benchmarks: in video monitoring, for example, a model may need to count as many as 133 vehicles, and in remote sensing it must identify and count small objects on maps whose average resolution exceeds 5,000 × 5,000.

4. **MME-RealWorld-CN**: Existing Chinese benchmarks are usually translated from their English versions, which has two limitations: 1) question-image mismatch, where the image depicts an English-language scenario that is not intuitively connected to a Chinese question; and 2) translation mismatch [58], since machine translation is not always precise enough. We therefore collect additional images focused on Chinese scenarios and ask Chinese volunteers to annotate them, resulting in 5,917 QA pairs.