Improve dataset card: Add paper info, project link, task categories, tags, and language
#3 opened by nielsr (HF Staff)

README.md CHANGED
@@ -1,5 +1,12 @@
 ---
 license: apache-2.0
+task_categories:
+- image-text-to-text
+tags:
+- multimodal-llm-evaluation
+- benchmark
+language:
+- en
 dataset_info:
   features:
   - name: question_id
@@ -24,3 +31,29 @@ configs:
 - split: test
   path: data/test-*
 ---
+
+# MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models
+
+The MME dataset is the first comprehensive evaluation benchmark specifically designed for Multimodal Large Language Models (MLLMs). It measures both perception and cognition abilities across a total of 14 subtasks. To ensure fair evaluation and avoid data leakage, all instruction-answer pairs are manually designed. The concise instruction design allows for a fair comparison of MLLMs without extensive prompt engineering, facilitating quantitative statistics. MME has been used to comprehensively evaluate 30 advanced MLLMs, revealing areas for improvement and potential directions for future model optimization.
+
+**Paper**: [MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models](https://huggingface.co/papers/2306.13394)
+**Authors**: Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Rongrong Ji
+
+**Project Page**: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation
+
+## Abstract
+
+Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization. The data are released at the project page.
+
+## Citation
+
+If you find this dataset useful for your research, please cite the original paper:
+
+```bibtex
+@article{fu2023mme,
+  title={MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models},
+  author={Fu, Chaoyou and Chen, Peixian and Shen, Yunhang and Qin, Yulei and Zhang, Mengdan and Lin, Xu and Qiu, Zhenyu and Lin, Wei and Yang, Jinrui and Zheng, Xiawu and Li, Ke and Sun, Xing and Ji, Rongrong},
+  journal={arXiv preprint arXiv:2306.13394},
+  year={2023}
+}
+```
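For context on the `configs` section this card declares (a single `test` split under `data/test-*`), a minimal sketch of loading and inspecting the dataset with the `datasets` library is shown below. The repo id is a placeholder, and field names other than `question_id` (the only feature visible in this diff) are assumptions; substitute this dataset's actual Hub path and check its schema.

```python
# Minimal sketch: load the test split declared in the card's `configs` section
# and inspect one record. The repo id below is a placeholder for this dataset's
# Hub path; only `question_id` is confirmed by the card shown in this PR.
from datasets import load_dataset

ds = load_dataset("<org>/<dataset-name>", split="test")  # placeholder repo id

print(ds)                     # row count and column names
sample = ds[0]
print(sample["question_id"])  # `question_id` is declared in the card's features
print(list(sample.keys()))    # remaining fields (e.g. image/question/answer) may vary
```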