---
license: apache-2.0
task_categories:
  - image-text-to-text
tags:
  - multimodal-llm-evaluation
  - benchmark
language:
  - en
dataset_info:
  features:
    - name: question_id
      dtype: string
    - name: image
      dtype: image
    - name: question
      dtype: string
    - name: answer
      dtype: string
    - name: category
      dtype: string
  splits:
    - name: test
      num_bytes: 669725855.604
      num_examples: 2374
  download_size: 201071792
  dataset_size: 669725855.604
configs:
  - config_name: default
    data_files:
      - split: test
        path: data/test-*
---

# MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models

The MME dataset is the first comprehensive evaluation benchmark specifically designed for Multimodal Large Language Models (MLLMs). It measures both perception and cognition abilities across a total of 14 subtasks. To ensure fair evaluation and avoid data leakage, all instruction-answer pairs are manually designed. The concise instruction design allows for a fair comparison of MLLMs without extensive prompt engineering, facilitating quantitative statistics. MME has been used to comprehensively evaluate 30 advanced MLLMs, revealing areas for improvement and potential directions for future model optimization.
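Because every MME item pairs an image with a short instruction and a concise ground-truth answer (the fields `question_id`, `image`, `question`, `answer`, and `category` listed in the metadata above), evaluation reduces to comparing model outputs against the reference answers per subtask. The sketch below shows one minimal way to do this; the repo id placeholder, the `my_model` callable, and the case-insensitive "Yes"/"No" comparison are assumptions for illustration, not part of this card.

```python
from collections import defaultdict

def score_by_category(records):
    """Compute per-category accuracy from (category, prediction, answer) triples.

    MME instructions are designed for short answers, so we compare
    case-insensitively after stripping whitespace (an assumed convention).
    """
    correct = defaultdict(int)
    total = defaultdict(int)
    for category, prediction, answer in records:
        total[category] += 1
        if prediction.strip().lower() == answer.strip().lower():
            correct[category] += 1
    return {category: correct[category] / total[category] for category in total}

# Hypothetical usage with the Hugging Face `datasets` library, using the
# field names from the dataset_info above (replace the repo id with this
# card's actual repository, and `my_model` with your MLLM's inference call):
#
#   from datasets import load_dataset
#   ds = load_dataset("path/to/MME", split="test")
#   records = [(ex["category"], my_model(ex["image"], ex["question"]), ex["answer"])
#              for ex in ds]
#   print(score_by_category(records))
```

Aggregating per `category` rather than over the whole test split mirrors how the benchmark reports results across its 14 perception and cognition subtasks.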

Paper: MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models

Authors: Chaoyou Fu, Peixian Chen, Yunhang Shen, Yulei Qin, Mengdan Zhang, Xu Lin, Zhenyu Qiu, Wei Lin, Jinrui Yang, Xiawu Zheng, Ke Li, Xing Sun, Rongrong Ji

Project Page: https://github.com/BradyFU/Awesome-Multimodal-Large-Language-Models/tree/Evaluation

## Abstract

Multimodal Large Language Model (MLLM) relies on the powerful LLM to perform multimodal tasks, showing amazing emergent abilities in recent studies, such as writing poems based on an image. However, it is difficult for these case studies to fully reflect the performance of MLLM, lacking a comprehensive evaluation. In this paper, we fill in this blank, presenting the first comprehensive MLLM Evaluation benchmark MME. It measures both perception and cognition abilities on a total of 14 subtasks. In order to avoid data leakage that may arise from direct use of public datasets for evaluation, the annotations of instruction-answer pairs are all manually designed. The concise instruction design allows us to fairly compare MLLMs, instead of struggling in prompt engineering. Besides, with such an instruction, we can also easily carry out quantitative statistics. A total of 30 advanced MLLMs are comprehensively evaluated on our MME, which not only suggests that existing MLLMs still have a large room for improvement, but also reveals the potential directions for the subsequent model optimization. The data are released at the project page linked above.

## Citation

If you find this dataset useful for your research, please cite the original paper:

```bibtex
@article{fu2023mme,
  title={MME: A Comprehensive Evaluation Benchmark for Multimodal Large Language Models},
  author={Fu, Chaoyou and Chen, Peixian and Shen, Yunhang and Qin, Yulei and Zhang, Mengdan and Lin, Xu and Qiu, Zhenyu and Lin, Wei and Yang, Jinrui and Zheng, Xiawu and Li, Ke and Sun, Xing and Ji, Rongrong},
  journal={arXiv preprint arXiv:2306.13394},
  year={2023}
}
```