On Path to Multimodal Generalist: Levels and Benchmarks
[Project] [Leaderboard] [Paper] [🤗 Dataset-HF] [Dataset-Github]
Does higher performance across tasks indicate a stronger MLLM, and bring us closer to AGI?
NO! But synergy does.
Most current MLLMs predominantly build on the language intelligence of LLMs to simulate multimodal intelligence indirectly, merely extending language intelligence to aid multimodal understanding. While LLMs such as ChatGPT have already demonstrated this synergy across NLP tasks, reflecting genuine language intelligence, the vast majority of MLLMs unfortunately do not achieve it across modalities and tasks.
We argue that the key to advancing towards AGI lies in the synergy effect: a capability that enables knowledge learned in one modality or task to generalize to, and enhance mastery of, other modalities or tasks, fostering mutual improvement through interconnected learning.
This project introduces General-Level and General-Bench.
General-Level
A 5-level evaluation system that establishes a new norm for assessing multimodal generalists (multimodal LLMs/agents). Its core is the use of synergy as the evaluative criterion, categorizing capabilities by whether an MLLM preserves synergy across comprehension and generation, as well as across multimodal interactions.
General-Bench
A companion massive multimodal benchmark that encompasses a broad spectrum of skills, modalities, formats, and capabilities, including over 700 tasks and 325K instances.
Figure: Overview of General-Bench, which covers 145 skills across more than 700 tasks with over 325,800 samples, spanning comprehension and generation categories in various modalities.
Figure: Distribution of various capabilities evaluated in General-Bench.
Figure: Distribution of various domains and disciplines covered by General-Bench.
✨✨✨ File Organization Structure
Here is the organization structure of the file system:
General-Bench
├── Image
│   ├── comprehension
│   │   ├── Bird-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   │       └── Acadian_Flycatcher_0070_29150.jpg
│   │   ├── Bottle-Anomaly-Detection
│   │   │   ├── annotation.json
│   │   │   └── images
│   │   └── ...
│   └── generation
│       ├── Layout-to-Face-Image-Generation
│       │   ├── annotation.json
│       │   └── images
│       └── ...
├── Video
│   ├── comprehension
│   │   ├── Human-Object-Interaction-Video-Captioning
│   │   │   ├── annotation.json
│   │   │   └── videos
│   │   └── ...
│   └── generation
│       ├── Scene-Image-to-Video-Generation
│       │   ├── annotation.json
│       │   └── videos
│       └── ...
├── 3d
│   ├── comprehension
│   │   ├── 3D-Furniture-Classification
│   │   │   ├── annotation.json
│   │   │   └── pointclouds
│   │   └── ...
│   └── generation
│       ├── Text-to-3D-Living-and-Arts-Point-Cloud-Generation
│       │   ├── annotation.json
│       │   └── pointclouds
│       └── ...
├── Audio
│   ├── comprehension
│   │   ├── Accent-Classification
│   │   │   ├── annotation.json
│   │   │   └── audios
│   │   └── ...
│   └── generation
│       ├── Video-To-Audio
│       │   ├── annotation.json
│       │   └── audios
│       └── ...
└── NLP
    ├── History-Question-Answering
    │   └── annotation.json
    ├── Abstractive-Summarization
    │   └── annotation.json
    └── ...
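Given the layout above, the annotation file of any task can be located programmatically. The following is a minimal sketch; the helper name `annotation_path` is our own, and it assumes only the directory conventions shown in the tree (Image/Video/3d/Audio tasks live under a comprehension or generation subfolder, while NLP tasks sit directly under NLP/):

```python
from pathlib import Path

def annotation_path(root, modality, task, category=None):
    """Build the path to a task's annotation.json in the General-Bench
    layout. Pass category="comprehension" or "generation" for the
    Image/Video/3d/Audio modalities; omit it for NLP tasks."""
    parts = [modality] if category is None else [modality, category]
    return Path(root, *parts, task, "annotation.json")

# Paths taken from the tree above:
print(annotation_path("General-Bench", "Image", "Bird-Detection", "comprehension").as_posix())
# General-Bench/Image/comprehension/Bird-Detection/annotation.json
print(annotation_path("General-Bench", "NLP", "History-Question-Answering").as_posix())
# General-Bench/NLP/History-Question-Answering/annotation.json
```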
An illustrative example of file formats:
Usage
Please download all the files in this repository. We also provide overview.json, which illustrates the format of our dataset.
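After downloading, one way to enumerate every task is to walk the tree for annotation.json files. This is a hedged sketch (the function name is ours, and it relies only on the convention, visible in the structure above, that each task folder contains one annotation.json):

```python
import json
from pathlib import Path

def iter_tasks(root="General-Bench"):
    """Yield (task_directory, parsed_annotation) for every task found
    under root, discovered via the per-task annotation.json files."""
    for ann in sorted(Path(root).rglob("annotation.json")):
        with open(ann, encoding="utf-8") as f:
            yield ann.parent, json.load(f)

# Example usage once the dataset is downloaded:
# for task_dir, ann in iter_tasks():
#     print(task_dir)
```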
xxxx
Citation
If you find our benchmark useful in your research, please kindly consider citing us:
@article{generalist2025,
  title={On Path to Multimodal Generalist: Levels and Benchmarks},
  author={Hao Fei and Yuan Zhou and Juncheng Li and Xiangtai Li and Qingshan Xu and Bobo Li and Shengqiong Wu and Yaoting Wang and Junbao Zhou and Jiahao Meng and Qingyu Shi and Zhiyuan Zhou and Liangtao Shi and Minghe Gao and Daoan Zhang and Zhiqi Ge and Siliang Tang and Kaihang Pan and Yaobo Ye and Haobo Yuan and Tao Zhang and Weiming Wu and Tianjie Ju and Zixiang Meng and Shilin Xu and Liyu Jia and Wentao Hu and Meng Luo and Jiebo Luo and Tat-Seng Chua and Hanwang Zhang and Shuicheng Yan},
  journal={arXiv},
  year={2025}
}
