---
base_model:
- llava-hf/llava-1.5-7b-hf
- OpenGVLab/InternVL-Chat-ViT-6B-Vicuna-7B
datasets:
- MLLM-CL/MLLM-CL
- MLLM-CL/MLLM-CL-ReplayData
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: image-text-to-text
tags:
- finance
- medical
- AD
- MLLM-CL
- Sci
- RS
- Math
- OCR
- Count
- GUI-Agent
- DCL
- ACL
- llava
- multimodal
- image-to-text
- text-generation
base_model_relation: adapter
---

## MLLM-CL Benchmark Description
MLLM-CL is a novel benchmark encompassing domain and ability continual learning: the former focuses on independently and identically distributed (IID) evaluation across evolving mainstream domains, whereas the latter evaluates non-IID scenarios with emerging model abilities.
For more details, please refer to: 

**MLLM-CL: Continual Learning for Multimodal Large Language Models** [[paper](https://arxiv.org/abs/2506.05453)], [[HF paper](https://huggingface.co/papers/2506.05453)], [[code](https://github.com/bjzhb666/MLLM-CL/)].
![Overview of the MLLM-CL benchmark](MLLM-CL.png)
[Hongbo Zhao](https://scholar.google.com/citations?user=Gs22F0UAAAAJ&hl=zh-CN), [Fei Zhu](https://impression2805.github.io/), [Haiyang Guo](https://ghy0501.github.io/guohaiyang0501.github.io/), [Meng Wang](https://moenupa.github.io/), Rundong Wang, [Gaofeng Meng](https://scholar.google.com/citations?hl=zh-CN&user=5hti_r0AAAAJ), [Zhaoxiang Zhang](https://scholar.google.com/citations?hl=zh-CN&user=qxWfV6cAAAAJ)

## Usage
This repository open-sources all the experts from the MLLM-CL experiments, organized into four branches (DCL_InternVL, DCL_LLaVA, ACL_InternVL, ACL_LLaVA); a download sketch follows below.
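
The sketch below fetches one expert branch with `huggingface_hub` and attaches it to its LLaVA base model. The repo ID `MLLM-CL/MLLM-CL-experts` is a placeholder (substitute this repo's actual ID), and the loading step assumes the experts are stored as PEFT/LoRA adapters, consistent with the `base_model_relation: adapter` metadata; check the branch contents for the exact format before relying on this.
```python
# Minimal sketch, assuming this repo's ID is "MLLM-CL/MLLM-CL-experts"
# (placeholder) and that each expert set lives on its own git branch.
import torch
from huggingface_hub import snapshot_download
from peft import PeftModel
from transformers import LlavaForConditionalGeneration

# Each experiment's experts live on a dedicated branch; select it via `revision`.
expert_dir = snapshot_download(
    repo_id="MLLM-CL/MLLM-CL-experts",  # placeholder repo ID
    revision="DCL_LLaVA",               # one of the four branches listed above
)

# Assumption: the experts are PEFT (LoRA) adapters on top of the LLaVA base.
base = LlavaForConditionalGeneration.from_pretrained(
    "llava-hf/llava-1.5-7b-hf", torch_dtype=torch.float16
)
model = PeftModel.from_pretrained(base, expert_dir)
```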

## Citation
```
@article{zhao2025mllm,
  title={MLLM-CL: Continual Learning for Multimodal Large Language Models},
  author={Zhao, Hongbo and Zhu, Fei and Guo, Haiyang and Wang, Meng and Wang, Rundong and Meng, Gaofeng and Zhang, Zhaoxiang},
  journal={arXiv preprint arXiv:2506.05453},
  year={2025}
}
```
## Contact
Please open an issue on our GitHub repository.

## About us: MLLM-CL Community

We are members of MLLM-CL, an open-source community focused on continual learning for Multimodal Large Language Models.
If you are interested in our community, feel free to contact us on GitHub or by email.