Update README.md

README.md
tags:
- router
- MLLM-CL
- llava
- internvl
- MR-LoRA
pipeline_tag: visual-question-answering
library_name: transformers
---

## MLLM-CL Benchmark Description

MLLM-CL is a novel benchmark encompassing domain and ability continual learning: the former focuses on independently and identically distributed (IID) evaluation across evolving mainstream domains, whereas the latter evaluates non-IID scenarios with emerging model abilities.
For more details, please refer to:

**MLLM-CL: Continual Learning for Multimodal Large Language Models** [[paper](https://arxiv.org/abs/2506.05453)], [[code](https://github.com/bjzhb666/MLLM-CL/)].

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65886b8f61e3bd8a2f79d5ae/_sKequgjfnIR0EypS2s89.png)

[Hongbo Zhao](https://scholar.google.com/citations?user=Gs22F0UAAAAJ&hl=zh-CN), [Fei Zhu](https://impression2805.github.io/), [Haiyang Guo](https://ghy0501.github.io/guohaiyang0501.github.io/), [Meng Wang](https://moenupa.github.io/), Rundong Wang, [Gaofeng Meng](https://scholar.google.com/citations?hl=zh-CN&user=5hti_r0AAAAJ), [Zhaoxiang Zhang](https://scholar.google.com/citations?hl=zh-CN&user=qxWfV6cAAAAJ)
+
## Usage
|
| 30 |
+
This repo is used to open-source MR-LoRA's router LoRA, including 2 branches.
|
| 31 |
+
|
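A router LoRA of this kind selects a domain- or ability-specific expert module for each input before the chosen expert handles the query. As a minimal conceptual sketch only (the expert names and the cosine-similarity rule below are illustrative assumptions; MR-LoRA's actual router is a learned LoRA module, not this fixed rule):

```python
import numpy as np

def route_to_expert(query_emb: np.ndarray, expert_prototypes: dict) -> str:
    """Pick the expert whose prototype embedding is most similar to the query.

    Illustrative stand-in for a learned router: real expert selection in
    MR-LoRA is performed by a trained routing module, not cosine similarity.
    """
    def cos(a: np.ndarray, b: np.ndarray) -> float:
        # Cosine similarity with a small epsilon to avoid division by zero.
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    return max(expert_prototypes, key=lambda name: cos(query_emb, expert_prototypes[name]))

# Toy prototypes for two hypothetical continual-learning experts.
prototypes = {
    "domain_lora_a": np.array([1.0, 0.0]),
    "domain_lora_b": np.array([0.0, 1.0]),
}
print(route_to_expert(np.array([0.9, 0.1]), prototypes))  # -> domain_lora_a
```

Once an expert is selected, its LoRA weights would be applied to the base multimodal model for that query; only the routing step is sketched here.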

## Citation

```
@article{zhao2025mllm,
  title={MLLM-CL: Continual Learning for Multimodal Large Language Models},
  author={Zhao, Hongbo and Zhu, Fei and Guo, Haiyang and Wang, Meng and Wang, Rundong and Meng, Gaofeng and Zhang, Zhaoxiang},
  journal={arXiv preprint arXiv:2506.05453},
  year={2025}
}
```

## Contact

Please open an issue on our GitHub repository.

## About us: MLLM-CL Community

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65886b8f61e3bd8a2f79d5ae/1GGBGadyDF6mhY0KNhrYX.png)

We are members of the MLLM-CL community, an open-source community focused on continual learning of multimodal large language models.
If you are interested in our community, feel free to contact us on GitHub or via email.