Populate dataset card with description, links, installation, and sample usage

This PR significantly enhances the dataset card by populating it with comprehensive information drawn from the paper abstract and the GitHub README.

Key updates include:
- An overview of the `MLLM-CL` benchmark and the `MR-LoRA` method.
- A link to the paper: [MLLM-CL: Continual Learning for Multimodal Large Language Models](https://huggingface.co/papers/2506.05453)
- A link to the GitHub repository: [https://github.com/bjzhb666/MLLM-CL](https://github.com/bjzhb666/MLLM-CL)
- Detailed installation instructions.
- A description of the dataset organization.
- A "Sample Usage" section featuring `bash` commands for training and evaluation, taken directly from the GitHub README.
- Citation information, acknowledgments, license, contact, and community details.

The `task_categories` metadata has been updated from `visual-question-answering` to `image-text-to-text` to better reflect the dataset's multimodal large language model focus and the diverse range of continual learning tasks it supports.

---
language:
- en
license: apache-2.0
size_categories:
- 100K<n<1M
task_categories:
- image-text-to-text
tags:
- MLLM-CL
- MR-LoRA
- Continual-learning
- MLLM
- internvl
---

# MLLM-CL: Continual Learning for Multimodal Large Language Models

This is the official dataset repository for **MLLM-CL: Continual Learning for Multimodal Large Language Models**.

**Paper:** [MLLM-CL: Continual Learning for Multimodal Large Language Models](https://huggingface.co/papers/2506.05453)
**Code:** [https://github.com/bjzhb666/MLLM-CL](https://github.com/bjzhb666/MLLM-CL)

Recent Multimodal Large Language Models (MLLMs) excel in vision-language understanding but struggle to adapt to dynamic real-world scenarios that require the continuous integration of new knowledge and skills. This work introduces MLLM-CL, a novel benchmark encompassing domain and ability continual learning: the former focuses on independently and identically distributed (IID) evaluation across evolving mainstream domains, whereas the latter evaluates non-IID scenarios with new model abilities. Methodologically, we propose MR-LoRA, which prevents catastrophic interference through parameter isolation and an MLLM-based routing mechanism.

## MLLM-CL Benchmark

MLLM-CL is a benchmark for continual learning in multimodal large language models (MLLMs). It consists of two main components: domain continual learning and ability continual learning. The benchmark includes a variety of datasets and tasks to evaluate the performance of MLLMs in evolving scenarios.

### Domain Continual Learning

Continually adding domain knowledge is crucial for building a powerful MLLM. To this end, we propose domain continual learning and choose five mainstream, common domains: remote sensing, medical, science, autonomous driving, and finance. In domain continual learning, the training set and test set are IID.

### Ability Continual Learning

Domain continual learning assumes that training and test data are IID, but achieving this is often unrealistic in real-world scenarios. In ability continual learning, we therefore assume the training and test data are non-IID. We select four fundamental abilities for the MLLM to learn sequentially: OCR, math & logic, visual perception, and GUI agent.

## MR-LoRA

MR-LoRA performs two-stage inference for a given multimodal input: a routing phase followed by a prediction phase. In the first stage, the expert selection router is run to select a domain- or ability-specific expert. The selected expert is then combined with the pre-trained backbone to produce the final response.

## Installation

1. Clone this repository and navigate to the MLLM-CL folder:

```bash
git clone https://github.com/bjzhb666/MLLM-CL.git
cd MLLM-CL
```

2. Install the package:

```bash
pip install -e .
```

3. Install additional packages for training and download the base checkpoints:

```bash
pip install -e ".[train]" -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple
conda install git
pip install flash-attn==2.7.0.post2 --no-build-isolation -i https://mirrors.tuna.tsinghua.edu.cn/pypi/web/simple

huggingface-cli download liuhaotian/llava-v1.5-7b --local-dir checkpoints/LLaVA/Vicuna/llava-7b-v1.5
huggingface-cli download openai/clip-vit-large-patch14-336 --local-dir checkpoints/LLaVA/clip-vit-large-patch14-336
```

4. Prepare the API key.

Evaluating the math & logic tasks requires an OpenAI API key. Create a `.env` file in the root directory of the project and fill in your keys:

```
# .env file
# QwenVL APIs
DASHSCOPE_API_KEY=
# Gemini w. Google Cloud Backends
GOOGLE_API_KEY=
# OpenAI API
OPENAI_API_KEY=YOUR_OPENAI_API_KEY
OPENAI_API_BASE=
LMUData=/data/hongbo_zhao/code/VLMEvalKit/LMUData
```
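
As a quick sanity check that the `.env` file is picked up, you can load it with `python-dotenv` (an illustrative snippet, not part of the repository; requires `pip install python-dotenv`):

```python
# Verify that the keys in .env are visible to Python.
import os
from dotenv import load_dotenv

load_dotenv()  # reads .env from the current working directory
assert os.getenv("OPENAI_API_KEY"), "OPENAI_API_KEY is missing from .env"
print("LMUData directory:", os.getenv("LMUData"))
```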

## Dataset

Please download the images of MLLM-CL from [Hugging Face](https://huggingface.co/datasets/MLLM-CL/MLLM-CL) or [ModelScope](https://www.modelscope.cn/datasets/MLLM-CL/MLLM-CL).
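
For a scripted download, `huggingface_hub` can fetch the whole dataset repository; a minimal sketch (adjust `local_dir` to your own data root):

```python
# Download the MLLM-CL dataset repository from the Hugging Face Hub.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="MLLM-CL/MLLM-CL",
    repo_type="dataset",
    local_dir="data/MLLM-CL",  # change to your own path
)
```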

After downloading everything, organize the data as follows.

Domain continual learning data:

```
├── RS
│   ├── images
│   ├── train.json
│   └── test.json
├── Med
│   ├── images
│   ├── train.json
│   └── test.json
├── AD
│   ├── images
│   ├── train.json
│   └── test.json
├── Fin
│   ├── images
│   ├── train.json
│   └── test.json
├── Sci
│   ├── images
│   ├── train.json
│   └── test.json
```

Ability continual learning data:

```
├── OCR
│   ├── images
│   └── train.json
├── OCR_test
│   ├── images
│   └── test.json
├── Math
│   ├── images
│   └── train.json
├── Math_test
│   ├── images
│   └── test.json
├── APP
│   ├── images
│   └── train.json
├── APP_test
│   ├── images
│   └── test.json
├── VP
│   ├── images
│   └── train.json
├── VP_test
│   ├── images
│   └── test.json
```

Note: you need to change the data paths in all the scripts to your own paths.
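
As a quick check that the data is in place, you can peek at one annotation file. This sketch assumes the common LLaVA-style layout of a JSON list of records; adjust the path to your own data root:

```python
# Print the number of samples and the first record of a training file.
import json

with open("RS/train.json", encoding="utf-8") as f:
    records = json.load(f)

print(f"{len(records)} training samples")
print(json.dumps(records[0], indent=2, ensure_ascii=False))
```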

## Sample Usage (MR-LoRA training and evaluation)

All the configs are in the `configs` folder. We provide the scripts for our training order in `scripts/Train`.

1. Modify the configs in the `configs` folder. You should modify `data_configs` and `model_configs`.
2. Train the expert LoRAs independently using the scripts in the `Train_dom_single` or `Train_ability_single` folder, then use the LoRA checkpoints to obtain cross-task evaluation results. For example, in domain continual learning you need to run 25 evaluations (5 experts × 5 test sets). The following command trains the experts and collects the cross-task evaluation results:

```bash
bash scripts/Train/train_DCL.sh
```

3. Train the router LoRA. Before doing so, modify the router configs (`data_configs_router`, `model_configs_router`). The router training data and replay data are available on [Hugging Face](https://huggingface.co/datasets/MLLM-CL/MLLM-CL-ReplayData) or [ModelScope](https://www.modelscope.cn/datasets/MLLM-CL/mllmcl-replaydata). Then train the router LoRA with:

```bash
bash scripts/Train/train_DCL_router.sh
```

4. Convert the cross-task results to the expected `M_N` format, where `M` is the model name and `N` is the dataset name; see `mrlora_result_link.py` for detailed usage:

```bash
python scripts/mrlora_result_link.py [your_cross_result_path]
```
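
Conceptually, the conversion lays out one result per (expert, test set) pair under an `M_N` name. The sketch below is a rough illustration with a hypothetical directory layout; the actual logic lives in `scripts/mrlora_result_link.py`:

```python
# Illustrative only: link 5 x 5 cross-task results to M_N-style names.
import os

experts = ["RS", "Med", "AD", "Fin", "Sci"]    # M: model (expert) names
test_sets = ["RS", "Med", "AD", "Fin", "Sci"]  # N: dataset names

cross_result_path = "results/cross_task"       # hypothetical layout
for m in experts:
    for n in test_sets:
        src = os.path.join(cross_result_path, m, n)
        dst = os.path.join(cross_result_path, f"{m}_{n}")
        if os.path.isdir(src) and not os.path.exists(dst):
            os.symlink(os.path.abspath(src), dst)
```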

5. Use the router LoRA to select the final results. First modify the paths in `Eval_MR_LoRA/eval_use_router_DCL`, then run:

```bash
bash scripts/Eval_MR_LoRA/eval_use_router_DCL.sh Med
bash scripts/Eval_MR_LoRA/eval_use_router_DCL.sh AD
bash scripts/Eval_MR_LoRA/eval_use_router_DCL.sh Sci
bash scripts/Eval_MR_LoRA/eval_use_router_DCL.sh Fin
bash scripts/Eval_MR_LoRA/eval_use_router_DCL.sh RS
```

Note: for the GUI agent task in ability continual learning, the final results are written to a TSV file that you must submit to the [evaluation server](https://eval.ai/web/challenges/challenge-page/2328/overview), which returns the final results.

## Citation

If you find our dataset or model useful for your research and applications, please cite using this BibTeX:

```bibtex
@article{zhao2025mllm,
  title={MLLM-CL: Continual Learning for Multimodal Large Language Models},
  author={Zhao, Hongbo and Zhu, Fei and Guo, Haiyang and Wang, Meng and Wang, Rundong and Meng, Gaofeng and Zhang, Zhaoxiang},
  journal={arXiv preprint arXiv:2506.05453},
  year={2025}
}
```

## Acknowledgement

* [LLaVA](https://github.com/haotian-liu/LLaVA): the codebase we built upon; LLaVA-1.5-7B, with its strong vision-language capabilities, is also our base model.
* [MCITlib](https://github.com/Ghy0501/MCITlib): the codebase we train all our baselines on; MR-LoRA will be added to it in a future version.
* [CoIN](https://github.com/zackschen/CoIN) and [VLMEvalKit](https://github.com/open-compass/VLMEvalKit): codebases we built upon.

## License

This project is licensed under the terms of the Apache-2.0 license.

## Contact

Please contact us or post an issue if you have any questions.

## About us: MLLM-CL Community

We are members of [MLLM-CL (Hugging Face)](https://huggingface.co/MLLM-CL) and [MLLM-CL (ModelScope)](https://www.modelscope.cn/organization/MLLM-CL), an open-source community focused on continual learning of multimodal large language models. We aim to build a continuously evolving multimodal large language model (MLLM) system. If you are interested in our community and want to join us, feel free to contact us on GitHub or by email.

* We are looking for contributors, collaborators, and partners to build a better MLLM-CL community.
* We are also looking for sponsors to support our community and projects. If you are interested in sponsoring us, please contact us.