AdaptLLM committed
Commit ad0f83c · verified · 1 Parent(s): 6b4ded8

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -15,7 +15,7 @@ datasets:
 - AdaptLLM/food-visual-instructions
 ---
 
-# Adapting Multimodal Large Language Models to Domains via Post-Training
+# Adapting Multimodal Large Language Models to Domains via Post-Training (EMNLP 2025)
 
 This repos contains the **food MLLM developed from Qwen2.5-VL-3B-Instruct** in our paper: [On Domain-Adaptive Post-Training for Multimodal Large Language Models](https://huggingface.co/papers/2411.19930). The correspoding training dataset is in [food-visual-instructions](https://huggingface.co/datasets/AdaptLLM/food-visual-instructions).
 
@@ -131,10 +131,10 @@ For reference, we train from Qwen2.5-VL-3B-Instruct for 1 epoch with a learning
 ## Citation
 If you find our work helpful, please cite us.
 
-[AdaMLLM](https://huggingface.co/papers/2411.19930)
+[Adapt MLLM to Domains](https://huggingface.co/papers/2411.19930) (EMNLP 2025 Findings)
 ```bibtex
 @article{adamllm,
-  title={On Domain-Specific Post-Training for Multimodal Large Language Models},
+  title={On Domain-Adaptive Post-Training for Multimodal Large Language Models},
   author={Cheng, Daixuan and Huang, Shaohan and Zhu, Ziyu and Zhang, Xintong and Zhao, Wayne Xin and Luan, Zhongzhi and Dai, Bo and Zhang, Zhenliang},
   journal={arXiv preprint arXiv:2411.19930},
   year={2024}