
the evaluation of conditional image generation models. In International Conference on Learning Representations, 2024. 5
[26] Nupur Kumari, Bingliang Zhang, Richard Zhang, Eli Shechtman, and Jun-Yan Zhu. Multi-concept customization of text-to-image diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 2, 6
[27] Youngwan Lee, Kwanyong Park, Yoorhim Cho, Yong-Ju Lee, and Sung Ju Hwang. Koala: Empirical lessons toward memory-efficient and fast diffusion models for text-to-image synthesis. In Advances in Neural Information Processing Systems, 2024. 8, 15
[28] Dongxu Li, Junnan Li, and Steven CH Hoi. Blip-diffusion: Pre-trained subject representation for controllable text-to-image generation and editing. In Advances in Neural Information Processing Systems, 2024. 2
[29] Jian Ma, Junhao Liang, Chen Chen, and Haonan Lu. Subject-diffusion: Open domain personalized text-to-image generation without test-time fine-tuning. In ACM SIGGRAPH 2024 Conference Papers, 2024.
[30] Xichen Pan, Li Dong, Shaohan Huang, Zhiliang Peng, Wenhu Chen, and Furu Wei. Kosmos-G: Generating images in context with multimodal large language models. In International Conference on Learning Representations, 2024.
[31] Maitreya Patel, Sangmin Jung, Chitta Baral, and Yezhou Yang. λ-ECLIPSE: Multi-concept personalized text-to-image diffusion models by leveraging CLIP latent space. Transactions on Machine Learning Research, 2024. 2
[32] Yuang Peng, Yuxin Cui, Haomiao Tang, Zekun Qi, Runpei Dong, Jing Bai, Chunrui Han, Zheng Ge, Xiangyu Zhang, and Shu-Tao Xia. Dreambench++: A human-aligned benchmark for personalized image generation. In International Conference on Learning Representations, 2025. 5
[33] Dustin Podell, Zion English, Kyle Lacey, Andreas Blattmann, Tim Dockhorn, Jonas Müller, Joe Penna, and Robin Rombach. SDXL: Improving latent diffusion models for high-resolution image synthesis. In International Conference on Learning Representations, 2024. 6
[34] Senthil Purushwalkam, Akash Gokul, Shafiq Joty, and Nikhil Naik. Bootpig: Bootstrapping zero-shot personalized image generation capabilities in pretrained diffusion models. In European Conference on Computer Vision. Springer, 2024. 2
[35] L Rout, Y Chen, N Ruiz, A Kumar, C Caramanis, S Shakkottai, and W Chu. Rb-modulation: Training-free stylization using reference-based modulation. In International Conference on Learning Representations, 2025. 2
[36] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Yael Pritch, Michael Rubinstein, and Kfir Aberman. Dreambooth: Fine tuning text-to-image diffusion models for subject-driven generation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2023. 1, 2, 4, 5, 6, 13, 14, 15
[37] Nataniel Ruiz, Yuanzhen Li, Varun Jampani, Wei Wei, Tingbo Hou, Yael Pritch, Neal Wadhwa, Michael Rubinstein, and Kfir Aberman. HyperDreamBooth: Hypernetworks for fast personalization of text-to-image models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2024. 2, 3
[38] Viraj Shah, Nataniel Ruiz, Forrester Cole, Erika Lu, Svetlana Lazebnik, Yuanzhen Li, and Varun Jampani. Ziplora: Any subject in any style by effectively merging loras. In European Conference on Computer Vision, 2024. 1, 2, 4, 5, 6, 7, 13, 14, 15
[39] Kihyuk Sohn, Lu Jiang, Jarred Barber, Kimin Lee, Nataniel Ruiz, Dilip Krishnan, Huiwen Chang, Yuanzhen Li, Irfan Essa, Michael Rubinstein, Yuan Hao, Glenn Entis, Irina Blok, and Daniel Castro Chin. Styledrop: Text-to-image synthesis of any style. In Advances in Neural Information Processing Systems, 2023. 2, 4, 5, 6, 14
[40] Yoad Tewel, Rinon Gal, Gal Chechik, and Yuval Atzmon. Key-locked rank one editing for text-to-image personalization. In ACM SIGGRAPH 2023 Conference Proceedings, 2023. 2
[41] Andrey Voynov, Qinghao Chu, Daniel Cohen-Or, and Kfir Aberman. P+: Extended textual conditioning in text-to-image generation. arXiv preprint arXiv:2303.09522, 2023. 2
[42] Yuxiang Wei, Yabo Zhang, Zhilong Ji, Jinfeng Bai, Lei Zhang, and Wangmeng Zuo. Elite: Encoding visual concepts into textual embeddings for customized text-to-image generation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2023. 2
[43] Mitchell Wortsman, Gabriel Ilharco, Samir Ya Gadre, Rebecca Roelofs, Raphael Gontijo-Lopes, Ari S Morcos, Hongseok Namkoong, Ali Farhadi, Yair Carmon, Simon Kornblith, et al. Model soups: Averaging weights of multiple fine-tuned models improves accuracy without increasing inference time. In International Conference on Machine Learning, 2022. 2, 6, 15
[44] Yujia Wu, Yiming Shi, Jiwei Wei, Chengwei Sun, Yuyang Zhou, Yang Yang, and Heng Tao Shen. Difflora: Generating personalized low-rank adaptation weights with diffusion. arXiv preprint arXiv:2408.06740, 2024. 2
[45] Tianyi Xiong, Xiyao Wang, Dong Guo, Qinghao Ye, Haoqi Fan, Quanquan Gu, Heng Huang, and Chunyuan Li. Llava-critic: Learning to evaluate multimodal models. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2025. 5, 6
[46] Yu Xu, Fan Tang, Juan Cao, Yuxin Zhang, Oliver Deussen, Weiming Dong, Jintao Li, and Tong-Yee Lee. Break-for-make: Modular low-rank adaptations for composable content-style customization. arXiv preprint arXiv:2403.19456, 2024. 2
[47] Youcan Xu, Zhen Wang, Jun Xiao, Wei Liu, and Long Chen. Freetuner: Any subject in any style with training-free diffusion. arXiv preprint arXiv:2405.14201, 2024. 2
[48] Prateek Yadav, Derek Tam, Leshem Choshen, Colin A Raffel, and Mohit Bansal. Ties-merging: Resolving interference when merging models. In Advances in Neural Information Processing Systems, 2024. 2, 6, 15
[49] Enneng Yang, Li Shen, Guibing Guo, Xingwei Wang, Xiaochun Cao, Jie Zhang, and Dacheng Tao. Model merging in LLMs, MLLMs, and beyond: Methods, theories, applications and opportunities. arXiv preprint arXiv:2408.07666, 2024. 2
[50] Jianan Yang, Haobo Wang, Yanming Zhang, Ruixuan Xiao, Sai Wu, Gang Chen, and Junbo Zhao. Controllable textual