arxiv:2603.21884

Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation

Published on Mar 23 · Submitted by Donald Shenaj on Mar 24

Abstract

LoRA² adapts layer-specific ranks during fine-tuning for personalized image generation, achieving better performance-memory trade-offs than fixed-rank approaches.

AI-generated summary

Low Rank Adaptation (LoRA) is the de facto fine-tuning strategy for generating personalized images from pre-trained diffusion models. Choosing a good rank is critical, since it trades off performance against memory consumption, yet today the decision is typically left to community consensus, regardless of the personalized subject's complexity. The reason is evident: the cost of selecting a good rank for each LoRA component is combinatorial, so practitioners settle for shortcuts such as fixing the same rank for all components. In this paper, we take a first step toward overcoming this challenge. Inspired by variational methods that learn an adaptive width for neural networks, we let the rank of each layer freely adapt during fine-tuning on a subject. We achieve this by imposing an importance ordering on the rank positions, effectively encouraging higher ranks only when strictly needed. Qualitatively and quantitatively, our approach, LoRA², achieves a competitive trade-off between DINO, CLIP-I, and CLIP-T scores across 29 subjects while requiring much less memory and a lower rank than high-rank LoRA variants. Code: https://github.com/donaldssh/NotAllLayersAreCreatedEqual.

Community

Paper author · Paper submitter

Excited to share our latest work on personalized image generation: Not All Layers Are Created Equal: Adaptive LoRA Ranks for Personalized Image Generation.

Personalized image generation with diffusion models typically relies on LoRA fine-tuning, but a crucial question is often overlooked: which rank should be used?

Too low, and the model lacks the capacity to capture the subject's unique features. Too high, and you waste memory and degrade prompt alignment. Yet today, this choice is largely driven by community consensus, with little regard for the actual complexity of the subject being personalized.

In this work, we:

🎯 Propose LoRA², an easy-to-implement, fully differentiable, and model-agnostic modification of LoRA that learns an appropriate rank for each component automatically, with no manual selection and no single fixed rank shared across all layers.

🧠 Show that not all layers contribute equally to subject personalization. LoRA² dynamically increases or reduces each component's rank depending on the specific subject, encouraging higher ranks only when strictly needed.

📊 Achieve a competitive trade-off between DINO, CLIP-I, and CLIP-T across 29 subjects, while requiring significantly less memory than high-rank LoRA versions.

💡 Deliver better subject-prompt alignment and lower memory consumption, without the combinatorial cost of manual rank tuning.
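To make the idea concrete, here is a minimal numpy sketch of an adaptive-rank low-rank update. It assumes a cumulative-sigmoid gate to impose the importance ordering on rank positions; the names, the gate parameterization, and the pruning threshold are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

# Sketch: the low-rank weight update is delta_W = B @ diag(g) @ A, where the
# gate vector g is learned jointly with A and B. Ordering the gates by
# importance (g[0] >= g[1] >= ...) lets each layer "use" only as many rank
# positions as it needs.

rng = np.random.default_rng(0)
d_out, d_in, r_max = 8, 8, 4  # illustrative sizes; r_max is the rank budget

A = rng.normal(scale=0.01, size=(r_max, d_in))   # down-projection
B = rng.normal(scale=0.01, size=(d_out, r_max))  # up-projection

# Monotone gates: a cumulative product of sigmoids guarantees
# g[0] >= g[1] >= ..., so a higher rank position only switches on
# after all earlier positions are already active.
logits = np.array([4.0, 2.0, -1.0, -3.0])        # learnable parameters
g = np.cumprod(1.0 / (1.0 + np.exp(-logits)))

delta_W = B @ np.diag(g) @ A                     # low-rank weight update

# "Effective rank" = gates above a small threshold; positions with g ~ 0
# contribute nothing and can be pruned after fine-tuning.
effective_rank = int((g > 0.05).sum())
print(effective_rank)  # prints 3: the last gate has decayed to ~0.01
```

Because the gates are differentiable, each layer's effective rank can shrink or grow during fine-tuning depending on the subject, which is what replaces the combinatorial per-layer rank search.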

arxiv: https://arxiv.org/pdf/2412.05148
project page: https://donaldssh.github.io/NotAllLayersAreCreatedEqual/
code: https://github.com/donaldssh/NotAllLayersAreCreatedEqual

