How should one understand "Lower alpha increases fine-tuning effect; higher alpha preserves more of the base model"? Since a larger alpha amplifies the output of the LoRA adapter, wouldn't a larger alpha actually boost the fine-tuning effect instead?
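For reference, here is a minimal sketch of where alpha enters the standard LoRA forward pass, following the usual alpha/r scaling convention from the original LoRA paper (the class name `LoRALinear` and the initialization constants are illustrative, not from the article):

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Illustrative LoRA layer showing how alpha scales the low-rank update."""
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight W
        self.base = nn.Linear(in_features, out_features, bias=False)
        self.base.weight.requires_grad = False
        # Trainable low-rank factors: A is (r, in), B is (out, r); B starts at zero
        self.A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, r))
        # The adapter's contribution is multiplied by alpha / r
        self.scaling = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scaling * (x @ self.A.T @ self.B.T)
```

Under this convention, alpha scales the adapter's contribution relative to the rank r, which is the amplification the question refers to.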