| Findings | Empirical Observation | Theoretical Insights |
|---|---|---|
| For a fixed downstream task, larger models require a lower LoRA rank to achieve the desired performance. | Sec. G.9 | |
| When the frozen model is closer to the target model, a lower LoRA rank is sufficient to attain the desired performance. | | |
| LoRA outperforms final-layer tuning when the quality of the shared representation is poor. | Sec. G.4 and observations by Kaplun et al. (2023) and Ding et al. (2023) | Lemma 4 |
| In addition to applying low-rank updates to the weight matrices, it is crucial to also update the bias. | | |
| Tuning the attention weights alone is sufficient for achieving good performance on TFNs. | Sec. 4.2 in Hu et al. (2022a) | Theorem 7 |
| Current optimization algorithms for LoRA training might be suboptimal. | | — |
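Several of these findings concern the LoRA parameterization itself: the frozen weight $W_0$ receives a rank-$r$ update $BA$, and, per the finding on biases, the bias term is trained directly alongside it. Below is a minimal PyTorch sketch of such a layer; the class name, initialization scale, and `alpha` scaling convention are illustrative assumptions, not the implementation behind the results here.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer with a trainable rank-r update and bias.

    Computes y = x (W0 + (alpha / r) * B A)^T + b, where W0 is frozen
    and only A, B, and the bias b receive gradients.
    """

    def __init__(self, base: nn.Linear, r: int = 2, alpha: float = 1.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)   # freeze W0
        if self.base.bias is not None:
            self.base.bias.requires_grad_(True)  # bias stays trainable
        self.scale = alpha / r
        # B starts at zero so the adapted model initially equals the frozen one.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Low-rank path: x -> x A^T -> (x A^T) B^T has rank at most r.
        return self.base(x) + self.scale * (x @ self.A.t()) @ self.B.t()
```

Following the finding on attention weights, one would wrap only the attention projections (e.g., the query and value matrices) in such a layer and leave the remaining modules frozen.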
| Model | Rank | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B |
|---|---|---|---|---|---|---|---|---|---|
| | 0 | .330 | .491 | .316 | 0 | .495 | .682 | .527 | .024 |
| | 0 | .318 | .505 | .684 | 0 | .505 | .369 | .473 | .032 |
| | 2 | .861 | .950 | .892 | .632 | .928 | .891 | .780 | .907 |
| | 6 | .870 | .948 | .892 | .629 | .931 | .900 | .773 | .909 |
| | 2 | .904 | .956 | .917 | .631 | .946 | .887 | .884 | .916 |
| Model | Rank | MNLI | SST-2 | MRPC | CoLA | QNLI | QQP | RTE | STS-B |
|---|---|---|---|---|---|---|---|---|---|
| Random | 2 | .523 | .775 | .691 | .154 | .627 | .761 | .542 | .213 |
| Pretrained | 2 | .861 | .950 | .892 | .632 | .928 | .891 | .780 | .907 |
| Random | 4 | .535 | .788 | .696 | .145 | .625 | .768 | .542 | .224 |
| Pretrained | 4 | .868 | .950 | .890 | .634 | .929 | .898 | .805 | .910 |
| Random | 6 | .544 | .799 | .696 | .154 | .632 | .768 | .542 | .210 |
| Pretrained | 6 | .868 | .948 | .892 | .629 | .931 | .900 | .773 | .909 |
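The table above compares a randomly initialized model against its pretrained counterpart at LoRA ranks 2, 4, and 6 on the GLUE tasks. A hedged sketch of how such a pair could be constructed with the HuggingFace `peft` library follows; `roberta-base`, the target modules, and all hyperparameters are assumptions for illustration and may differ from the setup that produced these numbers.

```python
from transformers import AutoConfig, AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model

def build_lora_model(pretrained: bool, r: int):
    """Same architecture, pretrained vs. randomly initialized, with LoRA attached."""
    if pretrained:
        model = AutoModelForSequenceClassification.from_pretrained(
            "roberta-base", num_labels=2)
    else:
        # Identical architecture, but with freshly (randomly) initialized weights.
        config = AutoConfig.from_pretrained("roberta-base", num_labels=2)
        model = AutoModelForSequenceClassification.from_config(config)
    lora_config = LoraConfig(
        r=r,                                # the rank varied across table rows
        lora_alpha=16,
        target_modules=["query", "value"],  # attention projections only
        task_type="SEQ_CLS",
    )
    return get_peft_model(model, lora_config)
```

Both variants then share the same trainable-parameter budget at a given rank, isolating the effect of pretraining on how well a low-rank update can adapt the model.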