<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
# LoRA

Low-Rank Adaptation ([LoRA](https://huggingface.co/papers/2309.15223)) is a PEFT method that decomposes the update to a large weight matrix into two smaller low-rank matrices in the attention layers. This drastically reduces the number of parameters that need to be fine-tuned.
The abstract from the paper is:

*We propose a neural language modeling system based on low-rank adaptation (LoRA) for speech recognition output rescoring. Although pretrained language models (LMs) like BERT have shown superior performance in second-pass rescoring, the high computational cost of scaling up the pretraining stage and adapting the pretrained models to specific domains limit their practical use in rescoring. Here we present a method based on low-rank decomposition to train a rescoring BERT model and adapt it to new domains using only a fraction (0.08%) of the pretrained parameters. These inserted matrices are optimized through a discriminative training objective along with a correlation-based regularization loss. The proposed low-rank adaptation Rescore-BERT (LoRB) architecture is evaluated on LibriSpeech and internal datasets with decreased training times by factors between 5.4 and 3.6.*
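The low-rank factorization described above is where the parameter savings come from: instead of training an update to a full `d × d` weight, LoRA trains a `d × r` matrix `B` and an `r × d` matrix `A` with `r ≪ d`. A minimal sketch of the arithmetic, using illustrative dimensions that are assumptions rather than values from the paper:

```python
# Illustrative LoRA parameter-count arithmetic (d and r are assumed values,
# not taken from the paper). A frozen d x d weight W receives a trainable
# low-rank update B @ A, where B has shape (d, r) and A has shape (r, d).
d, r = 4096, 8

full_params = d * d            # trainable parameters for a full fine-tune of W
lora_params = d * r + r * d    # trainable parameters in the two LoRA factors

print(full_params)                # 16777216
print(lora_params)                # 65536
print(lora_params / full_params)  # 0.00390625, i.e. ~0.4% of the full matrix
```

Because `B @ A` has the same shape as the frozen weight, the learned update can be merged back into the base matrix after training, adding no inference latency.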
## LoraConfig

[[autodoc]] tuners.lora.config.LoraConfig
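As a hedged sketch of typical usage (the hyperparameter values and `target_modules` names below are illustrative assumptions for a generic attention model, not recommendations), a `LoraConfig` is constructed and then passed to `get_peft_model` together with a base model:

```python
from peft import LoraConfig

# Illustrative configuration; values and module names are assumptions,
# and target_modules in particular depends on the base model architecture.
config = LoraConfig(
    r=8,                                  # rank of the low-rank decomposition
    lora_alpha=16,                        # scaling factor for the LoRA update
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    lora_dropout=0.05,
)
```

Wrapping a base model with `get_peft_model(base_model, config)` then freezes the original weights and injects the trainable low-rank matrices into the targeted modules.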
## LoraModel

[[autodoc]] tuners.lora.model.LoraModel

## Utility

### LoftQ

[[autodoc]] utils.loftq_utils.replace_lora_weights_loftq

### Eva

#### EvaConfig

[[autodoc]] tuners.lora.config.EvaConfig

#### initialize_lora_eva_weights

[[autodoc]] tuners.lora.eva.initialize_lora_eva_weights

#### get_eva_state_dict

[[autodoc]] tuners.lora.eva.get_eva_state_dict