diff --git "a/20240318/2310.17513v3.json" "b/20240318/2310.17513v3.json" new file mode 100644--- /dev/null +++ "b/20240318/2310.17513v3.json" @@ -0,0 +1,870 @@ +{ + "title": "The Expressive Power of Low-Rank Adaptation", + "abstract": "Low-Rank Adaptation (LoRA), a parameter-efficient fine-tuning method that leverages low-rank adaptation of weight matrices, has emerged as a prevalent technique for fine-tuning pre-trained models such as large language models and diffusion models.\nDespite its huge success in practice, the theoretical underpinnings of LoRA have largely remained unexplored.\nThis paper takes the first step to bridge this gap by theoretically analyzing the expressive power of LoRA.\nWe prove that, for fully connected neural networks, LoRA can adapt any model to accurately represent any smaller target model if LoRA-rank , under a mild assumption.\nWe quantify the approximation error when the LoRA-rank is lower than the threshold.\nFor Transformer networks, we show any model can be adapted to a target model of the same size with rank- LoRA adapters.\nOur study reveals numerous theoretical insights on hyperparameter tuning and algorithm development for LoRA, all of which are empirically validated.", + "sections": [ + { + "section_id": "1", + "parent_section_id": null, + "section_name": "Introduction", + "text": "Recent foundation models, such as large language models (OpenAI, 2023 ###reference_b55###, Liu et al., 2019 ###reference_b49###, Touvron et al., 2023 ###reference_b69###), have achieved remarkable success in a wide range of applications.\nDue to their substantial size, the standard full fine-tuning approach\u2014where all the model\u2019s parameters are updated for specialized tasks\u2014is becoming increasingly difficult and inefficient.\nThis leads to the growing popularity of parameter-efficient fine-tuning approaches (Hu et al., 2022a ###reference_b33###, Liu et al., 2022b ###reference_b48###, Ben Zaken et al., 2022 ###reference_b5###, Hu et al., 2022b ###reference_b34###).\nInstead of updating all parameters, these approaches selectively update smaller subsets of weights or introduce lightweight adapters, thereby greatly decreasing the computational and storage costs.\nThe most dominant approach along this line is Low-Rank Adaptation (LoRA) (Hu et al., 2022a ###reference_b33###), which employs lightweight low-rank adapters to pre-trained weight matrices.\nFar from merely enhancing computational efficiency, empirical evidence has shown that LoRA can match or even exceed the performance of full fine-tuning (Hu et al., 2022a ###reference_b33###).\nTo date, LoRA has been widely used and achieved considerable success in adapting large language models (Hu et al., 2022a ###reference_b33###, Dinh et al., 2022b ###reference_b17###) and image generation models (Ryu, 2023 ###reference_b62###, Fan et al., 2023 ###reference_b25###) for various downstream tasks.\nDespite the empirical success of LoRA, little is known in theory about how it works.\nA notable exception (Malladi et al., 2023 ###reference_b52###) showed that LoRA finetuning is approximately equivalent to full fine-tuning in the lazy regime.\nHowever, many theoretical questions remain open, such as:\nWhat is the minimum rank of the LoRA adapters required to adapt a (pre-trained) model to match the functionality of the target model ?\nHow does the model architecture (i.e., depth, width) affect the minimal rank?\nIf the adapter rank is lower than this threshold, what is the resulting approximation error?\nAnswering such 
questions will provide important theoretical insights into when and why LoRA achieves effective adaptation." + }, + { + "section_id": "1.1", + "parent_section_id": "1", + "section_name": "Related Works", + "text": "Theoretical study of the expressive power of unfrozen neural networks has progressed since the first universal approximation theorem (Hornik et al., 1989 ###reference_b31###), showing that sufficient network width and depth can guarantee function approximation (Bengio & Delalleau, 2011 ###reference_b6###, Eldan & Shamir, 2016 ###reference_b21###, Liang & Srikant, 2017 ###reference_b45###).\nMany recent studies obtained similar results for deep neural networks with modern twists such as ReLU activations and Transformer networks (Yun et al., 2020a ###reference_b78###, Raghu et al., 2017 ###reference_b60###, Telgarsky, 2016 ###reference_b68###; 2015 ###reference_b67###, Bietti & Bach, 2021 ###reference_b7###, Oymak et al., 2023 ###reference_b56###, Lee et al., 2017 ###reference_b41###, Shen & Zhang, 2020 ###reference_b66###, Likhosherstov et al., 2021 ###reference_b46###, Hsu et al., 2021 ###reference_b32###, Park et al., 2021 ###reference_b57###, Yun et al., 2020b ###reference_b79###, Giannou et al., 2023b ###reference_b27###).\nMetrics like Vapnik-Chervonenkis and Rademacher complexities (Vapnik & Chervonenkis, 2015 ###reference_b72###, Bartlett & Mendelson, 2001 ###reference_b4###) assess classification capacity.\nHowever, these theories cannot fully explain the performance of frozen neural networks as they generally cannot factor in pre-trained model parameters and adaptation methods.\nIn stark contrast to the flourishing research on the expressive power of neural networks, there exists a limited number of works investigating the expressive power of adaptation methods.\nA notable exception is Giannou et al. (2023a ###reference_b26###), investigating the expressive power of normalization parameter fine-tuning.\nThey demonstrate that fine-tuning the normalization layers alone can adapt a randomly initialized ReLU network to match any target network that is times smaller.\nWe borrow some proof techniques from this work, including techniques for extending results from linear neural networks to ReLU neural networks.\nIn another recent work (Englert & Lazic, 2022 ###reference_b24###), the authors show that neural reprogramming (Elsayed et al., 2019 ###reference_b22###, Engel et al., 2018 ###reference_b23###, Lee et al., 2020 ###reference_b42###, Dinh et al., 2022a ###reference_b16###, Chen, 2022 ###reference_b11###), a technique that modifies only the inputs while keeping the pretrained network frozen, can adapt any random two-layer ReLU network to achieve arbitrarily high accuracy on a Bernoulli data model over hypercube vertices.\nDespite these early attempts, no existing study has yet explored the expressive power of LoRA, the current leading adaptation method.\nA more detailed discussion of related works is provided in Sec. B ###reference_###." 
+ }, + { + "section_id": "1.2", + "parent_section_id": "1", + "section_name": "Notations", + "text": "Define .\nLet the operators and denote the minimum function and the maximum function, respectively.\nWe use to represent the identity matrix.\nFor a sequence of matrices , we simplify the product of these matrices as , with matrices multiplied in descending order from to .\nWhen , we define and for scalars , and and for square matrices .\nSingular Value Decomposition (SVD) of the matrix can be expressed as , where are orthonormal matrices and is a diagonal matrix.\nThe singular values, sorted in descending order, are represented on the diagonal of , denoted as , where denotes the -th largest singular value for all .\nWhen , is defined as zero.\nThe best rank- approximation (in the Frobenius norm or the -norm) of is , where and are the -th column of and , respectively (Eckart & Young, 1936, Mirsky, 1960).\nWe denote this best rank- approximation by , where is a shorthand for \u201cLow-Rank\u201d.\nWhen , it is clear that .\nOccasionally, the subscript may be omitted to indicate a general low-rank approximation without specifying the rank." + }, + { + "section_id": "2", + "parent_section_id": null, + "section_name": "Warm up: Expressive Power of Linear Models with LoRA", + "text": "Before delving into the expressive power of LoRA for FNNs and TFNs, we begin by investigating the simplest scenario: both the target model and the frozen model are linear, i.e.,\nThis problem serves as a simplified version of approximating a target FNN, where the target model has a single layer, the frozen model has layers, all bias vectors in both models are zero, and the activation functions are linear.\nThroughout this paper, for the sake of simplicity, we will assume that both models have the same number of neurons in each layer, i.e., .\nNevertheless, our results are readily extendable to situations where the frozen model is wider than the target model, which is a more natural setting as frozen models are often overparameterized to ensure high capacity and good performance across diverse tasks in practice.\nSee the discussion in Sec. H for more details.\nThe objective here is to incorporate low-rank adapters into the frozen model so that the adapted model can effectively approximate the target model.\nUnless otherwise specified, we always consider a uniform LoRA-rank for all low-rank adapters throughout this paper.\nFor a given LoRA-rank , we apply LoRA adapters to the frozen model, and the adapted model can be represented as\nwhere for all .\nSince the frozen model and the adapted model are both linear, we can focus on quantifying the discrepancy between the linear coefficients, i.e., .\nIn the subsequent lemma, we establish the minimal achievable norm and identify the smallest LoRA-rank required for the adapted model to exactly represent the target model, i.e., , under a non-singularity assumption.\nWe will demonstrate in Sec. 
3.3 ###reference_### that this non-singularity assumption is mild, as it can be satisfied even by randomly generated weight matrices.\nDefine error matrix , and denote its rank by .\nFor a given LoRA-rank , assume that all the weight matrices of the frozen model , and are non-singular for all .\nThen, we have the following:\nThus, when , the optimal solution satisfies , implying .\nWe start the proof by noting that the distance between the adapted and target models\nThe remaining proof aims to minimize the right-hand side under the constraint for all .\nThe basic idea here is to match with the best rank- approximation of .\nThe key steps to solve this problem are as follows.\nDemonstrate that can be decomposed into terms:\nSince , it follows that\nConsider the rank- approximation of .\nDecompose this low-rank approximation into terms such that , where \u2019s will be determined later.\nTo match with the rank- approximation of , we let by choosing .\nSelect appropriate such that are invertible for .\n\u220e\nThe complete proof and the explicit construction of optimal LoRA adapters, are detailed in Sec. D ###reference_###.\nIn fact, this lemma delivers a crucial insight. When we consider and , the lemma becomes strikingly similar to the Eckart\u2013Young\u2013Mirsky theorem (Eckart & Young, 1936 ###reference_b20###, Mirsky, 1960 ###reference_b54###).\nHowever, there is a significant difference from the classical theorem on the optimal low-rank approximation, which involves a single target matrix and a single matrix as an optimization variable.\nOur lemma demonstrates that a comparable result can be achieved for a \u201cproduct of matrices,\u201d where each matrix is optimized subject to a low-rank constraint.\nThat being said, even though each matrix is constrained by a low rank, the \u201ceffective rank\u201d is the sum of these low ranks, i.e., in this scenario, is . Consequently, once the low-rank adapters are optimally configured, one can make the product equal to the best rank -approximation of the target matrix.\nThis can be viewed as an extension of the matrix approximation theorem to a product of matrices, each subject to low-rank constraints.\nOur main theoretical results on the expressive power of LoRA, which we will present in the subsequent sections, will build upon this core matrix approximation result." + }, + { + "section_id": "3", + "parent_section_id": null, + "section_name": "Expressive Power of FNNs with LoRA", + "text": "" + }, + { + "section_id": "3.1", + "parent_section_id": "3", + "section_name": "Problem Setting", + "text": "We use to denote a -layer width- fully connected ReLU neural network with weight matrices and biases , where .\nThe target FNN and frozen FNN can be represented as follows:\nwhere and represent the weight matrix and bias vector for the -th layer of the target model , respectively.\nLikewise, , are those for , for layer .\nGiven a specified LoRA-rank , we adapt the frozen FNN into a new model via LoRA.\nThe adapted model is defined as\nwhere the weight matrix for the low-rank adapter satisfies specified rank constraints, updated bias vector for 111We consider the case where the bias parameters can also be updated, as suggested by Hu et al. (2022a ###reference_b33###). Experiments investigating the impact of updating bias parameters are presented in Sec. G.5 ###reference_###..\nAs noted in Sec. 
2 ###reference_###, it is common for the pretrained model to be larger than necessary.\nTherefore, we focus on a setting where the frozen model is deeper than the target model, i.e., .\nFurthermore, in this section, we let the input space be bounded." + }, + { + "section_id": "3.2", + "parent_section_id": "3", + "section_name": "One-Layer ReLU FNN Approximation", + "text": "We start with investigating the expressive power of LoRA on one-layer FNN.\nIn this setting, our aim is to identify LoRA adapters and bias vectors such that the adapted model\nclosely approximates the target one-layer ReLU FNN model .\nThis differs from the setting described in Sec. 2 ###reference_###, where a multi-layer FNN with linear activation functions and zero biases was used to approximate a one-layer FNN with the same properties.\nIn the current setting, we introduce non-linearity through the use of ReLU activation functions in the frozen model and also take biases into account.\nConsequently, to generalize the findings to this new setting, addressing the introduced non-linearity due to the ReLU activation functions in the frozen model is the main challenge.\nWe employ the following two steps to extend the results in Sec. 2 ###reference_### to the current setting.\n(Linearization) We eliminate the nonlinearity in the first layers of the adapted model, making it equivalent to a one-layer ReLU FNN.\nThis can be readily achieved by choosing sufficiently large bias vectors for the first layers to ensure that all ReLUs in these layers are activated.\nThis technique of eliminating non-linearity is inspired by Giannou et al. (2023a ###reference_b26###).\n(Weight Matrix Alignment) We update the bias vectors of the last layer to align with that of the target model , and apply the linear model approximation results (i.e., Lemma 1 ###reference_ma1###) to identify the low-rank adapters that match the weight matrix .\nFollowing the steps above, we arrive at the subsequent lemma, which demonstrates that any one-layer FNN can be closely approximated by a multi-layer FNN finetuned via LoRA.\nThe complete proof is provided in Sec. E.1 ###reference_###.\nDefine error matrix , with its rank represented by .\nConsider a LoRA-rank .\nAssume that the weight matrices and for all are non-singular.\nLet be a random input sampled from a distribution with bounded support and\nlet .\nThen, there exists rank- or lower matrices and bias vectors such that the expected squared error can be bounded as\nMoreover, when , we have for all ." + }, + { + "section_id": "3.3", + "parent_section_id": "3", + "section_name": "Multi-Layer ReLU FNN Approximation", + "text": "We now generalize our discussion to the approximation of multi-layer ReLU FNNs.\nThe key strategy for extending the results to approximating multi-layer ReLU FNNs under LoRA is model partition, inspired from Giannou et al. 
(2023a ###reference_b26###).\nTo elucidate this, we start with a specific example.\nConsider the case where and .\nWe view a two-layer target model as a composition of two one-layer ReLU FNNs.\nAccordingly, we partition the four-layer adapted model into two submodels, each consisting of two layers.\nFor each layer in the target model, we utilize two corresponding layers in the frozen/adapted model for approximation.\nThis problem then simplifies into a one-layer FNN approximation problem, which has already been addressed in Lemma 2 ###reference_ma2###.\nBased on this example, we introduce a ordered partition to partition the layers in the adapted model , where .\nEach element consists of consecutive integers.\nGiven a partition , each element specifies that the layers with index in the adapted model will be used to approximate the -th layer in the target model.\nExample 1 ###reference_mple1###, which uses every two layers in the adapted model to approximate each layer in the target model, can be considered as a partition represented as .\nSimilarly, we extend this simple uniform partition into general cases for -layer target FNN and -layer frozen FNN:\nwhere .\nThe uniform partition indicates that every layers in the adapted model are employed to approximate each layer in the target model.\nWe use to denote the product of the weight matrices from the layers , with the later layer positioned to the left and the earlier layer to the right in the matrix product.\nFor example, .\nWe first extend Lemma 2 ###reference_ma2### to multi-layer FNN approximation setting using this uniform partition.\nGiven a specified LoRA-rank , to derive our results, we introduce a mild non-singularity assumption on the weight matrices of the target model and frozen model for the feasibility of our analysis.\nThis assumption is mild, supported by Lemma 3 ###reference_ma3### that even weight matrices initialized at random can meet this requirement.\nFor a fixed LoRA-rank ,\nthe weight matrices of the frozen model and matrices\n are non-singular for all and .\nLet be matrices whose elements are drawn independently from arbitrary continuous distributions.\nThen, with probability 1, Assumption 1 ###reference_umption1### holds .\nGiven this assumption, here we present our first main result, which shows that any frozen FNN can be adapted to exactly approximate the target FNN via LoRA.\nUnder Assumption 1 ###reference_umption1###, if LoRA-rank , then there exists rank- or lower matrices and bias vectors such that the low-rank adapted model can exactly approximate the target model , i.e., , .\nMoreover, combining Lemma 3 ###reference_ma3### and Theorem 3 ###reference_orem3### gives the following corollary.\nAssume that the elements of are independently drawn from arbitrary continuous distributions.\nWhen , with probability 1, there exists rank- or lower matrices and bias vectors such that low-rank adapted model can exactly approximate the target model on , i.e., , .\nTo understand the implications of this corollary, let us consider .\nIn this scenario, the required LoRA-rank is sufficiently small such that the dimension of the rank- matrix is approximately .\nThis corollary suggests that with learnable parameters, even a random FNN can be adapted into the target model .\nIt is noteworthy that the total number of parameters of the target model is .\nThis indicates that even though the learnable parameters under LoRA finetuning appear to be highly constrained (low-rank constrained learnable parameters distributed across many 
layers), the effective expressive power of LoRA is nearly optimal up to a constant factor of .\nOur discovery provides the first theoretical insights into the practical success of LoRA.\nFurthermore, Theorem 3 ###reference_orem3### indicates that if the model is \u2018close\u2019 to such that is small, the number of learnable parameters used by LoRA can be lower than .\nMeanwhile, when the employed LoRA-rank is lower than the critical threshold, the following theorem provides an upper bound for the approximation error.\nDefine the approximation error of -th layer as , and the magnitude of the parameters and the input as .\nUnder Assumption 1 ###reference_umption1###, there exists rank- or lower matrices with and bias vectors with such that for input with ,\nTheorem 5 ###reference_orem5### provides an upper bound on the approximation error for the adapted model.\nThis bound is influenced by several factors:\n(i) magnitude of the target model\u2019s parameters and the input, which is captured by and ,\n(ii) the rank of the adapter and the discrepancy between the frozen model and the target model , both of which contribute to the term ,\n(iii) the depth of the frozen model , reflected in and consequenly .\nAll the proofs of the results derived for uniform partition are provided in Sec. E.2 ###reference_###.\nWe note that employing this uniform partition strategy for approximating the target model may not always yield optimal results.\nTo illustrate this, we revisit the case considered by Example 1 ###reference_mple1###, where and .\nConsider a scenario where the first layer of the frozen model has been pretrained to match the first layer of the target model.\nIn this case, we can use just the first layer in to approximate the first layer in , and a zero LoRA-rank is sufficient for the exact representation of the first layer.\nThe remaining three layers in can then be used to approximate the second layer in .\nCompared to uniform partition, this partition leverages more layers to approximate the second layer in , allowing us to achieve the desired performance with a lower LoRA-rank, as per Lemma 2 ###reference_ma2###.\nThis suggests that our approximation error bounds could be further optimized by considering partitioning schemes tailored to specific scenarios.\nWe now extend our results to a more general setting, where we do not assume a uniform partition.\nConcurrently, recent research by Zhang et al. (2023 ###reference_b80###) has shown that the application of varying LoRA-ranks leads to improved results.\nConsequently, we permit each layer in the frozen model to utilize adapters with different LoRA-ranks.\nThe rank of the LoRA adapter associated with the -th layer in the frozen model is denoted by , where .\nThis result relies on Assumption 2 ###reference_umption2###, an analog of Assumption 1 ###reference_umption1###, but revised to include a general model partition.\nMore details, including the proofs, are provided in Sec. 
E.3 ###reference_###.\nConsider a partition for the frozen model.\nLet Assumption 2 ###reference_umption2### hold.\nIf for all , there exists LoRA adapters with and biases such that the adapted model can exactly approximate the target model.\nMoreover, define the approximation error of the -th layer as , and the magnitude of the parameters and the input as .\nThen, there exists LoRA adapters with and biases such that for any input with , the approximation error can be bounded as\nUpdating the final layers and keeping the initial layers frozen (Chatfield et al., 2014 ###reference_b10###, Donahue et al., 2014 ###reference_b18###, Sharif Razavian et al., 2014 ###reference_b65###, Rahimi & Recht, 2007 ###reference_b61###) is another popular model adaptation method.\nHowever, unlike LoRA, which can adapt even randomly generated networks to match a target model, empirical studies (Kornblith et al., 2019 ###reference_b39###) suggest that the effectiveness of final layers tuning heavily depends on the quality of the initial layers.\nThis indicates that merely tuning the final layers of randomly generated networks may not yield desirable performance.\nThe following lemma rigorously supports this assertion, demonstrating that regardless of how the final layers are tuned, it is impossible to adapt a randomly generated model into even a one-layer FNN, a model of very low complexity.\nLet and be a one-layer target FNN.\nAssume that the elements of weight matrices are independently drawn from arbitrary continuous distributions.\nWith probability 1, for any tuning of the last layers, .\nIn Corollary 4 ###reference_orem4###, we demonstrate that LoRA can adapt any randomly generated models to match the target model, using at most twice the number of learnable parameters as the target model.\nHowever, this lemma reveals that final layers tuning, even with times the learnable parameters of the target model, cannot achieve performance comparable to LoRA.\nIn other words, LoRA requires at most learnable parameters to achieve an exact approximation, while final layers tuning fails to approximate the target model even with learnable parameters.\nTherefore, when , LoRA can deliver strictly superior performance than final layers tuning with the same or fewer parameters.\nThis provides insights into the empirical observation that LoRA outperforms final layers tuning (Kaplun et al., 2023 ###reference_b37###, Ding et al., 2023 ###reference_b15###)." + }, + { + "section_id": "4", + "parent_section_id": null, + "section_name": "Expressive Power of Transformer Networks with LoRA", + "text": "" + }, + { + "section_id": "4.1", + "parent_section_id": "4", + "section_name": "Problem Setting", + "text": "Transformer network, denoted as , is a composition of Transformer blocks and an output layer, parameterized by weight .\nEach transformer block comprises a -head self-attention layer, parameterized by weight , followed by a token-wise feedforward layer, parameterized by weight and bias .\nWe assume that all weight matrices have a dimension of , while the bias vectors are of dimension .\nWe employ the same formulations of transformer blocks as Yun et al. 
(2020a ###reference_b78###), with one exception: we exclude skip connections for analytical feasibility.\nAs before, we use (e.g., ) to represent the corresponding parameters for the target model, and (e.g., ) to represent the corresponding low-rank update.\nFor TFN cases,we consider scenarios where both the frozen model and the target model have Transformer blocks.\nFor an explicit formulation, please refer to Sec. F.2 ###reference_###." + }, + { + "section_id": "4.2", + "parent_section_id": "4", + "section_name": "Main Results on Transformer Networks", + "text": "We now present our main findings on TFNs.\nThe first result relies on a non-singularity assumption (Assumption 4 ###reference_umption4###) tailored for TFN.\nThis assumption is mild, and models with randomly generated weights can satisfy its criteria (Lemma 14 ###reference_ma14###).\nFurther details are deferred to Sec. F.2 ###reference_###.\nThe following theorem shows that adding LoRA adapters primarily to the self-attention layers enables the adapted model to exactly approximate the target model .\nThis finding is consistent with a recent observation made by Hu et al. (2022a ###reference_b33###), which indicates that a good performance can be achieved by adapting only the attention layers when applying LoRA to TFNs.\nConsider a given LoRA-rank .\nLet Assumption 4 ###reference_umption4### hold.\nLet be the rank-based functionality gap to -th transformer block () or output layer () defined in (190 ###reference_0###).\nIf , then there exists\nlow-rank adapters with rank lower than with other low-rank adapters set to ,\nand updated bias vectors ,\nsuch that for any , the adapted model exactly approximates target model , i.e., .\nThe primary challenge for extending our analysis to TFNs, similar to FNN cases, is the nonlinearity introduced by softmax and ReLU.\nTo manage this, we segment a sequence of transformer blocks based on the softmax and ReLU functions.\nSpecifically, we align the output of attention scores before the softmax is applied, and then match the output of the first feedforward layer before ReLU is applied.\n\u220e\nThe complete proof of Theorem 7 ###reference_orem7### and results for randomly generated models can be found in Sec. F.2 ###reference_###.\nMeanwhile, our results here are specifically for TFNs with multi-head attention layers.\nFor TFNs with single-head attention layers, the construction of LoRA adapters differs due to the absence of .\nSince the results are similar, we defer the problem setting and results for TFNs with single-head attention layers to Sec. F.1 ###reference_###." + }, + { + "section_id": "5", + "parent_section_id": null, + "section_name": "Experiments", + "text": "Recall that all our theoretical statements are based on our construction of the LoRA adapters presented in their corresponding proofs.\nTo validate these results, here we empirically examine the relationship between approximation error and rank by integrating the LoRA adapters, which are constructed with the uniform partition in our proof, into the frozen model." 
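To make the experimental setup above concrete, the following is a minimal sketch, assuming the simplest linear setting of Sec. 2, of fitting rank-r LoRA adapters with Adam on fresh Gaussian inputs and comparing the residual error across ranks; the dimensions, ranks, learning rate, and step count are illustrative assumptions rather than the authors' exact configuration.

```python
# Hedged sketch: rank-r LoRA adapters on a frozen deep *linear* model, fitted with Adam.
# Everything here (widths, depth, ranks, lr, steps) is an illustrative assumption.
import torch

D, L, steps = 16, 4, 2000
torch.manual_seed(0)
W_frozen = [torch.randn(D, D) / D**0.5 for _ in range(L)]   # frozen linear layers
W_target = torch.randn(D, D) / D**0.5                       # one-layer linear target

def product(deltas):
    """(W_L + dW_L) ... (W_1 + dW_1), the adapted linear map."""
    out = torch.eye(D)
    for W, dW in zip(W_frozen, deltas):
        out = (W + dW) @ out
    return out

for r in (1, 2, 4):
    # LoRA reparameterization dW_l = B_l A_l, initialized as in Hu et al. (B = 0, A Gaussian)
    A = [(0.01 * torch.randn(r, D)).requires_grad_() for _ in range(L)]
    B = [torch.zeros(D, r, requires_grad=True) for _ in range(L)]
    opt = torch.optim.Adam(A + B, lr=1e-2)
    for _ in range(steps):
        x = torch.randn(256, D)                              # fresh Gaussian batch each step
        loss = ((x @ product([b @ a for a, b in zip(A, B)]).T - x @ W_target.T) ** 2).mean()
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        err = torch.linalg.matrix_norm(product([b @ a for a, b in zip(A, B)]) - W_target)
    print(f"rank {r}: ||adapted - target||_F = {err.item():.4f}")
```

In line with Lemma 1, the residual is expected to shrink as the total effective rank grows, vanishing once it reaches the rank of the error matrix.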
+ }, + { + "section_id": "6", + "parent_section_id": null, + "section_name": "Conclusions", + "text": "This work pioneers the theoretical analysis of LoRA fine-tuning\u2019s expressive capabilities in FNNs and TFNs, offering novel insights into how rank, model depth, and proximity to the target model influence LoRA\u2019s effectiveness.\nOur theoretical findings are validated by empirical evidence.\nFuture work includes quantifying approximation errors for TFNs when the LoRA-ranks are lower than required and refining LoRA adapter update algorithms based on our construction of LoRA adapters." + } + ], + "appendix": [ + { + "section_id": "Appendix 1", + "parent_section_id": null, + "section_name": "Appendix A List of Common Notations", + "text": "We first give a list of common notations that are used in the main body and appendix for reference.\n: LoRA-adapted model.\n: target model.\n: frozen/pretrained model.\n: rank of LoRA adapters.\n: dimensionality of the model, representing the number of neurons in each layer for FNNs and the embedding size for TFNs.\n: depth of the (frozen) model, representing the number of layers for FNNs and the number of transformer blocks for TFNs.\n: sequence length of the input for TFNs.\n: input.\n: random input.\n: matrix input.\n: input space.\n: .\n: a weight matrix associated with (frozen) model. Subscripts and superscripts may be added for specificity.\n: a bias vector associated with the (frozen) model. Subscripts may be added for specificity.\n: the output of the first layers in the (frozen) FNN.\n: the output of the first transformer blocks in a (frozen) TFN.\n: a weight matrix associated with the target model. Subscripts and superscripts may be added for specificity.\n: a bias vector associated with the target model. Subscripts may be added for specificity.\n: the intermediate output of the first layers in target FNN given the random input .\n: the output of the first transformer blocks in a target TFN.\n: depth of the target model, representing the number of layers for FNNs and the number of transformer blocks for TFNs.\n: the weight matrix of a LoRA adapter.\n: a bias vector associated with the LoRA-adapted model.\n: the output of the first layers in the LoRA-adapted model given the random input .\n: the output of the first transformer blocks in the LoRA-adapted model.\n: the ratio of the depth of the frozen model to that of the target model, i.e., .\n: partition , each element specifies that the layers with index in the adapted model will be used to approximate the -th layer in the target model.\n: the -th element in partition .\n: uniform partition . The uniform partition indicates that every layers in the adapted model are employed to approximate each layer in the target model.\n: the -th element in uniform partition .\n: the identity matrix. When the context permits, the subscript of may be omitted, simplifying the notation to .\n: a diagonal matrix where the diagonal entries from the th to th position are set to 1, while all remaining entries are 0s.\n: the -th largest singular value for the given square matrix. When is greater than the width of the matrix, .\n: best rank- approximation of a square matrix in Frobenuis norm and spectral norm. The subscript may be omitted to indicate a general low-rank approximation without specifying the rank.\n: product of the weight matrices from the layers , with the later layer positioned to the left and the earlier layer to the right in the matrix product.\nFor example, ." 
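As a concrete companion to the best rank- approximation notation defined above, here is a small NumPy sketch (an implementation assumption, not part of the paper) that forms the truncated-SVD approximation and checks the Eckart-Young-Mirsky error identity.

```python
# Best rank-r approximation via truncated SVD; the Frobenius error equals the
# l2 norm of the discarded singular values (Eckart-Young-Mirsky).
import numpy as np

def best_rank_r(A: np.ndarray, r: int) -> np.ndarray:
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :r] @ np.diag(s[:r]) @ Vt[:r, :]

rng = np.random.default_rng(0)
A, r = rng.standard_normal((8, 8)), 3
s = np.linalg.svd(A, compute_uv=False)
err = np.linalg.norm(A - best_rank_r(A, r))            # ||A - LR_r(A)||_F
print(np.isclose(err, np.sqrt(np.sum(s[r:] ** 2))))    # True
```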
+ }, + { + "section_id": "Appendix 2", + "parent_section_id": null, + "section_name": "Appendix B Expanded Related Works", + "text": "" + }, + { + "section_id": "Appendix 3", + "parent_section_id": null, + "section_name": "Appendix C Proofs Related to Linear Algebra", + "text": "In this section, we present a collection of commonly used matrix inequalities and the basic properties of randomly generated matrices.\nHere, we present some commonly used basic properties for matrix multiplication including rank computation, norm inequalities, as well as key results involving the trace and Frobenius norm of matrices for reference:\nAlthough the non-singularity of randomly generated matrices is already established, we include a proof for completeness.\nTo facilitate the proof, we introduce a lemma which states that if a polynomial is non-zero, then the set of roots corresponding to a zero value of the polynomial has a Lebesgue measure of zero.\nLet be a polynomial of degree , . If is not the zero polynomial, then the set is of Lebesgue measure zero.\nWe note that the determinant of a matrix can be viewed as a polynomial function of its vectorized version.\nBased on this insight, we proceed with our proof.\nLet be a random matrix that follows arbitrary continuous distribution with support having non-zero Lebesgue measure on .\nThen, is non-singular with probability 1.\nThe result is a direct consequence of Lemma 5 ###reference_ma5###.\nLet .\nThen, is a random vector following arbitrary continuous distribution with a support having non-zero Lebesgue measure on .\nFirst, we establish the relationship:\nfor some polynomial function .\nWe denote the support of random vector by , and the probability density function (PDF) of by . Then,\nBy Lemma 5 ###reference_ma5###, the Lebesgue measure of is zero. Hence,\nBy combining all the equations above, we conclude that , which implies is non-singular with probability 1.\n\u220e" + }, + { + "section_id": "Appendix 4", + "parent_section_id": null, + "section_name": "Appendix D Proofs for Linear Model Approximation", + "text": "In this section, we present the results and corresponding proofs for the linear model approximation problem introduced in Sec. 2 ###reference_###.\nThe deep linear model is a common technique in theoretical deep learning research, which offers valuable insights into deep nonlinear models, and has been employed in many notable studies, including those by Saxe et al. 
(2014 ###reference_b63###), Kawaguchi (2016 ###reference_b38###), Lu & Kawaguchi (2017 ###reference_b50###), Hardt & Ma (2017 ###reference_b29###) and Laurent & von Brecht (2018 ###reference_b40###).\nWe employ this toy model as a preliminary model, which serves as a foundation for extending our results to nonlinear models (i.e., FNN and TFN).\nWe first provide a slightly more detailed version of Lemma 1 ###reference_ma1### along with its proof.\nThen, we present a variant of it that allows for different LoRA-ranks for each low-rank adapter.\nThe proof for this variant involves only a minor modification of the proof for Lemma 7 ###reference_ma7###.\n[Detailed version of Lemma 1 ###reference_ma1###]\nDefine error matrix , and denote its rank by .\nFor a given LoRA-rank , assume that all the weight matrices of the frozen model , and are non-singular for all .\nThen, the approximation error\nand the optimal solution to the matrix approximation problem satisfies\n\nTherefore, when , we have , implying .\nOur goal is to find matrices of rank or lower such that the product of the adapted matrices approximates the target matrix well, i.e., we aim to solve the following constrained optimization problem:\nBy subtracting from both terms, the constrain optimization problem becomes\nTo perform analysis on (26 ###reference_###), we start with the analysis of as follows:\nHere, we have separated the first term in the product , breaking it into two parts: one involving and the other .\nWe can further expand the part involving :\nAt this point, it becomes clear that this expression can be iteratively decomposed.\nFollowing this pattern, we can express as:\nIn this final form, is decomposed as .\nIt is important to note that .\nConsequently, .\nThen, the optimization problem (26 ###reference_###) can be relaxed into a low-rank approximation problem\nwhere the optimal solution is .\nTherefore, if we can identify rank- or lower matrices such that\nthen we effectively solve the matrix approximation problem as defined in (26 ###reference_###).\nMoreover, it is straightforward to verify that (35 ###reference_###) directly implies all statements in this lemma.\nTherefore, our remaining proof focuses on proving (35 ###reference_###).\nDenote .\nTo derive the explicit form of ,\nwe first refer to the SVD of as\nwhere and are orthonormal matrices and the first diagonal entries of are non-zero, with all remaining entries being zero.\nBased on this, is expressed as\nHaving already derived the decomposition , we next aim to decompose as , where .\nThe goal now shifts to identifying such that for each .\nAchieving this would complete the proof of (35 ###reference_###).\nTherefore, our goal becomes finding with for all such that\nOne sufficient condition for achieving (38 ###reference_###) is that the decomposed matrices and low-rank adapters meet the following conditions:\nHere (39 ###reference_###) describes the decomposition of , (40 ###reference_###) provides one simple solution to (38 ###reference_###) when (42 ###reference_###) holds, and (41 ###reference_###) is the rank constraint on the low-rank adapter.\nIn particular, the (42 ###reference_###) is used to ensure the invertibility of for .\nThis condition is not necessary for as the inverse of is not required for computing any low-rank adapters.\nWe will show that the matrices defined by\nand defined by (40 ###reference_###) for all satisfies the all four conditions (39 ###reference_###), (40 ###reference_###), (41 ###reference_###), and (42 
###reference_###).\nWe note that the definition of clearly satisfies condition (39 ###reference_###).\nFor the remaining conditions, namely (40 ###reference_###), (41 ###reference_###), (42 ###reference_###), we proceed the proof by induction." + }, + { + "section_id": "Appendix 5", + "parent_section_id": null, + "section_name": "Appendix E Proofs for FNN Approximation", + "text": "In this section, we provide the full proof for deriving the main results outlined in Sec. 3 ###reference_###.\nFor the sake of completeness, we restate our results from the main body before presenting the proof.\nWe first provide a slightly more detailed result on the one-layer ReLU FNN approximation (Lemma 9 ###reference_ma9###) along with its corresponding proof.\nThen, we present a variant of this lemma by allowing for different LoRA-ranks for each low-rank adapter.\nThe proof for this variant involves only a minor modification of the proof for Lemma 9 ###reference_ma9###.\nDefine error matrix , with its rank represented by .\nConsider a LoRA-rank .\nAssume that the weight matrices and for all are non-singular.\nLet be a random input sampled from a distribution with bounded support and\nlet .\nThen, there exists rank- or lower matrices and bias vectors such that for any input ,\nTherefore, when , the adapted model exactly approximates the target model, i.e., for all .\nFurthermore, let be a random input sampled from a distribution with bounded support and\nlet .\nThen, the expected squared error is bounded as\nThis proof consists of three main steps:\n(i) linearize the first layers of the adapted model to reduce it to a single-layer FNN,\n(ii) align the weight matrices and bias vectors of this simplified with those of the target model ,\n(iii) derive an upper bound of the error .\nIn this part, we restate all the results considering uniform model partition from Sec. 
3.3 ###reference_###, along with their corresponding proofs, presented in the same order.\nFor a fixed LoRA-rank ,\nthe weight matrices of the frozen model and matrices\n are non-singular for all and .\nLet be matrices whose elements are drawn independently from arbitrary continuous distributions.\nThen, with probability 1, Assumption 1 ###reference_umption1### holds .\nWe first use Lemma 6 ###reference_ma6### to establish that are non-singular with probability 1.\nThe goal of the remaining proof is to demonstrate that is full-rank with probability 1.\nIn this proof, we use to denote the probability density function, where the subscript indicates the associated random variable.\nFix an arbitrary and .\nThen probability of the being full-rank can be computed as\nIf the conditional random matrix has a continuous distribution with support of non-zero Lebesgue measure on , then\nensuring is full-rank with probability 1.\nConsequently, the remaining part of the proof aims to show that the conditional random matrix follows arbitrary continuous distribution with support having non-zero Lebesgue measure on .\nDenote .\nNow, consider the conditional distribution of , which can be written as\nSince is continuous with support of non-zero Lebesgue measure on , the same holds for .\nFurthermore, adding a constant matrix to this conditional distribution preserves the desired properties, thus completing the proof.\n\u220e\nUnder Assumption 1 ###reference_umption1###, there exists rank- or lower matrices with and bias vectors with when the rank of the low-rank adapter , the low-rank adapted model can exactly approximate the target model , i.e., for all input .\nThe key to this proof lies in a simple idea: for each layer in the target model, we can update layers (i.e., -th layer to -th layer) in the frozen model to approximate it as guaranteed by Lemma 9 ###reference_ma9###.\nHence, all layers of the target model can be approximated by the adapted model.\nFirstly, we provide the required non-singular assumption and the lemma demonstrating the mildness of this assumption for the general model partition cases after introducing necessary notations.\nFor the given LoRA-rank sequence and partition ,\nthe weight matrices of the frozen model and\n are non-singular for all and .\nNote that and here represent the maximum and minimum elements in the set , respectively.\nLet be matrices whose elements are drawn independently from arbitrary continuous distributions.\nThen, with probability 1, Assumption 2 ###reference_umption2### holds for all .\nFollowing the same steps in the proof of Lemma 3 ###reference_ma3### but replacing the uniform partition with the general partition completes the proof.\n\u220e\nWe now restate Theorem 6 ###reference_orem6### and provide its proof.\nConsider a partition for the frozen model.\nLet Assumption 2 ###reference_umption2### hold.\nIf for all , there exists LoRA adapters with and biases such that the adapted model can exactly approximate the target model.\nMoreover, define the approximation error of the -th layer as , and the magnitude of the parameters and the input as .\nThen, there exists LoRA adapters with and biases such that for any input with , the approximation error can be bounded as\nThis proof follows the same steps as the proofs of Theorem 3 ###reference_orem3### and Theorem 5 ###reference_orem5###, substituting the uniform partition with the general partition and applying Lemma 10 ###reference_ma10### in place of Lemma 2 ###reference_ma2### to derive the desired 
outcome.\n\u220e\nWe now aim to examine another commonly used model adaptation method, the final layers tuning, within the same theoretical framework.\nThe main limitation of this method, as compared to LoRA, is that while LoRA can update all layers, the tuning of final layers keeps the initial layers frozen.\nConsequently, a clear limitation arises when the initial layers of the frozen model are less discriminative than the target model .\nThat is, if there exist two input vectors such that the output of the initial layers of the frozen model is the same, but the output of the target model is different, then no matter how the final layers are tuned, it is impossible for the adapted model to exactly approximate the target model .\nTo formalize this, we observe that for the first layer of the frozen model, the outputs of the inputs in the non-activation region are always zero.\nIn other words, when , we have .\nTherefore, no matter how the subsequent layers are tuned, we still have .\nWhen we fix the first layers, the non-activation region becomes .\nSimilarly, we define the non-active region of the first layer in the frozen model as .\nCorrespondingly, we define .\nThe following lemma is provided based on these definitions.\nIf such that and the weight matrices of the target model are non-singular, then for any tuning of the last layers, .\nFor the simplicity of the presentation, we let to denote the non-activation region of the target model.\nThen, the condition can be written as .\nClearly, both and are closed convex sets." + }, + { + "section_id": "Appendix 6", + "parent_section_id": null, + "section_name": "Appendix F Proofs for TFN Approximation", + "text": "In this section, we not only provide the proof for the results outlined in Sec. 4 ###reference_###, but also introduce the problem setting for TFNs with single-head attention layers and present the corresponding results.\nIn this part, we outline the problem setting to investigate the expressive power of LoRA in TFNs that utilize single-head attention layers.\nThe primary distinction between this setting and that of TFNs with multi-head attention layers lies in the weight matrices.\nSpecifically, the matrices for combining different attention heads are absent in this case.\nDespite this difference, the derived results are consistent, albeit under slightly modified assumptions regarding the weight matrices and a different LoRA adaptation strategy.\nWe start by introducing necessary notations.\nFor an input matrix , where is the dimension of the token embeddings and is the number of tokens, the -th Transformer block using single-head self-attention can be expressed as:\nwhere the weight matrices , bias vectors , is the output of -th transformer block, with .\nThe output of the first Transformer blocks are subsequently fed into the output layer.\nThis produces the final output of the TFN, given by \nwhere represents the weight matrix of the output layer.\nFor single-head self-attention layers, the target model , frozen model , and the adapted model can be formally represented as:\nHere, are the weight matrices for generating key, query, and values in the -th transformer block of the target TFN;\n and serve as the weight matrices and bias vectors, respectively, for the feedforward layer in the same block;\n is the weight matrix for the output layer.\nFor the frozen TFN, the same roles are played by , , and for all and .\nFor the adapted model, low-rank adapters with a rank constraint are added to each weight matrix, and the bias 
vectors are updated to for all .\nGiven the problem setting outlined above, we give the non-singularity assumption for TFNs with single-head attention layers.\nAll the weight matrices of both the target model and the frozen model, as well as the following matrices for all ,\nare non-singular.\nLet the elements of all weight matrices in the target model and the frozen model be independently sampled from continuous distributions.\nThen, Assumption 3 holds with probability 1.\nThe results can be obtained by replicating the same steps outlined in the proof of Lemma 3.\n\u220e\nConsider the rank of the adapter weight matrices .\nLet Assumption 3 hold.\nDefine the rank-based functionality gap to -th transformer block () or output layer () as\nIf , there exist rank- or lower weight matrices for low-rank adapters with other low-rank adapters set to ,\nand updated bias vectors: ,\nsuch that for any , the adapted model exactly approximates , i.e., , with probability 1.\nLet and denote the intermediate and final outputs of the -th transformer block in the target model , respectively.\nSpecifically, represents the output from the first feedforward layer in the -th transformer block.\nThey are defined as\nwhere .\nFor the adapted model , we introduce and to denote the corresponding intermediate output of the first feedforward layer and the final output of the -th transformer block for the adapted model, respectively:\nwhere . We note that .\nIn this proof, we set for all .\nOur goal is to show that adding low-rank adapters to the self-attention layers and the first feedforward layers in all transformer blocks enables the adapted model to be functionally equivalent to the target model of the same dimensions.\nWe start by inductively constructing the adapter weight matrices such that for all .\nWe then select the low-rank adapters for and the to approximate the output of the target model.\nFor unmentioned low-rank adapters, we set them as .\nIn this section, we first provide the explicit formulation of TFNs with multi-head attention layers.\nConsider an input matrix , where is the dimension of the token embeddings and is the number of tokens.\nThe output of the -th transformer block is denoted as , which can be computed as follows:\nwhere we define .\nHere, is the number of attention heads.\nThe weight matrices for each head in the -th transformer block are .\nThe softmax operator is applied column-wise to the matrix.\nFurther, are the weight matrices and are the bias vectors in the feedforward layers.\nA Transformer network, denoted as , is a composition of Transformer blocks, followed by a softmax output layer , where .\nThe final output of the TFN is given by .\nTo study the expressive power of LoRA within TFNs featuring multi-head attention layers, we next specify the parameters of the target model , frozen model , and the adapted model , each with transformer blocks and a dimension .\nFor ease of presentation, we drop the subscript in , referring to it simply as TFN.\nGiven a specified rank for LoRA, these models are defined in (177)\u2013(180), where the weight matrices , and the bias vectors .\nMoreover, the weight matrices of the 
low-rank adapters for all and are of rank or lower.\nWe next introduce the non-singularity assumption (Assumption 4) for TFNs with multi-head attention layers, which is then validated by Lemma 14.\nWe then provide the proof of our main result for TFNs \u2014 Theorem 7.\nAdditionally, we introduce a supplementary theorem that consolidates the results for TFNs with both single-head and multi-head attention layers when the weight matrices are randomly initialized.\nThis is articulated in Corollary 10.\nFor a fixed ,\nall the weight matrices of both the target model and the frozen model,\nas well as the matrices listed in (181)\u2013(185) for all ,\nare non-singular.\nLet the elements of all weight matrices in the target model and frozen model be independently sampled from continuous distributions.\nThen, Assumption 4 holds with probability 1.\nThe results can be obtained by replicating the same steps outlined in the proof of Lemma 3.\n\u220e\nFor the reader\u2019s reference, we restate Theorem 7 here, integrated with the explicit formulation of the rank-based functionality gap .\nConsider a given LoRA-rank .\nLet Assumption 4 hold.\nDefine the rank-based functionality gap to the -th transformer block () or output layer () as in (190).\nIf , then there exist\nlow-rank adapters with rank lower than , with the other low-rank adapters set to ,\nand updated bias vectors ,\nsuch that for any , the adapted model exactly approximates the target model , i.e., .\nThe key idea of this proof is the same as that of Theorem 8: our first step is to ensure that, for each transformer block, the output from the first feedforward layer in the target model matches that in the adapted model.\nOnce this is established, we select an appropriate output layer weight matrix to complete the proof.\nSimilar to the proof of Theorem 8, we define and as the intermediate and final outputs of the -th transformer block in the target model , respectively.\nIn particular, corresponds to the output of the first feedforward layer in the -th transformer block.\nThey are formulated as\nFor the adapted model , we introduce and accordingly to denote the intermediate output of the first feedforward layer and the final output of the -th transformer block for the adapted model, respectively:\nNote that .\nWe aim to demonstrate that adding low-rank adapters to the weight matrices allows the adapted TFN to be functionally equivalent to the target TFN of identical dimensions.\nWe will initiate our proof by inductively constructing the adapter weight matrices such that for all , and then select the and the low-rank adapter for the output layer to approximate the output of the target model.\nFor unmentioned low-rank adapters, we set them as ."
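As a concrete illustration of where the adapters in the constructions above sit, here is a minimal NumPy sketch, under assumed shapes and rank, of the skip-connection-free single-head block of Sec. F.1 with rank-r LoRA deltas added to the attention weights and the first feedforward layer; it is a schematic aid, not the authors' implementation.

```python
# Hedged sketch of one skip-connection-free transformer block with LoRA deltas on
# W_K, W_Q, W_V and the first feedforward matrix W_1; shapes and rank are assumptions.
import numpy as np

def softmax_cols(M):
    M = M - M.max(axis=0, keepdims=True)
    E = np.exp(M)
    return E / E.sum(axis=0, keepdims=True)

def block(Z, Wk, Wq, Wv, W1, W2, b1, b2):
    attn = (Wv @ Z) @ softmax_cols((Wk @ Z).T @ (Wq @ Z))   # single-head attention
    return W2 @ np.maximum(W1 @ attn + b1, 0.0) + b2        # token-wise ReLU feedforward

D, N, r = 8, 5, 2
rng = np.random.default_rng(0)
Z = rng.standard_normal((D, N))                             # D-dim embeddings, N tokens
Wk, Wq, Wv, W1, W2 = (rng.standard_normal((D, D)) for _ in range(5))
b1, b2 = rng.standard_normal((D, 1)), rng.standard_normal((D, 1))

lora = lambda: rng.standard_normal((D, r)) @ rng.standard_normal((r, D))  # rank-r update
out = block(Z, Wk + lora(), Wq + lora(), Wv + lora(), W1 + lora(), W2, b1, b2)
print(out.shape)                                            # (D, N)
```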
+ }, + { + "section_id": "Appendix 7", + "parent_section_id": null, + "section_name": "Appendix G Experiments", + "text": "In this section, we perform experiments on both synthetic and real datasets to corroborate our theoretical results.\nFirst, we focus on validating the construction of the LoRA adapter in our proof.\nSubsequently, we extend our experimental validation to encompass the effects of tuning the final layers and the significance of updatable biases.\nAdditionally, we offer visual representations of training curves, assess the generalization performance of LoRA, and evaluate its efficacy on classification tasks.\nWe also conduct experiments on real datasets to further support our theoretical insights in real-world scenarios.\nWe implement the LoRA adapter by reparameterizing it as where , and we use the same initialization scheme as proposed by Hu et al. (2022a).\nFor the experiments presented in Sec. 5, G.3.1, G.3.2, G.4, and G.5, we consider two variants of frozen models:\n(Random) The first method involves randomly generating all the weight matrices using the Xavier uniform distribution, which is the default weight initialization method used in PyTorch.\n(Pretrained) The second method aims to simulate scenarios where the pretrained model is relatively closer to the target model.\nWe achieve this by initially creating the target model and the frozen model in the same way as the first method and then performing full-rank updates on the frozen model via gradient descent to approximate the target model until the approximation error is reduced by 1/3.\nFor other experiments on synthetic datasets, we default to the randomly parameterized frozen model unless specified otherwise.\nIn our experiments, we utilize the Adam optimizer.\nWe tune the learning rate and the weight decay .\nThe optimal configuration is determined based on the validation loss on a set of 256 samples independently drawn from a standard normal distribution.\nWe run 5,000 iterations for each hyperparameter setting, where at each step 256 fresh standard Gaussian samples are generated for loss and gradient computation.\nRecall that all our theoretical statements are based on our construction of the LoRA adapters presented in their corresponding proofs.\nTo validate these results, here we empirically examine the relationship between approximation error and rank by integrating the LoRA adapters, which are constructed with the uniform partition in our proof, into the frozen model.\nFurthermore, we evaluate the effectiveness of our constructed LoRA adapters by comparing their performance against adapters updated through gradient descent and optimized by Adam.\nAll simulations are conducted five times using different seeds, and the reported values represent the median computed across different runs.\nTuning or adding only the final layers is also a common adaptation method used in various domains, including computer vision (Chatfield et al., 2014, Donahue et al., 2014, Sharif Razavian et al., 2014) and natural language processing (Devlin et al., 2019, Gira et al., 2022).\nRecall that Corollary 4 and Lemma 12 demonstrate that tuning the final layers does not perform as well as LoRA for randomly generated models, provided the LoRA-rank satisfies the rank 
Tuning or adding only the final layers is also a common adaptation method used in various domains, including computer vision (Chatfield et al., 2014 ###reference_b10###, Donahue et al., 2014 ###reference_b18###, Sharif Razavian et al., 2014 ###reference_b65###) and natural language processing (Devlin et al., 2019 ###reference_b14###, Gira et al., 2022 ###reference_b28###).\nRecall that Corollary 4 ###reference_orem4### and Lemma 12 ###reference_ma12### demonstrate that tuning final layers does not perform as well as LoRA for randomly generated models, provided the LoRA-rank satisfies the rank constraints shown in Corollary 4 ###reference_orem4###.\nIn this experiment, we aim to validate this assertion and compare the performance of tuning final layers and LoRA in more general scenarios, such as when the frozen model has been pretrained, and when the LoRA-rank is smaller than required.\nIn our proof, as detailed in Sec. 3.2 ###reference_### and E.1 ###reference_###, the updatable biases in the FNN play a crucial role in eliminating the nonlinearity of ReLUs.\nIn this experiment, we investigate the importance of updatable biases in ensuring the success of LoRA in FNN cases.\n###figure_3### Although our theoretical study does not incorporate any training process, we present the training curves of the LoRA gradient update method to illuminate the optimization aspects of LoRA.\nWhile our theoretical study only establishes an upper bound on LoRA\u2019s approximation error with infinite data samples, it does not consider LoRA\u2019s generalization performance in practice.\nAlthough this is beyond the current scope of our paper, we empirically investigate LoRA\u2019s generalization performance in this experiment.\n###figure_4### ###figure_5### Our theory and previous experiments all focus on regression cases.\nIn this experiment, we consider binary and multi-class classification tasks, optimize the LoRA adapter via the cross-entropy loss, and report the performance of LoRA using accuracy.\nIn our theoretical analysis, we demonstrate how the sizes of frozen models and the distance between the frozen and target models influence the LoRA-rank necessary to achieve the desired performance (see Lemma 1 ###reference_ma1###, 2 ###reference_ma2###, and Theorem 5 ###reference_orem5###, 6 ###reference_orem6###, 7 ###reference_orem7###).\nSpecifically, our results suggest that larger models require a lower LoRA-rank to reach the desired performance.\nSimilarly, when the frozen model is closer to the target model, a lower LoRA-rank is sufficient to achieve the same performance.\nWe validate these theoretical insights through experiments on the GLUE benchmark (Wang et al., 2018 ###reference_b74###)." + }, + { + "section_id": "Appendix 8", + "parent_section_id": null, + "section_name": "Appendix H Extension to Cases with Different Model Dimensions", + "text": "This discussion only applies to linear model approximation and FNN approximation.\nAs highlighted in Sec. 
2 ###reference_###, our results can be easily extended to scenarios where the target model, , and the frozen model, , have different model dimensions.\nSpecifically, for linear model or FNN approximation, we use to represent the number of hidden neurons per layer in the target model and for the frozen model.\nWe particularly consider the cases where the frozen model is wider than the target model, i.e., .\nThis is because the frozen model is typically overparameterized in practical applications.\nThe key idea for extending our analysis to scenarios with different model dimensions is to expand the dimension of the target model.\nFor simplicity, we focus on the simplest case, linear model approximation, as an example.\nIn this setting, the difference between the output of the adapted model and the target model can be measured by\nwhere .\nConsequently, the last columns and rows of do not affect the results at all.\nDenote the submatrix consisting of the first rows and columns of a matrix by .\nThen, to approximate the target model, we aim to solve the following constrained optimization problem for a given LoRA-rank :\nTo solve this problem, we first define an expanded target matrix, denoted by .\nThe expanded target matrix is constructed such that , while the remaining entries match the corresponding entries in .\nThen, the error matrix , consists entirely of zeros except for the first rows and columns.\nTherefore, we obtain .\nGiven the expanded target matrix, we consider the updated constrained optimization problem as follows:\nBy Lemma 1 ###reference_ma1###, we obtain that when the LoRA-rank , the optimal solution to (221 ###reference_1###) satisfies , given that .\nThis result implies that and therefore the approximation error defined in (219 ###reference_9###) is 0 for all input .\nA similar analysis can be conducted for FNN approximation.
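To make this construction concrete, the following is a minimal NumPy sketch for the linear case; the widths and the random seed are arbitrary illustrative choices, and the rank-r update is computed via a truncated SVD of the error matrix, matching the optimal rank-constrained solution used in Lemma 1.

import numpy as np

m, n = 3, 8                                   # target width m, frozen width n > m (illustrative)
rng = np.random.default_rng(0)
W_target = rng.standard_normal((m, m))        # smaller target linear model
W_frozen = rng.standard_normal((n, n))        # wider frozen linear model

# Expanded target: agrees with W_target on the leading m x m block and with
# W_frozen everywhere else, so the error matrix vanishes outside that block.
W_expanded = W_frozen.copy()
W_expanded[:m, :m] = W_target
E = W_expanded - W_frozen                     # rank(E) <= m

# Best rank-r update via truncated SVD; exact whenever r >= rank(E).
r = m
U, S, Vt = np.linalg.svd(E)
lora_update = (U[:, :r] * S[:r]) @ Vt[:r, :]  # plays the role of the product B A

adapted = W_frozen + lora_update
print(np.allclose(adapted[:m, :m], W_target))  # True: zero error on the target block
"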
+ }, + { + "section_id": "Appendix 9", + "parent_section_id": null, + "section_name": "Appendix I Extended Future Works", + "text": "To the best of our knowledge, this paper is the first to offer a theoretical understanding of LoRA fine-tuning on both FNN and TFN.\nOur work delivers insightful results, elucidating the impact of rank, depth of the pre-trained model, and the distance between the pre-trained model and the target model on the expressive power of LoRA.\nThese theoretical results are further corroborated by our experiments.\nDespite these advancements, several intriguing questions remain open.\nFirst, as observed in the numerical experiments, our construction of LoRA adapters for FNN and TFN may not always be optimal.\nGiven that more complex models offer increased flexibility, an open question is whether we can devise a more parameter-efficient scheme to construct the LoRA adapters, thereby deriving a tighter bound on the approximation error.\nSecond, for TFN, we have only identified the conditions under which the LoRA-adapted model exactly matches the target model, due to the analytical complexity of TFN.\nIt would be interesting to quantify the approximation error when the rank is lower than required.\nFurthermore, for TFN, we constrain the target model and the frozen model to have identical embedding size and depth, and we omit the skip connections and layer norms for simplicity.\nAnother intriguing direction would be to study the expressive power of LoRA on TFNs under more general architectural settings.\nWhile our analysis does not involve any training process, an interesting direction for future research would be to consider gradient-based optimization algorithms and examine how efficiently LoRA can be optimized.\nFinally, theoretical questions about LoRA\u2019s generalization to unseen data also remain unresolved." + } + ], + "tables": { + "1": { + "table_html": "
<table>
<tr><th>Findings</th><th>Empirical Observation</th><th>Theoretical Insights</th></tr>
<tr><td>For a fixed downstream task, larger models require a lower LoRA-rank to achieve the desired performance.</td><td>Sec. G.9</td><td>Lemma 1, 2, and Theorem 5, 6</td></tr>
<tr><td>When the frozen model is closer to the target model, a lower LoRA-rank is sufficient to attain the desired performance.</td><td>Sec. G.9 and 6-th footnote in Hu et al. (2022a)</td><td>Lemma 1, 2, and Theorem 5, 6, 7</td></tr>
<tr><td>LoRA outperforms final layers tuning if the quality of shared representation is not good.</td><td>Sec. G.4 and observations by Kaplun et al. (2023) and Ding et al. (2023)</td><td>Lemma 4</td></tr>
<tr><td>In addition to applying low-rank updates to weight matrices, it is crucial to also update the bias.</td><td>Sec. G.5 and 2-nd footnote in Hu et al. (2022a)</td><td>Proofs in Sec. 3.2 and E.1</td></tr>
<tr><td>Tuning attention weights is sufficient for achieving good performance on TFNs.</td><td>Sec. 4.2 in Hu et al. (2022a)</td><td>Theorem 7</td></tr>
<tr><td>Current optimization algorithms for LoRA training might be suboptimal.</td><td>Fig. 4, 5, and 9</td><td>\u2014</td></tr>
</table>
Table 1: Summary of our findings, supported by empirical evidence and theoretical results.
", + "capture": "Table 1: Summary of our findings, supported by empirical evidence and theoretical results." + }, + "2": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelMNLISST-2MRPCCoLAQNLIQQPRTESTS-B
0.330.491.3160.495.682.527.024
0.318.505.6840.505.369.473.032
2.861.950.892.632.928.891.780.907
6.870.948.892.629.931.900.773.909
2.904.956.917.631.946.887.884.916
\n
\n
Table 2: \nComparison of the fine-tuned performance of RoBERTa-base and RoBERTa-large using LoRA with different LoRA-ranks on the GLUE benchmark. \nFollowing Hu et\u00a0al. (2022a), we report the overall (matched and mismatched) accuracy for MNLI, Matthew\u2019s correlation for CoLA, Pearson correlation for STS-B, and accuracy for other tasks.\nHigher is better for all metrics.\nDespite the absence of a clear pattern indicating which pretrained model is generally superior, after fine-tuning using LoRA, we observe that RoBERTa-large (340M) fine-tuned with LoRA-rank outperforms RoBERTa-base (110M) with LoRA-rank in 7 out of 8 tasks.\nThis observation aligns with our theoretical conclusion that larger models require lower LoRA-ranks to achieve the desired performance.\n
\n
", + "capture": "Table 2: \nComparison of the fine-tuned performance of RoBERTa-base and RoBERTa-large using LoRA with different LoRA-ranks on the GLUE benchmark. \nFollowing Hu et\u00a0al. (2022a), we report the overall (matched and mismatched) accuracy for MNLI, Matthew\u2019s correlation for CoLA, Pearson correlation for STS-B, and accuracy for other tasks.\nHigher is better for all metrics.\nDespite the absence of a clear pattern indicating which pretrained model is generally superior, after fine-tuning using LoRA, we observe that RoBERTa-large (340M) fine-tuned with LoRA-rank outperforms RoBERTa-base (110M) with LoRA-rank in 7 out of 8 tasks.\nThis observation aligns with our theoretical conclusion that larger models require lower LoRA-ranks to achieve the desired performance.\n" + }, + "3": { + "table_html": "
\n
\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n\n
ModelMNLISST-2MRPCCoLAQNLIQQPRTESTS-B
Random2.523.775.691.154.627.761.542.213
Pretrained.861.950.892.632.928.891.780.907
Random4.535.788.696.145.625.768.542.224
Pretrained.868.950.890.634.929.898.805.910
Random6.544.799.696.154.632.768.542.210
Pretrained.868.948.892.629.931.900.773.909
\n
\n
Table 3: \nComparison of the fine-tuned performance of randomly initialized and pretrained RoBERTa-base. \nFollowing Hu et\u00a0al. (2022a), we report the overall (matched and mismatched) accuracy for MNLI, Matthew\u2019s correlation for CoLA, Pearson correlation for STS-B, and accuracy for other tasks.\nHigher is better for all metrics.\nWe observe that the performance of the pretrained RoBERTa-base significantly surpasses that of the randomly initialized RoBERTa-base given the same LoRA-rank.\nThis observation is consistent with our theoretical findings, which suggest that a frozen model closer to the target model yields better performance given the same LoRA-rank.\n
\n
", + "capture": "Table 3: \nComparison of the fine-tuned performance of randomly initialized and pretrained RoBERTa-base. \nFollowing Hu et\u00a0al. (2022a), we report the overall (matched and mismatched) accuracy for MNLI, Matthew\u2019s correlation for CoLA, Pearson correlation for STS-B, and accuracy for other tasks.\nHigher is better for all metrics.\nWe observe that the performance of the pretrained RoBERTa-base significantly surpasses that of the randomly initialized RoBERTa-base given the same LoRA-rank.\nThis observation is consistent with our theoretical findings, which suggest that a frozen model closer to the target model yields better performance given the same LoRA-rank.\n" + } + }, + "image_paths": { + "1(a)": { + "figure_path": "2310.17513v3_figure_1(a).png", + "caption": "(a) Linear model approximation.\nFigure 1: Approximation error (measured by MSE) versus LoRA-rank.", + "url": "http://arxiv.org/html/2310.17513v3/x1.png" + }, + "1(b)": { + "figure_path": "2310.17513v3_figure_1(b).png", + "caption": "(b) FNN approximation.\nFigure 1: Approximation error (measured by MSE) versus LoRA-rank.", + "url": "http://arxiv.org/html/2310.17513v3/x2.png" + }, + "2": { + "figure_path": "2310.17513v3_figure_2.png", + "caption": "Figure 2: An example of I1subscript\ud835\udc3c1I_{1}italic_I start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT and \\macc@depth\u2062\u0394\u2062\\frozen@everymath\u2062\\macc@group\u2062\\macc@set@skewchar\u2062\\macc@nested@a\u2062111\u2062I1\\macc@depth\u0394\\frozen@everymath\\macc@group\\macc@set@skewchar\\macc@nested@a111subscript\ud835\udc3c1{\\macc@depth\\char 1\\frozen@everymath{\\macc@group}\\macc@set@skewchar%\n\\macc@nested@a 111{I}}_{1}roman_\u0394 111 italic_I start_POSTSUBSCRIPT 1 end_POSTSUBSCRIPT when D=2\ud835\udc372D=2italic_D = 2.", + "url": "http://arxiv.org/html/2310.17513v3/x3.png" + }, + "3(a)": { + "figure_path": "2310.17513v3_figure_3(a).png", + "caption": "(a) Frozen model is randomly generated.\nFigure 3: Approximation error (measured by MSE) versus LoRA-rank on FNNs.", + "url": "http://arxiv.org/html/2310.17513v3/x4.png" + }, + "3(b)": { + "figure_path": "2310.17513v3_figure_3(b).png", + "caption": "(b) Frozen model is pretrained.\nFigure 3: Approximation error (measured by MSE) versus LoRA-rank on FNNs.", + "url": "http://arxiv.org/html/2310.17513v3/x5.png" + }, + "4": { + "figure_path": "2310.17513v3_figure_4.png", + "caption": "Figure 4: Log-scale MSE versus LoRA-rank on randomly initialized FNNs.", + "url": "http://arxiv.org/html/2310.17513v3/x6.png" + }, + "5(a)": { + "figure_path": "2310.17513v3_figure_5(a).png", + "caption": "(a) Frozen model is randomly generated.\nFigure 5: Approximation error (measured by MSE) versus LoRA-rank on TFNs.", + "url": "http://arxiv.org/html/2310.17513v3/x7.png" + }, + "5(b)": { + "figure_path": "2310.17513v3_figure_5(b).png", + "caption": "(b) Frozen model is pretrained.\nFigure 5: Approximation error (measured by MSE) versus LoRA-rank on TFNs.", + "url": "http://arxiv.org/html/2310.17513v3/x8.png" + }, + "6(a)": { + "figure_path": "2310.17513v3_figure_6(a).png", + "caption": "(a) Comparison between LoRA and tuning final layers.\nFigure 6: \nApproximation error (measured by MSE) versus the number of tunable parameters when various methods are employed.\nThe analyses are conducted on FNN models.", + "url": "http://arxiv.org/html/2310.17513v3/x9.png" + }, + "6(b)": { + "figure_path": "2310.17513v3_figure_6(b).png", + "caption": "(b) Comparison between LoRA with fixed biases and LoRA with updatable 
biases.\nFigure 6: \nApproximation error (measured by MSE) versus the number of tunable parameters when various methods are employed.\nThe analyses are conducted on FNN models.", + "url": "http://arxiv.org/html/2310.17513v3/x10.png" + }, + "7": { + "figure_path": "2310.17513v3_figure_7.png", + "caption": "Figure 7: Training curves of LoRA with varying LoRA-ranks when D=16\ud835\udc3716D=16italic_D = 16.", + "url": "http://arxiv.org/html/2310.17513v3/x11.png" + }, + "8": { + "figure_path": "2310.17513v3_figure_8.png", + "caption": "Figure 8: Assessment of LoRA\u2019s generalization performance on FNNs.", + "url": "http://arxiv.org/html/2310.17513v3/x12.png" + }, + "9(a)": { + "figure_path": "2310.17513v3_figure_9(a).png", + "caption": "(a) Multi-class classification tasks with 16 classes.\nFigure 9: \nAccuracy versus the rank on classification tasks.\nThe analyses are conducted on FNN models.", + "url": "http://arxiv.org/html/2310.17513v3/x13.png" + }, + "9(b)": { + "figure_path": "2310.17513v3_figure_9(b).png", + "caption": "(b) Binary classification task.\nFigure 9: \nAccuracy versus the rank on classification tasks.\nThe analyses are conducted on FNN models.", + "url": "http://arxiv.org/html/2310.17513v3/x14.png" + } + }, + "validation": true, + "references": [ + { + "1": { + "title": "Transformers learn to implement preconditioned gradient descent for\nin-context learning.", + "author": "Kwangjun Ahn, Xiang Cheng, Hadi Daneshmand, and Suvrit Sra.", + "venue": "arXiv preprint arXiv:2306.00297, 2023.", + "url": null + } + }, + { + "2": { + "title": "What learning algorithm is in-context learning? Investigations with\nlinear models.", + "author": "Ekin Aky\u00fcrek, Dale Schuurmans, Jacob Andreas, Tengyu Ma, and Denny Zhou.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2023.", + "url": null + } + }, + { + "3": { + "title": "Transformers as statisticians: Provable in-context learning with\nin-context algorithm selection.", + "author": "Yu Bai, Fan Chen, Huan Wang, Caiming Xiong, and Song Mei.", + "venue": "arXiv preprint arXiv:2306.04637, 2023.", + "url": null + } + }, + { + "4": { + "title": "Rademacher and Gaussian complexities: Risk bounds and structural\nresults.", + "author": "Peter L Bartlett and Shahar Mendelson.", + "venue": "In Computational Learning Theory (COLT), volume 2111, pp. 224\u2013240, 2001.", + "url": null + } + }, + { + "5": { + "title": "BitFit: Simple parameter-efficient fine-tuning for\nTransformer-based masked language-models.", + "author": "Elad Ben Zaken, Yoav Goldberg, and Shauli Ravfogel.", + "venue": "In Annual Meeting of the Association for Computational\nLinguistics (ACL), pp. 1\u20139, 2022.", + "url": null + } + }, + { + "6": { + "title": "On the expressive power of deep architectures.", + "author": "Yoshua Bengio and Olivier Delalleau.", + "venue": "In Algorithmic Learning Theory, pp. 18\u201336, 2011.", + "url": null + } + }, + { + "7": { + "title": "Deep equals shallow for ReLU networks in kernel regimes.", + "author": "Alberto Bietti and Francis Bach.", + "venue": "2021.", + "url": null + } + }, + { + "8": { + "title": "Language models are few-shot learners.", + "author": "Tom B. Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared Kaplan,\nPrafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda\nAskell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom\nHenighan, Rewon Child, Aditya Ramesh, Daniel M. 
Ziegler, Jeffrey Wu, Clemens\nWinter, Christopher Hesse, Mark Chen, Eric Sigler, Mateusz Litwin, Scott\nGray, Benjamin Chess, Jack Clark, Christopher Berner, Sam McCandlish, Alec\nRadford, Ilya Sutskever, and Dario Amodei.", + "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), 2020.", + "url": null + } + }, + { + "9": { + "title": "The zero set of a polynomial.", + "author": "Richard Caron and Tim Traynor.", + "venue": "WSMR Report, 2005.", + "url": null + } + }, + { + "10": { + "title": "Return of the devil in the details: Delving deep into convolutional\nnets.", + "author": "Ken Chatfield, Karen Simonyan, Andrea Vedaldi, and Andrew Zisserman.", + "venue": "arXiv preprint arXiv:1405.3531, 2014.", + "url": null + } + }, + { + "11": { + "title": "Model reprogramming: Resource-efficient cross-domain machine\nlearning.", + "author": "Pin-Yu Chen.", + "venue": "arXiv preprint arXiv:2202.10629, 2022.", + "url": null + } + }, + { + "12": { + "title": "Approximation by superpositions of a sigmoidal function.", + "author": "George Cybenko.", + "venue": "Mathematics of control, signals and systems, 2:303\u2013314, 1989.", + "url": null + } + }, + { + "13": { + "title": "QLoRA: Efficient finetuning of quantized llms.", + "author": "Tim Dettmers, Artidoro Pagnoni, Ari Holtzman, and Luke Zettlemoyer.", + "venue": "arXiv preprint arXiv:2305.14314, 2023.", + "url": null + } + }, + { + "14": { + "title": "BERT: Pre-training of deep bidirectional Transformers for\nlanguage understanding.", + "author": "Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova.", + "venue": "In North American Chapter of the Association for\nComputational Linguistics (NAACL), pp. 4171\u20134186, 2019.", + "url": null + } + }, + { + "15": { + "title": "Parameter-efficient fine-tuning of large-scale pre-trained language\nmodels.", + "author": "Ning Ding, Yujia Qin, Guang Yang, Fuchao Wei, Zonghan Yang, Yusheng Su,\nShengding Hu, Yulin Chen, Chi-Min Chan, Weize Chen, Jing Yi, Weilin Zhao,\nXiaozhi Wang, Zhiyuan Liu, Hai-Tao Zheng, Jianfei Chen, Yang Liu, Jie Tang,\nJuanzi Li, and Maosong Sun.", + "venue": "Nature Machine Intelligence, 5(3):220\u2013235, 2023.", + "url": null + } + }, + { + "16": { + "title": "Improved input reprogramming for GAN conditioning.", + "author": "Tuan Dinh, Daewon Seo, Zhixu Du, Liang Shang, and Kangwook Lee.", + "venue": "arXiv preprint arXiv:2201.02692, 2022a.", + "url": null + } + }, + { + "17": { + "title": "LIFT: Language-interfaced fine-tuning for non-language machine\nlearning tasks.", + "author": "Tuan Dinh, Yuchen Zeng, Ruisu Zhang, Ziqian Lin, Michael Gira, Shashank Rajput,\nJy-yong Sohn, Dimitris Papailiopoulos, and Kangwook Lee.", + "venue": "Advances in Neural Information Processing Systems (NeurIPS),\n35:11763\u201311784, 2022b.", + "url": null + } + }, + { + "18": { + "title": "DeCAF: A deep convolutional activation feature for generic visual\nrecognition.", + "author": "Jeff Donahue, Yangqing Jia, Oriol Vinyals, Judy Hoffman, Ning Zhang, Eric\nTzeng, and Trevor Darrell.", + "venue": "In International Conference on Machine Learning (ICML), pp. 
647\u2013655, 2014.", + "url": null + } + }, + { + "19": { + "title": "Few-shot learning via learning the representation, provably.", + "author": "Simon S Du, Wei Hu, Sham M Kakade, Jason D Lee, and Qi Lei.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2021.", + "url": null + } + }, + { + "20": { + "title": "The approximation of one matrix by another of lower rank.", + "author": "Carl Eckart and Gale Young.", + "venue": "Psychometrika, 1936.", + "url": null + } + }, + { + "21": { + "title": "The power of depth for feedforward neural networks.", + "author": "Ronen Eldan and Ohad Shamir.", + "venue": "In Annual Conference on Learning Theory, volume 49, pp. 907\u2013940, 2016.", + "url": null + } + }, + { + "22": { + "title": "Adversarial Reprogramming of neural networks.", + "author": "Gamaleldin F Elsayed, Ian Goodfellow, and Jascha Sohl-Dickstein.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2019.", + "url": null + } + }, + { + "23": { + "title": "Latent constraints: Learning to generate conditionally from\nunconditional generative models.", + "author": "Jesse Engel, Matthew Hoffman, and Adam Roberts.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2018.", + "url": null + } + }, + { + "24": { + "title": "Adversarial Reprogramming revisited.", + "author": "Matthias Englert and Ranko Lazic.", + "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), volume 35, pp. 28588\u201328600, 2022.", + "url": null + } + }, + { + "25": { + "title": "DPOK: Reinforcement learning for fine-tuning text-to-image\ndiffusion models.", + "author": "Ying Fan, Olivia Watkins, Yuqing Du, Hao Liu, Moonkyung Ryu, Craig Boutilier,\nPieter Abbeel, Mohammad Ghavamzadeh, Kangwook Lee, and Kimin Lee.", + "venue": "arXiv preprint arXiv:2305.16381, 2023.", + "url": null + } + }, + { + "26": { + "title": "The expressive power of tuning only the Norm layers.", + "author": "Angeliki Giannou, Shashank Rajput, and Dimitris Papailiopoulos.", + "venue": "arXiv preprint arXiv:2302.07937, 2023a.", + "url": null + } + }, + { + "27": { + "title": "Looped Transformers as programmable computers.", + "author": "Angeliki Giannou, Shashank Rajput, Jy-Yong Sohn, Kangwook Lee, Jason D. Lee,\nand Dimitris Papailiopoulos.", + "venue": "In International Conference on Machine Learning (ICML), volume\n202, pp. 11398\u201311442, 2023b.", + "url": null + } + }, + { + "28": { + "title": "Debiasing pre-trained language models via efficient fine-tuning.", + "author": "Michael Gira, Ruisu Zhang, and Kangwook Lee.", + "venue": "In Workshop on Language Technology for Equality, Diversity and\nInclusion, pp. 
59\u201369, 2022.", + "url": null + } + }, + { + "29": { + "title": "Identity matters in deep learning.", + "author": "Moritz Hardt and Tengyu Ma.", + "venue": "In International Conference on Learning Representations, 2017.", + "url": null + } + }, + { + "30": { + "title": "DeBERTa: Decoding-enhanced BERT with disentangled attention.", + "author": "Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen.", + "venue": "In International Conference on Learning Representations, 2021.", + "url": null + } + }, + { + "31": { + "title": "Multilayer feedforward networks are universal approximators.", + "author": "Kurt Hornik, Maxwell Stinchcombe, and Halbert White.", + "venue": "Neural Networks, 2:359\u2013366, 1989.", + "url": null + } + }, + { + "32": { + "title": "On the approximation power of two-layer networks of random ReLUs.", + "author": "Daniel Hsu, Clayton H Sanford, Rocco Servedio, and Emmanouil Vasileios\nVlatakis-Gkaragkounis.", + "venue": "In Conference on Learning Theory, volume 134, pp. 2423\u20132461, 2021.", + "url": null + } + }, + { + "33": { + "title": "LoRA: Low-rank adaptation of large language models.", + "author": "Edward J Hu, Yelong Shen, Phillip Wallis, Zeyuan Allen-Zhu, Yuanzhi Li, Shean\nWang, Lu Wang, and Weizhu Chen.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2022a.", + "url": null + } + }, + { + "34": { + "title": "Sparse structure search for delta tuning.", + "author": "Shengding Hu, Zhen Zhang, Ning Ding, Yadao Wang, Yasheng Wang, Zhiyuan Liu, and\nMaosong Sun.", + "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), volume 35, pp. 9853\u20139865, 2022b.", + "url": null + } + }, + { + "35": { + "title": "Evaluation of neural architectures trained with square loss vs\ncross-entropy in classification tasks.", + "author": "Like Hui and Mikhail Belkin.", + "venue": "In International Conference on Learning Representations, 2021.", + "url": null + } + }, + { + "36": { + "title": "Neural tangent kernel: Convergence and generalization in neural\nnetworks.", + "author": "Arthur Jacot, Franck Gabriel, and Cl\u00e9ment Hongler.", + "venue": "Advances in neural information processing systems, 31, 2018.", + "url": null + } + }, + { + "37": { + "title": "SubTuning: Efficient finetuning for multi-task learning.", + "author": "Gal Kaplun, Andrey Gurevich, Tal Swisa, Mazor David, Shai Shalev-Shwartz, and\nEran Malach.", + "venue": "arXiv preprint arXiv:2302.06354, 2023.", + "url": null + } + }, + { + "38": { + "title": "Deep learning without poor local minima.", + "author": "Kenji Kawaguchi.", + "venue": "Advances in neural information processing systems, 29, 2016.", + "url": null + } + }, + { + "39": { + "title": "Do better ImageNet models transfer better?", + "author": "Simon Kornblith, Jonathon Shlens, and Quoc V Le.", + "venue": "In Proceedings of the IEEE/CVF conference on computer vision\nand pattern recognition, 2019.", + "url": null + } + }, + { + "40": { + "title": "Deep linear networks with arbitrary loss: All local minima are\nglobal.", + "author": "Thomas Laurent and James von Brecht.", + "venue": "In Proceedings of the 35th International Conference on Machine\nLearning, volume 80, pp. 2902\u20132907, 2018.", + "url": null + } + }, + { + "41": { + "title": "On the ability of neural nets to express distributions.", + "author": "Holden Lee, Rong Ge, Tengyu Ma, Andrej Risteski, and Sanjeev Arora.", + "venue": "In Conference on Learning Theory, pp. 
1271\u20131296, 2017.", + "url": null + } + }, + { + "42": { + "title": "Reprogramming GANs via input noise design.", + "author": "Kangwook Lee, Changho Suh, and Kannan Ramchandran.", + "venue": "In Machine Learning and Knowledge Discovery in Databases -\nEuropean Conference, (ECML PKDD), volume 12458, pp. 256\u2013271, 2020.", + "url": null + } + }, + { + "43": { + "title": "The power of scale for parameter-efficient prompt tuning.", + "author": "Brian Lester, Rami Al-Rfou, and Noah Constant.", + "venue": "In Empirical Methods in Natural Language Processing (EMNLP),\npp. 3045\u20133059, 2021.", + "url": null + } + }, + { + "44": { + "title": "Prefix-tuning: Optimizing continuous prompts for generation.", + "author": "Xiang Lisa Li and Percy Liang.", + "venue": "In Association for Computational Linguistics and International\nJoint Conference on Natural Language Processing (ACL/IJCNLP), pp. 4582\u20134597, 2021.", + "url": null + } + }, + { + "45": { + "title": "Why deep neural networks for function approximation?", + "author": "Shiyu Liang and R. Srikant.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2017.", + "url": null + } + }, + { + "46": { + "title": "On the expressive power of self-attention matrices.", + "author": "Valerii Likhosherstov, Krzysztof Choromanski, and Adrian Weller.", + "venue": "2021.", + "url": null + } + }, + { + "47": { + "title": "Loss landscapes and optimization in over-parameterized non-linear\nsystems and neural networks.", + "author": "Chaoyue Liu, Libin Zhu, and Mikhail Belkin.", + "venue": "Applied and Computational Harmonic Analysis, 59:85\u2013116, 2022a.", + "url": null + } + }, + { + "48": { + "title": "Few-shot parameter-efficient fine-tuning is better and cheaper than\nin-context learning.", + "author": "Haokun Liu, Derek Tam, Mohammed Muqeeth, Jay Mohta, Tenghao Huang, Mohit\nBansal, and Colin A Raffel.", + "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), volume 35, pp. 1950\u20131965, 2022b.", + "url": null + } + }, + { + "49": { + "title": "RoBERTa: A robustly optimized bert pretraining approach.", + "author": "Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer\nLevy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov.", + "venue": "arXiv preprint arXiv:1907.11692, 2019.", + "url": null + } + }, + { + "50": { + "title": "Depth creates no bad local minima.", + "author": "Haihao Lu and Kenji Kawaguchi.", + "venue": "arXiv preprint arXiv:1702.08580, 2017.", + "url": null + } + }, + { + "51": { + "title": "The expressive power of neural networks: A view from the width.", + "author": "Zhou Lu, Hongming Pu, Feicheng Wang, Zhiqiang Hu, and Liwei Wang.", + "venue": "Advances in neural information processing systems, 30, 2017.", + "url": null + } + }, + { + "52": { + "title": "A kernel-based view of language model fine-tuning.", + "author": "Sadhika Malladi, Alexander Wettig, Dingli Yu, Danqi Chen, and Sanjeev Arora.", + "venue": "In International Conference on Machine Learning, pp. 
23610\u201323641, 2023.", + "url": null + } + }, + { + "53": { + "title": "The benefit of multitask representation learning.", + "author": "Andreas Maurer, Massimiliano Pontil, and Bernardino Romera-Paredes.", + "venue": "Journal of Machine Learning Research, 17(81):1\u201332, 2016.", + "url": null + } + }, + { + "54": { + "title": "Symmetric gauge functions and unitarily invariant norms.", + "author": "Leon Mirsky.", + "venue": "The quarterly journal of mathematics, 1960.", + "url": null + } + }, + { + "55": { + "title": "GPT-4 technical report, 2023.", + "author": "OpenAI.", + "venue": null, + "url": null + } + }, + { + "56": { + "title": "On the role of attention in prompt-tuning.", + "author": "Samet Oymak, Ankit Singh Rawat, Mahdi Soltanolkotabi, and Christos\nThrampoulidis.", + "venue": "In International Conference on Machine Learning (ICML), 2023.", + "url": null + } + }, + { + "57": { + "title": "Minimum width for universal approximation.", + "author": "Sejun Park, Chulhee Yun, Jaeho Lee, and Jinwoo Shin.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2021.", + "url": null + } + }, + { + "58": { + "title": "On the turing completeness of modern neural network architectures.", + "author": "Jorge P\u00e9rez, Javier Marinkovi\u0107, and Pablo Barcel\u00f3.", + "venue": "arXiv preprint arXiv:1901.03429, 2019.", + "url": null + } + }, + { + "59": { + "title": "When do prompting and prefix-tuning work? A theory of capabilities\nand limitations.", + "author": "Aleksandar Petrov, Philip HS Torr, and Adel Bibi.", + "venue": "arXiv preprint arXiv:2310.19698, 2023.", + "url": null + } + }, + { + "60": { + "title": "On the expressive power of deep neural networks.", + "author": "Maithra Raghu, Ben Poole, Jon Kleinberg, Surya Ganguli, and Jascha\nSohl-Dickstein.", + "venue": "In International Conference on Machine Learning (ICML), pp. 2847\u20132854, 2017.", + "url": null + } + }, + { + "61": { + "title": "Random features for large-scale kernel machines.", + "author": "Ali Rahimi and Benjamin Recht.", + "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), volume 3, pp. 5, 2007.", + "url": null + } + }, + { + "62": { + "title": "Low-rank adaptation for fast text-to-image diffusion fine-tuning.", + "author": "Simo Ryu.", + "venue": "https://github.com/cloneofsimo/lora, 2023.", + "url": null + } + }, + { + "63": { + "title": "Exact solutions to the nonlinear dynamics of learning in deep linear\nneural networks.", + "author": "Andrew M Saxe, James L McClelland, and Surya Ganguli.", + "venue": "2014.", + "url": null + } + }, + { + "64": { + "title": "Empirical study on the effective VC dimension of low-rank neural\nnetworks.", + "author": "Daewon Seo, Hongyi Wang, Dimitris Papailiopoulos, and Kangwook Lee.", + "venue": "In ICML Workshop on Overparameterization: Pitfalls &\nOpportunities, 2021.", + "url": null + } + }, + { + "65": { + "title": "CNN features off-the-shelf: An astounding baseline for recognition.", + "author": "Ali Sharif Razavian, Hossein Azizpour, Josephine Sullivan, and Stefan Carlsson.", + "venue": "In Proceedings of the IEEE conference on computer vision and\npattern recognition workshops, pp. 
806\u2013813, 2014.", + "url": null + } + }, + { + "66": { + "title": "Deep network approximation characterized by number of neurons.", + "author": "Haizhao Shen, ZuoweiYang and Shijun Zhang.", + "venue": "Communications in Computational Physics, (5):1768\u20131811, 2020.", + "url": null + } + }, + { + "67": { + "title": "Representation benefits of deep feedforward networks.", + "author": "Matus Telgarsky.", + "venue": "arXiv preprint arXiv:1509.08101, 2015.", + "url": null + } + }, + { + "68": { + "title": "Benefits of depth in neural networks.", + "author": "Matus Telgarsky.", + "venue": "In Conference on Learning Theory, pp. 1517\u20131539, 2016.", + "url": null + } + }, + { + "69": { + "title": "LLaMA: Open and efficient foundation language models.", + "author": "Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne\nLachaux, Timoth\u00e9e Lacroix, Baptiste Rozi\u00e8re, Naman Goyal, Eric\nHambro, Faisal Azhar, Aur\u00e9lien Rodriguez, Armand Joulin, Edouard Grave,\nand Guillaume Lample.", + "venue": "arXiv preprint arXiv:2302.13971, 2023.", + "url": null + } + }, + { + "70": { + "title": "On the theory of transfer learning: The importance of task diversity.", + "author": "Nilesh Tripuraneni, Michael Jordan, and Chi Jin.", + "venue": "In Advances in neural information processing systems\n(NeurIPS), volume 33, pp. 7852\u20137862, 2020.", + "url": null + } + }, + { + "71": { + "title": "Why can large language models generate correct chain-of-thoughts?", + "author": "Rasul Tutunov, Antoine Grosnit, Juliusz Ziomek, Jun Wang, and Haitham\nBou-Ammar.", + "venue": "arXiv preprint arXiv:2310.13571, 2023.", + "url": null + } + }, + { + "72": { + "title": "On the uniform convergence of relative frequencies of events to their\nprobabilities.", + "author": "Vladimir N Vapnik and A Ya Chervonenkis.", + "venue": "In Measures of Complexity: Festschrift for Alexey\nChervonenkis, pp. 11\u201330. 2015.", + "url": null + } + }, + { + "73": { + "title": "Transformers learn in-context by gradient descent.", + "author": "Johannes von Oswald, Eyvind Niklasson, Ettore Randazzo, Jo\u00e3o Sacramento,\nAlexander Mordvintsev, Andrey Zhmoginov, and Max Vladymyrov.", + "venue": "arXiv preprint arXiv:2212.07677, 2022.", + "url": null + } + }, + { + "74": { + "title": "GLUE: A multi-task benchmark and analysis platform for natural\nlanguage understanding.", + "author": "Alex Wang, Amanpreet Singh, Julian Michael, Felix Hill, Omer Levy, and Samuel\nBowman.", + "venue": "In EMNLP Workshop BlackboxNLP: Analyzing and Interpreting\nNeural Networks for NLP, pp. 353\u2013355, 2018.", + "url": null + } + }, + { + "75": { + "title": "Chain of thought prompting elicits reasoning in large language\nmodels.", + "author": "Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, brian ichter, Fei Xia,\nEd H. 
Chi, Quoc V Le, and Denny Zhou.", + "venue": "In Advances in Neural Information Processing Systems\n(NeurIPS), 2022.", + "url": null + } + }, + { + "76": { + "title": "The learnability of in-context learning.", + "author": "Noam Wies, Yoav Levine, and Amnon Shashua.", + "venue": "arXiv preprint arXiv:2303.07895, 2023.", + "url": null + } + }, + { + "77": { + "title": "An explanation of in-context learning as implicit bayesian inference.", + "author": "Sang Michael Xie, Aditi Raghunathan, Percy Liang, and Tengyu Ma.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2022.", + "url": null + } + }, + { + "78": { + "title": "Are Transformers universal approximators of sequence-to-sequence\nfunctions?", + "author": "Chulhee Yun, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank Reddi, and Sanjiv\nKumar.", + "venue": "In International Conference on Learning Representations\n(ICLR), 2020a.", + "url": null + } + }, + { + "79": { + "title": "(n) connections are expressive enough: Universal approximability\nof sparse transformers.", + "author": "Chulhee Yun, Yin-Wen Chang, Srinadh Bhojanapalli, Ankit Singh Rawat, Sashank\nReddi, and Sanjiv Kumar.", + "venue": "33:13783\u201313794, 2020b.", + "url": null + } + }, + { + "80": { + "title": "Adaptive budget allocation for parameter-efficient fine-tuning.", + "author": "Qingru Zhang, Minshuo Chen, Alexander Bukharin, Pengcheng He, Yu Cheng, Weizhu\nChen, and Tuo Zhao.", + "venue": "In The Eleventh International Conference on Learning\nRepresentations, 2023.", + "url": null + } + } + ], + "url": "http://arxiv.org/html/2310.17513v3" +} \ No newline at end of file