LoRA Variant Catalogue
• Parameter-Efficient Fine-Tuning for Large Models: A Comprehensive Survey (arXiv:2403.14608)
• Towards Better Parameter-Efficient Fine-Tuning for Large Language Models: A Position Paper (arXiv:2311.13126)
• Comparing Retrieval-Augmentation and Parameter-Efficient Fine-Tuning for Privacy-Preserving Personalization of Large Language Models (arXiv:2409.09510)
• Increasing Model Capacity for Free: A Simple Strategy for Parameter Efficient Fine-tuning (arXiv:2407.01320)
• Introducing Routing Functions to Vision-Language Parameter-Efficient Fine-Tuning with Low-Rank Bottlenecks (arXiv:2403.09377)
• AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (arXiv:2301.12132)
• Make Pre-trained Model Reversible: From Parameter to Memory Efficient Fine-Tuning (arXiv:2306.00477)
• Scaling Down to Scale Up: A Guide to Parameter-Efficient Fine-Tuning (arXiv:2303.15647)
• Hydra: Multi-head Low-rank Adaptation for Parameter Efficient Fine-tuning (arXiv:2309.06922)
• LoRA: Low-Rank Adaptation of Large Language Models (arXiv:2106.09685; sketch below)
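LoRA is the root of nearly everything else in this catalogue, so the core idea is worth spelling out. Below is a minimal PyTorch sketch, not the reference implementation: the pretrained weight stays frozen and a low-rank product B·A, scaled by α/r, is trained in its place (layer sizes and hyperparameters are illustrative).

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen linear layer plus a trainable low-rank update, per arXiv:2106.09685."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # pretrained weights stay frozen
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)  # Gaussian init
        self.B = nn.Parameter(torch.zeros(base.out_features, r))        # zero init
        self.scale = alpha / r

    def forward(self, x):
        # y = W x + (alpha / r) * B A x
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)

layer = LoRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```

Because B starts at zero, the adapted layer reproduces the pretrained output exactly at step 0; after training, B·A can be merged back into the base weight so inference pays no extra cost.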
• One-for-All: Generalized LoRA for Parameter-Efficient Fine-tuning (arXiv:2306.07967)
• LoRAShear: Efficient Large Language Model Structured Pruning and Knowledge Recovery (arXiv:2310.18356)
• LoftQ: LoRA-Fine-Tuning-Aware Quantization for Large Language Models (arXiv:2310.08659; sketch below)
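LoftQ initializes LoRA on top of a quantized backbone by alternating two fits: quantize the residual W − BA, then refit the rank-r factors to W − Q. A rough sketch under that reading, with a crude uniform quantizer standing in for the NF4-style quantization the paper targets:

```python
import torch

def toy_quantize(w: torch.Tensor, bits: int = 4) -> torch.Tensor:
    # Crude uniform quantizer; a stand-in only, not the paper's quantizer.
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    return torch.round(w / scale) * scale

def loftq_style_init(W: torch.Tensor, r: int = 16, steps: int = 5):
    """Alternate quantization and rank-r SVD fitting (in the spirit of arXiv:2310.08659)."""
    A = torch.zeros(r, W.shape[1])
    B = torch.zeros(W.shape[0], r)
    for _ in range(steps):
        Q = toy_quantize(W - B @ A)                 # quantize the residual
        U, S, Vh = torch.linalg.svd(W - Q, full_matrices=False)
        B, A = U[:, :r] * S[:r], Vh[:r, :]          # best rank-r fit of W - Q
    return Q, A, B

W = torch.randn(256, 256)
Q, A, B = loftq_style_init(W)
print((W - (Q + B @ A)).norm() / W.norm())  # joint approximation error
```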
• LongLoRA: Efficient Fine-tuning of Long-Context Large Language Models (arXiv:2309.12307)
• LoRAPrune: Pruning Meets Low-Rank Parameter-Efficient Fine-Tuning (arXiv:2305.18403)
• Few-Shot Parameter-Efficient Fine-Tuning is Better and Cheaper than In-Context Learning (arXiv:2205.05638)
• DEFT: Data Efficient Fine-Tuning for Large Language Models via Unsupervised Core-Set Selection (arXiv:2310.16776)
• A Rank Stabilization Scaling Factor for Fine-Tuning with LoRA (arXiv:2312.03732; sketch below)
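The rsLoRA proposal is a one-line change: scale the LoRA update by α/√r rather than α/r, so the update's magnitude (and its gradient) does not collapse as the rank grows. Values below are illustrative:

```python
def lora_scale(alpha: float, r: int, rank_stabilized: bool = True) -> float:
    """alpha / sqrt(r) per rsLoRA (arXiv:2312.03732) vs. classic alpha / r."""
    return alpha / r ** 0.5 if rank_stabilized else alpha / r

for r in (8, 64, 512):
    # classic scaling shrinks 64x from r=8 to r=512; rsLoRA shrinks only 8x
    print(r, lora_scale(16.0, r), lora_scale(16.0, r, rank_stabilized=False))
```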
• MoELoRA: Contrastive Learning Guided Mixture of Experts on Parameter-Efficient Fine-Tuning for Large Language Models (arXiv:2402.12851)
• NOLA: Networks as Linear Combination of Low Rank Random Basis (arXiv:2310.02556; sketch below)
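NOLA decouples the number of trainable parameters from the factor shapes: each LoRA factor is a linear combination of frozen random basis matrices, and only the mixing coefficients are trained. A minimal sketch (basis count k and shapes are illustrative):

```python
import torch
import torch.nn as nn

class NOLAFactor(nn.Module):
    """One LoRA factor as a trainable mixture of k frozen random matrices
    (in the spirit of arXiv:2310.02556)."""
    def __init__(self, rows: int, cols: int, k: int = 64):
        super().__init__()
        self.register_buffer("basis", torch.randn(k, rows, cols))  # frozen; regenerable from a seed
        self.coeff = nn.Parameter(torch.zeros(k))                  # the only trainable part

    def forward(self) -> torch.Tensor:
        return torch.einsum("k,kij->ij", self.coeff, self.basis)

A, B = NOLAFactor(8, 768), NOLAFactor(768, 8)
delta_w = B() @ A()      # low-rank update from 2 * 64 trainable scalars
print(delta_w.shape)     # torch.Size([768, 768])
```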
• LoRA-FA: Memory-efficient Low-rank Adaptation for Large Language Models Fine-tuning (arXiv:2308.03303; sketch below)
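LoRA-FA modifies standard LoRA by freezing the down-projection A at its random initialization and training only B, which removes A's optimizer state and shrinks what must be cached for the backward pass. A sketch:

```python
import torch
import torch.nn as nn

class LoRAFALinear(nn.Module):
    """LoRA with a frozen A factor, in the spirit of arXiv:2308.03303."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.register_buffer("A", torch.randn(r, base.in_features) / r ** 0.5)  # frozen
        self.B = nn.Parameter(torch.zeros(base.out_features, r))                # trained
        self.scale = alpha / r

    def forward(self, x):
        # Only B receives gradients; the r-dim projection x @ A.T is all
        # that must be cached for B's backward pass.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```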
• DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning (arXiv:2309.05173)
• In-context Autoencoder for Context Compression in a Large Language Model (arXiv:2307.06945)
• A Unified Generative Retriever for Knowledge-Intensive Language Tasks via Prompt Learning (arXiv:2304.14856)
• Multi-Head Adapter Routing for Cross-Task Generalization (arXiv:2211.03831)
• VeRA: Vector-based Random Matrix Adaptation (arXiv:2310.11454; sketch below)
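VeRA shares one pair of frozen random matrices across every adapted layer and trains only two small per-layer scaling vectors, d and b. A sketch (the shared rank and the initial value of d are illustrative):

```python
import torch
import torch.nn as nn

R = 256                          # shared rank, illustrative
SHARED_A = torch.randn(R, 768)   # frozen, shared by all adapted layers
SHARED_B = torch.randn(768, R)   # frozen, shared by all adapted layers

class VeRALinear(nn.Module):
    """Trainable scaling vectors over frozen shared projections (arXiv:2310.11454)."""
    def __init__(self, base: nn.Linear, d_init: float = 0.1):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.d = nn.Parameter(torch.full((R,), d_init))        # Lambda_d
        self.b = nn.Parameter(torch.zeros(base.out_features))  # Lambda_b; zero => no-op at init

    def forward(self, x):
        z = (x @ SHARED_A.T) * self.d     # Lambda_d A x
        z = z @ SHARED_B.T                # B Lambda_d A x
        return self.base(x) + z * self.b  # Lambda_b B Lambda_d A x

layer = VeRALinear(nn.Linear(768, 768))
print(layer(torch.randn(2, 768)).shape)  # torch.Size([2, 768])
```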
• TART: A plug-and-play Transformer module for task-agnostic reasoning (arXiv:2306.07536)
• Sparse Finetuning for Inference Acceleration of Large Language Models (arXiv:2310.06927)
• MoRA: High-Rank Updating for Parameter-Efficient Fine-Tuning (arXiv:2405.12130)
• SLTrain: a sparse plus low-rank approach for parameter and memory efficient pretraining (arXiv:2406.02214)
• LoRA-XS: Low-Rank Adaptation with Extremely Small Number of Parameters (arXiv:2405.17604; sketch below)
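LoRA-XS drives the per-layer budget down to r² parameters: the outer factors are frozen at the top-r SVD of the pretrained weight, and only a small r×r core between them is trained. A sketch under that reading:

```python
import torch
import torch.nn as nn

class LoRAXSLinear(nn.Module):
    """Frozen SVD factors with a tiny trainable r x r core
    (in the spirit of arXiv:2405.17604)."""
    def __init__(self, base: nn.Linear, r: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        U, S, Vh = torch.linalg.svd(base.weight, full_matrices=False)
        self.register_buffer("B", (U[:, :r] * S[:r]).clone())  # frozen, out x r
        self.register_buffer("A", Vh[:r, :].clone())           # frozen, r x in
        self.R = nn.Parameter(torch.zeros(r, r))                # r**2 trainable params

    def forward(self, x):
        # delta_W = B R A; zero-initialized R keeps the layer unchanged at step 0
        return self.base(x) + x @ self.A.T @ self.R.T @ self.B.T

layer = LoRAXSLinear(nn.Linear(768, 768))
print(sum(p.numel() for p in layer.parameters() if p.requires_grad))  # 64
```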
• VB-LoRA: Extreme Parameter Efficient Fine-Tuning with Vector Banks (arXiv:2405.15179)
• Bridging The Gap between Low-rank and Orthogonal Adaptation via Householder Reflection Adaptation (arXiv:2405.17484; sketch below)
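HRA's bridge between the two families: adapt the frozen weight by a chain of learnable Householder reflections, each orthogonal by construction and each costing only a single parameter vector. A rough sketch (how the chain is initialized so training starts near the pretrained function is a detail the paper handles; the random init here is only for brevity):

```python
import torch
import torch.nn as nn

def householder(u: torch.Tensor) -> torch.Tensor:
    """H = I - 2 u u^T / ||u||^2: an orthogonal reflection."""
    u = u / u.norm()
    return torch.eye(u.numel()) - 2.0 * torch.outer(u, u)

class HRALinear(nn.Module):
    """Frozen weight right-multiplied by k learnable reflections
    (in the spirit of arXiv:2405.17484)."""
    def __init__(self, base: nn.Linear, k: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.us = nn.Parameter(torch.randn(k, base.in_features))

    def forward(self, x):
        W = self.base.weight
        for u in self.us:
            # W <- W H_1 ... H_k; orthogonal factors preserve the singular values
            W = W @ householder(u)
        return nn.functional.linear(x, W, self.base.bias)
```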
• Spectral Adapter: Fine-Tuning in Spectral Space (arXiv:2405.13952)
• ETHER: Efficient Finetuning of Large-Scale Models with Hyperplane Reflections (arXiv:2405.20271)
• SLoPe: Double-Pruned Sparse Plus Lazy Low-Rank Adapter Pretraining of LLMs (arXiv:2405.16325)
• SinkLoRA: Enhanced Efficiency and Chat Capabilities for Long-Context Large Language Models (arXiv:2406.05678)
• ShareLoRA: Parameter Efficient and Robust Large Language Model Fine-tuning via Shared Low-Rank Adaptation (arXiv:2406.10785)
• SPP: Sparsity-Preserved Parameter-Efficient Fine-Tuning for Large Language Models (arXiv:2405.16057)
• GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning (arXiv:2505.20355)
• ElaLoRA: Elastic & Learnable Low-Rank Adaptation for Efficient Model Fine-Tuning (arXiv:2504.00254)
• OwLore: Outlier-weighed Layerwise Sampled Low-Rank Projection for Memory-Efficient LLM Fine-tuning (arXiv:2405.18380)
• RoseLoRA: Row and Column-wise Sparse Low-rank Adaptation of Pre-trained Language Model for Knowledge Editing and Fine-tuning (arXiv:2406.10777)
• OLoRA: Orthonormal Low-Rank Adaptation of Large Language Models (arXiv:2406.01775; sketch below)
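OLoRA swaps LoRA's random/zero initialization for orthonormal factors obtained from a QR decomposition of the pretrained weight. A sketch of that initialization as I understand it (the compensating subtraction keeps the layer's output unchanged at step 0; training then proceeds as ordinary LoRA):

```python
import torch
import torch.nn as nn

def olora_style_init(base: nn.Linear, r: int = 8):
    """QR-based LoRA init in the spirit of arXiv:2406.01775."""
    for p in base.parameters():
        p.requires_grad = False
    Q, R = torch.linalg.qr(base.weight)    # Q has orthonormal columns
    B = nn.Parameter(Q[:, :r].clone())
    A = nn.Parameter(R[:r, :].clone())
    with torch.no_grad():
        base.weight -= B @ A               # so base + B A == original W at init
    return A, B

A, B = olora_style_init(nn.Linear(768, 768))
print(A.shape, B.shape)  # torch.Size([8, 768]) torch.Size([768, 8])
```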
• Light-PEFT: Lightening Parameter-Efficient Fine-Tuning via Early Pruning (arXiv:2406.03792)
• LoRA-SP: Streamlined Partial Parameter Adaptation for Resource-Efficient Fine-Tuning of Large Language Models (arXiv:2403.08822)
• PeftCD: Leveraging Vision Foundation Models with Parameter-Efficient Fine-Tuning for Remote Sensing Change Detection (arXiv:2509.09572)
• High-Rank Structured Modulation for Parameter-Efficient Fine-Tuning (arXiv:2601.07507)
• QWHA: Quantization-Aware Walsh-Hadamard Adaptation for Parameter-Efficient Fine-Tuning on Large Language Models (arXiv:2509.17428)
• LoCA: Location-Aware Cosine Adaptation for Parameter-Efficient Fine-Tuning (arXiv:2502.06820)
• Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning (arXiv:2406.16257)
• BitFit: Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models (arXiv:2106.10199; sketch below)
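BitFit predates the low-rank family and remains the simplest baseline on this list: freeze every weight and train only the bias terms. A sketch:

```python
import torch.nn as nn

def apply_bitfit(model: nn.Module) -> int:
    """Freeze everything except biases (arXiv:2106.10199); returns the trainable count."""
    trainable = 0
    for name, p in model.named_parameters():
        p.requires_grad = name.split(".")[-1] == "bias"
        trainable += p.numel() if p.requires_grad else 0
    return trainable

# Toy stand-in for a transformer block:
model = nn.Sequential(nn.Linear(768, 3072), nn.GELU(), nn.Linear(3072, 768))
print(apply_bitfit(model), "of", sum(p.numel() for p in model.parameters()), "parameters trainable")
```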
• Position-Aware Parameter Efficient Fine-Tuning Approach for Reducing Positional Bias in LLMs (arXiv:2404.01430)
• Bayesian-LoRA: LoRA based Parameter Efficient Fine-Tuning using Optimal Quantization Levels and Rank Values through Differentiable Bayesian Gates (arXiv:2406.13046)
• Step-by-Step Unmasking for Parameter-Efficient Fine-tuning of Large Language Models (arXiv:2408.14470)
• DropLoRA: Sparse Low-Rank Adaptation for Parameter-Efficient Fine-Tuning (arXiv:2508.17337)
• AFLoRA: Adaptive Freezing of Low Rank Adaptation in Parameter Efficient Fine-Tuning of Large Models (arXiv:2403.13269)
• Quantum-PEFT: Ultra parameter-efficient fine-tuning (arXiv:2503.05431)
• RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation (arXiv:2401.04679)
• RandLoRA: Full-rank parameter-efficient fine-tuning of large models (arXiv:2502.00987)
• SLoRA: Federated Parameter Efficient Fine-Tuning of Language Models (arXiv:2308.06522)
• PERFT: Parameter-Efficient Routed Fine-Tuning for Mixture-of-Expert Model (arXiv:2411.08212)
• Adaptive Parameter-Efficient Federated Fine-Tuning on Heterogeneous Devices (arXiv:2412.20004)
• ASLoRA: Adaptive Sharing Low-Rank Adaptation Across Layers (arXiv:2412.10135)
• SaLoRA: Safety-Alignment Preserved Low-Rank Adaptation (arXiv:2501.01765)
• DoRA: Enhancing Parameter-Efficient Fine-Tuning with Dynamic Rank Distribution (arXiv:2405.17357)
• SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors (arXiv:2405.19597)
• DiffoRA: Enabling Parameter-Efficient LLM Fine-Tuning via Differential Low-Rank Matrix Adaptation (arXiv:2502.08905)
• IncreLoRA: Incremental Parameter Allocation Method for Parameter-Efficient Fine-tuning (arXiv:2308.12043)
• Gradient-based Parameter Selection for Efficient Fine-Tuning (arXiv:2312.10136)
• NeuroAda: Activating Each Neuron's Potential for Parameter-Efficient Fine-Tuning (arXiv:2510.18940)
• LoFT: Parameter-Efficient Fine-Tuning for Long-tailed Semi-Supervised Learning in Open-World Scenarios (arXiv:2509.09926)
• Mini-Ensemble Low-Rank Adapters for Parameter-Efficient Fine-Tuning (arXiv:2402.17263)
• Q-PEFT: Query-dependent Parameter Efficient Fine-tuning for Text Reranking with Large Language Models (arXiv:2404.04522)
• Structured Unrestricted-Rank Matrices for Parameter Efficient Fine-tuning (arXiv:2406.17740)
• Parameter-Efficient Fine-Tuning via Circular Convolution (arXiv:2407.19342)
• SVFit: Parameter-Efficient Fine-Tuning of Large Pre-Trained Models Using Singular Values (arXiv:2409.05926)
• TriAdaptLoRA: Brain-Inspired Triangular Adaptive Low-Rank Adaptation for Parameter-Efficient Fine-Tuning (arXiv:2501.08008)
• Parameter-Efficient Fine-Tuning of Large Language Models via Deconvolution in Subspace (arXiv:2503.01419)
• ReCIT: Reconstructing Full Private Data from Gradient in Parameter-Efficient Fine-Tuning of Large Language Models (arXiv:2504.20570)
• Exploring Sparsity for Parameter Efficient Fine Tuning Using Wavelets (arXiv:2505.12532)
• C-LoRA: Continual Low-Rank Adaptation for Pre-trained Models (arXiv:2502.17920)
• Adapt Once, Thrive with Updates: Transferable Parameter-Efficient Fine-Tuning on Evolving Base Models (arXiv:2506.06844)
• Towards Higher Effective Rank in Parameter-efficient Fine-tuning using Khatri–Rao Product (arXiv:2508.00230)
• Parameter-Efficient Fine-Tuning with Layer Pruning on Free-Text Sequence-to-Sequence Modeling (arXiv:2305.08285)
• Non-Intrusive Adaptation: Input-Centric Parameter-efficient Fine-Tuning for Versatile Multimodal Modeling (arXiv:2310.12100)