# LoRA Adapter for distilbert-base-uncased
This adapter was trained with Flax NNX for sentiment classification on the IMDB dataset.
## Configuration
- LoRA rank: 16
- LoRA alpha: 32
- Target modules: ['q_lin', 'v_lin']
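To make the configuration concrete, here is a minimal NumPy sketch of how the rank and alpha values above enter a LoRA forward pass; the weight shapes assume distilbert-base-uncased's hidden size of 768, and the initialization scheme is illustrative rather than the one used in training.

```python
import numpy as np

rank, alpha = 16, 32
d_in, d_out = 768, 768  # distilbert-base-uncased hidden size

rng = np.random.default_rng(0)
W = rng.normal(size=(d_in, d_out))        # frozen base weight
A = rng.normal(size=(d_in, rank)) * 0.01  # trainable low-rank factor
B = np.zeros((rank, d_out))               # conventionally initialized to zero

def lora_forward(x):
    # Base projection plus the scaled low-rank update (alpha / rank scaling)
    return x @ W + (alpha / rank) * (x @ A @ B)

x = rng.normal(size=(2, d_in))
# With B at zero, the adapter starts as an exact no-op over the base layer
assert np.allclose(lora_forward(x), x @ W)
```

Because `B` starts at zero, the adapted layer initially reproduces the frozen base projection exactly; training then moves only `A` and `B`.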
## Usage
```python
# Loading the adapter parameters
import pickle

import orbax.checkpoint as ocp

# Method 1: Orbax (recommended for larger models)
checkpointer = ocp.StandardCheckpointer()
lora_params = checkpointer.restore("lora_checkpoint")

# Method 2: Pickle (simpler for smaller models)
with open("lora_params.pkl", "rb") as f:
    lora_params = pickle.load(f)
```
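For reference, the pickle path round-trips a plain nested dict of arrays. The sketch below demonstrates this with a hypothetical parameter layout (the key names and shapes are illustrative, not the checkpoint's actual structure), using an in-memory buffer in place of the `lora_params.pkl` file.

```python
import io
import pickle

import numpy as np

# Hypothetical nested parameter dict mirroring a LoRA checkpoint layout
lora_params = {
    "layer_0.q_lin": {"lora_a": np.zeros((768, 16)), "lora_b": np.zeros((16, 768))},
    "layer_0.v_lin": {"lora_a": np.zeros((768, 16)), "lora_b": np.zeros((16, 768))},
}

buf = io.BytesIO()          # stands in for the on-disk .pkl file
pickle.dump(lora_params, buf)
buf.seek(0)
restored = pickle.load(buf)

# The restored dict has the same keys and identical arrays
assert restored.keys() == lora_params.keys()
assert np.array_equal(restored["layer_0.q_lin"]["lora_a"],
                      lora_params["layer_0.q_lin"]["lora_a"])
```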
```python
# Reconstructing the LoRA layers from the restored parameters
from flax import nnx

lora_layers = {}
for layer_name, params in lora_params.items():
    # Create a new LoRA layer with the restored parameters
    # (implementation depends on the specific architecture)
    pass
```
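One way to fill in the loop body is sketched below with a minimal stand-in class instead of the real NNX module, since the exact layer construction depends on the architecture. The `LoRALayer` class, the parameter key names, and the small shapes are all hypothetical.

```python
import numpy as np

class LoRALayer:
    """Minimal stand-in for a LoRA layer (hypothetical structure)."""
    def __init__(self, lora_a, lora_b, alpha=32):
        self.lora_a = lora_a
        self.lora_b = lora_b
        self.scale = alpha / lora_a.shape[1]  # alpha / rank

    def __call__(self, x):
        # Low-rank update only; the caller adds the frozen base projection
        return self.scale * (x @ self.lora_a @ self.lora_b)

# Hypothetical restored checkpoint dict (small shapes for illustration)
lora_params = {
    "q_lin": {"lora_a": np.ones((8, 4)), "lora_b": np.ones((4, 8))},
}

# The reconstruction loop: one layer object per checkpointed module
lora_layers = {
    name: LoRALayer(p["lora_a"], p["lora_b"]) for name, p in lora_params.items()
}

x = np.ones((1, 8))
out = lora_layers["q_lin"](x)  # shape (1, 8)
```

In a real reconstruction you would instantiate the corresponding Flax NNX modules and attach them to the `q_lin` and `v_lin` projections of the base model.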