---
license: apache-2.0
tags:
  - lora
  - fine-tuning
  - adaptive
  - research
  - nested-lora
  - synaptic-plasticity
  - rank-adaptation
library_name: transformers
datasets:
  - nyu-mll/glue
pipeline_tag: text-classification
---
# Unified-LoRA

**LoRA fine-tuning with synaptic plasticity: a neurobiologically inspired controller that switches between qualitatively different operational modes based on training stress.**

⚠️ **This is NOT a pretrained model.** Unified-LoRA is a training method/controller.

**Code**: [github.com/Sva76/Unified-LoRa](https://github.com/Sva76/Unified-LoRa)

**Demo**: [unified_lora_demo.ipynb](https://github.com/Sva76/Unified-LoRa/blob/main/notebooks/unified_lora_demo.ipynb)

## What It Does
A composite synaptic stress signal **τ(t) = f(Convergence, Entropy, Stress)** drives a 3-state FSM, sketched in code after the table:
| Mode | τ range | Rank | Behavior |
|------|---------|------|----------|
| SINGLE | τ < 0.3 | r=4 | Efficient cruise |
| MULTI | 0.3 ≤ τ < 0.7 | r=8 | Active learning |
| MIRROR | τ ≥ 0.7 | r=16 | Max capacity + weight snapshot for rollback |
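A minimal sketch of the threshold logic above, assuming τ is a scalar in [0, 1]; `Mode` and `select_mode` are illustrative names, not the repository's API:

```python
from enum import Enum

class Mode(Enum):
    SINGLE = 4   # r=4: efficient cruise
    MULTI = 8    # r=8: active learning
    MIRROR = 16  # r=16: max capacity + weight snapshot

def select_mode(tau: float) -> Mode:
    # Thresholds taken from the table above.
    if tau < 0.3:
        return Mode.SINGLE
    if tau < 0.7:
        return Mode.MULTI
    return Mode.MIRROR
```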
Rank transitions use **nested matrix slicing** (r4 → r8 → r16): zero cold start, zero re-allocation.
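One way to picture the nesting (a sketch under assumed shapes, not the repository's implementation): all three ranks share a single r=16 allocation, and a mode switch only changes how many rows and columns of the LoRA factors participate, so slices trained at a lower rank carry over unchanged.

```python
import torch

d_in, d_out, r_max = 768, 768, 16  # assumed DistilBERT-like dimensions

# One-time allocation at the maximum rank; lower ranks are nested slices of it.
A = torch.zeros(r_max, d_in)
B = torch.zeros(d_out, r_max)

def lora_delta(x: torch.Tensor, r: int) -> torch.Tensor:
    """LoRA update using only the leading r rows/columns of the shared factors."""
    return x @ A[:r].T @ B[:, :r].T  # growing r reuses everything already trained
```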
Mirror mode saves a weight snapshot on entry. On exit, if the weights drifted less than 5% from the snapshot (transient noise), the snapshot is restored; if the drift was significant (real signal), the new weights are kept.
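A minimal sketch of that exit rule; the 5% threshold comes from the text, but the drift metric (relative Frobenius norm) and the function name are assumptions:

```python
import torch

def exit_mirror(weight: torch.Tensor, snapshot: torch.Tensor) -> torch.Tensor:
    """Decide whether to keep or roll back weights when leaving MIRROR mode."""
    drift = (weight - snapshot).norm() / snapshot.norm()  # assumed drift metric
    if drift < 0.05:
        return snapshot.clone()  # transient noise: restore the snapshot
    return weight                # real signal: keep the adapted weights
```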
## Results

**GLUE (DistilBERT):** equal or better on 3 of 4 tasks with a 33–56% rank reduction.

**Noise resilience:** +31 F1 at 50% label noise and 9× lower variance; no benefit on clean data. Confirmed at model scales from 67M to 3B parameters.

**Stress-recovery cycle (Tinker/Llama-3.2-1B):** τ returns to its pre-shock baseline (0.33 → 0.83 → 0.33), demonstrating fully reversible stress handling.
## Quick Start

```python
from controller import setup_unified_lora

adapters, ctrl = setup_unified_lora(model, target_modules=["q_proj", "v_proj"])

for batch in dataloader:
    loss = model(**batch).loss
    loss.backward()
    ctrl.step(loss=loss.item())  # τ(t) needs the loss for the convergence signal
    optimizer.step()
    optimizer.zero_grad()
```
## Citation

```bibtex
@software{unified_lora_2025,
  author = {Simona Vargiu},
  title  = {Unified-LoRA: Synaptic Plasticity Controller for Adaptive LoRA Fine-Tuning},
  year   = {2025},
  url    = {https://github.com/Sva76/Unified-LoRa}
}
```
## Contact

Simona Vargiu (Independent Researcher) · simona.vargiu.malta@gmail.com