Text Classification
Transformers
lora
fine-tuning
adaptive
research
nested-lora
synaptic-plasticity
rank-adaptation
Instructions to use Simo76/Unified-LoRA with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Simo76/Unified-LoRA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Simo76/Unified-LoRA")

# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Simo76/Unified-LoRA", dtype="auto")
```
A short invocation example follows the notebook links below.
- Notebooks
- Google Colab
- Kaggle
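Once the pipeline is constructed as above, it can be called directly on raw text. The input sentence below is purely illustrative, and the returned label set depends on the model's classification head:

```python
# Illustrative input; output labels and scores depend on the fine-tuned head.
result = pipe("Unified-LoRA adapts its rank during fine-tuning.")
print(result)  # e.g. [{'label': '...', 'score': 0.98}]
```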
The repository's convenience module (1,218 bytes) re-exports the full Unified-LoRA stack:
```python
"""
Unified-LoRA Controller
=======================
Convenience wrapper that exposes the full Unified-LoRA stack:
- nested_lora.py → execution engine (LoRA with dynamic rank slicing)
- orbital_controller.py → control logic (stress-driven rank adaptation)
Use this module for simple integration, or import submodules directly
for fine-grained control.
Author: Simona Vargiu
License: Apache 2.0
"""

# ── ENGINE ──────────────────────────────────────────
from nested_lora import (
    NestedLoRALinear,
    inject_nested_lora,
    set_rank,
    get_lora_params,
    count_params,
)

# ── CONTROLLER ──────────────────────────────────────
from orbital_controller import (
    OrbitalController,
    setup_unified_lora,
)

# ── EXPORT ──────────────────────────────────────────
__all__ = [
    "NestedLoRALinear",
    "inject_nested_lora",
    "set_rank",
    "get_lora_params",
    "count_params",
    "OrbitalController",
    "setup_unified_lora",
]
```
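For orientation, here is a minimal sketch of how these exports might be wired together. Everything below is an assumption: the actual signatures of `inject_nested_lora`, `set_rank`, and `count_params` live in `nested_lora.py` and may differ.

```python
import torch.nn as nn

# Assumed signatures; see nested_lora.py for the real API.
from nested_lora import inject_nested_lora, set_rank, count_params

# A toy model standing in for a real Transformer backbone.
model = nn.Sequential(nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 2))

# Assumption: inject_nested_lora swaps nn.Linear layers for NestedLoRALinear
# wrappers allocated at a maximum rank that can later be sliced down.
inject_nested_lora(model, max_rank=16)

# Assumption: set_rank activates a smaller slice of each adapter — the
# "dynamic rank slicing" the docstring describes; a controller such as
# OrbitalController would call this as training stress signals change.
set_rank(model, rank=4)

# Assumption: count_params reports the trainable adapter parameter count.
print(count_params(model))
```

Presumably `setup_unified_lora` bundles these steps together with an `OrbitalController` instance; consult `orbital_controller.py` for its exact contract.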