Text Classification
Transformers
lora
fine-tuning
adaptive
research
nested-lora
synaptic-plasticity
rank-adaptation
How to use Simo76/Unified-LoRA with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="Simo76/Unified-LoRA")
```

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("Simo76/Unified-LoRA", dtype="auto")
```
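Once the pipeline is built, classification is a single call. A minimal sketch; the input sentence is illustrative, and the returned labels depend on the model's classification head:

```python
# Classify a sentence; returns a list of {"label": ..., "score": ...} dicts
result = pipe("This adapter keeps the base model frozen and trains only low-rank updates.")
print(result)
```

The repository also ships a small wrapper module that re-exports the full Unified-LoRA stack: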
| """ | |
| Unified-LoRA Controller | |
| ====================== | |
| Convenience wrapper that exposes the full Unified-LoRA stack: | |
| - nested_lora.py β execution engine (LoRA with dynamic rank slicing) | |
| - orbital_controller.py β control logic (stress-driven rank adaptation) | |
| Use this module for simple integration, or import submodules directly | |
| for fine-grained control. | |
| Author: Simona Vargiu | |
| License: Apache 2.0 | |
| """ | |
| # ββ ENGINE ββββββββββββββββββββββββββββββββββββββββββ | |
| from nested_lora import ( | |
| NestedLoRALinear, | |
| inject_nested_lora, | |
| set_rank, | |
| get_lora_params, | |
| count_params, | |
| ) | |
| # ββ CONTROLLER ββββββββββββββββββββββββββββββββββββββ | |
| from orbital_controller import ( | |
| OrbitalController, | |
| setup_unified_lora, | |
| ) | |
| # ββ EXPORT ββββββββββββββββββββββββββββββββββββββββββ | |
| __all__ = [ | |
| "NestedLoRALinear", | |
| "inject_nested_lora", | |
| "set_rank", | |
| "get_lora_params", | |
| "count_params", | |
| "OrbitalController", | |
| "setup_unified_lora", | |
| ] | |
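The wrapper above only re-exports the two submodules, so a natural question is what "dynamic rank slicing" and "stress-driven rank adaptation" look like in practice. The sketch below is a self-contained toy illustration of those two ideas, not the actual contents of nested_lora.py or orbital_controller.py: the class names echo the exports above, but every signature, the gradient-norm "stress" signal, and the grow/shrink thresholds are assumptions made for this example.

```python
# Conceptual sketch only: illustrates nested-rank LoRA slicing and a
# stress-driven rank controller. NOT the actual Simo76/Unified-LoRA source.
import torch
import torch.nn as nn

class ToyNestedLoRALinear(nn.Module):
    """Frozen linear layer plus LoRA factors allocated at max_rank.

    The active rank r selects the first r columns/rows of A and B, so
    ranks are nested: factors trained at rank r remain usable at r' < r.
    """
    def __init__(self, base: nn.Linear, max_rank: int = 16, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)       # freeze the pretrained weights
        self.max_rank = max_rank
        self.rank = max_rank              # active rank, adjustable at runtime
        self.alpha = alpha
        self.A = nn.Parameter(torch.randn(base.in_features, max_rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(max_rank, base.out_features))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.rank
        # Slice the nested factors down to the active rank.
        delta = (x @ self.A[:, :r]) @ self.B[:r, :] * (self.alpha / r)
        return self.base(x) + delta

class ToyController:
    """Grows or shrinks the active rank based on a 'stress' signal
    (here: the gradient norm of the LoRA factors)."""
    def __init__(self, layer: ToyNestedLoRALinear,
                 grow_above: float = 1.0, shrink_below: float = 0.1):
        self.layer = layer
        self.grow_above = grow_above
        self.shrink_below = shrink_below

    def step(self) -> None:
        grads = [p.grad for p in (self.layer.A, self.layer.B) if p.grad is not None]
        if not grads:
            return
        stress = torch.sqrt(sum(g.pow(2).sum() for g in grads)).item()
        if stress > self.grow_above and self.layer.rank < self.layer.max_rank:
            self.layer.rank += 1          # layer is under-parameterised: grow
        elif stress < self.shrink_below and self.layer.rank > 1:
            self.layer.rank -= 1          # layer has settled: shrink

# Minimal training step showing engine and controller together.
layer = ToyNestedLoRALinear(nn.Linear(32, 8), max_rank=8)
ctrl = ToyController(layer)
opt = torch.optim.AdamW([layer.A, layer.B], lr=1e-3)
x, y = torch.randn(4, 32), torch.randn(4, 8)
loss = nn.functional.mse_loss(layer(x), y)
loss.backward()
ctrl.step()                               # adapt rank from the observed stress
opt.step()
opt.zero_grad()
print(f"active rank after one step: {layer.rank}")
```

The real OrbitalController presumably implements a richer policy (the setup_unified_lora helper suggests it wires itself to every injected layer at once), but the slice-to-rank trick above is the core property that makes nested ranks cheap to change mid-training: no reallocation, just a different view of the same factors.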