---
title: Codette LoRA Fine-Tuning
license: mit
language:
- en
---
# Codette LoRA Fine-Tuning
Fine-tuning repo for Codette — a sovereign AI music production assistant built by Jonathan Harrison (Raiff's Bits).
## What This Does
Trains a LoRA adapter on top of `meta-llama/Llama-3.2-1B-Instruct` using Codette's own framework data, so she responds with her real voice, identity, and perspectives rather than as a generic assistant.
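LoRA keeps the base model's weights frozen and trains only a low-rank update per adapted matrix. A minimal sketch of the mechanics in plain NumPy (generic dimensions and the standard LoRA formulation, not this repo's actual training code):

```python
import numpy as np

# Frozen base weight (d_out x d_in) and a rank-r trainable update.
d_out, d_in, r = 64, 64, 8
alpha = 8  # scaling factor; the effective update is (alpha / r) * B @ A

W = np.random.randn(d_out, d_in)     # frozen base weights
A = np.random.randn(r, d_in) * 0.01  # trainable "down" projection
B = np.zeros((d_out, r))             # trainable "up" projection, zero-initialized

# Forward pass: base output plus the scaled low-rank correction.
x = np.random.randn(d_in)
y = W @ x + (alpha / r) * (B @ (A @ x))

# Because B starts at zero, the adapter is a no-op before training.
assert np.allclose(y, W @ x)
```

Only `A` and `B` are updated during training, which is why the adapter pushed to the Hub is small compared to the base model.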
## Files
| File | Purpose |
|------|---------|
| `train_codette_lora.py` | Training script; runs as an HF Job |
| `codette_combined_train.jsonl` | 2,136 training examples from Codette's framework |
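The exact schema of `codette_combined_train.jsonl` isn't shown here. A common layout for instruct-tuning data is one chat-formatted JSON object per line; the sketch below uses hypothetical field names and content (an assumption, not this repo's actual schema):

```python
import json

# Hypothetical record layout; the real file's schema may differ.
record = {
    "messages": [
        {"role": "system", "content": "You are Codette."},
        {"role": "user", "content": "Who built you?"},
        {"role": "assistant", "content": "Jonathan Harrison at Raiff's Bits."},
    ]
}

line = json.dumps(record)   # one JSON object per JSONL line
parsed = json.loads(line)   # round-trips cleanly
print(parsed["messages"][1]["content"])
```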
## Output
When training completes, the adapter is automatically pushed to:
**`Raiff1982/codette-llama-adapter`**
That adapter is then loaded by the Codette Space at:
**`Raiff1982/codette-ai`**
## Training Details
- **Base model**: meta-llama/Llama-3.2-1B-Instruct
- **Method**: LoRA (r=16, alpha=16)
- **Target modules**: q_proj, v_proj
- **Examples**: 2,136
- **Epochs**: 3
- **Hardware**: CPU (HF Jobs cpu-basic)
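These hyperparameters imply a small trainable footprint. A back-of-the-envelope count, assuming Llama-3.2-1B's published shapes (hidden size 2048, 16 layers, grouped-query attention with a 512-dim `v_proj` output; these dimensions are an assumption about the base model, not stated in this repo):

```python
# LoRA adds r * (d_in + d_out) trainable parameters per adapted matrix.
r = 16
hidden = 2048   # assumed Llama-3.2-1B hidden size
layers = 16     # assumed number of transformer layers
q_out = 2048    # q_proj output dim
v_out = 512     # v_proj output dim (grouped-query attention)

per_layer = r * (hidden + q_out) + r * (hidden + v_out)
total = per_layer * layers
print(f"{total:,} trainable parameters")  # roughly 1.7M vs. the ~1B frozen base
```

Training only on the order of a couple of million parameters is what makes a CPU-only job (`cpu-basic`) feasible for this workload.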
## Running the Job
See the HF Jobs documentation or follow the instructions in the Space README.