---
title: "Codette LoRA Adapters"
authors:
- name: Jonathan Harrison
orcid: 0009-0003-7005-8187
affiliation: "Raiff's Bits LLC, Bridge City, Texas, USA"
tags:
- lora
- peft
- llama
- cognitive-architecture
- multi-agent
- ethical-ai
- recursive-convergence
- qlora
license: cc-by-4.0
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
---
# Codette LoRA Adapters
[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18913936.svg)](https://doi.org/10.5281/zenodo.18913936)
**8 domain-specialized LoRA adapters** for the [Codette cognitive architecture](https://huggingface.co/Raiff1982/codette-paper), a sovereign modular AI framework for ethical multi-agent reasoning.
**Author:** Jonathan Harrison · [ORCID](https://orcid.org/0009-0003-7005-8187) · Raiff's Bits LLC
---
## Base Model
**meta-llama/Llama-3.1-8B-Instruct** with QLoRA (4-bit quantization)
## Adapter Configuration
| Parameter | Value |
|-----------|-------|
| PEFT Type | LoRA |
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | `q_proj`, `k_proj`, `v_proj`, `o_proj` |
| Bias | none |
| Task Type | CAUSAL_LM |
| Quantization | 4-bit (QLoRA) |
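With these hyperparameters, each adapter is tiny relative to the 8B base model. A back-of-the-envelope estimate (assuming Llama-3.1-8B's published dimensions: hidden size 4096, 32 layers, and a 1024-dim K/V projection from grouped-query attention; not taken from the adapter files themselves):

```python
# Rough trainable-parameter estimate for one adapter: r=16 LoRA on the
# q/k/v/o projections of Llama-3.1-8B (dims assumed from the base
# model's public config: hidden 4096, 32 layers, KV dim 1024).
r = 16
hidden = 4096
kv_dim = 1024   # 8 KV heads x 128 head dim (grouped-query attention)
layers = 32

def lora_params(d_in, d_out, r):
    """LoRA adds A (r x d_in) and B (d_out x r) per adapted matrix."""
    return r * d_in + d_out * r

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj: 4096 -> 4096
    + lora_params(hidden, kv_dim, r)  # k_proj: 4096 -> 1024
    + lora_params(hidden, kv_dim, r)  # v_proj: 4096 -> 1024
    + lora_params(hidden, hidden, r)  # o_proj: 4096 -> 4096
)
total = per_layer * layers
print(f"~{total / 1e6:.1f}M trainable parameters")  # ~13.6M, ~0.17% of 8B
```

This is why the adapters ship as small `adapter_model.safetensors` files rather than full model checkpoints.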
## Adapters
Each adapter specializes in a distinct cognitive perspective, trained on curated perspective-tagged datasets:
| Adapter | Description | Training Examples | Status |
|---------|-------------|-------------------|--------|
| `newton/` | Analytical physics reasoning – Newtonian precision and scientific method | 3,000 | ✅ Uploaded |
| `davinci/` | Creative invention thinking – DaVinci's cross-disciplinary creativity | 2,500 | ✅ Uploaded |
| `empathy/` | Emotional understanding and compassionate reasoning | 2,500 | ✅ Uploaded |
| `philosophy/` | Conceptual and philosophical reasoning – depth and rigor | 2,000 | ✅ Uploaded |
| `quantum/` | Probabilistic and quantum-inspired reasoning | 2,000 | ✅ Uploaded |
| `consciousness/` | Recursive cognition and RC+ξ framework reasoning | 3,000 | ✅ Uploaded |
| `multi_perspective/` | Multi-perspective synthesis across analytical lenses | 2,500 | ✅ Uploaded |
| `systems_architecture/` | AI systems architecture and design reasoning | 2,000 | 🔄 Training |
**Total: 19,500 training examples across 8 cognitive domains**
## Training Details
- **Epochs**: 3 per adapter
- **Hardware**: NVIDIA A10G (cloud) + Intel Arc 140V / CPU (local)
- **Framework**: Hugging Face TRL (SFTTrainer) + PEFT
- **Training Pipeline**: [`Raiff1982/codette-training-lab`](https://huggingface.co/Raiff1982/codette-training-lab)
- **Novel contribution**: Two GPU-free CPU training pipelines validated on consumer laptops (see paper)
### Training Metrics (Newton adapter example)
| Metric | Value |
|--------|-------|
| Final Loss | ~0.071 |
| Mean Token Accuracy | 97.4% |
| Gradient Norm | ~0.05β0.13 |
## Usage
### Load a single adapter
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# 4-bit quantization, matching the QLoRA training setup
# (passing load_in_4bit=True directly is deprecated in recent transformers)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base_model = AutoModelForCausalLM.from_pretrained(
    "meta-llama/Llama-3.1-8B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

# Load the newton adapter
model = PeftModel.from_pretrained(
    base_model, "Raiff1982/codette-lora-adapters", subfolder="newton"
)
```
### Load multiple adapters (multi-perspective reasoning)
```python
from peft import PeftModel
# Attach the first adapter (base_model from the previous snippet)
model = PeftModel.from_pretrained(base_model, "Raiff1982/codette-lora-adapters", subfolder="newton", adapter_name="newton")
# Add additional perspectives
model.load_adapter("Raiff1982/codette-lora-adapters", subfolder="empathy", adapter_name="empathy")
model.load_adapter("Raiff1982/codette-lora-adapters", subfolder="davinci", adapter_name="davinci")
# Switch between perspectives
model.set_adapter("empathy")
```
## How Adapters Fit in the Codette Architecture
```
┌───────────────────────────────────────────────────┐
│                Codette Orchestrator               │
├───────────────────────────────────────────────────┤
│  Reasoning Forge (6 agents + Critic + Synthesis)  │
│   ┌─────────┐  ┌─────────┐  ┌─────────┐           │
│   │ Newton  │  │ DaVinci │  │ Empathy │  ...      │ ← LoRA adapters
│   └────┬────┘  └────┬────┘  └────┬────┘           │
│        └────────────┼────────────┘                │
│                     ▼                             │
│          RC+ξ Attractor Convergence               │
│          Phase Coherence Ξ ≈ 0.99                 │
├───────────────────────────────────────────────────┤
│  AEGIS Ethical Governance (η = 0.961)             │
├───────────────────────────────────────────────────┤
│  QuantumSpiderweb · CognitionCocooner · Memory    │
└───────────────────────────────────────────────────┘
```
Each adapter represents a specialized cognitive perspective. The Reasoning Forge orchestrates them through shared attractor dynamics, achieving multi-agent phase coherence (Ξ = 0.99) within 10 recursive iterations.
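As a rough illustration only (this is not the actual Codette orchestrator; the hypothetical `generate_with_adapter` callable stands in for a `set_adapter` + `generate` call on a PEFT model, as in the usage snippets above), the multi-perspective loop amounts to querying each adapter in turn and handing the candidates to a synthesis step:

```python
# Hypothetical sketch of multi-perspective orchestration. In practice,
# generate_with_adapter(name, prompt) would call model.set_adapter(name)
# followed by model.generate(...).
from typing import Callable

PERSPECTIVES = ["newton", "davinci", "empathy"]

def multi_perspective(prompt: str,
                      generate_with_adapter: Callable[[str, str], str]) -> dict:
    """Collect one candidate answer per cognitive perspective."""
    return {name: generate_with_adapter(name, prompt) for name in PERSPECTIVES}

# Stub generator for demonstration only
def stub_generate(adapter: str, prompt: str) -> str:
    return f"[{adapter}] view on: {prompt}"

candidates = multi_perspective("Why do objects fall?", stub_generate)
for name, answer in candidates.items():
    print(name, "->", answer)
```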
## Directory Structure
```
codette-lora-adapters/
├── newton/
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── tokenizer.json
│   ├── tokenizer_config.json
│   ├── chat_template.jinja
│   ├── checkpoint-500/
│   └── checkpoint-1125/
├── davinci/
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── ...
│   ├── checkpoint-500/
│   └── checkpoint-939/
├── empathy/
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── ...
│   ├── checkpoint-500/
│   └── checkpoint-939/
├── philosophy/ (coming soon)
├── quantum/ (coming soon)
├── consciousness/ (coming soon)
├── multi_perspective/ (coming soon)
└── systems_architecture/ (coming soon)
```
## Related Resources
| Resource | Link |
|----------|------|
| Paper | [Raiff1982/codette-paper](https://huggingface.co/Raiff1982/codette-paper) |
| Training Lab | [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab) |
| Training Data | [Raiff1982/codette-training-data](https://huggingface.co/datasets/Raiff1982/codette-training-data) |
| Zenodo DOI | [10.5281/zenodo.18913936](https://doi.org/10.5281/zenodo.18913936) |
| GitHub | [Raiff1982/codette-training-lab](https://github.com/Raiff1982/codette-training-lab) |
| ORCID | [0009-0003-7005-8187](https://orcid.org/0009-0003-7005-8187) |
## Citation
```bibtex
@article{harrison2026codette,
title={Codette: A Sovereign Modular Cognitive Architecture for Ethical Multi-Agent AI},
author={Harrison, Jonathan},
year={2026},
doi={10.5281/zenodo.18913936},
publisher={Raiff's Bits LLC},
url={https://huggingface.co/Raiff1982/codette-paper}
}
```
## License
CC BY 4.0 – [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/)