---
title: "Codette LoRA Adapters"
authors:
  - name: Jonathan Harrison
    orcid: 0009-0003-7005-8187
    affiliation: "Raiff's Bits LLC, Bridge City, Texas, USA"
tags:
  - lora
  - peft
  - llama
  - cognitive-architecture
  - multi-agent
  - ethical-ai
  - recursive-convergence
  - qlora
license: cc-by-4.0
base_model: meta-llama/Llama-3.1-8B-Instruct
library_name: peft
pipeline_tag: text-generation
---

# Codette LoRA Adapters

[![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.18913936.svg)](https://doi.org/10.5281/zenodo.18913936)

**8 domain-specialized LoRA adapters** for the [Codette cognitive architecture](https://huggingface.co/Raiff1982/codette-paper) – a sovereign modular AI framework for ethical multi-agent reasoning.

**Author:** Jonathan Harrison · [ORCID](https://orcid.org/0009-0003-7005-8187) · Raiff's Bits LLC

---

## Base Model

**meta-llama/Llama-3.1-8B-Instruct** with QLoRA (4-bit quantization)
## Adapter Configuration

| Parameter | Value |
|-----------|-------|
| PEFT Type | LoRA |
| Rank (r) | 16 |
| Alpha | 32 |
| Dropout | 0.05 |
| Target Modules | `q_proj`, `k_proj`, `v_proj`, `o_proj` |
| Bias | none |
| Task Type | CAUSAL_LM |
| Quantization | 4-bit (QLoRA) |
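
As a rough sanity check, the configuration above implies about 13.6M trainable parameters per adapter. The sketch below assumes the standard Llama-3.1-8B shapes (32 decoder layers, hidden size 4096, grouped-query attention with 8 KV heads of head dimension 128), which are not restated in this card:

```python
# Rough trainable-parameter count implied by the config table above.
# Assumed Llama-3.1-8B shapes: 32 layers, hidden 4096, 8 KV heads x head_dim 128.
r = 16
hidden = 4096
kv_dim = 8 * 128  # k_proj / v_proj output dimension under grouped-query attention

def lora_params(in_dim: int, out_dim: int, rank: int) -> int:
    # LoRA adds A (rank x in_dim) and B (out_dim x rank) per adapted matrix
    return rank * in_dim + out_dim * rank

per_layer = (
    lora_params(hidden, hidden, r)    # q_proj
    + lora_params(hidden, kv_dim, r)  # k_proj
    + lora_params(hidden, kv_dim, r)  # v_proj
    + lora_params(hidden, hidden, r)  # o_proj
)
total = 32 * per_layer
print(f"{total:,} trainable parameters")  # 13,631,488 (~0.17% of the 8B base)
```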

## Adapters

Each adapter specializes in a distinct cognitive perspective, trained on curated perspective-tagged datasets:

| Adapter | Description | Training Examples | Status |
|---------|-------------|-------------------|--------|
| `newton/` | Analytical physics reasoning – Newtonian precision and scientific method | 3,000 | ✅ Uploaded |
| `davinci/` | Creative invention thinking – DaVinci's cross-disciplinary creativity | 2,500 | ✅ Uploaded |
| `empathy/` | Emotional understanding and compassionate reasoning | 2,500 | ✅ Uploaded |
| `philosophy/` | Conceptual and philosophical reasoning – depth and rigor | 2,000 | ✅ Uploaded |
| `quantum/` | Probabilistic and quantum-inspired reasoning | 2,000 | ✅ Uploaded |
| `consciousness/` | Recursive cognition and RC+ξ framework reasoning | 3,000 | ✅ Uploaded |
| `multi_perspective/` | Multi-perspective synthesis across analytical lenses | 2,500 | ✅ Uploaded |
| `systems_architecture/` | AI systems architecture and design reasoning | 2,000 | 🔄 Training |
**Total: 19,500 training examples across 8 cognitive domains**

## Training Details

- **Epochs**: 3 per adapter
- **Hardware**: NVIDIA A10G (cloud) + Intel Arc 140V / CPU (local)
- **Framework**: Hugging Face TRL (`SFTTrainer`) + PEFT
- **Training Pipeline**: [`Raiff1982/codette-training-lab`](https://huggingface.co/Raiff1982/codette-training-lab)
- **Novel contribution**: Two GPU-free CPU training pipelines validated on consumer laptops (see paper)
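
The checkpoint names shipped with the adapters are consistent with these settings under an assumed effective batch size of 8 (the batch size is not stated in this card). For example, newton's 3,000 examples over 3 epochs give 1,125 optimizer steps, matching its `checkpoint-1125` directory:

```python
# Assumed effective batch size of 8 (not stated in this card)
examples, epochs, batch_size = 3000, 3, 8
steps = (examples * epochs) // batch_size
print(steps)  # 1125
```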

### Training Metrics (Newton adapter example)

| Metric | Value |
|--------|-------|
| Final Loss | ~0.071 |
| Mean Token Accuracy | 97.4% |
| Gradient Norm | ~0.05–0.13 |
|
| | ## Usage |
| |
|
| | ### Load a single adapter |
| |
|
| | ```python |
| | from transformers import AutoModelForCausalLM, AutoTokenizer |
| | from peft import PeftModel |
| | |
| | base_model = AutoModelForCausalLM.from_pretrained( |
| | "meta-llama/Llama-3.1-8B-Instruct", |
| | load_in_4bit=True, |
| | device_map="auto" |
| | ) |
| | tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct") |
| | |
| | # Load the newton adapter |
| | model = PeftModel.from_pretrained(base_model, "Raiff1982/codette-lora-adapters", subfolder="newton") |
| | ``` |

### Load multiple adapters (multi-perspective reasoning)

```python
from peft import PeftModel

# Wrap the base model with the first adapter
model = PeftModel.from_pretrained(
    base_model, "Raiff1982/codette-lora-adapters", subfolder="newton", adapter_name="newton"
)

# Add additional perspectives
model.load_adapter("Raiff1982/codette-lora-adapters", subfolder="empathy", adapter_name="empathy")
model.load_adapter("Raiff1982/codette-lora-adapters", subfolder="davinci", adapter_name="davinci")

# Switch between perspectives
model.set_adapter("empathy")
```
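
A small helper can make perspective switching explicit. The adapter names below are real, but this keyword router is purely illustrative – it is a hypothetical stand-in, not Codette's published orchestration logic:

```python
# Hypothetical routing helper: picks an adapter name to pass to
# model.set_adapter(...). Keyword matching here is illustrative only.
PERSPECTIVE_KEYWORDS = {
    "newton": ("force", "energy", "derive", "experiment"),
    "empathy": ("feel", "comfort", "support", "console"),
    "davinci": ("design", "invent", "sketch", "prototype"),
}

def route(query: str, default: str = "newton") -> str:
    q = query.lower()
    for adapter, keywords in PERSPECTIVE_KEYWORDS.items():
        if any(k in q for k in keywords):
            return adapter
    return default

print(route("How do I comfort a grieving friend?"))  # empathy
```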

## How Adapters Fit in the Codette Architecture

```
┌───────────────────────────────────────────────────────┐
│                 Codette Orchestrator                  │
├───────────────────────────────────────────────────────┤
│ Reasoning Forge (6 agents + Critic + Synthesis)       │
│   ┌─────────┐  ┌─────────┐  ┌─────────┐               │
│   │ Newton  │  │ DaVinci │  │ Empathy │ ...           │ ← LoRA adapters
│   └────┬────┘  └────┬────┘  └────┬────┘               │
│        └────────────┼────────────┘                    │
│                     ▼                                 │
│          RC+ξ Attractor Convergence                   │
│          Phase Coherence Λ ≈ 0.99                     │
├───────────────────────────────────────────────────────┤
│ AEGIS Ethical Governance (η = 0.961)                  │
├───────────────────────────────────────────────────────┤
│ QuantumSpiderweb · CognitionCocooner · Memory         │
└───────────────────────────────────────────────────────┘
```

Each adapter represents a specialized cognitive perspective. The Reasoning Forge orchestrates them through shared attractor dynamics, achieving multi-agent phase coherence (Λ = 0.99) within 10 recursive iterations.
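
For intuition about phase coherence, here is a toy relaxation loop. It is not the RC+ξ implementation, just an illustration: six agent "phases" are pulled toward their circular mean each iteration, and coherence, measured as the magnitude of the average unit phasor, approaches 1 as the agents align:

```python
import cmath

# Toy illustration only (not RC+xi): agents as phases on the unit circle.
phases = [0.0, 1.2, 2.5, 4.0, 5.1, 0.7]  # six agents, radians

def coherence(ps):
    # Magnitude of the average unit phasor: 0 = incoherent, 1 = fully aligned
    return abs(sum(cmath.exp(1j * p) for p in ps) / len(ps))

for _ in range(10):
    mean = cmath.phase(sum(cmath.exp(1j * p) for p in phases))
    phases = [p + 0.5 * (mean - p) for p in phases]  # pull toward the mean

print(round(coherence(phases), 3))
```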

## Directory Structure

```
codette-lora-adapters/
├── newton/
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── tokenizer.json
│   ├── tokenizer_config.json
│   ├── chat_template.jinja
│   ├── checkpoint-500/
│   └── checkpoint-1125/
├── davinci/
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── ...
│   ├── checkpoint-500/
│   └── checkpoint-939/
├── empathy/
│   ├── adapter_config.json
│   ├── adapter_model.safetensors
│   ├── ...
│   ├── checkpoint-500/
│   └── checkpoint-939/
├── philosophy/ (coming soon)
├── quantum/ (coming soon)
├── consciousness/ (coming soon)
├── multi_perspective/ (coming soon)
└── systems_architecture/ (coming soon)
```

## Related Resources

| Resource | Link |
|----------|------|
| Paper | [Raiff1982/codette-paper](https://huggingface.co/Raiff1982/codette-paper) |
| Training Lab | [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab) |
| Training Data | [Raiff1982/codette-training-data](https://huggingface.co/datasets/Raiff1982/codette-training-data) |
| Zenodo DOI | [10.5281/zenodo.18913936](https://doi.org/10.5281/zenodo.18913936) |
| GitHub | [Raiff1982/codette-training-lab](https://github.com/Raiff1982/codette-training-lab) |
| ORCID | [0009-0003-7005-8187](https://orcid.org/0009-0003-7005-8187) |
|
| | ## Citation |
| |
|
| | ```bibtex |
| | @article{harrison2026codette, |
| | title={Codette: A Sovereign Modular Cognitive Architecture for Ethical Multi-Agent AI}, |
| | author={Harrison, Jonathan}, |
| | year={2026}, |
| | doi={10.5281/zenodo.18913936}, |
| | publisher={Raiff's Bits LLC}, |
| | url={https://huggingface.co/Raiff1982/codette-paper} |
| | } |
| | ``` |
| |
|
| | ## License |
| |
|
| | CC BY 4.0 β [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0/) |
| |
|