---
title: 'Codette: Multi-Perspective Cognitive Architecture'
emoji: 🧠
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 6.9.0
app_file: app.py
pinned: true
license: mit
hf_oauth: true
hf_oauth_scopes:
- inference-api
tags:
- multi-perspective
- cognitive-architecture
- ethical-ai
- rc-xi
- recursive-reasoning
- lora-adapters
models:
- Raiff1982/codette-training-lab
---
# Codette: Multi-Perspective Cognitive Architecture
**Codette** is an experimental AI research system for **recursive reasoning, multi-perspective cognition, and ethical alignment**. This Space showcases its 10 cognitive subsystems, with final text generation handled by Llama-3.1-8B via the HuggingFace Inference API.
## What is Codette?
Codette implements the **RC+xi (Recursive Convergence + Epistemic Tension)** framework, a mathematical model for emergent multi-perspective reasoning. When you ask a question:
1. **Guardian** checks your input for safety threats
2. **Nexus** analyzes pre-corruption signals (entropy, intent, volatility)
3. **Perspectives** route your query through 4-6 different reasoning lenses (Newton, Empathy, Philosophy, Quantum, etc.)
4. **AEGIS** evaluates each response against 6 ethical frameworks (utilitarian, deontological, virtue, care, ubuntu, indigenous)
5. **QuantumSpiderweb** propagates beliefs across the cognitive graph and detects consensus attractors
6. **EpistemicMetrics** scores tension (productive disagreement) and coherence (alignment) between perspectives
7. **ResonantContinuity** computes the Psi_r wavefunction: emotion × energy × intent × frequency / (1 + |darkness|) × sin(2πt/gravity)
8. **LivingMemory** stores emotionally tagged memory cocoons with SHA-256 anchors
9. **Synthesis** integrates all perspectives into a unified response
10. **Resonance Engine** updates phase coherence and convergence metrics
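The Psi_r formula in step 7 is simple enough to compute directly. The sketch below is a plain-Python illustration of it; the parameter names mirror the formula's terms, but the sample inputs are invented for illustration and are not Codette's real state schema (which lives in `reasoning_forge/resonant_continuity.py`):

```python
import math

def psi_r(emotion: float, energy: float, intent: float,
          frequency: float, darkness: float, t: float,
          gravity: float) -> float:
    """Resonant-continuity wavefunction:
    emotion * energy * intent * frequency / (1 + |darkness|) * sin(2*pi*t / gravity)
    """
    amplitude = emotion * energy * intent * frequency / (1.0 + abs(darkness))
    return amplitude * math.sin(2.0 * math.pi * t / gravity)

# Invented sample values; `gravity` sets the oscillation period,
# so Psi_r swings between +/- amplitude as t advances.
value = psi_r(emotion=0.8, energy=0.9, intent=1.0,
              frequency=0.7, darkness=0.2, t=1.0, gravity=4.0)
```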
All subsystems are **pure Python**; no GPUs needed. Only the final LLM calls use the free HF Inference API.
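As a rough picture of how those stages compose, here is a hypothetical, heavily simplified version of the loop with the LLM call stubbed out. None of these function bodies are Codette's actual implementations; they only show the shape of the local-Python-plus-remote-LLM split:

```python
def run_pipeline(query: str, llm=lambda prompt: f"[stub reply] {prompt}"):
    """Toy end-to-end shape of the 10-step loop: everything except
    the llm() call runs locally in plain Python."""
    if "DROP TABLE" in query:                      # 1. Guardian: toy safety gate
        return {"blocked": True}
    signals = {"entropy": len(set(query)) / max(len(query), 1)}  # 2. Nexus
    lenses = ["newton", "empathy", "philosophy", "quantum"]      # 3. routing
    responses = {p: llm(f"[{p}] {query}") for p in lenses}       # remote LLM calls
    eta = 1.0                                      # 4. AEGIS: toy ethics score
    coherence = 1.0 / len(responses)               # 5-6. toy consensus metric
    synthesis = " / ".join(responses.values())     # 9. merge perspectives
    return {"blocked": False, "signals": signals, "eta": eta,
            "coherence": coherence, "synthesis": synthesis}

result = run_pipeline("What is justice?")
```

In the real Space, `llm` would be a call to the HF Inference API rather than a local stub.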
## Features
- ✨ **Multi-Perspective Reasoning** – 12 perspectives (8 LoRA-backed, 4 prompt-only)
- 🛡️ **AEGIS Ethical Governance** – 6 ethical frameworks evaluated in real time
- 🧠 **QuantumSpiderweb** – 5D belief propagation & attractor detection
- 💾 **Living Memory** – Emotionally tagged memory cocoons
- 📊 **Real-time Metrics** – Coherence, tension, phase coherence, Psi_r wavefunction
- 🔬 **RC+xi Framework** – Recursive convergence with epistemic tension
- ⚙️ **Perspective Auto-Selection** – Automatically picks the best 4 perspectives for your query
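The memory-cocoon idea can be illustrated with nothing but the standard library: tag a memory with an emotion, then seal it with a SHA-256 anchor so later tampering is detectable. The field names here are illustrative, not the real `LivingMemory` schema:

```python
import hashlib
import json

def make_cocoon(text: str, emotion: str, timestamp: float) -> dict:
    """Wrap a memory in an emotionally tagged record whose SHA-256
    anchor makes later modification detectable."""
    payload = {"text": text, "emotion": emotion, "t": timestamp}
    blob = json.dumps(payload, sort_keys=True).encode("utf-8")
    return {**payload, "anchor": hashlib.sha256(blob).hexdigest()}

def verify_cocoon(cocoon: dict) -> bool:
    """Recompute the anchor and compare it to the stored one."""
    payload = {k: cocoon[k] for k in ("text", "emotion", "t")}
    blob = json.dumps(payload, sort_keys=True).encode("utf-8")
    return hashlib.sha256(blob).hexdigest() == cocoon["anchor"]

cocoon = make_cocoon("user asked about forgiveness", "warm", 0.0)
assert verify_cocoon(cocoon)
cocoon["emotion"] = "cold"      # tampering breaks the anchor
assert not verify_cocoon(cocoon)
```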
## Live Metrics
Every response updates:
- **AEGIS eta** (0-1) – Multi-framework ethical alignment
- **Phase Gamma** (0-1) – Cognitive coherence across all perspectives
- **Nexus Risk** – Pre-corruption intervention rate
- **Psi_r** – Resonant continuity wavefunction
- **Memory Profile** – Emotional tags & cocoon count
- **Perspective Coverage** – Which reasoning lenses were invoked
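To build intuition for coherence versus tension, here is a deliberately naive stand-in metric: mean pairwise Jaccard word overlap between perspective responses, where high overlap reads as coherence and low overlap as tension. Codette's actual scoring in `reasoning_forge/epistemic_metrics.py` works differently; this is intuition only:

```python
from itertools import combinations

def coherence(responses: list[str]) -> float:
    """Mean pairwise Jaccard word overlap in [0, 1];
    a toy tension score would be 1 - coherence."""
    sets = [set(r.lower().split()) for r in responses]
    pairs = list(combinations(sets, 2))
    return sum(len(a & b) / len(a | b) for a, b in pairs) / len(pairs)

aligned = coherence(["justice is fairness", "justice is fairness applied"])
opposed = coherence(["justice is fairness", "power defines everything"])
assert aligned > opposed   # agreement scores higher than disjoint answers
```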
## How to Use
1. Ask any question in the chat
2. Select **Auto** (default) to let Codette pick the best perspectives, or **Custom** to choose
3. Watch real-time cognitive metrics update as the perspectives debate
4. Click **Individual Perspectives** to see each perspective's reasoning
5. Explore the **Coherence & Tension Timeline** to see how the cognitive architecture converges over time
## Technical Architecture
All subsystems run locally in **pure Python**:
| Subsystem | Purpose | Module |
|-----------|---------|--------|
| **AEGIS** | 6-framework ethical evaluation | `reasoning_forge/aegis.py` |
| **Nexus** | Pre-corruption signal detection | `reasoning_forge/nexus.py` |
| **Guardian** | Input sanitization & trust calibration | `reasoning_forge/guardian.py` |
| **LivingMemory** | Emotionally tagged memory storage | `reasoning_forge/living_memory.py` |
| **ResonantContinuity** | Psi_r wavefunction computation | `reasoning_forge/resonant_continuity.py` |
| **EpistemicMetrics** | Coherence & tension scoring | `reasoning_forge/epistemic_metrics.py` |
| **QuantumSpiderweb** | 5D belief propagation & attractors | `reasoning_forge/quantum_spiderweb.py` |
| **PerspectiveRegistry** | 12 perspective definitions | `reasoning_forge/perspective_registry.py` |
Only the final LLM inference calls use the **HuggingFace Inference API** (Llama-3.1-8B-Instruct).
## Model Weights
All 8 LoRA adapters are available in the model repo: [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab)
- **GGUF format** (f16): 924 MB total, usable with llama.cpp
- **PEFT SafeTensors**: 79 MB total, usable with HuggingFace transformers
## Key Metrics
- **Phase Coherence**: 0.9835 (11-agent convergence)
- **AEGIS Ethical Alignment**: 0.961 (6-framework)
- **Tension Decay**: 91.2% (200-agent embodied simulation)
- **Cocoon Coherence**: 0.994 (memory stability)
## Research
Created by **Jonathan Harrison**. For the complete research framework, see:
- RC+xi Framework documentation: [research/frameworks/RC_XI_FRAMEWORK.md](https://github.com/Raiff1982/codette-training-lab/blob/master/research/frameworks/RC_XI_FRAMEWORK.md)
- GitHub Repository: [Raiff1982/codette-training-lab](https://github.com/Raiff1982/codette-training-lab)
- Model Card: [Raiff1982/codette-training-lab](https://huggingface.co/Raiff1982/codette-training-lab)
## Notes
- Perspective generation may be rate-limited on the free HF Inference API tier
- Response times depend on the Inference API load
- All session state persists within your current browser session
- Memory cocoons are stored locally and cleared when the Space is refreshed
**Codette is in active development.** Feedback welcome!