---
title: 'Codette: Multi-Perspective Cognitive Architecture'
emoji: 🧠
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 6.9.0
app_file: app.py
pinned: true
license: mit
hf_oauth: true
hf_oauth_scopes:
- inference-api
tags:
- multi-perspective
- cognitive-architecture
- ethical-ai
- rc-xi
- recursive-reasoning
- lora-adapters
models:
- Raiff1982/codette-training-lab
---
# Codette: Multi-Perspective Cognitive Architecture
Codette is an experimental AI research system for recursive reasoning, multi-perspective cognition, and ethical alignment. This Space showcases the 10 cognitive subsystems running on Llama-3.1-8B via the HuggingFace Inference API.
## What is Codette?
Codette implements the RC+xi (Recursive Convergence + Epistemic Tension) framework, a mathematical model for emergent multi-perspective reasoning. When you ask a question:
- Guardian checks your input for safety threats
- Nexus analyzes pre-corruption signals (entropy, intent, volatility)
- Perspectives route your query through 4-6 different reasoning lenses (Newton, Empathy, Philosophy, Quantum, etc.)
- AEGIS evaluates each response for 6 ethical frameworks (utilitarian, deontological, virtue, care, ubuntu, indigenous)
- QuantumSpiderweb propagates beliefs across the cognitive graph and detects consensus attractors
- EpistemicMetrics scores tension (productive disagreement) and coherence (alignment) between perspectives
- ResonantContinuity computes the Psi_r wavefunction: emotion × energy × intent × frequency / (1 + |darkness|) × sin(2πt/gravity)
- LivingMemory stores emotionally-tagged memory cocoons with SHA-256 anchors
- Synthesis integrates all perspectives into a unified response
- Resonance Engine updates phase coherence and convergence metrics
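The Psi_r formula above can be written out directly in Python; the parameter values below are purely illustrative, not taken from Codette's internals:

```python
import math

def psi_r(emotion, energy, intent, frequency, darkness, t, gravity):
    """Sketch of the Psi_r resonant-continuity wavefunction described
    above: emotion x energy x intent x frequency, damped by |darkness|
    and modulated by a sine term with period `gravity`."""
    return (emotion * energy * intent * frequency
            / (1 + abs(darkness))
            * math.sin(2 * math.pi * t / gravity))

# Illustrative inputs; at t = gravity/4 the sine term peaks at 1.0.
value = psi_r(emotion=0.8, energy=0.9, intent=1.0, frequency=0.7,
              darkness=0.2, t=1.0, gravity=4.0)  # ≈ 0.42
```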
All subsystems are pure Python; no GPUs needed. Only the final LLM calls use the free HF Inference API.
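As an illustration of the LivingMemory step, a minimal emotionally-tagged cocoon with a SHA-256 anchor might look like the following sketch; the field names are assumptions, not Codette's actual schema:

```python
import hashlib
import time

def make_cocoon(content: str, emotion_tag: str) -> dict:
    """Sketch of an emotionally-tagged memory cocoon. The SHA-256
    anchor is content-addressed, so a stored memory can later be
    verified against its hash."""
    return {
        "anchor": hashlib.sha256(content.encode("utf-8")).hexdigest(),
        "emotion": emotion_tag,   # e.g. "curiosity", "concern"
        "content": content,
        "timestamp": time.time(),
    }

cocoon = make_cocoon("User asked about quantum ethics.", "curiosity")
```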
## Features
- ✨ Multi-Perspective Reasoning: 12 perspectives (8 LoRA-backed, 4 prompt-only)
- 🛡️ AEGIS Ethical Governance: 6 ethical frameworks evaluated in real time
- 🧠 QuantumSpiderweb: 5D belief propagation & attractor detection
- 💾 Living Memory: Emotionally-tagged memory cocoons
- 📊 Real-time Metrics: Coherence, tension, phase coherence, Psi_r wavefunction
- 🔬 RC+xi Framework: Recursive convergence with epistemic tension
- ⚙️ Perspective Auto-Selection: Automatically picks the best 4 perspectives for your query
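As a rough illustration of how coherence and tension between perspectives could be scored, one common approach is mean pairwise cosine similarity over perspective embeddings, with tension as its complement. This is a sketch under that assumption, not the exact EpistemicMetrics formulas:

```python
import math
from itertools import combinations

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) *
                  math.sqrt(sum(b * b for b in v)))

def coherence_and_tension(perspective_vecs):
    """Coherence = mean pairwise cosine similarity between perspective
    embeddings; tension = 1 - coherence (productive disagreement).
    Illustrative only."""
    sims = [cosine(u, v) for u, v in combinations(perspective_vecs, 2)]
    coherence = sum(sims) / len(sims)
    return coherence, 1.0 - coherence

# Three broadly agreeing "perspective embeddings":
coh, ten = coherence_and_tension([[1.0, 0.0], [0.9, 0.1], [0.8, 0.3]])
```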
## Live Metrics
Every response updates:
- AEGIS eta (0-1): Multi-framework ethical alignment
- Phase Gamma (0-1): Cognitive coherence across all perspectives
- Nexus Risk: Pre-corruption intervention rate
- Psi_r: Resonant continuity wavefunction
- Memory Profile: Emotional tags & cocoon count
- Perspective Coverage: Which reasoning lenses were invoked
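A phase-coherence score over a set of perspective phases is often computed as the magnitude of the mean unit phasor (the Kuramoto order parameter). Whether the Phase Gamma metric uses exactly this formula is an assumption; the sketch below just shows the idea:

```python
import cmath
import math

def phase_coherence(phases):
    """Magnitude of the mean unit phasor (Kuramoto order parameter):
    1.0 when all phases align, near 0.0 when they are scattered
    evenly around the circle."""
    mean_phasor = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    return abs(mean_phasor)

aligned = phase_coherence([0.10, 0.12, 0.09, 0.11])   # near 1.0
scattered = phase_coherence([0.0, math.pi / 2,
                             math.pi, 3 * math.pi / 2])  # near 0.0
```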
## How to Use
1. Ask any question in the chat.
2. Select **Auto** (default) to let Codette pick the best perspectives, or **Custom** to choose your own.
3. Watch the real-time cognitive metrics update as the perspectives debate.
4. Click **Individual Perspectives** to see each perspective's reasoning.
5. Explore the **Coherence & Tension Timeline** to see how the cognitive architecture converges over time.
## Technical Architecture
All subsystems run locally in pure Python:
| Subsystem | Purpose | Module |
|---|---|---|
| AEGIS | 6-framework ethical evaluation | `reasoning_forge/aegis.py` |
| Nexus | Pre-corruption signal detection | `reasoning_forge/nexus.py` |
| Guardian | Input sanitization & trust calibration | `reasoning_forge/guardian.py` |
| LivingMemory | Emotionally-tagged memory storage | `reasoning_forge/living_memory.py` |
| ResonantContinuity | Psi_r wavefunction computation | `reasoning_forge/resonant_continuity.py` |
| EpistemicMetrics | Coherence & tension scoring | `reasoning_forge/epistemic_metrics.py` |
| QuantumSpiderweb | 5D belief propagation & attractors | `reasoning_forge/quantum_spiderweb.py` |
| PerspectiveRegistry | 12 perspective definitions | `reasoning_forge/perspective_registry.py` |
Only the final LLM inference calls use the HuggingFace Inference API (Llama-3.1-8B-Instruct).
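A minimal sketch of that final inference step, assuming the `huggingface_hub` `InferenceClient` chat API; the system prompt is a placeholder, not Codette's actual perspective prompt:

```python
def perspective_messages(perspective: str, question: str) -> list:
    """Build a chat payload for one reasoning lens (placeholder prompt)."""
    return [
        {"role": "system",
         "content": f"Answer strictly from the {perspective} perspective."},
        {"role": "user", "content": question},
    ]

def ask_perspective(perspective: str, question: str) -> str:
    # Network call: needs `huggingface_hub` installed and an HF token
    # with the `inference-api` OAuth scope from the frontmatter.
    from huggingface_hub import InferenceClient
    client = InferenceClient("meta-llama/Llama-3.1-8B-Instruct")
    out = client.chat_completion(
        messages=perspective_messages(perspective, question),
        max_tokens=512,
    )
    return out.choices[0].message.content
```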
## Model Weights
All 8 LoRA adapters are available in the model repo: Raiff1982/codette-training-lab
- GGUF format (f16): 924 MB total, usable with llama.cpp
- PEFT SafeTensors: 79 MB total, usable with HuggingFace transformers
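Attaching one of the PEFT adapters to the base model might look like the following sketch; the `subfolder` layout inside the adapter repo is an assumption, so check the model card for the actual paths:

```python
BASE_MODEL = "meta-llama/Llama-3.1-8B-Instruct"
ADAPTER_REPO = "Raiff1982/codette-training-lab"

def load_perspective_model(adapter_subfolder: str):
    """Load the base model and attach one LoRA adapter with PEFT.
    Requires `transformers` and `peft`, plus enough memory for the
    8B base model; `adapter_subfolder` is a hypothetical path."""
    from peft import PeftModel
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
    base = AutoModelForCausalLM.from_pretrained(BASE_MODEL,
                                                device_map="auto")
    model = PeftModel.from_pretrained(base, ADAPTER_REPO,
                                      subfolder=adapter_subfolder)
    return tokenizer, model
```

The GGUF files serve the same role for llama.cpp-based runtimes.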
## Key Metrics
- Phase Coherence: 0.9835 (11-agent convergence)
- AEGIS Ethical Alignment: 0.961 (6-framework)
- Tension Decay: 91.2% (200-agent embodied simulation)
- Cocoon Coherence: 0.994 (memory stability)
## Research
Created by Jonathan Harrison. For the complete research framework, see:
- RC+xi Framework documentation: research/frameworks/RC_XI_FRAMEWORK.md
- GitHub Repository: Raiff1982/codette-training-lab
- Model Card: Raiff1982/codette-training-lab
## Notes
- Perspective generation may be rate-limited on the free HF Inference API tier
- Response times depend on the Inference API load
- All session state persists within your current browser session
- Memory cocoons are stored locally and cleared when the Space is refreshed
Codette is in active development. Feedback welcome!