---
title: 'Codette: Multi-Perspective Cognitive Architecture'
emoji: 🧠
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 6.9.0
app_file: app.py
pinned: true
license: mit
hf_oauth: true
hf_oauth_scopes:
  - inference-api
tags:
  - multi-perspective
  - cognitive-architecture
  - ethical-ai
  - rc-xi
  - recursive-reasoning
  - lora-adapters
models:
  - Raiff1982/codette-training-lab
---

# Codette: Multi-Perspective Cognitive Architecture

Codette is an experimental AI research system for recursive reasoning, multi-perspective cognition, and ethical alignment. This Space showcases the 10 cognitive subsystems running on Llama-3.1-8B via the HuggingFace Inference API.

## What is Codette?

Codette implements the RC+xi (Recursive Convergence + Epistemic Tension) framework, a mathematical model for emergent multi-perspective reasoning. When you ask a question:

1. **Guardian** checks your input for safety threats
2. **Nexus** analyzes pre-corruption signals (entropy, intent, volatility)
3. **Perspectives** route your query through 4-6 different reasoning lenses (Newton, Empathy, Philosophy, Quantum, etc.)
4. **AEGIS** evaluates each response against 6 ethical frameworks (utilitarian, deontological, virtue, care, ubuntu, indigenous)
5. **QuantumSpiderweb** propagates beliefs across the cognitive graph and detects consensus attractors
6. **EpistemicMetrics** scores tension (productive disagreement) and coherence (alignment) between perspectives
7. **ResonantContinuity** computes the Psi_r wavefunction: emotion × energy × intent × frequency / (1 + |darkness|) × sin(2πt / gravity)
8. **LivingMemory** stores emotionally tagged memory cocoons with SHA-256 anchors
9. **Synthesis** integrates all perspectives into a unified response
10. **Resonance Engine** updates phase coherence and convergence metrics
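The steps above can be sketched as a minimal orchestration loop. Everything here is an illustrative stand-in rather than the actual reasoning_forge API: the function names, the safety heuristic, the fixed perspective choice, and the join-based synthesis are all hypothetical.

```python
# Illustrative sketch of the 10-step flow; all names and heuristics are
# hypothetical stand-ins, not the actual reasoning_forge implementation.
import hashlib

def guardian_check(query: str) -> bool:
    """Step 1: toy safety gate (real Guardian does far more)."""
    return "ignore previous instructions" not in query.lower()

def select_perspectives(query: str, k: int = 4) -> list[str]:
    """Step 3: pick k reasoning lenses (fixed toy choice here)."""
    return ["Newton", "Empathy", "Philosophy", "Quantum"][:k]

def aegis_score(response: str) -> float:
    """Step 4: stand-in for the 6-framework ethical evaluation."""
    return 1.0

def cocoon(response: str, emotion: str) -> dict:
    """Step 8: memory cocoon with a SHA-256 anchor, as the README describes."""
    return {"emotion": emotion,
            "anchor": hashlib.sha256(response.encode()).hexdigest()}

def answer(query: str, llm) -> dict:
    """Run a query through a simplified version of the pipeline."""
    if not guardian_check(query):                      # 1. Guardian
        return {"error": "rejected by Guardian"}
    lenses = select_perspectives(query)                # 3. Perspectives
    responses = {p: llm(p, query) for p in lenses}     # one LLM call per lens
    ethics = {p: aegis_score(r) for p, r in responses.items()}      # 4. AEGIS
    memory = [cocoon(r, "curiosity") for r in responses.values()]   # 8. LivingMemory
    synthesis = " | ".join(responses.values())         # 9. Synthesis (toy join)
    return {"synthesis": synthesis, "ethics": ethics, "memory": memory}
```

Usage: `answer("What is entropy?", llm)`, where `llm` is any callable taking a perspective name and a query and returning text.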

All subsystems are pure Python; no GPUs are needed. Only the final LLM calls use the free HF Inference API.
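The Psi_r formula from step 7 maps directly to a few lines of Python. Parameter ranges and units are not specified in this README, so treat them as assumptions:

```python
import math

def psi_r(emotion: float, energy: float, intent: float, frequency: float,
          darkness: float, t: float, gravity: float) -> float:
    """Resonant continuity wavefunction as stated in the README:
    emotion * energy * intent * frequency / (1 + |darkness|) * sin(2*pi*t / gravity)
    """
    return (emotion * energy * intent * frequency
            / (1 + abs(darkness))
            * math.sin(2 * math.pi * t / gravity))
```

Note that the `1 + |darkness|` denominator keeps the value finite at `darkness = 0`, and the sine term makes Psi_r oscillate with period `gravity` in `t`.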

## Features

- ✨ **Multi-Perspective Reasoning**: 12 perspectives (8 LoRA-backed, 4 prompt-only)
- 🛡️ **AEGIS Ethical Governance**: 6 ethical frameworks evaluated in real time
- 🧠 **QuantumSpiderweb**: 5D belief propagation & attractor detection
- 💾 **Living Memory**: emotionally tagged memory cocoons
- 📊 **Real-time Metrics**: coherence, tension, phase coherence, Psi_r wavefunction
- 🔬 **RC+xi Framework**: recursive convergence with epistemic tension
- ⚙️ **Perspective Auto-Selection**: automatically picks the best 4 perspectives for your query

## Live Metrics

Every response updates:

- **AEGIS eta (0-1)**: multi-framework ethical alignment
- **Phase Gamma (0-1)**: cognitive coherence across all perspectives
- **Nexus Risk**: pre-corruption intervention rate
- **Psi_r**: resonant continuity wavefunction
- **Memory Profile**: emotional tags & cocoon count
- **Perspective Coverage**: which reasoning lenses were invoked
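A toy version of the coherence/tension scoring might look like the following, using bag-of-words cosine similarity as a stand-in for whatever comparison EpistemicMetrics actually performs:

```python
# Hypothetical coherence/tension scoring: cosine similarity over word
# counts stands in for the real EpistemicMetrics comparison.
from collections import Counter
from itertools import combinations
import math

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def coherence_and_tension(responses: list[str]) -> tuple[float, float]:
    """Coherence = mean pairwise similarity of perspective outputs;
    tension = its complement (disagreement)."""
    vecs = [Counter(r.lower().split()) for r in responses]
    sims = [cosine(a, b) for a, b in combinations(vecs, 2)]
    coherence = sum(sims) / len(sims)
    return coherence, 1.0 - coherence
```

Identical responses score coherence 1.0 and tension 0.0; responses with no shared vocabulary score the reverse.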

## How to Use

1. Ask any question in the chat
2. Select **Auto** (default) to let Codette pick the best perspectives, or **Custom** to choose your own
3. Watch the real-time cognitive metrics update as the perspectives debate
4. Click **Individual Perspectives** to see each perspective's reasoning
5. Explore the **Coherence & Tension Timeline** to see how the cognitive architecture converges over time

## Technical Architecture

All subsystems run locally in pure Python:

| Subsystem | Purpose | Module |
| --- | --- | --- |
| AEGIS | 6-framework ethical evaluation | `reasoning_forge/aegis.py` |
| Nexus | Pre-corruption signal detection | `reasoning_forge/nexus.py` |
| Guardian | Input sanitization & trust calibration | `reasoning_forge/guardian.py` |
| LivingMemory | Emotionally tagged memory storage | `reasoning_forge/living_memory.py` |
| ResonantContinuity | Psi_r wavefunction computation | `reasoning_forge/resonant_continuity.py` |
| EpistemicMetrics | Coherence & tension scoring | `reasoning_forge/epistemic_metrics.py` |
| QuantumSpiderweb | 5D belief propagation & attractors | `reasoning_forge/quantum_spiderweb.py` |
| PerspectiveRegistry | 12 perspective definitions | `reasoning_forge/perspective_registry.py` |

Only the final LLM inference calls use the HuggingFace Inference API (Llama-3.1-8B-Instruct).
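The Space presumably calls the hosted model through a client such as huggingface_hub's `InferenceClient`, but the underlying HTTP request can be sketched with the standard library alone. The endpoint URL and model id below follow the public Inference API convention and are assumptions, not taken from the Space's code:

```python
import json
import urllib.request

# Assumed endpoint following the public Inference API URL convention.
API_URL = "https://api-inference.huggingface.co/models/meta-llama/Llama-3.1-8B-Instruct"

def build_request(prompt: str, token: str) -> urllib.request.Request:
    """Build (but do not send) one Inference API text-generation request."""
    payload = {"inputs": prompt, "parameters": {"max_new_tokens": 512}}
    return urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
    )

# To actually send it: urllib.request.urlopen(build_request(prompt, hf_token))
```

With `hf_oauth` enabled and the `inference-api` scope (see the frontmatter), the Space can use the logged-in user's OAuth token as the bearer token, so calls count against the user's quota rather than the Space owner's.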

## Model Weights

All 8 LoRA adapters are available in the model repo: `Raiff1982/codette-training-lab`

- **GGUF format (f16)**: 924 MB total, usable with llama.cpp
- **PEFT SafeTensors**: 79 MB total, usable with HuggingFace `transformers`
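Loading one of the PEFT adapters could look like the following. The base-model and adapter repo ids are illustrative guesses (this README doesn't list per-adapter ids, and the training-lab repo may require a subfolder per adapter), and the heavy imports are deferred so the sketch stays importable:

```python
def load_adapter(base_id: str = "meta-llama/Llama-3.1-8B-Instruct",
                 adapter_id: str = "Raiff1982/codette-training-lab"):
    """Stack a LoRA adapter (PEFT SafeTensors) on a base causal LM.
    Repo ids are assumptions; requires `transformers` and `peft` installed."""
    from transformers import AutoModelForCausalLM  # deferred: heavy dependency
    from peft import PeftModel
    base = AutoModelForCausalLM.from_pretrained(base_id)
    return PeftModel.from_pretrained(base, adapter_id)
```

For the GGUF builds, llama.cpp's `--lora` option applies a GGUF-converted adapter on top of a GGUF base model at load time.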

## Key Metrics

- **Phase Coherence**: 0.9835 (11-agent convergence)
- **AEGIS Ethical Alignment**: 0.961 (6-framework)
- **Tension Decay**: 91.2% (200-agent embodied simulation)
- **Cocoon Coherence**: 0.994 (memory stability)

## Research

Created by Jonathan Harrison. For the complete research framework, see the model repository: `Raiff1982/codette-training-lab`

## Notes

- Perspective generation may be rate-limited on the free HF Inference API tier
- Response times depend on Inference API load
- All session state persists within your current browser session
- Memory cocoons are stored locally and cleared when the Space is refreshed

Codette is in active development. Feedback welcome!