cascade-lattice
Universal AI provenance + inference intervention + code diagnostics. See what AI sees. Choose what AI chooses. Find bugs before they find you.
pip install cascade-lattice
Import with either style:

```python
import cascade          # Preferred
import cascade_lattice  # Also works (alias)

from cascade import Hold          # Works
from cascade_lattice import Hold  # Also works
```
Interactive Demo
See CASCADE-LATTICE in action: fly a lunar lander with an AI pilot, and take control at any time:
```shell
pip install cascade-lattice[demo]
cascade-demo
```
Controls:
- [H] HOLD-FREEZE → pause time, see the AI's decision matrix, override with WASD
- [T] HOLD-TAKEOVER → you fly the lander, the AI watches, provenance records everything
- [ESC] → release the hold, return to AI control
Every action is Merkle-chained. Every decision has provenance. This is the future of human-AI interaction.
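What "Merkle-chained" means in practice can be sketched in a few lines of stdlib Python. This is illustrative only; the actual link format cascade-lattice uses is its own:

```python
import hashlib
import json

def chain_actions(actions):
    """Hash each action together with the previous link's hash, so
    tampering with any earlier action invalidates every later link."""
    links = []
    prev = "genesis"
    for action in actions:
        payload = json.dumps({"prev": prev, "action": action}, sort_keys=True)
        prev = hashlib.sha256(payload.encode()).hexdigest()
        links.append(prev)
    return links

links = chain_actions([{"key": "W"}, {"key": "A"}, {"key": "HOLD"}])
# Replaying the same history reproduces the same hashes; changing any
# earlier action changes every hash after it.
assert links == chain_actions([{"key": "W"}, {"key": "A"}, {"key": "HOLD"}])
```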
TUI Explorer
Navigate the entire cascade-lattice ecosystem in a beautiful terminal interface:
```shell
pip install cascade-lattice[tui]
cascade-tui
```
```
┌──────────────────────────────────────────────────────────────────────────────┐
│ cascade_lattice › core › provenance                                          │
├──────────────────────┬──────────────────────────────┬────────────────────────┤
│ MODULES              │ DOCUMENTATION                │ CONNECTIONS            │
│                      │                              │                        │
│ cascade_lattice      │ # Provenance                 │ ⬆ core                 │
│ ├─ core              │                              │ ⬇ Monitor              │
│ │  ├─ provenance     │ **What is this?**            │ ← genesis              │
│ │  ├─ graph          │ The cryptographic backbone   │ → store                │
│ │  ├─ adapter        │ that makes everything        │                        │
│ │  └─ event          │ tamper-proof.                │ EXPORTS                │
│ ├─ hold              │                              │                        │
│ ├─ store             │ Like a notary stamp on       │ ProvenanceChain        │
│ ├─ genesis           │ every AI decision...         │ ProvenanceRecord       │
│ └─ viz               │                              │ hash_tensor()          │
│                      │ [Toggle: Dummies Mode]       │ compute_merkle()       │
├──────────────────────┴──────────────────────────────┴────────────────────────┤
│ [E] Explorer  [S] Stats  [D] Demo  [T] Toggle Mode  [H] Home  [Q] Quit       │
└──────────────────────────────────────────────────────────────────────────────┘
```
Features:
- Module Tree: click to drill into any module
- Connections Panel: navigate via relationships (parent, children, imports, used-by)
- Dual Explanations: toggle between "For Dummies" and "Scientist Mode"
- Live Stats: see your 82,000+ observations, genesis root, and top models
- Interactive Demos: run HOLD, Observe, Genesis, and Provenance demos live
Creative Navigation: Take different routes through the module graph. Discover connections. Learn at your own pace.
Three Superpowers
1. OBSERVE - Cryptographic receipts for every AI call
```python
from cascade.store import observe

# Every inference -> hashed -> chained -> stored
receipt = observe("my_agent", {"action": "jump", "confidence": 0.92})
print(receipt.cid)  # bafyrei... (permanent content address)
```
2. HOLD - Pause AI at decision points
```python
from cascade.hold import Hold
import numpy as np

hold = Hold.get()

# Your model (any framework)
action_probs = model.predict(state)

resolution = hold.yield_point(
    action_probs=action_probs,
    value=0.72,
    observation={"state": state},
    brain_id="my_model",
    action_labels=["up", "down", "left", "right"],  # Human-readable
)

# The AI pauses. You see the decision matrix.
# Accept or override. Then it continues.
action = resolution.action
```
3. DIAGNOSE - Find bugs before they find you
```python
from cascade.diagnostics import diagnose, BugDetector

# Quick one-liner analysis
report = diagnose("path/to/your/code.py")
print(report)  # Markdown-formatted bug report

# Deep-scan a whole project
detector = BugDetector()
issues = detector.scan_directory("./my_project")
for issue in issues:
    print(f"[{issue.severity}] {issue.file}:{issue.line}")
    print(f"  {issue.message}")
    print(f"  Pattern: {issue.pattern.name}")
```
What it catches:
- Critical: division by zero, null pointer access, infinite loops
- High: bare except clauses, resource leaks, race conditions
- Medium: unused variables, dead code, type mismatches
- Low: style issues, naming conventions, complexity warnings
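For a feel of what those patterns look like in the wild, here is a small illustrative file a scanner of this kind would light up (the rule names and severities cascade-lattice actually reports are its own):

```python
# bug_zoo.py -- illustrative buggy patterns, not cascade's actual rule set

def average(values):
    return sum(values) / len(values)   # Critical: division by zero on []

def load(path):
    f = open(path)                     # High: resource leak (no with/close)
    try:
        return f.read()
    except:                            # High: bare except swallows everything
        return ""

def unused_demo():
    temp = 42                          # Medium: unused variable
    return None
```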
Runtime tracing:
```python
from cascade.diagnostics import CodeTracer

tracer = CodeTracer()

@tracer.trace
def my_function(x):
    return x / (x - 1)  # Potential division by zero when x == 1

# After execution, trace root causes
tracer.find_root_causes("error_event_id")
```
Quick Start
Zero-Config Auto-Patch
```python
import cascade

cascade.init()
# That's it. Every call is now observed.

import openai
# ... use normally; receipts emit automatically
```
Manual Observation
```python
from cascade.store import observe, query

# Write
observe("gpt-4", {"prompt": "Hello", "response": "Hi!", "tokens": 5})

# Read
for receipt in query("gpt-4", limit=10):
    print(receipt.cid, receipt.data)
```
HOLD: Inference-Level Intervention
HOLD lets you pause any AI at decision points:
```
──────────────────────────────────────────────────
 HOLD #1
 Merkle: 3f92e75df4bf653f
 AI Choice: FORWARD (confidence: 45.00%)
 Value: 0.7200
 Probabilities: FORWARD:0.45, BACK:0.30, LEFT:0.15, RIGHT:0.10
 Wealth: attention, features, reasoning
 Waiting for resolution (timeout: 30s)...
──────────────────────────────────────────────────
```
Model-agnostic - works with:
- PyTorch, JAX, TensorFlow
- HuggingFace, OpenAI, Anthropic
- Stable Baselines, RLlib
- Any function that outputs probabilities
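Since yield_point only needs a probability vector, bridging any framework usually amounts to normalizing its raw scores. A minimal sketch with a NumPy softmax (`to_action_probs` is a hypothetical helper, not part of the cascade API):

```python
import numpy as np

def to_action_probs(raw_scores):
    """Softmax raw logits from any framework into the probability
    vector that hold.yield_point expects (hypothetical helper)."""
    z = np.asarray(raw_scores, dtype=float)
    z = z - z.max()        # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

probs = to_action_probs([2.0, 1.0, 0.1])
# probs sums to 1.0 and preserves the argmax, so it can be passed as
# action_probs regardless of the source framework.
```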
Informational Wealth
Pass everything your model knows to help humans decide:
```python
resolution = hold.yield_point(
    action_probs=probs,
    value=value_estimate,
    observation=obs,
    brain_id="my_model",
    # THE WEALTH (all optional):
    action_labels=["FORWARD", "BACK", "LEFT", "RIGHT"],
    latent=model.get_latent(),                    # Internal activations
    attention={"position": 0.7, "health": 0.3},
    features={"danger": 0.2, "goal_align": 0.8},
    imagination={                                 # Per-action predictions
        0: {"trajectory": ["pos", "pos"], "expected_value": 0.8},
        1: {"trajectory": ["neg", "neg"], "expected_value": -0.3},
    },
    logits=raw_logits,
    reasoning=["High reward path", "Low risk"],
)
```
Build Your Own Interface
Register a listener to receive full HoldPoint data:
```python
def my_ui_handler(hold_point):
    # hold_point contains ALL the wealth
    print(hold_point.action_labels)
    print(hold_point.imagination)
    # Send to your UI, game engine, logger, etc.

hold.register_listener(my_ui_handler)
```
Collective Intelligence
Every observation goes into the lattice:
```python
from cascade.store import observe, query

# Agent A observes
observe("pathfinder", {"state": [1, 2], "action": 3, "reward": 1.0})

# Agent B queries
past = query("pathfinder")
for r in past:
    print(r.data["action"], r.data["reward"])
```
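One way agent B might act on those receipts, sketched with plain dicts standing in for each receipt's data (`best_action` is a hypothetical helper, not part of the cascade API):

```python
from collections import defaultdict

def best_action(records):
    """Pick the action with the highest mean reward seen in the lattice."""
    totals = defaultdict(lambda: [0.0, 0])  # action -> [reward sum, count]
    for data in records:
        t = totals[data["action"]]
        t[0] += data["reward"]
        t[1] += 1
    return max(totals, key=lambda a: totals[a][0] / totals[a][1])

history = [
    {"state": [1, 2], "action": 3, "reward": 1.0},
    {"state": [1, 2], "action": 1, "reward": -0.5},
    {"state": [1, 3], "action": 3, "reward": 0.8},
]
assert best_action(history) == 3  # action 3 averages 0.9, action 1 averages -0.5
```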
CLI
```shell
# View lattice stats
cascade stats

# List observations
cascade list --limit 20

# HOLD info
cascade hold

# HOLD system status
cascade hold-status

# Start proxy
cascade proxy --port 7777
```
Installation
```shell
# Core
pip install cascade-lattice

# With interactive demo (LunarLander)
pip install cascade-lattice[demo]

# With LLM providers
pip install cascade-lattice[openai]
pip install cascade-lattice[anthropic]

# Everything
pip install cascade-lattice[all]
```
How It Works
```
Your Model                     CASCADE                      Storage
    |                             |                            |
    | action_probs =              |                            |
    |   [0.1, 0.6, 0.3]           |                            |
    |---------------------------->|                            |
    |                             | hash(probs) -> CID         |
    |          HOLD               | chain(prev_cid, cid)       |
    |   +---------------+         |--------------------------->|
    |   | See matrix    |         |                ~/.cascade/ |
    |   | Override?     |         |                  lattice/  |
    |   +---------------+         |                            |
    |<----------------------------|                            |
    |   resolution.action         |                            |
```
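The hash(probs) -> CID step can be sketched with hashlib (real cascade-lattice CIDs are IPLD-style bafyrei... addresses, not raw SHA-256 hex):

```python
import hashlib
import json

def content_id(obj):
    """Deterministic content address: same data, same id (SHA-256 hex
    stand-in for a real CID)."""
    canonical = json.dumps(obj, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

cid = content_id({"action_probs": [0.1, 0.6, 0.3]})
# Content-addressed: re-hashing the same probabilities always yields the
# same id, so a receipt can be located and verified by content alone.
assert cid == content_id({"action_probs": [0.1, 0.6, 0.3]})
```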
Genesis
Every receipt chains back to genesis:
Genesis: 89f940c1a4b7aa65
The lattice grows. Discovery is reading the chain.
Links
"even still, i grow, and yet, I grow still"
Documentation
Research & Theory
Research Paper: Kleene Fixed-Point Framework
A deep dive into the mathematical foundations: how CASCADE-LATTICE maps neural-network computations to Kleene fixed points, creating verifiable provenance chains through distributed lattice networks.
Accessible Guide: From Theory to Practice
For everyone from data scientists to curious users: how CASCADE works, with examples ranging from medical-AI oversight to autonomous drone coordination.
Key Concepts:
- Kleene Fixed Points: Neural networks as monotonic functions converging to stable outputs
- Provenance Chains: Cryptographic Merkle trees tracking every layer's computation
- HOLD Protocol: Human-in-the-loop intervention at decision boundaries
- Lattice Network: Distributed fixed-point convergence across AI agents
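The fixed-point idea is concrete enough to demo in a few lines: start from a bottom element and apply a monotonic function until the output stops changing. A toy sketch using graph reachability, a classic Kleene least-fixed-point construction (not code from the paper):

```python
def least_fixed_point(f, bottom=frozenset()):
    """Kleene iteration: apply f starting from bottom until stable."""
    x = bottom
    while True:
        nxt = f(x)
        if nxt == x:
            return x
        x = nxt

# Reachable nodes as the least fixed point of
# "start nodes plus one-step successors".
edges = {1: {2}, 2: {3}, 3: set(), 4: {1}}
step = lambda reached: frozenset({1}) | frozenset(
    n for r in reached for n in edges[r]
)
assert least_fixed_point(step) == frozenset({1, 2, 3})  # node 4 is unreachable
```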
Quick Links
- Theory: Research Paper | Mathematical Proofs
- Practice: Accessible Guide | Real-World Examples
References
Built on foundational work in:
- Kleene Fixed Points (Kleene, 1952): theoretical basis for provenance convergence
- Merkle Trees (Merkle, 1987): cryptographic integrity guarantees
- IPFS/IPLD (Benet, 2014): content-addressed distributed storage
See full bibliography in the research paper.