tostido committed
Commit b8bef19 · 1 Parent(s): 389308d

Restore comprehensive README and dependencies

Files changed (2):
  1. README.md +229 -41
  2. pyproject.toml +40 -9
README.md CHANGED
@@ -1,70 +1,258 @@
- # Cascade Lattice
-
- **Universal AI provenance layer cryptographic receipts for every call, with HOLD inference halt protocol**
-
- [![PyPI version](https://badge.fury.io/py/cascade-lattice.svg)](https://pypi.org/project/cascade-lattice/)
- [![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
-
- ## Installation
-
- ```bash
  pip install cascade-lattice
  ```

- With optional dependencies:

  ```bash
- pip install cascade-lattice[torch]  # PyTorch integration
- pip install cascade-lattice[all]    # All integrations
  ```

  ## Quick Start

  ```python
- from cascade import Monitor
-
- # Create a monitor for your component
- monitor = Monitor("training_loop")
-
- # Observe events (parses logs, extracts metrics)
- event = monitor.observe("Epoch 5: loss=0.0234, accuracy=0.9812")
- print(event.data)  # {'loss': 0.0234, 'accuracy': 0.9812, ...}
-
- # Get metrics summary
- print(monitor.metrics.summary())
  ```

- ## Features
-
- - **Universal Observation** — Monitor training, inference, system logs, API calls
- - **Cryptographic Receipts** — Every observation gets a verifiable hash chain
- - **HOLD Protocol** — Inference halt capability for safety-critical applications
- - **Tape Storage** — JSONL event streams for replay and analysis
- - **Provider Patches** — Drop-in monitoring for OpenAI, Anthropic, LiteLLM, Ollama
-
- ## CLI Usage
-
- ```bash
- cascade --help              # Show all commands
- cascade stats               # Lattice statistics
- cascade list -n 20          # Recent observations
- cascade watch               # Live observation feed
- cascade fingerprint model/  # Fingerprint a model
- cascade pii scan.log        # Scan for PII
  ```

- ## Tape Utilities

  ```python
- from cascade.viz import load_tape_file, find_latest_tape, list_tape_files
-
- # Find and load tape files
- latest = find_latest_tape("./logs")
- events = load_tape_file(latest)
-
- for event in events:
-     print(event['event']['event_type'], event['event']['data'])
  ```

- ## License
-
- MIT
 
+ # cascade-lattice
+
+ **Universal AI provenance + inference intervention. See what AI sees. Choose what AI chooses.**
+
+ [![PyPI](https://img.shields.io/pypi/v/cascade-lattice.svg)](https://pypi.org/project/cascade-lattice/)
+ [![Python](https://img.shields.io/pypi/pyversions/cascade-lattice.svg)](https://pypi.org/project/cascade-lattice/)
+ [![License](https://img.shields.io/badge/license-MIT-green.svg)](LICENSE)
+
+ ```
  pip install cascade-lattice
  ```

+ ---
+
+ ## 🎮 Interactive Demo
+
+ **See CASCADE-LATTICE in action** — fly a lunar lander with AI, take control anytime:
+
  ```bash
+ pip install cascade-lattice[demo]
+ cascade-demo
+ ```
+
+ **Controls:**
+ - `[H]` **HOLD-FREEZE** — Pause time, see the AI's decision matrix, override with WASD
+ - `[T]` **HOLD-TAKEOVER** — You fly the lander, the AI watches, provenance records everything
+ - `[ESC]` — Release the hold, return to AI control
+
+ Every action is Merkle-chained. Every decision has provenance. This is the future of human-AI interaction.
+
+ ---
+
+ ## Two Superpowers
+
+ ### 1. OBSERVE - Cryptographic receipts for every AI call
+
+ ```python
+ from cascade.store import observe
+
+ # Every inference -> hashed -> chained -> stored
+ receipt = observe("my_agent", {"action": "jump", "confidence": 0.92})
+ print(receipt.cid)  # bafyrei... (permanent content address)
+ ```
+
+ ### 2. HOLD - Pause AI at decision points
+
+ ```python
+ from cascade.hold import Hold
+ import numpy as np
+
+ hold = Hold.get()
+
+ # Your model (any framework)
+ action_probs = model.predict(state)
+
+ resolution = hold.yield_point(
+     action_probs=action_probs,
+     value=0.72,
+     observation={"state": state},
+     brain_id="my_model",
+     action_labels=["up", "down", "left", "right"],  # Human-readable
+ )
+
+ # AI pauses. You see the decision matrix.
+ # Accept or override. Then it continues.
+ action = resolution.action
  ```

+ ---
+
  ## Quick Start

+ ### Zero-Config Auto-Patch
+
  ```python
+ import cascade
+ cascade.init()
+
+ # That's it. Every call is now observed.
+ import openai
+ # ... use normally, receipts emit automatically
+ ```

+ ### Manual Observation
+
+ ```python
+ from cascade.store import observe, query

+ # Write
+ observe("gpt-4", {"prompt": "Hello", "response": "Hi!", "tokens": 5})

+ # Read
+ for receipt in query("gpt-4", limit=10):
+     print(receipt.cid, receipt.data)
  ```

+ ---

+ ## HOLD: Inference-Level Intervention

+ HOLD lets you pause any AI at decision points:

  ```
+ ══════════════════════════════════════════════════
+ 🛑 HOLD #1
+ Merkle: 3f92e75df4bf653f
+ AI Choice: FORWARD (confidence: 45.00%)
+ Value: 0.7200
+ Probabilities: FORWARD:0.45, BACK:0.30, LEFT:0.15, RIGHT:0.10
+ Wealth: attention, features, reasoning
+ Waiting for resolution (timeout: 30s)...
+ ══════════════════════════════════════════════════
+ ```
+
+ **Model-agnostic** - works with:
+ - PyTorch, JAX, TensorFlow
+ - HuggingFace, OpenAI, Anthropic
+ - Stable Baselines, RLlib
+ - Any function that outputs probabilities

+ ### Informational Wealth
+
+ Pass everything your model knows to help humans decide:

  ```python
+ resolution = hold.yield_point(
+     action_probs=probs,
+     value=value_estimate,
+     observation=obs,
+     brain_id="my_model",
+
+     # THE WEALTH (all optional):
+     action_labels=["FORWARD", "BACK", "LEFT", "RIGHT"],
+     latent=model.get_latent(),  # Internal activations
+     attention={"position": 0.7, "health": 0.3},
+     features={"danger": 0.2, "goal_align": 0.8},
+     imagination={  # Per-action predictions
+         0: {"trajectory": ["pos", "pos"], "expected_value": 0.8},
+         1: {"trajectory": ["neg", "neg"], "expected_value": -0.3},
+     },
+     logits=raw_logits,
+     reasoning=["High reward path", "Low risk"],
+ )
+ ```

+ ### Build Your Own Interface

+ Register a listener to receive full `HoldPoint` data:
+
+ ```python
+ def my_ui_handler(hold_point):
+     # hold_point contains ALL the wealth
+     print(hold_point.action_labels)
+     print(hold_point.imagination)
+     # Send to your UI, game engine, logger, etc.
+
+ hold.register_listener(my_ui_handler)
  ```

+ ---
+
+ ## Collective Intelligence
+
+ Every observation goes into the **lattice**:
+
+ ```python
+ from cascade.store import observe, query
+
+ # Agent A observes
+ observe("pathfinder", {"state": [1, 2], "action": 3, "reward": 1.0})
+
+ # Agent B queries
+ past = query("pathfinder")
+ for r in past:
+     print(r.data["action"], r.data["reward"])
+ ```
+
+ ---
+
+ ## CLI
+
+ ```bash
+ # View lattice stats
+ cascade stats
+
+ # List observations
+ cascade list --limit 20
+
+ # HOLD info
+ cascade hold
+
+ # HOLD system status
+ cascade hold-status
+
+ # Start proxy
+ cascade proxy --port 7777
+ ```
+
+ ---
+
+ ## Installation
+
+ ```bash
+ # Core
+ pip install cascade-lattice
+
+ # With interactive demo (LunarLander)
+ pip install cascade-lattice[demo]
+
+ # With LLM providers
+ pip install cascade-lattice[openai]
+ pip install cascade-lattice[anthropic]
+ pip install cascade-lattice[all]
+ ```
+
+ ---
+
+ ## How It Works
+
+ ```
+ Your Model                     CASCADE                       Storage
+     |                             |                             |
+     | action_probs = [0.1,        |                             |
+     |                 0.6,        |                             |
+     |                 0.3]        |                             |
+     | --------------------------> |                             |
+     |                             | hash(probs) -> CID          |
+     |    HOLD                     | chain(prev_cid, cid)        |
+     | +-------------+             | --------------------------> |
+     | | See matrix  |             |                             |  ~/.cascade/
+     | | Override?   |             |                             |    lattice/
+     | +-------------+             |                             |
+     | <-------------------------- |                             |
+     | resolution.action           |                             |
+ ```
+
+ ---
+
+ ## Genesis
+
+ Every receipt chains back to genesis:
+
+ ```
+ Genesis: 89f940c1a4b7aa65
+ ```
+
+ The lattice grows. Discovery is reading the chain.
+
+ ---
+
+ ## Links
+
+ - [PyPI](https://pypi.org/project/cascade-lattice/)
+ - [Issues](https://github.com/Yufok1/cascade-lattice/issues)
+
+ ---

+ *"even still, i grow, and yet, I grow still"*
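
The new README describes receipts that are hashed, chained to the previous receipt, and anchored to a genesis entry. As a reviewer's aid, here is a minimal, self-contained sketch of that hash-chaining idea using only the standard library. The names `ReceiptChain`, `observe`, and `verify` are illustrative and not cascade-lattice's actual implementation (which uses DAG-CBOR/multiformats CIDs per the new dependencies):

```python
import hashlib
import json


def _digest(payload: dict) -> str:
    """Deterministic SHA-256 over a canonical (sorted-key) JSON encoding."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()


class ReceiptChain:
    """Toy hash chain: each receipt commits to its data and the previous head."""

    def __init__(self) -> None:
        self.genesis = _digest({"genesis": True})
        self.head = self.genesis
        self.receipts = []

    def observe(self, source: str, data: dict) -> str:
        """Hash the event together with the current head, then advance the head."""
        cid = _digest({"prev": self.head, "source": source, "data": data})
        self.receipts.append(
            {"cid": cid, "prev": self.head, "source": source, "data": data}
        )
        self.head = cid
        return cid

    def verify(self) -> bool:
        """Recompute every hash from genesis; any tampering breaks the chain."""
        prev = self.genesis
        for r in self.receipts:
            expected = _digest({"prev": prev, "source": r["source"], "data": r["data"]})
            if r["prev"] != prev or r["cid"] != expected:
                return False
            prev = r["cid"]
        return True


chain = ReceiptChain()
chain.observe("my_agent", {"action": "jump", "confidence": 0.92})
chain.observe("my_agent", {"action": "land", "confidence": 0.75})
print(chain.verify())  # True
```

Because each receipt's identifier covers the previous one, editing any past event invalidates every later receipt, which is the property the README's "every action is Merkle-chained" claim relies on.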
pyproject.toml CHANGED
@@ -29,22 +29,53 @@ classifiers = [
  "Topic :: Scientific/Engineering :: Artificial Intelligence",
  "Topic :: System :: Monitoring",
  ]
- requires-python = ">=3.8"
  dependencies = [
-     "pyyaml>=6.0",
-     "requests>=2.28.0",
  ]

  [project.optional-dependencies]
- torch = ["torch>=1.9.0"]
- web3 = ["web3>=6.0.0"]
  all = [
-     "torch>=1.9.0",
-     "web3>=6.0.0",
-     "anthropic>=0.18.0",
      "openai>=1.0.0",
-     "litellm>=1.0.0",
      "huggingface-hub>=0.20.0",
  ]

  [project.urls]

  "Topic :: Scientific/Engineering :: Artificial Intelligence",
  "Topic :: System :: Monitoring",
  ]
+ requires-python = ">=3.9"
  dependencies = [
+     "aiohttp>=3.8.0",
+     "rich>=12.0.0",
+     "numpy>=1.20.0",
+     "dag-cbor>=0.3.0",
+     "multiformats>=0.3.0",
  ]

  [project.optional-dependencies]
+ openai = ["openai>=1.0.0"]
+ anthropic = ["anthropic>=0.18.0"]
+ huggingface = [
+     "transformers>=4.30.0",
+     "huggingface-hub>=0.20.0",
+ ]
+ ollama = ["ollama>=0.1.0"]
+ litellm = ["litellm>=1.0.0"]
+ langchain = ["langchain>=0.1.0"]
+ ipfs = ["ipfshttpclient>=0.8.0"]
+ hold = ["numpy>=1.20.0"]
+ demo = [
+     "gymnasium>=0.29.0",
+     "pygame>=2.1.0",
+     "stable-baselines3>=2.0.0",
+     "rl-zoo3>=2.0.0",
+     "box2d-py>=2.3.5",
+ ]
  all = [
      "openai>=1.0.0",
+     "anthropic>=0.18.0",
+     "transformers>=4.30.0",
      "huggingface-hub>=0.20.0",
+     "ollama>=0.1.0",
+     "litellm>=1.0.0",
+     "langchain>=0.1.0",
+     "networkx>=2.6",
+     "datasets>=2.14.0",
+     "sentence-transformers>=2.2.0",
+ ]
+ dev = [
+     "pytest>=7.0",
+     "pytest-cov>=4.0",
+     "pytest-asyncio>=0.21.0",
+     "black>=23.0",
+     "ruff>=0.1.0",
+     "mypy>=1.0",
  ]

  [project.urls]
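
A small usage note on the newly split extras: pip accepts several extras in one requirement, comma-separated, and the bracket expression should be quoted so shells like zsh don't treat it as a glob. This is standard pip behavior, not anything specific to cascade-lattice:

```shell
# Combine extras in a single install; quotes protect the brackets in zsh.
pip install "cascade-lattice[openai,anthropic]"

# Demo environment plus the dev tooling added in this commit:
pip install "cascade-lattice[demo,dev]"
```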