coredipper committed on
Commit
90baddc
·
verified ·
1 Parent(s): 7edcefe

Deploy operon-epiplexity-cascade Gradio Space demo

Browse files
Files changed (4)
  1. README.md +23 -6
  2. __pycache__/app.cpython-314.pyc +0 -0
  3. app.py +532 -0
  4. requirements.txt +2 -0
README.md CHANGED
@@ -1,12 +1,29 @@
 ---
-title: Operon Epiplexity Cascade
-emoji: 🐠
-colorFrom: pink
-colorTo: green
+title: Operon Epiplexity Healing Cascade
+emoji: "\U0001F4A1"
+colorFrom: yellow
+colorTo: blue
 sdk: gradio
-sdk_version: 6.5.1
+sdk_version: "6.5.1"
 app_file: app.py
 pinned: false
+license: mit
+short_description: Escalating healing when stagnation is detected
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Operon Epiplexity Healing Cascade
+
+Detect epistemic stagnation via EpiplexityMonitor and escalate through increasingly aggressive healing interventions.
+
+## Features
+
+- **Stagnation detection**: EpiplexityMonitor measures novelty and flags STAGNANT/CRITICAL
+- **Escalating cascade**: autophagy -> regeneration -> abort
+- **Diagnostic reports**: Full stagnation history with epiplexity scores
+- **Presets**: Healthy agent, stagnant agent, critical with regeneration
+
+## Motifs Combined
+
+EpiplexityMonitor + AutophagyDaemon + RegenerativeSwarm + Cascade
+
+[GitHub](https://github.com/coredipper/operon) | [PyPI](https://pypi.org/project/operon-ai/)
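The escalation policy the README describes (autophagy on STAGNANT, regeneration on CRITICAL, abort when regeneration has already been tried) can be sketched as a minimal state machine. This is an illustrative reduction in plain Python, not the operon-ai API; the `Status` enum and `next_intervention` helper are hypothetical names introduced here.

```python
from enum import Enum

class Status(Enum):
    HEALTHY = "healthy"
    STAGNANT = "stagnant"
    CRITICAL = "critical"

def next_intervention(status: Status, current: str) -> str:
    """Escalate one stage at a time: none -> autophagy -> regeneration -> abort."""
    if status == Status.STAGNANT and current == "none":
        return "autophagy"        # Stage 1: prune stale context
    if status == Status.CRITICAL and current in ("none", "autophagy"):
        return "regeneration"     # Stage 2: kill the stuck worker, spawn a fresh one
    if status == Status.CRITICAL and current == "regeneration":
        return "abort"            # Stage 3: all interventions exhausted
    return current                # healthy, or already at this stage

# A deeply stuck agent walks through all three stages:
stage = "none"
for s in [Status.HEALTHY, Status.STAGNANT, Status.CRITICAL, Status.CRITICAL]:
    stage = next_intervention(s, stage)
# stage is now "abort"
```

Note that the cascade never skips backwards: a later HEALTHY reading leaves the highest intervention reached in place, which is how the demo reports "Highest intervention" in its status banner.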
__pycache__/app.cpython-314.pyc ADDED
Binary file (21.9 kB).
 
app.py ADDED
@@ -0,0 +1,532 @@
+"""
+Operon Epiplexity Healing Cascade -- Interactive Gradio Demo
+============================================================
+
+Simulate an agent that detects epistemic stagnation via the
+EpiplexityMonitor and escalates through increasingly aggressive
+healing interventions: autophagy, regeneration, and abort.
+
+Run locally:
+    pip install gradio
+    python space-epiplexity-cascade/app.py
+
+Deploy to HuggingFace Spaces:
+    Copy this directory to a new HF Space with sdk=gradio.
+"""
+
+import sys
+from pathlib import Path
+
+import gradio as gr
+
+# Allow importing operon_ai from the repo root when running locally
+_repo_root = Path(__file__).resolve().parent.parent
+if str(_repo_root) not in sys.path:
+    sys.path.insert(0, str(_repo_root))
+
+from operon_ai import HistoneStore, Lysosome
+from operon_ai.health import EpiplexityMonitor, MockEmbeddingProvider, HealthStatus
+from operon_ai.healing import (
+    AutophagyDaemon,
+    RegenerativeSwarm,
+    SimpleWorker,
+    WorkerMemory,
+    create_default_summarizer,
+    create_simple_summarizer,
+)
+
+# ── Status styling ───────────────────────────────────────────────────────
+
+STATUS_STYLES: dict[HealthStatus, tuple[str, str]] = {
+    HealthStatus.HEALTHY: ("#22c55e", "HEALTHY"),
+    HealthStatus.EXPLORING: ("#3b82f6", "EXPLORING"),
+    HealthStatus.CONVERGING: ("#a855f7", "CONVERGING"),
+    HealthStatus.STAGNANT: ("#f97316", "STAGNANT"),
+    HealthStatus.CRITICAL: ("#ef4444", "CRITICAL"),
+}
+
+
+def _status_badge(status: HealthStatus) -> str:
+    color, label = STATUS_STYLES.get(status, ("#888", str(status)))
+    return (
+        f'<span style="background:{color}20;color:{color};padding:2px 8px;'
+        f'border-radius:4px;font-weight:600;border:1px solid {color}">'
+        f"{label}</span>"
+    )
+
+
+# ── Presets ──────────────────────────────────────────────────────────────
+
+PRESETS: dict[str, dict] = {
+    "(custom)": {
+        "description": "Enter your own messages (one per line).",
+        "messages": [],
+        "window_size": 5,
+        "threshold": 0.2,
+        "max_messages": 15,
+    },
+    "Healthy diverse agent": {
+        "description": "Diverse analytical messages stay HEALTHY throughout. No interventions triggered.",
+        "messages": [
+            "First, let me analyze the requirements.",
+            "The key constraint is memory efficiency.",
+            "I'll use a hash map for O(1) lookups.",
+            "Testing edge cases: empty input, large input.",
+            "Implementation complete. Here are the results.",
+            "Performance benchmarks show 2x improvement.",
+        ],
+        "window_size": 5,
+        "threshold": 0.2,
+        "max_messages": 15,
+    },
+    "Stagnant repetitive agent": {
+        "description": "Messages become repetitive, triggering Stage 1: autophagy context pruning.",
+        "messages": [
+            "Let me think about this problem.",
+            "I need to consider the constraints.",
+            "Hmm, let me think about this problem.",
+            "I need to consider the constraints.",
+            "Hmm, let me think about this problem.",
+            "I need to consider the constraints.",
+            "Let me try a completely different approach.",
+            "Using dynamic programming instead.",
+        ],
+        "window_size": 5,
+        "threshold": 0.2,
+        "max_messages": 15,
+    },
+    "Critical deeply stuck agent": {
+        "description": "Identical repeated output triggers all three stages: autophagy, regeneration, and abort.",
+        "messages": [
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+            "Processing request.",
+        ],
+        "window_size": 5,
+        "threshold": 0.2,
+        "max_messages": 15,
+    },
+    "Recovery after stagnation": {
+        "description": "Agent gets stuck, then breaks out with a fresh approach. Shows stagnation detection followed by recovery.",
+        "messages": [
+            "Let me try to optimize the database query.",
+            "The query is slow because of the join.",
+            "The join is slow.",
+            "Still looking at the slow join.",
+            "The join is the bottleneck.",
+            "Wait, let me try a completely different approach!",
+            "Instead of optimizing the join, I'll denormalize the data.",
+            "Created a materialized view for the dashboard metrics.",
+            "The materialized view refreshes every 5 minutes.",
+            "Query time dropped from 3s to 50ms with the new approach.",
+        ],
+        "window_size": 5,
+        "threshold": 0.2,
+        "max_messages": 15,
+    },
+}
+
+
+def _load_preset(name: str) -> tuple[str, int, float, int]:
+    p = PRESETS.get(name, PRESETS["(custom)"])
+    messages_text = "\n".join(p["messages"]) if p["messages"] else ""
+    return messages_text, p["window_size"], p["threshold"], p["max_messages"]
+
+
+# ── Intervention logic ───────────────────────────────────────────────────
+
+INTERVENTION_STYLES = {
+    "none": ("#22c55e", "NONE"),
+    "autophagy": ("#f97316", "AUTOPHAGY"),
+    "regeneration": ("#a855f7", "REGENERATION"),
+    "abort": ("#ef4444", "ABORT"),
+}
+
+
+def _run_regeneration(failed_messages: list[str], silent: bool = True) -> str | None:
+    """Attempt recovery via a RegenerativeSwarm."""
+    summary = "; ".join(failed_messages[-3:])[:200]
+
+    def create_recovery_worker(name: str, hints: list[str]) -> SimpleWorker:
+        has_context = bool(hints)
+
+        def work(task: str, memory: WorkerMemory) -> str:
+            step = len(memory.output_history)
+            if has_context and step >= 1:
+                return "DONE: Recovered from stagnation with fresh approach!"
+            return f"RECOVERY: Analyzing previous failures (step {step})"
+
+        return SimpleWorker(id=name, work_function=work)
+
+    swarm = RegenerativeSwarm(
+        worker_factory=create_recovery_worker,
+        summarizer=create_default_summarizer(),
+        entropy_threshold=0.9,
+        max_steps_per_worker=5,
+        max_regenerations=1,
+        silent=silent,
+    )
+
+    result = swarm.supervise(f"Recover from stagnation. Context: {summary}")
+    if result.success:
+        return result.output
+    return None
+
+
+# ── Core simulation ─────────────────────────────────────────────────────
+
+
+def run_epiplexity_cascade(
+    preset_name: str,
+    custom_messages: str,
+    window_size: int,
+    threshold: float,
+    max_messages: int,
+) -> tuple[str, str, str, str]:
+    """Run the epiplexity healing cascade simulation.
+
+    Returns (status_banner_html, intervention_timeline_md,
+    epiplexity_history_md, diagnostic_report_md).
+    """
+    # Parse messages
+    messages = [
+        line.strip()
+        for line in custom_messages.strip().split("\n")
+        if line.strip()
+    ]
+
+    if not messages:
+        empty = "Enter messages (one per line) to analyze."
+        return empty, "", "", ""
+
+    window_size = int(window_size)
+    max_messages = int(max_messages)
+    messages = messages[:max_messages]
+
+    # Set up monitoring
+    monitor = EpiplexityMonitor(
+        embedding_provider=MockEmbeddingProvider(dim=64),
+        alpha=0.5,
+        window_size=window_size,
+        threshold=threshold,
+        critical_duration=3,
+    )
+
+    # Set up autophagy (Stage 1)
+    histone_store = HistoneStore()
+    lysosome = Lysosome(silent=True)
+    autophagy = AutophagyDaemon(
+        histone_store=histone_store,
+        lysosome=lysosome,
+        summarizer=create_simple_summarizer(),
+        toxicity_threshold=0.5,
+        silent=True,
+    )
+
+    # Tracking
+    measurements: list[dict] = []
+    interventions: list[dict] = []
+    current_intervention = "none"
+    stagnant_count = 0
+    critical_count = 0
+    context_pruned = False
+    worker_regenerated = False
+    output_lines: list[str] = []
+    final_success = True
+
+    for i, message in enumerate(messages):
+        result = monitor.measure(message)
+        status = result.status
+
+        measurements.append({
+            "index": i + 1,
+            "message": message,
+            "epiplexity": result.epiplexity,
+            "novelty": result.embedding_novelty,
+            "perplexity": result.normalized_perplexity,
+            "integral": result.epiplexic_integral,
+            "status": status,
+        })
+
+        if status in (HealthStatus.HEALTHY, HealthStatus.EXPLORING, HealthStatus.CONVERGING):
+            output_lines.append(message)
+            stagnant_count = 0
+            continue
+
+        if status == HealthStatus.STAGNANT:
+            stagnant_count += 1
+
+            if stagnant_count == 1 and current_intervention == "none":
+                # Stage 1: Autophagy
+                context = "\n".join(messages[:i]) or message
+                pruned_context, prune_result = autophagy.check_and_prune(
+                    context, max_tokens=4000,
+                )
+                context_pruned = prune_result is not None
+                current_intervention = "autophagy"
+
+                detail = (
+                    f"Pruned {prune_result.tokens_freed} tokens"
+                    if prune_result
+                    else "Context assessed, no pruning needed"
+                )
+                interventions.append({
+                    "stage": 1,
+                    "name": "Autophagy",
+                    "message_index": i + 1,
+                    "status": status.value,
+                    "detail": detail,
+                })
+
+                if context_pruned and prune_result:
+                    output_lines.append(
+                        f"[Context pruned: {prune_result.tokens_freed} tokens freed]"
+                    )
+            continue
+
+        if status == HealthStatus.CRITICAL:
+            critical_count += 1
+
+            if current_intervention in ("none", "autophagy"):
+                # Stage 2: Regeneration
+                regen_output = _run_regeneration(messages[:i])
+                worker_regenerated = regen_output is not None
+                current_intervention = "regeneration"
+
+                interventions.append({
+                    "stage": 2,
+                    "name": "Regeneration",
+                    "message_index": i + 1,
+                    "status": status.value,
+                    "detail": (
+                        f"Recovery output: {regen_output[:80]}"
+                        if regen_output
+                        else "Regeneration attempted"
+                    ),
+                })
+
+                if regen_output:
+                    output_lines.append(f"[Regenerated: {regen_output}]")
+                # Regeneration succeeded; remaining messages are post-recovery
+                continue
+
+            elif current_intervention == "regeneration":
+                # Stage 3: Abort
+                current_intervention = "abort"
+                final_success = False
+
+                interventions.append({
+                    "stage": 3,
+                    "name": "Abort",
+                    "message_index": i + 1,
+                    "status": status.value,
+                    "detail": "All interventions exhausted. Aborting.",
+                })
+                break
+
+    # ── Final status banner ──────────────────────────────────────────
+    final_status = measurements[-1]["status"] if measurements else HealthStatus.HEALTHY
+    inv_color, inv_label = INTERVENTION_STYLES.get(
+        current_intervention, ("#888", "UNKNOWN")
+    )
+
+    if final_success:
+        banner_color, banner_label = "#22c55e", "COMPLETED"
+    else:
+        banner_color, banner_label = "#ef4444", "ABORTED"
+
+    banner = (
+        f'<div style="padding:12px 16px;border-radius:8px;'
+        f"background:{banner_color}20;border:2px solid {banner_color};margin-bottom:8px\">"
+        f'<span style="font-size:1.3em;font-weight:700;color:{banner_color}">'
+        f"{banner_label}</span>"
+        f'<span style="color:#888;margin-left:12px">'
+        f"Messages: {len(measurements)} | "
+        f"Final status: {final_status.value} | "
+        f'Highest intervention: <span style="color:{inv_color};font-weight:600">'
+        f"{inv_label}</span></span><br>"
+        f'<span style="font-size:0.85em;color:#666">'
+        f"Stagnant messages: {stagnant_count} | "
+        f"Critical messages: {critical_count} | "
+        f"Context pruned: {'yes' if context_pruned else 'no'} | "
+        f"Worker regenerated: {'yes' if worker_regenerated else 'no'}"
+        f"</span></div>"
+    )
+
+    # ── Intervention timeline ────────────────────────────────────────
+    if interventions:
+        stage_colors = {1: "#f97316", 2: "#a855f7", 3: "#ef4444"}
+        lines = [
+            "### Intervention Timeline\n",
+            "| Stage | Intervention | At Message | Status | Detail |",
+            "| :---: | :--- | :---: | :--- | :--- |",
+        ]
+        for inv in interventions:
+            s_color = stage_colors.get(inv["stage"], "#888")
+            detail = inv["detail"].replace("|", "\\|")
+            lines.append(
+                f'| <span style="color:{s_color};font-weight:700">'
+                f'{inv["stage"]}</span> '
+                f'| {inv["name"]} '
+                f'| {inv["message_index"]} '
+                f"| {inv['status']} "
+                f"| {detail} |"
+            )
+        intervention_md = "\n".join(lines)
+    else:
+        intervention_md = (
+            "### Intervention Timeline\n\n"
+            "*No interventions triggered -- agent stayed healthy throughout.*"
+        )
+
+    # ── Epiplexity history ───────────────────────────────────────────
+    if measurements:
+        lines = [
+            "### Epiplexity History\n",
+            "| # | Message | Novelty | Perplexity | Epiplexity | Integral | Status |",
+            "| ---: | :--- | ---: | ---: | ---: | ---: | :--- |",
+        ]
+        for m in measurements:
+            preview = m["message"][:40] + "..." if len(m["message"]) > 40 else m["message"]
+            preview = preview.replace("|", "\\|")
+            lines.append(
+                f'| {m["index"]} '
+                f"| {preview} "
+                f'| {m["novelty"]:.3f} '
+                f'| {m["perplexity"]:.3f} '
+                f'| {m["epiplexity"]:.3f} '
+                f'| {m["integral"]:.3f} '
+                f"| {_status_badge(m['status'])} |"
+            )
+        epiplexity_md = "\n".join(lines)
+    else:
+        epiplexity_md = "*No measurements recorded.*"
+
+    # ── Diagnostic report ────────────────────────────────────────────
+    epiplexities = [m["epiplexity"] for m in measurements]
+    novelties = [m["novelty"] for m in measurements]
+
+    status_counts: dict[str, int] = {}
+    for m in measurements:
+        s = m["status"].value
+        status_counts[s] = status_counts.get(s, 0) + 1
+
+    status_breakdown = " | ".join(
+        f"**{k}**: {v}" for k, v in status_counts.items()
+    )
+
+    transitions = sum(
+        1
+        for j in range(1, len(measurements))
+        if measurements[j]["status"] != measurements[j - 1]["status"]
+    )
+
+    diag_lines = ["### Diagnostic Report\n"]
+    diag_lines.append("| Metric | Value |")
+    diag_lines.append("| :--- | :--- |")
+    diag_lines.append(f"| Total messages | {len(measurements)} |")
+    diag_lines.append(f"| Stagnant messages | {stagnant_count} |")
+    diag_lines.append(f"| Critical messages | {critical_count} |")
+    diag_lines.append(f"| Interventions applied | {len(interventions)} |")
+    diag_lines.append(f"| Context pruned | {'Yes' if context_pruned else 'No'} |")
+    diag_lines.append(f"| Worker regenerated | {'Yes' if worker_regenerated else 'No'} |")
+    diag_lines.append(f"| Status transitions | {transitions} |")
+
+    if epiplexities:
+        diag_lines.append(f"| Mean epiplexity | {sum(epiplexities) / len(epiplexities):.4f} |")
+        diag_lines.append(f"| Min epiplexity | {min(epiplexities):.4f} |")
+        diag_lines.append(f"| Max epiplexity | {max(epiplexities):.4f} |")
+        diag_lines.append(f"| Mean novelty | {sum(novelties) / len(novelties):.4f} |")
+        diag_lines.append(f"| Final integral | {measurements[-1]['integral']:.4f} |")
+
+    diag_lines.append(f"\n**Status distribution**: {status_breakdown}")
+
+    diag_lines.append("\n### Cascade Stages Explained\n")
+    diag_lines.append("| Stage | Intervention | Trigger | Action |")
+    diag_lines.append("| :---: | :--- | :--- | :--- |")
+    diag_lines.append(
+        "| 1 | Autophagy | STAGNANT detected | "
+        "Prune stale context to break the loop |"
+    )
+    diag_lines.append(
+        "| 2 | Regeneration | CRITICAL detected | "
+        "Kill stuck worker, spawn fresh one with summary |"
+    )
+    diag_lines.append(
+        "| 3 | Abort | Still CRITICAL after regeneration | "
+        "Give up with diagnostic report |"
+    )
+
+    diagnostic_md = "\n".join(diag_lines)
+
+    return banner, intervention_md, epiplexity_md, diagnostic_md
+
+
+# ── Gradio UI ────────────────────────────────────────────────────────────
+
+
+def build_app() -> gr.Blocks:
+    # Theme is a gr.Blocks constructor argument, not a launch() argument
+    with gr.Blocks(title="Epiplexity Healing Cascade", theme=gr.themes.Soft()) as app:
+        gr.Markdown(
+            "# Epiplexity Healing Cascade\n"
+            "Detect epistemic stagnation via the **EpiplexityMonitor** and "
+            "watch escalating interventions: autophagy, regeneration, abort."
+        )
+
+        with gr.Row():
+            preset_dd = gr.Dropdown(
+                choices=list(PRESETS.keys()),
+                value="Stagnant repetitive agent",
+                label="Preset",
+                scale=2,
+            )
+            run_btn = gr.Button("Run Cascade", variant="primary", scale=1)
+
+        messages_tb = gr.Textbox(
+            lines=8,
+            label="Messages (one per line)",
+            placeholder="Enter agent messages here, one per line...",
+        )
+
+        with gr.Row():
+            window_sl = gr.Slider(
+                3, 10, value=5, step=1, label="Window size",
+            )
+            thresh_sl = gr.Slider(
+                0.05, 0.5, value=0.2, step=0.01, label="Stagnation threshold",
+            )
+            max_msg_sl = gr.Slider(
+                5, 20, value=15, step=1, label="Max messages",
+            )
+
+        banner_html = gr.HTML(label="Status")
+        intervention_md = gr.Markdown(label="Intervention Timeline")
+        epiplexity_md = gr.Markdown(label="Epiplexity History")
+        diagnostic_md = gr.Markdown(label="Diagnostic Report")
+
+        # ── Event wiring ─────────────────────────────────────────────
+        preset_dd.change(
+            fn=_load_preset,
+            inputs=[preset_dd],
+            outputs=[messages_tb, window_sl, thresh_sl, max_msg_sl],
+        )
+
+        run_btn.click(
+            fn=run_epiplexity_cascade,
+            inputs=[preset_dd, messages_tb, window_sl, thresh_sl, max_msg_sl],
+            outputs=[banner_html, intervention_md, epiplexity_md, diagnostic_md],
+        )
+
+    return app
+
+
+if __name__ == "__main__":
+    app = build_app()
+    app.launch()
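The STAGNANT/CRITICAL readings above come from the EpiplexityMonitor's novelty measurement over a sliding window of recent message embeddings. The sketch below is a stdlib-only approximation of that idea (not operon-ai's actual implementation; `novelty` and `classify` are names introduced here), using the demo's default threshold of 0.2: a message whose embedding is nearly identical to something in the window has low novelty and reads as stagnant.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    # Standard cosine similarity; 0.0 if either vector is all-zero
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def novelty(vec: list[float], window: list[list[float]]) -> float:
    """1 - max cosine similarity against the recent window; 1.0 if empty."""
    if not window:
        return 1.0
    return 1.0 - max(cosine(vec, w) for w in window)

def classify(score: float, threshold: float = 0.2) -> str:
    """Below the threshold, the message adds almost nothing new."""
    return "STAGNANT" if score < threshold else "HEALTHY"

window = [[1.0, 0.0], [0.9, 0.1]]                  # embeddings of recent messages
repeat = classify(novelty([1.0, 0.05], window))    # near-duplicate of the window
fresh = classify(novelty([0.0, 1.0], window))      # orthogonal, genuinely new
```

Here `repeat` comes out "STAGNANT" and `fresh` comes out "HEALTHY", which mirrors why the "Critical deeply stuck agent" preset (ten identical messages) escalates while the "Healthy diverse agent" preset never triggers an intervention.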
requirements.txt ADDED
@@ -0,0 +1,2 @@
+gradio>=4.0
+operon-ai