coredipper committed on
Commit
b4053ba
·
verified ·
1 Parent(s): e399d07

Deploy operon-swarm-cleanup Gradio Space demo

Files changed (4)
  1. README.md +22 -5
  2. __pycache__/app.cpython-314.pyc +0 -0
  3. app.py +552 -0
  4. requirements.txt +2 -0
README.md CHANGED
@@ -1,12 +1,29 @@
 ---
-title: Operon Swarm Cleanup
-emoji: 📈
+title: Operon Swarm Graceful Cleanup
+emoji: "\U0001F9F9"
 colorFrom: green
-colorTo: pink
+colorTo: yellow
 sdk: gradio
-sdk_version: 6.5.1
+sdk_version: "6.5.1"
 app_file: app.py
 pinned: false
+license: mit
+short_description: Workers clean context via autophagy before death
 ---
 
-Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
+# Operon Swarm Graceful Cleanup
+
+LLM-powered swarm where dying workers clean up their context via autophagy before passing state to successors. Successors inherit clean summaries instead of raw noise.
+
+## Features
+
+- **Graceful cleanup**: AutophagyDaemon prunes context before worker death
+- **Clean state transfer**: HistoneStore saves summaries for successor inheritance
+- **Noise disposal**: Lysosome disposes extracted noise
+- **Presets**: Research with cleanup, context pollution comparison
+
+## Motifs Combined
+
+Nucleus + RegenerativeSwarm + AutophagyDaemon + MorphogenGradient + HistoneStore
+
+[GitHub](https://github.com/coredipper/operon) | [PyPI](https://pypi.org/project/operon-ai/)
__pycache__/app.cpython-314.pyc ADDED
Binary file (23.9 kB).
 
app.py ADDED
@@ -0,0 +1,552 @@
+"""
+Operon LLM Swarm with Graceful Cleanup -- Interactive Gradio Demo
+=================================================================
+
+Simulate an LLM-powered swarm where dying workers clean up their context
+via autophagy before passing state to successors. Successors inherit a
+clean summary instead of raw noise.
+
+Run locally:
+    pip install gradio
+    python space-swarm-cleanup/app.py
+
+Deploy to HuggingFace Spaces:
+    Copy this directory to a new HF Space with sdk=gradio.
+"""
+
+import sys
+from pathlib import Path
+from dataclasses import dataclass
+
+import gradio as gr
+
+# Allow importing operon_ai from the repo root when running locally
+_repo_root = Path(__file__).resolve().parent.parent
+if str(_repo_root) not in sys.path:
+    sys.path.insert(0, str(_repo_root))
+
+from operon_ai import HistoneStore, Lysosome, Waste, WasteType, MarkerType
+from operon_ai.organelles.nucleus import Nucleus
+from operon_ai.providers import MockProvider, ProviderConfig
+from operon_ai.coordination.morphogen import MorphogenGradient, MorphogenType
+from operon_ai.healing import (
+    RegenerativeSwarm,
+    SimpleWorker,
+    WorkerMemory,
+    AutophagyDaemon,
+    create_default_summarizer,
+    create_simple_summarizer,
+)
+
+
+# ── Data structures ──────────────────────────────────────────────────────
+
+
+@dataclass
+class CleanupRecord:
+    """Record of a worker's cleanup before death."""
+    worker_id: str
+    context_before: int  # chars
+    context_after: int   # chars
+    tokens_freed: int
+    summary_stored: str
+    noise_disposed: int
+
+
+# ── LLM Swarm Worker Factory ────────────────────────────────────────────
+
+
+class LLMSwarmWorkerFactory:
+    """
+    Factory that creates LLM-powered workers with graceful cleanup.
+
+    Each worker:
+    1. Uses Nucleus + MockProvider for "LLM" responses
+    2. Accumulates context from responses
+    3. Before dying, runs autophagy to clean context
+    4. Stores clean summary in HistoneStore for successors
+    """
+
+    def __init__(
+        self,
+        responses: dict[str, str],
+        gradient: MorphogenGradient,
+        toxicity_threshold: float = 0.6,
+    ):
+        self.gradient = gradient
+
+        # Shared state across workers
+        self.histone_store = HistoneStore()
+        self.lysosome = Lysosome(silent=True)
+        self.autophagy = AutophagyDaemon(
+            histone_store=self.histone_store,
+            lysosome=self.lysosome,
+            summarizer=create_simple_summarizer(),
+            toxicity_threshold=toxicity_threshold,
+            silent=True,
+        )
+
+        # Nucleus for LLM calls
+        self.nucleus = Nucleus(provider=MockProvider(responses=responses))
+
+        # Tracking
+        self._cleanup_records: list[CleanupRecord] = []
+        self._worker_count = 0
+        self._worker_timeline: list[dict] = []
+
+    def create_worker(self, name: str, memory_hints: list[str]) -> SimpleWorker:
+        """Create a cleanup-aware worker."""
+        self._worker_count += 1
+        generation = self._worker_count
+
+        # Check if we have hints from predecessor (via summarizer or histone)
+        inherited_context = ""
+        if memory_hints:
+            retrieval = self.histone_store.retrieve_context(
+                " ".join(memory_hints[:3]),
+                limit=3,
+            )
+            if retrieval.formatted_context:
+                inherited_context = retrieval.formatted_context
+            else:
+                inherited_context = "; ".join(memory_hints)
+
+        has_ctx = bool(inherited_context)
+        self._worker_timeline.append({
+            "worker": name,
+            "generation": generation,
+            "event": "spawned",
+            "detail": "with inherited context" if has_ctx else "fresh start",
+        })
+
+        # Build worker context
+        accumulated_context: list[str] = []
+        if inherited_context:
+            accumulated_context.append(
+                f"[Inherited summary]: {inherited_context[:200]}"
+            )
+
+        factory_ref = self
+
+        def work(task: str, memory: WorkerMemory) -> str:
+            step = len(memory.output_history)
+
+            # Simulate LLM response
+            prompt_key = f"step_{step}"
+            try:
+                response = factory_ref.nucleus.transcribe(
+                    prompt_key,
+                    config=ProviderConfig(temperature=0.0, max_tokens=256),
+                )
+                output = response.content
+            except Exception:
+                output = f"Processing step {step}..."
+
+            # Accumulate context
+            accumulated_context.append(output)
+
+            # Update gradient
+            factory_ref.gradient.set(
+                MorphogenType.CONFIDENCE,
+                max(0.1, 1.0 - step * 0.15),
+            )
+
+            # Workers with inherited context solve faster
+            if inherited_context and generation >= 2:
+                if step == 0:
+                    factory_ref._worker_timeline.append({
+                        "worker": name,
+                        "generation": generation,
+                        "event": "strategy",
+                        "detail": f"Starting from inherited summary (gen {generation})",
+                    })
+                    return f"STRATEGY: Starting from inherited summary (gen {generation})"
+                elif step == 1:
+                    factory_ref._worker_timeline.append({
+                        "worker": name,
+                        "generation": generation,
+                        "event": "progress",
+                        "detail": "Building on predecessor's work",
+                    })
+                    return "PROGRESS: Building on predecessor's work"
+                elif step >= 2:
+                    # Run cleanup before returning success
+                    factory_ref._cleanup_worker(
+                        name, "\n".join(accumulated_context),
+                    )
+                    factory_ref._worker_timeline.append({
+                        "worker": name,
+                        "generation": generation,
+                        "event": "solved",
+                        "detail": "DONE with clean state inheritance",
+                    })
+                    return "DONE: Completed with clean state inheritance!"
+
+            # Default: accumulate noise, get stuck (identical output)
+            factory_ref._worker_timeline.append({
+                "worker": name,
+                "generation": generation,
+                "event": "stuck",
+                "detail": "Still processing (identical output)",
+            })
+            return "THINKING: Still processing..."
+
+        return SimpleWorker(id=name, work_function=work)
+
+    def _cleanup_worker(self, worker_id: str, context: str) -> CleanupRecord:
+        """Run graceful cleanup before worker death."""
+        context_before = len(context)
+
+        # Run autophagy
+        cleaned_context, prune_result = self.autophagy.check_and_prune(
+            context, max_tokens=2000,
+        )
+
+        tokens_freed = prune_result.tokens_freed if prune_result else 0
+        summary = cleaned_context[:300] if cleaned_context else context[:100]
+
+        # Store clean summary in HistoneStore
+        self.histone_store.add_marker(
+            content=f"Worker {worker_id} summary: {summary}",
+            marker_type=MarkerType.ACETYLATION,
+            tags=["worker_summary", worker_id],
+            context=f"Cleanup from {worker_id} before apoptosis",
+        )
+
+        # Dispose noise via Lysosome
+        noise_count = 0
+        if prune_result and prune_result.tokens_freed > 0:
+            self.lysosome.ingest(Waste(
+                waste_type=WasteType.EXPIRED_CACHE,
+                content=f"Noise from {worker_id}: {tokens_freed} tokens",
+                source=worker_id,
+            ))
+            digest = self.lysosome.digest()
+            noise_count = digest.disposed
+
+        record = CleanupRecord(
+            worker_id=worker_id,
+            context_before=context_before,
+            context_after=len(cleaned_context),
+            tokens_freed=tokens_freed,
+            summary_stored=summary[:100],
+            noise_disposed=noise_count,
+        )
+        self._cleanup_records.append(record)
+        return record
+
+    def get_cleanup_records(self) -> list[CleanupRecord]:
+        return list(self._cleanup_records)
+
+    def get_worker_timeline(self) -> list[dict]:
+        return list(self._worker_timeline)
+
+
+# ── Presets ──────────────────────────────────────────────────────────────
+
+PRESETS: dict[str, dict] = {
+    "(custom)": {
+        "description": "Configure your own swarm parameters.",
+        "entropy_threshold": 0.9,
+        "max_steps": 5,
+        "max_regenerations": 3,
+        "responses": {
+            "step_0": "Initial research findings on the topic.",
+            "step_1": "Deeper analysis reveals three key factors.",
+            "step_2": "Cross-referencing sources confirms hypothesis.",
+            "step_3": "Still processing...",
+            "step_4": "Still processing...",
+        },
+    },
+    "Research with cleanup": {
+        "description": "Worker accumulates noisy context, gets stuck, cleans up, and dies. Successor inherits clean summary and completes the task.",
+        "entropy_threshold": 0.9,
+        "max_steps": 5,
+        "max_regenerations": 3,
+        "responses": {
+            "step_0": "Initial research findings on the topic.",
+            "step_1": "Deeper analysis reveals three key factors.",
+            "step_2": "Cross-referencing sources confirms hypothesis.",
+            "step_3": "Still processing...",
+            "step_4": "Still processing...",
+        },
+    },
+    "Context pollution comparison": {
+        "description": "Compare how context cleanup prevents noise from degrading successor performance across generations.",
+        "entropy_threshold": 0.9,
+        "max_steps": 5,
+        "max_regenerations": 3,
+        "responses": {
+            "step_0": "Finding relevant data...",
+            "step_1": "Analyzing patterns in data...",
+            "step_2": "Drawing conclusions...",
+            "step_3": "Still processing...",
+        },
+    },
+    "Fast cleanup": {
+        "description": "Low entropy threshold triggers faster worker turnover. Cleanup keeps context lean across rapid regenerations.",
+        "entropy_threshold": 0.6,
+        "max_steps": 3,
+        "max_regenerations": 3,
+        "responses": {
+            "step_0": "Quick scan of available data.",
+            "step_1": "Preliminary results ready.",
+            "step_2": "Done.",
+        },
+    },
+    "Multi-generation": {
+        "description": "High regeneration limit allows many worker generations. Each cleans up before dying, building a rich HistoneStore.",
+        "entropy_threshold": 0.9,
+        "max_steps": 4,
+        "max_regenerations": 5,
+        "responses": {
+            "step_0": "Generation checkpoint: scanning knowledge base.",
+            "step_1": "Aggregating findings from prior workers.",
+            "step_2": "Synthesizing cross-generation insights.",
+            "step_3": "Still processing...",
+        },
+    },
+}
+
+
+def _load_preset(name: str) -> tuple[float, int, int]:
+    p = PRESETS.get(name, PRESETS["(custom)"])
+    return p["entropy_threshold"], p["max_steps"], p["max_regenerations"]
+
+
+# ── Core simulation ─────────────────────────────────────────────────────
+
+_EVENT_COLORS: dict[str, str] = {
+    "spawned": "#3b82f6",
+    "strategy": "#8b5cf6",
+    "progress": "#eab308",
+    "solved": "#22c55e",
+    "stuck": "#f97316",
+}
+
+
+def run_swarm(
+    preset_name: str,
+    entropy_threshold: float,
+    max_steps: int,
+    max_regenerations: int,
+) -> tuple[str, str, str, str]:
+    """Run the LLM swarm with graceful cleanup simulation.
+
+    Returns (result_banner, worker_timeline_html, cleanup_records_md, gradient_md).
+    """
+    p = PRESETS.get(preset_name, PRESETS["(custom)"])
+    responses = p["responses"]
+
+    gradient = MorphogenGradient()
+
+    factory = LLMSwarmWorkerFactory(
+        responses=responses,
+        gradient=gradient,
+    )
+
+    swarm = RegenerativeSwarm(
+        worker_factory=factory.create_worker,
+        summarizer=create_default_summarizer(),
+        entropy_threshold=entropy_threshold,
+        max_steps_per_worker=int(max_steps),
+        max_regenerations=int(max_regenerations),
+        silent=True,
+    )
+
+    result = swarm.supervise("Research the impact of morphogen gradients")
+
+    # ── Result banner ────────────────────────────────────────────────
+    if result.success:
+        color, label = "#22c55e", "SUCCESS"
+        detail = f"Output: {result.output}"
+    else:
+        color, label = "#ef4444", "FAILURE"
+        detail = (
+            f"Swarm exhausted {result.total_workers_spawned} workers "
+            f"without solving the task."
+        )
+
+    cleanups = factory.get_cleanup_records()
+    banner = (
+        f'<div style="padding:12px 16px;border-radius:8px;'
+        f'background:{color}20;border:2px solid {color};margin-bottom:8px">'
+        f'<span style="font-size:1.3em;font-weight:700;color:{color}">'
+        f"{label}</span>"
+        f'<span style="color:#888;margin-left:12px">'
+        f"Workers spawned: {result.total_workers_spawned} | "
+        f"Cleanups performed: {len(cleanups)}</span><br>"
+        f'<span style="font-size:0.9em">{detail}</span></div>'
+    )
+
+    # ── Worker timeline HTML table ───────────────────────────────────
+    timeline = factory.get_worker_timeline()
+    timeline_rows = []
+    for entry in timeline:
+        ec = _EVENT_COLORS.get(entry["event"], "#888")
+        timeline_rows.append(
+            f'<tr>'
+            f'<td style="padding:4px 8px;font-family:monospace">{entry["worker"]}</td>'
+            f'<td style="padding:4px 8px;text-align:center">{entry["generation"]}</td>'
+            f'<td style="padding:4px 8px">'
+            f'<span style="background:{ec}20;color:{ec};padding:1px 6px;'
+            f'border-radius:3px;font-size:0.85em">{entry["event"]}</span></td>'
+            f'<td style="padding:4px 8px">{entry["detail"]}</td>'
+            f'</tr>'
+        )
+
+    if timeline_rows:
+        timeline_html = (
+            '<table style="width:100%;border-collapse:collapse;font-size:0.9em">'
+            '<tr style="background:#f0f0f0">'
+            '<th style="padding:6px 8px;text-align:left">Worker</th>'
+            '<th style="padding:6px 8px;text-align:center">Gen</th>'
+            '<th style="padding:6px 8px;text-align:left">Event</th>'
+            '<th style="padding:6px 8px;text-align:left">Detail</th></tr>'
+            + "".join(timeline_rows)
+            + "</table>"
+        )
+    else:
+        timeline_html = '<p style="color:#888">No timeline data captured.</p>'
+
+    # ── Cleanup records markdown ─────────────────────────────────────
+    if cleanups:
+        cleanup_lines = ["### Cleanup Records\n"]
+        cleanup_lines.append(
+            "| Worker | Context Before | Context After | Tokens Freed | Summary Stored |"
+        )
+        cleanup_lines.append(
+            "| :--- | ---: | ---: | ---: | :--- |"
+        )
+        for rec in cleanups:
+            summary_preview = rec.summary_stored[:60]
+            summary_preview = summary_preview.replace("|", "\\|")
+            cleanup_lines.append(
+                f"| `{rec.worker_id}` | {rec.context_before} chars "
+                f"| {rec.context_after} chars | {rec.tokens_freed} "
+                f"| {summary_preview} |"
+            )
+        cleanup_lines.append("")
+        cleanup_lines.append("### How Cleanup Works\n")
+        cleanup_lines.append("1. **AutophagyDaemon** prunes stale/noisy context")
+        cleanup_lines.append("2. **Lysosome** disposes of extracted waste")
+        cleanup_lines.append("3. **HistoneStore** saves the clean summary for successors")
+        cleanup_lines.append(
+            "4. Successor workers inherit summaries, not raw noise"
+        )
+        cleanup_md = "\n".join(cleanup_lines)
+    else:
+        cleanup_md = (
+            "*No cleanup records -- first worker solved the task "
+            "without needing regeneration.*"
+        )
+
+    # ── Gradient evolution markdown ──────────────────────────────────
+    gradient_lines = ["### Morphogen Gradient (Final State)\n"]
+    gradient_lines.append("| Signal | Value | Level |")
+    gradient_lines.append("| :--- | ---: | :--- |")
+
+    for mtype in [
+        MorphogenType.CONFIDENCE,
+        MorphogenType.ERROR_RATE,
+        MorphogenType.COMPLEXITY,
+        MorphogenType.URGENCY,
+    ]:
+        val = gradient.get(mtype)
+        level = gradient.get_level(mtype)
+
+        if mtype == MorphogenType.CONFIDENCE:
+            color = "#22c55e" if val > 0.5 else "#ef4444"
+        elif mtype == MorphogenType.ERROR_RATE:
+            color = "#ef4444" if val > 0.3 else "#22c55e"
+        else:
+            color = "#888"
+
+        gradient_lines.append(
+            f'| {mtype.value} '
+            f'| <span style="color:{color}">{val:.3f}</span> '
+            f"| {level} |"
+        )
+
+    gradient_lines.append("\n### Swarm Statistics\n")
+    gradient_lines.append(f"- **Total workers spawned**: {result.total_workers_spawned}")
+    gradient_lines.append(f"- **Apoptosis events**: {len(result.apoptosis_events)}")
+    gradient_lines.append(f"- **Regeneration events**: {len(result.regeneration_events)}")
+    gradient_lines.append(f"- **HistoneStore markers**: stored {len(cleanups)} summaries")
+
+    if result.apoptosis_events:
+        gradient_lines.append("\n### Apoptosis Events\n")
+        for evt in result.apoptosis_events:
+            gradient_lines.append(
+                f"- **`{evt.worker_id}`**: {evt.reason.value}"
+            )
+            if evt.memory_summary:
+                for hint in evt.memory_summary:
+                    gradient_lines.append(f"  - _{hint}_")
+
+    gradient_md = "\n".join(gradient_lines)
+
+    return banner, timeline_html, cleanup_md, gradient_md
+
+
+# ── Gradio UI ────────────────────────────────────────────────────────────
+
+
+def build_app() -> gr.Blocks:
+    # theme belongs on the Blocks constructor, not on launch()
+    with gr.Blocks(
+        title="LLM Swarm with Graceful Cleanup", theme=gr.themes.Soft()
+    ) as app:
+        gr.Markdown(
+            "# 🧹 LLM Swarm with Graceful Cleanup\n"
+            "Simulate an LLM-powered swarm where dying workers clean up "
+            "context via **autophagy** before passing state to successors. "
+            "Successors inherit a **clean summary** instead of raw noise."
+        )
+
+        with gr.Row():
+            preset_dd = gr.Dropdown(
+                choices=list(PRESETS.keys()),
+                value="Research with cleanup",
+                label="Preset",
+                scale=2,
+            )
+            run_btn = gr.Button("Run Swarm", variant="primary", scale=1)
+
+        with gr.Row():
+            entropy_sl = gr.Slider(
+                0.5, 1.0, value=0.9, step=0.05, label="Entropy threshold"
+            )
+            steps_sl = gr.Slider(
+                3, 10, value=5, step=1, label="Max steps per worker"
+            )
+            regens_sl = gr.Slider(
+                1, 5, value=3, step=1, label="Max regenerations"
+            )
+
+        banner_html = gr.HTML(label="Result")
+        gr.Markdown("### Worker Timeline")
+        timeline_html = gr.HTML(label="Timeline")
+
+        with gr.Row():
+            with gr.Column():
+                cleanup_md = gr.Markdown(label="Cleanup Records")
+            with gr.Column():
+                gradient_md = gr.Markdown(label="Gradient Evolution")
+
+        # ── Event wiring ─────────────────────────────────────────────
+        preset_dd.change(
+            fn=_load_preset,
+            inputs=[preset_dd],
+            outputs=[entropy_sl, steps_sl, regens_sl],
+        )
+
+        run_btn.click(
+            fn=run_swarm,
+            inputs=[preset_dd, entropy_sl, steps_sl, regens_sl],
+            outputs=[banner_html, timeline_html, cleanup_md, gradient_md],
+        )
+
+    return app
+
+
+if __name__ == "__main__":
+    app = build_app()
+    app.launch()
requirements.txt ADDED
@@ -0,0 +1,2 @@
+gradio>=4.0
+operon-ai
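The graceful-cleanup loop in `app.py` leans on operon-ai's `AutophagyDaemon`, `HistoneStore`, and `Lysosome`. The core pattern can be sketched library-free; everything below (`prune_context`, `CleanupLedger`) is illustrative and not part of operon-ai:

```python
def prune_context(lines: list[str], noise_marker: str = "Still processing") -> tuple[list[str], list[str]]:
    """Split a worker's accumulated context into signal and noise (stands in for AutophagyDaemon)."""
    signal = [l for l in lines if noise_marker not in l]
    noise = [l for l in lines if noise_marker in l]
    return signal, noise


class CleanupLedger:
    """Stands in for HistoneStore: holds clean summaries for successor workers."""

    def __init__(self) -> None:
        self.summaries: list[str] = []

    def store(self, worker_id: str, signal: list[str]) -> None:
        # Successors inherit a short summary, not the raw transcript.
        self.summaries.append(f"{worker_id}: " + " | ".join(signal)[:200])

    def inherit(self) -> str:
        return self.summaries[-1] if self.summaries else ""


# A dying worker prunes its context, stores the summary, and drops the noise.
ledger = CleanupLedger()
context = [
    "Initial research findings on the topic.",
    "Still processing...",
    "Deeper analysis reveals three key factors.",
    "Still processing...",
]
signal, noise = prune_context(context)
ledger.store("worker_1", signal)  # graceful cleanup before "death"
print(ledger.inherit())
```

The demo's real implementation adds toxicity thresholds, token accounting, and waste digestion on top of this skeleton.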