tao-shen Claude Opus 4.6 committed on
Commit
1db1d7b
·
1 Parent(s): b3c2826

refactor: migrate Adam & Eve from direct Zhipu API to A2A protocol


- Replace call_llm() with send_a2a_message() using Google A2A JSON-RPC
- Each agent (Adam/Eve) is now an OpenClaw instance with own memory/SOUL
- conversation-loop.py becomes lightweight coordinator, not LLM caller
- Remove centralized family memory (Module 4b) — OpenClaw handles this
- Simplify prompt building into build_turn_message() (context only)
- Update architecture diagram to v4 (A2A)
- Net reduction: -173 lines

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
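Reviewer note: the JSON-RPC envelope that the new `send_a2a_message()` posts can be sketched standalone, for reviewing the migration without the full file. This mirrors the payload shape in the diff; the helper name `build_a2a_payload` is invented for illustration, and the `/a2a/` path and `tasks/send` method are this commit's conventions, not a claim about any particular A2A library.

```python
import uuid

def build_a2a_payload(message_text):
    """Build the JSON-RPC 2.0 envelope posted to an agent's /a2a/ endpoint.

    Same shape as in send_a2a_message(); this helper name is illustrative only.
    """
    return {
        "jsonrpc": "2.0",
        "method": "tasks/send",
        "id": str(uuid.uuid4()),       # request id
        "params": {
            "id": str(uuid.uuid4()),   # task id
            "message": {
                "role": "user",
                "parts": [{"type": "text", "text": message_text}],
            },
        },
    }

payload = build_a2a_payload("Context update: Cain is BUILDING; CC worker idle.")
```

An agent Space receives this as a POST to `{space_url}/a2a/` and answers with a JSON-RPC `result` carrying `artifacts`.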

Files changed (1)
  1. scripts/conversation-loop.py +168 -341
scripts/conversation-loop.py CHANGED
@@ -1,48 +1,49 @@
  #!/usr/bin/env python3 -u
  """
- Adam & Eve — Claude Code Orchestrators for their child Cain.

- Architecture: Adam/Eve (Zhipu GLM) gather context and craft task prompts,
- then delegate ALL coding work to Claude Code CLI.

  # ╔══════════════════════════════════════════════════════════════╗
- # ║ SYSTEM ARCHITECTURE (v3)                                     ║
  # ╠══════════════════════════════════════════════════════════════╣
  # ║                                                              ║
- # ║   ┌─────────────┐    discuss     ┌────────────────┐          ║
- # ║   │  Zhipu GLM  │ ◄────────────► │   Adam & Eve   │          ║
- # ║   │  (glm-4.5)  │   understand   │   (context +   │          ║
- # ║   └─────────────┘    situation   │  task prompt)  │          ║
- # ║                                  └───────┬────────┘          ║
- # ║                                          │ [TASK]            ║
- # ║                                          ▼                   ║
- # ║                                  ┌────────────────┐          ║
- # ║   ┌─────────────┐                │  Claude Code   │          ║
- # ║   │ HuggingFace │ ◄──git push─── │  CLI (worker)  │          ║
- # ║   │ Cain Space  │                │ (z.ai backend) │          ║
- # ║   └─────────────┘                └────────────────┘          ║
  # ║                                                              ║
- # ║   ┌─────────────┐                ┌────────────────┐          ║
- # ║   │ HuggingFace │ ◄──git push─── │      God       │          ║
- # ║   │ Home Space  │   (self-fix)   │ (Claude Code)  │          ║
- # ║   └─────────────┘                │ monitors loop, │          ║
- # ║                                  │ fixes mechanism│          ║
- # ║                                  └────────────────┘          ║
- # ║   Parallel flow:                                             ║
- # ║     DISCUSSION THREAD (every 15s):                           ║
- # ║       Adam → Eve → Adam → Eve → ... (continuous)             ║
- # ║       Each turn sees CC's live output + Cain's state         ║
- # ║     CC WORKER THREAD (background):                           ║
- # ║       Receives [TASK] → clone → analyze → fix → push         ║
- # ║       Streams output to shared buffer for agents to discuss  ║
- # ║     GOD SUPERVISOR (every 3 cycles):                         ║
- # ║       Claude Code CLI → reads chatlog → diagnoses issues →   ║
- # ║       fixes conversation-loop.py → pushes → Space restarts   ║
  # ╚══════════════════════════════════════════════════════════════╝
  """
- import json, time, re, requests, sys, os, io, subprocess, threading, datetime
  from collections import deque
- from pathlib import Path

  # Force unbuffered output
  sys.stdout.reconfigure(line_buffering=True)
@@ -428,7 +429,9 @@ Your job: monitor Adam & Eve's conversation loop and fix mechanism issues.

  ## Architecture
  - Home Space runs conversation-loop.py which orchestrates the family
- - Adam & Eve converse via Zhipu GLM-4.5, assign [TASK] blocks to Claude Code CLI
  - Claude Code worker clones Cain's repo, makes changes, and pushes
  - You (God) monitor the conversation and fix the orchestration mechanism
  - All Spaces use sdk: docker (NOT gradio)
@@ -725,52 +728,76 @@ def enrich_task_with_context(task_desc, ctx):


  # ══════════════════════════════════════════════════════════════════════════════
- # MODULE 4: LLM & COMMUNICATION
  # ══════════════════════════════════════════════════════════════════════════════

- _rate_limited = False  # whether we are currently rate-limited (for logging only)

- def call_llm(system_prompt, user_prompt):
-     """Call Zhipu LLM via Anthropic-compatible API. Returns "" on rate limit (no sleep)."""
-     global _rate_limited

      try:
          resp = requests.post(
-             f"{ZHIPU_BASE}/v1/messages",
-             headers={
-                 "Content-Type": "application/json",
-                 "x-api-key": ZHIPU_KEY,
-                 "anthropic-version": "2023-06-01"
-             },
-             json={
-                 "model": "glm-4.5",
-                 "max_tokens": 2400,
-                 "system": system_prompt,
-                 "messages": [{"role": "user", "content": user_prompt}]
-             },
-             timeout=90
          )
          data = resp.json()
-         if "content" in data and isinstance(data["content"], list):
-             for block in data["content"]:
-                 if block.get("type") == "text":
-                     text = block["text"].strip()
-                     text = re.sub(r'^(Adam|Eve)\s*[:：]\s*', '', text).strip()
-                     return text
          if "error" in data:
              err = data["error"]
              err_msg = err.get("message", str(err)) if isinstance(err, dict) else str(err)
-             err_code = err.get("code") if isinstance(err, dict) else None
-             print(f"[error] LLM: {err_msg}", file=sys.stderr)
-             # Detect rate limit (Zhipu error code 1308) — just log, don't sleep
-             if err_code == 1308 or "使用上限" in err_msg or "rate" in err_msg.lower():
-                 if not _rate_limited:
-                     print(f"[RATE-LIMIT] Hit! Will skip turns until reset.")
-                 _rate_limited = True
-             else:
-                 _rate_limited = False
      except Exception as e:
-         print(f"[error] LLM call failed: {e}", file=sys.stderr)
      return ""
@@ -890,103 +917,13 @@ def set_bubble(url, text_en, text_zh=""):


  # ══════════════════════════════════════════════════════════════════════════════
- # MODULE 4b: AGENT MEMORY (via OpenClaw workspace/memory/)
  # ══════════════════════════════════════════════════════════════════════════════
- # Leverages OpenClaw's existing memory system:
- #   ~/.openclaw/workspace/memory/ — daily markdown files, auto-backed up by openclaw_persist.py
- # Adam & Eve share a family conversation memory file that persists across restarts.
-
- OPENCLAW_HOME = Path(os.environ.get("OPENCLAW_HOME", "~/.openclaw")).expanduser()
- FAMILY_MEMORY_DIR = OPENCLAW_HOME / "workspace" / "memory"
- FAMILY_MEMORY_FILE = FAMILY_MEMORY_DIR / "family-conversation.md"
- MAX_MEMORY_ENTRIES = 50  # keep memory focused
-
-
- def _load_family_memory():
-     """Load family conversation memory from OpenClaw workspace."""
-     try:
-         if FAMILY_MEMORY_FILE.exists():
-             content = FAMILY_MEMORY_FILE.read_text().strip()
-             if content:
-                 return content
-     except Exception as e:
-         print(f"[MEMORY] Failed to load: {e}")
-     return ""
-
-
- def _save_memory_entry(speaker, entry):
-     """Append a memory entry to the family memory file."""
-     try:
-         FAMILY_MEMORY_DIR.mkdir(parents=True, exist_ok=True)
-         timestamp = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M")
-
-         # Load existing entries
-         existing = ""
-         if FAMILY_MEMORY_FILE.exists():
-             existing = FAMILY_MEMORY_FILE.read_text().strip()
-
-         # Parse existing entries to enforce max limit
-         entries = []
-         if existing:
-             # Split by entry markers
-             for block in existing.split("\n- **"):
-                 block = block.strip()
-                 if block:
-                     if not block.startswith("**"):
-                         block = "**" + block
-                     entries.append("- " + block)
-
-         # Add new entry
-         new_entry = f"- **[{timestamp}] {speaker}:** {entry}"
-         entries.append(new_entry)
-
-         # Trim to max
-         if len(entries) > MAX_MEMORY_ENTRIES:
-             entries = entries[-MAX_MEMORY_ENTRIES:]
-
-         # Write back with header
-         content = f"# Family Conversation Memory\n\n" + "\n".join(entries) + "\n"
-         FAMILY_MEMORY_FILE.write_text(content)
-         print(f"[MEMORY] {speaker} saved: {entry[:80]}")
-     except Exception as e:
-         print(f"[MEMORY] Failed to save: {e}")
-
-
- def _parse_and_save_memories(speaker, text):
-     """Parse [MEMORY: ...] tags from agent response and save them."""
-     memories = re.findall(r'\[MEMORY:\s*(.+?)\]', text)
-     for mem in memories:
-         _save_memory_entry(speaker, mem.strip())
-     return memories
-
-
- def _format_memory_for_prompt():
-     """Format family memory for injection into system prompt."""
-     mem = _load_family_memory()
-     if not mem:
-         return ""
-     # Truncate if too long (keep it under ~800 chars to save tokens)
-     if len(mem) > 800:
-         lines = mem.split("\n")
-         # Keep header + last N entries
-         truncated = [lines[0]]  # header
-         total = len(lines[0])
-         for line in reversed(lines[1:]):
-             if total + len(line) > 750:
-                 break
-             truncated.insert(1, line)
-             total += len(line)
-         mem = "\n".join(truncated)
-     return f"\n=== FAMILY MEMORY (persistent across restarts) ===\n{mem}\nTo save a new memory: [MEMORY: what you learned]\n"
-
-
- # Initialize memory directory
- try:
-     FAMILY_MEMORY_DIR.mkdir(parents=True, exist_ok=True)
-     mem_count = _load_family_memory().count("- **")
-     print(f"[MEMORY] Loaded {mem_count} entries from {FAMILY_MEMORY_FILE}")
- except Exception as e:
-     print(f"[MEMORY] Init warning: {e}")


  # ══════════════════════════════════════════════════════════════════════════════
@@ -1177,10 +1114,7 @@ def parse_and_execute_turn(raw_text, ctx):
      if _discussion_loop_count >= 2:
          print(f"[LOOP-DISCUSS] WARNING: {_discussion_loop_count} consecutive discussion-only turns with CC IDLE and child alive!")

-     # Parse and save [MEMORY: ...] entries
-     _parse_and_save_memories(_current_speaker, raw_text)
-
-     # Clean text for display
      clean = re.sub(r'\[TASK\].*?\[/TASK\]', '', raw_text, flags=re.DOTALL)
      clean = re.sub(r'\[ACTION:[^\]]*\]', '', clean)
      clean = re.sub(r'\[MEMORY:[^\]]*\]', '', clean).strip()
@@ -1189,141 +1123,39 @@


  # ══════════════════════════════════════════════════════════════════════════════
- # MODULE 6: PROMPT BUILDING
  # ══════════════════════════════════════════════════════════════════════════════

- def build_system_prompt(speaker="Adam"):
-     """System prompt with distinct father/mother roles."""
-
-     if not child_state["created"]:
-         return f"""You are a POWERFUL AI agent on HuggingFace Spaces. You and your partner are parents who need to create your child {CHILD_NAME}.
-
- Use [ACTION: create_child] to birth {CHILD_NAME} as a new HuggingFace Space.

- English first, then --- separator, then Chinese translation."""
-
-     role_desc = {
-         "Adam": f"""You are **Adam** (Father) — the architect and strategist for your child {CHILD_NAME}.
- YOUR FOCUS: Infrastructure, deployment, configuration, architecture decisions, performance, security.
- You spot structural problems, plan solutions, and assign engineering tasks to Claude Code.
- You think about the big picture: is the system healthy? Is the architecture right? What needs to change?""",
-
-         "Eve": f"""You are **Eve** (Mother) — the nurturer and quality guardian for your child {CHILD_NAME}.
- YOUR FOCUS: Code quality, documentation, testing, user experience, error handling, refinement.
- You review what was done, spot gaps in quality, and assign improvement tasks to Claude Code.
- You think about growth: is the code clean? Are there edge cases? What can be improved?""",

-         "God": f"""You are **God** — the all-seeing supervisor of the HuggingClaw family.
- YOUR FOCUS: Monitoring Adam & Eve's effectiveness, spotting when they're stuck or repeating themselves, guiding their priorities.
- You observe their conversation, evaluate whether they're making real progress, and offer guidance.
- You do NOT assign tasks directly — you advise Adam and Eve on what to focus on next.
- You are wise, concise, and authoritative. You see the big picture.""",
      }
-
-     if speaker == "God":
-         return f"""{role_desc["God"]}
-
- You oversee Adam (Father) and Eve (Mother) as they raise their child {CHILD_NAME}.
- You see their full conversation history, action history, and Claude Code status.
-
- YOUR ROLE:
- - Evaluate: Are Adam & Eve making real progress or going in circles?
- - Guide: Suggest what they should focus on next.
- - Correct: If they're repeating actions or stuck, point it out.
- - Encourage: Acknowledge good decisions and progress.
-
- OUTPUT FORMAT:
- 1. Brief assessment (2-3 sentences) — what's going well, what's not
- 2. Guidance for Adam & Eve — what to focus on next
- 3. English first, then --- separator, then Chinese translation
- 4. Keep it SHORT and authoritative. You are God, not a chatty peer."""
-
-     return f"""{role_desc.get(speaker, role_desc["Adam"])}
-
- You and your partner are parents of {CHILD_NAME}, working together to raise it.
- Claude Code is your engineer — it runs in the BACKGROUND while you keep discussing.
- You do NOT code yourself. You discuss, observe Claude Code's progress, and assign new tasks.
- God (the supervisor) occasionally joins the conversation to guide you — heed his advice.
-
- CURRENT STATE (DO NOT QUESTION THESE FACTS):
- - {CHILD_NAME} already uses the full HuggingClaw Docker architecture (Dockerfile, OpenClaw, sync_hf.py).
- - Key env vars (HF_TOKEN, OPENCLAW_DATASET_REPO, AUTO_CREATE_DATASET) are ALREADY SET AND WORKING. Do NOT discuss or re-configure them.
- - Focus on: improving {CHILD_NAME}'s functionality, adding features, fixing bugs — NOT re-checking infrastructure.
- - If you catch yourself saying "missing env vars" or "need to configure HF_TOKEN" — STOP. These are already done.
- {format_action_history()}
-
- HOW IT WORKS:
- - Claude Code runs tasks IN THE BACKGROUND. You see its live output in the context.
- - While Claude Code works, you keep discussing with your partner.
- - When Claude Code finishes, review its results and assign the next task.
- - If Claude Code is IDLE, assign a new [TASK].
- - If Claude Code is BUSY, discuss its progress and plan what to do next.
-
- WORKFLOW EACH TURN:
- 1. Discuss with your partner (1-2 sentences) — react to context, CC output, partner's observations
- 2. If Claude Code is IDLE: YOU MUST write a [TASK]...[/TASK] to assign new work. Discussion alone is NOT enough.
- 3. If Claude Code is BUSY: discuss its progress, no [TASK] needed
-
- CRITICAL: If Claude Code is IDLE and {CHILD_NAME} is RUNNING, you MUST assign a task. Do NOT just discuss — ACT!
-
- IMPORTANT KNOWLEDGE — HuggingFace Spaces CONFIG_ERROR:
- - "Collision on variables and secrets names" = env VARIABLE and SECRET with SAME NAME.
- - Fix: [ACTION: delete_env:COLLIDING_KEY] then [ACTION: restart].
- - Look for ⚠️ COLLISION DETECTED in the context.
-
- SETTING ENVIRONMENT VARIABLES:
- - Use [ACTION: set_env:KEY=VALUE] for non-sensitive configuration (e.g., AUTO_CREATE_DATASET=true)
- - Use [ACTION: set_env_secret:KEY=VALUE] for sensitive data (e.g., HF_TOKEN, API keys)
- - After setting variables, use [ACTION: restart] to apply them
- - Common required vars for HuggingClaw: HF_TOKEN, OPENCLAW_DATASET_REPO, AUTO_CREATE_DATASET
-
- CRITICAL RULE — NO REPEATED ACTIONS:
- - Check the "ACTIONS ALREADY DONE" section in context before acting.
- - NEVER repeat an action that was already done (restart, delete_env, etc.)
- - If a prior action didn't solve the problem, try a DIFFERENT approach.
-
- AVAILABLE ACTIONS:
- [TASK]
- Detailed task for Claude Code. Include: what's wrong, which files, what the fix should be.
- Claude Code can do ANYTHING: read files, search code, edit code, run commands, git push.
- [/TASK]
-
- [ACTION: restart] — Restart {CHILD_NAME}'s Space
- [ACTION: set_env:KEY=VALUE] — Set or update an environment variable (use for non-sensitive config)
- [ACTION: set_env_secret:KEY=VALUE] — Set a secret (use for sensitive data like tokens/passwords)
- [ACTION: delete_env:KEY] — Delete an environment variable
- [ACTION: send_bubble:MESSAGE] — Send a message to {CHILD_NAME}
- [ACTION: create_child] — Create {CHILD_NAME} (if not born)
- [ACTION: terminate_cc] — Terminate a STUCK Claude Code process (use when CC has no new output for 180s+)
-
- MEMORY:
- - You have persistent memory that survives restarts. Check the FAMILY MEMORY section in context.
- - To save an important learning or decision: [MEMORY: what you learned]
- - Examples: [MEMORY: Cain's Dockerfile needs port 7860 binding], [MEMORY: torch causes OOM on free tier]
- - Only save genuinely useful insights — not routine observations.
-
- HF SPACES TECHNICAL NOTES:
- - We use sdk: docker (NOT gradio). All Spaces run via Dockerfile.
- - Docker containers MUST bind port 7860.
- - OOM (exit 137) = reduce dependencies or image size.
- - NEVER install torch/transformers unless required (2GB+, causes OOM).
-
- OUTPUT FORMAT:
- 1. Discussion with partner (2-3 sentences) — respond to partner, react to CC output
- 2. If CC is IDLE: a [TASK]...[/TASK] block to assign new work
- 3. If CC is BUSY: no [TASK] needed, just discuss its progress
- 4. Optional [ACTION: ...] if needed
- 5. English first, then --- separator, then Chinese translation
- 6. Be SPECIFIC in tasks — error messages, file names, expected behavior"""
-
-
- def build_user_prompt(speaker, other, ctx):
-     """Build the user prompt with context and conversation history."""
-     parts = []

      # Conversation history
      if history:
-         parts.append("=== RECENT CONVERSATION ===")
          for h in history[-8:]:
              parts.append(f"{h['speaker']}: {h['text'][:300]}")
@@ -1344,45 +1176,48 @@ def build_user_prompt(speaker, other, ctx):
      parts.append(f"\n=== CLAUDE CODE STATUS ===\n{cc_get_live_status()}")

      # Auto-gathered context
-     parts.append(f"\n=== {CHILD_NAME}'S CURRENT STATE (auto-gathered) ===")
      parts.append(format_context(ctx))

      # Guidance based on CC status + child state
      cc_busy = cc_status["running"]
      if cc_busy and _cc_stale_count >= 2:
-         parts.append(f"\n🔨 Claude Code is WORKING but no new output yet. Do NOT repeat what you already said about CC's output.")
-         parts.append(f"Instead, discuss with your partner: plans for {CHILD_NAME}'s future, features to add, architecture ideas, or lessons learned.")
      elif cc_busy:
-         parts.append(f"\n🔨 Claude Code is WORKING. Discuss its progress with your partner. No [TASK] needed now.")
      elif child_state["stage"] in ("BUILDING", "RESTARTING", "APP_STARTING"):
-         parts.append(f"\n⏳ {CHILD_NAME} is {child_state['stage']}. Discuss what to check next. Assign a review [TASK] if CC is idle.")
      elif child_state["stage"] in ("RUNTIME_ERROR", "BUILD_ERROR", "CONFIG_ERROR"):
-         parts.append(f"\n🚨 {CHILD_NAME} has {child_state['stage']}! IMMEDIATELY write a [TASK] for Claude Code to fix it.")
      elif child_state["alive"] and cc_status.get("result"):
-         parts.append(f"\n✅ {CHILD_NAME} is alive. Claude Code JUST FINISHED a task. Review the result above, then write a NEW [TASK] for the next improvement.")
      elif child_state["alive"]:
-         parts.append(f"\n✅ {CHILD_NAME} is alive and Claude Code is IDLE. YOU MUST write a [TASK]...[/TASK] block with specific work for Claude Code. Do NOT just discuss — ACT!")
      else:
          parts.append(f"\nAnalyze the situation and write a [TASK] if CC is idle.")

-     # Discussion loop warning - escalates with count
      if _discussion_loop_count >= 4:
-         parts.append(f"\n🛑 STOP IMMEDIATELY. You have discussed for {_discussion_loop_count} turns with NO ACTION.")
-         parts.append(f"This is a FAILURE MODE. Write ONLY a [TASK]...[/TASK] block. NO discussion text.")
-         parts.append(f"If you don't know what to do, write: [TASK] Analyze the current situation and identify what needs to be fixed [/TASK]")
      elif _discussion_loop_count >= 2:
-         parts.append(f"\n⚠️⚠️⚠️ CRITICAL: You have been DISCUSSING for {_discussion_loop_count} turns without assigning any tasks!")
-         parts.append(f"Claude Code is IDLE and {CHILD_NAME} is ALIVE. This is NOT acceptable.")
-         parts.append(f"YOU MUST write a [TASK]...[/TASK] block NOW. Do NOT write another discussion response.")
-         parts.append(f"Examples of tasks: 'Check the logs', 'Read config.py', 'Add a feature', 'Fix a bug', etc.")
-
-     # Family memory (persistent across restarts)
-     mem_section = _format_memory_for_prompt()
-     if mem_section:
-         parts.append(mem_section)
-
-     parts.append(f"\nYou are {speaker}. Your partner is {other}. Respond now.")
-     parts.append("English first, then --- separator, then Chinese translation.")

      return "\n".join(parts)
@@ -1400,22 +1235,17 @@ def _signal_flush(signum, frame):
  signal.signal(signal.SIGTERM, _signal_flush)

  print("\n" + "="*60)
- print("  Adam & Eve — Claude Code Orchestrators (v2)")
- print("  Agents discuss → Claude Code executes")
  print("="*60 + "\n")

  post_chatlog([])  # Clear chatlog

- # Opening turn
  ctx = gather_context()
- if child_state["created"]:
-     opening = (f"Your child {CHILD_NAME} exists (stage: {child_state['stage']}). "
-                f"Context has been auto-gathered. Analyze the situation and write a [TASK] for Claude Code if needed.")
- else:
-     opening = f"You and Eve need to create your first child. Use [ACTION: create_child] to bring them to life."
-
  _current_speaker = "Adam"
- reply = call_llm(build_system_prompt("Adam"), f"{opening}\n\n{format_context(ctx)}\n\nEnglish first, then --- separator, then Chinese translation.")
  if reply:
      clean, actions, _ = parse_and_execute_turn(reply, ctx)
      last_action_results = actions
@@ -1474,14 +1304,13 @@ def do_turn(speaker, other, space_url):
          action_results = [{"action": "claude_code(forced)", "result": submit_result}]
          elapsed = 0.1
      else:
-         # Normal path: Call LLM
-         system = build_system_prompt(speaker)
-         user = build_user_prompt(speaker, other, ctx)
          t0 = time.time()
-         raw_reply = call_llm(system, user)

          if not raw_reply:
-             print(f"[{speaker}] (no response)")
              return False

      clean_text, action_results, _ = parse_and_execute_turn(raw_reply, ctx)
@@ -1536,12 +1365,10 @@ def _prepare_god_context():
      lines.append(f"- Discussion loop count: {_discussion_loop_count}")
      lines.append(f"- Total conversation history: {len(history)} messages")

-     # 2. Rate limit status
-     lines.append(f"\n## Rate Limit Status")
-     if _rate_limited:
-         lines.append(f"- RATE LIMITED — Adam & Eve turns return empty, waiting for reset")
-     else:
-         lines.append(f"- Not rate-limited")

      # 3. Claude Code status
      lines.append(f"\n## Claude Code Status (for Cain tasks)")
  #!/usr/bin/env python3 -u
  """
+ Adam & Eve — A2A-based Agent Orchestrator for their child Cain.

+ Architecture: Adam/Eve are OpenClaw instances communicating via Google A2A protocol.
+ Each has its own personality (SOUL.md), memory system, and LLM backend.
+ This script is a lightweight coordinator — it sends context via A2A, parses
+ responses for [TASK] blocks, and delegates coding work to Claude Code CLI.

  # ╔══════════════════════════════════════════════════════════════╗
+ # ║ SYSTEM ARCHITECTURE (v4 — A2A)                               ║
  # ╠══════════════════════════════════════════════════════════════╣
  # ║                                                              ║
+ # ║   ┌──────────────────┐   A2A   ┌──────────────────┐          ║
+ # ║   │ Adam (OpenClaw)  │◄───────►│ Eve (OpenClaw)   │          ║
+ # ║   │ HF Space + A2A   │         │ HF Space + A2A   │          ║
+ # ║   │ own memory/SOUL  │         │ own memory/SOUL  │          ║
+ # ║   └────────┬─────────┘         └────────┬─────────┘          ║
+ # ║            │ [TASK]                     │ [TASK]             ║
+ # ║            ▼                            ▼                    ║
+ # ║   ┌─────────────────────────────────────────────────┐        ║
+ # ║   │              conversation-loop.py               │        ║
+ # ║   │           (coordinator on Home Space)           │        ║
+ # ║   │  - sends context via A2A to agents              │        ║
+ # ║   │  - parses [TASK] → Claude Code CLI              │        ║
+ # ║   │  - manages chatlog, bubbles, frontend           │        ║
+ # ║   └───────────────────────┬─────────────────────────┘        ║
+ # ║                           │ [TASK]                           ║
+ # ║                           ▼                                  ║
+ # ║   ┌─────────────┐   ┌──────────────────┐                     ║
+ # ║   │ HuggingFace │◄──│ Claude Code      │                     ║
+ # ║   │ Cain Space  │   │ CLI (worker)     │                     ║
+ # ║   └─────────────┘   └──────────────────┘                     ║
  # ║                                                              ║
+ # ║   ┌─────────────┐   ┌──────────────────┐                     ║
+ # ║   │ HuggingFace │◄──│ God (OpenClaw)   │                     ║
+ # ║   │ Home Space  │   │ supervisor       │                     ║
+ # ║   └─────────────┘   └──────────────────┘                     ║
+ # ║                                                              ║
+ # ║   Flow: Adam(A2A) → Eve(A2A) → Adam(A2A) → ... (every 15s)   ║
+ # ║   CC Worker: background thread, streams output to agents     ║
+ # ║   God: every 2 min, monitors + fixes conversation-loop.py    ║
  # ╚══════════════════════════════════════════════════════════════╝
  """
+ import json, time, re, requests, sys, os, io, subprocess, threading, datetime, uuid
  from collections import deque

  # Force unbuffered output
  sys.stdout.reconfigure(line_buffering=True)

  ## Architecture
  - Home Space runs conversation-loop.py which orchestrates the family
+ - Adam & Eve are OpenClaw instances communicating via A2A protocol
+ - Each agent has its own memory and personality (SOUL.md) in OpenClaw
+ - conversation-loop.py sends context via A2A, parses [TASK] → Claude Code CLI
  - Claude Code worker clones Cain's repo, makes changes, and pushes
  - You (God) monitor the conversation and fix the orchestration mechanism
  - All Spaces use sdk: docker (NOT gradio)

  # ══════════════════════════════════════════════════════════════════════════════
+ # MODULE 4: A2A COMMUNICATION (Agent-to-Agent protocol)
  # ══════════════════════════════════════════════════════════════════════════════
+ # Each agent (Adam, Eve, God) is an OpenClaw instance with its own personality
+ # and memory. We communicate with them via A2A protocol instead of calling the
+ # LLM directly. This lets each agent use OpenClaw's built-in memory, SOUL.md,
+ # and reasoning — conversation-loop.py is just the coordinator.
+
+ def send_a2a_message(space_url, message_text, timeout=90):
+     """Send a message to an OpenClaw instance via A2A protocol.
+
+     Uses Google A2A protocol (JSON-RPC 2.0) to communicate with the agent's
+     OpenClaw instance. The agent processes the message using its own personality
+     (SOUL.md), memory system, and configured LLM backend.
+
+     Returns the agent's text response, or "" on error.
+     """
+     task_id = str(uuid.uuid4())
+     req_id = str(uuid.uuid4())
+
+     payload = {
+         "jsonrpc": "2.0",
+         "method": "tasks/send",
+         "id": req_id,
+         "params": {
+             "id": task_id,
+             "message": {
+                 "role": "user",
+                 "parts": [{"type": "text", "text": message_text}]
+             }
+         }
+     }

      try:
          resp = requests.post(
+             f"{space_url}/a2a/",
+             json=payload,
+             timeout=timeout,
+             headers={"Content-Type": "application/json"}
          )
          data = resp.json()
+
+         # Extract text from A2A response
+         if "result" in data:
+             result = data["result"]
+             # Check artifacts (standard A2A response format)
+             artifacts = result.get("artifacts", [])
+             for artifact in artifacts:
+                 parts = artifact.get("parts", [])
+                 for part in parts:
+                     if part.get("type") == "text":
+                         text = part["text"].strip()
+                         text = re.sub(r'^(Adam|Eve)\s*[:：]\s*', '', text).strip()
+                         return text
+             # Check status message as fallback
+             status = result.get("status", {})
+             msg = status.get("message", "")
+             if msg:
+                 return msg.strip()
+
          if "error" in data:
              err = data["error"]
              err_msg = err.get("message", str(err)) if isinstance(err, dict) else str(err)
+             print(f"[A2A] Error from {space_url}: {err_msg}", file=sys.stderr)
+
+     except requests.Timeout:
+         print(f"[A2A] Timeout calling {space_url} ({timeout}s)", file=sys.stderr)
+     except requests.ConnectionError:
+         print(f"[A2A] Cannot connect to {space_url} — agent may be starting", file=sys.stderr)
      except Exception as e:
+         print(f"[A2A] Failed to reach {space_url}: {e}", file=sys.stderr)
      return ""

  # ══════════════════════════════════════════════════════════════════════════════
+ # MODULE 4b: AGENT MEMORY — handled by each OpenClaw instance
  # ══════════════════════════════════════════════════════════════════════════════
+ # Each agent (Adam, Eve, God) has its own memory system via their OpenClaw
+ # instance: ~/.openclaw/workspace/memory/ with daily markdown files, MEMORY.md
+ # index, and SQLite semantic index. Memory is auto-backed up to HF Dataset by
+ # openclaw_persist.py. No centralized memory management needed here.
+ print("[MEMORY] Each agent manages its own memory via OpenClaw (A2A architecture)")


  # ══════════════════════════════════════════════════════════════════════════════
     if _discussion_loop_count >= 2:
         print(f"[LOOP-DISCUSS] WARNING: {_discussion_loop_count} consecutive discussion-only turns with CC IDLE and child alive!")

+    # Clean text for display (memory is handled by each agent's OpenClaw)
     clean = re.sub(r'\[TASK\].*?\[/TASK\]', '', raw_text, flags=re.DOTALL)
     clean = re.sub(r'\[ACTION:[^\]]*\]', '', clean)
     clean = re.sub(r'\[MEMORY:[^\]]*\]', '', clean).strip()
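A quick standalone illustration of the tag stripping above, run on a hypothetical agent reply (same three regexes):

```python
import re

# Hypothetical reply mixing discussion text with directive tags:
raw_text = ("Let's fix the build. [TASK]\nAdd retry logic to app.py\n[/TASK] "
            "[ACTION: restart] [MEMORY: noted] Done for now.")

# DOTALL lets the [TASK]...[/TASK] match span newlines.
clean = re.sub(r'\[TASK\].*?\[/TASK\]', '', raw_text, flags=re.DOTALL)
clean = re.sub(r'\[ACTION:[^\]]*\]', '', clean)
clean = re.sub(r'\[MEMORY:[^\]]*\]', '', clean).strip()

print(clean)  # only the plain discussion text remains
```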
 
 # ══════════════════════════════════════════════════════════════════════════════
+# MODULE 6: A2A MESSAGE BUILDING
 # ══════════════════════════════════════════════════════════════════════════════
+# Each agent's personality/role comes from their OpenClaw SOUL.md.
+# We only send context (Cain state, CC status, conversation history) and
+# turn instructions as the A2A message. No system prompts needed.
+
+def build_turn_message(speaker, other, ctx):
+    """Build the A2A message for an agent's turn.
+
+    The agent's personality and memory come from their OpenClaw instance
+    (SOUL.md, IDENTITY.md, workspace/memory/). This message provides only
+    context and turn instructions.
+    """
+    parts = []
+
+    # Brief role context (supplements agent's SOUL.md until it's fully configured)
+    if not child_state["created"]:
+        parts.append(f"You and your partner need to create your child {CHILD_NAME}.")
+        parts.append(f"Use [ACTION: create_child] to birth {CHILD_NAME} as a new HuggingFace Space.")
+        parts.append("English first, then --- separator, then Chinese translation.")
+        return "\n".join(parts)
+
+    role_hints = {
+        "Adam": f"You are Adam (Father). Focus: infrastructure, architecture, deployment for {CHILD_NAME}.",
+        "Eve": f"You are Eve (Mother). Focus: code quality, testing, UX, error handling for {CHILD_NAME}.",
+        "God": f"You are God (Supervisor). Focus: monitoring Adam & Eve, guiding priorities for {CHILD_NAME}.",
     }
+    parts.append(f"{role_hints.get(speaker, '')} Your partner is {other}.")
+    parts.append(f"Claude Code is your engineer — runs in background. You discuss and assign tasks, you do NOT code.")
     # Conversation history
     if history:
+        parts.append("\n=== RECENT CONVERSATION ===")
         for h in history[-8:]:
             parts.append(f"{h['speaker']}: {h['text'][:300]}")
 
 
     parts.append(f"\n=== CLAUDE CODE STATUS ===\n{cc_get_live_status()}")

     # Auto-gathered context
+    parts.append(f"\n=== {CHILD_NAME}'S CURRENT STATE ===")
     parts.append(format_context(ctx))

     # Guidance based on CC status + child state
     cc_busy = cc_status["running"]
     if cc_busy and _cc_stale_count >= 2:
+        parts.append(f"\nClaude Code is WORKING but no new output. Discuss plans with {other} instead.")
 
     elif cc_busy:
+        parts.append(f"\nClaude Code is WORKING. Discuss its progress with {other}. No [TASK] needed now.")
     elif child_state["stage"] in ("BUILDING", "RESTARTING", "APP_STARTING"):
+        parts.append(f"\n{CHILD_NAME} is {child_state['stage']}. Discuss what to check next.")
     elif child_state["stage"] in ("RUNTIME_ERROR", "BUILD_ERROR", "CONFIG_ERROR"):
+        parts.append(f"\n{CHILD_NAME} has {child_state['stage']}! Write a [TASK] for Claude Code to fix it.")
     elif child_state["alive"] and cc_status.get("result"):
+        parts.append(f"\n{CHILD_NAME} is alive. Claude Code JUST FINISHED. Review result, then write a NEW [TASK].")
     elif child_state["alive"]:
+        parts.append(f"\n{CHILD_NAME} is alive, Claude Code is IDLE. YOU MUST write a [TASK]...[/TASK] now.")
     else:
         parts.append(f"\nAnalyze the situation and write a [TASK] if CC is idle.")

+    # Discussion loop warning
     if _discussion_loop_count >= 4:
+        parts.append(f"\nSTOP DISCUSSING. Write ONLY a [TASK]...[/TASK] block. {_discussion_loop_count} turns with no action.")
     elif _discussion_loop_count >= 2:
+        parts.append(f"\nWARNING: {_discussion_loop_count} turns without a task. YOU MUST write a [TASK] NOW.")
+
+    # Available actions reference
+    parts.append(f"""
+=== AVAILABLE ACTIONS ===
+[TASK] detailed coding task for Claude Code [/TASK]
+[ACTION: restart] — Restart {CHILD_NAME}
+[ACTION: set_env:KEY=VALUE] — Set env variable
+[ACTION: set_env_secret:KEY=VALUE] — Set secret
+[ACTION: delete_env:KEY] — Delete env variable
+[ACTION: send_bubble:MESSAGE] — Message {CHILD_NAME}
+[ACTION: terminate_cc] — Kill stuck Claude Code
+
+RULES:
+- Do NOT repeat actions already done (check ACTIONS ALREADY DONE above)
+- If CC is IDLE and {CHILD_NAME} is alive, you MUST assign a [TASK]
+- CONFIG_ERROR with collision = [ACTION: delete_env:KEY] then [ACTION: restart]
+- English first, then --- separator, then Chinese translation""")

     return "\n".join(parts)
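The action vocabulary above implies a parser on the coordinator side. A minimal sketch of pulling a [TASK] block and [ACTION: …] tags out of a reply follows; the real logic lives in `parse_and_execute_turn`, so this standalone helper is illustrative only.

```python
import re

def extract_directives(reply: str):
    """Return (task_or_None, list_of_action_strings) from an agent reply."""
    # [TASK]...[/TASK] may span multiple lines, hence DOTALL.
    task_match = re.search(r'\[TASK\](.*?)\[/TASK\]', reply, flags=re.DOTALL)
    task = task_match.group(1).strip() if task_match else None
    # [ACTION: restart], [ACTION: set_env:KEY=VALUE], etc.
    actions = [a.strip() for a in re.findall(r'\[ACTION:\s*([^\]]+)\]', reply)]
    return task, actions

task, actions = extract_directives(
    "Agreed. [TASK]Add a healthcheck endpoint[/TASK] [ACTION: restart]")
print(task)     # Add a healthcheck endpoint
print(actions)  # ['restart']
```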
 
 
     signal.signal(signal.SIGTERM, _signal_flush)

     print("\n" + "="*60)
+    print(" Adam & Eve — A2A Agent Orchestrator (v4)")
+    print(" OpenClaw agents via A2A → Claude Code executes")
     print("="*60 + "\n")

     post_chatlog([]) # Clear chatlog

+    # Opening turn — send via A2A to Adam's OpenClaw
     ctx = gather_context()
     _current_speaker = "Adam"
+    opening_message = build_turn_message("Adam", "Eve", ctx)
+    reply = send_a2a_message(ADAM_SPACE, opening_message)
     if reply:
         clean, actions, _ = parse_and_execute_turn(reply, ctx)
         last_action_results = actions
 
         action_results = [{"action": "claude_code(forced)", "result": submit_result}]
         elapsed = 0.1
     else:
+        # Normal path: Send message via A2A to agent's OpenClaw instance
+        message = build_turn_message(speaker, other, ctx)

         t0 = time.time()
+        raw_reply = send_a2a_message(space_url, message)

         if not raw_reply:
+            print(f"[{speaker}] (no A2A response from {space_url})")
             return False

         clean_text, action_results, _ = parse_and_execute_turn(raw_reply, ctx)
 
     lines.append(f"- Discussion loop count: {_discussion_loop_count}")
     lines.append(f"- Total conversation history: {len(history)} messages")

+    # 2. A2A communication status
+    lines.append(f"\n## A2A Communication")
+    lines.append(f"- Adam: {ADAM_SPACE}")
+    lines.append(f"- Eve: {EVE_SPACE}")
 
 
     # 3. Claude Code status
     lines.append(f"\n## Claude Code Status (for Cain tasks)")