LeroyDyer committed (verified)
Commit 14a2118 · Parent(s): 07ad245

Upload WorkflowDesigner.py

Browse files
Files changed (1)
  1. WorkflowDesigner.py +2459 -0
WorkflowDesigner.py ADDED
@@ -0,0 +1,2459 @@
1
+ from dataclasses import asdict, dataclass, field
2
+ import os
3
+ import pickle
4
+ from typing import Dict, List, Optional, Any
5
+ import gradio as gr
6
+ import json
7
+ import tempfile
8
+ import asyncio
9
+ import uuid
10
+ from dataclasses import dataclass, asdict
11
+ from typing import List, Dict, Optional, Any
12
+ from openai import AsyncOpenAI
13
+ import base64
14
+
15
+ # Complete Hierarchical Component definitions with Implementation Details
16
+ COMPONENT_INFO = {
17
+ "SYSTEM": {
18
+ "description": "Top-level system architecture containing all components",
19
+ "color": "#333333",
20
+ "icon": "🌐",
21
+ "shape": "folder",
22
+ "sub_components": ["AGENT", "USER", "TOOL", "DATA", "PROCESSOR", "ROUTER", "INFRASTRUCTURE", "CONFIG"]
23
+ },
24
+
25
+ # ===================================
26
+ # AGENT: Autonomous reasoning units
27
+ # ===================================
28
+ "AGENT": {
29
+ "description": "Autonomous reasoning and decision-making units",
30
+ "color": "#4CAF50",
31
+ "icon": "🤖",
32
+ "shape": "rect",
33
+ "sub_components": ["REASONING_AGENT", "ACTION_AGENT", "PLANNER_AGENT", "REACT_AGENT", "MULTI_AGENT"]
34
+ },
35
+
36
+ "REASONING_AGENT": {
37
+ "shape": "rect",
38
+ "color": "#4CAF50",
39
+ "icon": "🧠",
40
+ "description": [
41
+ "• Performs complex reasoning tasks",
42
+ "• Uses chain-of-thought or tree-of-thought",
43
+ "• Can break down complex problems",
44
+ "• Maintains reasoning traces"
45
+ ],
46
+ "implementation": {
47
+ "python_snippet": """
48
+ class ReasoningAgent:
49
+ def __init__(self, model, tools=None):
50
+ self.model = model
51
+ self.tools = tools or []
52
+ self.reasoning_history = []
53
+
54
+ async def process(self, query, context=None):
55
+ # Chain-of-thought reasoning
56
+ reasoning_steps = await self.generate_reasoning_steps(query, context)
57
+ self.reasoning_history.extend(reasoning_steps)
58
+
59
+ # Final answer generation
60
+ answer = await self.synthesize_answer(reasoning_steps)
61
+ return answer
62
+
63
+ async def generate_reasoning_steps(self, query, context):
64
+ prompt = f\"\"\"Analyze this problem step by step:
65
+ Query: {query}
66
+ Context: {context}
67
+
68
+ Break down your reasoning:\"\"\"
69
+ return await self.model.generate(prompt)
70
+ """,
71
+ "prompt_template": """
72
+ You are a reasoning agent. Analyze the user's query step by step:
73
+
74
+ Query: {user_input}
75
+ Context: {context}
76
+
77
+ Please:
78
+ 1. Break down the problem into logical steps
79
+ 2. Consider different perspectives
80
+ 3. Evaluate evidence and constraints
81
+ 4. Synthesize a comprehensive answer
82
+
83
+ Reasoning steps:
84
+ """,
85
+ "dependencies": ["openai", "langchain", "pydantic"],
86
+ "config": {
87
+ "model": "gpt-4",
88
+ "temperature": 0.1,
89
+ "max_tokens": 2000
90
+ }
91
+ }
92
+ },
93
+
94
+ "ACTION_AGENT": {
95
+ "shape": "rect",
96
+ "color": "#4CAF50",
97
+ "icon": "⚡",
98
+ "description": [
99
+ "• Executes actions using available tools",
100
+ "• Monitors action outcomes",
101
+ "• Handles errors and retries",
102
+ "• Updates state after actions"
103
+ ],
104
+ "implementation": {
105
+ "python_snippet": """
106
+ class ActionAgent:
107
+ def __init__(self, tools, model):
108
+ self.tools = {tool.name: tool for tool in tools}
109
+ self.model = model
110
+
111
+ async def execute_action(self, action_request):
112
+ tool_name, parameters = self.parse_action(action_request)
113
+ if tool_name in self.tools:
114
+ return await self.tools[tool_name].execute(parameters)
115
+ else:
116
+ raise ValueError(f"Unknown tool: {tool_name}")
117
+
118
+ def parse_action(self, action_request):
119
+ # Parse action from model response
120
+ return action_request['tool'], action_request['parameters']
121
+ """,
122
+ "dependencies": ["pydantic", "asyncio"],
123
+ "config": {
124
+ "retry_attempts": 3,
125
+ "timeout_seconds": 30
126
+ }
127
+ }
128
+ },
129
+
130
+ "PLANNER_AGENT": {
131
+ "shape": "rect",
132
+ "color": "#4CAF50",
133
+ "icon": "📋",
134
+ "description": [
135
+ "• Creates multi-step plans to achieve goals",
136
+ "• Decomposes complex tasks",
137
+ "• Optimizes execution order",
138
+ "• Monitors plan progress"
139
+ ],
140
+ "implementation": {
141
+ "python_snippet": """
142
+ class PlannerAgent:
143
+ def __init__(self, model):
144
+ self.model = model
145
+ self.plans = []
146
+
147
+ async def create_plan(self, goal, context):
148
+ plan_prompt = f\"\"\"Create a step-by-step plan to achieve:
149
+ Goal: {goal}
150
+ Context: {context}
151
+
152
+ Return a list of actionable steps:\"\"\"
153
+ plan_steps = await self.model.generate(plan_prompt)
154
+ plan = Plan(steps=plan_steps, goal=goal)
155
+ self.plans.append(plan)
156
+ return plan
157
+ """,
158
+ "dependencies": ["pydantic"],
159
+ "config": {
160
+ "max_steps": 20,
161
+ "planning_temperature": 0.3
162
+ }
163
+ }
164
+ },
165
+
166
+ "REACT_AGENT": {
167
+ "shape": "rect",
168
+ "color": "#4CAF50",
169
+ "icon": "🔄",
170
+ "description": [
171
+ "• Implements ReAct (Reason + Act) framework",
172
+ "• Alternates reasoning and action steps",
173
+ "• Maintains conversation history",
174
+ "• Handles tool interactions"
175
+ ],
176
+ "implementation": {
177
+ "python_snippet": """
178
+ class ReActAgent:
179
+ def __init__(self, model, tools):
180
+ self.model = model
181
+ self.tools = tools
182
+ self.conversation_history = []
183
+
184
+ async def step(self, input_text):
185
+ # Generate thought
186
+ thought = await self.generate_thought(input_text)
187
+
188
+ # Decide on action
189
+ action = await self.decide_action(thought)
190
+
191
+ # Execute action if needed
192
+ if action:
193
+ observation = await self.execute_action(action)
194
+ return {"thought": thought, "action": action, "observation": observation}
195
+ else:
196
+ return {"thought": thought, "answer": await self.generate_answer(thought)}
197
+ """,
198
+ "dependencies": ["asyncio", "langchain"],
199
+ "config": {
200
+ "max_iterations": 10,
201
+ "react_temperature": 0.7
202
+ }
203
+ }
204
+ },
205
+
206
+ "MULTI_AGENT": {
207
+ "shape": "rect",
208
+ "color": "#4CAF50",
209
+ "icon": "👥",
210
+ "description": [
211
+ "• Coordinates multiple specialized agents",
212
+ "• Manages agent communication",
213
+ "• Distributes tasks among agents",
214
+ "• Aggregates results from agents"
215
+ ],
216
+ "implementation": {
217
+ "python_snippet": """
218
+ class MultiAgentSystem:
219
+ def __init__(self, agents, orchestrator):
220
+ self.agents = {agent.name: agent for agent in agents}
221
+ self.orchestrator = orchestrator
222
+
223
+ async def coordinate(self, task):
224
+ # Assign task to appropriate agents
225
+ agent_assignments = await self.orchestrator.assign(task)
226
+
227
+ # Execute in parallel
228
+ results = await asyncio.gather(*[
229
+ self.agents[agent_name].process(subtask)
230
+ for agent_name, subtask in agent_assignments.items()
231
+ ])
232
+
233
+ return self.orchestrator.aggregate(results)
234
+ """,
235
+ "dependencies": ["asyncio", "concurrent.futures"],
236
+ "config": {
237
+ "max_concurrent_agents": 10,
238
+ "communication_protocol": "message_queue"
239
+ }
240
+ }
241
+ },
242
+
243
+ # ===================================
244
+ # USER: Interaction interfaces
245
+ # ===================================
246
+ "USER": {
247
+ "description": "User interaction points and interfaces",
248
+ "color": "#9C27B0",
249
+ "icon": "👤",
250
+ "shape": "ellipse",
251
+ "sub_components": ["USER_INPUT", "USER_OUTPUT", "MULTIMODAL_INTERFACE"]
252
+ },
253
+
254
+ "USER_INPUT": {
255
+ "shape": "ellipse",
256
+ "color": "#9C27B0",
257
+ "icon": "⌨️",
258
+ "description": [
259
+ "• Accepts text, voice, or gesture input",
260
+ "• Validates and sanitizes input",
261
+ "• Converts to structured format",
262
+ "• Handles multiple input channels"
263
+ ],
264
+ "implementation": {
265
+ "python_snippet": """
266
+ class UserInputHandler:
267
+ def __init__(self):
268
+ self.input_validators = {
269
+ 'text': self.validate_text,
270
+ 'voice': self.validate_voice,
271
+ 'gesture': self.validate_gesture
272
+ }
273
+
274
+ async def process_input(self, input_type, raw_input):
275
+ validator = self.input_validators.get(input_type)
276
+ if validator:
277
+ return await validator(raw_input)
278
+ else:
279
+ raise ValueError(f"Unsupported input type: {input_type}")
280
+
281
+ async def validate_text(self, text):
282
+ # Sanitize and structure text input
283
+ return {"type": "text", "content": text.strip()}
284
+ """,
285
+ "dependencies": ["validators", "pydantic"],
286
+ "config": {
287
+ "max_input_length": 10000,
288
+ "allowed_input_types": ["text", "voice", "gesture"]
289
+ }
290
+ }
291
+ },
292
+
293
+ "USER_OUTPUT": {
294
+ "shape": "ellipse",
295
+ "color": "#9C27B0",
296
+ "icon": "🔊",
297
+ "description": [
298
+ "• Formats responses for user consumption",
299
+ "• Supports multiple output formats",
300
+ "• Handles accessibility features",
301
+ "• Manages response timing"
302
+ ],
303
+ "implementation": {
304
+ "python_snippet": """
305
+ class UserOutputHandler:
306
+ def __init__(self):
307
+ self.formatters = {
308
+ 'text': self.format_text,
309
+ 'audio': self.format_audio,
310
+ 'visual': self.format_visual
311
+ }
312
+
313
+ async def deliver_response(self, response_data, output_format):
314
+ formatter = self.formatters.get(output_format)
315
+ if formatter:
316
+ formatted_response = await formatter(response_data)
317
+ return await self.send_to_user(formatted_response)
318
+
319
+ async def format_text(self, data):
320
+ # Format response as structured text
321
+ return {"format": "text", "content": data}
322
+ """,
323
+ "dependencies": ["jinja2", "markdown"],
324
+ "config": {
325
+ "default_format": "text",
326
+ "max_response_length": 5000
327
+ }
328
+ }
329
+ },
330
+
331
+ "MULTIMODAL_INTERFACE": {
332
+ "shape": "ellipse",
333
+ "color": "#9C27B0",
334
+ "icon": "🖼️",
335
+ "description": [
336
+ "• Handles multiple input/output modalities",
337
+ "• Integrates text, image, audio, video",
338
+ "• Manages modality conversion",
339
+ "• Supports rich media responses"
340
+ ],
341
+ "implementation": {
342
+ "python_snippet": """
343
+ class MultimodalInterface:
344
+ def __init__(self):
345
+ self.input_processors = {
346
+ 'image': ImageProcessor(),
347
+ 'audio': AudioProcessor(),
348
+ 'text': TextProcessor()
349
+ }
350
+ self.output_formatters = {
351
+ 'rich_text': RichTextFormatter(),
352
+ 'multimedia': MultimediaFormatter()
353
+ }
354
+
355
+ async def process_multimodal_input(self, inputs):
356
+ processed_inputs = {}
357
+ for input_type, input_data in inputs.items():
358
+ processor = self.input_processors.get(input_type)
359
+ if processor:
360
+ processed_inputs[input_type] = await processor.process(input_data)
361
+ return processed_inputs
362
+ """,
363
+ "dependencies": ["pillow", "pyaudio", "opencv-python"],
364
+ "config": {
365
+ "supported_modalities": ["text", "image", "audio", "video"],
366
+ "max_file_size_mb": 50
367
+ }
368
+ }
369
+ },
370
+
371
+ # ===================================
372
+ # TOOL: External functions and capabilities
373
+ # ===================================
374
+ "TOOL": {
375
+ "description": "External functions and capabilities",
376
+ "color": "#795548",
377
+ "icon": "🔧",
378
+ "shape": "hexagon",
379
+ "sub_components": ["MCP_TOOL", "API_TOOL", "LOCAL_TOOL", "AGENT_TOOL", "FUNCTION_TOOL"]
380
+ },
381
+
382
+ "MCP_TOOL": {
383
+ "shape": "hexagon",
384
+ "color": "#795548",
385
+ "icon": "🔌",
386
+ "description": [
387
+ "• Model Context Protocol server",
388
+ "• Standardized tool interface",
389
+ "• Dynamic tool discovery",
390
+ "• Secure resource access"
391
+ ],
392
+ "implementation": {
393
+ "python_snippet": """
394
+ # MCP Server implementation
395
+ from mcp import MCPServer, Tool
396
+
397
+ class FileSystemTool:
398
+ @Tool
399
+ async def read_file(self, path: str) -> str:
400
+ \"\"\"Read content from a file\"\"\"
401
+ with open(path, 'r') as f:
402
+ return f.read()
403
+
404
+ @Tool
405
+ async def write_file(self, path: str, content: str) -> str:
406
+ \"\"\"Write content to a file\"\"\"
407
+ with open(path, 'w') as f:
408
+ f.write(content)
409
+ return f"Written to {path}"
410
+
411
+ # MCP Client usage
412
+ async def use_mcp_tool(agent, tool_name, parameters):
413
+ result = await agent.use_tool(tool_name, parameters)
414
+ return result
415
+ """,
416
+ "protocol_spec": {
417
+ "version": "1.0",
418
+ "transport": ["stdio", "sse"],
419
+ "authentication": ["none", "bearer"]
420
+ },
421
+ "example_tools": ["filesystem", "calculator", "web_search", "database"]
422
+ }
423
+ },
424
+
425
+ "API_TOOL": {
426
+ "shape": "hexagon",
427
+ "color": "#795548",
428
+ "icon": "🔗",
429
+ "description": [
430
+ "• Wraps external REST/gRPC APIs",
431
+ "• Handles authentication and rate limits",
432
+ "• Manages request/response mapping",
433
+ "• Provides error handling and retries"
434
+ ],
435
+ "implementation": {
436
+ "python_snippet": """
437
+ class APITool:
438
+ def __init__(self, base_url, auth_token=None, rate_limit=10):
439
+ self.base_url = base_url
440
+ self.auth_token = auth_token
441
+ self.rate_limit = rate_limit
442
+ self.session = aiohttp.ClientSession()
443
+
444
+ async def call(self, endpoint, method='GET', data=None):
445
+ headers = {"Authorization": f"Bearer {self.auth_token}"} if self.auth_token else {}
446
+ url = f"{self.base_url}/{endpoint}"
447
+
448
+ async with self.session.request(method, url, json=data, headers=headers) as response:
449
+ return await response.json()
450
+ """,
451
+ "dependencies": ["aiohttp", "requests"],
452
+ "config": {
453
+ "timeout": 30,
454
+ "max_retries": 3,
455
+ "retry_delay": 1.0
456
+ }
457
+ }
458
+ },
459
+
460
+ "LOCAL_TOOL": {
461
+ "shape": "hexagon",
462
+ "color": "#795548",
463
+ "icon": "💻",
464
+ "description": [
465
+ "• Locally executed utility functions",
466
+ "• File operations, math calculations",
467
+ "• System utilities and helpers",
468
+ "• Fast execution without network calls"
469
+ ],
470
+ "implementation": {
471
+ "python_snippet": """
472
+ class LocalTool:
473
+ @staticmethod
474
+ async def file_operations(action, **kwargs):
475
+ if action == 'read':
476
+ with open(kwargs['path'], 'r') as f:
477
+ return f.read()
478
+ elif action == 'write':
479
+ with open(kwargs['path'], 'w') as f:
480
+ f.write(kwargs['content'])
481
+ return f"File written to {kwargs['path']}"
482
+
483
+ @staticmethod
484
+ async def math_operations(operation, **kwargs):
485
+ if operation == 'add':
486
+ return kwargs['a'] + kwargs['b']
487
+ elif operation == 'multiply':
488
+ return kwargs['a'] * kwargs['b']
489
+ """,
490
+ "dependencies": ["os", "math"],
491
+ "config": {
492
+ "max_execution_time": 5.0,
493
+ "allowed_operations": ["file", "math", "system"]
494
+ }
495
+ }
496
+ },
497
+
498
+ "AGENT_TOOL": {
499
+ "shape": "hexagon",
500
+ "color": "#795548",
501
+ "icon": "🛠️",
502
+ "description": [
503
+ "• Allows one agent to act as a tool for another",
504
+ "• Wraps agent functionality for external use",
505
+ "• Handles agent-to-agent communication",
506
+ "• Manages agent state and context"
507
+ ],
508
+ "implementation": {
509
+ "python_snippet": """
510
+ class AgentTool:
511
+ def __init__(self, agent):
512
+ self.agent = agent
513
+
514
+ async def execute(self, query, context=None):
515
+ # Wrap agent execution as a tool call
516
+ result = await self.agent.process(query, context)
517
+ return {
518
+ "result": result,
519
+ "agent_name": self.agent.name,
520
+ "execution_time": time.time()
521
+ }
522
+ """,
523
+ "dependencies": ["asyncio", "time"],
524
+ "config": {
525
+ "max_concurrent_calls": 5,
526
+ "timeout_seconds": 60
527
+ }
528
+ }
529
+ },
530
+
531
+ "FUNCTION_TOOL": {
532
+ "shape": "hexagon",
533
+ "color": "#795548",
534
+ "icon": "🧮",
535
+ "description": [
536
+ "• Generic callable function exposed to agents",
537
+ "• Wraps Python functions for tool use",
538
+ "• Handles parameter validation",
539
+ "• Provides type safety and documentation"
540
+ ],
541
+ "implementation": {
542
+ "python_snippet": """
543
+ from pydantic import BaseModel, create_model
544
+
545
+ class FunctionTool:
546
+ def __init__(self, func, description, param_schema=None):
547
+ self.func = func
548
+ self.description = description
549
+ self.param_schema = param_schema or self._infer_schema(func)
550
+
551
+ async def execute(self, **kwargs):
552
+ validated_params = self.param_schema(**kwargs)
553
+ return await self.func(**validated_params.dict())
554
+
555
+ def _infer_schema(self, func):
556
+ # Infer schema from function signature
557
+ sig = inspect.signature(func)
558
+ fields = {}
559
+ for name, param in sig.parameters.items():
560
+ fields[name] = (param.annotation, param.default if param.default != param.empty else ...)
561
+ return create_model(f"{func.__name__}Params", **fields)
562
+ """,
563
+ "dependencies": ["pydantic", "inspect"],
564
+ "config": {
565
+ "max_params": 10,
566
+ "validation_enabled": True
567
+ }
568
+ }
569
+ },
570
+
571
+ # ===================================
572
+ # DATA: Storage and knowledge systems
573
+ # ===================================
574
+ "DATA": {
575
+ "description": "Data sources and storage systems",
576
+ "color": "#009688",
577
+ "icon": "💾",
578
+ "shape": "cylinder",
579
+ "sub_components": ["KNOWLEDGE_BASE", "VECTOR_DB", "DOCUMENT_STORE", "CACHE", "MEMORY"]
580
+ },
581
+
582
+ "KNOWLEDGE_BASE": {
583
+ "shape": "cylinder",
584
+ "color": "#009688",
585
+ "icon": "📘",
586
+ "description": [
587
+ "• Curated domain-specific facts and rules",
588
+ "• Structured knowledge representation",
589
+ "• Supports inference and reasoning",
590
+ "• Maintains consistency and accuracy"
591
+ ],
592
+ "implementation": {
593
+ "python_snippet": """
594
+ class KnowledgeBase:
595
+ def __init__(self, storage_backend):
596
+ self.storage = storage_backend
597
+ self.index = {}
598
+
599
+ async def query(self, query_text, context=None):
600
+ # Query knowledge base with optional context
601
+ results = await self.storage.search(query_text)
602
+ return self._format_results(results)
603
+
604
+ async def update(self, fact, metadata=None):
605
+ # Add or update knowledge fact
606
+ await self.storage.insert(fact, metadata)
607
+ self._update_index(fact)
608
+ """,
609
+ "dependencies": ["sqlite3", "nltk"],
610
+ "config": {
611
+ "max_facts": 100000,
612
+ "update_frequency": "daily"
613
+ }
614
+ }
615
+ },
616
+
617
+ "VECTOR_DB": {
618
+ "shape": "cylinder",
619
+ "color": "#009688",
620
+ "icon": "🔍",
621
+ "description": [
622
+ "• Embedding-based database for semantic search",
623
+ "• Stores vector representations of text",
624
+ "• Enables similarity-based retrieval",
625
+ "• Supports semantic understanding"
626
+ ],
627
+ "implementation": {
628
+ "python_snippet": """
629
+ import numpy as np
630
+ from sentence_transformers import SentenceTransformer
631
+
632
+ class VectorDB:
633
+ def __init__(self, embedding_model="all-MiniLM-L6-v2"):
634
+ self.model = SentenceTransformer(embedding_model)
635
+ self.vectors = {}
636
+ self.metadata = {}
637
+
638
+ async def add_document(self, doc_id, text, metadata=None):
639
+ embedding = self.model.encode(text)
640
+ self.vectors[doc_id] = embedding
641
+ self.metadata[doc_id] = metadata or {}
642
+
643
+ async def search(self, query, top_k=5):
644
+ query_embedding = self.model.encode(query)
645
+ similarities = []
646
+ for doc_id, vector in self.vectors.items():
647
+ similarity = np.dot(query_embedding, vector) / (
648
+ np.linalg.norm(query_embedding) * np.linalg.norm(vector)
649
+ )
650
+ similarities.append((doc_id, similarity))
651
+
652
+ return sorted(similarities, key=lambda x: x[1], reverse=True)[:top_k]
653
+ """,
654
+ "dependencies": ["sentence-transformers", "numpy"],
655
+ "config": {
656
+ "embedding_model": "all-MiniLM-L6-v2",
657
+ "max_documents": 10000
658
+ }
659
+ }
660
+ },
661
+
662
+ "DOCUMENT_STORE": {
663
+ "shape": "cylinder",
664
+ "color": "#009688",
665
+ "icon": "🗂️",
666
+ "description": [
667
+ "• Raw document repository (PDFs, web pages, etc.)",
668
+ "• Handles various document formats",
669
+ "• Provides document parsing and extraction",
670
+ "• Manages document lifecycle and metadata"
671
+ ],
672
+ "implementation": {
673
+ "python_snippet": """
674
+ class DocumentStore:
675
+ def __init__(self, storage_path):
676
+ self.storage_path = storage_path
677
+ self.parsers = {
678
+ '.pdf': self._parse_pdf,
679
+ '.txt': self._parse_text,
680
+ '.docx': self._parse_docx
681
+ }
682
+
683
+ async def store_document(self, filename, content):
684
+ # Parse and store document with metadata
685
+ ext = os.path.splitext(filename)[1].lower()
686
+ parser = self.parsers.get(ext)
687
+ if parser:
688
+ parsed_content = await parser(content)
689
+ # Store in database with metadata
690
+ return await self._save_to_db(filename, parsed_content)
691
+
692
+ async def _parse_pdf(self, content):
693
+ # Extract text from PDF
694
+ import PyPDF2
695
+ pdf_reader = PyPDF2.PdfReader(content)
696
+ text = ""
697
+ for page in pdf_reader.pages:
698
+ text += page.extract_text()
699
+ return text
700
+ """,
701
+ "dependencies": ["PyPDF2", "python-docx"],
702
+ "config": {
703
+ "supported_formats": [".pdf", ".txt", ".docx", ".html"],
704
+ "max_file_size_mb": 100
705
+ }
706
+ }
707
+ },
708
+
709
+ "CACHE": {
710
+ "shape": "cylinder",
711
+ "color": "#009688",
712
+ "icon": "⏱️",
713
+ "description": [
714
+ "• Temporary fast-access storage for responses or embeddings",
715
+ "• Implements LRU or TTL eviction policies",
716
+ "• Reduces computation and API costs",
717
+ "• Improves response times"
718
+ ],
719
+ "implementation": {
720
+ "python_snippet": """
721
+ import time
722
+ from collections import OrderedDict
723
+
724
+ class Cache:
725
+ def __init__(self, max_size=1000, ttl_seconds=3600):
726
+ self.cache = OrderedDict()
727
+ self.max_size = max_size
728
+ self.ttl = ttl_seconds
729
+
730
+ async def get(self, key):
731
+ if key in self.cache:
732
+ value, timestamp = self.cache[key]
733
+ if time.time() - timestamp < self.ttl:
734
+ return value
735
+ else:
736
+ del self.cache[key]
737
+ return None
738
+
739
+ async def set(self, key, value):
740
+ if len(self.cache) >= self.max_size:
741
+ self.cache.popitem(last=False)
742
+ self.cache[key] = (value, time.time())
743
+ """,
744
+ "dependencies": ["time", "collections"],
745
+ "config": {
746
+ "max_size": 1000,
747
+ "ttl_seconds": 3600,
748
+ "eviction_policy": "lru"
749
+ }
750
+ }
751
+ },
752
+
753
+ "MEMORY": {
754
+ "shape": "cylinder",
755
+ "color": "#009688",
756
+ "icon": "🧠",
757
+ "description": [
758
+ "• Short-term context memory (conversation history, scratchpad)",
759
+ "• Maintains session state and context",
760
+ "• Supports conversation continuity",
761
+ "• Manages memory lifecycle"
762
+ ],
763
+ "implementation": {
764
+ "python_snippet": """
765
+ class Memory:
766
+ def __init__(self, max_context_length=2000):
767
+ self.conversation_history = []
768
+ self.scratchpad = {}
769
+ self.max_context_length = max_context_length
770
+
771
+ async def add_interaction(self, user_input, agent_response):
772
+ interaction = {
773
+ "timestamp": time.time(),
774
+ "user": user_input,
775
+ "agent": agent_response
776
+ }
777
+ self.conversation_history.append(interaction)
778
+ self._trim_history()
779
+
780
+ def _trim_history(self):
781
+ # Trim history to maintain context length
782
+ total_length = sum(len(str(item)) for item in self.conversation_history)
783
+ while total_length > self.max_context_length and len(self.conversation_history) > 1:
784
+ removed = self.conversation_history.pop(0)
785
+ total_length -= len(str(removed))
786
+ """,
787
+ "dependencies": ["time"],
788
+ "config": {
789
+ "max_context_length": 2000,
790
+ "history_retention_hours": 24
791
+ }
792
+ }
793
+ },
794
+
795
+ # ===================================
796
+ # PROCESSOR: Data processing units
797
+ # ===================================
798
+ "PROCESSOR": {
799
+ "description": "Data processing and transformation units",
800
+ "color": "#2196F3",
801
+ "icon": "⚙️",
802
+ "shape": "rect",
803
+ "sub_components": ["QUERY_PROCESSOR", "CONTENT_RETRIEVAL", "PROMPT_TEMPLATE", "RESPONSE_FORMATTER"]
804
+ },
805
+
806
+ "QUERY_PROCESSOR": {
807
+ "shape": "rect",
808
+ "color": "#2196F3",
809
+ "icon": "🔎",
810
+ "description": [
811
+ "• Parses and enriches incoming queries",
812
+ "• Extracts intent and entities",
813
+ "• Normalizes query structure",
814
+ "• Handles query validation"
815
+ ],
816
+ "implementation": {
817
+ "python_snippet": """
818
+ class QueryProcessor:
819
+ def __init__(self):
820
+ self.intent_classifier = IntentClassifier()
821
+ self.entity_extractor = EntityExtractor()
822
+
+     async def process_query(self, query_text):
+         # Classify intent and extract entities
+         intent = await self.intent_classifier.classify(query_text)
+         entities = await self.entity_extractor.extract(query_text)
+
+         return {
+             "original_query": query_text,
+             "intent": intent,
+             "entities": entities,
+             "processed_query": self._normalize_query(query_text, entities)
+         }
+
+     def _normalize_query(self, query, entities):
+         # Normalize query for downstream processing
+         normalized = query
+         for entity, value in entities.items():
+             normalized = normalized.replace(value, f"[{entity}]")
+         return normalized
+             """,
+             "dependencies": ["spacy", "transformers"],
+             "config": {
+                 "max_query_length": 1000,
+                 "confidence_threshold": 0.7
+             }
+         }
+     },
+
+     "CONTENT_RETRIEVAL": {
+         "shape": "rect",
+         "color": "#2196F3",
+         "icon": "📤",
+         "description": [
+             "• Fetches relevant content from data stores",
+             "• Implements semantic and keyword search",
+             "• Ranks and filters retrieved content",
+             "• Handles multi-source retrieval"
+         ],
+         "implementation": {
+             "python_snippet": """
+ class ContentRetrieval:
+     def __init__(self, data_sources):
+         self.data_sources = data_sources
+
+     async def retrieve(self, query, top_k=5, sources=None):
+         all_results = []
+
+         for source_name, source in self.data_sources.items():
+             if sources is None or source_name in sources:
+                 results = await source.search(query, top_k)
+                 all_results.extend(results)
+
+         # Rank and deduplicate results
+         ranked_results = self._rank_results(all_results, query)
+         return ranked_results[:top_k]
+
+     def _rank_results(self, results, query):
+         # Implement ranking algorithm
+         return sorted(results, key=lambda x: x.get('relevance_score', 0), reverse=True)
+             """,
+             "dependencies": ["numpy", "scikit-learn"],
+             "config": {
+                 "top_k": 5,
+                 "max_sources": 10,
+                 "relevance_threshold": 0.5
+             }
+         }
+     },
890
+
+     "PROMPT_TEMPLATE": {
+         "shape": "rect",
+         "color": "#2196F3",
+         "icon": "📝",
+         "description": [
+             "• Template-based prompt construction",
+             "• Supports variable substitution",
+             "• Handles different prompt formats",
+             "• Manages prompt versioning"
+         ],
+         "implementation": {
+             "python_snippet": """
+ from jinja2 import Template
+
+ class PromptTemplate:
+     def __init__(self, template_string):
+         self.template = Template(template_string)
+
+     async def format(self, **kwargs):
+         return self.template.render(**kwargs)
+
+     @classmethod
+     def load_from_file(cls, file_path):
+         with open(file_path, 'r') as f:
+             template_string = f.read()
+         return cls(template_string)
+
+     def validate_variables(self, required_vars):
+         # Validate that all required variables are provided
+         pass
+             """,
+             "dependencies": ["jinja2"],
+             "config": {
+                 "default_template": "You are a helpful assistant. User: {query}",
+                 "max_template_length": 5000
+             }
+         }
+     },
+
+     "RESPONSE_FORMATTER": {
+         "shape": "rect",
+         "color": "#2196F3",
+         "icon": "📄",
+         "description": [
+             "• Structures final output (JSON, XML, markdown, etc.)",
+             "• Applies formatting rules and styles",
+             "• Validates response structure",
+             "• Supports multiple output formats"
+         ],
+         "implementation": {
+             "python_snippet": """
+ class ResponseFormatter:
+     def __init__(self):
+         self.formatters = {
+             'json': self._format_json,
+             'xml': self._format_xml,
+             'markdown': self._format_markdown,
+             'text': self._format_text
+         }
+
+     async def format(self, data, format_type='json'):
+         formatter = self.formatters.get(format_type)
+         if formatter:
+             return formatter(data)
+         else:
+             raise ValueError(f"Unsupported format: {format_type}")
+
+     def _format_json(self, data):
+         import json
+         return json.dumps(data, indent=2)
+             """,
+             "dependencies": ["json", "xml.etree.ElementTree"],
+             "config": {
+                 "default_format": "json",
+                 "max_output_length": 10000
+             }
+         }
+     },
969
+
+     # ===================================
+     # ROUTER: Decision points and workflow routing
+     # ===================================
+     "ROUTER": {
+         "description": "Decision points and workflow routing",
+         "color": "#FF9800",
+         "icon": "🎯",
+         "shape": "diamond",
+         "sub_components": ["INTENT_DISCOVERY", "MODEL_SELECTOR", "WORKFLOW_ROUTER", "VALIDATOR"]
+     },
+
+     "INTENT_DISCOVERY": {
+         "shape": "diamond",
+         "color": "#FF9800",
+         "icon": "🎯",
+         "description": [
+             "• Identifies user intent from input",
+             "• Uses machine learning classification",
+             "• Handles intent confidence scoring",
+             "• Supports intent hierarchy"
+         ],
+         "implementation": {
+             "python_snippet": """
+ class IntentDiscovery:
+     def __init__(self, model_path):
+         self.model = self.load_model(model_path)
+
+     async def discover_intent(self, text):
+         # Classify intent using trained model
+         predictions = await self.model.predict(text)
+         top_intent = max(predictions, key=predictions.get)
+         confidence = predictions[top_intent]
+
+         return {
+             "intent": top_intent,
+             "confidence": confidence,
+             "all_predictions": predictions
+         }
+             """,
+             "dependencies": ["transformers", "torch"],
+             "config": {
+                 "confidence_threshold": 0.8,
+                 "fallback_intent": "unknown"
+             }
+         }
+     },
+
+     "MODEL_SELECTOR": {
+         "shape": "diamond",
+         "color": "#FF9800",
+         "icon": "🧠",
+         "description": [
+             "• Selects appropriate model based on task",
+             "• Considers task complexity and cost",
+             "• Handles model availability and load",
+             "• Supports A/B testing of models"
+         ],
+         "implementation": {
+             "python_snippet": """
+ class ModelSelector:
+     def __init__(self, models):
+         self.models = models
+         self.model_performance = {}
+
+     async def select_model(self, task_description, context=None):
+         # Select best model based on task requirements
+         suitable_models = self._filter_suitable_models(task_description)
+
+         # Choose based on performance metrics and availability
+         best_model = self._select_best_model(suitable_models)
+         return best_model
+
+     def _filter_suitable_models(self, task_description):
+         # Filter models based on task compatibility
+         return [model for model in self.models if model.can_handle(task_description)]
+             """,
+             "dependencies": ["numpy"],
+             "config": {
+                 "selection_strategy": "performance_weighted",
+                 "max_model_candidates": 5
+             }
+         }
+     },
+
+     "WORKFLOW_ROUTER": {
+         "shape": "diamond",
+         "color": "#FF9800",
+         "icon": "🔄",
+         "description": [
+             "• Routes requests through appropriate workflows",
+             "• Manages workflow state and transitions",
+             "• Handles parallel and sequential execution",
+             "• Supports workflow versioning"
+         ],
+         "implementation": {
+             "python_snippet": """
+ import uuid
+
+ class WorkflowRouter:
+     def __init__(self, workflows):
+         self.workflows = workflows
+         self.current_executions = {}
+
+     async def route(self, request, workflow_name=None):
+         if workflow_name:
+             workflow = self.workflows.get(workflow_name)
+         else:
+             workflow = await self._auto_select_workflow(request)
+
+         execution_id = str(uuid.uuid4())
+         self.current_executions[execution_id] = workflow
+
+         # Ensure the execution entry is cleared even if the workflow fails
+         try:
+             return await workflow.execute(request)
+         finally:
+             del self.current_executions[execution_id]
+             """,
+             "dependencies": ["uuid", "asyncio"],
+             "config": {
+                 "max_concurrent_workflows": 100,
+                 "workflow_timeout": 300
+             }
+         }
+     },
1092
+
+     "VALIDATOR": {
+         "shape": "diamond",
+         "color": "#FF9800",
+         "icon": "✅",
+         "description": [
+             "• Validates inputs, outputs, and intermediate results",
+             "• Implements schema and business rule validation",
+             "• Handles data quality checks",
+             "• Provides validation feedback"
+         ],
+         "implementation": {
+             "python_snippet": """
+ from pydantic import BaseModel, ValidationError
+
+ class Validator:
+     def __init__(self, schema_class: BaseModel):
+         self.schema_class = schema_class
+
+     async def validate(self, data):
+         try:
+             validated_data = self.schema_class(**data)
+             return {
+                 "valid": True,
+                 "data": validated_data.dict(),
+                 "errors": []
+             }
+         except ValidationError as e:
+             return {
+                 "valid": False,
+                 "data": None,
+                 "errors": e.errors()
+             }
+             """,
+             "dependencies": ["pydantic"],
+             "config": {
+                 "strict_validation": True,
+                 "validation_timeout": 10
+             }
+         }
+     },
+
+     # ===================================
+     # INFRASTRUCTURE: System services
+     # ===================================
+     "INFRASTRUCTURE": {
+         "description": "System infrastructure and services",
+         "color": "#FF5722",
+         "icon": "🌐",
+         "shape": "rect",
+         "sub_components": ["PROVIDER", "MONITOR", "FALLBACK", "ORCHESTRATOR"]
+     },
+
+     "PROVIDER": {
+         "shape": "rect",
+         "color": "#FF5722",
+         "icon": "🌐",
+         "description": [
+             "• API connection to LLM service",
+             "• Manages authentication and rate limits",
+             "• Handles retries and error recovery",
+             "• Tracks usage and costs"
+         ],
+         "implementation": {
+             "python_snippet": """
+ class LLMProvider:
+     def __init__(self, base_url: str, api_key: str = None, model: str = "default"):
+         self.base_url = base_url
+         self.api_key = api_key
+         self.model = model
+         self.client = AsyncOpenAI(base_url=base_url, api_key=api_key)
+         self.usage_tracker = UsageTracker()
+
+     async def generate(self, prompt: str, **kwargs) -> str:
+         try:
+             response = await self.client.chat.completions.create(
+                 model=self.model,
+                 messages=[{"role": "user", "content": prompt}],
+                 **kwargs
+             )
+             self.usage_tracker.record_usage(response.usage)
+             return response.choices[0].message.content
+         except Exception as e:
+             raise ProviderError(f"Generation failed: {e}")
+
+     def get_cost_estimate(self) -> float:
+         return self.usage_tracker.calculate_cost()
+             """,
+             "supported_providers": {
+                 "openai": {"base_url": "https://api.openai.com/v1", "models": ["gpt-4", "gpt-3.5-turbo"]},
+                 "anthropic": {"base_url": "https://api.anthropic.com/v1", "models": ["claude-3", "claude-2"]},
+                 "local": {"base_url": "http://localhost:1234/v1", "models": ["local-model"]},
+                 "azure": {"base_url": "https://your-resource.openai.azure.com/", "models": ["gpt-4", "gpt-35-turbo"]}
+             },
+             "config_template": {
+                 "base_url": "https://api.openai.com/v1",
+                 "api_key": "your-api-key-here",
+                 "model": "gpt-4",
+                 "max_retries": 3,
+                 "timeout": 30
+             }
+         }
+     },
1195
+
+     "MONITOR": {
+         "shape": "rect",
+         "color": "#FF5722",
+         "icon": "📊",
+         "description": [
+             "• Tracks system performance and metrics",
+             "• Monitors resource usage and errors",
+             "• Provides health checks and alerts",
+             "• Supports logging and analytics"
+         ],
+         "implementation": {
+             "python_snippet": """
+ import time
+ import logging
+ from collections import defaultdict
+
+ class Monitor:
+     def __init__(self):
+         self.metrics = defaultdict(list)
+         self.logger = logging.getLogger(__name__)
+
+     async def record_metric(self, name, value, timestamp=None):
+         if timestamp is None:
+             timestamp = time.time()
+         self.metrics[name].append((timestamp, value))
+
+     async def get_health_status(self):
+         recent_errors = [m for m in self.metrics['errors'] if time.time() - m[0] < 300]
+         avg_response_time = self._calculate_avg_time('response_time', 300)
+
+         return {
+             "status": "healthy" if len(recent_errors) == 0 else "degraded",
+             "recent_errors": len(recent_errors),
+             "avg_response_time": avg_response_time
+         }
+             """,
+             "dependencies": ["logging", "time"],
+             "config": {
+                 "metrics_retention_hours": 24,
+                 "alert_thresholds": {"error_rate": 0.05, "response_time": 5.0}
+             }
+         }
+     },
+
1240
+ "FALLBACK": {
1241
+ "shape": "rect",
1242
+ "color": "#FF5722",
1243
+ "icon": "🔄",
1244
+ "description": [
1245
+ "• Provides alternative execution paths",
1246
+ "• Handles primary system failures",
1247
+ "• Implements graceful degradation",
1248
+ "• Maintains service availability"
1249
+ ],
1250
+ "implementation": {
1251
+ "python_snippet": """
1252
+ class FallbackHandler:
1253
+ def __init__(self, primary_handler, fallback_handlers):
1254
+ self.primary = primary_handler
1255
+ self.fallbacks = fallback_handlers
1256
+
1257
+ async def execute_with_fallback(self, *args, **kwargs):
1258
+ try:
1259
+ return await self.primary(*args, **kwargs)
1260
+ except PrimaryError as e:
1261
+ self.logger.warning(f"Primary failed: {e}, trying fallbacks")
1262
+
1263
+ for fallback in self.fallbacks:
1264
+ try:
1265
+ return await fallback(*args, **kwargs)
1266
+ except FallbackError:
1267
+ continue
1268
+
1269
+ raise ServiceUnavailableError("All fallbacks exhausted")
1270
+ """,
1271
+ "dependencies": ["logging"],
1272
+ "config": {
1273
+ "max_fallback_attempts": 3,
1274
+ "fallback_timeout": 10
1275
+ }
1276
+ }
1277
+ },
1278
+
+     "ORCHESTRATOR": {
+         "shape": "rect",
+         "color": "#FF5722",
+         "icon": "🎬",
+         "description": [
+             "• Coordinates complex multi-step processes",
+             "• Manages component interactions",
+             "• Handles state and error propagation",
+             "• Supports distributed execution"
+         ],
+         "implementation": {
+             "python_snippet": """
+ class Orchestrator:
+     def __init__(self, components):
+         self.components = components
+         self.state = {}
+
+     async def orchestrate(self, workflow_definition, input_data):
+         current_state = input_data.copy()
+
+         for step in workflow_definition.steps:
+             component = self.components[step.component]
+             step_result = await component.execute(current_state, step.config)
+             current_state.update(step_result)
+
+         return current_state
+             """,
+             "dependencies": ["asyncio"],
+             "config": {
+                 "max_workflow_steps": 100,
+                 "step_timeout": 60
+             }
+         }
+     }
+ }
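The FALLBACK entry's snippet above leaves its exception types (`PrimaryError`, `FallbackError`) undefined. A self-contained sketch of the same try-primary-then-fallbacks pattern, with generic exceptions and hypothetical handler names, could look like this:

```python
import asyncio

# Illustrative handlers: the primary always fails, the fallback succeeds.
async def primary(payload):
    raise RuntimeError("primary down")

async def fallback_ok(payload):
    return f"handled:{payload}"

async def execute_with_fallback(primary_handler, fallbacks, *args):
    # Try the primary first, then each fallback in order; surface an
    # error only when every option is exhausted.
    last_error = None
    for handler in [primary_handler, *fallbacks]:
        try:
            return await handler(*args)
        except Exception as e:  # a real system would catch narrower types
            last_error = e
    raise RuntimeError("all handlers failed") from last_error

result = asyncio.run(execute_with_fallback(primary, [fallback_ok], "req-1"))
print(result)  # handled:req-1
```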
1314
+
+ # Hierarchical Component definitions
+ COMPONENT_HIERARCHY = {
+     "HIGH_LEVEL": {
+         "AGENT": {
+             "description": "Autonomous reasoning and decision-making units",
+             "color": "#4CAF50",
+             "icon": "🤖",
+             "shape": "rect",
+             "sub_components": ["REASONING_AGENT", "ACTION_AGENT", "PLANNER_AGENT", "REACT_AGENT", "MULTI_AGENT"]
+         },
+         "USER": {
+             "description": "User interaction points and interfaces",
+             "color": "#9C27B0",
+             "icon": "👤",
+             "shape": "ellipse",
+             "sub_components": ["USER_INPUT", "USER_OUTPUT", "MULTIMODAL_INTERFACE"]
+         },
+         "TOOL": {
+             "description": "External functions and capabilities",
+             "color": "#795548",
+             "icon": "🔧",
+             "shape": "hexagon",
+             "sub_components": ["MCP_TOOL", "API_TOOL", "LOCAL_TOOL", "AGENT_TOOL", "FUNCTION_TOOL"]
+         },
+         "DATA": {
+             "description": "Data sources and storage systems",
+             "color": "#009688",
+             "icon": "💾",
+             "shape": "cylinder",
+             "sub_components": ["KNOWLEDGE_BASE", "VECTOR_DB", "DOCUMENT_STORE", "CACHE", "MEMORY"]
+         },
+         "PROCESSOR": {
+             "description": "Data processing and transformation units",
+             "color": "#2196F3",
+             "icon": "⚙️",
+             "shape": "rect",
+             "sub_components": ["QUERY_PROCESSOR", "CONTENT_RETRIEVAL", "PROMPT_TEMPLATE", "RESPONSE_FORMATTER"]
+         },
+         "ROUTER": {
+             "description": "Decision points and workflow routing",
+             "color": "#FF9800",
+             "icon": "🎯",
+             "shape": "diamond",
+             "sub_components": ["INTENT_DISCOVERY", "MODEL_SELECTOR", "WORKFLOW_ROUTER", "VALIDATOR"]
+         },
+         "INFRASTRUCTURE": {
+             "description": "System infrastructure and services",
+             "color": "#FF5722",
+             "icon": "🌐",
+             "shape": "rect",
+             "sub_components": ["PROVIDER", "MONITOR", "FALLBACK", "ORCHESTRATOR"]
+         }
+     }
+ }
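A hierarchy shaped like COMPONENT_HIERARCHY can be inverted once into a flat sub-component-to-category lookup table, which is the same answer `_find_component_category` computes below by scanning the categories on every call. A sketch over an abbreviated excerpt of the hierarchy (the full dict would work identically):

```python
# Abbreviated excerpt of a {category: {"sub_components": [...]}} hierarchy.
HIGH_LEVEL = {
    "AGENT": {"sub_components": ["REASONING_AGENT", "ACTION_AGENT"]},
    "ROUTER": {"sub_components": ["INTENT_DISCOVERY", "VALIDATOR"]},
}

def build_category_index(hierarchy):
    # Map every name (category or sub-component) to its category.
    index = {}
    for category, spec in hierarchy.items():
        index[category] = category  # a category belongs to itself
        for sub in spec.get("sub_components", []):
            index[sub] = category
    return index

index = build_category_index(HIGH_LEVEL)
print(index["VALIDATOR"])  # ROUTER
```

Building the index once trades a little memory for constant-time lookups when many nodes are classified.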
1369
+
+ # Enhanced Example workflows
+ EXAMPLE_WORKFLOWS = {
+     "Simple Chat Agent": {
+         "description": "Basic conversational agent with single LLM call",
+         "nodes": [
+             {"id": "user_1", "type": "USER_INPUT", "x": 150, "y": 200},
+             {"id": "agent_1", "type": "REASONING_AGENT", "x": 400, "y": 200},
+             {"id": "provider_1", "type": "PROVIDER", "x": 650, "y": 200},
+             {"id": "output_1", "type": "USER_OUTPUT", "x": 900, "y": 200}
+         ],
+         "connections": [
+             {"from": "user_1", "to": "agent_1"},
+             {"from": "agent_1", "to": "provider_1"},
+             {"from": "provider_1", "to": "output_1"}
+         ]
+     },
+     "Intent-Driven Routing": {
+         "description": "Routes to specialized agents based on user intent",
+         "nodes": [
+             {"id": "user_1", "type": "USER_INPUT", "x": 150, "y": 300},
+             {"id": "intent_1", "type": "INTENT_DISCOVERY", "x": 400, "y": 300},
+             {"id": "agent_1", "type": "REASONING_AGENT", "x": 650, "y": 150},
+             {"id": "agent_2", "type": "ACTION_AGENT", "x": 650, "y": 450},
+             {"id": "output_1", "type": "USER_OUTPUT", "x": 900, "y": 300}
+         ],
+         "connections": [
+             {"from": "user_1", "to": "intent_1"},
+             {"from": "intent_1", "to": "agent_1"},
+             {"from": "intent_1", "to": "agent_2"},
+             {"from": "agent_1", "to": "output_1"},
+             {"from": "agent_2", "to": "output_1"}
+         ]
+     },
+     "RAG Pipeline": {
+         "description": "Retrieval-Augmented Generation with context",
+         "nodes": [
+             {"id": "user_1", "type": "USER_INPUT", "x": 100, "y": 250},
+             {"id": "query_1", "type": "QUERY_PROCESSOR", "x": 250, "y": 250},
+             {"id": "content_1", "type": "CONTENT_RETRIEVAL", "x": 400, "y": 250},
+             {"id": "prompt_1", "type": "PROMPT_TEMPLATE", "x": 550, "y": 250},
+             {"id": "agent_1", "type": "REASONING_AGENT", "x": 700, "y": 250},
+             {"id": "output_1", "type": "USER_OUTPUT", "x": 900, "y": 250}
+         ],
+         "connections": [
+             {"from": "user_1", "to": "query_1"},
+             {"from": "query_1", "to": "content_1"},
+             {"from": "content_1", "to": "prompt_1"},
+             {"from": "prompt_1", "to": "agent_1"},
+             {"from": "agent_1", "to": "output_1"}
+         ]
+     },
+     "Multi-Agent with Tools": {
+         "description": "Coordinated agents with tool access and validation",
+         "nodes": [
+             {"id": "user_1", "type": "USER_INPUT", "x": 100, "y": 300},
+             {"id": "intent_1", "type": "INTENT_DISCOVERY", "x": 280, "y": 300},
+             {"id": "agent_1", "type": "REASONING_AGENT", "x": 460, "y": 150},
+             {"id": "agent_2", "type": "ACTION_AGENT", "x": 460, "y": 450},
+             {"id": "tool_1", "type": "MCP_TOOL", "x": 640, "y": 150},
+             {"id": "tool_2", "type": "API_TOOL", "x": 640, "y": 450},
+             {"id": "validator_1", "type": "VALIDATOR", "x": 820, "y": 300},
+             {"id": "output_1", "type": "USER_OUTPUT", "x": 980, "y": 300}
+         ],
+         "connections": [
+             {"from": "user_1", "to": "intent_1"},
+             {"from": "intent_1", "to": "agent_1"},
+             {"from": "intent_1", "to": "agent_2"},
+             {"from": "agent_1", "to": "tool_1"},
+             {"from": "agent_2", "to": "tool_2"},
+             {"from": "tool_1", "to": "validator_1"},
+             {"from": "tool_2", "to": "validator_1"},
+             {"from": "validator_1", "to": "output_1"}
+         ]
+     },
+     "Advanced RAG with Cache": {
+         "description": "Enhanced RAG with caching and monitoring",
+         "nodes": [
+             {"id": "user_1", "type": "USER_INPUT", "x": 100, "y": 200},
+             {"id": "query_1", "type": "QUERY_PROCESSOR", "x": 250, "y": 200},
+             {"id": "cache_1", "type": "CACHE", "x": 400, "y": 100},
+             {"id": "knowledge_1", "type": "KNOWLEDGE_BASE", "x": 400, "y": 300},
+             {"id": "prompt_1", "type": "PROMPT_TEMPLATE", "x": 550, "y": 200},
+             {"id": "agent_1", "type": "REASONING_AGENT", "x": 700, "y": 200},
+             {"id": "monitor_1", "type": "MONITOR", "x": 850, "y": 100},
+             {"id": "output_1", "type": "USER_OUTPUT", "x": 850, "y": 300}
+         ],
+         "connections": [
+             {"from": "user_1", "to": "query_1"},
+             {"from": "query_1", "to": "cache_1"},
+             {"from": "query_1", "to": "knowledge_1"},
+             {"from": "cache_1", "to": "prompt_1"},
+             {"from": "knowledge_1", "to": "prompt_1"},
+             {"from": "prompt_1", "to": "agent_1"},
+             {"from": "agent_1", "to": "monitor_1"},
+             {"from": "agent_1", "to": "output_1"}
+         ]
+     },
+     "MCP Tool Agent": {
+         "description": "Agent using MCP tools for extended capabilities",
+         "nodes": [
+             {"id": "user_1", "type": "USER_INPUT", "x": 100, "y": 250},
+             {"id": "agent_1", "type": "REACT_AGENT", "x": 300, "y": 250},
+             {"id": "mcp_tool_1", "type": "MCP_TOOL", "x": 500, "y": 150},
+             {"id": "mcp_tool_2", "type": "MCP_TOOL", "x": 500, "y": 350},
+             {"id": "memory_1", "type": "MEMORY", "x": 700, "y": 250},
+             {"id": "output_1", "type": "USER_OUTPUT", "x": 900, "y": 250}
+         ],
+         "connections": [
+             {"from": "user_1", "to": "agent_1"},
+             {"from": "agent_1", "to": "mcp_tool_1"},
+             {"from": "agent_1", "to": "mcp_tool_2"},
+             {"from": "mcp_tool_1", "to": "agent_1"},
+             {"from": "mcp_tool_2", "to": "agent_1"},
+             {"from": "agent_1", "to": "memory_1"},
+             {"from": "agent_1", "to": "output_1"}
+         ]
+     }
+ }
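Workflow definitions in this nodes-plus-connections shape can be linearized into an execution order with a topological sort. A sketch using Kahn's algorithm over a trimmed copy of the "RAG Pipeline" example (note that workflows with feedback edges, like the MCP Tool Agent's tool-to-agent loops, have no such ordering and would need loop-aware scheduling instead):

```python
from collections import deque

# Node ids and edges mirror the "RAG Pipeline" example workflow.
workflow = {
    "nodes": ["user_1", "query_1", "content_1", "prompt_1", "agent_1", "output_1"],
    "connections": [
        {"from": "user_1", "to": "query_1"},
        {"from": "query_1", "to": "content_1"},
        {"from": "content_1", "to": "prompt_1"},
        {"from": "prompt_1", "to": "agent_1"},
        {"from": "agent_1", "to": "output_1"},
    ],
}

def execution_order(workflow):
    # Kahn's algorithm: repeatedly emit nodes whose dependencies are done.
    indegree = {n: 0 for n in workflow["nodes"]}
    successors = {n: [] for n in workflow["nodes"]}
    for conn in workflow["connections"]:
        successors[conn["from"]].append(conn["to"])
        indegree[conn["to"]] += 1
    ready = deque(n for n, d in indegree.items() if d == 0)
    order = []
    while ready:
        node = ready.popleft()
        order.append(node)
        for nxt in successors[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                ready.append(nxt)
    if len(order) != len(workflow["nodes"]):
        raise ValueError("workflow contains a cycle")
    return order

print(execution_order(workflow))
```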
1488
+
+ @dataclass
+ class ComponentData:
+     """Complete component information"""
+     type: str
+     shape: str
+     color: str
+     icon: str
+     description: List[str]
+     category: Optional[str] = None
+     sub_category: Optional[str] = None
+
+ @dataclass
+ class AgentNode:
+     id: str
+     type: str
+     x: int
+     y: int
+     component_data: ComponentData = field(default_factory=lambda: ComponentData("", "", "", "", []))
+
+ @dataclass
+ class Connection:
+     from_node: str
+     to_node: str
+
+ class CustomNodeManager:
+     def __init__(self, storage_path: str = "custom_nodes.pkl"):
+         self.storage_path = storage_path
+         self.custom_nodes: Dict[str, Dict[str, Any]] = {}
+         self.load_custom_nodes()
+
+     def load_custom_nodes(self):
+         """Load custom nodes from storage"""
+         if os.path.exists(self.storage_path):
+             try:
+                 with open(self.storage_path, 'rb') as f:
+                     self.custom_nodes = pickle.load(f)
+             except Exception as e:
+                 print(f"Error loading custom nodes: {e}")
+                 self.custom_nodes = {}
+
+     def save_custom_nodes(self):
+         """Save custom nodes to storage"""
+         try:
+             with open(self.storage_path, 'wb') as f:
+                 pickle.dump(self.custom_nodes, f)
+         except Exception as e:
+             print(f"Error saving custom nodes: {e}")
+
+     def create_custom_node(self, name: str, config: Dict[str, Any]):
+         """Create a new custom node"""
+         node_id = f"custom_{name.lower().replace(' ', '_')}"
+         self.custom_nodes[node_id] = {
+             "id": node_id,
+             "name": name,
+             "type": "CUSTOM",
+             "config": config,
+             "created_at": __import__('datetime').datetime.now().isoformat()
+         }
+         self.save_custom_nodes()
+         return node_id
+
+     def get_custom_node_info(self, node_id: str) -> Dict[str, Any]:
+         """Get information for a custom node"""
+         return self.custom_nodes.get(node_id, {})
+
+     def delete_custom_node(self, node_id: str):
+         """Delete a custom node"""
+         if node_id in self.custom_nodes:
+             del self.custom_nodes[node_id]
+             self.save_custom_nodes()
+
+ # Initialize custom node manager
+ custom_node_manager = CustomNodeManager()
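CustomNodeManager persists its registry as a pickle file. A minimal round-trip of that persistence scheme, written against a temporary path rather than the class's default `custom_nodes.pkl`:

```python
import os
import pickle
import tempfile

# Write a registry-shaped dict to disk and read it back, as
# save_custom_nodes/load_custom_nodes do.
path = os.path.join(tempfile.mkdtemp(), "custom_nodes.pkl")
nodes = {"custom_sentiment": {"name": "Sentiment", "type": "CUSTOM", "config": {}}}

with open(path, "wb") as f:
    pickle.dump(nodes, f)

with open(path, "rb") as f:
    restored = pickle.load(f)

print(restored == nodes)  # True
```

Pickle keeps the save/load code tiny, at the cost of a Python-only, trust-required file format; JSON would be a safer swap if the configs stay serializable.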
1562
+
+ class WorkflowDesigner:
+     def __init__(self):
+         self.nodes: Dict[str, AgentNode] = {}
+         self.connections: List[Connection] = []
+         self.node_counter = 0
+         self.selected_node: Optional[str] = None
+
+     def select_node(self, node_id: str) -> None:
+         """Select a node and deselect others"""
+         self.selected_node = node_id if node_id in self.nodes else None
+
+     def move_selected_node(self, dx: int, dy: int) -> None:
+         """Move selected node by delta"""
+         if self.selected_node and self.selected_node in self.nodes:
+             node = self.nodes[self.selected_node]
+             node.x = max(0, node.x + dx)
+             node.y = max(0, node.y + dy)
+
+     def add_custom_node(self, custom_config: Dict[str, Any]) -> AgentNode:
+         """Add a custom node to the workflow"""
+         self.node_counter += 1
+         node_id = f"custom_{self.node_counter}"
+
+         # Create custom node configuration
+         custom_node_config = {
+             "shape": custom_config.get("shape", "rect"),
+             "color": custom_config.get("color", "#666666"),
+             "icon": custom_config.get("icon", "🔧"),
+             "description": custom_config.get("description", ["Custom node"]),
+             "implementation": custom_config.get("implementation", {})
+         }
+
+         # Add to COMPONENT_INFO for rendering
+         COMPONENT_INFO[node_id] = custom_node_config
+
+         col = len(self.nodes) % 3
+         row = len(self.nodes) // 3
+         x_pos = 200 + (col * 350)
+         y_pos = 150 + (row * 200)
+
+         node = AgentNode(
+             id=node_id,
+             type=node_id,  # Use node_id as type for custom nodes
+             x=x_pos,
+             y=y_pos
+         )
+
+         self.nodes[node_id] = node
+         self.selected_node = node_id
+         return node
+
+     def add_node(self, node_type: str) -> AgentNode:
+         self.node_counter += 1
+         node_id = f"{node_type}_{self.node_counter}"
+
+         col = len(self.nodes) % 3
+         row = len(self.nodes) // 3
+         x_pos = 200 + (col * 350)
+         y_pos = 150 + (row * 200)
+
+         # Get complete component information
+         component_info = COMPONENT_INFO.get(node_type, {
+             "shape": "rect",
+             "color": "#666666",
+             "icon": "❓",
+             "description": ["Unknown component type"]
+         })
+
+         # Create component data with full information
+         component_data = ComponentData(
+             type=node_type,
+             shape=component_info["shape"],
+             color=component_info["color"],
+             icon=component_info["icon"],
+             description=component_info["description"],
+             category=self._find_component_category(node_type),
+             sub_category=self._find_component_sub_category(node_type)
+         )
+
+         node = AgentNode(
+             id=node_id,
+             type=node_type,
+             x=x_pos,
+             y=y_pos,
+             component_data=component_data
+         )
+
+         self.nodes[node_id] = node
+         self.selected_node = node_id
+         return node
+
+     def _find_component_category(self, node_type: str) -> Optional[str]:
+         """Find which high-level category this component belongs to"""
+         for category, components in COMPONENT_HIERARCHY["HIGH_LEVEL"].items():
+             if node_type == category or node_type in components.get('sub_components', []):
+                 return category
+         return None
+
+     def _find_component_sub_category(self, node_type: str) -> Optional[str]:
+         """Determine if this is a high-level or sub-component"""
+         for category, components in COMPONENT_HIERARCHY["HIGH_LEVEL"].items():
+             if node_type == category:
+                 return "HIGH_LEVEL"
+             elif node_type in components.get('sub_components', []):
+                 return "SUB_COMPONENT"
+         return None
+
+     def load_example(self, example_name: str):
+         if example_name not in EXAMPLE_WORKFLOWS:
+             return
+
+         example = EXAMPLE_WORKFLOWS[example_name]
+         self.nodes.clear()
+         self.connections.clear()
+
+         for node_data in example["nodes"]:
+             node_type = node_data["type"]
+
+             # Get complete component information for example nodes too
+             component_info = COMPONENT_INFO.get(node_type, {
+                 "shape": "rect",
+                 "color": "#666666",
+                 "icon": "❓",
+                 "description": ["Unknown component type"]
+             })
+
+             component_data = ComponentData(
+                 type=node_type,
+                 shape=component_info["shape"],
+                 color=component_info["color"],
+                 icon=component_info["icon"],
+                 description=component_info["description"],
+                 category=self._find_component_category(node_type),
+                 sub_category=self._find_component_sub_category(node_type)
+             )
+
+             node = AgentNode(
+                 id=node_data["id"],
+                 type=node_type,
+                 x=node_data["x"],
+                 y=node_data["y"],
+                 component_data=component_data
+             )
+             self.nodes[node.id] = node
+
+         for conn_data in example["connections"]:
+             conn = Connection(
+                 from_node=conn_data["from"],
+                 to_node=conn_data["to"]
+             )
+             self.connections.append(conn)
+
+         if self.nodes:
+             self.selected_node = list(self.nodes.keys())[0]
+
1744
+     def get_workflow_json(self) -> Dict[str, Any]:
+         """Get complete workflow data including full component information"""
+         return {
+             "metadata": {
+                 "total_nodes": len(self.nodes),
+                 "total_connections": len(self.connections),
+                 "selected_node": self.selected_node,
+                 "generated_with": "Agent Workflow Designer"
+             },
+             "nodes": [
+                 {
+                     "id": node.id,
+                     "type": node.type,
+                     "position": {"x": node.x, "y": node.y},
+                     "component_data": {
+                         "type": node.component_data.type,
+                         "shape": node.component_data.shape,
+                         "color": node.component_data.color,
+                         "icon": node.component_data.icon,
+                         "description": node.component_data.description,
+                         "category": node.component_data.category,
+                         "sub_category": node.component_data.sub_category
+                     }
+                 }
+                 for node in self.nodes.values()
+             ],
+             "connections": [
+                 {
+                     "from": conn.from_node,
+                     "to": conn.to_node
+                 }
+                 for conn in self.connections
+             ]
+         }
1778
+
+     def render_svg(self) -> str:
+         """Render workflow as beautiful SVG with selection support"""
+         if not self.nodes:
+             return '''
+             <svg width="1200" height="600" style="border-radius: 12px;">
+                 <defs>
+                     <linearGradient id="bg" x1="0%" y1="0%" x2="100%" y2="100%">
+                         <stop offset="0%" style="stop-color:#667eea;stop-opacity:1" />
+                         <stop offset="100%" style="stop-color:#764ba2;stop-opacity:1" />
+                     </linearGradient>
+                 </defs>
+                 <rect width="1200" height="600" fill="url(#bg)"/>
+                 <text x="600" y="280" text-anchor="middle" fill="white" font-size="32" font-weight="bold">🚀 Start Building Your Workflow</text>
+                 <text x="600" y="320" text-anchor="middle" fill="white" font-size="18" opacity="0.9">Add components from the library on the left</text>
+             </svg>
+             '''
+
+         width = 1200
+         height = max(600, max([n.y for n in self.nodes.values()], default=0) + 200)
+
+         svg_parts = [
+             f'<svg width="{width}" height="{height}" style="border-radius: 12px; cursor: pointer;">',
+             '<defs>',
+             '<linearGradient id="bg" x1="0%" y1="0%" x2="100%" y2="100%">',
+             '<stop offset="0%" style="stop-color:#667eea;stop-opacity:1" />',
+             '<stop offset="100%" style="stop-color:#764ba2;stop-opacity:1" />',
+             '</linearGradient>',
+             '<marker id="arrowhead" markerWidth="12" markerHeight="12" refX="11" refY="3" orient="auto">',
+             '<polygon points="0 0, 12 3, 0 6" fill="white" opacity="0.9"/>',
+             '</marker>',
+             '<filter id="glow">',
+             '<feGaussianBlur stdDeviation="3" result="coloredBlur"/>',
+             '<feMerge><feMergeNode in="coloredBlur"/><feMergeNode in="SourceGraphic"/></feMerge>',
+             '</filter>',
+             '<filter id="shadow">',
+             '<feDropShadow dx="0" dy="4" stdDeviation="4" flood-opacity="0.3"/>',
+             '</filter>',
+             '<filter id="selected-glow">',
+             '<feGaussianBlur stdDeviation="5" result="coloredBlur"/>',
+             '<feMerge><feMergeNode in="coloredBlur"/><feMergeNode in="SourceGraphic"/></feMerge>',
+             '</filter>',
+             '</defs>',
+             '<rect width="100%" height="100%" fill="url(#bg)"/>'
+         ]
+
+         # Draw connections with glow
+         for conn in self.connections:
+             if conn.from_node in self.nodes and conn.to_node in self.nodes:
+                 from_node = self.nodes[conn.from_node]
+                 to_node = self.nodes[conn.to_node]
+
+                 from_x = from_node.x + 85
+                 from_y = from_node.y + 60
+                 to_x = to_node.x + 15
+                 to_y = to_node.y + 60
+
+                 mid_x = (from_x + to_x) / 2
+
+                 # Glow path
+                 svg_parts.append(
+                     f'<path d="M {from_x} {from_y} C {mid_x} {from_y}, {mid_x} {to_y}, {to_x} {to_y}" '
+                     f'stroke="white" stroke-width="8" fill="none" opacity="0.3" filter="url(#glow)"/>'
+                 )
+
+                 # Main path
+                 svg_parts.append(
+                     f'<path d="M {from_x} {from_y} C {mid_x} {from_y}, {mid_x} {to_y}, {to_x} {to_y}" '
+                     f'stroke="white" stroke-width="3" fill="none" opacity="0.8" marker-end="url(#arrowhead)"/>'
+                 )
+
+         # Draw nodes with selection support
+         for node in self.nodes.values():
+             # Use the stored component data instead of looking it up
+             shape = node.component_data.shape
+             color = node.component_data.color
+             icon = node.component_data.icon
+
+             cx = node.x + 85
+             cy = node.y + 60
+             label = node.id.replace("_", " ").title()
+
+             is_selected = (node.id == self.selected_node)
+             selection_glow = 'filter="url(#selected-glow)"' if is_selected else 'filter="url(#shadow)"'
+             selection_stroke = "6" if is_selected else "4"
+
+             # Node background with selection highlight
+             if shape == "ellipse":
+                 svg_parts.append(
+                     f'<ellipse cx="{cx}" cy="{cy}" rx="80" ry="50" '
+                     f'fill="white" stroke="{color}" stroke-width="{selection_stroke}" {selection_glow} '
+                     f'class="node" id="node_{node.id}" style="cursor: move;"/>'
+                 )
+             elif shape == "diamond":
+                 size = 70
+                 points = f"{cx},{cy-size} {cx+size},{cy} {cx},{cy+size} {cx-size},{cy}"
+                 svg_parts.append(
+                     f'<polygon points="{points}" '
1876
+ f'fill="white" stroke="{color}" stroke-width="{selection_stroke}" {selection_glow} '
1877
+ f'class="node" id="node_{node.id}" style="cursor: move;"/>'
1878
+ )
1879
+ elif shape == "hexagon":
1880
+ w, h = 70, 50
1881
+ points = f"{cx-w},{cy-h/2} {cx-w/2},{cy-h} {cx+w/2},{cy-h} {cx+w},{cy-h/2} {cx+w},{cy+h/2} {cx+w/2},{cy+h} {cx-w/2},{cy+h} {cx-w},{cy+h/2}"
1882
+ svg_parts.append(
1883
+ f'<polygon points="{points}" '
1884
+ f'fill="white" stroke="{color}" stroke-width="{selection_stroke}" {selection_glow} '
1885
+ f'class="node" id="node_{node.id}" style="cursor: move;"/>'
1886
+ )
1887
+ elif shape == "cylinder":
1888
+ svg_parts.append(
1889
+ f'<ellipse cx="{cx}" cy="{cy-35}" rx="70" ry="18" '
1890
+ f'fill="white" stroke="{color}" stroke-width="3"/>'
1891
+ )
1892
+ svg_parts.append(
1893
+ f'<rect x="{cx-70}" y="{cy-35}" width="140" height="70" '
1894
+ f'fill="white" stroke="none"/>'
1895
+ )
1896
+ svg_parts.append(
1897
+ f'<line x1="{cx-70}" y1="{cy-35}" x2="{cx-70}" y2="{cy+35}" '
1898
+ f'stroke="{color}" stroke-width="3"/>'
1899
+ )
1900
+ svg_parts.append(
1901
+ f'<line x1="{cx+70}" y1="{cy-35}" x2="{cx+70}" y2="{cy+35}" '
1902
+ f'stroke="{color}" stroke-width="3"/>'
1903
+ )
1904
+ svg_parts.append(
1905
+ f'<ellipse cx="{cx}" cy="{cy+35}" rx="70" ry="18" '
1906
+ f'fill="white" stroke="{color}" stroke-width="3" {selection_glow} '
1907
+ f'class="node" id="node_{node.id}" style="cursor: move;"/>'
1908
+ )
1909
+ else: # rect
1910
+ svg_parts.append(
1911
+ f'<rect x="{cx-80}" y="{cy-45}" width="160" height="90" rx="12" '
1912
+ f'fill="white" stroke="{color}" stroke-width="{selection_stroke}" {selection_glow} '
1913
+ f'class="node" id="node_{node.id}" style="cursor: move;"/>'
1914
+ )
1915
+
1916
+ # Icon
1917
+ svg_parts.append(
1918
+ f'<text x="{cx}" y="{cy-10}" text-anchor="middle" font-size="36">{icon}</text>'
1919
+ )
1920
+
1921
+ # Label
1922
+ svg_parts.append(
1923
+ f'<text x="{cx}" y="{cy+25}" text-anchor="middle" '
1924
+ f'fill="#333" font-size="13" font-weight="600">{label}</text>'
1925
+ )
1926
+
1927
+         # Add JavaScript for drag and drop
+         svg_parts.append('''
+         <script>
+         // Node selection and drag functionality
+         let selectedNode = null;
+         let isDragging = false;
+         let startX, startY;
+         let originalX, originalY;
+ 
+         // Add click handlers for all nodes
+         document.querySelectorAll('.node').forEach(node => {
+             node.addEventListener('click', (e) => {
+                 e.stopPropagation();
+                 const nodeId = node.id.replace('node_', '');
+                 selectNode(nodeId);
+             });
+ 
+             node.addEventListener('mousedown', startDrag);
+         });
+ 
+         // Click on canvas to deselect
+         document.querySelector('svg').addEventListener('click', (e) => {
+             if (e.target.tagName === 'svg') {
+                 selectedNode = null;
+                 updateSelection();
+             }
+         });
+ 
+         function selectNode(nodeId) {
+             selectedNode = nodeId;
+             updateSelection();
+ 
+             // Notify Gradio about selection
+             if (window.gradio_api) {
+                 window.gradio_api('select_node', nodeId);
+             }
+         }
+ 
+         function updateSelection() {
+             // Visual feedback handled by server-side re-render
+             // This will trigger when we call back to Python
+         }
+ 
+         function startDrag(e) {
+             // Select the pressed node here: mousedown fires before click,
+             // so selectedNode would otherwise still be null on the first drag
+             selectedNode = e.currentTarget.id.replace('node_', '');
+ 
+             isDragging = true;
+             startX = e.clientX;
+             startY = e.clientY;
+ 
+             const node = document.getElementById('node_' + selectedNode);
+             const transform = node.getAttribute('transform') || '';
+             const match = transform.match(/translate\\(([^,]+),([^)]+)\\)/);
+             originalX = match ? parseFloat(match[1]) : 0;
+             originalY = match ? parseFloat(match[2]) : 0;
+ 
+             document.addEventListener('mousemove', doDrag);
+             document.addEventListener('mouseup', stopDrag);
+             e.preventDefault();
+         }
+ 
+         function doDrag(e) {
+             if (!isDragging || !selectedNode) return;
+ 
+             const dx = e.clientX - startX;
+             const dy = e.clientY - startY;
+ 
+             const node = document.getElementById('node_' + selectedNode);
+             node.setAttribute('transform', `translate(${originalX + dx}, ${originalY + dy})`);
+         }
+ 
+         function stopDrag(e) {
+             if (!isDragging || !selectedNode) return;
+ 
+             const dx = e.clientX - startX;
+             const dy = e.clientY - startY;
+ 
+             // Final position update to Gradio
+             if (window.gradio_api && (Math.abs(dx) > 5 || Math.abs(dy) > 5)) {
+                 window.gradio_api('move_node', {
+                     node_id: selectedNode,
+                     dx: Math.round(dx),
+                     dy: Math.round(dy)
+                 });
+             }
+ 
+             isDragging = false;
+             document.removeEventListener('mousemove', doDrag);
+             document.removeEventListener('mouseup', stopDrag);
+         }
+         </script>
+         ''')
+ 
+         svg_parts.append('</svg>')
+         return '\n'.join(svg_parts)
+ 
+ workflow = WorkflowDesigner()
+ 
+ # Report generation class
+ class WorkflowReporter:
+     def __init__(self):
+         try:
+             self.client = AsyncOpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")
+         except Exception as e:
+             # Keep the attribute defined so generate_report can fail gracefully
+             self.client = None
+             print("LM Studio client init failed:", e)
+ 
+     async def generate_report(self, workflow_json: str) -> str:
+         if self.client is None:
+             return "Error generating report: LM Studio client is not available."
+ 
+         prompt = f"""
+ Generate a comprehensive system design report based on the following workflow:
+ {workflow_json}
+ 
+ The report should include a detailed report and system brief, with full examples and implementations where possible, an explanation of requirements where the workflow is complex and needs further deconstruction, and example usages:
+ 1. A high-level system overview
+ 2. User stories for each component or connection expectation
+ 3. Use case briefs for each component interaction and component relationship
+ 4. Pseudocode for the implementation of each component and for the overall workflow
+ 5. Component responsibilities and interfaces
+ 6. Data flow description and example use-cases
+ """
+ 
+         try:
+             response = await self.client.chat.completions.create(
+                 model="leroydyer/qwen/qwen3-0.6b-q4_k_m.gguf",
+                 messages=[{"role": "user", "content": prompt}],
+                 temperature=0.7,
+                 max_tokens=2048
+             )
+             return response.choices[0].message.content
+         except Exception as e:
+             return f"Error generating report: {str(e)}"
+ 
+ # Initialize reporter
+ reporter = WorkflowReporter()
+ 
+ def create_workflow_ui():
+     with gr.Blocks(title="Agent Workflow Designer", theme=gr.themes.Soft()) as demo:
+         gr.Markdown("# 🎓 Agentic System Workflow Designer")
+         gr.Markdown("**Educational tool for planning and understanding agent architectures**")
+ 
+         # Hidden components for JavaScript communication
+         select_node_trigger = gr.Textbox(visible=False)
+         move_node_trigger = gr.Textbox(visible=False)
+ 
+         # Define all UI components first
+         with gr.Row():
+             # Left Sidebar - Component Library
+             with gr.Column(scale=1):
+                 gr.Markdown("## 📚 Component Library")
+ 
+                 # Store component buttons for later connection
+                 component_buttons = []
+ 
+                 # High-level component accordions
+                 for category, components in COMPONENT_HIERARCHY["HIGH_LEVEL"].items():
+                     with gr.Accordion(f"{components['icon']} {category}", open=False):
+                         # High-level component button
+                         high_level_btn = gr.Button(
+                             f"{components['icon']} {category}",
+                             size="sm",
+                             variant="primary"
+                         )
+                         component_buttons.append((high_level_btn, category))
+ 
+                         # Sub-components
+                         if components['sub_components']:
+                             gr.Markdown("**Sub-components:**")
+                             for sub_comp in components['sub_components']:
+                                 sub_info = COMPONENT_INFO[sub_comp]
+                                 sub_btn = gr.Button(
+                                     f"{sub_info['icon']} {sub_comp.replace('_', ' ').title()}",
+                                     size="sm"
+                                 )
+                                 component_buttons.append((sub_btn, sub_comp))
+ 
+                 gr.Markdown("---")
+                 gr.Markdown("## 🔗 Connect Nodes")
+                 from_node = gr.Dropdown(label="From", choices=[], interactive=True)
+                 to_node = gr.Dropdown(label="To", choices=[], interactive=True)
+                 connect_btn = gr.Button("➡️ Connect", variant="secondary")
+ 
+                 gr.Markdown("---")
+                 gr.Markdown("## 📋 Examples")
+                 example_dropdown = gr.Dropdown(
+                     choices=list(EXAMPLE_WORKFLOWS.keys()),
+                     label="Load Example Workflow",
+                     interactive=True
+                 )
+                 load_example_btn = gr.Button("📥 Load Example")
+ 
+                 gr.Markdown("---")
+                 with gr.Row():
+                     download_json_btn = gr.Button("💾 Download JSON", variant="primary", size="sm")
+                     download_svg_btn = gr.Button("🖼️ Download SVG", variant="primary", size="sm")
+                     clear_btn = gr.Button("🗑️ Clear All", variant="stop", size="sm")
+ 
+                 # Output for multiple downloadable files
+                 download_files = gr.Files(label="📥 Download Files", visible=True)
+ 
+             # Center - Canvas
+             with gr.Column(scale=3):
+                 gr.Markdown("## 🎨 Workflow Canvas")
+                 gr.Markdown("**💡 Tip:** Click nodes to select, then drag or use arrow keys")
+                 canvas = gr.HTML()
+ 
+                 gr.Markdown("## 📖 Component Information")
+                 component_info = gr.Markdown("Select a component to see its description")
+ 
+             # Right Sidebar - Movement Controls
+             with gr.Column(scale=1):
+                 gr.Markdown("## 🎯 Selection & Movement")
+ 
+                 gr.Markdown("**Navigation:**")
+                 with gr.Row():
+                     select_prev_btn = gr.Button("⬅️ Prev", size="sm")
+                     select_next_btn = gr.Button("➡️ Next", size="sm")
+                     deselect_btn = gr.Button("❌ Deselect", size="sm")
+ 
+                 gr.Markdown("**Selected Node:**")
+                 selected_node_info = gr.Markdown("No node selected")
+ 
+                 gr.Markdown("**Move Selected:**")
+                 with gr.Row():
+                     move_left_btn = gr.Button("⬅️", size="sm")
+                     move_up_btn = gr.Button("⬆️", size="sm")
+                     move_down_btn = gr.Button("⬇️", size="sm")
+                     move_right_btn = gr.Button("➡️", size="sm")
+ 
+                 gr.Markdown("**Movement Modes:**")
+                 with gr.Row():
+                     move_fine_btn = gr.Button("🎯 Fine (5px)", size="sm")
+                     move_coarse_btn = gr.Button("🚀 Coarse (50px)", size="sm")
+ 
+                 gr.Markdown("---")
+                 with gr.Accordion("🗑️ Delete Selected", open=False):
+                     delete_selected_btn = gr.Button("❌ Delete Selected Node", variant="stop", size="sm")
+ 
+                 gr.Markdown("---")
+                 with gr.Accordion("📊 Workflow Data", open=False):
+                     json_output = gr.Code(language="json", label="Workflow JSON", lines=10)
+ 
+                 gr.Markdown("---")
+                 with gr.Accordion("📋 Generate Report", open=False):
+                     report_btn = gr.Button("📄 Generate System Report", variant="primary")
+                     report_output = gr.Textbox(label="System Design Report", lines=15, interactive=False)
+                     download_report_btn = gr.Button("📝 Download Report", variant="secondary", size="sm")
+ 
+         # Now define all the handler functions after UI components are defined
+         def get_full_state():
+             svg = workflow.render_svg()
+             node_choices = list(workflow.nodes.keys())
+             workflow_json = json.dumps({
+                 "nodes": [asdict(n) for n in workflow.nodes.values()],
+                 "connections": [asdict(c) for c in workflow.connections],
+                 "selected_node": workflow.selected_node
+             }, indent=2)
+ 
+             selected_info = "**No node selected**"
+             comp_info = "Select a component to see its description"
+ 
+             if workflow.selected_node and workflow.selected_node in workflow.nodes:
+                 node = workflow.nodes[workflow.selected_node]
+                 info = COMPONENT_INFO[node.type]
+                 selected_info = f"**Selected:** `{node.id}` ({info['icon']} {node.type.replace('_', ' ').title()}) at position ({node.x}, {node.y})"
+                 comp_info = f"### {node.type.replace('_', ' ').title()} {info['icon']}\n\n" + "\n".join(info["description"])
+ 
+             return svg, workflow_json, node_choices, selected_info, comp_info
+ 
+         def add_node_handler(node_type):
+             node = workflow.add_node(node_type)
+             svg, wf_json, choices, selected_info, comp_info = get_full_state()
+             return svg, wf_json, gr.Dropdown(choices=choices), gr.Dropdown(choices=choices), selected_info, comp_info
+ 
+         # Connect all component buttons
+         for btn, comp_type in component_buttons:
+             btn.click(
+                 lambda ct=comp_type: add_node_handler(ct),
+                 outputs=[canvas, json_output, from_node, to_node, selected_node_info, component_info]
+             )
+ 
+         # Selection handlers
+         def select_node_handler(node_id):
+             if node_id:
+                 workflow.select_node(node_id)
+                 svg, wf_json, choices, selected_info, comp_info = get_full_state()
+                 return svg, wf_json, selected_info, comp_info
+             return workflow.render_svg(), "", "No node selected", "Select a component to see its description"
+ 
+         def select_next_node():
+             if workflow.nodes:
+                 node_ids = list(workflow.nodes.keys())
+                 if not workflow.selected_node:
+                     workflow.selected_node = node_ids[0]
+                 else:
+                     current_idx = node_ids.index(workflow.selected_node)
+                     next_idx = (current_idx + 1) % len(node_ids)
+                     workflow.selected_node = node_ids[next_idx]
+ 
+             svg, wf_json, choices, selected_info, comp_info = get_full_state()
+             return svg, wf_json, selected_info, comp_info
+ 
+         def select_prev_node():
+             if workflow.nodes:
+                 node_ids = list(workflow.nodes.keys())
+                 if not workflow.selected_node:
+                     workflow.selected_node = node_ids[-1]
+                 else:
+                     current_idx = node_ids.index(workflow.selected_node)
+                     prev_idx = (current_idx - 1) % len(node_ids)
+                     workflow.selected_node = node_ids[prev_idx]
+ 
+             svg, wf_json, choices, selected_info, comp_info = get_full_state()
+             return svg, wf_json, selected_info, comp_info
+ 
+         def deselect_all():
+             workflow.selected_node = None
+             svg, wf_json, choices, selected_info, comp_info = get_full_state()
+             return svg, wf_json, selected_info, comp_info
+ 
+         # Delete selected node
+         def delete_selected_node():
+             if workflow.selected_node and workflow.selected_node in workflow.nodes:
+                 workflow.connections = [
+                     c for c in workflow.connections
+                     if c.from_node != workflow.selected_node and c.to_node != workflow.selected_node
+                 ]
+                 del workflow.nodes[workflow.selected_node]
+                 workflow.selected_node = None
+ 
+             svg, wf_json, choices, selected_info, comp_info = get_full_state()
+             return svg, wf_json, gr.Dropdown(choices=choices), gr.Dropdown(choices=choices), selected_info, comp_info
+ 
+         # Movement handlers
+         def move_selected_node(dx, dy):
+             if workflow.selected_node:
+                 workflow.move_selected_node(dx, dy)
+                 svg, wf_json, choices, selected_info, comp_info = get_full_state()
+                 return svg, wf_json, selected_info, comp_info
+             return workflow.render_svg(), "", "No node selected", component_info.value
+ 
+         # Connect selection events
+         select_node_trigger.change(
+             select_node_handler,
+             inputs=[select_node_trigger],
+             outputs=[canvas, json_output, selected_node_info, component_info]
+         )
+ 
+         select_next_btn.click(select_next_node, outputs=[canvas, json_output, selected_node_info, component_info])
+         select_prev_btn.click(select_prev_node, outputs=[canvas, json_output, selected_node_info, component_info])
+         deselect_btn.click(deselect_all, outputs=[canvas, json_output, selected_node_info, component_info])
+ 
+         # Movement buttons
+         move_left_btn.click(lambda: move_selected_node(-20, 0), outputs=[canvas, json_output, selected_node_info, component_info])
+         move_right_btn.click(lambda: move_selected_node(20, 0), outputs=[canvas, json_output, selected_node_info, component_info])
+         move_up_btn.click(lambda: move_selected_node(0, -20), outputs=[canvas, json_output, selected_node_info, component_info])
+         move_down_btn.click(lambda: move_selected_node(0, 20), outputs=[canvas, json_output, selected_node_info, component_info])
+         move_fine_btn.click(lambda: move_selected_node(-5, 0), outputs=[canvas, json_output, selected_node_info, component_info])
+         move_coarse_btn.click(lambda: move_selected_node(-50, 0), outputs=[canvas, json_output, selected_node_info, component_info])
+ 
+         # Delete button
+         delete_selected_btn.click(
+             delete_selected_node,
+             outputs=[canvas, json_output, from_node, to_node, selected_node_info, component_info]
+         )
+ 
+         # Drag handler
+         def handle_node_drag(move_data):
+             try:
+                 data = json.loads(move_data)
+                 node_id = data.get('node_id')
+                 dx = data.get('dx', 0)
+                 dy = data.get('dy', 0)
+                 if node_id and node_id in workflow.nodes:
+                     workflow.select_node(node_id)
+                     workflow.move_selected_node(dx, dy)
+                     svg, wf_json, choices, selected_info, comp_info = get_full_state()
+                     return svg, wf_json, selected_info, comp_info
+             except Exception as e:
+                 print("Drag error:", e)
+             return workflow.render_svg(), "", "Drag completed", component_info.value
+ 
+         move_node_trigger.change(
+             handle_node_drag,
+             inputs=[move_node_trigger],
+             outputs=[canvas, json_output, selected_node_info, component_info]
+         )
+ 
+         # Connection handler
+         def connect_nodes(from_n, to_n):
+             if from_n and to_n and from_n != to_n:
+                 existing = [c for c in workflow.connections if c.from_node == from_n and c.to_node == to_n]
+                 if not existing:
+                     workflow.connections.append(Connection(from_node=from_n, to_node=to_n))
+                 svg, wf_json, choices, selected_info, comp_info = get_full_state()
+                 return svg, wf_json, selected_info
+             return workflow.render_svg(), "", selected_node_info.value
+ 
+         connect_btn.click(connect_nodes, inputs=[from_node, to_node], outputs=[canvas, json_output, selected_node_info])
+ 
+         # Example loading
+         def load_example_handler(example_name):
+             if example_name:
+                 workflow.load_example(example_name)
+                 svg, wf_json, choices, selected_info, comp_info = get_full_state()
+                 desc = EXAMPLE_WORKFLOWS[example_name]["description"]
+                 info_text = f"### {example_name}\n\n{desc}"
+                 return svg, wf_json, gr.Dropdown(choices=choices), gr.Dropdown(choices=choices), selected_info, info_text
+             return workflow.render_svg(), "", gr.Dropdown(choices=[]), gr.Dropdown(choices=[]), "No node selected", "Select a component to see its description"
+ 
+         load_example_btn.click(
+             load_example_handler,
+             inputs=[example_dropdown],
+             outputs=[canvas, json_output, from_node, to_node, selected_node_info, component_info]
+         )
+ 
+         # Unified download handler (returns list of files)
+         def download_all_files():
+             file_list = []
+             fid = str(uuid.uuid4())
+ 
+             # JSON (same full-component export as download_json below)
+             json_data = workflow.get_workflow_json()
+             json_path = tempfile.mktemp(suffix=f"_{fid}.json")
+             with open(json_path, "w", encoding="utf-8") as f:
+                 json.dump(json_data, f, indent=2)
+             file_list.append(json_path)
+ 
+             # SVG
+             svg_path = tempfile.mktemp(suffix=f"_{fid}.svg")
+             with open(svg_path, "w", encoding="utf-8") as f:
+                 f.write(workflow.render_svg())
+             file_list.append(svg_path)
+ 
+             return file_list
+ 
+         # Download JSON only
+         def download_json():
+             fid = str(uuid.uuid4())
+             # Use the method that includes full component data
+             json_data = workflow.get_workflow_json()
+             json_path = tempfile.mktemp(suffix=f"_{fid}.json")
+             with open(json_path, "w", encoding="utf-8") as f:
+                 json.dump(json_data, f, indent=2)
+             return [json_path]
+ 
+         # Download SVG only
+         def download_svg():
+             fid = str(uuid.uuid4())
+             svg_content = workflow.render_svg()
+             svg_path = tempfile.mktemp(suffix=f"_{fid}.svg")
+             with open(svg_path, "w", encoding="utf-8") as f:
+                 f.write(svg_content)
+             return [svg_path]
+ 
+         # Report generation (sync wrapper)
+         def sync_generate_report():
+             workflow_data = {
+                 "nodes": [asdict(n) for n in workflow.nodes.values()],
+                 "connections": [asdict(c) for c in workflow.connections],
+                 "selected_node": workflow.selected_node
+             }
+             json_str = json.dumps(workflow_data, indent=2)
+             try:
+                 report = asyncio.run(reporter.generate_report(json_str))
+             except Exception as e:
+                 report = f"Failed to generate report: {e}"
+             return report
+ 
+         def download_report():
+             import datetime
+             report_text = sync_generate_report()
+             fid = str(uuid.uuid4())
+             txt_path = tempfile.mktemp(suffix=f"_report_{fid}.txt")
+             with open(txt_path, "w", encoding="utf-8") as f:
+                 f.write(f"Agentic Workflow Design Report\nGenerated on: {datetime.datetime.now()}\n\n")
+                 f.write(report_text)
+             return [txt_path]
+ 
+ 
+         # Attach handlers
+         download_json_btn.click(download_json, outputs=[download_files])
+         download_svg_btn.click(download_svg, outputs=[download_files])
+         report_btn.click(sync_generate_report, outputs=[report_output])
+         download_report_btn.click(download_report, outputs=[download_files])
+ 
+         # Clear handler
+         def clear_all():
+             workflow.nodes.clear()
+             workflow.connections.clear()
+             workflow.node_counter = 0
+             workflow.selected_node = None
+             svg = workflow.render_svg()
+             return (
+                 svg,
+                 "{}",
+                 gr.Dropdown(choices=[]),
+                 gr.Dropdown(choices=[]),
+                 "No node selected",
+                 "Canvas cleared. Ready to build!"
+             )
+ 
+         clear_btn.click(clear_all, outputs=[canvas, json_output, from_node, to_node, selected_node_info, component_info])
+ 
+         # Initialize with JavaScript support
+         def init_app():
+             svg = workflow.render_svg()
+             js_code = '''
+             <script>
+             window.gradio_api = function(type, data) {
+                 if (type === 'select_node') {
+                     const triggers = document.querySelectorAll('input[type="hidden"]');
+                     const selectTrigger = triggers[0];
+                     if (selectTrigger) {
+                         selectTrigger.value = data;
+                         selectTrigger.dispatchEvent(new Event('change'));
+                     }
+                 } else if (type === 'move_node') {
+                     const triggers = document.querySelectorAll('input[type="hidden"]');
+                     const moveTrigger = triggers[1];
+                     if (moveTrigger) {
+                         moveTrigger.value = JSON.stringify(data);
+                         moveTrigger.dispatchEvent(new Event('change'));
+                     }
+                 }
+             };
+             </script>
+             '''
+             return svg + js_code
+ 
+         demo.load(init_app, outputs=[canvas])
+ 
+     return demo
+ 
+ 
+ if __name__ == "__main__":
+     demo = create_workflow_ui()
+     demo.launch()