lvwerra (HF Staff) and Claude Opus 4.6 committed on
Commit a58e6a3 · 1 Parent(s): 2a5ead4

Rename notebooks→agents, Productive→AgentUI across full codebase


- Rename all notebook references to agent (~340 occurrences across 16 files)
- Rename Productive branding to AgentUI (title, welcome text, README)
- Add settings migration: notebooks→agents key, productive_settings→agentui_settings
- Add backward-compatible localStorage and workspace fallbacks
- Scope .agent-header and .agent-body CSS to avoid timeline/content collision
- Hide sessions folder from file tree context
- Add markdown content toggle to read_url widget
- Rename run_interactive_notebook→run_stateful_code
- Preserve JupyterNotebook class and .jupyter-notebook-container (actual Jupyter refs)

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
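The settings migration listed above can be sketched roughly as follows. This is a minimal sketch: the key names `notebooks`→`agents` and `productive_settings`→`agentui_settings` come from the commit message, but `migrate_settings` itself is a hypothetical helper, not the actual frontend code (which does the equivalent against localStorage).

```python
def migrate_settings(settings: dict) -> dict:
    """Backward-compatible key migration: read old keys if present,
    rewrite them under the new names, and never clobber new keys."""
    migrated = dict(settings)
    # Old "notebooks" key becomes "agents"; keep "agents" if already set.
    if "notebooks" in migrated and "agents" not in migrated:
        migrated["agents"] = migrated.pop("notebooks")
    # Old "productive_settings" blob becomes "agentui_settings".
    if "productive_settings" in migrated and "agentui_settings" not in migrated:
        migrated["agentui_settings"] = migrated.pop("productive_settings")
    return migrated
```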

backend/README.md CHANGED
@@ -1,12 +1,12 @@
-# Productive Backend
+# AgentUI Backend

-Minimal FastAPI backend for Productive with streaming support and action tokens.
+Minimal FastAPI backend for AgentUI with streaming support and action tokens.

 ## Features

 - **Streaming responses**: Tokens streamed in real-time via Server-Sent Events (SSE)
 - **OpenAI-compatible**: Works with any OpenAI API-compatible endpoint
-- **Action tokens**: LLM can suggest opening specialized notebooks via `[ACTION:TYPE]` tokens
+- **Action tokens**: LLM can suggest opening specialized agents via `[ACTION:TYPE]` tokens
 - **CORS enabled**: Ready for frontend connection

 ## Setup
@@ -63,7 +63,7 @@ Stream chat responses with optional action tokens.
   "messages": [
     {"role": "user", "content": "Help me analyze data"}
   ],
-  "notebook_type": "command",
+  "agent_type": "command",
   "stream": true
 }
 ```
@@ -79,17 +79,17 @@ data: {"type": "done"}

 ### Action Tokens

-The LLM can suggest opening specialized notebooks by including action tokens:
+The LLM can suggest opening specialized agents by including action tokens:

-- `[ACTION:AGENT]` - Open Agent notebook for multi-step tasks
-- `[ACTION:CODE]` - Open Code notebook for data analysis/coding
-- `[ACTION:RESEARCH]` - Open Research notebook for research tasks
-- `[ACTION:CHAT]` - Open Chat notebook for continued conversation
+- `[ACTION:AGENT]` - Open Agent agent for multi-step tasks
+- `[ACTION:CODE]` - Open Code agent for data analysis/coding
+- `[ACTION:RESEARCH]` - Open Research agent for research tasks
+- `[ACTION:CHAT]` - Open Chat agent for continued conversation

 The frontend automatically:
 1. Detects action tokens in the response
 2. Removes them from display
-3. Opens the appropriate notebook
+3. Opens the appropriate agent

 ### `GET /health`

@@ -113,7 +113,7 @@ curl -X POST http://localhost:8000/api/chat/stream \
   -H "Content-Type: application/json" \
   -d '{
     "messages": [{"role": "user", "content": "Hello"}],
-    "notebook_type": "command"
+    "agent_type": "command"
   }'
 ```

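The SSE payloads documented in the README (`data: {...}` lines, ending with `data: {"type": "done"}`) can be consumed with a small line parser. A minimal sketch, assuming the event shape shown above; `parse_sse_line` is a hypothetical client helper, not part of the backend:

```python
import json

def parse_sse_line(line: str):
    """Parse one SSE line of the form 'data: {...}' into a dict.
    Returns None for comments, keepalives, and blank lines."""
    if not line.startswith("data: "):
        return None
    return json.loads(line[len("data: "):])
```

A client would call this on each line of the streaming response body and stop once it sees `{"type": "done"}`.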
backend/__init__.py CHANGED
@@ -1 +1 @@
-# Productive backend package
+# AgentUI backend package
backend/agent.py CHANGED
@@ -1,5 +1,5 @@
 """
-Agent notebook backend - autonomous agent with web tools (search, read, screenshot).
+Web agent backend - autonomous agent with web tools (search, read, screenshot).

 Uses the same tool-calling loop pattern as code.py:
 LLM call → parse tool_calls → execute → update history → repeat
@@ -66,7 +66,7 @@ def execute_tool(tool_name: str, args: dict, serper_key: str) -> dict:
         content = execute_read_url(url)
         return {
             "content": content,
-            "display": {"type": "page", "url": url, "length": len(content)}
+            "display": {"type": "page", "url": url, "length": len(content), "markdown": content}
         }

     elif tool_name == "screenshot_url":
backend/agents.py CHANGED
@@ -42,26 +42,36 @@ AGENT_REGISTRY = {
     "command": {
         "label": "TASKS",
         "system_prompt": (
-            "You are a helpful AI assistant in the Productive interface command center.\n\n"
+            "You are a helpful AI assistant in the AgentUI command center.\n\n"
             "{tools_section}\n\n"
-            "When a user asks you to perform a task that would benefit from a specialized notebook, you can:\n"
-            "1. Briefly acknowledge the request\n"
-            "2. Use the appropriate tool to launch a notebook with the task\n\n"
-            "You can also answer questions directly without launching a notebook if appropriate.\n\n"
+            "## Planning\n\n"
+            "For multi-step tasks, briefly outline your plan before launching agents. For example:\n"
+            "\"I'll break this into 3 steps: 1) Research X, 2) Write code for Y, 3) Generate image for Z.\"\n"
+            "Then launch the independent steps in parallel. Keep the plan SHORT (1-3 lines max).\n"
+            "For simple single-step tasks, skip the plan and just launch the agent directly.\n\n"
+            "## Routing\n\n"
+            "When a user asks you to perform a task that would benefit from a specialized agent, you can:\n"
+            "1. Briefly acknowledge the request (and plan if multi-step)\n"
+            "2. Use the appropriate tool(s) to launch agent(s) with the task\n\n"
+            "You can also answer questions directly without launching an agent if appropriate.\n\n"
             "Examples:\n"
             '- User: "Can you help me analyze this CSV file?"\n'
-            "  You: Use launch_code_notebook tool with the task\n\n"
+            "  You: Use launch_code_agent tool with the task\n\n"
             '- User: "Research the latest developments in AI"\n'
-            "  You: Use launch_research_notebook tool with the topic\n\n"
+            "  You: Use launch_research_agent tool with the topic\n\n"
             '- User: "What was the result of the research in 2 sentences?"\n'
             "  You: Summarize the research results without using tools\n\n"
-            "Be concise and helpful. Don't duplicate effort - either answer directly OR launch a notebook, not both. "
-            "Answer questions about results directly without launching new notebooks.\n\n"
-            "IMPORTANT guidelines when delegating to notebooks:\n"
-            "- Do NOT ask notebooks to save or create files unless the user explicitly requests it or implicitly necessary to solve a task.\n"
+            "Be concise and helpful. Don't duplicate effort - either answer directly OR launch an agent, not both. "
+            "Answer questions about results directly without launching new agents.\n\n"
+            "IMPORTANT: When a task can be split into independent sub-tasks, launch multiple agents IN PARALLEL "
+            "by making multiple tool calls in a single response. This saves significant time.\n"
+            "For example, if a user asks 'Research topic A and also write code for B', launch both a research agent "
+            "and a code agent simultaneously rather than sequentially.\n\n"
+            "IMPORTANT guidelines when delegating to agents:\n"
+            "- Do NOT ask agents to save or create files unless the user explicitly requests it or implicitly necessary to solve a task.\n"
             "- NEVER overwrite existing files without explicit user permission.\n"
-            "- Each notebook has a task_id. If a a new task is clearly related to a existing notebook "
-            "re-use the task id to reuse the notebook. This will reuse the existing context and also the jupyter kernel for code notebooks."
+            "- Each agent has a task_id. If a new task is clearly related to an existing agent "
+            "re-use the task id to reuse the agent. This will reuse the existing context and also the jupyter kernel for code agents."
         ),
         "tool": None,
         "tool_arg": None,
@@ -105,8 +115,8 @@ AGENT_REGISTRY = {
         "tool": {
             "type": "function",
             "function": {
-                "name": "launch_agent_notebook",
-                "description": "Launch an autonomous agent notebook for multi-step tasks that need planning and execution. Use this for complex workflows, task organization, or anything requiring multiple coordinated steps.",
+                "name": "launch_web_agent",
+                "description": "Launch an autonomous web agent for multi-step tasks that need planning and execution. Use this for complex workflows, task organization, or anything requiring multiple coordinated steps.",
                 "parameters": {
                     "type": "object",
                     "properties": {
@@ -128,6 +138,7 @@ AGENT_REGISTRY = {
         "in_menu": True,
         "in_launcher": True,
         "placeholder": "Enter message...",
+        "capabilities": "Has tools: web_search(query), read_url(url), screenshot_url(url). Can browse the web, read pages, and take screenshots.",
     },

     "code": {
@@ -174,8 +185,8 @@ AGENT_REGISTRY = {
         "tool": {
             "type": "function",
             "function": {
-                "name": "launch_code_notebook",
-                "description": "Launch a code notebook with Python execution environment. Use this for data analysis, creating visualizations, running code, debugging, or anything involving programming.",
+                "name": "launch_code_agent",
+                "description": "Launch a code agent with Python execution environment. Use this for data analysis, creating visualizations, running code, debugging, or anything involving programming.",
                 "parameters": {
                     "type": "object",
                     "properties": {
@@ -197,6 +208,7 @@ AGENT_REGISTRY = {
         "in_menu": True,
         "in_launcher": True,
         "placeholder": "Enter message...",
+        "capabilities": "Has tools: execute_code(Python), upload_files, download_files. Runs code in a Jupyter sandbox with pandas, numpy, matplotlib, etc.",
     },

     "research": {
@@ -229,8 +241,8 @@ AGENT_REGISTRY = {
         "tool": {
             "type": "function",
             "function": {
-                "name": "launch_research_notebook",
-                "description": "Launch a research notebook for deep analysis requiring web search. Use this for researching topics, gathering information from multiple sources, or analyzing current information.",
+                "name": "launch_research_agent",
+                "description": "Launch a research agent for deep analysis requiring web search. Use this for researching topics, gathering information from multiple sources, or analyzing current information.",
                 "parameters": {
                     "type": "object",
                     "properties": {
@@ -252,6 +264,7 @@ AGENT_REGISTRY = {
         "in_menu": True,
         "in_launcher": True,
         "placeholder": "Enter message...",
+        "capabilities": "Deep web research with parallel sub-agents. Searches multiple queries, analyzes many websites concurrently, and synthesizes findings.",
     },

     "chat": {
@@ -268,8 +281,8 @@ AGENT_REGISTRY = {
         "tool": {
             "type": "function",
             "function": {
-                "name": "launch_chat_notebook",
-                "description": "Launch a conversational chat notebook for extended back-and-forth discussion. Use this when the user wants to continue a conversation in a dedicated space.",
+                "name": "launch_chat_agent",
+                "description": "Launch a conversational chat agent for extended back-and-forth discussion. Use this when the user wants to continue a conversation in a dedicated space.",
                 "parameters": {
                     "type": "object",
                     "properties": {
@@ -291,6 +304,7 @@ AGENT_REGISTRY = {
         "in_menu": True,
         "in_launcher": True,
         "placeholder": "Enter message...",
+        "capabilities": "No tools. Pure LLM conversation for discussion, brainstorming, or Q&A.",
     },

     "image": {
@@ -327,8 +341,8 @@ AGENT_REGISTRY = {
         "tool": {
             "type": "function",
             "function": {
-                "name": "launch_image_notebook",
-                "description": "Launch an image notebook for generating or editing images using AI models. Use this for creating images from text, applying style transfers, editing photos, or any visual content creation.",
+                "name": "launch_image_agent",
+                "description": "Launch an image agent for generating or editing images using AI models. Use this for creating images from text, applying style transfers, editing photos, or transforming existing images (e.g., 'make this photo look like a comic'). Accepts image URLs as input.",
                 "parameters": {
                     "type": "object",
                     "properties": {
@@ -350,6 +364,7 @@ AGENT_REGISTRY = {
         "in_menu": True,
         "in_launcher": True,
         "placeholder": "Describe an image or paste a URL...",
+        "capabilities": "Has tools: generate_image(prompt), edit_image(prompt, source), read_image_url(url). Can generate, edit, and load images via HuggingFace models.",
     },
 }

@@ -371,7 +386,7 @@ def get_system_prompt(agent_key: str) -> str:


 def get_tools() -> list:
-    """Get tool definitions for the command center (replaces TOOLS in command.py)."""
+    """Get tool definitions for the command center."""
     return [
         agent["tool"]
         for agent in AGENT_REGISTRY.values()
@@ -379,8 +394,8 @@ def get_tools() -> list:
     ]


-def get_notebook_type_map() -> dict:
-    """Map tool function names to agent keys (replaces notebook_type_map in command.py)."""
+def get_agent_type_map() -> dict:
+    """Map tool function names to agent keys."""
     result = {}
     for key, agent in AGENT_REGISTRY.items():
         if agent["tool"] is not None:
@@ -396,7 +411,7 @@ def get_tool_arg(agent_key: str) -> str:


 def get_default_counters() -> dict:
-    """Get default notebook counters (replaces hardcoded dict in get_default_workspace)."""
+    """Get default agent counters."""
     return {
         key: 0
         for key, agent in AGENT_REGISTRY.items()
@@ -421,9 +436,12 @@ def get_registry_for_frontend() -> list:

 def _build_tools_section() -> str:
     """Build the 'available tools' text for the command center system prompt."""
-    lines = ["You have access to tools that can launch specialized notebooks for different types of tasks:"]
+    lines = ["## Available Agents\n\nYou can launch specialized agents for different types of tasks:"]
     for key, agent in AGENT_REGISTRY.items():
         if agent["tool"] is not None:
             tool_func = agent["tool"]["function"]
-            lines.append(f"- {tool_func['name']}: {tool_func['description']}")
+            capabilities = agent.get("capabilities", "")
+            lines.append(f"- **{tool_func['name']}**: {tool_func['description']}")
+            if capabilities:
+                lines.append(f"  {capabilities}")
     return "\n".join(lines)
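The new `_build_tools_section` can be exercised against a toy registry. A standalone sketch: `MINI_REGISTRY` is illustrative and `build_tools_section` takes the registry as a parameter, whereas the real function reads the module-level `AGENT_REGISTRY`; the entry shape (`tool`, `capabilities`) follows the diff above.

```python
MINI_REGISTRY = {
    "command": {"tool": None},  # the command center itself is not launchable
    "code": {
        "tool": {"type": "function", "function": {
            "name": "launch_code_agent",
            "description": "Launch a code agent with Python execution environment.",
        }},
        "capabilities": "Has tools: execute_code(Python).",
    },
}

def build_tools_section(registry: dict) -> str:
    """Render the 'Available Agents' prompt section from a registry."""
    lines = ["## Available Agents\n\nYou can launch specialized agents for different types of tasks:"]
    for key, agent in registry.items():
        if agent["tool"] is not None:
            tool_func = agent["tool"]["function"]
            capabilities = agent.get("capabilities", "")
            lines.append(f"- **{tool_func['name']}**: {tool_func['description']}")
            if capabilities:
                lines.append(f"  {capabilities}")
    return "\n".join(lines)
```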
backend/code.py CHANGED
@@ -1,5 +1,5 @@
 """
-Code notebook backend - handles code execution with E2B
+Code agent backend - handles code execution with E2B
 """
 import json
 import logging
backend/command.py CHANGED
@@ -1,5 +1,5 @@
 """
-Command center backend - handles tool-based notebook launching
+Command center backend - handles tool-based agent launching
 """
 import json
 import logging
@@ -10,7 +10,7 @@ from typing import List, Dict
 logger = logging.getLogger(__name__)

 # Tool definitions derived from agent registry
-from agents import get_tools, get_notebook_type_map, get_tool_arg
+from agents import get_tools, get_agent_type_map, get_tool_arg

 TOOLS = get_tools()

@@ -58,7 +58,7 @@ def parse_llm_error(error: Exception) -> dict:

 def stream_command_center(client, model: str, messages: List[Dict], extra_params: dict = None):
     """
-    Stream command center responses with notebook launching capabilities
+    Stream command center responses with agent launching capabilities

     Yields:
         dict: Updates with type 'thinking', 'launch', 'done', or 'error'
@@ -124,7 +124,7 @@ def stream_command_center(client, model: str, messages: List[Dict], extra_params
             if content.strip():
                 yield {"type": "thinking", "content": content}

-            # Handle tool calls (notebook launches)
+            # Handle tool calls (agent launches)
             if tool_calls:
                 for tool_call in tool_calls:
                     function_name = tool_call.function.name
@@ -136,26 +136,25 @@ def stream_command_center(client, model: str, messages: List[Dict], extra_params
                         yield {"type": "error", "content": "Failed to parse tool arguments"}
                         return

-                    # Map function names to notebook types (derived from registry)
-                    notebook_type_map = get_notebook_type_map()
-                    notebook_type = notebook_type_map.get(function_name)
+                    # Map function names to agent types (derived from registry)
+                    agent_type_map = get_agent_type_map()
+                    agent_type = agent_type_map.get(function_name)

-                    if notebook_type:
+                    if agent_type:
                         # Get the initial message using the registered arg name for this type
-                        initial_message = args.get(get_tool_arg(notebook_type)) or args.get("task") or args.get("message")
+                        initial_message = args.get(get_tool_arg(agent_type)) or args.get("task") or args.get("message")
                         task_id = args.get("task_id", "")

                         # Send launch action to frontend
                         yield {
                             "type": "launch",
-                            "notebook_type": notebook_type,
+                            "agent_type": agent_type,
                             "initial_message": initial_message,
                             "task_id": task_id,
                             "tool_call_id": tool_call.id
                         }

                         # Add tool call to message history for context
-                        # (but we don't continue the conversation after launching)
                         messages.append({
                             "role": "assistant",
                             "content": content,
@@ -172,10 +171,10 @@
                         messages.append({
                             "role": "tool",
                             "tool_call_id": tool_call.id,
-                            "content": f"Launched {notebook_type} notebook with task: {initial_message}"
+                            "content": f"Launched {agent_type} agent with task: {initial_message}"
                         })

-                        # Notebook launched - we're done
+                        # Agent launched - we're done
                         done = True
                     else:
                         yield {"type": "error", "content": f"Unknown tool: {function_name}"}
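The registry-derived name mapping used above can be sketched against a toy registry. This is illustrative only: `TOY_REGISTRY` and the explicit `registry` parameter are assumptions for the sketch, while the real `get_agent_type_map` in agents.py reads the module-level `AGENT_REGISTRY`.

```python
TOY_REGISTRY = {
    "command": {"tool": None},  # the command center launches, it is not launched
    "code": {"tool": {"type": "function", "function": {"name": "launch_code_agent"}}},
    "research": {"tool": {"type": "function", "function": {"name": "launch_research_agent"}}},
}

def get_agent_type_map(registry: dict) -> dict:
    """Invert the registry: tool function name -> agent key."""
    result = {}
    for key, agent in registry.items():
        if agent["tool"] is not None:
            result[agent["tool"]["function"]["name"]] = key
    return result
```

The command center uses this map to turn an LLM tool call like `launch_code_agent(...)` into a `{"type": "launch", "agent_type": "code", ...}` event for the frontend.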
backend/image.py CHANGED
@@ -1,5 +1,5 @@
 """
-Image notebook backend — multimodal agent with HuggingFace image generation tools.
+Image agent backend — multimodal agent with HuggingFace image generation tools.

 Uses the same tool-calling loop pattern as agent.py:
 LLM call → parse tool_calls → execute → update history → repeat
@@ -221,6 +221,7 @@ def stream_image_execution(
     done = False
     image_store = {}
     image_counter = 0
+    result_sent = False

     while not done and turns < MAX_TURNS:
         turns += 1
@@ -368,9 +369,18 @@ def stream_image_execution(
         # Send result if found
         if result_content:
             yield {"type": "result", "content": result_content, "images": image_store}
+            result_sent = True

         # Signal between-turn processing
         if not done:
             yield {"type": "generating"}

+    # Fallback: if VLM never produced a <result> tag, synthesize one with all images
+    if not result_sent and image_store:
+        fallback_parts = []
+        for name in image_store:
+            fallback_parts.append(f"<{name}>")
+        fallback_content = "\n\n".join(fallback_parts)
+        yield {"type": "result", "content": fallback_content, "images": image_store}
+
     yield {"type": "done"}
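The fallback-result logic added to image.py reduces to a small string builder. A sketch for clarity: `synthesize_fallback_result` is a hypothetical extraction of the inline loop, assuming `image_store` maps placeholder names to image data as in the diff.

```python
def synthesize_fallback_result(image_store: dict) -> str:
    """Build the fallback result content: one <name> placeholder tag per
    stored image, separated by blank lines (insertion order is preserved)."""
    return "\n\n".join(f"<{name}>" for name in image_store)
```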
backend/main.py CHANGED
@@ -46,7 +46,7 @@ logging.getLogger("e2b").setLevel(logging.WARNING)
 logging.getLogger("e2b.api").setLevel(logging.WARNING)
 logging.getLogger("httpx").setLevel(logging.WARNING)

-app = FastAPI(title="Productive API")


 # ============================================
@@ -205,12 +205,12 @@ class Message(BaseModel):
 class FrontendContext(BaseModel):
     """Dynamic context from the frontend that can affect system prompts"""
     theme: Optional[Dict] = None  # Current theme colors {name, accent, bg, etc.}
-    open_notebooks: Optional[List[str]] = None  # List of open notebook types/names


 class ChatRequest(BaseModel):
     messages: List[Message]
-    notebook_type: str = "command"
     stream: bool = True
     endpoint: str  # User's configured LLM endpoint
     token: Optional[str] = None  # Optional auth token
@@ -227,7 +227,7 @@ class ChatRequest(BaseModel):
     research_sub_agent_extra_params: Optional[Dict] = None  # Extra params for research sub-agent
     research_parallel_workers: Optional[int] = None  # Number of parallel workers for research
     research_max_websites: Optional[int] = None  # Max websites to analyze per research session
-    notebook_id: Optional[str] = None  # Unique notebook/tab ID for session management
     frontend_context: Optional[FrontendContext] = None  # Dynamic context from frontend


@@ -247,7 +247,7 @@ class SandboxStopRequest(BaseModel):
     session_id: str


-async def stream_code_notebook(
     messages: List[dict],
     endpoint: str,
     token: Optional[str],
@@ -258,7 +258,7 @@ async def stream_code_notebook(
     frontend_context: Optional[Dict] = None,
     extra_params: Optional[Dict] = None
 ):
-    """Handle code notebook with execution capabilities"""

     if not E2B_AVAILABLE:
         yield f"data: {json.dumps({'type': 'error', 'content': 'E2B not available. Install with: pip install e2b-code-interpreter'})}\n\n"
@@ -279,7 +279,7 @@
     # Create OpenAI client with user's endpoint
     client = OpenAI(base_url=endpoint, api_key=token)

-    # Add system prompt for code notebook (with file tree and styling context)
     system_prompt = get_system_prompt("code", frontend_context)
     full_messages = [
         {"role": "system", "content": system_prompt}
@@ -365,7 +365,7 @@
         yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


-async def stream_research_notebook(
     messages: List[dict],
     endpoint: str,
     token: Optional[str],
@@ -380,7 +380,7 @@ async def stream_research_notebook(
     extra_params: Optional[Dict] = None,
     sub_agent_extra_params: Optional[Dict] = None
 ):
-    """Handle research notebook with web search"""

     if not RESEARCH_AVAILABLE:
         yield f"data: {json.dumps({'type': 'error', 'content': 'Research dependencies not available. Install with: pip install trafilatura requests'})}\n\n"
@@ -453,7 +453,7 @@
         yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


-async def stream_command_center_notebook(
     messages: List[dict],
     endpoint: str,
     token: Optional[str],
@@ -461,7 +461,7 @@
     tab_id: str = "0",
     extra_params: Optional[Dict] = None
 ):
-    """Handle command center with tool-based notebook launching"""

     if not COMMAND_AVAILABLE:
         # Fallback to regular chat if command tools not available
@@ -514,7 +514,7 @@
         yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


-async def stream_agent_notebook(
     messages: List[dict],
     endpoint: str,
     token: Optional[str],
@@ -523,7 +523,7 @@
     tab_id: str = "default",
     extra_params: Optional[Dict] = None
 ):
-    """Handle agent notebook with web tools (search, read, screenshot)"""

     if not AGENT_AVAILABLE:
         async for chunk in stream_chat_response(messages, endpoint, token, model, "agent", tab_id, extra_params):
@@ -565,7 +565,7 @@
         yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


-async def stream_image_notebook(
     messages: List[dict],
     endpoint: str,
     token: Optional[str],
@@ -576,7 +576,7 @@
     tab_id: str = "default",
     extra_params: Optional[Dict] = None
 ):
-    """Handle image notebook with HuggingFace image generation tools"""

     if not IMAGE_AVAILABLE:
         yield f"data: {json.dumps({'type': 'error', 'content': 'Image agent not available. Install with: pip install huggingface_hub Pillow'})}\n\n"
@@ -626,7 +626,7 @@ async def stream_chat_response(
     endpoint: str,
     token: Optional[str],
     model: str,
-    notebook_type: str,
     tab_id: str = "default",
     extra_params: Optional[Dict] = None
 ):
@@ -635,8 +635,8 @@
     try:
         logger.info(f"Stream request: endpoint={endpoint}, model={model}, messages={len(messages)}, token={'yes' if token else 'no'}")

-        # Prepare messages with appropriate system prompt based on notebook type (with file tree)
-        system_prompt = get_system_prompt(notebook_type)
         full_messages = [
             {"role": "system", "content": system_prompt}
         ] + messages
@@ -730,7 +730,7 @@
 @app.get("/api/info")
 async def api_info():
     return {
-        "message": "Productive API - LLM Proxy Server",
         "version": "1.0.0",
         "endpoints": {
             "/api/chat/stream": "POST - Proxy streaming chat to user's LLM endpoint"
@@ -794,7 +794,7 @@
 async def chat_stream(request: ChatRequest):
     """Proxy streaming chat to user's configured LLM endpoint"""

-    logger.debug(f"Chat stream request: notebook_type={request.notebook_type}")

     if not request.messages:
         raise HTTPException(status_code=400, detail="Messages are required")
@@ -806,7 +806,7 @@ async def chat_stream(request: ChatRequest):
     messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

     # Get tab_id for debugging
-    tab_id = request.notebook_id or "0"

     # Convert frontend_context to dict if provided
     frontend_context = request.frontend_context.model_dump() if request.frontend_context else None
@@ -820,13 +820,13 @@
     if not hf_token:
         hf_token = token

-    # Route to code execution handler for code notebooks
-    if request.notebook_type == "code":
-        # Use notebook_id as session key, fallback to "default" if not provided
-        session_id = request.notebook_id or "default"

         return StreamingResponse(
-            stream_code_notebook(
                 messages,
                 request.endpoint,
                 token,
@@ -845,14 +845,14 @@
             }
         )

-    # Route to research handler for research notebooks
-    if request.notebook_type == "research":
         # Use sub-agent endpoint/token if provided, otherwise fall back to main
         sub_agent_endpoint = request.research_sub_agent_endpoint or request.endpoint
         sub_agent_token = request.research_sub_agent_token if request.research_sub_agent_endpoint else token

         return StreamingResponse(
-            stream_research_notebook(
                 messages,
                 request.endpoint,
                 token,
@@ -876,9 +876,9 @@
     )

     # Route to image handler with HuggingFace tools
-    if request.notebook_type == "image":
         return StreamingResponse(
-            stream_image_notebook(
                 messages,
                 request.endpoint,
                 token,
@@ -898,9 +898,9 @@
     )

     # Route to agent handler with web tools
-    if request.notebook_type == "agent":
         return StreamingResponse(
-            stream_agent_notebook(
                 messages,
                 request.endpoint,
                 token,
@@ -918,9 +918,9 @@
     )

     # Route to command center handler for command center (with tool-based launching)
-    if request.notebook_type == "command":
         return StreamingResponse(
-            stream_command_center_notebook(
                 messages,
                 request.endpoint,
                 token,
@@ -936,14 +936,14 @@
         }
     )

-    # Regular chat for other notebook types (agent, chat)
     return StreamingResponse(
         stream_chat_response(
             messages,
             request.endpoint,
             token,
             request.model or "gpt-4",
-            request.notebook_type,
             tab_id,
             request.extra_params
         ),
@@ -958,7 +958,7 @@

 @app.post("/api/sandbox/start")
 async def start_sandbox(request: SandboxRequest):
-    """Start a sandbox for a code notebook session"""
     session_id = request.session_id
     e2b_key = request.e2b_key

@@ -999,7 +999,7 @@

 @app.post("/api/sandbox/stop")
 async def stop_sandbox(request: SandboxStopRequest):
-    """Stop a sandbox for a code notebook session"""
     session_id = request.session_id

     if not session_id:
@@ -1018,7 +1018,7 @@ async def stop_sandbox(request: SandboxStopRequest):
1018
 
1019
  @app.post("/api/conversation/add-tool-response")
1020
  async def add_tool_response(request: dict):
1021
- """Add a tool response to the conversation history when a notebook returns a result"""
1022
  global CONVERSATION_HISTORY
1023
 
1024
  tab_id = request.get("tab_id", "0")
@@ -1056,12 +1056,19 @@ async def health():
1056
  return {"status": "healthy"}
1057
 
1058
 
1059
- # File paths - use ~/.config/productive/ by default (cross-platform standard)
 
1060
  # These can be overridden via command-line arguments or set_*_file functions
1061
  def get_default_config_dir():
1062
- """Get the default config directory (~/.config/productive/)"""
1063
  config_home = os.environ.get("XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config"))
1064
- return os.path.join(config_home, "productive")
 
 
 
 
 
 
1065
 
1066
  CONFIG_DIR = get_default_config_dir()
1067
  os.makedirs(CONFIG_DIR, exist_ok=True)
@@ -1077,7 +1084,7 @@ FILES_EXCLUDE = {
1077
  'node_modules', '__pycache__', '.git', '.pytest_cache',
1078
  'env', 'venv', 'env312', '.venv', 'dist', 'build',
1079
  '.egg-info', '.tox', '.coverage', 'htmlcov',
1080
- 'test-results', 'playwright-report'
1081
  }
1082
 
1083
  def set_settings_file(path: str):
@@ -1110,6 +1117,9 @@ async def get_settings():
1110
  if os.path.exists(SETTINGS_FILE):
1111
  with open(SETTINGS_FILE, "r") as f:
1112
  settings = json.load(f)
 
 
 
1113
  settings["_settingsPath"] = SETTINGS_FILE
1114
  return settings
1115
  else:
@@ -1329,7 +1339,7 @@ def get_default_workspace():
1329
  "version": 1,
1330
  "tabCounter": 1,
1331
  "activeTabId": 0,
1332
- "notebookCounters": get_default_counters(),
1333
  "tabs": [
1334
  {
1335
  "id": 0,
@@ -1454,7 +1464,7 @@ def get_file_tree_for_prompt() -> str:
1454
 
1455
 
1456
  def get_styling_context(theme: Optional[Dict] = None) -> str:
1457
- """Generate styling guidance for code notebooks based on current theme"""
1458
  # App style description
1459
  style_desc = """## Visual Style Guidelines
1460
  The application has a minimalist, technical aesthetic with clean lines and muted colors. When generating plots or visualizations:
@@ -1484,17 +1494,17 @@ Current theme: {name}
1484
  return style_desc
1485
 
1486
 
1487
- def get_system_prompt(notebook_type: str, frontend_context: Optional[Dict] = None) -> str:
1488
- """Get system prompt for a notebook type with dynamic context appended"""
1489
  from agents import get_system_prompt as _get_agent_prompt
1490
- base_prompt = _get_agent_prompt(notebook_type) or _get_agent_prompt("command")
1491
  file_tree = get_file_tree_for_prompt()
1492
 
1493
  # Build the full prompt with context sections
1494
  sections = [base_prompt, f"## Project Files\n{file_tree}"]
1495
 
1496
- # Add styling context for code notebooks
1497
- if notebook_type == "code" and frontend_context:
1498
  theme = frontend_context.get('theme') if frontend_context else None
1499
  styling = get_styling_context(theme)
1500
  sections.append(styling)
@@ -1542,7 +1552,7 @@ def start():
1542
  import threading
1543
  import uvicorn
1544
 
1545
- parser = argparse.ArgumentParser(description="Productive API Server")
1546
  parser.add_argument("--clean", action="store_true", help="Clear workspace at startup")
1547
  parser.add_argument("--port", type=int, default=8765, help="Port to run the server on (default: 8765)")
1548
  parser.add_argument("--no-browser", action="store_true", help="Don't open browser automatically")
@@ -1565,7 +1575,7 @@ def start():
1565
  os.makedirs(SESSIONS_ROOT, exist_ok=True)
1566
 
1567
  url = f"http://localhost:{args.port}"
1568
- logger.info(f"Starting Productive server...")
1569
  logger.info(f"Config directory: {CONFIG_DIR}")
1570
  logger.info(f"Sessions directory: {SESSIONS_ROOT}")
1571
  logger.info(f"Opening {url} in your browser...")
 
  logging.getLogger("e2b.api").setLevel(logging.WARNING)
  logging.getLogger("httpx").setLevel(logging.WARNING)

+ app = FastAPI(title="AgentUI API")


  # ============================================

  class FrontendContext(BaseModel):
  """Dynamic context from the frontend that can affect system prompts"""
  theme: Optional[Dict] = None # Current theme colors {name, accent, bg, etc.}
+ open_agents: Optional[List[str]] = None # List of open agent types/names


  class ChatRequest(BaseModel):
  messages: List[Message]
+ agent_type: str = "command"
  stream: bool = True
  endpoint: str # User's configured LLM endpoint
  token: Optional[str] = None # Optional auth token

  research_sub_agent_extra_params: Optional[Dict] = None # Extra params for research sub-agent
  research_parallel_workers: Optional[int] = None # Number of parallel workers for research
  research_max_websites: Optional[int] = None # Max websites to analyze per research session
+ agent_id: Optional[str] = None # Unique agent/tab ID for session management
  frontend_context: Optional[FrontendContext] = None # Dynamic context from frontend


  session_id: str


+ async def stream_code_agent(
  messages: List[dict],
  endpoint: str,
  token: Optional[str],

  frontend_context: Optional[Dict] = None,
  extra_params: Optional[Dict] = None
  ):
+ """Handle code agent with execution capabilities"""

  if not E2B_AVAILABLE:
  yield f"data: {json.dumps({'type': 'error', 'content': 'E2B not available. Install with: pip install e2b-code-interpreter'})}\n\n"

  # Create OpenAI client with user's endpoint
  client = OpenAI(base_url=endpoint, api_key=token)

+ # Add system prompt for code agent (with file tree and styling context)
  system_prompt = get_system_prompt("code", frontend_context)
  full_messages = [
  {"role": "system", "content": system_prompt}

  yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


+ async def stream_research_agent(
  messages: List[dict],
  endpoint: str,
  token: Optional[str],

  extra_params: Optional[Dict] = None,
  sub_agent_extra_params: Optional[Dict] = None
  ):
+ """Handle research agent with web search"""

  if not RESEARCH_AVAILABLE:
  yield f"data: {json.dumps({'type': 'error', 'content': 'Research dependencies not available. Install with: pip install trafilatura requests'})}\n\n"

  yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


+ async def stream_command_center_handler(
  messages: List[dict],
  endpoint: str,
  token: Optional[str],

  tab_id: str = "0",
  extra_params: Optional[Dict] = None
  ):
+ """Handle command center with tool-based agent launching"""

  if not COMMAND_AVAILABLE:
  # Fallback to regular chat if command tools not available

  yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


+ async def stream_web_agent(
  messages: List[dict],
  endpoint: str,
  token: Optional[str],

  tab_id: str = "default",
  extra_params: Optional[Dict] = None
  ):
+ """Handle web agent with tools (search, read, screenshot)"""

  if not AGENT_AVAILABLE:
  async for chunk in stream_chat_response(messages, endpoint, token, model, "agent", tab_id, extra_params):

  yield f"data: {json.dumps({'type': 'error', 'content': error_message})}\n\n"


+ async def stream_image_agent(
  messages: List[dict],
  endpoint: str,
  token: Optional[str],

  tab_id: str = "default",
  extra_params: Optional[Dict] = None
  ):
+ """Handle image agent with HuggingFace image generation tools"""

  if not IMAGE_AVAILABLE:
  yield f"data: {json.dumps({'type': 'error', 'content': 'Image agent not available. Install with: pip install huggingface_hub Pillow'})}\n\n"

  endpoint: str,
  token: Optional[str],
  model: str,
+ agent_type: str,
  tab_id: str = "default",
  extra_params: Optional[Dict] = None
  ):

  try:
  logger.info(f"Stream request: endpoint={endpoint}, model={model}, messages={len(messages)}, token={'yes' if token else 'no'}")

+ # Prepare messages with appropriate system prompt based on agent type (with file tree)
+ system_prompt = get_system_prompt(agent_type)
  full_messages = [
  {"role": "system", "content": system_prompt}
  ] + messages

  @app.get("/api/info")
  async def api_info():
  return {
+ "message": "AgentUI API - LLM Proxy Server",
  "version": "1.0.0",
  "endpoints": {
  "/api/chat/stream": "POST - Proxy streaming chat to user's LLM endpoint"

  async def chat_stream(request: ChatRequest):
  """Proxy streaming chat to user's configured LLM endpoint"""

+ logger.debug(f"Chat stream request: agent_type={request.agent_type}")

  if not request.messages:
  raise HTTPException(status_code=400, detail="Messages are required")

  messages = [{"role": msg.role, "content": msg.content} for msg in request.messages]

  # Get tab_id for debugging
+ tab_id = request.agent_id or "0"

  # Convert frontend_context to dict if provided
  frontend_context = request.frontend_context.model_dump() if request.frontend_context else None

  if not hf_token:
  hf_token = token

+ # Route to code execution handler
+ if request.agent_type == "code":
+ # Use agent_id as session key, fallback to "default" if not provided
+ session_id = request.agent_id or "default"

  return StreamingResponse(
+ stream_code_agent(
  messages,
  request.endpoint,
  token,

  }
  )

+ # Route to research handler
+ if request.agent_type == "research":
  # Use sub-agent endpoint/token if provided, otherwise fall back to main
  sub_agent_endpoint = request.research_sub_agent_endpoint or request.endpoint
  sub_agent_token = request.research_sub_agent_token if request.research_sub_agent_endpoint else token

  return StreamingResponse(
+ stream_research_agent(
  messages,
  request.endpoint,
  token,

  )

  # Route to image handler with HuggingFace tools
+ if request.agent_type == "image":
  return StreamingResponse(
+ stream_image_agent(
  messages,
  request.endpoint,
  token,

  )

  # Route to agent handler with web tools
+ if request.agent_type == "agent":
  return StreamingResponse(
+ stream_web_agent(
  messages,
  request.endpoint,
  token,

  )

  # Route to command center handler for command center (with tool-based launching)
+ if request.agent_type == "command":
  return StreamingResponse(
+ stream_command_center_handler(
  messages,
  request.endpoint,
  token,

  }
  )

+ # Regular chat for other agent types
  return StreamingResponse(
  stream_chat_response(
  messages,
  request.endpoint,
  token,
  request.model or "gpt-4",
+ request.agent_type,
  tab_id,
  request.extra_params
  ),


  @app.post("/api/sandbox/start")
  async def start_sandbox(request: SandboxRequest):
+ """Start a sandbox for a code agent session"""
  session_id = request.session_id
  e2b_key = request.e2b_key



  @app.post("/api/sandbox/stop")
  async def stop_sandbox(request: SandboxStopRequest):
+ """Stop a sandbox for a code agent session"""
  session_id = request.session_id

  if not session_id:


  @app.post("/api/conversation/add-tool-response")
  async def add_tool_response(request: dict):
+ """Add a tool response to the conversation history when an agent returns a result"""
  global CONVERSATION_HISTORY

  tab_id = request.get("tab_id", "0")

  return {"status": "healthy"}


+ # File paths - use ~/.config/agentui/ by default (cross-platform standard)
+ # Falls back to ~/.config/productive/ for backward compatibility
  # These can be overridden via command-line arguments or set_*_file functions
  def get_default_config_dir():
+ """Get the default config directory (~/.config/agentui/), with fallback to ~/.config/productive/"""
  config_home = os.environ.get("XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config"))
+ new_dir = os.path.join(config_home, "agentui")
+ old_dir = os.path.join(config_home, "productive")
+ # Use new dir if it exists, or if old dir doesn't exist (fresh install)
+ if os.path.exists(new_dir) or not os.path.exists(old_dir):
+ return new_dir
+ # Fall back to old dir for existing installations
+ return old_dir

  CONFIG_DIR = get_default_config_dir()
  os.makedirs(CONFIG_DIR, exist_ok=True)

  'node_modules', '__pycache__', '.git', '.pytest_cache',
  'env', 'venv', 'env312', '.venv', 'dist', 'build',
  '.egg-info', '.tox', '.coverage', 'htmlcov',
+ 'test-results', 'playwright-report', 'sessions'
  }

  def set_settings_file(path: str):

  if os.path.exists(SETTINGS_FILE):
  with open(SETTINGS_FILE, "r") as f:
  settings = json.load(f)
+ # Migrate old "notebooks" key to "agents"
+ if "notebooks" in settings and "agents" not in settings:
+ settings["agents"] = settings.pop("notebooks")
  settings["_settingsPath"] = SETTINGS_FILE
  return settings
  else:

  "version": 1,
  "tabCounter": 1,
  "activeTabId": 0,
+ "agentCounters": get_default_counters(),
  "tabs": [
  {
  "id": 0,



  def get_styling_context(theme: Optional[Dict] = None) -> str:
+ """Generate styling guidance for code agents based on current theme"""
  # App style description
  style_desc = """## Visual Style Guidelines
  The application has a minimalist, technical aesthetic with clean lines and muted colors. When generating plots or visualizations:

  return style_desc


+ def get_system_prompt(agent_type: str, frontend_context: Optional[Dict] = None) -> str:
+ """Get system prompt for an agent type with dynamic context appended"""
  from agents import get_system_prompt as _get_agent_prompt
+ base_prompt = _get_agent_prompt(agent_type) or _get_agent_prompt("command")
  file_tree = get_file_tree_for_prompt()

  # Build the full prompt with context sections
  sections = [base_prompt, f"## Project Files\n{file_tree}"]

+ # Add styling context for code agents
+ if agent_type == "code" and frontend_context:
  theme = frontend_context.get('theme') if frontend_context else None
  styling = get_styling_context(theme)
  sections.append(styling)

  import threading
  import uvicorn

+ parser = argparse.ArgumentParser(description="AgentUI API Server")
  parser.add_argument("--clean", action="store_true", help="Clear workspace at startup")
  parser.add_argument("--port", type=int, default=8765, help="Port to run the server on (default: 8765)")
  parser.add_argument("--no-browser", action="store_true", help="Don't open browser automatically")

  os.makedirs(SESSIONS_ROOT, exist_ok=True)

  url = f"http://localhost:{args.port}"
+ logger.info(f"Starting AgentUI server...")
  logger.info(f"Config directory: {CONFIG_DIR}")
  logger.info(f"Sessions directory: {SESSIONS_ROOT}")
  logger.info(f"Opening {url} in your browser...")
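The config-directory fallback added in the hunk above can be sketched in isolation. This is a minimal standalone reproduction of the hunk's logic for reference, not the module itself:

```python
import os


def get_default_config_dir() -> str:
    """Prefer ~/.config/agentui/; keep ~/.config/productive/ only when it
    is the sole existing directory (a pre-rename installation)."""
    config_home = os.environ.get(
        "XDG_CONFIG_HOME", os.path.join(os.path.expanduser("~"), ".config")
    )
    new_dir = os.path.join(config_home, "agentui")
    old_dir = os.path.join(config_home, "productive")
    # New dir wins if present, or on a fresh install where neither exists
    if os.path.exists(new_dir) or not os.path.exists(old_dir):
        return new_dir
    # Existing installation that was never migrated: keep the old path
    return old_dir
```

Note that once `~/.config/agentui/` is created, it permanently wins over a leftover `~/.config/productive/`, so the fallback only ever fires for untouched legacy installs.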
backend/research.py CHANGED
@@ -1,5 +1,5 @@
  """
- Research notebook backend using DR-TULU model - model-driven deep research

  DR-TULU drives the research loop - it decides when to search, what to search for,
  and when it has enough information to answer.
 
  """
+ Research agent backend using DR-TULU model - model-driven deep research

  DR-TULU drives the research loop - it decides when to search, what to search for,
  and when it has enough information to answer.
backend/tools.py CHANGED
@@ -25,7 +25,7 @@ logger = logging.getLogger(__name__)


  # ============================================================
- # Code execution tools (used by code notebook)
  # ============================================================

  execute_code = {
@@ -99,7 +99,7 @@ download_files = {


  # ============================================================
- # Web tools (used by agent notebook)
  # ============================================================

  web_search = {
@@ -271,7 +271,7 @@ def execute_screenshot_url(url: str) -> Optional[str]:


  # ============================================================
- # Image tools (used by image notebook)
  # ============================================================

  generate_image = {
 


  # ============================================================
+ # Code execution tools (used by code agent)
  # ============================================================

  execute_code = {


  # ============================================================
+ # Web tools (used by web agent)
  # ============================================================

  web_search = {


  # ============================================================
+ # Image tools (used by image agent)
  # ============================================================

  generate_image = {
backend/utils.py CHANGED
@@ -83,7 +83,7 @@ def clean_messages_for_api(messages):
  return cleaned_messages


- def run_interactive_notebook(client, model, messages, sbx, max_new_tokens=512):
  notebook = JupyterNotebook(messages)
  sbx_info = sbx.get_info()
  notebook.add_sandbox_countdown(sbx_info.started_at, sbx_info.end_at)
 
  return cleaned_messages


+ def run_stateful_code(client, model, messages, sbx, max_new_tokens=512):
  notebook = JupyterNotebook(messages)
  sbx_info = sbx.get_info()
  notebook.add_sandbox_countdown(sbx_info.started_at, sbx_info.end_at)
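For reference, the settings migration added to `get_settings()` in backend/app.py above boils down to a single guarded key rename. `migrate_settings` is a hypothetical helper name used here only to illustrate the hunk's logic:

```python
def migrate_settings(settings: dict) -> dict:
    """Rename the legacy "notebooks" settings key to "agents",
    but never overwrite an "agents" entry that already exists."""
    if "notebooks" in settings and "agents" not in settings:
        settings["agents"] = settings.pop("notebooks")
    return settings
```

The `"agents" not in settings` guard makes the migration idempotent: re-reading an already-migrated file is a no-op.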
frontend/index.html CHANGED
@@ -3,11 +3,11 @@
  <head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
- <title>Productive</title>
  <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@300;400;500;700&display=swap" rel="stylesheet">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/themes/prism.min.css">
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.css">
- <link rel="stylesheet" href="style.css?v=64">
  </head>
  <body>
  <div class="app-container">
@@ -33,8 +33,8 @@
  <!-- Tab Content Area -->
  <div class="content-area">
  <!-- Left Sidebar - Timeline Overview -->
- <div class="notebooks-sidebar" id="notebooksSidebar">
- <div class="sidebar-content" id="sidebarNotebooks">
  <!-- Timeline widgets will be rendered here -->
  </div>
  <div class="sidebar-legend">
@@ -52,10 +52,10 @@
  <div class="main-content">
  <!-- Task Center Tab -->
  <div class="tab-content active" data-content-id="0">
- <div class="notebook-interface">
- <div class="notebook-header">
  <div>
- <div class="notebook-type">TASK CENTER</div>
  <h2>Task Center</h2>
  </div>
  <div class="header-actions" id="launcherButtons">
@@ -64,10 +64,10 @@
  </div>
  </div>

- <div class="notebook-body">
  <div class="chat-container" id="messages-command">
  <div class="welcome-message" id="welcomeMessage">
- <p>Welcome to Productive — an AI interface with specialized notebooks.</p>

  <div class="session-selector" id="sessionSelector">
  <div class="session-selector-form">
@@ -92,7 +92,7 @@
  </div>

  <div class="welcome-explanation">
- <p>The assistant can automatically open specialized notebooks for different tasks:</p>
  <ul>
  <li><strong style="color: var(--theme-accent)">BASE</strong> — Basic tasks with search</li>
  <li><strong style="color: var(--theme-accent)">CODE</strong> — Programming and data analysis</li>
@@ -124,11 +124,11 @@
  <line x1="45" y1="130" x2="175" y2="130" stroke="#eee" stroke-width="1"/>
  <text x="110" y="150" text-anchor="middle" font-size="9" fill="#333">Report summary</text>

- <!-- Notebook Box -->
  <rect x="300" y="10" width="180" height="160" rx="4" fill="#fafafa" stroke="#e0e0e0" stroke-width="1"/>
- <text x="390" y="28" text-anchor="middle" font-size="11" font-weight="600" fill="var(--theme-accent)">NOTEBOOK</text>

- <!-- Query (top of notebook) -->
  <rect x="315" y="40" width="150" height="24" rx="3" fill="white" stroke="var(--theme-accent)" stroke-width="1"/>
  <text x="390" y="56" text-anchor="middle" font-size="9" fill="#333">Query</text>

@@ -140,7 +140,7 @@
  <rect x="315" y="108" width="150" height="14" rx="2" fill="#fbfbfb" stroke="#f0f0f0" stroke-width="1"/>
  <text x="390" y="118" text-anchor="middle" font-size="8" fill="#ccc">...</text>

- <!-- Report (bottom of notebook) - aligned with Task report area at y=145 -->
  <rect x="315" y="130" width="150" height="30" rx="3" fill="white" stroke="var(--theme-accent)" stroke-width="1"/>
  <text x="390" y="150" text-anchor="middle" font-size="9" fill="#333">Report</text>

@@ -152,7 +152,7 @@
  <line x1="315" y1="150" x2="193" y2="150" stroke="var(--theme-accent)" stroke-width="1.5" marker-end="url(#arrowhead)"/>
  </svg>
  </div>
- <p>When a notebook is opened, you'll see a widget you can click to jump to it. A pulsing dot on the tab indicates active generation.</p>
  </div>
  </div>
  </div>
@@ -200,13 +200,13 @@
  <button type="button" class="settings-add-btn" onclick="showModelDialog()">+ Add Model</button>
  </div>

- <!-- Notebook Model Selection -->
  <div class="settings-section">
  <label class="settings-label">
- <span class="label-text">NOTEBOOK MODELS</span>
- <span class="label-description">Select which model to use for each notebook type</span>
  </label>
- <div class="notebook-models-grid" id="notebookModelsGrid">
  <!-- Generated dynamically from AGENT_REGISTRY -->
  </div>
  </div>
@@ -215,7 +215,7 @@
  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">E2B API KEY (OPTIONAL)</span>
- <span class="label-description">Required for code execution in CODE notebooks</span>
  </label>
  <input type="password" id="setting-e2b-key" class="settings-input" placeholder="Leave empty if not using code execution">
  </div>
@@ -223,7 +223,7 @@
  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">SERPER API KEY (OPTIONAL)</span>
- <span class="label-description">Required for web search in RESEARCH notebooks</span>
  </label>
  <input type="password" id="setting-serper-key" class="settings-input" placeholder="Leave empty if not using research">
  </div>
@@ -231,7 +231,7 @@
  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">HUGGINGFACE TOKEN (OPTIONAL)</span>
- <span class="label-description">Required for image generation in IMAGE notebooks</span>
  </label>
  <input type="password" id="setting-hf-token" class="settings-input" placeholder="Leave empty to use provider token">
  </div>
@@ -240,7 +240,7 @@
  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">IMAGE GENERATION MODEL (OPTIONAL)</span>
- <span class="label-description">Model for text-to-image generation in IMAGE notebooks</span>
  </label>
  <select id="setting-image-gen-model" class="settings-select"></select>
  </div>
@@ -248,7 +248,7 @@
  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">IMAGE EDIT MODEL (OPTIONAL)</span>
- <span class="label-description">Model for image-to-image editing in IMAGE notebooks</span>
  </label>
  <select id="setting-image-edit-model" class="settings-select"></select>
  </div>
@@ -483,6 +483,6 @@
  <script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.js"></script>
  <script src="research-ui.js?v=23"></script>
- <script src="script.js?v=59"></script>
  </body>
  </html>
 
  <head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
+ <title>AgentUI</title>
  <link href="https://fonts.googleapis.com/css2?family=JetBrains+Mono:wght@300;400;500;700&display=swap" rel="stylesheet">
  <link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/prism/1.29.0/themes/prism.min.css">
  <link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.css">
+ <link rel="stylesheet" href="style.css?v=65">
  </head>
  <body>
  <div class="app-container">

  <!-- Tab Content Area -->
  <div class="content-area">
  <!-- Left Sidebar - Timeline Overview -->
+ <div class="agents-sidebar" id="agentsSidebar">
+ <div class="sidebar-content" id="sidebarAgents">
  <!-- Timeline widgets will be rendered here -->
  </div>
  <div class="sidebar-legend">

  <div class="main-content">
  <!-- Task Center Tab -->
  <div class="tab-content active" data-content-id="0">
+ <div class="agent-interface">
+ <div class="agent-header">
  <div>
+ <div class="agent-type">TASK CENTER</div>
  <h2>Task Center</h2>
  </div>
  <div class="header-actions" id="launcherButtons">

  </div>
  </div>

+ <div class="agent-body">
  <div class="chat-container" id="messages-command">
  <div class="welcome-message" id="welcomeMessage">
+ <p>Welcome to AgentUI — an AI interface with specialized agents.</p>

  <div class="session-selector" id="sessionSelector">
  <div class="session-selector-form">

  </div>

  <div class="welcome-explanation">
+ <p>The assistant can automatically open specialized agents for different tasks:</p>
  <ul>
  <li><strong style="color: var(--theme-accent)">BASE</strong> — Basic tasks with search</li>
  <li><strong style="color: var(--theme-accent)">CODE</strong> — Programming and data analysis</li>

  <line x1="45" y1="130" x2="175" y2="130" stroke="#eee" stroke-width="1"/>
  <text x="110" y="150" text-anchor="middle" font-size="9" fill="#333">Report summary</text>

+ <!-- Agent Box -->
  <rect x="300" y="10" width="180" height="160" rx="4" fill="#fafafa" stroke="#e0e0e0" stroke-width="1"/>
+ <text x="390" y="28" text-anchor="middle" font-size="11" font-weight="600" fill="var(--theme-accent)">AGENT</text>

+ <!-- Query (top of agent) -->
  <rect x="315" y="40" width="150" height="24" rx="3" fill="white" stroke="var(--theme-accent)" stroke-width="1"/>
  <text x="390" y="56" text-anchor="middle" font-size="9" fill="#333">Query</text>


  <rect x="315" y="108" width="150" height="14" rx="2" fill="#fbfbfb" stroke="#f0f0f0" stroke-width="1"/>
  <text x="390" y="118" text-anchor="middle" font-size="8" fill="#ccc">...</text>

+ <!-- Report (bottom of agent) - aligned with Task report area at y=145 -->
  <rect x="315" y="130" width="150" height="30" rx="3" fill="white" stroke="var(--theme-accent)" stroke-width="1"/>
  <text x="390" y="150" text-anchor="middle" font-size="9" fill="#333">Report</text>


  <line x1="315" y1="150" x2="193" y2="150" stroke="var(--theme-accent)" stroke-width="1.5" marker-end="url(#arrowhead)"/>
  </svg>
  </div>
+ <p>When an agent is opened, you'll see a widget you can click to jump to it. A pulsing dot on the tab indicates active generation.</p>
  </div>
  </div>
  </div>

  <button type="button" class="settings-add-btn" onclick="showModelDialog()">+ Add Model</button>
  </div>

+ <!-- Agent Model Selection -->
  <div class="settings-section">
  <label class="settings-label">
+ <span class="label-text">AGENT MODELS</span>
+ <span class="label-description">Select which model to use for each agent type</span>
  </label>
+ <div class="agent-models-grid" id="agentModelsGrid">
  <!-- Generated dynamically from AGENT_REGISTRY -->
  </div>
  </div>

  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">E2B API KEY (OPTIONAL)</span>
+ <span class="label-description">Required for code execution in CODE agents</span>
  </label>
  <input type="password" id="setting-e2b-key" class="settings-input" placeholder="Leave empty if not using code execution">
  </div>

  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">SERPER API KEY (OPTIONAL)</span>
+ <span class="label-description">Required for web search in RESEARCH agents</span>
  </label>
  <input type="password" id="setting-serper-key" class="settings-input" placeholder="Leave empty if not using research">
  </div>

  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">HUGGINGFACE TOKEN (OPTIONAL)</span>
+ <span class="label-description">Required for image generation in IMAGE agents</span>
  </label>
  <input type="password" id="setting-hf-token" class="settings-input" placeholder="Leave empty to use provider token">
  </div>

  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">IMAGE GENERATION MODEL (OPTIONAL)</span>
+ <span class="label-description">Model for text-to-image generation in IMAGE agents</span>
  </label>
  <select id="setting-image-gen-model" class="settings-select"></select>
  </div>

  <div class="settings-section">
  <label class="settings-label">
  <span class="label-text">IMAGE EDIT MODEL (OPTIONAL)</span>
+ <span class="label-description">Model for image-to-image editing in IMAGE agents</span>
  </label>
  <select id="setting-image-edit-model" class="settings-select"></select>
  </div>

  <script src="https://cdn.jsdelivr.net/npm/marked/marked.min.js"></script>
  <script src="https://cdn.jsdelivr.net/npm/katex@0.16.9/dist/katex.min.js"></script>
  <script src="research-ui.js?v=23"></script>
+ <script src="script.js?v=60"></script>
  </body>
  </html>
frontend/script.js CHANGED
@@ -54,7 +54,7 @@ let settings = {
  // New provider/model structure
  providers: {}, // providerId -> {name, endpoint, token}
  models: {}, // modelId -> {name, providerId, modelId (API model string)}
- notebooks: Object.fromEntries(Object.keys(AGENT_REGISTRY).map(k => [k, ''])),
  // Service API keys
  e2bKey: '',
  serperKey: '',
@@ -78,17 +78,17 @@ const actionWidgets = {};
  // Track tool call IDs for result updates (maps tabId -> tool_call_id)
  const toolCallIds = {};

- // Track notebooks by task_id for reuse (maps task_id -> tabId)
  const taskIdToTabId = {};

- // Track notebook counters for each type (derived from registry)
- let notebookCounters = getDefaultCounters();

  // Debounce timer for workspace saving
  let saveWorkspaceTimer = null;

  // Timeline data structure for sidebar
- // Maps tabId -> { type, title, events: [{type: 'user'|'assistant'|'notebook', content, childTabId?}], parentTabId?, isGenerating }
  const timelineData = {
  0: { type: 'command', title: 'Task Center', events: [], parentTabId: null, isGenerating: false }
  };
@@ -107,7 +107,7 @@ function resetLocalState() {
  Object.keys(taskIdToTabId).forEach(k => delete taskIdToTabId[k]);
  researchQueryTabIds = {};
  showAllTurns = false;
- notebookCounters = getDefaultCounters();

  // Reset timeline data
  Object.keys(timelineData).forEach(k => delete timelineData[k]);
@@ -145,7 +145,7 @@ function addTimelineEvent(tabId, eventType, content, childTabId = null) {
  const preview = content.length > 80 ? content.substring(0, 80) + '...' : content;

  timelineData[tabId].events.push({
- type: eventType, // 'user', 'assistant', or 'notebook'
  content: preview,
  childTabId: childTabId,
  timestamp: Date.now()
@@ -154,8 +154,8 @@ function addTimelineEvent(tabId, eventType, content, childTabId = null) {
  renderTimeline();
  }

- // Register a new notebook in timeline
- function registerNotebookInTimeline(tabId, type, title, parentTabId = null) {
  timelineData[tabId] = {
  type: type,
  title: title,
@@ -164,9 +164,9 @@ function registerNotebookInTimeline(tabId, type, title, parentTabId = null) {
  isGenerating: false
  };

- // If this notebook was launched from another, add a notebook event to parent
  if (parentTabId !== null && timelineData[parentTabId]) {
- addTimelineEvent(parentTabId, 'notebook', title, tabId);
  }

  renderTimeline();
@@ -180,7 +180,7 @@ function setTimelineGenerating(tabId, isGenerating) {
  }
  }

- // Update notebook title in timeline
  function updateTimelineTitle(tabId, title) {
  if (timelineData[tabId]) {
  timelineData[tabId].title = title;
@@ -188,7 +188,7 @@ function updateTimelineTitle(tabId, title) {
  }
  }

- // Remove notebook from timeline
  function removeFromTimeline(tabId) {
  // Remove from parent's events if it was a child
  const notebook = timelineData[tabId];
@@ -246,8 +246,8 @@ function reopenClosedTab(tabId, notebook) {
  // Restore the saved content (includes all messages)
  content.innerHTML = notebook.savedContent;
  } else {
- // Fallback: create fresh notebook content
- content.innerHTML = createNotebookContent(type, tabId, title);
  }

  document.querySelector('.main-content').appendChild(content);
@@ -283,7 +283,7 @@ function reopenClosedTab(tabId, notebook) {
  });
  }

- // If this is a code notebook, start the sandbox proactively
  if (type === 'code') {
  startSandbox(tabId);
  }
@@ -298,17 +298,17 @@ function reopenClosedTab(tabId, notebook) {

  // Render the full timeline widget
  function renderTimeline() {
- const sidebarContent = document.getElementById('sidebarNotebooks');
  if (!sidebarContent) return;

- // Get root notebooks (those without parents) - always include command center for workspace name
- const rootNotebooks = Object.entries(timelineData)
  .filter(([id, data]) => data.parentTabId === null);

  let html = '';

- for (const [tabId, notebook] of rootNotebooks) {
- html += renderNotebookTimeline(parseInt(tabId), notebook);
  }

  sidebarContent.innerHTML = html;
@@ -375,8 +375,8 @@ function renderTimeline() {
  });
  }

- // Render a single notebook's timeline (recursive for nested)
- function renderNotebookTimeline(tabId, notebook, isNested = false) {
  const isActive = activeTabId === tabId;
  const isClosed = notebook.isClosed || false;
  const typeLabel = getTypeLabel(notebook.type);
@@ -434,7 +434,7 @@ function renderNotebookTimeline(tabId, notebook, isNested = false) {
  <div class="tl-row turn user" data-tab-id="${tabId}">
  <div class="tl-dot" data-tooltip="${escapeHtml(event.content)}"></div>
  </div>`;
- } else if (group.type === 'notebook') {
  const event = group.events[0];
  if (event.childTabId !== null) {
  const childNotebook = timelineData[event.childTabId];
@@ -472,7 +472,7 @@ function renderNotebookTimeline(tabId, notebook, isNested = false) {
  const isComplete = !childIsGenerating;
  html += `
  <div class="tl-nested${isComplete ? ' complete' : ''}${isCollapsed ? ' collapsed' : ''}" data-child-tab-id="${event.childTabId}">
- ${renderNotebookTimeline(event.childTabId, childNotebook, true)}
  </div>`;
  // Return row with dot on parent line - only when subagent is complete
  if (isComplete) {
@@ -639,15 +639,15 @@ function renderLauncherButtons() {
  }
  }

- function renderNotebookModelSelectors() {
- const grid = document.getElementById('notebookModelsGrid');
  if (!grid) return;
  grid.innerHTML = '';
  for (const [key, agent] of Object.entries(AGENT_REGISTRY)) {
  const label = document.createElement('label');
  label.textContent = `${agent.label}:`;
  const select = document.createElement('select');
- select.id = `setting-notebook-${key}`;
  select.className = 'settings-select';
  grid.appendChild(label);
  grid.appendChild(select);
@@ -658,14 +658,14 @@ function initializeEventListeners() {
  // Generate dynamic UI elements from registry
  renderLauncherButtons();
  renderNewTabMenu();
- renderNotebookModelSelectors();

  // Launcher buttons in command center
  document.querySelectorAll('.launcher-btn').forEach(btn => {
  btn.addEventListener('click', (e) => {
  e.stopPropagation();
  const type = btn.dataset.type;
- createNotebookTab(type);
  });
  });

@@ -746,7 +746,7 @@ function initializeEventListeners() {
  document.querySelectorAll('.menu-item').forEach(item => {
  item.addEventListener('click', () => {
  const type = item.dataset.type;
- createNotebookTab(type);
  newTabMenu.classList.remove('active');
  });
  });
@@ -1178,7 +1178,7 @@ function initializeSessionListeners() {
  }
  }

- function createNotebookTab(type, initialMessage = null, autoSwitch = true, taskId = null, parentTabId = null) {
  const tabId = tabCounter++;

  // Use task_id if provided, otherwise generate default title
@@ -1186,17 +1186,17 @@ function createNotebookTab(type, initialMessage = null, autoSwitch = true, taskI
  if (taskId) {
  // Convert dashes to spaces and title case for display
  title = taskId;
- // Register this notebook for task_id reuse
  taskIdToTabId[taskId] = tabId;
  } else if (type !== 'command-center') {
- notebookCounters[type]++;
  title = `New ${type} task`;
  } else {
  title = getTypeLabel(type);
  }

  // Register in timeline
- registerNotebookInTimeline(tabId, type, title, parentTabId);

  // Create tab
  const tab = document.createElement('div');
@@ -1216,7 +1216,7 @@ function createNotebookTab(type, initialMessage = null, autoSwitch = true, taskI
  const content = document.createElement('div');
  content.className = 'tab-content';
  content.dataset.contentId = tabId;
- content.innerHTML = createNotebookContent(type, tabId, title);

  document.querySelector('.main-content').appendChild(content);

@@ -1248,7 +1248,7 @@ function createNotebookTab(type, initialMessage = null, autoSwitch = true, taskI
  });
  }

- // If this is a code notebook, start the sandbox proactively
  if (type === 'code') {
  startSandbox(tabId);
  }
@@ -1269,7 +1269,7 @@ function createNotebookTab(type, initialMessage = null, autoSwitch = true, taskI
  return tabId; // Return the tabId so we can reference it
  }

- function createNotebookContent(type, tabId, title = null) {
  if (type === 'command-center') {
  return document.querySelector('[data-content-id="0"]').innerHTML;
  }
@@ -1281,15 +1281,15 @@ function createNotebookContent(type, tabId, title = null) {
  const displayTitle = title || `New ${type} task`;

  return `
- <div class="notebook-interface">
- <div class="notebook-header">
  <div>
- <div class="notebook-type">${getTypeLabel(type)}</div>
  <h2>${escapeHtml(displayTitle)}</h2>
  </div>
  </div>
- <div class="notebook-body">
- <div class="chat-container" id="messages-${uniqueId}" data-notebook-type="${type}">
  </div>
  </div>
  <div class="input-area">
@@ -1341,11 +1341,11 @@ function closeTab(tabId) {
  const content = document.querySelector(`[data-content-id="${tabId}"]`);

  if (tab && content) {
- // Check if this is a code notebook and stop its sandbox
  const chatContainer = content.querySelector('.chat-container');
- const notebookType = chatContainer?.dataset.notebookType || 'chat';

- if (notebookType === 'code') {
  stopSandbox(tabId);
  }

@@ -1409,10 +1409,10 @@ function hideProgressWidget(chatContainer) {
  }

  function scrollChatToBottom(chatContainer) {
- // The actual scrolling container is .notebook-body, not .chat-container
- const notebookBody = chatContainer.closest('.notebook-body');
- if (notebookBody) {
- notebookBody.scrollTop = notebookBody.scrollHeight;
  }
  }

@@ -1445,18 +1445,18 @@ async function sendMessage(tabId) {
  // Add to timeline
  addTimelineEvent(tabId, 'user', message);

- // Scroll the notebook body (the actual scrolling container) to bottom
- const notebookBody = chatContainer.closest('.notebook-body');
- if (notebookBody) {
- notebookBody.scrollTop = notebookBody.scrollHeight;
  }

  // Show progress widget while waiting for response
  showProgressWidget(chatContainer);

- // Generate a title for the notebook if this is the first message and not command center
  if (isFirstMessage && tabId !== 0) {
- generateNotebookTitle(tabId, message);
  }

  // Clear input and disable it during processing
@@ -1467,14 +1467,14 @@ async function sendMessage(tabId) {
  // Set tab to generating state
  setTabGenerating(tabId, true);

- // Determine notebook type from chat container ID
- const notebookType = getNotebookTypeFromContainer(chatContainer);

- // Send full conversation history for all notebook types (stateless backend)
  const messages = getConversationHistory(chatContainer);

  // Stream response from backend
- await streamChatResponse(messages, chatContainer, notebookType, tabId);

  // Re-enable input and mark generation as complete
  input.disabled = false;
@@ -1485,7 +1485,7 @@ async function sendMessage(tabId) {
  saveWorkspaceDebounced();
  }

- async function generateNotebookTitle(tabId, query) {
  const currentSettings = getSettings();
  const backendEndpoint = '/api';
  const llmEndpoint = currentSettings.endpoint || 'https://api.openai.com/v1';
@@ -1526,18 +1526,18 @@ async function generateNotebookTitle(tabId, query) {
  }
  }

- function getNotebookTypeFromContainer(chatContainer) {
- // Try to get type from data attribute first (for dynamically created notebooks)
- const typeFromData = chatContainer.dataset.notebookType;
  if (typeFromData) {
  return typeFromData;
  }

- // Fallback: Extract notebook type from the container ID (e.g., "messages-command" -> "command")
  const containerId = chatContainer.id;
  if (containerId && containerId.startsWith('messages-')) {
  const type = containerId.replace('messages-', '');
- // Map to notebook type
  if (type === 'command') return 'command';
  if (type.startsWith('agent')) return 'agent';
  if (type.startsWith('code')) return 'code';
@@ -1575,8 +1575,8 @@ function getConversationHistory(chatContainer) {
  funcName = toolCall.function_name;
  funcArgs = toolCall.arguments;
  } else {
- // Command center-style tool call (launch_*_notebook)
- funcName = `launch_${toolCall.notebook_type}_notebook`;
  funcArgs = JSON.stringify({
  task: toolCall.message,
  topic: toolCall.message,
@@ -1617,12 +1617,12 @@ function getConversationHistory(chatContainer) {
  return messages;
  }

- async function streamChatResponse(messages, chatContainer, notebookType, tabId) {
  const currentSettings = getSettings();
  const backendEndpoint = '/api';

- // Resolve model configuration for this notebook type
- let modelConfig = resolveModelConfig(notebookType);
  if (!modelConfig) {
  modelConfig = getDefaultModelConfig();
  }
@@ -1666,7 +1666,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
  messages: messages,
- notebook_type: notebookType,
  stream: true,
  endpoint: modelConfig.endpoint,
  token: modelConfig.token || null,
@@ -1683,7 +1683,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  research_sub_agent_extra_params: researchSubAgentConfig?.extraParams || null,
  research_parallel_workers: currentSettings.researchParallelWorkers || null,
  research_max_websites: currentSettings.researchMaxWebsites || null,
- notebook_id: tabId.toString(), // Send unique tab ID for sandbox sessions
  frontend_context: getFrontendContext() // Dynamic context for system prompts
  })
  });
@@ -1762,7 +1762,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  // Still generating - no action needed

  } else if (data.type === 'result') {
- // Notebook result - update command center widget
  updateActionWidgetWithResult(tabId, data.content, data.figures, data.images);

  } else if (data.type === 'result_preview') {
@@ -1846,7 +1846,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  const globalIdx = startIdx + qi;
  const virtualId = `research-${tabId}-q${globalIdx}`;
  researchQueryTabIds[globalIdx] = virtualId;
- registerNotebookInTimeline(virtualId, 'search', data.queries[qi], tabId);
  setTimelineGenerating(virtualId, true);
  }

@@ -1866,7 +1866,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  } else if (data.query_index === -1) {
  // Browse result — create a virtual browse entry if needed
  const browseId = `research-${tabId}-browse-${Date.now()}`;
- registerNotebookInTimeline(browseId, 'browse', data.url || 'webpage', tabId);
  addTimelineEvent(browseId, 'assistant', data.title || data.url || 'page');
  setTimelineGenerating(browseId, false);
  }
@@ -1979,7 +1979,14 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  } catch(e) { /* ignore parse errors */ }
  } else if (data.tool === 'read_url') {
  const len = data.result?.length || 0;
- outputHtml = `<div class="tool-cell-read-summary">${len > 0 ? `Extracted ${(len / 1000).toFixed(1)}k chars` : 'No content extracted'}</div>`;
  } else if (data.tool === 'screenshot_url' && data.image) {
  outputHtml = `<img src="data:image/png;base64,${data.image}" alt="Screenshot" class="screenshot-img" />`;
  } else if ((data.tool === 'generate_image' || data.tool === 'edit_image' || data.tool === 'read_image_url') && data.image) {
@@ -1999,7 +2006,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  scrollChatToBottom(chatContainer);

  } else if (data.type === 'content') {
- // Regular streaming content (non-code notebooks)
  if (!currentMessageEl) {
  currentMessageEl = createAssistantMessage(chatContainer);
  }
@@ -2008,8 +2015,8 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  scrollChatToBottom(chatContainer);

  } else if (data.type === 'launch') {
- // Tool-based notebook launch from command center
- const notebookType = data.notebook_type;
  const initialMessage = data.initial_message;
  const taskId = data.task_id;
  const toolCallId = data.tool_call_id;
@@ -2024,7 +2031,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  chatContainer.appendChild(toolCallMsg);
  }
  toolCallMsg.setAttribute('data-tool-call', JSON.stringify({
- notebook_type: notebookType,
  message: initialMessage,
  tool_call_id: toolCallId
  }));
@@ -2035,14 +2042,14 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  toolResponseMsg.style.display = 'none';
  toolResponseMsg.setAttribute('data-tool-response', JSON.stringify({
  tool_call_id: toolCallId,
- content: `Launched ${notebookType} notebook with task: ${initialMessage}`
  }));
  chatContainer.appendChild(toolResponseMsg);

  // The action widget will show the launch visually
- handleActionToken(notebookType, initialMessage, (targetTabId) => {
- showActionWidget(chatContainer, notebookType, initialMessage, targetTabId, taskId);
- // Store tool call ID for this notebook tab so we can send result back
  toolCallIds[targetTabId] = toolCallId;
  }, taskId, tabId);

@@ -2053,8 +2060,8 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  // Remove retry indicator on success
  removeRetryIndicator(chatContainer);

- // Reset research state when research notebook completes
- if (notebookType === 'research' && typeof resetResearchState === 'function') {
  // Mark all research virtual sub-agents as done
  for (const virtualId of Object.values(researchQueryTabIds)) {
  setTimelineGenerating(virtualId, false);
@@ -2063,7 +2070,7 @@ async function streamChatResponse(messages, chatContainer, notebookType, tabId)
  resetResearchState();
  }

- // Check for action tokens in regular notebooks (legacy support)
  if (fullResponse) {
  const actionMatch = fullResponse.match(/<action:(agent|code|research|chat)>([\s\S]*?)<\/action>/i);
  if (actionMatch) {
@@ -2304,7 +2311,7 @@ function showActionWidget(chatContainer, action, message, targetTabId, taskId =
  </div>
  `;

- // Make header clickable to jump to the notebook
  const clickableArea = widget.querySelector('.action-widget-clickable');

  const clickHandler = () => {
@@ -2442,7 +2449,7 @@ async function updateActionWidgetWithResult(tabId, resultContent, figures, image
  }

  function sendMessageToTab(tabId, message) {
- // Programmatically send a message to an existing notebook tab
  const content = document.querySelector(`[data-content-id="${tabId}"]`);
  if (!content) return;

@@ -2455,13 +2462,13 @@ function sendMessageToTab(tabId, message) {
  }

  function handleActionToken(action, message, callback, taskId = null, parentTabId = null) {
- // Check if a notebook with this task_id already exists
  if (taskId && taskIdToTabId[taskId]) {
  const existingTabId = taskIdToTabId[taskId];
  const existingContent = document.querySelector(`[data-content-id="${existingTabId}"]`);

  if (existingContent) {
- // Send the message to the existing notebook
  sendMessageToTab(existingTabId, message);
  if (callback) {
  callback(existingTabId);
@@ -2473,10 +2480,10 @@ function handleActionToken(action, message, callback, taskId = null, parentTabId
  }
  }

- // Open the notebook with the extracted message as initial prompt
  // Don't auto-switch to the new tab (autoSwitch = false)
  setTimeout(() => {
- const newTabId = createNotebookTab(action, message, false, taskId, parentTabId);
  if (callback) {
  callback(newTabId);
  }
@@ -2656,7 +2663,7 @@ async function loadWorkspace() {
  function restoreWorkspace(workspace) {
  // Restore counters
  tabCounter = workspace.tabCounter || 1;
- notebookCounters = workspace.notebookCounters || getDefaultCounters();

  // Restore timeline data before tabs so renderTimeline works
  if (workspace.timelineData) {
@@ -2706,7 +2713,7 @@ function restoreTab(tabData) {
  const content = document.createElement('div');
  content.className = 'tab-content';
  content.dataset.contentId = tabData.id;
- content.innerHTML = createNotebookContent(tabData.type, tabData.id);
  document.querySelector('.main-content').appendChild(content);

  // Add event listeners for the new content
@@ -2732,7 +2739,7 @@ function restoreTab(tabData) {
  // Restore messages
  restoreTabMessages(tabData);

- // If this is a code notebook, start the sandbox proactively
  if (tabData.type === 'code') {
  startSandbox(tabData.id);
  }
@@ -2986,7 +2993,7 @@ function serializeWorkspace() {
  version: 1,
  tabCounter: tabCounter,
  activeTabId: activeTabId,
- notebookCounters: notebookCounters,
  tabs: [],
  timelineData: serializeTimelineData()
  };
@@ -3001,8 +3008,8 @@ function serializeWorkspace() {
  const content = document.querySelector(`[data-content-id="${tabId}"]`);
  if (content) {
  const chatContainer = content.querySelector('.chat-container');
- const notebookType = chatContainer?.dataset.notebookType || 'chat';
- workspace.tabs.push(serializeTab(tabId, notebookType));
  }
  }

@@ -3217,7 +3224,7 @@ function migrateSettings(oldSettings) {
  const newSettings = {
  providers: {},
  models: {},
- notebooks: {
  command: '',
  agent: '',
  code: '',
@@ -3254,18 +3261,18 @@ function migrateSettings(oldSettings) {
  modelId: oldSettings.model
  };

- // Set as default for all notebooks
- newSettings.notebooks.command = modelId;
- newSettings.notebooks.agent = modelId;
- newSettings.notebooks.code = modelId;
- newSettings.notebooks.research = modelId;
- newSettings.notebooks.chat = modelId;
  }

- // Migrate notebook-specific models if they existed
  const oldModels = oldSettings.models || {};
- const notebookTypes = Object.keys(AGENT_REGISTRY).filter(k => AGENT_REGISTRY[k].hasCounter);
- notebookTypes.forEach(type => {
  if (oldModels[type]) {
  const specificModelId = `model_${type}`;
  newSettings.models[specificModelId] = {
@@ -3273,7 +3280,7 @@ function migrateSettings(oldSettings) {
  providerId: providerId,
  modelId: oldModels[type]
  };
- newSettings.notebooks[type] = specificModelId;
  }
  });
  }
@@ -3298,7 +3305,7 @@ async function loadSettings() {

  // Fallback to localStorage if backend is unavailable
  if (!loadedSettings) {
- const savedSettings = localStorage.getItem('productive_settings');
  console.log('Loading settings from localStorage:', savedSettings ? 'found' : 'not found');
  if (savedSettings) {
  try {
@@ -3311,6 +3318,11 @@ async function loadSettings() {
  }

  if (loadedSettings) {
  // Migrate if needed
  if (!loadedSettings.settingsVersion || loadedSettings.settingsVersion < 2) {
  loadedSettings = migrateSettings(loadedSettings);
@@ -3400,14 +3412,14 @@ function renderModelsList() {
  container.innerHTML = html;
  }

- // Populate model dropdowns for notebook selection
  function populateModelDropdowns() {
  const models = settings.models || {};
- const notebooks = settings.notebooks || {};

  // Build dropdown IDs from registry + special dropdowns
  const dropdownIds = [
- ...Object.keys(AGENT_REGISTRY).map(t => `setting-notebook-${t}`),
  'setting-research-sub-agent-model',
  'setting-image-gen-model',
  'setting-image-edit-model'
@@ -3438,8 +3450,8 @@ function populateModelDropdowns() {

  // Set values from settings (driven by registry)
  for (const type of Object.keys(AGENT_REGISTRY)) {
- const dropdown = document.getElementById(`setting-notebook-${type}`);
- if (dropdown) dropdown.value = notebooks[type] || '';
  }
  const subAgentDropdown = document.getElementById('setting-research-sub-agent-model');
  if (subAgentDropdown) subAgentDropdown.value = settings.researchSubAgentModel || '';
@@ -3591,17 +3603,17 @@ function editModel(modelId) {

  // Delete model
  function deleteModel(modelId) {
- // Check if any notebooks use this model
- const notebooksUsingModel = Object.entries(settings.notebooks || {})
  .filter(([_, mid]) => mid === modelId);

- if (notebooksUsingModel.length > 0) {
- const warning = `This model is used by: ${notebooksUsingModel.map(([t]) => t).join(', ')}. Delete anyway?`;
  if (!confirm(warning)) return;

- // Clear the notebook assignments
- notebooksUsingModel.forEach(([type]) => {
- settings.notebooks[type] = '';
  });
  } else if (!confirm('Delete this model?')) {
  return;
@@ -3653,10 +3665,10 @@ function openSettings() {
  }

  async function saveSettings() {
- // Get notebook model selections from dropdowns (driven by registry)
- const notebookModels = {};
  for (const type of Object.keys(AGENT_REGISTRY)) {
- notebookModels[type] = document.getElementById(`setting-notebook-${type}`)?.value || '';
  }
  const researchSubAgentModel = document.getElementById('setting-research-sub-agent-model')?.value || '';

@@ -3682,7 +3694,7 @@ async function saveSettings() {
  }

  // Update settings
- settings.notebooks = notebookModels;
  settings.e2bKey = e2bKey;
  settings.serperKey = serperKey;
  settings.hfToken = hfToken;
@@ -3705,11 +3717,11 @@ async function saveSettings() {
  console.log('Settings saved to file:', settings);
  } else {
  console.error('Failed to save settings to file, falling back to localStorage');
- localStorage.setItem('productive_settings', JSON.stringify(settings));
  }
  } catch (e) {
  console.error('Could not save settings to backend, falling back to localStorage:', e);
- localStorage.setItem('productive_settings', JSON.stringify(settings));
  }

  // Apply theme
@@ -3867,10 +3879,10 @@ function getSettings() {
  return settings;
  }

- // Resolve model configuration for a notebook type
  // Returns { endpoint, token, model, extraParams } or null if not configured
- function resolveModelConfig(notebookType) {
- const modelId = settings.notebooks?.[notebookType];
  if (!modelId) return null;

  const model = settings.models?.[modelId];
@@ -3921,12 +3933,12 @@ function getFrontendContext() {
  textPrimary: theme.textPrimary,
  textSecondary: theme.textSecondary
  } : null,
- open_notebooks: getOpenNotebookTypes()
  };
  }

- // Get list of open notebook types
- function getOpenNotebookTypes() {
  const tabs = document.querySelectorAll('.tab[data-tab-id]');
  const types = [];
  tabs.forEach(tab => {
@@ -3937,8 +3949,8 @@ function getOpenNotebookTypes() {
  const content = document.querySelector(`[data-content-id="${tabId}"]`);
  if (content) {
  const chatContainer = content.querySelector('.chat-container');
- if (chatContainer && chatContainer.dataset.notebookType) {
- types.push(chatContainer.dataset.notebookType);
  }
  }
  }
@@ -3946,7 +3958,7 @@ function getOpenNotebookTypes() {
  return types;
  }

- // Sandbox management for code notebooks
  async function startSandbox(tabId) {
  const currentSettings = getSettings();
  const backendEndpoint = '/api';
@@ -3956,7 +3968,7 @@ async function startSandbox(tabId) {
  return;
  }

- // Add a status message to the notebook
  const uniqueId = `code-${tabId}`;
  const chatContainer = document.getElementById(`messages-${uniqueId}`);
  if (chatContainer) {
 
  // New provider/model structure
  providers: {}, // providerId -> {name, endpoint, token}
  models: {}, // modelId -> {name, providerId, modelId (API model string)}
+ agents: Object.fromEntries(Object.keys(AGENT_REGISTRY).map(k => [k, ''])),
  // Service API keys
  e2bKey: '',
  serperKey: '',

  // Track tool call IDs for result updates (maps tabId -> tool_call_id)
  const toolCallIds = {};

+ // Track agents by task_id for reuse (maps task_id -> tabId)
  const taskIdToTabId = {};

+ // Track agent counters for each type (derived from registry)
+ let agentCounters = getDefaultCounters();

  // Debounce timer for workspace saving
  let saveWorkspaceTimer = null;

  // Timeline data structure for sidebar
+ // Maps tabId -> { type, title, events: [{type: 'user'|'assistant'|'agent', content, childTabId?}], parentTabId?, isGenerating }
  const timelineData = {
  0: { type: 'command', title: 'Task Center', events: [], parentTabId: null, isGenerating: false }
  };

  Object.keys(taskIdToTabId).forEach(k => delete taskIdToTabId[k]);
  researchQueryTabIds = {};
  showAllTurns = false;
+ agentCounters = getDefaultCounters();

  // Reset timeline data
  Object.keys(timelineData).forEach(k => delete timelineData[k]);

  const preview = content.length > 80 ? content.substring(0, 80) + '...' : content;

  timelineData[tabId].events.push({
+ type: eventType, // 'user', 'assistant', or 'agent'
  content: preview,
  childTabId: childTabId,
  timestamp: Date.now()

  renderTimeline();
  }

+ // Register a new agent in timeline
+ function registerAgentInTimeline(tabId, type, title, parentTabId = null) {
  timelineData[tabId] = {
  type: type,
  title: title,

  isGenerating: false
  };

+ // If this agent was launched from another, add an agent event to parent
  if (parentTabId !== null && timelineData[parentTabId]) {
+ addTimelineEvent(parentTabId, 'agent', title, tabId);
  }

  renderTimeline();

  }
  }

+ // Update agent title in timeline
  function updateTimelineTitle(tabId, title) {
  if (timelineData[tabId]) {
  timelineData[tabId].title = title;

  }
  }

+ // Remove agent from timeline
  function removeFromTimeline(tabId) {
  // Remove from parent's events if it was a child
  const notebook = timelineData[tabId];

  // Restore the saved content (includes all messages)
  content.innerHTML = notebook.savedContent;
  } else {
+ // Fallback: create fresh agent content
+ content.innerHTML = createAgentContent(type, tabId, title);
  }

  document.querySelector('.main-content').appendChild(content);

  });
  }

+ // If this is a code agent, start the sandbox proactively
  if (type === 'code') {
  startSandbox(tabId);
  }

  // Render the full timeline widget
  function renderTimeline() {
+ const sidebarContent = document.getElementById('sidebarAgents');
  if (!sidebarContent) return;

+ // Get root agents (those without parents) - always include command center for workspace name
+ const rootAgents = Object.entries(timelineData)
  .filter(([id, data]) => data.parentTabId === null);

  let html = '';

+ for (const [tabId, notebook] of rootAgents) {
+ html += renderAgentTimeline(parseInt(tabId), notebook);
  }

  sidebarContent.innerHTML = html;
 
  });
  }

+ // Render a single agent's timeline (recursive for nested)
+ function renderAgentTimeline(tabId, notebook, isNested = false) {
  const isActive = activeTabId === tabId;
  const isClosed = notebook.isClosed || false;
  const typeLabel = getTypeLabel(notebook.type);

  <div class="tl-row turn user" data-tab-id="${tabId}">
  <div class="tl-dot" data-tooltip="${escapeHtml(event.content)}"></div>
  </div>`;
+ } else if (group.type === 'agent') {
  const event = group.events[0];
  if (event.childTabId !== null) {
  const childNotebook = timelineData[event.childTabId];

  const isComplete = !childIsGenerating;
  html += `
  <div class="tl-nested${isComplete ? ' complete' : ''}${isCollapsed ? ' collapsed' : ''}" data-child-tab-id="${event.childTabId}">
+ ${renderAgentTimeline(event.childTabId, childNotebook, true)}
  </div>`;
  // Return row with dot on parent line - only when subagent is complete
  if (isComplete) {

  }
  }

+ function renderAgentModelSelectors() {
+ const grid = document.getElementById('agentModelsGrid');
  if (!grid) return;
  grid.innerHTML = '';
  for (const [key, agent] of Object.entries(AGENT_REGISTRY)) {
  const label = document.createElement('label');
  label.textContent = `${agent.label}:`;
  const select = document.createElement('select');
+ select.id = `setting-agent-${key}`;
  select.className = 'settings-select';
  grid.appendChild(label);
  grid.appendChild(select);

  // Generate dynamic UI elements from registry
  renderLauncherButtons();
  renderNewTabMenu();
+ renderAgentModelSelectors();

  // Launcher buttons in command center
  document.querySelectorAll('.launcher-btn').forEach(btn => {
  btn.addEventListener('click', (e) => {
  e.stopPropagation();
  const type = btn.dataset.type;
+ createAgentTab(type);
  });
  });
 
 
  document.querySelectorAll('.menu-item').forEach(item => {
  item.addEventListener('click', () => {
  const type = item.dataset.type;
+ createAgentTab(type);
  newTabMenu.classList.remove('active');
  });
  });

  }
  }

+ function createAgentTab(type, initialMessage = null, autoSwitch = true, taskId = null, parentTabId = null) {
  const tabId = tabCounter++;

  // Use task_id if provided, otherwise generate default title

  if (taskId) {
  // Convert dashes to spaces and title case for display
  title = taskId;
+ // Register this agent for task_id reuse
  taskIdToTabId[taskId] = tabId;
  } else if (type !== 'command-center') {
+ agentCounters[type]++;
  title = `New ${type} task`;
  } else {
  title = getTypeLabel(type);
  }

  // Register in timeline
+ registerAgentInTimeline(tabId, type, title, parentTabId);

  // Create tab
  const tab = document.createElement('div');

  const content = document.createElement('div');
  content.className = 'tab-content';
  content.dataset.contentId = tabId;
+ content.innerHTML = createAgentContent(type, tabId, title);

  document.querySelector('.main-content').appendChild(content);

  });
  }

+ // If this is a code agent, start the sandbox proactively
  if (type === 'code') {
  startSandbox(tabId);
  }

  return tabId; // Return the tabId so we can reference it
  }

+ function createAgentContent(type, tabId, title = null) {
  if (type === 'command-center') {
  return document.querySelector('[data-content-id="0"]').innerHTML;
  }

  const displayTitle = title || `New ${type} task`;

  return `
+ <div class="agent-interface">
+ <div class="agent-header">
  <div>
+ <div class="agent-type">${getTypeLabel(type)}</div>
  <h2>${escapeHtml(displayTitle)}</h2>
  </div>
  </div>
+ <div class="agent-body">
+ <div class="chat-container" id="messages-${uniqueId}" data-agent-type="${type}">
  </div>
  </div>
  <div class="input-area">
 
  const content = document.querySelector(`[data-content-id="${tabId}"]`);

  if (tab && content) {
+ // Check if this is a code agent and stop its sandbox
  const chatContainer = content.querySelector('.chat-container');
+ const agentType = chatContainer?.dataset.agentType || 'chat';

+ if (agentType === 'code') {
  stopSandbox(tabId);
  }

  }

  function scrollChatToBottom(chatContainer) {
+ // The actual scrolling container is .agent-body
+ const agentBody = chatContainer.closest('.agent-body');
+ if (agentBody) {
+ agentBody.scrollTop = agentBody.scrollHeight;
  }
  }

  // Add to timeline
  addTimelineEvent(tabId, 'user', message);

+ // Scroll the agent body (the actual scrolling container) to bottom
+ const agentBody = chatContainer.closest('.agent-body');
+ if (agentBody) {
+ agentBody.scrollTop = agentBody.scrollHeight;
  }

  // Show progress widget while waiting for response
  showProgressWidget(chatContainer);

+ // Generate a title for the agent if this is the first message and not command center
  if (isFirstMessage && tabId !== 0) {
+ generateAgentTitle(tabId, message);
  }

  // Clear input and disable it during processing

  // Set tab to generating state
  setTabGenerating(tabId, true);

+ // Determine agent type from chat container ID
+ const agentType = getAgentTypeFromContainer(chatContainer);

+ // Send full conversation history for all agent types (stateless backend)
  const messages = getConversationHistory(chatContainer);

  // Stream response from backend
+ await streamChatResponse(messages, chatContainer, agentType, tabId);

  // Re-enable input and mark generation as complete
  input.disabled = false;

  saveWorkspaceDebounced();
  }

+ async function generateAgentTitle(tabId, query) {
  const currentSettings = getSettings();
  const backendEndpoint = '/api';
  const llmEndpoint = currentSettings.endpoint || 'https://api.openai.com/v1';

  }
  }

+ function getAgentTypeFromContainer(chatContainer) {
+ // Try to get type from data attribute first (for dynamically created agents)
+ const typeFromData = chatContainer.dataset.agentType;
  if (typeFromData) {
  return typeFromData;
  }

+ // Fallback: Extract agent type from the container ID (e.g., "messages-command" -> "command")
  const containerId = chatContainer.id;
  if (containerId && containerId.startsWith('messages-')) {
  const type = containerId.replace('messages-', '');
+ // Map to agent type
  if (type === 'command') return 'command';
  if (type.startsWith('agent')) return 'agent';
  if (type.startsWith('code')) return 'code';
 
1575
  funcName = toolCall.function_name;
1576
  funcArgs = toolCall.arguments;
1577
  } else {
1578
+ // Command center-style tool call (launch_*_agent)
1579
+ funcName = `launch_${toolCall.agent_type}_agent`;
1580
  funcArgs = JSON.stringify({
1581
  task: toolCall.message,
1582
  topic: toolCall.message,
 
1617
  return messages;
1618
  }
1619
 
1620
+ async function streamChatResponse(messages, chatContainer, agentType, tabId) {
1621
  const currentSettings = getSettings();
1622
  const backendEndpoint = '/api';
1623
 
1624
+ // Resolve model configuration for this agent type
1625
+ let modelConfig = resolveModelConfig(agentType);
1626
  if (!modelConfig) {
1627
  modelConfig = getDefaultModelConfig();
1628
  }
 
1666
  headers: { 'Content-Type': 'application/json' },
1667
  body: JSON.stringify({
1668
  messages: messages,
1669
+ agent_type: agentType,
1670
  stream: true,
1671
  endpoint: modelConfig.endpoint,
1672
  token: modelConfig.token || null,
 
1683
  research_sub_agent_extra_params: researchSubAgentConfig?.extraParams || null,
1684
  research_parallel_workers: currentSettings.researchParallelWorkers || null,
1685
  research_max_websites: currentSettings.researchMaxWebsites || null,
1686
+ agent_id: tabId.toString(), // Send unique tab ID for sandbox sessions
1687
  frontend_context: getFrontendContext() // Dynamic context for system prompts
1688
  })
1689
  });
 
1762
  // Still generating - no action needed
1763
 
1764
  } else if (data.type === 'result') {
1765
+ // Agent result - update command center widget
1766
  updateActionWidgetWithResult(tabId, data.content, data.figures, data.images);
1767
 
1768
  } else if (data.type === 'result_preview') {
 
1846
  const globalIdx = startIdx + qi;
1847
  const virtualId = `research-${tabId}-q${globalIdx}`;
1848
  researchQueryTabIds[globalIdx] = virtualId;
1849
+ registerAgentInTimeline(virtualId, 'search', data.queries[qi], tabId);
1850
  setTimelineGenerating(virtualId, true);
1851
  }
1852
 
 
1866
  } else if (data.query_index === -1) {
1867
  // Browse result — create a virtual browse entry if needed
1868
  const browseId = `research-${tabId}-browse-${Date.now()}`;
1869
+ registerAgentInTimeline(browseId, 'browse', data.url || 'webpage', tabId);
1870
  addTimelineEvent(browseId, 'assistant', data.title || data.url || 'page');
1871
  setTimelineGenerating(browseId, false);
1872
  }
 
1979
  } catch(e) { /* ignore parse errors */ }
1980
  } else if (data.tool === 'read_url') {
1981
  const len = data.result?.length || 0;
1982
+ const markdown = data.result?.markdown || '';
1983
+ const summaryText = len > 0 ? `Extracted ${(len / 1000).toFixed(1)}k chars` : 'No content extracted';
1984
+ if (markdown) {
1985
+ const toggleId = `read-content-${Date.now()}-${Math.random().toString(36).slice(2, 7)}`;
1986
+ outputHtml = `<div class="tool-cell-read-summary">${summaryText} <button class="read-content-toggle" onclick="const el=document.getElementById('${toggleId}');el.classList.toggle('expanded');this.textContent=el.classList.contains('expanded')?'Hide':'Show content'">Show content</button></div><div id="${toggleId}" class="read-content-body">${parseMarkdown(markdown)}</div>`;
1987
+ } else {
1988
+ outputHtml = `<div class="tool-cell-read-summary">${summaryText}</div>`;
1989
+ }
1990
  } else if (data.tool === 'screenshot_url' && data.image) {
1991
  outputHtml = `<img src="data:image/png;base64,${data.image}" alt="Screenshot" class="screenshot-img" />`;
1992
  } else if ((data.tool === 'generate_image' || data.tool === 'edit_image' || data.tool === 'read_image_url') && data.image) {
 
  scrollChatToBottom(chatContainer);

  } else if (data.type === 'content') {
+ // Regular streaming content (non-code agents)
  if (!currentMessageEl) {
  currentMessageEl = createAssistantMessage(chatContainer);
  }

  scrollChatToBottom(chatContainer);

  } else if (data.type === 'launch') {
+ // Tool-based agent launch from command center
+ const agentType = data.agent_type;
  const initialMessage = data.initial_message;
  const taskId = data.task_id;
  const toolCallId = data.tool_call_id;

  chatContainer.appendChild(toolCallMsg);
  }
  toolCallMsg.setAttribute('data-tool-call', JSON.stringify({
+ agent_type: agentType,
  message: initialMessage,
  tool_call_id: toolCallId
  }));

  toolResponseMsg.style.display = 'none';
  toolResponseMsg.setAttribute('data-tool-response', JSON.stringify({
  tool_call_id: toolCallId,
+ content: `Launched ${agentType} agent with task: ${initialMessage}`
  }));
  chatContainer.appendChild(toolResponseMsg);

  // The action widget will show the launch visually
+ handleActionToken(agentType, initialMessage, (targetTabId) => {
+ showActionWidget(chatContainer, agentType, initialMessage, targetTabId, taskId);
+ // Store tool call ID for this agent tab so we can send result back
  toolCallIds[targetTabId] = toolCallId;
  }, taskId, tabId);

  // Remove retry indicator on success
  removeRetryIndicator(chatContainer);

+ // Reset research state when research agent completes
+ if (agentType === 'research' && typeof resetResearchState === 'function') {
  // Mark all research virtual sub-agents as done
  for (const virtualId of Object.values(researchQueryTabIds)) {
  setTimelineGenerating(virtualId, false);

  resetResearchState();
  }

+ // Check for action tokens in regular agents (legacy support)
  if (fullResponse) {
  const actionMatch = fullResponse.match(/<action:(agent|code|research|chat)>([\s\S]*?)<\/action>/i);
  if (actionMatch) {

  </div>
  `;

+ // Make header clickable to jump to the agent
  const clickableArea = widget.querySelector('.action-widget-clickable');

  const clickHandler = () => {

  }

  function sendMessageToTab(tabId, message) {
+ // Programmatically send a message to an existing agent tab
  const content = document.querySelector(`[data-content-id="${tabId}"]`);
  if (!content) return;

  }

  function handleActionToken(action, message, callback, taskId = null, parentTabId = null) {
+ // Check if an agent with this task_id already exists
  if (taskId && taskIdToTabId[taskId]) {
  const existingTabId = taskIdToTabId[taskId];
  const existingContent = document.querySelector(`[data-content-id="${existingTabId}"]`);

  if (existingContent) {
+ // Send the message to the existing agent
  sendMessageToTab(existingTabId, message);
  if (callback) {
  callback(existingTabId);

  }
  }

+ // Open the agent with the extracted message as initial prompt
  // Don't auto-switch to the new tab (autoSwitch = false)
  setTimeout(() => {
+ const newTabId = createAgentTab(action, message, false, taskId, parentTabId);
  if (callback) {
  callback(newTabId);
  }
 
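The handleActionToken hunks above route a launch whose task_id is already known back to its existing tab instead of opening a new one. A minimal standalone sketch of that reuse logic (`routeLaunch` is a hypothetical name; the map mirrors the diff's `taskIdToTabId`):

```javascript
// Hypothetical, simplified model of the task_id reuse in handleActionToken:
// a launch whose task_id is already registered returns the existing tab id;
// otherwise the new tab id is registered for later reuse and returned.
function routeLaunch(taskIdToTabId, taskId, nextTabId) {
  if (taskId && taskIdToTabId[taskId] !== undefined) {
    return taskIdToTabId[taskId]; // reuse the existing agent tab
  }
  if (taskId) {
    taskIdToTabId[taskId] = nextTabId; // register the new tab for reuse
  }
  return nextTabId;
}
```

This is why a command-center agent that launches the same task twice keeps appending to one tab rather than spawning duplicates.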
  function restoreWorkspace(workspace) {
  // Restore counters
  tabCounter = workspace.tabCounter || 1;
+ agentCounters = workspace.agentCounters || workspace.notebookCounters || getDefaultCounters();

  // Restore timeline data before tabs so renderTimeline works
  if (workspace.timelineData) {

  const content = document.createElement('div');
  content.className = 'tab-content';
  content.dataset.contentId = tabData.id;
+ content.innerHTML = createAgentContent(tabData.type, tabData.id);
  document.querySelector('.main-content').appendChild(content);

  // Add event listeners for the new content

  // Restore messages
  restoreTabMessages(tabData);

+ // If this is a code agent, start the sandbox proactively
  if (tabData.type === 'code') {
  startSandbox(tabData.id);
  }

  version: 1,
  tabCounter: tabCounter,
  activeTabId: activeTabId,
+ agentCounters: agentCounters,
  tabs: [],
  timelineData: serializeTimelineData()
  };

  const content = document.querySelector(`[data-content-id="${tabId}"]`);
  if (content) {
  const chatContainer = content.querySelector('.chat-container');
+ const agentType = chatContainer?.dataset.agentType || 'chat';
+ workspace.tabs.push(serializeTab(tabId, agentType));
  }
  }

  const newSettings = {
  providers: {},
  models: {},
+ agents: {
  command: '',
  agent: '',
  code: '',

  modelId: oldSettings.model
  };

+ // Set as default for all agents
+ newSettings.agents.command = modelId;
+ newSettings.agents.agent = modelId;
+ newSettings.agents.code = modelId;
+ newSettings.agents.research = modelId;
+ newSettings.agents.chat = modelId;
  }

+ // Migrate agent-specific models if they existed
  const oldModels = oldSettings.models || {};
+ const agentTypes = Object.keys(AGENT_REGISTRY).filter(k => AGENT_REGISTRY[k].hasCounter);
+ agentTypes.forEach(type => {
  if (oldModels[type]) {
  const specificModelId = `model_${type}`;
  newSettings.models[specificModelId] = {

  providerId: providerId,
  modelId: oldModels[type]
  };
+ newSettings.agents[type] = specificModelId;
  }
  });
  }

  // Fallback to localStorage if backend is unavailable
  if (!loadedSettings) {
+ const savedSettings = localStorage.getItem('agentui_settings') || localStorage.getItem('productive_settings');
  console.log('Loading settings from localStorage:', savedSettings ? 'found' : 'not found');
  if (savedSettings) {
  try {

  }

  if (loadedSettings) {
+ // Migrate old "notebooks" key to "agents"
+ if (loadedSettings.notebooks && !loadedSettings.agents) {
+ loadedSettings.agents = loadedSettings.notebooks;
+ delete loadedSettings.notebooks;
+ }
  // Migrate if needed
  if (!loadedSettings.settingsVersion || loadedSettings.settingsVersion < 2) {
  loadedSettings = migrateSettings(loadedSettings);
 
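The settings-loading hunks above add a one-time key rename so old saved settings keep working. A sketch of that rename in isolation, assuming a plain settings object (`migrateNotebooksKey` is a hypothetical standalone name for the inline logic in the diff):

```javascript
// Hypothetical standalone version of the inline migration in the diff:
// old settings stored per-agent model picks under "notebooks"; rename the
// key to "agents" once, never clobbering an existing "agents" key.
function migrateNotebooksKey(loadedSettings) {
  if (loadedSettings.notebooks && !loadedSettings.agents) {
    loadedSettings.agents = loadedSettings.notebooks;
    delete loadedSettings.notebooks;
  }
  return loadedSettings;
}
```

Guarding on `!loadedSettings.agents` makes the migration idempotent: re-running it on already-migrated settings is a no-op.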
  container.innerHTML = html;
  }

+ // Populate model dropdowns for agent selection
  function populateModelDropdowns() {
  const models = settings.models || {};
+ const agents = settings.agents || {};

  // Build dropdown IDs from registry + special dropdowns
  const dropdownIds = [
+ ...Object.keys(AGENT_REGISTRY).map(t => `setting-agent-${t}`),
  'setting-research-sub-agent-model',
  'setting-image-gen-model',
  'setting-image-edit-model'

  // Set values from settings (driven by registry)
  for (const type of Object.keys(AGENT_REGISTRY)) {
+ const dropdown = document.getElementById(`setting-agent-${type}`);
+ if (dropdown) dropdown.value = agents[type] || '';
  }
  const subAgentDropdown = document.getElementById('setting-research-sub-agent-model');
  if (subAgentDropdown) subAgentDropdown.value = settings.researchSubAgentModel || '';

  // Delete model
  function deleteModel(modelId) {
+ // Check if any agents use this model
+ const agentsUsingModel = Object.entries(settings.agents || {})
  .filter(([_, mid]) => mid === modelId);

+ if (agentsUsingModel.length > 0) {
+ const warning = `This model is used by: ${agentsUsingModel.map(([t]) => t).join(', ')}. Delete anyway?`;
  if (!confirm(warning)) return;

+ // Clear the agent assignments
+ agentsUsingModel.forEach(([type]) => {
+ settings.agents[type] = '';
  });
  } else if (!confirm('Delete this model?')) {
  return;

  }

  async function saveSettings() {
+ // Get agent model selections from dropdowns (driven by registry)
+ const agentModels = {};
  for (const type of Object.keys(AGENT_REGISTRY)) {
+ agentModels[type] = document.getElementById(`setting-agent-${type}`)?.value || '';
  }
  const researchSubAgentModel = document.getElementById('setting-research-sub-agent-model')?.value || '';

  }

  // Update settings
+ settings.agents = agentModels;
  settings.e2bKey = e2bKey;
  settings.serperKey = serperKey;
  settings.hfToken = hfToken;

  console.log('Settings saved to file:', settings);
  } else {
  console.error('Failed to save settings to file, falling back to localStorage');
+ localStorage.setItem('agentui_settings', JSON.stringify(settings));
  }
  } catch (e) {
  console.error('Could not save settings to backend, falling back to localStorage:', e);
+ localStorage.setItem('agentui_settings', JSON.stringify(settings));
  }

  // Apply theme

  return settings;
  }

+ // Resolve model configuration for an agent type
  // Returns { endpoint, token, model, extraParams } or null if not configured
+ function resolveModelConfig(agentType) {
+ const modelId = settings.agents?.[agentType];
  if (!modelId) return null;

  const model = settings.models?.[modelId];

  textPrimary: theme.textPrimary,
  textSecondary: theme.textSecondary
  } : null,
+ open_agents: getOpenAgentTypes()
  };
  }

+ // Get list of open agent types
+ function getOpenAgentTypes() {
  const tabs = document.querySelectorAll('.tab[data-tab-id]');
  const types = [];
  tabs.forEach(tab => {

  const content = document.querySelector(`[data-content-id="${tabId}"]`);
  if (content) {
  const chatContainer = content.querySelector('.chat-container');
+ if (chatContainer && chatContainer.dataset.agentType) {
+ types.push(chatContainer.dataset.agentType);
  }
  }
  }

  return types;
  }

+ // Sandbox management for code agents
  async function startSandbox(tabId) {
  const currentSettings = getSettings();
  const backendEndpoint = '/api';

  return;
  }

+ // Add a status message to the agent
  const uniqueId = `code-${tabId}`;
  const chatContainer = document.getElementById(`messages-${uniqueId}`);
  if (chatContainer) {
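The resolveModelConfig hunks above resolve an agent type to a model entry through two lookups (agents → modelId → models). A minimal sketch of that indirection, with a hypothetical settings shape inferred from the diff (`lookupModel` is not a name from the codebase):

```javascript
// Hypothetical sketch of the two-step lookup resolveModelConfig performs:
// settings.agents maps an agent type to a model id, and settings.models
// maps that id to the concrete model entry; either step may be missing.
function lookupModel(settings, agentType) {
  const modelId = settings.agents?.[agentType];
  if (!modelId) return null; // unassigned agent -> caller falls back to default
  return settings.models?.[modelId] || null;
}
```

Returning `null` for any missing step is what lets `streamChatResponse` fall back to `getDefaultModelConfig()`.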
frontend/style.css CHANGED
@@ -259,8 +259,8 @@ body {
  margin-right: 320px;
  }

- /* Left Sidebar - Notebooks Overview */
- .notebooks-sidebar {
  width: 240px;
  min-width: 240px;
  background: var(--bg-primary);
@@ -740,14 +740,14 @@ body {
  transform: translateY(1px);
  }

- /* Notebook Content */
- .notebook-interface {
  height: 100%;
  display: flex;
  flex-direction: column;
  }

- .notebook-header {
  background: var(--bg-secondary);
  padding: 15px 20px;
  border-bottom: 1px solid var(--border-primary);
@@ -763,14 +763,14 @@ body {
  align-items: center;
  }

- .notebook-type {
  font-size: 10px;
  color: var(--text-secondary);
  text-transform: uppercase;
  letter-spacing: 1px;
  }

- .notebook-header h2 {
  font-size: 14px;
  font-weight: 500;
  letter-spacing: 1px;
@@ -778,8 +778,10 @@ body {
  margin-top: 4px;
  }

- .notebook-body {
  flex: 1;
  background: var(--bg-card);
  padding: 20px;
  overflow-y: auto;
@@ -1313,17 +1315,17 @@ pre code [class*="token"] {
  }

  /* Scrollbar styling */
- .notebook-body::-webkit-scrollbar,
  .tab-content::-webkit-scrollbar {
  width: 6px;
  }

- .notebook-body::-webkit-scrollbar-track,
  .tab-content::-webkit-scrollbar-track {
  background: var(--bg-secondary);
  }

- .notebook-body::-webkit-scrollbar-thumb,
  .tab-content::-webkit-scrollbar-thumb {
  background: var(--border-primary);
  border-radius: 3px;
@@ -1763,7 +1765,7 @@ pre code [class*="token"] {
  color: #c62828;
  }

- /* Result Preview in CODE notebook */
  .result-preview {
  margin: 16px 0;
  overflow: hidden;
@@ -1800,7 +1802,7 @@ pre code [class*="token"] {
  }

  @media (max-width: 1024px) {
- .notebook-types {
  grid-template-columns: 1fr;
  }

@@ -1809,7 +1811,7 @@ pre code [class*="token"] {
  }
  }

- /* Research Notebook Styles */
  .research-container,
  .research-container.message.assistant {
  margin-bottom: 16px;
@@ -2904,14 +2906,14 @@ pre code [class*="token"] {
  }

  /* Notebook models grid */
- .notebook-models-grid {
  display: grid;
  grid-template-columns: 100px 1fr;
  gap: 8px 12px;
  align-items: center;
  }

- .notebook-models-grid label {
  font-size: 11px;
  font-weight: 500;
  color: var(--text-secondary);
@@ -3864,6 +3866,39 @@ pre code [class*="token"] {
  font-size: 11px;
  }

  .screenshot-img {
  max-width: 100%;
  max-height: 400px;
 
  margin-right: 320px;
  }

+ /* Left Sidebar - Agents Overview */
+ .agents-sidebar {
  width: 240px;
  min-width: 240px;
  background: var(--bg-primary);

  transform: translateY(1px);
  }

+ /* Agent Content */
+ .agent-interface {
  height: 100%;
  display: flex;
  flex-direction: column;
  }

+ .agent-interface > .agent-header {
  background: var(--bg-secondary);
  padding: 15px 20px;
  border-bottom: 1px solid var(--border-primary);

  align-items: center;
  }

+ .agent-type {
  font-size: 10px;
  color: var(--text-secondary);
  text-transform: uppercase;
  letter-spacing: 1px;
  }

+ .agent-interface > .agent-header h2 {
  font-size: 14px;
  font-weight: 500;
  letter-spacing: 1px;

  margin-top: 4px;
  }

+ .agent-interface > .agent-body {
  flex: 1;
+ display: block;
+ align-items: initial;
  background: var(--bg-card);
  padding: 20px;
  overflow-y: auto;

  }

  /* Scrollbar styling */
+ .agent-interface > .agent-body::-webkit-scrollbar,
  .tab-content::-webkit-scrollbar {
  width: 6px;
  }

+ .agent-interface > .agent-body::-webkit-scrollbar-track,
  .tab-content::-webkit-scrollbar-track {
  background: var(--bg-secondary);
  }

+ .agent-interface > .agent-body::-webkit-scrollbar-thumb,
  .tab-content::-webkit-scrollbar-thumb {
  background: var(--border-primary);
  border-radius: 3px;

  color: #c62828;
  }

+ /* Result Preview in CODE agent */
  .result-preview {
  margin: 16px 0;
  overflow: hidden;

  }

  @media (max-width: 1024px) {
+ .agent-types {
  grid-template-columns: 1fr;
  }

  }
  }

+ /* Research Agent Styles */
  .research-container,
  .research-container.message.assistant {
  margin-bottom: 16px;

  }

  /* Notebook models grid */
+ .agent-models-grid {
  display: grid;
  grid-template-columns: 100px 1fr;
  gap: 8px 12px;
  align-items: center;
  }

+ .agent-models-grid label {
  font-size: 11px;
  font-weight: 500;
  color: var(--text-secondary);

  font-size: 11px;
  }

+ .read-content-toggle {
+ background: none;
+ border: 1px solid var(--border-color);
+ color: var(--accent-color);
+ font-size: 10px;
+ cursor: pointer;
+ padding: 1px 6px;
+ border-radius: 3px;
+ margin-left: 6px;
+ }
+
+ .read-content-toggle:hover {
+ background: var(--accent-color);
+ color: var(--bg-primary);
+ }
+
+ .read-content-body {
+ display: none;
+ max-height: 400px;
+ overflow-y: auto;
+ padding: 8px 10px;
+ margin-top: 6px;
+ border: 1px solid var(--border-color);
+ border-radius: 4px;
+ font-size: 12px;
+ line-height: 1.5;
+ background: var(--bg-primary);
+ }
+
+ .read-content-body.expanded {
+ display: block;
+ }
+
  .screenshot-img {
  max-width: 100%;
  max-height: 400px;
tests/backend/conftest.py CHANGED
@@ -45,7 +45,7 @@ def sample_workspace():
  "version": 1,
  "tabCounter": 3,
  "activeTabId": 0,
- "notebookCounters": {
  "agent": 1,
  "code": 1,
  "research": 0,

  "version": 1,
  "tabCounter": 3,
  "activeTabId": 0,
+ "agentCounters": {
  "agent": 1,
  "code": 1,
  "research": 0,
tests/backend/test_api.py CHANGED
@@ -130,11 +130,11 @@ class TestChatEndpoints:
  """Test that chat fails without endpoint"""
  response = client.post("/api/chat/stream", json={
  "messages": [{"role": "user", "content": "Hello"}],
- "notebook_type": "chat",
  "endpoint": "",
  "token": "",
  "model": "gpt-4",
- "notebook_id": "1"
  })
  # Should return 400 for missing endpoint
  assert response.status_code == 400
@@ -143,11 +143,11 @@ class TestChatEndpoints:
  """Test that chat requires messages"""
  response = client.post("/api/chat/stream", json={
  "messages": [],
- "notebook_type": "chat",
  "endpoint": "https://api.openai.com/v1",
  "token": "test",
  "model": "gpt-4",
- "notebook_id": "1"
  })
  # Empty messages should return 400
  assert response.status_code == 400
@@ -156,11 +156,11 @@ class TestChatEndpoints:
  """Test that chat accepts frontend_context parameter"""
  response = client.post("/api/chat/stream", json={
  "messages": [{"role": "user", "content": "Hello"}],
- "notebook_type": "code",
  "endpoint": "https://api.openai.com/v1",
  "token": "test",
  "model": "gpt-4",
- "notebook_id": "1",
  "frontend_context": {
  "theme": {
  "name": "forest",
@@ -168,7 +168,7 @@ class TestChatEndpoints:
  "bg": "#e8f5e9",
  "border": "#1b5e20"
  },
- "open_notebooks": ["command", "code"]
  }
  })
  # Request should be valid (actual streaming would fail due to invalid endpoint, but request is accepted)
@@ -185,17 +185,17 @@ class TestFrontendContextModel:
  import main
  ctx = main.FrontendContext(
  theme={"name": "forest", "accent": "#1b5e20", "bg": "#e8f5e9"},
- open_notebooks=["command", "code"]
  )
  assert ctx.theme["name"] == "forest"
- assert ctx.open_notebooks == ["command", "code"]

  def test_frontend_context_optional_fields(self):
  """Test that FrontendContext fields are optional"""
  import main
  ctx = main.FrontendContext()
  assert ctx.theme is None
- assert ctx.open_notebooks is None

  def test_chat_request_with_frontend_context(self):
  """Test ChatRequest with frontend_context"""
@@ -232,7 +232,7 @@ class TestStylingContext:
  assert "#e8f5e9" in result

  def test_get_system_prompt_code_with_context(self):
- """Test get_system_prompt for code notebook includes styling"""
  import main
  context = {"theme": {"name": "ocean", "accent": "#00796b", "bg": "#e0f2f1"}}
  result = main.get_system_prompt("code", context)
@@ -241,11 +241,11 @@ class TestStylingContext:
  assert "#00796b" in result

  def test_get_system_prompt_chat_no_styling(self):
- """Test get_system_prompt for chat notebook doesn't include styling"""
  import main
  context = {"theme": {"name": "forest", "accent": "#1b5e20"}}
  result = main.get_system_prompt("chat", context)
- # Chat notebooks should not have styling guidelines
  assert "Visual Style Guidelines" not in result

  """Test that chat fails without endpoint"""
131
  response = client.post("/api/chat/stream", json={
132
  "messages": [{"role": "user", "content": "Hello"}],
133
+ "agent_type": "chat",
134
  "endpoint": "",
135
  "token": "",
136
  "model": "gpt-4",
137
+ "agent_id": "1"
138
  })
139
  # Should return 400 for missing endpoint
140
  assert response.status_code == 400
 
143
  """Test that chat requires messages"""
144
  response = client.post("/api/chat/stream", json={
145
  "messages": [],
146
+ "agent_type": "chat",
147
  "endpoint": "https://api.openai.com/v1",
148
  "token": "test",
149
  "model": "gpt-4",
150
+ "agent_id": "1"
151
  })
152
  # Empty messages should return 400
153
  assert response.status_code == 400
 
156
  """Test that chat accepts frontend_context parameter"""
157
  response = client.post("/api/chat/stream", json={
158
  "messages": [{"role": "user", "content": "Hello"}],
159
+ "agent_type": "code",
160
  "endpoint": "https://api.openai.com/v1",
161
  "token": "test",
162
  "model": "gpt-4",
163
+ "agent_id": "1",
164
  "frontend_context": {
165
  "theme": {
166
  "name": "forest",
 
168
  "bg": "#e8f5e9",
169
  "border": "#1b5e20"
170
  },
171
+ "open_agents": ["command", "code"]
172
  }
173
  })
174
  # Request should be valid (actual streaming would fail due to invalid endpoint, but request is accepted)
 
185
  import main
186
  ctx = main.FrontendContext(
187
  theme={"name": "forest", "accent": "#1b5e20", "bg": "#e8f5e9"},
188
+ open_agents=["command", "code"]
189
  )
190
  assert ctx.theme["name"] == "forest"
191
+ assert ctx.open_agents == ["command", "code"]
192
 
193
  def test_frontend_context_optional_fields(self):
194
  """Test that FrontendContext fields are optional"""
195
  import main
196
  ctx = main.FrontendContext()
197
  assert ctx.theme is None
198
+ assert ctx.open_agents is None
199
 
200
  def test_chat_request_with_frontend_context(self):
201
  """Test ChatRequest with frontend_context"""
 
232
  assert "#e8f5e9" in result
233
 
234
  def test_get_system_prompt_code_with_context(self):
235
+ """Test get_system_prompt for code agent includes styling"""
236
  import main
237
  context = {"theme": {"name": "ocean", "accent": "#00796b", "bg": "#e0f2f1"}}
238
  result = main.get_system_prompt("code", context)
 
241
  assert "#00796b" in result
242
 
243
  def test_get_system_prompt_chat_no_styling(self):
244
+ """Test get_system_prompt for chat agent doesn't include styling"""
245
  import main
246
  context = {"theme": {"name": "forest", "accent": "#1b5e20"}}
247
  result = main.get_system_prompt("chat", context)
248
+ # Chat agents should not have styling guidelines
249
  assert "Visual Style Guidelines" not in result
250
 
251