owenkaplinsky committed on
Commit
cc3b7b2
·
1 Parent(s): 0654cbb

Squash commits

.gitattributes CHANGED
@@ -1,2 +1,4 @@
1
  # Auto detect text files and perform LF normalization
2
  * text=auto
 
 
 
1
  # Auto detect text files and perform LF normalization
2
  * text=auto
3
+ thumbnail.jpg filter=lfs diff=lfs merge=lfs -text
4
+ thumbnail.png filter=lfs diff=lfs merge=lfs -text
Dockerfile CHANGED
@@ -1,27 +1,34 @@
1
  # syntax=docker/dockerfile:1
2
- FROM python:3.11-slim
3
 
4
- # Avoid interactive prompts
5
  ENV DEBIAN_FRONTEND=noninteractive \
 
6
  PYTHONUNBUFFERED=1 \
 
7
  PORT=7860
8
 
9
- WORKDIR /app
10
-
11
- # System deps (curl for health checks/logs if needed)
12
  RUN apt-get update && apt-get install -y --no-install-recommends \
13
- curl ca-certificates && \
14
  rm -rf /var/lib/apt/lists/*
15
 
16
- # Install Python deps first (better layer caching)
 
 
 
 
 
17
  COPY requirements.txt .
18
  RUN pip install --no-cache-dir -r requirements.txt
19
 
20
- # Copy the application
21
- COPY . .
 
 
 
 
 
22
 
23
- # Expose the Space port
24
  EXPOSE 7860
25
 
26
- # Start the unified server (FastAPI + Gradio mounts)
27
- CMD ["python", "project/unified_server.py"]
 
1
  # syntax=docker/dockerfile:1
2
+ FROM node:20-slim
3
 
 
4
  ENV DEBIAN_FRONTEND=noninteractive \
5
+ PYTHONDONTWRITEBYTECODE=1 \
6
  PYTHONUNBUFFERED=1 \
7
+ VENV_PATH=/opt/venv \
8
  PORT=7860
9
 
10
+ # Python + venv (avoid PEP 668 system install issues)
 
 
11
  RUN apt-get update && apt-get install -y --no-install-recommends \
12
+ python3 python3-venv ca-certificates curl && \
13
  rm -rf /var/lib/apt/lists/*
14
 
15
+ RUN python3 -m venv $VENV_PATH
16
+ ENV PATH="$VENV_PATH/bin:$PATH"
17
+
18
+ WORKDIR /app
19
+
20
+ # Python deps
21
  COPY requirements.txt .
22
  RUN pip install --no-cache-dir -r requirements.txt
23
 
24
+ # Node deps
25
+ WORKDIR /app/project
26
+ COPY project/package*.json ./
27
+ RUN npm install
28
+
29
+ # App source
30
+ COPY project .
31
 
 
32
  EXPOSE 7860
33
 
34
+ CMD ["npm", "start"]
 
README.md CHANGED
@@ -1,79 +1,97 @@
 
1
  # MCP Blockly
2
 
3
- MCP Blockly is a visual programming environment for building real MCP servers without dealing with Python syntax or configuration. It brings the clarity of block-based logic into AI development, allowing newcomers and experienced builders alike to design, test, and refine MCP tools through a clean drag-and-connect workflow. Every block you place generates runnable Python code instantly, so you can focus on structure and behavior while the system manages the boilerplate.
4
 
5
- ## What This Does
 
 
6
 
7
- MCP Blockly lets you build Model Context Protocol (MCP) servers using a block-based interface, perfect for students and newcomers stepping into AI development. The core building happens on the visual canvas. You define your MCP inputs, arrange your logic with blocks, and choose what your server will return. Every change you make is reflected in live Python code on the side. The generator handles function signatures and MCP boilerplate automatically, so you never have to worry about syntax or configuration. Everything stays valid and synchronized.
8
 
9
- The interface has three main areas. The canvas on the left is where you build by dragging and connecting blocks. On the right are two tabs for working with your project: the Testing tab, and an AI Assistant tab.
10
 
11
- Once your blocks are in place, the Development panel makes testing simple. It automatically generates input fields based on your parameters, so you can run the MCP server logic instantly. You enter values, submit, and see the outputs appear. This kind of immediate feedback helps learners understand how data flows through their tool and builds intuition about how AI tools work.
12
 
13
- The AI Assistant tab lets you build and refine your project through conversation. It understands your workspace and becomes a natural part of how you learn and explore the MCP ecosystem.
14
 
15
- The assistant can:
16
- - Create new blocks from plain language. Describe what you want to accomplish, and it builds the correct structure automatically.
17
- - Delete or replace existing blocks without disrupting the rest of your layout.
18
- - Create and name variables that become immediately usable in your workspace.
19
- - Run your MCP tool with real inputs and show the actual outputs for testing.
20
- - Build nested block structures, such as inserting expressions or operations inside other blocks.
21
- - Explain concepts and guide you step by step through the creation process.
22
- - Perform multi-step changes and executions, refining your setup one step at a time.
23
 
24
- It works live and responds quickly, keeping your workspace synchronized with every instruction. This combination of hands-on building with conversational guidance bridges the gap between visual coding and real AI workflows. Taking things one step at a time leads to more accurate and reliable results as you learn.
25
 
26
- The File menu handles creating new projects, opening existing ones, and downloading your work. You can download just the generated Python code or the entire project as a JSON file. The Edit menu provides standard undo, redo, and a cleanup button to reorganize blocks. The Examples menu includes pre-built projects you can load to understand how to structure your own.
27
 
28
- ### API Keys
29
 
30
- The system has two optional but recommended API keys:
31
 
32
- **OpenAI API Key**: Enables the AI Assistant: your guide through the learning process. The assistant helps you build blocks, fix mistakes, explain concepts, and explore MCP development interactively. Without it, you can still create and test blocks manually.
33
 
34
- **Hugging Face API Key**: Allows you to deploy your MCP as a real, live server on Hugging Face Spaces. This is a practical way to learn how AI tools work in production. The system creates a new Space and uploads your tool automatically. The Space becomes a real MCP server that other AI systems can connect to and call natively. Without it, you can build and test locally but won't be able to deploy unless you manually create a space and upload the generated `app.py` file.
35
 
36
- Set these keys through Settings before using features that depend on them. Both are optional: you can build and test tools without either key, but certain features won't be available.
37
 
38
- The toolbox contains blocks for common operations: calling language models, making HTTP requests, extracting data from JSON, manipulating text, performing math, and working with lists. You connect these blocks to build your workflow.
39
 
40
- ## Installation
 
 
 
 
 
 
41
 
42
- Clone the repository and install dependencies.
43
 
44
- ```bash
45
- git clone https://github.com/owenkaplinsky/mcp-blockly.git
46
- cd mcp-blockly/project
47
- pip install -r ../requirements.txt
48
- npm install
49
- ```
50
 
51
- ## Running Locally
52
 
53
- Start the application with:
54
 
55
- ```bash
56
- npm start
57
- ```
58
 
59
- After that, it will open a tab in your browser and you can start building!
60
 
61
  ## How It Works
62
 
63
- The system has three main components: the frontend Blockly editor, the backend Python services, and the AI Assistant engine.
 
 
 
 
64
 
65
- When you arrange blocks in the editor, change listeners trigger code generation. The JavaScript generator traverses your block tree and outputs Python code that represents your workflow. Each block type has a corresponding generator function that knows how to output Python for that block. These functions compose recursively, building the complete function definition from your block arrangement. The generated code is sent to the backend via HTTP POST and stored in memory.
66
 
67
- Blocks dynamically manage their input and output ports through Blockly's mutator system. When you modify a block to add or remove parameters, the mutator updates both the visual shape and the internal state that tracks how many inputs and outputs exist. Each input and output has metadata about its name and type. When a user defines inputs on their main function block, the system creates invisible reference blocks for each input parameter. These reference blocks appear as connectable outputs that other blocks can use. During code generation, these references translate to variable names in the Python function signature and body.
68
 
69
- The AI Assistant component is the sophisticated heart of the system. It continuously monitors the current workspace state and code. When you send a message, the system formats your entire block structure into a readable representation and includes it in the context sent to OpenAI. The model receives not just your question but a complete understanding of what you've built. The system includes a detailed system prompt that explains MCP concepts, the block syntax, and what actions the model can perform.
70
 
71
- Based on the model's response, the system recognizes four special commands: run to execute your MCP with sample inputs, delete to remove a block by ID, create to add new blocks to your workspace, and deploy_to_huggingface to publish your tool as a live server. When the model issues these commands, they're executed immediately. For block modifications, the system uses Server-Sent Events to stream commands back to the frontend, which creates or deletes blocks in real time while you watch. This maintains real-time synchronization between the chat interface and the visual editor.
72
 
73
- The AI Assistant can execute multiple actions per conversation turn. If the model decides it needs to run your code to see the result before suggesting improvements, it does that automatically. If it needs to delete a broken block and create a replacement, it performs both operations and then reports back with what happened. This looping continues for up to ten consecutive iterations per user message, allowing the AI to progressively refine your blocks without requiring you to send multiple messages.
74
 
75
- When you're ready to share your MCP tool, you can deploy it directly to Hugging Face Spaces. Once deployed, the tool becomes a real MCP server that can be called by any AI system supporting the MCP protocol. The AI Assistant in MCP Blockly can immediately use your deployed tool—just ask it to call your MCP and it will invoke the actual live server.
76
 
77
- The agent also monitors your Space's build status. Build typically takes 1-2 minutes. Once the Space reaches RUNNING status, all the tools you defined in your blocks become available for the AI to call natively. If you send a message while the Space is still building, the AI will let you know to wait a moment before your MCP tools become available.
78
 
79
- API keys are managed through environment variables set at runtime. The system uses Gradio to automatically generate user interfaces based on the function signatures in your generated code, creating input and output fields that match your tool's parameters.
 
1
+ ---
2
+ title: MCP Blockly
3
+ emoji: 🧩
4
+ colorFrom: purple
5
+ colorTo: blue
6
+ sdk: docker
7
+ pinned: true
8
+ tags:
9
+ - mcp-in-action-track-creative
10
+ - mcp-in-action-track-consumer
11
+ - OpenAI
12
+ - Blockly
13
+ - Education
14
+ - MCP
15
+ short_description: AI that makes MCP servers with block-code.
16
+ ---
17
+
18
  # MCP Blockly
19
 
20
+ MCP Blockly introduces a new kind of MCP development experience: a block-based Gradio 6 MCP server builder, powered by an autonomous AI agent that can understand your entire workspace, reason about the structure of your MCP tool, and act on it directly. It is one of the first publicly available AI agents that can perform multi-step editing using block-code. Give it a goal and it creates a plan, modifies or rebuilds your logic, tests the tool with real inputs, and checks the results. This goes far beyond suggestion-driven assistance: the agent can create blocks, repair or remove broken logic, construct complex nested structures, and even deploy the finished MCP server for you. While the interface feels familiar (intentionally resembling Scratch), the result is a visual editor that becomes a fully interactive, agent-powered development platform.
21
 
22
+ Most educational tools demonstrate concepts passively, but MCP Blockly supports learning through an active, hands-on environment. Studies consistently show that students develop deeper understanding and longer-term retention when they learn by doing, and MCP Blockly applies this idea by letting users experiment with real MCP logic while an AI partner can step in when needed, show alternative structures, edit the workspace to illustrate concepts, or help the learner understand how a tool should be built.
23
+
24
+ Vibe coding makes it easy to lean entirely on an AI assistant without gaining any real understanding. In the context of MCP servers, this often drops beginners, especially those coming from Scratch, into an unfamiliar world where they rely on generated code that feels like magic rather than something they can reason about. MCP Blockly takes a different approach by letting the AI work with you in a transparent, structured environment that shows how each block fits into the overall logic. This makes the assistant a guide rather than a crutch and helps learners develop genuine intuition about MCP development instead of staying dependent on vibe-coded projects they don't understand.
25
 
26
+ This project uses the **OpenAI Responses API** for easy MCP integration, along with their excellent proprietary models, which help the agent make smarter decisions.
27
 
28
+ You can read the announcement post [on LinkedIn](https://www.linkedin.com/feed/update/urn:li:activity:7399566358813790209/), along with the [article](https://huggingface.co/blog/MCP-1st-Birthday/mcp-blockly) about the project!
29
 
30
+ ## YouTube Video (click on image)
31
 
32
+ [![Video Thumbnail](thumbnail.png)](https://www.youtube.com/watch?v=5oj-2uIZpb0)
33
 
34
+ ### Setup
 
 
 
 
 
 
 
35
 
36
+ The system needs one API key from you (the OpenAI API key is provided for free):
37
 
38
+ - **Hugging Face API Key**: Allows you to deploy your MCP server as a real, live Gradio 6 MCP server on Hugging Face Spaces. The system creates a new Space and uploads your tool automatically. Once deployed, the tool acts as a real MCP endpoint that other AI systems can call. Without this key, you can build and test your tool in this Space, but you will not be able to deploy unless you manually upload the generated code.
39
 
40
+ Set this key through the welcome menu, or `File > API Keys` before using features that depend on it!
41
 
42
+ ## What This Does
43
 
44
+ MCP Blockly lets you build Gradio 6 MCP servers using a block-based interface, perfect for students and newcomers stepping into AI development. The core building happens in the workspace: you define your MCP inputs, arrange your logic with blocks, and choose what your server will return. Every change you make is reflected in live Python code on the side. The generator handles function signatures and MCP boilerplate automatically, so you never have to worry about syntax or configuration. Everything stays valid and synchronized.
45
 
46
+ The interface has three main areas. The workspace on the right is where you build by dragging and connecting blocks. On the left are two tabs for working with your project: the Testing tab, and an AI Assistant tab.
47
 
48
+ Additionally, there are three dropdown menus in the top bar to aid you in development. The File menu handles creating new projects, opening existing ones, and downloading your work. You can download just the generated Python code or the entire project as a JSON file. The Edit menu provides standard undo, redo, and a cleanup button to reorganize blocks. The Examples menu includes pre-built projects you can load to understand how to structure your own.
49
 
50
+ Once your blocks are in place, the Testing tab makes testing simple. After you refresh it, it automatically generates input fields based on your parameters, so you can run the MCP server logic instantly. You enter values, submit, and see the outputs appear. This kind of immediate feedback helps learners understand how data moves through their tool and builds intuition about how AI tools work.
51
 
52
+ The AI Assistant tab lets you build and refine your project through conversation. You can think of it as a conversational partner that helps you shape your MCP tool step by step. It's always there to explain concepts or code to you, help you develop your tool, and ensure your code runs without issues.
53
+ The assistant can help you:
54
+ - Create or adjust your blocks.
55
+ - Introduce variables and expressions when you need them.
56
+ - Deploy your completed Gradio 6 MCP server to Hugging Face Spaces.
57
+ - Call the deployed tool to verify that everything works end to end.
58
+ - And much more!
59
 
60
+ The assistant is meant to collaborate with you, not to take over. It may occasionally misunderstand complex structures, and you can always correct or rearrange blocks manually. This keeps the experience grounded and aligned with learning and exploration.
61
 
62
+ <div style="display:flex; gap:32px; align-items:flex-start;">
63
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/68965fe809f27a491d9f5852/XebJEOohCUcSbCLYYUacJ.gif" />
64
+ <img src="https://cdn-uploads.huggingface.co/production/uploads/68965fe809f27a491d9f5852/MdS5XHiAqhAxafGV-BTiJ.gif" />
65
+ </div>
 
 
66
 
67
+ ## Why This Matters
68
 
69
+ The goal of MCP Blockly is to empower the next generation of AI builders. By providing an environment that is both powerful and transparent, we can do more than just help people build tools; we can help them understand *how* they are built.
70
 
71
+ When a learner sees the AI assistant construct a program block by block, they are not just getting a solution: they are getting a live, narrated demonstration of the logical steps required to solve a problem. They can intervene at any time, modify the blocks themselves, and use the interactive testing tab to see how their changes affect the outcome.
 
 
72
 
73
+ This creates a powerful feedback loop that builds true, lasting intuition. It's a new way to learn, a new way to build, and a small step toward making the incredible power of AI accessible to everyone.
74
 
75
  ## How It Works
76
 
77
+ This project expands Blockly with brand-new features designed to make MCP development smoother and more intuitive. These changes range from user-experience improvements to low-level hooks that let the AI agent perform its actions.
78
+
79
+ ### Backend
80
+
81
+ The core mechanism is a recursive Python code generator. When you connect blocks, the system walks through your structure and compiles it into Python code. Text blocks produce string literals, math operations produce arithmetic expressions, conditionals produce if/elif/else branches, and loops produce iteration logic. Your top-level MCP block becomes a function with typed parameters and return values.
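The recursive idea can be sketched in a few lines of Python. The block shapes and field names below are illustrative, not the project's actual generator API; they just show how per-block emitters compose recursively into one expression:

```python
def generate(block):
    """Recursively compile a block dict into a Python expression string."""
    kind = block["type"]
    if kind == "text":
        return repr(block["value"])              # text block -> string literal
    if kind == "number":
        return repr(block["value"])              # number block -> numeric literal
    if kind == "math_op":
        left = generate(block["left"])           # recurse into child blocks
        right = generate(block["right"])
        return f"({left} {block['op']} {right})" # math block -> arithmetic expression
    if kind == "if":
        cond = generate(block["cond"])
        do = generate(block["do"])
        other = generate(block["else"])
        return f"({do} if {cond} else {other})"  # conditional -> if/else expression
    raise ValueError(f"unknown block type: {kind}")

tree = {"type": "math_op", "op": "+",
        "left": {"type": "number", "value": 2},
        "right": {"type": "number", "value": 3}}
code = generate(tree)
print(code)        # the generated source, e.g. (2 + 3)
print(eval(code))  # evaluating it yields 5
```

A real generator additionally wraps the top-level result in a typed function definition, but the compositional pattern is the same.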
82
 
83
+ When you test your tool, the generated code is sent to a backend service that handles local testing and execution. It spins up a thread, parses your function signature to determine each parameter's type, and generates matching input fields. Finally, it displays your results in the Gradio UI.
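A minimal sketch of how signature introspection can drive input-field generation. The tool function and the type-to-component mapping here are hypothetical, not the project's actual code:

```python
import inspect

# Hypothetical generated tool function, as the block compiler might emit it.
def word_count(text: str, min_length: int = 1) -> int:
    return sum(1 for w in text.split() if len(w) >= min_length)

# Map parameter annotations to the kind of input field to render
# (the component names are illustrative stand-ins for Gradio components).
FIELD_FOR_TYPE = {str: "Textbox", int: "Number", float: "Number", bool: "Checkbox"}

def describe_inputs(fn):
    """Inspect fn's signature and choose an input field per parameter."""
    sig = inspect.signature(fn)
    return [(name, FIELD_FOR_TYPE.get(p.annotation, "Textbox"))
            for name, p in sig.parameters.items()]

print(describe_inputs(word_count))  # [('text', 'Textbox'), ('min_length', 'Number')]
```

This is the same trick Gradio itself uses to build a UI from a function: the annotations, not the code body, determine what the tester renders.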
84
 
85
+ All of this runs through one unified web interface. The frontend communicates with both backends over HTTP for regular operations and Server-Sent Events for real-time AI updates. Your API keys are stored in localStorage and used to authenticate requests to OpenAI and Hugging Face during your session. They are never saved in the Python backend or printed, and they are disposed of immediately after use.
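For reference, a Server-Sent Events frame is simply a `data:` line carrying a payload followed by a blank line, which is what the backend streams to the browser. A minimal formatter (illustrative, mirroring the framing the server code uses):

```python
import json

def sse_event(payload: dict) -> str:
    """Format one Server-Sent Events frame: 'data: <json>' plus a blank line."""
    return f"data: {json.dumps(payload)}\n\n"

# Each frame the browser's EventSource receives fires one 'message' event.
print(sse_event({"type": "heartbeat"}))
```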
86
 
87
+ ### AI Assistant
88
 
89
+ The assistant works by reading your workspace in a custom domain-specific language (DSL) created for this project, which lets the AI interact with a normally UI-based environment. Each block gets a unique ID marked with special delimiters, and its structure is described as nested function calls. For example, a text block might look like `↿ abc123 text(inputs(TEXT: "hello"))`, telling the AI what the block does and how it's configured. When you send a message, the AI receives your entire workspace in this format as context. It understands what operations are possible: it can construct new blocks in the same nested syntax, request that existing blocks be deleted by ID, create variables, and more. These requests come back to your browser as instructions, which are executed immediately to update the visual workspace.
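A toy serializer for that DSL shape might look like the following. The dict layout is an assumption, and the exact delimiter set is illustrative; only the overall `id` plus nested-call pattern is taken from the example above:

```python
def to_dsl(block):
    """Serialize one block dict into the id-plus-nested-call DSL form."""
    fields = ", ".join(f"{k}: {v!r}" for k, v in block["inputs"].items())
    return f"↿ {block['id']} {block['type']}(inputs({fields}))"

block = {"id": "abc123", "type": "text", "inputs": {"TEXT": "hello"}}
print(to_dsl(block))  # ↿ abc123 text(inputs(TEXT: 'hello'))
```

Because every block renders to one unambiguous line, the model can quote a block's ID back verbatim when asking for a deletion or replacement.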
90
 
91
+ The agent does not simply think of blocks individually: it understands the complete structure of the workspace, including multi-branch blocks such as conditionals and nested logic constructs. For example, when the DSL describes an `if` block, the agent knows that it contains a condition input, a `do` branch, and an `else` branch, each of which expects different kinds of sub-blocks. The agent can independently modify any of these branches, insert new blocks into the correct slot, or replace just one part of a larger structure while preserving the rest. This structural awareness lets the assistant work reliably with arbitrarily deep or complex logic, because it always understands which positions in the workspace are valid targets for a given operation.
92
 
93
+ The assistant follows a multi-step planning pipeline whenever it works on the project. Each request begins with a high-level plan, followed by pseudocode, then a concrete checklist of operations. The agent executes each operation one at a time, updating its understanding of the workspace after every change by rereading the DSL. This loop continues for several iterations until the goal is complete or no further progress can be made. Because the assistant evaluates the workspace after each modification, it can adapt to new block layouts, recover from earlier mistakes, and take long sequences of small steps that ultimately create or transform complex logic. This approach often allows the agent to reliably perform edits that would be impossible to express in a single instruction.
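The loop can be sketched abstractly. Here `plan_fn` and `apply_fn` stand in for the model call and the workspace update, and the iteration cap mirrors the bounded multi-step behavior described above; none of these names come from the project's code:

```python
MAX_ITERATIONS = 10  # assumption: some fixed cap on agent steps

def run_agent(goal, workspace, plan_fn, apply_fn):
    """Plan, apply one operation, reread the workspace, repeat until done."""
    for _ in range(MAX_ITERATIONS):
        ops = plan_fn(goal, workspace)           # model proposes next operations
        if not ops:
            break                                # no ops left: goal complete
        workspace = apply_fn(workspace, ops[0])  # apply ONE op, then reread state
    return workspace

# Toy demo: the "plan" is to append letters until the goal is spelled out.
goal = "abc"
plan = lambda g, ws: [g[len(ws)]] if len(ws) < len(g) else []
apply_op = lambda ws, op: ws + op
print(run_agent(goal, "", plan, apply_op))  # abc
```

The key design choice is re-planning against the *current* state after every single operation, which is what lets the agent recover when an earlier step changed the layout unexpectedly.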
94
 
95
+ MCP Blockly includes an error-catching layer that lets the AI correct its own mistakes while editing your workspace. If the assistant tries to place a block where it can't go, tries to use a tool incorrectly, or writes incorrect commands, the system returns a structured error that the agent reads and adapts to in the next step. It can retry operations, adjust its plan, and repair the workspace without always requiring the user to intervene. This allows multi-step goals to complete even when the initial attempt wasn't perfect.
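One way to sketch the structured-error idea: a failed operation returns machine-readable details instead of raising, so the agent can read the hint and retry. All names here are hypothetical:

```python
def try_place_block(workspace, block, slot):
    """Attempt to place a block; on failure, return a structured, readable error."""
    if slot not in workspace["open_slots"]:
        return {"ok": False,
                "error": f"slot '{slot}' is not a valid target",
                "valid_slots": workspace["open_slots"]}  # hint for the retry
    workspace["blocks"][slot] = block
    return {"ok": True}

ws = {"open_slots": ["do", "else"], "blocks": {}}
result = try_place_block(ws, "print_block", "condition")
if not result["ok"]:
    # The agent retries using the structured hint instead of giving up.
    result = try_place_block(ws, "print_block", result["valid_slots"][0])
print(result, ws["blocks"])  # {'ok': True} {'do': 'print_block'}
```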
96
 
97
+ When deployment happens, the latest generated Python code is packaged with its dependencies and uploaded to Hugging Face Spaces. The system waits for the Space to build (typically 1-2 minutes), then registers it as a live MCP server. From that point, the AI can call your deployed Gradio 6 MCP server directly with real data, getting results from the production version rather than the local one.
project/chat.py CHANGED
@@ -12,6 +12,7 @@ import json
12
  import uuid
13
  import time
14
  from colorama import Fore, Style
 
15
  from huggingface_hub import HfApi
16
  from collections import defaultdict
17
  from typing import Dict, Any
@@ -71,7 +72,7 @@ def wait_for_result(request_id, request_type, session_id, timeout=8, id_field='r
71
  start_time = time.time()
72
  check_interval = 0.05
73
  results_buffer = [] # Buffer for results we read but don't match
74
- queue = results_queues[session_id]
75
 
76
  while time.time() - start_time < timeout:
77
  # Check if we have buffered results that match
@@ -83,13 +84,13 @@ def wait_for_result(request_id, request_type, session_id, timeout=8, id_field='r
83
 
84
  # Try to get a new result from queue
85
  try:
86
- result = queue.get_nowait()
87
  # Check if this is our result
88
  if (result.get(id_field) == request_id and
89
  result.get('request_type') == request_type):
90
  # Put back any buffered results we collected
91
  for buffered in results_buffer:
92
- queue.put(buffered)
93
  results_buffer = []
94
  return result
95
  else:
@@ -102,7 +103,7 @@ def wait_for_result(request_id, request_type, session_id, timeout=8, id_field='r
102
 
103
  # Timeout - put back any buffered results
104
  for buffered in results_buffer:
105
- queue.put(buffered)
106
 
107
  raise TimeoutError(f"No response received for {request_type} request {request_id} after {timeout} seconds")
108
 
@@ -270,6 +271,12 @@ def replace_block(block_id, command, session_id):
270
  session_id = require_session_id(session_id)
271
  print(f"[REPLACE REQUEST] Attempting to replace block {block_id} with: {command}")
272
 
 
 
 
 
 
 
273
  # Generate a unique request ID
274
  request_id = str(uuid.uuid4())
275
 
@@ -300,9 +307,28 @@ def replace_block(block_id, command, session_id):
300
 
301
  # Unified Server-Sent Events endpoint for all workspace operations
302
  @app.get("/unified_stream")
303
- async def unified_stream(session_id: str = None):
304
- session_id = require_session_id(session_id)
305
- print(f"[UNIFIED STREAM] Connected with session_id: {session_id}")
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
306
 
307
  async def clear_sent_request(sent_requests, request_key, delay):
308
  """Clear request_key from sent_requests after delay seconds"""
@@ -365,15 +391,15 @@ async def unified_stream(session_id: str = None):
365
  @app.get("/results_stream")
366
  async def results_stream(session_id: str = None):
367
  session_id = require_session_id(session_id)
368
- print(f"[RESULTS STREAM] Connected with session_id: {session_id}")
369
 
370
  async def event_generator():
371
- queue = results_queues[session_id]
372
  while True:
373
  try:
374
  # Check if there are any results to send
375
- if not queue.empty():
376
- result_data = queue.get_nowait()
377
  yield f"data: {json.dumps(result_data)}\n\n"
378
  else:
379
  # Send a heartbeat every 30 seconds to keep connection alive
@@ -406,18 +432,18 @@ async def request_result(request: Request):
406
  block_id = data.get("block_id")
407
  success = data.get("success")
408
  error = data.get("error")
409
- print(f"[RESULT RECEIVED] type={request_type}, block_id={block_id}, success={success}, error={redact_secrets(str(error))}, session_id={session_id}")
410
  elif request_type == "variable":
411
  request_id = data.get("request_id")
412
  variable_id = data.get("variable_id")
413
  success = data.get("success")
414
  error = data.get("error")
415
- print(f"[RESULT RECEIVED] type={request_type}, request_id={request_id}, success={success}, error={redact_secrets(str(error))}, variable_id={variable_id}, session_id={session_id}")
416
  elif request_type in ("create", "replace", "edit_mcp"):
417
  request_id = data.get("request_id")
418
  success = data.get("success")
419
  error = data.get("error")
420
- print(f"[RESULT RECEIVED] type={request_type}, request_id={request_id}, success={success}, error={redact_secrets(str(error))}, session_id={session_id}")
421
 
422
  # Put directly in per-session results queue
423
  results_queues[session_id].put(data)
@@ -527,7 +553,7 @@ The tool has been automatically deployed to Hugging Face Spaces and is ready to
527
  session_deploy_state[session_id]["deployment_message"] = (
528
  "Your MCP tool is being built on Hugging Face Spaces. This usually takes 1-2 minutes. Once it's ready, you'll be able to use the MCP tools defined in your blocks."
529
  )
530
- print(f"[MCP] Registered MCP server (session {session_id}): {space_url}")
531
 
532
  return f"[TOOL] Successfully deployed to Hugging Face Space!\n\n**Space URL:** {space_url}"
533
 
@@ -538,9 +564,25 @@ The tool has been automatically deployed to Hugging Face Spaces and is ready to
538
  return f"[DEPLOY ERROR] Failed to deploy: {str(e)}"
539
 
540
  def create_gradio_interface():
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
541
  # Hardcoded system prompt
542
 
543
- SYSTEM_PROMPT = f"""You are an AI assistant that helps users build **MCP servers** using Blockly blocks.
544
 
545
  You'll receive the workspace state in this format:
546
  `↿ blockId ↾ block_name(inputs(input_name: value))`
@@ -750,6 +792,21 @@ def create_gradio_interface():
750
 
751
  ---
752
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
753
  ## REQUIRED PLANNING PHASE BEFORE ANY TOOL CALL
754
 
755
  Before creating or deleting any blocks, always begin with a *Planning Phase*:
@@ -785,7 +842,11 @@ def create_gradio_interface():
785
 
786
  5. Perform the actions in order without asking for approval or asking to wait for intermediate results.
787
 
788
- 6. Before stopping, you must confirm that every single output slot of the MCP block is filled. You must explicitly confirm this and if not all output slots are filled in, you must do so immediately."""
 
 
 
 
789
  tools = [
790
  {
791
  "type": "function",
@@ -948,6 +1009,10 @@ def create_gradio_interface():
948
  if request and not hf_token:
949
  hf_token = request.headers.get("x-hf-key") or request.cookies.get("mcp_hf_key")
950
 
 
 
 
 
951
  if not openai_key:
952
  yield "OpenAI API key not configured. Please set it in File > Keys in the Blockly interface."
953
  return
@@ -963,11 +1028,44 @@ def create_gradio_interface():
963
  context = session_chat_state.get(session_id, {}).get("code", "")
964
  vars = session_chat_state.get(session_id, {}).get("vars", "")
965
 
966
- # Convert history to OpenAI format
967
  input_items = []
968
- for human, ai in history:
969
- input_items.append({"role": "user", "content": human})
970
- input_items.append({"role": "assistant", "content": ai})
 
971
 
972
  # Build instructions
973
  instructions = SYSTEM_PROMPT
@@ -1186,7 +1284,6 @@ def create_gradio_interface():
1186
  placement_type == "input" and
1187
  input_name and
1188
  input_name.startswith("R")):
1189
- is_first_output_attempt = True
1190
  # Mark that we've attempted an output block in this conversation
1191
  first_output_block_attempted = True
1192
  # Return warning instead of creating the block
@@ -1201,7 +1298,7 @@ def create_gradio_interface():
1201
  print(Fore.YELLOW + f"Agent created block with command `{command}`, type: {placement_type}, blockID: `{blockID}`." + Style.RESET_ALL)
1202
  if input_name:
1203
  print(Fore.YELLOW + f" Input name: {input_name}" + Style.RESET_ALL)
1204
- tool_result = create_block(command, blockID, placement_type, input_name)
1205
  result_label = "Create Operation"
1206
 
1207
  elif function_name == "create_variable":
@@ -1249,7 +1346,7 @@ def create_gradio_interface():
1249
  # Tell model to respond to tool result
1250
  current_prompt = "The tool has been executed with the result shown above. Please respond appropriately."
1251
 
1252
- continue # Continue the main loop
1253
 
1254
  else:
1255
  if ai_response:
@@ -1259,7 +1356,7 @@ def create_gradio_interface():
1259
 
1260
  yield accumulated_response
1261
  break
1262
-
1263
  except Exception as e:
1264
  if accumulated_response:
1265
  yield f"{accumulated_response}\n\nError in iteration {current_iteration}: {str(e)}"
@@ -1273,9 +1370,11 @@ def create_gradio_interface():
1273
  yield accumulated_response
1274
 
1275
 
1276
- # Attach to Gradio ChatInterface
1277
  demo = gr.ChatInterface(
1278
  fn=chat_with_context,
 
 
1279
  )
1280
 
1281
  return demo
@@ -1287,4 +1386,4 @@ def get_chat_gradio_interface():
1287
 
1288
  if __name__ == "__main__":
1289
  demo = create_gradio_interface()
1290
- app = gr.mount_gradio_app(app, demo, path="/")
 
 import uuid
 import time
 from colorama import Fore, Style
+import logging
 from huggingface_hub import HfApi
 from collections import defaultdict
 from typing import Dict, Any

     start_time = time.time()
     check_interval = 0.05
     results_buffer = []  # Buffer for results we read but don't match
+    results_queue = results_queues[session_id]

     while time.time() - start_time < timeout:
         # Check if we have buffered results that match

         # Try to get a new result from queue
         try:
+            result = results_queue.get_nowait()
             # Check if this is our result
             if (result.get(id_field) == request_id and
                 result.get('request_type') == request_type):
                 # Put back any buffered results we collected
                 for buffered in results_buffer:
+                    results_queue.put(buffered)
                 results_buffer = []
                 return result
             else:

     # Timeout - put back any buffered results
     for buffered in results_buffer:
+        results_queue.put(buffered)

     raise TimeoutError(f"No response received for {request_type} request {request_id} after {timeout} seconds")
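The buffering pattern in the hunk above (drain a shared queue, set aside results that belong to other requests, and put them back once the matching one is found) can be exercised on its own. A minimal standalone sketch — the function name, defaults, and `id_field` parameter mirror the diff but are illustrative:

```python
import queue
import time

def wait_for_result(results_queue, request_id, request_type, timeout=5.0, id_field="request_id"):
    """Poll the queue, buffering results that belong to other requests."""
    start_time = time.time()
    results_buffer = []
    while time.time() - start_time < timeout:
        try:
            result = results_queue.get_nowait()
        except queue.Empty:
            time.sleep(0.05)  # check_interval
            continue
        if result.get(id_field) == request_id and result.get("request_type") == request_type:
            # Return unrelated results to the queue for other waiters
            for buffered in results_buffer:
                results_queue.put(buffered)
            return result
        results_buffer.append(result)
    # Timeout: put back anything we took out
    for buffered in results_buffer:
        results_queue.put(buffered)
    raise TimeoutError(f"No response received for {request_type} request {request_id} after {timeout} seconds")
```

Putting buffered items back on both the success and timeout paths is what keeps concurrent waiters on the same per-session queue from losing each other's results.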
 
     session_id = require_session_id(session_id)
     print(f"[REPLACE REQUEST] Attempting to replace block {block_id} with: {command}")

+    # Check if trying to replace create_mcp block
+    if command.strip().startswith("create_mcp("):
+        error_msg = "[TOOL] Cannot replace the create_mcp block. The main MCP block cannot be replaced. You can only edit its inputs/outputs using the edit_mcp tool."
+        print(f"[REPLACE BLOCKED] Attempt to replace create_mcp block blocked: {error_msg}")
+        return error_msg
+
     # Generate a unique request ID
     request_id = str(uuid.uuid4())

 # Unified Server-Sent Events endpoint for all workspace operations
 @app.get("/unified_stream")
+async def unified_stream(session_id: str = None, request: Request = None):
+    # Diagnostics before enforcing session
+    q_sid = None
+    c_sid = None
+    hdr_sid = None
+    root_path = None
+    try:
+        if request:
+            q_sid = request.query_params.get("session_id")
+            c_sid = request.cookies.get("mcp_blockly_session_id")
+            hdr_sid = request.headers.get("x-session-id") or request.headers.get("session-id")
+            root_path = request.scope.get("root_path")
+    except Exception:
+        pass
+
+    # Prefer the explicit argument but log everything we saw
+    logging.getLogger("chat_unified_stream").info(
+        f"[unified_stream] arg_sid={session_id}, query_sid={q_sid}, cookie_sid={c_sid}, header_sid={hdr_sid}, root_path={root_path}"
+    )
+
+    session_id = require_session_id(session_id or q_sid or c_sid or hdr_sid)
+    print("[UNIFIED STREAM] Client connected")

     async def clear_sent_request(sent_requests, request_key, delay):
         """Clear request_key from sent_requests after delay seconds"""
 
 @app.get("/results_stream")
 async def results_stream(session_id: str = None):
     session_id = require_session_id(session_id)
+    print("[RESULTS STREAM] Client connected")

     async def event_generator():
+        results_queue = results_queues[session_id]
         while True:
             try:
                 # Check if there are any results to send
+                if not results_queue.empty():
+                    result_data = results_queue.get_nowait()
                     yield f"data: {json.dumps(result_data)}\n\n"
                 else:
                     # Send a heartbeat every 30 seconds to keep connection alive

         block_id = data.get("block_id")
         success = data.get("success")
         error = data.get("error")
+        print(f"[RESULT RECEIVED] type={request_type}, block_id={block_id}, success={success}, error={redact_secrets(str(error))}")
     elif request_type == "variable":
         request_id = data.get("request_id")
         variable_id = data.get("variable_id")
         success = data.get("success")
         error = data.get("error")
+        print(f"[RESULT RECEIVED] type={request_type}, request_id={request_id}, success={success}, error={redact_secrets(str(error))}, variable_id={variable_id}")
     elif request_type in ("create", "replace", "edit_mcp"):
         request_id = data.get("request_id")
         success = data.get("success")
         error = data.get("error")
+        print(f"[RESULT RECEIVED] type={request_type}, request_id={request_id}, success={success}, error={redact_secrets(str(error))}")

     # Put directly in per-session results queue
     results_queues[session_id].put(data)
 
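The `results_stream` generator above frames each queued result as a Server-Sent Events message. The framing is just `data: <json>\n\n`; a minimal sketch (the heartbeat payload is an assumption — the diff only says a heartbeat is sent, not its exact bytes):

```python
import json

def sse_event(payload: dict) -> str:
    """Frame a dict as a Server-Sent Events message (what the generator yields)."""
    return f"data: {json.dumps(payload)}\n\n"

def sse_heartbeat() -> str:
    # Lines starting with a colon are comments, ignored by EventSource clients,
    # so they keep the connection alive without delivering an event.
    return ": heartbeat\n\n"
```

The blank line (`\n\n`) terminates each event; without it, EventSource clients buffer the data indefinitely.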
     session_deploy_state[session_id]["deployment_message"] = (
         "Your MCP tool is being built on Hugging Face Spaces. This usually takes 1-2 minutes. Once it's ready, you'll be able to use the MCP tools defined in your blocks."
     )
+    print(f"[MCP] Registered MCP server at {space_url}")

     return f"[TOOL] Successfully deployed to Hugging Face Space!\n\n**Space URL:** {space_url}"

     return f"[DEPLOY ERROR] Failed to deploy: {str(e)}"

 def create_gradio_interface():
+    # Define suggested prompts (emoji + short label -> longer full prompt)
+    suggested_prompts = [
+        {
+            "label": "🤔 Explain features",
+            "value": "Hi! I'm new here. Can you explain what things you can do for me?"
+        },
+        {
+            "label": "💡 Brainstorm tools",
+            "value": "I'm not sure what I should build... do you have any ideas?"
+        },
+        {
+            "label": "🐛 Debug issue",
+            "value": "My code isn't working as expected. Why do you think that is?"
+        }
+    ]
+
     # Hardcoded system prompt

+    SYSTEM_PROMPT = f"""You are an AI assistant that helps users build **Model Context Protocol (MCP) servers** using Blockly blocks.

 You'll receive the workspace state in this format:
 `↿ blockId ↾ block_name(inputs(input_name: value))`
 

 ---

+## Important info
+
+Sometimes the user will ask questions about the environment you're in:
+- Creating a new project, opening a project, downloading the project code (the Python), and downloading the project (the blocks) are all under the File dropdown
+- Right-click on the workspace, or use the Edit dropdown, to undo, redo, and clean up the workspace
+- Under the Examples dropdown there are a weather API demo and a fact checker MCP demo
+- The Testing tab on the left side is for testing your MCP live with inputs and getting outputs
+- The AI Chat tab (you) is toggleable (switch between Testing and AI Chat)
+
+If you don't know, just say so. But when these answers cover the question, use them naturally.
+
+Additionally, never suggest project ideas that use APIs.
+
+---
+
 ## REQUIRED PLANNING PHASE BEFORE ANY TOOL CALL

 Before creating or deleting any blocks, always begin with a *Planning Phase*:

 5. Perform the actions in order without asking for approval or asking to wait for intermediate results.

+6. Before stopping, confirm that every single output slot of the MCP block is filled. You must state this check explicitly, and if any output slots are empty, fill them immediately.
+
+---
+
+When coding, NEVER assume or guess ANY APIs. If you need API details, the user must provide the exact API info, and you must tell them exactly how to get it. Don't guess, ever. Additionally, never suggest project ideas that use APIs."""
     tools = [
         {
             "type": "function",
 
     if request and not hf_token:
         hf_token = request.headers.get("x-hf-key") or request.cookies.get("mcp_hf_key")

+    # TEMPORARY FREE API KEY
+    if not openai_key:
+        openai_key = os.getenv("OPENAI_API_KEY")
+
     if not openai_key:
         yield "OpenAI API key not configured. Please set it in File > Keys in the Blockly interface."
         return

     context = session_chat_state.get(session_id, {}).get("code", "")
     vars = session_chat_state.get(session_id, {}).get("vars", "")

+    # Convert history (supports legacy tuples and newer ChatMessage/dict formats)
     input_items = []
+    if history:
+        for item in history:
+            # Gradio 6 ChatMessage-like objects (duck-typed)
+            if hasattr(item, "role") and hasattr(item, "content"):
+                # item.content is a list of parts; flatten text parts
+                texts = []
+                for part in item.content or []:
+                    if isinstance(part, dict) and "text" in part:
+                        texts.append(part["text"])
+                    else:
+                        texts.append(getattr(part, "text", str(part)))
+                content = "\n".join(t for t in texts if t)
+                if content:
+                    input_items.append({"role": item.role, "content": content})
+                continue
+
+            # Gradio serialized dict form
+            if isinstance(item, dict) and "role" in item and "content" in item:
+                texts = []
+                for part in item.get("content") or []:
+                    if isinstance(part, dict) and "text" in part:
+                        texts.append(part["text"])
+                    else:
+                        texts.append(str(part))
+                content = "\n".join(t for t in texts if t)
+                if content:
+                    input_items.append({"role": item["role"], "content": content})
+                continue
+
+            # Legacy tuple/list (user, assistant)
+            if isinstance(item, (tuple, list)) and len(item) >= 2:
+                human, ai = item[0], item[1]
+                if human:
+                    input_items.append({"role": "user", "content": human})
+                if ai:
+                    input_items.append({"role": "assistant", "content": ai})

     # Build instructions
     instructions = SYSTEM_PROMPT
 
         placement_type == "input" and
         input_name and
         input_name.startswith("R")):

         # Mark that we've attempted an output block in this conversation
         first_output_block_attempted = True
         # Return warning instead of creating the block

     print(Fore.YELLOW + f"Agent created block with command `{command}`, type: {placement_type}, blockID: `{blockID}`." + Style.RESET_ALL)
     if input_name:
         print(Fore.YELLOW + f" Input name: {input_name}" + Style.RESET_ALL)
+    tool_result = create_block(command, blockID, placement_type, input_name, session_id=session_id)
     result_label = "Create Operation"

     elif function_name == "create_variable":

     # Tell model to respond to tool result
     current_prompt = "The tool has been executed with the result shown above. Please respond appropriately."

+    continue

     else:
         if ai_response:

         yield accumulated_response
         break
+
     except Exception as e:
         if accumulated_response:
             yield f"{accumulated_response}\n\nError in iteration {current_iteration}: {str(e)}"

     yield accumulated_response


+    # Attach to Gradio ChatInterface with suggested prompts
     demo = gr.ChatInterface(
         fn=chat_with_context,
+        examples=[prompt["value"] for prompt in suggested_prompts],
+        example_labels=[prompt["label"] for prompt in suggested_prompts],
     )

     return demo

 if __name__ == "__main__":
     demo = create_gradio_interface()
+    app = gr.mount_gradio_app(app, demo, path="/", show_error=False)
project/src/generators/python.js CHANGED
@@ -108,9 +108,9 @@ forBlock['create_mcp'] = function (block, generator) {

   // Create the main function definition
   if (typedInputs.length > 0) {
-    code += `def create_mcp(${typedInputs.join(', ')}):\n out_amt = ${returnValues.length}\n out_names = ${JSON.stringify(block.outputNames_ || [])}\n out_types = ${JSON.stringify(block.outputTypes_ || [])}\n\n${body}${returnStatement}\n`;
+    code += `def create_mcp(${typedInputs.join(', ')}):\n in_types = ${JSON.stringify(block.inputTypes_ || [])}\n out_amt = ${returnValues.length}\n out_names = ${JSON.stringify(block.outputNames_ || [])}\n out_types = ${JSON.stringify(block.outputTypes_ || [])}\n\n${body}${returnStatement}\n`;
   } else {
-    code += `def create_mcp():\n out_amt = ${returnValues.length}\n out_names = ${JSON.stringify(block.outputNames_ || [])}\n out_types = ${JSON.stringify(block.outputTypes_ || [])}\n\n${body || ''}${returnStatement}`;
+    code += `def create_mcp():\n in_types = ${JSON.stringify(block.inputTypes_ || [])}\n out_amt = ${returnValues.length}\n out_names = ${JSON.stringify(block.outputNames_ || [])}\n out_types = ${JSON.stringify(block.outputTypes_ || [])}\n\n${body || ''}${returnStatement}`;
   }

   // Map Python types to Gradio components for inputs
@@ -402,4 +402,4 @@ forBlock['cast_as'] = function (block, generator) {
   // Generate code to cast value to the specified type
   const code = `${type}(${value})`;
   return [code, Order.FUNCTION_CALL];
-};
+};
project/src/index.html CHANGED
@@ -4,6 +4,23 @@
 <head>
   <meta charset="utf-8" />
   <title>MCP Blockly</title>
 </head>

 <body>
@@ -59,14 +76,24 @@
   </div>
   <div id="developmentTab" class="tabContent active">
     <div id="chatContainer">
-      <iframe src="/gradio-test" style="width: 100%; height: 100%; border: none;"></iframe>
     </div>
     <div class="verticalResizer"></div>
     <pre id="generatedCode"><code></code></pre>
   </div>
   <div id="aichatTab" class="tabContent">
     <div id="gradioContainer">
-      <iframe src="/gradio-chat" style="width: 100%; height: 100%; border: none;"></iframe>
     </div>
     <pre id="aichatCode" style="position: absolute; left: -9999px; width: 1px; height: 1px;"><code></code></pre>
   </div>
@@ -192,24 +219,71 @@
   });
   </script>

-  <!-- Keys Modal -->
-  <div id="apiKeyModal"
     style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 9999; align-items: center; justify-content: center;">
     <div
-      style="background: white; padding: 30px; border-radius: 10px; width: 90%; max-width: 500px; box-shadow: 0 10px 30px rgba(0,0,0,0.2);">
-      <h2 style="margin-top: 0; margin-bottom: 20px; color: #333;">API Keys</h2>

-      <label for="apiKeyInput"
        style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">OpenAI API
        Key:</label>
-      <input type="password" id="apiKeyInput"
        style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;"
        placeholder="sk-...">
-      <p style="margin: 5px 0 15px 0; color: #999; font-size: 12px;">For the AI assistant and blocks' model calls.</p>

-      <label for="hfKeyInput"
        style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">Hugging Face API
        Key:</label>
      <input type="password" id="hfKeyInput"
        style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;"
        placeholder="hf_...">
@@ -224,6 +298,17 @@
     </div>
   </div>
 </div>
 </body>

-</html>
 
 <head>
   <meta charset="utf-8" />
   <title>MCP Blockly</title>
+  <style>
+    @keyframes pulseOutline {
+      0% {
+        box-shadow: 0 0 0 0 rgba(255, 255, 255, 1);
+      }
+      75% {
+        box-shadow: 0 0 0 10px rgba(255, 255, 255, 0);
+      }
+      100% {
+        box-shadow: 0 0 0 0 rgba(255, 255, 255, 0);
+      }
+    }
+
+    .flash-button {
+      animation: pulseOutline 1s ease-out infinite;
+    }
+  </style>
 </head>

 <body>

   </div>
   <div id="developmentTab" class="tabContent active">
     <div id="chatContainer">
+      <iframe
+        id="gradioTestFrame"
+        data-base-src="/gradio-test"
+        src="about:blank"
+        style="width: 100%; height: 100%; border: none;"
+      ></iframe>
     </div>
     <div class="verticalResizer"></div>
     <pre id="generatedCode"><code></code></pre>
   </div>
   <div id="aichatTab" class="tabContent">
     <div id="gradioContainer">
+      <iframe
+        id="gradioChatFrame"
+        data-base-src="/gradio-chat"
+        src="about:blank"
+        style="width: 100%; height: 100%; border: none;"
+      ></iframe>
     </div>
     <pre id="aichatCode" style="position: absolute; left: -9999px; width: 1px; height: 1px;"><code></code></pre>
   </div>

   });
   </script>

+  <!-- Welcome Modal -->
+  <div id="welcomeModal"
     style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 9999; align-items: center; justify-content: center;">
     <div
+      style="background: white; padding: 30px; border-radius: 10px; width: 90%; max-width: 700px; box-shadow: 0 10px 30px rgba(0,0,0,0.2); max-height: 90vh; overflow-y: auto; position: relative;">
+      <h2 style="margin-top: 0; margin-bottom: 10px; color: #333;">Welcome to MCP Blockly</h2>
+      <p style="margin: 0 0 20px 0; color: #666; font-size: 14px;">Get started with visual programming for AI tools.</p>
+
+      <!-- YouTube Video Embed -->
+      <div style="margin-bottom: 25px;">
+        <iframe width="100%" height="315" src="https://www.youtube.com/embed/5oj-2uIZpb0"
+          style="border: none; border-radius: 5px;"
+          allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
+          allowfullscreen></iframe>
+      </div>
+
+      <!-- API Keys Section -->
+      <h3 style="margin-top: 25px; margin-bottom: 15px; color: #333; font-size: 16px; font-weight: 600;">🎉 Free API</h3>
+      <p style="margin: 5px 0 15px 0; color: #27ae60; font-size: 12px; font-weight: 500;">Using a free API key for OpenAI. Get started immediately!</p>

+      <!-- TEMPORARY FREE API KEY -->
+      <!-- <label for="welcomeApiKeyInput"
        style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">OpenAI API
        Key:</label>
+      <input type="password" id="welcomeApiKeyInput"
        style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;"
        placeholder="sk-...">
+      <p style="margin: 5px 0 20px 0; color: #999; font-size: 12px;">For AI-powered features and code generation.</p> -->

+      <label for="welcomeHfKeyInput"
        style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">Hugging Face API
        Key:</label>
+      <input type="password" id="welcomeHfKeyInput"
+        style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;"
+        placeholder="hf_...">
+      <p style="margin: 5px 0 20px 0; color: #999; font-size: 12px;">For deploying your MCP server.</p>
+
+      <p style="color: #999; font-size: 12px; margin-bottom: 20px;">Your API keys will be stored securely for this session.</p>
+
+      <div style="display: flex; justify-content: flex-end; align-items: center; gap: 15px;">
+        <div style="display: flex; align-items: center; gap: 6px;">
+          <label for="dontShowWelcomeAgain" style="color: #666; font-size: 12px; cursor: pointer; margin: 0;">Don't show me this again</label>
+          <input type="checkbox" id="dontShowWelcomeAgain" style="cursor: pointer; width: 16px; height: 16px;">
+        </div>
+        <div style="display: flex; gap: 10px;">
+          <button id="skipTutorialButton"
+            style="padding: 10px 20px; background: #e5e7eb; border: none; border-radius: 5px; cursor: pointer; font-size: 14px;">Skip Tutorial</button>
+          <button id="saveWelcomeApiKey"
+            style="padding: 10px 20px; background: #6366f1; color: white; border: none; border-radius: 5px; cursor: pointer; font-size: 14px;">Start Tutorial</button>
+        </div>
+      </div>
+    </div>
+  </div>
+
+  <!-- Keys Modal -->
+  <div id="apiKeyModal"
+    style="display: none; position: fixed; top: 0; left: 0; width: 100%; height: 100%; background: rgba(0,0,0,0.5); z-index: 9999; align-items: center; justify-content: center;">
+    <div
+      style="background: white; padding: 30px; border-radius: 10px; width: 90%; max-width: 500px; box-shadow: 0 10px 30px rgba(0,0,0,0.2);">
+      <h2 style="margin-top: 0; margin-bottom: 20px; color: #333;">API Keys</h2>
+      <p style="margin: 0 0 20px 0; color: #27ae60; font-size: 12px; font-weight: 500;">🎉 Using a free OpenAI API key. Only the Hugging Face key below is needed for deploying.</p>
+
+      <label for="hfKeyInput"
+        style="display: block; margin-bottom: 10px; color: #666; font-size: 14px; font-weight: 500;">Hugging Face API
+        Key (suggested):</label>
      <input type="password" id="hfKeyInput"
        style="width: 100%; padding: 10px; border: 1px solid #ddd; border-radius: 5px; font-size: 14px; box-sizing: border-box; margin-bottom: 5px;"
        placeholder="hf_...">

     </div>
   </div>
 </div>
+
+  <!-- Tutorial Popups -->
+  <div id="tutorialPopup"
+    style="display: none; position: fixed; background: white; padding: 20px; border-radius: 8px; box-shadow: 0 4px 20px rgba(0,0,0,0.3); z-index: 10000; max-width: 280px; border: 2px solid #6366f1; top: 0; left: 0;">
+    <h3 id="tutorialTitle" style="margin: 0 0 10px 0; color: #333; font-size: 14px; font-weight: 600;"></h3>
+    <p id="tutorialBody" style="margin: 0 0 15px 0; color: #666; font-size: 12px; line-height: 1.4;"></p>
+    <div style="display: flex; justify-content: flex-end;">
+      <button id="tutorialSkipButton"
+        style="padding: 6px 12px; background: #e5e7eb; border: none; border-radius: 4px; cursor: pointer; font-size: 12px;">Exit Tutorial</button>
+    </div>
+  </div>
 </body>

+</html>
project/src/index.js CHANGED
@@ -9,6 +9,22 @@ import '@blockly/toolbox-search';
 import DarkTheme from '@blockly/theme-dark';
 import './index.css';

 // Session ID Handling
 function getOrCreateSessionId() {
   const STORAGE_KEY = "mcp_blockly_session_id";
@@ -30,7 +46,30 @@ const sessionId = getOrCreateSessionId();
 window.sessionId = sessionId;
 console.log("[SESSION] Using sessionId:", sessionId);
 // Share session id with other mounted apps (e.g., Gradio tester) via cookie
-document.cookie = `mcp_blockly_session_id=${sessionId}; path=/; SameSite=Lax`;

 // Register the blocks and generator with Blockly
 Blockly.common.defineBlocks(blocks);
@@ -144,7 +183,7 @@ downloadCodeButton.addEventListener("click", () => {
 // Settings button and Keys Modal
 const settingsButton = document.querySelector('#settingsButton');
 const apiKeyModal = document.querySelector('#apiKeyModal');
-const apiKeyInput = document.querySelector('#apiKeyInput');
 const hfKeyInput = document.querySelector('#hfKeyInput');
 const saveApiKeyButton = document.querySelector('#saveApiKey');
 const cancelApiKeyButton = document.querySelector('#cancelApiKey');
@@ -155,18 +194,25 @@ const HF_KEY_STORAGE = "mcp_blockly_hf_key";
 const loadStoredKeys = () => {
   const storedOpenAI = window.localStorage.getItem(OPENAI_KEY_STORAGE) || "";
   const storedHF = window.localStorage.getItem(HF_KEY_STORAGE) || "";
-  apiKeyInput.value = storedOpenAI;
-  hfKeyInput.value = storedHF;
 };

 settingsButton.addEventListener("click", () => {
-  loadStoredKeys();
   apiKeyModal.style.display = 'flex';
 });

 saveApiKeyButton.addEventListener("click", () => {
-  const apiKey = apiKeyInput.value.trim();
-  const hfKey = hfKeyInput.value.trim();

   // Validate OpenAI key format if provided
   if (apiKey && (!apiKey.startsWith("sk-") || apiKey.length < 40)) {
@@ -185,7 +231,7 @@ saveApiKeyButton.addEventListener("click", () => {
   window.localStorage.setItem(HF_KEY_STORAGE, hfKey);

   // Share keys with backend via cookies (per-request, not stored server-side)
-  const cookieOpts = "path=/; SameSite=Lax";
   if (apiKey) {
     document.cookie = `mcp_openai_key=${encodeURIComponent(apiKey)}; ${cookieOpts}`;
   } else {
@@ -205,11 +251,488 @@ cancelApiKeyButton.addEventListener("click", () => {
   apiKeyModal.style.display = 'none';
 });

-const weatherText = '{"workspaceComments":[{"height":80,"width":477,"id":"XI5[EHp-Ow+kinXf6n5y","x":52.674375,"y":-52.760000000000005,"text":"Gets temperature of location with a latitude and a longitude."}],"blocks":{"languageVersion":0,"blocks":[{"type":"create_mcp","id":")N.HEG1x]Z/,k#TeWr,S","x":50,"y":50,"deletable":false,"extraState":{"inputCount":2,"inputNames":["latitude","longitude"],"inputTypes":["string","string"],"outputCount":1,"outputNames":["output0"],"outputTypes":["string"],"toolCount":0},"inputs":{"X0":{"block":{"type":"input_reference_latitude","id":"]3mj!y}qfRt+!okheU7L","deletable":false,"extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"latitude"}}},"X1":{"block":{"type":"input_reference_longitude","id":"Do/{HFNGSd.!;POiKS?D","deletable":false,"extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"longitude"}}},"R0":{"block":{"type":"in_json","id":"R|j?_8s^H{l0;UZ-oQt3","inputs":{"NAME":{"block":{"type":"text","id":"@Z+@U^@8c0gQYj}La`PY","fields":{"TEXT":"temperature_2m"}}},"JSON":{"block":{"type":"in_json","id":"X=M,R1@7bRjJVZIPi[qD","inputs":{"NAME":{"block":{"type":"text","id":"OMr~`#kG$3@k`YPDHbzH","fields":{"TEXT":"current"}}},"JSON":{"block":{"type":"call_api","id":"^(.vyM.yni08S~c1EBm=","fields":{"METHOD":"GET"},"inputs":{"URL":{"shadow":{"type":"text","id":"}.T;_U_OsRS)B_y09p%{","fields":{"TEXT":""}},"block":{"type":"text_replace","id":"OwH9uERJPTGQG!UER#ch","inputs":{"FROM":{"shadow":{"type":"text","id":"ya05#^7%UbUeXX#eDSmH","fields":{"TEXT":"{latitude}"}},"block":{"type":"text","id":"6CX#+wo9^x+vZ`LRt5ms","fields":{"TEXT":"{latitude}"}}},"TO":{"shadow":{"type":"text","id":":_ZloQuh9c-MNf-U]!k5","fields":{"TEXT":""}},"block":{"type":"input_reference_latitude","id":"?%@)3sErZ)}=#4ags#gu","extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"latitude"}}},"TEXT":{"shadow":{"type":"text","id":"w@zsP)m6:WjkUp,ln3$x","fields":{"TEXT":""}},"block":{"type":"text_replace","id":"ImNPsvzD7r^+1MJ%IirV","inputs":{"FROM":{"shadow":{"type":"text","id":"%o(3rro?WLIFpmE0#MMM","fields":{"TEXT":"{longitude}"}},"block":{"type":"text","id":"`p!s8dQ7e~?0JvofyB-{","fields":{"TEXT":"{longitude}"}}},"TO":{"shadow":{"type":"text","id":"Zpql-%oJ_sdSi|r|*er|","fields":{"TEXT":""}},"block":{"type":"input_reference_longitude","id":"WUgiJP$X+zY#f$5nhnTX","extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"longitude"}}},"TEXT":{"shadow":{"type":"text","id":",(vw$o_s7P=b4P;8]}yj","fields":{"TEXT":"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&current=temperature_2m,wind_speed_10m"}}}}}}}}}}}}}}}}}}}}]}}';
209
  weatherButton.addEventListener("click", () => {
210
  try {
 
211
  const fileContent = JSON.parse(weatherText);
212
  Blockly.serialization.workspaces.load(fileContent, ws);
 
 
 
 
 
 
 
 
 
 
 
 
 
213
  } catch (error) {
214
  console.error("Error loading weather.txt contents:", error);
215
  }
@@ -218,8 +741,22 @@ weatherButton.addEventListener("click", () => {
218
  const factText = "{\"workspaceComments\":[{\"height\":66,\"width\":575,\"id\":\"x/Z2E2Oid(4||-pQ)h*;\",\"x\":51.00000000000023,\"y\":-35.76388082917071,\"text\":\"A fact checker that uses a searching LLM to verify the validity of a claim.\"}],\"blocks\":{\"languageVersion\":0,\"blocks\":[{\"type\":\"create_mcp\",\"id\":\"yScKJD/XLhk)D}qn2TW:\",\"x\":50,\"y\":50,\"deletable\":false,\"extraState\":{\"inputCount\":1,\"inputNames\":[\"prompt\"],\"inputTypes\":[\"string\"],\"outputCount\":1,\"outputNames\":[\"result\"],\"outputTypes\":[\"string\"],\"toolCount\":0},\"inputs\":{\"X0\":{\"block\":{\"type\":\"input_reference_prompt\",\"id\":\"-r%M-[oX1]?RxxF_V(V@\",\"deletable\":false,\"extraState\":{\"ownerBlockId\":\"yScKJD/XLhk)D}qn2TW:\"},\"fields\":{\"VARNAME\":\"prompt\"}}},\"R0\":{\"block\":{\"type\":\"llm_call\",\"id\":\"m/*D8ZBx;QZlUN*aw15U\",\"fields\":{\"MODEL\":\"gpt-4o-search-preview-2025-03-11\"},\"inputs\":{\"PROMPT\":{\"block\":{\"type\":\"text_join\",\"id\":\"e@#`RVKXpIZ9__%zUK]`\",\"extraState\":{\"itemCount\":3},\"inputs\":{\"ADD0\":{\"block\":{\"type\":\"text\",\"id\":\"M3QD})k`FXiizaF,gA{9\",\"fields\":{\"TEXT\":\"Verify whether the following claim: \\\"\"}}},\"ADD1\":{\"block\":{\"type\":\"input_reference_prompt\",\"id\":\"B4.LNZ0es`RFM0Xi@SL:\",\"extraState\":{\"ownerBlockId\":\"yScKJD/XLhk)D}qn2TW:\"},\"fields\":{\"VARNAME\":\"prompt\"}}},\"ADD2\":{\"block\":{\"type\":\"text\",\"id\":\"Ng!fFR+xTMdmgWZv6Oh{\",\"fields\":{\"TEXT\":\"\\\" is true or not. Return one of the following values: \\\"True\\\", \\\"Unsure\\\", \\\"False\\\", and nothing else. You may not say anything but one of these answers no matter what.\"}}}}}}}}}}}]}}"
219
  factButton.addEventListener("click", () => {
220
  try {
 
221
  const fileContent = JSON.parse(factText);
222
  Blockly.serialization.workspaces.load(fileContent, ws);
 
 
 
 
 
 
 
 
 
 
 
 
 
223
  } catch (error) {
224
  console.error("Error loading weather.txt contents:", error);
225
  }
@@ -790,7 +1327,9 @@ function parseInputs(inputStr) {
790
 
791
  // Set up unified SSE connection for all workspace operations
792
  const setupUnifiedStream = () => {
793
- const eventSource = new EventSource(`/unified_stream?session_id=${sessionId}`);
 
 
794
  const processedRequests = new Set(); // Track processed requests
795
 
796
  eventSource.onmessage = (event) => {
@@ -894,7 +1433,7 @@ const setupUnifiedStream = () => {
894
 
895
  // Send result back to backend immediately
896
  console.log('[SSE] Sending edit MCP result:', { request_id: data.request_id, success, error });
897
- fetch('/request_result', {
898
  method: 'POST',
899
  headers: { 'Content-Type': 'application/json' },
900
  body: JSON.stringify({
@@ -1028,7 +1567,7 @@ const setupUnifiedStream = () => {
1028
 
1029
  // Send result back to backend
1030
  console.log('[SSE] Sending replace block result:', { request_id: data.request_id, success, error, block_id: blockId });
1031
- fetch('/request_result', {
1032
  method: 'POST',
1033
  headers: { 'Content-Type': 'application/json' },
1034
  body: JSON.stringify({
@@ -1077,7 +1616,7 @@ const setupUnifiedStream = () => {
1077
 
1078
  // Send result back to backend immediately
1079
  console.log('[SSE] Sending deletion result:', { block_id: data.block_id, success, error });
1080
- fetch('/request_result', {
1081
  method: 'POST',
1082
  headers: { 'Content-Type': 'application/json' },
1083
  body: JSON.stringify({
@@ -1271,7 +1810,7 @@ const setupUnifiedStream = () => {
1271
  block_id: blockId
1272
  });
1273
 
1274
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1275
  method: 'POST',
1276
  headers: { 'Content-Type': 'application/json' },
1277
  body: JSON.stringify({
@@ -1324,7 +1863,7 @@ const setupUnifiedStream = () => {
1324
  variable_id: variableId
1325
  });
1326
 
1327
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1328
  method: 'POST',
1329
  headers: { 'Content-Type': 'application/json' },
1330
  body: JSON.stringify({
@@ -1524,7 +2063,7 @@ demo.launch(mcp_server=True)
1524
  codeEl.textContent = code;
1525
  }
1526
 
1527
- fetch("/update_code", {
+ fetch(`${basePath}/update_code`, {
1528
  method: "POST",
1529
  headers: { "Content-Type": "application/json" },
1530
  body: JSON.stringify({ session_id: sessionId, code }),
@@ -1545,7 +2084,7 @@ let globalVarString = '';
1545
  // Function to check if chat backend is available
1546
  const checkChatBackend = async () => {
1547
  try {
1548
- const response = await fetch("/update_chat", {
+ const response = await fetch(`${basePath}/update_chat`, {
1549
  method: "POST",
1550
  headers: { "Content-Type": "application/json" },
1551
  body: JSON.stringify({
@@ -1579,7 +2118,7 @@ const processChatUpdateQueue = () => {
1579
  // Send chat update with retry logic
1580
  const sendChatUpdate = async (chatCode, retryCount = 0) => {
1581
  try {
1582
- const response = await fetch("/update_chat", {
+ const response = await fetch(`${basePath}/update_chat`, {
1583
  method: "POST",
1584
  headers: { "Content-Type": "application/json" },
1585
  body: JSON.stringify({
 
9
  import DarkTheme from '@blockly/theme-dark';
10
  import './index.css';
11
 
12
+ // Determine the correct base path when running behind HF proxy (/proxy/... or /spaces/...)
13
+ const getBasePath = () => {
14
+ const p = window.location.pathname || "";
15
+ if (p.startsWith("/proxy/")) {
16
+ const parts = p.split("/").filter(Boolean); // proxy/owner/space[/...]
17
+ return "/" + parts.slice(0, 3).join("/");
18
+ }
19
+ if (p.startsWith("/spaces/")) {
20
+ const parts = p.split("/").filter(Boolean); // spaces/owner/space[/...]
21
+ return "/" + parts.slice(0, 3).join("/");
22
+ }
23
+ return "";
24
+ };
25
+
26
+ const basePath = getBasePath();
27
+
28
  // Session ID Handling
29
  function getOrCreateSessionId() {
30
  const STORAGE_KEY = "mcp_blockly_session_id";
 
46
  window.sessionId = sessionId;
47
  console.log("[SESSION] Using sessionId:", sessionId);
48
  // Share session id with other mounted apps (e.g., Gradio tester) via cookie
49
+ const sameSite = window.location.protocol === "https:" ? "None" : "Lax";
50
+ const secure = window.location.protocol === "https:" ? "; Secure" : "";
51
+ const maxAge = 60 * 60 * 24 * 7; // one week
52
+ document.cookie = `mcp_blockly_session_id=${sessionId}; path=/; Max-Age=${maxAge}; SameSite=${sameSite}${secure}`;
53
+
54
+ // Ensure embedded Gradio iframes receive the session id even when third-party cookies are blocked
55
+ function attachSessionToIframes() {
56
+ const frames = [
57
+ document.getElementById("gradioTestFrame"),
58
+ document.getElementById("gradioChatFrame"),
59
+ ];
60
+
61
+ frames.forEach((frame) => {
62
+ if (!frame) return;
63
+ const baseSrc = frame.dataset.baseSrc || frame.getAttribute("src") || "";
64
+ if (!baseSrc) return;
65
+ // Prefix proxy base when iframe src is absolute-from-root
66
+ const resolvedBaseSrc = baseSrc.startsWith("/") ? `${basePath}${baseSrc}` : baseSrc;
67
+ const joiner = baseSrc.includes("?") ? "&" : "?";
68
+ frame.src = `${resolvedBaseSrc}${joiner}session_id=${encodeURIComponent(sessionId)}&__theme=dark`;
69
+ });
70
+ }
71
+
72
+ attachSessionToIframes();
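The src rewriting performed by `attachSessionToIframes` reduces to pure string logic. The sketch below isolates it (`withSession` is an illustrative name, not part of the app): root-relative URLs get the proxy base prefixed, and the query joiner depends on whether the base src already has a query string:

```javascript
// Sketch of the iframe src rewriting above, as a pure function.
const withSession = (baseSrc, basePath, sessionId) => {
  const resolved = baseSrc.startsWith("/") ? `${basePath}${baseSrc}` : baseSrc;
  const joiner = baseSrc.includes("?") ? "&" : "?";
  return `${resolved}${joiner}session_id=${encodeURIComponent(sessionId)}&__theme=dark`;
};

console.log(withSession("/gradio_test", "/proxy/owner/space", "abc 123"));
// "/proxy/owner/space/gradio_test?session_id=abc%20123&__theme=dark"
console.log(withSession("https://example.org/app?x=1", "", "s1"));
// "https://example.org/app?x=1&session_id=s1&__theme=dark"
```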
73
 
74
  // Register the blocks and generator with Blockly
75
  Blockly.common.defineBlocks(blocks);
 
183
  // Settings button and Keys Modal
184
  const settingsButton = document.querySelector('#settingsButton');
185
  const apiKeyModal = document.querySelector('#apiKeyModal');
186
+ // const apiKeyInput = document.querySelector('#apiKeyInput'); // TEMP: No OpenAI key input field
187
  const hfKeyInput = document.querySelector('#hfKeyInput');
188
  const saveApiKeyButton = document.querySelector('#saveApiKey');
189
  const cancelApiKeyButton = document.querySelector('#cancelApiKey');
 
194
  const loadStoredKeys = () => {
195
  const storedOpenAI = window.localStorage.getItem(OPENAI_KEY_STORAGE) || "";
196
  const storedHF = window.localStorage.getItem(HF_KEY_STORAGE) || "";
197
+ // apiKeyInput.value = storedOpenAI; // TEMP: No OpenAI key input field
198
+ if (hfKeyInput) {
199
+ hfKeyInput.value = storedHF;
200
+ }
201
  };
202
 
203
  settingsButton.addEventListener("click", () => {
204
+ if (hfKeyInput) {
205
+ loadStoredKeys();
206
+ } else {
207
+ console.warn("[Settings] HF key input not found in DOM");
208
+ }
209
  apiKeyModal.style.display = 'flex';
210
  });
211
 
212
  saveApiKeyButton.addEventListener("click", () => {
213
+ // const apiKey = apiKeyInput.value.trim(); // TEMP: No OpenAI key input field
214
+ const apiKey = ""; // TEMP: Using free API key, OpenAI field disabled
215
+ const hfKey = hfKeyInput?.value.trim() || "";
216
 
217
  // Validate OpenAI key format if provided
218
  if (apiKey && (!apiKey.startsWith("sk-") || apiKey.length < 40)) {
 
231
  window.localStorage.setItem(HF_KEY_STORAGE, hfKey);
232
 
233
  // Share keys with backend via cookies (per-request, not stored server-side)
234
+ const cookieOpts = "path=/; SameSite=None; Secure";
235
  if (apiKey) {
236
  document.cookie = `mcp_openai_key=${encodeURIComponent(apiKey)}; ${cookieOpts}`;
237
  } else {
 
251
  apiKeyModal.style.display = 'none';
252
  });
253
 
254
+ // Welcome Modal Setup
255
+ const welcomeModal = document.querySelector('#welcomeModal');
256
+ // TEMPORARY FREE API KEY - welcomeApiKeyInput removed from DOM
257
+ // const welcomeApiKeyInput = document.querySelector('#welcomeApiKeyInput');
258
+ const welcomeHfKeyInput = document.querySelector('#welcomeHfKeyInput');
259
+ const saveWelcomeApiKeyButton = document.querySelector('#saveWelcomeApiKey');
260
+ const skipTutorialButton = document.querySelector('#skipTutorialButton');
261
+ const dontShowWelcomeAgainCheckbox = document.querySelector('#dontShowWelcomeAgain');
262
+
263
+ const WELCOME_DISMISSED_COOKIE = "mcp_blockly_welcome_dismissed";
264
+
265
+ const getCookieValue = (name) => {
266
+ const value = `; ${document.cookie}`;
267
+ const parts = value.split(`; ${name}=`);
268
+ if (parts.length === 2) return parts.pop().split(';').shift();
269
+ return null;
270
+ };
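The cookie lookup above uses the common prepend-and-split idiom. Restated as a pure function over an explicit cookie string (instead of `document.cookie`), it can be checked directly:

```javascript
// The cookie lookup above, as a pure function over a cookie string.
const getCookieValue = (cookieString, name) => {
  const value = `; ${cookieString}`;
  const parts = value.split(`; ${name}=`);
  if (parts.length === 2) return parts.pop().split(';').shift();
  return null;
};

console.log(getCookieValue("a=1; mcp_blockly_welcome_dismissed=true; b=2",
                           "mcp_blockly_welcome_dismissed")); // "true"
console.log(getCookieValue("a=1", "missing"));                // null
```

Prepending `"; "` lets the same `"; name="` delimiter match a cookie at the start of the string as well as in the middle.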
271
+
272
+ const loadWelcomeStoredKeys = () => {
273
+ // TEMPORARY FREE API KEY
274
+ // const storedOpenAI = window.localStorage.getItem(OPENAI_KEY_STORAGE) || "";
275
+ const storedHF = window.localStorage.getItem(HF_KEY_STORAGE) || "";
276
+ // TEMPORARY FREE API KEY - welcomeApiKeyInput removed from DOM
277
+ // welcomeApiKeyInput.value = storedOpenAI;
278
+ welcomeHfKeyInput.value = storedHF;
279
+ };
280
+
281
+ const showWelcomeModal = () => {
282
+ loadWelcomeStoredKeys();
283
+ dontShowWelcomeAgainCheckbox.checked = false;
284
+ welcomeModal.style.display = 'flex';
285
+ };
286
+
287
+ const hideWelcomeModal = () => {
288
+ welcomeModal.style.display = 'none';
289
+ if (dontShowWelcomeAgainCheckbox.checked) {
290
+ const maxAge = 60 * 60 * 24 * 365; // one year
291
+ const sameSite = window.location.protocol === "https:" ? "None" : "Lax";
292
+ const secure = window.location.protocol === "https:" ? "; Secure" : "";
293
+ document.cookie = `${WELCOME_DISMISSED_COOKIE}=true; path=/; Max-Age=${maxAge}; SameSite=${sameSite}${secure}`;
294
+ }
295
+ };
296
+
297
+ saveWelcomeApiKeyButton.addEventListener("click", () => {
298
+ // TEMPORARY FREE API KEY
299
+ // const apiKey = welcomeApiKeyInput.value.trim();
300
+ const apiKey = ""; // TEMPORARY FREE API KEY - using free API
301
+ const hfKey = welcomeHfKeyInput.value.trim();
302
+
303
+ // TEMPORARY FREE API KEY
304
+ // // Validate OpenAI key format if provided
305
+ // if (apiKey && (!apiKey.startsWith("sk-") || apiKey.length < 40)) {
306
+ // alert("Invalid OpenAI API key format. Please enter a valid OpenAI API key (starts with 'sk-').");
307
+ // return;
308
+ // }
309
+
310
+ // Validate Hugging Face key format if provided
311
+ if (hfKey && (!hfKey.startsWith("hf_") || hfKey.length < 20)) {
312
+ alert("Invalid Hugging Face API key format. Please enter a valid Hugging Face API key (starts with 'hf_').");
313
+ return;
314
+ }
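The Hugging Face key check above boils down to a prefix and a minimum length. A minimal sketch (`isValidHfKey` is an illustrative helper, not in the app):

```javascript
// Minimal sketch of the HF token format check above.
const isValidHfKey = (key) => key.startsWith("hf_") && key.length >= 20;

console.log(isValidHfKey("hf_abcdefghijklmnopqrstuv")); // true
console.log(isValidHfKey("sk-not-a-hf-token"));         // false
console.log(isValidHfKey("hf_short"));                  // false
```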
315
+
316
+ // Save API keys locally
317
+ window.localStorage.setItem(OPENAI_KEY_STORAGE, apiKey);
318
+ window.localStorage.setItem(HF_KEY_STORAGE, hfKey);
319
+
320
+ // Share keys with backend via cookies (per-request, not stored server-side)
321
+ const cookieOpts = "path=/; SameSite=None; Secure";
322
+ // TEMPORARY FREE API KEY
323
+ // if (apiKey) {
324
+ // document.cookie = `mcp_openai_key=${encodeURIComponent(apiKey)}; ${cookieOpts}`;
325
+ // } else {
326
+ // document.cookie = `mcp_openai_key=; Max-Age=0; ${cookieOpts}`;
327
+ // }
328
+ document.cookie = `mcp_openai_key=; Max-Age=0; ${cookieOpts}`; // TEMPORARY FREE API KEY - clear free API cookie
329
+ if (hfKey) {
330
+ document.cookie = `mcp_hf_key=${encodeURIComponent(hfKey)}; ${cookieOpts}`;
331
+ } else {
332
+ document.cookie = `mcp_hf_key=; Max-Age=0; ${cookieOpts}`;
333
+ }
334
+
335
+ hideWelcomeModal();
336
+
337
+ // Trigger the tutorial flow
338
+ tutorialEnabled = true;
339
+ completedTutorialStepIndex = -1;
340
+ examplesJustFlashed = false;
341
+ checkAndFlashExamplesButton();
342
+ });
343
+
344
+ skipTutorialButton.addEventListener("click", () => {
345
+ tutorialEnabled = false;
346
+ hideWelcomeModal();
347
+ });
348
+
349
+ // Show welcome modal on first visit
350
+ const welcomeDismissed = getCookieValue(WELCOME_DISMISSED_COOKIE);
351
+ console.log('[Welcome] Cookie value:', welcomeDismissed);
352
+ if (!welcomeDismissed) {
353
+ // Delay showing welcome to ensure page is fully loaded
354
+ setTimeout(() => {
355
+ console.log('[Welcome] Showing welcome modal');
356
+ showWelcomeModal();
357
+ }, 100);
358
+ } else {
359
+ console.log('[Welcome] Welcome modal dismissed, skipping');
360
+ }
361
+
362
+ const weatherText = '{"workspaceComments":[{"height":80,"width":477,"id":"XI5[EHp-Ow+kinXf6n5y","x":51.0743994140625,"y":-53.56000305175782,"text":"Gets temperature of location with a latitude and a longitude."}],"blocks":{"languageVersion":0,"blocks":[{"type":"create_mcp","id":")N.HEG1x]Z/,k#TeWr,S","x":50,"y":50,"deletable":false,"extraState":{"inputCount":2,"inputNames":["latitude","longitude"],"inputTypes":["float","float"],"outputCount":1,"outputNames":["output0"],"outputTypes":["float"],"toolCount":0},"inputs":{"X0":{"block":{"type":"input_reference_latitude","id":"]3mj!y}qfRt+!okheU7L","deletable":false,"extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"latitude"}}},"X1":{"block":{"type":"input_reference_longitude","id":"Do/{HFNGSd.!;POiKS?D","deletable":false,"extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"longitude"}}},"R0":{"block":{"type":"cast_as","id":"vKz#fsrWMW(M9*:3Pv;2","fields":{"TYPE":"float"},"inputs":{"VALUE":{"block":{"type":"in_json","id":"R|j?_8s^H{l0;UZ-oQt3","inputs":{"NAME":{"block":{"type":"text","id":"@Z+@U^@8c0gQYj}La`PY","fields":{"TEXT":"temperature_2m"}}},"JSON":{"block":{"type":"in_json","id":"X=M,R1@7bRjJVZIPi[qD","inputs":{"NAME":{"block":{"type":"text","id":"OMr~`#kG$3@k`YPDHbzH","fields":{"TEXT":"current"}}},"JSON":{"block":{"type":"call_api","id":"^(.vyM.yni08S~c1EBm=","fields":{"METHOD":"GET"},"inputs":{"URL":{"shadow":{"type":"text","id":"}.T;_U_OsRS)B_y09p%{","fields":{"TEXT":""}},"block":{"type":"text_replace","id":"OwH9uERJPTGQG!UER#ch","inputs":{"FROM":{"shadow":{"type":"text","id":"ya05#^7%UbUeXX#eDSmH","fields":{"TEXT":"{latitude}"}},"block":{"type":"text","id":"6CX#+wo9^x+vZ`LRt5ms","fields":{"TEXT":"{latitude}"}}},"TO":{"shadow":{"type":"text","id":":_ZloQuh9c-MNf-U]!k5","fields":{"TEXT":""}},"block":{"type":"cast_as","id":"qXXp2GSF;@+ssDvHN={+","fields":{"TYPE":"str"},"inputs":{"VALUE":{"block":{"type":"input_reference_latitude","id":"?%@)3sErZ)}=#4ags#gu","extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"latitude"}}}}}},"TEXT":{"shadow":{"type":"text","id":"w@zsP)m6:WjkUp,ln3$x","fields":{"TEXT":""}},"block":{"type":"text_replace","id":"ImNPsvzD7r^+1MJ%IirV","inputs":{"FROM":{"shadow":{"type":"text","id":"%o(3rro?WLIFpmE0#MMM","fields":{"TEXT":"{longitude}"}},"block":{"type":"text","id":"`p!s8dQ7e~?0JvofyB-{","fields":{"TEXT":"{longitude}"}}},"TO":{"shadow":{"type":"text","id":"Zpql-%oJ_sdSi|r|*er|","fields":{"TEXT":""}},"block":{"type":"cast_as","id":"T5r7Y,;]kq2wClH)JUf8","fields":{"TYPE":"str"},"inputs":{"VALUE":{"block":{"type":"input_reference_longitude","id":"WUgiJP$X+zY#f$5nhnTX","extraState":{"ownerBlockId":")N.HEG1x]Z/,k#TeWr,S"},"fields":{"VARNAME":"longitude"}}}}}},"TEXT":{"shadow":{"type":"text","id":",(vw$o_s7P=b4P;8]}yj","fields":{"TEXT":"https://api.open-meteo.com/v1/forecast?latitude={latitude}&longitude={longitude}&current=temperature_2m,wind_speed_10m"}}}}}}}}}}}}}}}}}}}}}}}]}}';
363
+ let examplesJustFlashed = false;
364
+ let selectedExample = null; // Track which example was selected
365
+ let tutorialEnabled = false; // Track if tutorial is enabled
366
+
367
+ // Tutorial popup management
368
+ const tutorialStepOrder = ['examples', 'refresh', 'test', 'aiAssistant', 'send'];
369
+ let completedTutorialStepIndex = -1;
370
+ const getTutorialStepIndex = (step) => tutorialStepOrder.indexOf(step);
371
+ const hasCompletedStepOrBeyond = (step) => {
372
+ const idx = getTutorialStepIndex(step);
373
+ return idx !== -1 && completedTutorialStepIndex >= idx;
374
+ };
375
+ const markTutorialStepComplete = (step) => {
376
+ const idx = getTutorialStepIndex(step);
377
+ if (idx > completedTutorialStepIndex) {
378
+ completedTutorialStepIndex = idx;
379
+ }
380
+ };
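The step tracking above is monotonic: completing a step never moves the index backwards, and "completed or beyond" is an index comparison. The same helpers, runnable on their own:

```javascript
// The monotonic tutorial step tracking above, runnable standalone.
const tutorialStepOrder = ['examples', 'refresh', 'test', 'aiAssistant', 'send'];
let completedTutorialStepIndex = -1;
const getTutorialStepIndex = (step) => tutorialStepOrder.indexOf(step);
const hasCompletedStepOrBeyond = (step) => {
  const idx = getTutorialStepIndex(step);
  return idx !== -1 && completedTutorialStepIndex >= idx;
};
const markTutorialStepComplete = (step) => {
  const idx = getTutorialStepIndex(step);
  if (idx > completedTutorialStepIndex) completedTutorialStepIndex = idx;
};

markTutorialStepComplete('test');                  // index becomes 2
console.log(hasCompletedStepOrBeyond('refresh'));  // true  (earlier step)
console.log(hasCompletedStepOrBeyond('send'));     // false (later step)
markTutorialStepComplete('examples');              // earlier step: index stays 2
console.log(completedTutorialStepIndex);           // 2
```

This is why completing "test" makes the guards in `loadExamplesButtonFlash` and `loadRefreshButtonFlash` bail out: both earlier steps count as completed.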
381
+ let currentTutorialStep = null;
382
+ const tutorialPopup = document.getElementById('tutorialPopup');
383
+ const tutorialTitle = document.getElementById('tutorialTitle');
384
+ const tutorialBody = document.getElementById('tutorialBody');
385
+ const tutorialSkipButton = document.getElementById('tutorialSkipButton');
386
+ console.log('[TUTORIAL] Popup elements:', { tutorialPopup, tutorialTitle, tutorialBody, tutorialSkipButton });
387
+
388
+ const tutorialContent = {
389
+ examples: {
390
+ title: '1/5 Try an Example',
391
+ body: 'Click on one of the example workflows to get started quickly.'
392
+ },
393
+ refresh: {
394
+ title: '2/5 Refresh Info',
395
+ body: 'Click the Refresh button to update the inputs and outputs of your MCP server.'
396
+ },
397
+ test: {
398
+ title: '3/5 Run Your Test',
399
+ get body() {
400
+ if (selectedExample === 'weather') {
401
+ return 'Enter a latitude (-90 to 90) and longitude (-180 to 180), then press Test to get the current temperature at that location!';
402
+ } else if (selectedExample === 'fact') {
403
+ return 'Enter a claim you want to verify, then press Test to check if it\'s true!';
404
+ }
405
+ return 'Enter some values in your MCP inputs, and press the Test button to see its output.';
406
+ }
407
+ },
408
+ aiAssistant: {
409
+ title: '4/5 Get AI Guidance',
410
+ body: 'After your output is generated, switch to the AI Assistant tab to chat with the integrated agent.'
411
+ },
412
+ send: {
413
+ title: '5/5 Send a Message',
414
+ body: 'Try using one of the three suggested prompts (click on one of the buttons), or write one of your own!'
415
+ }
416
+ };
417
+
418
+ const showTutorialPopup = (step, targetElement) => {
419
+ console.log('[TUTORIAL] showTutorialPopup called for step:', step, 'element:', targetElement);
420
+ if (!tutorialContent[step]) {
421
+ console.log('[TUTORIAL] Content not found for step:', step);
422
+ return;
423
+ }
424
+
425
+ currentTutorialStep = step;
426
+ tutorialTitle.textContent = tutorialContent[step].title;
427
+ // Handle both regular string body and getter function
428
+ const bodyContent = typeof tutorialContent[step].body === 'function'
429
+ ? tutorialContent[step].body()
430
+ : tutorialContent[step].body;
431
+ tutorialBody.textContent = bodyContent;
432
+ tutorialPopup.style.display = 'block';
433
+ console.log('[TUTORIAL] Popup display set to block');
434
+
435
+ // Position the popup next to the target element
436
+ if (targetElement) {
437
+ setTimeout(() => {
438
+ const rect = targetElement.getBoundingClientRect();
439
+ const popupWidth = 280;
440
+ const popupHeight = 150; // Approximate height
441
+ const spacing = 25;
442
+ console.log('[TUTORIAL] Target element rect:', rect, 'window:', window.innerWidth, 'x', window.innerHeight);
443
+
444
+ let top = rect.top;
445
+ let left = rect.right + spacing;
446
+
447
+ // Priority 1: Try to place to the right
448
+ if (left + popupWidth > window.innerWidth) {
449
+ // Priority 2: Try to place below
450
+ left = rect.left;
451
+ top = rect.bottom + spacing;
452
+
453
+ if (top + popupHeight > window.innerHeight) {
454
+ // Priority 3: Place to the left
455
+ left = rect.left - popupWidth - spacing;
456
+ top = rect.top;
457
+ }
458
+ }
459
+
460
+ console.log('[TUTORIAL] Setting popup position to top:', top, 'left:', left);
461
+ tutorialPopup.style.top = `${top}px`;
462
+ tutorialPopup.style.left = `${left}px`;
463
+ }, 50);
464
+ }
465
+ };
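The positioning logic above tries three placements in order: right of the target, then below, then left. A sketch of that fallback as a pure function over a target rect and viewport size (`placePopup` is an illustrative name; the defaults mirror the constants above):

```javascript
// Sketch of the three-way placement fallback above (right -> below -> left).
function placePopup(rect, viewport, popupWidth = 280, popupHeight = 150, spacing = 25) {
  let top = rect.top;
  let left = rect.right + spacing;
  if (left + popupWidth > viewport.width) {      // no room on the right
    left = rect.left;
    top = rect.bottom + spacing;                 // try below
    if (top + popupHeight > viewport.height) {   // no room below either
      left = rect.left - popupWidth - spacing;   // fall back to the left
      top = rect.top;
    }
  }
  return { top, left };
}

// Wide viewport: popup fits to the right of the target.
console.log(placePopup({ top: 10, left: 100, right: 200, bottom: 50 },
                       { width: 1000, height: 800 }));
// { top: 10, left: 225 }
```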
466
+
467
+ const hideTutorialPopup = () => {
468
+ tutorialPopup.style.display = 'none';
469
+ currentTutorialStep = null;
470
+ };
471
+
472
+ tutorialSkipButton.addEventListener('click', () => {
473
+ // Stop the tutorial and remove all flashing indicators (including inside iframes)
474
+ tutorialEnabled = false;
475
+ examplesJustFlashed = false;
476
+ document.querySelectorAll('.flash-button').forEach(el => el.classList.remove('flash-button'));
477
+ const iframes = document.querySelectorAll('iframe');
478
+ for (const iframe of iframes) {
479
+ try {
480
+ const doc = iframe.contentDocument || iframe.contentWindow.document;
481
+ doc?.querySelectorAll('.flash-button').forEach(el => el.classList.remove('flash-button'));
482
+ } catch (e) {
483
+ // Ignore cross-origin frames
484
+ }
485
+ }
486
+ hideTutorialPopup();
487
+ });
488
+
489
+ const loadSendButtonFlash = () => {
490
+ if (!tutorialEnabled) return;
491
+ // Find send button in the chat iframe (it's the button with just an SVG icon)
492
+ let sendBtn = null;
493
+ let iframeDoc = null;
494
+
495
+ console.log('[TUTORIAL] Looking for send button in iframes');
496
+ const iframes = document.querySelectorAll('iframe');
497
+ for (const iframe of iframes) {
498
+ try {
499
+ iframeDoc = iframe.contentDocument || iframe.contentWindow.document;
500
+ if (iframeDoc) {
501
+ // Look for send button - it's the one with SVG viewBox="0 0 22 24"
502
+ const buttons = iframeDoc.querySelectorAll('button');
503
+ console.log('[TUTORIAL] Found', buttons.length, 'buttons in iframe');
504
+ for (let btn of buttons) {
505
+ // Look for the button with the send SVG (has viewBox="0 0 22 24")
506
+ if (btn.innerHTML.includes('viewBox="0 0 22 24"')) {
507
+ console.log('[TUTORIAL] Found send button with SVG!');
508
+ sendBtn = btn;
509
+ break;
510
+ }
511
+ }
512
+ if (sendBtn) break;
513
+ }
514
+ } catch (e) {
515
+ // Cannot access iframe, continue
516
+ console.log('[TUTORIAL] Cannot access iframe:', e);
517
+ }
518
+ }
519
+
520
+ console.log('[TUTORIAL] Send button found?', !!sendBtn);
521
+ if (sendBtn && iframeDoc) {
522
+ // Inject CSS if needed
523
+ const styleId = 'send-flash-style';
524
+ if (!iframeDoc.getElementById(styleId)) {
525
+ const style = iframeDoc.createElement('style');
526
+ style.id = styleId;
527
+ style.textContent = `
528
+ @keyframes pulseOutline {
529
+ 0% {
530
+ box-shadow: 0 0 0 0 rgba(255, 255, 255, 1);
531
+ }
532
+ 75% {
533
+ box-shadow: 0 0 0 10px rgba(255, 255, 255, 0);
534
+ }
535
+ 100% {
536
+ box-shadow: 0 0 0 0 rgba(255, 255, 255, 0);
537
+ }
538
+ }
539
+
540
+ .flash-button {
541
+ animation: pulseOutline 1s ease-out infinite !important;
542
+ }
543
+ `;
544
+ iframeDoc.head.appendChild(style);
545
+ }
546
+
547
+ sendBtn.classList.add('flash-button');
548
+ showTutorialPopup('send', sendBtn);
549
+
550
+ // Watch for when the send button gets removed/replaced (which happens when message is sent)
551
+ const observer = new MutationObserver((mutations) => {
552
+ console.log('[TUTORIAL] Send button DOM changed - message was sent');
553
+ hideTutorialPopup();
554
+ observer.disconnect();
555
+ });
556
+
557
+ // Observe the button's parent for changes (including removal/replacement)
558
+ if (sendBtn.parentElement) {
559
+ observer.observe(sendBtn.parentElement, {
560
+ childList: true,
561
+ subtree: true
562
+ });
563
+ }
564
+ }
565
+ };
566
+
567
+ const loadAiAssistantTabFlash = () => {
568
+ if (!tutorialEnabled || hasCompletedStepOrBeyond('aiAssistant')) return;
569
+ const aiTab = document.querySelector('[data-tab="aichat"]');
570
+ if (aiTab) {
571
+ aiTab.classList.add('flash-button');
572
+ showTutorialPopup('aiAssistant', aiTab);
573
+
574
+ // Remove flash on click
575
+ const removeFlash = () => {
576
+ aiTab.classList.remove('flash-button');
577
+ aiTab.removeEventListener('click', removeFlash);
578
+ markTutorialStepComplete('aiAssistant');
579
+ hideTutorialPopup();
580
+ // After AI Assistant tab is clicked, flash the send button
581
+ setTimeout(() => {
582
+ loadSendButtonFlash();
583
+ }, 100);
584
+ };
585
+ aiTab.addEventListener('click', removeFlash);
586
+ }
587
+ };
588
+
589
+ const loadExamplesButtonFlash = () => {
590
+ if (!tutorialEnabled || hasCompletedStepOrBeyond('examples')) return;
591
+ const examplesButton = Array.from(document.querySelectorAll('.menuButton')).find(btn => btn.textContent === 'Examples');
592
+ console.log('[TUTORIAL] Examples button found:', examplesButton);
593
+ if (examplesButton) {
594
+ examplesButton.classList.add('flash-button');
595
+ examplesJustFlashed = true;
596
+ console.log('[TUTORIAL] Showing examples popup');
597
+ showTutorialPopup('examples', examplesButton);
598
+ } else {
599
+ console.log('[TUTORIAL] Examples button NOT found');
600
+ }
601
+ };
602
+
603
+ const loadTestButtonFlash = (testBtn, iframeDoc) => {
604
+ if (!tutorialEnabled || hasCompletedStepOrBeyond('test') || !testBtn || !iframeDoc) return;
605
+ if (testBtn && iframeDoc) {
606
+ testBtn.classList.add('flash-button');
607
+ showTutorialPopup('test', testBtn);
608
+
609
+ // Remove flash on click
610
+ const removeFlash = () => {
611
+ testBtn.classList.remove('flash-button');
612
+ testBtn.removeEventListener('click', removeFlash);
613
+ markTutorialStepComplete('test');
614
+ hideTutorialPopup();
615
+ // Proceed to the next step
616
+ loadAiAssistantTabFlash();
617
+ };
618
+ testBtn.addEventListener('click', removeFlash);
619
+ }
620
+ };
621
+
622
+ const loadRefreshButtonFlash = () => {
623
+ if (!tutorialEnabled || hasCompletedStepOrBeyond('refresh')) return;
624
+ let refreshBtn = null;
625
+ let testBtn = null;
626
+ let iframeDoc = null;
627
+
628
+ // Check iframes for the Refresh and Test buttons
629
+ const iframes = document.querySelectorAll('iframe');
630
+ for (const iframe of iframes) {
631
+ try {
632
+ iframeDoc = iframe.contentDocument || iframe.contentWindow.document;
633
+ if (iframeDoc) {
634
+ const iframeButtons = iframeDoc.querySelectorAll('button');
635
+
636
+ // Try to find Refresh and Test buttons in this iframe
637
+ for (let btn of iframeButtons) {
638
+ if (btn.textContent.includes('Refresh') || btn.innerHTML.includes('Refresh')) {
639
+ refreshBtn = btn;
640
+ }
641
+ if (btn.textContent.includes('Test') && btn.classList.contains('secondary')) {
642
+ testBtn = btn;
643
+ }
644
+ if (refreshBtn && testBtn) break;
645
+ }
646
+
647
+ if (refreshBtn && testBtn) break;
648
+ }
649
+ } catch (e) {
650
+ // Cannot access iframe, continue
651
+ }
652
+ }
653
+
654
+ if (refreshBtn && iframeDoc) {
655
+ // Inject CSS into the iframe if needed
656
+ const styleId = 'refresh-flash-style';
657
+ if (!iframeDoc.getElementById(styleId)) {
658
+ const style = iframeDoc.createElement('style');
659
+ style.id = styleId;
660
+ style.textContent = `
661
+ @keyframes pulseOutline {
662
+ 0% {
663
+ box-shadow: 0 0 0 0 rgba(255, 255, 255, 1);
664
+ }
665
+ 75% {
666
+ box-shadow: 0 0 0 10px rgba(255, 255, 255, 0);
667
+ }
668
+ 100% {
669
+ box-shadow: 0 0 0 0 rgba(255, 255, 255, 0);
670
+ }
671
+ }
672
+
673
+ .flash-button {
674
+ animation: pulseOutline 1s ease-out infinite !important;
675
+ }
676
+ `;
677
+ iframeDoc.head.appendChild(style);
678
+ }
679
+
680
+ refreshBtn.classList.add('flash-button');
681
+ showTutorialPopup('refresh', refreshBtn);
682
+
683
+ // Remove flash on click
684
+ const removeFlash = () => {
685
+ refreshBtn.classList.remove('flash-button');
686
+ refreshBtn.removeEventListener('click', removeFlash);
687
+ markTutorialStepComplete('refresh');
688
+ hideTutorialPopup();
689
+ // After Refresh is clicked, flash the Test button
690
+ if (testBtn) {
691
+ loadTestButtonFlash(testBtn, iframeDoc);
692
+ }
693
+ };
694
+ refreshBtn.addEventListener('click', removeFlash);
695
+ } else {
696
+ // Retry after a delay if button not found yet
697
+ setTimeout(loadRefreshButtonFlash, 1000);
698
+ }
699
+ };
700
+
701
+ const checkAndFlashExamplesButton = () => {
702
+ console.log('[TUTORIAL] checkAndFlashExamplesButton called, tutorialEnabled:', tutorialEnabled, 'examplesJustFlashed:', examplesJustFlashed);
703
+ if (!tutorialEnabled) {
704
+ console.log('[TUTORIAL] Tutorial disabled, skipping');
705
+ return;
706
+ }
707
+ if (hasCompletedStepOrBeyond('examples')) {
708
+ console.log('[TUTORIAL] Examples step already completed, skipping');
709
+ return;
710
+ }
711
+ if (!examplesJustFlashed) {
712
+ // Flash the examples button immediately
713
+ console.log('[TUTORIAL] About to call loadExamplesButtonFlash');
714
+ loadExamplesButtonFlash();
715
+ }
716
+ };
717
+
718
  weatherButton.addEventListener("click", () => {
719
  try {
720
+ selectedExample = 'weather';
721
  const fileContent = JSON.parse(weatherText);
722
  Blockly.serialization.workspaces.load(fileContent, ws);
723
+ const shouldHandleTutorial = tutorialEnabled && !hasCompletedStepOrBeyond('examples');
724
+ if (shouldHandleTutorial) {
725
+ // Example loaded - stop flashing Examples button and move to Refresh
726
+ const examplesButton = Array.from(document.querySelectorAll('.menuButton')).find(btn => btn.textContent === 'Examples');
727
+ if (examplesButton) {
728
+ examplesButton.classList.remove('flash-button');
729
+ }
730
+ hideTutorialPopup();
731
+ examplesJustFlashed = false;
732
+ markTutorialStepComplete('examples');
733
+ // Flash refresh button when example is selected
734
+ loadRefreshButtonFlash();
735
+ }
736
  } catch (error) {
737
  console.error("Error loading weather.txt contents:", error);
738
  }
 
741
  const factText = "{\"workspaceComments\":[{\"height\":66,\"width\":575,\"id\":\"x/Z2E2Oid(4||-pQ)h*;\",\"x\":51.00000000000023,\"y\":-35.76388082917071,\"text\":\"A fact checker that uses a searching LLM to verify the validity of a claim.\"}],\"blocks\":{\"languageVersion\":0,\"blocks\":[{\"type\":\"create_mcp\",\"id\":\"yScKJD/XLhk)D}qn2TW:\",\"x\":50,\"y\":50,\"deletable\":false,\"extraState\":{\"inputCount\":1,\"inputNames\":[\"prompt\"],\"inputTypes\":[\"string\"],\"outputCount\":1,\"outputNames\":[\"result\"],\"outputTypes\":[\"string\"],\"toolCount\":0},\"inputs\":{\"X0\":{\"block\":{\"type\":\"input_reference_prompt\",\"id\":\"-r%M-[oX1]?RxxF_V(V@\",\"deletable\":false,\"extraState\":{\"ownerBlockId\":\"yScKJD/XLhk)D}qn2TW:\"},\"fields\":{\"VARNAME\":\"prompt\"}}},\"R0\":{\"block\":{\"type\":\"llm_call\",\"id\":\"m/*D8ZBx;QZlUN*aw15U\",\"fields\":{\"MODEL\":\"gpt-4o-search-preview-2025-03-11\"},\"inputs\":{\"PROMPT\":{\"block\":{\"type\":\"text_join\",\"id\":\"e@#`RVKXpIZ9__%zUK]`\",\"extraState\":{\"itemCount\":3},\"inputs\":{\"ADD0\":{\"block\":{\"type\":\"text\",\"id\":\"M3QD})k`FXiizaF,gA{9\",\"fields\":{\"TEXT\":\"Verify whether the following claim: \\\"\"}}},\"ADD1\":{\"block\":{\"type\":\"input_reference_prompt\",\"id\":\"B4.LNZ0es`RFM0Xi@SL:\",\"extraState\":{\"ownerBlockId\":\"yScKJD/XLhk)D}qn2TW:\"},\"fields\":{\"VARNAME\":\"prompt\"}}},\"ADD2\":{\"block\":{\"type\":\"text\",\"id\":\"Ng!fFR+xTMdmgWZv6Oh{\",\"fields\":{\"TEXT\":\"\\\" is true or not. Return one of the following values: \\\"True\\\", \\\"Unsure\\\", \\\"False\\\", and nothing else. You may not say anything but one of these answers no matter what.\"}}}}}}}}}}}]}}"
742
  factButton.addEventListener("click", () => {
743
  try {
744
+ selectedExample = 'fact';
745
  const fileContent = JSON.parse(factText);
746
  Blockly.serialization.workspaces.load(fileContent, ws);
747
+ const shouldHandleTutorial = tutorialEnabled && !hasCompletedStepOrBeyond('examples');
748
+ if (shouldHandleTutorial) {
749
+ // Example loaded - stop flashing Examples button and move to Refresh
750
+ const examplesButton = Array.from(document.querySelectorAll('.menuButton')).find(btn => btn.textContent === 'Examples');
751
+ if (examplesButton) {
752
+ examplesButton.classList.remove('flash-button');
753
+ }
754
+ hideTutorialPopup();
755
+ examplesJustFlashed = false;
756
+ markTutorialStepComplete('examples');
757
+ // Flash refresh button when example is selected
758
+ loadRefreshButtonFlash();
759
+ }
760
  } catch (error) {
761
  console.error("Error loading fact example contents:", error);
762
  }
 
1327
 
1328
  // Set up unified SSE connection for all workspace operations
1329
  const setupUnifiedStream = () => {
1330
- const eventSource = new EventSource(`/unified_stream?session_id=${sessionId}`);
+ const esUrl = `${basePath}/unified_stream?session_id=${sessionId}`;
1331
+ console.log("[SSE] Opening unified_stream", esUrl);
1332
+ const eventSource = new EventSource(esUrl);
1333
  const processedRequests = new Set(); // Track processed requests
1334
 
1335
  eventSource.onmessage = (event) => {
 
1433
 
1434
  // Send result back to backend immediately
1435
  console.log('[SSE] Sending edit MCP result:', { request_id: data.request_id, success, error });
1436
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1437
  method: 'POST',
1438
  headers: { 'Content-Type': 'application/json' },
1439
  body: JSON.stringify({
 
1567
 
1568
  // Send result back to backend
1569
  console.log('[SSE] Sending replace block result:', { request_id: data.request_id, success, error, block_id: blockId });
1570
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1571
  method: 'POST',
1572
  headers: { 'Content-Type': 'application/json' },
1573
  body: JSON.stringify({
 
1616
 
1617
  // Send result back to backend immediately
1618
  console.log('[SSE] Sending deletion result:', { block_id: data.block_id, success, error });
1619
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1620
  method: 'POST',
1621
  headers: { 'Content-Type': 'application/json' },
1622
  body: JSON.stringify({
 
1810
  block_id: blockId
1811
  });
1812
 
1813
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1814
  method: 'POST',
1815
  headers: { 'Content-Type': 'application/json' },
1816
  body: JSON.stringify({
 
1863
  variable_id: variableId
1864
  });
1865
 
1866
- fetch('/request_result', {
+ fetch(`${basePath}/request_result`, {
1867
  method: 'POST',
1868
  headers: { 'Content-Type': 'application/json' },
1869
  body: JSON.stringify({
 
2063
  codeEl.textContent = code;
2064
  }
2065
 
2066
- fetch("/update_code", {
+ fetch(`${basePath}/update_code`, {
2067
  method: "POST",
2068
  headers: { "Content-Type": "application/json" },
2069
  body: JSON.stringify({ session_id: sessionId, code }),
 
2084
  // Function to check if chat backend is available
2085
  const checkChatBackend = async () => {
2086
  try {
2087
- const response = await fetch("/update_chat", {
+ const response = await fetch(`${basePath}/update_chat`, {
2088
  method: "POST",
2089
  headers: { "Content-Type": "application/json" },
2090
  body: JSON.stringify({
 
2118
  // Send chat update with retry logic
2119
  const sendChatUpdate = async (chatCode, retryCount = 0) => {
2120
  try {
2121
- const response = await fetch("/update_chat", {
+ const response = await fetch(`${basePath}/update_chat`, {
2122
  method: "POST",
2123
  headers: { "Content-Type": "application/json" },
2124
  body: JSON.stringify({
project/test.py CHANGED
@@ -79,8 +79,10 @@ def _worker_exec(code_str, inputs, queue, openai_key=None, hf_key=None):
 
  exec(code_str, env)
  if "create_mcp" in env:
- sig = inspect.signature(env["create_mcp"])
+ func = env["create_mcp"]
+ sig = inspect.signature(func)
  params = list(sig.parameters.values())
+ in_types = getattr(func, "in_types", [])
 
  typed_args = []
  for i, arg in enumerate(inputs):
@@ -90,23 +92,28 @@ def _worker_exec(code_str, inputs, queue, openai_key=None, hf_key=None):
  typed_args.append(None)
  continue
 
- anno = params[i].annotation
+ target_type = in_types[i] if i < len(in_types) else params[i].annotation
  try:
- if anno == int:
+ if target_type in (int, "int", "integer"):
  typed_args.append(int(arg))
- elif anno == float:
+ elif target_type in (float, "float"):
  typed_args.append(float(arg))
- elif anno == bool:
- typed_args.append(bool(arg) if isinstance(arg, bool) else str(arg).lower() in ("true", "1"))
- elif anno == list:
- try:
- typed_args.append(ast.literal_eval(arg))
- except Exception:
- typed_args.append([arg])
- elif anno == str or anno == inspect._empty:
+ elif target_type in (bool, "bool", "boolean"):
+ typed_args.append(bool(arg) if isinstance(arg, bool) else str(arg).lower() in ("true", "1", "yes"))
+ elif target_type in (list, "list"):
+ if isinstance(arg, list):
+ typed_args.append(arg)
+ else:
+ try:
+ typed_args.append(ast.literal_eval(arg))
+ except Exception:
+ typed_args.append([arg])
+ elif target_type in ("any", "Any"):
+ typed_args.append(arg)
+ elif target_type == inspect._empty:
  typed_args.append(str(arg))
  else:
- typed_args.append(arg)
+ typed_args.append(str(arg))
  except Exception:
  typed_args.append(arg)
 
@@ -132,14 +139,14 @@ async def update_code(request: Request):
  data = await request.json()
  session_id = require_session_id(data)
  latest_blockly_code[session_id] = data.get("code", "")
- logger.info(f"[update_code] Stored code for session {session_id}; length={len(latest_blockly_code[session_id])}")
+ logger.info(f"[update_code] Stored code; length={len(latest_blockly_code[session_id])}")
  return {"ok": True}
 
  # Sends the latest code to chat.py so that the agent will be able to use the MCP
  @app.get("/get_latest_code")
  async def get_latest_code(request: Request):
  session_id = require_session_id(dict(session_id=request.query_params.get("session_id")))
- logger.info(f"[get_latest_code] Returning code for session {session_id}; length={len(latest_blockly_code.get(session_id, ''))}")
+ logger.info(f"[get_latest_code] Returning code; length={len(latest_blockly_code.get(session_id, ''))}")
  return {"code": latest_blockly_code.get(session_id, "")}
 
  def execute_blockly_logic(user_inputs, session_id, openai_key=None, hf_key=None):
@@ -148,7 +155,7 @@ def execute_blockly_logic(user_inputs, session_id, openai_key=None, hf_key=None)
  """
  if not session_id:
  return "No session_id provided to execute code."
- logger.info(f"[execute_blockly_logic] session={session_id}; inputs={user_inputs}")
+ logger.info(f"[execute_blockly_logic] executing; inputs={user_inputs}")
  code_for_session = latest_blockly_code.get(session_id, "")
  if not code_for_session.strip():
  return "No Blockly code available"
@@ -194,7 +201,7 @@ def execute_blockly_logic(user_inputs, session_id, openai_key=None, hf_key=None)
  if p.is_alive():
  p.terminate()
  p.join()
- logger.error(f"[execute_blockly_logic] timeout for session {session_id}")
+ logger.error("[execute_blockly_logic] timeout")
  return "Error: Execution timed out"
 
  if q.empty():
@@ -222,15 +229,17 @@ def build_interface():
  c_sid = None
 
  sid = q_sid or c_sid
- logger.info(
- f"[init_session] query_sid={q_sid}, cookie_sid={c_sid}, chosen={sid}"
- )
+ logger.info("[init_session] session resolved")
  if not sid:
  logger.error("Gradio test interface loaded without session_id (neither query param nor cookie); refresh/process will be no-ops.")
  return sid
 
  demo.load(init_session, inputs=None, outputs=[session_state], queue=False)
 
+ def hide_all_fields():
+ # Hide everything after initial mount so they don't flash visible
+ return [gr.update(visible=False, value="") for _ in input_fields + output_fields]
+
  # Create a fixed number of potential input fields (max 10)
  input_fields = []
  input_labels = []
@@ -239,7 +248,8 @@ def build_interface():
  with gr.Accordion("MCP Inputs", open=True):
  for i in range(10):
  # Create inputs that can be shown/hidden
- txt = gr.Textbox(label=f"Input {i+1}", visible=False)
+ # Gradio 6 lazy-renders hidden components; start visible so they mount, then hide via refresh updates
+ txt = gr.Textbox(label=f"Input {i+1}", visible=True)
  input_fields.append(txt)
  input_group_items.append(txt)
 
@@ -247,7 +257,8 @@ def build_interface():
 
  with gr.Accordion("MCP Outputs", open=True):
  for i in range(10):
- out = gr.Textbox(label=f"Output {i+1}", visible=False, interactive=False)
+ # Start visible to force mount; refresh updates will hide/show appropriately
+ out = gr.Textbox(label=f"Output {i+1}", visible=True, interactive=False)
  output_fields.append(out)
 
  with gr.Row():
@@ -260,42 +271,57 @@ def build_interface():
  return [gr.update(visible=False, value="") for _ in input_fields + output_fields]
  import re
  code_str = latest_blockly_code.get(session_id, "")
- logger.info(f"[refresh_inputs] session={session_id}; code length={len(code_str)}")
+ logger.info(f"[refresh_inputs] code length={len(code_str)}")
+
+ def normalize_type_name(type_name):
+ if not type_name:
+ return "string"
+ if isinstance(type_name, str):
+ type_name_lower = type_name.lower()
+ if "int" in type_name_lower:
+ return "integer"
+ if "float" in type_name_lower:
+ return "float"
+ if "list" in type_name_lower:
+ return "list"
+ if "bool" in type_name_lower:
+ return "boolean"
+ if "any" in type_name_lower:
+ return "any"
+ return "string"
 
  # Look for the create_mcp function definition
- pattern = r'def create_mcp\((.*?)\):'
- match = re.search(pattern, code_str)
+ # Allow multiline function signatures
+ pattern = r'def\s+create_mcp\s*\((.*?)\)\s*:'
+ match = re.search(pattern, code_str, re.DOTALL)
 
  params = []
+ # Prefer explicit in_types if present
+ in_types_match = re.search(r'in_types\s*=\s*(\[.*?\])', code_str, re.DOTALL)
+ in_types = []
+ if in_types_match:
+ try:
+ in_types = ast.literal_eval(in_types_match.group(1))
+ except Exception:
+ in_types = []
+
  if match:
  params_str = match.group(1)
  if params_str.strip():
  # Parse parameters to extract names and types
- for param in params_str.split(','):
+ for idx, param in enumerate(params_str.split(',')):
  param = param.strip()
- if ':' in param:
- name, type_hint = param.split(':')
- type_hint = type_hint.strip()
- # Convert Python type hints to display names
- if 'int' in type_hint:
- display_type = 'integer'
- elif 'float' in type_hint:
- display_type = 'float'
- elif 'list' in type_hint:
- display_type = 'list'
- elif 'bool' in type_hint:
- display_type = 'boolean'
- else:
- display_type = 'string'
- params.append({
- 'name': name.strip(),
- 'type': display_type
- })
- else:
- params.append({
- 'name': param,
- 'type': 'string'
- })
+ name = param.split(':')[0].strip() if param else param
+ # Prefer in_types entry, fall back to annotation parsing
+ display_type = normalize_type_name(in_types[idx] if idx < len(in_types) else None)
+ if ':' in param and (not in_types or idx >= len(in_types)):
+ _, type_hint = param.split(':')
+ display_type = normalize_type_name(type_hint.strip())
+
+ params.append({
+ 'name': name,
+ 'type': display_type
+ })
 
  # Detect output count (out_amt = N)
  out_amt_match = re.search(r'out_amt\s*=\s*(\d+)', code_str)
@@ -368,6 +394,10 @@ def build_interface():
  openai_key = request.headers.get("x-openai-key") or request.cookies.get("mcp_openai_key")
  hf_key = request.headers.get("x-hf-key") or request.cookies.get("mcp_hf_key")
 
+ # TEMPORARY FREE API KEY
+ if not openai_key:
+ openai_key = os.getenv("OPENAI_API_KEY")
+
  result = execute_blockly_logic(inputs, session_id, openai_key=openai_key, hf_key=hf_key)
 
  # Get output types to determine how to format the result
@@ -418,6 +448,14 @@ def build_interface():
  queue=False
  )
 
+ # After mount, hide all fields so they don't stay visible until refreshed
+ demo.load(
+ hide_all_fields,
+ inputs=None,
+ outputs=input_fields + output_fields,
+ queue=False
+ )
+
  return demo
 
 
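The `_worker_exec` changes above prefer an explicit `in_types` list over signature annotations when coercing inputs, falling back to the raw value on failure. A self-contained sketch of the same precedence and fallback behavior — the `coerce` helper is a hypothetical extraction, but the tuple membership checks mirror the diff:

```python
import ast

def coerce(arg, target_type):
    # Mirrors the diff's coercion order: int/float/bool/list matched by type
    # object or string name, "any" passes through, everything else stringified.
    if arg is None:
        return None
    try:
        if target_type in (int, "int", "integer"):
            return int(arg)
        if target_type in (float, "float"):
            return float(arg)
        if target_type in (bool, "bool", "boolean"):
            return arg if isinstance(arg, bool) else str(arg).lower() in ("true", "1", "yes")
        if target_type in (list, "list"):
            if isinstance(arg, list):
                return arg
            try:
                # Accept a Python-literal string like "[1, 2]"
                return ast.literal_eval(arg)
            except Exception:
                # Otherwise wrap the scalar in a single-item list
                return [arg]
        if target_type in ("any", "Any"):
            return arg
        return str(arg)
    except Exception:
        # On any conversion error, pass the raw value through unchanged
        return arg
```

Accepting both type objects and string names lets the same code path serve annotation-derived types and the string entries parsed out of `in_types`.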
project/unified_server.py CHANGED
@@ -5,13 +5,17 @@ import gradio as gr
  import uvicorn
  import os
  import sys
+ import logging
 
  # Ensure local modules are importable
  sys.path.insert(0, os.path.dirname(os.path.abspath(__file__)))
  import test
  import chat
 
- app = FastAPI()
+ app = FastAPI(
+ # Respect Hugging Face Spaces reverse proxy path/prefix when present
+ root_path=os.environ.get("SPACE_ROOT_PATH", "")
+ )
 
  app.add_middleware(
  CORSMiddleware,
@@ -32,8 +36,15 @@ async def update_chat_route(request: Request):
 
  @app.get("/unified_stream")
  async def unified_stream_route(request: Request):
- session_id = request.query_params.get("session_id")
- return await chat.unified_stream(session_id=session_id)
+ # Diagnostics: log what we see at the unified server layer
+ q_sid = request.query_params.get("session_id")
+ c_sid = request.cookies.get("mcp_blockly_session_id")
+ hdr_sid = request.headers.get("x-session-id") or request.headers.get("session-id")
+ root_path = request.scope.get("root_path")
+ logging.getLogger("unified_server").info(
+ f"[unified_stream_route] query_sid={q_sid}, cookie_sid={c_sid}, header_sid={hdr_sid}, root_path={root_path}"
+ )
+ return await chat.unified_stream(session_id=q_sid, request=request)
 
  @app.post("/request_result")
  async def request_result_route(request: Request):
@@ -83,4 +94,12 @@ if __name__ == "__main__":
  print(f"[UNIFIED] running on http://127.0.0.1:{port}")
  print(f"- /gradio-test")
  print(f"- /gradio-chat")
- uvicorn.run(app, host="0.0.0.0", port=port, log_level="critical")
+ uvicorn.run(
+ app,
+ host="0.0.0.0",
+ port=port,
+ log_level="critical",
+ # Tell Uvicorn to trust proxy headers so scheme becomes https on HF Spaces
+ proxy_headers=True,
+ forwarded_allow_ips="*",
+ )
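The new `uvicorn.run` flags make the server trust `X-Forwarded-*` headers from the Spaces reverse proxy, so the app sees `https` instead of the plain internal scheme. A minimal pure-Python illustration of what trusting `X-Forwarded-Proto` changes — the `effective_scheme` helper is hypothetical, not part of Uvicorn's API:

```python
def effective_scheme(headers, default="http", trust_proxy=False):
    # With proxy_headers=True, Uvicorn rewrites the request scheme from
    # X-Forwarded-Proto (for allowed client IPs); without it, the app
    # only ever sees the plain default scheme.
    if trust_proxy:
        forwarded = headers.get("x-forwarded-proto")
        if forwarded:
            # Header may carry a comma-separated chain; first hop wins
            return forwarded.split(",")[0].strip()
    return default

proxied_headers = {"x-forwarded-proto": "https"}
```

`forwarded_allow_ips="*"` widens the set of client IPs whose forwarded headers are honored, which is what makes this work behind the Spaces proxy.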
thumbnail.png ADDED

Git LFS Details

  • SHA256: 6b143b64f668095174ad943ddcd3e0422fa13d252c83698546fcea5ea16a9fd0
  • Pointer size: 131 Bytes
  • Size of remote file: 200 kB