Spaces: Running
Ali Hmaou committed · a0a02f2
1 Parent(s): b33c1fc
Version 1.94RC
Browse files
- .gitattributes +3 -0
- README.md +64 -65
- app.py +2 -1
- assets/images/MCEPTION FOND.jpeg +3 -0
- assets/images/header_bg.jpeg +3 -0
- src/core/builder/code_generator.py +28 -28
- src/core/builder/proposal_generator.py +10 -10
- src/core/deployer/huggingface.py +26 -26
- src/core/state/session_manager.py +4 -4
- src/mcp_server/playground.py +34 -34
- src/mcp_server/server.py +131 -99
- src/mcp_server/tools.py +34 -34
.gitattributes
CHANGED
@@ -33,3 +33,6 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+*.jpeg filter=lfs diff=lfs merge=lfs -text
+*.jpg filter=lfs diff=lfs merge=lfs -text
+*.png filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -11,71 +11,70 @@ license: mit
- [previous French README, removed lines truncated in this view, including:]
- * **Déploiement Automatisé sur HF Spaces** : En un clic (ou un appel d'outil), votre serveur MCP est en ligne, hébergé gratuitement sur Hugging Face Spaces.
- * **Compatible MCP Natif** : Les outils générés sont immédiatement utilisables via le protocole MCP exposé nativement par Gradio 6.
@@ -92,44 +91,44 @@ Pour transformer votre Claude Desktop en "usine à outils", ajoutez la configura
- [previous French lines truncated in this view]
 short_description: Create and deploy MCP servers using natural language.
 ---
 
+# 🏭 MCEPTION: Your MCP Server Agency
+
+**MCEPTION** is an MCP (Model Context Protocol) server powered by **Gradio** that generates, tests, and deploys other MCP servers on the fly.
+
+It is an "MCP server creator of MCP servers". It enables AI agents (like Claude, ChatGPT, or autonomous `smolagents`) or human users to create new custom MCP servers and associated tools, then deploy them instantly to Hugging Face Spaces, recursively extending their own capabilities.
+
+## 🌟 Why MCEPTION?
+
+With the widespread adoption of conversational tools and the coming era of the generative internet, it is becoming essential for companies and administrations to control the information these AIs communicate to the public.
+
+The MCP protocol, combined with the reasoning capabilities of LLM agents, offers possibilities that are sometimes hard to imagine without implementing and testing them. It can also be disappointing at first and require numerous settings and adjustments: while the MCP protocol is simple in its design, it takes great care to work really well.
+
+MCEPTION answers a current need: giving organizations the tools to test the impact of the MCP protocol on their activity.
+
+This tool is designed to leverage the power of the agentic structure calling the MCP server (generally through top-tier models), combined with specialized LLMs such as GPT OSS or Kimi K2 Thinking for reactive coding during the tool design phase.
+
+With MCEPTION, organizations can try MCP directly: for example, by encapsulating existing APIs, creating servers around data files, or exposing hand-coded business rules that let agents answer deterministically, and in a controlled manner, on the subjects that concern them.
+
+## An Intuitive Workflow for Humans and AIs
+
+The tool offers an intuitive creation workflow for humans and AI agents alike. In particular, a built-in smolagents instance lets you test the generated server directly: you see the tools appear and can try them out.
+
+The ideal environment for testing generated MCP servers remains [Hugging Chat](https://huggingface.co/chat/) for open-source models, thanks to the hot reload of MCP servers and their tools.
+
+On the proprietary side, the local Claude client works very well, and the application provides the configurations needed to integrate the generated tools into Claude.
+
+## Thanks, Gradio
+
+This project illustrates Gradio's impressive ability to expose MCP services; it fits perfectly with the Hugging Face Spaces infrastructure, which today is very well complemented by inference services through the Inference API and Hugging Chat.
+
+## 🚀 Key Features
+
+* **Natural Language Initialization**: Simply describe the tool: *"I want a tool that gives me the weather in Paris"* or *"Create a currency converter"*.
+* **Intelligent Code Generation**: Uses powerful LLMs (GPT OSS, Kimi K2 Thinking) via inference providers routed by Hugging Face (Together AI, Hyperbolic) to write the Python code.
+* **Sandbox & Validation**: Check and modify the proposed code before deployment.
+* **Automated Deployment on HF Spaces**: In one click (or one tool call), your MCP server is online, hosted for free on Hugging Face Spaces.
+* **Native MCP Compatibility**: Generated tools are immediately usable via the MCP protocol natively exposed by Gradio 6.
 
 ## 🛠️ Architecture
 
+## 📚 Installation & Getting Started
+
+### 1. Quick Deployment on Hugging Face Spaces
+
+This is the recommended way to use MCEPTION without a local installation.
+
+1. **Duplicate the Space**:
+   * Click the menu (three dots) at the top right of this page, then **"Duplicate this Space"**.
+   * Choose a name for your Space and validate.
+
+2. **Token Configuration (Essential)**:
+   * Once the Space is duplicated, go to the **Settings** tab of your new Space.
+   * Scroll down to the **Variables and secrets** section.
+   * Click **New secret**.
+   * **Name**: `HF_TOKEN`
+   * **Value**: Your Hugging Face token (make sure it has **WRITE** permissions). You can create one here: [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens).
+
+*The Space will restart automatically with your rights, ready to create new Spaces for you!*
+
+### 2. Usage with Claude Desktop
+
+To turn your Claude Desktop into a "tool factory", add the following configuration to your `claude_desktop_config.json` file:
 
 ```json
 {
   ...
 }
 ```
 
+> **Note**: If you are using MCEPTION from a public Hugging Face Space, replace the local URL with your Space URL (e.g. `https://alihmaou-metamcp-proto.hf.space/gradio_api/mcp/sse`).
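The JSON block above is collapsed in this diff view. For reference, a typical `claude_desktop_config.json` entry pointing at a remote MCP SSE endpoint looks like the sketch below; the server name `mception` and the `mcp-remote` proxy command are illustrative assumptions, not contents of this commit:

```json
{
  "mcpServers": {
    "mception": {
      "command": "npx",
      "args": [
        "mcp-remote",
        "https://alihmaou-metamcp-proto.hf.space/gradio_api/mcp/sse"
      ]
    }
  }
}
```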
+## 🤖 Guide for Agents (API)
+
+MCEPTION exposes a suite of MCP tools that let an autonomous agent manage the entire tool-creation lifecycle.
+
+### Typical Workflow
+
+1. **Configuration** (optional): `step_0_configuration` sets the HF user or token if not present in the environment.
+2. **Design**: `expert_step1_propose_implementation` or `step_1_initialisation_and_proposal`. The agent submits a description; the server returns a `draft_id` and a code proposal.
+3. **Validation**: `expert_step2_define_logic`. The agent confirms the code and dependencies.
+4. **Deployment**: `step_3_deployment`. The server creates the Hugging Face Space.
+5. **Usage**: The agent receives the config to connect to this new server.
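Once step 5 completes, the new server's endpoint follows the pattern quoted in the note above. A small sketch of the URL construction, assuming Gradio's default MCP route and Hugging Face's standard Space subdomain scheme (both assumptions, not part of this commit):

```python
def mcp_sse_url(space_id: str) -> str:
    """Build the MCP SSE endpoint for a deployed Space id like 'user/space-name'."""
    user, name = space_id.split("/")
    # Space subdomains are lowercase, with '_' and '.' flattened to '-'
    subdomain = f"{user}-{name}".lower().replace("_", "-").replace(".", "-")
    return f"https://{subdomain}.hf.space/gradio_api/mcp/sse"

print(mcp_sse_url("alihmaou/metamcp-proto"))
# https://alihmaou-metamcp-proto.hf.space/gradio_api/mcp/sse
```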
+### List of Exposed Tools
+
+| Tool | Description |
 | :--- | :--- |
+| `step_0_configuration` | Configures the environment (HF user, token, default Space). |
+| `step_1_initialisation_and_proposal` | (UI+API) Initializes a project and proposes an implementation via LLM. |
+| `step_2_logic_definition` | (UI+API) Validates and saves the tool's Python code. |
+| `step_3_deployment` | (UI+API) Deploys the tool to a Hugging Face Space. |
+| `expert_step1_propose_implementation` | (Expert API) Generates just the code without UI state (useful for iterating). |
+| `expert_step2_define_logic` | (Expert API) Defines the logic with complex JSON inputs. |
+| `util_delete_tool` | Deletes a tool from an existing Space. |
+| `util_get_tool_code` | Retrieves the source code of an existing tool for inspection. |
+## 💡 Example Prompt for Claude
+
+Once MCEPTION is connected to Claude, you can tell it:
+
+> "Create me a tool that retrieves the current Bitcoin price using the CoinGecko API, and deploy it to my Hugging Face space."
+
+Claude will then:
+
+1. Call `step_1` to generate the Python code (`requests.get(...)`).
+2. Ask you for confirmation, or call `step_2` to validate.
+3. Call `step_3` to deploy.
+4. Give you the config to use this new "Bitcoin Price" tool immediately!
 
 ---
+
+*Made with ❤️, Python, Gradio and a lot of Agents.*
app.py
CHANGED
@@ -16,4 +16,5 @@ if __name__ == "__main__":
 # Lancement du serveur compatible Hugging Face Spaces
 # mcp_server=True active les endpoints MCP
 # show_error=True permet de voir les erreurs Python dans l'interface (utile pour le débug)
-[previous launch line truncated in this view]
+# allowed_paths=["assets"] permet de servir les fichiers statiques du dossier assets
+demo.launch(server_name="0.0.0.0", server_port=7860, mcp_server=True, show_error=True, allowed_paths=["assets"])
assets/images/MCEPTION FOND.jpeg
ADDED
Git LFS Details

assets/images/header_bg.jpeg
ADDED
Git LFS Details
src/core/builder/code_generator.py
CHANGED
- [removed lines: previous French docstrings and comments, truncated in this diff view]
@@ -4,28 +4,28 @@ class CodeGenerator:
     @staticmethod
     def generate_gradio_app(function_code: str, inputs: dict, output_desc: str) -> str:
         """
+        Generates the complete code for a Gradio application from a function snippet.
 
         Args:
+            function_code: The source code of the main function (e.g. def count_r(word): ...)
+            inputs: Dict describing inputs (e.g. {"word": "text"})
+            output_desc: Description of the output
 
         Returns:
+            The complete source code for app.py
         """
 
+        # Simple analysis to find the function name (very naive for now)
+        # We assume the code contains "def function_name("
         import re
         match = re.search(r"def\s+([a-zA-Z_][a-zA-Z0-9_]*)\s*\(", function_code)
         func_name = match.group(1) if match else "main_function"
 
+        # Mapping MCP/JSON types to Gradio types
+        # For simplicity, we map everything to Text for now or use direct strings
+        # TODO: Improve type mapping
 
+        # Code construction
         template = f"""
 import gradio as gr
 import json
@@ -35,14 +35,14 @@ import json
 
 # --- Gradio Interface ---
 
+# Wrapper to handle types if necessary
 def wrapper(*args):
     result = {func_name}(*args)
     return str(result) # Force string output for simplicity
 
+# Gradio inputs configuration
+# Note: This part is generic for the MVP.
+# Ideally, iterate over 'inputs' to create corresponding Gradio components.
 
 iface = gr.Interface(
     fn=wrapper,
@@ -60,7 +60,7 @@ if __name__ == "__main__":
     @staticmethod
     def generate_tool_module(function_code: str, inputs: dict, output_desc: str, tool_name: str, output_component: str = "text") -> str:
         """
+        Generates a Python module containing the tool logic and an interface factory.
         """
         import re
         match = re.search(r"def\s+([a-zA-Z_][a-zA-Z0-9_]*)\s*\(", function_code)
@@ -68,7 +68,7 @@ if __name__ == "__main__":
 
         inputs_keys_str = str(list(inputs.keys()))
 
+        # Output component mapping
         if output_component == "image":
             gradio_output = 'gr.Image(type="filepath", label="__OUTPUT_DESC__")'
         elif output_component == "audio":
@@ -107,7 +107,7 @@ def create_interface():
         code = code.replace("__FUNCTION_CODE__", function_code)
         code = code.replace("__FUNC_NAME__", func_name)
         code = code.replace("__INPUTS_KEYS__", inputs_keys_str)
+        code = code.replace("__GRADIO_OUTPUT__", gradio_output)  # Dynamic component injection
         code = code.replace("__OUTPUT_DESC__", output_desc)
         code = code.replace("__TOOL_NAME__", tool_name)
@@ -116,8 +116,8 @@ def create_interface():
     @staticmethod
     def generate_master_app() -> str:
         """
+        Generates the main app.py file.
+        Uses a standard approach with simple dynamic import.
         """
         template = """
 import gradio as gr
@@ -128,13 +128,13 @@ import importlib
 # Configuration
 TOOLS_DIR = "tools"
 
+# Ensure tools directory exists and is a package
 if not os.path.exists(TOOLS_DIR):
     os.makedirs(TOOLS_DIR, exist_ok=True)
     with open(os.path.join(TOOLS_DIR, "__init__.py"), "w") as f:
         pass
 
+# Add current directory to path so 'import tools.xxx' works
 sys.path.append(os.path.dirname(os.path.abspath(__file__)))
 
 interfaces = []
@@ -143,7 +143,7 @@ names = []
 print(f"🚀 Starting Meta-MCP Toolbox...")
 print(f"📂 Scanning '{TOOLS_DIR}' directory...")
 
+# Scan and import tools
 try:
     for filename in sorted(os.listdir(TOOLS_DIR)):
         if filename.endswith(".py") and not filename.startswith("_"):
@@ -152,13 +152,13 @@ try:
 
             try:
                 print(f"  👉 Importing {full_module_name}...")
+                # Standard dynamic import
+                # Use reload to ensure latest version is taken if restarted
                 module = importlib.import_module(full_module_name)
                 importlib.reload(module)
 
                 if hasattr(module, "create_interface"):
+                    # Create Gradio interface for this tool
                     tool_interface = module.create_interface()
                     interfaces.append(tool_interface)
                     names.append(module_name)
@@ -173,7 +173,7 @@ try:
 except Exception as e:
     print(f"Error scanning tools directory: {e}")
 
+# Final interface construction
 if not interfaces:
     demo = gr.Interface(
         fn=lambda x: "No tools loaded yet. Add a tool via Meta-MCP!",
@@ -193,6 +193,6 @@ if __name__ == "__main__":
     @staticmethod
     def generate_mcp_server_code(function_code: str) -> str:
         """
+        Generates an MCP server (FastMCP) instead of Gradio (Future feature).
        """
        pass
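The naive function-name extraction used in `generate_gradio_app` above can be exercised on its own:

```python
import re

snippet = "def count_r(word):\n    return word.count('r')"
match = re.search(r"def\s+([a-zA-Z_][a-zA-Z0-9_]*)\s*\(", snippet)
func_name = match.group(1) if match else "main_function"
print(func_name)  # count_r
```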
src/core/builder/proposal_generator.py
CHANGED
- [removed lines: previous French comments and docstrings, truncated in this diff view]
@@ -10,18 +10,18 @@ class ProposalGenerator:
 
     def generate_from_description(self, project_name: str, description: str, model: str = "Qwen/Qwen2.5-Coder-32B-Instruct", provider: str = None):
         """
+        Generates a code and configuration proposal from a description.
+        Uses chat_completion for better compatibility.
         """
 
+        # Dynamic client configuration if necessary (e.g. provider change)
+        # Note: InferenceClient is lightweight, we can instantiate it on demand or use the existing one
+        # If provider is specified, use it. Otherwise let HF choose.
         # "None" string from UI should be converted to None type
         if provider == "None" or provider == "":
             provider = None
 
+        print(f"🤖 LLM Call with Model: {model}, Provider: {provider}")
 
         # Use current environment variable if available (supports UI updates), otherwise fallback to init token
         current_token = os.environ.get("HF_TOKEN", self.token)
@@ -72,18 +72,18 @@ Do not use markdown formatting (no ```json). Just the raw JSON string.
                 stream=False
             )
 
+            # Content extraction
             content = response.choices[0].message.content.strip()
 
+            # Robust JSON extraction via Regex
             import re
+            # Finds the first { and the last }
             match = re.search(r'\{.*\}', content, re.DOTALL)
             if match:
                 json_content = match.group(0)
                 return json.loads(json_content)
 
+            # Fallback: direct parsing attempt if regex fails (e.g. list or other format)
             return json.loads(content)
 
         except Exception as e:
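The "first `{` to last `}`" extraction strategy commented in the hunk above can be sketched in isolation:

```python
import json
import re

# Simulated LLM reply: JSON surrounded by chatter
content = 'Here is the config:\n{"name": "weather_tool", "deps": ["requests"]}\nDone.'

# Greedy match with DOTALL spans from the first { to the last }
match = re.search(r'\{.*\}', content, re.DOTALL)
data = json.loads(match.group(0)) if match else json.loads(content)
print(data["name"])  # weather_tool
```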
src/core/deployer/huggingface.py
CHANGED
- [removed lines: previous French comments and docstrings, truncated in this diff view]
@@ -5,18 +5,18 @@ from huggingface_hub import HfApi, get_token
 class HFDeployer:
     def __init__(self, token: Optional[str] = None):
         """
+        Initializes the Hugging Face deployer.
+        If token is None, tries to retrieve it from HF_TOKEN environment variable
+        or local cache.
         """
         self.token = token or os.environ.get("HF_TOKEN") or get_token()
         if not self.token:
+            raise ValueError("No Hugging Face token found. Please set HF_TOKEN.")
 
         self.api = HfApi(token=self.token)
 
     def _sanitize_repo_id(self, input_name: str, current_username: str) -> str:
+        """Sanitizes the repo/space name to handle URLs and partial formats."""
         input_name = input_name.strip()
 
         # Cas URL complète : https://huggingface.co/spaces/user/repo
@@ -44,30 +44,30 @@ class HFDeployer:
                      sdk: str = "gradio",
                      private: bool = False) -> str:
         """
+        Creates a Space and deploys files.
 
         Args:
+            space_name: Space name (e.g. 'strawberry-counter')
+            files: Dictionary {filename: content} (e.g. {'app.py': '...'})
+            username: Target username or organization. If None, uses current user.
+            sdk: 'gradio', 'streamlit', or 'docker'
+            private: If True, creates a private repo
 
         Returns:
+            The deployed Space URL.
         """
 
+        # 1. Determine full repo_id
         if not username:
             user_info = self.api.whoami()
             username = user_info["name"]
 
+        # Use sanitization method
         repo_id = self._sanitize_repo_id(space_name, username)
 
+        print(f"🚀 Preparing deployment to {repo_id}...")
 
+        # 2. Repo creation (idempotent: does not crash if already exists)
         try:
             self.api.create_repo(
                 repo_id=repo_id,
@@ -76,14 +76,14 @@ class HFDeployer:
                 private=private,
                 exist_ok=True
             )
+            print(f"✅ Repo {repo_id} ready.")
         except Exception as e:
+            raise RuntimeError(f"Error creating repo: {str(e)}")
 
+        # 3. File upload
         operations = []
         for filename, content in files.items():
+            # Encode content to bytes for upload
             content_bytes = content.encode("utf-8")
             operations.append(
                 self.api.run_as_future(
@@ -101,7 +101,7 @@ class HFDeployer:
 
         try:
             for filename, content in files.items():
+                print(f"📤 Uploading {filename}...")
                 content_bytes = content.encode("utf-8")
                 self.api.upload_file(
                     path_or_fileobj=content_bytes,
@@ -110,13 +110,13 @@ class HFDeployer:
                     repo_type="space",
                     commit_message=f"Deploy {filename} via Meta-MCP"
                 )
+            print("✅ All files uploaded.")
         except Exception as e:
+            raise RuntimeError(f"Error uploading files: {str(e)}")
 
+        # 4. URL Construction
+        # Standard URL is https://huggingface.co/spaces/USERNAME/SPACE_NAME
         space_url = f"https://huggingface.co/spaces/{repo_id}"
 
+        print(f"🎉 Deployment finished! Space accessible here: {space_url}")
        return space_url
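`_sanitize_repo_id` is only partially visible in this diff (the full-URL case is hinted at by the surviving comment). A minimal standalone sketch consistent with its docstring; every branch here is an assumption rather than code taken from the commit:

```python
def sanitize_repo_id(input_name: str, current_username: str) -> str:
    """Hypothetical sketch: normalize a Space reference to 'user/repo' form."""
    name = input_name.strip()
    # Full URL case: https://huggingface.co/spaces/user/repo
    prefix = "https://huggingface.co/spaces/"
    if name.startswith(prefix):
        return name[len(prefix):]
    # Already-qualified 'user/repo'
    if "/" in name:
        return name
    # Bare name: prepend the current user
    return f"{current_username}/{name}"

print(sanitize_repo_id("strawberry-counter", "alihmaou"))  # alihmaou/strawberry-counter
```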
src/core/state/session_manager.py
CHANGED
- [removed lines: previous docstring stubs, truncated in this diff view]
@@ -18,7 +18,7 @@ class SessionManager:
         self._drafts: Dict[str, ProjectDraft] = {}
 
     def create_draft(self, name: str, description: str, type: str = "adhoc") -> ProjectDraft:
+        """Creates a new project draft."""
         draft_id = str(uuid.uuid4())
         draft = ProjectDraft(
             draft_id=draft_id,
@@ -33,11 +33,11 @@ class SessionManager:
         return draft
 
     def get_draft(self, draft_id: str) -> Optional[ProjectDraft]:
+        """Retrieves a draft by its ID."""
         return self._drafts.get(draft_id)
 
     def update_code(self, draft_id: str, filename: str, content: str) -> bool:
+        """Updates a code file in the draft."""
         draft = self.get_draft(draft_id)
         if not draft:
             return False
@@ -45,5 +45,5 @@ class SessionManager:
         return True
 
     def list_drafts(self) -> Dict[str, str]:
+        """Lists all active drafts."""
        return {d.draft_id: d.name for d in self._drafts.values()}
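The draft lifecycle above can be exercised with a minimal stand-in. `ProjectDraft`'s fields and the `files` attribute are assumptions reconstructed from the methods visible in this diff, not the project's actual dataclass:

```python
import uuid
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ProjectDraft:
    # Hypothetical minimal fields inferred from the visible SessionManager methods
    draft_id: str
    name: str
    description: str
    files: Dict[str, str] = field(default_factory=dict)

class SessionManager:
    def __init__(self):
        self._drafts: Dict[str, ProjectDraft] = {}

    def create_draft(self, name: str, description: str) -> ProjectDraft:
        draft_id = str(uuid.uuid4())
        draft = ProjectDraft(draft_id, name, description)
        self._drafts[draft_id] = draft
        return draft

    def get_draft(self, draft_id: str) -> Optional[ProjectDraft]:
        return self._drafts.get(draft_id)

    def update_code(self, draft_id: str, filename: str, content: str) -> bool:
        draft = self.get_draft(draft_id)
        if not draft:
            return False
        draft.files[filename] = content
        return True

    def list_drafts(self) -> Dict[str, str]:
        return {d.draft_id: d.name for d in self._drafts.values()}

sm = SessionManager()
d = sm.create_draft("btc-price", "Fetch the Bitcoin price")
sm.update_code(d.draft_id, "app.py", "print('hi')")
print(sm.list_drafts()[d.draft_id])  # btc-price
```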
src/mcp_server/playground.py
CHANGED
@@ -8,18 +8,18 @@ from contextlib import redirect_stdout
 from smolagents import InferenceClientModel, CodeAgent, Tool
 
 def remove_ansi_codes(text):
-    """
     ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
     return ansi_escape.sub('', text)
 
-# Note: MCPClient
-#
-#
 try:
     from smolagents import MCPClient
 except ImportError:
-    # Fallback
-    #
     MCPClient = None
 
 class PlaygroundManager:
@@ -29,29 +29,29 @@ class PlaygroundManager:
         self.mcp_client = None
 
     def load_mcp_tools(self, mcp_url: str):
-        """
         try:
-            #
             if self.mcp_client:
-                # self.mcp_client.disconnect() #
                 pass
 
-            #
-            #
-            #
             if mcp_url.endswith("/sse"):
                 mcp_url = mcp_url[:-4]
 
-            #
-            # Note:
-            # structured_output=False
             self.mcp_client = MCPClient({"url": mcp_url}, structured_output=False)
 
-            #
             self.tools = self.mcp_client.get_tools()
 
-            # Configuration
-            #
             token = os.environ.get("HF_TOKEN")
             if not token:
                 return pd.DataFrame({"Error": ["HF_TOKEN env var is missing"]}), "Error: HF_TOKEN missing"
@@ -59,10 +59,10 @@ class PlaygroundManager:
             model = InferenceClientModel(token=token)
             self.agent = CodeAgent(tools=self.tools, model=model)
 
-            #
             rows = []
             for tool in self.tools:
-                #
                 input_desc = str(tool.inputs) if hasattr(tool, 'inputs') else "N/A"
                 rows.append({
                     "Tool name": tool.name,
@@ -71,33 +71,33 @@ class PlaygroundManager:
                 })
 
             df = pd.DataFrame(rows)
-            return df, f"
 
         except Exception as e:
             import traceback
             traceback.print_exc()
-            return pd.DataFrame({"Error": [str(e)]}), f"
 
     def chat(self, message: str, history: list):
-        """
         if not self.agent:
-            return "⚠️
 
-        # Capture
         f = io.StringIO()
         try:
             with redirect_stdout(f):
-                #
-                # Note:
                 response = self.agent.run(message)
 
-            #
             raw_logs = f.getvalue()
             clean_logs = remove_ansi_codes(raw_logs)
|
| 97 |
|
| 98 |
-
#
|
| 99 |
if clean_logs:
|
| 100 |
-
formatted_response = f"**💭
|
| 101 |
else:
|
| 102 |
formatted_response = str(response)
|
| 103 |
|
|
@@ -106,14 +106,14 @@ class PlaygroundManager:
|
|
| 106 |
except Exception as e:
|
| 107 |
raw_logs = f.getvalue()
|
| 108 |
clean_logs = remove_ansi_codes(raw_logs)
|
| 109 |
-
return f"
|
| 110 |
|
| 111 |
-
# Singleton
|
| 112 |
-
#
|
| 113 |
playground = PlaygroundManager()
|
| 114 |
|
| 115 |
def get_playground_ui_handlers():
|
| 116 |
-
"""
|
| 117 |
|
| 118 |
def reload_tools(url):
|
| 119 |
return playground.load_mcp_tools(url)
|
|
|
|
| 8 |
from smolagents import InferenceClientModel, CodeAgent, Tool
|
| 9 |
|
| 10 |
def remove_ansi_codes(text):
|
| 11 |
+
"""Removes ANSI escape codes (colors) from text."""
|
| 12 |
ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
|
| 13 |
return ansi_escape.sub('', text)
|
| 14 |
|
| 15 |
+
# Note: MCPClient might not be directly exposed by smolagents in all versions.
|
| 16 |
+
# If import fails, a different approach or version check might be needed.
|
| 17 |
+
# User provided `from smolagents import ..., MCPClient`, so we follow this path.
|
| 18 |
try:
|
| 19 |
from smolagents import MCPClient
|
| 20 |
except ImportError:
|
| 21 |
+
# Fallback or mock if MCPClient is not yet in the installed version
|
| 22 |
+
# Assumed available for now, per the import path requested by the user
|
| 23 |
MCPClient = None
|
| 24 |
|
| 25 |
class PlaygroundManager:
|
|
|
|
| 29 |
self.mcp_client = None
|
| 30 |
|
| 31 |
def load_mcp_tools(self, mcp_url: str):
|
| 32 |
+
"""Connects the MCP client to the given URL and loads tools."""
|
| 33 |
try:
|
| 34 |
+
# Cleanup old client
|
| 35 |
if self.mcp_client:
|
| 36 |
+
# self.mcp_client.disconnect() # If method exists
|
| 37 |
pass
|
| 38 |
|
| 39 |
+
# Initialize MCP Client
|
| 40 |
+
# User requested to ignore SSE mode and use streamable HTTP
|
| 41 |
+
# Clean URL if it still contains /sse by mistake
|
| 42 |
if mcp_url.endswith("/sse"):
|
| 43 |
mcp_url = mcp_url[:-4]
|
| 44 |
|
| 45 |
+
# Pass URL without forcing SSE transport, smolagents should handle it
|
| 46 |
+
# Note: Pass URL directly if possible, or in a dict depending on API
|
| 47 |
+
# structured_output=False to avoid FutureWarning and stay compatible
|
| 48 |
self.mcp_client = MCPClient({"url": mcp_url}, structured_output=False)
|
| 49 |
|
| 50 |
+
# Retrieve tools
|
| 51 |
self.tools = self.mcp_client.get_tools()
|
| 52 |
|
| 53 |
+
# Agent Configuration
|
| 54 |
+
# Use HF_TOKEN for inference model
|
| 55 |
token = os.environ.get("HF_TOKEN")
|
| 56 |
if not token:
|
| 57 |
return pd.DataFrame({"Error": ["HF_TOKEN env var is missing"]}), "Error: HF_TOKEN missing"
|
|
|
|
| 59 |
model = InferenceClientModel(token=token)
|
| 60 |
self.agent = CodeAgent(tools=self.tools, model=model)
|
| 61 |
|
| 62 |
+
# Create DataFrame for display
|
| 63 |
rows = []
|
| 64 |
for tool in self.tools:
|
| 65 |
+
# Simplified input handling for display
|
| 66 |
input_desc = str(tool.inputs) if hasattr(tool, 'inputs') else "N/A"
|
| 67 |
rows.append({
|
| 68 |
"Tool name": tool.name,
|
|
|
|
| 71 |
})
|
| 72 |
|
| 73 |
df = pd.DataFrame(rows)
|
| 74 |
+
return df, f"Success! {len(self.tools)} tools loaded from {mcp_url}"
|
| 75 |
|
| 76 |
except Exception as e:
|
| 77 |
import traceback
|
| 78 |
traceback.print_exc()
|
| 79 |
+
return pd.DataFrame({"Error": [str(e)]}), f"Connection error: {str(e)}"
|
| 80 |
|
| 81 |
def chat(self, message: str, history: list):
|
| 82 |
+
"""Executes user message via agent capturing reflection."""
|
| 83 |
if not self.agent:
|
| 84 |
+
return "⚠️ Please load a valid MCP server first."
|
| 85 |
|
| 86 |
+
# Capture stdout (smolagents reflection logs)
|
| 87 |
f = io.StringIO()
|
| 88 |
try:
|
| 89 |
with redirect_stdout(f):
|
| 90 |
+
# Run smolagents agent
|
| 91 |
+
# Note: Real streaming of reflection would require deeper integration with smolagents
|
| 92 |
response = self.agent.run(message)
|
| 93 |
|
| 94 |
+
# Clean logs (remove ANSI colors that break Markdown)
|
| 95 |
raw_logs = f.getvalue()
|
| 96 |
clean_logs = remove_ansi_codes(raw_logs)
|
| 97 |
|
| 98 |
+
# Format response with cleaned reflection logs
|
| 99 |
if clean_logs:
|
| 100 |
+
formatted_response = f"**💭 Agent Reflection:**\n```text\n{clean_logs}\n```\n\n**✅ Response:**\n{str(response)}"
|
| 101 |
else:
|
| 102 |
formatted_response = str(response)
|
| 103 |
|
|
|
|
| 106 |
except Exception as e:
|
| 107 |
raw_logs = f.getvalue()
|
| 108 |
clean_logs = remove_ansi_codes(raw_logs)
|
| 109 |
+
return f"Error executing agent: {str(e)}\n\nPartial logs:\n{clean_logs}"
|
| 110 |
|
| 111 |
+
# Singleton to manage playground state in Gradio instance
|
| 112 |
+
# Warning: In a real multi-user deployment, state should be managed by gr.State
|
| 113 |
playground = PlaygroundManager()
|
| 114 |
|
| 115 |
def get_playground_ui_handlers():
|
| 116 |
+
"""Returns wrapper functions for Gradio UI."""
|
| 117 |
|
| 118 |
def reload_tools(url):
|
| 119 |
return playground.load_mcp_tools(url)
|
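The `remove_ansi_codes` helper added above exists because `redirect_stdout` captures the agent's colorized terminal output, and raw ANSI escape sequences would break the Markdown rendering of the reflection logs. A self-contained demo using the exact regex from the diff:

```python
import re

def remove_ansi_codes(text: str) -> str:
    """Strips ANSI escape sequences (colors, cursor moves) so captured
    agent logs render cleanly inside a Markdown code fence."""
    # Same pattern as in playground.py above: ESC followed by either a
    # single byte in @-Z\-_ or a CSI sequence [ ... final byte
    ansi_escape = re.compile(r'\x1B(?:[@-Z\\-_]|\[[0-?]*[ -/]*[@-~])')
    return ansi_escape.sub('', text)

colored = "\x1b[1;32mDone\x1b[0m in 3 steps"
print(remove_ansi_codes(colored))  # → Done in 3 steps
```

This removes only the escape sequences themselves; the surrounding text, including legitimate non-ASCII characters, is left untouched.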
src/mcp_server/server.py
CHANGED
|
@@ -47,12 +47,12 @@ def step_1_initialisation_and_proposal(project_name, description, model_id, prov
|
|
| 47 |
init_result = tools.init_project(project_name, description, type="adhoc")
|
| 48 |
draft_id = init_result.get("draft_id", "")
|
| 49 |
|
| 50 |
-
# 2.
|
| 51 |
-
gr.Info("
|
| 52 |
-
print(f"🤖
|
| 53 |
proposal = proposal_generator.generate_from_description(project_name, description, model=model_id, provider=provider_id)
|
| 54 |
|
| 55 |
-
gr.Info("
|
| 56 |
|
| 57 |
# 3. Return the data to update the UI
|
| 58 |
# Handle the case where 'requirements' is not returned by the LLM
|
|
@@ -97,9 +97,9 @@ def step_2_logic_definition(draft_id: str, python_code: str, inputs: Any, output
|
|
| 97 |
result = tools.define_logic(draft_id, python_code, inputs, output_desc, requirements, output_component)
|
| 98 |
|
| 99 |
if "error" not in result:
|
| 100 |
-
gr.Info("Code
|
| 101 |
else:
|
| 102 |
-
gr.Info(f"
|
| 103 |
|
| 104 |
return result
|
| 105 |
|
|
@@ -112,8 +112,8 @@ def step_3_deployment(draft_id):
|
|
| 112 |
Args:
|
| 113 |
draft_id: The unique ID of the project draft (from Step 1).
|
| 114 |
"""
|
| 115 |
-
gr.Info("
|
| 116 |
-
# Simplification:
|
| 117 |
result = tools.deploy_to_space(draft_id, visibility="public", space_target="new", target_space_name=None)
|
| 118 |
|
| 119 |
status_msg = ""
|
|
@@ -123,9 +123,9 @@ def step_3_deployment(draft_id):
|
|
| 123 |
|
| 124 |
if "error" not in result:
|
| 125 |
space_url_val = result.get('url', '')
|
| 126 |
-
gr.Info(f"
|
| 127 |
|
| 128 |
-
status_msg = "### 🚀
|
| 129 |
|
| 130 |
# Build the MCP URL
|
| 131 |
mcp_url_val = space_url_val
|
|
@@ -169,8 +169,8 @@ def step_3_deployment(draft_id):
|
|
| 169 |
)
|
| 170 |
|
| 171 |
else:
|
| 172 |
-
gr.Info(f"
|
| 173 |
-
status_msg = f"### ❌
|
| 174 |
|
| 175 |
# Returns:
|
| 176 |
# 1. JSON result (for out_deploy)
|
|
@@ -188,7 +188,7 @@ reload_tools_handler, chat_response_handler = get_playground_ui_handlers()
|
|
| 188 |
|
| 189 |
def step_0_configuration(hf_user: str = None, hf_token: str = None, default_space: str = None):
|
| 190 |
"""
|
| 191 |
-
STEP 0: Configures the
|
| 192 |
|
| 193 |
This step is needed to set up the Hugging Face environment.
|
| 194 |
|
|
@@ -276,23 +276,29 @@ def util_get_tool_code(space_name: str, tool_name: str):
|
|
| 276 |
# --- UI construction ---
|
| 277 |
|
| 278 |
with gr.Blocks(title="MCePtion") as demo:
|
| 279 |
-
|
| 280 |
-
gr.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 281 |
|
| 282 |
with gr.Tab("0. Setup & How-to"):
|
| 283 |
-
gr.Markdown("## Configuration
|
| 284 |
with gr.Row():
|
| 285 |
hf_user_profile = gr.Textbox(
|
| 286 |
label="HF User Profile / Namespace",
|
| 287 |
value=os.environ.get("HF_USER", ""),
|
| 288 |
-
placeholder="
|
| 289 |
-
info="
|
| 290 |
)
|
| 291 |
default_mcp_space_name = gr.Textbox(
|
| 292 |
label="Default Toolbox Name",
|
| 293 |
value=os.environ.get("DEFAULT_SPACE", "mymcpserver"),
|
| 294 |
-
placeholder="
|
| 295 |
-
info="
|
| 296 |
)
|
| 297 |
|
| 298 |
with gr.Row():
|
|
@@ -300,46 +306,46 @@ with gr.Blocks(title="MCePtion") as demo:
|
|
| 300 |
label="HF Write Token (Optional override)",
|
| 301 |
type="password",
|
| 302 |
placeholder="hf_...",
|
| 303 |
-
info="
|
| 304 |
)
|
| 305 |
|
| 306 |
-
#
|
| 307 |
btn_save_config = gr.Button("Save Configuration")
|
| 308 |
|
| 309 |
def save_config_ui(user: str, space: str, token: str):
|
| 310 |
if user: os.environ["HF_USER"] = user
|
| 311 |
if space: os.environ["DEFAULT_SPACE"] = space
|
| 312 |
if token: os.environ["HF_TOKEN"] = token
|
| 313 |
-
gr.Info("Configuration
|
| 314 |
return f"Configuration saved! User: {user}, Default Space: {space}"
|
| 315 |
|
| 316 |
config_status = gr.Markdown("")
|
| 317 |
btn_save_config.click(save_config_ui, inputs=[hf_user_profile, default_mcp_space_name, hf_token_input], outputs=config_status)
|
| 318 |
|
| 319 |
-
gr.Markdown("##
|
| 320 |
|
| 321 |
with gr.Row():
|
| 322 |
-
with gr.Column("Guide
|
| 323 |
gr.Markdown("""
|
| 324 |
-
##
|
| 325 |
|
| 326 |
-
### 1.
|
| 327 |
-
*
|
| 328 |
-
*
|
| 329 |
-
*
|
| 330 |
|
| 331 |
-
### 2. Validation
|
| 332 |
-
*
|
| 333 |
-
*
|
| 334 |
-
*
|
| 335 |
|
| 336 |
-
### 3.
|
| 337 |
-
*
|
| 338 |
-
*
|
| 339 |
-
*
|
| 340 |
|
| 341 |
### 4. Test
|
| 342 |
-
*
|
| 343 |
""")
|
| 344 |
|
| 345 |
with gr.Column():
|
|
@@ -370,63 +376,63 @@ with gr.Blocks(title="MCePtion") as demo:
|
|
| 370 |
}
|
| 371 |
}
|
| 372 |
_c_claude_config_str = json.dumps(_c_claude_config, indent=2)
|
| 373 |
-
gr.Markdown("""##
|
| 374 |
gr.Code(label="URL of this space :", value=_c_space_url, language=None, interactive=False, lines=1)
|
| 375 |
gr.Code(label="URL of MCP endpoint :", value=_c_mcp_url, language=None, interactive=False, lines=1)
|
| 376 |
gr.Code(label="Claude Desktop Configuration", value=_c_claude_config_str, language="json", interactive=False)
|
| 377 |
|
| 378 |
|
| 379 |
|
| 380 |
-
with gr.Tab("1.
|
| 381 |
-
gr.Markdown("
|
| 382 |
|
| 383 |
-
project_name = gr.Textbox(label="
|
| 384 |
|
| 385 |
project_desc = gr.Textbox(
|
| 386 |
-
label="Description
|
| 387 |
lines=10,
|
| 388 |
-
placeholder="
|
| 389 |
)
|
| 390 |
|
| 391 |
-
with gr.Accordion("
|
| 392 |
provider_id = gr.Dropdown(
|
| 393 |
-
label="Provider
|
| 394 |
choices=["sambanova", "together", "None", "hyperbolic", "fal-ai", "replicate", "novita", "nebius", "cerebras", "fireworks", "groq"],
|
| 395 |
value="together",
|
| 396 |
-
info="
|
| 397 |
)
|
| 398 |
|
| 399 |
model_id = gr.Dropdown(
|
| 400 |
-
label="
|
| 401 |
value="moonshotai/Kimi-K2-Instruct-0905",
|
| 402 |
choices=COMMON_MODELS,
|
| 403 |
allow_custom_value=True,
|
| 404 |
-
info="
|
| 405 |
)
|
| 406 |
|
| 407 |
-
#
|
| 408 |
def update_models(provider: str):
|
| 409 |
models = PROVIDER_MODELS.get(provider, PROVIDER_MODELS["default"])
|
| 410 |
return gr.update(choices=models, value=models[0] if models else "")
|
| 411 |
|
| 412 |
provider_id.change(update_models, inputs=[provider_id], outputs=[model_id])
|
| 413 |
|
| 414 |
-
btn_init = gr.Button("
|
| 415 |
-
out_init = gr.JSON(label="
|
| 416 |
|
| 417 |
|
| 418 |
-
with gr.Tab("2.
|
| 419 |
-
gr.Markdown("
|
| 420 |
|
| 421 |
-
#
|
| 422 |
draft_id_logic = gr.Textbox(label="Draft ID", interactive=False)
|
| 423 |
|
| 424 |
with gr.Row():
|
| 425 |
-
#
|
| 426 |
with gr.Column(scale=2):
|
| 427 |
-
python_code = gr.Code(language="python", label="Code
|
| 428 |
|
| 429 |
-
#
|
| 430 |
with gr.Column(scale=1):
|
| 431 |
# 1. Requirements
|
| 432 |
requirements_box = gr.Code(language="json", label="Requirements (JSON List)", value='[]')
|
|
@@ -435,16 +441,16 @@ with gr.Blocks(title="MCePtion") as demo:
|
|
| 435 |
inputs_dict = gr.Code(language="json", label="Inputs (JSON)", value='{"word": "text"}')
|
| 436 |
|
| 437 |
# 3. Outputs
|
| 438 |
-
output_desc = gr.Textbox(label="Description
|
| 439 |
output_component_ui = gr.Dropdown(
|
| 440 |
-
label="Type
|
| 441 |
choices=["text", "image", "audio", "video", "html", "json", "file"],
|
| 442 |
value="text",
|
| 443 |
interactive=True
|
| 444 |
)
|
| 445 |
|
| 446 |
-
btn_logic = gr.Button("
|
| 447 |
-
out_logic = gr.JSON(label="
|
| 448 |
|
| 449 |
btn_logic.click(
|
| 450 |
step_2_logic_definition,
|
|
@@ -453,48 +459,48 @@ with gr.Blocks(title="MCePtion") as demo:
|
|
| 453 |
api_name="step_2_logic_definition"
|
| 454 |
)
|
| 455 |
|
| 456 |
-
with gr.Tab("3.
|
| 457 |
-
gr.Markdown("
|
| 458 |
with gr.Row():
|
| 459 |
draft_id_deploy = gr.Textbox(label="Draft ID")
|
| 460 |
-
# Simplification:
|
| 461 |
|
| 462 |
-
#
|
| 463 |
-
deployment_summary = gr.Markdown("
|
| 464 |
|
| 465 |
def update_deployment_summary(draft_id: str):
|
| 466 |
if not draft_id:
|
| 467 |
-
return "
|
| 468 |
|
| 469 |
-
#
|
| 470 |
default_space = os.environ.get("DEFAULT_SPACE")
|
| 471 |
-
target = default_space if default_space else "
|
| 472 |
-
mode = "
|
| 473 |
|
| 474 |
return f"""
|
| 475 |
-
### 📋
|
| 476 |
|
| 477 |
-
* **Mode
|
| 478 |
-
* **
|
| 479 |
-
* **
|
| 480 |
|
| 481 |
-
|
| 482 |
-
|
| 483 |
"""
|
| 484 |
|
| 485 |
-
btn_deploy = gr.Button("
|
| 486 |
|
| 487 |
out_status = gr.Markdown("")
|
| 488 |
|
| 489 |
with gr.Row():
|
| 490 |
-
#
|
| 491 |
-
out_space_url = gr.Code(language=None, label="
|
| 492 |
-
out_mcp_url = gr.Code(language=None, label="
|
| 493 |
|
| 494 |
-
out_claude_config = gr.Code(language="json", label="
|
| 495 |
|
| 496 |
-
with gr.Accordion("
|
| 497 |
-
out_deploy = gr.Code(language="json", label="
|
| 498 |
|
| 499 |
# Update the summary when draft_id changes
|
| 500 |
draft_id_deploy.change(update_deployment_summary, inputs=[draft_id_deploy], outputs=[deployment_summary])
|
|
@@ -519,37 +525,37 @@ with gr.Blocks(title="MCePtion") as demo:
|
|
| 519 |
)
|
| 520 |
|
| 521 |
with gr.Tab("4. Test & Playground (Smolagents)"):
|
| 522 |
-
gr.Markdown("
|
| 523 |
|
| 524 |
with gr.Column():
|
| 525 |
mcp_url_input = gr.Textbox(
|
| 526 |
-
label="
|
| 527 |
-
placeholder="
|
| 528 |
scale=3
|
| 529 |
)
|
| 530 |
-
btn_reload = gr.Button("🔄
|
| 531 |
|
| 532 |
status_msg = gr.Markdown("")
|
| 533 |
|
| 534 |
-
#
|
| 535 |
tool_table = gr.DataFrame(
|
| 536 |
headers=["Tool name", "Description", "Params"],
|
| 537 |
-
label="
|
| 538 |
wrap=True,
|
| 539 |
interactive=False
|
| 540 |
)
|
| 541 |
|
| 542 |
gr.Markdown("""
|
| 543 |
-
### ⚙️ Configuration
|
| 544 |
-
|
| 545 |
```python
|
| 546 |
from smolagents import MCPClient
|
| 547 |
-
#
|
| 548 |
-
client = MCPClient(url="
|
| 549 |
```
|
| 550 |
""")
|
| 551 |
|
| 552 |
-
gr.Markdown("### 🤖
|
| 553 |
chatbot = gr.ChatInterface(
|
| 554 |
fn=chat_response_handler
|
| 555 |
)
|
|
@@ -566,11 +572,37 @@ with gr.Blocks(title="MCePtion") as demo:
|
|
| 566 |
try:
|
| 567 |
with open("README.md", "r", encoding="utf-8") as f:
|
| 568 |
readme_content = f.read()
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 569 |
except Exception as e:
|
| 570 |
-
readme_content = f"
|
|
|
|
|
|
|
|
|
|
| 571 |
|
| 572 |
-
|
| 573 |
-
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 574 |
|
| 575 |
# Deferred wiring of the deployment (to get access to mcp_url_input defined in Tab 4)
|
| 576 |
btn_deploy.click(
|
|
|
|
| 47 |
init_result = tools.init_project(project_name, description, type="adhoc")
|
| 48 |
draft_id = init_result.get("draft_id", "")
|
| 49 |
|
| 50 |
+
# 2. AI Proposal Generation
|
| 51 |
+
gr.Info("AI code generation in progress...")
|
| 52 |
+
print(f"🤖 Generating proposal for: {project_name} (Model: {model_id}, Provider: {provider_id})...")
|
| 53 |
proposal = proposal_generator.generate_from_description(project_name, description, model=model_id, provider=provider_id)
|
| 54 |
|
| 55 |
+
gr.Info("Proposal generated! Please validate in the next tab.")
|
| 56 |
|
| 57 |
# 3. Return the data to update the UI
|
| 58 |
# Handle the case where 'requirements' is not returned by the LLM
|
|
|
|
| 97 |
result = tools.define_logic(draft_id, python_code, inputs, output_desc, requirements, output_component)
|
| 98 |
|
| 99 |
if "error" not in result:
|
| 100 |
+
gr.Info("Code validated and saved! Ready to deploy.")
|
| 101 |
else:
|
| 102 |
+
gr.Info(f"Error: {result['error']}")
|
| 103 |
|
| 104 |
return result
|
| 105 |
|
|
|
|
| 112 |
Args:
|
| 113 |
draft_id: The unique ID of the project draft (from Step 1).
|
| 114 |
"""
|
| 115 |
+
gr.Info("Deployment in progress... This may take a few minutes.")
|
| 116 |
+
# Simplification: Always public, always new (overwrite/create), space name = project name
|
| 117 |
result = tools.deploy_to_space(draft_id, visibility="public", space_target="new", target_space_name=None)
|
| 118 |
|
| 119 |
status_msg = ""
|
|
|
|
| 123 |
|
| 124 |
if "error" not in result:
|
| 125 |
space_url_val = result.get('url', '')
|
| 126 |
+
gr.Info(f"Deployment successful! URL: {space_url_val}")
|
| 127 |
|
| 128 |
+
status_msg = "### 🚀 Deployment successful!"
|
| 129 |
|
| 130 |
# Build the MCP URL
|
| 131 |
mcp_url_val = space_url_val
|
|
|
|
| 169 |
)
|
| 170 |
|
| 171 |
else:
|
| 172 |
+
gr.Info(f"Deployment failed: {result.get('error')}")
|
| 173 |
+
status_msg = f"### ❌ Deployment failed\n\nError: {result.get('error')}"
|
| 174 |
|
| 175 |
# Returns:
|
| 176 |
# 1. JSON result (for out_deploy)
|
|
|
|
| 188 |
|
| 189 |
def step_0_configuration(hf_user: str = None, hf_token: str = None, default_space: str = None):
|
| 190 |
"""
|
| 191 |
+
STEP 0: Configures the MCEPTION server environment.
|
| 192 |
|
| 193 |
This step is needed to set up the Hugging Face environment.
|
| 194 |
|
|
|
|
| 276 |
# --- UI construction ---
|
| 277 |
|
| 278 |
with gr.Blocks(title="MCePtion") as demo:
|
| 279 |
+
# Top banner (image cropped to 50% height)
|
| 280 |
+
gr.HTML("""
|
| 281 |
+
<div style="width: 100%; overflow: hidden; margin-bottom: 20px;">
|
| 282 |
+
<img src="/file=assets/images/header_bg.jpeg" style="width: 100%; display: block; border-radius: 8px;" alt="MCePtion Header">
|
| 283 |
+
</div>
|
| 284 |
+
""")
|
| 285 |
+
gr.Markdown("# 🏭 MCEPTION is the MCP of your MCPs")
|
| 286 |
+
gr.Markdown("This server allows you to create and deploy other MCP servers on Hugging Face Spaces.")
|
| 287 |
|
| 288 |
with gr.Tab("0. Setup & How-to"):
|
| 289 |
+
gr.Markdown("## Global Configuration")
|
| 290 |
with gr.Row():
|
| 291 |
hf_user_profile = gr.Textbox(
|
| 292 |
label="HF User Profile / Namespace",
|
| 293 |
value=os.environ.get("HF_USER", ""),
|
| 294 |
+
placeholder="e.g. alihmaou",
|
| 295 |
+
info="Your default Hugging Face username or organization."
|
| 296 |
)
|
| 297 |
default_mcp_space_name = gr.Textbox(
|
| 298 |
label="Default Toolbox Name",
|
| 299 |
value=os.environ.get("DEFAULT_SPACE", "mymcpserver"),
|
| 300 |
+
placeholder="e.g. mymcpserver",
|
| 301 |
+
info="Default Space (Toolbox) name for additions (will be concatenated with user)."
|
| 302 |
)
|
| 303 |
|
| 304 |
with gr.Row():
|
|
|
|
| 306 |
label="HF Write Token (Optional override)",
|
| 307 |
type="password",
|
| 308 |
placeholder="hf_...",
|
| 309 |
+
info="Deployment token. If empty, uses the server's HF_TOKEN environment variable."
|
| 310 |
)
|
| 311 |
|
| 312 |
+
# Button to apply config (simple update of global variables/env for the session)
|
| 313 |
btn_save_config = gr.Button("Save Configuration")
|
| 314 |
|
| 315 |
def save_config_ui(user: str, space: str, token: str):
|
| 316 |
if user: os.environ["HF_USER"] = user
|
| 317 |
if space: os.environ["DEFAULT_SPACE"] = space
|
| 318 |
if token: os.environ["HF_TOKEN"] = token
|
| 319 |
+
gr.Info("Configuration saved!")
|
| 320 |
return f"Configuration saved! User: {user}, Default Space: {space}"
|
| 321 |
|
| 322 |
config_status = gr.Markdown("")
|
| 323 |
btn_save_config.click(save_config_ui, inputs=[hf_user_profile, default_mcp_space_name, hf_token_input], outputs=config_status)
|
| 324 |
|
| 325 |
+
gr.Markdown("## How to use this MCePtion server?")
|
| 326 |
|
| 327 |
with gr.Row():
|
| 328 |
+
with gr.Column("User Guide"):
|
| 329 |
gr.Markdown("""
|
| 330 |
+
## Human Interface User Guide
|
| 331 |
|
| 332 |
+
### 1. Tool Creation
|
| 333 |
+
* Go to tab **1. Initialization**.
|
| 334 |
+
* Provide a name and describe what you want (or paste a Swagger).
|
| 335 |
+
* Click on "Initialize & Generate".
|
| 336 |
|
| 337 |
+
### 2. Code Validation
|
| 338 |
+
* Go to tab **2. Logic Definition**.
|
| 339 |
+
* Check the generated Python code and dependencies.
|
| 340 |
+
* Click on "Validate Code" to validate.
|
| 341 |
|
| 342 |
+
### 3. Deployment
|
| 343 |
+
* Go to tab **3. Deployment**.
|
| 344 |
+
* Choose "New" to create a new Space or "Existing" to add to a Toolbox.
|
| 345 |
+
* Click on "Deploy".
|
| 346 |
|
| 347 |
### 4. Test
|
| 348 |
+
* Use the **4. Playground** tab to test your new tool after initialization (approx. 1 minute).
|
| 349 |
""")
|
| 350 |
|
| 351 |
with gr.Column():
|
|
|
|
| 376 |
}
|
| 377 |
}
|
| 378 |
_c_claude_config_str = json.dumps(_c_claude_config, indent=2)
|
| 379 |
+
gr.Markdown("""## MCP Integration Settings""")
|
| 380 |
gr.Code(label="URL of this space :", value=_c_space_url, language=None, interactive=False, lines=1)
|
| 381 |
gr.Code(label="URL of MCP endpoint :", value=_c_mcp_url, language=None, interactive=False, lines=1)
|
| 382 |
gr.Code(label="Claude Desktop Configuration", value=_c_claude_config_str, language="json", interactive=False)
|
| 383 |
|
| 384 |
|
| 385 |
|
| 386 |
+
with gr.Tab("1. Initialization"):
|
| 387 |
+
gr.Markdown("Start by initializing a new project.")
|
| 388 |
|
| 389 |
+
project_name = gr.Textbox(label="e.g. Project Name (e.g. strawberry-counter, city-weather)...")
|
| 390 |
|
| 391 |
project_desc = gr.Textbox(
|
| 392 |
+
label="Tool Description or Specification (Swagger/OpenAPI JSON)",
|
| 393 |
lines=10,
|
| 394 |
+
placeholder="Describe what the tool should do, or paste the content of a swagger.json file here to generate an API client automatically."
|
| 395 |
)
|
| 396 |
|
| 397 |
+
with gr.Accordion("AI Settings (Advanced)", open=False):
|
| 398 |
provider_id = gr.Dropdown(
|
| 399 |
+
label="Inference Provider",
|
| 400 |
choices=["sambanova", "together", "None", "hyperbolic", "fal-ai", "replicate", "novita", "nebius", "cerebras", "fireworks", "groq"],
|
| 401 |
value="together",
|
| 402 |
+
info="Select a specific provider."
|
| 403 |
)
|
| 404 |
|
| 405 |
model_id = gr.Dropdown(
|
| 406 |
+
label="LLM Model",
|
| 407 |
value="moonshotai/Kimi-K2-Instruct-0905",
|
| 408 |
choices=COMMON_MODELS,
|
| 409 |
allow_custom_value=True,
|
| 410 |
+
info="Choose a code-optimized model or type a new one."
|
| 411 |
)
|
| 412 |
|
| 413 |
+
# Dynamic model update
|
| 414 |
def update_models(provider: str):
|
| 415 |
models = PROVIDER_MODELS.get(provider, PROVIDER_MODELS["default"])
|
| 416 |
return gr.update(choices=models, value=models[0] if models else "")
|
| 417 |
|
| 418 |
provider_id.change(update_models, inputs=[provider_id], outputs=[model_id])
|
| 419 |
|
| 420 |
+
btn_init = gr.Button("Initialize Project & Propose Code (AI)")
|
| 421 |
+
out_init = gr.JSON(label="Result (Copy the draft_id)")
|
| 422 |
|
| 423 |
|
| 424 |
+
with gr.Tab("2. Logic Definition"):
|
| 425 |
+
gr.Markdown("Verify and refine the Python code and interface of your tool.")
|
| 426 |
|
| 427 |
+
# Display draft_id as read-only to ensure propagation
|
| 428 |
draft_id_logic = gr.Textbox(label="Draft ID", interactive=False)
|
| 429 |
|
| 430 |
with gr.Row():
|
| 431 |
+
# Left Column: Code
|
| 432 |
with gr.Column(scale=2):
|
| 433 |
+
python_code = gr.Code(language="python", label="Python Code (e.g. def count_r(word): ...)")
|
| 434 |
|
| 435 |
+
# Right Column: Requirements, Inputs, Outputs
|
| 436 |
with gr.Column(scale=1):
|
| 437 |
# 1. Requirements
|
| 438 |
requirements_box = gr.Code(language="json", label="Requirements (JSON List)", value='[]')
|
|
|
|
| 441 |
inputs_dict = gr.Code(language="json", label="Inputs (JSON)", value='{"word": "text"}')
|
| 442 |
|
| 443 |
# 3. Outputs
|
| 444 |
+
output_desc = gr.Textbox(label="Output Description")
|
| 445 |
output_component_ui = gr.Dropdown(
|
| 446 |
+
label="Output Type (Gradio Component)",
|
| 447 |
choices=["text", "image", "audio", "video", "html", "json", "file"],
|
| 448 |
value="text",
|
| 449 |
interactive=True
|
| 450 |
)
|
| 451 |
|
| 452 |
+
btn_logic = gr.Button("Validate Code")
|
| 453 |
+
out_logic = gr.JSON(label="Result")
|
| 454 |
|
| 455 |
btn_logic.click(
|
| 456 |
step_2_logic_definition,
|
|
|
|
| 459 |
api_name="step_2_logic_definition"
|
| 460 |
)
|
| 461 |
|
| 462 |
+
with gr.Tab("3. Deployment"):
|
| 463 |
+
gr.Markdown("Deploy your tool to Hugging Face Spaces.")
|
| 464 |
with gr.Row():
|
| 465 |
draft_id_deploy = gr.Textbox(label="Draft ID")
|
| 466 |
+
# Simplification: No other inputs needed
|
| 467 |
|
| 468 |
+
# Deployment plan summary (dynamically calculated)
|
| 469 |
+
deployment_summary = gr.Markdown("Waiting for Draft ID...")
|
| 470 |
|
| 471 |
def update_deployment_summary(draft_id: str):
|
| 472 |
if not draft_id:
|
| 473 |
+
return "Waiting..."
|
| 474 |
|
| 475 |
+
# Simplified logic mirroring tools.deploy_to_space
|
| 476 |
default_space = os.environ.get("DEFAULT_SPACE")
|
| 477 |
+
target = default_space if default_space else "New Space (Project Name)"
|
| 478 |
+
mode = "ADD (Toolbox)" if default_space else "CREATE (New Space)"
|
| 479 |
|
| 480 |
return f"""
|
| 481 |
+
### 📋 Deployment Summary
|
| 482 |
|
| 483 |
+
* **Mode:** {mode}
|
| 484 |
+
* **Target:** `{target}`
|
| 485 |
+
* **Visibility:** Public
|
| 486 |
|
| 487 |
+
If you use a `DEFAULT_SPACE`, the tool will be added to your existing toolbox without overwriting other tools.
|
| 488 |
+
Otherwise, a new dedicated Space will be created.
|
| 489 |
"""
|
| 490 |
|
| 491 |
+
btn_deploy = gr.Button("Deploy to Spaces", variant="primary")
|
| 492 |
|
| 493 |
out_status = gr.Markdown("")
|
| 494 |
|
| 495 |
with gr.Row():
|
| 496 |
+
# Using gr.Code because gr.Textbox(show_copy_button=True) is not supported in this Gradio version
|
| 497 |
+
out_space_url = gr.Code(language=None, label="Hugging Face Space URL", interactive=False, lines=1)
|
| 498 |
+
out_mcp_url = gr.Code(language=None, label="MCP Endpoint URL", interactive=False, lines=1)
|
| 499 |
|
| 500 |
+
out_claude_config = gr.Code(language="json", label="Claude Desktop Configuration (add to claude_desktop_config.json)")
|
| 501 |
|
| 502 |
+
with gr.Accordion("JSON Details (Debug)", open=False):
|
| 503 |
+
out_deploy = gr.Code(language="json", label="Raw Result")
|
| 504 |
|
| 505 |
# Update the summary when draft_id changes
|
| 506 |
draft_id_deploy.change(update_deployment_summary, inputs=[draft_id_deploy], outputs=[deployment_summary])
|
|
|
|
| 525 |
)
|
| 526 |
|
| 527 |
with gr.Tab("4. Test & Playground (Smolagents)"):
|
| 528 |
+
gr.Markdown("Immediately test your deployed MCP server.")
|
| 529 |
|
| 530 |
with gr.Column():
|
| 531 |
mcp_url_input = gr.Textbox(
|
| 532 |
+
label="MCP Server URL",
|
| 533 |
+
placeholder="e.g. https://your-user-your-space.hf.space/gradio_api/mcp/sse",
|
| 534 |
scale=3
|
| 535 |
)
|
| 536 |
+
btn_reload = gr.Button("🔄 Load Tools", scale=1)
|
| 537 |
|
| 538 |
status_msg = gr.Markdown("")
|
| 539 |
|
| 540 |
+
# Table adapted for tool display (wrap=True)
|
| 541 |
tool_table = gr.DataFrame(
|
| 542 |
headers=["Tool name", "Description", "Params"],
|
| 543 |
+
label="Detected Tools",
|
| 544 |
wrap=True,
|
| 545 |
interactive=False
|
| 546 |
)
|
| 547 |
|
| 548 |
gr.Markdown("""
|
| 549 |
+
### ⚙️ Smolagents Configuration
|
| 550 |
+
To use this tool with smolagents in your code:
|
| 551 |
```python
|
| 552 |
from smolagents import MCPClient
|
| 553 |
+
# Direct HTTP Mode (recommended)
|
| 554 |
+
client = MCPClient(url="SERVER_URL", structured_output=False)
|
| 555 |
```
|
| 556 |
""")
|
| 557 |
|
| 558 |
+
gr.Markdown("### 🤖 Chat with your MCP Agent")
|
| 559 |
chatbot = gr.ChatInterface(
|
| 560 |
fn=chat_response_handler
|
| 561 |
)
|
|
|
|
| 572 |
try:
|
| 573 |
with open("README.md", "r", encoding="utf-8") as f:
|
| 574 |
readme_content = f.read()
|
| 575 |
+
|
| 576 |
+
# Remove Hugging Face YAML frontmatter if present
|
| 577 |
+
if readme_content.startswith("---"):
|
| 578 |
+
try:
|
| 579 |
+
# Find the end of the frontmatter (second '---')
|
| 580 |
+
# We start searching from index 3 to skip the first '---'
|
| 581 |
+
end_index = readme_content.find("---", 3)
|
| 582 |
+
if end_index != -1:
|
| 583 |
+
# Slice content after the second '---' and strip leading whitespace
|
| 584 |
+
readme_content = readme_content[end_index + 3:].lstrip()
|
| 585 |
+
except Exception:
|
| 586 |
+
pass
|
| 587 |
+
|
| 588 |
except Exception as e:
|
| 589 |
+
readme_content = f"Unable to load README.md: {str(e)}"
|
| 590 |
+
|
| 591 |
+
# Row container to align the 3 columns horizontally
|
| 592 |
+
with gr.Row():
|
| 593 |
|
| 594 |
+
# 1. Empty left column (1 part)
|
| 595 |
+
# min_width=0 is important so the column can shrink if needed
|
| 596 |
+
with gr.Column(scale=1, min_width=0):
|
| 597 |
+
pass
|
| 598 |
+
|
| 599 |
+
# 2. Center column with the content (3 parts)
|
| 600 |
+
with gr.Column(scale=3):
|
| 601 |
+
gr.Markdown(readme_content)
|
| 602 |
+
|
| 603 |
+
# 3. Empty right column (1 part)
|
| 604 |
+
with gr.Column(scale=1, min_width=0):
|
| 605 |
+
pass
|
| 606 |
|
| 607 |
# Deferred wiring of the deployment (to get access to mcp_url_input defined in Tab 4)
|
| 608 |
btn_deploy.click(
|
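The README frontmatter stripping added to `server.py` above relies on Hugging Face Spaces README files starting with a YAML block delimited by two `---` lines. A minimal sketch isolating that logic (the search starts at index 3 to skip the opening `---`):

```python
def strip_frontmatter(readme: str) -> str:
    """Removes a leading YAML frontmatter block, mirroring the logic
    added to server.py: find the second '---' and keep what follows."""
    if readme.startswith("---"):
        # Start searching at index 3 to skip past the opening '---'
        end_index = readme.find("---", 3)
        if end_index != -1:
            # Slice after the closing '---' and drop leading whitespace
            return readme[end_index + 3:].lstrip()
    return readme

sample = "---\ntitle: Demo\n---\n\n# Hello"
print(strip_frontmatter(sample))  # → # Hello
```

A README without frontmatter passes through unchanged, which is why the in-tab Markdown render stays safe for arbitrary files.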
src/mcp_server/tools.py
CHANGED
@@ -16,13 +16,13 @@ session_manager = SessionManager()

 def init_project(project_name: str, description: str, type: str = "adhoc") -> Dict[str, Any]:
     """
-
+    Creates a new empty project.
     Args:
-        project_name:
+        project_name: Technical name (e.g. strawberry-counter, ratp-api).
-        description:
+        description: Tool description, or complete Technical Specification (e.g. content of a Swagger/OpenAPI JSON).
-        type: 'adhoc' (code
+        type: 'adhoc' (pure code), 'api_wrapper' (REST).
     Returns:
-
+        A dictionary containing the 'draft_id' required for next steps.
     """
     print(f"DEBUG [init_project]: project_name={project_name}, type={type}")
     draft = session_manager.create_draft(project_name, description, type)

@@ -33,20 +33,20 @@ def init_project(project_name: str, description: str, type: str = "adhoc") -> Di
             "description": draft.description,
             "files": list(draft.code_files.keys())
         },
-        "message": f"
+        "message": f"Project '{project_name}' initialized. Draft ID: {draft.draft_id}"
     }
     print(f"DEBUG [init_project]: result={result}")
     return result

 def propose_implementation(project_name: str, description: str) -> Dict[str, Any]:
     """
-
+    Uses internal AI to propose a complete implementation from a description or Swagger.
     Args:
-        project_name:
+        project_name: The project name.
-        description:
+        description: The description or Swagger/OpenAPI JSON.
     Returns:
-
-
+        A dictionary containing the proposed Python code, detected inputs, and requirements.
+        The calling agent can then validate or modify this code before calling define_logic.
     """
     print(f"DEBUG [propose_implementation]: project_name={project_name}")
     try:

@@ -54,26 +54,26 @@ def propose_implementation(project_name: str, description: str) -> Dict[str, Any
         result = {
             "status": "success",
             "proposal": proposal,
-            "message": "
+            "message": "Implementation proposed. Please review 'python_code' and 'requirements' before calling define_logic."
         }
         print(f"DEBUG [propose_implementation]: result={result.keys()}")
         return result
     except Exception as e:
         print(f"DEBUG [propose_implementation]: error={str(e)}")
-        return {"error": f"
+        return {"error": f"Error during generation: {str(e)}"}

 def define_logic(draft_id: str, python_code: str, inputs: Union[Dict[str, str], str], output_desc: str, requirements: str = "", output_component: str = "text") -> Dict[str, Any]:
     """
-
+    Defines the internal logic of the tool.
     Args:
-        inputs:
+        inputs: Dictionary of inputs (e.g. {"word": "text"}). Can be a JSON string.
-        output_component:
+        output_component: Output Gradio component type (text, image, audio, video, html, json, file).
     """
     print(f"DEBUG [define_logic]: draft_id={draft_id}, output_component={output_component}")
     draft = session_manager.get_draft(draft_id)
     if not draft:
         print(f"DEBUG [define_logic]: Draft not found")
-        return {"error": f"Draft {draft_id}
+        return {"error": f"Draft {draft_id} not found."}

     # Handling inputs (Dict or JSON string)
     if isinstance(inputs, str):

@@ -138,19 +138,19 @@ def define_logic(draft_id: str, python_code: str, inputs: Union[Dict[str, str],

     return {
         "status": "success",
-        "message": f"
+        "message": f"Logic generated for '{draft.name}'. Ready to deploy.",
         "preview": tool_module_code[:200] + "..."
     }


 def deploy_to_space(draft_id: str, visibility: str = "public", space_target: str = "new", target_space_name: str = "") -> Dict[str, Any]:
     """
-
+    Deploys the project to Hugging Face Spaces.
     """
     print(f"DEBUG [deploy_to_space]: draft_id={draft_id}, target={space_target}, name={target_space_name}")
     draft = session_manager.get_draft(draft_id)
     if not draft:
-        return {"error": f"Draft {draft_id}
+        return {"error": f"Draft {draft_id} not found."}

     deployer = HFDeployer()

@@ -257,9 +257,9 @@ def deploy_to_space(draft_id: str, visibility: str = "public", space_target: str
             private=(visibility == "private")
         )

-        mode_msg = "
+        mode_msg = "added to toolbox" if space_target == "existing" else "deployed (new space)"

-        #
+        # Standard MCP URL for Gradio
         mcp_endpoint = url.rstrip("/") + "/gradio_api/mcp/"

         # Server name for the Claude config (Space name without the username)

@@ -289,23 +289,23 @@ def deploy_to_space(draft_id: str, visibility: str = "public", space_target: str
         return {
             "status": "success",
             "url": url,
-            "instructions": f"
+            "instructions": f"Tool '{draft.name}' {mode_msg} !",
             "claude_config": claude_config
         }
     except Exception as e:
-        return {"error": f"
+        return {"error": f"Deployment error: {str(e)}"}

 def delete_tool(space_name: str, tool_name: str) -> Dict[str, Any]:
     """
-
+    Deletes a tool from an existing Space.
     Args:
-        space_name:
+        space_name: Full Space name (e.g. user/space) or short name (if HF_USER configured).
-        tool_name:
+        tool_name: Tool name (e.g. strawberry_counter).
     """
     deployer = HFDeployer()
     api = HfApi(token=deployer.token)

-    #
+    # Repo name resolution
     repo_id = space_name
     if "/" not in repo_id:
         hf_user = os.environ.get("HF_USER")

@@ -322,17 +322,17 @@ def delete_tool(space_name: str, tool_name: str) -> Dict[str, Any]:
             repo_type="space",
             commit_message=f"Delete tool {tool_name} via Meta-MCP"
         )
-        return {"status": "success", "message": f"
+        return {"status": "success", "message": f"Tool '{tool_name}' deleted from '{repo_id}'."}
     except Exception as e:
         print(f"DEBUG [delete_tool]: Error: {e}")
-        return {"error": f"
+        return {"error": f"Error during deletion: {str(e)}"}

 def get_tool_code(space_name: str, tool_name: str) -> Dict[str, Any]:
     """
-
+    Retrieves the source code of an existing tool.
     Args:
-        space_name:
+        space_name: Full Space name (e.g. user/space).
-        tool_name:
+        tool_name: Tool name.
     """
     deployer = HFDeployer()

@@ -358,4 +358,4 @@ def get_tool_code(space_name: str, tool_name: str) -> Dict[str, Any]:
         return {"status": "success", "code": code}
     except Exception as e:
         print(f"DEBUG [get_tool_code]: Error: {e}")
-        return {"error": f"
+        return {"error": f"Error reading code: {str(e)}"}
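The endpoint and server-name derivations used by `deploy_to_space` can be sketched standalone. The helper name `mcp_connection_info` and the example Space URL are illustrative assumptions, not code from the repo:

```python
def mcp_connection_info(space_url: str, repo_id: str) -> dict:
    # Standard MCP URL for a Gradio Space, as built in deploy_to_space
    mcp_endpoint = space_url.rstrip("/") + "/gradio_api/mcp/"
    # Server name for the Claude config: the Space name without the username
    server_name = repo_id.split("/")[-1]
    return {"endpoint": mcp_endpoint, "server_name": server_name}

info = mcp_connection_info("https://user-strawberry-counter.hf.space/", "user/strawberry-counter")
# info["endpoint"] → "https://user-strawberry-counter.hf.space/gradio_api/mcp/"
# info["server_name"] → "strawberry-counter"
```

The `rstrip("/")` makes the concatenation safe whether or not the Space URL carries a trailing slash.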