---
title: Auto Deployer
emoji: π
colorFrom: indigo
colorTo: purple
sdk: gradio
sdk_version: 5.49.1
app_file: app.py
pinned: false
license: apache-2.0
short_description: From CSV to Deployed ML API in 30 seconds.
tags:
- building-mcp-track-enterprise
---
# Auto-Deployer MCP Server
Auto-Deployer is a Model Context Protocol (MCP) server that gives AI agents (like Claude, Cursor, or Windsurf) the ability to train and deploy machine learning models directly from a conversation.
It bridges the gap between an AI agent's reasoning capabilities and heavy-duty ML infrastructure. By connecting this server, your agent gains tools to:
- Analyze CSV datasets with comprehensive insights.
- Train models (Classification, Regression, Time Series) on serverless infrastructure.
- Deploy those models as production-ready APIs instantly.
Check out the social media post with the video demo: https://www.linkedin.com/feed/update/urn:li:ugcPost:7399114210342903809/
And here is the YouTube video: https://www.youtube.com/watch?v=F1Xxvz95N1E
## How It Works
This project uses a streamlined architecture for maximum efficiency:
**The Interface (Gradio MCP):**
- The `app.py` script runs a Gradio application that serves as the MCP server.
- It exposes 4 core tools as MCP functions that AI agents can discover and invoke.
- Features smart file handling that seamlessly processes uploads, URLs, and local paths.
- Provides both a web interface for manual use and an MCP interface for agent integration.

**The Engine (Modal):**
- The `modal_backend.py` script runs on Modal's serverless cloud platform.
- When agents request model operations, the Gradio server triggers Modal functions.
- Modal automatically scales compute resources, processes data, trains models, and serves them as production APIs.
- Models are stored persistently and available via high-performance endpoints.
## Simplified Architecture
- 4 Core Tools Only: Eliminates complexity with focused functionality
- Unified Interface: Same tools work for web users and AI agents
- Smart Input Detection: Handles files, URLs, and uploads automatically
- Serverless Scaling: Pay only for compute when you use it
- Production Ready: Instant API deployment with comprehensive monitoring
## Usage Guide
This MCP server is hosted on Hugging Face Spaces. You do not need to run any code locally to use it, other than configuring your AI agent client.
### Prerequisites
- Claude Code (or another MCP-compliant client).
- UV: required to run the MCP tools (`pip install uv`).
### Connecting to Claude Code
To connect your agent to the live server, add the following configuration to your `.claude.json`:
```json
{
  "mcpServers": {
    "auto-deployer": {
      "type": "http",
      "url": "https://mcp-1st-birthday-auto-deployer.hf.space/gradio_api/mcp/"
    },
    "upload_files_to_gradio": {
      "command": "uvx",
      "args": [
        "--from",
        "gradio[mcp]",
        "gradio",
        "upload-mcp",
        "https://mcp-1st-birthday-auto-deployer.hf.space",
        "C:\\Path\\To\\Your\\Data"
      ]
    }
  }
}
```
**Important Notes:**
- `auto-deployer`: Connects to the live Auto-Deployer MCP server.
- `upload_files_to_gradio`: This helper tool allows the agent to "upload" local files from your computer to the remote server for processing.
- **Action Required:** Replace `C:\\Path\\To\\Your\\Data` with the absolute path to the folder containing your CSV files.
## Example Prompts
Once connected, you can talk to your agent naturally. Here are some workflows you can try:
### The Step-by-Step Approach
Best for understanding the data first.
**Step 1: Upload and Analyze**

"Upload my local data folder using the `upload_files_to_gradio` tool. Then, analyze `housing_prices.csv` and tell me about the dataset shape, column types, and any missing values."
**Step 2: Training**

"Train a regression model to predict 'price' using the housing data. Focus on features like 'sqft_living', 'bedrooms', and 'location'."

**Step 3: Deployment**

"Deploy the trained model and provide me with the API endpoint and Python code examples for making predictions."
### The One-Click Approach
Best for rapid deployment.
**Complete Pipeline**

"Upload `customer_churn.csv` and auto-deploy a classification model to predict 'Churn' status. Give me the full report with performance metrics and API access."
### Advanced Workflows
**Custom Model Training**

"Analyze this medical dataset, then train a classification model for 'HeartDisease' with an 80/20 train-test split and provide detailed performance metrics."

**Batch Processing**

"I have multiple datasets in my folder. Upload all CSV files and analyze each one, then train models for the datasets that have good data quality."
## Available Tools
The server provides 4 core tools that work seamlessly with both file uploads and URLs:
### Core Tools
| Tool Name | Description | Input | Output |
|---|---|---|---|
| `analyze_data_tool` | Analyze CSV datasets and provide comprehensive statistical metadata | CSV file or URL | JSON analysis with shape, columns, data types, and missing values |
| `train_model_tool` | Train production-ready ML models on serverless infrastructure | CSV file/URL, target column, task type | JSON with model ID and performance metrics |
| `deploy_model_tool` | Deploy trained models to live production API endpoints | Model ID | Markdown with API URL and usage examples |
| `auto_deploy_tool` | Complete end-to-end pipeline (Analyze → Train → Deploy) in one click | CSV file/URL, target column, task type | Comprehensive deployment report with insights |
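To give a feel for the kind of metadata `analyze_data_tool` returns, the sketch below computes dataset shape, inferred column types, and missing-value counts for a CSV string using only the Python standard library. The real tool runs on the server and its exact JSON schema may differ; the field names here are illustrative assumptions.

```python
import csv
import io
import json

def analyze_csv(text: str) -> dict:
    """Rough stand-in for analyze_data_tool: shape, dtypes, missing values.
    (Field names are assumptions, not the server's actual schema.)"""
    rows = list(csv.DictReader(io.StringIO(text)))
    columns = list(rows[0].keys()) if rows else []

    def infer_type(values):
        non_empty = [v for v in values if v != ""]
        if not non_empty:
            return "unknown"
        try:
            [float(v) for v in non_empty]
            return "numeric"
        except ValueError:
            return "string"

    return {
        "shape": {"rows": len(rows), "columns": len(columns)},
        "columns": {
            col: {
                "dtype": infer_type([r[col] for r in rows]),
                "missing": sum(1 for r in rows if r[col] == ""),
            }
            for col in columns
        },
    }

sample = "price,bedrooms,location\n350000,3,urban\n420000,,suburb\n"
report = analyze_csv(sample)
print(json.dumps(report, indent=2))
```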
### Task Types Supported
- Classification: Predict categorical outcomes (e.g., spam/not spam, disease/no disease)
- Regression: Predict continuous numerical values (e.g., house prices, temperature)
- Time Series: Forecast time-based data patterns and trends
### Input Flexibility
All tools accept multiple input formats:
- Local CSV files: Direct file uploads through the web interface
- File URLs: HTTP/HTTPS links to CSV files
- File paths: Local file system paths
- Gradio file objects: Seamless integration with web uploads
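The input detection described above can be sketched roughly as follows. This is not the actual logic in `app.py`; the function name and category labels are illustrative only.

```python
import os
from urllib.parse import urlparse

def detect_input_kind(source) -> str:
    """Classify a tool input as a Gradio upload, URL, or local path.
    (Illustrative sketch; the server's real detection may differ.)"""
    # Gradio file objects expose the temp-file location via a .name attribute
    if not isinstance(source, str) and hasattr(source, "name"):
        return "gradio_upload"
    parsed = urlparse(str(source))
    if parsed.scheme in ("http", "https"):
        return "url"
    if os.path.exists(str(source)):
        return "local_path"
    return "unknown"

print(detect_input_kind("https://example.com/data.csv"))  # url
```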
## Key Features
- Serverless Processing: Runs on Modal's serverless infrastructure for scalability
- Smart File Handling: Automatic detection and processing of different input types
- Comprehensive Analytics: Detailed dataset insights and model performance reports
- Production APIs: Instant deployment with ready-to-use code examples
- Multi-format Support: Works with CSV files from various sources
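Once `deploy_model_tool` hands back an API URL, calling the deployed model is an ordinary HTTP POST. The endpoint URL and payload schema below are placeholders assumed for illustration; use the concrete code examples that the tool itself returns for your deployment.

```python
import json
import urllib.request

# Hypothetical endpoint; the real URL comes from deploy_model_tool's output.
ENDPOINT = "https://example.modal.run/predict"

def build_request(features: dict) -> urllib.request.Request:
    """Package a feature dict as a JSON POST request (payload schema assumed)."""
    body = json.dumps({"features": features}).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_request({"sqft_living": 1800, "bedrooms": 3, "location": "urban"})
print(req.full_url, req.get_method())
# To actually call the API: urllib.request.urlopen(req)
```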