# Quick Start: Deploy MedSAM to a Hugging Face Space

## Goal

Deploy your MedSAM model as an API that you can call from your backend.
## What's in This Folder

```text
huggingface_space/
├── app.py                   # Gradio app (upload to HF Space)
├── requirements.txt         # Dependencies (upload to HF Space)
├── README.md                # Space description (upload to HF Space)
├── .gitattributes           # Git LFS config (upload to HF Space)
├── DEPLOYMENT_GUIDE.md      # Detailed deployment steps
├── integration_example.py   # How to use in your backend
├── test_space.py            # Test script after deployment
└── QUICKSTART.md            # This file
```
## Deploy in 5 Steps

### Step 1: Create a Space (2 min)

1. Go to https://huggingface.co/new-space
2. Fill in:
   - **Space name:** `medsam-inference`
   - **SDK:** Gradio
   - **Hardware:** CPU basic (free) or T4 small (GPU, $0.60/hr)
3. Click **Create Space**
### Step 2: Upload Files (3 min)

**Option A: Via Web (Easiest)**

1. In your Space, click **Files → Add file → Upload files**
2. Upload these 4 files:
   - `app.py`
   - `requirements.txt`
   - `README.md`
   - `.gitattributes`

**Option B: Via Git**

```bash
# Clone your Space (here, from inside the huggingface_space/ folder)
git clone https://huggingface.co/spaces/YOUR_USERNAME/medsam-inference
cd medsam-inference

# Copy the four files from this folder into the clone
cp ../app.py ../requirements.txt ../README.md ../.gitattributes .

# Commit and push
git add .
git commit -m "Initial commit"
git push
```
### Step 3: Upload the Model (2 min)

**Download your model:**

1. Go to https://huggingface.co/Aniketg6/Fine-Tuned-MedSAM
2. Download `medsam_vit_b.pth` (375 MB)

**Upload it to your Space:**

- Via web: **Files → Add file → Upload file** → upload `medsam_vit_b.pth`
- Via git:

```bash
# Make sure Git LFS is installed
git lfs install
git lfs track "*.pth"

# Copy your model
cp /path/to/medsam_vit_b.pth .

# Commit (LFS handles the large file)
git add .gitattributes medsam_vit_b.pth
git commit -m "Add MedSAM model"
git push
```
### Step 4: Wait for the Build (3-5 min)

- Hugging Face builds your Space automatically
- Check the **Logs** tab to see progress
- When it finishes, the status shows **Running**
### Step 5: Test It! (1 min)

1. Visit your Space: https://huggingface.co/spaces/YOUR_USERNAME/medsam-inference
2. Click the **Simple Interface** tab
3. Upload a test image
4. Enter X, Y coordinates (e.g., 200, 150)
5. Click **Segment**
6. You should see a mask!
## Your API Is Ready!

Endpoint: `https://YOUR_USERNAME-medsam-inference.hf.space/api/predict`
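The endpoint URL follows a fixed pattern built from your username and Space name. If you deploy under several names, a tiny helper (hypothetical, not part of the provided files) keeps that pattern in one place:

```python
def space_api_url(username: str, space_name: str) -> str:
    # Hugging Face serves each Space at <username>-<space_name>.hf.space;
    # the Gradio prediction endpoint used in this guide lives under /api/predict.
    return f"https://{username}-{space_name}.hf.space/api/predict"

print(space_api_url("YOUR_USERNAME", "medsam-inference"))
# https://YOUR_USERNAME-medsam-inference.hf.space/api/predict
```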
## Use It in Your Backend

### Quick Integration

- Create a client file:

```bash
cd backend
nano medsam_space_client.py
```

- Add this code:
```python
import requests
import json
import base64
from io import BytesIO
from PIL import Image
import numpy as np

SPACE_URL = "https://YOUR_USERNAME-medsam-inference.hf.space/api/predict"


class MedSAMSpacePredictor:
    def __init__(self, space_url):
        self.space_url = space_url
        self.image_array = None

    def set_image(self, image):
        self.image_array = image

    def predict(self, point_coords, point_labels, multimask_output=True, **kwargs):
        # Encode the image as a base64 PNG data URL
        img = Image.fromarray(self.image_array)
        buf = BytesIO()
        img.save(buf, format="PNG")
        img_b64 = base64.b64encode(buf.getvalue()).decode()

        # Call the Space API
        points_json = json.dumps({
            "coords": point_coords.tolist(),
            "labels": point_labels.tolist(),
            "multimask_output": multimask_output
        })
        resp = requests.post(
            self.space_url,
            json={"data": [f"data:image/png;base64,{img_b64}", points_json]},
            timeout=120
        )
        result = json.loads(resp.json()["data"][0])
        masks = np.array([np.array(m["mask_data"], dtype=bool) for m in result["masks"]])
        scores = np.array(result["scores"])
        return masks, scores, None
```
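To make the wire format concrete, here is a standalone, offline sketch of the JSON contract the client above assumes: Gradio wraps inputs in a `{"data": [...]}` envelope, and the Space answers with a JSON string carrying `masks` and `scores`. The field names come from the client code; the example response values are made up:

```python
import json

# Request: a data-URL image plus a JSON string of click points
points_json = json.dumps({
    "coords": [[200, 150]],   # one (x, y) click
    "labels": [1],            # 1 = foreground point, in SAM's convention
    "multimask_output": True,
})
payload = {"data": ["data:image/png;base64,<...>", points_json]}

# Response: data[0] is itself a JSON string with masks and scores
reply = json.dumps({
    "masks": [{"mask_data": [[True, False], [False, True]]}],
    "scores": [0.93],
})
result = json.loads(reply)
print(result["scores"])  # [0.93]
```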
- Update `app.py`:

```python
# Add the import
from medsam_space_client import MedSAMSpacePredictor

# Replace this:
# sam_predictor = SamPredictor(sam)

# With this:
sam_predictor = MedSAMSpacePredictor(
    "https://YOUR_USERNAME-medsam-inference.hf.space/api/predict"
)

# Everything else stays the same:
# sam_predictor.set_image(image_array)
# masks, scores, _ = sam_predictor.predict(...)
```

- Done! Your backend now uses the HF Space API.
## Test Your Integration

```bash
cd backend/huggingface_space

# Update SPACE_URL in test_space.py first
nano test_space.py

# Run the test
python test_space.py path/to/test/image.jpg 200 150
```

You should see:

```text
TEST PASSED! Your Space is working correctly!
```
## Cost

**Free Tier (CPU Basic):**

- ✅ Free!
- ⚠️ Slower (~5-10 seconds per image)
- ⚠️ Sleeps after 48h of inactivity

**Paid Tier (T4 Small GPU):**

- $0.60/hour
- ✅ Fast (~1-2 seconds per image)
- ✅ Always on

Upgrade: **Space Settings → Hardware → T4 small**
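For a rough sense of what the paid tier costs per request, assuming the ~2 s/image figure above holds:

```python
# Back-of-envelope cost per image on T4 small
rate_per_hour = 0.60        # USD, from the pricing above
seconds_per_image = 2       # assumed average inference time
cost_per_image = rate_per_hour / 3600 * seconds_per_image
print(f"${cost_per_image:.5f} per image")  # about $0.00033, i.e. ~3,000 images per dollar
```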
## Troubleshooting

- **"Application startup failed"** → check the Logs tab and make sure `medsam_vit_b.pth` is uploaded
- **"Space is sleeping"** → the first request wakes it (takes 10-20 s)
- **API timeout** → the Space may be sleeping or overloaded; retry
- **CORS error** → update your backend's CORS settings
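Since a sleeping Space tends to fail or time out only on the first request while it wakes up, a small retry wrapper in your backend can absorb the wake-up delay. This is a generic sketch, not part of the provided files:

```python
import time


def call_with_retry(fn, attempts=3, base_delay=2.0):
    """Call fn(); retry with exponential backoff if it raises.

    A sleeping Space typically fails only while the container wakes
    (10-20 s), so a couple of spaced-out retries is usually enough.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise  # out of attempts: surface the real error
            time.sleep(base_delay * (2 ** i))  # 2 s, 4 s, 8 s, ...


# Example: wrap the predictor call from the integration section
# masks, scores, _ = call_with_retry(lambda: sam_predictor.predict(coords, labels))
```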
## More Info

- Detailed guide: `DEPLOYMENT_GUIDE.md`
- Integration examples: `integration_example.py`
- Test script: `test_space.py`
## Summary

- Create a Space on Hugging Face (2 min)
- Upload 4 files + model (5 min)
- Wait for the build (3-5 min)
- Test via the UI (1 min)
- Integrate with your backend (5 min)

**Total: ~15 minutes!**

Your MedSAM model is now a cloud API!

Questions? Check `DEPLOYMENT_GUIDE.md`.