
⚡ Quick Start - Deploy MedSAM to HuggingFace Space

🎯 Goal

Deploy your MedSAM model as an API that you can call from your backend.

📦 What's in This Folder

huggingface_space/
├── app.py                   # Gradio app (upload to HF Space)
├── requirements.txt         # Dependencies (upload to HF Space)
├── README.md                # Space description (upload to HF Space)
├── .gitattributes           # Git LFS config (upload to HF Space)
├── DEPLOYMENT_GUIDE.md      # Detailed deployment steps
├── integration_example.py   # How to use in your backend
├── test_space.py            # Test script after deployment
└── QUICKSTART.md            # This file

🚀 Deploy in 5 Steps

Step 1: Create Space (2 min)

  1. Go to: https://huggingface.co/new-space
  2. Fill in:
    • Space name: medsam-inference
    • SDK: Gradio
    • Hardware: CPU basic (free) or T4 small (GPU, $0.60/hr)
  3. Click Create Space
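
If you prefer scripting this step, a Space can also be created with the huggingface_hub client. A minimal sketch, assuming huggingface_hub is installed and you have run `huggingface-cli login`; the helper names here are mine, not part of this repo:

```python
def space_repo_id(username: str, name: str = "medsam-inference") -> str:
    # Spaces are addressed as "<username>/<name>"
    return f"{username}/{name}"

def create_space(username: str, name: str = "medsam-inference"):
    # Imported lazily so this snippet loads even without huggingface_hub installed
    from huggingface_hub import HfApi
    api = HfApi()
    return api.create_repo(
        repo_id=space_repo_id(username, name),
        repo_type="space",     # create a Space, not a model repo
        space_sdk="gradio",    # matches the SDK chosen in the web form
    )
```

Hardware can still be switched later under Space Settings, exactly as in the web flow.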

Step 2: Upload Files (3 min)

Option A: Via Web (Easiest)

  1. In your Space, click Files → Add file → Upload files
  2. Upload these 4 files:
    • app.py
    • requirements.txt
    • README.md
    • .gitattributes

Option B: Via Git

# Clone your Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/medsam-inference
cd medsam-inference

# Copy the Space files from this folder into the clone
cp /path/to/huggingface_space/{app.py,requirements.txt,README.md,.gitattributes} .

# Commit
git add .
git commit -m "Initial commit"
git push

Step 3: Upload Model (2 min)

Download your model:

Go to: https://huggingface.co/Aniketg6/Fine-Tuned-MedSAM

Download: medsam_vit_b.pth (375 MB)

Upload to Space:

  • Via web: Files → Add file → Upload file → Upload medsam_vit_b.pth
  • Via git:
    # Make sure Git LFS is installed
    git lfs install
    git lfs track "*.pth"
    
    # Copy your model
    cp /path/to/medsam_vit_b.pth .
    
    # Commit (will use LFS for large file)
    git add .gitattributes medsam_vit_b.pth
    git commit -m "Add MedSAM model"
    git push
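
A third option is pushing the checkpoint with the huggingface_hub client, which handles Git LFS for you. A sketch, assuming huggingface_hub is installed and you are logged in; `upload_model` is a hypothetical helper:

```python
def upload_model(checkpoint_path: str,
                 repo_id: str = "YOUR_USERNAME/medsam-inference") -> None:
    # Imported lazily so this snippet loads even without huggingface_hub installed
    from huggingface_hub import HfApi
    api = HfApi()
    api.upload_file(
        path_or_fileobj=checkpoint_path,   # local path to medsam_vit_b.pth
        path_in_repo="medsam_vit_b.pth",   # filename the Space's app.py expects
        repo_id=repo_id,
        repo_type="space",                 # target the Space, not a model repo
    )
```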
    

Step 4: Wait for Build (3-5 min)

  • HuggingFace will build your Space automatically
  • Check Logs tab to see progress
  • When done, you'll see "Running" status ✅

Step 5: Test It! (1 min)

  1. Visit your Space: https://huggingface.co/spaces/YOUR_USERNAME/medsam-inference
  2. Click Simple Interface tab
  3. Upload a test image
  4. Enter X, Y coordinates (e.g., 200, 150)
  5. Click Segment
  6. You should see a mask! 🎉

✅ Your API is Ready!

Endpoint: https://YOUR_USERNAME-medsam-inference.hf.space/api/predict
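
The endpoint takes a Gradio-style `{"data": [image, points]}` payload: a base64 PNG data URI plus a JSON string of click prompts. A minimal sketch that builds (but does not send) such a payload; the `coords`/`labels` field names are assumed to match what app.py parses:

```python
import base64
import json
from io import BytesIO

import numpy as np
from PIL import Image

# Hypothetical test image: a 64x64 black RGB array standing in for a real scan
image = np.zeros((64, 64, 3), dtype=np.uint8)

# Encode the image as a base64 PNG data URI (first Gradio input)
buf = BytesIO()
Image.fromarray(image).save(buf, format="PNG")
img_b64 = base64.b64encode(buf.getvalue()).decode()

# Pack the click prompts as a JSON string (second Gradio input)
points_json = json.dumps({
    "coords": [[200, 150]],   # one click at (x=200, y=150)
    "labels": [1],            # 1 = foreground, 0 = background
    "multimask_output": True,
})

payload = {"data": [f"data:image/png;base64,{img_b64}", points_json]}
```

POSTing `payload` as JSON to the endpoint above is exactly what the client in the next section does.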


🔗 Use in Your Backend

Quick Integration

  1. Create client file:
cd backend
nano medsam_space_client.py
  2. Add this code:
import requests
import json
import base64
from io import BytesIO
from PIL import Image
import numpy as np

SPACE_URL = "https://YOUR_USERNAME-medsam-inference.hf.space/api/predict"

class MedSAMSpacePredictor:
    """Drop-in replacement for SamPredictor that calls the HF Space API."""

    def __init__(self, space_url):
        self.space_url = space_url
        self.image_array = None

    def set_image(self, image):
        # Store the HxWx3 uint8 array; it is sent with every predict() call
        self.image_array = image

    def predict(self, point_coords, point_labels, multimask_output=True, **kwargs):
        if self.image_array is None:
            raise RuntimeError("Call set_image() before predict()")

        # Encode the image as a base64 PNG data URI (first Gradio input)
        img = Image.fromarray(self.image_array)
        buf = BytesIO()
        img.save(buf, format="PNG")
        img_b64 = base64.b64encode(buf.getvalue()).decode()

        # Pack the click prompts as a JSON string (second Gradio input)
        points_json = json.dumps({
            "coords": point_coords.tolist(),
            "labels": point_labels.tolist(),
            "multimask_output": multimask_output
        })

        # Call the Space API
        resp = requests.post(
            self.space_url,
            json={"data": [f"data:image/png;base64,{img_b64}", points_json]},
            timeout=120
        )
        resp.raise_for_status()

        # Parse masks/scores back into numpy, matching SamPredictor's return shape
        result = json.loads(resp.json()["data"][0])
        masks = np.array([np.array(m["mask_data"], dtype=bool) for m in result["masks"]])
        scores = np.array(result["scores"])

        return masks, scores, None
  3. Update app.py:
# Add import
from medsam_space_client import MedSAMSpacePredictor

# Replace this:
# sam_predictor = SamPredictor(sam)

# With this:
sam_predictor = MedSAMSpacePredictor(
    "https://YOUR_USERNAME-medsam-inference.hf.space/api/predict"
)

# Everything else stays the same!
# sam_predictor.set_image(image_array)
# masks, scores, _ = sam_predictor.predict(...)
  4. Done! Your backend now uses the HF Space API ✅
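
For reference, the Space returns a JSON string whose masks/scores fields the client converts back to numpy. A mocked example showing the assumed schema (real `mask_data` arrays are image-sized; the field names mirror the client code above):

```python
import json

import numpy as np

# Mocked response body, shaped like what the client's predict() expects
mock_response = json.dumps({
    "masks": [
        {"mask_data": [[0, 1], [1, 1]]},   # tiny 2x2 stand-in masks
        {"mask_data": [[0, 0], [0, 1]]},
    ],
    "scores": [0.91, 0.73],
})

# Same parsing the client performs on resp.json()["data"][0]
result = json.loads(mock_response)
masks = np.array([np.array(m["mask_data"], dtype=bool) for m in result["masks"]])
scores = np.array(result["scores"])

print(masks.shape, scores.shape)  # (2, 2, 2) (2,)
```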

🧪 Test Your Integration

cd backend/huggingface_space

# Update SPACE_URL in test_space.py first
nano test_space.py

# Run test
python test_space.py path/to/test/image.jpg 200 150

You should see:

✅ TEST PASSED! Your Space is working correctly!

💰 Cost

Free Tier (CPU Basic):

  • ✅ Free!
  • ⚠️ Slower (~5-10 seconds per image)
  • ⚠️ Sleeps after 48h inactivity

Paid Tier (T4 Small GPU):

  • 💰 $0.60/hour
  • ✅ Fast (~1-2 seconds)
  • ✅ Always on

Upgrade: Space Settings → Hardware → T4 small


πŸ› Troubleshooting

"Application startup failed" → Check the Logs tab; make sure medsam_vit_b.pth is uploaded

"Space is sleeping" → The first request wakes it (takes 10-20s)

API timeout → The Space might be sleeping or overloaded; retry

CORS error → Update your backend CORS settings
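
Since a sleeping free-tier Space makes the first request fail or time out, a small retry wrapper on the client side helps. A sketch; `call_with_retry` is a hypothetical helper, not part of this repo:

```python
import time

def call_with_retry(fn, attempts=3, base_delay=5.0, sleep=time.sleep):
    """Call fn(), retrying on any exception (e.g. a timeout while the Space wakes)."""
    last_exc = None
    for attempt in range(attempts):
        try:
            return fn()
        except Exception as exc:
            last_exc = exc
            if attempt < attempts - 1:
                sleep(base_delay * (attempt + 1))  # linear backoff: 5s, 10s, ...
    raise last_exc
```

Usage: wrap the HTTP call, e.g. `call_with_retry(lambda: requests.post(SPACE_URL, json=payload, timeout=120))`.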


📚 More Info

  • Detailed guide: DEPLOYMENT_GUIDE.md
  • Integration examples: integration_example.py
  • Test script: test_space.py

✨ Summary

  1. ✅ Create Space on HuggingFace (2 min)
  2. ✅ Upload 4 files + model (5 min)
  3. ✅ Wait for build (3-5 min)
  4. ✅ Test via UI (1 min)
  5. ✅ Integrate with backend (5 min)
  6. 🎉 Total: ~15 minutes!

Your MedSAM model is now a cloud API! 🚀


Questions? Check: DEPLOYMENT_GUIDE.md