Hugging Face Spaces Deployment Guide
Prerequisites
- A Hugging Face account (create one at https://huggingface.co/join)
- Your trained ConvNeXt model uploaded to Hugging Face Model Hub
- Git installed on your system
- Git LFS (Large File Storage) installed
Step-by-Step Deployment Instructions
Step 1: Create a New Space on Hugging Face
Click "Create new Space" and fill in the details:
- Space name: project-phoenix-cervical-classification (or your preferred name)
- License: MIT (or your choice)
- SDK: Gradio
- SDK version: 4.0.0 or latest
- Hardware: CPU (free) or GPU (paid; recommended for faster inference)
- Visibility: Public or Private
Then click "Create Space".
Step 2: Clone Your Space Repository
Open a terminal and run:
# Navigate to your project directory
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\Project-Phoenix"
# Clone your Hugging Face Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME
Replace YOUR_USERNAME with your Hugging Face username and YOUR_SPACE_NAME with your space name.
Step 3: Set Up Git LFS (for Large Files)
# Install Git LFS if not already installed
# For Windows: Download from https://git-lfs.github.com/
# Or use: choco install git-lfs
# Initialize Git LFS in the repository
git lfs install
Note: Git LFS is only required if you commit large files (such as model weights) to the Space repository itself. Since app.py downloads the model from the Hugging Face Hub at runtime, you typically won't need to track any files with LFS here.
Step 4: Copy Files to Space Repository
Copy the following files from your Project-Phoenix directory to the cloned space directory:
# Copy the main app file
Copy-Item -Path "../app.py" -Destination "."
# Copy requirements
Copy-Item -Path "../requirements.txt" -Destination "."
# Copy README (rename to README.md)
Copy-Item -Path "../README_HF.md" -Destination "./README.md"
Or manually copy:
- app.py
- requirements.txt
- README_HF.md (renamed to README.md)
Step 5: Update the Model ID in app.py (if needed)
Make sure your app.py has the correct model ID. Open app.py and verify line 33:
HF_MODEL_ID = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
Change "Meet2304/convnextv2-cervical-cell-classification" to your actual model ID if different.
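Because app.py reads the model ID through os.getenv with a fallback, you can also override it without editing the file by defining HF_MODEL_ID under your Space's Settings, in "Variables and secrets". A minimal sketch of the lookup behavior (the "your-username/your-model" value is a placeholder, not a real repository):

```python
import os

# Simulate a fresh environment with no override set: the fallback default is used
os.environ.pop("HF_MODEL_ID", None)
model_id = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
assert model_id == "Meet2304/convnextv2-cervical-cell-classification"

# On a Space, define HF_MODEL_ID under Settings -> "Variables and secrets" instead
os.environ["HF_MODEL_ID"] = "your-username/your-model"  # placeholder value
model_id = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
assert model_id == "your-username/your-model"  # the environment variable wins
```

This lets you point the same app code at a different model checkpoint per deployment.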
Step 6: Commit and Push to Hugging Face
# Add all files
git add .
# Commit the changes
git commit -m "Initial deployment of Project Phoenix Gradio app"
# Push to Hugging Face
git push
You may be prompted for credentials:
- Username: Your Hugging Face username
- Password: Use a Hugging Face Access Token (not your password)
To create an access token:
- Go to https://huggingface.co/settings/tokens
- Click "New token"
- Name it (e.g., "Space Deploy")
- Select "write" permission
- Copy the token and use it as the password
Step 7: Monitor Deployment
- Go to your Space URL: https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
- You'll see a "Building" status
- Check the Logs tab to monitor the build process
- Common stages:
- Installing dependencies from requirements.txt
- Loading the model from Hugging Face Hub
- Starting the Gradio server
The build typically takes 3-10 minutes depending on hardware.
Step 8: Test Your Deployed App
Once the status changes to "Running":
- Your app will be live at: https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
- Test both tabs:
  - Basic Prediction: Upload a cell image and click "Classify"
  - Prediction + Explainability: Upload an image and see the Grad-CAM visualization
Troubleshooting
Build Fails Due to Dependencies
If you see errors during installation:
- Check the Logs tab for specific errors
- Common issues:
- PyTorch version: May need to specify CPU version for free tier
- OpenCV: Sometimes requires additional system libraries
Update requirements.txt if needed:
torch>=2.0.0
torchvision>=0.15.0
transformers>=4.30.0
gradio>=4.0.0
opencv-python-headless>=4.8.0 # Use headless version for servers
numpy>=1.24.0
Pillow>=10.0.0
grad-cam>=1.4.8
Model Not Loading
If you see "Model not found" errors:
- Verify your model is public (or the Space has access if private)
- Check that the model ID in app.py is correct
- Ensure your model was properly uploaded to the Hugging Face Model Hub
Out of Memory on CPU
If the free CPU tier runs out of memory:
- Upgrade to a GPU Space (paid)
- Or optimize the model:
- Use model quantization
- Reduce batch size (already 1 in this app)
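As an illustration of the quantization suggestion above, PyTorch's dynamic quantization converts a model's Linear layers to int8 at load time, reducing memory use on CPU. This is a minimal sketch with a stand-in model, not the actual ConvNeXt checkpoint; note that dynamic quantization mainly targets nn.Linear layers, so the benefit for a convolution-heavy backbone may be limited:

```python
import torch
import torch.nn as nn

# Stand-in classifier; in app.py you would quantize the loaded model instead
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 224 * 224, 5))

# Replace Linear layers with dynamically quantized int8 versions
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    out = quantized(torch.randn(1, 3, 224, 224))
print(out.shape)  # torch.Size([1, 5])
```

The quantized model is a drop-in replacement for inference and typically shrinks the weights of the affected layers by roughly 4x.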
Connecting Your Next.js Frontend to the Deployed Space
Option 1: Use Gradio Client API (Recommended)
The easiest way is to use Gradio's client API. Your Space provides an API endpoint automatically.
Get your Space API URL:
https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space
Install the Gradio client in your Next.js project:
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\phoenix-app"
npm install @gradio/client
- Create an API service file (src/lib/inference-api.ts):
import { client } from "@gradio/client";
const SPACE_URL = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space";
export async function predictBasic(imageFile: File) {
try {
const app = await client(SPACE_URL);
const result = await app.predict("/predict_basic", [imageFile]);
return result.data;
} catch (error) {
console.error("Prediction error:", error);
throw error;
}
}
export async function predictWithExplainability(imageFile: File) {
try {
const app = await client(SPACE_URL);
const result = await app.predict("/predict_with_explainability", [imageFile]);
return result.data;
} catch (error) {
console.error("Prediction with explainability error:", error);
throw error;
}
}
- Update your inference page to use the API:
// In your handleAnalyze function
const handleAnalyze = async () => {
setIsAnalyzing(true);
try {
if (selectedSource === 'upload' && fileState.files.length > 0) {
const file = fileState.files[0].file as File;
const result = await predictBasic(file);
setAnalysisResult({
predicted_class: result.label,
confidence: result.confidences[0].confidence,
top_predictions: result.confidences.map((c: any) => ({
class: c.label,
probability: c.confidence
}))
});
} else if (selectedSource === 'sample' && selectedSample) {
// For sample images, fetch the image first then predict
const response = await fetch(currentImage!);
const blob = await response.blob();
const file = new File([blob], 'sample.jpg', { type: 'image/jpeg' });
const result = await predictBasic(file);
setAnalysisResult({
predicted_class: result.label,
confidence: result.confidences[0].confidence,
top_predictions: result.confidences.map((c: any) => ({
class: c.label,
probability: c.confidence
}))
});
}
} catch (error) {
console.error("Analysis error:", error);
// Handle error appropriately
} finally {
setIsAnalyzing(false);
}
};
Option 2: Use Direct API Endpoint
Alternatively, you can call the HTTP endpoint that Gradio exposes automatically. Note that the exact route and payload format depend on the Gradio SDK version of your Space (newer versions use /call/<api_name> routes; check the "Use via API" link on your Space page). With the legacy /api/predict route, the image is sent as a base64 data URL inside a JSON body rather than as multipart form data:
const SPACE_API = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space/api/predict";
async function predictWithFetch(imageFile: File) {
  // Gradio expects a JSON body, so read the file as a base64 data URL first
  const dataUrl: string = await new Promise((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = reject;
    reader.readAsDataURL(imageFile);
  });
  const response = await fetch(SPACE_API, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ data: [dataUrl] }),
  });
  return await response.json();
}
Monitoring and Analytics
- View Usage Statistics: Go to your Space settings to see usage metrics
- Check Logs: Monitor real-time logs in the Space interface
- Set up Alerts: Configure notifications for errors or downtime
Security Considerations
- API Keys: If you need authentication, use Hugging Face's built-in authentication
- Rate Limiting: Consider implementing rate limiting for public spaces
- Model Access: Ensure your model repository has appropriate access controls
Cost Considerations
- CPU (Free): Limited resources, slower inference
- CPU Basic ($5/month): Better performance
- GPU T4 Small ($0.60/hour): Recommended for production
- GPU A10G Large ($3.15/hour): High-performance inference
Next Steps
- Deploy the Space following steps 1-7
- Test the interface directly on Hugging Face
- Integrate with your Next.js frontend using the API
- Monitor performance and upgrade hardware if needed
- Collect user feedback and iterate
Quick Checklist
- Created Hugging Face Space
- Cloned Space repository
- Copied app.py, requirements.txt, README.md
- Updated model ID in app.py
- Committed and pushed to Hugging Face
- Verified deployment in Logs
- Tested both prediction modes
- Integrated API with Next.js frontend
- Tested end-to-end workflow
Need Help? Check the logs tab in your Space or refer to Hugging Face documentation.