# Hugging Face Spaces Deployment Guide

## Prerequisites

1. A Hugging Face account (create one at https://huggingface.co/join)
2. Your trained ConvNeXt model uploaded to the Hugging Face Model Hub
3. Git installed on your system
4. Git LFS (Large File Storage) installed

---

## Step-by-Step Deployment Instructions
### Step 1: Create a New Space on Hugging Face

1. Go to https://huggingface.co/spaces
2. Click **"Create new Space"**
3. Fill in the details:
   - **Space name**: `project-phoenix-cervical-classification` (or your preferred name)
   - **License**: MIT (or your choice)
   - **Select SDK**: Choose **Gradio**
   - **SDK version**: 4.0.0 or latest
   - **Hardware**: CPU (Free) or GPU (paid; recommended for faster inference)
   - **Visibility**: Public or Private
4. Click **"Create Space"**

---
### Step 2: Clone Your Space Repository

Open a terminal and run:

```bash
# Navigate to your project directory
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\Project-Phoenix"

# Clone your Hugging Face Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME
```

Replace `YOUR_USERNAME` with your Hugging Face username and `YOUR_SPACE_NAME` with your space name.

---
### Step 3: Set Up Git LFS (for Large Files)

```bash
# Install Git LFS if not already installed
# For Windows: download from https://git-lfs.github.com/
# Or use: choco install git-lfs

# Initialize Git LFS in the repository
git lfs install
```

Note: in this setup the model weights are downloaded from the Model Hub at runtime, so Git LFS only becomes strictly necessary if you later commit large files (e.g., model weights or sample images) directly to the Space repository.

---
### Step 4: Copy Files to the Space Repository

Copy the following files from your Project-Phoenix directory to the cloned Space directory:

```powershell
# Copy the main app file
Copy-Item -Path "../app.py" -Destination "."

# Copy requirements
Copy-Item -Path "../requirements.txt" -Destination "."

# Copy README (rename to README.md)
Copy-Item -Path "../README_HF.md" -Destination "./README.md"
```

Or manually copy:
- `app.py`
- `requirements.txt`
- Rename `README_HF.md` to `README.md`
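If you prefer a cross-platform alternative to the PowerShell commands above, the same copy step can be sketched in Python. The helper name `copy_app_files` is illustrative, not part of the project:

```python
import shutil
from pathlib import Path

def copy_app_files(src_dir: str, dest_dir: str) -> list[str]:
    """Copy app.py and requirements.txt as-is, and README_HF.md to README.md."""
    src, dest = Path(src_dir), Path(dest_dir)
    # Destination names: README_HF.md is renamed on copy, the rest keep their names.
    mapping = {
        "app.py": "app.py",
        "requirements.txt": "requirements.txt",
        "README_HF.md": "README.md",
    }
    copied = []
    for name, new_name in mapping.items():
        shutil.copy(src / name, dest / new_name)
        copied.append(new_name)
    return copied
```

From inside the cloned Space directory you would then call `copy_app_files("..", ".")`, adjusting the source path to your actual layout.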
---

### Step 5: Update the Model ID in app.py (if needed)

Make sure your `app.py` has the correct model ID. Open `app.py` and verify line 33:

```python
HF_MODEL_ID = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
```

Change `"Meet2304/convnextv2-cervical-cell-classification"` to your actual model ID if different.
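Because of the `os.getenv` fallback above, you can also point the Space at a different model without editing code by defining an `HF_MODEL_ID` variable in the Space's settings. A minimal sketch of the pattern (the `resolve_model_id` helper is illustrative, not taken from app.py):

```python
import os

DEFAULT_MODEL_ID = "Meet2304/convnextv2-cervical-cell-classification"

def resolve_model_id() -> str:
    # The environment variable, if set in the Space's settings, wins over the default.
    return os.getenv("HF_MODEL_ID", DEFAULT_MODEL_ID)

os.environ["HF_MODEL_ID"] = "your-username/your-model"
print(resolve_model_id())  # your-username/your-model
```

This is handy for testing a new model checkpoint without pushing a new commit to the Space.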
---

### Step 6: Commit and Push to Hugging Face

```bash
# Add all files
git add .

# Commit the changes
git commit -m "Initial deployment of Project Phoenix Gradio app"

# Push to Hugging Face
git push
```

You may be prompted for credentials:

- **Username**: Your Hugging Face username
- **Password**: Use a Hugging Face **Access Token** (not your password)

To create an access token:

1. Go to https://huggingface.co/settings/tokens
2. Click "New token"
3. Name it (e.g., "Space Deploy")
4. Select "write" permission
5. Copy the token and use it as the password
---

### Step 7: Monitor the Deployment

1. Go to your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`
2. You'll see a "Building" status
3. Check the **Logs** tab to monitor the build process
4. Common stages:
   - Installing dependencies from requirements.txt
   - Loading the model from the Hugging Face Hub
   - Starting the Gradio server

The build typically takes 3-10 minutes depending on hardware.
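If you want to check the build status programmatically rather than watching the Logs tab, a small sketch (the network call is commented out because it needs `huggingface_hub` installed and, for private Spaces, a token; the stage names follow the Hub's runtime states):

```python
def is_ready(stage: str) -> bool:
    # A Space is usable once its runtime stage reaches "RUNNING";
    # "BUILDING" means the build described above is still in progress.
    return stage == "RUNNING"

# Network call, shown for illustration only:
#   from huggingface_hub import HfApi
#   runtime = HfApi().get_space_runtime("YOUR_USERNAME/YOUR_SPACE_NAME")
#   print(runtime.stage, is_ready(runtime.stage))

print(is_ready("BUILDING"))  # False
```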
---

### Step 8: Test Your Deployed App

Once the status changes to "Running":

1. Your app will be live at `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`
2. Test both tabs:
   - **Basic Prediction**: Upload a cell image and click "Classify"
   - **Prediction + Explainability**: Upload an image and view the Grad-CAM visualization

---
## Troubleshooting

### Build Fails Due to Dependencies

If you see errors during installation:

1. Check the **Logs** tab for specific errors
2. Common issues:
   - **PyTorch version**: you may need to specify the CPU build for the free tier
   - **OpenCV**: sometimes requires additional system libraries

Update `requirements.txt` if needed:

```txt
torch>=2.0.0
torchvision>=0.15.0
transformers>=4.30.0
gradio>=4.0.0
opencv-python-headless>=4.8.0  # Use the headless build on servers
numpy>=1.24.0
Pillow>=10.0.0
grad-cam>=1.4.8
```
### Model Not Loading

If you see "Model not found" errors:

1. Verify your model is public (or that the Space has access if it's private)
2. Check that the model ID in `app.py` is correct
3. Ensure your model was properly uploaded to the Hugging Face Model Hub
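A quick way to rule out a typo is to check that the model ID at least has the expected `owner/name` shape, then confirm the repository resolves on the Hub. The `looks_like_repo_id` helper is illustrative; the network check is sketched in comments because it needs `huggingface_hub` and network access:

```python
def looks_like_repo_id(repo_id: str) -> bool:
    # A Hub model ID has the form "owner/name", with both parts non-empty.
    parts = repo_id.split("/")
    return len(parts) == 2 and all(parts)

# Network check, shown for illustration (raises if the repo is missing,
# or private and unauthenticated):
#   from huggingface_hub import model_info
#   model_info("Meet2304/convnextv2-cervical-cell-classification")

print(looks_like_repo_id("Meet2304/convnextv2-cervical-cell-classification"))  # True
```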
### Out of Memory on CPU

If the free CPU tier runs out of memory:

1. Upgrade to a GPU Space (paid)
2. Or optimize the model:
   - Use model quantization
   - Reduce the batch size (already 1 in this app)
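Dynamic quantization is the lowest-effort option here: a one-line PyTorch call converts `nn.Linear` weights to int8 at load time. A toy sketch on a stand-in classifier head (the convolutional parts of a ConvNeXt are not covered by dynamic quantization, so this mainly helps the linear layers):

```python
import torch
import torch.nn as nn

# Toy stand-in for a classifier head; dynamic quantization rewrites
# the nn.Linear layers to use int8 weights, shrinking memory use.
model = nn.Sequential(nn.Linear(768, 256), nn.ReLU(), nn.Linear(256, 5))

quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

out = quantized(torch.randn(1, 768))
print(out.shape)  # torch.Size([1, 5])
```

Measure accuracy after quantizing; int8 weights can shift predictions slightly, which matters for a medical classifier.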
---

## Connecting Your Next.js Frontend to the Deployed Space

### Option 1: Use the Gradio Client API (Recommended)

The easiest approach is Gradio's client library; your Space exposes an API endpoint automatically.

1. **Get your Space API URL**: `https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space`

2. **Install the Gradio client in your Next.js project**:

```bash
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\phoenix-app"
npm install @gradio/client
```

3. **Create an API service file** (`src/lib/inference-api.ts`):
```typescript
import { Client } from "@gradio/client";

const SPACE_URL = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space";

// Note: @gradio/client versions before 1.0 export a lowercase factory instead:
//   import { client } from "@gradio/client"; const app = await client(SPACE_URL);

export async function predictBasic(imageFile: File) {
  try {
    const app = await Client.connect(SPACE_URL);
    const result = await app.predict("/predict_basic", [imageFile]);
    return result.data;
  } catch (error) {
    console.error("Prediction error:", error);
    throw error;
  }
}

export async function predictWithExplainability(imageFile: File) {
  try {
    const app = await Client.connect(SPACE_URL);
    const result = await app.predict("/predict_with_explainability", [imageFile]);
    return result.data;
  } catch (error) {
    console.error("Prediction with explainability error:", error);
    throw error;
  }
}
```
4. **Update your inference page** to use the API:

```typescript
// In your handleAnalyze function
const handleAnalyze = async () => {
  setIsAnalyzing(true);
  try {
    if (selectedSource === 'upload' && fileState.files.length > 0) {
      const file = fileState.files[0].file as File;
      // predictBasic returns result.data, an array; the Label output is its first element
      const [prediction] = await predictBasic(file);
      setAnalysisResult({
        predicted_class: prediction.label,
        confidence: prediction.confidences[0].confidence,
        top_predictions: prediction.confidences.map((c: any) => ({
          class: c.label,
          probability: c.confidence
        }))
      });
    } else if (selectedSource === 'sample' && selectedSample) {
      // For sample images, fetch the image first, then predict
      const response = await fetch(currentImage!);
      const blob = await response.blob();
      const file = new File([blob], 'sample.jpg', { type: 'image/jpeg' });
      const [prediction] = await predictBasic(file);
      setAnalysisResult({
        predicted_class: prediction.label,
        confidence: prediction.confidences[0].confidence,
        top_predictions: prediction.confidences.map((c: any) => ({
          class: c.label,
          probability: c.confidence
        }))
      });
    }
  } catch (error) {
    console.error("Analysis error:", error);
    // Handle the error appropriately (e.g., show a toast)
  } finally {
    setIsAnalyzing(false);
  }
};
```
### Option 2: Use the Direct REST Endpoint

Alternatively, you can call the REST API that Gradio generates automatically. Note that the endpoint shape changed in Gradio 4: requests go to `/call/<api_name>` and return an `event_id`, which you then fetch to get the result. The exact payload format depends on your Gradio version and component types, so check the "Use via API" link at the bottom of your running Space. A sketch:

```typescript
const SPACE_URL = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space";

// Gradio 4 two-step REST flow: POST the inputs, then GET the result stream.
async function predictWithFetch(imageDataUrl: string) {
  const post = await fetch(`${SPACE_URL}/call/predict_basic`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ data: [imageDataUrl] }),
  });
  const { event_id } = await post.json();
  // The result arrives as a server-sent-events stream
  const result = await fetch(`${SPACE_URL}/call/predict_basic/${event_id}`);
  return await result.text();
}
```
---

## Monitoring and Analytics

1. **View usage statistics**: Go to your Space settings to see usage metrics
2. **Check logs**: Monitor real-time logs in the Space interface
3. **Set up alerts**: Configure notifications for errors or downtime

---

## Security Considerations

1. **API keys**: If you need authentication, use Hugging Face's built-in authentication
2. **Rate limiting**: Consider implementing rate limiting for public Spaces
3. **Model access**: Ensure your model repository has appropriate access controls

---

## Cost Considerations

- **CPU (Free)**: Limited resources, slower inference
- **CPU Basic ($5/month)**: Better performance
- **GPU T4 Small ($0.60/hour)**: Recommended for production
- **GPU A10G Large ($3.15/hour)**: High-performance inference
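Keep in mind that the GPU tiers are billed per hour of uptime, so an always-on GPU Space adds up quickly. A back-of-envelope calculation using the rates listed above (actual billing depends on Hugging Face's current pricing):

```python
def monthly_cost(rate_per_hour: float, hours_per_day: float = 24, days: int = 30) -> float:
    # Simple uptime-based estimate: rate x hours/day x days, rounded to cents.
    return round(rate_per_hour * hours_per_day * days, 2)

print(monthly_cost(0.60))  # 432.0  -> T4 Small, always on
print(monthly_cost(3.15))  # 2268.0 -> A10G Large, always on
```

Spaces on upgraded hardware can be configured to sleep after a period of inactivity, which reduces the billed hours well below the always-on figures.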
---

## Next Steps

1. **Deploy the Space** following Steps 1-7
2. **Test the interface** directly on Hugging Face
3. **Integrate with your Next.js frontend** using the API
4. **Monitor performance** and upgrade hardware if needed
5. **Collect user feedback** and iterate

---

## Additional Resources

- [Gradio Documentation](https://gradio.app/docs/)
- [Hugging Face Spaces Guide](https://huggingface.co/docs/hub/spaces)
- [Gradio Client Library](https://gradio.app/guides/getting-started-with-the-js-client/)

---
## Quick Checklist

- [ ] Created Hugging Face Space
- [ ] Cloned Space repository
- [ ] Copied app.py, requirements.txt, README.md
- [ ] Updated model ID in app.py
- [ ] Committed and pushed to Hugging Face
- [ ] Verified deployment in Logs
- [ ] Tested both prediction modes
- [ ] Integrated API with Next.js frontend
- [ ] Tested end-to-end workflow

---

**Need Help?** Check the Logs tab in your Space or refer to the Hugging Face documentation.