# Hugging Face Spaces Deployment Guide
## Prerequisites
1. A Hugging Face account (create one at https://huggingface.co/join)
2. Your trained ConvNeXt model uploaded to Hugging Face Model Hub
3. Git installed on your system
4. Git LFS (Large File Storage) installed
---
## Step-by-Step Deployment Instructions
### Step 1: Create a New Space on Hugging Face
1. Go to https://huggingface.co/spaces
2. Click **"Create new Space"**
3. Fill in the details:
- **Space name**: `project-phoenix-cervical-classification` (or your preferred name)
- **License**: MIT (or your choice)
- **Select SDK**: Choose **Gradio**
- **SDK version**: 4.0.0 or latest
- **Hardware**: CPU (Free) or GPU (Paid - recommended for faster inference)
- **Visibility**: Public or Private
4. Click **"Create Space"**
---
### Step 2: Clone Your Space Repository
Open a terminal and run:
```bash
# Navigate to your project directory
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\Project-Phoenix"
# Clone your Hugging Face Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME
```
Replace `YOUR_USERNAME` with your Hugging Face username and `YOUR_SPACE_NAME` with your space name.
---
### Step 3: Set Up Git LFS (for Large Files)
```bash
# Install Git LFS if not already installed
# For Windows: Download from https://git-lfs.github.com/
# Or use: choco install git-lfs
# Initialize Git LFS in the repository
git lfs install
```
---
### Step 4: Copy Files to Space Repository
Copy the following files from your Project-Phoenix directory to the cloned space directory:
```powershell
# Copy the main app file
Copy-Item -Path "../app.py" -Destination "."
# Copy requirements
Copy-Item -Path "../requirements.txt" -Destination "."
# Copy README (rename to README.md)
Copy-Item -Path "../README_HF.md" -Destination "./README.md"
```
Or manually copy:
- `app.py`
- `requirements.txt`
- Rename `README_HF.md` to `README.md`
---
### Step 5: Update the Model ID in app.py (if needed)
Make sure your `app.py` has the correct model ID. Open `app.py` and verify line 33:
```python
HF_MODEL_ID = os.getenv("HF_MODEL_ID", "Meet2304/convnextv2-cervical-cell-classification")
```
Change `"Meet2304/convnextv2-cervical-cell-classification"` to your actual model ID if different.
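Because the model ID is read from an environment variable, you can also override it from your Space's settings (under **Variables**) instead of editing code. A minimal sketch of the fallback behavior in `app.py` (the placeholder ID `your-username/your-model` is illustrative):

```python
import os

# Default used when HF_MODEL_ID is not set in the Space's environment
DEFAULT_MODEL_ID = "Meet2304/convnextv2-cervical-cell-classification"

def resolve_model_id() -> str:
    """Return the model ID, preferring the HF_MODEL_ID environment variable."""
    return os.getenv("HF_MODEL_ID", DEFAULT_MODEL_ID)

# With no variable set, the default is used
print(resolve_model_id())

# Setting the variable (Space: Settings -> Variables) overrides the default
os.environ["HF_MODEL_ID"] = "your-username/your-model"
print(resolve_model_id())  # your-username/your-model
```

Using a Space variable means you can swap models without a new commit; the Space restarts and picks up the new value.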
---
### Step 6: Commit and Push to Hugging Face
```bash
# Add all files
git add .
# Commit the changes
git commit -m "Initial deployment of Project Phoenix Gradio app"
# Push to Hugging Face
git push
```
You may be prompted for credentials:
- **Username**: Your Hugging Face username
- **Password**: Use a Hugging Face **Access Token** (not your password)
To create an access token:
1. Go to https://huggingface.co/settings/tokens
2. Click "New token"
3. Name it (e.g., "Space Deploy")
4. Select "write" permission
5. Copy the token and use it as the password
---
### Step 7: Monitor Deployment
1. Go to your Space URL: `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`
2. You'll see a "Building" status
3. Check the **Logs** tab to monitor the build process
4. Common stages:
- Installing dependencies from requirements.txt
- Loading the model from Hugging Face Hub
- Starting the Gradio server
The build typically takes 3-10 minutes depending on hardware.
---
### Step 8: Test Your Deployed App
Once the status changes to "Running":
1. Your app will be live at: `https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME`
2. Test both tabs:
- **Basic Prediction**: Upload a cell image and click "Classify"
- **Prediction + Explainability**: Upload an image and see GRAD-CAM visualization
---
## Troubleshooting
### Build Fails Due to Dependencies
If you see errors during installation:
1. Check the **Logs** tab for specific errors
2. Common issues:
- **PyTorch version**: May need to specify CPU version for free tier
- **OpenCV**: Sometimes requires additional system libraries
Update `requirements.txt` if needed:
```txt
torch>=2.0.0
torchvision>=0.15.0
transformers>=4.30.0
gradio>=4.0.0
opencv-python-headless>=4.8.0 # Use headless version for servers
numpy>=1.24.0
Pillow>=10.0.0
grad-cam>=1.4.8
```
### Model Not Loading
If you see "Model not found" errors:
1. Verify your model is public (or the Space has access if private)
2. Check the model ID is correct in `app.py`
3. Ensure your model was properly uploaded to Hugging Face Model Hub
### Out of Memory on CPU
If the free CPU tier runs out of memory:
1. Upgrade to a GPU Space (paid)
2. Or optimize the model:
- Use model quantization
- Reduce batch size (already 1 in this app)
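Dynamic quantization is one low-effort option: it converts the model's `Linear` layers to int8 at load time, reducing memory use and often speeding up CPU inference. A minimal sketch on a stand-in model (in the app you would pass the model returned by `from_pretrained` instead; note that dynamic quantization only covers `Linear` layers, so gains on a ConvNeXt are limited to its linear/projection layers):

```python
import torch
import torch.nn as nn

# Stand-in for a loaded classifier; in the app this would be the model
# returned by AutoModelForImageClassification.from_pretrained(...)
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 224 * 224, 256),
    nn.ReLU(),
    nn.Linear(256, 5),  # 5 output classes, as an example
)
model.eval()

# Convert Linear layers to int8 for CPU inference
quantized = torch.quantization.quantize_dynamic(model, {nn.Linear}, dtype=torch.qint8)

with torch.no_grad():
    x = torch.randn(1, 3, 224, 224)  # batch size 1, as in the app
    logits = quantized(x)
print(tuple(logits.shape))  # (1, 5)
```

Measure accuracy after quantizing; int8 conversion can shift predictions slightly, which matters for a medical-imaging demo.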
---
## Connecting Your Next.js Frontend to the Deployed Space
### Option 1: Use Gradio Client API (Recommended)
The easiest way is to use Gradio's client API. Your Space provides an API endpoint automatically.
1. **Get your Space API URL**: `https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space`
2. **Install Gradio Client in your Next.js project**:
```bash
cd "C:\Meet\Projects\Project_8_Phoenix_Cervical Cancer Image Classification\Project-Phoenix\Phoenix\phoenix-app"
npm install @gradio/client
```
3. **Create an API service file** (`src/lib/inference-api.ts`):
```typescript
// @gradio/client < 1.0 exposes `client`; newer versions use `Client.connect(...)`
import { client } from "@gradio/client";

const SPACE_URL = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space";

// The endpoint names ("/predict_basic", "/predict_with_explainability")
// must match the api_name= values set on the Gradio event handlers in app.py.
export async function predictBasic(imageFile: File) {
  try {
    const app = await client(SPACE_URL);
    const result = await app.predict("/predict_basic", [imageFile]);
    return result.data;
  } catch (error) {
    console.error("Prediction error:", error);
    throw error;
  }
}

export async function predictWithExplainability(imageFile: File) {
  try {
    const app = await client(SPACE_URL);
    const result = await app.predict("/predict_with_explainability", [imageFile]);
    return result.data;
  } catch (error) {
    console.error("Prediction with explainability error:", error);
    throw error;
  }
}
```
4. **Update your inference page** to use the API:
```typescript
// In your handleAnalyze function
const handleAnalyze = async () => {
  setIsAnalyzing(true);
  try {
    let file: File | null = null;
    if (selectedSource === 'upload' && fileState.files.length > 0) {
      file = fileState.files[0].file as File;
    } else if (selectedSource === 'sample' && selectedSample) {
      // For sample images, fetch the image first, then predict
      const response = await fetch(currentImage!);
      const blob = await response.blob();
      file = new File([blob], 'sample.jpg', { type: 'image/jpeg' });
    }
    if (file) {
      // predictBasic returns result.data, an array of output components;
      // the first element is the Label payload: { label, confidences }
      const [labelData] = await predictBasic(file);
      setAnalysisResult({
        predicted_class: labelData.label,
        confidence: labelData.confidences[0].confidence,
        top_predictions: labelData.confidences.map((c: any) => ({
          class: c.label,
          probability: c.confidence
        }))
      });
    }
  } catch (error) {
    console.error("Analysis error:", error);
    // Handle the error appropriately (e.g., show a toast to the user)
  } finally {
    setIsAnalyzing(false);
  }
};
```
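The mapping above assumes Gradio's Label component payload shape: `{ label, confidences: [{ label, confidence }, ...] }`. A small Python sketch of the same transform, using hypothetical sample data with illustrative class names, makes the expected shapes concrete:

```python
# Hypothetical payload, shaped like Gradio's Label output component
label_payload = {
    "label": "Dyskeratotic",
    "confidences": [
        {"label": "Dyskeratotic", "confidence": 0.91},
        {"label": "Koilocytotic", "confidence": 0.05},
        {"label": "Metaplastic", "confidence": 0.04},
    ],
}

def to_analysis_result(payload: dict) -> dict:
    """Mirror the frontend mapping: Label payload -> analysis result."""
    return {
        "predicted_class": payload["label"],
        # Confidences are sorted descending, so the first entry is the top class
        "confidence": payload["confidences"][0]["confidence"],
        "top_predictions": [
            {"class": c["label"], "probability": c["confidence"]}
            for c in payload["confidences"]
        ],
    }

result = to_analysis_result(label_payload)
print(result["predicted_class"])  # Dyskeratotic
```

If the payload your Space returns differs (for example, after changing output components in `app.py`), log `result.data` once in the browser console and adjust the mapping accordingly.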
### Option 2: Use Direct API Endpoint
Alternatively, you can use the automatic API endpoint that Gradio creates:
```typescript
// NOTE: the exact REST shape depends on your Gradio version; Gradio 3.x accepts
// POST /api/predict with a JSON body {"data": [...]}, while 4.x uses /call/<api_name>.
// The @gradio/client library (Option 1) handles these details for you.
const SPACE_API = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space/api/predict";

async function predictWithFetch(imageFile: File) {
  // Gradio expects image inputs as base64 data URLs inside the "data" array
  const dataUrl = await new Promise<string>((resolve, reject) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string);
    reader.onerror = reject;
    reader.readAsDataURL(imageFile);
  });
  const response = await fetch(SPACE_API, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ data: [dataUrl] }),
  });
  if (!response.ok) throw new Error(`Prediction failed: ${response.status}`);
  return await response.json();
}
```
---
## Monitoring and Analytics
1. **View Usage Statistics**: Go to your Space settings to see usage metrics
2. **Check Logs**: Monitor real-time logs in the Space interface
3. **Set up Alerts**: Configure notifications for errors or downtime
---
## Security Considerations
1. **API Keys**: If you need authentication, use Hugging Face's built-in authentication
2. **Rate Limiting**: Consider implementing rate limiting for public spaces
3. **Model Access**: Ensure your model repository has appropriate access controls
---
## Cost Considerations
- **CPU (Free)**: Limited resources, slower inference
- **CPU Basic ($5/month)**: Better performance
- **GPU T4 Small ($0.60/hour)**: Recommended for production
- **GPU A10G Large ($3.15/hour)**: High-performance inference
---
## Next Steps
1. **Deploy the Space** following steps 1-7
2. **Test the interface** directly on Hugging Face
3. **Integrate with your Next.js frontend** using the API
4. **Monitor performance** and upgrade hardware if needed
5. **Collect user feedback** and iterate
---
## Additional Resources
- [Gradio Documentation](https://gradio.app/docs/)
- [Hugging Face Spaces Guide](https://huggingface.co/docs/hub/spaces)
- [Gradio Client Library](https://gradio.app/guides/getting-started-with-the-js-client/)
---
## Quick Checklist
- [ ] Created Hugging Face Space
- [ ] Cloned Space repository
- [ ] Copied app.py, requirements.txt, README.md
- [ ] Updated model ID in app.py
- [ ] Committed and pushed to Hugging Face
- [ ] Verified deployment in Logs
- [ ] Tested both prediction modes
- [ ] Integrated API with Next.js frontend
- [ ] Tested end-to-end workflow
---
**Need Help?** Check the logs tab in your Space or refer to Hugging Face documentation.