# Hugging Face Spaces Deployment Guide
## Quick Deploy Steps
### 1. Create Your Space
1. Go to [huggingface.co/new-space](https://huggingface.co/new-space)
2. Name: `content-classifier` (or your preferred name)
3. SDK: **Docker**
4. Visibility: Public/Private (your choice)
5. Click **Create Space**
### 2. Upload Files
Upload these files to your Space:
**Required Files:**
- `contextClassifier.onnx` (your model file)
- `app.py`
- `requirements.txt`
- `Dockerfile`
- `README.md`
**Optional Files:**
- `test_api.py` (for testing)
### 3. Model File
⚠️ **Important**: Make sure your `contextClassifier.onnx` file is in the same directory as these files before uploading.
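Before uploading, you can sanity-check that the model file sits next to the app code. A minimal sketch (the filename matches this guide; the helper name is my own):

```python
from pathlib import Path

def model_present(path: str = "contextClassifier.onnx") -> bool:
    """Return True if the ONNX model file exists and is non-empty."""
    p = Path(path)
    return p.is_file() and p.stat().st_size > 0

if __name__ == "__main__":
    if not model_present():
        raise SystemExit("contextClassifier.onnx not found next to app.py")
```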
### 4. Git Method (Recommended)
```bash
# Clone your Space
git clone https://huggingface.co/spaces/YOUR_USERNAME/YOUR_SPACE_NAME
cd YOUR_SPACE_NAME

# Track the ONNX model with Git LFS (required for large binary files)
git lfs install
git lfs track "*.onnx"

# Copy your model file (use `cp` instead of `copy` on macOS/Linux)
copy path\to\your\contextClassifier.onnx .

# Copy all project files
copy app.py .
copy requirements.txt .
copy Dockerfile .
copy README.md .

# Add and commit
git add .
git commit -m "Add content classifier ONNX model"
git push
```
### 5. Monitor Deployment
1. **Check Build Logs**: Go to your Space > Logs tab
2. **Wait for Build**: Usually takes 2-3 minutes
3. **Check Status**: Space will show "Building" → "Running"
### 6. Test Your Space
Once deployed, your API will be available at:
```
https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space
```
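That URL can also be built programmatically. A small sketch, assuming the default `*.hf.space` subdomain pattern (username and Space name joined by a hyphen, lowercased, with underscores normalized to hyphens — verify against the actual URL shown on your Space page):

```python
def space_url(username: str, space_name: str) -> str:
    """Build the public *.hf.space URL for a Space (assumed pattern)."""
    sub = f"{username}-{space_name}".lower().replace("_", "-")
    return f"https://{sub}.hf.space"
```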
**API Endpoints:**
- `/docs` - Interactive documentation
- `/predict` - Main prediction endpoint
- `/health` - Health check
- `/model-info` - Model information
### 7. Example Usage
```python
import requests

# Replace with your actual Space URL
api_url = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space"

response = requests.post(
    f"{api_url}/predict",
    json={"text": "This is a test message for classification"},
    timeout=30,
)
response.raise_for_status()
print(response.json())
```
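If you prefer not to depend on `requests`, the same call can be made with the standard library. A sketch assuming the same `/predict` JSON contract as above (function names are my own):

```python
import json
import urllib.request

API_URL = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space"  # replace with your Space URL

def build_request(api_url: str, text: str) -> urllib.request.Request:
    """Assemble the POST request for the /predict endpoint."""
    return urllib.request.Request(
        f"{api_url}/predict",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def predict(api_url: str, text: str) -> dict:
    """Send the request and return the parsed JSON response."""
    with urllib.request.urlopen(build_request(api_url, text), timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(predict(API_URL, "This is a test message for classification"))
```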
## Troubleshooting
### Common Issues:
**Build Fails:**
- Check Logs tab for error details
- Verify all required files are uploaded
- Ensure `contextClassifier.onnx` is present
**Model Not Found:**
- Verify `contextClassifier.onnx` is in root directory
- Check file name matches exactly (case-sensitive)
**API Not Responding:**
- Check if Space is "Running" (not "Building")
- Try accessing `/health` endpoint first
- Check Logs for runtime errors
**Memory Issues:**
- ONNX model might be too large
- Consider model optimization
- Check Space hardware limits
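For the memory issue above, dynamic quantization is one common way to shrink an ONNX model. A sketch using `onnxruntime`'s quantization tools (requires the `onnxruntime` package; the output filename is my own convention):

```python
from pathlib import Path

def quantized_path(model_path: str) -> str:
    """Derive an output filename for the quantized model (naming is a convention)."""
    p = Path(model_path)
    return str(p.with_name(f"{p.stem}.int8{p.suffix}"))

def quantize(model_path: str = "contextClassifier.onnx") -> str:
    """Dynamically quantize weights to int8 to reduce model size."""
    # Imported lazily so the path helper above works without onnxruntime installed.
    from onnxruntime.quantization import quantize_dynamic, QuantType

    out = quantized_path(model_path)
    quantize_dynamic(model_path, out, weight_type=QuantType.QInt8)
    return out
```

Quantized models are typically several times smaller, but re-run your accuracy checks afterwards, since int8 weights can change predictions slightly.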
### Success Indicators:
✅ Space shows "Running" status
✅ `/health` endpoint returns `{"status": "healthy"}`
✅ `/docs` shows interactive API documentation
✅ `/predict` accepts POST requests and returns the expected format
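The checklist above can be automated as a small smoke test. A sketch using only the standard library, assuming the `/health` response shape shown in the checklist:

```python
import json
import urllib.request

def is_healthy(body: str) -> bool:
    """True if a /health response body reports a healthy status."""
    try:
        return json.loads(body).get("status") == "healthy"
    except (json.JSONDecodeError, AttributeError):
        return False

def check_space(api_url: str) -> bool:
    """Fetch /health and report whether the Space is up."""
    with urllib.request.urlopen(f"{api_url}/health", timeout=30) as resp:
        return is_healthy(resp.read().decode("utf-8"))

if __name__ == "__main__":
    url = "https://YOUR_USERNAME-YOUR_SPACE_NAME.hf.space"
    print("healthy" if check_space(url) else "not healthy")
```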
## Next Steps
1. **Test thoroughly** with various text inputs
2. **Share your Space** with the community
3. **Monitor usage** in Space analytics
4. **Update model** by pushing new `contextClassifier.onnx`
Your Content Classifier is now live and ready to use!