# Hugging Face Space Deployment Guide
## How to Deploy Your Sentiment Analysis API to Hugging Face Spaces
### Step 1: Create a Hugging Face Account
1. Go to [https://huggingface.co](https://huggingface.co)
2. Sign up for a free account if you don't have one
### Step 2: Create a New Space
1. Go to [https://huggingface.co/new-space](https://huggingface.co/new-space)
2. Fill in the details:
- **Space name**: `sentiment-analysis-api` (or your preferred name)
- **License**: MIT
- **SDK**: Docker
- **Hardware**: CPU Basic (free tier)
### Step 3: Upload Your Files
Upload all the files from this directory to your new Space:
- `app.py` - FastAPI application
- `sentiment.pkl` - Your trained model
- `requirements.txt` - Python dependencies
- `Dockerfile` - Docker configuration
- `README.md` - Documentation
- `index.html` - Simple web interface (optional)
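Besides the web UI, files can also be uploaded programmatically with the `huggingface_hub` client (after `huggingface-cli login`). A minimal sketch; the `upload_space_files` helper and its `dry_run` flag are illustrative, not part of this repo:

```python
def upload_space_files(username: str, space_name: str,
                       folder: str = ".", dry_run: bool = False) -> str:
    """Upload every file in `folder` to your Space repo."""
    repo_id = f"{username}/{space_name}"
    if not dry_run:
        from huggingface_hub import HfApi  # pip install huggingface_hub
        HfApi().upload_folder(folder_path=folder,
                              repo_id=repo_id,
                              repo_type="space")
    return repo_id

# upload_space_files("YOUR_USERNAME", "sentiment-analysis-api")
```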
### Step 4: Your Space Will Auto-Deploy
- Hugging Face will automatically build and deploy your Docker container
- The build process usually takes 5-10 minutes
- You can monitor the build logs in the Space dashboard
### Step 5: Access Your API
Once deployed, your API will be available at:
- **API Base URL**: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space`
- **Interactive Docs**: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space/docs`
- **Simple Interface**: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space` (if you uploaded index.html)
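The URL pattern above can be built in code once your username is known; `space_urls` is a hypothetical helper, assuming the hyphenated `*.hf.space` pattern shown in the list:

```python
def space_urls(username: str, space_name: str) -> dict:
    # Docker Spaces are served at https://<owner>-<space>.hf.space
    base = f"https://{username}-{space_name}.hf.space"
    return {"base": base, "docs": f"{base}/docs"}

# space_urls("YOUR_USERNAME", "sentiment-analysis-api")
```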
## Testing Locally First
Before deploying, test your API locally:
1. **Install dependencies**:
```bash
pip install -r requirements.txt
```
2. **Run the API**:
```bash
python app.py
```
3. **Test the endpoints**:
- Visit `http://localhost:7860/docs` for interactive API docs
- Run `python test_api.py` to test all endpoints
- Visit `http://localhost:7860` for the simple web interface
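A rough sketch of what an endpoint test can look like; this is illustrative, not the bundled `test_api.py`, and `valid_predict_response` / `smoke_test` are hypothetical names:

```python
import json
from urllib import request

def valid_predict_response(payload: dict) -> bool:
    # Checks the three fields documented for /predict below.
    return (
        isinstance(payload.get("prediction"), int)
        and 0.0 <= payload.get("confidence", -1.0) <= 1.0
        and payload.get("sentiment") in ("positive", "negative")
    )

def smoke_test(base_url: str = "http://localhost:7860") -> bool:
    body = json.dumps({"text": "I love this product!"}).encode()
    req = request.Request(f"{base_url}/predict", data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as resp:  # requires the API to be running
        return valid_predict_response(json.load(resp))
```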
## API Endpoints
### POST `/predict`
```json
{
"text": "I love this product!"
}
```
Response:
```json
{
"prediction": 1,
"confidence": 0.95,
"sentiment": "positive"
}
```
### POST `/predict_proba`
```json
{
"text": "This is terrible"
}
```
Response:
```json
{
"probabilities": [0.85, 0.15],
"prediction": 0,
"sentiment": "negative"
}
```
### POST `/batch_predict`
```json
["I love it!", "This is bad", "It's okay"]
```
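On the client side it is convenient to pair each input text with its label. A small sketch, assuming `/batch_predict` returns one 0/1 prediction per input (the helper name is hypothetical):

```python
def pair_batch_results(texts, predictions):
    # Map the model's 0/1 outputs back onto the submitted texts.
    labels = {1: "positive", 0: "negative"}
    return [(t, labels.get(p, "unknown")) for t, p in zip(texts, predictions)]

# pair_batch_results(["I love it!", "This is bad"], [1, 0])
```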
## Customization
### If your model requires text preprocessing:
Edit the `app.py` file in the prediction functions to add your preprocessing steps:
```python
# Add preprocessing before model.predict()
def preprocess_text(text):
    # Add your preprocessing logic here,
    # e.g. lowercasing, cleaning, tokenization
    processed_text = text.lower().strip()
    return processed_text

# In the prediction functions:
processed_text = preprocess_text(input_data.text)
prediction = model.predict([processed_text])[0]
```
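Alternatively, if you control training, you can avoid separate preprocessing in `app.py` entirely by pickling a scikit-learn `Pipeline` that bundles vectorization with the classifier. A hedged sketch with toy data for illustration only; it assumes your model is scikit-learn based:

```python
import pickle
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data, purely for illustration.
texts = ["I love it!", "Fantastic product", "This is bad", "Terrible quality"]
labels = [1, 1, 0, 0]

# The pipeline vectorizes raw text itself, so app.py can call
# model.predict(["raw text"]) with no extra preprocessing step.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

with open("sentiment.pkl", "wb") as f:
    pickle.dump(model, f)
```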
### To change the port or host:
Modify the last line in `app.py`:
```python
uvicorn.run(app, host="0.0.0.0", port=8000) # Change port as needed
```
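Note that Spaces route traffic to port 7860 inside the container by default, so only change the port if you update the `Dockerfile` accordingly. One option (a sketch; `resolve_port` is a hypothetical helper) is to read the port from the environment instead of hard-coding it:

```python
import os

def resolve_port(env=None, default=7860):
    # Fall back to 7860, the port Hugging Face Spaces expects by default.
    env = os.environ if env is None else env
    return int(env.get("PORT", default))

# uvicorn.run(app, host="0.0.0.0", port=resolve_port())
```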
## Troubleshooting
1. **Model loading errors**: Ensure `sentiment.pkl` is in the same directory
2. **Dependency issues**: Check that all required packages are in `requirements.txt`
3. **Memory issues**: Your model might be too large for the free tier (consider upgrading to a paid tier)
4. **Preprocessing errors**: Make sure your text preprocessing matches your training pipeline
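For the first item above, a quick sanity check before pushing is to confirm the pickle loads and looks like a model. A hedged sketch; `model_loads` is a hypothetical helper:

```python
import pickle

def model_loads(path: str = "sentiment.pkl") -> bool:
    # True only if the file unpickles and exposes a predict() method.
    try:
        with open(path, "rb") as f:
            model = pickle.load(f)
    except (OSError, pickle.UnpicklingError):
        return False
    return hasattr(model, "predict")
```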
## Next Steps
- Add authentication for production use
- Implement rate limiting
- Add logging and monitoring
- Create a more sophisticated web interface
- Add model versioning