# Hugging Face Space Deployment Guide

## 🚀 How to Deploy Your Sentiment Analysis API to Hugging Face Spaces

### Step 1: Create a Hugging Face Account

1. Go to [https://huggingface.co](https://huggingface.co)
2. Sign up for a free account if you don't have one
### Step 2: Create a New Space

1. Go to [https://huggingface.co/new-space](https://huggingface.co/new-space)
2. Fill in the details:
   - **Space name**: `sentiment-analysis-api` (or your preferred name)
   - **License**: MIT
   - **SDK**: Docker
   - **Hardware**: CPU Basic (free tier)
### Step 3: Upload Your Files

Upload all the files from this directory to your new Space:

- `app.py` - FastAPI application
- `sentiment.pkl` - Your trained model
- `requirements.txt` - Python dependencies
- `Dockerfile` - Docker configuration
- `README.md` - Documentation
- `index.html` - Simple web interface (optional)
### Step 4: Your Space Will Auto-Deploy

- Hugging Face will automatically build and deploy your Docker container
- The build process typically takes 5-10 minutes
- You can monitor the build logs in the Space dashboard
### Step 5: Access Your API

Once deployed, your API will be available at:

- **API Base URL**: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space`
- **Interactive Docs**: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space/docs`
- **Simple Interface**: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space` (if you uploaded `index.html`)
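The URL pattern above can also be built programmatically, which is handy when scripting checks against the deployed Space. A minimal sketch (`YOUR_USERNAME` is a placeholder, not a real account):

```python
# Build the Space URLs from a username and Space name
# ("YOUR_USERNAME" is a placeholder, not a real account).
username = "YOUR_USERNAME"
space_name = "sentiment-analysis-api"

base_url = f"https://{username}-{space_name}.hf.space"
docs_url = f"{base_url}/docs"

print(base_url)  # https://YOUR_USERNAME-sentiment-analysis-api.hf.space
print(docs_url)  # https://YOUR_USERNAME-sentiment-analysis-api.hf.space/docs
```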
## 🧪 Testing Locally First

Before deploying, test your API locally:

1. **Install dependencies**:
   ```bash
   pip install -r requirements.txt
   ```
2. **Run the API**:
   ```bash
   python app.py
   ```
3. **Test the endpoints**:
   - Visit `http://localhost:7860/docs` for the interactive API docs
   - Run `python test_api.py` to test all endpoints
   - Visit `http://localhost:7860` for the simple web interface
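The manual checks above can also be scripted. Below is a minimal sketch of the JSON payload `/predict` expects; the commented-out `requests` call shows how you might send it once the server is running (this is illustrative only and may differ from what `test_api.py` actually does):

```python
import json

# JSON body expected by the /predict endpoint.
payload = {"text": "I love this product!"}

# With the API running locally you could send it like this
# (requires the third-party `requests` package; commented out so
# the sketch runs without a live server):
# import requests
# resp = requests.post("http://localhost:7860/predict", json=payload)
# print(resp.json())

print(json.dumps(payload))
```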
## 📚 API Endpoints

### POST `/predict`

Request:

```json
{
  "text": "I love this product!"
}
```

Response:

```json
{
  "prediction": 1,
  "confidence": 0.95,
  "sentiment": "positive"
}
```
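The `sentiment` string in the response is just a human-readable rendering of the numeric `prediction`. A minimal sketch of that mapping, assuming the 0 = negative / 1 = positive convention used throughout these examples (the actual logic lives in `app.py` and may differ):

```python
def label_for(prediction: int) -> str:
    # Map the classifier's integer output to a label; assumes the
    # binary 0 = negative / 1 = positive convention shown above.
    return "positive" if prediction == 1 else "negative"

print(label_for(1))  # positive
print(label_for(0))  # negative
```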
### POST `/predict_proba`

Request:

```json
{
  "text": "This is terrible"
}
```

Response:

```json
{
  "probabilities": [0.85, 0.15],
  "prediction": 0,
  "sentiment": "negative"
}
```
### POST `/batch_predict`

Request:

```json
["I love it!", "This is bad", "It's okay"]
```
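A batch endpoint like this typically just maps the single-text pipeline over the list. A hedged sketch with a stand-in predictor (the real handler in `app.py` uses the model loaded from `sentiment.pkl` and may differ):

```python
def batch_predict(texts, predict_one):
    # Apply the single-text prediction function to each input.
    return [predict_one(t) for t in texts]

# Stand-in for the real model's per-text prediction:
def dummy_predict(text):
    return 1 if "love" in text.lower() else 0

print(batch_predict(["I love it!", "This is bad", "It's okay"], dummy_predict))
# [1, 0, 0]
```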
## 🔧 Customization

### If your model requires text preprocessing

Edit the prediction functions in `app.py` to add your preprocessing steps:

```python
# Add preprocessing before model.predict()
def preprocess_text(text):
    # Add your preprocessing logic here,
    # e.g., cleaning, tokenization, vectorization
    return text  # replace with the processed text

# In the prediction functions:
processed_text = preprocess_text(input_data.text)
prediction = model.predict([processed_text])[0]
```
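As a concrete illustration, a cleanup step that lowercases, strips punctuation, and collapses whitespace might look like this. It is only an example; whatever you put here must mirror the preprocessing applied when the model was trained:

```python
import re
import string

def preprocess_text(text: str) -> str:
    # Example cleanup only; replicate your training pipeline exactly.
    text = text.lower()
    text = text.translate(str.maketrans("", "", string.punctuation))
    return re.sub(r"\s+", " ", text).strip()

print(preprocess_text("I LOVE this product!!!"))  # i love this product
```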
### To change the port or host

Modify the last line in `app.py`:

```python
uvicorn.run(app, host="0.0.0.0", port=8000)  # change the port as needed
```

Note that Hugging Face Spaces expects Docker apps to listen on port 7860 by default, so keep that port for deployment unless you also update the Space configuration.
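If you want the port configurable without editing code, you can read it from an environment variable. The `APP_PORT` name below is our own choice for illustration, not something `app.py` necessarily defines:

```python
import os

# Fall back to 7860, the port Hugging Face Spaces expects by default;
# APP_PORT is a hypothetical variable name chosen for this sketch.
port = int(os.environ.get("APP_PORT", "7860"))
print(port)

# app.py's last line would then become:
# uvicorn.run(app, host="0.0.0.0", port=port)
```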
## 🐛 Troubleshooting

1. **Model loading errors**: Ensure `sentiment.pkl` is in the same directory as `app.py`
2. **Dependency issues**: Check that all required packages are listed in `requirements.txt`
3. **Memory issues**: Your model might be too large for the free tier; upgrade to a paid hardware tier
4. **Preprocessing errors**: Make sure the text preprocessing in `app.py` matches your training pipeline
## 💡 Next Steps

- Add authentication for production use
- Implement rate limiting
- Add logging and monitoring
- Create a more sophisticated web interface
- Add model versioning