# Hugging Face Space Deployment Guide

## 🚀 How to Deploy Your Sentiment Analysis API to Hugging Face Spaces
### Step 1: Create a Hugging Face Account
- Go to https://huggingface.co
- Sign up for a free account if you don't have one
### Step 2: Create a New Space
- Go to https://huggingface.co/new-space
- Fill in the details:
  - Space name: `sentiment-analysis-api` (or your preferred name)
  - License: MIT
  - SDK: Docker
  - Hardware: CPU Basic (free tier)
### Step 3: Upload Your Files
Upload all the files from this directory to your new Space:
- `app.py` - FastAPI application
- `sentiment.pkl` - Your trained model
- `requirements.txt` - Python dependencies
- `Dockerfile` - Docker configuration
- `README.md` - Documentation
- `index.html` - Simple web interface (optional)
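As an alternative to the web UI, the files can also be pushed with the `huggingface_hub` client library. A minimal sketch, assuming `huggingface_hub` is installed and you have logged in with `huggingface-cli login`; the repo id below is a placeholder:

```python
# Placeholder repo id: replace YOUR_USERNAME with your actual username
repo_id = "YOUR_USERNAME/sentiment-analysis-api"

if __name__ == "__main__":
    # Import inside the guard so the snippet can be read without the library installed
    from huggingface_hub import HfApi

    # Upload every file in the current directory to the Space
    HfApi().upload_folder(folder_path=".", repo_id=repo_id, repo_type="space")
```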
### Step 4: Your Space Will Auto-Deploy
- Hugging Face will automatically build and deploy your Docker container
- The build process typically takes 5-10 minutes
- You can monitor the build logs in the Space dashboard
### Step 5: Access Your API
Once deployed, your API will be available at:
- API Base URL: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space`
- Interactive Docs: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space/docs`
- Simple Interface: `https://YOUR_USERNAME-sentiment-analysis-api.hf.space` (if you uploaded index.html)
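Once the Space is up, the `/predict` endpoint can be called from any HTTP client. A minimal sketch using only the Python standard library; the username and Space name are placeholders to replace with your own:

```python
import json
import urllib.request

def space_url(username: str, space_name: str) -> str:
    """Build the base URL Hugging Face assigns to a Space."""
    return f"https://{username}-{space_name}.hf.space"

def predict(text: str, base_url: str) -> dict:
    """POST to /predict and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/predict",
        data=json.dumps({"text": text}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    base = space_url("YOUR_USERNAME", "sentiment-analysis-api")
    print(predict("I love this product!", base))
```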
## 🧪 Testing Locally First
Before deploying, test your API locally:
1. Install dependencies: `pip install -r requirements.txt`
2. Run the API: `python app.py`
3. Test the endpoints:
   - Visit http://localhost:7860/docs for interactive API docs
   - Run `python test_api.py` to test all endpoints
   - Visit http://localhost:7860 for the simple web interface
## 📚 API Endpoints

### POST /predict
Request:
```json
{
  "text": "I love this product!"
}
```
Response:
```json
{
  "prediction": 1,
  "confidence": 0.95,
  "sentiment": "positive"
}
```
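For reference, the `sentiment` field is simply the class index mapped to a label. A sketch of that mapping, assuming the label order used throughout this guide (0 = negative, 1 = positive):

```python
# Label order is an assumption from this guide: 0 = negative, 1 = positive
LABELS = {0: "negative", 1: "positive"}

def to_response(prediction: int, confidence: float) -> dict:
    """Shape a model output like the /predict response shown above."""
    return {
        "prediction": prediction,
        "confidence": round(confidence, 2),
        "sentiment": LABELS[prediction],
    }
```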
### POST /predict_proba
Request:
```json
{
  "text": "This is terrible"
}
```
Response:
```json
{
  "probabilities": [0.85, 0.15],
  "prediction": 0,
  "sentiment": "negative"
}
```
### POST /batch_predict
Request:
```json
["I love it!", "This is bad", "It's okay"]
```
## 🔧 Customization
If your model requires text preprocessing, edit the prediction functions in `app.py` to add your preprocessing steps:
```python
# Add preprocessing before model.predict()
def preprocess_text(text):
    # Add your preprocessing logic here
    # e.g., tokenization, vectorization, etc.
    return text  # replace with your processed text

# In the prediction functions:
processed_text = preprocess_text(input_data.text)
prediction = model.predict([processed_text])[0]
```
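As a concrete illustration, here is one possible `preprocess_text`, assuming the model was trained on lowercased text with punctuation removed; replace it with whatever your training pipeline actually did:

```python
import re

def preprocess_text(text: str) -> str:
    """Illustrative cleanup: lowercase, strip punctuation, collapse whitespace."""
    text = text.lower()
    text = re.sub(r"[^a-z0-9\s]", "", text)   # drop punctuation
    return re.sub(r"\s+", " ", text).strip()  # collapse runs of whitespace
```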
To change the port or host, modify the last line in `app.py`:
```python
uvicorn.run(app, host="0.0.0.0", port=8000)  # Change port as needed
```
Note that on Spaces the container must listen on the port your Space is configured for (7860 by default for Docker Spaces).
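One common pattern is to read the port from an environment variable so the same `app.py` works locally and on Spaces. The `PORT` variable name here is an assumption, not something Spaces sets for you:

```python
import os

# Fall back to 7860, the default port Docker Spaces expect
port = int(os.environ.get("PORT", "7860"))

# Then, at the bottom of app.py:
#   uvicorn.run(app, host="0.0.0.0", port=port)
```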
## 🐛 Troubleshooting
- Model loading errors: Ensure `sentiment.pkl` is in the same directory as `app.py`
- Dependency issues: Check that all required packages are in `requirements.txt`
- Memory issues: Your model might be too large for the free tier (upgrade to a paid tier)
- Preprocessing errors: Make sure your text preprocessing matches your training pipeline
## 💡 Next Steps
- Add authentication for production use
- Implement rate limiting
- Add logging and monitoring
- Create a more sophisticated web interface
- Add model versioning
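As a starting point for the rate-limiting item above, a minimal in-memory sliding-window limiter; all names are illustrative, and a production deployment would typically use a shared store such as Redis instead:

```python
import time
from typing import Dict, List, Optional

class RateLimiter:
    """Allow at most max_requests per client within a sliding time window."""

    def __init__(self, max_requests: int, window_seconds: float):
        self.max_requests = max_requests
        self.window = window_seconds
        self.calls: Dict[str, List[float]] = {}

    def allow(self, client_id: str, now: Optional[float] = None) -> bool:
        now = time.monotonic() if now is None else now
        # Keep only timestamps still inside the window
        recent = [t for t in self.calls.get(client_id, []) if now - t < self.window]
        allowed = len(recent) < self.max_requests
        if allowed:
            recent.append(now)
        self.calls[client_id] = recent
        return allowed
```

In FastAPI this could run inside a dependency or middleware keyed on the client IP, returning HTTP 429 when `allow` is False.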