---
title: Prediksi Karyawan Resign
emoji: π
colorFrom: red
colorTo: blue
sdk: gradio
sdk_version: 6.2.0
app_file: gradio_app.py
pinned: false
short_description: Employee resignation prediction using ML
---

Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# Deployment Guide - Gradio App

## Overview

A complete guide to deploying the HR Analytics Resignation Prediction Model with a Gradio web interface.
## File Structure

After training the model, you should have:

```
deployment/
├── gradio_app.py            # Main app (full features)
├── gradio_app_simple.py     # Simple version
├── best_model_RF_SMOTE.pkl  # Trained model
├── scaler.pkl               # Feature scaler
├── label_encoders.pkl       # Categorical encoders
├── target_encoder.pkl       # Target encoder (for the full app)
├── requirements_gradio.txt  # Dependencies
└── README_DEPLOYMENT.md     # This file
```
## Two Deployment Options

### Option 1: Simple App (Recommended for Quick Start)

- ✅ Single-employee prediction only
- ✅ Simple, clean interface
- ✅ Easy to understand
- ✅ Perfect for demos

File: `gradio_app_simple.py`

### Option 2: Full App (Recommended for Production)

- ✅ Single-employee prediction
- ✅ Batch prediction (CSV upload)
- ✅ Advanced visualizations
- ✅ Model information tab
- ✅ User guide tab
- ✅ Downloadable results

File: `gradio_app.py`
## Step-by-Step Deployment

### Step 1: Train Your Model

Run the notebook first to generate the model files:

```bash
jupyter notebook HR_Analytics_Dataset_HR_FINAL.ipynb
```

Required outputs:

- `best_model_RF_SMOTE.pkl`
- `scaler.pkl`
- `label_encoders.pkl`
- `target_encoder.pkl` (optional for the simple app)
### Step 2: Install Gradio Dependencies

```bash
pip install gradio plotly
```

Or install everything at once:

```bash
pip install -r requirements.txt
```
### Step 3: Verify Files

Make sure all the files are in the same folder:

```bash
ls -la
# Output should show:
# - gradio_app.py or gradio_app_simple.py
# - best_model_RF_SMOTE.pkl
# - scaler.pkl
# - label_encoders.pkl
```
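As an alternative to eyeballing the `ls` output, a small Python check can confirm the artifacts are in place before launching (a sketch; the filenames are assumed to match the listing above):

```python
from pathlib import Path

# Files the full app expects to find in its working directory
# (names assumed from the file-structure listing above).
REQUIRED_FILES = [
    "best_model_RF_SMOTE.pkl",
    "scaler.pkl",
    "label_encoders.pkl",
    "target_encoder.pkl",
]

def missing_artifacts(directory: str = ".") -> list[str]:
    """Return the required files that are not present in `directory`."""
    base = Path(directory)
    return [name for name in REQUIRED_FILES if not (base / name).is_file()]
```

Calling `missing_artifacts()` at the top of the app lets you fail fast with a clear message instead of a pickle-loading traceback.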
### Step 4: Run the Gradio App

A. Simple version:

```bash
python gradio_app_simple.py
```

B. Full version:

```bash
python gradio_app.py
```
### Step 5: Access the App

Once it is running, Gradio will print:

```
Running on local URL:  http://127.0.0.1:7860
Running on public URL: https://xxxxx.gradio.live

To create a permanent link, use `share=True`
```

Local access:

- Open a browser and go to http://127.0.0.1:7860

Public access:

- Share the https://xxxxx.gradio.live link with your team
- The link stays valid for 72 hours
- Anyone with the link can access the app
## Deployment Options

### Option A: Local Development (Quick Testing)

```python
app.launch()  # Default: local only
```

Pros:

- Instant deployment
- No setup needed
- Perfect for testing

Cons:

- Only accessible from your computer
- Stops when you close the terminal
### Option B: Temporary Public Link (Share with Team)

```python
app.launch(share=True)  # Creates a public link
```

Pros:

- Anyone with the link can access it
- Great for demos and presentations
- No infrastructure needed

Cons:

- Link expires after 72 hours
- Not suitable for production
- Limited to Gradio's free tier
### Option C: Gradio Spaces (Free Hosting) - RECOMMENDED

Hugging Face Spaces provides free hosting for Gradio apps!

Steps:

1. Create an account at huggingface.co
2. Create a new Space:
   - Go to huggingface.co/spaces
   - Click "Create new Space"
   - Name: "hr-analytics-resign-prediction"
   - SDK: Gradio
   - Make it Public or Private
3. Upload the files:

   ```
   Space repository/
   ├── app.py               # Rename gradio_app.py to app.py
   ├── requirements.txt     # Gradio dependencies
   ├── best_model_RF_SMOTE.pkl
   ├── scaler.pkl
   ├── label_encoders.pkl
   └── target_encoder.pkl
   ```

4. Configure requirements.txt:

   ```
   gradio
   plotly
   pandas
   numpy
   scikit-learn
   ```

5. Push to the Space:

   ```bash
   git clone https://huggingface.co/spaces/YOUR_USERNAME/hr-analytics-resign-prediction
   cd hr-analytics-resign-prediction
   cp gradio_app.py app.py
   cp best_model_RF_SMOTE.pkl .
   cp scaler.pkl .
   cp label_encoders.pkl .
   cp target_encoder.pkl .
   git add .
   git commit -m "Initial deployment"
   git push
   ```

6. Access your app at https://huggingface.co/spaces/YOUR_USERNAME/hr-analytics-resign-prediction - a permanent link with free hosting!

Pros:

- ✅ Free hosting
- ✅ Permanent link
- ✅ SSL certificate
- ✅ Easy updates via git
- ✅ Community support

Cons:

- Public by default (use Private if needed)
- Storage limits (5 GB free)
### Option D: Cloud Deployment (Production)

#### D1. AWS EC2

```bash
# 1. Launch an EC2 instance (Ubuntu)

# 2. SSH into the instance
ssh -i your-key.pem ubuntu@your-ec2-ip

# 3. Install dependencies
sudo apt update
sudo apt install python3-pip
pip3 install gradio plotly pandas numpy scikit-learn

# 4. Upload files
scp -i your-key.pem *.pkl ubuntu@your-ec2-ip:~/
scp -i your-key.pem gradio_app.py ubuntu@your-ec2-ip:~/

# 5. Run the app
python3 gradio_app.py

# 6. Access via the EC2 public IP
# http://your-ec2-ip:7860
```

Cost: ~$10-30/month (t2.micro - t2.medium)
#### D2. Google Cloud Run (Containerized)

Create a Dockerfile:

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements_gradio.txt .
RUN pip install -r requirements_gradio.txt
COPY . .
CMD ["python", "gradio_app.py"]
```

Deploy:

```bash
gcloud run deploy hr-analytics \
  --source . \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated
```
Cost: Pay per use (~$5-20/month)
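One Cloud Run caveat: the container must listen on the port Cloud Run assigns via the `PORT` environment variable, and on all interfaces rather than Gradio's default `127.0.0.1`. A minimal sketch (`app` is assumed to be your `gr.Blocks` object):

```python
import os

def resolve_port(default: int = 7860) -> int:
    """Cloud Run injects the port to serve on via the PORT env var."""
    return int(os.environ.get("PORT", default))

# Inside gradio_app.py, bind to all interfaces so the container's
# health check can reach the server:
# app.launch(server_name="0.0.0.0", server_port=resolve_port())
```

The same `server_name="0.0.0.0"` binding applies to any containerized host, not just Cloud Run.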
#### D3. Heroku (Simple PaaS)

Create a Procfile:

```
web: python gradio_app.py
```

Deploy:

```bash
heroku login
heroku create hr-analytics-app
git push heroku main
```

Cost: ~$7/month (Hobby tier)
#### D4. DigitalOcean App Platform

- Go to the DigitalOcean App Platform
- Connect your GitHub repo
- Select Python
- Add a buildpack
- Deploy!

Cost: $5-12/month
## Security Considerations

### 1. Authentication (Recommended for Production)

Add Gradio authentication:

```python
app.launch(
    auth=("admin", "your_secure_password"),
    share=False
)
```

Or use environment variables:

```python
import os

username = os.getenv("GRADIO_USERNAME")
password = os.getenv("GRADIO_PASSWORD")

app.launch(
    auth=(username, password),
    share=False
)
```
### 2. HTTPS/SSL

For production, always use HTTPS:

- Hugging Face Spaces: ✅ built-in SSL
- Cloud providers: configure an SSL certificate
- Local: use an nginx reverse proxy

### 3. Data Privacy

- Don't log sensitive data
- Don't store user inputs permanently
- Clear outputs after each session
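The "don't log sensitive data" rule can be enforced with a small helper that masks sensitive values before anything reaches the logs. A sketch; the field names are assumptions based on the inputs this app collects:

```python
# Field names are hypothetical examples (name, national ID, salary).
SENSITIVE_FIELDS = {"nama", "nik", "gaji"}

def redact(record: dict) -> dict:
    """Return a copy of the record with sensitive values masked."""
    return {k: "***" if k in SENSITIVE_FIELDS else v for k, v in record.items()}
```

Passing every input dict through `redact()` before `logging.info(...)` keeps salaries and IDs out of `app.log`.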
### 4. Rate Limiting

Implement rate limiting to prevent abuse. Gradio's built-in queue (`app.queue(max_size=...)`) caps how many requests wait at once, but per-user limits require custom logic.
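A per-user limit can be sketched with a stdlib-only sliding window (an assumption, not a Gradio feature; the client key would come from e.g. the request's IP or session):

```python
import time

class RateLimiter:
    """Allow at most `max_calls` requests per `window` seconds per client."""

    def __init__(self, max_calls: int = 5, window: float = 60.0):
        self.max_calls = max_calls
        self.window = window
        self._calls: dict[str, list[float]] = {}

    def allow(self, client_id: str) -> bool:
        now = time.monotonic()
        # Keep only the calls still inside the window.
        recent = [t for t in self._calls.get(client_id, []) if now - t < self.window]
        if len(recent) >= self.max_calls:
            self._calls[client_id] = recent
            return False
        recent.append(now)
        self._calls[client_id] = recent
        return True
```

The prediction function would then return an error message whenever `allow()` is False instead of running the model.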
## Customization

### Change the Theme

```python
with gr.Blocks(theme=gr.themes.Soft()) as app:
    ...  # Your interface
```

Available themes:

- `gr.themes.Soft()`
- `gr.themes.Base()`
- `gr.themes.Glass()`
- `gr.themes.Monochrome()`

### Custom CSS

```python
css = """
.gradio-container {
    font-family: 'Arial', sans-serif;
}
.button {
    background-color: #4CAF50;
}
"""

with gr.Blocks(css=css) as app:
    ...  # Your interface
```

### Add a Logo

```python
gr.Image("company_logo.png", height=100, width=200)
```
## Monitoring & Analytics

### Option 1: Built-in Analytics

Gradio provides basic usage analytics:

- Page views
- User interactions
- Error rates

Access them via the Spaces dashboard.

### Option 2: Custom Logging

```python
import logging

logging.basicConfig(
    filename='app.log',
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s'
)

def predict_employee(divisi, gaji):
    logging.info(f"Prediction requested: {divisi}, {gaji}")
    resign_prob = ...  # Your prediction code
    logging.info(f"Result: {resign_prob}%")
```

### Option 3: Google Analytics

Add your GA tracking code via custom HTML.
## Troubleshooting

### Problem: "Model file not found"

Solution:

```bash
# Check the current directory
pwd

# List files
ls -la

# Verify the .pkl files exist
ls *.pkl
```

### Problem: "Module 'gradio' not found"

Solution:

```bash
pip install gradio plotly
```

### Problem: "Port 7860 already in use"

Solution:

```python
app.launch(server_port=7861)  # Change the port
```
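Instead of hard-coding an alternative port, the app can probe for a free one at startup. A stdlib-only sketch (`app` is assumed to be your `gr.Blocks` object):

```python
import socket

def find_free_port(start: int = 7860, attempts: int = 20) -> int:
    """Return the first port at or above `start` that is free to bind."""
    for port in range(start, start + attempts):
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            try:
                sock.bind(("127.0.0.1", port))
                return port  # Bind succeeded, so the port is free.
            except OSError:
                continue  # Port in use; try the next one.
    raise RuntimeError("No free port found")

# app.launch(server_port=find_free_port())
```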
### Problem: App is slow

Solutions:

- Use a smaller model (reduce n_estimators)
- Implement caching
- Use a faster instance type
- Optimize preprocessing

### Problem: Public link expired

Solutions:

- Deploy to Hugging Face Spaces (permanent)
- Use cloud hosting
- Set up your own server
## Performance Optimization

### 1. Model Optimization

```python
# Reduce model size on disk with compression
import joblib
joblib.dump(model, 'model.pkl', compress=3)
```

### 2. Caching

```python
from functools import lru_cache

# Note: lru_cache requires hashable (e.g. scalar) arguments.
@lru_cache(maxsize=100)
def predict_cached(divisi, gaji):
    ...  # Prediction logic
```

### 3. Async Processing

For batch predictions:

```python
import asyncio

async def predict_batch_async(file):
    ...  # Async processing, e.g. await asyncio.to_thread(predict, file)
```
## Updates & Maintenance

### Update the Model

1. Retrain the model with new data
2. Generate new .pkl files
3. Replace the old files
4. Restart the app

```bash
# If hosted on Spaces
git add *.pkl
git commit -m "Update model"
git push
```

### Update the UI

1. Edit gradio_app.py
2. Test locally
3. Deploy the changes

### Monitor Performance

- Track prediction accuracy over time
- Collect user feedback
- A/B test different models
- Update based on business needs
## Support & Resources

### Official Documentation

- Gradio: https://gradio.app/docs
- Hugging Face Spaces: https://huggingface.co/docs/hub/spaces

### Community

- Gradio Discord: https://discord.gg/gradio
- Hugging Face Forum: https://discuss.huggingface.co

### Troubleshooting

- Check GitHub issues
- Stack Overflow
- Gradio Discord community
## Deployment Checklist

Before deploying to production:

- [ ] Model trained and tested (F1 > 0.90)
- [ ] All .pkl files generated
- [ ] Gradio app tested locally
- [ ] Authentication configured
- [ ] Error handling implemented
- [ ] Logging configured
- [ ] Documentation updated
- [ ] User guide included
- [ ] Security reviewed
- [ ] Performance tested
- [ ] Backup plan in place
- [ ] Monitoring set up
- [ ] Team trained on usage
- [ ] Stakeholders notified
## Next Steps

- Deploy to Hugging Face Spaces (easiest, free)
- Add authentication for security
- Set up monitoring to track usage
- Collect feedback from users
- Iterate and improve based on data
## Best Practices

- **Keep it simple** - start with the simple version and add features as needed
- **Test thoroughly** - cover edge cases before deploying
- **Document everything** - help users understand how to use the app
- **Monitor actively** - track errors and usage patterns
- **Update regularly** - retrain the model with new data quarterly
- **Secure properly** - always use authentication in production
- **Back up frequently** - keep copies of the model files

Happy Deploying!

Need help? Check the troubleshooting section or reach out to the community!