Deployment Guide
Complete guide for deploying the Land Redistribution Algorithm to production.
Table of Contents
- Prerequisites
- Backend Deployment (Hugging Face Spaces)
- Frontend Deployment (Streamlit Cloud)
- Local Docker Testing
- Environment Variables
- Troubleshooting
Prerequisites
For All Deployments
- Git installed on your machine
- GitHub account (for Streamlit Cloud)
- Hugging Face account (for backend deployment)
For Local Testing
- Docker and Docker Compose installed
- Python 3.11+ (for non-Docker development)
- Make (optional, for convenience commands)
Backend Deployment (Hugging Face Spaces)
Hugging Face Spaces provides free hosting for ML applications with Docker support.
Step 1: Create a New Space
- Go to Hugging Face Spaces
- Click "Create new Space"
- Configure:
- Space name: `land-redistribution-api` (or your choice)
- License: MIT
- Select the Space SDK: Docker
- Visibility: Public or Private
Step 2: Prepare Backend Files
The backend directory is already configured with:
- `Dockerfile` - Multi-stage production build
- `README_HF.md` - Hugging Face metadata
- `requirements.txt` - Python dependencies
- `.dockerignore` - Build optimization
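The exact Dockerfile contents depend on your project, but a minimal multi-stage sketch for a FastAPI service on Spaces (which serves apps on port 7860) could look like the following. The `main:app` entry point is an assumption based on the `main.py` referenced later in this guide.

```dockerfile
# Build stage: install dependencies into an isolated virtualenv
FROM python:3.11-slim AS builder
WORKDIR /app
COPY requirements.txt .
RUN python -m venv /opt/venv && \
    /opt/venv/bin/pip install --no-cache-dir -r requirements.txt

# Runtime stage: copy only the virtualenv and application code
FROM python:3.11-slim
WORKDIR /app
COPY --from=builder /opt/venv /opt/venv
COPY . .
ENV PATH="/opt/venv/bin:$PATH"
EXPOSE 7860
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "7860"]
```

The two-stage split keeps build tooling out of the final image, which is what makes the "Build optimization" note above pay off.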
Step 3: Deploy to Hugging Face
Option A: Git Push (Recommended)
# Navigate to backend directory
cd /Volumes/WorkSpace/Project/REMB/algorithms/backend
# Initialize git (if not already)
git init
# Add Hugging Face remote using your space name
git remote add hf https://huggingface.co/spaces/<YOUR_USERNAME>/<SPACE_NAME>
# Rename README for Hugging Face
cp README_HF.md README.md
# Add and commit files
git add .
git commit -m "Initial deployment"
# Push to Hugging Face
git push hf main
Option B: Web Upload
- In your Space, click "Files and versions"
- Upload all files from the `backend/` directory
- Ensure `README_HF.md` is renamed to `README.md`
Step 4: Wait for Build
- Hugging Face will automatically build your Docker image
- Build time: ~5-10 minutes
- Monitor progress in the "Logs" tab
Step 5: Test Your Backend API
Once deployed, your API will be available at:
https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space
Test endpoints:
# Health check
curl https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space/health
# API documentation
open https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space/docs
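If you want to script the check rather than use `curl`, a stdlib-only helper can poll the health endpoint. This sketch assumes `/health` returns a JSON body, which your backend may or may not do; adjust the parsing to match.

```python
import json
import urllib.request


def health_url(base_url: str) -> str:
    """Join the Space base URL with the /health path, tolerating a trailing slash."""
    return base_url.rstrip("/") + "/health"


def check_health(base_url: str, timeout: float = 10.0) -> dict:
    """GET /health and return the parsed JSON body; raises on HTTP errors."""
    with urllib.request.urlopen(health_url(base_url), timeout=timeout) as resp:
        return json.loads(resp.read().decode("utf-8"))


# Example (fill in your own Space URL):
# check_health("https://<YOUR_USERNAME>-<SPACE_NAME>.hf.space")
```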
Frontend Deployment (Streamlit Cloud)
Step 1: Push Frontend to GitHub
cd /Volumes/WorkSpace/Project/REMB/algorithms/frontend
# Initialize git repository (if not already)
git init
# Add GitHub remote
git remote add origin https://github.com/<YOUR_USERNAME>/land-redistribution-ui.git
# Add all files
git add .
git commit -m "Initial commit"
# Push to GitHub
git branch -M main
git push -u origin main
Step 2: Deploy on Streamlit Cloud
- Go to Streamlit Cloud
- Sign in with GitHub
- Click "New app"
- Configure:
- Repository: Select your frontend repository
- Branch: `main`
- Main file path: `app.py`
Step 3: Configure Environment Variables
In Streamlit Cloud, add secrets:
- Go to your app settings
- Click "Secrets"
- Add:
API_URL = "https://<YOUR_HF_USERNAME>-<SPACE_NAME>.hf.space"
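On the `app.py` side, one hedged way to read this setting is via the environment (Streamlit Cloud also exposes secrets through `st.secrets`; this stdlib-only sketch uses the environment-variable form so it works identically in local development):

```python
import os


def resolve_api_url(default: str = "http://localhost:8000") -> str:
    """Return the backend base URL from API_URL, without a trailing slash."""
    url = os.getenv("API_URL", default).rstrip("/")
    # Guard against a common misconfiguration: a URL missing its scheme.
    if not url.startswith(("http://", "https://")):
        raise ValueError(f"API_URL must include http:// or https://, got {url!r}")
    return url
```

Stripping the trailing slash keeps later path joins like `f"{api_url}/health"` from producing double slashes.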
Step 4: Deploy
- Click "Deploy"
- Streamlit Cloud will install dependencies and launch your app
- Your app will be available at:
https://<APP_NAME>.streamlit.app
Local Docker Testing
Before deploying to production, test locally with Docker Compose.
Quick Start
# Navigate to algorithms directory
cd /Volumes/WorkSpace/Project/REMB/algorithms
# Build and start services
make build
make up
# View logs
make logs
# Test services
make health
Manual Testing
# Build backend
docker-compose build backend
# Start all services
docker-compose up -d
# Check status
docker-compose ps
# View logs
docker-compose logs -f
# Test backend
curl http://localhost:8000/health
# Access frontend
open http://localhost:8501
# Stop services
docker-compose down
Testing the Backend Container Only
cd backend
# Build image
docker build -t land-redistribution-api .
# Run container
docker run -p 7860:7860 land-redistribution-api
# Test in another terminal
curl http://localhost:7860/health
Environment Variables
Backend (.env or Hugging Face Secrets)
API_HOST=0.0.0.0
API_PORT=7860
CORS_ORIGINS=*
LOG_LEVEL=INFO
Frontend (.env or Streamlit Secrets)
# Development
API_URL=http://localhost:8000
# Production (use your actual Hugging Face Space URL)
API_URL=https://<YOUR_HF_USERNAME>-<SPACE_NAME>.hf.space
Troubleshooting
Backend Issues
Build Fails on Hugging Face
Problem: Docker build fails with dependency errors
Solution:
- Check Dockerfile syntax
- Verify requirements.txt has pinned versions
- Check build logs in Hugging Face Space
- Test locally first: `docker build -t test ./backend`
API Returns 500 Error
Problem: Backend starts but API endpoints fail
Solution:
- Check logs in Hugging Face Space
- Verify all imports work: Test locally with Docker
- Check CORS settings in `main.py`
Slow Performance
Problem: API is slow or times out
Solution:
- Reduce optimization parameters (population_size, generations)
- Consider upgrading to Hugging Face paid tier for more resources
- Add caching for common requests
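Result caching can be sketched with the stdlib alone. Assuming the optimizer is deterministic for a given parameter set, memoizing on hashable parameters lets identical requests skip recomputation; the function body and parameter names below are illustrative, not the backend's actual API:

```python
from functools import lru_cache


@lru_cache(maxsize=128)
def optimize_cached(num_parcels: int, population_size: int, generations: int) -> tuple:
    """Illustrative stand-in for the optimizer; results are cached per parameter tuple."""
    # Placeholder computation; the real backend would run the optimization here.
    return (num_parcels, population_size * generations)
```

Repeated calls with the same parameters then return instantly from the cache. Note that `lru_cache` is per-process; a multi-worker deployment would need a shared cache instead.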
Frontend Issues
Cannot Connect to Backend
Problem: Frontend shows "Cannot connect to API"
Solution:
- Verify the `API_URL` environment variable is set correctly in Streamlit Secrets
- Check the backend is running: visit the backend URL directly
- Check CORS settings on the backend
- Verify there are no typos in `API_URL` (it should include `https://`)
Streamlit Cloud Build Fails
Problem: Deployment fails on Streamlit Cloud
Solution:
- Check `requirements.txt` for incompatible versions
- Verify `app.py` has no syntax errors
- Check Streamlit Cloud build logs
- Test locally: `streamlit run app.py`
Docker Compose Issues
Port Already in Use
Problem: Error: port is already allocated
Solution:
# Find process using port
lsof -i :8000
lsof -i :8501
# Kill process
kill -9 <PID>
# Or change ports in docker-compose.yml
Container Crashes on Startup
Problem: Service exits immediately
Solution:
# Check logs
docker-compose logs backend
docker-compose logs frontend
# Run container interactively
docker run -it land-redistribution-api /bin/bash
# Check health
docker-compose ps
Performance Optimization
Backend
Reduce CPU-intensive operations:
- Lower the default `population_size` and `generations`
- Add request timeouts
- Implement result caching
Optimize Docker image:
- Use multi-stage builds (already implemented)
- Minimize layers
- Remove unnecessary dependencies
Frontend
Optimize Streamlit:
- Use `@st.cache_data` for expensive computations
- Lazy-load visualizations
- Reduce re-renders with `st.session_state`
Reduce API calls:
- Cache results in session state
- Batch multiple requests
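The session-state cache can be sketched without Streamlit itself, since `st.session_state` behaves like a dict. In this sketch, `call_api` stands in for whatever request function the app actually uses:

```python
def cached_call(session_state: dict, key: str, call_api):
    """Return a cached API result from session_state, calling the API only on a miss."""
    if key not in session_state:
        session_state[key] = call_api()
    return session_state[key]


# With Streamlit, the first argument would be st.session_state, e.g.:
# result = cached_call(st.session_state, f"optimize-{params}", do_request)
```

Unlike `@st.cache_data`, this survives exactly as long as the user's session, which is usually what you want for per-user optimization results.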
Monitoring
Hugging Face Spaces
- View logs: Space → Logs tab
- Check metrics: Space → Settings → Usage
- Restart: Space → Settings → Factory reboot
Streamlit Cloud
- View logs: App → Manage app → Logs
- Check analytics: App → Analytics
- Restart: App → Manage app → Reboot app
Security Considerations
- Environment Variables: Never commit `.env` files with secrets
- CORS: In production, replace `CORS_ORIGINS=*` with specific domains
- Rate Limiting: Consider adding rate limiting for public APIs
- Input Validation: Backend validates all inputs (already implemented)
Next Steps
- Test locally with Docker Compose
- Deploy backend to Hugging Face Spaces
- Deploy frontend to Streamlit Cloud
- Configure environment variables
- Test end-to-end flow
- Monitor performance and logs
- Share with users!
Support
For issues or questions:
- Backend API: Check Hugging Face Space discussions
- Frontend: Check Streamlit Community forum
- General: Open an issue on GitHub
License
MIT