# Deployment Guide

## Local Development

### Backend

```bash
cd aiBatteryLifecycle
.\venv\Scripts\activate     # Windows
source venv/bin/activate    # Linux/Mac
uvicorn api.main:app --host 0.0.0.0 --port 7860 --reload
```
### Frontend (dev mode)

```bash
cd frontend
npm install
npm run dev
```

In dev mode, the frontend proxies `/api/*` requests to `localhost:7860`.
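The dev-mode proxy is typically configured in `vite.config.ts`; the sketch below shows the usual shape, though the exact file in this repo may differ:

```ts
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    // Forward /api/* requests to the FastAPI backend during `npm run dev`
    proxy: {
      "/api": {
        target: "http://localhost:7860",
        changeOrigin: true,
      },
    },
  },
});
```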
### Frontend (production build)

```bash
cd frontend
npm run build
```

Built files go to `frontend/dist/` and are served by FastAPI.
## Docker

### Build

```bash
docker build -t battery-predictor .
```

### Run

```bash
docker run -p 7860:7860 battery-predictor
```
### Build stages

- **frontend-build**: `node:20-slim`; installs npm dependencies and builds the React SPA
- **runtime**: `python:3.11-slim`; installs Python dependencies, copies the source and the built frontend
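Under those assumptions, the two stages might look roughly like this (a sketch, not necessarily the repo's exact Dockerfile; dataset and artifact copies are elided):

```dockerfile
# Stage 1: build the React SPA
FROM node:20-slim AS frontend-build
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ ./
RUN npm run build

# Stage 2: Python runtime serving API + built frontend
FROM python:3.11-slim AS runtime
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY api/ api/
COPY src/ src/
COPY --from=frontend-build /app/frontend/dist frontend/dist
EXPOSE 7860
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "7860"]
```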
## Docker Compose (recommended)

```bash
# Production: single container (frontend + API)
docker compose up --build

# Development: backend only, with hot-reload
docker compose --profile dev up api-dev
# then separately:
cd frontend && npm run dev
```
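A compose file matching those two commands might look like the following sketch; the service names, profile, and bind mounts here are illustrative, not the repo's exact file:

```yaml
services:
  app:                # production: frontend + API in one container
    build: .
    ports:
      - "7860:7860"
    environment:
      - LOG_LEVEL=INFO

  api-dev:            # development: backend only, hot-reload
    profiles: ["dev"]
    build: .
    command: uvicorn api.main:app --host 0.0.0.0 --port 7860 --reload
    ports:
      - "7860:7860"
    volumes:
      - ./api:/app/api
      - ./src:/app/src
```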
## Environment Variables

| Variable | Default | Description |
|---|---|---|
| `LOG_LEVEL` | `INFO` | Logging verbosity (`DEBUG` / `INFO` / `WARNING` / `ERROR`) |
| `WORKERS` | `1` | Uvicorn worker count |
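One common way a backend consumes these variables is shown below; this is a sketch, and the project's actual startup code may read them differently:

```python
import logging
import os

def configure_from_env() -> dict:
    """Read deployment settings from the environment, using the defaults above."""
    log_level = os.environ.get("LOG_LEVEL", "INFO").upper()
    workers = int(os.environ.get("WORKERS", "1"))
    # getattr maps the string to logging.DEBUG / INFO / WARNING / ERROR
    logging.basicConfig(level=getattr(logging, log_level, logging.INFO))
    return {"log_level": log_level, "workers": workers}
```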
## Hugging Face Spaces

### Setup
- Create a new Space on Hugging Face (SDK: Docker)
- Push the repository to the Space
- The Dockerfile exposes port 7860 (HF Spaces default)
### Dockerfile Requirements

- Must expose port 7860
- Must respond to health checks at `/health`
- Keep image size manageable (use CPU-only PyTorch/TF)
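A container-side probe for that endpoint can be written with only the standard library, e.g. for use in a Docker `HEALTHCHECK`; the URL and timeout below are illustrative:

```python
import urllib.request

def check_health(url: str = "http://localhost:7860/health", timeout: float = 3.0) -> bool:
    """Return True if the service answers the health endpoint with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:  # connection refused, timeout, DNS failure, ...
        return False
```

In practice a simple `curl -f http://localhost:7860/health` works just as well when curl is available in the image.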
### Files to include

```
Dockerfile
requirements.txt
api/
src/
frontend/            # Vite builds during Docker image creation
cleaned_dataset/
artifacts/v1/        # v1 model checkpoints (legacy)
artifacts/v2/        # v2 model checkpoints (recommended)
artifacts/models/    # Root-level models (backward compat)
```
### Hugging Face Space URL

https://huggingface.co/spaces/NeerajCodz/aiBatteryLifeCycle
### Space configuration (README.md header)

```yaml
---
title: AI Battery Lifecycle Predictor
emoji: 🔋
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
app_port: 7860
---
```
## Production Considerations

### Performance

- Use `--workers N` for multi-core deployment
- Enable GPU passthrough for deep-model inference: `docker run --gpus all`
- Consider preloading all models at startup rather than lazy loading
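The trade-off between preloading and lazy loading can be sketched as below; `loader` stands in for the project's actual checkpoint-loading routine, which is not shown here:

```python
from typing import Any, Callable, Dict, List

def preload(names: List[str], loader: Callable[[str], Any]) -> Dict[str, Any]:
    """Eagerly load every model at startup, so no request pays load latency."""
    return {name: loader(name) for name in names}

class LazyModels:
    """Lazy alternative: load a model on first use, then cache it."""

    def __init__(self, loader: Callable[[str], Any]):
        self._loader = loader
        self._cache: Dict[str, Any] = {}

    def get(self, name: str) -> Any:
        if name not in self._cache:
            self._cache[name] = self._loader(name)  # first request pays the load cost
        return self._cache[name]
```

Preloading raises startup time and resident memory, but removes the first-request latency spike that lazy loading causes.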
### Security

- Set `CORS_ORIGINS` to specific domains in production
- Add authentication middleware if needed
- Use an HTTPS reverse proxy (nginx, Caddy)
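A typical way to turn a comma-separated `CORS_ORIGINS` value into the list FastAPI's `CORSMiddleware` expects is sketched below; the parsing logic and the Vite-dev-server default are assumptions, not the app's confirmed behavior:

```python
import os
from typing import List

def parse_cors_origins(default: str = "http://localhost:5173") -> List[str]:
    """Split CORS_ORIGINS (e.g. 'https://a.com,https://b.com') into a list.

    The default is the Vite dev server origin (an assumption for local dev).
    """
    raw = os.environ.get("CORS_ORIGINS", default)
    return [origin.strip() for origin in raw.split(",") if origin.strip()]

# The result would typically be passed to FastAPI's CORS middleware:
# app.add_middleware(CORSMiddleware, allow_origins=parse_cors_origins(), ...)
```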
### Monitoring

- Health endpoint: `/health`
- Logs: JSON-per-line rotating log at `artifacts/logs/battery_lifecycle.log` (10 MB × 5 backups). Set `LOG_LEVEL=DEBUG` for verbose output, and mount a volume to persist logs across container restarts.
- Metrics: add a Prometheus endpoint if needed