# Deployment Guide
## Local Development
### Backend
```bash
cd aiBatteryLifecycle
.\venv\Scripts\activate # Windows
source venv/bin/activate # Linux/Mac
uvicorn api.main:app --host 0.0.0.0 --port 7860 --reload
```
### Frontend (dev mode)
```bash
cd frontend
npm install
npm run dev
```
Frontend proxies `/api/*` to `localhost:7860` in dev mode.
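The dev-server proxy is typically configured in `frontend/vite.config.ts`; a minimal sketch, assuming the default Vite setup (the repository's actual config may differ):

```ts
// vite.config.ts — dev-only proxy so the SPA can call the API without CORS issues
import { defineConfig } from "vite";

export default defineConfig({
  server: {
    proxy: {
      "/api": {
        target: "http://localhost:7860",
        changeOrigin: true,
      },
    },
  },
});
```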
### Frontend (production build)
```bash
cd frontend
npm run build
```
Built files go to `frontend/dist/` and are served by FastAPI.
---
## Docker
### Build
```bash
docker build -t battery-predictor .
```
### Run
```bash
docker run -p 7860:7860 battery-predictor
```
### Build stages
1. **frontend-build:** `node:20-slim` → installs npm deps and builds the React SPA
2. **runtime:** `python:3.11-slim` → installs Python deps, copies source and the built frontend
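The two stages above can be sketched as a multi-stage Dockerfile. This is an illustration of the pattern, not the repository's actual Dockerfile (paths and commands are assumptions based on this guide):

```dockerfile
# Stage 1: build the React SPA
FROM node:20-slim AS frontend-build
WORKDIR /app/frontend
COPY frontend/package*.json ./
RUN npm ci
COPY frontend/ .
RUN npm run build

# Stage 2: Python runtime serving the API plus the built frontend
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
COPY --from=frontend-build /app/frontend/dist ./frontend/dist
EXPOSE 7860
CMD ["uvicorn", "api.main:app", "--host", "0.0.0.0", "--port", "7860"]
```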
### Docker Compose (recommended)
```bash
# Production β single container (frontend + API)
docker compose up --build
# Development β backend only with hot-reload
docker compose --profile dev up api-dev
# then separately:
cd frontend && npm run dev
```
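A compose file matching the two workflows above might look like this (a sketch; service names and the `dev` profile are inferred from the commands shown):

```yaml
services:
  app:
    build: .
    ports:
      - "7860:7860"
    environment:
      - LOG_LEVEL=INFO

  api-dev:
    profiles: ["dev"]
    build: .
    command: uvicorn api.main:app --host 0.0.0.0 --port 7860 --reload
    ports:
      - "7860:7860"
    volumes:
      - .:/app
```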
### Environment Variables
| Variable | Default | Description |
|----------|---------|-------------|
| `LOG_LEVEL` | `INFO` | Logging verbosity (`DEBUG` / `INFO` / `WARNING` / `ERROR`) |
| `WORKERS` | `1` | Uvicorn worker count |
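How the API might resolve these variables, as a sketch (`deployment_settings` is a hypothetical helper; the real lookup in `api/main.py` may differ):

```python
import os

def deployment_settings(env=os.environ):
    """Resolve the documented variables, falling back to the table's defaults."""
    return {
        "log_level": env.get("LOG_LEVEL", "INFO"),
        "workers": int(env.get("WORKERS", "1")),
    }

# e.g. LOG_LEVEL=DEBUG WORKERS=2 -> {"log_level": "DEBUG", "workers": 2}
```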
---
## Hugging Face Spaces
### Setup
1. Create a new Space on Hugging Face (SDK: Docker)
2. Push the repository to the Space
3. The Dockerfile exposes port 7860 (HF Spaces default)
### Dockerfile Requirements
- Must expose port **7860**
- Must respond to health checks at `/health`
- Keep image size manageable (use CPU-only PyTorch/TF)
### Files to include
```
Dockerfile
requirements.txt
api/
src/
frontend/ # Vite builds during Docker image creation
cleaned_dataset/
artifacts/v1/ # v1 model checkpoints (legacy)
artifacts/v2/ # v2 model checkpoints (recommended)
artifacts/models/ # Root-level models (backward compat)
```
### HuggingFace Space URL
```
https://huggingface.co/spaces/NeerajCodz/aiBatteryLifeCycle
```
### Space configuration (README.md header)
```yaml
---
title: AI Battery Lifecycle Predictor
emoji: 🔋
colorFrom: green
colorTo: blue
sdk: docker
pinned: false
app_port: 7860
---
```
---
## Production Considerations
### Performance
- Use `--workers N` for multi-core deployment
- Enable GPU passthrough for deep model inference: `docker run --gpus all`
- Consider preloading all models (not lazy loading)
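Preloading versus lazy loading can be sketched with a cached loader (hypothetical names — `load_model` stands in for the real checkpoint loading):

```python
from functools import lru_cache

MODEL_NAMES = ["v1", "v2"]  # e.g. the artifact versions listed above

def load_model(name: str) -> dict:
    # Placeholder for an expensive load (torch.load / joblib, etc.)
    return {"name": name}

@lru_cache(maxsize=None)
def get_model(name: str) -> dict:
    return load_model(name)

def preload_all() -> None:
    # Warm the cache at startup so the first request pays no load latency
    for name in MODEL_NAMES:
        get_model(name)

preload_all()
```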
### Security
- Set `CORS_ORIGINS` to specific domains in production
- Add authentication middleware if needed
- Use HTTPS reverse proxy (nginx, Caddy)
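A minimal nginx TLS-termination block, as a sketch (domain and certificate paths are placeholders):

```nginx
server {
    listen 443 ssl;
    server_name example.com;
    ssl_certificate     /etc/ssl/certs/example.com.pem;
    ssl_certificate_key /etc/ssl/private/example.com.key;

    location / {
        proxy_pass http://127.0.0.1:7860;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto https;
    }
}
```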
### Monitoring
- Health endpoint: `/health`
- Logs: JSON-per-line rotating log at `artifacts/logs/battery_lifecycle.log` (10 MB × 5 backups)
  - Set `LOG_LEVEL=DEBUG` for verbose output; mount a volume to persist logs across container restarts
- Metrics: Add Prometheus endpoint if needed
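The rotating JSON-per-line logger described above can be sketched with the standard library (a sketch — the app's actual log fields may differ; a temp directory stands in for `artifacts/logs/`):

```python
import json
import logging
import logging.handlers
import os
import tempfile

# Rotates at 10 MB, keeping 5 backup files, matching the documented setup
log_path = os.path.join(tempfile.mkdtemp(), "battery_lifecycle.log")
handler = logging.handlers.RotatingFileHandler(
    log_path, maxBytes=10 * 1024 * 1024, backupCount=5
)

class JsonLineFormatter(logging.Formatter):
    """Emit one JSON object per line."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({"level": record.levelname, "msg": record.getMessage()})

handler.setFormatter(JsonLineFormatter())
logger = logging.getLogger("battery_lifecycle")
logger.setLevel(os.environ.get("LOG_LEVEL", "INFO"))
logger.addHandler(handler)

logger.info("startup complete")
handler.flush()
```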