Phase IV Deployment Guide
Status: Infrastructure Generation COMPLETE
Date: 2026-01-30
Branch: 005-phase4-infra
✅ Completed Work
Phase 1: Setup - COMPLETE
- Phase III frontend/backend copied to phase-4/apps (READ-ONLY)
- Directory structure created
- Constitution warnings added (Phase III code immutability)
Phase 2: Foundational - COMPLETE
- Environment templates created (.env.example)
- Backend API contract documented
- Chatbot service configured
Phase 3: Infrastructure Generation - COMPLETE
- 4 Dockerfiles created (frontend, backend, chatbot, ollama)
- 4 Kubernetes deployments generated (with health probes)
- 4 Kubernetes services generated (ClusterIP)
- Helm chart created (Chart.yaml, values.yaml, templates/)
- PVC for Ollama models (10Gi)
- ConfigMap for environment variables
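The health probes mentioned above generally take this shape in a deployment template — a minimal sketch, assuming the backend exposes `/api/health` on port 8000 (the paths and timings are assumptions; check the generated templates for the actual values):

```yaml
# Hedged sketch of the probe stanza in a backend deployment template.
livenessProbe:
  httpGet:
    path: /api/health   # assumed path, matching the documented backend API contract
    port: 8000
  initialDelaySeconds: 15
  periodSeconds: 20
readinessProbe:
  httpGet:
    path: /api/health
    port: 8000
  initialDelaySeconds: 5
  periodSeconds: 10
```

The readiness probe keeps a pod out of service endpoints until it answers; the liveness probe restarts it if it stops answering.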
Docker Images Built
- todo-backend:latest (373MB)
- todo-chatbot:latest (255MB)
- todo-frontend:latest (build in progress)
- ollama/ollama:latest (pull in progress)
🔧 Manual Steps Required
Step 1: Fix Minikube (Required Due to Corrupted State)
Minikube's local state became corrupted during the automated deployment. Fix it manually using one of the options below:
Option A: Delete and Recreate
# Run in PowerShell as Administrator
Remove-Item -Recurse -Force C:\Users\<YourUser>\.minikube\machines\minikube
minikube start --cpus=2 --memory=1536 --driver=docker
Option B: Use Docker Compose (Alternative)
# Use the Docker Compose approach instead of Minikube
docker-compose -f phase-4/infra/docker/docker-compose.yml up -d
Step 2: Build Frontend Image
# From project root
docker build -t todo-frontend:latest -f phase-4/infra/docker/Dockerfile.frontend phase-4/apps/todo-frontend
Step 3: Pull Ollama Image
docker pull ollama/ollama:latest
Step 4: Deploy with Helm (Once Minikube is Fixed)
# Start Minikube
minikube start --cpus=2 --memory=1536 --driver=docker
# Enable ingress
minikube addons enable ingress
# Set Docker environment to use Minikube's daemon (bash/zsh)
# PowerShell: minikube docker-env --shell powershell | Invoke-Expression
eval $(minikube docker-env)
# Images built on the host daemon are not visible inside Minikube.
# Either rebuild the images after the docker-env step, or load them directly:
minikube image load todo-backend:latest
minikube image load todo-frontend:latest
minikube image load todo-chatbot:latest
minikube image load ollama/ollama:latest
# Install Helm chart
helm install todo-app phase-4/infra/helm/todo-app
# Wait for pods
kubectl wait --for=condition=ready pod -l app=todo-app --timeout=120s
# Check status
kubectl get pods
kubectl get services
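When images live only in Minikube's Docker daemon (built there or loaded with `minikube image load`), Kubernetes must not attempt to pull them from a registry. A hedged values.yaml sketch — the key names are assumptions, so match them to the generated chart:

```yaml
# Hedged sketch: per-service image settings in values.yaml
backend:
  image:
    repository: todo-backend
    tag: latest
    pullPolicy: Never   # image exists only in Minikube's local daemon; never pull
```

With `pullPolicy: IfNotPresent` a typo in the tag silently falls through to a registry pull; `Never` fails fast instead.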
Step 5: Validate Deployment
# Port-forward frontend
# Port-forward the frontend and backend (each command blocks; use separate terminals)
kubectl port-forward svc/todo-frontend 3000:80
kubectl port-forward svc/todo-backend 8000:8000
# Access application
# Frontend: http://localhost:3000
# Backend API: http://localhost:8000/api/health
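Since the ingress addon is enabled in Step 4, an Ingress can replace the port-forwards for routine access — a hedged sketch, where the host and service names are assumptions:

```yaml
# Hedged sketch: NGINX Ingress routing browser traffic to the frontend service.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: todo-app
spec:
  rules:
    - host: todo.local   # assumed host; map it to the `minikube ip` address in your hosts file
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: todo-frontend
                port:
                  number: 80
```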
Step 6: Preload Ollama Model
kubectl exec -it deployment/ollama -- ollama pull llama3.2:3b
Alternative: Docker Compose Deployment
If Minikube continues to have issues, you can use Docker Compose:
- Create phase-4/infra/docker/docker-compose.yml:
version: '3.8'
services:
  frontend:
    image: todo-frontend:latest
    ports:
      - "3000:3000"
    environment:
      - NEXT_PUBLIC_BACKEND_URL=http://backend:8000
    depends_on:
      - backend
  backend:
    image: todo-backend:latest
    ports:
      - "8000:8000"
    environment:
      - DATABASE_URL=postgresql://user:password@postgres:5432/tododb
      - JWT_SECRET=your-jwt-secret
      - OLLAMA_HOST=http://ollama:11434
    depends_on:
      - postgres
      - ollama
  chatbot:
    image: todo-chatbot:latest
    ports:
      - "8001:8001"
    environment:
      - BACKEND_API_URL=http://backend:8000
      - OLLAMA_BASE_URL=http://ollama:11434
      - MODEL_NAME=llama3.2:3b
    depends_on:
      - backend
      - ollama
  ollama:
    image: ollama/ollama:latest
    ports:
      - "11434:11434"
    volumes:
      - ollama-models:/models
  postgres:
    image: postgres:15
    environment:
      - POSTGRES_DB=tododb
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    volumes:
      - postgres-data:/var/lib/postgresql/data

volumes:
  ollama-models:
  postgres-data:
- Start services:
docker-compose -f phase-4/infra/docker/docker-compose.yml up -d
- Preload Ollama model:
docker exec -it <ollama-container-id> ollama pull llama3.2:3b
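Note that `depends_on` in the short form above only orders container startup; it does not wait for a service to be ready, so the backend can race postgres or ollama. A hedged fragment to serialize startup (merge into the compose file above; `pg_isready` ships with the postgres:15 image, and the `condition` long form requires a recent Compose version):

```yaml
# Hedged sketch: gate backend startup on postgres actually accepting connections.
services:
  postgres:
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U user -d tododb"]
      interval: 5s
      timeout: 3s
      retries: 10
  backend:
    depends_on:
      postgres:
        condition: service_healthy   # wait for the healthcheck, not just the container
```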
Architecture Overview
Browser → Frontend (localhost:3000)
                 ↓
         Backend Service (port 8000)
                 ↓
         PostgreSQL (Neon or local)

Chatbot (port 8001)
    ↓                 ↓
Ollama            Backend Service
(port 11434)      (port 8000)
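The ConfigMap listed under Phase 3 wires these service URLs together via in-cluster DNS. A hedged sketch — the resource name and keys mirror the Docker Compose environment and are assumptions, not the generated manifest:

```yaml
# Hedged sketch: shared environment wiring for the chatbot and backend pods.
apiVersion: v1
kind: ConfigMap
metadata:
  name: todo-app-config
data:
  BACKEND_API_URL: "http://todo-backend:8000"   # assumed in-cluster service name
  OLLAMA_BASE_URL: "http://ollama:11434"
  MODEL_NAME: "llama3.2:3b"
```

Pods consume it via `envFrom`, so the same keys serve both the Compose and Kubernetes deployments.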
Files Generated
Phase IV Workspace:
- phase-4/apps/todo-frontend/ (READ-ONLY copy)
- phase-4/apps/todo-backend/ (READ-ONLY copy)
- phase-4/apps/chatbot/src/main.py (FastAPI chatbot)
Docker Infrastructure:
- phase-4/infra/docker/Dockerfile.frontend
- phase-4/infra/docker/Dockerfile.backend
- phase-4/infra/docker/Dockerfile.chatbot
- phase-4/infra/docker/Dockerfile.ollama
Kubernetes Infrastructure:
- phase-4/infra/helm/todo-app/Chart.yaml
- phase-4/infra/helm/todo-app/values.yaml
- phase-4/infra/helm/todo-app/templates/ (12 manifests)
Documentation:
- phase-4/docs/backend-api-contract.md
- phase-4/docs/IMPLEMENTATION-STATUS.md
- specs/005-phase4-infra/tasks.md (updated)
Troubleshooting
Minikube won't start:
- Ensure Docker Desktop is running
- Check Docker has enough resources (Settings → Resources)
- Try minikube delete, then minikube start
Frontend build fails:
- Ensure node_modules exists in phase-4/apps/todo-frontend/
- Try npm install in the frontend directory first
- Check .env.local has correct backend URL
Ollama model fails to load:
- Ensure PVC is created and mounted
- Check Ollama pod has enough memory (4Gi limit)
- Try pulling model manually:
kubectl exec -it deployment/ollama -- ollama pull llama3.2:3b
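The 4Gi limit mentioned above corresponds to a resources stanza like the following in the Ollama deployment — a hedged sketch in which the request values are assumptions:

```yaml
# Hedged sketch: memory headroom for Ollama serving llama3.2:3b.
resources:
  requests:
    memory: "2Gi"   # assumed request; the 3B model's weights alone need roughly this much
    cpu: "500m"
  limits:
    memory: "4Gi"   # the limit referenced in the troubleshooting note above
```

If the pod is OOMKilled while loading the model, raise the limit before retrying the pull.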
Next Steps
- Fix Minikube or use Docker Compose alternative
- Complete frontend Docker build
- Deploy all services
- Validate deployment (kubectl get pods)
- Test frontend at http://localhost:3000
- Test chatbot at http://localhost:8001/api/chat
- Preload Ollama model
- Test end-to-end: Create todo via chatbot
Constitution Compliance
✅ Phase III code immutability (READ-ONLY copies)
✅ Infrastructure-only changes (no business logic modified)
✅ Ollama-first LLM runtime (no external APIs)
✅ Kubernetes-native deployment (Helm charts)
✅ Service isolation (one container per service)
✅ No Phase V features (AI memory, scheduling excluded)