fix: Use lightweight requirements-deploy.txt (no PyTorch/transformers/whisper) to fit Hugging Face free-tier size limits; the ML services already have graceful fallbacks
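The "graceful fallbacks" this commit relies on presumably follow the standard optional-import pattern. A minimal sketch (module and function names here are hypothetical, not taken from the repo):

```python
# Hypothetical sketch of a graceful fallback for a missing heavy ML dependency.
# `whisper` is absent from requirements-deploy.txt, so the import may fail.
try:
    import whisper  # heavy ML package, excluded from the deploy image
    HAS_WHISPER = True
except ImportError:
    whisper = None
    HAS_WHISPER = False

def transcribe(audio_path: str) -> str:
    """Return a transcript, or a placeholder when the ML stack is unavailable."""
    if not HAS_WHISPER:
        # Degrade gracefully instead of crashing the whole service.
        return "[transcription unavailable: ML dependencies not installed]"
    model = whisper.load_model("base")
    return model.transcribe(audio_path)["text"]
```

With this pattern the service starts and serves its non-ML endpoints even in the slimmed-down deployment image.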
fix: Add build dependencies for PyTorch, librosa, and bcrypt, and force a CPU-only PyTorch download to prevent out-of-memory crashes in the Hugging Face Docker container
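Forcing the CPU-only PyTorch wheel is typically done by pointing pip at PyTorch's CPU package index, which avoids pulling the much larger CUDA build. A sketch of what the Dockerfile change might look like (the exact RUN lines and file names are assumptions, not taken from the repo):

```dockerfile
# Install the CPU-only PyTorch wheel first, from PyTorch's CPU index,
# so the deploy image never downloads the multi-GB CUDA build.
RUN pip install --no-cache-dir torch --index-url https://download.pytorch.org/whl/cpu

# Then install the remaining (lightweight) deployment requirements.
RUN pip install --no-cache-dir -r requirements-deploy.txt
```

`--index-url https://download.pytorch.org/whl/cpu` is the install mechanism documented on PyTorch's "Get Started" page for CPU-only builds; `--no-cache-dir` keeps pip's download cache out of the image layers.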