===============================================================================
FILES TO UPLOAD TO HUGGINGFACE SPACES
===============================================================================

✅ COPY THESE FILES TO YOUR SPACE (11 files total):

  1. app.py               - Main application (REQUIRED - HF Spaces entry point)
  2. llm.py               - LLM inference with local models
  3. extractors.py        - Document text extraction (DOCX/PDF)
  4. tagging.py           - Speaker tagging
  5. chunking.py          - Text chunking
  6. validation.py        - Quality validation
  7. reporting.py         - CSV/PDF report generation
  8. dashboard.py         - Dashboard generation
  9. production_logger.py - Session logging
 10. quote_extractor.py   - Quote extraction (optional but recommended)
 11. requirements.txt     - Python dependencies

===============================================================================
OPTIONAL - NICE TO HAVE:
===============================================================================

- README.md - Documentation for your Space

===============================================================================
DO NOT UPLOAD:
===============================================================================

❌ .env         - Contains secrets (use Spaces Variables instead)
❌ test_*.py    - Test files
❌ *.log        - Log files
❌ logs/        - Log directory
❌ outputs/     - Output directory
❌ __pycache__/ - Python cache

===============================================================================
HUGGINGFACE SPACES SETTINGS:
===============================================================================

Space SDK: Gradio
Hardware:  GPU (T4 or better)
           ⚠️ IMPORTANT - CPU will be very slow!
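The hardware warning above can also be surfaced at startup. A minimal sketch, assuming `torch` is installed via requirements.txt; `pick_device` is a hypothetical helper, not part of the project:

```python
# Hypothetical startup helper: choose the inference device and warn when the
# Space is running on CPU, where Phi-3-mini inference will be very slow.
def pick_device() -> str:
    """Return 'cuda:0' when a GPU is usable, otherwise fall back to 'cpu'."""
    try:
        import torch  # assumed present in requirements.txt for local inference
        if torch.cuda.is_available():
            return "cuda:0"
    except ImportError:
        pass
    print("⚠️ No GPU available - falling back to CPU (expect slow inference)")
    return "cpu"
```

Logging the chosen device once at startup makes a mis-provisioned (CPU-only) Space obvious in the Logs tab instead of just appearing slow.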
Optional Variables (Settings → Variables):

- DEBUG_MODE  = True (to see detailed logs)
- LOCAL_MODEL = microsoft/Phi-3-mini-4k-instruct (default, no need to set)

===============================================================================
DEPLOYMENT METHOD:
===============================================================================

Option 1: Direct Upload
  - Go to your Space → Files → Upload files
  - Drag and drop the 11 files above

Option 2: Git Repository
  - Create a Git repo with these files
  - Add .gitignore (already created)
  - Connect the repo to your Space
  - Auto-deploys on push

===============================================================================
FIRST TIME STARTUP:
===============================================================================

1. Dependency install:  ~2-5 minutes
2. Model download:      ~2-5 minutes (Phi-3-mini downloads automatically)
3. Total first startup: ~5-10 minutes

Subsequent starts: ~30-60 seconds (the model is cached)

===============================================================================
VERIFICATION:
===============================================================================

Check the Logs tab - you should see:

  ✅ Configuration loaded for HuggingFace Spaces
  🚀 TranscriptorAI Enterprise - LLM Backend: local
  [Local Model] Loading microsoft/Phi-3-mini-4k-instruct...
  [Local Model] ✅ Model loaded on cuda:0
  Running on local URL: http://0.0.0.0:7860

===============================================================================
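The upload checklist above can be automated before deploying with either option. A quick sketch using only the standard library; `check_upload_set` is a hypothetical helper, not part of the project:

```python
# Hypothetical pre-upload check: verify the 11 required files exist and that
# no secret file (e.g. .env) is about to be uploaded by mistake.
from pathlib import Path

REQUIRED = [
    "app.py", "llm.py", "extractors.py", "tagging.py", "chunking.py",
    "validation.py", "reporting.py", "dashboard.py", "production_logger.py",
    "quote_extractor.py", "requirements.txt",
]
FORBIDDEN = [".env"]  # secrets belong in Settings → Variables, not in files

def check_upload_set(root: str = ".") -> list[str]:
    """Return a list of problems; an empty list means the set looks good."""
    base = Path(root)
    problems = [f"missing: {name}" for name in REQUIRED
                if not (base / name).is_file()]
    problems += [f"do not upload: {name}" for name in FORBIDDEN
                 if (base / name).exists()]
    return problems
```

Running `check_upload_set()` in the project directory before uploading catches a missing module or a stray `.env` early, instead of debugging a failed build in the Logs tab.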