# SPARKNET Deployment Guide
## Architecture Overview
SPARKNET supports a hybrid deployment architecture:
```
┌─────────────────────────────┐           ┌─────────────────────────────┐
│       Streamlit Cloud       │           │     GPU Server (Lytos)      │
│        (Frontend/UI)        │   HTTPS   │       FastAPI Backend       │
│                             │  ◄─────►  │                             │
│   sparknet.streamlit.app    │    API    │  - PaddleOCR (GPU)          │
│                             │           │  - Document Processing      │
│  - User Interface           │           │  - RAG + Embeddings         │
│  - Authentication           │           │  - Ollama LLM               │
│  - Cloud LLM fallback       │           │  - ChromaDB Vector Store    │
└─────────────────────────────┘           └─────────────────────────────┘
```
## Deployment Options
### Option 1: Full Stack on GPU Server (Recommended for Production)
Run both frontend and backend on Lytos with GPU acceleration.
### Option 2: Hybrid (Streamlit Cloud + GPU Backend)
- **Frontend**: Streamlit Cloud (free hosting, easy sharing)
- **Backend**: Lytos GPU server (full processing power)
### Option 3: Streamlit Cloud Only (Demo Mode)
- Uses cloud LLM providers (Groq, Gemini, etc.)
- Limited functionality (no OCR, no RAG indexing)
---
## Option 2: Hybrid Deployment (Recommended)
### Step 1: Set Up the Backend on Lytos (GPU Server)
#### 1.1 SSH into Lytos
```bash
ssh user@lytos.server.address
```
#### 1.2 Clone the repository
```bash
git clone https://github.com/your-repo/sparknet.git
cd sparknet
```
#### 1.3 Create a virtual environment
```bash
python -m venv venv
source venv/bin/activate
```
#### 1.4 Install backend dependencies
```bash
pip install -r backend/requirements.txt
```
#### 1.5 Install Ollama (for LLM inference)
```bash
curl -fsSL https://ollama.com/install.sh | sh
# Pull required models
ollama pull llama3.2:latest
ollama pull nomic-embed-text
```
#### 1.6 Start the backend server
```bash
# Development mode (auto-reload, single worker)
cd backend
uvicorn api:app --host 0.0.0.0 --port 8000 --reload
# Production mode (multiple workers; do not combine with --reload)
uvicorn api:app --host 0.0.0.0 --port 8000 --workers 4
```
#### 1.7 (Optional) Run with systemd for auto-restart
```bash
sudo nano /etc/systemd/system/sparknet-backend.service
```
Add:
```ini
[Unit]
Description=SPARKNET Backend API
After=network.target

[Service]
Type=simple
User=your-user
WorkingDirectory=/path/to/sparknet/backend
Environment=PATH=/path/to/sparknet/venv/bin
ExecStart=/path/to/sparknet/venv/bin/uvicorn api:app --host 0.0.0.0 --port 8000 --workers 4
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```
Reload systemd, then enable and start the service:
```bash
sudo systemctl daemon-reload
sudo systemctl enable sparknet-backend
sudo systemctl start sparknet-backend
```
#### 1.8 Configure firewall (allow port 8000)
```bash
sudo ufw allow 8000/tcp
```
#### 1.9 (Optional) Set up HTTPS with nginx
```bash
sudo apt install nginx certbot python3-certbot-nginx
sudo nano /etc/nginx/sites-available/sparknet
```
Add:
```nginx
server {
    listen 80;
    server_name api.sparknet.yourdomain.com;

    location / {
        proxy_pass http://127.0.0.1:8000;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_cache_bypass $http_upgrade;
        proxy_read_timeout 300s;
        proxy_connect_timeout 75s;
    }
}
```
Enable the site and obtain an SSL certificate:
```bash
sudo ln -s /etc/nginx/sites-available/sparknet /etc/nginx/sites-enabled/
sudo certbot --nginx -d api.sparknet.yourdomain.com
sudo systemctl restart nginx
```
### Step 2: Configure Streamlit Cloud
#### 2.1 Update Streamlit secrets
In Streamlit Cloud dashboard → Settings → Secrets, add:
```toml
# Backend URL (your Lytos server)
BACKEND_URL = "https://api.sparknet.yourdomain.com"
# Or without HTTPS:
# BACKEND_URL = "http://lytos-ip-address:8000"

# Fallback cloud providers (optional, used if backend unavailable)
GROQ_API_KEY = "your-groq-key"
GOOGLE_API_KEY = "your-google-key"

# Top-level keys must come before any [table] header in TOML
[auth]
password = "your-strong-password"
```
#### 2.2 Deploy to Streamlit Cloud
Push your code and Streamlit Cloud will auto-deploy:
```bash
git add .
git commit -m "Add backend support"
git push origin main
```
### Step 3: Verify Deployment
#### 3.1 Test the backend directly
```bash
# Health check
curl https://api.sparknet.yourdomain.com/api/health
# System status
curl https://api.sparknet.yourdomain.com/api/status
```
#### 3.2 Test from Streamlit
Visit your Streamlit app and check:
- The status bar should show "Backend" instead of "Demo Mode"
- The GPU indicator should appear
- Document processing should use the full pipeline
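These checks can be scripted. A minimal sketch, where the boolean field names in the `/api/status` payload (`gpu`, `ollama`, `rag`) are assumptions about the schema, not confirmed by the API:

```python
import json
import urllib.request

def check_backend(base_url: str, timeout: float = 5.0) -> dict:
    """Fetch /api/status and return the parsed JSON payload."""
    with urllib.request.urlopen(f"{base_url}/api/status", timeout=timeout) as resp:
        return json.load(resp)

def summarize_status(status: dict) -> str:
    """Render a one-line summary like the Streamlit status bar.

    An empty payload is treated as "Demo Mode"; the field names are
    assumptions about what /api/status returns.
    """
    parts = ["Backend" if status else "Demo Mode"]
    if status.get("gpu"):
        parts.append("GPU")
    if status.get("ollama"):
        parts.append("Ollama")
    if status.get("rag"):
        parts.append("RAG")
    return " | ".join(parts)
```

Against a live deployment, `summarize_status(check_backend("https://api.sparknet.yourdomain.com"))` should print the same indicators the UI shows.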
---
## Backend API Endpoints
| Endpoint | Method | Description |
|----------|--------|-------------|
| `/api/health` | GET | Health check |
| `/api/status` | GET | System status (Ollama, GPU, RAG) |
| `/api/process` | POST | Process document (OCR, layout) |
| `/api/index` | POST | Index document to RAG |
| `/api/query` | POST | Query RAG system |
| `/api/search` | POST | Search similar chunks |
| `/api/documents` | GET | List indexed documents |
| `/api/documents/{id}` | DELETE | Delete document |
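A minimal client for the query endpoint can be built with the standard library alone. The request field names (`query`, `top_k`) are assumptions about the schema, so verify them against the Swagger UI first:

```python
import json
import urllib.request

def build_query_payload(question: str, top_k: int = 5) -> bytes:
    """Serialize the request body for /api/query.

    Field names here are assumed, not confirmed by the API schema.
    """
    return json.dumps({"query": question, "top_k": top_k}).encode("utf-8")

def query_rag(base_url: str, question: str, top_k: int = 5) -> dict:
    """POST a question to /api/query and return the parsed JSON response."""
    req = urllib.request.Request(
        f"{base_url}/api/query",
        data=build_query_payload(question, top_k),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        return json.load(resp)
```

The long timeout reflects that RAG queries may wait on LLM generation, which the nginx `proxy_read_timeout 300s` above also accounts for.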
### API Documentation
Once the backend is running, visit:
- Swagger UI: `http://lytos:8000/docs`
- ReDoc: `http://lytos:8000/redoc`
---
## Environment Variables
### Backend (Lytos)
```bash
# Optional: Configure Ollama host if not localhost
export OLLAMA_HOST=http://localhost:11434
# Optional: GPU device selection
export CUDA_VISIBLE_DEVICES=0
```
### Frontend (Streamlit)
Set in `secrets.toml` or Streamlit Cloud secrets:
```toml
# Required for hybrid mode
BACKEND_URL = "https://your-backend-url"

# Fallback cloud providers
GROQ_API_KEY = "..."
GOOGLE_API_KEY = "..."

# Authentication (keep [auth] last: keys after a [table] header belong to it)
[auth]
password = "your-password"
```
---
## Troubleshooting
### Backend not reachable
1. Check if backend is running:
```bash
curl http://localhost:8000/api/health
```
2. Check firewall:
```bash
sudo ufw status
```
3. Check nginx logs:
```bash
sudo tail -f /var/log/nginx/error.log
```
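Before digging through nginx logs, a raw TCP check distinguishes "service down" from "blocked by firewall or proxy". A minimal sketch using only the standard library:

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    If this returns True but curl fails, the problem is above the
    transport layer (nginx config, TLS, the app itself), not the firewall.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

For example, `port_open("lytos-ip-address", 8000)` failing while the backend responds locally points at ufw or an upstream network ACL.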
### GPU not detected
1. Check CUDA:
```bash
nvidia-smi
python -c "import torch; print(torch.cuda.is_available())"
```
2. Check PaddlePaddle GPU:
```bash
python -c "import paddle; print(paddle.device.is_compiled_with_cuda())"
```
### Ollama not working
1. Check Ollama status:
```bash
ollama list
curl http://localhost:11434/api/tags
```
2. Restart Ollama:
```bash
sudo systemctl restart ollama
```
### Document processing fails
1. Check backend logs:
```bash
journalctl -u sparknet-backend -f
```
2. Test processing directly:
```bash
curl -X POST http://localhost:8000/api/process \
  -F "file=@test.pdf" \
  -F "ocr_engine=paddleocr"
```
---
## Security Considerations
### Production Checklist
- [ ] Enable HTTPS for backend API
- [ ] Configure CORS properly (restrict origins)
- [ ] Use strong authentication password
- [ ] Enable rate limiting
- [ ] Set up monitoring and alerts
- [ ] Configure backup for ChromaDB data
- [ ] Review GDPR compliance for data handling
### CORS Configuration
In `backend/api.py`, update for production:
```python
app.add_middleware(
    CORSMiddleware,
    allow_origins=["https://sparknet.streamlit.app"],  # Your Streamlit URL
    allow_credentials=True,
    allow_methods=["GET", "POST", "DELETE"],
    allow_headers=["*"],
)
```
---
## Performance Tuning
### Backend Workers
Adjust based on CPU cores:
```bash
uvicorn api:app --workers $(nproc)
```
### GPU Memory
For large documents, monitor GPU memory:
```bash
watch -n 1 nvidia-smi
```
### ChromaDB Optimization
For large document collections:
```python
store_config = VectorStoreConfig(
    persist_directory="data/sparknet_unified_rag",
    collection_name="sparknet_documents",
    similarity_threshold=0.0,
    # Add indexing options for better performance
)
```
---
## Contact & Support
- **Project**: VISTA/Horizon EU
- **Framework**: SPARKNET - Strategic Patent Acceleration & Research Kinetics NETwork
- **Issues**: https://github.com/your-repo/sparknet/issues