# SentinelAI — Judge Demo Script (do not improvise)

## Preconditions (2 minutes before)

1. Terminal A — API:
   `cd SentinelAI && source .venv/bin/activate && export PYTHONPATH=$PWD && export SKIP_DB=1`
   `uvicorn backend.app.main:app --host 0.0.0.0 --port 8000`
2. Terminal B — UI:
   `cd SentinelAI/frontend && NEXT_PUBLIC_API_URL=http://127.0.0.1:8000 npm run dev`
3. Open the dashboard at `http://localhost:3000` (or your dev URL).
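Before walking the flow, a quick pre-flight check helps confirm both processes answer. This is a minimal sketch, not part of the repo; it assumes the backend serves FastAPI's default `/docs` route, so swap in a dedicated health endpoint if the app exposes one.

```python
# Pre-flight check: confirm the API and UI answer before the demo starts.
# Assumption: the backend exposes FastAPI's default /docs route; replace the
# URL with a dedicated health endpoint if the app has one.
import sys
import urllib.request

CHECKS = [
    ("API", "http://127.0.0.1:8000/docs"),  # assumed docs route
    ("UI", "http://localhost:3000"),
]

for name, url in CHECKS:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            print(f"{name} OK ({resp.status}) at {url}")
    except Exception as exc:
        print(f"{name} NOT READY at {url}: {exc}")
        sys.exit(1)
```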
## The flow (≈3–4 minutes)

1. **Start continuous simulation**
   Terminal C: `python scripts/continuous_demo.py`
   Say: *“This is autonomous traffic — no manual log upload.”*
2. **Live stream**
   Point at **Live Threat Feed** and the **terminal strip**.
   Say: *“Collector → parser → enrichment → detection — everything is event-driven.”*
3. **Threat detected**
   When **detection** rows appear with severity, say: *“Rules + sliding windows — brute-force and post-auth patterns.”* (A minimal sliding-window sketch appears after this list.)
4. **Incident chain**
   Point at **Attack Timeline** when an incident appears.
   Say: *“Correlation fuses events by source into one narrative.”*
5. **AI investigation**
   Wait for **AI Investigation** to populate (it auto-runs after an incident; allow up to ~`AUTO_AI_MIN_SEC` seconds between runs).
   Say: *“Analyst layer — progression, severity rationale, remediation bullets — local Llama/Qwen on AMD ROCm when configured.”*
6. **WOW — Replay**
   Click **Replay last chain**.
   Say: *“We’re re-streaming the buffered kill chain for the jury — same detections and AI report, cinematic replay.”*
7. **Remediation**
   Scroll the AI panel for **Recommended actions** (or call `POST /remediation` with `incident_id` if you show the API; a hedged request example appears after this list).
   Say: *“Playbooks block IOCs, rotate creds, harden IAM.”*
8. **AMD story**
   Point at the **Powered by AMD ROCm** panel (GPU %, latency, and concurrent agents are demo-tuned metrics).
   Say: *“Open weights, on-prem, parallel agents — ROCm is our inference path for SOC-scale throughput.”*
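For step 3, the sketch below illustrates the sliding-window idea in isolation. It is not the project's actual detector; the window size, threshold, and function name are assumptions chosen for the example.

```python
# Illustrative sliding-window brute-force check (not SentinelAI's detector).
# A source IP is flagged once it records more than THRESHOLD failed logins
# within the last WINDOW_SEC seconds. Values are assumptions for the sketch.
import time
from collections import defaultdict, deque

WINDOW_SEC = 60.0
THRESHOLD = 5

_failures = defaultdict(deque)  # source_ip -> deque of failure timestamps

def observe_failed_login(source_ip, ts=None):
    """Record one failed login; return True when the source crosses the threshold."""
    now = time.time() if ts is None else ts
    window = _failures[source_ip]
    window.append(now)
    # Evict events that have slid out of the window.
    while window and now - window[0] > WINDOW_SEC:
        window.popleft()
    return len(window) > THRESHOLD

# Example: rapid failures from one IP trip the detector on the sixth call.
for i in range(10):
    if observe_failed_login("203.0.113.7", ts=i):
        print(f"brute-force suspected after {i + 1} failures")
        break
```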
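For step 7, if you choose to show the API, the sketch below issues the `POST /remediation` call. The endpoint and the `incident_id` field come from this script; the base URL, helper name, and response handling are assumptions.

```python
# Optional API call for step 7: trigger remediation for an incident by id.
# The /remediation endpoint and incident_id field come from this script;
# the base URL, helper name, and response shape are assumptions.
import json
import urllib.request

def trigger_remediation(incident_id, base_url="http://127.0.0.1:8000"):
    payload = json.dumps({"incident_id": incident_id}).encode()
    req = urllib.request.Request(
        f"{base_url}/remediation",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.loads(resp.read())

# Example: print(trigger_remediation("incident-123"))
```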
## Optional soak test (10–15 minutes)

- Leave `continuous_demo.py` running; confirm the API stays up, the WebSocket shows heartbeats, and the UI stays responsive.
- If the LLM is down, narratives still read well — **cinematic fallback** is always on.
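A small soak helper, again only a sketch that reuses the `/docs` assumption from the pre-flight check, can poll the API on an interval and log any failures while `continuous_demo.py` runs.

```python
# Soak helper: poll the API every INTERVAL_SEC for DURATION_SEC and log the
# result. Reuses the /docs assumption from the pre-flight check above.
import time
import urllib.request

API = "http://127.0.0.1:8000/docs"  # assumed docs route
DURATION_SEC = 15 * 60
INTERVAL_SEC = 30

deadline = time.time() + DURATION_SEC
while time.time() < deadline:
    stamp = time.strftime("%H:%M:%S")
    try:
        with urllib.request.urlopen(API, timeout=5) as resp:
            print(f"{stamp} API OK ({resp.status})")
    except Exception as exc:
        print(f"{stamp} API check failed: {exc}")
    time.sleep(INTERVAL_SEC)
```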
## Backup

- If the live demo fails, use your **screen recording** (see `docs/RECORDING_CHECKLIST.md`).