---
title: Security Incident Analyzer
emoji: 🛡️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: src/app.py
pinned: false
license: mit
---

# Security Incident Analyzer

An LLM-powered security log and incident analysis tool with a production-ready web interface.

## 🎯 What It Does

Paste security logs or alerts → get immediate AI analysis explaining:

- **What happened** - a clear incident summary
- **Severity level** - CRITICAL, HIGH, MEDIUM, LOW, or INFO
- **Suggested remediation** - actionable next steps
- **Key indicators** - IOCs and anomalies detected

No training required. Director-level insights in seconds.

## 🚀 Try It Now

**Live Demo:** https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer

Or run locally (5 minutes):

```bash
git clone https://github.com/Debashis2007/SecurityIncidentAnalyzer.git
cd SecurityIncidentAnalyzer
pip install -r requirements.txt
cp .env.example .env
python src/app.py
# Visit http://localhost:7860
```

## 🏗️ Architecture

```
src/
├── app.py             # Gradio web UI entry point
├── analyzer/
│   ├── security.py    # IncidentAnalyzer: core analysis logic
│   └── models.py      # RiskLevel enum, SecurityAnalysis dataclass
├── llm/
│   ├── provider.py    # LLM provider abstraction
│   │                  #   - OpenAIProvider (gpt-4-turbo)
│   │                  #   - LocalLLMProvider (Ollama)
│   │                  #   - MockLLMProvider (demo)
│   └── prompts.py     # Security analysis prompt templates
└── utils/
    ├── config.py      # Environment-driven configuration
    └── logger.py      # Structured logging
```

## ⚙️ Configuration

### Option 1: Mock LLM (Default - No API Key Required)

Perfect for demos and testing:

```bash
# No setup needed! Just run:
python src/app.py
```

The mock provider returns realistic-looking security analysis without any API calls.

### Option 2: OpenAI GPT-4 Turbo

For production analysis with real LLM intelligence:
1. **Get an API key:**
   - Sign up: https://platform.openai.com
   - Create a key: https://platform.openai.com/account/api-keys
   - Keep it safe!
2. **Configure locally:**
   ```bash
   cp .env.example .env
   # Edit .env:
   #   LLM_PROVIDER=openai
   #   OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE
   python src/app.py
   ```
3. **On Hugging Face Spaces:**
   - Settings → Secrets → Add:
     - `OPENAI_API_KEY` = your key
     - `LLM_PROVIDER` = openai

### Option 3: Local LLM (Ollama)

Run a local model without a cloud API:

1. **Install Ollama:** https://ollama.ai
2. **Pull a model and start the server:**
   ```bash
   ollama pull mistral:7b
   ollama serve   # Keep running in another terminal
   ```
3. **Configure:**
   ```bash
   # .env
   LLM_PROVIDER=local
   LLM_MODEL=mistral:7b
   ```

## 📋 Environment Variables

Create a `.env` file (see `.env.example`):

```bash
# Provider selection (required)
LLM_PROVIDER=mock          # mock, openai, or local

# OpenAI (only needed if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...

# Model override (optional, uses defaults if not set)
LLM_MODEL=gpt-4-turbo

# Debugging
DEBUG=false
```

**⚠️ Important:** `.env` is in `.gitignore` - secrets never get committed.

### Provider Defaults

| Provider | Default Model | Notes |
|----------|---------------|-------|
| `mock` | mock-analyzer-v1 | No API key required, deterministic |
| `openai` | gpt-4-turbo | Requires `OPENAI_API_KEY` |
| `local` | mistral:7b | Requires Ollama running on localhost:11434 |

## 🧪 Testing

```bash
# Run all tests
pytest tests/ -v

# With coverage report
pytest tests/ --cov=src

# Test a specific module
pytest tests/test_analyzer.py -v
```

**Test Results:** 11/11 tests passing ✅

## 📝 Example Incidents to Test

### 1. Failed Authentication

```
2025-12-21 14:32:15 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:18 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:21 AUTH_FAILURE - Failed login attempt from 192.168.1.100
User: admin@company.com | Attempts: 15 in 2 minutes
```
### 2. Ransomware Detection

```
CRITICAL ALERT - File encryption detected
Directory: /mnt/backup/production
Files Encrypted: 500+
Process: unknown.exe (SYSTEM privilege)
Time: 2025-12-21 16:20:15 UTC
Status: ACTIVE THREAT
```

### 3. SQL Injection Attempt

```
Web Application Firewall Alert
Rule: SQL Injection Pattern
URL: /api/users?id=1' OR '1'='1
Source IP: 203.0.113.45
Status: BLOCKED
Payload: ' OR '1'='1
```

### 4. Privilege Escalation

```
Security Event: Privilege Escalation Detected
User: john.smith@company.com
Action: sudo su - (unauthorized)
Target System: prod-db-server-02
Time: 2025-12-21 17:02:45
Status: No approval ticket found
```

### 5. Suspicious Outbound Traffic

```
ALERT: Unusual outbound traffic detected
Destination: 10.0.0.1:4444
Source: internal-server-03
Data Volume: 2.3 GB in 45 minutes
Status: ONGOING
```

## 🚀 Deployment

### Hugging Face Spaces (Recommended)

The app is live at:
👉 https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer

**How it works:**
- Reads the YAML frontmatter at the top of this README (`sdk: gradio`) to configure the Space
- Installs `requirements.txt` - all dependencies
- Runs `src/app.py` - the app starts automatically

**To add OpenAI:**
1. Settings → Secrets
2. Add `OPENAI_API_KEY=sk-proj-YOUR_KEY`
3. Also add `LLM_PROVIDER=openai` as a secret (`.env` is gitignored, so it never reaches the Space)
4. Push to git (auto-redeploys)

### Docker (Local or Cloud)

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
ENV LLM_PROVIDER=mock
CMD ["python", "src/app.py"]
```

```bash
docker build -t security-analyzer .
docker run -p 7860:7860 security-analyzer
```

### Traditional Server

```bash
# Gradio bundles its own HTTP server, so the simplest production setup is to
# run the app directly behind a reverse proxy (e.g. nginx):
python src/app.py
```

Note: a plain `gunicorn src.app:demo` will not work, because a Gradio `Blocks` object is not a WSGI application. If you need gunicorn, mount the app on an ASGI framework (e.g. FastAPI via `gr.mount_gradio_app`) and serve it with uvicorn workers.

## 🔧 For AI Developers

See `.github/copilot-instructions.md` for comprehensive agent onboarding:

- **Architecture patterns** - provider abstraction, config management, regex parsing
- **Data flows** - how logs become analyzed incidents
- **Extension points** - add new LLM providers, customize analysis
- **Testing strategy** - unit, integration, and E2E tests with mocks
- **Troubleshooting** - common issues and solutions

## 📦 Requirements

- **Python:** 3.9+
- **Core:** Gradio, Pydantic, httpx, python-dotenv
- **Optional:** OpenAI SDK (for the OpenAI provider only)

See `requirements.txt` for exact versions.

## 🧠 Key Design Decisions

1. **Provider abstraction** - LLM logic is decoupled from analysis
2. **Environment config** - all settings come from env vars, nothing is hardcoded
3. **Regex parsing** - flexible response extraction that works with any LLM output format
4. **Async/await** - non-blocking I/O for a responsive UI
5. **Structured types** - Pydantic models for validation and safety

## ⚖️ License

MIT - use freely, modify, distribute.

## 🤝 Contributing

```bash
# Fork, create a branch, make changes
git checkout -b feature/my-improvement
pytest tests/ -v                        # Ensure tests pass
git push origin feature/my-improvement
# Create a Pull Request
```

## ❓ FAQ

**Q: Can I use this without an API key?**
A: Yes! The default `mock` provider needs no API key and is perfect for demos.

**Q: How accurate is the analysis?**
A: It depends on the LLM. The mock provider is deterministic for testing; OpenAI (GPT-4) provides production-grade analysis.

**Q: Can I run this offline?**
A: Yes! Use the `local` provider with Ollama on your own hardware.
**Q: Is my data safe?**
A:
- **Mock provider:** no external calls; everything stays local
- **Local provider:** stays on your machine
- **OpenAI provider:** subject to OpenAI's privacy policy

**Q: How do I report bugs?**
A: Open an issue on GitHub with reproduction steps.

---

**Questions?** Check `.github/copilot-instructions.md` or create an issue!
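The provider abstraction listed under Key Design Decisions can be sketched roughly as follows. The class names mirror the ones shown in the `src/llm/provider.py` tree above, but the `complete` method and `get_provider` helper are illustrative assumptions, not the repo's actual interface:

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface: the analyzer only ever depends on this type."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the raw LLM response for a prompt."""


class MockLLMProvider(LLMProvider):
    """Deterministic canned output - no API key, no network calls."""

    def complete(self, prompt: str) -> str:
        return "SEVERITY: HIGH\nSUMMARY: Repeated failed logins suggest brute force."


def get_provider(name: str) -> LLMProvider:
    # LLM_PROVIDER-style selection; openai/local registrations omitted here
    registry = {"mock": MockLLMProvider}
    return registry[name]()
```

With this shape, swapping providers is a one-line config change: the analyzer calls `get_provider(name).complete(log_text)` and never touches provider internals.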
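As a footnote to the offline FAQ entry: the `local` provider path boils down to a single HTTP call against Ollama's `/api/generate` endpoint on its default port 11434. A minimal stdlib-only sketch (function names are illustrative, not the repo's actual API):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_payload(model: str, prompt: str) -> dict:
    # /api/generate expects the model name, the prompt, and a streaming flag
    return {"model": model, "prompt": prompt, "stream": False}


def analyze_offline(log_text: str, model: str = "mistral:7b") -> str:
    payload = build_payload(model, f"Analyze this security log:\n{log_text}")
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=120) as resp:
        # Ollama returns the full completion under the "response" key
        return json.loads(resp.read())["response"]
```

Nothing leaves the machine: the request goes to localhost, so log data stays under your control.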