---
title: Security Incident Analyzer
emoji: 🛡️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: src/app.py
pinned: false
license: mit
---
# Security Incident Analyzer
An LLM-powered security log and incident analysis tool with a production-ready web interface.
## 🎯 What It Does
Paste security logs or alerts → get immediate AI analysis explaining:
- **What happened** – a clear incident summary
- **Severity level** – CRITICAL, HIGH, MEDIUM, LOW, or INFO
- **Suggested remediation** – actionable next steps
- **Key indicators** – IOCs and anomalies detected
No training required – clear, actionable insights in seconds.
## 🚀 Try It Now
**Live Demo:** https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer
Or run locally (5 minutes):
```bash
git clone https://github.com/Debashis2007/SecurityIncidentAnalyzer.git
cd SecurityIncidentAnalyzer
pip install -r requirements.txt
cp .env.example .env
python src/app.py # Visit http://localhost:7860
```
## 🏗️ Architecture
```
src/
├── app.py                 # Gradio web UI entry point
├── analyzer/
│   ├── security.py        # IncidentAnalyzer: core analysis logic
│   └── models.py          # RiskLevel enum, SecurityAnalysis dataclass
├── llm/
│   ├── provider.py        # LLM provider abstraction
│   │                      #   - OpenAIProvider (gpt-4-turbo)
│   │                      #   - LocalLLMProvider (Ollama)
│   │                      #   - MockLLMProvider (demo)
│   └── prompts.py         # Security analysis prompt templates
└── utils/
    ├── config.py          # Environment-driven configuration
    └── logger.py          # Structured logging
```
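The provider abstraction in `src/llm/provider.py` might look roughly like this sketch (class and method names here are assumptions based on the tree above; the actual interface may differ):

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Common interface every backend implements (illustrative sketch)."""

    @abstractmethod
    async def complete(self, prompt: str) -> str:
        """Return the raw LLM response for a prompt."""


class MockLLMProvider(LLMProvider):
    """Deterministic canned responses -- no API calls (demo/testing)."""

    async def complete(self, prompt: str) -> str:
        # A fixed, realistic-looking analysis; real providers call an API here
        return (
            "SEVERITY: HIGH\n"
            "SUMMARY: Repeated failed logins suggest a brute-force attempt.\n"
            "REMEDIATION: Lock the account and review source IP activity."
        )
```

Because every provider exposes the same `complete` coroutine, the analyzer never needs to know which backend is active.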
## ⚙️ Configuration
### Option 1: Mock LLM (Default - No API Key Required)
Perfect for demos and testing:
```bash
# No setup needed! Just run:
python src/app.py
```
The mock provider returns realistic-looking security analysis without any API calls.
### Option 2: OpenAI GPT-4 Turbo
For production analysis with real LLM intelligence:
1. **Get an API key:**
- Sign up: https://platform.openai.com
- Create key: https://platform.openai.com/account/api-keys
- Keep it safe!
2. **Configure locally:**
```bash
cp .env.example .env
# Edit .env:
# LLM_PROVIDER=openai
# OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE
python src/app.py
```
3. **On Hugging Face Spaces:**
- Settings → Secrets → Add:
- `OPENAI_API_KEY` = your key
- `LLM_PROVIDER` = openai
### Option 3: Local LLM (Ollama)
Run a local model without cloud API:
1. **Install Ollama:** https://ollama.ai
2. **Pull a model:**
```bash
ollama pull mistral:7b
ollama serve # Keep running in another terminal
```
3. **Configure:**
```bash
# .env
LLM_PROVIDER=local
LLM_MODEL=mistral:7b
```
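Under the hood, the `local` provider talks to Ollama's HTTP API on port 11434. A rough standard-library illustration (the project's actual provider uses httpx and its request-building code may differ):

```python
import json
from urllib import request

# Ollama's local HTTP API (default port 11434)
OLLAMA_URL = "http://localhost:11434/api/generate"


def build_request(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for one complete JSON reply instead of chunks
    return {"model": model, "prompt": prompt, "stream": False}


def analyze_with_ollama(log_text: str, model: str = "mistral:7b") -> str:
    payload = json.dumps(
        build_request(model, f"Analyze this security log:\n{log_text}")
    ).encode("utf-8")
    req = request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req, timeout=120) as resp:  # requires `ollama serve`
        return json.loads(resp.read())["response"]  # the generated text
```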
## 📋 Environment Variables
Create a `.env` file (see `.env.example`):
```bash
# Provider selection (required)
LLM_PROVIDER=mock # mock, openai, or local
# OpenAI (only needed if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...
# Model override (optional, uses defaults if not set)
LLM_MODEL=gpt-4-turbo
# Debugging
DEBUG=false
```
**⚠️ Important:** `.env` is in `.gitignore`, so secrets never get committed.
### Provider Defaults
| Provider | Default Model | Notes |
|----------|---------------|-------|
| `mock` | mock-analyzer-v1 | No API required, deterministic |
| `openai` | gpt-4-turbo | Requires OPENAI_API_KEY |
| `local` | mistral:7b | Requires Ollama running on localhost:11434 |
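In code, the provider/model selection might look like this minimal sketch, mirroring the defaults in the table above (the real logic lives in `src/utils/config.py` and may differ):

```python
import os

# Default model per provider (illustrative; mirrors the table above)
DEFAULT_MODELS = {
    "mock": "mock-analyzer-v1",
    "openai": "gpt-4-turbo",
    "local": "mistral:7b",
}


def load_config() -> dict:
    provider = os.getenv("LLM_PROVIDER", "mock").lower()
    if provider not in DEFAULT_MODELS:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    return {
        "provider": provider,
        # LLM_MODEL overrides the per-provider default when set
        "model": os.getenv("LLM_MODEL", DEFAULT_MODELS[provider]),
        "api_key": os.getenv("OPENAI_API_KEY"),  # only needed for openai
        "debug": os.getenv("DEBUG", "false").lower() == "true",
    }
```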
## 🧪 Testing
```bash
# Run all tests
pytest tests/ -v
# With coverage report
pytest tests/ --cov=src
# Test specific module
pytest tests/test_analyzer.py -v
```
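Because the mock provider needs no network, unit tests stay fast and deterministic. A minimal illustration of the pattern (names here are hypothetical; the real suite lives in `tests/`):

```python
import asyncio


class FakeProvider:
    """Stand-in provider returning a canned response (hypothetical shape)."""

    async def complete(self, prompt: str) -> str:
        return "SEVERITY: CRITICAL\nSUMMARY: Ransomware activity detected."


def test_provider_response_carries_severity():
    # No API key, no network -- the test exercises only the contract
    out = asyncio.run(FakeProvider().complete("encrypted files detected"))
    assert out.startswith("SEVERITY:")
```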
**Test Results:** 11/11 tests passing ✅
## 📝 Example Incidents to Test
### 1. Failed Authentication
```
2025-12-21 14:32:15 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:18 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:21 AUTH_FAILURE - Failed login attempt from 192.168.1.100
User: admin@company.com | Attempts: 15 in 2 minutes
```
### 2. Ransomware Detection
```
CRITICAL ALERT - File encryption detected
Directory: /mnt/backup/production
Files Encrypted: 500+
Process: unknown.exe (SYSTEM privilege)
Time: 2025-12-21 16:20:15 UTC
Status: ACTIVE THREAT
```
### 3. SQL Injection Attempt
```
Web Application Firewall Alert
Rule: SQL Injection Pattern
URL: /api/users?id=1' OR '1'='1
Source IP: 203.0.113.45
Status: BLOCKED
Payload: ' OR '1'='1
```
### 4. Privilege Escalation
```
Security Event: Privilege Escalation Detected
User: john.smith@company.com
Action: sudo su - (unauthorized)
Target System: prod-db-server-02
Time: 2025-12-21 17:02:45
Status: No approval ticket found
```
### 5. Suspicious Outbound Traffic
```
ALERT: Unusual outbound traffic detected
Destination: 10.0.0.1:4444
Source: internal-server-03
Data Volume: 2.3 GB in 45 minutes
Status: ONGOING
```
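For reference, the structured result the analyzer produces (per `RiskLevel` and `SecurityAnalysis` in `src/analyzer/models.py`) might look roughly like this; the field names below are illustrative assumptions, not the project's exact schema:

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    CRITICAL = "CRITICAL"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"
    INFO = "INFO"


@dataclass
class SecurityAnalysis:
    summary: str
    severity: RiskLevel
    remediation: str
    indicators: list = field(default_factory=list)  # IOCs found in the log


# What incident #1 (failed authentication) might come back as:
result = SecurityAnalysis(
    summary="15 failed logins for admin@company.com in 2 minutes: likely brute force.",
    severity=RiskLevel.HIGH,
    remediation="Lock the account, block 192.168.1.100, and enable MFA.",
    indicators=["192.168.1.100", "admin@company.com"],
)
```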
## 🚀 Deployment
### Hugging Face Spaces (Recommended)
Your app is live at:
👉 https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer
**How it works:**
- Reads the YAML frontmatter at the top of `README.md` → Gradio SDK configuration
- Installs `requirements.txt` → all dependencies
- Runs `src/app.py` → the app starts automatically
**To add OpenAI:**
1. Settings → Secrets
2. Add `OPENAI_API_KEY=sk-proj-YOUR_KEY`
3. Add `LLM_PROVIDER=openai` as well (secrets override the committed defaults, since `.env` is never pushed)
4. The Space restarts automatically when secrets change
### Docker (Local or Cloud)
```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV LLM_PROVIDER=mock
# Gradio binds to 127.0.0.1 by default; listen on all interfaces in the container
ENV GRADIO_SERVER_NAME=0.0.0.0
EXPOSE 7860
CMD ["python", "src/app.py"]
```
```bash
docker build -t security-analyzer .
docker run -p 7860:7860 security-analyzer
```
### Traditional Server
Gradio serves its UI over ASGI (uvicorn under the hood), so plain WSGI gunicorn workers cannot run it directly. The simplest production setup is the app's own server behind a reverse proxy:
```bash
# Bind to all interfaces behind nginx (or another reverse proxy)
GRADIO_SERVER_NAME=0.0.0.0 GRADIO_SERVER_PORT=7860 python src/app.py
```
## 🧠 For AI Developers
See `.github/copilot-instructions.md` for comprehensive agent onboarding:
- **Architecture patterns** – provider abstraction, config management, regex parsing
- **Data flows** – how logs become analyzed incidents
- **Extension points** – add new LLM providers, customize analysis
- **Testing strategy** – unit, integration, E2E with mocks
- **Troubleshooting** – common issues and solutions
## 📦 Requirements
- **Python:** 3.9+
- **Core:** Gradio, Pydantic, httpx, python-dotenv
- **Optional:** OpenAI SDK (for OpenAI provider only)
See `requirements.txt` for exact versions.
## 🔧 Key Design Decisions
1. **Provider Abstraction** – LLM logic decoupled from analysis
2. **Environment Config** – all settings via env vars, no hardcoding
3. **Regex Parsing** – flexible response extraction that works with any LLM output format
4. **Async/Await** – non-blocking I/O for a responsive UI
5. **Structured Types** – Pydantic models for validation and safety
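The regex-parsing decision deserves a concrete illustration: rather than requiring the LLM to emit strict JSON, labeled lines are fished out of free-form text. A hedged sketch (the label and levels are assumptions about the prompt format, not the project's exact pattern):

```python
import re

# Tolerant severity extraction: matches "Severity: High", "SEVERITY - CRITICAL",
# and similar variants regardless of surrounding formatting.
SEVERITY_RE = re.compile(
    r"severity\s*[:\-]\s*(critical|high|medium|low|info)", re.IGNORECASE
)


def extract_severity(response: str, default: str = "INFO") -> str:
    """Pull the severity level out of an LLM response, with a safe fallback."""
    m = SEVERITY_RE.search(response)
    return m.group(1).upper() if m else default
```

Falling back to a default level means a malformed LLM reply degrades gracefully instead of crashing the analysis.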
## ⚖️ License
MIT – use, modify, and distribute freely.
## 🤝 Contributing
```bash
# Fork, create a branch, make changes
git checkout -b feature/my-improvement
pytest tests/ -v # Ensure tests pass
git push origin feature/my-improvement
# Create a Pull Request
```
## ❓ FAQ
**Q: Can I use this without an API key?**
A: Yes! The default `mock` provider needs no API key and is perfect for demos.
**Q: How accurate is the analysis?**
A: Depends on the LLM. Mock is deterministic for testing. OpenAI (GPT-4) provides production-grade analysis.
**Q: Can I run this offline?**
A: Yes! Use the `local` provider with Ollama on your own hardware.
**Q: Is my data safe?**
A: Depends on the provider:
- **Mock provider:** no external calls, stays local
- **Local provider:** stays on your machine
- **OpenAI provider:** subject to OpenAI's privacy policy
**Q: How do I report bugs?**
A: Open an issue on GitHub with reproduction steps.
---
**Questions?** Check `.github/copilot-instructions.md` or create an issue!