---
title: Security Incident Analyzer
emoji: 🛡️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: src/app.py
pinned: false
license: mit
---
# Security Incident Analyzer

An LLM-powered security log and incident analysis tool with a production-ready web interface.
## 🎯 What It Does

Paste security logs or alerts → get immediate AI analysis explaining:

- **What happened** – clear incident summary
- **Severity level** – CRITICAL, HIGH, MEDIUM, LOW, or INFO
- **Suggested remediation** – actionable next steps
- **Key indicators** – IOCs and anomalies detected

No training required. Director-level insights in seconds.
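To make those output fields concrete, the analysis result might be modeled roughly like this. This is a sketch only; the real definitions live in `src/analyzer/models.py`, and the field names here are illustrative:

```python
from dataclasses import dataclass, field
from enum import Enum

class RiskLevel(Enum):
    # Severity levels reported by the analyzer
    CRITICAL = "CRITICAL"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"
    INFO = "INFO"

@dataclass
class SecurityAnalysis:
    summary: str                                          # What happened
    severity: RiskLevel                                   # CRITICAL ... INFO
    remediation: list[str] = field(default_factory=list)  # Suggested next steps
    indicators: list[str] = field(default_factory=list)   # IOCs / anomalies

# Example result for a brute-force alert
analysis = SecurityAnalysis(
    summary="Repeated failed logins from a single IP",
    severity=RiskLevel.HIGH,
    remediation=["Lock the targeted account", "Block the source IP"],
    indicators=["192.168.1.100"],
)
```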
## 🚀 Try It Now

**Live Demo:** https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer

Or run locally (5 minutes):

```bash
git clone https://github.com/Debashis2007/SecurityIncidentAnalyzer.git
cd SecurityIncidentAnalyzer
pip install -r requirements.txt
cp .env.example .env
python src/app.py  # Visit http://localhost:7860
```
## 🏗️ Architecture

```
src/
├── app.py                # Gradio web UI entry point
├── analyzer/
│   ├── security.py       # IncidentAnalyzer: core analysis logic
│   └── models.py         # RiskLevel enum, SecurityAnalysis dataclass
├── llm/
│   ├── provider.py       # LLM provider abstraction
│   │                     #   - OpenAIProvider (gpt-4-turbo)
│   │                     #   - LocalLLMProvider (Ollama)
│   │                     #   - MockLLMProvider (demo)
│   └── prompts.py        # Security analysis prompt templates
└── utils/
    ├── config.py         # Environment-driven configuration
    └── logger.py         # Structured logging
```
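The provider abstraction in `src/llm/provider.py` presumably follows a pattern like this minimal sketch; the method name `complete` and the mock response text are assumptions, not the project's actual interface:

```python
from abc import ABC, abstractmethod

class LLMProvider(ABC):
    """Common interface every concrete provider implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Send a prompt to the backing LLM and return its raw text reply."""

class MockLLMProvider(LLMProvider):
    """Deterministic canned output; no API calls, ideal for demos and tests."""

    def complete(self, prompt: str) -> str:
        return (
            "SEVERITY: HIGH\n"
            "SUMMARY: Suspicious activity detected in the submitted logs.\n"
            "REMEDIATION: Investigate the source host and rotate credentials."
        )

# Callers depend only on LLMProvider, so swapping providers is a config change
provider = MockLLMProvider()
print(provider.complete("Analyze these logs ..."))
```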
## ⚙️ Configuration

### Option 1: Mock LLM (Default – No API Key Required)

Perfect for demos and testing:

```bash
# No setup needed, just run:
python src/app.py
```

The mock provider returns realistic-looking security analysis without making any API calls.
### Option 2: OpenAI GPT-4 Turbo

For production analysis with real LLM intelligence:

1. **Get an API key:**
   - Sign up: https://platform.openai.com
   - Create a key: https://platform.openai.com/account/api-keys
   - Keep it safe!

2. **Configure locally:**

   ```bash
   cp .env.example .env
   # Edit .env:
   #   LLM_PROVIDER=openai
   #   OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE
   python src/app.py
   ```

3. **On Hugging Face Spaces:** Settings → Secrets → Add:
   - `OPENAI_API_KEY=` your key
   - `LLM_PROVIDER=openai`
### Option 3: Local LLM (Ollama)

Run a local model without a cloud API:

1. Install Ollama: https://ollama.ai
2. Pull a model and start the server:

   ```bash
   ollama pull mistral:7b
   ollama serve  # Keep running in another terminal
   ```

3. Configure:

   ```bash
   # .env
   LLM_PROVIDER=local
   LLM_MODEL=mistral:7b
   ```
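Under the hood, a local provider can talk to Ollama's REST API at `http://localhost:11434/api/generate`. A minimal standard-library sketch (the project itself lists httpx as a dependency, and the function names here are illustrative):

```python
import json
from urllib import request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint

def build_payload(model: str, prompt: str) -> dict:
    # stream=False asks Ollama for the full completion in one JSON object
    return {"model": model, "prompt": prompt, "stream": False}

def ollama_complete(model: str, prompt: str) -> str:
    data = json.dumps(build_payload(model, prompt)).encode()
    req = request.Request(
        OLLAMA_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with request.urlopen(req, timeout=120) as resp:
        return json.loads(resp.read())["response"]

# Requires `ollama serve` running locally:
# print(ollama_complete("mistral:7b", "Summarize this security log ..."))
```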
## 🔑 Environment Variables

Create a `.env` file (see `.env.example`):

```bash
# Provider selection (required)
LLM_PROVIDER=mock        # mock, openai, or local

# OpenAI (only needed if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...

# Model override (optional, uses defaults if not set)
LLM_MODEL=gpt-4-turbo

# Debugging
DEBUG=false
```

⚠️ **Important:** `.env` is in `.gitignore` – secrets never get committed.
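A config loader matching the variables above could look like this. This is a sketch of what `src/utils/config.py` might do; the exact shape of the returned settings is an assumption:

```python
import os

VALID_PROVIDERS = {"mock", "openai", "local"}

def load_config() -> dict:
    """Read settings from environment variables, with safe defaults."""
    provider = os.getenv("LLM_PROVIDER", "mock").lower()
    if provider not in VALID_PROVIDERS:
        raise ValueError(f"Unknown LLM_PROVIDER: {provider!r}")
    return {
        "provider": provider,
        "api_key": os.getenv("OPENAI_API_KEY"),  # only needed for openai
        "model": os.getenv("LLM_MODEL"),         # None -> use provider default
        "debug": os.getenv("DEBUG", "false").lower() == "true",
    }
```

With python-dotenv, calling `load_dotenv()` before `load_config()` would populate these variables from the `.env` file.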
### Provider Defaults

| Provider | Default Model | Notes |
|---|---|---|
| `mock` | `mock-analyzer-v1` | No API required, deterministic |
| `openai` | `gpt-4-turbo` | Requires `OPENAI_API_KEY` |
| `local` | `mistral:7b` | Requires Ollama running on `localhost:11434` |
## 🧪 Testing

```bash
# Run all tests
pytest tests/ -v

# With coverage report
pytest tests/ --cov=src

# Test a specific module
pytest tests/test_analyzer.py -v
```

**Test Results:** 11/11 tests passing ✅
## 📋 Example Incidents to Test

### 1. Failed Authentication

```
2025-12-21 14:32:15 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:18 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:21 AUTH_FAILURE - Failed login attempt from 192.168.1.100
User: admin@company.com | Attempts: 15 in 2 minutes
```

### 2. Ransomware Detection

```
CRITICAL ALERT - File encryption detected
Directory: /mnt/backup/production
Files Encrypted: 500+
Process: unknown.exe (SYSTEM privilege)
Time: 2025-12-21 16:20:15 UTC
Status: ACTIVE THREAT
```

### 3. SQL Injection Attempt

```
Web Application Firewall Alert
Rule: SQL Injection Pattern
URL: /api/users?id=1' OR '1'='1
Source IP: 203.0.113.45
Status: BLOCKED
Payload: ' OR '1'='1
```

### 4. Privilege Escalation

```
Security Event: Privilege Escalation Detected
User: john.smith@company.com
Action: sudo su - (unauthorized)
Target System: prod-db-server-02
Time: 2025-12-21 17:02:45
Status: No approval ticket found
```

### 5. Suspicious Outbound Traffic

```
ALERT: Unusual outbound traffic detected
Destination: 10.0.0.1:4444
Source: internal-server-03
Data Volume: 2.3 GB in 45 minutes
Status: ONGOING
```
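Taking the first example, the kind of regex-based extraction the analyzer relies on can be sketched like this; the pattern and names are illustrative, not the project's actual code:

```python
import re

# Capture the source IP from each AUTH_FAILURE line (illustrative pattern)
AUTH_FAILURE = re.compile(r"AUTH_FAILURE.*?from (\d{1,3}(?:\.\d{1,3}){3})")

log = """\
2025-12-21 14:32:15 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:18 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:21 AUTH_FAILURE - Failed login attempt from 192.168.1.100"""

ips = AUTH_FAILURE.findall(log)
print(len(ips), set(ips))  # → 3 {'192.168.1.100'}
```

Repeated failures from one IP within a short window are exactly the signal a severity heuristic or LLM prompt would flag as a likely brute-force attempt.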
## 🚀 Deployment

### Hugging Face Spaces (Recommended)

Your app is live at: https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer

**How it works:**

1. Detects `spaces.yaml` → Gradio configuration
2. Installs `requirements.txt` → all dependencies
3. Runs `src/app.py` → your app starts automatically

**To add OpenAI:**

1. Settings → Secrets
2. Add `OPENAI_API_KEY=sk-proj-YOUR_KEY`
3. Change `.env`: `LLM_PROVIDER=openai`
4. Push to git (auto-redeploys)
### Docker (Local or Cloud)

```dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV LLM_PROVIDER=mock
EXPOSE 7860
CMD ["python", "src/app.py"]
```

```bash
docker build -t security-analyzer .
docker run -p 7860:7860 security-analyzer
```
### Traditional Server

```bash
# Production with gunicorn
pip install gunicorn
gunicorn -w 4 -b 0.0.0.0:7860 src.app:demo
```
## 🧠 For AI Developers

See `.github/copilot-instructions.md` for comprehensive agent onboarding:

- **Architecture patterns** – provider abstraction, config management, regex parsing
- **Data flows** – how logs become analyzed incidents
- **Extension points** – add new LLM providers, customize analysis
- **Testing strategy** – unit, integration, E2E with mocks
- **Troubleshooting** – common issues and solutions
## 📦 Requirements

- **Python:** 3.9+
- **Core:** Gradio, Pydantic, httpx, python-dotenv
- **Optional:** OpenAI SDK (for the OpenAI provider only)

See `requirements.txt` for exact versions.
## 🧠 Key Design Decisions

- **Provider Abstraction** – LLM logic decoupled from analysis
- **Environment Config** – all settings via env vars, no hardcoding
- **Regex Parsing** – flexible response extraction (works with any LLM output format)
- **Async/Await** – non-blocking I/O for a responsive UI
- **Structured Types** – Pydantic models for validation and safety
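The async point can be illustrated with a tiny sketch: two analyses awaited concurrently rather than back-to-back. The `analyze` stub here is hypothetical; real code would await an HTTP call to the configured provider:

```python
import asyncio

async def analyze(text: str) -> str:
    # Stand-in for a non-blocking LLM call (e.g. an awaited httpx request)
    await asyncio.sleep(0.01)
    return f"SEVERITY: INFO | {len(text)} chars analyzed"

async def main() -> None:
    # Both incidents are in flight at once; the UI thread is never blocked
    results = await asyncio.gather(
        analyze("AUTH_FAILURE from 192.168.1.100"),
        analyze("CRITICAL ALERT - File encryption detected"),
    )
    for line in results:
        print(line)

asyncio.run(main())
```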
## ⚖️ License

MIT – use freely, modify, distribute.
## 🤝 Contributing

```bash
# Fork, create a branch, make changes
git checkout -b feature/my-improvement
pytest tests/ -v  # Ensure tests pass
git push origin feature/my-improvement
# Create a Pull Request
```
## ❓ FAQ

**Q: Can I use this without an API key?**
A: Yes! The default mock provider needs no API key and is perfect for demos.

**Q: How accurate is the analysis?**
A: It depends on the LLM. The mock provider is deterministic for testing; OpenAI (GPT-4) provides production-grade analysis.

**Q: Can I run this offline?**
A: Yes! Use the local provider with Ollama on your own hardware.

**Q: Is my data safe?**
A:
- Mock provider: no external calls, stays local
- Local provider: stays on your machine
- OpenAI provider: subject to OpenAI's privacy policy

**Q: How do I report bugs?**
A: Open an issue on GitHub with reproduction steps.

**Questions?** Check `.github/copilot-instructions.md` or create an issue!