---
title: Security Incident Analyzer
emoji: 🛡️
colorFrom: blue
colorTo: purple
sdk: gradio
sdk_version: 6.2.0
app_file: src/app.py
pinned: false
license: mit
---

Security Incident Analyzer

An LLM-powered security log and incident analysis tool with a production-ready web interface.

🎯 What It Does

Paste security logs or alerts → Get immediate AI analysis explaining:

  • What happened — Clear incident summary
  • Severity level — CRITICAL, HIGH, MEDIUM, LOW, or INFO
  • Suggested remediation — Actionable next steps
  • Key indicators — IOCs and anomalies detected

No training required. Analyst-ready insights in seconds.
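The four result fields above map naturally onto small typed objects. A minimal sketch of what src/analyzer/models.py might contain (field names here are assumptions, not the project's actual API):

```python
from dataclasses import dataclass, field
from enum import Enum


class RiskLevel(Enum):
    """Severity levels the analyzer can assign."""
    CRITICAL = "CRITICAL"
    HIGH = "HIGH"
    MEDIUM = "MEDIUM"
    LOW = "LOW"
    INFO = "INFO"


@dataclass
class SecurityAnalysis:
    """Structured result of one incident analysis."""
    summary: str                  # what happened
    severity: RiskLevel           # CRITICAL .. INFO
    remediation: list[str] = field(default_factory=list)  # suggested next steps
    indicators: list[str] = field(default_factory=list)   # IOCs and anomalies


analysis = SecurityAnalysis(
    summary="Brute-force login attempts against an admin account",
    severity=RiskLevel.HIGH,
    remediation=["Lock the account", "Block the source IP"],
    indicators=["192.168.1.100", "15 failures in 2 minutes"],
)
print(analysis.severity.value)  # HIGH
```

Typed results like this let the UI render severity badges and bullet lists without re-parsing free-form text.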

πŸš€ Try It Now

Live Demo: https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer

Or run locally (5 minutes):

git clone https://github.com/Debashis2007/SecurityIncidentAnalyzer.git
cd SecurityIncidentAnalyzer
pip install -r requirements.txt
cp .env.example .env
python src/app.py  # Visit http://localhost:7860

πŸ—οΈ Architecture

src/
β”œβ”€β”€ app.py                    # Gradio web UI entry point
β”œβ”€β”€ analyzer/
β”‚   β”œβ”€β”€ security.py          # IncidentAnalyzer: core analysis logic
β”‚   └── models.py            # RiskLevel enum, SecurityAnalysis dataclass
β”œβ”€β”€ llm/
β”‚   β”œβ”€β”€ provider.py          # LLM provider abstraction
β”‚   β”‚                        # - OpenAIProvider (gpt-4-turbo)
β”‚   β”‚                        # - LocalLLMProvider (Ollama)
β”‚   β”‚                        # - MockLLMProvider (demo)
β”‚   └── prompts.py           # Security analysis prompt templates
└── utils/
    β”œβ”€β”€ config.py            # Environment-driven configuration
    └── logger.py            # Structured logging

⚙️ Configuration

Option 1: Mock LLM (Default - No API Key Required)

Perfect for demos and testing:

# No setup needed! Just run:
python src/app.py

The mock provider returns realistic-looking security analysis without any API calls.
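The provider layer behind this can be pictured as one small interface plus a canned mock implementation. A hedged sketch (class names follow src/llm/provider.py in the tree above; method names and the canned text are assumptions):

```python
from abc import ABC, abstractmethod


class LLMProvider(ABC):
    """Interface every backend (OpenAI, Ollama, mock) implements."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        """Return the raw model response for a prompt."""


class MockLLMProvider(LLMProvider):
    """Canned, deterministic responses: no network, ideal for demos and tests."""

    def complete(self, prompt: str) -> str:
        return (
            "Summary: Possible brute-force authentication attack\n"
            "Severity: HIGH\n"
            "Remediation: Lock affected accounts; block the source IP"
        )


provider = MockLLMProvider()
print(provider.complete("AUTH_FAILURE x15 from 192.168.1.100"))
```

Because every backend satisfies the same interface, the analyzer never needs to know which provider it is talking to.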

Option 2: OpenAI GPT-4 Turbo

For production analysis with real LLM intelligence:

  1. Get an OpenAI API key.

  2. Configure locally:

    cp .env.example .env
    # Edit .env:
    # LLM_PROVIDER=openai
    # OPENAI_API_KEY=sk-proj-YOUR_KEY_HERE
    python src/app.py
    
  3. On Hugging Face Spaces:

    • Settings → Secrets → Add:
      • OPENAI_API_KEY = your key
      • LLM_PROVIDER = openai

Option 3: Local LLM (Ollama)

Run a local model without cloud API:

  1. Install Ollama: https://ollama.ai

  2. Pull a model:

    ollama pull mistral:7b
    ollama serve  # Keep running in another terminal
    
  3. Configure:

    # .env
    LLM_PROVIDER=local
    LLM_MODEL=mistral:7b
    

πŸ“‹ Environment Variables

Create a .env file (see .env.example):

# Provider selection (required)
LLM_PROVIDER=mock          # mock, openai, or local

# OpenAI (only needed if LLM_PROVIDER=openai)
OPENAI_API_KEY=sk-proj-...

# Model override (optional, uses defaults if not set)
LLM_MODEL=gpt-4-turbo

# Debugging
DEBUG=false

⚠️ Important: .env is in .gitignore — secrets never get committed.
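A config loader reading these variables might look like this (a hedged sketch; the real src/utils/config.py may use different names):

```python
import os

# Default model per provider, mirroring the table below
DEFAULT_MODELS = {
    "mock": "mock-analyzer-v1",
    "openai": "gpt-4-turbo",
    "local": "mistral:7b",
}


def load_config() -> dict:
    """Read every setting from the environment, with safe defaults."""
    provider = os.getenv("LLM_PROVIDER", "mock").lower()
    return {
        "provider": provider,
        # LLM_MODEL overrides the per-provider default if set
        "model": os.getenv("LLM_MODEL") or DEFAULT_MODELS.get(provider, "mock-analyzer-v1"),
        "api_key": os.getenv("OPENAI_API_KEY", ""),
        "debug": os.getenv("DEBUG", "false").lower() == "true",
    }


cfg = load_config()
print(cfg["provider"], cfg["model"])  # "mock mock-analyzer-v1" when nothing is set
```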

Provider Defaults

Provider | Default Model    | Notes
-------- | ---------------- | ------------------------------------------
mock     | mock-analyzer-v1 | No API required, deterministic
openai   | gpt-4-turbo      | Requires OPENAI_API_KEY
local    | mistral:7b       | Requires Ollama running on localhost:11434
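Selecting a backend from LLM_PROVIDER can be pictured as a small fail-fast factory (illustrative only; class names mirror the architecture tree, internals are assumptions):

```python
import os


class MockProvider:
    model = "mock-analyzer-v1"


class OpenAIProvider:
    model = "gpt-4-turbo"

    def __init__(self, api_key: str):
        self.api_key = api_key


class LocalProvider:
    model = "mistral:7b"


def make_provider():
    """Pick a backend from LLM_PROVIDER and fail loudly on misconfiguration."""
    choice = os.getenv("LLM_PROVIDER", "mock").lower()
    if choice == "mock":
        return MockProvider()
    if choice == "openai":
        key = os.environ.get("OPENAI_API_KEY")
        if not key:
            # Better to crash at startup than to fail on the first analysis
            raise RuntimeError("LLM_PROVIDER=openai but OPENAI_API_KEY is not set")
        return OpenAIProvider(key)
    if choice == "local":
        return LocalProvider()
    raise ValueError(f"Unknown LLM_PROVIDER: {choice!r}")
```

Failing at startup on a missing key gives a clear error instead of a confusing mid-request failure.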

πŸ§ͺ Testing

# Run all tests
pytest tests/ -v

# With coverage report
pytest tests/ --cov=src

# Test specific module
pytest tests/test_analyzer.py -v

Test Results: 11/11 tests passing ✅
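A representative unit test might look like this (hypothetical; the real tests/test_analyzer.py may differ, and analyze() here is a toy stand-in for IncidentAnalyzer wired to the mock provider):

```python
def analyze(log: str) -> str:
    """Toy severity rating; the real logic lives in src/analyzer/security.py."""
    return "HIGH" if "AUTH_FAILURE" in log else "INFO"


def test_failed_logins_rated_high():
    assert analyze("AUTH_FAILURE - Failed login from 192.168.1.100") == "HIGH"


def test_benign_event_rated_info():
    assert analyze("User logged out normally") == "INFO"


# pytest would discover the functions above; run them directly for the demo
test_failed_logins_rated_high()
test_benign_event_rated_info()
print("2 passed")
```

Because the mock provider is deterministic, assertions like these stay stable across runs with no API key required.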

πŸ“ Example Incidents to Test

1. Failed Authentication

2025-12-21 14:32:15 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:18 AUTH_FAILURE - Failed login attempt from 192.168.1.100
2025-12-21 14:32:21 AUTH_FAILURE - Failed login attempt from 192.168.1.100
User: admin@company.com | Attempts: 15 in 2 minutes

2. Ransomware Detection

CRITICAL ALERT - File encryption detected
Directory: /mnt/backup/production
Files Encrypted: 500+
Process: unknown.exe (SYSTEM privilege)
Time: 2025-12-21 16:20:15 UTC
Status: ACTIVE THREAT

3. SQL Injection Attempt

Web Application Firewall Alert
Rule: SQL Injection Pattern
URL: /api/users?id=1' OR '1'='1
Source IP: 203.0.113.45
Status: BLOCKED
Payload: ' OR '1'='1

4. Privilege Escalation

Security Event: Privilege Escalation Detected
User: john.smith@company.com
Action: sudo su - (unauthorized)
Target System: prod-db-server-02
Time: 2025-12-21 17:02:45
Status: No approval ticket found

5. Suspicious Outbound Traffic

ALERT: Unusual outbound traffic detected
Destination: 10.0.0.1:4444
Source: internal-server-03
Data Volume: 2.3 GB in 45 minutes
Status: ONGOING
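All five samples embed machine-extractable indicators. A quick stdlib sketch, in the spirit of the analyzer's regex parsing, pulling IPv4 IOCs out of a pasted log (illustrative only, not the project's actual extractor):

```python
import re

# Simple IPv4 matcher; good enough for log triage, not strict validation
IPV4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")


def extract_ips(log: str) -> list:
    """Return unique IPv4 addresses in order of first appearance."""
    seen = []
    for ip in IPV4.findall(log):
        if ip not in seen:
            seen.append(ip)
    return seen


sample = (
    "2025-12-21 14:32:15 AUTH_FAILURE - Failed login attempt from 192.168.1.100\n"
    "Source IP: 203.0.113.45"
)
print(extract_ips(sample))  # ['192.168.1.100', '203.0.113.45']
```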

πŸš€ Deployment

Hugging Face Spaces (Recommended)

The app is live at: 👉 https://huggingface.co/spaces/debashis2007/SecurityIncidentAnalyzer

How it works:

  • Reads the metadata header at the top of this README → Gradio configuration
  • Installs requirements.txt → All dependencies
  • Runs src/app.py → Your app starts automatically

To add OpenAI:

  1. Settings → Secrets
  2. Add OPENAI_API_KEY=sk-proj-YOUR_KEY
  3. Add LLM_PROVIDER=openai as a secret or variable (.env is gitignored, so it never reaches the Space)
  4. Restart the Space, or push to git (auto-redeploys)

Docker (Local or Cloud)

# Dockerfile
FROM python:3.9-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
ENV LLM_PROVIDER=mock
# Bind to all interfaces so the UI is reachable from outside the container
ENV GRADIO_SERVER_NAME=0.0.0.0
EXPOSE 7860
CMD ["python", "src/app.py"]

# Build and run
docker build -t security-analyzer .
docker run -p 7860:7860 security-analyzer

Traditional Server

# Gradio serves over ASGI, so plain (WSGI) gunicorn cannot run src.app:demo directly.
# Simplest production setup: the built-in server behind a reverse proxy
python src/app.py
# For gunicorn, wrap the Blocks in an ASGI app (see gr.mount_gradio_app) and
# use uvicorn workers, e.g.:
#   gunicorn -w 4 -k uvicorn.workers.UvicornWorker your_asgi_module:app

πŸ”§ For AI Developers

See .github/copilot-instructions.md for comprehensive agent onboarding:

  • Architecture patterns — Provider abstraction, config management, regex parsing
  • Data flows — How logs become analyzed incidents
  • Extension points — Add new LLM providers, customize analysis
  • Testing strategy — Unit, integration, E2E with mocks
  • Troubleshooting — Common issues and solutions

πŸ“¦ Requirements

  • Python: 3.9+
  • Core: Gradio, Pydantic, httpx, python-dotenv
  • Optional: OpenAI SDK (for OpenAI provider only)

See requirements.txt for exact versions.

🧠 Key Design Decisions

  1. Provider Abstraction — LLM logic decoupled from analysis
  2. Environment Config — All settings via env vars, no hardcoding
  3. Regex Parsing — Flexible response extraction (works with any LLM format)
  4. Async/Await — Non-blocking I/O for responsive UI
  5. Structured Types — Pydantic models for validation and safety
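Decision 3 in practice: providers return free-form text, so severity can be recovered with a tolerant regex rather than strict JSON parsing. A hedged sketch of that idea (not the project's actual parser):

```python
import re

# Match any of the five severity keywords, case-insensitively, on word boundaries
SEVERITY = re.compile(r"\b(CRITICAL|HIGH|MEDIUM|LOW|INFO)\b", re.IGNORECASE)


def parse_severity(llm_response: str) -> str:
    """Pull the first recognizable severity keyword out of free-form LLM text."""
    match = SEVERITY.search(llm_response)
    # Fall back to the least alarming level when no keyword is found
    return match.group(1).upper() if match else "INFO"


print(parse_severity("Severity assessment: this incident looks High risk."))  # HIGH
print(parse_severity("Nothing notable in these logs."))                       # INFO
```

This tolerance is what lets the same parser work across OpenAI, Ollama, and mock responses that format their answers differently.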

⚖️ License

MIT — Use freely, modify, distribute.

🀝 Contributing

# Fork, create a branch, make changes
git checkout -b feature/my-improvement
pytest tests/ -v  # Ensure tests pass
git push origin feature/my-improvement
# Create a Pull Request

❓ FAQ

Q: Can I use this without an API key? A: Yes! The default mock provider needs no API key and is perfect for demos.

Q: How accurate is the analysis? A: Depends on the LLM. Mock is deterministic for testing. OpenAI (GPT-4) provides production-grade analysis.

Q: Can I run this offline? A: Yes! Use the local provider with Ollama on your own hardware.

Q: Is my data safe? A:

  • Mock provider: No external calls, stays local
  • Local provider: Stays on your machine
  • OpenAI provider: Subject to OpenAI's privacy policy

Q: How do I report bugs? A: Open an issue on GitHub with reproduction steps.


Questions? Check .github/copilot-instructions.md or create an issue!
