---
title: PatchHawk
emoji: 🦅
colorFrom: blue
colorTo: purple
sdk: docker
python_version: '3.11'
app_file: inference.py
pinned: false
---
# 🦅 PatchHawk: Autonomous Supply-Chain Guard

Submitted to the OpenEnv Hackathon 2026, hosted by Meta.
PatchHawk is an autonomous DevSecOps agent powered by Group Relative Policy Optimization (GRPO). It moves beyond static vulnerability detection by validating findings inside isolated Docker sandboxes and generating verified, syntactically correct patches. The system closes the loop between detection, validation, and remediation through a cyber-physical reinforcement learning feedback cycle grounded in real execution environments.
## 🔁 The Vision: Cyber-Physical RL Loop
Traditional security scanners suffer from high false-positive rates and often report vulnerabilities that cannot be exploited or fixed in practice. PatchHawk addresses this by implementing a reinforcement learning loop where the model's reward is tied directly to the success of its patches inside a real execution environment.
```mermaid
graph TD
    A[Source Code / PR] --> B{PatchHawk Agent}
    B -->|Analyze| C[Static Analysis]
    B -->|Test| D[Docker Sandbox]
    D -->|Detonate| E[Behavioral Telemetry]
    E --> F[Reward Signal]
    B -->|Patch| G[Verification Pipeline]
    G -->|Syntax Check| H{Success?}
    G -->|Unit Tests| I{Pass?}
    G -->|Re-Attack| J{Defeated?}
    H & I & J -->|All Pass| K[Positive Reward +3.0]
    H & I & J -->|Failure| L[Negative Penalty -1.5]
    K --> M[Model Update / Optimization]
    L --> M
```
The agent learns to produce patches that not only compile but also withstand re-execution of the original exploit vector. Every decision is accompanied by a structured `<thought>` block, providing a complete and machine-readable audit trail.
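For illustration, a trace of this shape can be consumed programmatically. The tag names below are hypothetical; PatchHawk's actual trace schema may differ:

```python
import xml.etree.ElementTree as ET

# Hypothetical reasoning trace; field names are illustrative only.
trace = """
<thought>
  <observation>Dependency 'requsts' shadows 'requests' (edit distance 1).</observation>
  <hypothesis>Likely typosquatting package with a malicious install hook.</hypothesis>
  <action>DETONATE</action>
  <confidence>0.92</confidence>
</thought>
"""

# Parse the block into a flat dict for auditing or log indexing.
root = ET.fromstring(trace.strip())
audit = {child.tag: child.text for child in root}
print(audit["action"])  # DETONATE
```

Because every step is valid XML, downstream tooling (dashboards, log pipelines) can filter decisions by action or confidence without scraping free text.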
## ✨ Key Features
- 🛡️ Autonomous Detection: Sophisticated supply-chain analysis identifying typosquatting, backdoors, data exfiltration, and malicious logic in dependencies.
- 🐳 Hardened Sandboxing: High-fidelity Docker isolation with network-disabled execution, strict resource caps, and ephemeral file systems to safely detonate suspicious code.
- 🧠 GRPO-Driven Learning: Group Relative Policy Optimization (inspired by DeepSeek-R1) enables trial-and-error mastery and structured reasoning without a separate critic model.
- 🧩 XML Reasoning Traces: All agent decisions are accompanied by a machine-readable `<thought>...</thought>` block, providing full auditability of the decision-making process.
- 📊 SOC Dashboard: Real-time Streamlit interface for monitoring agent behavior, sandbox telemetry, and reward breakdowns.
- ✅ OpenEnv Compliance: Fully integrated with the PyTorch OpenEnv framework, ensuring reproducible and shareable reinforcement learning environments.
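The group-relative idea behind GRPO can be sketched in a few lines (an illustrative toy, not PatchHawk's training code): rewards within a group of rollouts for the same scenario are normalized against the group's own statistics, which is what removes the need for a learned critic.

```python
from statistics import mean, stdev

def group_relative_advantages(rewards, eps=1e-6):
    """GRPO-style advantages: normalize each rollout's reward against
    its own group's mean and standard deviation (no value model)."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]

# Four rollouts for the same scenario (group-size 4): two verified
# patches (+3.0) and two failed ones (-1.5).
group = [3.0, -1.5, 3.0, -1.5]
adv = group_relative_advantages(group)
print([round(a, 2) for a in adv])  # successes positive, failures negative
```

Each rollout's gradient weight then comes from its standing within the group, so even a scenario where every patch fails still yields a usable (zero-mean) learning signal.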
## 🛠️ Project Structure

```text
PatchHawk/
├── patchhawk/          # 🧠 Core Environment Logic
│   ├── agent/          # Environment API & Sandbox logic
│   ├── data/           # Scenario datasets
│   ├── env_models.py   # Contract definitions
│   └── tasks.py        # Graders for each task
├── server/             # 🌐 FastAPI & Dashboard server
├── inference.py        # 🚀 Main Agent Rollout Script
├── openenv.yaml        # Metadata for OpenEnv
├── Dockerfile          # Container definition for HF Spaces
├── requirements.txt    # Python dependencies
└── README.md
```
## 🚀 Getting Started

### Prerequisites

- Python 3.12 or higher
- Docker Engine (with buildx support)
- NVIDIA GPU (8 GB VRAM or more recommended for training and inference)
- Hugging Face account and access token
### Installation

```bash
# Clone the repository
git clone https://github.com/ramprasathk07/PatchHawk.git
cd PatchHawk

# Create and activate a virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install core dependencies
pip install -e .
```
### Environment Setup

```bash
# Copy the environment template and populate your keys
cp .env.example .env
# Edit .env to include HF_TOKEN, OPENAI_API_KEY, WANDB_API_KEY, etc.

# Build the validation sandbox Docker image
docker build -t patchhawk-sandbox:latest -f docker/Dockerfile.sandbox .
```
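Once built, the sandbox is launched with hardening flags of the kind described under Key Features. A sketch of assembling such an invocation (the exact flags and entrypoint PatchHawk uses may differ):

```python
import subprocess  # needed only when actually executing the command

def sandbox_cmd(image, payload, mem="512m", cpus="1"):
    """Build a hardened `docker run` argv: no network, memory/CPU caps,
    read-only root filesystem, ephemeral /tmp, no privilege escalation."""
    return [
        "docker", "run", "--rm",
        "--network=none",
        f"--memory={mem}", f"--cpus={cpus}",
        "--read-only", "--tmpfs", "/tmp",
        "--security-opt", "no-new-privileges",
        image, "python", "-c", payload,
    ]

# Hypothetical detonation of a suspicious package import:
cmd = sandbox_cmd("patchhawk-sandbox:latest", "import suspicious_pkg")
print(" ".join(cmd))
# Run with e.g.: subprocess.run(cmd, capture_output=True, timeout=60)
```

The combination of `--network=none` and `--read-only` means exfiltration and persistence attempts surface as telemetry (failed syscalls, denied writes) rather than real damage.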
Running the Agent
# Terminal 1 β Launch environment (includes API & Dashboard)
./start.sh
# Terminal 2 β Run the Agent Rollout
python inference.py
## 🧠 Training

PatchHawk uses GRPO with a 4-bit quantised `Qwen2.5-Coder-7B-Instruct` base model and LoRA adapters. The training script is located at `patchhawk/training/train_grpo.py`.
### Training Progress

GPU training (RTX 3060 12 GB defaults):

```bash
python -m patchhawk.training.train_grpo \
    --epochs 3 \
    --batch-size 1 \
    --grad-accum 8 \
    --group-size 4 \
    --max-seq-len 1024 \
    --output-dir grpo_lora
```
## 🏆 Reward Rubric

The agent is guided by a granular reward structure that encourages safe, effective, and verifiable actions.

| Action ID | Action Name | Base Reward | Success Criteria |
|---|---|---|---|
| 0 | `ANALYZE` | 0.0 | Observation step; used solely for data gathering. |
| 1 | `DETONATE` | +0.1 | Successfully extract telemetry from the Docker sandbox. |
| 2 | `BLOCK_PR` | +2.0 / -1.0 | Positive reward when correctly blocking a malicious PR; negative penalty for false positives. |
| 3 | `SUBMIT_PATCH` | +3.0 / -1.5 | The primary goal. Reward requires passing syntax check, unit tests, and a re-attack validation. |
| 4 | `ESCALATE` | 0.0 | Hands off to a human expert when uncertainty exceeds a configurable threshold. |
### Dynamic Scaling Factors

- Risk Accuracy Bonus: Up to +2.0 additional reward for accurately predicting the risk score of a vulnerability.
- Safety Multiplier: Repeated syntax check failures apply a decay factor to all future rewards.
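Putting the rubric and the scaling factors together, the grading logic can be sketched as follows. This is illustrative only: the 0.9 decay factor and the linear risk bonus are assumed values, and the real graders live in `patchhawk/tasks.py`.

```python
# Base rewards for the observation-style actions from the rubric.
BASE = {"ANALYZE": 0.0, "DETONATE": 0.1, "ESCALATE": 0.0}

def grade(action, *, correct=True, risk_error=None, syntax_failures=0):
    """Toy grader mirroring the reward rubric (values per the table)."""
    if action == "BLOCK_PR":
        reward = 2.0 if correct else -1.0       # false positive -> penalty
    elif action == "SUBMIT_PATCH":
        # correct = syntax check + unit tests + re-attack all passed
        reward = 3.0 if correct else -1.5
    else:
        reward = BASE[action]
    if risk_error is not None:                  # risk-accuracy bonus, up to +2.0
        reward += max(0.0, 2.0 * (1.0 - risk_error))
    # Safety multiplier: assumed 0.9 decay per prior syntax failure.
    return reward * 0.9 ** syntax_failures

print(grade("SUBMIT_PATCH", correct=True))   # 3.0
print(grade("BLOCK_PR", correct=False))      # -1.0
print(grade("SUBMIT_PATCH", correct=True, syntax_failures=2))
```

The multiplicative decay means an agent that repeatedly emits broken syntax earns less even from eventually-correct patches, nudging it toward producing well-formed code early.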
## 📊 Dashboard

Launch the Security Operations Center (SOC) dashboard to observe the agent's reasoning in real time.

```bash
streamlit run patchhawk/app/dashboard.py
```

The dashboard provides:

- Live XML reasoning logs (`<thought>` traces) from the agent.
- Real-time stdout/stderr streams from the Docker sandbox.
- Detailed audit trail of reward assignments and verification outcomes.
## 🗺️ Roadmap & Future Work
- Multi-Agent Coordination: Deploy attacker and defender models for automated red-teaming exercises.
- CVE Ingestion: Automatically generate training scenarios from the National Vulnerability Database (NVD).
- Cross-Language Support: Expand beyond Python to Go, JavaScript, Rust, and Java.
- Kubernetes Native: Orchestrate sandboxes at scale using Kubernetes instead of local Docker.
- Fine-Tuned Vulnerability Model: Train a specialized 7B parameter LLM (e.g., VulnLLM-R) on vulnerability-fixing commits.
- Context-Aware Analysis: Integrate Code Property Graph (CPG) slicing for LLM-based semantic vulnerability detection.
- Silent Patch Detection: Identify security-relevant commits that were not publicly disclosed.
- AI-Generated Code Audit: Trace vulnerabilities back to AI coding assistants (e.g., GitHub Copilot, ChatGPT).
- Automated PR Remediation: Generate and submit fix-containing pull requests for detected vulnerabilities.
- Adversarial Training Loop: Implement a self-improving LLM-vs-LLM red-team / blue-team training regimen.
- Supply-Chain Malware Detection: Extend dependency analysis to identify novel, unpublished attack patterns.
## 🏁 Hackathon Submission & Infrastructure Guide

This project is fully compliant with the Meta PyTorch Hackathon (OpenEnv) requirements.

### 1. Submission Validity

PatchHawk adheres to the OpenEnv specification:

- Gymnasium API: Implemented via `PatchHawkEnv`.
- Isolation: Docker-based sandboxing with resource caps.
- Metadata: Standardized `openenv.yaml` describing tasks and graders.
- Rollout: A compliant `inference.py` script demonstrating the agent loop.
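The reset/step contract can be illustrated with a toy environment of the same shape. Names and observation fields below are invented for illustration; the real `PatchHawkEnv` returns richer observations:

```python
from dataclasses import dataclass, field

@dataclass
class StepResult:
    """Gymnasium-style step payload: observation, reward, termination, extras."""
    observation: dict
    reward: float
    done: bool
    info: dict = field(default_factory=dict)

class ToyPatchEnv:
    """Minimal sketch of the interaction loop (not the real PatchHawkEnv)."""
    def reset(self) -> dict:
        self._steps = 0
        return {"scenario": "typosquat-demo", "files": ["setup.py"]}

    def step(self, action: str) -> StepResult:
        self._steps += 1
        reward = 0.1 if action == "DETONATE" else 0.0  # rubric's telemetry reward
        return StepResult({"last_action": action}, reward, self._steps >= 3)

env = ToyPatchEnv()
obs = env.reset()
result = env.step("DETONATE")
print(result.reward, result.done)  # 0.1 False
```

Any agent or grader that speaks this reset/step shape can drive the environment, which is what makes the rollout reproducible and shareable.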
### 2. Infrastructure Restrictions (20 min / 2 vCPU / 8 GB RAM)

PatchHawk is optimized for low-resource evaluation environments:
- Runtime: The full 3-task rollout typically finishes in < 5 minutes when using the HF Inference API.
- CPU/Memory: By defaulting to Remote LLM Inference, the local footprint is kept well under 8GB RAM and 2 vCPUs.
- Portability: The sandbox logic automatically detects if Docker is unavailable and falls back to a secure local subprocess mode to ensure the evaluation never hangs.
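One common way to implement such a fallback check (a sketch; PatchHawk's actual detection logic may differ):

```python
import shutil
import subprocess

def docker_available(timeout=5) -> bool:
    """Return True only when a usable Docker daemon is reachable."""
    if shutil.which("docker") is None:   # CLI not even installed
        return False
    try:
        # `docker info` talks to the daemon, so it fails fast when only
        # the CLI is present but no daemon is running.
        subprocess.run(["docker", "info"], capture_output=True,
                       timeout=timeout, check=True)
        return True
    except (subprocess.SubprocessError, OSError):
        return False

mode = "docker" if docker_available() else "local-subprocess"
print(f"sandbox mode: {mode}")
```

The explicit timeout matters for the evaluation budget: a hung daemon probe degrades to subprocess mode within seconds instead of stalling the rollout.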
### 3. Hugging Face Integration

To run the agent rollout, you should provide a Hugging Face token:

```bash
export HF_TOKEN="your_hf_token_here"
python inference.py
```

This token is used to call the `Llama-3.2-3B-Instruct` (or similar) model via the HF Inference API, ensuring high-speed analysis even on hardware without a GPU.
## 📄 License

Distributed under the MIT License. See the LICENSE file in the repository root for full details.

Developed with ❤️ by Ramprasath K & The PatchHawk Team for the OpenEnv Hackathon 2026, hosted by Meta.

