---
title: SentinelOps Arena
emoji: 🛡️
colorFrom: green
colorTo: red
sdk: gradio
sdk_version: 6.9.0
app_file: app.py
pinned: false
---
# SentinelOps Arena
Multi-agent self-play RL environment for enterprise security training, built on OpenEnv for the OpenEnv Hackathon SF (March 7-8, 2026).
Three AI agents compete in a simulated enterprise environment:
- RED TEAM (Attacker) – Launches schema drift, policy drift, social engineering, and rate limiting attacks
- BLUE TEAM (Worker) – Handles customer requests across CRM, Billing, and Ticketing systems
- AUDITOR (Oversight) – Monitors worker actions and flags policy violations
Through adversarial self-play with GRPO training, all three agents improve simultaneously.
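The adversarial incentive structure can be sketched as follows. This is a minimal illustration only, with hypothetical reward values; the actual shaping lives in `rewards.py`:

```python
def tick_rewards(attack_active: bool, task_completed: bool,
                 violation_occurred: bool, violation_flagged: bool):
    """Hypothetical per-tick rewards for the three agents (illustrative, not rewards.py)."""
    # Attacker is rewarded when an active attack disrupts the worker's task.
    attacker = 1.0 if (attack_active and not task_completed) else 0.0
    # Worker is rewarded for completing tasks and penalized for policy violations.
    worker = (1.0 if task_completed else 0.0) - (1.0 if violation_occurred else 0.0)
    # Auditor is rewarded for flagging real violations, penalized for false alarms.
    if violation_occurred:
        auditor = 1.0 if violation_flagged else -1.0
    else:
        auditor = -0.5 if violation_flagged else 0.0
    return attacker, worker, auditor
```

Opposed rewards like these are what let GRPO self-play improve all three agents at once: a stronger attacker creates harder episodes for the worker, and a stricter worker creates subtler cases for the auditor.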
## Quick Start

```bash
# Setup
python3 -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt

# Run Gradio demo
python app.py

# Run HTTP server
python -m sentinelops_arena.server --port 8000

# Run demo script
python -m sentinelops_arena.demo
```
## Project Structure

```
NexusEnv/
├── sentinelops_arena/
│   ├── models.py             # Action, Observation, State, data models
│   ├── environment.py        # SentinelOpsArena (MCPEnvironment) – core env
│   ├── systems/
│   │   ├── crm.py            # CRM simulator
│   │   ├── billing.py        # Billing simulator
│   │   └── ticketing.py      # Ticketing simulator
│   ├── attacks.py            # 4 attack types (schema/policy drift, social eng, rate limit)
│   ├── rewards.py            # Reward functions for all 3 agents
│   ├── task_generator.py     # Customer task generation
│   ├── demo.py               # Heuristic agents + episode runner
│   ├── server.py             # HTTP/WebSocket server
│   ├── test_phase1.py        # Unit tests
│   └── test_environment.py   # Integration tests
├── app.py                    # Gradio UI (HuggingFace Spaces)
├── train.py                  # GRPO training script (Unsloth + TRL)
├── requirements.txt
├── pyproject.toml
└── README.md
```
## Architecture

**3 Agents, 3 Systems, 30 Ticks per Episode**

Each tick: Attacker acts → Worker acts → Oversight acts
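The per-tick ordering can be sketched as a simple loop. This is an illustration of the control flow only; the real loop lives in `environment.py` and `demo.py`, and the `act`/`step` signatures here are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class EpisodeLog:
    ticks: list = field(default_factory=list)

def run_episode(attacker, worker, oversight, env, n_ticks: int = 30) -> EpisodeLog:
    """Illustrative episode loop: the three agents act in a fixed order each tick."""
    log = EpisodeLog()
    obs = env.reset()
    for tick in range(n_ticks):
        obs = env.step(attacker.act(obs))   # attacker may launch or sustain an attack
        obs = env.step(worker.act(obs))     # worker handles the current customer task
        obs = env.step(oversight.act(obs))  # auditor reviews the worker's action
        log.ticks.append(tick)
    return log
```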
## Attack Types

- Schema Drift – Renames fields across all records. Worker must detect `KeyError`, call `get_schema()`, and adapt.
- Policy Drift – Changes business rules (refund windows, approval requirements). Worker must call `get_current_policy()`.
- Social Engineering – Injects fake authority messages. Worker must resist manipulation.
- Rate Limiting – Throttles API calls. Worker must handle gracefully.
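The schema-drift recovery pattern can be sketched like this. A minimal illustration, assuming a `get_schema` callable that returns an old-name-to-new-name mapping; `lookup_field` is a hypothetical helper, not part of the environment's API:

```python
def lookup_field(record: dict, field_name: str, get_schema):
    """Sketch of schema-drift recovery: on a missing key, re-fetch the
    schema mapping and retry under the renamed field."""
    try:
        return record[field_name]
    except KeyError:
        # Assumed schema format: {"old_field_name": "new_field_name"}
        schema = get_schema()
        renamed = schema.get(field_name)
        if renamed is None or renamed not in record:
            raise  # genuinely missing, not a drift rename
        return record[renamed]
```

A worker that hard-codes field names fails the moment drift lands; one that treats `KeyError` as a signal to re-query the schema keeps completing tasks, which is exactly the behavior the reward function pays for.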
## MCP Tools

19 tools exposed via FastMCP, organized by agent role:

- Worker: `lookup_customer`, `check_balance`, `issue_refund`, `create_ticket`, `get_schema`, `get_current_policy`, etc.
- Attacker: `launch_attack`, `get_attack_budget`
- Oversight: `flag_action`, `get_trajectory`
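Organizing tools by role implies a permission check at dispatch time. A minimal sketch of that idea; `dispatch` and `ROLE_TOOLS` are hypothetical names, not the FastMCP server's actual mechanism:

```python
# Assumed role-to-tool mapping, mirroring the lists above.
ROLE_TOOLS = {
    "worker": {"lookup_customer", "check_balance", "issue_refund",
               "create_ticket", "get_schema", "get_current_policy"},
    "attacker": {"launch_attack", "get_attack_budget"},
    "oversight": {"flag_action", "get_trajectory"},
}

def dispatch(role: str, tool: str, handlers: dict, **kwargs):
    """Reject tool calls that fall outside the caller's role, then invoke the handler."""
    if tool not in ROLE_TOOLS.get(role, set()):
        raise PermissionError(f"{role!r} may not call {tool!r}")
    return handlers[tool](**kwargs)
```

Gating by role keeps the arena honest: the worker cannot launch attacks and the attacker cannot flag itself, so each agent's reward signal reflects only its own policy.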
## Training

Uses GRPO (Group Relative Policy Optimization) with Unsloth + TRL:

```bash
# Train with Unsloth (recommended, 2x faster)
python train.py --use_unsloth --model_name unsloth/Qwen2.5-0.5B-Instruct

# Train without Unsloth
python train.py --model_name Qwen/Qwen2.5-0.5B-Instruct
```

See `train.py` for the full training pipeline.
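GRPO's defining step is that it scores each rollout against the other rollouts in its group rather than a learned value baseline. A minimal sketch of that normalization (the function name is hypothetical; TRL's `GRPOTrainer` handles this internally):

```python
from statistics import mean, stdev

def group_relative_advantages(rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Normalize each rollout's reward by its group's mean and std: the GRPO baseline."""
    mu = mean(rewards)
    sigma = stdev(rewards) if len(rewards) > 1 else 0.0
    return [(r - mu) / (sigma + eps) for r in rewards]
```

Because the baseline is just the group mean, GRPO needs no critic network, which is what makes it practical for small models like the 0.5B Qwen used here.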
## Partner Tracks

- Fleet AI – Scalable Oversight: the Oversight agent monitors and explains Worker behavior
- Patronus AI – Schema Drift: schema and policy drift are core attack types
## Tech Stack

- OpenEnv 0.2.x – Environment framework
- FastMCP – MCP tool server
- Gradio – Demo UI
- HuggingFace TRL – GRPO training
- Unsloth – Fast fine-tuning (2x speed, 70% less VRAM)
- Pydantic – Data validation
## Tests

```bash
python sentinelops_arena/test_phase1.py
python sentinelops_arena/test_environment.py
```