# HaramGuard: Agentic AI Crowd Management System

## Folder Structure
```
HaramGuard/
│
├── config.py             ← All thresholds & API keys (edit here only)
├── pipeline.py           ← RealTimePipeline: orchestrates all agents
├── dashboard.py          ← Streamlit dashboard entry point
├── requirements.txt
│
├── core/                 ← Shared infrastructure (no agent logic here)
│   ├── __init__.py
│   ├── models.py         ← FrameResult, RiskResult, Decision dataclasses
│   └── database.py       ← HajjFlowDB: SQLite (4 tables)
│
├── agents/               ← One file per agent
│   ├── __init__.py
│   ├── perception_agent.py   ← YOLO tracking + guardrails GR1/GR2
│   ├── risk_agent.py         ← Clip segmentation + sliding K-window density scoring
│   ├── reflection_agent.py   ← Self-critique design pattern (bias detection)
│   ├── operations_agent.py   ← Event-driven playbook + rate-limit guardrail
│   └── coordinator_agent.py  ← openai/gpt-oss-120b + output validation guardrails GR-C1..5
│
└── outputs/              ← Auto-created at runtime
    ├── hajjflow_rt.db    ← Main SQLite database
    └── plots/            ← Saved charts
```
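The three dataclasses listed for `core/models.py` carry results between agents. A minimal sketch of what they might look like; only the class names come from the tree above, and every field shown is an illustrative assumption:

```python
from dataclasses import dataclass

# Hypothetical field layouts for the core/models.py dataclasses.
# Only the class names (FrameResult, RiskResult, Decision) are from
# the repo tree; the fields are assumptions for illustration.

@dataclass
class FrameResult:
    frame_id: int
    person_count: int      # capped at 500 by guardrail GR1
    density_score: float   # capped at 50 by guardrail GR2

@dataclass
class RiskResult:
    risk_score: float      # clamped to [0, 1] by guardrail GR3
    level: str             # e.g. "LOW" / "MEDIUM" / "HIGH"
    level_changed: bool    # suppressed during K-window warmup (GR3b)

@dataclass
class Decision:
    priority: str          # "P0" / "P1" / "P2"
    zone: str
    action: str
```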
## Agent Pipeline

```
Frame
  │
  ▼
PerceptionAgent  → FrameResult (YOLO detect + track)
  │
  ▼
RiskAgent        → RiskResult (clip segmentation + K-window density score)
  │
  ▼
ReflectionAgent  → reflection{} (bias check + correction)
  │
  ▼
OperationsAgent  → Decision (event-driven, P0/P1/P2)
  │
  ▼
CoordinatorAgent → plan{} (openai/gpt-oss-120b action plan, all priorities)
  │
  ▼
pipeline.state   → Dashboard
```
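The per-frame flow above could be orchestrated as a simple sequential chain. A hedged sketch of how `RealTimePipeline` might wire the agents together; the `run` interface and constructor signature are assumptions, and only the agent order and the `pipeline.state → Dashboard` handoff come from the diagram:

```python
# Hypothetical orchestration sketch for pipeline.py. Agent order matches
# the diagram; the .run(...) interface is an assumption for illustration.
class RealTimePipeline:
    def __init__(self, perception, risk, reflection, operations, coordinator):
        self.perception = perception
        self.risk = risk
        self.reflection = reflection
        self.operations = operations
        self.coordinator = coordinator
        self.state = {}  # latest results, read by the dashboard

    def process_frame(self, frame):
        frame_result = self.perception.run(frame)          # YOLO detect + track
        risk_result = self.risk.run(frame_result)          # K-window density score
        reflection = self.reflection.run(risk_result)      # bias check + correction
        decision = self.operations.run(risk_result, reflection)  # P0/P1/P2
        plan = self.coordinator.run(decision)              # LLM action plan
        self.state = {"decision": decision, "plan": plan}
        return self.state
```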
## Guardrails
| ID | Agent | Description |
|---|---|---|
| GR1 | PerceptionAgent | Person count capped at 500 |
| GR2 | PerceptionAgent | Density score capped at 50 |
| GR3 | RiskAgent | Risk score clamped to [0, 1] (density_pct / 100) |
| GR3b | RiskAgent | level_changed suppressed during K-window warmup (first 17 frames per clip) |
| GR4 | OperationsAgent | P0 rate-limited (1 per 5 min per zone); resets on pipeline restart |
| GR-C1..5 | CoordinatorAgent | LLM output validation (fields, threat_level, confidence, score-level consistency, Arabic fallback) |
| RF1..3 | ReflectionAgent | Chronic LOW bias, rising trend ignored, count-risk mismatch |
## Run Evaluation

```bash
python evaluation.py
```

Outputs:

- `outputs/plots/eval_perception.png` → PerceptionAgent charts
- `outputs/plots/eval_risk.png` → RiskAgent score trajectories
- `outputs/plots/eval_e2e.png` → End-to-end accuracy + throughput
- `outputs/eval/summary.json` → Final metrics summary
Rubric coverage:

- ✅ End-to-end performance metrics (Section 5)
- ✅ Component-level evaluation (Sections 1–4)
- ✅ Error analysis methodology (Section 6)
- ✅ Evidence of iterative improvement (Section 7)
## Run Backend API

```bash
pip install -r requirements.txt
python api.py
```