---
title: RUSH AGENTS RUSH Backend
emoji: 🔥
colorFrom: red
colorTo: yellow
sdk: docker
sdk_version: latest
python_version: '3.11'
pinned: false
---
# Rush Agents Rush Backend

FastAPI server driving the fire-suppression simulation.
## What It Does
- Accepts model selections and starts a new simulation.
- Places a fire on the map and generates water wells.
- Runs the tick-based AI loop with coalition voting, movement, and extinguishing.
- Streams state updates and events over WebSockets.
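The tick loop can be pictured roughly like this. This is a minimal illustrative sketch, not the actual `app/simulation.py` logic; the constants and the `run_ticks` helper are invented placeholders:

```python
# Illustrative tick loop (the real tuning lives in app/simulation.py;
# these rate constants are made-up values for the sketch).
GROWTH_RATE = 1.5        # fire intensity added per tick (assumed)
EXTINGUISH_RATE = 4.0    # intensity removed per dousing agent per tick (assumed)

def run_ticks(intensity: float, agents_dousing: int, ticks: int) -> float:
    """Advance the fire by `ticks` steps; intensity never drops below zero."""
    for _ in range(ticks):
        intensity += GROWTH_RATE
        intensity -= EXTINGUISH_RATE * agents_dousing
        intensity = max(intensity, 0.0)
    return intensity

# With two agents dousing, the fire shrinks each tick and dies out:
print(run_ticks(10.0, agents_dousing=2, ticks=2))  # → 0.0
```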
## Key Endpoints

- `GET /wake` - health and readiness check
- `GET /available-models` - list available models for the UI
- `POST /start-simulation` - create a new simulation
- `POST /place-fire` - place the fire and spawn water sources
- `WS /ws/{simulation_id}` - stream live simulation ticks
## Environment Variables

- `HUGGINGFACE_API_TOKEN` or `HF_API_TOKEN`: required for Hugging Face router model calls.
- `ALLOWED_ORIGINS`: CORS whitelist.
## Local Run

```bash
cd backend
pip install -r requirements.txt
python -m uvicorn app.main:app --reload --port 8000
```
## Notes

- Simulation state is kept in memory.
- Fire growth, extinguish rate, and movement are tuned in `app/simulation.py`.
- Model decisions are generated in `app/groq_client.py` through `https://router.huggingface.co/v1/chat/completions`.
- `/available-models` is backed by `app/hf_spaces.py` and filters a preferred model list against the live Hugging Face router catalog.
- This `backend/app` is the local development copy; the Hugging Face Space runtime uses the root `app/` package.
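The model filtering described above can be illustrated with a hypothetical helper (not the real `app/hf_spaces.py` code):

```python
def filter_available(preferred: list[str], catalog: set[str]) -> list[str]:
    # Keep only preferred models that the live router catalog actually serves,
    # preserving the preferred ordering.
    return [model for model in preferred if model in catalog]

# Example: only "b" and "c" survive the catalog check, in preferred order.
print(filter_available(["a", "b", "c"], {"b", "c", "x"}))  # → ['b', 'c']
```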