---
title: Soci
emoji: 🏙️
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
---

# Soci — LLM-Powered City Population Simulator

Simulates a diverse population of AI people living in a city, using an LLM as the reasoning engine. Each agent has a unique persona, memory stream, needs, and relationships. Inspired by [Stanford Generative Agents (Joon Park et al.)](https://arxiv.org/abs/2304.03442), CitySim, AgentSociety, and a16z ai-town.

**Live demo:** https://huggingface.co/spaces/RayMelius/soci

---

## Features

- AI agents with unique personas, goals, and memories
- Maslow-inspired needs system (hunger, energy, social, purpose, comfort, fun)
- Relationship graph with familiarity, trust, sentiment, and romance
- Agent cognition loop: **OBSERVE → REFLECT → PLAN → ACT → REMEMBER**
- Web UI with animated city map, zoom, pan, and agent inspector
- Road-based movement with L-shaped routing (agents walk along streets)
- Agent animations: walking (profile/back view), sleeping on bed
- Speed controls (1x → 50x) and real-time WebSocket sync across browsers
- **LLM probability slider** — tune AI usage from 0–100% to stay within free-tier quotas
- **Player login** — register an account, get your own agent on the map, chat with NPCs
- Multi-LLM support: Gemini (free tier), Groq (free tier), Anthropic Claude, Ollama (local)
- GitHub-based state persistence (survives server reboots and redeploys)
- Cost-efficient model routing (Haiku for routine, Sonnet for novel situations)
- Daily quota circuit-breaker with warnings at 50 / 70 / 90 / 99% usage

---

## System Architecture

```
┌─────────────────────────────────────────────────────────┐
│  Browser (web/index.html — single-file Vue-less UI)
│   • Canvas city map   • Agent inspector   • Chat panel
│   • Speed / LLM-probability sliders   • Login modal
└────────────┬───────────────────────────┬────────────────┘
```
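The browser drives everything with plain REST polls plus a WebSocket, so the same endpoints can be scripted from Python. A minimal stdlib-only sketch — the `/api/city` path comes from the architecture diagram, but the payload shape assumed by `agent_names()` is illustrative, not the real schema:

```python
"""Minimal helpers for scripting the Soci REST API (sketch)."""

def city_url(base: str = "http://localhost:8000") -> str:
    # REST snapshot of the whole city, polled by the web UI every 3 s
    return f"{base}/api/city"

def agent_names(city_payload: dict) -> list[str]:
    # Assumed payload shape: {"agents": [{"name": ...}, ...]}
    return [a["name"] for a in city_payload.get("agents", [])]

# Against a live server (requires the server to be running):
#   import json, urllib.request
#   with urllib.request.urlopen(city_url()) as resp:
#       print(agent_names(json.load(resp)))
```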
```
             │ REST GET /api/*            │ WebSocket /ws
             │ POST /api/controls/*       │ push events
             ▼                            ▼
┌─────────────────────────────────────────────────────────┐
│  FastAPI Server (soci/api/server.py)
│   • lifespan: load state → start sim loop
│   • routes.py — REST endpoints
│   • websocket.py — broadcast tick events
└───────────────────────────┬─────────────────────────────┘
                            │ asyncio.create_task
                            ▼
┌─────────────────────────────────────────────────────────┐
│  Simulation Loop (background task)
│   tick every N sec → sim.tick() → sleep
│   respects: _sim_paused / _sim_speed / llm_call_prob
└───────────────────────────┬─────────────────────────────┘
                            │
                            ▼
┌─────────────────────────────────────────────────────────┐
│  Simulation.tick()  (engine/simulation.py)
│
│   1. Entropy / world events
│   2. Daily plan generation ────► LLM (if prob gate ✓)
│   3. Agent needs + routine actions (no LLM)
│   4. LLM action decisions ─────► LLM (if prob gate ✓)
│   5. Conversation turns ───────► LLM (if prob gate ✓)
│   6. New conversation starts ──► LLM (if prob gate ✓)
└─────────────────────────────────────────────────────────┘
```
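The background loop is what makes the pause flag, speed multiplier, and LLM probability gate work. A sketch of that loop — the attribute names (`_sim_paused`, `_sim_speed`, `llm_call_prob`) come from the diagram, but the body is illustrative, not the project's actual code:

```python
# Illustrative sketch of the background simulation loop; everything
# beyond the attribute names shown in the diagram is an assumption.
import asyncio
import random

class SimLoop:
    def __init__(self, sim, tick_delay: float = 4.0):
        self.sim = sim
        self.tick_delay = tick_delay   # ~4 s for rate-limited providers
        self._sim_paused = False
        self._sim_speed = 1.0          # 1x .. 50x from the UI speed buttons
        self.llm_call_prob = 0.45      # the 🧠 slider, 0.0 .. 1.0

    def llm_gate(self) -> bool:
        # Probability gate: only this fraction of decision points hit the LLM
        return random.random() < self.llm_call_prob

    async def run(self) -> None:
        while True:
            if not self._sim_paused:
                await self.sim.tick(use_llm=self.llm_gate())
            # Higher speed multiplier → shorter sleep between ticks
            await asyncio.sleep(self.tick_delay / self._sim_speed)
```

The FastAPI lifespan hook would then start this with `asyncio.create_task(SimLoop(sim).run())`, which is the `asyncio.create_task` edge in the diagram.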
```
┌─────────────────────────────────────────────────────────┐
│   7. Reflections ──────────────► LLM (if prob gate ✓)
│   8. Romance / relationship updates (no LLM)
│   9. Clock advance
└───────────────────────────┬─────────────────────────────┘
                            │ await llm.complete_json()
                            ▼
┌─────────────────────────────────────────────────────────┐
│  LLM Client (engine/llm.py)
│   • Rate limiter (asyncio.Lock, min interval per RPM)
│   • Daily usage counter → warns at 50/70/90/99%
│   • Quota circuit-breaker (expires midnight Pacific)
│   • Providers: GeminiClient / GroqClient /
│                ClaudeClient / HFInferenceClient /
│                OllamaClient
└───────────────────────────┬─────────────────────────────┘
                            │ HTTP (httpx async)
                            ▼
            External LLM API (Gemini / Groq / …)
```

---

## Message Flow — One Simulation Tick

```
Browser poll (3s)                   Simulation background loop
      │                                        │
      │ GET /api/city      tick_delay (4s Gemini, 0.5s Ollama)
      │◄───────────────────────────────────────┤
      │                                        │ sim.tick()
      │                             ┌──────────┴─────────────
      │                             │ For each agent:
      │                             │   tick_needs()
      │                             │   check routine    ──► execute routine action (no LLM)
      │                             │   roll prob gate
      │                             │   _decide_action() ──► LLM call ──► AgentAction JSON
      │                             │   _execute_action()
      │                             └──────────┬─────────────
      │                             ┌──────────┴─────────────
      │                             │ Conversations:
      │                             │   continue_conv() ──► LLM call ──► dialogue turn
```
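Every "LLM call" in this flow goes through the client in `engine/llm.py`, which combines two throttles: an RPM-based minimum interval behind an `asyncio.Lock`, and a daily-quota circuit breaker with the 50/70/90/99% warnings. A sketch — class and attribute names here are assumptions, not the project's real API:

```python
# Sketch of an RPM rate limiter plus daily-quota circuit breaker, as
# described for engine/llm.py. Names and structure are illustrative.
import asyncio
import time

class RateLimiter:
    def __init__(self, rpm: int = 5, daily_limit: int = 1500):
        self.min_interval = 60.0 / rpm    # e.g. 12 s between calls at 5 RPM
        self.daily_limit = daily_limit    # resets at midnight Pacific
        self.calls_today = 0
        self._last_call = 0.0
        self._lock = asyncio.Lock()

    def quota_exceeded(self) -> bool:
        # Circuit breaker: refuse further calls once the quota is spent
        return self.calls_today >= self.daily_limit

    async def acquire(self) -> None:
        async with self._lock:            # serialise concurrent callers
            wait = self.min_interval - (time.monotonic() - self._last_call)
            if wait > 0:
                await asyncio.sleep(wait)
            self._last_call = time.monotonic()
            self.calls_today += 1
            # Warning thresholds at 50 / 70 / 90 / 99% of the daily quota
            for pct in (0.50, 0.70, 0.90, 0.99):
                if self.calls_today == int(self.daily_limit * pct):
                    print(f"LLM quota warning: {pct:.0%} used")
```

A provider client would `await limiter.acquire()` (and check `quota_exceeded()`) before each HTTP request.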
```
      │                             │   new conv start  ──► LLM call ──► opening line
      │                             └──────────┬─────────────
      │                             ┌──────────┴─────────────
      │                             │ Reflections:
      │                             │   should_reflect()? ──► LLM call ──► memory insight
      │                             └──────────┬─────────────
      │                                        │ clock.tick()
      │                                        │
      │          WebSocket push                │
      │◄── events/state ───────────────────────┘
      │
 [browser updates map, event log, agent inspector]
```

---

## Agent Cognition Loop

```
Every tick, each NPC agent runs:

  OBSERVE ───► perceive nearby agents, events,
               location, time of day
      │
      ▼
  REFLECT ───► check memory.should_reflect()
               LLM synthesises insight from recent
               memories → stored as reflection
      │
      ▼
  PLAN ──────► if no daily plan: LLM generates
               ordered list of goals for the day
               (or routine fills the plan — no LLM)
      │
      ▼
  ACT ───────► routine slot? → execute directly
```
```
               no slot? → LLM picks action
               action types: move / work / eat /
               sleep / socialise / leisure / rest
      │
      ▼
  REMEMBER ──► add_observation() to memory stream
               importance 1–10, recency decay,
               retrieved by relevance score

LLM budget per tick (rate-limited providers):
  max_llm_calls_this_tick = 1     (Gemini/Groq/HF)
  llm_call_probability    = 0.45  (Gemini default → ~10h/day)
```

---

## Tech Stack

| Layer | Technology |
|-------|-----------|
| Language | Python 3.10+ |
| API server | FastAPI + Uvicorn |
| Real-time | WebSocket (FastAPI) |
| Database | SQLite via aiosqlite |
| LLM providers | Gemini · Groq · Anthropic Claude · HF Inference · Ollama |
| Config | YAML (city layout, agent personas) |
| State persistence | GitHub API (simulation-state branch) |
| Container | Docker (HF Spaces / Render) |

---

## Quick Start (Local)

### Prerequisites

- Python 3.10+
- At least one LLM API key — or [Ollama](https://ollama.ai) installed locally (free, no key needed)

### Install

```bash
git clone https://github.com/Bonum/Soci.git
cd Soci
pip install -r requirements.txt
```

### Configure

```bash
# Pick ONE provider (Gemini recommended — free tier is generous):
export GEMINI_API_KEY=AIza...        # https://aistudio.google.com/apikey
# or
export GROQ_API_KEY=gsk_...          # https://console.groq.com
# or
export ANTHROPIC_API_KEY=sk-ant-...
# or install Ollama and pull a model — no key needed
```

### Run

```bash
# Web UI (recommended)
python -m uvicorn soci.api.server:app --host 0.0.0.0 --port 8000
# Open http://localhost:8000

# Terminal only
python main.py --ticks 20 --agents 5
```

---

## Deploying to the Internet

### Option 1 — Hugging Face Spaces (free, recommended)

HF Spaces runs the Docker container for free with automatic HTTPS.
1. **Create a Space** at https://huggingface.co/new-space
   - SDK: **Docker**
   - Visibility: Public

2. **Add the HF remote** and push:

   ```bash
   git remote add hf https://YOUR_HF_USERNAME:YOUR_HF_TOKEN@huggingface.co/spaces/YOUR_HF_USERNAME/soci
   git push hf master:main
   ```

   Get a write token at https://huggingface.co/settings/tokens (select *Write* + *Inference Providers* permissions).

3. **Add Space secrets** (Settings → Variables and Secrets):

   | Secret | Value |
   |--------|-------|
   | `SOCI_PROVIDER` | `gemini` |
   | `GEMINI_API_KEY` | your AI Studio key |
   | `GITHUB_TOKEN` | GitHub PAT (repo read/write) |
   | `GITHUB_OWNER` | your GitHub username |
   | `GITHUB_REPO_NAME` | `Soci` |

4. Your Space rebuilds automatically on every push. Visit `https://YOUR_HF_USERNAME-soci.hf.space`

> **Free-tier tip:** Gemini free tier = 5 RPM, ~1500 requests/day (resets midnight Pacific).
> The default LLM probability is **45%**, which gives ~10 hours of AI-driven simulation per day.
> Use the 🧠 slider in the toolbar to adjust at runtime.

---

### Option 2 — Render (free tier)

1. Connect your GitHub repo at https://render.com/new
2. Choose **Web Service** → Docker
3. Set **Start Command**:

   ```
   python -m uvicorn soci.api.server:app --host 0.0.0.0 --port $PORT
   ```

4. Set environment variables in the Render dashboard:

   | Variable | Value |
   |----------|-------|
   | `SOCI_PROVIDER` | `gemini` or `groq` |
   | `GEMINI_API_KEY` | your key |
   | `GITHUB_TOKEN` | GitHub PAT |
   | `GITHUB_OWNER` | your GitHub username |
   | `GITHUB_REPO_NAME` | `Soci` |

5. To prevent state-file commits from triggering redeploys, set **Ignore Command**:

   ```
   [ "$(git diff --name-only HEAD~1 HEAD | grep -v '^state/' | wc -l)" = "0" ]
   ```

> **Note:** Render free tier spins down after 15 min of inactivity. Simulation state is saved to GitHub on shutdown and restored on the next boot — no data is lost.

---

### Option 3 — Railway

1. Go to https://railway.app → **New Project** → **Deploy from GitHub repo**
2. Railway auto-detects the Dockerfile
3. Add environment variables in the Railway dashboard (same as Render above)
4. Railway assigns a public URL automatically

---

### Option 4 — Local + Ngrok (quick public URL for testing)

```bash
# Start the server
python -m uvicorn soci.api.server:app --host 0.0.0.0 --port 8000 &

# Expose it publicly (install ngrok first: https://ngrok.com)
ngrok http 8000
# Copy the https://xxxx.ngrok.io URL and share it
```

---

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `SOCI_PROVIDER` | auto-detect | LLM provider: `gemini` · `groq` · `claude` · `hf` · `ollama` |
| `GEMINI_API_KEY` | — | Google AI Studio key (free tier: 5 RPM, ~1500 RPD) |
| `GROQ_API_KEY` | — | Groq API key (free tier: 30 RPM) |
| `ANTHROPIC_API_KEY` | — | Anthropic Claude API key |
| `SOCI_LLM_PROB` | per-provider | LLM call probability 0–1 (`0.45` Gemini · `0.7` Groq · `1.0` Ollama) |
| `GEMINI_DAILY_LIMIT` | `1500` | Override Gemini daily request quota for warning thresholds |
| `SOCI_AGENTS` | `50` | Starting agent count |
| `SOCI_TICK_DELAY` | `0.5` | Seconds between simulation ticks (overridden to 4.0 for rate-limited providers) |
| `SOCI_DATA_DIR` | `data` | Directory for SQLite DB and snapshots |
| `GITHUB_TOKEN` | — | GitHub PAT for state persistence across deploys |
| `GITHUB_OWNER` | — | GitHub repo owner (e.g. `alice`) |
| `GITHUB_REPO_NAME` | — | GitHub repo name (e.g. `Soci`) |
| `GITHUB_STATE_BRANCH` | `simulation-state` | Branch used for state snapshots (never touches main) |
| `GITHUB_STATE_FILE` | `state/autosave.json` | Path inside repo for state file |
| `PORT` | `8000` | HTTP port (set to `7860` on HF Spaces automatically) |

---

## Web UI Controls

| Control | How |
|---------|-----|
| Zoom | Scroll wheel or **＋ / －** buttons |
| Fit view | **Fit** button |
| Pan | Drag canvas or use sliders |
| Rectangle zoom | Click **⬚**, then drag |
| Inspect agent | Click agent on map or in sidebar list |
| Speed | **🐢 1x 2x 5x 10x 50x** buttons |
| LLM usage | **🧠** slider (0–100%) — tune AI call frequency |
| Switch LLM | Click the provider badge (e.g. **✦ Gemini 2.0 Flash**) |
| **Login / play** | Register → your agent appears with a gold ring |
| **Talk to NPC** | Select agent → **Talk to [Name]** button |
| **Move** | Player panel → location dropdown → **Go** |
| **Edit profile** | Player panel → **Edit Profile** |
| **Add plans** | Player panel → **My Plans** |

---

## LLM Provider Comparison

| Provider | Free tier | RPM | Daily limit | Best for |
|----------|-----------|-----|-------------|----------|
| **Gemini 2.0 Flash** | ✅ Yes | 5 | ~1500 req | Cloud demos (default) |
| **Groq Llama 3.1 8B** | ✅ Yes | 30 | ~14k tokens/min | Fast responses |
| **Ollama** | ✅ Local | ∞ | ∞ | Local dev, no quota |
| **Anthropic Claude** | ❌ Paid | — | — | Highest quality |
| **HF Inference** | ⚠️ PRO only | 5 | varies | Experimenting |

---

## Project Structure

```
Soci/
├── src/soci/
│   ├── world/         City map, simulation clock, world events
│   ├── agents/        Agent cognition: persona, memory, needs, relationships
│   ├── actions/       Movement, activities, conversation, social actions
│   ├── engine/        Simulation loop, scheduler, entropy, LLM clients
│   ├── persistence/   SQLite database, save/load snapshots
│   └── api/           FastAPI REST + WebSocket server
```
```
├── config/
│   ├── city.yaml      City layout, building positions, zones
│   └── personas.yaml  Named character definitions (20 hand-crafted agents)
├── web/
│   └── index.html     Single-file web UI (no framework)
├── Dockerfile         For HF Spaces / Render / Railway deployment
├── render.yaml        Render deployment config
└── main.py            Terminal runner (no UI)
```

---

## License

MIT