Update README: message flow charts, full deployment guide
- System architecture diagram (ASCII) showing browser → API → sim loop → LLM
- Message flow for one simulation tick (per-agent LLM call path)
- Agent cognition loop diagram (OBSERVE/REFLECT/PLAN/ACT/REMEMBER)
- Deployment guides: HF Spaces, Render, Railway, ngrok (local)
- Updated env vars table (GEMINI_API_KEY, SOCI_LLM_PROB, GEMINI_DAILY_LIMIT, etc.)
- LLM provider comparison table with free-tier limits
- Updated features list (Gemini, LLM probability slider, quota circuit-breaker)
- Updated live demo URL to HF Space
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
README.md
Inspired by [Stanford Generative Agents (Joon Park et al.)](https://arxiv.org/abs/2304.03442), CitySim, AgentSociety, and a16z ai-town.

**Live demo:** https://huggingface.co/spaces/RayMelius/soci

---

## Features

- Road-based movement with L-shaped routing (agents walk along streets)
- Agent animations: walking (profile/back view), sleeping on bed
- Speed controls (1x → 50x) and real-time WebSocket sync across browsers
- **LLM probability slider** — tune AI usage from 0–100% to stay within free-tier quotas
- **Player login** — register an account, get your own agent on the map, chat with NPCs
- Multi-LLM support: Gemini (free tier), Groq (free tier), Anthropic Claude, Ollama (local)
- GitHub-based state persistence (survives server reboots and redeploys)
- Cost-efficient model routing (Haiku for routine, Sonnet for novel situations)
- Daily quota circuit-breaker with warnings at 50 / 70 / 90 / 99% usage
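The L-shaped routing mentioned above is simple enough to sketch. This is an illustrative toy, not the project's pathfinding code — the function name and grid model are made up for the example: walk along one axis until the corner, then along the other.

```python
def l_shaped_route(start, goal):
    """Return the grid cells visited walking an L-shaped path:
    first horizontally to the goal column, then vertically."""
    (x0, y0), (x1, y1) = start, goal
    path = []
    step = 1 if x1 >= x0 else -1
    for x in range(x0, x1, step):   # horizontal leg, along the street
        path.append((x, y0))
    step = 1 if y1 >= y0 else -1
    for y in range(y0, y1, step):   # vertical leg, after the corner
        path.append((x1, y))
    path.append((x1, y1))           # arrive at the goal cell
    return path

print(l_shaped_route((0, 0), (2, 2)))
# [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
```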

---

## System Architecture

```
┌───────────────────────────────────────────────────────────┐
│ Browser (web/index.html — single-file Vue-less UI)        │
│  • Canvas city map   • Agent inspector   • Chat panel     │
│  • Speed / LLM-probability sliders   • Login modal        │
└─────────┬───────────────────────────────┬─────────────────┘
          │ REST GET /api/*               │ WebSocket /ws
          │ POST /api/controls/*          │ push events
          ▼                               ▼
┌───────────────────────────────────────────────────────────┐
│ FastAPI Server (soci/api/server.py)                       │
│  • lifespan: load state → start sim loop                  │
│  • routes.py — REST endpoints                             │
│  • websocket.py — broadcast tick events                   │
└─────────────────────────┬─────────────────────────────────┘
                          │ asyncio.create_task
                          ▼
┌───────────────────────────────────────────────────────────┐
│ Simulation Loop (background task)                         │
│  tick every N sec → sim.tick() → sleep                    │
│  respects: _sim_paused / _sim_speed / llm_call_prob       │
└─────────────────────────┬─────────────────────────────────┘
                          │
                          ▼
┌───────────────────────────────────────────────────────────┐
│ Simulation.tick() (engine/simulation.py)                  │
│                                                           │
│  1. Entropy / world events                                │
│  2. Daily plan generation ────► LLM (if prob gate ✓)      │
│  3. Agent needs + routine actions (no LLM)                │
│  4. LLM action decisions ─────► LLM (if prob gate ✓)      │
│  5. Conversation turns ───────► LLM (if prob gate ✓)      │
│  6. New conversation starts ──► LLM (if prob gate ✓)      │
│  7. Reflections ──────────────► LLM (if prob gate ✓)      │
│  8. Romance / relationship updates (no LLM)               │
│  9. Clock advance                                         │
└─────────────────────────┬─────────────────────────────────┘
                          │ await llm.complete_json()
                          ▼
┌───────────────────────────────────────────────────────────┐
│ LLM Client (engine/llm.py)                                │
│  • Rate limiter (asyncio.Lock, min interval per RPM)      │
│  • Daily usage counter — warns at 50/70/90/99%            │
│  • Quota circuit-breaker (expires midnight Pacific)       │
│  • Providers: GeminiClient / GroqClient /                 │
│               ClaudeClient / HFInferenceClient /          │
│               OllamaClient                                │
└─────────────────────────┬─────────────────────────────────┘
                          │ HTTP (httpx async)
                          ▼
              External LLM API (Gemini / Groq / …)
```
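The LLM Client box combines three ideas — serialized calls, RPM spacing, and a daily circuit-breaker. A minimal sketch of that pattern (illustrative only; the class and method names here are not from `engine/llm.py`):

```python
import asyncio
import time

class RateLimitedClient:
    """Illustrative: serialize calls, space them per the RPM cap,
    and trip a circuit-breaker when the daily quota is exhausted."""

    def __init__(self, rpm=5, daily_limit=1500):
        self.min_interval = 60.0 / rpm   # e.g. 5 RPM -> 12s between calls
        self.daily_limit = daily_limit
        self.calls_today = 0
        self._last_call = 0.0
        self._lock = asyncio.Lock()

    async def complete(self, prompt: str) -> str:
        async with self._lock:           # one in-flight call at a time
            if self.calls_today >= self.daily_limit:
                raise RuntimeError("daily quota exhausted (circuit open)")
            wait = self.min_interval - (time.monotonic() - self._last_call)
            if wait > 0:
                await asyncio.sleep(wait)
            self._last_call = time.monotonic()
            self.calls_today += 1
            for pct in (50, 70, 90, 99):  # usage warnings
                if self.calls_today == self.daily_limit * pct // 100:
                    print(f"warning: {pct}% of daily quota used")
            return f"(stub reply to: {prompt})"  # real code calls httpx here
```

In the real client the stub return would be an awaited `httpx` request, and the counter would reset at midnight Pacific.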

---

## Message Flow — One Simulation Tick

```
Browser poll (3s)                  Simulation background loop
      │                                        │
      │ GET /api/city             tick_delay (4s Gemini, 0.5s Ollama)
      ├────────────────────────────────────────┤
      │                                        │ sim.tick()
      │                                        │
      │                            ┌───────────┴──────────┐
      │                            │ For each agent:      │
      │                            │  tick_needs()        │
      │                            │  check routine ──────► execute routine action (no LLM)
      │                            │  roll prob gate      │
      │                            │  _decide_action() ───► LLM call ──► AgentAction JSON
      │                            │  _execute_action()   │
      │                            └───────────┬──────────┘
      │                                        │
      │                            ┌───────────┴──────────┐
      │                            │ Conversations:       │
      │                            │  continue_conv() ────► LLM call ──► dialogue turn
      │                            │  new conv start ─────► LLM call ──► opening line
      │                            └───────────┬──────────┘
      │                                        │
      │                            ┌───────────┴──────────┐
      │                            │ Reflections:         │
      │                            │  should_reflect()? ──► LLM call ──► memory insight
      │                            └───────────┬──────────┘
      │                                        │
      │                                   clock.tick()
      │                                        │
      │            WebSocket push              │
      ◄──── events/state ──────────────────────┘
      │
 [browser updates map, event log, agent inspector]
```
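The background loop on the right-hand side reduces to a small asyncio pattern. The sketch below is a stand-in, not the actual loop — the pause flag and the probability gate mirror the names in the diagram:

```python
import asyncio
import random

class Sim:
    """Toy stand-in for Simulation: counts ticks and LLM-gated decisions."""

    def __init__(self, llm_call_prob=0.45):
        self.llm_call_prob = llm_call_prob
        self.ticks = 0
        self.llm_calls = 0

    def tick(self):
        self.ticks += 1
        if random.random() < self.llm_call_prob:  # the "prob gate"
            self.llm_calls += 1                   # real code awaits an LLM here

async def sim_loop(sim, tick_delay=0.01, max_ticks=100, paused=lambda: False):
    """Background task: tick, then sleep, respecting a pause flag."""
    while sim.ticks < max_ticks:
        if not paused():
            sim.tick()
        await asyncio.sleep(tick_delay)

sim = Sim(llm_call_prob=0.45)
asyncio.run(sim_loop(sim, tick_delay=0, max_ticks=200))
print(sim.ticks, sim.llm_calls)   # llm_calls lands near 45% of 200
```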

---

## Agent Cognition Loop

```
Every tick, each NPC agent runs:

┌──────────────────────────────────────────────────────┐
│                                                      │
│  OBSERVE ──► perceive nearby agents, events,         │
│              location, time of day                   │
│     │                                                │
│     ▼                                                │
│  REFLECT ──► check memory.should_reflect()           │
│              LLM synthesises insight from recent     │
│              memories → stored as reflection         │
│     │                                                │
│     ▼                                                │
│  PLAN ─────► if no daily plan: LLM generates         │
│              ordered list of goals for the day       │
│              (or routine fills the plan — no LLM)    │
│     │                                                │
│     ▼                                                │
│  ACT ──────► routine slot? → execute directly        │
│              no slot? → LLM picks action             │
│              action types: move / work / eat /       │
│              sleep / socialise / leisure / rest      │
│     │                                                │
│     ▼                                                │
│  REMEMBER ─► add_observation() to memory stream      │
│              importance 1–10, recency decay,         │
│              retrieved by relevance score            │
│                                                      │
└──────────────────────────────────────────────────────┘

LLM budget per tick (rate-limited providers):
  max_llm_calls_this_tick = 1     (Gemini/Groq/HF)
  llm_call_probability    = 0.45  (Gemini default → ~10h/day)
```
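The REMEMBER stage's retrieval rule — importance, recency decay, relevance — follows the generative-agents recipe. A toy scorer shows how the three combine (the weights and decay constant are illustrative, not the project's values):

```python
import math  # not strictly needed; exponential decay via ** below

def score_memory(importance, hours_since_access, relevance, decay=0.995):
    """Rank a memory: importance (1-10, normalised), exponential
    recency decay, and query relevance (0-1), summed."""
    recency = decay ** hours_since_access   # 1.0 when freshly accessed
    return importance / 10 + recency + relevance

memories = [
    # (text, importance, hours since last access, relevance to query)
    ("ate breakfast",        2, 1,   0.1),
    ("met Ana at the park",  7, 5,   0.9),
    ("moved to a new house", 9, 200, 0.3),
]
ranked = sorted(memories, key=lambda m: score_memory(*m[1:]), reverse=True)
print(ranked[0][0])  # met Ana at the park
```

A highly important but stale, irrelevant memory loses to a moderately important one that matches the current query.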

---

## Tech Stack

| Layer | Technology |
|-------|-----------|
| Language | Python 3.10+ |
| API server | FastAPI + Uvicorn |
| Real-time | WebSocket (FastAPI) |
| Database | SQLite via aiosqlite |
| LLM providers | Gemini · Groq · Anthropic Claude · HF Inference · Ollama |
| Config | YAML (city layout, agent personas) |
| State persistence | GitHub API (simulation-state branch) |
| Container | Docker (HF Spaces / Render) |

---

## Quick Start (Local)

### Prerequisites
- Python 3.10+
- At least one LLM API key — or [Ollama](https://ollama.ai) installed locally (free, no key needed)

### Install

```bash
git clone https://github.com/Bonum/Soci.git
cd Soci
pip install -r requirements.txt
```

### Configure

```bash
# Pick ONE provider (Gemini recommended — free tier is generous):
export GEMINI_API_KEY=AIza...        # https://aistudio.google.com/apikey
# or
export GROQ_API_KEY=gsk_...          # https://console.groq.com
# or
export ANTHROPIC_API_KEY=sk-ant-...
# or install Ollama and pull a model — no key needed
```

### Run

```bash
# Web UI (recommended)
python -m uvicorn soci.api.server:app --host 0.0.0.0 --port 8000
# Open http://localhost:8000

# Terminal only
python main.py --ticks 20 --agents 5
```

---

## Deploying to the Internet

### Option 1 — Hugging Face Spaces (free, recommended)

HF Spaces runs the Docker container for free with automatic HTTPS.

1. **Create a Space** at https://huggingface.co/new-space
   - SDK: **Docker**
   - Visibility: Public

2. **Add the HF remote** and push:
   ```bash
   git remote add hf https://YOUR_HF_USERNAME:YOUR_HF_TOKEN@huggingface.co/spaces/YOUR_HF_USERNAME/soci
   git push hf master:main
   ```
   Get a write token at https://huggingface.co/settings/tokens (select *Write* + *Inference Providers* permissions).

3. **Add Space secrets** (Settings → Variables and Secrets):

   | Secret | Value |
   |--------|-------|
   | `SOCI_PROVIDER` | `gemini` |
   | `GEMINI_API_KEY` | your AI Studio key |
   | `GITHUB_TOKEN` | GitHub PAT (repo read/write) |
   | `GITHUB_OWNER` | your GitHub username |
   | `GITHUB_REPO_NAME` | `Soci` |

4. Your Space rebuilds automatically on every push. Visit
   `https://YOUR_HF_USERNAME-soci.hf.space`

> **Free-tier tip:** Gemini free tier = 5 RPM, ~1500 requests/day (resets midnight Pacific).
> The default LLM probability is **45%**, which gives roughly 10 hours of AI-driven simulation per day.
> Use the 🧠 slider in the toolbar to adjust at runtime.
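To reason about how the probability slider, tick delay, and RPM cap interact with the daily budget, a back-of-envelope estimator helps. It assumes at most one LLM call per tick, which is a simplification — the real per-tick call pattern differs — so treat the output as a rough bound, not the exact figure quoted above:

```python
def hours_of_runtime(daily_limit=1500, tick_delay=4.0, llm_prob=0.45,
                     rpm_cap=5, calls_per_tick=1):
    """Estimate how long a daily request budget lasts, given the
    tick rate, the probability gate, and the provider's RPM cap."""
    ticks_per_min = 60 / tick_delay
    calls_per_min = min(ticks_per_min * llm_prob * calls_per_tick, rpm_cap)
    return daily_limit / (calls_per_min * 60)

# Lowering the slider stretches the same daily budget over more hours:
for prob in (1.0, 0.45, 0.1):
    print(f"prob={prob}: ~{hours_of_runtime(llm_prob=prob):.1f}h")
```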

---

### Option 2 — Render (free tier)

1. Connect your GitHub repo at https://render.com/new
2. Choose **Web Service** → Docker
3. Set **Start Command**:
   ```
   python -m uvicorn soci.api.server:app --host 0.0.0.0 --port $PORT
   ```
4. Set environment variables in the Render dashboard:

   | Variable | Value |
   |----------|-------|
   | `SOCI_PROVIDER` | `gemini` or `groq` |
   | `GEMINI_API_KEY` | your key |
   | `GITHUB_TOKEN` | GitHub PAT |
   | `GITHUB_OWNER` | your GitHub username |
   | `GITHUB_REPO_NAME` | `Soci` |

5. To prevent state-file commits from triggering redeploys, set **Ignore Command**:
   ```
   [ "$(git diff --name-only HEAD~1 HEAD | grep -v '^state/' | wc -l)" = "0" ]
   ```

> **Note:** Render free tier spins down after 15 min of inactivity. Simulation state is saved to GitHub on shutdown and restored on the next boot — no data is lost.
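The save-on-shutdown mechanism relies on GitHub's documented contents endpoint (`PUT /repos/{owner}/{repo}/contents/{path}`). A sketch of building that request — this is not the project's `persistence/` code; only the payload shape follows GitHub's REST API, and the defaults mirror the `GITHUB_STATE_FILE` / `GITHUB_STATE_BRANCH` settings:

```python
import base64
import json

def build_state_commit(state: dict, path="state/autosave.json",
                       branch="simulation-state", prev_sha=None):
    """Build the path and JSON body for GitHub's 'create or update
    file contents' endpoint."""
    body = {
        "message": "autosave simulation state",
        "content": base64.b64encode(
            json.dumps(state).encode()).decode(),  # API requires base64
        "branch": branch,                          # never touches main
    }
    if prev_sha:                                   # required when updating
        body["sha"] = prev_sha
    return f"/repos/{{owner}}/{{repo}}/contents/{path}", body

url, body = build_state_commit({"tick": 42, "agents": []})
# Send with e.g.:
#   httpx.put(f"https://api.github.com{url}", json=body,
#             headers={"Authorization": f"Bearer {GITHUB_TOKEN}"})
```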

---

### Option 3 — Railway

1. Go to https://railway.app → **New Project** → **Deploy from GitHub repo**
2. Railway auto-detects the Dockerfile
3. Add environment variables in the Railway dashboard (same as Render above)
4. Railway assigns a public URL automatically

---

### Option 4 — Local + ngrok (quick public URL for testing)

```bash
# Start the server
python -m uvicorn soci.api.server:app --host 0.0.0.0 --port 8000 &

# Expose it publicly (install ngrok first: https://ngrok.com)
ngrok http 8000
# Copy the https://xxxx.ngrok.io URL and share it
```

---

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `SOCI_PROVIDER` | auto-detect | LLM provider: `gemini` · `groq` · `claude` · `hf` · `ollama` |
| `GEMINI_API_KEY` | — | Google AI Studio key (free tier: 5 RPM, ~1500 RPD) |
| `GROQ_API_KEY` | — | Groq API key (free tier: 30 RPM) |
| `ANTHROPIC_API_KEY` | — | Anthropic Claude API key |
| `SOCI_LLM_PROB` | per-provider | LLM call probability 0–1 (`0.45` Gemini · `0.7` Groq · `1.0` Ollama) |
| `GEMINI_DAILY_LIMIT` | `1500` | Override Gemini daily request quota for warning thresholds |
| `SOCI_AGENTS` | `50` | Starting agent count |
| `SOCI_TICK_DELAY` | `0.5` | Seconds between simulation ticks (overridden to 4.0 for rate-limited providers) |
| `SOCI_DATA_DIR` | `data` | Directory for SQLite DB and snapshots |
| `GITHUB_TOKEN` | — | GitHub PAT for state persistence across deploys |
| `GITHUB_OWNER` | — | GitHub repo owner (e.g. `alice`) |
| `GITHUB_REPO_NAME` | — | GitHub repo name (e.g. `Soci`) |
| `GITHUB_STATE_BRANCH` | `simulation-state` | Branch used for state snapshots (never touches main) |
| `GITHUB_STATE_FILE` | `state/autosave.json` | Path inside repo for state file |
| `PORT` | `8000` | HTTP port (set to `7860` on HF Spaces automatically) |
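`SOCI_PROVIDER` defaults to auto-detect. The README doesn't spell out the detection logic, so the order below (first configured key wins, Ollama as the keyless fallback) is a guess for illustration, not the source's behaviour:

```python
import os

def detect_provider(env=os.environ):
    """Pick a provider from SOCI_PROVIDER, else from whichever key is set."""
    explicit = env.get("SOCI_PROVIDER")
    if explicit:
        return explicit
    for var, provider in [("GEMINI_API_KEY", "gemini"),
                          ("GROQ_API_KEY", "groq"),
                          ("ANTHROPIC_API_KEY", "claude")]:
        if env.get(var):
            return provider
    return "ollama"   # local fallback, no key needed

print(detect_provider({"GROQ_API_KEY": "gsk_x"}))  # groq
```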

---

## Web UI Controls

| Control | How |
|---------|-----|
| Zoom | Scroll wheel or **＋ / －** buttons |
| Fit view | **Fit** button |
| Pan | Drag canvas or use sliders |
| Rectangle zoom | Click the rectangle button, then drag |
| Inspect agent | Click agent on map or in sidebar list |
| Speed | **1x 2x 5x 10x 50x** buttons |
| LLM usage | **🧠** slider (0–100%) — tune AI call frequency |
| Switch LLM | Click the provider badge (e.g. **Gemini 2.0 Flash**) |
| **Login / play** | Register → your agent appears with a gold ring |
| **Talk to NPC** | Select agent → **Talk to [Name]** button |
| **Move** | Player panel → location dropdown → **Go** |
| **Edit profile** | Player panel → **Edit Profile** |
| **Add plans** | Player panel → **My Plans** |

---

## LLM Provider Comparison

| Provider | Free tier | RPM | Daily limit | Best for |
|----------|-----------|-----|-------------|----------|
| **Gemini 2.0 Flash** | ✅ Yes | 5 | ~1500 req | Cloud demos (default) |
| **Groq Llama 3.1 8B** | ✅ Yes | 30 | ~14k tokens/min | Fast responses |
| **Ollama** | ✅ Local | — | — | Local dev, no quota |
| **Anthropic Claude** | ❌ Paid | — | — | Highest quality |
| **HF Inference** | ⚠️ PRO only | 5 | varies | Experimenting |

---

## Project Structure

```
Soci/
├── src/soci/
│   ├── world/         City map, simulation clock, world events
│   ├── agents/        Agent cognition: persona, memory, needs, relationships
│   ├── actions/       Movement, activities, conversation, social actions
│   ├── engine/        Simulation loop, scheduler, entropy, LLM clients
│   ├── persistence/   SQLite database, save/load snapshots
│   └── api/           FastAPI REST + WebSocket server
├── config/
│   ├── city.yaml      City layout, building positions, zones
│   └── personas.yaml  Named character definitions (20 hand-crafted agents)
├── web/
│   └── index.html     Single-file web UI (no framework)
├── Dockerfile         For HF Spaces / Render / Railway deployment
├── render.yaml        Render deployment config
└── main.py            Terminal runner (no UI)
```

---

## License