---
title: Soci
emoji: πŸ™οΈ
colorFrom: blue
colorTo: purple
sdk: docker
app_port: 7860
pinned: false
---

# Soci β€” LLM-Powered City Population Simulator

Soci simulates a diverse population of AI-driven people living in a city, using an LLM as the reasoning engine. Each agent has a unique persona, memory stream, needs, and relationships.

Inspired by [Stanford Generative Agents (Joon Park et al.)](https://arxiv.org/abs/2304.03442), CitySim, AgentSociety, and a16z ai-town.

**Live demo:** https://huggingface.co/spaces/RayMelius/soci

---

## Features

- AI agents with unique personas, goals, and memories
- Maslow-inspired needs system (hunger, energy, social, purpose, comfort, fun; sketched after this list)
- Relationship graph with familiarity, trust, sentiment, and romance
- Agent cognition loop: **OBSERVE β†’ REFLECT β†’ PLAN β†’ ACT β†’ REMEMBER**
- Web UI with animated city map, zoom, pan, and agent inspector
- Road-based movement with L-shaped routing (agents walk along streets)
- Agent animations: walking (profile/back view), sleeping in bed
- Speed controls (1x β†’ 50x) and real-time WebSocket sync across browsers
- **LLM probability slider** β€” tune AI usage from 0–100% to stay within free-tier quotas
- **Player login** β€” register an account, get your own agent on the map, chat with NPCs
- Multi-LLM support: Gemini (free tier), Groq (free tier), Anthropic Claude, Ollama (local)
- GitHub-based state persistence (survives server reboots and redeploys)
- Cost-efficient model routing (Haiku for routine, Sonnet for novel situations)
- Daily quota circuit-breaker with warnings at 50 / 70 / 90 / 99% usage
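
The needs system drives most non-LLM behaviour, so it is worth sketching. Here is a minimal sketch of the shape such a model can take (the `Needs` class, decay rates, and method names are illustrative, not the actual `soci.agents` code):

```python
from dataclasses import dataclass, field

# Illustrative decay per tick; the real rates live in soci's agent code.
DECAY = {"hunger": 0.8, "energy": 0.5, "social": 0.3,
         "purpose": 0.2, "comfort": 0.2, "fun": 0.4}

@dataclass
class Needs:
    """Each need sits in [0, 100]; lower means more urgent."""
    levels: dict = field(default_factory=lambda: {k: 100.0 for k in DECAY})

    def tick(self) -> None:
        """Decay every need a little each simulation tick."""
        for need, rate in DECAY.items():
            self.levels[need] = max(0.0, self.levels[need] - rate)

    def most_urgent(self) -> str:
        return min(self.levels, key=self.levels.get)
```

An agent whose `most_urgent()` need is `hunger` can be routed to an eat action by plain routine logic, without spending an LLM call.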

---

## System Architecture

```
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Browser  (web/index.html β€” single-file Vue-less UI)    β”‚
β”‚  β€’ Canvas city map  β€’ Agent inspector  β€’ Chat panel     β”‚
β”‚  β€’ Speed / LLM-probability sliders  β€’ Login modal       β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
         β”‚ REST  GET /api/*             β”‚ WebSocket /ws
         β”‚ POST /api/controls/*         β”‚ push events
         β–Ό                              β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  FastAPI Server  (soci/api/server.py)                    β”‚
β”‚  β€’ lifespan: load state β†’ start sim loop                β”‚
β”‚  β€’ routes.py  β€” REST endpoints                          β”‚
β”‚  β€’ websocket.py β€” broadcast tick events                 β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚ asyncio.create_task
                         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Simulation Loop  (background task)                      β”‚
β”‚  tick every N sec  β†’  sim.tick()  β†’  sleep              β”‚
β”‚  respects: _sim_paused / _sim_speed / llm_call_prob      β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚
                         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Simulation.tick()  (engine/simulation.py)               β”‚
β”‚                                                          β”‚
β”‚  1. Entropy / world events                               β”‚
β”‚  2. Daily plan generation  ──► LLM (if prob gate βœ“)     β”‚
β”‚  3. Agent needs + routine actions  (no LLM)             β”‚
β”‚  4. LLM action decisions  ─────► LLM (if prob gate βœ“)  β”‚
β”‚  5. Conversation turns  ───────► LLM (if prob gate βœ“)  β”‚
β”‚  6. New conversation starts  ──► LLM (if prob gate βœ“)  β”‚
β”‚  7. Reflections  ──────────────► LLM (if prob gate βœ“)  β”‚
β”‚  8. Romance / relationship updates  (no LLM)            β”‚
β”‚  9. Clock advance                                        β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚ await llm.complete_json()
                         β–Ό
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  LLM Client  (engine/llm.py)                             β”‚
β”‚  β€’ Rate limiter (asyncio.Lock, min interval per RPM)     β”‚
β”‚  β€’ Daily usage counter β†’ warns at 50/70/90/99%          β”‚
β”‚  β€’ Quota circuit-breaker (expires midnight Pacific)      β”‚
β”‚  β€’ Providers: GeminiClient / GroqClient /                β”‚
β”‚               ClaudeClient / HFInferenceClient /         β”‚
β”‚               OllamaClient                               β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
                         β”‚ HTTP (httpx async)
                         β–Ό
              External LLM API  (Gemini / Groq / …)
```
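
The rate limiter in the LLM client box is what keeps a 5 RPM free tier usable. A minimal sketch, assuming the `asyncio.Lock` plus minimum-interval approach the diagram describes (class and attribute names are illustrative):

```python
import asyncio
import time

class RateLimiter:
    """Space out calls so a provider's RPM cap is never exceeded."""

    def __init__(self, rpm: int):
        self.min_interval = 60.0 / rpm      # e.g. 5 RPM -> one call per 12 s
        self._lock = asyncio.Lock()
        self._last_call = 0.0

    async def wait(self) -> None:
        async with self._lock:              # serialise concurrent callers
            elapsed = time.monotonic() - self._last_call
            if elapsed < self.min_interval:
                await asyncio.sleep(self.min_interval - elapsed)
            self._last_call = time.monotonic()
```

Every provider call would `await` the limiter first; the daily usage counter and the 50/70/90/99% warnings then sit on top of the same code path.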

---

## Message Flow β€” One Simulation Tick

```
Browser poll (3s)          Simulation background loop
      β”‚                              β”‚
      β”‚  GET /api/city              tick_delay (4s Gemini, 0.5s Ollama)
      │◄─────────────────────────────
      β”‚                             β”‚ sim.tick()
      β”‚                             β”‚
      β”‚                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚                   β”‚  For each agent:   β”‚
      β”‚                   β”‚  tick_needs()      β”‚
      β”‚                   β”‚  check routine     │──► execute routine action (no LLM)
      β”‚                   β”‚  roll prob gate    β”‚
      β”‚                   β”‚  _decide_action()  │──► LLM call ──► AgentAction JSON
      β”‚                   β”‚  _execute_action() β”‚
      β”‚                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β”‚                             β”‚
      β”‚                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚                   β”‚  Conversations:    β”‚
      β”‚                   β”‚  continue_conv()   │──► LLM call ──► dialogue turn
      β”‚                   β”‚  new conv start    │──► LLM call ──► opening line
      β”‚                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β”‚                             β”‚
      β”‚                   β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”΄β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
      β”‚                   β”‚  Reflections:      β”‚
      β”‚                   β”‚  should_reflect()? │──► LLM call ──► memory insight
      β”‚                   β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β”‚                             β”‚
      β”‚                        clock.tick()
      β”‚                             β”‚
      β”‚  WebSocket push             β”‚
      │◄── events/state β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
      β”‚
[browser updates map, event log, agent inspector]
```
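
The right-hand column above is an ordinary asyncio background task. A sketch of its shape, assuming module-level pause/speed controls like the `_sim_paused` / `_sim_speed` flags named in the architecture diagram:

```python
import asyncio

# Illustrative module-level controls; the real flags live in the API layer.
_sim_paused = False
_sim_speed = 1.0

async def simulation_loop(sim, tick_delay: float = 4.0) -> None:
    """Background task started from the FastAPI lifespan handler."""
    while True:
        if not _sim_paused:
            await sim.tick()    # per-agent LLM gates are rolled inside tick()
        await asyncio.sleep(tick_delay / _sim_speed)
```

The "roll prob gate" step in the middle column amounts to a check like `random.random() < llm_call_probability`, which is exactly what the 🧠 slider tunes at runtime.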

---

## Agent Cognition Loop

```
Every tick, each NPC agent runs:

  β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
  β”‚                                                      β”‚
  β”‚   OBSERVE ──► perceive nearby agents, events,        β”‚
  β”‚               location, time of day                  β”‚
  β”‚      β”‚                                               β”‚
  β”‚      β–Ό                                               β”‚
  β”‚   REFLECT ──► check memory.should_reflect()          β”‚
  β”‚               LLM synthesises insight from recent    β”‚
  β”‚               memories  β†’  stored as reflection      β”‚
  β”‚      β”‚                                               β”‚
  β”‚      β–Ό                                               β”‚
  β”‚   PLAN  ───► if no daily plan: LLM generates         β”‚
  β”‚               ordered list of goals for the day      β”‚
  β”‚               (or routine fills the plan β€” no LLM)   β”‚
  β”‚      β”‚                                               β”‚
  β”‚      β–Ό                                               β”‚
  β”‚   ACT  ────► routine slot?  β†’ execute directly       β”‚
  β”‚               no slot?      β†’ LLM picks action       β”‚
  β”‚               action types: move / work / eat /      β”‚
  β”‚               sleep / socialise / leisure / rest     β”‚
  β”‚      β”‚                                               β”‚
  β”‚      β–Ό                                               β”‚
  β”‚   REMEMBER β–Ί add_observation() to memory stream      β”‚
  β”‚               importance 1–10, recency decay,        β”‚
  β”‚               retrieved by relevance score           β”‚
  β”‚                                                      β”‚
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜

  LLM budget per tick (rate-limited providers):
    max_llm_calls_this_tick = 1   (Gemini/Groq/HF)
    llm_call_probability    = 0.45 (Gemini default β†’ ~10h/day)
```
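
The REMEMBER step follows the recipe from the Stanford generative-agents paper cited above: memories carry an importance score, recency decays over time, and retrieval ranks by a weighted combination. A sketch of such a scorer (the weights, decay constant, and `memory` fields are illustrative):

```python
import math

def retrieval_score(memory, query_relevance: float, now: float,
                    w_recency: float = 1.0, w_importance: float = 1.0,
                    w_relevance: float = 1.0) -> float:
    """Rank a memory for retrieval, generative-agents style."""
    hours_old = (now - memory.created_at) / 3600.0
    recency = math.exp(-0.1 * hours_old)        # exponential recency decay
    importance = memory.importance / 10.0       # LLM rates importance 1-10
    return (w_recency * recency
            + w_importance * importance
            + w_relevance * query_relevance)    # e.g. similarity to the query
```

Reflections are then produced by asking the LLM to synthesise an insight over the top-scoring memories, gated by `should_reflect()`.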

---

## Tech Stack

| Layer | Technology |
|-------|-----------|
| Language | Python 3.10+ |
| API server | FastAPI + Uvicorn |
| Real-time | WebSocket (FastAPI) |
| Database | SQLite via aiosqlite |
| LLM providers | Gemini Β· Groq Β· Anthropic Claude Β· HF Inference Β· Ollama |
| Config | YAML (city layout, agent personas) |
| State persistence | GitHub API (simulation-state branch) |
| Container | Docker (HF Spaces / Render) |
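
The glue between these layers is the standard FastAPI lifespan pattern: load state, start the simulation loop as a background task, and tear it down on shutdown. A minimal sketch reusing the `simulation_loop` shape sketched earlier (`load_state_or_create` and `save_state` are illustrative placeholders):

```python
import asyncio
from contextlib import asynccontextmanager
from fastapi import FastAPI, WebSocket

@asynccontextmanager
async def lifespan(app: FastAPI):
    sim = await load_state_or_create()        # illustrative state loader
    task = asyncio.create_task(simulation_loop(sim))
    yield                                     # serve requests while the sim runs
    task.cancel()                             # stop the loop on shutdown...
    await save_state(sim)                     # ...and persist state (illustrative)

app = FastAPI(lifespan=lifespan)

@app.websocket("/ws")
async def ws_endpoint(ws: WebSocket):
    await ws.accept()
    # ... push tick events broadcast by the simulation loop ...
```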

---

## Quick Start (Local)

### Prerequisites
- Python 3.10+
- At least one LLM API key β€” or [Ollama](https://ollama.ai) installed locally (free, no key needed)

### Install

```bash
git clone https://github.com/Bonum/Soci.git
cd Soci
pip install -r requirements.txt
```

### Configure

```bash
# Pick ONE provider (Gemini recommended β€” free tier is generous):
export GEMINI_API_KEY=AIza...        # https://aistudio.google.com/apikey
# or
export GROQ_API_KEY=gsk_...          # https://console.groq.com
# or
export ANTHROPIC_API_KEY=sk-ant-...
# or install Ollama and pull a model β€” no key needed
```
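
With no `SOCI_PROVIDER` set, the server auto-detects a provider (see the environment-variable table below). A plausible sketch of that precedence, assuming it simply checks which key is present:

```python
import os

def detect_provider() -> str:
    """Pick a provider from whichever API key is set (illustrative order)."""
    explicit = os.environ.get("SOCI_PROVIDER")
    if explicit:
        return explicit
    if os.environ.get("GEMINI_API_KEY"):
        return "gemini"
    if os.environ.get("GROQ_API_KEY"):
        return "groq"
    if os.environ.get("ANTHROPIC_API_KEY"):
        return "claude"
    return "ollama"                 # local fallback, no key needed
```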

### Run

```bash
# Web UI (recommended)
python -m uvicorn soci.api.server:app --host 0.0.0.0 --port 8000
# Open http://localhost:8000

# Terminal only
python main.py --ticks 20 --agents 5
```

---

## Deploying to the Internet

### Option 1 β€” Hugging Face Spaces (free, recommended)

HF Spaces runs the Docker container for free with automatic HTTPS.

1. **Create a Space** at https://huggingface.co/new-space
   - SDK: **Docker**
   - Visibility: Public

2. **Add the HF remote** and push:
   ```bash
   git remote add hf https://YOUR_HF_USERNAME:YOUR_HF_TOKEN@huggingface.co/spaces/YOUR_HF_USERNAME/soci
   git push hf master:main
   ```
   Get a write token at https://huggingface.co/settings/tokens (select *Write* + *Inference Providers* permissions).

3. **Add Space secrets** (Settings β†’ Variables and Secrets):

   | Secret | Value |
   |--------|-------|
   | `SOCI_PROVIDER` | `gemini` |
   | `GEMINI_API_KEY` | your AI Studio key |
   | `GITHUB_TOKEN` | GitHub PAT (repo read/write) |
   | `GITHUB_OWNER` | your GitHub username |
   | `GITHUB_REPO_NAME` | `Soci` |

4. Your Space rebuilds automatically on every push. Visit
   `https://YOUR_HF_USERNAME-soci.hf.space`

> **Free-tier tip:** Gemini free tier = 5 RPM, ~1500 requests/day (resets midnight Pacific).
> The default LLM probability is set to **45%**, which gives ~10 hours of AI-driven simulation per day.
> Use the 🧠 slider in the toolbar to adjust at runtime.

---

### Option 2 β€” Render (free tier)

1. Connect your GitHub repo at https://render.com/new
2. Choose **Web Service** β†’ Docker
3. Set **Start Command**:
   ```
   python -m uvicorn soci.api.server:app --host 0.0.0.0 --port $PORT
   ```
4. Set environment variables in the Render dashboard:

   | Variable | Value |
   |----------|-------|
   | `SOCI_PROVIDER` | `gemini` or `groq` |
   | `GEMINI_API_KEY` | your key |
   | `GITHUB_TOKEN` | GitHub PAT |
   | `GITHUB_OWNER` | your GitHub username |
   | `GITHUB_REPO_NAME` | `Soci` |

5. To prevent state-file commits from triggering redeploys, set **Ignore Command**:
   ```
   [ "$(git diff --name-only HEAD~1 HEAD | grep -v '^state/' | wc -l)" = "0" ]
   ```

> **Note:** Render free tier spins down after 15 min of inactivity. Simulation state is saved to GitHub on shutdown and restored on the next boot β€” no data is lost.

---

### Option 3 β€” Railway

1. Go to https://railway.app β†’ **New Project** β†’ **Deploy from GitHub repo**
2. Railway auto-detects the Dockerfile
3. Add environment variables in the Railway dashboard (same as Render above)
4. Railway assigns a public URL automatically

---

### Option 4 β€” Local + Ngrok (quick public URL for testing)

```bash
# Start the server
python -m uvicorn soci.api.server:app --host 0.0.0.0 --port 8000 &

# Expose it publicly (install ngrok first: https://ngrok.com)
ngrok http 8000
# Copy the https://xxxx.ngrok.io URL and share it
```

---

## Environment Variables

| Variable | Default | Description |
|----------|---------|-------------|
| `SOCI_PROVIDER` | auto-detect | LLM provider: `gemini` Β· `groq` Β· `claude` Β· `hf` Β· `ollama` |
| `GEMINI_API_KEY` | β€” | Google AI Studio key (free tier: 5 RPM, ~1500 RPD) |
| `GROQ_API_KEY` | β€” | Groq API key (free tier: 30 RPM) |
| `ANTHROPIC_API_KEY` | β€” | Anthropic Claude API key |
| `SOCI_LLM_PROB` | per-provider | LLM call probability 0–1 (`0.45` Gemini Β· `0.7` Groq Β· `1.0` Ollama) |
| `GEMINI_DAILY_LIMIT` | `1500` | Override Gemini daily request quota for warning thresholds |
| `SOCI_AGENTS` | `50` | Starting agent count |
| `SOCI_TICK_DELAY` | `0.5` | Seconds between simulation ticks (overridden to 4.0 for rate-limited providers) |
| `SOCI_DATA_DIR` | `data` | Directory for SQLite DB and snapshots |
| `GITHUB_TOKEN` | β€” | GitHub PAT for state persistence across deploys |
| `GITHUB_OWNER` | β€” | GitHub repo owner (e.g. `alice`) |
| `GITHUB_REPO_NAME` | β€” | GitHub repo name (e.g. `Soci`) |
| `GITHUB_STATE_BRANCH` | `simulation-state` | Branch used for state snapshots (never touches main) |
| `GITHUB_STATE_FILE` | `state/autosave.json` | Path inside repo for state file |
| `PORT` | `8000` | HTTP port (set to `7860` on HF Spaces automatically) |

---

## Web UI Controls

| Control | How |
|---------|-----|
| Zoom | Scroll wheel or **οΌ‹ / -** buttons |
| Fit view | **Fit** button |
| Pan | Drag canvas or use sliders |
| Rectangle zoom | Click **⬚**, then drag |
| Inspect agent | Click agent on map or in sidebar list |
| Speed | **🐒 1x 2x 5x 10x 50x** buttons |
| LLM usage | **🧠** slider (0–100%) β€” tune AI call frequency |
| Switch LLM | Click the provider badge (e.g. **✦ Gemini 2.0 Flash**) |
| **Login / play** | Register β†’ your agent appears with a gold ring |
| **Talk to NPC** | Select agent β†’ **Talk to [Name]** button |
| **Move** | Player panel β†’ location dropdown β†’ **Go** |
| **Edit profile** | Player panel β†’ **Edit Profile** |
| **Add plans** | Player panel β†’ **My Plans** |

---

## LLM Provider Comparison

| Provider | Free tier | RPM | Quota | Best for |
|----------|-----------|-----|-------|----------|
| **Gemini 2.0 Flash** | βœ… Yes | 5 | ~1,500 req/day | Cloud demos (default) |
| **Groq Llama 3.1 8B** | βœ… Yes | 30 | ~14k tokens/min | Fast responses |
| **Ollama** | βœ… Local | ∞ | ∞ | Local dev, no quota |
| **Anthropic Claude** | ❌ Paid | β€” | β€” | Highest quality |
| **HF Inference** | ⚠️ PRO only | 5 | varies | Experimenting |

---

## Project Structure

```
Soci/
β”œβ”€β”€ src/soci/
β”‚   β”œβ”€β”€ world/          City map, simulation clock, world events
β”‚   β”œβ”€β”€ agents/         Agent cognition: persona, memory, needs, relationships
β”‚   β”œβ”€β”€ actions/        Movement, activities, conversation, social actions
β”‚   β”œβ”€β”€ engine/         Simulation loop, scheduler, entropy, LLM clients
β”‚   β”œβ”€β”€ persistence/    SQLite database, save/load snapshots
β”‚   └── api/            FastAPI REST + WebSocket server
β”œβ”€β”€ config/
β”‚   β”œβ”€β”€ city.yaml       City layout, building positions, zones
β”‚   └── personas.yaml   Named character definitions (20 hand-crafted agents)
β”œβ”€β”€ web/
β”‚   └── index.html      Single-file web UI (no framework)
β”œβ”€β”€ Dockerfile          For HF Spaces / Render / Railway deployment
β”œβ”€β”€ render.yaml         Render deployment config
└── main.py             Terminal runner (no UI)
```

---

## License

MIT