# Development Guide

This guide captures what future contributors need to know to extend the AI Survey Simulator quickly.

## Environment Essentials

- Python 3.9+
- Ollama running locally (or another LLM provider wired into `llm_client.py`)
- Optional GPU for faster inference

```bash
cp .env.example .env                # adjust values as needed
pip install -r requirements.txt
```
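
If you use the local Ollama backend, a quick sanity check that the server is reachable (it listens on port 11434 by default):

```bash
# Lists the models Ollama has pulled; any JSON response means the server is up
curl http://localhost:11434/api/tags

# Pull a model to serve — "llama3" is illustrative; match LLM_MODEL in your .env
ollama pull llama3
```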

Key environment variables (see `.env.example`):

- `LLM_BACKEND` — `ollama` (local default) or `openrouter`
- `LLM_HOST` / `LLM_MODEL` — target endpoint & model ID
- `LLM_API_KEY`, `LLM_SITE_URL`, `LLM_APP_NAME` — required when using OpenRouter
- `FRONTEND_BACKEND_BASE_URL` and `FRONTEND_WEBSOCKET_URL` — how the UI talks to FastAPI
- `LOG_LEVEL` — INFO by default
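
For reference, a minimal `.env` for the local Ollama setup might look like this (values are illustrative — copy `.env.example` and adjust it rather than writing one from scratch):

```bash
LLM_BACKEND=ollama
LLM_HOST=http://localhost:11434     # illustrative; point at your Ollama endpoint
LLM_MODEL=llama3                    # any model you have pulled locally
FRONTEND_BACKEND_BASE_URL=http://localhost:8000
FRONTEND_WEBSOCKET_URL=ws://localhost:8000   # exact path: see .env.example
LOG_LEVEL=INFO

# Only needed when LLM_BACKEND=openrouter:
# LLM_API_KEY=...
# LLM_SITE_URL=...
# LLM_APP_NAME=...
```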

## Running the Stack

### Recommended (HF-like) Local Run

This project is deployed on Hugging Face Spaces using Docker. The closest local workflow is to run the Docker image locally:

```bash
./run_docker_local.sh
```

### One Command (legacy local stack)
```bash
./run_local.sh
```
- Starts `ollama serve` (if not already running) — this mode expects `LLM_BACKEND=ollama`
- Launches the FastAPI backend and the Gradio frontend in the background
- Press `Ctrl+C` to stop all three processes

### Manual Terminals (legacy; useful for separate logs)
```bash
# Terminal 1
ollama serve

# Terminal 2
cd backend
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# Terminal 3
cd frontend
python gradio_app.py
```

### Web UI (React hybrid)

The primary demo UI is served by `frontend/react_gradio_hybrid.py` and includes bottom-up + top-down analysis panels.

When running outside Docker, you typically run the backend and the web UI separately; when running in Docker/HF, the backend is mounted under `/api` inside the same server.
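
Outside Docker, that typically means two terminals. The backend invocation mirrors the legacy flow above; launching the hybrid UI directly with Python is an assumption by analogy with `gradio_app.py` — check the file's entry point:

```bash
# Terminal 1 — backend
cd backend
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# Terminal 2 — web UI
cd frontend
python react_gradio_hybrid.py
```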

## Making Changes Safely

- Prefer editing personas via the YAML files in `data/`, then restart the backend to reload them.
- All configuration flows through `config/settings.py`; add new settings there and reference them via `get_settings()` (see the settings sketch after this list).
- When adding LLM providers, implement a new client in `backend/core/llm_client.py` and hook it into the existing factory (see the client sketch below).
- Keep WebSocket message schemas stable (`backend/api/conversation_ws.py`); if you do change them, update both the backend and frontend consumers.
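
For the settings pattern, here is a minimal sketch of what adding a new value tends to look like. It assumes the common pydantic-settings layout; the `max_turns` field is hypothetical, and if the project pins pydantic v1 the import is `from pydantic import BaseSettings` instead:

```python
# config/settings.py — a sketch of the pattern, not the file's actual contents
from functools import lru_cache
from pydantic_settings import BaseSettings

class Settings(BaseSettings):
    # ...existing fields stay as they are...
    max_turns: int = 12  # hypothetical new setting; overridable via MAX_TURNS env var

@lru_cache
def get_settings() -> Settings:
    return Settings()

# Elsewhere in the codebase:
#   from config.settings import get_settings
#   limit = get_settings().max_turns
```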
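
For new LLM providers, the shape is typically: implement the same interface the existing clients use, then register the new class in the factory. The names below (`LLMClient`, `MyProviderClient`, `create_llm_client`) are assumptions for illustration — mirror the real ones in `backend/core/llm_client.py`:

```python
# Sketch only — align names with the existing code in backend/core/llm_client.py
from typing import Protocol

class LLMClient(Protocol):
    async def generate(self, prompt: str) -> str: ...

class MyProviderClient:
    """New provider: same interface the existing clients implement."""
    def __init__(self, settings) -> None:
        self.api_key = settings.llm_api_key  # pulled from config/settings.py

    async def generate(self, prompt: str) -> str:
        raise NotImplementedError("call your provider's API here")

def create_llm_client(settings) -> LLMClient:
    # Merge the new entry into whatever backend registry already exists
    registry = {"myprovider": MyProviderClient}
    return registry[settings.llm_backend](settings)
```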

## Testing & Verification

- There is no automated test suite yet; add lightweight `pytest` modules under `tests/` as you extend functionality (a starter sketch follows this list).
- Manually verify conversations through the Gradio UI.
- To debug the conversation loop, instrument `backend/core/conversation_manager.py` or run it directly from a shell.
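
As a starting point, a smoke test along these lines keeps the bar low. It assumes you run `pytest` from `backend/` so that `api.main` imports; the root route is not guaranteed to exist, which the assertion accounts for:

```python
# tests/test_smoke.py — minimal sketch, not an established suite
from fastapi.testclient import TestClient
from api.main import app  # run pytest from backend/ so this import resolves

client = TestClient(app)

def test_app_starts():
    # Hitting any path proves the app and its routers import cleanly;
    # "/" may legitimately 404 if no root route is defined.
    response = client.get("/")
    assert response.status_code in (200, 404)
```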

## Roadmap & Next Steps

See `docs/roadmap.md` for current priorities, open questions, and suggested next features (persona selector UI, hosted LLM support, etc.).