# Development Guide
This guide captures what future contributors need to know to extend the AI Survey Simulator quickly.
## Environment Essentials
- Python 3.9+
- Ollama running locally (or another LLM provider wired into `llm_client.py`)
- Optional GPU for faster inference
```bash
cp .env.example .env  # adjust values as needed
pip install -r requirements.txt
```
Key environment variables (see `.env.example`):
- `LLM_BACKEND` – `ollama` (local default) or `openrouter`
- `LLM_HOST` / `LLM_MODEL` – target endpoint & model ID
- `LLM_API_KEY`, `LLM_SITE_URL`, `LLM_APP_NAME` – required when using OpenRouter
- `APP_PASSWORD` – optional shared password gate (when set, the UI + API require login)
- `FRONTEND_BACKEND_BASE_URL` and `FRONTEND_WEBSOCKET_URL` – how the UI talks to FastAPI
- `LOG_LEVEL` – `INFO` by default
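Once `.env` is filled in, a quick sanity check that the settings layer picks up your values (a minimal sketch; the attribute names are assumptions, see `config/settings.py` for the real ones):
```python
from config.settings import get_settings

settings = get_settings()
# Attribute names are assumed to mirror the env vars above (hypothetical).
print(settings.llm_backend, settings.llm_model)
```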
## Running the Stack
### Recommended (HF-like) Local Run
This project is deployed on Hugging Face Spaces using Docker. The closest local workflow is to run the Docker image locally:
```bash
./run_docker_local.sh
```
### One Command (legacy local stack)
```bash
./run_local.sh
```
- Starts `ollama serve` (if not already running) – this mode expects `LLM_BACKEND=ollama`
- Launches the FastAPI backend and the Gradio frontend in the background
- Press `Ctrl+C` to stop all three processes
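If the script fails at startup, it is worth confirming that Ollama is actually listening. Ollama's HTTP API lists installed models at `/api/tags`; this sketch assumes the default local port 11434 (adjust to match `LLM_HOST`):
```python
import urllib.request

# GET /api/tags returns the locally installed models as JSON.
with urllib.request.urlopen("http://localhost:11434/api/tags", timeout=5) as resp:
    print(resp.status, resp.read()[:200])
```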
### Manual Terminals (for logs, legacy)
```bash
# Terminal 1
ollama serve

# Terminal 2
cd backend
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# Terminal 3
cd frontend
python gradio_app.py
```
### Web UI (React hybrid)
The primary demo UI is served by `frontend/react_gradio_hybrid.py` and includes bottom-up + top-down analysis panels.
When running outside Docker, you typically run the backend and the web UI separately; when running in Docker/HF, the backend is mounted under `/api` inside the same server.
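To confirm the UI can actually reach the backend, you can probe FastAPI's auto-generated OpenAPI document (served at `/openapi.json` by default) against whatever base URL the frontend is configured with; in Docker/HF that base would include the `/api` prefix:
```python
import os
import urllib.request

# Falls back to the typical local dev address (an assumption).
base = os.environ.get("FRONTEND_BACKEND_BASE_URL", "http://localhost:8000")
with urllib.request.urlopen(f"{base}/openapi.json", timeout=5) as resp:
    print(resp.status)
```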
The UI also includes a **Configuration** view that lets you select personas and add per-role prompt additions. These settings are currently stored in the browser (local-only).
## Making Changes Safely
- Prefer editing personas via the YAML files in `data/`, then restart the backend to reload them.
- All configuration flows through `config/settings.py`; add new settings there and reference them via `get_settings()` (see the first sketch below).
- When adding LLM providers, implement a new client in `backend/core/llm_client.py` and hook it into the existing factory (see the second sketch below).
- Keep WebSocket message schemas stable (`backend/api/conversation_ws.py`); update both backend and frontend consumers if you change them (see the third sketch below).
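For example, a new setting might look like this. This is a sketch assuming the settings module follows the common pydantic `BaseSettings` pattern; check `config/settings.py` for the actual base class and field names:
```python
# config/settings.py (illustrative fragment)
from functools import lru_cache

from pydantic_settings import BaseSettings  # pydantic.BaseSettings on older pydantic


class Settings(BaseSettings):
    llm_backend: str = "ollama"
    enable_tracing: bool = False  # hypothetical new setting, read from ENABLE_TRACING


@lru_cache
def get_settings() -> Settings:
    return Settings()
```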
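A new provider client could follow this shape; every name here is hypothetical, so match the signatures the existing factory actually uses before copying anything:
```python
# backend/core/llm_client.py (sketch)
class MyProviderClient:
    """Hypothetical client for an additional hosted provider."""

    def __init__(self, host: str, model: str, api_key: str) -> None:
        self.host = host
        self.model = model
        self.api_key = api_key

    def generate(self, prompt: str) -> str:
        # Call the provider's completion endpoint here.
        raise NotImplementedError


def create_llm_client(settings):
    # The existing factory presumably branches on LLM_BACKEND; add one case.
    if settings.llm_backend == "myprovider":
        return MyProviderClient(settings.llm_host, settings.llm_model, settings.llm_api_key)
    raise ValueError(f"Unknown LLM backend: {settings.llm_backend}")
```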
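One way to keep those schemas stable is to describe the wire format in a single typed structure that both sides mirror. The fields below are purely illustrative; the real ones live in `backend/api/conversation_ws.py`:
```python
from typing import Any, TypedDict


class ConversationMessage(TypedDict):
    """Hypothetical wire format; mirror any change in the frontend consumer."""

    type: str  # e.g. "persona_turn" or "status" (illustrative values)
    payload: dict[str, Any]
```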
## Testing & Verification
- No automated test suite yet. Add lightweight `pytest` modules under `tests/` as you extend functionality (see the sketch after this list).
- Manually verify conversations through the Gradio UI.
- If you need to debug the conversation loop, instrument `backend/core/conversation_manager.py` or launch a shell and run it directly.
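A first test module can be very small. This sketch assumes `get_settings()` is importable as above and that `LLM_BACKEND` is one of the two documented values:
```python
# tests/test_settings.py
from config.settings import get_settings


def test_settings_load():
    settings = get_settings()
    # Assumed invariant based on the documented backends; adjust if more are added.
    assert settings.llm_backend in {"ollama", "openrouter"}
```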
## Notes on Persistence (HF)
Hugging Face Spaces provide a persistent volume (typically mounted at `/data` in Docker Spaces). This repo does not yet persist app data there; future work (persona editing + conversation history) should store shared artifacts under `/data` and treat repo YAML under `data/` as defaults.
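When that work lands, a small helper along these lines could pick the right root; the `/data` mount and the repo fallback are assumptions taken directly from the note above:
```python
from pathlib import Path


def data_root() -> Path:
    """Prefer the HF persistent volume when present, else repo defaults."""
    persistent = Path("/data")
    return persistent if persistent.is_dir() else Path("data")
```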
## Roadmap & Next Steps
See `docs/roadmap.md` for current priorities, open questions, and suggested next features (persona selector UI, hosted LLM support, etc.).