# Development Guide
This guide captures what future contributors need to know to extend the AI Survey Simulator quickly.
## Environment Essentials

- Python 3.9+
- Ollama running locally (or another LLM provider wired into `llm_client.py`)
- Optional GPU for faster inference

```bash
cp .env.example .env   # adjust values as needed
pip install -r requirements.txt
```
Key environment variables (see `.env.example`):

- `LLM_BACKEND`: `ollama` (local default) or `openrouter`
- `LLM_HOST` / `LLM_MODEL`: target endpoint & model ID
- `LLM_API_KEY`, `LLM_SITE_URL`, `LLM_APP_NAME`: required when using OpenRouter
- `FRONTEND_BACKEND_BASE_URL` and `FRONTEND_WEBSOCKET_URL`: how the UI talks to FastAPI
- `LOG_LEVEL`: `INFO` by default
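Put together, a minimal `.env` for the local Ollama default might look like the sketch below. The host, port, and model values are illustrative (11434 is Ollama's default port; the model name is whatever you have pulled), not required settings:

```bash
# Illustrative values only — start from .env.example and adjust
LLM_BACKEND=ollama
LLM_HOST=http://localhost:11434   # default Ollama port
LLM_MODEL=llama3                  # any model you have pulled locally
# LLM_API_KEY=                    # only needed when LLM_BACKEND=openrouter
# LLM_SITE_URL=                   # only needed when LLM_BACKEND=openrouter
# LLM_APP_NAME=                   # only needed when LLM_BACKEND=openrouter
FRONTEND_BACKEND_BASE_URL=http://localhost:8000
FRONTEND_WEBSOCKET_URL=ws://localhost:8000
LOG_LEVEL=INFO
```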
## Running the Stack

### Recommended (HF-like) Local Run

This project is deployed on Hugging Face Spaces using Docker. The closest local workflow is to run the Docker image locally:

```bash
./run_docker_local.sh
```

### One Command (legacy local stack)

```bash
./run_local.sh
```
- Starts `ollama serve` if it is not already running; this mode expects `LLM_BACKEND=ollama`
- Launches the FastAPI backend and Gradio frontend in the background
- Press `Ctrl+C` to stop all three processes
### Manual Terminals (for logs, legacy)

```bash
# Terminal 1
ollama serve

# Terminal 2
cd backend
uvicorn api.main:app --reload --host 0.0.0.0 --port 8000

# Terminal 3
cd frontend
python gradio_app.py
```
### Web UI (React hybrid)

The primary demo UI is served by `frontend/react_gradio_hybrid.py` and includes bottom-up and top-down analysis panels.

When running outside Docker, you typically run the backend and the web UI separately; when running in Docker/HF Spaces, the backend is mounted under `/api` inside the same server.
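Concretely, the two frontend variables from `.env.example` typically differ between these modes. The values below, especially the WebSocket route, are illustrative assumptions, not the project's confirmed paths:

```bash
# Outside Docker: UI talks to a separately running backend (illustrative)
FRONTEND_BACKEND_BASE_URL=http://localhost:8000
FRONTEND_WEBSOCKET_URL=ws://localhost:8000

# In Docker / HF Spaces: backend mounted under /api on the same server
FRONTEND_BACKEND_BASE_URL=/api
FRONTEND_WEBSOCKET_URL=/api
```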
## Making Changes Safely

- Prefer editing personas via YAML (`data/`) and restart the backend to reload.
- All configuration flows through `config/settings.py`; add new settings there and reference them via `get_settings()`.
- When adding LLM providers, implement a new client in `backend/core/llm_client.py` and hook it into the existing factory.
- Keep WebSocket message schemas stable (`backend/api/conversation_ws.py`); update both backend and frontend consumers if you change them.
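For the provider-factory point above, a minimal sketch of the pattern follows. The class and function names are illustrative, not the project's actual API in `backend/core/llm_client.py`; the idea is simply that adding a provider means writing one client class and registering it in one place:

```python
from abc import ABC, abstractmethod


class LLMClient(ABC):
    """Common interface every provider client implements."""

    @abstractmethod
    def generate(self, prompt: str) -> str: ...


class OllamaClient(LLMClient):
    def generate(self, prompt: str) -> str:
        # Placeholder for a real HTTP call to the local Ollama server
        return f"[ollama reply to: {prompt}]"


class OpenRouterClient(LLMClient):
    def generate(self, prompt: str) -> str:
        # Placeholder for a real OpenRouter API call
        return f"[openrouter reply to: {prompt}]"


# Registry keyed by the LLM_BACKEND setting; new providers register here.
_PROVIDERS = {
    "ollama": OllamaClient,
    "openrouter": OpenRouterClient,
}


def make_llm_client(backend: str) -> LLMClient:
    """Instantiate the client registered for the given backend name."""
    try:
        return _PROVIDERS[backend]()
    except KeyError:
        raise ValueError(f"Unknown LLM_BACKEND: {backend!r}") from None
```

With this shape, wiring in a new provider touches only the new class and the `_PROVIDERS` registry, leaving callers unchanged.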
## Testing & Verification

- No automated test suite yet. Add lightweight `pytest` modules under `tests/` as you extend functionality.
- Manually verify conversations through the Gradio UI.
- If you need to debug the conversation loop, instrument `backend/core/conversation_manager.py` or launch a shell and run it directly.
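A starter `pytest` module can be very small. The sketch below shows the shape such a file under `tests/` might take; `parse_persona` and its semicolon-delimited format are purely illustrative stand-ins for the project's real loaders:

```python
# tests/test_smoke.py — hypothetical starter module; pytest discovers
# any test_* function in a test_* file automatically.

def parse_persona(line: str) -> dict:
    """Illustrative helper: "name=Alex;age=34" -> {"name": "Alex", "age": "34"}."""
    return dict(field.split("=", 1) for field in line.split(";"))


def test_parse_persona():
    persona = parse_persona("name=Alex;age=34")
    assert persona == {"name": "Alex", "age": "34"}
```

Run it with `pytest tests/` from the repository root.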
## Roadmap & Next Steps

See `docs/roadmap.md` for current priorities, open questions, and suggested next features (persona selector UI, hosted LLM support, etc.).