---
title: ConverTA
emoji: 💻
colorFrom: indigo
colorTo: gray
sdk: docker
pinned: false
---
# AI Survey Simulator – Quick Start
The following instructions cover running the simulator with either a local Ollama model or a hosted OpenRouter model.
## Requirements
- Python 3.9+
- pip
- Optional for local mode: Ollama with a pulled model (e.g., `ollama pull llama3.2:latest`)
- Optional for hosted mode: OpenRouter account + API key
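A quick way to confirm the prerequisites are in place (the last line applies only to local mode):

```bash
python --version   # expect 3.9 or newer
pip --version
ollama --version   # local mode only
```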
## 1. Create `.env`

Copy `.env.example` to `.env` and choose one of the following blocks.
### Local (Ollama)

```env
LLM_BACKEND=ollama
LLM_HOST=http://localhost:11434
LLM_MODEL=llama3.2:latest
```
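If you use this mode, you can confirm that Ollama is serving and the model is pulled via its `/api/tags` endpoint, which lists installed models:

```bash
# Confirms Ollama is reachable on the configured host and lists pulled models
curl http://localhost:11434/api/tags
```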
### Hosted (OpenRouter)

```env
LLM_BACKEND=openrouter
LLM_HOST=https://openrouter.ai/api/v1
LLM_MODEL=anthropic/claude-3-haiku:beta  # pick any model
LLM_API_KEY=sk-or-...
LLM_SITE_URL=http://localhost:7860
LLM_APP_NAME=AI_Survey_Simulator
```
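Before starting the backend, a quick connectivity check from the same terminal you will launch it from helps rule out the DNS problem listed under Troubleshooting (the models endpoint is public, so this mainly verifies that `openrouter.ai` resolves and responds):

```bash
# Should return JSON describing available models; a DNS or proxy failure shows up here first
curl -s https://openrouter.ai/api/v1/models | head -c 200; echo
```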
Other environment values (ports, websocket URL, log level) are already set in `.env.example`.
## 2. Install Python Dependencies
```bash
python -m venv .venv          # optional
source .venv/bin/activate     # Windows: .venv\Scripts\activate
pip install -r requirements.txt
```
## 3. Run the Stack
### Option A – HF-like (recommended)

```bash
./run_docker_local.sh
```

This runs the same Dockerized web UI used on Hugging Face Spaces.
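For orientation, the script is presumably a standard build-and-run pair; a minimal hand-rolled equivalent (the image name `converta` and the flags here are illustrative assumptions, not taken from the script itself):

```bash
# Hypothetical equivalent of run_docker_local.sh – names are illustrative
docker build -t converta .
docker run --rm -p 7860:7860 --env-file .env converta
```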
### Option B – Single Command (legacy local stack)

```bash
./run_local.sh
```

This reads `.env`, starts Ollama if needed, launches FastAPI + Gradio, and keeps them running until Ctrl+C.
Option C – Manual Terminals (legacy)
- (Only if LLM_BACKEND=ollama)
ollama serve cd backend && uvicorn api.main:app --host 0.0.0.0 --port 8000cd frontend && python gradio_app.py
Backend listens on http://localhost:8000, Gradio on http://localhost:7860.
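With everything running, a quick smoke test (FastAPI serves interactive docs at `/docs` by default, assuming the app has not disabled them):

```bash
# Both should print 200 once the services are up
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:8000/docs
curl -s -o /dev/null -w "%{http_code}\n" http://localhost:7860
```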
## 4. Use the App
- Open the UI URL.
- If `APP_PASSWORD` is set, enter it on the login page to unlock the app.
- (Optional) Open Configuration to choose personas and add per-role prompt additions.
- Click Start Conversation.
- Wait for the conversation to complete to see analysis results.
- Note: Stop Conversation currently aborts the run and may skip post-conversation analysis.
After the conversation completes, the app runs post-conversation analysis and populates:
- Bottom-up findings (emergent themes) with evidence
- Top-down coding (care experience rubric + codebook categories) with evidence
### Human Chat (Human ↔ Surveyor)
- Open the Human Chat tab, click Start, then type as the patient.
- Click End session to run the same post-conversation analysis and enable exports.
- Click Abort to stop immediately (may skip analysis).
## 5. Personas
- Surveyor definitions: `data/surveyor_personas.yaml`
- Patient definitions: `data/patient_personas.yaml`
Edit the YAML, then restart the backend to apply changes.
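The authoritative schema is whatever those files already contain; purely as an illustration of the kind of entry you might add (every field name below is hypothetical, not the app's actual schema):

```yaml
# Hypothetical sketch – mirror the real field names in data/patient_personas.yaml
- id: patient_anxious_01
  name: Anxious first-time patient
  description: >
    Recently diagnosed and worried about side effects;
    gives short answers unless reassured.
```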
## 6. Troubleshooting
| Issue | Resolution |
|---|---|
| “Temporary failure in name resolution” (OpenRouter) | Launch the backend from an environment that can resolve `openrouter.ai`; ensure proxies/DNS settings match the working terminal. |
| “All connection attempts failed” | The backend cannot reach the LLM. Verify `.env`, restart the backend, and check the console logs. |
| “Model not found” (Ollama) | Pull the model with `ollama pull <model>` and restart the backend. |
| UI stays empty | Backend not running or `.env` mismatch. Restart both processes. |
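For the Ollama row in particular, run both commands on the machine hosting the backend:

```bash
ollama list                  # shows which models are actually installed
ollama pull llama3.2:latest  # must match LLM_MODEL in .env
```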
## 7. Reference Docs
- `docs/overview.md` – architecture summary
- `docs/development.md` – environment tips and backend switching
- `docs/roadmap.md` – upcoming work