# System Overview

## Purpose

The AI Survey Simulator orchestrates AI-to-AI healthcare survey conversations so researchers can explore interviewer and patient persona behavior without involving real participants.

## Architecture at a Glance

- **Web UI (`frontend/react_gradio_hybrid.py`)**  
  Serves the browser UI (React rendered in-page) and bridges the browser's WebSocket connection to the backend conversation socket.
  This is the primary demo path, including the analysis panels.

- **FastAPI Backend (`backend/api/`)**  
  Hosts REST endpoints for conversation control, WebSocket endpoints for live streaming, and the conversation service that manages active sessions.

- **Core Logic (`backend/core/`)**  
  Contains reusable building blocks: persona loading (`persona_system.py`), conversation flow management (`conversation_manager.py`), and LLM client adapters (`llm_client.py`).

- **LLM Backend (Ollama by default)**  
  The backend uses `LLM_HOST`/`LLM_MODEL` from `.env` to reach a local Ollama server. Other providers can be integrated by extending `llm_client.py`; a hypothetical adapter is sketched after this list.

- **Data Assets (`data/`)**  
  Persona definitions live in YAML files (`patient_personas.yaml`, `surveyor_personas.yaml`). Update these to add or refine personas; loading them is sketched below.
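
For a sense of how a new provider could slot in, here is a hypothetical adapter sketch. The `LLMClient` protocol and the `OllamaClient` details are illustrative assumptions, not the actual interface in `llm_client.py`:

```python
# Hypothetical adapter sketch; the real interface in llm_client.py may differ.
import os
from typing import Protocol

import httpx


class LLMClient(Protocol):
    """Assumed shape of a provider adapter: one async chat call."""

    async def chat(self, system_prompt: str, messages: list[dict]) -> str: ...


class OllamaClient:
    """Reaches a local Ollama server using LLM_HOST / LLM_MODEL from .env."""

    def __init__(self) -> None:
        self.host = os.getenv("LLM_HOST", "http://localhost:11434")
        self.model = os.getenv("LLM_MODEL", "llama3")

    async def chat(self, system_prompt: str, messages: list[dict]) -> str:
        # Ollama's /api/chat returns a single JSON object when stream=False.
        payload = {
            "model": self.model,
            "messages": [{"role": "system", "content": system_prompt}, *messages],
            "stream": False,
        }
        async with httpx.AsyncClient(timeout=120) as client:
            resp = await client.post(f"{self.host}/api/chat", json=payload)
            resp.raise_for_status()
            return resp.json()["message"]["content"]
```

Adding another provider would then mean one more class with the same `chat` signature.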

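Likewise, a minimal persona-loading sketch. The YAML schema assumed here (a top-level `personas` list with `id`, `name`, and `system_prompt` keys) is a guess at the file layout, not a documented contract:

```python
# Illustrative persona loading, roughly what persona_system.py might do.
from dataclasses import dataclass
from pathlib import Path

import yaml


@dataclass
class Persona:
    id: str
    name: str
    system_prompt: str


def load_personas(path: str) -> dict[str, Persona]:
    """Map persona id -> Persona for lookup when a conversation starts."""
    raw = yaml.safe_load(Path(path).read_text())
    return {p["id"]: Persona(**p) for p in raw["personas"]}


patients = load_personas("data/patient_personas.yaml")
surveyors = load_personas("data/surveyor_personas.yaml")
```
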
## Runtime Flow

1. Browser loads the Web UI (may require login if `APP_PASSWORD` is set) and opens `ws://.../ws/frontend/{conversation_id}`.
2. The Web UI server bridges that connection to the backend conversation socket at `/api/ws/conversation/{conversation_id}` (a minimal bridge is sketched after this list).
3. Backend spawns a `ConversationManager`, which alternates surveyor/patient turns using the configured LLM.
4. Generated messages stream back to the browser over the bridged WebSocket connection.
5. When the conversation completes, the backend runs a post-conversation analysis pass and returns:
   - Bottom-up findings (emergent themes) with evidence pointers
   - Top-down coding (care experience rubric + codebook categories) with evidence pointers
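
A minimal sketch of the bridge in steps 1-2, assuming the UI server relays frames with `aiohttp`; the backend URL, secret value, and error handling are simplified placeholders:

```python
# Simplified WebSocket bridge: browser <-> UI server <-> backend.
import asyncio

import aiohttp
from fastapi import FastAPI, WebSocket

app = FastAPI()
BACKEND_WS = "ws://localhost:8000/api/ws/conversation/{cid}"  # placeholder host


@app.websocket("/ws/frontend/{cid}")
async def bridge(ws: WebSocket, cid: str) -> None:
    await ws.accept()
    # x-internal-auth lets the UI server reach the backend socket without a
    # session cookie (see Access Control below); the value is a placeholder.
    headers = {"x-internal-auth": "<shared-secret>"}
    async with aiohttp.ClientSession() as session:
        async with session.ws_connect(BACKEND_WS.format(cid=cid), headers=headers) as backend:

            async def to_backend() -> None:
                while True:
                    await backend.send_str(await ws.receive_text())

            async def to_browser() -> None:
                async for msg in backend:
                    await ws.send_text(msg.data)

            # Relay both directions until one side disconnects.
            await asyncio.gather(to_backend(), to_browser())
```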

## Configuration UI

The UI includes a **Configuration** view (same page, no reload) that lets you:

- Select surveyor + patient personas (loaded from `GET /api/personas`)
- Add optional prompt additions for each role (sent with `start_conversation`)

These settings are currently stored in the browser (local-only) and apply to the next run; an example `start_conversation` payload is sketched below.
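
For instance, the kind of message the UI might send when a run starts; every field name here is an assumption based on the options above:

```python
# Hypothetical start_conversation payload; field names are assumptions.
import json

start_message = {
    "type": "start_conversation",
    "surveyor_persona": "standard_interviewer",  # id from GET /api/personas
    "patient_persona": "anxious_new_patient",    # id from GET /api/personas
    "surveyor_prompt_addition": "Keep questions short and plain-spoken.",
    "patient_prompt_addition": "",
}

# Sent as text over the bridged WebSocket from the runtime flow above.
payload = json.dumps(start_message)
```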

## Access Control (Prototype)

If `APP_PASSWORD` is set, the Space is gated behind a simple login page:

- The browser receives a signed session cookie after login
- All `/api/*` endpoints and the backend WebSocket require either:
  - that cookie, or
  - an internal header (`x-internal-auth`) used by the UI server when bridging sockets (see the dependency sketch below)
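
Sketched as a FastAPI dependency, assuming a signed cookie via `itsdangerous` and hypothetical `SESSION_SECRET` / `INTERNAL_AUTH_TOKEN` variables:

```python
# Sketch of the cookie-or-header check; all names below are assumptions.
import os

from fastapi import HTTPException, Request
from itsdangerous import BadSignature, URLSafeSerializer

# Hypothetical key used to sign the session cookie issued after login.
serializer = URLSafeSerializer(os.environ.get("SESSION_SECRET", "dev-only"), salt="session")


def require_auth(request: Request) -> None:
    # Path 1: internal header used by the UI server when bridging sockets.
    token = os.environ.get("INTERNAL_AUTH_TOKEN")
    if token and request.headers.get("x-internal-auth") == token:
        return
    # Path 2: signed session cookie from the login page.
    cookie = request.cookies.get("session")
    if cookie is not None:
        try:
            serializer.loads(cookie)  # raises BadSignature if tampered
            return
        except BadSignature:
            pass
    raise HTTPException(status_code=401, detail="Not authenticated")
```

Routes would attach this with `Depends(require_auth)`.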

## Repository Map (Key Paths)

```
backend/
  api/
    main.py              # FastAPI entry point
    routes.py            # REST endpoints
    conversation_service.py
    conversation_ws.py
  core/
    conversation_manager.py
    persona_system.py
    llm_client.py
frontend/
  gradio_app.py          # legacy/optional local UI
  react_gradio_hybrid.py # primary demo UI (web)
  websocket_manager.py
data/
  patient_personas.yaml
  surveyor_personas.yaml
config/
  settings.py            # Shared configuration loader
.env.example
run_local.sh
```

Keep this mental model in mind when extending the simulator; it highlights where to plug in new personas, swap LLMs, or modify UI behavior.