# Capabilities (Current “What the App Does”)
This document describes what ConverTA does today, independent of any past roadmap.
## Modes
- **AI-to-AI**
  - Runs a simulated healthcare survey interview between:
    - an automated **Surveyor** agent and
    - a synthetic **Patient** agent.
  - When the run completes normally, the app performs post-run analysis and seals the run.
- **Human-to-AI**
  - Lets a human play one side of the conversation while the other side is generated by the LLM.
  - Ending the session triggers the same post-run analysis and seals the run.
- **Upload Text**
  - Paste a transcript or upload a text/PDF file (best-effort PDF extraction).
  - Runs the same analysis pipeline and seals the run.
- **History**
  - Lists sealed runs (shared/global) and allows read-only replay:
    - transcript
    - analysis output
  - Exports from History use server-canonical run data.
- **Configuration**
  - Review/edit system prompts and agent configuration.
  - Create/duplicate/edit/delete **user personas**.
  - Create/duplicate/edit/delete **analysis frameworks** (bottom-up + rubric + top-down); the default framework is read-only and can be duplicated.
## Persistence (Server-Canonical)
ConverTA uses SQLite for persistence.
- `DB_PATH` controls the SQLite path.
  - Local Docker default: `.localdata/...`
  - Hugging Face Spaces default: under `/data/...`
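The path selection described above could be sketched as follows. This is an illustration only: the `converta.db` filenames and the `SPACE_ID` check are assumptions, not ConverTA's actual code (Hugging Face Spaces exposes a `SPACE_ID` environment variable, which makes it a convenient "running on Spaces" marker):

```python
import os

# Hypothetical defaults -- the real file names are configured inside ConverTA.
LOCAL_DEFAULT = ".localdata/converta.db"
SPACES_DEFAULT = "/data/converta.db"

def resolve_db_path() -> str:
    """Pick the SQLite path: an explicit DB_PATH wins, otherwise an environment-based default."""
    explicit = os.environ.get("DB_PATH")
    if explicit:
        return explicit
    # Hugging Face Spaces sets SPACE_ID; treat its presence as "running on Spaces".
    if os.environ.get("SPACE_ID"):
        return SPACES_DEFAULT
    return LOCAL_DEFAULT
```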
Persisted artifacts:
- **Sealed runs** (transcript + analysis + evidence catalog + config snapshots)
- **Personas** (defaults + user-created) with versioning
- **Shared settings** (system prompts + active analysis framework id)
- **Analysis frameworks** (bottom-up + rubric + top-down) with versioning
Notes:
- Sealed runs are immutable.
- “Stop/abort” during a live run does not produce a sealed run.
- If the resource agent fails (e.g., OpenRouter errors), the UI surfaces retry attempts and a failure notice.
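The retry-then-fail behavior on resource-agent errors can be sketched as a generic retry helper. This is a minimal illustration, not ConverTA's implementation; the `on_retry` callback stands in for whatever mechanism the UI uses to surface retry attempts:

```python
import time

def call_with_retries(fn, attempts=3, base_delay=1.0, on_retry=None):
    """Call fn(); on failure, wait with exponential backoff and retry.

    on_retry(attempt, error) lets a UI surface each retry attempt before
    the final failure is raised to the caller as a failure notice.
    """
    last_error = None
    for attempt in range(1, attempts + 1):
        try:
            return fn()
        except Exception as exc:  # e.g. an OpenRouter HTTP error
            last_error = exc
            if on_retry:
                on_retry(attempt, exc)
            if attempt < attempts:
                time.sleep(base_delay * 2 ** (attempt - 1))
    raise last_error
```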
## Personas
There are two persona kinds:
- **Surveyor personas**
  - Editable fields (user personas):
    - attribute list (bullet-style strings)
    - question bank (list of target questions)
  - Selection of which Surveyor is used for a run happens in the run panels (not in Configuration).
- **Patient personas**
  - Editable fields (user personas):
    - attribute list (bullet-style strings)
  - Selection of which Patient is used for a run happens in the run panels (not in Configuration).
Default personas:
- A small seeded set exists for each kind.
- Defaults are view-only in the UI (to prevent breaking the app by deleting/over-editing baseline personas).
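The persona shape described above can be sketched as a small record type. Field and function names here are illustrative assumptions, not ConverTA's actual schema; the sketch shows the duplicate-a-default-into-an-editable-copy workflow:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Persona:
    """Minimal persona record; names are illustrative, not ConverTA's schema."""
    kind: str                       # "surveyor" or "patient"
    name: str
    attributes: list                # bullet-style strings
    question_bank: list = field(default_factory=list)  # surveyor personas only
    version: int = 1
    is_default: bool = False        # defaults are view-only in the UI

def duplicate(persona: Persona, new_name: str) -> Persona:
    """Copy any persona (including a view-only default) as an editable user persona."""
    return Persona(kind=persona.kind, name=new_name,
                   attributes=list(persona.attributes),
                   question_bank=list(persona.question_bank),
                   version=1, is_default=False)
```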
## System Prompts (Universal per agent type)
These are shared, DB-backed “universal” system prompts:
- Surveyor system prompt (applies to all surveyor personas)
- Patient system prompt (applies to all patient personas)
- Analysis agent system prompt (applies to all analyses)
The Configuration UI provides:
- an “Apply defaults” action, and
- a warning/acknowledgement gate before editing the system prompt text.
Note: default analysis frameworks are read-only and must be duplicated before they can be edited.
## Analysis
Analysis runs post-conversation (or on uploaded text) and produces evidence-backed outputs:
- **Bottom-up findings** (emergent themes)
- **Care experience rubric** (fixed buckets: positive / mixed / negative / neutral)
  - Includes a model-rated split across positive/mixed/negative (for a pie chart summary)
- **Top-down codebook categories** (template-driven)
### Analysis execution (3-pass)
For reliability and smaller-model compatibility, the analysis pipeline runs as three sequential LLM calls:
1. Bottom-up findings
2. Care experience rubric
3. Top-down codebook categories
In the UI, each column populates as its pass completes, while later passes show as pending/running.
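The three-pass execution above can be sketched as a sequential driver. This is a minimal illustration under stated assumptions: `llm_call` stands in for the real LLM request and `on_status` for the per-column pending/running/done UI updates; none of these names are ConverTA's API:

```python
from typing import Callable, Dict

# The three passes run strictly in order, one LLM call each.
PASSES = ["bottom_up", "rubric", "top_down"]

def run_analysis(transcript: str,
                 llm_call: Callable[[str, str], str],
                 on_status: Callable[[str, str], None]) -> Dict[str, str]:
    """Run the three analysis passes sequentially, reporting status per pass."""
    results: Dict[str, str] = {}
    for name in PASSES:
        on_status(name, "running")          # this column starts populating
        results[name] = llm_call(name, transcript)
        on_status(name, "done")             # later passes are still pending
    return results
```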
### Analysis frameworks
- An **analysis framework** defines:
  - bottom-up instructions + attributes
  - rubric instructions + attributes
  - top-down instructions + attributes + codebook categories (optional per-category descriptions)
- Frameworks are managed in **Configuration** (create/duplicate/delete + edit the framework).
- The **active framework is selected per run** using the dropdown next to the **📊 Analysis** header in:
  - AI-to-AI
  - Human-to-AI
  - Upload Text
- Runs snapshot the selected framework so History replay and exports remain stable.
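The snapshot-at-seal behavior can be sketched as follows; function and field names are assumptions for illustration, not ConverTA's actual API. The key point is the deep copy, which keeps later framework edits from changing an already-sealed run:

```python
import copy

def seal_run(transcript, analysis, framework):
    """Seal a run with a deep-copied snapshot of the selected framework,
    so later edits to that framework never change History replay or exports."""
    return {
        "transcript": transcript,
        "analysis": analysis,
        "framework_snapshot": copy.deepcopy(framework),
    }
```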
## Exports
- Excel (`.xlsx`) multi-sheet export and lossless JSON export are available after analysis completes.
- When a run is sealed, exports are generated from server-canonical persisted data.
- There is a client-side fallback export for non-sealed sessions.
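The lossless JSON export can be illustrated with a minimal sketch (the function name is an assumption, not ConverTA's code). Sorted keys make the output deterministic, and `ensure_ascii=False` preserves non-ASCII transcript text verbatim:

```python
import json

def export_run_json(sealed_run: dict) -> str:
    """Serialize a sealed run to deterministic, lossless JSON for download."""
    return json.dumps(sealed_run, ensure_ascii=False, sort_keys=True, indent=2)
```

Round-tripping the output through `json.loads` recovers the original structure exactly, which is what "lossless" requires.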
## Access Control (Prototype)
If `APP_PASSWORD` is set:
- UI is gated behind a login overlay.
- API access requires the session cookie, or the internal header used by the UI WebSocket bridge.
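The gate logic above can be sketched as a single check. This is a deliberately naive stand-in: the parameter names, the cookie-equals-password comparison, and the header handling are all illustrative assumptions, not ConverTA's real session or token scheme:

```python
def is_authorized(app_password, session_cookie, internal_header):
    """Allow a request when the gate is off, the session cookie checks out,
    or the internal WebSocket-bridge header is present."""
    if not app_password:
        return True                        # no APP_PASSWORD set: gate is off
    if session_cookie == app_password:     # stand-in for a real session-cookie check
        return True
    return internal_header is not None     # internal header used by the UI bridge
```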