# Roadmap & Status
Last updated: 2026-01-12
## Current Capabilities
- Web UI (React-in-HTML) served by FastAPI
- Real-time conversation streaming over WebSockets
- Post-conversation analysis with evidence-backed outputs:
  - Bottom-up findings (emergent themes)
  - Top-down coding (care experience rubric + codebook categories)
- FastAPI backend with conversation management service
- Personas defined via YAML and loaded dynamically
- Ollama integration with fallback to the plain `/api/generate` endpoint
- Hosted LLM support via OpenRouter (`LLM_BACKEND=openrouter`)
- Hugging Face Spaces (Docker) deployment
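The backend switch above (local Ollama by default, OpenRouter when `LLM_BACKEND=openrouter`) could be centralized roughly as follows. This is a minimal sketch: `LLMConfig`, `resolve_backend`, and the URLs are illustrative assumptions, not the project's actual API.

```python
from dataclasses import dataclass


@dataclass
class LLMConfig:
    """Illustrative backend descriptor; fields are assumptions, not the real settings schema."""
    name: str
    base_url: str
    needs_api_key: bool


# Hypothetical registry of supported backends.
_BACKENDS = {
    "ollama": LLMConfig("ollama", "http://localhost:11434/api/generate", False),
    "openrouter": LLMConfig("openrouter", "https://openrouter.ai/api/v1", True),
}


def resolve_backend(env: dict[str, str]) -> LLMConfig:
    """Pick the LLM backend from LLM_BACKEND, defaulting to local Ollama."""
    name = env.get("LLM_BACKEND", "ollama").lower()
    if name not in _BACKENDS:
        raise ValueError(f"Unknown LLM_BACKEND: {name!r}")
    return _BACKENDS[name]
```

Keeping the lookup in one function mirrors the roadmap's goal of centralized configuration: callers pass the environment in, which also makes the selection trivially testable.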
## Near-Term Priorities
### Configuration Panel (Personas + Prompts)

Add a UI panel to select surveyor/patient personas and optionally tweak what the LLM receives (system prompts / parameters) without editing YAML.

### Evidence Export (Metadata Download)

Add a "Download conversation metadata" UI action to export the transcript, analysis output, and provenance metadata (e.g., evidence pointers, prompt/schema versions).

### Basic Test Coverage

Add smoke tests (with mocked LLM responses) to prevent regressions in conversation flow and analysis-schema parsing.
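The smoke-test idea above might look roughly like this with a mocked LLM client. Note that `parse_analysis`, the `findings` key, and the mocked `generate` call are hypothetical stand-ins for the real analysis-schema parser and conversation service, not the project's actual interfaces.

```python
import json
from unittest.mock import MagicMock


def parse_analysis(raw: str) -> dict:
    """Minimal stand-in for the analysis-schema parser under test."""
    data = json.loads(raw)
    if "findings" not in data:
        raise ValueError("analysis output missing 'findings'")
    return data


def test_analysis_parsing_smoke():
    # Mock the LLM so the test is deterministic and runs without a model server.
    mock_llm = MagicMock()
    mock_llm.generate.return_value = json.dumps({"findings": []})

    result = parse_analysis(mock_llm.generate("analyze transcript"))

    assert result["findings"] == []
    mock_llm.generate.assert_called_once()
```

Because the LLM is mocked, a schema change that breaks parsing fails fast in CI instead of surfacing as a runtime error during a live conversation.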
## Longer-Term Ideas
- Interactive persona editor within the UI
- Conversation playback and analytics
- Multi-model comparison mode
- Cloud-hosted deployment (Hugging Face Spaces or similar)
## How to Contribute
- Sync with this roadmap and open a planning thread or issue for new work.
- Keep docs up to date—update this file when priorities shift.
- Follow the patterns in `backend/core/` and `config/settings.py` to keep configuration centralized.