# Roadmap & Status
_Last updated: 2026-01-12_
## Current Capabilities
- Web UI (React-in-HTML) served by FastAPI
- Real-time conversation streaming over WebSockets
- Post-conversation analysis with evidence-backed outputs:
  - Bottom-up findings (emergent themes)
  - Top-down coding (care experience rubric + codebook categories)
- FastAPI backend with conversation management service
- Personas defined via YAML and loaded dynamically (see the persona loading sketch below)
- Ollama integration with fallback to `/api/generate` (see the backend selection sketch below)
- Hosted LLM support via OpenRouter (`LLM_BACKEND=openrouter`)
- Hugging Face Spaces (Docker) deployment
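
For orientation, here is a minimal sketch of what dynamic persona loading can look like. The directory layout (`config/personas/`) and the field names (`name`, `role`, `system_prompt`) are illustrative assumptions, not the repository's actual schema.

```python
# Hypothetical sketch of dynamic persona loading from YAML.
# Directory layout and field names are assumptions for illustration only.
from dataclasses import dataclass
from pathlib import Path

import yaml  # pyyaml


@dataclass
class Persona:
    name: str           # e.g. "surveyor" or "patient"
    role: str           # which side of the conversation this persona plays
    system_prompt: str  # injected into the LLM system message


def load_personas(directory: str = "config/personas") -> dict[str, Persona]:
    """Read every *.yaml file in the directory and build a Persona per file."""
    personas: dict[str, Persona] = {}
    for path in Path(directory).glob("*.yaml"):
        data = yaml.safe_load(path.read_text())
        personas[data["name"]] = Persona(
            name=data["name"],
            role=data["role"],
            system_prompt=data["system_prompt"],
        )
    return personas
```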
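
The backend selection and Ollama fallback can be pictured roughly as below. The endpoint paths `/api/chat` and `/api/generate` are Ollama's standard HTTP API and the OpenRouter URL is its OpenAI-compatible endpoint, but the function names, the example model IDs, and the error handling are assumptions for illustration.

```python
# Illustrative sketch only: shows a fallback from Ollama's /api/chat to
# /api/generate, plus the LLM_BACKEND=openrouter switch. Names are assumptions.
import os

import httpx

OLLAMA_URL = os.getenv("OLLAMA_URL", "http://localhost:11434")


def ollama_complete(prompt: str, model: str = "llama3") -> str:
    """Try the chat endpoint first; fall back to /api/generate on failure."""
    try:
        r = httpx.post(
            f"{OLLAMA_URL}/api/chat",
            json={"model": model,
                  "messages": [{"role": "user", "content": prompt}],
                  "stream": False},
            timeout=60,
        )
        r.raise_for_status()
        return r.json()["message"]["content"]
    except httpx.HTTPError:
        r = httpx.post(
            f"{OLLAMA_URL}/api/generate",
            json={"model": model, "prompt": prompt, "stream": False},
            timeout=60,
        )
        r.raise_for_status()
        return r.json()["response"]


def complete(prompt: str) -> str:
    """Route to the hosted backend when LLM_BACKEND=openrouter, else local Ollama."""
    if os.getenv("LLM_BACKEND") == "openrouter":
        r = httpx.post(
            "https://openrouter.ai/api/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}"},
            json={"model": "openai/gpt-4o-mini",  # example model ID
                  "messages": [{"role": "user", "content": prompt}]},
            timeout=60,
        )
        r.raise_for_status()
        return r.json()["choices"][0]["message"]["content"]
    return ollama_complete(prompt)
```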
## Near-Term Priorities
1. **Configuration Panel (Personas + Prompts)**
   Add a UI panel to select surveyor/patient personas and optionally tweak what the LLM receives (system prompts / parameters) without editing YAML.
2. **Evidence Export (Metadata Download)**
   Add a “Download conversation metadata” UI action to export the transcript, analysis output, and provenance metadata (e.g., evidence pointers, prompt/schema versions); a sketch of the payload follows this list.
3. **Basic Test Coverage**
   Add smoke tests with mocked LLM responses to prevent regressions in conversation flow and analysis-schema parsing; see the test sketch after this list.
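
To make the intended export concrete, here is a sketch of the payload a “Download conversation metadata” action could return. Every field name below is an assumption about what the transcript, analysis, and provenance metadata would include, not a committed schema.

```python
# Hypothetical shape for the conversation metadata export; all fields are assumptions.
from datetime import datetime

from pydantic import BaseModel


class EvidencePointer(BaseModel):
    finding_id: str   # which analysis finding this evidence supports
    turn_index: int   # index of the transcript turn quoted as evidence
    quote: str


class ConversationExport(BaseModel):
    conversation_id: str
    created_at: datetime
    transcript: list[dict]            # surveyor/patient turns as shown in the UI
    analysis: dict                    # bottom-up findings + top-down coding output
    evidence: list[EvidencePointer]
    prompt_version: str               # which prompt template produced the run
    schema_version: str               # analysis output schema the parser expects
```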
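
A smoke test along these lines, with the LLM call monkeypatched, would exercise the conversation flow without a running model. The import path, patched function, and endpoint are guesses used purely for illustration.

```python
# Illustrative smoke test: the import path, patched function, and endpoint
# are assumptions about this codebase, not its actual layout.
from unittest.mock import patch

from fastapi.testclient import TestClient

from backend.main import app  # assumed application entry point


def test_conversation_smoke():
    canned = "Thank you for sharing that."  # deterministic stand-in for the LLM
    with patch("backend.core.llm.complete", return_value=canned):
        client = TestClient(app)
        resp = client.post(
            "/conversations",
            json={"surveyor": "default", "patient": "default"},
        )
        assert resp.status_code == 200
        assert "id" in resp.json()
```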
## Longer-Term Ideas
- Interactive persona editor within the UI
- Conversation playback and analytics
- Multi-model comparison mode
- Additional cloud deployment targets beyond Hugging Face Spaces
## How to Contribute
1. Sync with this roadmap and open a planning thread or issue for new work.
2. Keep docs up to date—update this file when priorities shift.
3. Follow the patterns in `backend/core/` and `config/settings.py` to keep configuration centralized.