
# Roadmap & Status

Last updated: 2026-01-12

## Current Capabilities

- Web UI (React-in-HTML) served by FastAPI
- Real-time conversation streaming over WebSockets
- Post-conversation analysis with evidence-backed outputs:
  - Bottom-up findings (emergent themes)
  - Top-down coding (care experience rubric + codebook categories)
- FastAPI backend with a conversation management service
- Personas defined via YAML and loaded dynamically
- Ollama integration, with fallback to the `/api/generate` endpoint
- Hosted LLM support via OpenRouter (`LLM_BACKEND=openrouter`)
- Hugging Face Spaces (Docker) deployment
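The backend switching described above can be sketched as a small resolver keyed on the `LLM_BACKEND` variable. The OpenRouter base URL and the `OLLAMA_URL` default below are illustrative assumptions, not the project's actual configuration:

```python
import os

# Minimal sketch of backend selection driven by LLM_BACKEND (mentioned in
# the capability list above). The OpenRouter URL and the OLLAMA_URL default
# are assumptions for illustration.
def resolve_backend(env):
    backend = env.get("LLM_BACKEND", "ollama")
    if backend == "openrouter":
        # Hosted LLMs via OpenRouter's OpenAI-compatible API
        return {"backend": "openrouter",
                "base_url": "https://openrouter.ai/api/v1"}
    # Default: local Ollama (the service can fall back to /api/generate)
    return {"backend": "ollama",
            "base_url": env.get("OLLAMA_URL", "http://localhost:11434")}

print(resolve_backend(dict(os.environ)))
```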

## Near-Term Priorities

1. **Configuration Panel (Personas + Prompts)**
   Add a UI panel to select surveyor/patient personas and, optionally, tweak what the LLM receives (system prompts and parameters) without editing YAML.
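One way to keep such tweaks safe is to layer UI overrides on top of a persona's YAML defaults, replacing only whitelisted fields. The field names and whitelist below are illustrative, not the project's schema:

```python
# Sketch of how a configuration panel might apply UI overrides on top of a
# persona's defaults. The field names and the whitelist are illustrative.
ALLOWED_OVERRIDES = {"system_prompt", "temperature", "max_tokens"}

def apply_overrides(persona, overrides):
    """Return a copy of the persona with only whitelisted fields replaced."""
    merged = dict(persona)
    merged.update({k: v for k, v in overrides.items()
                   if k in ALLOWED_OVERRIDES})
    return merged

base = {"name": "surveyor_default",
        "system_prompt": "Ask about the care experience.",
        "temperature": 0.7}
# The temperature override is applied; renaming the persona is ignored.
tweaked = apply_overrides(base, {"temperature": 0.2, "name": "renamed"})
print(tweaked["temperature"], tweaked["name"])  # 0.2 surveyor_default
```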

2. **Evidence Export (Metadata Download)**
   Add a “Download conversation metadata” UI action that exports the transcript, analysis output, and provenance metadata (e.g., evidence pointers, prompt/schema versions).
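An export bundle along these lines could look as follows. Every key in this sketch is an assumption about the eventual schema, not a committed format:

```python
import json
from datetime import datetime, timezone

# Illustrative shape for the exported metadata bundle; all keys below are
# assumptions, not the project's final schema.
def build_export(transcript, analysis, prompt_version, schema_version):
    return {
        "exported_at": datetime.now(timezone.utc).isoformat(),
        "transcript": transcript,
        "analysis": analysis,
        "provenance": {
            "prompt_version": prompt_version,
            "schema_version": schema_version,
        },
    }

bundle = build_export(
    transcript=[{"speaker": "surveyor", "text": "How was your visit?"}],
    analysis={"findings": [{"theme": "waiting times",
                            "evidence_turns": [0]}]},
    prompt_version="v3",
    schema_version="2026-01",
)
print(json.dumps(bundle, indent=2))
```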

3. **Basic Test Coverage**
   Add smoke tests (with mocked LLM responses) to prevent regressions in conversation flow and analysis-schema parsing.
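A smoke test of this kind might look like the following. `conduct_turn` and its signature are illustrative stand-ins for the project's conversation service, not its real API:

```python
# Smoke-test sketch with a mocked LLM. conduct_turn is a hypothetical
# stand-in for the conversation management service.
from unittest.mock import Mock

def conduct_turn(llm, history, user_text):
    """Append the user's message, query the LLM, and append its reply."""
    history = history + [{"role": "user", "content": user_text}]
    reply = llm.generate(history)
    return history + [{"role": "assistant", "content": reply}]

def test_conduct_turn_appends_reply():
    llm = Mock()
    llm.generate.return_value = "Thanks for sharing."
    history = conduct_turn(llm, [], "The wait was long.")
    assert history[-1] == {"role": "assistant",
                           "content": "Thanks for sharing."}
    llm.generate.assert_called_once()

test_conduct_turn_appends_reply()
print("smoke test passed")
```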

## Longer-Term Ideas

- Interactive persona editor within the UI
- Conversation playback and analytics
- Multi-model comparison mode
- Cloud-hosted deployment (Hugging Face Spaces or similar)

## How to Contribute

1. Sync with this roadmap and open a planning thread or issue for new work.
2. Keep docs up to date: update this file when priorities shift.
3. Follow the patterns in `backend/core/` and `config/settings.py` to keep configuration centralized.
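The centralized-configuration pattern recommended above can be sketched as a single settings object built from the environment. The class, field names, and defaults here are illustrative (the field names only echo options mentioned in these docs):

```python
import os
from dataclasses import dataclass

# Sketch of the centralized-settings pattern; the class and defaults are
# illustrative, not the contents of config/settings.py.
@dataclass(frozen=True)
class Settings:
    llm_backend: str
    personas_dir: str

def load_settings(env=None):
    """Build one immutable Settings object instead of scattering os.environ
    reads across modules."""
    env = dict(os.environ) if env is None else env
    return Settings(
        llm_backend=env.get("LLM_BACKEND", "ollama"),
        personas_dir=env.get("PERSONAS_DIR", "config/personas"),
    )

settings = load_settings({})
print(settings.llm_backend)  # ollama
```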