---
title: Reachy Mini Minder
emoji: 🤖
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
short_description: Voice-first care companion for Reachy Mini
tags:
  - reachy_mini
  - reachy_mini_python_app
---

# Reachy Mini Minder

A voice-first health companion for Reachy Mini, helping users log medication, track headaches, and share appointment summaries with their doctor β€” all through natural conversation.

## Prerequisites

| Requirement | Version | Notes |
|---|---|---|
| Python | 3.10+ | 3.12 recommended |
| Node.js | 18+ | For the React frontend |
| uv | latest | Python package manager (recommended) |
| Reachy Mini SDK | ≥ 1.2.11 | `pip install reachy-mini` |
| OpenAI API key | — | For the Realtime voice API |
| Docker (optional) | latest | Only needed for the Neo4j knowledge graph |

## Quick Start

```bash
# 1. Clone the repo
git clone <your-repo-url>
cd reachy_mini_minder

# 2. Copy environment file and add your OpenAI key
cp .env.example .env
# Edit .env and set OPENAI_API_KEY=sk-...

# 3. Install Python dependencies
uv sync          # or: pip install -e .

# 4. Install frontend dependencies
cd frontend && npm install && cd ..

# 5. Start everything (daemon + frontend + LangGraph sidecar + app)
./start-dev.sh
```

The app opens at http://localhost:3000. The robot starts in wakeword mode β€” say "Hey Reachy" to begin a session.

## Manual Setup (4 terminals)

If you need to debug individual components, run each in its own terminal:

```bash
# Terminal 1 — Robot daemon
source .venv/bin/activate
reachy-mini-daemon

# Terminal 2 — Frontend (Next.js)
cd frontend
npm run dev

# Terminal 3 — LangGraph sidecar (reports, trends, session summary)
source .venv/bin/activate
langgraph dev

# Terminal 4 — Conversation app
source .venv/bin/activate
reachy-mini-minder --react
```

| Service | URL | Purpose |
|---|---|---|
| Frontend | http://localhost:3000 | Main UI |
| Daemon | http://localhost:8000 | Robot hardware control |
| LangGraph | http://localhost:2024 | Sidecar agent (reports, trends) |

## Environment Variables

Copy `.env.example` to `.env` and configure:

| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes | OpenAI API key for Realtime voice |
| `MODEL_NAME` | No | Model name (default: `gpt-realtime`) |
| `REACHY_MINI_CUSTOM_PROFILE` | No | Custom personality profile folder |
| `API_TOKEN` | No | Bearer token for LAN deployments |
| `LANGSMITH_API_KEY` | No | For LangGraph tracing (free at smith.langchain.com) |
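A `.env` file is just `KEY=VALUE` lines, so configuration loading reduces to a small parse step. The sketch below is illustrative only; the app most likely uses a dedicated loader such as python-dotenv rather than this hand-rolled parser.

```python
def parse_env(lines):
    """Parse KEY=VALUE lines, skipping blanks and # comments (illustrative)."""
    env = {}
    for line in lines:
        line = line.strip()
        if line and not line.startswith("#") and "=" in line:
            key, _, value = line.partition("=")
            env[key.strip()] = value.strip()
    return env

cfg = parse_env([
    "# your key goes here",
    "OPENAI_API_KEY=sk-test",
    "MODEL_NAME=gpt-realtime",
])
print(cfg["MODEL_NAME"])  # gpt-realtime
```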

## Privacy & Data

Mini Minder stores all health data locally in an unencrypted SQLite file (`mini_minder.db`). No data is sent to a remote database.
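Local-only storage in practice just means the app writes to an on-disk SQLite file via the standard library. The snippet below is a minimal sketch with a hypothetical `health_entries` table; the real schema in `database.py` may differ.

```python
import sqlite3

# Hypothetical illustration of local-only storage. Uses an in-memory
# database here; the app itself writes to mini_minder.db on disk.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE IF NOT EXISTS health_entries (
        id        INTEGER PRIMARY KEY,
        kind      TEXT,   -- e.g. 'medication' or 'headache'
        note      TEXT,
        logged_at TEXT DEFAULT (datetime('now'))
    )
""")
conn.execute(
    "INSERT INTO health_entries (kind, note) VALUES (?, ?)",
    ("headache", "mild, left temple"),
)
conn.commit()
row = conn.execute("SELECT kind, note FROM health_entries").fetchone()
print(row)  # ('headache', 'mild, left temple')
```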

**Text pipelines:** Session summaries, memory notes, appointment exports, and health-history queries are redacted by `pii_guard.py` before reaching any cloud LLM. Names are replaced with roles (e.g. "the patient") and medication names are replaced with symptom categories (e.g. "migraine prevention medication").
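The name-to-role and medication-to-category substitution can be sketched as a simple mapping pass. This is only an illustration of the idea; the actual `pii_guard.py` may use a different mechanism (for example Presidio, which the memory extras install), and the maps here are hypothetical.

```python
import re

# Hypothetical lookup tables; the real module builds these differently.
ROLE_MAP = {"Alice": "the patient", "Dr. Lee": "the doctor"}
MED_MAP = {
    "sumatriptan": "migraine relief medication",
    "propranolol": "migraine prevention medication",
}

def redact(text: str) -> str:
    """Replace names with roles and medications with symptom categories."""
    for name, role in ROLE_MAP.items():
        text = re.sub(re.escape(name), role, text)
    for med, category in MED_MAP.items():
        text = re.sub(re.escape(med), category, text, flags=re.IGNORECASE)
    return text

out = redact("Alice takes sumatriptan daily.")
print(out)  # the patient takes migraine relief medication daily.
```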

**Audio gap:** Microphone audio is streamed directly to the OpenAI Realtime API as raw PCM frames. There is no way to redact speech before it leaves the device. If the user speaks their name, medication names, or other sensitive details aloud, OpenAI's servers will receive that audio. OpenAI's API data usage policy states that API inputs are not used for model training, but the audio does traverse their infrastructure. Eliminating this gap would require switching to a local speech-to-text model.

## Knowledge Graph (Optional)

For cross-session memory (entity extraction, relationship tracking), you can connect a Neo4j graph database via Docker:

```bash
# Start Neo4j
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  neo4j:5

# Install the memory extras (Presidio PII protection + Neo4j driver)
uv sync --extra memory   # or: pip install -e ".[memory]"
```

The app auto-detects Neo4j on startup (`bolt://localhost:7687`). When connected, it:

- Extracts entities (medications, symptoms, people) from each session
- Injects graph context into the next session's prompt
- Makes the graph browsable at http://localhost:7474

To stop: `docker stop neo4j && docker rm neo4j`
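The startup auto-detection described above amounts to checking whether anything answers on the Bolt port before enabling graph features. Here is a minimal sketch using a raw socket probe; the app itself presumably uses the Neo4j driver's own connectivity check instead, and the function name is invented for illustration.

```python
import socket

def neo4j_available(host: str = "localhost", port: int = 7687,
                    timeout: float = 0.5) -> bool:
    """Return True if something accepts TCP connections on the Bolt port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, unreachable, or timed out
        return False

if not neo4j_available():
    print("Neo4j not detected; continuing without the knowledge graph")
```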

## CLI Flags

```bash
reachy-mini-minder [OPTIONS]
```

| Flag | Description |
|---|---|
| `--react` | Enable React frontend mode (requires frontend dev server) |
| `--clear-data` | Clear all data (health entries, profile) before starting |
| `--debug` | Enable debug logging |
| `--no-camera` | Disable camera usage |
| `--head-tracker {yolo,mediapipe}` | Enable head tracking (requires camera) |
| `--local-vision` | Use local vision model instead of cloud API |
| `--robot-name NAME` | Robot name for Zenoh topics (multi-robot setups) |
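For reference, the flag table above maps naturally onto a standard `argparse` declaration. This is a hedged sketch of how such a CLI could be wired up, not the actual definition in `main.py`.

```python
import argparse

# Illustrative CLI declaration mirroring the flag table; the real entry
# point may define these differently.
parser = argparse.ArgumentParser(prog="reachy-mini-minder")
parser.add_argument("--react", action="store_true",
                    help="enable React frontend mode")
parser.add_argument("--clear-data", action="store_true",
                    help="clear all data before starting")
parser.add_argument("--debug", action="store_true",
                    help="enable debug logging")
parser.add_argument("--no-camera", action="store_true",
                    help="disable camera usage")
parser.add_argument("--head-tracker", choices=["yolo", "mediapipe"],
                    help="enable head tracking (requires camera)")
parser.add_argument("--local-vision", action="store_true",
                    help="use a local vision model")
parser.add_argument("--robot-name", metavar="NAME",
                    help="robot name for Zenoh topics")

args = parser.parse_args(["--react", "--head-tracker", "yolo"])
print(args.react, args.head_tracker)  # True yolo
```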

## Project Structure

β”œβ”€β”€ src/reachy_mini_conversation_app/
β”‚   β”œβ”€β”€ main.py                    # Entry point
β”‚   β”œβ”€β”€ openai_realtime.py         # OpenAI Realtime API handler
β”‚   β”œβ”€β”€ stream_api.py              # FastAPI WebSocket server
β”‚   β”œβ”€β”€ database.py                # SQLite health data
β”‚   β”œβ”€β”€ pii_guard.py               # PII redaction before cloud LLM
β”‚   β”œβ”€β”€ wakeword_detector.py       # "Hey Reachy" detection (bundled model)
β”‚   β”œβ”€β”€ profiles/                  # Personality, tools, prompts
β”‚   β”œβ”€β”€ tools/                     # Voice command implementations
β”‚   β”œβ”€β”€ langgraph_agent/           # Sidecar agent (reports, trends)
β”‚   └── models/hey_reachy.onnx     # Wakeword model (bundled)
β”œβ”€β”€ frontend/                      # Next.js 16 + React 19 UI
β”œβ”€β”€ tests/                         # pytest test suite
β”œβ”€β”€ start-dev.sh                   # One-command dev launcher
└── .env.example                   # Environment template

## Running Tests

```bash
source .venv/bin/activate
python -m pytest tests/ -v
```

## Customization

Use the locked profile folder for personality and tool configuration:

```
src/reachy_mini_conversation_app/profiles/_reachy_mini_minder_locked_profile/
├── instructions.txt    # System prompt personality
├── tools.txt           # Available tool list
└── voice.txt           # Voice settings
```

## License

See LICENSE for details.