---
title: Reachy Mini Minder
emoji: 🤖
colorFrom: purple
colorTo: gray
sdk: static
pinned: false
short_description: Voice-first care companion for Reachy Mini
tags:
  - reachy_mini
  - reachy_mini_python_app
---
# Reachy Mini Minder

A voice-first health companion for Reachy Mini, helping users log medication, track headaches, and share appointment summaries with their doctor, all through natural conversation.
## Prerequisites

| Requirement | Version | Notes |
|---|---|---|
| Python | 3.10+ | 3.12 recommended |
| Node.js | 18+ | For the React frontend |
| uv | latest | Python package manager (recommended) |
| Reachy Mini SDK | ≥ 1.2.11 | `pip install reachy-mini` |
| OpenAI API key | n/a | For the Realtime voice API |
| Docker (optional) | latest | Only needed for Neo4j knowledge graph |
## Quick Start

```bash
# 1. Clone the repo
git clone <your-repo-url>
cd reachy_mini_minder

# 2. Copy the environment file and add your OpenAI key
cp .env.example .env
# Edit .env and set OPENAI_API_KEY=sk-...

# 3. Install Python dependencies
uv sync  # or: pip install -e .

# 4. Install frontend dependencies
cd frontend && npm install && cd ..

# 5. Start everything (daemon + frontend + LangGraph sidecar + app)
./start-dev.sh
```
The app opens at http://localhost:3000. The robot starts in wakeword mode; say "Hey Reachy" to begin a session.
## Manual Setup (4 terminals)

If you need to debug individual components, run each in its own terminal:

```bash
# Terminal 1 - Robot daemon
source .venv/bin/activate
reachy-mini-daemon

# Terminal 2 - Frontend (Next.js)
cd frontend
npm run dev

# Terminal 3 - LangGraph sidecar (reports, trends, session summary)
source .venv/bin/activate
langgraph dev

# Terminal 4 - Conversation app
source .venv/bin/activate
reachy-mini-minder --react
```
| Service | URL | Purpose |
|---|---|---|
| Frontend | http://localhost:3000 | Main UI |
| Daemon | http://localhost:8000 | Robot hardware control |
| LangGraph | http://localhost:2024 | Sidecar agent (reports, trends) |
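Once the services are running, you can confirm that each one is reachable on the ports listed above. This is a standalone sketch, not part of the repo; it treats any HTTP response (even an error status) as "up":

```python
import urllib.error
import urllib.request

# Ports as listed in the service table above
SERVICES = {
    "frontend": "http://localhost:3000",
    "daemon": "http://localhost:8000",
    "langgraph": "http://localhost:2024",
}

def check_services(services: dict, timeout: float = 2.0) -> dict:
    """Map each service name to True if its URL answers an HTTP request at all."""
    status = {}
    for name, url in services.items():
        try:
            urllib.request.urlopen(url, timeout=timeout)
            status[name] = True
        except urllib.error.HTTPError:
            status[name] = True   # server answered, even if with an error code
        except OSError:
            status[name] = False  # connection refused or timed out
    return status

print(check_services(SERVICES))
```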
## Environment Variables

Copy `.env.example` to `.env` and configure:

| Variable | Required | Description |
|---|---|---|
| `OPENAI_API_KEY` | Yes | OpenAI API key for Realtime voice |
| `MODEL_NAME` | No | Model name (default: `gpt-realtime`) |
| `REACHY_MINI_CUSTOM_PROFILE` | No | Custom personality profile folder |
| `API_TOKEN` | No | Bearer token for LAN deployments |
| `LANGSMITH_API_KEY` | No | For LangGraph tracing (free at smith.langchain.com) |
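In code, resolving these variables with their documented defaults could look like the sketch below. The function and key names are illustrative, not the app's actual config loader:

```python
import os

def load_settings(env=None) -> dict:
    """Resolve env vars with the defaults documented above (illustrative sketch)."""
    env = dict(os.environ) if env is None else env
    return {
        "api_key": env.get("OPENAI_API_KEY"),            # required, no default
        "model": env.get("MODEL_NAME", "gpt-realtime"),  # documented default
        "profile": env.get("REACHY_MINI_CUSTOM_PROFILE"),
        "api_token": env.get("API_TOKEN"),
    }

settings = load_settings({"OPENAI_API_KEY": "sk-test"})
print(settings["model"])  # gpt-realtime
```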
## Privacy & Data

Mini Minder stores all health data locally in an unencrypted SQLite file (`mini_minder.db`). No data is sent to a remote database.
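The actual schema lives in `database.py`; as an illustration of the kind of table involved (column names here are assumptions, and the example uses an in-memory database rather than `mini_minder.db`):

```python
import sqlite3

# In-memory stand-in for mini_minder.db; the real schema is defined in database.py
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE health_entries (
        id INTEGER PRIMARY KEY,
        entry_type TEXT NOT NULL,   -- e.g. 'medication' or 'headache'
        detail TEXT,
        logged_at TEXT DEFAULT (datetime('now'))
    )"""
)
conn.execute(
    "INSERT INTO health_entries (entry_type, detail) VALUES (?, ?)",
    ("medication", "morning dose taken"),
)
conn.commit()
rows = conn.execute("SELECT entry_type, detail FROM health_entries").fetchall()
print(rows)  # [('medication', 'morning dose taken')]
```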
**Text pipelines:** Session summaries, memory notes, appointment exports, and health-history queries are redacted by `pii_guard.py` before reaching any cloud LLM. Names are replaced with roles (e.g. "the patient") and medication names are replaced with symptom categories (e.g. "migraine prevention medication").
**Audio gap:** Microphone audio is streamed directly to the OpenAI Realtime API as raw PCM frames. There is no way to redact speech before it leaves the device. If the user speaks their name, medication names, or other sensitive details aloud, OpenAI's servers will receive that audio. OpenAI's API data usage policy states that API inputs are not used for model training, but the audio does traverse their infrastructure. Eliminating this gap would require switching to a local speech-to-text model.
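The text-side redaction can be sketched as follows. The real logic lives in `pii_guard.py` (and, with the memory extras, Presidio); the name set and medication map here are purely illustrative:

```python
import re

# Illustrative lookup tables -- the real pii_guard.py is more thorough
KNOWN_NAMES = {"alice", "bob"}
MED_CATEGORIES = {"sumatriptan": "migraine prevention medication"}

def redact(text: str) -> str:
    """Replace names with roles and medication names with symptom categories."""
    for med, category in MED_CATEGORIES.items():
        text = re.sub(rf"\b{re.escape(med)}\b", category, text, flags=re.IGNORECASE)
    for name in KNOWN_NAMES:
        text = re.sub(rf"\b{re.escape(name)}\b", "the patient", text, flags=re.IGNORECASE)
    return text

print(redact("Alice took sumatriptan at 9am"))
# -> the patient took migraine prevention medication at 9am
```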
## Knowledge Graph (Optional)

For cross-session memory (entity extraction, relationship tracking), you can connect a Neo4j graph database via Docker:

```bash
# Start Neo4j
docker run -d --name neo4j \
  -p 7474:7474 -p 7687:7687 \
  -e NEO4J_AUTH=neo4j/password \
  neo4j:5

# Install the memory extras (Presidio PII protection + Neo4j driver)
uv sync --extra memory  # or: pip install -e ".[memory]"
```
The app auto-detects Neo4j on startup (`bolt://localhost:7687`). When connected, it:

- Extracts entities (medications, symptoms, people) from each session
- Injects graph context into the next session's prompt

You can browse the graph at http://localhost:7474.
To stop: `docker stop neo4j && docker rm neo4j`
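At its simplest, auto-detection amounts to checking whether anything is listening on the Bolt port; the app's actual check may differ, but a minimal sketch looks like this:

```python
import socket

def neo4j_available(host: str = "localhost", port: int = 7687, timeout: float = 1.0) -> bool:
    """Return True if something is accepting TCP connections on the Bolt port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if neo4j_available():
    print("Neo4j detected at bolt://localhost:7687")
else:
    print("No Neo4j -- knowledge graph features disabled")
```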
## CLI Flags

```bash
reachy-mini-minder [OPTIONS]
```

| Flag | Description |
|---|---|
| `--react` | Enable React frontend mode (requires frontend dev server) |
| `--clear-data` | Clear all data (health entries, profile) before starting |
| `--debug` | Enable debug logging |
| `--no-camera` | Disable camera usage |
| `--head-tracker {yolo,mediapipe}` | Enable head tracking (requires camera) |
| `--local-vision` | Use local vision model instead of cloud API |
| `--robot-name NAME` | Robot name for Zenoh topics (multi-robot setups) |
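This flag surface maps naturally onto `argparse`; a sketch of how such a parser might be wired (flag names match the table, everything else is an assumption about the implementation):

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Flags mirror the table above; defaults are assumptions, not the app's actual code
    parser = argparse.ArgumentParser(prog="reachy-mini-minder")
    parser.add_argument("--react", action="store_true", help="enable React frontend mode")
    parser.add_argument("--clear-data", action="store_true", help="clear all data before starting")
    parser.add_argument("--debug", action="store_true", help="enable debug logging")
    parser.add_argument("--no-camera", action="store_true", help="disable camera usage")
    parser.add_argument("--head-tracker", choices=["yolo", "mediapipe"], help="enable head tracking")
    parser.add_argument("--local-vision", action="store_true", help="use local vision model")
    parser.add_argument("--robot-name", metavar="NAME", help="robot name for Zenoh topics")
    return parser

args = build_parser().parse_args(["--react", "--head-tracker", "yolo"])
print(args.react, args.head_tracker)  # True yolo
```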
## Project Structure

```
├── src/reachy_mini_conversation_app/
│   ├── main.py                  # Entry point
│   ├── openai_realtime.py       # OpenAI Realtime API handler
│   ├── stream_api.py            # FastAPI WebSocket server
│   ├── database.py              # SQLite health data
│   ├── pii_guard.py             # PII redaction before cloud LLM
│   ├── wakeword_detector.py     # "Hey Reachy" detection (bundled model)
│   ├── profiles/                # Personality, tools, prompts
│   ├── tools/                   # Voice command implementations
│   ├── langgraph_agent/         # Sidecar agent (reports, trends)
│   └── models/hey_reachy.onnx   # Wakeword model (bundled)
├── frontend/                    # Next.js 16 + React 19 UI
├── tests/                       # pytest test suite
├── start-dev.sh                 # One-command dev launcher
└── .env.example                 # Environment template
```
## Running Tests

```bash
source .venv/bin/activate
python -m pytest tests/ -v
```
## Customization

Use the locked profile folder for personality and tool configuration:

```
src/reachy_mini_conversation_app/profiles/_reachy_mini_minder_locked_profile/
├── instructions.txt   # System prompt personality
├── tools.txt          # Available tool list
└── voice.txt          # Voice settings
```
## License

See LICENSE for details.