---
title: Reachy Mini Anima Conversation App
emoji: 🎤
colorFrom: red
colorTo: blue
sdk: static
pinned: false
short_description: Express emotion with Anima!
tags:
- reachy_mini
- reachy_mini_python_app
---
> **Notice:** This is a Brain Wave Collective modified app.
>
> This repository is a modified fork of the original upstream Reachy Mini Conversation App. It is an example showcasing how Anima can be used to let a robot express emotion. The app works exactly like the original, except that it adds emotionally inspired motion based on the content of the conversation: emotional movements and expressions are driven entirely by the words the robot speaks.
# Reachy Mini conversation app
Conversational app for the Reachy Mini robot combining OpenAI's realtime APIs, vision pipelines, and choreographed motion libraries.
## Table of contents
- Overview
- Architecture
- Installation
- Configuration
- Running the app
- LLM tools
- Advanced features
- Contributing
- License
## Overview
- Real-time audio conversation loop powered by the OpenAI realtime API and `fastrtc` for low-latency streaming.
- Vision processing uses gpt-realtime by default (when the camera tool is used), with optional local vision processing using the SmolVLM2 model running on-device (CPU/GPU/MPS) via the `--local-vision` flag.
- Layered motion system queues primary moves (dances, emotions, goto poses, breathing) while blending speech-reactive wobble and head tracking.
- Async tool dispatch integrates robot motion, camera capture, and optional head-tracking capabilities through a Gradio web UI with live transcripts.
## Architecture

The app follows a layered architecture connecting the user, AI services, and robot hardware.
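
Roughly, the layers look like this (an illustrative sketch based on the overview above; the labels are descriptive, not code identifiers):

```
User ── microphone / camera / Gradio UI or console
  │   audio + video in, transcripts + speech out
  ▼
AI services ── OpenAI realtime API, async tool dispatch,
  │            vision (gpt-realtime or local SmolVLM2)
  ▼
Robot ── layered motion system → Reachy Mini daemon → hardware
```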
## Installation
Before using this app, you need to install Reachy Mini's SDK.
Windows support is currently experimental and has not been extensively tested. Use with caution.
### Using uv (recommended)
Set up the project quickly using uv:
```bash
# macOS (Homebrew)
uv venv --python /opt/homebrew/bin/python3.12 .venv

# Linux / Windows (Python in PATH)
uv venv --python python3.12 .venv

source .venv/bin/activate
uv sync
```
Note: To reproduce the exact dependency set from this repo's `uv.lock`, run `uv sync --frozen`. This ensures `uv` installs directly from the lockfile without re-resolving or updating any versions.
Install optional features:
```bash
uv sync --extra reachy_mini_wireless  # Wireless Reachy Mini with GStreamer support
uv sync --extra local_vision          # Local PyTorch/Transformers vision
uv sync --extra yolo_vision           # YOLO-based head tracking
uv sync --extra mediapipe_vision      # MediaPipe-based head tracking
uv sync --extra all_vision            # All vision features
```
Combine extras or include dev dependencies:
```bash
uv sync --extra all_vision --group dev
```
### Using pip
```bash
python -m venv .venv
source .venv/bin/activate
pip install -e .
```
Install optional features:
```bash
pip install -e .[reachy_mini_wireless]  # Wireless Reachy Mini
pip install -e .[local_vision]          # Local vision stack
pip install -e .[yolo_vision]           # YOLO-based vision
pip install -e .[mediapipe_vision]      # MediaPipe-based vision
pip install -e .[all_vision]            # All vision features
pip install -e .[dev]                   # Development tools
```
Some wheels (like PyTorch) are large and require compatible CUDA or CPU builds—make sure your platform matches the binaries pulled in by each extra.
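For example, on a CPU-only Linux machine, one common approach is to install a CPU build of PyTorch from PyTorch's public wheel index before pulling the vision extra, so pip doesn't fetch CUDA wheels (adjust the index URL for your platform):

```bash
# Install a CPU-only PyTorch build first, then the vision extra
pip install torch --index-url https://download.pytorch.org/whl/cpu
pip install -e .[local_vision]
```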
### Optional dependency groups

| Extra | Purpose | Notes |
|---|---|---|
| `reachy_mini_wireless` | Wireless Reachy Mini with GStreamer support | Required for wireless versions of Reachy Mini; includes GStreamer dependencies. |
| `local_vision` | Run the local VLM (SmolVLM2) through PyTorch/Transformers | GPU recommended. Ensure compatible PyTorch builds for your platform. |
| `yolo_vision` | YOLOv11n head tracking via `ultralytics` and `supervision` | Runs on CPU (default); GPU improves performance. Enables the `--head-tracker yolo` option. |
| `mediapipe_vision` | Lightweight landmark tracking with MediaPipe | Works on CPU. Enables `--head-tracker mediapipe`. |
| `all_vision` | Convenience alias installing every vision extra | Install when you want the flexibility to experiment with every provider. |
| `dev` | Developer tooling (pytest, ruff, mypy) | Development-only dependencies. Use `--group dev` with uv or `[dev]` with pip. |

Note: `dev` is a dependency group (not an optional dependency). With uv, use `--group dev`. With pip, use `[dev]`.
## Configuration
- Copy `.env.example` to `.env`
- Fill in required values, notably the OpenAI API key

| Variable | Description |
|---|---|
| `OPENAI_API_KEY` | Required. Grants access to the OpenAI realtime endpoint. |
| `MODEL_NAME` | Override the realtime model (defaults to `gpt-realtime`). Used for both conversation and vision (unless the `--local-vision` flag is used). |
| `HF_HOME` | Cache directory for local Hugging Face downloads (only used with `--local-vision`; defaults to `./cache`). |
| `HF_TOKEN` | Optional token for Hugging Face access (for gated/private assets). |
| `LOCAL_VISION_MODEL` | Hugging Face model path for local vision processing (only used with `--local-vision`; defaults to `HuggingFaceTB/SmolVLM2-2.2B-Instruct`). |
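
For reference, a minimal `.env` might look like this (the key value is a placeholder; the other entries just restate the defaults from the table above):

```bash
# Required
OPENAI_API_KEY=sk-...your-key...

# Optional overrides (defaults shown)
MODEL_NAME=gpt-realtime
HF_HOME=./cache
LOCAL_VISION_MODEL=HuggingFaceTB/SmolVLM2-2.2B-Instruct
# HF_TOKEN=hf_...
```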
## Running the app
Activate your virtual environment, then launch:
```bash
reachy-mini-conversation-app
```
Make sure the Reachy Mini daemon is running before launching the app. If you see a `TimeoutError`, the daemon isn't started; see Reachy Mini's SDK for setup instructions.
The app runs in console mode by default. Add `--gradio` to launch a web UI at http://127.0.0.1:7860/ (required for simulation mode). Vision and head-tracking options are described in the CLI table below.
### CLI options

| Option | Default | Description |
|---|---|---|
| `--head-tracker {yolo,mediapipe}` | `None` | Select a head-tracking backend when a camera is available. YOLO is implemented locally; MediaPipe comes from the `reachy_mini_toolbox` package. Requires the matching optional extra. |
| `--no-camera` | `False` | Run without camera capture or head tracking. |
| `--local-vision` | `False` | Use the local vision model (SmolVLM2) for periodic image processing instead of gpt-realtime vision. Requires the `local_vision` extra to be installed. |
| `--gradio` | `False` | Launch the Gradio web UI. Without this flag, the app runs in console mode. Required when running in simulation mode. |
| `--robot-name` | `None` | Optional. Connect to a specific robot by name when running multiple daemons on the same subnet. See "Multiple robots on the same subnet". |
| `--debug` | `False` | Enable verbose logging for troubleshooting. |
### Examples
```bash
# Run with MediaPipe head tracking
reachy-mini-conversation-app --head-tracker mediapipe

# Run with local vision processing (requires the local_vision extra)
reachy-mini-conversation-app --local-vision

# Audio-only conversation (no camera)
reachy-mini-conversation-app --no-camera

# Launch with the Gradio web interface
reachy-mini-conversation-app --gradio
```
## LLM tools exposed to the assistant

| Tool | Action | Dependencies |
|---|---|---|
| `move_head` | Queue a head pose change (left/right/up/down/front). | Core install only. |
| `camera` | Capture the latest camera frame and send it to gpt-realtime for vision analysis. | Requires the camera worker. Uses gpt-realtime vision by default. |
| `head_tracking` | Enable or disable head-tracking offsets (no identity recognition; only detects and tracks head position). | Camera worker with a configured head tracker (`--head-tracker`). |
| `dance` | Queue a dance from `reachy_mini_dances_library`. | Core install only. |
| `stop_dance` | Clear queued dances. | Core install only. |
| `play_emotion` | Play a recorded emotion clip via Hugging Face datasets. | Core install only. Uses the default open emotions dataset: `pollen-robotics/reachy-mini-emotions-library`. |
| `stop_emotion` | Clear queued emotions. | Core install only. |
| `do_nothing` | Explicitly remain idle. | Core install only. |
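
Each tool is surfaced to the realtime session as a function definition. As an illustrative sketch only (the exact schema the app registers may differ), `move_head` could be declared roughly like this:

```json
{
  "type": "function",
  "name": "move_head",
  "description": "Queue a head pose change.",
  "parameters": {
    "type": "object",
    "properties": {
      "direction": {
        "type": "string",
        "enum": ["left", "right", "up", "down", "front"]
      }
    },
    "required": ["direction"]
  }
}
```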
## Advanced features
Built-in motion content is published as open Hugging Face datasets:
- Emotions: `pollen-robotics/reachy-mini-emotions-library`
- Dances: `pollen-robotics/reachy-mini-dances-library`
### Custom profiles
Create custom profiles with dedicated instructions and enabled tools.
Set `REACHY_MINI_CUSTOM_PROFILE=<name>` to load `src/anima_conversation_app/profiles/<name>/` (see `.env.example`). If unset, the default profile is used.
Each profile should include `instructions.txt` (the prompt text); a `tools.txt` listing the allowed tools is recommended. If `tools.txt` is missing for a non-default profile, the app falls back to `profiles/default/tools.txt`. Profiles can optionally contain custom tool implementations.
Custom instructions:
Write plain-text prompts in `instructions.txt`. To reuse shared prompt pieces, add lines like:

```
[passion_for_lobster_jokes]
[identities/witty_identity]
```

Each placeholder pulls in the matching file under `src/anima_conversation_app/prompts/` (nested paths allowed). See `src/anima_conversation_app/profiles/example/` for a reference layout.
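
Putting it together, a hypothetical profile folder (the `my_profile` name and `greet.py` tool are examples, not shipped files) might look like:

```
src/anima_conversation_app/profiles/my_profile/
├── instructions.txt   # prompt text, may reference shared [placeholder] pieces
├── tools.txt          # enabled tools, one per line
└── greet.py           # optional custom tool implementation
```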
Enabling tools:
List enabled tools in `tools.txt`, one per line. Prefix a line with `#` to comment it out:

```
play_emotion
# move_head
# My custom tool defined locally
sweep_look
```
Tools are resolved first from Python files in the profile folder (custom tools), then from the core library `src/anima_conversation_app/tools/` (e.g. `dance`, `head_tracking`).
Custom tools:
On top of the built-in tools in the core library, you can implement custom tools specific to your profile by adding Python files to the profile folder. Custom tools must subclass `anima_conversation_app.tools.core_tools.Tool` (see `profiles/example/sweep_look.py`), as sketched below.
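
A minimal sketch of such a tool, with a hypothetical `wave_hello` tool: the attribute and method names below are assumptions for illustration, so check `profiles/example/sweep_look.py` for the actual `Tool` interface.

```python
# my_profile/wave_hello.py -- hypothetical custom tool (illustrative only;
# see profiles/example/sweep_look.py for the real Tool interface).
from anima_conversation_app.tools.core_tools import Tool


class WaveHello(Tool):
    """Let the assistant trigger a short greeting motion."""

    name = "wave_hello"  # the name to list in tools.txt (assumed convention)
    description = "Wave at the person in front of the robot."

    async def run(self) -> str:
        # Queue a motion here (e.g. a head move or emotion clip) and
        # return a short status string for the conversation transcript.
        return "Waved hello!"
```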
Edit personalities from the UI:
When running with --gradio, open the "Personality" accordion:
- Select among available profiles (folders under `src/anima_conversation_app/profiles/`) or the built-in default.
- Click "Apply" to update the current session instructions live.
- Create a new personality by entering a name and instructions text. This stores files under `profiles/<name>/` and copies `tools.txt` from the `default` profile.
Note: The "Personality" panel updates the conversation instructions. Tool sets are loaded at startup from `tools.txt` and are not hot-reloaded.
### Locked profile mode
To create a locked variant of the app that cannot switch profiles, edit `src/anima_conversation_app/config.py` and set the `LOCKED_PROFILE` constant to the desired profile name:

```python
LOCKED_PROFILE: str | None = "mars_rover"  # Lock to this profile
```
When `LOCKED_PROFILE` is set, the app always uses that profile and ignores the `REACHY_MINI_CUSTOM_PROFILE` env var; the Gradio UI shows "(locked)" and disables all profile editing controls.
This is useful for creating dedicated clones of the app with a fixed personality: clone scripts can simply edit this constant to lock the variant.
### External profiles and tools
You can extend the app with profiles and tools stored outside `src/anima_conversation_app/`.
- Core profiles are under `src/anima_conversation_app/profiles/`.
- Core tools are under `src/anima_conversation_app/tools/`.
Recommended layout:
```
external_content/
├── external_profiles/
│   └── my_profile/
│       ├── instructions.txt
│       ├── tools.txt        # optional (see fallback behavior below)
│       └── voice.txt        # optional
└── external_tools/
    └── my_custom_tool.py
```
Environment variables:
Set these values in your `.env` (copy from `.env.example`):
```bash
REACHY_MINI_CUSTOM_PROFILE=my_profile
REACHY_MINI_EXTERNAL_PROFILES_DIRECTORY=./external_content/external_profiles
REACHY_MINI_EXTERNAL_TOOLS_DIRECTORY=./external_content/external_tools

# Optional convenience mode:
# AUTOLOAD_EXTERNAL_TOOLS=1
```
Loading behavior:
- Default/strict mode: `tools.txt` defines the enabled tools explicitly. Every name in `tools.txt` must resolve to either a built-in tool (`src/anima_conversation_app/tools/`) or an external tool module in `REACHY_MINI_EXTERNAL_TOOLS_DIRECTORY` (see the example after this list).
- Convenience mode (`AUTOLOAD_EXTERNAL_TOOLS=1`): all valid `*.py` tool files in `REACHY_MINI_EXTERNAL_TOOLS_DIRECTORY` are auto-added.
- External profile fallback: if the selected external profile has no `tools.txt`, the app falls back to the built-in `profiles/default/tools.txt`.
This supports both:
- Downloaded external tools used with the built-in/default profile.
- Downloaded external profiles used with the built-in default tools.
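
For example, an external profile's `tools.txt` in strict mode could mix built-in and external names, referring to the `my_custom_tool.py` from the layout above:

```
# Built-in tool from the core library
play_emotion
# Resolved from REACHY_MINI_EXTERNAL_TOOLS_DIRECTORY
my_custom_tool
```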
### Multiple robots on the same subnet
If you run multiple Reachy Mini daemons on the same network, use:
```bash
reachy-mini-conversation-app --robot-name <name>
```

`<name>` must match the daemon's `--robot-name` value so the app connects to the correct robot.
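
For example, with two daemons on the subnet, you might connect like this (`lab-mini` is a placeholder; how the daemon itself is given its `--robot-name` depends on your Reachy Mini SDK setup):

```bash
# A daemon for the robot "lab-mini" is already running on the network
reachy-mini-conversation-app --robot-name lab-mini
```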
## Contributing
We welcome bug fixes, features, profiles, and documentation improvements. Please review our contribution guide for branch conventions, quality checks, and PR workflow.
Quick start:
- Fork and clone the repo
- Follow the installation steps (include the `dev` dependency group)
- Run the contributor checks listed in CONTRIBUTING.md
## License
Apache 2.0
