The most complete web dashboard for Pollen Robotics' Reachy Mini. 146 APIs, 31 AI tools, computer vision, 81 emotions, MuJoCo 3D simulation, voice pipeline, and full system telemetry — all from your browser.
Full dashboard walkthrough — every feature in under 2 minutes
Complete visibility into your robot's hardware state with live-updating charts and diagnostics.
Nine real-time charts streaming over WebSocket at configurable rates. See your robot's health at a glance.
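A minimal Python consumer for one of these streams could look like the sketch below. The `/ws/telemetry` path and the field names are assumptions; check the dashboard's WebSocket routes for the real ones.

```python
import asyncio
import json

import websockets  # pip install websockets


async def stream_telemetry():
    # Hypothetical endpoint and message schema; substitute the real route.
    uri = "ws://reachy-mini.local:8042/ws/telemetry"
    async with websockets.connect(uri) as ws:
        async for message in ws:
            sample = json.loads(message)
            print(sample.get("cpu_temp"), sample.get("cpu_load"))


asyncio.run(stream_telemetry())
```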
Live streaming charts for every degree of freedom in Reachy's Stewart platform head mechanism.
Draggable, resizable floating panels let you compose your ideal workspace.
Two persistent panels hover above every tab, creating a fully customisable workspace.
Real-time 3D visualization of Reachy Mini powered by MuJoCo and Three.js. The robot's pose mirrors the physical hardware in real time with smooth-shaded GLB meshes.
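The browser renderer is MuJoCo + Three.js, but the mirroring idea itself is easy to sketch in Python with the `mujoco` bindings. The model path, joint ordering, and telemetry fields below are assumptions, not the dashboard's actual code.

```python
import mujoco

# Assumed MJCF path; the dashboard ships its own Reachy Mini model.
model = mujoco.MjModel.from_xml_path("reachy_mini.xml")
data = mujoco.MjData(model)


def mirror_pose(joint_positions: list[float]) -> None:
    """Copy the latest hardware joint readings into the simulation state."""
    data.qpos[: len(joint_positions)] = joint_positions
    mujoco.mj_forward(model, data)  # recompute kinematics without stepping physics
```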
Intuitive joystick controls for head movement (L/XY/R), look direction, Z/Roll, body rotation, and a reset button. Multiple camera feeds in floating panels — robot camera, FaceTime, and iPhone via WebRTC.
Use your iPhone as a wireless camera via Apple Continuity Camera, your MacBook's FaceTime HD camera, or Reachy's onboard camera — all selectable from a single dropdown. Every video source works with YOLO vision, Follow mode, and recording.
Since the dashboard runs on HTTP (not HTTPS), Chrome blocks camera access by default. A one-time flag enables WebRTC on your local network — paste your dashboard URL (e.g. http://reachy-mini.local:8042) into chrome://flags/#unsafely-treat-insecure-origin-as-secure and set it to Enabled.
A rich library of pre-built movements organized by mood category, with live preview in the 3D simulation.
Browse and trigger 81 unique emotional expressions, filterable by mood categories.
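Emotions can presumably also be triggered over the REST API. The route and payload below are purely hypothetical, included only to show the shape of such a call; consult the API reference for the actual endpoint.

```python
import requests

BASE = "http://reachy-mini.local:8042"

# Hypothetical route and payload: check the API reference for the real
# emotion endpoint and the list of valid expression names.
resp = requests.post(f"{BASE}/api/emotions/play", json={"name": "curious"}, timeout=5)
resp.raise_for_status()
```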
Continuous, looping head movement patterns with adjustable amplitude and speed for natural idle behaviour.
Chat with Reachy using multi-provider LLMs. The AI can express emotions, set timers, create content, and control the robot.
Full conversation interface with timestamped message log showing confidence and volume. Powered by LiteLLM for unified access to Anthropic, OpenAI, Ollama, and more. The AI triggers emotional expressions automatically based on context.
The LLM can call robot-native tools: set timers, trigger emotions, create scratchpad content, play music, control lights, take snapshots, detect objects, and more. Tool calls are displayed inline in the conversation with full transparency.
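Because the backend goes through LiteLLM, the tool-calling loop follows the standard OpenAI-style schema. A minimal sketch with one illustrative tool, whose name and parameters are assumptions:

```python
import litellm

# One illustrative tool; the dashboard exposes many more (timers, snapshots,
# lights, music, ...). This particular schema is an assumption.
tools = [{
    "type": "function",
    "function": {
        "name": "trigger_emotion",
        "description": "Play one of Reachy's emotional expressions.",
        "parameters": {
            "type": "object",
            "properties": {"name": {"type": "string"}},
            "required": ["name"],
        },
    },
}]

response = litellm.completion(
    model="anthropic/claude-3-5-sonnet-20240620",  # any LiteLLM-supported model string works
    messages=[{"role": "user", "content": "You look bored. Cheer up!"}],
    tools=tools,
)

for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```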
The AI creates rich HTML content on demand: recipes, diagrams, charts, and more — all rendered beautifully in the Scratchpad tab.
Ask Reachy for a pancake recipe and it creates a beautifully formatted page with ingredients, step-by-step instructions, and tips — complete with a themed robot illustration.
Interactive architecture diagrams rendered with Mermaid.js, pie charts, activity heatmaps, and more. The AI can generate visual documentation of the robot's own architecture.
Set timers via conversation or the UI. Built-in ambient sound library for focus, relaxation, or atmosphere.
A complete timer and alarm system with rich sound options.
On-device and in-browser computer vision with pose estimation, object detection, and segmentation.
Three processing modes: Off, CM4 (on-robot), and WebGPU (in-browser via WASM). Choose from pose keypoints, detection, or segmentation tasks. Adjustable FPS, confidence threshold, and model size selection.
Blue skeleton overlay tracks body keypoints in real time from any connected camera. Supports Follow mode where the robot tracks detected poses. Works with robot camera, FaceTime, or iPhone sources.
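The exact CM4 and WebGPU pipelines aren't shown here, but the on-robot mode is roughly equivalent to running a small YOLO pose model over camera frames. A Python approximation using the Ultralytics package (model file and camera index are assumptions):

```python
import cv2
from ultralytics import YOLO  # pip install ultralytics

# Assumed pose model and camera index; the dashboard's CM4 and WebGPU modes
# may use different weights and pre/post-processing.
model = YOLO("yolov8n-pose.pt")
cap = cv2.VideoCapture(0)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = model(frame, conf=0.5, verbose=False)
    annotated = results[0].plot()  # draws the keypoint skeleton overlay
    cv2.imshow("pose", annotated)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```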
Full bash terminal access via WebSocket PTY with shared tmux sessions.
A fully interactive terminal embedded in the dashboard, powered by tmux for persistent, shared sessions.
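The mechanism behind the shared terminal is straightforward to sketch: run tmux on a pseudo-terminal and relay the master side over the WebSocket. The session name and the echo-to-stdout stand-in below are assumptions, not the project's actual implementation.

```python
import os
import pty
import select
import subprocess
import sys

# Attach to (or create) a shared tmux session on a pseudo-terminal.
# In the dashboard, bytes read from master_fd would be sent to the browser
# over a WebSocket; here they are simply echoed to stdout.
master_fd, slave_fd = pty.openpty()
proc = subprocess.Popen(
    ["tmux", "new-session", "-A", "-s", "dashboard"],  # -A attaches if it already exists
    stdin=slave_fd, stdout=slave_fd, stderr=slave_fd, close_fds=True,
)
os.close(slave_fd)

while proc.poll() is None:
    ready, _, _ = select.select([master_fd], [], [], 0.1)
    if ready:
        try:
            chunk = os.read(master_fd, 4096)
        except OSError:
            break  # the slave side closed; tmux has exited
        sys.stdout.buffer.write(chunk)
        sys.stdout.buffer.flush()

os.close(master_fd)
```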
Capture, record, and playback media directly from the dashboard with thumbnail previews and metadata.
A built-in media manager for all your robot's captures and audio content.
Fine-grained audio control with channel mixing and Bluetooth A2DP device management.
Complete audio pipeline management with per-channel volume control.
Full control over voice recognition thresholds and the LLM's personality through an editable system prompt.
Adjust voice recognition sensitivity and customise Reachy's AI personality.
Personalise the 3D simulation with procedural environments and paint Reachy in country flags, metallic finishes, or vibrant gradients.
A rich collection of 3D environments and robot textures, instantly switchable from the Appearance settings.
Crystal Cave
Clouds
Brazil Skin
Pure Python + vanilla JavaScript. No framework, no bundler, no build step.
146 REST endpoints + 6 WebSockets
Zero frontend dependencies, single-page app
Physics-accurate 3D simulation
Real-time vision in browser
Voice recognition pipeline
Text-to-speech engine
Multi-provider AI (Claude, GPT, Ollama)
All real-time data, zero polling