OpenClaw Embodiment SDK: giving Reachy an agentic brain (Apache 2.0 HAL)
Most embodied AI frameworks are built around the agent: the agent decides, the hardware executes. The device is a peripheral. We think that's backwards -- and built something to prove it.
What it is: The OpenClaw Embodiment SDK is a HAL (hardware abstraction layer) that sits between physical hardware and the OpenClaw multi-agent runtime. It wraps Reachy's native SDK and surfaces sensor capture, context delivery, and physical actuation as clean Python ABCs, so agent logic never couples to device specifics. The SDK targets Reachy Mini (Lite and Wireless) and Reachy 2 -- same HAL ABCs, different hardware surfaces.
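To make the "clean Python ABCs" point concrete, here is a rough sketch of the shape such a contract might take. The class and method names below are illustrative assumptions, not the SDK's actual interface:

```python
from abc import ABC, abstractmethod

class CameraHAL(ABC):
    """Hypothetical capture-side HAL contract.

    Agent code depends only on this ABC; each device profile
    supplies its own concrete implementation.
    """

    @abstractmethod
    def start_capture(self, fps: int) -> None:
        """Begin streaming frames at the requested rate."""

    @abstractmethod
    def read_frame(self) -> bytes:
        """Return the latest captured frame as raw bytes."""

class MockCamera(CameraHAL):
    """Stand-in implementation, e.g. for tests without hardware."""

    def start_capture(self, fps: int) -> None:
        self.fps = fps

    def read_frame(self) -> bytes:
        return b"\x00" * 16  # placeholder frame data

# Agent-side code only ever sees the ABC type.
cam: CameraHAL = MockCamera()
cam.start_capture(fps=15)
frame = cam.read_frame()
```

Because the agent runtime programs against the ABC, swapping the mock for a real Reachy camera implementation requires no changes on the agent side.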
Architecture: device-to-agent, not command-control. The standard approach issues commands from an agent down to hardware. We flip that. Hardware events stream asynchronous context to the agent runtime, decoupling physical capture rates from LLM inference latency. The loop is: device notices -> agent thinks -> device acts. The HAL buffers incoming sensor events and queues actuator commands so the robot is never blocked waiting on an inference cycle.
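The buffering idea can be pictured with a plain bounded queue between a fast sensor producer and a slow agent consumer. This is a generic sketch of the pattern, not the HAL's actual buffering code:

```python
import queue
import threading

# Bounded buffer: the device never blocks on a slow inference cycle.
events: "queue.Queue[str | None]" = queue.Queue(maxsize=64)

def sensor_loop(n: int) -> None:
    # Fast producer: hardware pushes context events at capture rate.
    for i in range(n):
        try:
            events.put_nowait(f"frame-{i}")
        except queue.Full:
            pass  # drop the event rather than stall the device

def agent_loop(out: list) -> None:
    # Slow consumer: the agent drains events at its own pace.
    while True:
        item = events.get()
        if item is None:  # sentinel: shut down
            break
        out.append(item)

seen: list = []
worker = threading.Thread(target=agent_loop, args=(seen,))
worker.start()
sensor_loop(10)
events.put(None)
worker.join()
```

The key property is the `put_nowait` on the device side: when the agent falls behind, the device drops (or, in a real HAL, coalesces) events instead of blocking, so capture rate stays decoupled from inference latency.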
Reachy Mini HAL -- 7 classes, fully implemented:
- ReachyMotionTracker -- head orientation and motion tracking
- ReachyCameraHAL -- computer vision capture and frame delivery
- ReachyMicrophoneHAL -- audio capture with configurable sample rates
- ReachyAudioOutputHAL -- TTS and audio playback via Reachy speakers
- ReachyDisplayHAL -- face display rendering (expressions, overlays)
- ReachyTransportHAL -- message transport between SDK and OpenClaw runtime
- ReachyActuatorHAL -- motor control (head, arms, and mobility joints), built on top of Reachy's native SDK to preserve hardware safety limits and smooth trajectories
PowerHal (battery state monitoring with low-battery callbacks) is included as a standalone ABC -- not Reachy-specific, applies across all device profiles.
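A low-battery callback contract along these lines is easy to picture. This is a hedged sketch; the class and method names are assumptions, not PowerHal's real API:

```python
from typing import Callable

class SimpleBatteryMonitor:
    """Illustrative stand-in for a PowerHal-style implementation."""

    def __init__(self, threshold_pct: float) -> None:
        self.threshold_pct = threshold_pct
        self._callbacks: list[Callable[[float], None]] = []

    def on_low_battery(self, cb: Callable[[float], None]) -> None:
        """Register a callback to fire on low-battery readings."""
        self._callbacks.append(cb)

    def report_level(self, pct: float) -> None:
        # Fire callbacks only when the reading is at or below threshold.
        if pct <= self.threshold_pct:
            for cb in self._callbacks:
                cb(pct)

alerts: list[float] = []
mon = SimpleBatteryMonitor(threshold_pct=20.0)
mon.on_low_battery(alerts.append)
mon.report_level(55.0)  # above threshold: no alert
mon.report_level(15.0)  # at/below threshold: alert fires
```

Keeping this contract device-agnostic is what lets one battery-monitoring ABC apply across all device profiles.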
We are also writing the Reachy 2 HAL -- same ABCs, a richer actuator map for the full humanoid platform: 7-DOF arms × 2, grippers, 3-DOF neck, stereo cameras, and an optional mobile base. The reachy2 device profile is in progress alongside the Reachy Mini Wireless profile, whose built-in IMU lets ReachyMotionTracker use real sensor data rather than an encoder-based proxy.
The SDK ships with 6 device profiles: Reachy Mini, Raspberry Pi 5 + PiCamera, Pi Zero 2W, Luxonis OAK-D, Brilliant Labs Frame, and Even Realities G2. The Reachy Mini profile is hardware-validated on CM5. The remaining five profiles are spec-based -- HAL code complete, hardware validation not yet run. One-line setup:
from openclaw_wearable.profiles import load_profile
config = load_profile("reachy-mini", host="127.0.0.1")
Status:
- Apache 2.0, fully open source
- Tested on Raspberry Pi Compute Module 5 (CM5)
- Gate 3 complete: CM5 hardware validation + 6 device profiles + 22+ tests passing
- SDK: github.com/mmartoccia/openclaw-embodiment
- Quick health check:
pip install openclaw-embodiment && openclaw-embodiment doctor
On the roadmap:
- Sound-oriented context capture: mic array DoA -> orient head toward sound source -> visual capture. "Reachy hears something, turns toward it, looks."
- TriggerArbiter: multi-modal fusion (audio + visual triggers with priority policy)
- Frame and Even G2: HAL profiles ship in the SDK (spec-based); hardware validation not yet run -- hardware contributions welcome
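The "hears something, turns toward it, looks" loop can be sketched as a pure function from a microphone-array DoA estimate to a clamped head-yaw target. This is a toy sketch with an assumed mechanical limit, not the planned implementation:

```python
def doa_to_head_yaw(doa_deg: float, yaw_limit_deg: float = 60.0) -> float:
    """Map a direction-of-arrival estimate to a head yaw command.

    Normalizes the angle to [-180, 180) and clamps it to the head's
    mechanical range (assumed +/-60 degrees here), so the actuator
    layer never receives an out-of-limits target.
    """
    yaw = ((doa_deg + 180.0) % 360.0) - 180.0
    return max(-yaw_limit_deg, min(yaw_limit_deg, yaw))

# Hear something off to the left, command the head toward it,
# then trigger visual capture at the new orientation.
target = doa_to_head_yaw(-35.0)   # within range: passed through
clipped = doa_to_head_yaw(135.0)  # beyond range: clamped to +60
```

In the actual pipeline the clamped yaw would go to the actuator HAL, and frame capture would fire once the head settles; the arithmetic above is just the orientation step.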
Looking for collaborators:
Reachy Mini consumer lead times are running around 90 days right now, which limits our ability to test outside our own CM5 setup. We are looking for 2-3 developers who already have hardware and want to run multi-agent workflows on Reachy today. If you want early access and a direct line to co-develop edge cases, open an issue on the repo and we will get you set up.
If you are building something with Reachy Mini (Lite or Wireless) or Reachy 2 and OpenClaw might be a fit, drop a reply here or open an issue.