Add files using upload-large-folder tool
- lerobot/docs/source/act.mdx +92 -0
- lerobot/docs/source/async.mdx +312 -0
- lerobot/docs/source/backwardcomp.mdx +151 -0
- lerobot/docs/source/bring_your_own_policies.mdx +175 -0
- lerobot/docs/source/cameras.mdx +206 -0
- lerobot/docs/source/contributing.md +1 -0
- lerobot/docs/source/debug_processor_pipeline.mdx +299 -0
- lerobot/docs/source/earthrover_mini_plus.mdx +225 -0
- lerobot/docs/source/env_processor.mdx +418 -0
- lerobot/docs/source/envhub.mdx +431 -0
- lerobot/docs/source/envhub_isaaclab_arena.mdx +510 -0
- lerobot/docs/source/envhub_leisaac.mdx +302 -0
- lerobot/docs/source/feetech.mdx +71 -0
- lerobot/docs/source/groot.mdx +131 -0
- lerobot/docs/source/hilserl.mdx +923 -0
- lerobot/docs/source/hilserl_sim.mdx +154 -0
- lerobot/docs/source/hope_jr.mdx +277 -0
- lerobot/docs/source/il_robots.mdx +620 -0
- lerobot/docs/source/implement_your_own_processor.mdx +273 -0
- lerobot/docs/source/index.mdx +23 -0
- lerobot/docs/source/installation.mdx +127 -0
- lerobot/docs/source/integrate_hardware.mdx +476 -0
- lerobot/docs/source/introduction_processors.mdx +314 -0
- lerobot/docs/source/koch.mdx +283 -0
- lerobot/docs/source/lekiwi.mdx +337 -0
- lerobot/docs/source/lerobot-dataset-v3.mdx +314 -0
- lerobot/docs/source/libero.mdx +171 -0
- lerobot/docs/source/metaworld.mdx +80 -0
- lerobot/docs/source/multi_gpu_training.mdx +125 -0
- lerobot/docs/source/notebooks.mdx +29 -0
- lerobot/docs/source/peft_training.mdx +62 -0
- lerobot/docs/source/phone_teleop.mdx +191 -0
- lerobot/docs/source/pi0.mdx +101 -0
- lerobot/docs/source/pi05.mdx +123 -0
- lerobot/docs/source/pi0fast.mdx +246 -0
- lerobot/docs/source/policy_act_README.md +14 -0
- lerobot/docs/source/policy_diffusion_README.md +14 -0
- lerobot/docs/source/policy_groot_README.md +27 -0
- lerobot/docs/source/policy_smolvla_README.md +14 -0
- lerobot/docs/source/policy_tdmpc_README.md +14 -0
- lerobot/docs/source/policy_vqbet_README.md +14 -0
- lerobot/docs/source/policy_walloss_README.md +45 -0
- lerobot/docs/source/porting_datasets_v3.mdx +321 -0
- lerobot/docs/source/processors_robots_teleop.mdx +151 -0
- lerobot/docs/source/reachy2.mdx +303 -0
- lerobot/docs/source/rtc.mdx +188 -0
- lerobot/docs/source/sarm.mdx +592 -0
- lerobot/docs/source/smolvla.mdx +116 -0
- lerobot/docs/source/so100.mdx +640 -0
- lerobot/docs/source/so101.mdx +436 -0
lerobot/docs/source/act.mdx
ADDED
@@ -0,0 +1,92 @@

# ACT (Action Chunking with Transformers)

ACT is a **lightweight and efficient policy for imitation learning**, especially well-suited for fine-grained manipulation tasks. It's the **first model we recommend when you're starting out** with LeRobot due to its fast training time, low computational requirements, and strong performance.

<div class="video-container">
  <iframe
    width="100%"
    height="415"
    src="https://www.youtube.com/embed/ft73x0LfGpM"
    title="LeRobot ACT Tutorial"
    frameborder="0"
    allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
    allowfullscreen
  ></iframe>
</div>

_Watch this tutorial from the LeRobot team to learn how ACT works: [LeRobot ACT Tutorial](https://www.youtube.com/watch?v=ft73x0LfGpM)_

## Model Overview

Action Chunking with Transformers (ACT) was introduced in the paper [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://arxiv.org/abs/2304.13705) by Zhao et al. The policy was designed to enable precise, contact-rich manipulation tasks using affordable hardware and minimal demonstration data.

### Why ACT is Great for Beginners

ACT stands out as an excellent starting point for several reasons:

- **Fast Training**: Trains in a few hours on a single GPU
- **Lightweight**: Only ~80M parameters, making it efficient and easy to work with
- **Data Efficient**: Often achieves high success rates with just 50 demonstrations

### Architecture

ACT uses a transformer-based architecture with three main components:

1. **Vision Backbone**: A ResNet-18 processes images from multiple camera viewpoints
2. **Transformer Encoder**: Synthesizes information from camera features, joint positions, and a learned latent variable
3. **Transformer Decoder**: Generates coherent action sequences using cross-attention

The policy takes as input:

- Multiple RGB images (e.g., from wrist cameras, front/top cameras)
- Current robot joint positions
- A latent style variable `z` (learned during training, set to zero during inference)

It then outputs a chunk of `k` future actions.
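At execution time, the chunks predicted at successive timesteps overlap, and the ACT paper blends every prediction made for the same timestep with exponential weights (temporal ensembling). Below is a minimal NumPy sketch of that blending step; `temporal_ensemble` and its arguments are illustrative, not LeRobot's actual implementation:

```python
import numpy as np

def temporal_ensemble(chunks, t, m=0.1):
    """Blend every prediction made for timestep t into one action.

    chunks: dict {t0: (k, action_dim) array} -- the chunk predicted at timestep t0.
    Weights follow the paper's exp(-m * i) scheme, with i = 0 for the oldest
    prediction, so earlier predictions count slightly more.
    """
    preds = []
    for t0 in sorted(chunks):              # oldest prediction first
        chunk = chunks[t0]
        if t0 <= t < t0 + len(chunk):      # this chunk covers timestep t
            preds.append(chunk[t - t0])
    weights = np.exp(-m * np.arange(len(preds)))
    return np.average(np.stack(preds), axis=0, weights=weights / weights.sum())

# two overlapping 4-step chunks, predicted at t=0 and t=2
chunks = {0: np.ones((4, 2)), 2: 3 * np.ones((4, 2))}
action = temporal_ensemble(chunks, t=3)    # blends both predictions for t=3
```

Smaller `m` spreads the weight more evenly across old and new predictions; larger `m` concentrates it on the oldest one.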

## Installation Requirements

1. Install LeRobot by following our [Installation Guide](./installation).
2. ACT is included in the base LeRobot installation, so no additional dependencies are needed!

## Training ACT

ACT works seamlessly with the standard LeRobot training pipeline. Here's a complete example for training ACT on your dataset:

```bash
lerobot-train \
  --dataset.repo_id=${HF_USER}/your_dataset \
  --policy.type=act \
  --output_dir=outputs/train/act_your_dataset \
  --job_name=act_your_dataset \
  --policy.device=cuda \
  --wandb.enable=true \
  --policy.repo_id=${HF_USER}/act_policy
```

### Training Tips

1. **Start with defaults**: ACT's default hyperparameters work well for most tasks
2. **Training duration**: Expect a few hours for 100k training steps on a single GPU
3. **Batch size**: Start with batch size 8 and adjust based on your GPU memory

### Train using Google Colab

If your local computer doesn't have a powerful GPU, you can utilize Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).

## Evaluating ACT

Once training is complete, you can evaluate your ACT policy using the `lerobot-record` command with your trained policy. This will run inference and record evaluation episodes:

```bash
lerobot-record \
  --robot.type=so100_follower \
  --robot.port=/dev/ttyACM0 \
  --robot.id=my_robot \
  --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
  --display_data=true \
  --dataset.repo_id=${HF_USER}/eval_act_your_dataset \
  --dataset.num_episodes=10 \
  --dataset.single_task="Your task description" \
  --policy.path=${HF_USER}/act_policy
```
lerobot/docs/source/async.mdx
ADDED
@@ -0,0 +1,312 @@

# Asynchronous Inference

With our [SmolVLA](https://huggingface.co/papers/2506.01844) we introduced a new way to run inference on real-world robots, **decoupling action prediction from action execution**.
In this tutorial, we'll show how to use asynchronous inference (_async inference_) with a finetuned version of SmolVLA, and with every policy supported by LeRobot.
**Try async inference with all the policies** supported by LeRobot!

**What you'll learn:**

1. Why asynchronous inference matters and how it compares to more traditional, sequential inference.
2. How to spin up a `PolicyServer` and connect a `RobotClient` from the same machine, or even over the network.
3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy.

If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)!

In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions, eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.

---

## Getting started with async inference

You can read more about asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment.

First, install `lerobot` with the `async` extra to pull in the additional dependencies required to run async inference.

```shell
pip install -e ".[async]"
```

Then, spin up a policy server (in one terminal, or on a separate machine), specifying the host address and port for the client to connect to.
You can spin up a policy server by running:

```shell
python -m lerobot.async_inference.policy_server \
    --host=127.0.0.1 \
    --port=8080
```

This will start a policy server listening on `127.0.0.1:8080` (`localhost`, port 8080). At this stage the policy server is empty: which policy to run, and with which parameters, is specified during the first handshake with the client. Spin up a client with:

```shell
python -m lerobot.async_inference.robot_client \
    --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
    --robot.type=so100_follower \ # ROBOT: your robot type
    --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
    --robot.id=follower_so100 \ # ROBOT: your robot id, used to load the calibration file
    --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
    --task="dummy" \ # POLICY: the task to run the policy on (e.g., `Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
    --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc.)
    --pretrained_name_or_path=user/model \ # POLICY: the model name/path on the server to the checkpoint to run (e.g., lerobot/smolvla_base)
    --policy_device=mps \ # POLICY: the device to run the policy on, on the server
    --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
    --chunk_size_threshold=0.5 \ # CLIENT: the queue-size fraction below which a new observation is sent to the server
    --aggregate_fn_name=weighted_average \ # CLIENT: the function used to aggregate actions on overlapping portions
    --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
```

In summary, you need to specify instructions for:

- `SERVER`: the address and port of the policy server
- `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot
- `POLICY`: the type of policy to run, and the model name/path on the server to the checkpoint to run. You also need to specify which device the server should use, and how many actions to output at once (capped at the policy's max actions value).
- `CLIENT`: the threshold on the queue size below which a new observation is sent to the server, and the function used to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters.

Importantly,

- `actions_per_chunk` and `chunk_size_threshold` are the key parameters to tune for your setup.
- `aggregate_fn_name` is the function used to aggregate actions on overlapping portions. You can either add a new one to the registry of functions, or add your own in `robot_client.py` (see [here](NOTE:addlinktoLOC))
- `debug_visualize_queue_size` is a useful tool for tuning the `CLIENT` parameters.
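Concretely, the client's control loop compares the number of actions left in its queue against `actions_per_chunk`: once the ratio falls below `chunk_size_threshold`, a fresh observation goes out. A minimal sketch of that rule (`should_send_observation` and its arguments are illustrative, not the actual `RobotClient` internals):

```python
from collections import deque

def should_send_observation(action_queue: deque, actions_per_chunk: int,
                            chunk_size_threshold: float) -> bool:
    """Decide whether the queue is running low enough to request a new chunk.

    chunk_size_threshold in [0, 1]: values near 0.0 wait until the queue is
    almost empty (close to synchronous); values near 1.0 send an observation
    at nearly every step.
    """
    return len(action_queue) / actions_per_chunk <= chunk_size_threshold

queue = deque(range(20))   # 20 actions left from a 50-action chunk
print(should_send_observation(queue, 50, 0.5))   # 20/50 = 0.4 <= 0.5 -> True
```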

## Done! You should see your robot moving around by now 😉

## Async vs. synchronous inference

Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_: frames where the robot sits idle, awaiting the policy's output, a new action chunk.
In turn, inference is plagued by evident real-time lags, where the robot simply stops acting due to the lack of available actions.
With robotics models increasing in size, this problem risks becoming only more severe.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/sync.png"
    width="80%"
  ></img>
</p>
<p align="center">
  <i>Synchronous inference</i> makes the robot idle while the policy is
  computing the next chunk of actions.
</p>

To overcome this, we designed async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames.
Crucially, with async inference the next action chunk is computed _before_ the current one is exhausted, resulting in no idleness.
Higher adaptability is ensured by aggregating the different action chunks on their overlapping portions, obtaining an up-to-date plan and a tighter control loop.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/async.png"
    width="80%"
  ></img>
</p>
<p align="center">
  <i>Asynchronous inference</i> results in no idleness because the next chunk is
  computed before the current chunk is exhausted.
</p>

---

## Start the Policy Server

Policy servers are wrappers around a `PreTrainedPolicy`, interfacing it with observations coming from a robot client.
Policy servers are initialized as empty containers, which are populated with the requested policy during the initial handshake between the robot client and the policy server.
As such, spinning up a policy server is as easy as specifying a host address and port. If you're running the policy server on the same machine as the robot client, you can use `localhost` as the host address.

<hfoptions id="start_policy_server">
<hfoption id="Command">
```bash
python -m lerobot.async_inference.policy_server \
    --host=127.0.0.1 \
    --port=8080
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.async_inference.configs import PolicyServerConfig
from lerobot.async_inference.policy_server import serve

config = PolicyServerConfig(
    host="localhost",
    port=8080,
)
serve(config)
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

This listens on `localhost:8080` for an incoming connection from the associated `RobotClient`, which will communicate which policy to run during the first client-server handshake.

---

## Launch the Robot Client

`RobotClient` is a wrapper around a `Robot` instance, which it connects to the (possibly remote) `PolicyServer`.
The `RobotClient` streams observations to the `PolicyServer` and receives action chunks obtained by running inference on the server (which we assume has better computational resources than the robot controller).

<hfoptions id="start_robot_client">
<hfoption id="Command">
```bash
python -m lerobot.async_inference.robot_client \
    --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
    --robot.type=so100_follower \ # ROBOT: your robot type
    --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
    --robot.id=follower_so100 \ # ROBOT: your robot id, used to load the calibration file
    --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
    --task="dummy" \ # POLICY: the task to run the policy on (e.g., `Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
    --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc.)
    --pretrained_name_or_path=user/model \ # POLICY: the model name/path on the server to the checkpoint to run (e.g., lerobot/smolvla_base)
    --policy_device=mps \ # POLICY: the device to run the policy on, on the server
    --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
    --chunk_size_threshold=0.5 \ # CLIENT: the queue-size fraction below which a new observation is sent to the server
    --aggregate_fn_name=weighted_average \ # CLIENT: the function used to aggregate actions on overlapping portions
    --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
import threading
from lerobot.robots.so_follower import SO100FollowerConfig
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.async_inference.configs import RobotClientConfig
from lerobot.async_inference.robot_client import RobotClient
from lerobot.async_inference.helpers import visualize_action_queue_size

# 1. Create the robot configuration
"""Check out the cameras available in your setup by running `python lerobot/find_cameras.py`"""
# these cameras must match the ones expected by the policy
# check the config.json on the Hub for the policy you are using
camera_cfg = {
    "top": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
    "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30)
}

robot_cfg = SO100FollowerConfig(
    port="/dev/tty.usbmodem585A0076841",
    id="follower_so100",
    cameras=camera_cfg
)

# 2. Create the client configuration
client_cfg = RobotClientConfig(
    robot=robot_cfg,
    server_address="localhost:8080",
    policy_device="mps",
    policy_type="smolvla",
    pretrained_name_or_path="<user>/smolvla_async",
    chunk_size_threshold=0.5,
    actions_per_chunk=50,  # make sure this is less than the max actions of the policy
)

# 3. Create and start the client
client = RobotClient(client_cfg)

# 4. Specify the task
task = "Don't do anything, stay still"

if client.start():
    # Start action receiver thread
    action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True)
    action_receiver_thread.start()

    try:
        # Run the control loop
        client.control_loop(task)
    except KeyboardInterrupt:
        client.stop()
        action_receiver_thread.join()
        # (Optionally) plot the action queue size
        visualize_action_queue_size(client.action_queue_size)
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

The following two parameters are key in every setup:

<table>
  <thead>
    <tr>
      <th>Hyperparameter</th>
      <th>Default</th>
      <th>What it does</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>
        <code>actions_per_chunk</code>
      </td>
      <td>50</td>
      <td>
        How many actions the policy outputs at once. Typical values: 10-50.
      </td>
    </tr>
    <tr>
      <td>
        <code>chunk_size_threshold</code>
      </td>
      <td>0.7</td>
      <td>
        When the queue drops to ≤ this fraction of <code>actions_per_chunk</code>,
        the client sends a fresh observation. Value in [0, 1].
      </td>
    </tr>
  </tbody>
</table>

<Tip>
  Different values of `actions_per_chunk` and `chunk_size_threshold` do result
  in different behaviours.
</Tip>

On the one hand, increasing the value of `actions_per_chunk` reduces the likelihood of ending up with no actions to execute, as more actions are available when the new chunk is computed.
However, larger values of `actions_per_chunk` might also result in less precise actions, due to the compounding errors that come with predicting actions over longer timespans.

On the other hand, increasing the value of `chunk_size_threshold` sends observations to the `PolicyServer` for inference more often, resulting in a larger number of updated action chunks that overlap on significant portions. This yields high adaptability: in the limit, one action chunk is predicted for each observation, and each chunk is only marginally consumed before a new one is produced.
This option also puts more pressure on the inference pipeline, as a consequence of the many requests. Conversely, values of `chunk_size_threshold` close to 0.0 collapse to the synchronous edge case, whereby new observations are only sent out when the current chunk is exhausted.

We found the default values of `actions_per_chunk` and `chunk_size_threshold` to work well in the experiments we developed for the [SmolVLA paper](https://huggingface.co/papers/2506.01844), but we recommend experimenting with different values to find the best fit for your setup.
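When a fresh chunk arrives while old actions are still queued, the client has to reconcile the two plans on their overlap. Here is a sketch of one plausible weighted-average aggregation; `aggregate_chunks` and the fixed `new_weight` are an illustration, not the exact `weighted_average` function registered in `robot_client.py`:

```python
import numpy as np

def aggregate_chunks(old_chunk: np.ndarray, new_chunk: np.ndarray,
                     new_weight: float = 0.7) -> np.ndarray:
    """Blend the remainder of the old plan with the head of the new one.

    old_chunk: (n_old, action_dim) actions still queued from the previous chunk.
    new_chunk: (n_new, action_dim) freshly received actions; its first n_old
    steps overlap with the old plan and are averaged, the rest is appended.
    """
    n_overlap = min(len(old_chunk), len(new_chunk))
    blended = (1 - new_weight) * old_chunk[:n_overlap] + new_weight * new_chunk[:n_overlap]
    return np.concatenate([blended, new_chunk[n_overlap:]])

old = np.zeros((3, 2))   # 3 actions left in the queue
new = np.ones((5, 2))    # a fresh 5-action chunk arrives
merged = aggregate_chunks(old, new)
print(merged.shape)      # (5, 2): 3 blended rows, then the 2 new tail rows
```

Leaning more on `new_chunk` (larger `new_weight`) favours the up-to-date plan; leaning on `old_chunk` favours smoothness of the executed trajectory.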

### Tuning async inference for your setup

1. **Choose your computational resources carefully.** [PI0](https://huggingface.co/lerobot/pi0) occupies 14GB of memory at inference time, while [SmolVLA](https://huggingface.co/lerobot/smolvla_base) requires only ~2GB. You should identify the best computational resource for your use case, keeping in mind that smaller policies require fewer resources. The combination of policy and device used (CPU-intensive, using MPS, or the number of CUDA cores on a given NVIDIA GPU) directly impacts the average inference latency you should expect.
2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle: it keeps stepping through its current action queue. If the two processes happen at fundamentally different speeds, the client might end up with an empty queue. As such, you should reduce your fps if you consistently run out of actions in the queue.
3. **Adjust `chunk_size_threshold`**.
   - Values closer to `0.0` result in almost sequential behavior. Values closer to `1.0` send an observation at every step (more bandwidth, and relies on a good world model).
   - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` setting `--debug_visualize_queue_size` to `True`. This will plot the evolution of the action queue size at runtime, which you can use to find the value of `chunk_size_threshold` that works best for your setup.
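As a back-of-the-envelope check of point 2, you can compute how much time the server has to return a new chunk before the queue runs dry. The numbers below (30 fps, 50-action chunks, threshold 0.5) are illustrative, not defaults:

```python
fps = 30                    # control loop frequency (illustrative)
actions_per_chunk = 50
chunk_size_threshold = 0.5  # send a new observation when the queue is half-empty

# A new observation is sent once ~25 actions remain in the queue; the server
# must answer before those 25 actions are consumed at 30 actions per second.
actions_remaining = int(actions_per_chunk * chunk_size_threshold)
latency_budget_s = actions_remaining / fps
print(f"{latency_budget_s:.2f}s")   # 0.83s of inference budget
```

If your server's average round-trip latency exceeds this budget, lower the fps, raise the threshold, or use a larger chunk.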
| 282 |
+
|
| 283 |
+
<p align="center">
|
| 284 |
+
<img
|
| 285 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/queues.png"
|
| 286 |
+
width="80%"
|
| 287 |
+
></img>
|
| 288 |
+
</p>
|
| 289 |
+
<p align="center">
|
| 290 |
+
<i>
|
| 291 |
+
The action queue size is plotted at runtime when the
|
| 292 |
+
`--debug_visualize_queue_size` flag is passed, for various levels of
|
| 293 |
+
`chunk_size_threshold` (`g` in the SmolVLA paper).
|
| 294 |
+
</i>
|
| 295 |
+
</p>
|
| 296 |
+
|
| 297 |
+
---
|
| 298 |
+
|
| 299 |
+
## Conclusion
|
| 300 |
+
|
| 301 |
+
Asynchronous inference represents a significant advancement in real-time robotics control, addressing the fundamental challenge of inference latency that has long plagued robotics applications. Through this tutorial, you've learned how to implement a complete async inference pipeline that eliminates idle frames and enables smoother, more reactive robot behaviors.
|
| 302 |
+
|
| 303 |
+
**Key Takeaways:**
|
| 304 |
+
|
| 305 |
+
- **Paradigm Shift**: Async inference decouples action prediction from execution, allowing robots to continue acting while new action chunks are computed in parallel
|
| 306 |
+
- **Performance Benefits**: Eliminates "wait-for-inference" lags that are inherent in synchronous approaches, becoming increasingly important as policy models grow larger
|
| 307 |
+
- **Flexible Architecture**: The server-client design enables distributed computing, where inference can run on powerful remote hardware while maintaining real-time robot control
|
| 308 |
+
- **Tunable Parameters**: Success depends on properly configuring `actions_per_chunk` and `chunk_size_threshold` for your specific hardware, policy, and task requirements
|
| 309 |
+
- **Universal Compatibility**: Works with all LeRobot-supported policies, from lightweight ACT models to vision-language models like SmolVLA
|
| 310 |
+
|
| 311 |
+
Start experimenting with the default parameters, monitor your action queue sizes, and iteratively refine your setup to achieve optimal performance for your specific use case.
|
| 312 |
+
If you want to discuss this further, hop into our [Discord community](https://discord.gg/s3KuuzsPFb), or open an issue on our [GitHub repository](https://github.com/lerobot/lerobot/issues).
|
lerobot/docs/source/backwardcomp.mdx
ADDED
@@ -0,0 +1,151 @@

# Backward compatibility

## Policy Normalization Migration (PR #1452)

**Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components.

### What changed?

|                            | Before PR #1452                                  | After PR #1452                                               |
| -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
| **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components                |
| **Model State Dict**       | Contains normalization statistics                | **Clean weights only** - no normalization parameters         |
| **Usage**                  | `policy(batch)` handles everything               | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` |

### Impact on existing models

- Models trained **before** PR #1452 have normalization embedded in their weights
- These models need migration to work with the new `PolicyProcessorPipeline` system
- The migration extracts normalization statistics and creates separate processor pipelines

### Migrating old models

Use the migration script to convert models with embedded normalization:

```shell
python src/lerobot/processor/migrate_policy_normalization.py \
    --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
    --push-to-hub \
    --branch migrated
```

The script:

1. **Extracts** normalization statistics from the model weights
2. **Creates** external preprocessor and postprocessor pipelines
3. **Removes** normalization layers from the model weights
4. **Saves** the clean model + processor pipelines
5. **Pushes** to the Hub with automatic PR creation

### Using migrated models

```python
# New usage pattern (after migration)
from lerobot.policies.factory import make_policy, make_pre_post_processors

# Load model and processors separately
policy = make_policy(config, ds_meta=dataset.meta)
preprocessor, postprocessor = make_pre_post_processors(
    policy_cfg=config,
    dataset_stats=dataset.meta.stats
)

# Process data through the pipeline
processed_batch = preprocessor(raw_batch)
action = policy.select_action(processed_batch)
final_action = postprocessor(action)
```
| 58 |
+
|
| 59 |
+
## Hardware API redesign
|
| 60 |
+
|
| 61 |
+
PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot calibration but is **not backward-compatible**. Below is a overview of what changed and how you can continue to work with datasets created before this pull request.
|
| 62 |
+
|
| 63 |
+
### What changed?
|
| 64 |
+
|
| 65 |
+
| | Before PR #777 | After PR #777 |
|
| 66 |
+
| --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
|
| 67 |
+
| **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `–100...100` Gripper: `0...100` |
|
| 68 |
+
| **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In middle of the range for each joint** |
|
| 69 |
+
| **Boundary handling** | Software safeguards to detect ±180 ° wrap-arounds | No wrap-around logic needed due to mid-range zero |
|
| 70 |
+
|
| 71 |
+
---
|
| 72 |
+
|
| 73 |
+
### Impact on existing datasets
|
| 74 |
+
|
| 75 |
+
- Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
|
| 76 |
+
- Joint angles are offset and incorrectly normalized.
|
| 77 |
+
- Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
|
| 78 |
+
|
| 79 |
+
### Using datasets made with the previous calibration system
|
| 80 |
+
|
| 81 |
+
We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`.
|
| 82 |
+
Below we take you through the modifications that are done in the example script to make the previous calibration datasets work.
|
| 83 |
+
|
| 84 |
+
```diff
|
| 85 |
+
+ key = f"{name.removeprefix('main_')}.pos"
|
| 86 |
+
action[key] = action_array[i].item()
|
| 87 |
+
+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
|
| 88 |
+
+ action["elbow_flex.pos"] -= 90
|
| 89 |
+
```
|
| 90 |
+
|
| 91 |
+
Let's break this down.
|
| 92 |
+
New codebase uses `.pos` suffix for the position observations and we have removed `main_` prefix:
|
| 93 |
+
|
| 94 |
+
<!-- prettier-ignore-start -->
|
| 95 |
+
```python
|
| 96 |
+
key = f"{name.removeprefix('main_')}.pos"
|
| 97 |
+
```
|
| 98 |
+
<!-- prettier-ignore-end -->
|
| 99 |
+
|
| 100 |
+
For `"shoulder_lift"` (id = 2), the 0 position is changed by -90 degrees and the direction is reversed compared to old calibration/code.
|
| 101 |
+
|
| 102 |
+
<!-- prettier-ignore-start -->
|
| 103 |
+
```python
|
| 104 |
+
action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
|
| 105 |
+
```
|
| 106 |
+
<!-- prettier-ignore-end -->
|
| 107 |
+
|
| 108 |
+
For `"elbow_flex"` (id = 3), the 0 position is changed by -90 degrees compared to old calibration/code.
|
| 109 |
+
|
| 110 |
+
<!-- prettier-ignore-start -->
|
| 111 |
+
```python
|
| 112 |
+
action["elbow_flex.pos"] -= 90
|
| 113 |
+
```
|
| 114 |
+
<!-- prettier-ignore-end -->
|
| 115 |
+
|
| 116 |
+
To use degrees normalization we then set the `--robot.use_degrees` option to `true`.
|
| 117 |
+
|
| 118 |
+
```diff
|
| 119 |
+
python examples/backward_compatibility/replay.py \
|
| 120 |
+
--robot.type=so101_follower \
|
| 121 |
+
--robot.port=/dev/tty.usbmodem5A460814411 \
|
| 122 |
+
--robot.id=blue \
|
| 123 |
+
+ --robot.use_degrees=true \
|
| 124 |
+
--dataset.repo_id=my_dataset_id \
|
| 125 |
+
--dataset.episode=0
|
| 126 |
+
```
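
Since the same per-joint corrections are needed whenever old-calibration data is consumed, they can be collected into a small helper. This is an illustrative sketch (the function name is not part of LeRobot, and the corrections shown here apply to the SO100/SO101 joints discussed above):

```python
def fix_old_calibration(action: dict[str, float]) -> dict[str, float]:
    """Convert an action from the pre-#777 calibration convention (illustrative helper)."""
    # shoulder_lift: zero position shifted by -90 degrees and direction reversed
    action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
    # elbow_flex: zero position shifted by -90 degrees
    action["elbow_flex.pos"] -= 90
    return action


old_action = {"shoulder_lift.pos": 120.0, "elbow_flex.pos": 135.0}
print(fix_old_calibration(old_action))  # {'shoulder_lift.pos': -30.0, 'elbow_flex.pos': 45.0}
```

Using one helper keeps replay and inference scripts in sync if you later adjust the corrections for your own robot.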

### Using policies trained with the previous calibration system

Policies output actions in the same format as the datasets (`torch.Tensor` values). Therefore, the same transformations should be applied.

To find these transformations, we recommend first replaying an episode of the dataset your policy was trained on, following the section above.
Then, add these same transformations in your inference script (shown here in the `record.py` script):

```diff
action_values = predict_action(
    observation_frame,
    policy,
    get_safe_torch_device(policy.config.device),
    policy.config.use_amp,
    task=single_task,
    robot_type=robot.robot_type,
)
action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)}

+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
+ action["elbow_flex.pos"] -= 90
robot.send_action(action)
```

If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb).

lerobot/docs/source/bring_your_own_policies.mdx
ADDED
@@ -0,0 +1,175 @@
# Bring Your Own Policies

This tutorial explains how to integrate your own custom policy implementations into the LeRobot ecosystem, allowing you to leverage all LeRobot tools for training, evaluation, and deployment while using your own algorithms.

## Step 1: Create a Policy Package

Your custom policy should be organized as an installable Python package following LeRobot's plugin conventions.

### Package Structure

Create a package whose name starts with the `lerobot_policy_` prefix (this is required!) followed by your policy name:

```bash
lerobot_policy_my_custom_policy/
├── pyproject.toml
└── src/
    └── lerobot_policy_my_custom_policy/
        ├── __init__.py
        ├── configuration_my_custom_policy.py
        ├── modeling_my_custom_policy.py
        └── processor_my_custom_policy.py
```
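
Name-prefix plugin discovery of this kind can be sketched with the standard-library `pkgutil` module. This is a standalone illustration of the idea, not LeRobot's actual discovery code, which may differ:

```python
import pkgutil


def discover_policy_plugins(prefix: str = "lerobot_policy_") -> list[str]:
    """Return names of installed top-level modules matching the plugin prefix."""
    return sorted(m.name for m in pkgutil.iter_modules() if m.name.startswith(prefix))


# Lists any installed plugin packages; empty until you install one.
print(discover_policy_plugins())
```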

### Package Configuration

Set up your `pyproject.toml`:

```toml
[project]
name = "lerobot_policy_my_custom_policy"
version = "0.1.0"
dependencies = [
    # your policy-specific dependencies
]
requires-python = ">= 3.11"

[build-system]
# Fill in your build backend and its requirements
build-backend = "..."  # your build backend
requires = ["..."]     # your build system
```

## Step 2: Define the Policy Configuration

Create a configuration class that inherits from `PreTrainedConfig` and registers your policy type:

```python
# configuration_my_custom_policy.py
from dataclasses import dataclass, field

from lerobot.configs.policies import PreTrainedConfig
from lerobot.configs.types import NormalizationMode


@PreTrainedConfig.register_subclass("my_custom_policy")
@dataclass
class MyCustomPolicyConfig(PreTrainedConfig):
    """Configuration class for MyCustomPolicy.

    Args:
        n_obs_steps: Number of observation steps to use as input
        horizon: Action prediction horizon
        n_action_steps: Number of action steps to execute
        hidden_dim: Hidden dimension for the policy network
        # Add your policy-specific parameters here
    """

    # ...PreTrainedConfig fields...

    def __post_init__(self):
        super().__post_init__()
        # Add any validation logic here

    def validate_features(self) -> None:
        """Validate input/output feature compatibility."""
        # Implement validation logic for your policy's requirements
        pass
```
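
The `register_subclass` decorator follows a classic registry pattern: it maps the string you pass on the command line to your config class. A minimal standalone sketch of the idea (illustrative class names, not LeRobot's implementation):

```python
class ConfigRegistry:
    """Toy sketch of the subclass-registry pattern behind `register_subclass`."""

    _registry: dict[str, type] = {}

    @classmethod
    def register_subclass(cls, name: str):
        def decorator(subclass: type) -> type:
            cls._registry[name] = subclass  # map the CLI name to the class
            return subclass
        return decorator

    @classmethod
    def get_choice_class(cls, name: str) -> type:
        return cls._registry[name]


@ConfigRegistry.register_subclass("my_custom_policy")
class MyCustomPolicyConfigSketch(ConfigRegistry):
    pass


print(ConfigRegistry.get_choice_class("my_custom_policy").__name__)
```

This is why `--policy.type my_custom_policy` can resolve to your class once the package is imported.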

## Step 3: Implement the Policy Class

Create your policy implementation by inheriting from LeRobot's base `PreTrainedPolicy` class:

```python
# modeling_my_custom_policy.py
from typing import Any, Dict

import torch
import torch.nn as nn

from lerobot.policies.pretrained import PreTrainedPolicy

from .configuration_my_custom_policy import MyCustomPolicyConfig


class MyCustomPolicy(PreTrainedPolicy):
    config_class = MyCustomPolicyConfig
    name = "my_custom_policy"

    def __init__(self, config: MyCustomPolicyConfig, dataset_stats: Dict[str, Any] | None = None):
        super().__init__(config, dataset_stats)
        ...
```

## Step 4: Add Data Processors

Create processor functions:

```python
# processor_my_custom_policy.py
from typing import Any

# Adjust these import paths to your LeRobot version.
from lerobot.processor import PolicyAction, PolicyProcessorPipeline


def make_my_custom_policy_pre_post_processors(
    config,
) -> tuple[
    PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
    PolicyProcessorPipeline[PolicyAction, PolicyAction],
]:
    """Create preprocessing and postprocessing pipelines for your policy."""
    pass  # Define your preprocessing and postprocessing logic here
```

## Step 5: Package Initialization

Expose your classes in the package's `__init__.py`:

```python
# __init__.py
"""Custom policy package for LeRobot."""

try:
    import lerobot  # noqa: F401
except ImportError:
    raise ImportError(
        "lerobot is not installed. Please install lerobot to use this policy package."
    )

from .configuration_my_custom_policy import MyCustomPolicyConfig
from .modeling_my_custom_policy import MyCustomPolicy
from .processor_my_custom_policy import make_my_custom_policy_pre_post_processors

__all__ = [
    "MyCustomPolicyConfig",
    "MyCustomPolicy",
    "make_my_custom_policy_pre_post_processors",
]
```

## Step 6: Installation and Usage

### Install Your Policy Package

```bash
cd lerobot_policy_my_custom_policy
pip install -e .

# Or install from PyPI if published
pip install lerobot_policy_my_custom_policy
```

### Use Your Policy

Once installed, your policy automatically integrates with LeRobot's training and evaluation tools:

```bash
lerobot-train \
    --policy.type my_custom_policy \
    --env.type pusht \
    --steps 200000
```

## Examples and Community Contributions

Check out these example policy implementations:

- [DiTFlow Policy](https://github.com/danielsanjosepro/lerobot_policy_ditflow) - A Diffusion Transformer policy with a flow-matching objective. Try it out in this example: [DiTFlow Example](https://github.com/danielsanjosepro/test_lerobot_policy_ditflow)

Share your policy implementations with the community! 🤗

lerobot/docs/source/cameras.mdx
ADDED
@@ -0,0 +1,206 @@
# Cameras

LeRobot offers multiple options for video capture, including phone cameras, built-in laptop cameras, external webcams, and Intel RealSense cameras. To efficiently record frames from most cameras, you can use either the `OpenCVCamera` or `RealSenseCamera` class. For additional compatibility details on the `OpenCVCamera` class, refer to the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).

### Finding your camera

To instantiate a camera, you need a camera identifier. This identifier might change if you reboot your computer or re-plug your camera, a behavior mostly dependent on your operating system.

To find the camera indices of the cameras plugged into your system, run the following script:

```bash
lerobot-find-cameras opencv # or realsense for Intel Realsense cameras
```

The output will look something like this if you have two cameras connected:

```
--- Detected Cameras ---
Camera #0:
  Name: OpenCV Camera @ 0
  Type: OpenCV
  Id: 0
  Backend api: AVFOUNDATION
  Default stream profile:
    Format: 16.0
    Width: 1920
    Height: 1080
    Fps: 15.0
--------------------
(more cameras ...)
```

> [!WARNING]
> When using Intel RealSense cameras on `macOS`, you could get this [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`. This can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras on `macOS` is unstable.

## Use Cameras

Below are two examples demonstrating how to work with the API:

- **Asynchronous frame capture** using an OpenCV-based camera
- **Color and depth capture** using an Intel RealSense camera

<hfoptions id="shell_restart">
<hfoption id="Open CV Camera">

<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
from lerobot.cameras.configs import ColorMode, Cv2Rotation

# Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation.
config = OpenCVCameraConfig(
    index_or_path=0,
    fps=15,
    width=1920,
    height=1080,
    color_mode=ColorMode.RGB,
    rotation=Cv2Rotation.NO_ROTATION
)

# Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default).
camera = OpenCVCamera(config)
camera.connect()

# Read frames asynchronously in a loop via `async_read(timeout_ms)`
try:
    for i in range(10):
        frame = camera.async_read(timeout_ms=200)
        print(f"Async frame {i} shape:", frame.shape)
finally:
    camera.disconnect()
```
<!-- prettier-ignore-end -->
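
With `ColorMode.RGB` as configured above, frames come back in RGB channel order, while plain OpenCV functions such as `cv2.imwrite` expect BGR. The channel swap is a one-liner on the NumPy array; a minimal sketch (pure NumPy, no camera needed):

```python
import numpy as np


def rgb_to_bgr(frame: np.ndarray) -> np.ndarray:
    """Reverse the channel axis of an HxWx3 image (RGB <-> BGR)."""
    return frame[:, :, ::-1]


frame = np.zeros((480, 640, 3), dtype=np.uint8)
frame[..., 0] = 255  # a pure-red image in RGB channel order
bgr = rgb_to_bgr(frame)
print(bgr[0, 0])  # red value now sits in the last channel
```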

</hfoption>
<hfoption id="Intel Realsense Camera">

<!-- prettier-ignore-start -->
```python
from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
from lerobot.cameras.configs import ColorMode, Cv2Rotation

# Create a `RealSenseCameraConfig` specifying your camera’s serial number and enabling depth.
config = RealSenseCameraConfig(
    serial_number_or_name="233522074606",
    fps=15,
    width=640,
    height=480,
    color_mode=ColorMode.RGB,
    use_depth=True,
    rotation=Cv2Rotation.NO_ROTATION
)

# Instantiate and connect a `RealSenseCamera` with a warm-up read (default).
camera = RealSenseCamera(config)
camera.connect()

# Capture a color frame via `read()` and a depth map via `read_depth()`.
try:
    color_frame = camera.read()
    depth_map = camera.read_depth()
    print("Color frame shape:", color_frame.shape)
    print("Depth map shape:", depth_map.shape)
finally:
    camera.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

## Use your phone

<hfoptions id="use phone">
<hfoption id="Mac">

To use your iPhone as a camera on macOS, enable the Continuity Camera feature:

- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
- Sign in to both devices with the same Apple ID.
- Connect your devices with a USB cable, or turn on Wi-Fi and Bluetooth for a wireless connection.

For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).

Your iPhone should be detected automatically when running the camera setup script in the next section.

</hfoption>
<hfoption id="Linux">

If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera:

1. _Install `v4l2loopback-dkms` and `v4l-utils`_. These packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:

<!-- prettier-ignore-start -->
```bash
sudo apt install v4l2loopback-dkms v4l-utils
```
<!-- prettier-ignore-end -->

2. _Install [DroidCam](https://droidcam.app) on your phone_. This app is available for both iOS and Android.
3. _Install [OBS Studio](https://obsproject.com)_. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):

<!-- prettier-ignore-start -->
```bash
flatpak install flathub com.obsproject.Studio
```
<!-- prettier-ignore-end -->

4. _Install the DroidCam OBS plugin_. This plugin integrates DroidCam with OBS Studio. Install it with:

<!-- prettier-ignore-start -->
```bash
flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
```
<!-- prettier-ignore-end -->

5. _Start OBS Studio_. Launch it with:

<!-- prettier-ignore-start -->
```bash
flatpak run com.obsproject.Studio
```
<!-- prettier-ignore-end -->

6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
8. _Start the virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
9. _Verify the virtual camera setup_. Use `v4l2-ctl` to list the devices:

<!-- prettier-ignore-start -->
```bash
v4l2-ctl --list-devices
```
<!-- prettier-ignore-end -->

You should see an entry like:

```
VirtualCam (platform:v4l2loopback-000):
/dev/video1
```

10. _Check the camera resolution_. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.

<!-- prettier-ignore-start -->
```bash
v4l2-ctl -d /dev/video1 --get-fmt-video
```
<!-- prettier-ignore-end -->

You should see an entry like:

```
>>> Format Video Capture:
>>> Width/Height : 640/480
>>> Pixel Format : 'YUYV' (YUYV 4:2:2)
```

Troubleshooting: If the resolution is not correct, you will have to delete the virtual camera port and try again, as it cannot be changed.

If everything is set up correctly, you can proceed with the rest of the tutorial.

</hfoption>
</hfoptions>

lerobot/docs/source/contributing.md
ADDED
@@ -0,0 +1 @@
../../CONTRIBUTING.md

lerobot/docs/source/debug_processor_pipeline.mdx
ADDED
@@ -0,0 +1,299 @@
# Debug Your Processor Pipeline

Processor pipelines can be complex, especially when chaining multiple transformation steps.
Unlike simple function calls, pipelines lack natural observability: you can't easily see what happens
between each step or where things go wrong.
This guide provides debugging tools and techniques specifically designed to address these challenges
and help you understand how data flows through your pipelines.

We'll explore three complementary debugging approaches: **hooks** for runtime monitoring, **step-through debugging** for detailed inspection, and **feature validation** for catching structural mismatches. Each serves a different purpose, and together they provide complete visibility into your pipeline's behavior.

## Understanding Hooks

Hooks are functions that get called at specific points during pipeline execution.
They provide a way to inspect, monitor, or modify data without changing your pipeline code.
Think of them as "event listeners" for your pipeline.

### What is a Hook?

A hook is a callback function that gets automatically invoked at specific moments during pipeline execution.
The concept comes from event-driven programming: imagine you could "hook into" the pipeline's execution flow to observe or react to what's happening.

Think of hooks like inserting checkpoints into your pipeline. Every time the pipeline reaches one of these checkpoints, it pauses briefly to call your hook function, giving you a chance to inspect the current state, log information, and validate data.

A hook is simply a function that accepts two parameters:

- `step_idx: int` - The index of the current processing step (0, 1, 2, etc.)
- `transition: EnvTransition` - The data transition at that point in the pipeline

The beauty of hooks is their non-invasive nature: you can add monitoring, validation, or debugging logic without changing a single line of your pipeline code. The pipeline remains clean and focused on its core logic, while hooks handle cross-cutting concerns like logging, monitoring, and debugging.

### Before vs After Hooks

The pipeline supports two types of hooks:

- **Before hooks** (`register_before_step_hook`) - Called before each step executes
- **After hooks** (`register_after_step_hook`) - Called after each step completes

```python
def before_hook(step_idx: int, transition: EnvTransition):
    """Called before the step processes the transition."""
    print(f"About to execute step {step_idx}")
    # Useful for: logging, validation, setup

def after_hook(step_idx: int, transition: EnvTransition):
    """Called after the step has processed the transition."""
    print(f"Completed step {step_idx}")
    # Useful for: monitoring results, cleanup, debugging

processor.register_before_step_hook(before_hook)
processor.register_after_step_hook(after_hook)
```

### Implementing a NaN Detection Hook

Here's a practical example of a hook that detects NaN values:

```python
import torch

def check_nans(step_idx: int, transition: EnvTransition):
    """Check for NaN values in observations."""
    obs = transition.get(TransitionKey.OBSERVATION)
    if obs:
        for key, value in obs.items():
            if isinstance(value, torch.Tensor) and torch.isnan(value).any():
                print(f"NaN detected in {key} at step {step_idx}")

# Register the hook to run after each step
processor.register_after_step_hook(check_nans)

# Process your data - the hook will be called automatically
output = processor(input_data)

# Remove the hook when done debugging
processor.unregister_after_step_hook(check_nans)
```

### How Hooks Work Internally

Understanding the internal mechanism helps you use hooks more effectively. The pipeline maintains two separate lists: one for before-step hooks and another for after-step hooks. When you register a hook, it's simply appended to the appropriate list.

During execution, the pipeline follows a strict sequence: for each processing step, it first calls all before-hooks in registration order, then executes the actual step transformation, and finally calls all after-hooks in registration order. This creates a predictable, sandwich-like structure around each step.

The key insight is that hooks don't change the core pipeline logic: they're purely additive. The pipeline's `_forward` method orchestrates this dance between hooks and processing steps, ensuring that your debugging or monitoring code runs at exactly the right moments without interfering with the main data flow.

Here's a simplified view of how the pipeline executes hooks:

```python
class DataProcessorPipeline:
    def __init__(self):
        self.steps = [...]
        self.before_step_hooks = []  # List of before hooks
        self.after_step_hooks = []   # List of after hooks

    def _forward(self, transition):
        """Internal method that processes the transition through all steps."""
        for step_idx, processor_step in enumerate(self.steps):
            # 1. Call all BEFORE hooks
            for hook in self.before_step_hooks:
                hook(step_idx, transition)

            # 2. Execute the actual processing step
            transition = processor_step(transition)

            # 3. Call all AFTER hooks
            for hook in self.after_step_hooks:
                hook(step_idx, transition)

        return transition

    def register_before_step_hook(self, hook_fn):
        self.before_step_hooks.append(hook_fn)

    def register_after_step_hook(self, hook_fn):
        self.after_step_hooks.append(hook_fn)
```
|
| 115 |
+
|
| 116 |
+
### Execution Flow
|
| 117 |
+
|
| 118 |
+
The execution flow looks like this:
|
| 119 |
+
|
| 120 |
+
```
|
| 121 |
+
Input → Before Hook → Step 0 → After Hook → Before Hook → Step 1 → After Hook → ... → Output
|
| 122 |
+
```
|
| 123 |
+
|
| 124 |
+
For example, with 3 steps and both hook types:
|
| 125 |
+
|
| 126 |
+
```python
|
| 127 |
+
def timing_before(step_idx, transition):
|
| 128 |
+
print(f"⏱️ Starting step {step_idx}")
|
| 129 |
+
|
| 130 |
+
def validation_after(step_idx, transition):
|
| 131 |
+
print(f"✅ Completed step {step_idx}")
|
| 132 |
+
|
| 133 |
+
processor.register_before_step_hook(timing_before)
|
| 134 |
+
processor.register_after_step_hook(validation_after)
|
| 135 |
+
|
| 136 |
+
# This will output:
|
| 137 |
+
# ⏱️ Starting step 0
|
| 138 |
+
# ✅ Completed step 0
|
| 139 |
+
# ⏱️ Starting step 1
|
| 140 |
+
# ✅ Completed step 1
|
| 141 |
+
# ⏱️ Starting step 2
|
| 142 |
+
# ✅ Completed step 2
|
| 143 |
+
```
|
| 144 |
+
|
| 145 |
+
### Multiple Hooks
|
| 146 |
+
|
| 147 |
+
You can register multiple hooks of the same type; they execute in the order they were registered:
|
| 148 |
+
|
| 149 |
+
```python
|
| 150 |
+
def log_shapes(step_idx: int, transition: EnvTransition):
|
| 151 |
+
obs = transition.get(TransitionKey.OBSERVATION)
|
| 152 |
+
if obs:
|
| 153 |
+
print(f"Step {step_idx} observation shapes:")
|
| 154 |
+
for key, value in obs.items():
|
| 155 |
+
if isinstance(value, torch.Tensor):
|
| 156 |
+
print(f" {key}: {value.shape}")
|
| 157 |
+
|
| 158 |
+
processor.register_after_step_hook(check_nans) # Executes first
|
| 159 |
+
processor.register_after_step_hook(log_shapes) # Executes second
|
| 160 |
+
|
| 161 |
+
# Both hooks will be called after each step in registration order
|
| 162 |
+
output = processor(input_data)
|
| 163 |
+
```
|
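The registration-order guarantee is easy to verify with a toy pipeline. The sketch below is illustrative only (`ToyPipeline` is a stand-in, not the real `DataProcessorPipeline`), but it follows the same append-to-a-list mechanism described earlier:

```python
calls = []

class ToyPipeline:
    """Toy stand-in for DataProcessorPipeline, just to show hook ordering."""
    def __init__(self, steps):
        self.steps = steps
        self.after_step_hooks = []

    def register_after_step_hook(self, fn):
        self.after_step_hooks.append(fn)

    def __call__(self, x):
        for step_idx, step in enumerate(self.steps):
            x = step(x)
            # After each step, call every after-hook in registration order.
            for hook in self.after_step_hooks:
                hook(step_idx, x)
        return x

pipeline = ToyPipeline(steps=[lambda x: x + 1, lambda x: x * 2])
pipeline.register_after_step_hook(lambda i, x: calls.append(f"first@{i}"))
pipeline.register_after_step_hook(lambda i, x: calls.append(f"second@{i}"))

result = pipeline(0)
print(result)  # 2
print(calls)   # ['first@0', 'second@0', 'first@1', 'second@1']
```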
| 164 |
+
|
| 165 |
+
While hooks are excellent for monitoring specific issues (like NaN detection) or gathering metrics during normal pipeline execution, sometimes you need to dive deeper. When you want to understand exactly what happens at each step or debug complex transformation logic, step-through debugging provides the detailed inspection you need.
|
| 166 |
+
|
| 167 |
+
## Step-Through Debugging
|
| 168 |
+
|
| 169 |
+
Step-through debugging is like having a slow-motion replay for your pipeline. Instead of watching your data get transformed in one quick blur from input to output, you can pause and examine what happens after each individual step.
|
| 170 |
+
|
| 171 |
+
This approach is particularly valuable when you're trying to understand a complex pipeline, debug unexpected behavior, or verify that each transformation is working as expected. Unlike hooks, which are great for automated monitoring, step-through debugging gives you manual, interactive control over the inspection process.
|
| 172 |
+
|
| 173 |
+
The `step_through()` method is a generator that yields the transition state after each processing step, allowing you to inspect intermediate results. Think of it as creating a series of snapshots of your data as it flows through the pipeline—each snapshot shows you exactly what your data looks like after one more transformation has been applied.
|
| 174 |
+
|
| 175 |
+
### How Step-Through Works
|
| 176 |
+
|
| 177 |
+
The `step_through()` method fundamentally changes how the pipeline executes. Instead of running all steps in sequence and only returning the final result, it transforms the pipeline into an iterator that yields intermediate results.
|
| 178 |
+
|
| 179 |
+
Here's what happens internally: the method starts by converting your input data into the pipeline's internal transition format, then yields this initial state. Next, it applies the first processing step and yields the result. Then it applies the second step to that result and yields again, and so on. Each `yield` gives you a complete snapshot of the transition at that point.
|
| 180 |
+
|
| 181 |
+
This generator pattern is powerful because it's lazy—the pipeline only computes the next step when you ask for it. This means you can stop at any point, inspect the current state thoroughly, and decide whether to continue. You're not forced to run the entire pipeline just to debug one problematic step.
|
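The generator pattern described above can be sketched with a toy pipeline. This is illustrative only; the real `DataProcessorPipeline.step_through()` also converts raw input into the transition format before the first yield:

```python
class MiniPipeline:
    """Minimal sketch of the generator pattern behind step_through()."""
    def __init__(self, steps):
        self.steps = steps

    def step_through(self, transition):
        yield transition           # initial state, before any step runs
        for step in self.steps:
            transition = step(transition)
            yield transition       # snapshot after this step

pipeline = MiniPipeline(steps=[lambda x: x + 1, lambda x: x * 2])
snapshots = list(pipeline.step_through(3))
print(snapshots)  # [3, 4, 8]
```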
| 182 |
+
|
| 183 |
+
Instead of running the entire pipeline and only seeing the final result, `step_through()` pauses after each step and gives you the intermediate transition:
|
| 184 |
+
|
| 185 |
+
```python
|
| 186 |
+
# This creates a generator that yields intermediate states
|
| 187 |
+
for i, intermediate_result in enumerate(processor.step_through(input_data)):
|
| 188 |
+
print(f"=== After step {i} ===")
|
| 189 |
+
|
| 190 |
+
# Inspect the observation at this stage
|
| 191 |
+
obs = intermediate_result.get(TransitionKey.OBSERVATION)
|
| 192 |
+
if obs:
|
| 193 |
+
for key, value in obs.items():
|
| 194 |
+
if isinstance(value, torch.Tensor):
|
| 195 |
+
print(f"{key}: shape={value.shape}, dtype={value.dtype}")
|
| 196 |
+
```
|
| 197 |
+
|
| 198 |
+
### Interactive Debugging with Breakpoints
|
| 199 |
+
|
| 200 |
+
You can add breakpoints in the step-through loop to interactively debug:
|
| 201 |
+
|
| 202 |
+
```python
|
| 203 |
+
# Step through the pipeline with debugging
|
| 204 |
+
for i, intermediate in enumerate(processor.step_through(data)):
|
| 205 |
+
print(f"Step {i}: {processor.steps[i].__class__.__name__}")
|
| 206 |
+
|
| 207 |
+
# Set a breakpoint to inspect the current state
|
| 208 |
+
breakpoint() # Debugger will pause here
|
| 209 |
+
|
| 210 |
+
# You can now inspect 'intermediate' in the debugger:
|
| 211 |
+
# - Check tensor shapes and values
|
| 212 |
+
# - Verify expected transformations
|
| 213 |
+
# - Look for unexpected changes
|
| 214 |
+
```
|
| 215 |
+
|
| 216 |
+
During the debugger session, you can:
|
| 217 |
+
|
| 218 |
+
- Examine `intermediate[TransitionKey.OBSERVATION]` to see observation data
|
| 219 |
+
- Check `intermediate[TransitionKey.ACTION]` for action transformations
|
| 220 |
+
- Inspect any part of the transition to understand what each step does
|
| 221 |
+
|
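Because the generator is lazy, stopping early means the remaining steps never execute. A small sketch using a stand-in generator (not the real API) and `itertools.islice`:

```python
from itertools import islice

def step_through(transition, steps):
    """Sketch of a lazy step-through generator (stand-in, not the real API)."""
    yield transition
    for step in steps:
        transition = step(transition)
        yield transition

steps = [lambda x: x * 10, lambda x: x + 5, lambda x: x - 100]

# Take only the first two snapshots; the last two steps are never executed.
first_two = list(islice(step_through(1, steps), 2))
print(first_two)  # [1, 10]
```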
| 222 |
+
Step-through debugging is perfect for understanding the _data_ transformations, but what about the _structure_ of that data? While hooks and step-through help you debug runtime behavior, you also need to ensure your pipeline produces data in the format expected by downstream components. This is where feature contract validation comes in.
|
| 223 |
+
|
| 224 |
+
## Validating Feature Contracts
|
| 225 |
+
|
| 226 |
+
Feature contracts define what data structure your pipeline expects as input and produces as output.
|
| 227 |
+
Validating these contracts helps catch mismatches early.
|
| 228 |
+
|
| 229 |
+
### Understanding Feature Contracts
|
| 230 |
+
|
| 231 |
+
Each processor step has a `transform_features()` method that describes how it changes the data structure:
|
| 232 |
+
|
| 233 |
+
```python
|
| 234 |
+
# Get the expected output features from your pipeline
|
| 235 |
+
initial_features = {
|
| 236 |
+
PipelineFeatureType.OBSERVATION: {
|
| 237 |
+
"observation.state": PolicyFeature(type=FeatureType.STATE, shape=(7,)),
|
| 238 |
+
"observation.image": PolicyFeature(type=FeatureType.IMAGE, shape=(3, 224, 224))
|
| 239 |
+
},
|
| 240 |
+
PipelineFeatureType.ACTION: {
|
| 241 |
+
"action": PolicyFeature(type=FeatureType.ACTION, shape=(4,))
|
| 242 |
+
}
|
| 243 |
+
}
|
| 244 |
+
|
| 245 |
+
# Check what your pipeline will output
|
| 246 |
+
output_features = processor.transform_features(initial_features)
|
| 247 |
+
|
| 248 |
+
print("Input features:")
|
| 249 |
+
for feature_type, features in initial_features.items():
|
| 250 |
+
print(f" {feature_type}:")
|
| 251 |
+
for key, feature in features.items():
|
| 252 |
+
print(f" {key}: {feature.type.value}, shape={feature.shape}")
|
| 253 |
+
|
| 254 |
+
print("\nOutput features:")
|
| 255 |
+
for feature_type, features in output_features.items():
|
| 256 |
+
print(f" {feature_type}:")
|
| 257 |
+
for key, feature in features.items():
|
| 258 |
+
print(f" {key}: {feature.type.value}, shape={feature.shape}")
|
| 259 |
+
```
|
| 260 |
+
|
| 261 |
+
### Verifying Expected Features
|
| 262 |
+
|
| 263 |
+
Check that your pipeline produces the features you expect:
|
| 264 |
+
|
| 265 |
+
```python
|
| 266 |
+
# Define what features you expect the pipeline to produce
|
| 267 |
+
expected_keys = ["observation.state", "observation.image", "action"]
|
| 268 |
+
|
| 269 |
+
print("Validating feature contract...")
|
| 270 |
+
for expected_key in expected_keys:
|
| 271 |
+
found = False
|
| 272 |
+
for feature_type, features in output_features.items():
|
| 273 |
+
if expected_key in features:
|
| 274 |
+
feature = features[expected_key]
|
| 275 |
+
print(f"✅ {expected_key}: {feature.type.value}, shape={feature.shape}")
|
| 276 |
+
found = True
|
| 277 |
+
break
|
| 278 |
+
|
| 279 |
+
if not found:
|
| 280 |
+
print(f"❌ Missing expected feature: {expected_key}")
|
| 281 |
+
```
|
| 282 |
+
|
| 283 |
+
This validation helps ensure your pipeline will work correctly with downstream components that expect specific data structures.
|
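To make this manual check reusable, here's a small illustrative helper (not part of LeRobot) that collects the missing keys. It works on any mapping shaped like the `output_features` dict above; plain dicts stand in for the real feature types:

```python
def find_missing_features(output_features, expected_keys):
    """Return expected keys that appear under no feature type."""
    available = {key for features in output_features.values() for key in features}
    return [key for key in expected_keys if key not in available]

# Stand-in for a real transform_features() result, using plain dicts.
fake_output = {
    "OBSERVATION": {"observation.state": None, "observation.image": None},
    "ACTION": {"action": None},
}

missing = find_missing_features(fake_output, ["observation.state", "action", "reward"])
print(missing)  # ['reward']
```

Raising an error when `missing` is non-empty turns this into a fail-fast check you can run before deployment.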
| 284 |
+
|
| 285 |
+
## Summary
|
| 286 |
+
|
| 287 |
+
Now that you understand the three debugging approaches, you can tackle any pipeline issue systematically:
|
| 288 |
+
|
| 289 |
+
1. **Hooks** - For runtime monitoring and validation without modifying pipeline code
|
| 290 |
+
2. **Step-through** - For inspecting intermediate states and understanding transformations
|
| 291 |
+
3. **Feature validation** - For ensuring data structure contracts are met
|
| 292 |
+
|
| 293 |
+
**When to use each approach:**
|
| 294 |
+
|
| 295 |
+
- Start with **step-through debugging** when you need to understand what your pipeline does or when something unexpected happens
|
| 296 |
+
- Add **hooks** for continuous monitoring during development and production to catch issues automatically
|
| 297 |
+
- Use **feature validation** before deployment to ensure your pipeline works with downstream components
|
| 298 |
+
|
| 299 |
+
These three tools work together to give you the complete observability that complex pipelines naturally lack. With hooks watching for issues, step-through helping you understand behavior, and feature validation ensuring compatibility, you'll be able to debug any pipeline confidently and efficiently.
|
lerobot/docs/source/earthrover_mini_plus.mdx
ADDED
|
@@ -0,0 +1,225 @@
| 1 |
+
# EarthRover Mini Plus
|
| 2 |
+
|
| 3 |
+
The EarthRover Mini Plus is a fully open source mobile robot that connects through the cloud using the Frodobots SDK. This lets you control the robot and record datasets for training AI models.
|
| 4 |
+
|
| 5 |
+
## What You Need
|
| 6 |
+
|
| 7 |
+
### Hardware
|
| 8 |
+
|
| 9 |
+
- EarthRover Mini Plus robot
|
| 10 |
+
- Computer with Python 3.10 or newer
|
| 11 |
+
- Internet connection
|
| 12 |
+
|
| 13 |
+
### Setting Up the Frodobots SDK
|
| 14 |
+
|
| 15 |
+
The robot needs the [Frodobots SDK](https://github.com/frodobots-org/earth-rovers-sdk) running on your computer. Here's how:
|
| 16 |
+
|
| 17 |
+
1. Download and install the SDK:
|
| 18 |
+
|
| 19 |
+
```bash
|
| 20 |
+
git clone https://github.com/frodobots-org/earth-rovers-sdk.git
|
| 21 |
+
cd earth-rovers-sdk
|
| 22 |
+
pip install -r requirements.txt
|
| 23 |
+
```
|
| 24 |
+
|
| 25 |
+
2. Save your credentials:
|
| 26 |
+
|
| 27 |
+
Create a `.env` file with the SDK API key and bot slug provided by the Frodobots team:
|
| 28 |
+
|
| 29 |
+
```bash
|
| 30 |
+
SDK_API_TOKEN=your_sdk_api_token_here
|
| 31 |
+
BOT_SLUG=your_bot_slug_here
|
| 32 |
+
CHROME_EXECUTABLE_PATH=/path/to/chrome_or_chromium
|
| 33 |
+
# Default value is MAP_ZOOM_LEVEL=18 https://wiki.openstreetmap.org/wiki/Zoom_levels
|
| 34 |
+
MAP_ZOOM_LEVEL=18
|
| 35 |
+
MISSION_SLUG=your_mission_slug_here
|
| 36 |
+
# Image quality between 0.1 and 1.0 (default: 0.8)
|
| 37 |
+
# Recommended: 0.8 for better performance
|
| 38 |
+
IMAGE_QUALITY=0.8
|
| 39 |
+
# Image format: jpeg, png or webp (default: png)
|
| 40 |
+
# Recommended: jpeg for better performance and lower bandwidth usage
|
| 41 |
+
IMAGE_FORMAT=jpeg
|
| 42 |
+
```
|
| 43 |
+
|
| 44 |
+
3. Start the SDK:
|
| 45 |
+
|
| 46 |
+
```bash
|
| 47 |
+
hypercorn main:app --reload
|
| 48 |
+
```
|
| 49 |
+
|
| 50 |
+
4. Open your web browser and go to `http://localhost:8000`, then click "Join"
|
| 51 |
+
|
| 52 |
+
The SDK gives you:
|
| 53 |
+
|
| 54 |
+
- Live video from front and rear cameras
|
| 55 |
+
|
| 56 |
+
> [!IMPORTANT]
|
| 57 |
+
> The SDK must be running before you can use the robot.
|
| 58 |
+
|
| 59 |
+
## Install LeRobot
|
| 60 |
+
|
| 61 |
+
Follow our [Installation Guide](./installation) to install LeRobot.
|
| 62 |
+
|
| 63 |
+
In addition to the base installation, install the EarthRover Mini dependencies:
|
| 64 |
+
|
| 65 |
+
```bash
|
| 66 |
+
pip install -e .
|
| 67 |
+
```
|
| 68 |
+
|
| 69 |
+
## How It Works
|
| 70 |
+
|
| 71 |
+
The robot uses the internet to communicate:
|
| 72 |
+
|
| 73 |
+
- **Movement commands**: Sent through the SDK
|
| 74 |
+
- **Camera video**: Received from the SDK
|
| 75 |
+
- **Robot info**: Battery, location, speed from the SDK
|
| 76 |
+
|
| 77 |
+
You don't need to plug anything in; it all works through the SDK.
|
| 78 |
+
|
| 79 |
+
## Calibration
|
| 80 |
+
|
| 81 |
+
No calibration needed! The robot is ready to use as soon as the SDK is running.
|
| 82 |
+
|
| 83 |
+
## Controlling the Robot
|
| 84 |
+
|
| 85 |
+
You control the robot using your keyboard - just like playing a video game with WASD keys.
|
| 86 |
+
|
| 87 |
+
### Keyboard Controls
|
| 88 |
+
|
| 89 |
+
| Key | Action |
|
| 90 |
+
| --- | -------------------------------- |
|
| 91 |
+
| W | Move forward |
|
| 92 |
+
| S | Move backward |
|
| 93 |
+
| A | Turn left (with forward motion) |
|
| 94 |
+
| D | Turn right (with forward motion) |
|
| 95 |
+
| Q | Rotate left in place |
|
| 96 |
+
| E | Rotate right in place |
|
| 97 |
+
| X | Stop all movement |
|
| 98 |
+
| +/= | Increase speed |
|
| 99 |
+
| - | Decrease speed |
|
| 100 |
+
| ESC | Disconnect |
|
| 101 |
+
|
| 102 |
+
### Speed Settings
|
| 103 |
+
|
| 104 |
+
You can adjust how fast the robot moves:
|
| 105 |
+
|
| 106 |
+
- **Forward/backward speed**: Default is full speed (1.0)
|
| 107 |
+
- **Turning speed**: Default is full speed (1.0)
|
| 108 |
+
- **Speed changes**: Use +/- keys to adjust by 0.1 each time
|
| 109 |
+
|
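The +/- keys can be pictured as simple clamped arithmetic. The sketch below is illustrative; the bounds and rounding are assumptions, not values taken from the teleoperator code:

```python
def adjust_speed(current, delta, lo=0.1, hi=1.0):
    """Illustrative speed adjustment: step by delta, clamp to [lo, hi].
    The bounds are assumptions, not taken from KeyboardRoverTeleop."""
    return max(lo, min(hi, round(current + delta, 1)))

print(adjust_speed(1.0, 0.1))   # 1.0 (already at the maximum)
print(adjust_speed(1.0, -0.1))  # 0.9
print(adjust_speed(0.1, -0.1))  # 0.1 (clamped at the minimum)
```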
| 110 |
+
### Try It Out
|
| 111 |
+
|
| 112 |
+
Test driving the robot before recording data:
|
| 113 |
+
|
| 114 |
+
```python
|
| 115 |
+
from lerobot.robots.earthrover_mini_plus import EarthRoverMiniPlus, EarthRoverMiniPlusConfig
|
| 116 |
+
from lerobot.teleoperators.keyboard import KeyboardRoverTeleop, KeyboardRoverTeleopConfig
|
| 117 |
+
|
| 118 |
+
# Initialize robot
|
| 119 |
+
robot_config = EarthRoverMiniPlusConfig()
|
| 120 |
+
robot = EarthRoverMiniPlus(robot_config)
|
| 121 |
+
|
| 122 |
+
# Initialize teleoperator
|
| 123 |
+
teleop_config = KeyboardRoverTeleopConfig(
|
| 124 |
+
linear_speed=1.0,
|
| 125 |
+
angular_speed=1.0,
|
| 126 |
+
speed_increment=0.1
|
| 127 |
+
)
|
| 128 |
+
teleop = KeyboardRoverTeleop(teleop_config)
|
| 129 |
+
|
| 130 |
+
# Connect
|
| 131 |
+
robot.connect()
|
| 132 |
+
teleop.connect()
|
| 133 |
+
|
| 134 |
+
# Teleoperate (use keyboard controls)
|
| 135 |
+
try:
|
| 136 |
+
while True:
|
| 137 |
+
action = teleop.get_action()
|
| 138 |
+
robot.send_action(action)
|
| 139 |
+
except KeyboardInterrupt:
|
| 140 |
+
pass
|
| 141 |
+
finally:
|
| 142 |
+
robot.disconnect()
|
| 143 |
+
teleop.disconnect()
|
| 144 |
+
```
|
| 145 |
+
|
| 146 |
+
> [!TIP]
|
| 147 |
+
> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
|
| 148 |
+
|
| 149 |
+
## Recording Data
|
| 150 |
+
|
| 151 |
+
Once you can drive the robot well, you can start recording data to train AI models. The system records:
|
| 152 |
+
|
| 153 |
+
- **What you do**: How you move the robot (forward, backward, turning)
|
| 154 |
+
- **What the robot sees**:
|
| 155 |
+
- Videos from both cameras
|
| 156 |
+
- Robot speed and direction
|
| 157 |
+
- Battery level and location
|
| 158 |
+
- GPS position and signal
|
| 159 |
+
- Other sensor data
|
| 160 |
+
- **When it happened**: Timestamps for everything
|
| 161 |
+
|
| 162 |
+
### Setting Up Hugging Face
|
| 163 |
+
|
| 164 |
+
We use Hugging Face to store your data online. First, log in with your token from [Hugging Face settings](https://huggingface.co/settings/tokens):
|
| 165 |
+
|
| 166 |
+
```bash
|
| 167 |
+
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
|
| 168 |
+
```
|
| 169 |
+
|
| 170 |
+
Store your Hugging Face username:
|
| 171 |
+
|
| 172 |
+
```bash
|
| 173 |
+
HF_USER=$(huggingface-cli whoami | head -n 1)
|
| 174 |
+
echo $HF_USER
|
| 175 |
+
```
|
| 176 |
+
|
| 177 |
+
### Start Recording
|
| 178 |
+
|
| 179 |
+
Use the standard recording command:
|
| 180 |
+
|
| 181 |
+
```bash
|
| 182 |
+
python src/lerobot/scripts/lerobot_record.py \
|
| 183 |
+
--robot.type=earthrover_mini_plus \
|
| 184 |
+
--teleop.type=keyboard_rover \
|
| 185 |
+
--dataset.repo_id=your_username/dataset_name \
|
| 186 |
+
--dataset.num_episodes=2 \
|
| 187 |
+
--dataset.fps=10 \
|
| 188 |
+
--dataset.single_task="Navigate around obstacles" \
|
| 189 |
+
--display_data=true
|
| 190 |
+
```
|
| 191 |
+
|
| 192 |
+
Replace `your_username/dataset_name` with your Hugging Face username and a name for your dataset.
|
| 193 |
+
|
| 194 |
+
### What Gets Saved
|
| 195 |
+
|
| 196 |
+
Your dataset includes:
|
| 197 |
+
|
| 198 |
+
**Your Actions (2 things)**:
|
| 199 |
+
|
| 200 |
+
- How much you moved forward/backward
|
| 201 |
+
- How much you turned left/right
|
| 202 |
+
|
| 203 |
+
**Robot Observations (12 things)**:
|
| 204 |
+
|
| 205 |
+
- Front camera video
|
| 206 |
+
- Rear camera video
|
| 207 |
+
- Current speed
|
| 208 |
+
- Battery level
|
| 209 |
+
- Which way the robot is facing
|
| 210 |
+
- GPS location (latitude, longitude, signal strength)
|
| 211 |
+
- Network signal strength
|
| 212 |
+
- Vibration level
|
| 213 |
+
- Lamp status (on/off)
|
| 214 |
+
|
| 215 |
+
### Where Your Data Goes
|
| 216 |
+
|
| 217 |
+
On your computer: `~/.cache/huggingface/lerobot/{repo-id}`
|
| 218 |
+
|
| 219 |
+
After recording, your data automatically uploads to your Hugging Face page:
|
| 220 |
+
|
| 221 |
+
```bash
|
| 222 |
+
echo https://huggingface.co/datasets/${HF_USER}/earthrover-navigation
|
| 223 |
+
```
|
| 224 |
+
|
| 225 |
+
Your dataset will be tagged with `LeRobot` for community discovery.
|
lerobot/docs/source/env_processor.mdx
ADDED
|
@@ -0,0 +1,418 @@
| 1 |
+
# Environment Processors
|
| 2 |
+
|
| 3 |
+
Environment processors are a critical layer in LeRobot's data processing architecture that handle **environment-specific** transformations, separate from policy-specific processing. This separation of concerns enables cleaner code, better modularity, and easier experimentation with different environments and policies.
|
| 4 |
+
|
| 5 |
+
## Why Environment Processors?
|
| 6 |
+
|
| 7 |
+
When working with different robot environments (LIBERO, MetaWorld, Aloha, etc.), each environment often has unique data formats, coordinate systems, and conventions that need standardization **before** policy processing. Without environment processors, these transformations would be:
|
| 8 |
+
|
| 9 |
+
1. **Hardcoded in environment code** - Making it difficult to experiment with different state representations
|
| 10 |
+
2. **Duplicated across policies** - Each policy would need to handle environment-specific quirks
|
| 11 |
+
3. **Mixed with policy logic** - Violating separation of concerns and making debugging harder
|
| 12 |
+
|
| 13 |
+
Environment processors solve this by providing a **dedicated processing layer** between raw environment observations and policy inputs.
|
| 14 |
+
|
| 15 |
+
## The Processing Pipeline
|
| 16 |
+
|
| 17 |
+
Here's how data flows through the complete processing pipeline during evaluation:
|
| 18 |
+
|
| 19 |
+
```python
|
| 20 |
+
# In lerobot_eval.py rollout() function:
|
| 21 |
+
|
| 22 |
+
# 1. Raw environment observation (numpy arrays, various formats)
|
| 23 |
+
raw_observation = env.step(action)
|
| 24 |
+
|
| 25 |
+
# 2. Convert numpy to torch, normalize images [0,1]
|
| 26 |
+
observation = preprocess_observation(raw_observation)
|
| 27 |
+
|
| 28 |
+
# 3. Add task metadata (for multi-task environments)
|
| 29 |
+
observation = add_envs_task(env, observation)
|
| 30 |
+
|
| 31 |
+
# 4. ENVIRONMENT-SPECIFIC preprocessing (NEW!)
|
| 32 |
+
# - Flatten robot states
|
| 33 |
+
# - Rotate images to match dataset conventions
|
| 34 |
+
# - Handle environment-specific coordinate systems
|
| 35 |
+
observation = env_preprocessor(observation)
|
| 36 |
+
|
| 37 |
+
# 5. POLICY-SPECIFIC preprocessing
|
| 38 |
+
# - Normalize with dataset statistics
|
| 39 |
+
# - Add batch dimensions
|
| 40 |
+
# - Move to GPU
|
| 41 |
+
# - Tokenize language instructions
|
| 42 |
+
observation = preprocessor(observation)
|
| 43 |
+
|
| 44 |
+
# 6. Policy inference
|
| 45 |
+
action = policy.select_action(observation)
|
| 46 |
+
|
| 47 |
+
# 7. POLICY-SPECIFIC postprocessing
|
| 48 |
+
# - Unnormalize actions
|
| 49 |
+
# - Remove batch dimensions
|
| 50 |
+
action = postprocessor(action)
|
| 51 |
+
|
| 52 |
+
# 8. ENVIRONMENT-SPECIFIC postprocessing (NEW!)
|
| 53 |
+
# - Convert action formats if needed
|
| 54 |
+
# - Apply environment-specific constraints
|
| 55 |
+
action_transition = {"action": action}
|
| 56 |
+
action_transition = env_postprocessor(action_transition)
|
| 57 |
+
action = action_transition["action"]
|
| 58 |
+
|
| 59 |
+
# 9. Execute in environment
|
| 60 |
+
env.step(action)
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
## The Benefits
|
| 64 |
+
|
| 65 |
+
### 1. **Separation of Concerns**
|
| 66 |
+
|
| 67 |
+
Environment processors handle transformations specific to the **environment's data format**, while policy processors handle transformations specific to the **model's requirements**.
|
| 68 |
+
|
| 69 |
+
```python
|
| 70 |
+
# ❌ Before: Mixed concerns
|
| 71 |
+
class LiberoVLAPolicy:
|
| 72 |
+
def preprocess(self, obs):
|
| 73 |
+
# Environment-specific: Flatten robot state (shouldn't be in policy!)
|
| 74 |
+
state = self._flatten_robot_state(obs["robot_state"])
|
| 75 |
+
# Policy-specific: Normalize with dataset stats
|
| 76 |
+
state = self.normalizer(state)
|
| 77 |
+
return state
|
| 78 |
+
|
| 79 |
+
# ✅ After: Clear separation
|
| 80 |
+
# Environment processor: Handles LIBERO's nested robot state
|
| 81 |
+
env_preprocessor = LiberoProcessorStep() # Flattens robot_state
|
| 82 |
+
|
| 83 |
+
# Policy processor: Handles model requirements
|
| 84 |
+
policy_preprocessor = NormalizerProcessorStep(stats=dataset_stats)
|
| 85 |
+
```
|
| 86 |
+
|
| 87 |
+
### 2. **Flexibility and Reusability**
|
| 88 |
+
|
| 89 |
+
The same policy can work with different environment processors, and the same environment processor can work with different policies:
|
| 90 |
+
|
| 91 |
+
```python
|
| 92 |
+
# Use SmolVLA policy with LIBERO environment
|
| 93 |
+
libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
|
| 94 |
+
smolvla_preprocessor, smolvla_postprocessor = make_pre_post_processors(smolvla_cfg)
|
| 95 |
+
|
| 96 |
+
# Or use ACT policy with the same LIBERO environment
|
| 97 |
+
libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
|
| 98 |
+
act_preprocessor, act_postprocessor = make_pre_post_processors(act_cfg)
|
| 99 |
+
```
|
| 100 |
+
|
| 101 |
+
### 3. **Easier Experimentation**
|
| 102 |
+
|
| 103 |
+
Want to try different state representations for LIBERO? Just create a new processor:
|
| 104 |
+
|
| 105 |
+
```python
|
| 106 |
+
# Original: 8D state (pos + quat→axisangle + gripper)
|
| 107 |
+
@ProcessorStepRegistry.register("libero_processor")
|
| 108 |
+
class LiberoProcessorStep(ObservationProcessorStep):
|
| 109 |
+
def _process_observation(self, obs):
|
| 110 |
+
robot_state = obs["robot_state"]
eef_pos = robot_state["eef"]["pos"] # 3D
quat = robot_state["eef"]["quat"]
|
| 111 |
+
eef_axisangle = quat2axisangle(quat) # 3D
|
| 112 |
+
gripper = robot_state["gripper"]["qpos"] # 2D
|
| 113 |
+
state = torch.cat([eef_pos, eef_axisangle, gripper], dim=-1) # 8D
|
| 114 |
+
return state
|
| 115 |
+
|
| 116 |
+
# Experiment: Add velocity for better control
|
| 117 |
+
@ProcessorStepRegistry.register("libero_velocity_processor")
|
| 118 |
+
class LiberoVelocityProcessorStep(ObservationProcessorStep):
|
| 119 |
+
def _process_observation(self, obs):
|
| 120 |
+
# Include velocities for 14D state
|
| 121 |
+
robot_state = obs["robot_state"]
eef_pos = robot_state["eef"]["pos"] # 3D
quat = robot_state["eef"]["quat"]
|
| 122 |
+
eef_axisangle = quat2axisangle(quat) # 3D
|
| 123 |
+
eef_vel = robot_state["eef"]["vel"] # 3D (NEW)
|
| 124 |
+
gripper_pos = robot_state["gripper"]["qpos"] # 2D
|
| 125 |
+
gripper_vel = robot_state["gripper"]["qvel"] # 3D (NEW)
|
| 126 |
+
state = torch.cat([eef_pos, eef_axisangle, eef_vel,
|
| 127 |
+
gripper_pos, gripper_vel], dim=-1) # 14D
|
| 128 |
+
return state
|
| 129 |
+
```
|
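As a quick sanity check on the dimensions above, here's a standalone sketch using plain lists in place of tensors. The field sizes follow the comments in the example (including the 3D gripper qvel stated there); the nested layout mirrors the LIBERO `robot_state` dict:

```python
# Toy robot_state mimicking the nested LIBERO layout, with the
# field sizes from the comments above (assumed, not verified).
robot_state = {
    "eef": {"pos": [0.0] * 3, "axisangle": [0.0] * 3, "vel": [0.0] * 3},
    "gripper": {"qpos": [0.0] * 2, "qvel": [0.0] * 3},
}

def base_state(rs):
    # 8D state: eef pos (3) + eef axis-angle (3) + gripper qpos (2)
    return rs["eef"]["pos"] + rs["eef"]["axisangle"] + rs["gripper"]["qpos"]

def velocity_state(rs):
    # 14D state: adds eef vel (3) and gripper qvel (3)
    return (rs["eef"]["pos"] + rs["eef"]["axisangle"] + rs["eef"]["vel"]
            + rs["gripper"]["qpos"] + rs["gripper"]["qvel"])

print(len(base_state(robot_state)))      # 8
print(len(velocity_state(robot_state)))  # 14
```

Swapping one processor step for the other changes the state dimension the policy sees, without touching the environment or the policy code.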
| 130 |
+
|
| 131 |
+
### 4. **Cleaner Environment Code**
|
| 132 |
+
|
| 133 |
+
Environments expose **all available data** without needing to know what downstream models will use:
|
| 134 |
+
|
| 135 |
+
```python
|
| 136 |
+
# LIBERO environment exposes full robot state
|
| 137 |
+
observation = {
|
| 138 |
+
"pixels": {"image": img, "image2": img2},
|
| 139 |
+
"robot_state": {
|
| 140 |
+
"eef": {"pos": ..., "quat": ..., "vel": ..., "mat": ..., "axisangle": ...},
|
| 141 |
+
"gripper": {"qpos": ..., "qvel": ...},
|
| 142 |
+
"joints": {"pos": ..., "vel": ...}
|
| 143 |
+
}
|
| 144 |
+
}
|
| 145 |
+
|
| 146 |
+
# Environment processor decides what to use
|
| 147 |
+
# Policy processor handles model-specific transformations
|
| 148 |
+
```
|
| 149 |
+
|
| 150 |
+
## Using Environment Processors
|
| 151 |
+
|
| 152 |
+
### Factory Function
|
| 153 |
+
|
| 154 |
+
The `make_env_pre_post_processors` function follows the same pattern as `make_pre_post_processors` for policies:
|
| 155 |
+
|
| 156 |
+
```python
|
| 157 |
+
from lerobot.envs.factory import make_env_pre_post_processors
|
| 158 |
+
from lerobot.envs.configs import LiberoEnv, PushtEnv
|
| 159 |
+
|
| 160 |
+
# For LIBERO: Returns LiberoProcessorStep in preprocessor
|
| 161 |
+
libero_cfg = LiberoEnv(task="libero_spatial", camera_name=["agentview"])
|
| 162 |
+
env_preprocessor, env_postprocessor = make_env_pre_post_processors(libero_cfg)
|
| 163 |
+
|
| 164 |
+
# For other environments: Returns identity processors (no-op)
|
| 165 |
+
pusht_cfg = PushtEnv()
|
| 166 |
+
env_preprocessor, env_postprocessor = make_env_pre_post_processors(pusht_cfg)
|
| 167 |
+
```
|
| 168 |
+
|
| 169 |
+
### Implementation in `envs/factory.py`

```python
def make_env_pre_post_processors(
    env_cfg: EnvConfig,
) -> tuple[
    PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
    PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
]:
    """
    Create preprocessor and postprocessor pipelines for environment observations.

    Args:
        env_cfg: The configuration of the environment.

    Returns:
        A tuple containing:
        - preprocessor: Pipeline that processes environment observations
        - postprocessor: Pipeline that processes environment outputs
    """
    # For LIBERO environments, add the LiberoProcessorStep to the preprocessor
    if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
        preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
    else:
        # For all other environments, return an identity preprocessor
        preprocessor = PolicyProcessorPipeline(steps=[])

    # The postprocessor is currently the identity for all environments.
    # Future: could add environment-specific action transformations.
    postprocessor = PolicyProcessorPipeline(steps=[])

    return preprocessor, postprocessor
```

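As a mental model, a processor pipeline is nothing more than an ordered list of steps applied to an observation dict, and an empty step list is the identity. The sketch below illustrates that idea only; `MiniPipeline` is a made-up stand-in, not the real `PolicyProcessorPipeline`:

```python
from typing import Callable

# Hedged mental model (not the actual LeRobot class): a pipeline applies
# its steps in order to the observation dictionary.
class MiniPipeline:
    def __init__(self, steps: list[Callable[[dict], dict]]):
        self.steps = list(steps)

    def __call__(self, observation: dict) -> dict:
        for step in self.steps:
            observation = step(observation)
        return observation

# An empty step list behaves as the identity, like the non-LIBERO branch above.
identity = MiniPipeline(steps=[])
processed = identity({"observation.state": [1.0, 2.0]})
```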
### Integration in Evaluation

In `lerobot_eval.py`, the environment processors are created once and used throughout:

```python
def eval_main(cfg: EvalPipelineConfig):
    # Create the environment
    envs = make_env(cfg.env, n_envs=cfg.eval.batch_size)

    # Create the policy
    policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)

    # Create the policy processors
    preprocessor, postprocessor = make_pre_post_processors(
        policy_cfg=cfg.policy,
        pretrained_path=cfg.policy.pretrained_path,
    )

    # Create the environment processors (NEW!)
    env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env)

    # Run evaluation with both processor types
    eval_policy_all(
        envs=envs,
        policy=policy,
        env_preprocessor=env_preprocessor,    # Environment-specific
        env_postprocessor=env_postprocessor,  # Environment-specific
        preprocessor=preprocessor,            # Policy-specific
        postprocessor=postprocessor,          # Policy-specific
        n_episodes=cfg.eval.n_episodes,
    )
```

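The order in which the two processor families act on each control step can be sketched as a chain of plain functions. This is a hedged mental model only; every function below is an illustrative stand-in, and the real loop lives in the LeRobot evaluation utilities:

```python
# Hedged mental model of one control step; all functions are stand-ins.
def env_preprocess(obs):      # environment-specific: standardize the data format
    return {"observation.state": obs["state"]}

def policy_preprocess(obs):   # policy-specific: e.g. normalization, batching
    return {k: [v] for k, v in obs.items()}

def policy(obs):              # the model itself
    return {"action": [0.0]}

def policy_postprocess(out):  # policy-specific: unnormalize, unbatch, ...
    return out["action"][0]

raw_obs = {"state": [0.1, 0.2]}
action = policy_postprocess(policy(policy_preprocess(env_preprocess(raw_obs))))
```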
## Example: LIBERO Environment Processor

The `LiberoProcessorStep` demonstrates a real-world environment processor:

```python
from lerobot.processor.pipeline import ObservationProcessorStep

@dataclass
@ProcessorStepRegistry.register(name="libero_processor")
class LiberoProcessorStep(ObservationProcessorStep):
    """
    Processes LIBERO observations into the LeRobot format.

    **State Processing:**
    - Extracts end-effector position (3D)
    - Converts quaternion to axis-angle representation (3D)
    - Extracts gripper joint positions (2D)
    - Concatenates into 8D state vector

    **Image Processing:**
    - Rotates images 180° to match the HuggingFaceVLA/libero convention
    """

    def _process_observation(self, observation):
        processed_obs = observation.copy()

        # Process images: flip 180° for the camera convention
        for key in list(processed_obs.keys()):
            if key.startswith("observation.images."):
                img = processed_obs[key]
                img = torch.flip(img, dims=[2, 3])  # Flip H and W
                processed_obs[key] = img

        # Process robot_state: flatten to an 8D vector
        if "observation.robot_state" in processed_obs:
            robot_state = processed_obs.pop("observation.robot_state")

            eef_pos = robot_state["eef"]["pos"]            # (B, 3)
            eef_quat = robot_state["eef"]["quat"]          # (B, 4)
            gripper_qpos = robot_state["gripper"]["qpos"]  # (B, 2)

            # Convert quaternion to axis-angle
            eef_axisangle = self._quat2axisangle(eef_quat)  # (B, 3)

            # Concatenate into a single state vector
            state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1)
            state = state.float()

            processed_obs["observation.state"] = state

        return processed_obs
```

### Why These Transformations?

1. **Image Rotation**: The HuggingFaceVLA/libero dataset has images rotated 180° from the raw LIBERO simulator. The processor handles this convention mismatch so policies trained on the dataset work seamlessly.

2. **State Flattening**: The raw LIBERO environment exposes nested dictionaries with all available state information (position, quaternion, velocity, matrix representation, etc.). The processor:
   - Selects the relevant components (pos, quat, gripper)
   - Converts the quaternion to axis-angle (more suitable for learning)
   - Flattens everything into the single 8D vector that policies expect

3. **Flexibility**: The environment still exposes **all** raw data. If you want to try different state representations (e.g., including velocities, or using the matrix representation instead of axis-angle), you can create a new processor without modifying the environment code.

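The quaternion-to-axis-angle conversion mentioned in point 2 is a few lines of math. Below is a standalone, hedged sketch of that conversion; the real helper is `LiberoProcessorStep._quat2axisangle`, and the `(x, y, z, w)` component order used here is an assumption, not taken from the LeRobot source:

```python
import math

# Hedged sketch of quaternion -> axis-angle; component order (x, y, z, w)
# is an assumption for illustration, not the LeRobot implementation.
def quat2axisangle(x, y, z, w):
    v_norm = math.sqrt(x * x + y * y + z * z)
    angle = 2.0 * math.atan2(v_norm, w)  # rotation angle in radians
    if v_norm < 1e-12:                   # identity rotation: no defined axis
        return (0.0, 0.0, 0.0)
    s = angle / v_norm                   # scale the unit axis by the angle
    return (x * s, y * s, z * s)

# A 90° rotation about z: quat = (0, 0, sin 45°, cos 45°) -> (0, 0, π/2)
ax, ay, az = quat2axisangle(0.0, 0.0, math.sin(math.pi / 4), math.cos(math.pi / 4))
```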
## Adding Environment Processors for New Environments

To add environment processors for a new environment:

### 1. Create the Processor Step

```python
# In src/lerobot/processor/env_processor.py

@dataclass
@ProcessorStepRegistry.register(name="myenv_processor")
class MyEnvProcessorStep(ObservationProcessorStep):
    """Process observations from MyEnv."""

    def _process_observation(self, observation):
        processed = observation.copy()

        # Your environment-specific transformations
        if "myenv.specific.state" in processed:
            state = processed.pop("myenv.specific.state")
            # Transform to the standard format
            processed["observation.state"] = self._transform_state(state)

        return processed
```

### 2. Update the Factory

```python
# In src/lerobot/envs/factory.py

def make_env_pre_post_processors(env_cfg: EnvConfig):
    if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
        preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
    elif isinstance(env_cfg, MyEnvConfig) or "myenv" in env_cfg.type:
        preprocessor = PolicyProcessorPipeline(steps=[MyEnvProcessorStep()])
    else:
        preprocessor = PolicyProcessorPipeline(steps=[])

    postprocessor = PolicyProcessorPipeline(steps=[])
    return preprocessor, postprocessor
```

### 3. Use in Evaluation

No changes needed! The evaluation script automatically uses the appropriate processor:

```bash
# --env.type=myenv automatically selects MyEnvProcessorStep
lerobot-eval \
    --policy.path=lerobot/my_policy \
    --env.type=myenv \
    --eval.n_episodes=10
```

## Future: Environment Postprocessors

Currently, postprocessors are the identity (no-op) for all environments. Future use cases include:

### Action Space Transformations

```python
@dataclass
class MyEnvActionPostprocessor(ProcessorStep):
    """Convert policy actions to an environment-specific format."""

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        action = transition["action"]

        # Example: convert from Cartesian to joint space
        if self.action_space == "joint":
            action = self.ik_solver(action)

        # Example: apply environment-specific safety limits
        action = torch.clamp(action, self.min_action, self.max_action)

        transition["action"] = action
        return transition
```

### Coordinate System Conversions

```python
@dataclass
class CoordinateTransformPostprocessor(ProcessorStep):
    """Transform actions between coordinate systems."""

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        action = transition["action"]

        # Example: the policy outputs in the world frame, the env expects the base frame
        action = self.world_to_base_transform(action)

        transition["action"] = action
        return transition
```

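For intuition, here is a concrete (and hedged) instance of such a world-to-base transform for a planar displacement action, given the base yaw. The function name and frame conventions are illustrative only, not part of the LeRobot API:

```python
import math

# Hedged sketch: rotate a world-frame planar displacement into the robot
# base frame by applying the inverse rotation R(-yaw).
def world_to_base(dx_w, dy_w, base_yaw):
    c, s = math.cos(base_yaw), math.sin(base_yaw)
    return (c * dx_w + s * dy_w, -s * dx_w + c * dy_w)

# Base facing world +y (yaw = 90°): a world-frame step east (+x)
# becomes a step to the base's right (-y in the base frame).
dx_b, dy_b = world_to_base(1.0, 0.0, math.pi / 2)
```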
## Best Practices

1. **Keep environment processors simple**: They should only handle environment-specific data format issues, not complex learning-related transformations.

2. **Use policy processors for model requirements**: Normalization, batching, device placement, and tokenization belong in policy processors.

3. **Expose all data from environments**: Let processors decide what to use rather than hardcoding choices in the environment.

4. **Document conventions**: Clearly document any coordinate system conventions, camera orientations, or data formats that your processor handles.

5. **Test independently**: Environment processors should be testable without loading full policies or environments.

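Practice 5 is cheap to follow because a processor step is just a callable on a dict. A hedged sketch of such an isolated unit test; `RenameStep` is a made-up minimal step, not from the LeRobot codebase:

```python
# Hedged sketch: a processor step can be exercised with a plain dict,
# no policy or simulator needed. RenameStep is illustrative only.
class RenameStep:
    def __call__(self, obs: dict) -> dict:
        obs = dict(obs)  # don't mutate the caller's observation
        if "state" in obs:
            obs["observation.state"] = obs.pop("state")
        return obs

out = RenameStep()({"state": [0.1, 0.2]})
```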
## Summary

Environment processors provide a **clean separation** between environment-specific data transformations and policy-specific model requirements. This architecture:

- ✅ Enables easy experimentation with different state representations
- ✅ Allows policies to work seamlessly across different environments
- ✅ Keeps environment code focused on the simulation/hardware interface
- ✅ Makes processor pipelines more maintainable and debuggable
- ✅ Follows the single responsibility principle

The key insight: **Environments define data formats, processors standardize them, policies consume standardized data.** Each layer has a clear, focused responsibility.
lerobot/docs/source/envhub.mdx
ADDED
@@ -0,0 +1,431 @@
# Loading Environments from the Hub

The **EnvHub** feature allows you to load simulation environments directly from the Hugging Face Hub with a single line of code. This unlocks a powerful new model for collaboration: instead of environments being locked away inside monolithic libraries, anyone can publish custom environments and share them with the community.

## What is EnvHub?

EnvHub lets you create custom robotics simulation environments with your own robot models and scenarios, and make them easily usable by anyone through the LeRobot framework.

EnvHub packages are stored on the Hugging Face Hub and can be seamlessly pulled into your AI robotics projects through LeRobot with a single line of code.

Thanks to EnvHub, you can:

1. **Create and publish environments** to the Hugging Face Hub as Git repositories, and distribute complex physics simulations without packaging hassles
2. **Load environments** dynamically, without installing them as packages
3. **Version and track** environment changes using Git semantics
4. **Discover** new simulation tasks shared by the community

This design means you can go from discovering an interesting environment on the Hub to running experiments in seconds, or create your own custom robot and environment without worrying about dependency conflicts or complex installation procedures.

When you create an EnvHub package, you can build anything you want inside it and use any simulation tool you like: this is your own space to play with. The only requirement is that the package contains an `env.py` file that defines the environment and allows LeRobot to load and use your EnvHub package.

This `env.py` file needs to expose a small API so LeRobot can load and run it. In particular, you must provide a `make_env(n_envs: int = 1, use_async_envs: bool = False)` function (optionally also accepting a `cfg: EnvConfig` argument), which is the main entry point for LeRobot. It should return one of:

- A `gym.vector.VectorEnv` (most common)
- A single `gym.Env` (will be automatically wrapped)
- A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)

You can also pass an `EnvConfig` object to `make_env` to configure the environment (e.g. the number of environments, task, camera name, initial states, control mode, episode length, etc.).

Finally, your environment must implement the standard `gym.vector.VectorEnv` interface so it works with LeRobot, including methods like `reset` and `step`.

## Quick Start

Loading an environment from the Hub is as simple as:

```python
from lerobot.envs.factory import make_env

# Load a hub environment (requires explicit consent to run remote code)
env = make_env("lerobot/cartpole-env", trust_remote_code=True)
```

<Tip warning={true}>
**Security Notice**: Loading environments from the Hub executes Python code
from third-party repositories. Only use `trust_remote_code=True` with
repositories you trust. We strongly recommend pinning to a specific commit
hash for reproducibility and security.
</Tip>

## Repository Structure

To make your environment loadable from the Hub, your repository must contain at minimum:

### Required Files

**`env.py`** (or a custom Python file)

- Must expose a `make_env(n_envs: int, use_async_envs: bool)` function
- This function should return one of:
  - A `gym.vector.VectorEnv` (most common)
  - A single `gym.Env` (will be automatically wrapped)
  - A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)

### Optional Files

**`requirements.txt`**

- List any additional dependencies your environment needs
- Users will need to install these manually before loading your environment

**`README.md`**

- Document your environment: what task it implements, observation/action spaces, rewards, etc.
- Include usage examples and any special setup instructions

**`.gitignore`**

- Exclude unnecessary files from your repository

### Example Repository Structure

```
my-environment-repo/
├── env.py               # Main environment definition (required)
├── requirements.txt     # Dependencies (optional)
├── README.md            # Documentation (recommended)
├── assets/              # Images, videos, etc. (optional)
│   └── demo.gif
└── configs/             # Config files if needed (optional)
    └── task_config.yaml
```

## Creating Your Environment Repository

### Step 1: Define Your Environment

Create an `env.py` file with a `make_env` function:

```python
# env.py
import gymnasium as gym

def make_env(n_envs: int = 1, use_async_envs: bool = False):
    """
    Create vectorized environments for your custom task.

    Args:
        n_envs: Number of parallel environments
        use_async_envs: Whether to use AsyncVectorEnv or SyncVectorEnv

    Returns:
        gym.vector.VectorEnv or dict mapping suite names to vectorized envs
    """
    def _make_single_env():
        # Create your custom environment
        return gym.make("CartPole-v1")

    # Choose the vector environment type
    env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv

    # Create the vectorized environment
    vec_env = env_cls([_make_single_env for _ in range(n_envs)])

    return vec_env
```

### Step 2: Test Locally

Before uploading, test your environment locally:

```python
from lerobot.envs.utils import _load_module_from_path, _call_make_env, _normalize_hub_result

# Load your module
module = _load_module_from_path("./env.py")

# Test the make_env function
result = _call_make_env(module, n_envs=2, use_async_envs=False)
normalized = _normalize_hub_result(result)

# Verify it works
suite_name = next(iter(normalized))
env = normalized[suite_name][0]
obs, info = env.reset()
print(f"Observation shape: {obs.shape if hasattr(obs, 'shape') else type(obs)}")
env.close()
```

### Step 3: Upload to the Hub

Upload your repository to Hugging Face:

```bash
# Install huggingface_hub if needed
pip install huggingface_hub

# Log in to Hugging Face
huggingface-cli login

# Create a new repository
huggingface-cli repo create my-custom-env --type space --org my-org

# Initialize git and push
git init
git add .
git commit -m "Initial environment implementation"
git remote add origin https://huggingface.co/my-org/my-custom-env
git push -u origin main
```

Alternatively, use the `huggingface_hub` Python API:

```python
from huggingface_hub import HfApi

api = HfApi()

# Create the repository
api.create_repo("my-custom-env", repo_type="space")

# Upload files
api.upload_folder(
    folder_path="./my-env-folder",
    repo_id="username/my-custom-env",
    repo_type="space",
)
```

## Loading Environments from the Hub

### Basic Usage

```python
from lerobot.envs.factory import make_env

# Load from the hub
envs_dict = make_env(
    "username/my-custom-env",
    n_envs=4,
    trust_remote_code=True,
)

# Access the environment
suite_name = next(iter(envs_dict))
env = envs_dict[suite_name][0]

# Use it like any gym environment
obs, info = env.reset()
action = env.action_space.sample()
obs, reward, terminated, truncated, info = env.step(action)
```

### Advanced: Pinning to Specific Versions

For reproducibility and security, pin to a specific Git revision:

```python
# Pin to a specific branch
env = make_env("username/my-env@main", trust_remote_code=True)

# Pin to a specific commit (recommended for papers/experiments)
env = make_env("username/my-env@abc123def456", trust_remote_code=True)

# Pin to a tag
env = make_env("username/my-env@v1.0.0", trust_remote_code=True)
```

### Custom File Paths

If your environment definition is not in `env.py`:

```python
# Load from a custom file
env = make_env("username/my-env:custom_env.py", trust_remote_code=True)

# Combine with version pinning
env = make_env("username/my-env@v1.0:envs/task_a.py", trust_remote_code=True)
```

### Async Environments

For better performance with multiple environments:

```python
envs_dict = make_env(
    "username/my-env",
    n_envs=8,
    use_async_envs=True,  # Use AsyncVectorEnv for parallel execution
    trust_remote_code=True,
)
```

## URL Format Reference

The hub URL format supports several patterns:

| Pattern              | Description                    | Example                                |
| -------------------- | ------------------------------ | -------------------------------------- |
| `user/repo`          | Load `env.py` from main branch | `make_env("lerobot/pusht-env")`        |
| `user/repo@revision` | Load from specific revision    | `make_env("lerobot/pusht-env@main")`   |
| `user/repo:path`     | Load custom file               | `make_env("lerobot/envs:pusht.py")`    |
| `user/repo@rev:path` | Revision + custom file         | `make_env("lerobot/envs@v1:pusht.py")` |

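The patterns above decompose with a couple of string splits. Below is a hedged sketch of that parsing logic for intuition; it is not the actual LeRobot implementation, and the default revision/file names are assumptions:

```python
# Hedged sketch of parsing 'user/repo[@revision][:path]'; defaults assumed.
def parse_env_spec(spec):
    repo, _, path = spec.partition(":")      # optional custom file path
    repo, _, revision = repo.partition("@")  # optional git revision
    return repo, revision or "main", path or "env.py"

parsed = parse_env_spec("lerobot/envs@v1:pusht.py")
```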
## Multi-Task Environments

For benchmarks with multiple tasks (like LIBERO), return a nested dictionary:

```python
def make_env(n_envs: int = 1, use_async_envs: bool = False):
    env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv

    # Return a dict: {suite_name: {task_id: VectorEnv}}
    return {
        "suite_1": {
            0: env_cls([lambda: gym.make("Task1-v0") for _ in range(n_envs)]),
            1: env_cls([lambda: gym.make("Task2-v0") for _ in range(n_envs)]),
        },
        "suite_2": {
            0: env_cls([lambda: gym.make("Task3-v0") for _ in range(n_envs)]),
        },
    }
```

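Since `make_env` may return a bare env, a vectorized env, or the nested dict above, LeRobot normalizes the result into the nested form before use. A hedged mental model of that normalization (the real helper is `_normalize_hub_result` in `lerobot.envs.utils`; this sketch is illustrative, not its actual code):

```python
# Hedged mental model: wrap a bare result into the {suite: {task_id: env}}
# shape; real wrapping of a single gym.Env into a VectorEnv is omitted here.
def normalize_result(result, suite_name="default"):
    if isinstance(result, dict):  # already {suite_name: {task_id: env}}
        return result
    return {suite_name: {0: result}}

normalized = normalize_result("my_vector_env")
```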
## Security Considerations

<Tip warning={true}>
**Important**: The `trust_remote_code=True` flag is required to execute
environment code from the Hub. This is by design for security.
</Tip>

When loading environments from the Hub:

1. **Review the code first**: Visit the repository and inspect `env.py` before loading
2. **Pin to commits**: Use specific commit hashes for reproducibility
3. **Check dependencies**: Review `requirements.txt` for suspicious packages
4. **Use trusted sources**: Prefer official organizations or well-known researchers
5. **Sandbox if needed**: Run untrusted code in isolated environments (containers, VMs)

Example of safe usage:

```python
# ❌ BAD: Loading without inspection
env = make_env("random-user/untrusted-env", trust_remote_code=True)

# ✅ GOOD: Review the code, then pin to a specific commit
# 1. Visit https://huggingface.co/trusted-org/verified-env
# 2. Review the env.py file
# 3. Copy the commit hash
env = make_env("trusted-org/verified-env@a1b2c3d4", trust_remote_code=True)
```

## Example: CartPole from the Hub

Here's a complete example using the reference CartPole environment:

```python
from lerobot.envs.factory import make_env
import numpy as np

# Load the environment
envs_dict = make_env("lerobot/cartpole-env", n_envs=4, trust_remote_code=True)

# Get the vectorized environment
suite_name = next(iter(envs_dict))
env = envs_dict[suite_name][0]

# Run a simple episode per parallel environment
obs, info = env.reset()
done = np.zeros(env.num_envs, dtype=bool)
total_reward = np.zeros(env.num_envs)

while not done.all():
    # Random policy
    action = env.action_space.sample()
    obs, reward, terminated, truncated, info = env.step(action)
    total_reward += reward * ~done      # stop counting envs that already finished
    done |= terminated | truncated      # accumulate: vector envs auto-reset

print(f"Average reward: {total_reward.mean():.2f}")
env.close()
```

## Benefits of EnvHub

### For Environment Authors

- **Easy distribution**: No PyPI packaging required
- **Version control**: Use Git for environment versioning
- **Rapid iteration**: Push updates instantly
- **Documentation**: The Hub README renders beautifully
- **Community**: Reach LeRobot users directly

### For Researchers

- **Quick experiments**: Load any environment in one line
- **Reproducibility**: Pin to specific commits
- **Discovery**: Browse environments on the Hub
- **No conflicts**: No need to install conflicting packages

### For the Community

- **Growing ecosystem**: More diverse simulation tasks
- **Standardization**: A common `make_env` API
- **Collaboration**: Fork and improve existing environments
- **Accessibility**: A lower barrier to sharing research

## Troubleshooting

### "Refusing to execute remote code"

You must explicitly pass `trust_remote_code=True`:

```python
env = make_env("user/repo", trust_remote_code=True)
```

### "Module X not found"

The hub environment has dependencies you need to install:

```bash
# Check the repo's requirements.txt and install its dependencies
pip install gymnasium numpy
```

### "make_env not found in module"

Your `env.py` must expose a `make_env` function:

```python
def make_env(n_envs: int, use_async_envs: bool):
    # Your implementation
    pass
```

### Environment returns wrong type

The `make_env` function must return:

- A `gym.vector.VectorEnv`, or
- A single `gym.Env`, or
- A dict `{suite_name: {task_id: VectorEnv}}`

## Best Practices

1. **Document your environment**: Include observation/action space descriptions, the reward structure, and termination conditions in your README
2. **Add a requirements.txt**: List all dependencies with versions
3. **Test thoroughly**: Verify your environment works locally before pushing
4. **Use semantic versioning**: Tag releases with version numbers
5. **Add examples**: Include usage examples in your README
6. **Keep it simple**: Minimize dependencies when possible
7. **License your work**: Add a LICENSE file to clarify usage terms

## Future Directions

The EnvHub ecosystem enables exciting possibilities:

- **GPU-accelerated physics**: Share Isaac Gym or Brax environments
- **Photorealistic rendering**: Distribute environments with advanced graphics
- **Multi-agent scenarios**: Complex interaction tasks
- **Real-world simulators**: Digital twins of physical setups
- **Procedural generation**: Infinite task variations
- **Domain randomization**: Pre-configured DR pipelines

As more researchers and developers contribute, the diversity and quality of available environments will grow, benefiting the entire robotics learning community.

## See Also
|
| 428 |
+
|
| 429 |
+
- [Hugging Face Hub Documentation](https://huggingface.co/docs/hub/en/index)
|
| 430 |
+
- [Gymnasium Documentation](https://gymnasium.farama.org/index.html)
|
| 431 |
+
- [Example Hub Environment](https://huggingface.co/lerobot/cartpole-env)
|
lerobot/docs/source/envhub_isaaclab_arena.mdx
ADDED
# NVIDIA IsaacLab Arena & LeRobot

LeRobot EnvHub now supports **GPU-accelerated simulation** with IsaacLab Arena for policy evaluation at scale.
Train and evaluate imitation learning policies with high-fidelity simulation — all integrated into the LeRobot ecosystem.

<img
  src="https://huggingface.co/nvidia/isaaclab-arena-envs/resolve/main/assets/Gr1OpenMicrowaveEnvironment.png"
  alt="IsaacLab Arena - GR1 Microwave Environment"
  style={{ maxWidth: "100%", borderRadius: "8px", marginBottom: "1rem" }}
/>

[IsaacLab Arena](https://github.com/isaac-sim/IsaacLab-Arena) integrates with NVIDIA IsaacLab to provide:

- 🤖 **Humanoid embodiments**: GR1, G1, Galileo with various configurations
- 🎯 **Manipulation & loco-manipulation tasks**: Door opening, pick-and-place, button pressing, and more
- ⚡ **GPU-accelerated rollouts**: Parallel environment execution on NVIDIA GPUs
- 🖼️ **RTX rendering**: Evaluate vision-based policies with realistic rendering, reflections, and refractions
- 📦 **LeRobot-compatible datasets**: Ready for training with GR00T N1x, PI0, SmolVLA, ACT, and Diffusion policies
- 🔄 **EnvHub integration**: Load environments from HuggingFace EnvHub with one line

## Installation

### Prerequisites

Hardware requirements are shared with Isaac Sim and are detailed in the [Isaac Sim requirements](https://docs.isaacsim.omniverse.nvidia.com/5.1.0/installation/requirements.html).

- NVIDIA GPU with CUDA support
- NVIDIA driver compatible with Isaac Sim 5.1.0
- Linux (Ubuntu 22.04 / 24.04)

### Setup

```bash
# 1. Create a conda environment
conda create -y -n lerobot-arena python=3.11
conda activate lerobot-arena
conda install -y -c conda-forge ffmpeg=7.1.1

# 2. Install Isaac Sim 5.1.0
pip install "isaacsim[all,extscache]==5.1.0" --extra-index-url https://pypi.nvidia.com

# Accept the NVIDIA EULA (required)
export ACCEPT_EULA=Y
export PRIVACY_CONSENT=Y

# 3. Install IsaacLab 2.3.0
git clone https://github.com/isaac-sim/IsaacLab.git
cd IsaacLab
git checkout v2.3.0
./isaaclab.sh -i
cd ..

# 4. Install IsaacLab Arena
git clone https://github.com/isaac-sim/IsaacLab-Arena.git
cd IsaacLab-Arena
git checkout release/0.1.1
pip install -e .
cd ..

# 5. Install LeRobot
git clone https://github.com/huggingface/lerobot.git
cd lerobot
pip install -e .
cd ..

# 6. Install additional dependencies
pip install onnxruntime==1.23.2 lightwheel-sdk==1.0.1 vuer[all]==0.0.70 qpsolvers==4.8.1
pip install numpy==1.26.0  # Isaac Sim 5.1 depends on numpy==1.26.0; this will be fixed in the next release
```
## Evaluating Policies

### Pre-trained Policies

The following trained policies are available:

| Policy                      | Architecture | Task          | Link                                                                     |
| :-------------------------- | :----------- | :------------ | :----------------------------------------------------------------------- |
| pi05-arena-gr1-microwave    | PI0.5        | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/pi05-arena-gr1-microwave)    |
| smolvla-arena-gr1-microwave | SmolVLA      | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/smolvla-arena-gr1-microwave) |

### Evaluate SmolVLA

```bash
pip install -e ".[smolvla]"
pip install numpy==1.26.0  # revert numpy to version 1.26.0
```

```bash
lerobot-eval \
  --policy.path=nvidia/smolvla-arena-gr1-microwave \
  --env.type=isaaclab_arena \
  --env.hub_path=nvidia/isaaclab-arena-envs \
  --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \
  --policy.device=cuda \
  --env.environment=gr1_microwave \
  --env.embodiment=gr1_pink \
  --env.object=mustard_bottle \
  --env.headless=false \
  --env.enable_cameras=true \
  --env.video=true \
  --env.video_length=10 \
  --env.video_interval=15 \
  --env.state_keys=robot_joint_pos \
  --env.camera_keys=robot_pov_cam_rgb \
  --trust_remote_code=True \
  --eval.batch_size=1
```
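The `--rename_map` flag bridges naming differences between the environment's observation keys and the keys the policy was trained with. Conceptually it performs a key substitution on each observation dict, along these lines (an illustrative sketch, not LeRobot's actual implementation):

```python
import json

# The JSON string passed on the command line
rename_map = json.loads(
    '{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}'
)


def apply_rename(obs: dict, rename_map: dict) -> dict:
    """Rename observation keys so they match what the policy expects."""
    return {rename_map.get(key, key): value for key, value in obs.items()}


obs = {"observation.images.robot_pov_cam_rgb": "frame", "observation.state": "joints"}
renamed = apply_rename(obs, rename_map)
# "observation.images.robot_pov_cam_rgb" is now "observation.images.robot_pov_cam"
```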
### Evaluate PI0.5

```bash
pip install -e ".[pi]"
pip install numpy==1.26.0  # revert numpy to version 1.26.0
```

<Tip>PI0.5 requires disabling torch compile for evaluation.</Tip>

```bash
TORCH_COMPILE_DISABLE=1 TORCHINDUCTOR_DISABLE=1 lerobot-eval \
  --policy.path=nvidia/pi05-arena-gr1-microwave \
  --env.type=isaaclab_arena \
  --env.hub_path=nvidia/isaaclab-arena-envs \
  --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \
  --policy.device=cuda \
  --env.environment=gr1_microwave \
  --env.embodiment=gr1_pink \
  --env.object=mustard_bottle \
  --env.headless=false \
  --env.enable_cameras=true \
  --env.video=true \
  --env.video_length=15 \
  --env.video_interval=15 \
  --env.state_keys=robot_joint_pos \
  --env.camera_keys=robot_pov_cam_rgb \
  --trust_remote_code=True \
  --eval.batch_size=1
```
<Tip>
  To change the number of parallel environments, use the `--eval.batch_size` flag.
</Tip>

### What to Expect

During evaluation, you will see a progress bar showing the running success rate:

```
Stepping through eval batches: 8%|██████▍ | 4/50 [00:45<08:06, 10.58s/it, running_success_rate=25.0%]
```

### Video Recording

To enable video recording during evaluation, add the following flags to your command:

```bash
--env.video=true \
--env.video_length=15 \
--env.video_interval=15
```

For more details on video recording, see the [IsaacLab recording documentation](https://isaac-sim.github.io/IsaacLab/main/source/how-to/record_video.html).
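Assuming `--env.video_interval` is the number of steps between clip starts and `--env.video_length` is the number of frames per clip (the semantics suggested by the flag names; confirm against the IsaacLab recording documentation), the set of recorded steps can be sketched as:

```python
def recording_steps(total_steps: int, video_interval: int, video_length: int) -> list:
    """Steps at which frames are captured: a clip of `video_length` frames
    starts every `video_interval` steps."""
    steps = []
    for start in range(0, total_steps, video_interval):
        # Capture frames until the clip is full or the rollout ends
        steps.extend(range(start, min(start + video_length, total_steps)))
    return steps


# With video_interval=15 and video_length=10, clips start at steps 0, 15, 30, ...
print(recording_steps(35, 15, 10))
```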
<Tip>
  When running headless with `--env.headless=true`, you must also enable cameras explicitly for camera-enabled environments:

```bash
--env.headless=true --env.enable_cameras=true
```

</Tip>

### Output Directory

Evaluation videos are saved to the output directory with the following structure:

```
outputs/eval/<date>/<timestamp>_<env>_<policy>/videos/<task>_<env_id>/eval_episode_<n>.mp4
```

For example:

```
outputs/eval/2026-01-02/14-38-01_isaaclab_arena_smolvla/videos/gr1_microwave_0/eval_episode_0.mp4
```
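The path pattern above can be reproduced with a few lines of `pathlib`. This hypothetical helper only illustrates how the components combine; LeRobot builds the path internally:

```python
from datetime import datetime
from pathlib import Path


def eval_video_path(env: str, policy: str, task: str, env_id: int, episode: int, now: datetime) -> Path:
    """Assemble the evaluation video path following the layout shown above."""
    run_dir = f"{now.strftime('%H-%M-%S')}_{env}_{policy}"
    return (
        Path("outputs/eval")
        / now.strftime("%Y-%m-%d")
        / run_dir
        / "videos"
        / f"{task}_{env_id}"
        / f"eval_episode_{episode}.mp4"
    )


# Fixed timestamp so the example is reproducible
path = eval_video_path("isaaclab_arena", "smolvla", "gr1_microwave", 0, 0, datetime(2026, 1, 2, 14, 38, 1))
```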
## Training Policies

To learn more about training policies with LeRobot, refer to the training documentation:

- [SmolVLA](./smolvla)
- [Pi0.5](./pi05)
- [GR00T N1.5](./groot)

Sample IsaacLab Arena datasets are available on the HuggingFace Hub for experimentation:

| Dataset                                                                                                    | Description                | Frames |
| :--------------------------------------------------------------------------------------------------------- | :------------------------- | :----- |
| [Arena-GR1-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-GR1-Manipulation-Task-v3)       | GR1 microwave manipulation | ~4K    |
| [Arena-G1-Loco-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-G1-Loco-Manipulation-Task)  | G1 loco-manipulation       | ~4K    |

## Environment Configuration

### Full Configuration Options

```python
from lerobot.envs.configs import IsaaclabArenaEnv

config = IsaaclabArenaEnv(
    # Environment selection
    environment="gr1_microwave",      # Task environment
    embodiment="gr1_pink",            # Robot embodiment
    object="power_drill",             # Object to manipulate

    # Simulation settings
    episode_length=300,               # Max steps per episode
    headless=True,                    # Run without GUI
    device="cuda:0",                  # GPU device
    seed=42,                          # Random seed

    # Observation configuration
    state_keys="robot_joint_pos",     # State observation keys (comma-separated)
    camera_keys="robot_pov_cam_rgb",  # Camera observation keys (comma-separated)
    state_dim=54,                     # Expected state dimension
    action_dim=36,                    # Expected action dimension
    camera_height=512,                # Camera image height
    camera_width=512,                 # Camera image width
    enable_cameras=True,              # Enable camera observations

    # Video recording
    video=False,                      # Enable video recording
    video_length=100,                 # Frames per video
    video_interval=200,               # Steps between recordings

    # Advanced
    mimic=False,                      # Enable mimic mode
    teleop_device=None,               # Teleoperation device
    disable_fabric=False,             # Disable fabric optimization
    enable_pinocchio=True,            # Enable Pinocchio for IK
)
```
### Using EnvHub Directly (Advanced)

Create a file called `test_env_load_arena.py` or [download it from the EnvHub](https://huggingface.co/nvidia/isaaclab-arena-envs/blob/main/tests/test_env_load_arena.py):

```python
import logging
from dataclasses import asdict
from pprint import pformat

import torch
import tqdm

from lerobot.configs import parser
from lerobot.configs.eval import EvalPipelineConfig


@parser.wrap()
def main(cfg: EvalPipelineConfig):
    """Run a random-action rollout in an IsaacLab Arena environment."""
    logging.info(pformat(asdict(cfg)))

    from lerobot.envs.factory import make_env

    env_dict = make_env(
        cfg.env,
        n_envs=cfg.env.num_envs,
        trust_remote_code=True,
    )
    env = next(iter(env_dict.values()))[0]
    env.reset()
    for _ in tqdm.tqdm(range(cfg.env.episode_length)):
        with torch.inference_mode():
            actions = env.action_space.sample()
            obs, rewards, terminated, truncated, info = env.step(actions)
            if terminated.any() or truncated.any():
                obs, info = env.reset()
    env.close()


if __name__ == "__main__":
    main()
```

Run it with:

```bash
python test_env_load_arena.py \
  --env.environment=g1_locomanip_pnp \
  --env.embodiment=gr1_pink \
  --env.object=cracker_box \
  --env.num_envs=4 \
  --env.enable_cameras=true \
  --env.seed=1000 \
  --env.video=true \
  --env.video_length=10 \
  --env.video_interval=15 \
  --env.headless=false \
  --env.hub_path=nvidia/isaaclab-arena-envs \
  --env.type=isaaclab_arena
```
## Creating New Environments

First, create a new IsaacLab Arena environment by following the [IsaacLab Arena documentation](https://isaac-sim.github.io/IsaacLab-Arena/release/0.1.1/index.html).

Clone our EnvHub repo:

```bash
git clone https://huggingface.co/nvidia/isaaclab-arena-envs
```

Modify the `example_envs.yaml` file for your new environment, then
[upload](./envhub#step-3-upload-to-the-hub) your modified repo to the HuggingFace EnvHub.

<Tip>
  Your IsaacLab Arena environment code must be locally available during
  evaluation. Users can clone your environment repository separately, or you can
  bundle the environment code and assets directly in your EnvHub repo.
</Tip>

Then, when evaluating, point to your new environment:

```bash
lerobot-eval \
  --env.hub_path=<your-env-hub-path>/isaaclab-arena-envs \
  --env.environment=<your new environment> \
  ...other flags...
```

We look forward to your contributions!
## Troubleshooting

### CUDA out of memory

Reduce the batch size or use a GPU with more VRAM:

```bash
--eval.batch_size=1
```

### EULA not accepted

Set the environment variables before running:

```bash
export ACCEPT_EULA=Y
export PRIVACY_CONSENT=Y
```

### Video recording not working

Enable cameras when running headless:

```bash
--env.video=true --env.enable_cameras=true --env.headless=true
```

### Policy output dimension mismatch

Ensure `action_dim` matches your policy:

```bash
--env.action_dim=36
```

### libGLU.so.1 errors during Isaac Sim initialization

This typically happens on headless machines. Ensure the following system libraries are installed:

```bash
sudo apt update && sudo apt install -y libglu1-mesa libxt6
```
## See Also

- [EnvHub Documentation](./envhub) - General EnvHub usage
- [IsaacLab Arena GitHub](https://github.com/isaac-sim/IsaacLab-Arena)
- [IsaacLab Documentation](https://isaac-sim.github.io/IsaacLab/)
## Lightwheel LW-BenchHub

[Lightwheel](https://www.lightwheel.ai) is bringing `Lightwheel-Libero-Tasks` and `Lightwheel-RoboCasa-Tasks`, a total of 268 tasks, to the LeRobot ecosystem.
LW-BenchHub collects and generates large-scale teleoperation datasets that comply with the LeRobot specification, enabling out-of-the-box training and evaluation workflows.
With the unified interface provided by EnvHub, developers can quickly build end-to-end experimental pipelines.

### Install

Assuming you followed the [Installation](#installation) steps above, you can install LW-BenchHub with:

```bash
conda install pinocchio -c conda-forge -y
pip install numpy==1.26.0  # revert numpy to version 1.26.0

sudo apt-get install git-lfs && git lfs install

git clone https://github.com/LightwheelAI/lw_benchhub
cd lw_benchhub
git lfs pull  # Ensure LFS files (e.g., .usd assets) are downloaded
pip install -e .
```
For more detailed instructions, refer to the [LW-BenchHub documentation](https://docs.lightwheel.net/lw_benchhub/usage/Installation).

### Lightwheel Tasks Datasets

LW-BenchHub datasets are available on the HuggingFace Hub:

| Dataset                                                                                                        | Description             | Tasks | Frames |
| :------------------------------------------------------------------------------------------------------------ | :---------------------- | :---- | :----- |
| [Lightwheel-Tasks-X7S](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-X7S)                      | X7S LIBERO and RoboCasa | 117   | ~10.3M |
| [Lightwheel-Tasks-Double-Piper](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-Double-Piper)    | Double-Piper LIBERO     | 130   | ~6.0M  |
| [Lightwheel-Tasks-G1-Controller](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-Controller)  | G1-Controller LIBERO    | 62    | ~2.7M  |
| [Lightwheel-Tasks-G1-WBC](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-WBC)                | G1-WBC RoboCasa         | 32    | ~1.5M  |

For training policies, refer to the [Training Policies](#training-policies) section.

### Evaluating Policies

#### Pre-trained Policies

The following trained policies are available:

| Policy                   | Architecture | Task                           | Layout     | Robot           | Link                                                                                  |
| :----------------------- | :----------- | :----------------------------- | :--------- | :-------------- | :------------------------------------------------------------------------------------ |
| smolvla-double-piper-pnp | SmolVLA      | L90K1PutTheBlackBowlOnThePlate | libero-1-1 | DoublePiper-Abs | [HuggingFace](https://huggingface.co/LightwheelAI/smolvla-double-piper-pnp/tree/main) |

#### Evaluate SmolVLA

```bash
lerobot-eval \
  --policy.path=LightwheelAI/smolvla-double-piper-pnp \
  --env.type=isaaclab_arena \
  --rename_map='{"observation.images.left_hand_camera_rgb": "observation.images.left_hand", "observation.images.right_hand_camera_rgb": "observation.images.right_hand", "observation.images.first_person_camera_rgb": "observation.images.first_person"}' \
  --env.hub_path=LightwheelAI/lw_benchhub_env \
  --env.kwargs='{"config_path": "configs/envhub/example.yml"}' \
  --trust_remote_code=true \
  --env.state_keys=joint_pos \
  --env.action_dim=12 \
  --env.camera_keys=left_hand_camera_rgb,right_hand_camera_rgb,first_person_camera_rgb \
  --policy.device=cuda \
  --eval.batch_size=10 \
  --eval.n_episodes=100
```
### Environment Configuration

Evaluation can be launched quickly by modifying the `robot`, `task`, and `layout` settings in the configuration file.

#### Full Configuration Options

```yml
# =========================
# Basic Settings
# =========================
disable_fabric: false
device: cuda:0
sensitivity: 1.0
step_hz: 50
enable_cameras: true
execute_mode: eval
episode_length_s: 20.0 # Episode length in seconds; increase if episodes time out during eval

# =========================
# Robot Settings
# =========================
robot: DoublePiper-Abs # Robot type: DoublePiper-Abs, X7S-Abs, G1-Controller, or G1-Controller-DecoupledWBC
robot_scale: 1.0

# =========================
# Task & Scene Settings
# =========================
task: L90K1PutTheBlackBowlOnThePlate # Task name
scene_backend: robocasa
task_backend: robocasa
debug_assets: null
layout: libero-1-1 # Layout and style ID
sources:
  - objaverse
  - lightwheel
  - aigen_objs
object_projects: []
usd_simplify: false
seed: 42

# =========================
# Object Placement Retry Settings
# =========================
max_scene_retry: 4
max_object_placement_retry: 3

resample_objects_placement_on_reset: true
resample_robot_placement_on_reset: true

# =========================
# Replay Configuration Settings
# =========================
replay_cfgs:
  add_camera_to_observation: true
  render_resolution: [640, 480]
```

### See Also

- [LW-BenchHub GitHub](https://github.com/LightwheelAI/LW-BenchHub)
- [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/)
lerobot/docs/source/envhub_leisaac.mdx
ADDED
| 1 |
+
# LeIsaac × LeRobot EnvHub
|
| 2 |
+
|
| 3 |
+
LeRobot EnvHub now supports **imitation learning in simulation** with LeIsaac.
|
| 4 |
+
Spin up everyday manipulation tasks, teleoperate the robot, collect demos, push them to the Hub, and train policies in LeRobot — all in one loop.
|
| 5 |
+
|
| 6 |
+
[LeIsaac](https://github.com/LightwheelAI/leisaac) integrates with IsaacLab and the SO101 Leader/Follower setup to provide:
|
| 7 |
+
|
| 8 |
+
- 🕹️ **Teleoperation-first workflows** for data collection
|
| 9 |
+
- 📦 **Built-in data conversion** ready for LeRobot training
|
| 10 |
+
- 🤖 **Everyday skills** like picking oranges, lifting cubes, cleaning tables, and folding cloth
|
| 11 |
+
- ☁️ **Ongoing upgrades** from [LightWheel](https://lightwheel.ai/): cloud simulation, EnvHub support, Sim2Real tooling, and more
|
| 12 |
+
|
| 13 |
+
Below you’ll find the currently supported LeIsaac tasks exposed through LeRobot EnvHub.
|
| 14 |
+
|
| 15 |
+
# Available Environments
|
| 16 |
+
|
| 17 |
+
The following table lists all available tasks and environments in LeIsaac x LeRobot Envhub. You can also get the latest list of environments by running the following command:
|
| 18 |
+
|
| 19 |
+
```bash
|
| 20 |
+
python scripts/environments/list_envs.py
|
| 21 |
+
```
|
| 22 |
+
|
| 23 |
+
| Task | Environment ID | Task Description | Related Robot |
|
| 24 |
+
| :-------------------------------------------------------------------------------------------------------------------------------------------------------------- | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------- | :--------------------------------------------------------- |
|
| 25 |
+
| <video src="https://github.com/user-attachments/assets/466eddff-f720-4f99-94d5-5e123e4c302c" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-PickOrange-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/pick_orange_env_cfg.py)<br /><br />[LeIsaac-SO101-PickOrange-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/direct/pick_orange_env.py) | Pick three oranges and put them into the plate, then reset the arm to rest state. | Single-Arm SO101 Follower |
|
| 26 |
+
| <video src="https://github.com/user-attachments/assets/1e4eb83a-0b38-40fb-a0b2-ddb0fe201e6d" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-LiftCube-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/lift_cube_env_cfg.py)<br /><br />[LeIsaac-SO101-LiftCube-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/direct/lift_cube_env.py) | Lift the red cube up. | Single-Arm SO101 Follower |
| <video src="https://github.com/user-attachments/assets/e49d8f1c-dcc9-412b-a88f-100680d8a45b" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-CleanToyTable-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_env_cfg.py)<br /><br />[LeIsaac-SO101-CleanToyTable-BiArm-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_bi_arm_env_cfg.py)<br /><br />[LeIsaac-SO101-CleanToyTable-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/direct/clean_toy_table_bi_arm_env.py) | Pick the two letter "e" objects into the box, then reset the arm to its rest state. | Single-Arm SO101 Follower<br /><br />Bi-Arm SO101 Follower |
| <video src="https://github.com/user-attachments/assets/e29a0f8a-9286-4ce6-b45d-342c3d3ba754" autoplay loop muted playsinline style="max-width: 300px;"></video> | [LeIsaac-SO101-FoldCloth-BiArm-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/fold_cloth_bi_arm_env_cfg.py)<br /><br />[LeIsaac-SO101-FoldCloth-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/direct/fold_cloth_bi_arm_env.py) | Fold the cloth, then reset the arm to its rest state.<br /><br />_Note: Only the DirectEnv supports `check_success` in this task._ | Bi-Arm SO101 Follower |

# Load LeIsaac directly in LeRobot with one line of code

> EnvHub: Share LeIsaac environments through HuggingFace

[EnvHub](https://huggingface.co/docs/lerobot/envhub) is our reproducible environment hub: spin up a packaged simulation with one line, experiment immediately, and publish your own tasks for the community.

LeIsaac offers EnvHub support, so you can consume or share tasks with only a few commands.

<video
  controls
  src="https://github.com/user-attachments/assets/687666f5-ebe0-421d-84a0-eb86116ac5f8"
  style={{ width: "100%", maxWidth: "960px", borderRadius: "8px" }}
/>

## Getting Started: Environment Setup

Run the following commands to set up your environment:
```bash
# Refer to Getting Started/Installation to install leisaac first
conda create -n leisaac_envhub python=3.11
conda activate leisaac_envhub

conda install -c "nvidia/label/cuda-12.8.1" cuda-toolkit
pip install -U torch==2.7.0 torchvision==0.22.0 --index-url https://download.pytorch.org/whl/cu128
pip install 'leisaac[isaaclab] @ git+https://github.com/LightwheelAI/leisaac.git#subdirectory=source/leisaac' --extra-index-url https://pypi.nvidia.com

# Install lerobot
pip install lerobot==0.4.1

# Pin the numpy version
pip install numpy==1.26.0
```

## Usage Example

EnvHub exposes every LeIsaac-supported task through a uniform interface. The examples below load `so101_pick_orange` and demonstrate a random-action rollout and interactive teleoperation.

### Random Action
<details>
<summary>Click to expand code example</summary>

```python
# envhub_random_action.py

import torch

from lerobot.envs.factory import make_env

# Load from the hub
envs_dict = make_env("LightwheelAI/leisaac_env:envs/so101_pick_orange.py", n_envs=1, trust_remote_code=True)

# Access the environment
suite_name = next(iter(envs_dict))
sync_vector_env = envs_dict[suite_name][0]
# Retrieve the Isaac environment from the sync vector env
env = sync_vector_env.envs[0].unwrapped

# Use it like any gym environment
obs, info = env.reset()

while True:
    action = torch.tensor(env.action_space.sample())
    obs, reward, terminated, truncated, info = env.step(action)
    if terminated or truncated:
        obs, info = env.reset()

env.close()
```

</details>

```bash
python envhub_random_action.py
```

You should see the SO101 arm swinging under purely random commands.

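If the nested indexing into `envs_dict` looks opaque, here is the same access pattern in isolation, using plain Python containers as hypothetical stand-ins for the objects `make_env` returns:

```python
# Hypothetical stand-in for make_env's return value:
# {suite_name: {env_index: sync_vector_env}}, where the vector env
# exposes its sub-environments through a list.
envs_dict = {"leisaac": {0: ["env0"]}}

suite_name = next(iter(envs_dict))          # first (and only) suite key
sync_vector_env = envs_dict[suite_name][0]  # vector env at index 0
inner_env = sync_vector_env[0]              # first sub-environment

print(suite_name, inner_env)  # prints: leisaac env0
```

The real objects are LeRobot/Gymnasium wrappers rather than lists and dicts, but the two-level lookup (suite name, then environment index) is the same.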
### Teleoperation

LeRobot's teleoperation stack can drive the simulated arm.

Connect the SO101 Leader controller and run the calibration command below.

```bash
lerobot-calibrate \
    --teleop.type=so101_leader \
    --teleop.port=/dev/ttyACM0 \
    --teleop.id=leader
```

Then launch the teleop script.
<details>
<summary>Click to expand code example</summary>

```python
# envhub_teleop_example.py

import logging
import time

import gymnasium as gym

from dataclasses import asdict, dataclass
from pprint import pformat

from lerobot.teleoperators import (  # noqa: F401
    Teleoperator,
    TeleoperatorConfig,
    make_teleoperator_from_config,
    so_leader,
    bi_so_leader,
)
from lerobot.utils.robot_utils import precise_sleep
from lerobot.utils.utils import init_logging
from lerobot.envs.factory import make_env


@dataclass
class TeleoperateConfig:
    teleop: TeleoperatorConfig
    env_name: str = "so101_pick_orange"
    fps: int = 60


@dataclass
class EnvWrap:
    env: gym.Env


def make_env_from_leisaac(env_name: str = "so101_pick_orange"):
    envs_dict = make_env(
        f"LightwheelAI/leisaac_env:envs/{env_name}.py",
        n_envs=1,
        trust_remote_code=True,
    )
    suite_name = next(iter(envs_dict))
    sync_vector_env = envs_dict[suite_name][0]
    env = sync_vector_env.envs[0].unwrapped

    return env


def teleop_loop(teleop: Teleoperator, env: gym.Env, fps: int):
    from leisaac.devices.action_process import preprocess_device_action
    from leisaac.assets.robots.lerobot import SO101_FOLLOWER_MOTOR_LIMITS
    from leisaac.utils.env_utils import dynamic_reset_gripper_effort_limit_sim

    env_wrap = EnvWrap(env=env)

    obs, info = env.reset()
    while True:
        loop_start = time.perf_counter()
        if env.cfg.dynamic_reset_gripper_effort_limit:
            dynamic_reset_gripper_effort_limit_sim(env, "so101leader")

        raw_action = teleop.get_action()
        processed_action = preprocess_device_action(
            dict(
                so101_leader=True,
                joint_state={k.removesuffix(".pos"): v for k, v in raw_action.items()},
                motor_limits=SO101_FOLLOWER_MOTOR_LIMITS,
            ),
            env_wrap,
        )
        obs, reward, terminated, truncated, info = env.step(processed_action)
        if terminated or truncated:
            obs, info = env.reset()

        dt_s = time.perf_counter() - loop_start
        precise_sleep(max(1 / fps - dt_s, 0.0))
        loop_s = time.perf_counter() - loop_start
        print(f"\ntime: {loop_s * 1e3:.2f}ms ({1 / loop_s:.0f} Hz)")


def teleoperate(cfg: TeleoperateConfig):
    init_logging()
    logging.info(pformat(asdict(cfg)))

    teleop = make_teleoperator_from_config(cfg.teleop)
    env = make_env_from_leisaac(cfg.env_name)

    teleop.connect()
    if hasattr(env, "initialize"):
        env.initialize()
    try:
        teleop_loop(teleop=teleop, env=env, fps=cfg.fps)
    except KeyboardInterrupt:
        pass
    finally:
        teleop.disconnect()
        env.close()


def main():
    teleoperate(
        TeleoperateConfig(
            teleop=so_leader.SO101LeaderConfig(
                port="/dev/ttyACM0",
                id="leader",
                use_degrees=False,
            ),
            env_name="so101_pick_orange",
            fps=60,
        )
    )


if __name__ == "__main__":
    main()
```

</details>
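The teleop loop above paces itself to a fixed control rate by sleeping for whatever remains of each period. That timing pattern can be sketched in isolation (using `time.sleep` as a stand-in for LeRobot's `precise_sleep`):

```python
import time

def run_at_fixed_rate(step, fps: int, n_steps: int) -> list[float]:
    """Call step() n_steps times, pacing each iteration to roughly 1/fps seconds."""
    periods = []
    for _ in range(n_steps):
        loop_start = time.perf_counter()
        step()
        dt_s = time.perf_counter() - loop_start
        # Sleep only for what is left of the period; never a negative duration.
        time.sleep(max(1 / fps - dt_s, 0.0))
        periods.append(time.perf_counter() - loop_start)
    return periods

periods = run_at_fixed_rate(lambda: None, fps=50, n_steps=5)
print(len(periods))  # prints: 5
```

Clamping the sleep time at zero means a slow iteration simply skips the sleep instead of raising an error, so the loop degrades gracefully when `step()` overruns the period.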

```bash
python envhub_teleop_example.py
```

Running the script lets you operate the simulated arm using the physical Leader device.

## ☁️ Cloud Simulation (No GPU Required)

Don't have a local GPU or the right drivers? No problem! You can run LeIsaac entirely in the cloud with zero setup.
LeIsaac works out of the box on **NVIDIA Brev**, giving you a fully configured environment directly in your browser.

👉 **Start here:** [https://lightwheelai.github.io/leisaac/docs/cloud_simulation/nvidia_brev](https://lightwheelai.github.io/leisaac/docs/cloud_simulation/nvidia_brev)

Once your instance is deployed, simply open the link for **port 80 (HTTP)** to launch **Visual Studio Code Server** (default password: `password`). From there, you can run simulations, edit code, and visualize IsaacLab environments, all from your web browser.

**No GPU, no drivers, no local installation. Just click and run.**

## Additional Notes

We keep EnvHub coverage aligned with the LeIsaac task suite. Currently supported:

- `so101_pick_orange`
- `so101_lift_cube`
- `so101_clean_toytable`
- `bi_so101_fold_cloth`

Switch tasks by targeting a different script when calling `make_env`, for example:

```python
envs_dict_pick_orange = make_env("LightwheelAI/leisaac_env:envs/so101_pick_orange.py", n_envs=1, trust_remote_code=True)
envs_dict_lift_cube = make_env("LightwheelAI/leisaac_env:envs/so101_lift_cube.py", n_envs=1, trust_remote_code=True)
envs_dict_clean_toytable = make_env("LightwheelAI/leisaac_env:envs/so101_clean_toytable.py", n_envs=1, trust_remote_code=True)
envs_dict_fold_cloth = make_env("LightwheelAI/leisaac_env:envs/bi_so101_fold_cloth.py", n_envs=1, trust_remote_code=True)
```

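Since every supported task follows the same `LightwheelAI/leisaac_env:envs/<task>.py` naming scheme, you can build the Hub path programmatically. The helper below is just an illustrative sketch, not part of the LeRobot API:

```python
# Task names currently published on EnvHub (see the list above).
SUPPORTED_TASKS = [
    "so101_pick_orange",
    "so101_lift_cube",
    "so101_clean_toytable",
    "bi_so101_fold_cloth",
]

def hub_path(task: str) -> str:
    """Build the EnvHub script path for a LeIsaac task (hypothetical helper)."""
    if task not in SUPPORTED_TASKS:
        raise ValueError(f"Unsupported task: {task}")
    return f"LightwheelAI/leisaac_env:envs/{task}.py"

print(hub_path("so101_lift_cube"))  # prints: LightwheelAI/leisaac_env:envs/so101_lift_cube.py
```

The resulting string is exactly what `make_env` expects as its first argument.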
Note: when working with `bi_so101_fold_cloth`, call `initialize()` immediately after retrieving the env, before performing any other operations:

<details>
<summary>Click to expand code example</summary>

```python
import torch
from lerobot.envs.factory import make_env

# Load from the hub
envs_dict = make_env("LightwheelAI/leisaac_env:envs/bi_so101_fold_cloth.py", n_envs=1, trust_remote_code=True)

# Access the environment
suite_name = next(iter(envs_dict))
sync_vector_env = envs_dict[suite_name][0]
# Retrieve the Isaac environment from the sync vector env
env = sync_vector_env.envs[0].unwrapped

# NOTE: call initialize() first
env.initialize()

# other operations with the env...
```

</details>
lerobot/docs/source/feetech.mdx
ADDED
|
@@ -0,0 +1,71 @@
# Feetech Motor Firmware Update

This tutorial guides you through updating the firmware of Feetech motors using the official Feetech software.

## Prerequisites

- Windows computer (the Feetech software is only available for Windows)
- Feetech motor control board
- USB cable to connect the control board to your computer
- Feetech motors connected to the control board

## Step 1: Download Feetech Software

1. Visit the official Feetech software download page: [https://www.feetechrc.com/software.html](https://www.feetechrc.com/software.html)
2. Download the latest version of the Feetech debugging software (FD)
3. Install the software on your Windows computer

## Step 2: Hardware Setup

1. Connect your Feetech motors to the motor control board
2. Connect the motor control board to your Windows computer via USB cable
3. Ensure power is supplied to the motors

## Step 3: Configure Connection

1. Launch the Feetech debugging software
2. Select the correct COM port from the port dropdown menu
   - If unsure which port to use, check Windows Device Manager under "Ports (COM & LPT)"
3. Set the appropriate baud rate (typically 1000000 for most Feetech motors)
4. Click "Open" to establish communication with the control board

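For reference, the connection values chosen here (COM port plus a 1,000,000 baud rate with standard 8N1 framing) map onto an ordinary serial-port configuration. A minimal sketch, assuming you would later hand these settings to a serial library such as pyserial; the helper name is hypothetical and the official tool handles all of this for you:

```python
def feetech_serial_settings(port: str) -> dict:
    """Serial settings matching the values chosen in the Feetech software (hypothetical helper)."""
    return {
        "port": port,           # e.g. "COM3" on Windows
        "baudrate": 1_000_000,  # typical default for Feetech serial bus servos
        "bytesize": 8,          # 8 data bits
        "parity": "N",          # no parity
        "stopbits": 1,          # 1 stop bit
        "timeout": 0.1,         # read timeout in seconds
    }

print(feetech_serial_settings("COM3")["baudrate"])  # prints: 1000000
```

If the motors do not respond, a mismatched baud rate is the most common cause; try the other rates the software offers before suspecting the wiring.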
## Step 4: Scan for Motors

1. Once connected, click the "Search" button to detect all connected motors
2. The software will automatically discover and list all motors on the bus
3. Each motor will appear with its ID number

## Step 5: Update Firmware

For each motor you want to update:

1. **Select the motor** from the list by clicking on it
2. **Click the Upgrade tab**
3. **Click the Online button**
   - If a firmware update is available, it will be displayed in the box
4. **Click the Upgrade button**
   - The update progress will be displayed

## Step 6: Verify Update

1. After the update completes, the software should automatically refresh the motor information
2. Verify that the firmware version has been updated to the expected version

## Important Notes

⚠️ **Warning**: Do not disconnect power or USB during a firmware update; doing so can brick the motor.

## Bonus: Motor Debugging on Linux/macOS

For debugging purposes only, you can use the open-source Feetech Debug Tool:

- **Repository**: [FT_SCServo_Debug_Qt](https://github.com/CarolinePascal/FT_SCServo_Debug_Qt/tree/fix/port-search-timer)

### Installation Instructions

Follow the instructions in the repository to install the tool: on Ubuntu you can install it directly, while on macOS you need to build it from source.

**Limitations:**

- This tool is for debugging and parameter adjustment only
- Firmware updates must still be done on Windows with the official Feetech software
lerobot/docs/source/groot.mdx
ADDED
|
@@ -0,0 +1,131 @@
# GR00T N1.5 Policy

GR00T N1.5 is an open foundation model from NVIDIA designed for generalized humanoid robot reasoning and skills. It is a cross-embodiment model that accepts multimodal input, including language and images, to perform manipulation tasks in diverse environments.

This document outlines the specifics of its integration and usage within the LeRobot framework.

## Model Overview

NVIDIA Isaac GR00T N1.5 is an upgraded version of the GR00T N1 foundation model. It is built to improve generalization and language-following abilities for humanoid robots.

Developers and researchers can post-train GR00T N1.5 with their own real or synthetic data to adapt it for specific humanoid robots or tasks.

GR00T N1.5 (specifically the GR00T-N1.5-3B model) is built using pre-trained vision and language encoders. It utilizes a flow-matching action transformer to model a chunk of actions, conditioned on vision, language, and proprioception.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-groot-paper1%20(1).png"
  alt="An overview of GR00T"
  width="80%"
/>

Its strong performance comes from being trained on an expansive and diverse humanoid dataset, which includes:

- Real captured data from robots.
- Synthetic data generated using the NVIDIA Isaac GR00T Blueprint.
- Internet-scale video data.

This approach allows the model to be highly adaptable through post-training for specific embodiments, tasks, and environments.

## Installation Requirements

As of today, GR00T N1.5 requires Flash Attention for its internal workings.

We are working on making this optional, but in the meantime it requires an extra installation step and can only be used on CUDA-enabled devices.

1. Follow the Environment Setup section of our [Installation Guide](./installation). **Attention:** don't install `lerobot` in this step.
2. Install [Flash Attention](https://github.com/Dao-AILab/flash-attention) by running:

```bash
# Check https://pytorch.org/get-started/locally/ for your system
pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX
pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies
pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation
python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')"
```

3. Install LeRobot by running:

```bash
pip install "lerobot[groot]"
```

## Usage

To use GR00T in your LeRobot configuration, specify the policy type as:

```python
policy.type=groot
```

## Training

### Training Command Example

Here's a complete training command for finetuning the base GR00T model on your own dataset:

```bash
# Using a multi-GPU setup
accelerate launch \
  --multi_gpu \
  --num_processes=$NUM_GPUS \
  $(which lerobot-train) \
  --output_dir=$OUTPUT_DIR \
  --save_checkpoint=true \
  --batch_size=$BATCH_SIZE \
  --steps=$NUM_STEPS \
  --save_freq=$SAVE_FREQ \
  --log_freq=$LOG_FREQ \
  --policy.push_to_hub=true \
  --policy.type=groot \
  --policy.repo_id=$REPO_ID \
  --policy.tune_diffusion_model=false \
  --dataset.repo_id=$DATASET_ID \
  --wandb.enable=true \
  --wandb.disable_artifact=true \
  --job_name=$JOB_NAME
```

## Performance Results

### Libero Benchmark Results

> [!NOTE]
> Follow our instructions for Libero usage: [Libero](./libero)

GR00T has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the GR00T N1.5 model for 30k steps on the Libero dataset and compared the results to the GR00T reference results.

| Benchmark          | LeRobot Implementation | GR00T Reference |
| ------------------ | ---------------------- | --------------- |
| **Libero Spatial** | 82.0%                  | 92.0%           |
| **Libero Object**  | 99.0%                  | 92.0%           |
| **Libero Long**    | 82.0%                  | 76.0%           |
| **Average**        | 87.0%                  | 87.0%           |

These results demonstrate GR00T's strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.

### Evaluate in your hardware setup

Once you have trained your model using your parameters, you can run inference on your downstream task. Follow the instructions in [Imitation Learning for Robots](./il_robots). For example:

```bash
# --policy.path should point to your trained model
lerobot-record \
  --robot.type=bi_so_follower \
  --robot.left_arm_port=/dev/ttyACM1 \
  --robot.right_arm_port=/dev/ttyACM0 \
  --robot.id=bimanual_follower \
  --robot.cameras='{ right: {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30},
    left: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
    top: {"type": "opencv", "index_or_path": 4, "width": 640, "height": 480, "fps": 30},
  }' \
  --display_data=true \
  --dataset.repo_id=<user>/eval_groot-bimanual \
  --dataset.num_episodes=10 \
  --dataset.single_task="Grab and handover the red cube to the other arm" \
  --policy.path=<user>/groot-bimanual \
  --dataset.episode_time_s=30 \
  --dataset.reset_time_s=10
```

## License

This model follows the **Apache 2.0 License**, consistent with the original [GR00T repository](https://github.com/NVIDIA/Isaac-GR00T).
lerobot/docs/source/hilserl.mdx
ADDED
|
@@ -0,0 +1,923 @@
# HIL-SERL Real Robot Training Workflow Guide

In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient Reinforcement Learning (HIL-SERL) workflow using LeRobot. You will learn to train a policy with RL on a real robot in just a few hours.

HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you'll use a gamepad to provide interventions and control the robot during the learning process.

It combines three key ingredients:

1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.

2. **On-robot actor/learner loop with human interventions:** a distributed Soft Actor-Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.

3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing, and WandB monitoring keep the data useful and the hardware safe.

Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hilserl-main-figure.png"
    alt="HIL-SERL workflow"
    title="HIL-SERL workflow"
    width="100%"
  />
</p>

<p align="center">
  <i>HIL-SERL workflow, Luo et al. 2024</i>
</p>

This guide provides step-by-step instructions for training a robot policy on a real robot using LeRobot's HIL-SERL implementation.

## What do I need?

- A gamepad (recommended) or keyboard to control the robot
- An NVIDIA GPU
- A real robot with a follower arm and a leader arm (the leader arm is optional if you use the keyboard or a gamepad)
- A URDF file for the robot for the kinematics package (check `lerobot/model/kinematics.py`)

## What kind of tasks can I train?

One can use HIL-SERL to train on a variety of manipulation tasks. Some recommendations:

- Start with a simple task to understand how the system works.
  - Push a cube to a goal region
  - Pick and lift a cube with the gripper
- Avoid extremely long-horizon tasks. Focus on tasks that can be completed in 5-10 seconds.
- Once you have a good idea of how the system works, you can try more complex tasks and longer horizons.
  - Pick and place a cube
  - Bimanual tasks to pick objects with two arms
  - Hand-over tasks to transfer objects from one arm to another
  - Go crazy!

## Install LeRobot with HIL-SERL

To install LeRobot with HIL-SERL, you need to install the `hilserl` extra.

```bash
pip install -e ".[hilserl]"
```

## Real Robot Training Workflow

### Understanding Configuration

The training process begins with proper configuration for the HILSerl environment. The main configuration class is `GymManipulatorConfig` in `lerobot/rl/gym_manipulator.py`, which contains nested `HILSerlRobotEnvConfig` and `DatasetConfig`. The configuration is organized into focused, nested sub-configs:

<!-- prettier-ignore-start -->
```python
class GymManipulatorConfig:
    env: HILSerlRobotEnvConfig  # Environment configuration (nested)
    dataset: DatasetConfig  # Dataset recording/replay configuration (nested)
    mode: str | None = None  # "record", "replay", or None (for training)
    device: str = "cpu"  # Compute device

class HILSerlRobotEnvConfig(EnvConfig):
    robot: RobotConfig | None = None  # Main robot agent (defined in `lerobot/robots`)
    teleop: TeleoperatorConfig | None = None  # Teleoperator agent, e.g., gamepad or leader arm
    processor: HILSerlProcessorConfig  # Processing pipeline configuration (nested)
    name: str = "real_robot"  # Environment name
    task: str | None = None  # Task identifier
    fps: int = 10  # Control frequency

# Nested processor configuration
class HILSerlProcessorConfig:
    control_mode: str = "gamepad"  # Control mode
    observation: ObservationConfig | None = None  # Observation processing settings
    image_preprocessing: ImagePreprocessingConfig | None = None  # Image crop/resize settings
    gripper: GripperConfig | None = None  # Gripper control and penalty settings
    reset: ResetConfig | None = None  # Environment reset and timing settings
    inverse_kinematics: InverseKinematicsConfig | None = None  # IK processing settings
    reward_classifier: RewardClassifierConfig | None = None  # Reward classifier settings
    max_gripper_pos: float | None = 100.0  # Maximum gripper position

# Sub-configuration classes
class ObservationConfig:
    add_joint_velocity_to_observation: bool = False  # Add joint velocities to state
    add_current_to_observation: bool = False  # Add motor currents to state
    display_cameras: bool = False  # Display camera feeds during execution

class ImagePreprocessingConfig:
    crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None  # Image cropping parameters
| 102 |
+
resize_size: tuple[int, int] | None = None # Target image size
|
| 103 |
+
|
| 104 |
+
class GripperConfig:
|
| 105 |
+
use_gripper: bool = True # Enable gripper control
|
| 106 |
+
gripper_penalty: float = 0.0 # Penalty for inappropriate gripper usage
|
| 107 |
+
|
| 108 |
+
class ResetConfig:
|
| 109 |
+
fixed_reset_joint_positions: Any | None = None # Joint positions for reset
|
| 110 |
+
reset_time_s: float = 5.0 # Time to wait during reset
|
| 111 |
+
control_time_s: float = 20.0 # Maximum episode duration
|
| 112 |
+
terminate_on_success: bool = True # Whether to terminate episodes on success detection
|
| 113 |
+
|
| 114 |
+
class InverseKinematicsConfig:
|
| 115 |
+
urdf_path: str | None = None # Path to robot URDF file
|
| 116 |
+
target_frame_name: str | None = None # End-effector frame name
|
| 117 |
+
end_effector_bounds: dict[str, list[float]] | None = None # EE workspace bounds
|
| 118 |
+
end_effector_step_sizes: dict[str, float] | None = None # EE step sizes per axis
|
| 119 |
+
|
| 120 |
+
class RewardClassifierConfig:
|
| 121 |
+
pretrained_path: str | None = None # Path to pretrained reward classifier
|
| 122 |
+
success_threshold: float = 0.5 # Success detection threshold
|
| 123 |
+
success_reward: float = 1.0 # Reward value for successful episodes
|
| 124 |
+
|
| 125 |
+
# Dataset configuration
|
| 126 |
+
class DatasetConfig:
|
| 127 |
+
repo_id: str # LeRobot dataset repository ID
|
| 128 |
+
task: str # Task identifier
|
| 129 |
+
root: str | None = None # Local dataset root directory
|
| 130 |
+
num_episodes_to_record: int = 5 # Number of episodes for recording
|
| 131 |
+
replay_episode: int | None = None # Episode index for replay
|
| 132 |
+
push_to_hub: bool = False # Whether to push datasets to Hub
|
| 133 |
+
```
|
| 134 |
+
<!-- prettier-ignore-end -->
|
| 135 |
+
|
| 136 |
+
### Processor Pipeline Architecture

HIL-SERL uses a modular processor pipeline architecture that processes robot observations and actions through a series of composable steps. The pipeline is divided into two main components:

#### Environment Processor Pipeline

The environment processor (`env_processor`) handles incoming observations and environment state:

1. **VanillaObservationProcessorStep**: Converts raw robot observations into a standardized format
2. **JointVelocityProcessorStep** (optional): Adds joint velocity information to observations
3. **MotorCurrentProcessorStep** (optional): Adds motor current readings to observations
4. **ForwardKinematicsJointsToEE** (optional): Computes the end-effector pose from joint positions
5. **ImageCropResizeProcessorStep** (optional): Crops and resizes camera images
6. **TimeLimitProcessorStep** (optional): Enforces episode time limits
7. **GripperPenaltyProcessorStep** (optional): Applies penalties for inappropriate gripper usage
8. **RewardClassifierProcessorStep** (optional): Automated reward detection using vision models
9. **AddBatchDimensionProcessorStep**: Converts data to batch format for neural network processing
10. **DeviceProcessorStep**: Moves data to the specified compute device (CPU/GPU)

#### Action Processor Pipeline

The action processor (`action_processor`) handles outgoing actions and human interventions:

1. **AddTeleopActionAsComplimentaryDataStep**: Captures teleoperator actions for logging
2. **AddTeleopEventsAsInfoStep**: Records intervention events and episode control signals
3. **InterventionActionProcessorStep**: Handles human interventions and episode termination
4. **Inverse Kinematics Pipeline** (when enabled):
   - **MapDeltaActionToRobotActionStep**: Converts delta actions to the robot action format
   - **EEReferenceAndDelta**: Computes the end-effector reference and delta movements
   - **EEBoundsAndSafety**: Enforces workspace safety bounds
   - **InverseKinematicsEEToJoints**: Converts end-effector actions to joint targets
   - **GripperVelocityToJoint**: Handles gripper control commands
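
The composition pattern behind both pipelines can be pictured with a small standalone sketch. The `Pipeline` class and the step functions below are hypothetical stand-ins for illustration, not the actual LeRobot classes:

```python
# Minimal sketch of the processor-pipeline idea: each step is a callable
# that takes a transition dict and returns a (possibly modified) transition.
class Pipeline:
    def __init__(self, steps):
        self.steps = steps

    def __call__(self, transition):
        for step in self.steps:
            transition = step(transition)
        return transition

def add_batch_dim(transition):
    # Stand-in for a batching step: wrap each observation value in a list.
    transition["observation"] = {k: [v] for k, v in transition["observation"].items()}
    return transition

def clamp_action(transition):
    # Stand-in for a safety step: clamp each action component to [-1, 1].
    transition["action"] = [max(-1.0, min(1.0, a)) for a in transition["action"]]
    return transition

env_pipeline = Pipeline([add_batch_dim])
action_pipeline = Pipeline([clamp_action])

out = action_pipeline({"observation": {}, "action": [0.5, -2.0]})
# out["action"] == [0.5, -1.0]
```

Because every step shares the same interface, steps can be enabled, disabled, or reordered through configuration without touching the rest of the pipeline.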

#### Configuration Examples

**Basic Observation Processing**:

```json
{
  "env": {
    "processor": {
      "observation": {
        "add_joint_velocity_to_observation": true,
        "add_current_to_observation": false,
        "display_cameras": false
      }
    }
  }
}
```

**Image Processing**:

```json
{
  "env": {
    "processor": {
      "image_preprocessing": {
        "crop_params_dict": {
          "observation.images.front": [180, 250, 120, 150],
          "observation.images.side": [180, 207, 180, 200]
        },
        "resize_size": [128, 128]
      }
    }
  }
}
```

**Inverse Kinematics Setup**:

```json
{
  "env": {
    "processor": {
      "inverse_kinematics": {
        "urdf_path": "path/to/robot.urdf",
        "target_frame_name": "end_effector",
        "end_effector_bounds": {
          "min": [0.16, -0.08, 0.03],
          "max": [0.24, 0.2, 0.1]
        },
        "end_effector_step_sizes": {
          "x": 0.02,
          "y": 0.02,
          "z": 0.02
        }
      }
    }
  }
}
```

### Advanced Observation Processing

The HIL-SERL framework supports additional observation processing features that can improve policy learning:

#### Joint Velocity Processing

Enable joint velocity estimation to provide the policy with motion information:

```json
{
  "env": {
    "processor": {
      "observation": {
        "add_joint_velocity_to_observation": true
      }
    }
  }
}
```

This processor:

- Estimates joint velocities using finite differences between consecutive joint position readings
- Adds velocity information to the observation state vector
- Is useful for policies that need motion awareness for dynamic tasks
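
The finite-difference estimate itself is simple; a hypothetical standalone sketch (not the actual `JointVelocityProcessorStep` code):

```python
def estimate_joint_velocities(prev_pos, curr_pos, dt):
    """Finite-difference joint velocity estimate between two consecutive readings."""
    return [(c - p) / dt for p, c in zip(prev_pos, curr_pos)]

# At a 10 Hz control frequency (dt = 0.1 s), a joint moving from 10.0 to
# 10.5 degrees between frames is estimated at roughly 5.0 deg/s.
velocities = estimate_joint_velocities([10.0, 45.0], [10.5, 45.0], dt=0.1)
```

Note that finite differences amplify sensor noise, which is one reason this feature is optional.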

#### Motor Current Processing

Monitor motor currents to detect contact forces and load conditions:

```json
{
  "env": {
    "processor": {
      "observation": {
        "add_current_to_observation": true
      }
    }
  }
}
```

This processor:

- Reads motor current values from the robot's control system
- Adds current measurements to the observation state vector
- Helps detect contact events, object weights, and mechanical resistance
- Is useful for contact-rich manipulation tasks

#### Combined Observation Processing

You can enable multiple observation processing features simultaneously:

```json
{
  "env": {
    "processor": {
      "observation": {
        "add_joint_velocity_to_observation": true,
        "add_current_to_observation": true,
        "display_cameras": false
      }
    }
  }
}
```

**Note**: Enabling additional observation features increases the state space dimensionality, which may require adjusting your policy network architecture and potentially collecting more training data.

### Finding Robot Workspace Bounds

Before collecting demonstrations, you need to determine the appropriate operational bounds for your robot.

This helps simplify the problem of learning on the real robot in two ways: 1) by limiting the robot's operational space to a specific region that solves the task and avoids unnecessary or unsafe exploration, and 2) by allowing training in end-effector space rather than joint space. Empirically, learning in joint space for reinforcement learning in manipulation is often a harder problem - some tasks are nearly impossible to learn in joint space but become learnable when the action space is transformed to end-effector coordinates.

**Using lerobot-find-joint-limits**

This script helps you find the safe operational bounds for your robot's end-effector. Given that you have a follower and leader arm, you can use the script to find the bounds for the follower arm that will be applied during training. Bounding the action space reduces the agent's redundant exploration and guarantees safety.

```bash
lerobot-find-joint-limits \
    --robot.type=so100_follower \
    --robot.port=/dev/tty.usbmodem58760431541 \
    --robot.id=black \
    --teleop.type=so100_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=blue
```

**Workflow**

1. Run the script and move the robot through the space that solves the task
2. The script records the minimum and maximum end-effector positions and joint angles and prints them to the console, for example:
   ```
   Max ee position [0.2417 0.2012 0.1027]
   Min ee position [0.1663 -0.0823 0.0336]
   Max joint positions [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
   Min joint positions [-20.0, -20.0, -20.0, -20.0, -20.0, -20.0]
   ```
3. Use these values in the configuration of your teleoperation device (`TeleoperatorConfig`) under the `end_effector_bounds` field

**Example Configuration**

```json
"end_effector_bounds": {
  "max": [0.24, 0.20, 0.10],
  "min": [0.16, -0.08, 0.03]
}
```
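
Conceptually, these bounds act as a per-axis clamp on commanded end-effector targets. A hypothetical sketch of the idea (not the actual `EEBoundsAndSafety` implementation):

```python
def clamp_ee_target(target, bounds):
    """Clamp an (x, y, z) end-effector target to the workspace bounds."""
    return [
        min(max(coord, lo), hi)
        for coord, lo, hi in zip(target, bounds["min"], bounds["max"])
    ]

bounds = {"min": [0.16, -0.08, 0.03], "max": [0.24, 0.20, 0.10]}
# A target outside the box is pulled back to the nearest point inside it:
clamped = clamp_ee_target([0.30, 0.00, 0.05], bounds)
# clamped == [0.24, 0.0, 0.05]
```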

### Collecting Demonstrations

With the bounds defined, you can safely collect demonstrations for training. Training RL with an off-policy algorithm allows us to use offline datasets collected in advance to improve the efficiency of the learning process.

**Setting Up Record Mode**

Create a configuration file for recording demonstrations (or edit an existing one like [env_config.json](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/env_config.json)):

1. Set `mode` to `"record"` at the root level
2. Specify a unique `repo_id` for your dataset in the `dataset` section (e.g., "username/task_name")
3. Set `num_episodes_to_record` in the `dataset` section to the number of demonstrations you want to collect
4. Set `env.processor.image_preprocessing.crop_params_dict` to `{}` initially (we'll determine crops later)
5. Configure `env.robot`, `env.teleop`, and other hardware settings in the `env` section

Example configuration section:

```json
{
  "env": {
    "type": "gym_manipulator",
    "name": "real_robot",
    "fps": 10,
    "processor": {
      "control_mode": "gamepad",
      "observation": {
        "display_cameras": false
      },
      "image_preprocessing": {
        "crop_params_dict": {},
        "resize_size": [128, 128]
      },
      "gripper": {
        "use_gripper": true,
        "gripper_penalty": 0.0
      },
      "reset": {
        "reset_time_s": 5.0,
        "control_time_s": 20.0
      }
    },
    "robot": {
      // ... robot configuration ...
    },
    "teleop": {
      // ... teleoperator configuration ...
    }
  },
  "dataset": {
    "repo_id": "username/pick_lift_cube",
    "root": null,
    "task": "pick_and_lift",
    "num_episodes_to_record": 15,
    "replay_episode": 0,
    "push_to_hub": true
  },
  "mode": "record",
  "device": "cpu"
}
```

### Using a Teleoperation Device

Along with your robot, you will need a teleoperation device to control it, both to collect datasets of your task and to perform interventions during online training. We support using a gamepad, a keyboard, or the leader arm of the robot.

HIL-SERL learns actions in the end-effector space of the robot. Therefore, the teleoperation controls the end-effector's x, y, z displacements.

For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class `SO100FollowerEndEffector` and its configuration `SO100FollowerEndEffectorConfig` for the default parameters related to the end-effector space.

<!-- prettier-ignore-start -->
```python
class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
    """Configuration for the SO100FollowerEndEffector robot."""

    # Default bounds for the end-effector position (in meters)
    end_effector_bounds: dict[str, list[float]] = field(  # bounds for the end-effector in the x, y, z directions
        default_factory=lambda: {
            "min": [-1.0, -1.0, -1.0],  # min x, y, z
            "max": [1.0, 1.0, 1.0],  # max x, y, z
        }
    )

    max_gripper_pos: float = 50  # maximum position the gripper will open to

    end_effector_step_sizes: dict[str, float] = field(  # maximum step size for the end-effector in the x, y, z directions
        default_factory=lambda: {
            "x": 0.02,
            "y": 0.02,
            "z": 0.02,
        }
    )
```
<!-- prettier-ignore-end -->
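
The `end_effector_step_sizes` bound how far a single teleoperation command can move the end-effector. A hypothetical sketch of the mapping from a normalized input axis to a bounded per-step delta (`axis_to_delta` is an illustrative helper, not LeRobot code):

```python
def axis_to_delta(axis_value, step_size):
    """Map a normalized input axis in [-1, 1] to a bounded end-effector delta (meters)."""
    axis_value = max(-1.0, min(1.0, axis_value))  # guard against out-of-range input
    return axis_value * step_size

dx = axis_to_delta(0.5, step_size=0.02)   # half deflection -> 0.01 m step
dy = axis_to_delta(-3.0, step_size=0.02)  # saturated input -> -0.02 m step
```

Small step sizes keep each action's effect local, which makes both teleoperation and RL exploration safer.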

The `Teleoperator` class defines the teleoperation device. You can check the list of available teleoperators in `lerobot/teleoperators`.

**Setting up the Gamepad**

The gamepad provides a very convenient way to control the robot and the episode state.

To set up the gamepad, set `control_mode` to `"gamepad"` and define the `teleop` section in the configuration file:

```json
{
  "env": {
    "teleop": {
      "type": "gamepad",
      "use_gripper": true
    },
    "processor": {
      "control_mode": "gamepad",
      "gripper": {
        "use_gripper": true
      }
    }
  }
}
```

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
    alt="Figure shows the control mappings on a Logitech gamepad."
    title="Gamepad Control Mapping"
    width="100%"
  ></img>
</p>
<p align="center">
  <i>Gamepad button mapping for robot control and episode management</i>
</p>

**Setting up the SO101 leader**

The SO101 leader arm has reduced gearing that allows it to move and track the follower arm during exploration. Taking over is therefore much smoother than with the gearless SO100.

To set up the SO101 leader, set `control_mode` to `"leader"` and define the `teleop` section in the configuration file:

```json
{
  "env": {
    "teleop": {
      "type": "so101_leader",
      "port": "/dev/tty.usbmodem585A0077921",
      "use_degrees": true
    },
    "processor": {
      "control_mode": "leader",
      "gripper": {
        "use_gripper": true
      }
    }
  }
}
```

To annotate the success/failure of an episode, **you will need** a keyboard: press `s` for success and `esc` for failure.
During online training, press `space` to take over from the policy and `space` again to give control back to the policy.

<details>
<summary><strong>Video: SO101 leader teleoperation</strong></summary>

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so101_leader_tutorial.mp4"
      type="video/mp4"
    />
  </video>
</div>

<p align="center"><i>SO101 leader teleoperation example: the leader tracks the follower; press `space` to intervene</i></p>
</details>

**Recording Demonstrations**

Start the recording process; an example config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_so100.json):

```bash
python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
```

During recording:

1. The robot resets to the initial position defined in the configuration file under `env.processor.reset.fixed_reset_joint_positions`
2. Complete the task successfully
3. The episode ends with a reward of 1 when you press the "success" button
4. If the time limit is reached, or the fail button is pressed, the episode ends with a reward of 0
5. You can re-record an episode by pressing the "rerecord" button
6. The process automatically continues to the next episode
7. After recording all episodes, the dataset is pushed to the Hugging Face Hub (optional) and saved locally

### Processing the Dataset

After collecting demonstrations, process them to determine optimal camera crops.
Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.

Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. Good ROI selection should:

- Include only the essential workspace where the task happens
- Capture the robot's end-effector and all objects involved in the task
- Exclude unnecessary background elements and distractions

Note: If you already know the crop parameters, you can skip this step and simply set `crop_params_dict` in the configuration file during recording.

**Determining Crop Parameters**

Use the `crop_dataset_roi.py` script to interactively select regions of interest in your camera images:

```bash
python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
```

1. For each camera view, the script displays the first frame
2. Draw a rectangle around the relevant workspace area
3. Press `c` to confirm the selection
4. Repeat for all camera views
5. The script outputs the cropping parameters and creates a new cropped dataset

Example output:

```
Selected Rectangular Regions of Interest (top, left, height, width):
observation.images.side: [180, 207, 180, 200]
observation.images.front: [180, 250, 120, 150]
```
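
The `(top, left, height, width)` convention can be sanity-checked with a tiny standalone sketch (plain Python lists standing in for image tensors; `crop_image` is a hypothetical helper, not the script's code):

```python
def crop_image(img, top, left, height, width):
    """Crop a 2-D image (list of rows) using (top, left, height, width)."""
    return [row[left:left + width] for row in img[top:top + height]]

# A 10x10 synthetic image whose pixel value encodes its (row, column).
img = [[r * 10 + c for c in range(10)] for r in range(10)]
patch = crop_image(img, top=2, left=3, height=4, width=5)
# patch is 4 rows of 5 pixels; its top-left pixel comes from (row 2, col 3).
```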

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/crop_dataset.gif"
    width="600"
  />
</p>

<p align="center">
  <i>Interactive cropping tool for selecting regions of interest</i>
</p>

**Updating Configuration**

Add these crop parameters to your training configuration:

```json
{
  "env": {
    "processor": {
      "image_preprocessing": {
        "crop_params_dict": {
          "observation.images.side": [180, 207, 180, 200],
          "observation.images.front": [180, 250, 120, 150]
        },
        "resize_size": [128, 128]
      }
    }
  }
}
```

**Recommended image resolution**

Most vision-based policies have been validated on square inputs of either **128×128** (default) or **64×64** pixels. We therefore advise setting the `resize_size` parameter to `[128, 128]` - or `[64, 64]` if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.

### Training a Reward Classifier

The reward classifier plays an important role in the HIL-SERL workflow by automating reward assignment and automatically detecting episode success. Instead of manually defining reward functions or relying on human feedback for every timestep, the reward classifier learns to predict success/failure from visual observations. This enables the RL algorithm to learn efficiently by providing consistent and automated reward signals based on the robot's camera inputs.

This guide explains how to train a reward classifier for LeRobot's human-in-the-loop reinforcement learning implementation. Reward classifiers learn to predict the reward value given a state, which can be used in an RL setup to train a policy.

**Note**: Training a reward classifier is optional. You can start the first round of RL experiments by annotating success manually with your gamepad or keyboard device.

The reward classifier implementation in `modeling_classifier.py` uses a pretrained vision model to process the images. It can output either a single value for binary rewards to predict success/fail cases, or multiple values for multi-class settings.
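
For the binary case, the decision reduces to thresholding a sigmoid probability. A hedged sketch of that final step (the function below is illustrative; the actual classifier operates on image features):

```python
import math

def predict_success(logit, success_threshold=0.5):
    """Turn a classifier logit into a success probability and a binary decision."""
    prob = 1.0 / (1.0 + math.exp(-logit))  # sigmoid
    return prob >= success_threshold, prob

is_success, prob = predict_success(2.0)  # sigmoid(2.0) is about 0.88 -> success
```

Raising `success_threshold` (e.g., the 0.7 used later in this guide) trades recall for precision: fewer false success detections at the cost of missing some true ones.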

**Collecting a Dataset for the Reward Classifier**

Before training, you need to collect a dataset with labeled examples. The `record_dataset` function in `gym_manipulator.py` enables collecting a dataset of observations, actions, and rewards.

To collect a dataset, you need to modify some parameters in the environment configuration based on `HILSerlRobotEnvConfig`:

```bash
python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
```

**Key Parameters for Data Collection**

- **mode**: set it to `"record"` to collect a dataset (at the root level)
- **dataset.repo_id**: `"hf_username/dataset_name"`, the name of the dataset and repo on the Hub
- **dataset.num_episodes_to_record**: Number of episodes to record
- **env.processor.reset.terminate_on_success**: Whether to automatically terminate episodes when success is detected (default: `true`)
- **env.fps**: Number of frames per second to record
- **dataset.push_to_hub**: Whether to push the dataset to the Hub

The `env.processor.reset.terminate_on_success` parameter allows you to control episode termination behavior. When set to `false`, episodes continue even after success is detected, allowing you to collect more positive examples with the reward=1 label. This is crucial for training reward classifiers as it provides more success-state examples in your dataset. When set to `true` (the default), episodes terminate immediately upon success detection.

**Important**: For reward classifier training, set `terminate_on_success: false` to collect sufficient positive examples. For regular HIL-SERL training, keep it as `true` to enable automatic episode termination when the task is completed successfully.

Example configuration section for data collection:

```json
{
  "env": {
    "type": "gym_manipulator",
    "name": "real_robot",
    "fps": 10,
    "processor": {
      "reset": {
        "reset_time_s": 5.0,
        "control_time_s": 20.0,
        "terminate_on_success": false
      },
      "gripper": {
        "use_gripper": true
      }
    },
    "robot": {
      // ... robot configuration ...
    },
    "teleop": {
      // ... teleoperator configuration ...
    }
  },
  "dataset": {
    "repo_id": "hf_username/dataset_name",
    "dataset_root": "data/your_dataset",
    "task": "reward_classifier_task",
    "num_episodes_to_record": 20,
    "replay_episode": null,
    "push_to_hub": true
  },
  "mode": "record",
  "device": "cpu"
}
```

**Reward Classifier Configuration**

The reward classifier is configured using `configuration_classifier.py`. Here are the key parameters:

- **model_name**: Base model architecture (e.g., we mainly use `"helper2424/resnet10"`)
- **model_type**: `"cnn"` or `"transformer"`
- **num_cameras**: Number of camera inputs
- **num_classes**: Number of output classes (typically 2 for binary success/failure)
- **hidden_dim**: Size of the hidden representation
- **dropout_rate**: Regularization parameter
- **learning_rate**: Learning rate for the optimizer

Example configuration for training the [reward classifier](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/reward_classifier_train_config.json):

```json
{
  "policy": {
    "type": "reward_classifier",
    "model_name": "helper2424/resnet10",
    "model_type": "cnn",
    "num_cameras": 2,
    "num_classes": 2,
    "hidden_dim": 256,
    "dropout_rate": 0.1,
    "learning_rate": 1e-4,
    "device": "cuda",
    "use_amp": true,
    "input_features": {
      "observation.images.front": {
        "type": "VISUAL",
        "shape": [3, 128, 128]
      },
      "observation.images.side": {
        "type": "VISUAL",
        "shape": [3, 128, 128]
      }
    }
  }
}
```
|
| 712 |
+
|
| 713 |
+
**Training the Classifier**
|
| 714 |
+
|
| 715 |
+
To train the classifier, use the `train.py` script with your configuration:
|
| 716 |
+
|
| 717 |
+
```bash
|
| 718 |
+
lerobot-train --config_path path/to/reward_classifier_train_config.json
|
| 719 |
+
```
|
| 720 |
+
|
| 721 |
+
**Deploying and Testing the Model**
|
| 722 |
+
|
| 723 |
+
To use your trained reward classifier, configure the `HILSerlRobotEnvConfig` to use your model:
|
| 724 |
+
|
| 725 |
+
<!-- prettier-ignore-start -->
|
| 726 |
+
```python
|
| 727 |
+
config = GymManipulatorConfig(
|
| 728 |
+
env=HILSerlRobotEnvConfig(
|
| 729 |
+
processor=HILSerlProcessorConfig(
|
| 730 |
+
reward_classifier=RewardClassifierConfig(
|
| 731 |
+
pretrained_path="path_to_your_pretrained_trained_model"
|
| 732 |
+
)
|
| 733 |
+
),
|
| 734 |
+
# Other environment parameters
|
| 735 |
+
),
|
| 736 |
+
dataset=DatasetConfig(...),
|
| 737 |
+
mode=None # For training
|
| 738 |
+
)
|
| 739 |
+
```
|
| 740 |
+
<!-- prettier-ignore-end -->
|
| 741 |
+
|
| 742 |
+
or set the argument in the json config file.
|
| 743 |
+
|
| 744 |
+
```json
|
| 745 |
+
{
|
| 746 |
+
"env": {
|
| 747 |
+
"processor": {
|
| 748 |
+
"reward_classifier": {
|
| 749 |
+
"pretrained_path": "path_to_your_pretrained_model",
|
| 750 |
+
"success_threshold": 0.7,
|
| 751 |
+
"success_reward": 1.0
|
| 752 |
+
},
|
| 753 |
+
"reset": {
|
| 754 |
+
"terminate_on_success": true
|
| 755 |
+
}
|
| 756 |
+
}
|
| 757 |
+
}
|
| 758 |
+
}
|
| 759 |
+
```
|
| 760 |
+
|
| 761 |
+
Run `gym_manipulator.py` to test the model.
|
| 762 |
+
|
| 763 |
+
```bash
|
| 764 |
+
python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
|
| 765 |
+
```
|
| 766 |
+
|
| 767 |
+
The reward classifier will automatically provide rewards based on the visual input from the robot's cameras.

**Example Workflow for training the reward classifier**

1. **Create the configuration files**:
   Create the necessary JSON configuration files for the reward classifier and the environment. Check the examples [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/reward_classifier/config.json).

2. **Collect a dataset**:

   ```bash
   python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
   ```

3. **Train the classifier**:

   ```bash
   lerobot-train --config_path src/lerobot/configs/reward_classifier_train_config.json
   ```

4. **Test the classifier**:

   ```bash
   python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
   ```

### Training with Actor-Learner

The LeRobot system uses a distributed actor-learner architecture for training. This architecture decouples robot interactions from the learning process, allowing them to run concurrently without blocking each other. The actor server handles robot observations and actions, sending interaction data to the learner server. The learner server performs gradient descent and periodically updates the actor's policy weights. You will need to start two processes: a learner and an actor.

**Configuration Setup**

Create a training configuration file (example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/train_config.json)). The training config is based on the main `TrainRLServerPipelineConfig` class in `lerobot/configs/train.py`.

1. Configure the policy settings (`type="sac"`, `device`, etc.)
2. Set `dataset` to your cropped dataset
3. Configure environment settings with crop parameters
4. Check the other parameters related to SAC in [configuration_sac.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/sac/configuration_sac.py#L79).
5. Verify that the `policy` config is correct, with the right `input_features` and `output_features` for your task.

**Starting the Learner**

First, start the learner server process:

```bash
python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
```

The learner:

- Initializes the policy network
- Prepares replay buffers
- Opens a `gRPC` server to communicate with actors
- Processes transitions and updates the policy

**Starting the Actor**

In a separate terminal, start the actor process with the same configuration:

```bash
python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
```

The actor:

- Connects to the learner via `gRPC`
- Initializes the environment
- Executes rollouts of the policy to collect experience
- Sends transitions to the learner
- Receives updated policy parameters

**Training Flow**

The training proceeds automatically:

1. The actor executes the policy in the environment
2. Transitions are collected and sent to the learner
3. The learner updates the policy based on these transitions
4. Updated policy parameters are sent back to the actor
5. The process continues until the specified step limit is reached
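
The decoupling above can be sketched in a single process with a queue standing in for the gRPC channel. This is an illustrative toy, not the real `lerobot.rl.actor`/`lerobot.rl.learner` code:

```python
import queue
import threading

# Transitions flow actor -> learner; weight versions flow learner -> actor.
transitions: queue.Queue = queue.Queue()
latest_weights = {"version": 0}

def actor(num_steps: int) -> None:
    # Roll out the (stub) policy and ship each transition to the learner.
    for step in range(num_steps):
        transitions.put({"step": step, "policy_version": latest_weights["version"]})

def learner(num_updates: int) -> None:
    # Consume transitions and bump the weight version, standing in for
    # a gradient step followed by a weight push to the actor.
    for _ in range(num_updates):
        transitions.get()
        latest_weights["version"] += 1

a = threading.Thread(target=actor, args=(10,))
b = threading.Thread(target=learner, args=(10,))
a.start(); b.start(); a.join(); b.join()
print(latest_weights["version"])  # 10
```

Because neither loop waits for the other beyond the queue hand-off, the robot-facing side never blocks on gradient computation, which is the point of the architecture.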

**Human in the Loop**

- The key to learning efficiently is human intervention: providing corrective feedback and completing the task for the robot aids policy learning and exploration.
- To perform human interventions, press the upper right trigger button on the gamepad (or the `space` key on the keyboard). This pauses the policy's actions and allows you to take over.
- A successful experiment is one where the human intervenes heavily at the start but gradually reduces interventions as the policy improves. You can monitor the intervention rate in the `wandb` dashboard.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hil_effect.png?raw=true"
    alt="Plot showing the effect of human interventions on policy learning."
    title="Effect of Human Interventions"
    width="100%"
  ></img>
</p>

<p align="center">
  <i>
    Example showing how human interventions help guide policy learning over time
  </i>
</p>

- The figure plots episodic reward against interaction steps and shows the effect of human interventions on policy learning.
- The orange curve is an experiment without any human interventions, while the pink and blue curves are experiments with human interventions.
- We can observe that the number of steps before the policy starts achieving the maximum reward is cut by a quarter when human interventions are present.

**Monitoring and Debugging**

If you have `wandb.enable` set to `true` in your configuration, you can monitor training progress in real time through the [Weights & Biases](https://wandb.ai/site/) dashboard.

### Guide to Human Interventions

The learning process is very sensitive to the intervention strategy. It will take a few runs to understand how to intervene effectively. Some tips and hints:

- Allow the policy to explore for a few episodes at the start of training.
- Avoid intervening for long periods of time. Intervene in situations where the robot goes off track, to correct its behaviour.
- Once the policy starts achieving the task, even if it's not perfect, you can limit your interventions to quick, simple actions like a grasping command.

The ideal behaviour is that your intervention rate should drop gradually during training, as shown in the figure below.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/intervention_rate_tutorial_rl.png?raw=true"
    alt="Intervention rate"
    title="Intervention rate during training"
    width="100%"
  ></img>
</p>

<p align="center">
  <i>
    Plot of the intervention rate during a training run on a pick-and-lift cube task
  </i>
</p>
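
The intervention rate plotted above is, at its core, the fraction of environment steps in which the human had control. A minimal sketch of that computation (the per-step log format is an assumption for illustration):

```python
def intervention_rate(is_intervention: list[bool]) -> float:
    """Fraction of environment steps where the human was in control."""
    if not is_intervention:
        return 0.0
    return sum(is_intervention) / len(is_intervention)

# Early episode: heavy corrections; later episode: mostly autonomous.
early = [True, True, False, True, False, False, True, True, False, True]
late = [False] * 9 + [True]
print(intervention_rate(early))  # 0.6
print(intervention_rate(late))   # 0.1
```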

### Key hyperparameters to tune

Some configuration values have a disproportionate impact on training stability and speed:

- **`temperature_init`** (`policy.temperature_init`) – initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is `1e-2`. We observed that setting it too high can make human interventions ineffective and slow down learning.
- **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) – interval in _seconds_ between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
- **`storage_device`** (`policy.storage_device`) – device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to `"cuda"` (instead of the default `"cpu"`). Keeping the weights on-GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
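
In a JSON training config, these parameters sit under `policy`, following the dotted paths above. A sketch (verify the exact field names in `configuration_sac.py`):

```json
{
  "policy": {
    "type": "sac",
    "temperature_init": 0.01,
    "storage_device": "cuda",
    "actor_learner_config": {
      "policy_parameters_push_frequency": 2
    }
  }
}
```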

Congrats 🎉, you have finished this tutorial!

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).

Paper citation:

```
@article{luo2024precise,
  title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
  author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
  journal={arXiv preprint arXiv:2410.21845},
  year={2024}
}
```
lerobot/docs/source/hilserl_sim.mdx
ADDED
@@ -0,0 +1,154 @@
# Train RL in Simulation

This guide explains how to use the `gym_hil` simulation environments as an alternative to real robots when working with the LeRobot framework for Human-In-the-Loop (HIL) reinforcement learning.

`gym_hil` is a package that provides Gymnasium-compatible simulation environments specifically designed for Human-In-the-Loop reinforcement learning. These environments allow you to:

- Train policies in simulation to test the RL stack before training on real robots
- Collect demonstrations in sim using external devices like gamepads or keyboards
- Perform human interventions during policy learning

Currently, the main environment is a Franka Panda robot simulation based on MuJoCo, with tasks like picking up a cube.

## Installation

First, install the `gym_hil` package within the LeRobot environment:

```bash
pip install -e ".[hilserl]"
```

## What do I need?

- A gamepad or keyboard to control the robot
- An NVIDIA GPU

## Configuration

To use `gym_hil` with LeRobot, you need to create a configuration file. An example is provided [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/env_config.json). Key configuration sections include:

### Environment Type and Task

```json
{
  "env": {
    "type": "gym_manipulator",
    "name": "gym_hil",
    "task": "PandaPickCubeGamepad-v0",
    "fps": 10
  },
  "device": "cuda"
}
```

Available tasks:

- `PandaPickCubeBase-v0`: Basic environment
- `PandaPickCubeGamepad-v0`: With gamepad control
- `PandaPickCubeKeyboard-v0`: With keyboard control

### Processor Configuration

```json
{
  "env": {
    "processor": {
      "control_mode": "gamepad",
      "gripper": {
        "use_gripper": true,
        "gripper_penalty": -0.02
      },
      "reset": {
        "control_time_s": 15.0,
        "fixed_reset_joint_positions": [
          0.0, 0.195, 0.0, -2.43, 0.0, 2.62, 0.785
        ]
      },
      "inverse_kinematics": {
        "end_effector_step_sizes": {
          "x": 0.025,
          "y": 0.025,
          "z": 0.025
        }
      }
    }
  }
}
```

Important parameters:

- `gripper.gripper_penalty`: Penalty for excessive gripper movement
- `gripper.use_gripper`: Whether to enable gripper control
- `inverse_kinematics.end_effector_step_sizes`: Size of the end-effector steps along the x, y, and z axes
- `control_mode`: Set to `"gamepad"` to use a gamepad controller
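
To make `end_effector_step_sizes` concrete: each normalized axis command in `[-1, 1]` (from the gamepad or policy) is scaled to a per-axis Cartesian displacement. The names and conventions below are assumptions for illustration, not the `gym_hil` internals:

```python
# Per-axis maximum displacement per control step, from the config above (meters).
STEP_SIZES = {"x": 0.025, "y": 0.025, "z": 0.025}

def action_to_delta(action: dict[str, float]) -> dict[str, float]:
    """Scale clamped, normalized axis commands to end-effector displacements."""
    return {
        axis: STEP_SIZES[axis] * max(-1.0, min(1.0, value))
        for axis, value in action.items()
    }

print(action_to_delta({"x": 1.0, "y": -0.5, "z": 0.0}))
```

Larger step sizes make teleoperation faster but coarser; smaller ones give finer control at the cost of speed.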

## Running with HIL RL of LeRobot

### Basic Usage

To run the environment, set `mode` to `null`:

```bash
python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
```

### Recording a Dataset

To collect a dataset, set the mode to `record` whilst defining the `repo_id` and number of episodes to record:

```json
{
  "env": {
    "type": "gym_manipulator",
    "name": "gym_hil",
    "task": "PandaPickCubeGamepad-v0"
  },
  "dataset": {
    "repo_id": "username/sim_dataset",
    "root": null,
    "task": "pick_cube",
    "num_episodes_to_record": 10,
    "replay_episode": null,
    "push_to_hub": true
  },
  "mode": "record"
}
```

```bash
python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
```

### Training a Policy

To train a policy, check out the configuration example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/train_config.json) and run the actor and learner servers:

```bash
python -m lerobot.rl.actor --config_path path/to/train_gym_hil_env.json
```

In a different terminal, run the learner server:

```bash
python -m lerobot.rl.learner --config_path path/to/train_gym_hil_env.json
```

The simulation environment provides a safe and repeatable way to develop and test your Human-In-the-Loop reinforcement learning components before deploying to real robots.

Congrats 🎉, you have finished this tutorial!

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).

Paper citation:

```
@article{luo2024precise,
  title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
  author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
  journal={arXiv preprint arXiv:2410.21845},
  year={2024}
}
```
lerobot/docs/source/hope_jr.mdx
ADDED
@@ -0,0 +1,277 @@
# HopeJR

## Prerequisites

- [Hardware Setup](https://github.com/TheRobotStudio/HOPEJr)

## Install LeRobot

Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.

Install LeRobot with HopeJR dependencies:

```bash
pip install -e ".[hopejr]"
```

## Device Configuration

Before starting calibration and operation, you need to identify the USB ports for each HopeJR component. Run this script to find the USB ports for the arm, hand, glove, and exoskeleton:

```bash
lerobot-find-port
```

This will display the available USB ports and their associated devices. Make note of the port paths (e.g., `/dev/tty.usbmodem58760433331`, `/dev/tty.usbmodem11301`) as you'll need to specify them in the `--robot.port` and `--teleop.port` parameters when recording data, replaying episodes, or running teleoperation scripts.

## Step 1: Calibration

Before performing teleoperation, HopeJR's limbs need to be calibrated. Calibration files will be saved in `~/.cache/huggingface/lerobot/calibration`.

### 1.1 Calibrate Robot Hand

```bash
lerobot-calibrate \
    --robot.type=hope_jr_hand \
    --robot.port=/dev/tty.usbmodem58760432281 \
    --robot.id=blue \
    --robot.side=right
```

When running the calibration script, a calibration GUI will pop up. Finger joints are named as follows:

**Thumb**:

- **CMC**: base joint connecting thumb to hand
- **MCP**: knuckle joint
- **PIP**: first finger joint
- **DIP**: fingertip joint

**Index, Middle, Ring, and Pinky fingers**:

- **Radial flexor**: Moves base of finger towards the thumb
- **Ulnar flexor**: Moves base of finger towards the pinky
- **PIP/DIP**: Flexes the distal and proximal phalanx of the finger

Each one of these will need to be calibrated individually via the GUI.
Note that ulnar and radial flexors should have ranges of the same size (but with different offsets) in order to get symmetric movement.
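
The equal-range requirement above is easy to sanity-check from the recorded calibration values. The readings below are made-up example numbers, not real calibration data:

```python
def range_size(calib: dict[str, int]) -> int:
    """Span of a calibrated joint range in raw encoder ticks."""
    return calib["max"] - calib["min"]

# Hypothetical min/max readings for one finger's two flexors.
radial = {"min": 1598, "max": 1820}
ulnar = {"min": 1405, "max": 1627}

print(range_size(radial), range_size(ulnar))  # 222 222
assert range_size(radial) == range_size(ulnar), "asymmetric flexor ranges"
```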

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_1.png"
    alt="Setting boundaries in the hand calibration GUI"
    title="Setting boundaries in the hand calibration GUI"
    width="100%"
  ></img>
</p>

Use the calibration interface to set the range boundaries for each joint as shown above.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_2.png"
    alt="Saving calibration values"
    title="Saving calibration values"
    width="100%"
  ></img>
</p>

Once you have set the appropriate boundaries for all joints, click "Save" to save the calibration values to the motors.

### 1.2 Calibrate Teleoperator Glove

```bash
lerobot-calibrate \
    --teleop.type=homunculus_glove \
    --teleop.port=/dev/tty.usbmodem11201 \
    --teleop.id=red \
    --teleop.side=right
```

Move each finger through its full range of motion, starting from the thumb.

```
Move thumb through its entire range of motion.
Recording positions. Press ENTER to stop...

-------------------------------------------
NAME       |  MIN |  POS |  MAX
thumb_cmc  | 1790 | 1831 | 1853
thumb_mcp  | 1497 | 1514 | 1528
thumb_pip  | 1466 | 1496 | 1515
thumb_dip  | 1463 | 1484 | 1514
```

Continue with each finger:

```
Move middle through its entire range of motion.
Recording positions. Press ENTER to stop...

-------------------------------------------
NAME                 |  MIN |  POS |  MAX
middle_mcp_abduction | 1598 | 1718 | 1820
middle_mcp_flexion   | 1512 | 1658 | 2136
middle_dip           | 1484 | 1500 | 1547
```

Once calibration is complete, the system will save the calibration to `/Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_glove/red.json`
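
The recorded MIN/MAX values define each joint's usable range: a raw encoder reading can be normalized into `[0, 1]` over that range. This is a hedged sketch of the idea; the exact normalization code in LeRobot may differ:

```python
def normalize(raw: int, lo: int, hi: int) -> float:
    """Map a raw position onto [0, 1], clamped to the calibrated range."""
    span = hi - lo
    if span == 0:
        return 0.0
    return min(1.0, max(0.0, (raw - lo) / span))

# Using the thumb_cmc row from the table above (MIN=1790, MAX=1853):
print(normalize(1831, 1790, 1853))  # ~0.65
```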

### 1.3 Calibrate Robot Arm

```bash
lerobot-calibrate \
    --robot.type=hope_jr_arm \
    --robot.port=/dev/tty.usbserial-1110 \
    --robot.id=white
```

This will open a calibration GUI where you can set the range limits for each motor. The arm motions are organized as follows:

- **Shoulder**: pitch, yaw, and roll
- **Elbow**: flex
- **Wrist**: pitch, yaw, and roll

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_2.png"
    alt="Setting boundaries in the arm calibration GUI"
    title="Setting boundaries in the arm calibration GUI"
    width="100%"
  ></img>
</p>

Use the calibration interface to set the range boundaries for each joint. Move each joint through its full range of motion and adjust the minimum and maximum values accordingly. Once you have set the appropriate boundaries for all joints, save the calibration.

### 1.4 Calibrate Teleoperator Exoskeleton

```bash
lerobot-calibrate \
    --teleop.type=homunculus_arm \
    --teleop.port=/dev/tty.usbmodem11201 \
    --teleop.id=black
```

The exoskeleton allows one to control the robot arm. During calibration, you'll be prompted to move all joints through their full range of motion:

```
Move all joints through their entire range of motion.
Recording positions. Press ENTER to stop...

-------------------------------------------
NAME           |  MIN |  POS |  MAX
shoulder_pitch |  586 |  736 |  895
shoulder_yaw   | 1257 | 1374 | 1390
shoulder_roll  |  449 | 1034 | 2564
elbow_flex     | 3023 | 3117 | 3134
wrist_roll     | 3073 | 3096 | 3147
wrist_yaw      | 2143 | 2171 | 2185
wrist_pitch    | 1975 | 1993 | 2074
Calibration saved to /Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_arm/black.json
```

## Step 2: Teleoperation

Due to global variable conflicts in the Feetech middleware, teleoperation for arm and hand must run in separate shell sessions:

### Hand

```bash
lerobot-teleoperate \
    --robot.type=hope_jr_hand \
    --robot.port=/dev/tty.usbmodem58760432281 \
    --robot.id=blue \
    --robot.side=right \
    --teleop.type=homunculus_glove \
    --teleop.port=/dev/tty.usbmodem11201 \
    --teleop.id=red \
    --teleop.side=right \
    --display_data=true \
    --fps=30
```

### Arm

```bash
lerobot-teleoperate \
    --robot.type=hope_jr_arm \
    --robot.port=/dev/tty.usbserial-1110 \
    --robot.id=white \
    --teleop.type=homunculus_arm \
    --teleop.port=/dev/tty.usbmodem11201 \
    --teleop.id=black \
    --display_data=true \
    --fps=30
```

## Step 3: Record, Replay, Train

Recording, replaying, and training with HopeJR are still experimental.

### Record

This step records a dataset; an example can be seen [here](https://huggingface.co/datasets/nepyope/hand_record_test_with_video_data/settings).

```bash
lerobot-record \
    --robot.type=hope_jr_hand \
    --robot.port=/dev/tty.usbmodem58760432281 \
    --robot.id=right \
    --robot.side=right \
    --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
    --teleop.type=homunculus_glove \
    --teleop.port=/dev/tty.usbmodem1201 \
    --teleop.id=right \
    --teleop.side=right \
    --dataset.repo_id=nepyope/hand_record_test_with_video_data \
    --dataset.single_task="Hand recording test with video data" \
    --dataset.num_episodes=1 \
    --dataset.episode_time_s=5 \
    --dataset.push_to_hub=true \
    --dataset.private=true \
    --display_data=true
```

### Replay

```bash
lerobot-replay \
    --robot.type=hope_jr_hand \
    --robot.port=/dev/tty.usbmodem58760432281 \
    --robot.id=right \
    --robot.side=right \
    --dataset.repo_id=nepyope/hand_record_test_with_camera \
    --dataset.episode=0
```

### Train

```bash
lerobot-train \
    --dataset.repo_id=nepyope/hand_record_test_with_video_data \
    --policy.type=act \
    --output_dir=outputs/train/hopejr_hand \
    --job_name=hopejr \
    --policy.device=mps \
    --wandb.enable=true \
    --policy.repo_id=nepyope/hand_test_policy
```

### Evaluate

This training run can be viewed as an example [here](https://wandb.ai/tino/lerobot/runs/rp0k8zvw?nw=nwusertino).

```bash
lerobot-record \
    --robot.type=hope_jr_hand \
    --robot.port=/dev/tty.usbmodem58760432281 \
    --robot.id=right \
    --robot.side=right \
    --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
    --display_data=false \
    --dataset.repo_id=nepyope/eval_hopejr \
    --dataset.single_task="Evaluate hopejr hand policy" \
    --dataset.num_episodes=10 \
    --policy.path=outputs/train/hopejr_hand/checkpoints/last/pretrained_model
```
lerobot/docs/source/il_robots.mdx
ADDED
@@ -0,0 +1,620 @@
lerobot/docs/source/il_robots.mdx
ADDED
|
@@ -0,0 +1,620 @@
# Imitation Learning on Real-World Robots

This tutorial will explain how to train a neural network to control a real robot autonomously.

**You'll learn:**

1. How to record and visualize your dataset.
2. How to train a policy using your data and prepare it for evaluation.
3. How to evaluate your policy and visualize the results.

By following these steps, you'll be able to replicate tasks such as picking up a Lego block and placing it in a bin with a high success rate, as shown in the video below.

<details>
<summary><strong>Video: pickup lego block task</strong></summary>

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot_task.mp4"
      type="video/mp4"
    />
  </video>
</div>

</details>

This tutorial isn’t tied to a specific robot: we walk you through the commands and API snippets you can adapt for any supported platform.

During data collection, you’ll use a teleoperation device, such as a leader arm or keyboard, to teleoperate the robot and record its motion trajectories.

Once you’ve gathered enough trajectories, you’ll train a neural network to imitate these trajectories and deploy the trained model so your robot can perform the task autonomously.

If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.

## Set up and Calibrate

If you haven't yet set up and calibrated your robot and teleop device, please do so by following the robot-specific tutorial.

## Teleoperate

In this example, we’ll demonstrate how to teleoperate the SO101 robot. For each command, we also provide a corresponding API example.

Note that the `id` associated with a robot is used to store the calibration file. It's important to use the same `id` for teleoperation, recording, and evaluation when using the same setup.

<hfoptions id="teleoperate_so101">
<hfoption id="Command">
```bash
lerobot-teleoperate \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem58760431541 \
    --robot.id=my_awesome_follower_arm \
    --teleop.type=so101_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so_leader import SO101LeaderConfig, SO101Leader
from lerobot.robots.so_follower import SO101FollowerConfig, SO101Follower

robot_config = SO101FollowerConfig(
    port="/dev/tty.usbmodem58760431541",
    id="my_red_robot_arm",
)

teleop_config = SO101LeaderConfig(
    port="/dev/tty.usbmodem58760431551",
    id="my_blue_leader_arm",
)

robot = SO101Follower(robot_config)
teleop_device = SO101Leader(teleop_config)
robot.connect()
teleop_device.connect()

while True:
    action = teleop_device.get_action()
    robot.send_action(action)
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

The teleoperate command will automatically:

1. Identify any missing calibrations and initiate the calibration procedure.
2. Connect the robot and teleop device and start teleoperation.

## Cameras

To add cameras to your setup, follow this [Guide](./cameras#setup-cameras).

## Teleoperate with cameras

With `rerun`, you can teleoperate again while simultaneously visualizing the camera feeds and joint positions. In this example, we’re using the Koch arm.

<hfoptions id="teleoperate_koch_camera">
<hfoption id="Command">
```bash
lerobot-teleoperate \
    --robot.type=koch_follower \
    --robot.port=/dev/tty.usbmodem58760431541 \
    --robot.id=my_awesome_follower_arm \
    --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
    --teleop.type=koch_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm \
    --display_data=true
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower

camera_config = {
    "front": OpenCVCameraConfig(index_or_path=0, width=1920, height=1080, fps=30)
}

robot_config = KochFollowerConfig(
    port="/dev/tty.usbmodem585A0076841",
    id="my_red_robot_arm",
    cameras=camera_config
)

teleop_config = KochLeaderConfig(
    port="/dev/tty.usbmodem58760431551",
    id="my_blue_leader_arm",
)

robot = KochFollower(robot_config)
teleop_device = KochLeader(teleop_config)
robot.connect()
teleop_device.connect()

while True:
    observation = robot.get_observation()
    action = teleop_device.get_action()
    robot.send_action(action)
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

## Record a dataset

Once you're familiar with teleoperation, you can record your first dataset.

We use the Hugging Face Hub to upload your dataset. If you haven't used the Hub before, make sure you can log in via the CLI using a write-access token; you can generate this token in your [Hugging Face settings](https://huggingface.co/settings/tokens).

Add your token to the CLI by running this command:

```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Then store your Hugging Face repository name in a variable:

```bash
HF_USER=$(hf auth whoami | head -n 1)
echo $HF_USER
```

Now you can record a dataset. To record 5 episodes and upload your dataset to the hub, adapt the code below for your robot and execute the command or API example.

<hfoptions id="record">
<hfoption id="Command">
```bash
lerobot-record \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem585A0076841 \
    --robot.id=my_awesome_follower_arm \
    --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
    --teleop.type=so101_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm \
    --display_data=true \
    --dataset.repo_id=${HF_USER}/record-test \
    --dataset.num_episodes=5 \
    --dataset.single_task="Grab the black cube"
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.datasets.utils import hw_to_dataset_features
from lerobot.robots.so_follower import SO100Follower, SO100FollowerConfig
from lerobot.teleoperators.so_leader.config_so100_leader import SO100LeaderConfig
from lerobot.teleoperators.so_leader.so100_leader import SO100Leader
from lerobot.utils.control_utils import init_keyboard_listener
from lerobot.utils.utils import log_say
from lerobot.utils.visualization_utils import init_rerun
from lerobot.scripts.lerobot_record import record_loop
from lerobot.processor import make_default_processors

NUM_EPISODES = 5
FPS = 30
EPISODE_TIME_SEC = 60
RESET_TIME_SEC = 10
TASK_DESCRIPTION = "My task description"

# Create robot configuration
robot_config = SO100FollowerConfig(
    id="my_awesome_follower_arm",
    cameras={
        "front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)  # Optional: fourcc="MJPG" for troubleshooting OpenCV async errors.
    },
    port="/dev/tty.usbmodem58760434471",
)

teleop_config = SO100LeaderConfig(
    id="my_awesome_leader_arm",
    port="/dev/tty.usbmodem585A0077581",
)

# Initialize the robot and teleoperator
robot = SO100Follower(robot_config)
teleop = SO100Leader(teleop_config)

# Configure the dataset features
action_features = hw_to_dataset_features(robot.action_features, "action")
obs_features = hw_to_dataset_features(robot.observation_features, "observation")
dataset_features = {**action_features, **obs_features}

# Create the dataset
dataset = LeRobotDataset.create(
    repo_id="<hf_username>/<dataset_repo_id>",
    fps=FPS,
    features=dataset_features,
    robot_type=robot.name,
    use_videos=True,
    image_writer_threads=4,
)

# Initialize the keyboard listener and rerun visualization
_, events = init_keyboard_listener()
init_rerun(session_name="recording")

# Connect the robot and teleoperator
robot.connect()
teleop.connect()

# Create the required processors
teleop_action_processor, robot_action_processor, robot_observation_processor = make_default_processors()

episode_idx = 0
while episode_idx < NUM_EPISODES and not events["stop_recording"]:
    log_say(f"Recording episode {episode_idx + 1} of {NUM_EPISODES}")

    record_loop(
        robot=robot,
        events=events,
        fps=FPS,
        teleop_action_processor=teleop_action_processor,
        robot_action_processor=robot_action_processor,
        robot_observation_processor=robot_observation_processor,
        teleop=teleop,
        dataset=dataset,
        control_time_s=EPISODE_TIME_SEC,
        single_task=TASK_DESCRIPTION,
        display_data=True,
    )

    # Reset the environment if not stopping or re-recording
    if not events["stop_recording"] and (episode_idx < NUM_EPISODES - 1 or events["rerecord_episode"]):
        log_say("Reset the environment")
        record_loop(
            robot=robot,
            events=events,
            fps=FPS,
            teleop_action_processor=teleop_action_processor,
            robot_action_processor=robot_action_processor,
            robot_observation_processor=robot_observation_processor,
            teleop=teleop,
            control_time_s=RESET_TIME_SEC,
            single_task=TASK_DESCRIPTION,
            display_data=True,
        )

    if events["rerecord_episode"]:
        log_say("Re-recording episode")
        events["rerecord_episode"] = False
        events["exit_early"] = False
        dataset.clear_episode_buffer()
        continue

    dataset.save_episode()
    episode_idx += 1

# Clean up
log_say("Stop recording")
robot.disconnect()
teleop.disconnect()
dataset.push_to_hub()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

#### Dataset upload

Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. `https://huggingface.co/datasets/${HF_USER}/so101_test`), whose URL you can obtain by running:

```bash
echo https://huggingface.co/datasets/${HF_USER}/so101_test
```

Your dataset will be automatically tagged with `LeRobot` so the community can find it easily, and you can also add custom tags (for example, `tutorial`).

You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).

You can also push your local dataset to the Hub manually by running:

```bash
huggingface-cli upload ${HF_USER}/record-test ~/.cache/huggingface/lerobot/{repo-id} --repo-type dataset
```

#### Record function

The `record` function provides a suite of tools for capturing and managing data during robot operation:

##### 1. Data Storage

- Data is stored on disk during recording, using the `LeRobotDataset` format.
- By default, the dataset is pushed to your Hugging Face page after recording.
- To disable uploading, use `--dataset.push_to_hub=False`.

##### 2. Checkpointing and Resuming

- Checkpoints are automatically created during recording.
- If an issue occurs, you can resume by re-running the same command with `--resume=true`. When resuming a recording, `--dataset.num_episodes` must be set to the **number of additional episodes to record**, not to the targeted total number of episodes in the dataset!
- To start recording from scratch, **manually delete** the dataset directory.

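Putting those two points together, here is a sketch of what resuming the earlier `record-test` session for 2 additional episodes could look like (same flags as the recording command above; ports and ids are placeholders for your own setup):

```shell
lerobot-record \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem585A0076841 \
    --robot.id=my_awesome_follower_arm \
    --teleop.type=so101_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm \
    --dataset.repo_id=${HF_USER}/record-test \
    --dataset.single_task="Grab the black cube" \
    --dataset.num_episodes=2 \
    --resume=true   # num_episodes=2 records 2 MORE episodes, not 2 total
```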
##### 3. Recording Parameters

Tune the recording flow with these command-line arguments:

- `--dataset.episode_time_s=60`
  Duration of each data recording episode (default: **60 seconds**).
- `--dataset.reset_time_s=60`
  Duration for resetting the environment after each episode (default: **60 seconds**).
- `--dataset.num_episodes=50`
  Total number of episodes to record (default: **50**).

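These parameters directly determine how long a session takes; as a quick back-of-the-envelope sketch (hypothetical helper, not part of lerobot, assuming one reset phase between consecutive episodes and ignoring early stops and re-records):

```python
# Hypothetical helper: estimate the wall-clock length of a recording
# session from the parameters above.
def session_minutes(episode_time_s: float = 60, reset_time_s: float = 60, num_episodes: int = 50) -> float:
    # Each episode is followed by a reset phase, except the last one.
    total_s = num_episodes * episode_time_s + (num_episodes - 1) * reset_time_s
    return total_s / 60

print(session_minutes())  # defaults above -> 99.0 minutes
```

In other words, the defaults imply a session of well over an hour, which is worth knowing before you start.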
##### 4. Keyboard Controls During Recording

Control the data recording flow using keyboard shortcuts:

- Press **Right Arrow (`→`)**: End the current episode (or reset phase) early and move on to the next.
- Press **Left Arrow (`←`)**: Cancel the current episode and re-record it.
- Press **Escape (`ESC`)**: Immediately stop the session, encode videos, and upload the dataset.

#### Tips for gathering data

Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible in the cameras. A good rule of thumb: you should be able to do the task yourself by looking only at the camera images.

In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altered camera positions.

Avoid adding too much variation too quickly, as it may hinder your results.

If you want to dive deeper into this important topic, check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.

#### Troubleshooting

- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).

## Visualize a dataset

If you uploaded your dataset to the hub (the default, unless you set `--dataset.push_to_hub=False`), you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, given by:

```bash
echo ${HF_USER}/so101_test
```

## Replay an episode

A useful feature is the `replay` function, which allows you to replay any episode that you've recorded, or episodes from any dataset out there. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.

You can replay the first episode on your robot with either the command below or with the API example:

<hfoptions id="replay">
<hfoption id="Command">
```bash
lerobot-replay \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem58760431541 \
    --robot.id=my_awesome_follower_arm \
    --dataset.repo_id=${HF_USER}/record-test \
    --dataset.episode=0 # choose the episode you want to replay
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
import time

from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.robots.so_follower.config_so100_follower import SO100FollowerConfig
from lerobot.robots.so_follower.so100_follower import SO100Follower
from lerobot.utils.robot_utils import precise_sleep
from lerobot.utils.utils import log_say

episode_idx = 0

robot_config = SO100FollowerConfig(port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm")

robot = SO100Follower(robot_config)
robot.connect()

dataset = LeRobotDataset("<hf_username>/<dataset_repo_id>", episodes=[episode_idx])
actions = dataset.hf_dataset.select_columns("action")

log_say(f"Replaying episode {episode_idx}")
for idx in range(dataset.num_frames):
    t0 = time.perf_counter()

    action = {
        name: float(actions[idx]["action"][i]) for i, name in enumerate(dataset.features["action"]["names"])
    }
    robot.send_action(action)

    precise_sleep(max(1.0 / dataset.fps - (time.perf_counter() - t0), 0.0))

robot.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on an Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).

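The pacing pattern used in the replay loop (measure how long the work took, then sleep only for whatever remains of the frame's time budget) is worth isolating on its own. A minimal pure-Python sketch, with `time.sleep` standing in for lerobot's `precise_sleep`:

```python
import time

def run_at_fixed_rate(step, fps: float, num_steps: int) -> None:
    """Call step(i) at roughly `fps` iterations per second.

    Pure-Python sketch of the replay loop's pacing; time.sleep stands in
    for lerobot's precise_sleep.
    """
    period = 1.0 / fps
    for i in range(num_steps):
        t0 = time.perf_counter()
        step(i)
        # Sleep only for whatever remains of this frame's time budget,
        # so slow steps don't accumulate drift into the next frame.
        time.sleep(max(period - (time.perf_counter() - t0), 0.0))

# Example: 10 trivial steps at 100 Hz take about 0.1 s in total.
run_at_fixed_rate(lambda i: None, fps=100, num_steps=10)
```

Subtracting the step's own duration from the sleep is what keeps the loop close to the dataset's recorded frame rate even when `send_action` takes a variable amount of time.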
## Train a policy

To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/lerobot_train.py) script. A few arguments are required. Here is an example command:

```bash
lerobot-train \
    --dataset.repo_id=${HF_USER}/so101_test \
    --policy.type=act \
    --output_dir=outputs/train/act_so101_test \
    --job_name=act_so101_test \
    --policy.device=cuda \
    --wandb.enable=true \
    --policy.repo_id=${HF_USER}/my_policy
```

Let's explain the command:

1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions, and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
3. We provided `policy.device=cuda` since we are training on an NVIDIA GPU, but you could use `policy.device=mps` to train on Apple silicon.
4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional, but if you use it, make sure you are logged in by running `wandb login`.

Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.

To resume training from a checkpoint, here is an example command to resume from the `last` checkpoint of the `act_so101_test` run:

```bash
lerobot-train \
    --config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
    --resume=true
```

If you do not want to push your model to the hub after training, use `--policy.push_to_hub=false`.

Additionally, you can provide extra `tags`, specify a `license`, or make the model repo `private` by adding: `--policy.private=true --policy.tags=\[ppo,rl\] --policy.license=mit`

#### Train using Google Colab

If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).

#### Upload policy checkpoints

Once training is done, upload the latest checkpoint with:

```bash
huggingface-cli upload ${HF_USER}/act_so101_test \
    outputs/train/act_so101_test/checkpoints/last/pretrained_model
```

You can also upload intermediate checkpoints with:

```bash
CKPT=010000
huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
    outputs/train/act_so101_test/checkpoints/${CKPT}/pretrained_model
```

## Run inference and evaluate your policy

You can use the `record` script from [`lerobot-record`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/lerobot_record.py) with a policy checkpoint as input to run inference and evaluate your policy. For instance, run this command or API example to run inference and record 10 evaluation episodes:

<hfoptions id="eval">
<hfoption id="Command">
```bash
lerobot-record \
    --robot.type=so100_follower \
    --robot.port=/dev/ttyACM1 \
    --robot.cameras="{ up: {type: opencv, index_or_path: /dev/video10, width: 640, height: 480, fps: 30}, side: {type: intelrealsense, serial_number_or_name: 233522074606, width: 640, height: 480, fps: 30}}" \
    --robot.id=my_awesome_follower_arm \
    --display_data=false \
    --dataset.repo_id=${HF_USER}/eval_so100 \
    --dataset.single_task="Put lego brick into the transparent box" \
    # <- Teleop optional if you want to teleoperate in between episodes \
    # --teleop.type=so100_leader \
    # --teleop.port=/dev/ttyACM0 \
    # --teleop.id=my_awesome_leader_arm \
    --policy.path=${HF_USER}/my_policy
```
</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.datasets.utils import hw_to_dataset_features
from lerobot.policies.act.modeling_act import ACTPolicy
from lerobot.policies.factory import make_pre_post_processors
from lerobot.robots.so_follower.config_so100_follower import SO100FollowerConfig
from lerobot.robots.so_follower.so100_follower import SO100Follower
from lerobot.scripts.lerobot_record import record_loop
from lerobot.utils.control_utils import init_keyboard_listener
from lerobot.utils.utils import log_say
from lerobot.utils.visualization_utils import init_rerun


NUM_EPISODES = 5
FPS = 30
EPISODE_TIME_SEC = 60
TASK_DESCRIPTION = "My task description"
HF_MODEL_ID = "<hf_username>/<model_repo_id>"
HF_DATASET_ID = "<hf_username>/<eval_dataset_repo_id>"

# Create the robot configuration
camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
robot_config = SO100FollowerConfig(
    port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm", cameras=camera_config
)

# Initialize the robot
robot = SO100Follower(robot_config)

# Initialize the policy
policy = ACTPolicy.from_pretrained(HF_MODEL_ID)

# Configure the dataset features
action_features = hw_to_dataset_features(robot.action_features, "action")
obs_features = hw_to_dataset_features(robot.observation_features, "observation")
dataset_features = {**action_features, **obs_features}

# Create the dataset
dataset = LeRobotDataset.create(
    repo_id=HF_DATASET_ID,
    fps=FPS,
    features=dataset_features,
    robot_type=robot.name,
    use_videos=True,
    image_writer_threads=4,
)

# Initialize the keyboard listener and rerun visualization
_, events = init_keyboard_listener()
init_rerun(session_name="recording")

# Connect the robot
robot.connect()

preprocessor, postprocessor = make_pre_post_processors(
    policy_cfg=policy,
    pretrained_path=HF_MODEL_ID,
    dataset_stats=dataset.meta.stats,
)

for episode_idx in range(NUM_EPISODES):
    log_say(f"Running inference, recording eval episode {episode_idx + 1} of {NUM_EPISODES}")

    # Run the policy inference loop
    record_loop(
        robot=robot,
        events=events,
        fps=FPS,
        policy=policy,
        preprocessor=preprocessor,
        postprocessor=postprocessor,
        dataset=dataset,
        control_time_s=EPISODE_TIME_SEC,
        single_task=TASK_DESCRIPTION,
        display_data=True,
    )

    dataset.save_episode()

# Clean up
robot.disconnect()
dataset.push_to_hub()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

As you can see, it's almost the same command as the one previously used to record your training dataset. Two things changed:

1. There is an additional `--policy.path` argument, which indicates the path to your policy checkpoint (e.g. `outputs/train/eval_act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
2. The dataset name begins with `eval_` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).

lerobot/docs/source/implement_your_own_processor.mdx (new file)

# Implement your own Robot Processor

In this tutorial, you'll learn how to implement your own Robot Processor.
It begins by exploring the need for a custom processor, then uses the `NormalizerProcessorStep` as the running example to explain how to implement, configure, and serialize a processor. Finally, it covers best practices for building robust processors.

## Why would you need a custom processor?

In most cases, when reading raw data from sensors or when models output actions, you need to process this data to make it compatible with your target system. For example, a common need is normalizing data ranges to make them suitable for neural networks.

LeRobot's `NormalizerProcessorStep` handles this crucial task:

```python
# Input: raw joint positions in [0, 180] degrees
raw_action = torch.tensor([90.0, 45.0, 135.0])

# After processing: normalized to [-1, 1] range for model training
normalizer = NormalizerProcessorStep(features=features, norm_map=norm_map, stats=dataset_stats)
normalized_result = normalizer(transition)
# ...
```

Other common processing needs include:

- **Device placement**: Moving tensors between CPU/GPU and converting data types
- **Format conversion**: Transforming between different data structures
- **Batching**: Adding/removing batch dimensions for model compatibility
- **Safety constraints**: Applying limits to robot commands

```python
# Example pipeline combining multiple processors
pipeline = PolicyProcessorPipeline([
    RenameObservationsProcessorStep(rename_map={}),
    AddBatchDimensionProcessorStep(),
    NormalizerProcessorStep(features=features, stats=stats),
    DeviceProcessorStep(device="cuda"),
    # ...
])
```

LeRobot provides a pipeline mechanism to implement sequences of processing steps for both input data and output actions, making it easy to compose these transformations in the right order.

## How to implement your own processor?

We'll use the `NormalizerProcessorStep` as our main example because it demonstrates essential processor patterns, including state management, configuration serialization, and tensor handling, that you'll commonly need.

Prepare the sequence of processing steps necessary for your problem. A processor step is a class that implements the following methods:

- `__call__`: implements the processing step for the input transition.
- `get_config`: gets the configuration of the processor step.
- `state_dict`: gets the state of the processor step.
- `load_state_dict`: loads the state of the processor step.
- `reset`: resets the state of the processor step.
- `transform_features`: declares how the step modifies the feature space.

### Implement the `__call__` method

The `__call__` method is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`. Here's how the `NormalizerProcessorStep` works:

```python
@dataclass
@ProcessorStepRegistry.register("normalizer_processor")
class NormalizerProcessorStep(ProcessorStep):
    """Normalize observations/actions using dataset statistics."""

    features: dict[str, PolicyFeature]
    norm_map: dict[FeatureType, NormalizationMode]
    stats: dict[str, dict[str, Any]] | None = None
    eps: float = 1e-8
    _tensor_stats: dict = field(default_factory=dict, init=False, repr=False)

    def __post_init__(self):
        """Convert stats to tensors for efficient computation."""
        self.stats = self.stats or {}
        self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=torch.float32)

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        new_transition = transition.copy()
        # Normalize observations
        # ...
        # Normalize action
        # ...
        return new_transition
```

See the full implementation in `src/lerobot/processor/normalize_processor.py` for complete details.

**Key principles:**

- **Always use `transition.copy()`** to avoid side effects
- **Handle both observations and actions** consistently
- **Separate config from state**: `get_config()` returns JSON-serializable params, `state_dict()` returns tensors
- **Convert stats to tensors** in `__post_init__()` for efficient computation
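To make these principles concrete, here is a minimal, framework-free sketch of a step that applies a safety constraint. The `SafetyClipperStep` class is hypothetical and stands in for a real step (which would subclass `ProcessorStep` and operate on an `EnvTransition`); transitions are modeled as plain dicts purely for illustration:

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class SafetyClipperStep:
    """Clip action values into a safe range (illustrative sketch only)."""

    min_value: float = -1.0
    max_value: float = 1.0

    def __call__(self, transition: dict[str, Any]) -> dict[str, Any]:
        # Always copy: never mutate the caller's transition in place.
        new_transition = transition.copy()
        action = new_transition.get("action")
        if action is not None:
            new_transition["action"] = [
                min(max(a, self.min_value), self.max_value) for a in action
            ]
        return new_transition

    def get_config(self) -> dict[str, Any]:
        # JSON-serializable hyperparameters only: no tensors, no buffers.
        return {"min_value": self.min_value, "max_value": self.max_value}


step = SafetyClipperStep(min_value=-1.0, max_value=1.0)
original = {"action": [0.5, -2.0, 3.0]}
clipped = step(original)
print(clipped["action"])   # [0.5, -1.0, 1.0]
print(original["action"])  # unchanged: [0.5, -2.0, 3.0]
```

Note how the caller's dict is left untouched and the config stays JSON-serializable; the same shape carries over when you subclass the real base classes.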

### Configuration and State Management

Processors support serialization through three methods that separate configuration from tensor state. The `NormalizerProcessorStep` demonstrates this well: it carries dataset statistics (tensors) in its state and its hyperparameters in its config:

```python
# Continuing the NormalizerProcessorStep example...

    def get_config(self) -> dict[str, Any]:
        """JSON-serializable configuration (no tensors)."""
        return {
            "eps": self.eps,
            "features": {k: {"type": v.type.value, "shape": v.shape} for k, v in self.features.items()},
            "norm_map": {ft.value: nm.value for ft, nm in self.norm_map.items()},
            # ...
        }

    def state_dict(self) -> dict[str, torch.Tensor]:
        """Tensor state only (e.g., dataset statistics)."""
        flat: dict[str, torch.Tensor] = {}
        for key, sub in self._tensor_stats.items():
            for stat_name, tensor in sub.items():
                flat[f"{key}.{stat_name}"] = tensor.cpu()  # Always save to CPU
        return flat

    def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
        """Restore tensor state at runtime."""
        self._tensor_stats.clear()
        for flat_key, tensor in state.items():
            key, stat_name = flat_key.rsplit(".", 1)
            # Load to processor's configured device
            self._tensor_stats.setdefault(key, {})[stat_name] = tensor.to(
                dtype=torch.float32, device=self.device
            )
        # ...
```

**Usage:**

```python
# Save (e.g., inside a policy)
config = normalizer.get_config()
tensors = normalizer.state_dict()

# Restore (e.g., loading a pretrained policy)
new_normalizer = NormalizerProcessorStep(**config)
new_normalizer.load_state_dict(tensors)
# Now new_normalizer has the same stats and configuration
```

### Transform features

The `transform_features` method defines how your processor transforms feature names and shapes. This is crucial for policy configuration and debugging.

For `NormalizerProcessorStep`, features are typically preserved unchanged, since normalization doesn't alter keys or shapes:

```python
def transform_features(self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
    """Normalization preserves all feature definitions."""
    return features  # No changes to feature structure
    # ...
```

When your processor renames or reshapes data, implement this method to reflect the mapping for downstream components. For example, a simple rename processor:

```python
def transform_features(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
    # Simple renaming
    if "pixels" in features:
        features["observation.image"] = features.pop("pixels")

    # Pattern-based renaming
    for key in list(features.keys()):
        if key.startswith("env_state."):
            suffix = key[len("env_state."):]
            features[f"observation.{suffix}"] = features.pop(key)
    # ...

    return features
```

**Key principles:**

- Use `features.pop(old_key)` to remove and get the old feature
- Use `features[new_key] = old_feature` to add the renamed feature
- Always return the modified features dictionary
- Document transformations clearly in the docstring

### Using overrides

You can override step parameters at load time using `overrides`. This is handy for non-serializable objects or site-specific settings. It works both in policy factories and with `DataProcessorPipeline.from_pretrained(...)`.

**Foundation model adaptation**: This is particularly useful when working with pretrained foundation policies, where you rarely have access to the original training statistics. You can inject your own dataset statistics to adapt the normalizer to your specific robot or environment data.

Example: during policy evaluation on the robot, override the device and rename map.
Use this to run a policy trained on CUDA on a CPU-only robot, or to remap camera keys when the robot uses different names than the dataset.

Direct usage with `from_pretrained`:

```python
from lerobot.processor import RobotProcessorPipeline

# Load a foundation policy trained on diverse robot data
# but adapt normalization to your specific robot/environment
new_stats = LeRobotDataset(repo_id="username/my-dataset").meta.stats
processor = RobotProcessorPipeline.from_pretrained(
    "huggingface/foundational-robot-policy",  # Pretrained foundation model
    overrides={
        "normalizer_processor": {"stats": new_stats},  # Inject your robot's statistics
        "device_processor": {"device": "cuda:0"},  # Registry name for registered steps
        "rename_processor": {"rename_map": robot_key_map},  # Map your robot's observation keys
        # ...
    },
)
```

## Best Practices

Based on the patterns shared across LeRobot's processor implementations, here are the key practices:

### 1. **Safe Data Handling**

Always create copies of input data to avoid unintended side effects. Use `transition.copy()` and `observation.copy()` rather than modifying data in place. This prevents your processor from accidentally affecting other components in the pipeline.

Check for required data before processing and handle missing data gracefully. If your processor expects certain keys (like `"pixels"` for image processing), validate their presence first. For optional data, use safe access patterns like `transition.get()` and handle `None` values appropriately.

When data validation fails, provide clear, actionable error messages that help users understand what went wrong and how to fix it.
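As a sketch of the copy-then-modify and safe-access patterns (framework-free, with transitions modeled as plain dicts; the function name and keys are illustrative, not part of the LeRobot API):

```python
from typing import Any


def process_observation(transition: dict[str, Any]) -> dict[str, Any]:
    """Copy-then-modify with safe access (illustrative sketch only)."""
    new_transition = transition.copy()

    observation = new_transition.get("observation")  # optional data: use .get()
    if observation is None:
        return new_transition  # nothing to do, pass through unchanged

    if "pixels" not in observation:
        # Actionable error: say what was expected and what was found.
        raise KeyError(
            "Expected key 'pixels' in observation; got keys: "
            f"{sorted(observation.keys())}"
        )

    new_observation = observation.copy()  # copy nested dicts before editing
    new_observation["pixels_scaled"] = [p / 255.0 for p in new_observation["pixels"]]
    new_transition["observation"] = new_observation
    return new_transition


out = process_observation({"observation": {"pixels": [0, 255]}})
print(out["observation"]["pixels_scaled"])  # [0.0, 1.0]
```

Because the nested observation dict is copied before being edited, the caller's data is never mutated in place.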

### 2. **Choose Appropriate Base Classes**

LeRobot provides specialized base classes that reduce boilerplate code and ensure consistency. Use `ObservationProcessorStep` when you only need to modify observations, `ActionProcessorStep` for action-only processing, and `RobotActionProcessorStep` specifically for dictionary-based robot actions.

Only inherit directly from `ProcessorStep` when you need full control over the entire transition or when processing multiple transition components simultaneously. The specialized base classes handle transition management for you and provide type safety.

### 3. **Registration and Naming**

Register your processors with descriptive, namespaced names using `@ProcessorStepRegistry.register()`. Use organization prefixes like `"robotics_lab/safety_clipper"` or `"acme_corp/vision_enhancer"` to avoid naming conflicts. Avoid generic names like `"processor"` or `"step"` that could clash with other implementations.

Good registration makes your processors discoverable and enables clean serialization/deserialization when saving and loading pipelines.

### 4. **State Management Patterns**

Distinguish between configuration parameters (JSON-serializable values) and internal state (tensors, buffers). Use dataclass fields with `init=False, repr=False` for internal state that shouldn't appear in the constructor or string representation.

Implement the `reset()` method to clear internal state between episodes. This is crucial for stateful processors that accumulate data over time, like moving averages or temporal filters.

Remember that `get_config()` should only return JSON-serializable configuration, while `state_dict()` handles tensor state separately.

### 5. **Input Validation and Error Handling**

Validate input types and shapes before processing. Check tensor properties like `dtype` and dimensions to ensure compatibility with your algorithms. For robot actions, verify that required pose components or joint values are present and within expected ranges.

Use early returns for edge cases where no processing is needed. Provide clear, descriptive error messages that include the expected vs. actual data types or shapes. This makes debugging much easier for users.
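A minimal sketch of the early-return and expected-vs-actual error pattern (the helper and its keys are hypothetical, shown on plain dicts rather than the real transition type):

```python
from typing import Any


def validate_action(transition: dict[str, Any], expected_dim: int = 5) -> dict[str, Any]:
    """Shape validation with early return (illustrative sketch only)."""
    action = transition.get("action")
    if action is None:
        # Early return: nothing to validate, pass through unchanged.
        return transition
    if len(action) != expected_dim:
        # Include expected vs. actual in the message to ease debugging.
        raise ValueError(
            f"Expected action of dimension {expected_dim}, got {len(action)}. "
            "Check that your robot's action features match the processor config."
        )
    return transition


validate_action({"observation": {}})    # no action key: passes through
validate_action({"action": [0.0] * 5})  # valid shape: passes through
```

Calling it with a 3-dimensional action would raise a `ValueError` naming both the expected and the actual dimension.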

### 6. **Device and Dtype Awareness**

Design your processors to automatically adapt to the device and dtype of input tensors. Internal tensors (like normalization statistics) should match the input tensor's device and dtype to ensure compatibility with multi-GPU training, mixed precision, and distributed setups.

Implement a `to()` method that moves your processor's internal state to the specified device. Check device/dtype compatibility at runtime and automatically migrate internal state when needed. This pattern enables seamless operation across different hardware configurations without manual intervention.

## Conclusion

You now have all the tools to implement custom processors in LeRobot! The key steps are:

1. **Define your processor** as a dataclass with the required methods (`__call__`, `get_config`, `state_dict`, `load_state_dict`, `reset`, `transform_features`)
2. **Register it** using `@ProcessorStepRegistry.register("name")` for discoverability
3. **Integrate it** into a `DataProcessorPipeline` with other processing steps
4. **Use base classes** like `ObservationProcessorStep` when possible to reduce boilerplate
5. **Implement device/dtype awareness** to support multi-GPU and mixed-precision setups

The processor system is designed to be modular and composable, allowing you to build complex data processing pipelines from simple, focused components. Whether you're preprocessing sensor data for training or post-processing model outputs for robot execution, custom processors give you the flexibility to handle any data transformation your robotics application requires.

Key principles for robust processors:

- **Device/dtype adaptation**: Internal tensors should match input tensors
- **Clear error messages**: Help users understand what went wrong
- **Base class usage**: Leverage specialized base classes to reduce boilerplate
- **Feature contracts**: Declare data structure changes with `transform_features()`

Start simple, test thoroughly, and ensure your processors work seamlessly across different hardware configurations!
lerobot/docs/source/index.mdx (ADDED)
<div class="flex justify-center">
  <a target="_blank" href="https://huggingface.co/lerobot">
    <img
      alt="LeRobot logo"
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-logo-thumbnail.png"
      style="width: 100%"
    ></img>
  </a>
</div>

# LeRobot

**State-of-the-art machine learning for real-world robotics**

🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.

🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real world, with a focus on imitation learning and reinforcement learning.

🤗 LeRobot already provides a set of pretrained models, datasets with human-collected demonstrations, and simulated environments so that everyone can get started.

🤗 LeRobot hosts pretrained models and datasets on the [LeRobot HuggingFace page](https://huggingface.co/lerobot).

Join the LeRobot community on [Discord](https://discord.gg/s3KuuzsPFb)
lerobot/docs/source/installation.mdx (ADDED)
# Installation

## Install [`miniforge`](https://conda-forge.org/download/)

```bash
wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
bash Miniforge3-$(uname)-$(uname -m).sh
```

## Environment Setup

Create a virtual environment with Python 3.10 using conda:

```bash
conda create -y -n lerobot python=3.10
```

Then activate your conda environment (you have to do this each time you open a shell to use lerobot):

```bash
conda activate lerobot
```

When using `conda`, install `ffmpeg` in your environment:

```bash
conda install ffmpeg -c conda-forge
```

> [!TIP]
> This usually installs `ffmpeg 7.X` for your platform, compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
>
> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
>
>   ```bash
>   conda install ffmpeg=7.1.1 -c conda-forge
>   ```
>
> - _[On Linux only]_ If you want to bring your own ffmpeg: install the [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), then check with `which ffmpeg` that you are using the ffmpeg binary corresponding to your install.

## Install LeRobot 🤗

### From Source

First, clone the repository and navigate into the directory:

```bash
git clone https://github.com/huggingface/lerobot.git
cd lerobot
```

Then, install the library in editable mode. This is useful if you plan to contribute to the code.

```bash
pip install -e .
```

### Installation from PyPI

**Core Library:**
Install the base package with:

```bash
pip install lerobot
```

_This installs only the default dependencies._

**Extra Features:**
To install additional functionality, use one of the following:

```bash
pip install 'lerobot[all]'          # All available features
pip install 'lerobot[aloha,pusht]'  # Specific features (Aloha & PushT)
pip install 'lerobot[feetech]'      # Feetech motor support
```

_Replace `[...]` with your desired features._

**Available Tags:**
For a full list of optional dependencies, see:
https://pypi.org/project/lerobot/

> [!NOTE]
> For lerobot 0.4.0, if you want to install pi, you will have to do: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`

### Troubleshooting

If you encounter build errors, you may need to install additional dependencies: `cmake`, `build-essential`, and the ffmpeg libraries.
To install these on Linux, run:

```bash
sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
```

For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)

## Optional dependencies

LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`.

### Simulations

Install environment packages: `aloha` ([gym-aloha](https://github.com/huggingface/gym-aloha)) or `pusht` ([gym-pusht](https://github.com/huggingface/gym-pusht)).
Example:

```bash
pip install -e ".[aloha]"  # or "[pusht]" for example
```

### Motor Control

For Koch v1.1, install the Dynamixel SDK; for SO100/SO101/Moss, install the Feetech SDK.

```bash
pip install -e ".[feetech]"  # or "[dynamixel]" for example
```

### Experiment Tracking

To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with

```bash
wandb login
```

You can now assemble your robot if it isn't ready yet: look for your robot type in the navigation on the left, then follow its guide to use LeRobot with your robot.
lerobot/docs/source/integrate_hardware.mdx (ADDED)
| 1 |
+
# Bring Your Own Hardware
|
| 2 |
+
|
| 3 |
+
This tutorial will explain how to integrate your own robot design into the LeRobot ecosystem and have it access all of our tools (data collection, control pipelines, policy training and inference).
|
| 4 |
+
|
| 5 |
+
To that end, we provide the [`Robot`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/robot.py) base class in the LeRobot which specifies a standard interface for physical robot integration. Let's see how to implement it.
|
| 6 |
+
|
| 7 |
+
## Prerequisites
|
| 8 |
+
|
| 9 |
+
- Your own robot which exposes a communication interface (e.g. serial, CAN, TCP)
|
| 10 |
+
- A way to read sensor data and send motor commands programmatically, e.g. manufacturer's SDK or API, or your own protocol implementation.
|
| 11 |
+
- LeRobot installed in your environment. Follow our [Installation Guide](./installation).
|
| 12 |
+
|
| 13 |
+
## Choose your motors
|
| 14 |
+
|
| 15 |
+
If you're using Feetech or Dynamixel motors, LeRobot provides built-in bus interfaces:
|
| 16 |
+
|
| 17 |
+
- [`FeetechMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/feetech.py) – for controlling Feetech servos
|
| 18 |
+
- [`DynamixelMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/dynamixel.py) – for controlling Dynamixel servos
|
| 19 |
+
|
| 20 |
+
Please refer to the [`MotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/motors_bus.py) abstract class to learn about its API.
|
| 21 |
+
For a good example of how it can be used, you can have a look at our own [SO101 follower implementation](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/so_follower/so101_follower/so101_follower.py)
|
| 22 |
+
|
| 23 |
+
Use these if compatible. Otherwise, you'll need to find or write a Python interface (not covered in this tutorial):
|
| 24 |
+
|
| 25 |
+
- Find an existing SDK in Python (or use bindings to C/C++)
|
| 26 |
+
- Or implement a basic communication wrapper (e.g., via pyserial, socket, or CANopen)
|
| 27 |
+
|
| 28 |
+
You're not alone—many community contributions use custom boards or firmware!
|
| 29 |
+
|
| 30 |
+
For Feetech and Dynamixel, we currently support these servos: - Feetech: - STS & SMS series (protocol 0): `sts3215`, `sts3250`, `sm8512bl` - SCS series (protocol 1): `scs0009` - Dynamixel (protocol 2.0 only): `xl330-m077`, `xl330-m288`, `xl430-w250`, `xm430-w350`, `xm540-w270`, `xc430-w150`
|
| 31 |
+
|
| 32 |
+
If you are using Feetech or Dynamixel servos that are not in this list, you can add those in the [Feetech table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/tables.py) or [Dynamixel table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/tables.py). Depending on the model, this will require you to add model-specific information. In most cases though, there shouldn't be a lot of additions to do.
|
| 33 |
+
|
| 34 |
+
In the next sections, we'll use a `FeetechMotorsBus` as the motors interface for the examples. Replace it and adapt to your motors if necessary.
|
| 35 |
+
|
| 36 |
+
## Step 1: Subclass the `Robot` Interface
|
| 37 |
+
|
| 38 |
+
You’ll first need to specify the config class and a string identifier (`name`) for your robot. If your robot has special needs that you'd like to be able to change easily, it should go here (e.g. port/address, baudrate).
|
| 39 |
+
|
| 40 |
+
Here, we'll add the port name and one camera by default for our robot:
|
| 41 |
+
|
| 42 |
+
<!-- prettier-ignore-start -->
|
| 43 |
+
```python
|
| 44 |
+
from dataclasses import dataclass, field
|
| 45 |
+
|
| 46 |
+
from lerobot.cameras import CameraConfig
|
| 47 |
+
from lerobot.cameras.opencv import OpenCVCameraConfig
|
| 48 |
+
from lerobot.robots import RobotConfig
|
| 49 |
+
|
| 50 |
+
|
| 51 |
+
@RobotConfig.register_subclass("my_cool_robot")
|
| 52 |
+
@dataclass
|
| 53 |
+
class MyCoolRobotConfig(RobotConfig):
|
| 54 |
+
port: str
|
| 55 |
+
cameras: dict[str, CameraConfig] = field(
|
| 56 |
+
default_factory={
|
| 57 |
+
"cam_1": OpenCVCameraConfig(
|
| 58 |
+
index_or_path=2,
|
| 59 |
+
fps=30,
|
| 60 |
+
width=480,
|
| 61 |
+
height=640,
|
| 62 |
+
),
|
| 63 |
+
}
|
| 64 |
+
)
|
| 65 |
+
```
|
| 66 |
+
<!-- prettier-ignore-end -->
|
| 67 |
+
|
| 68 |
+
[Cameras tutorial](./cameras) to understand how to detect and add your camera.
Next, we'll create our actual robot class, which inherits from `Robot`. This abstract class defines a contract you must follow for your robot to be usable with the rest of the LeRobot tools.

Here, we'll create a simple 5-DoF robot with one camera. It could be a simple arm, but notice that the `Robot` abstract class does not assume anything about your robot's form factor. You can let your imagination run wild when designing new robots!

<!-- prettier-ignore-start -->
```python
from lerobot.cameras import make_cameras_from_configs
from lerobot.motors import Motor, MotorNormMode
from lerobot.motors.feetech import FeetechMotorsBus
from lerobot.robots import Robot


class MyCoolRobot(Robot):
    config_class = MyCoolRobotConfig
    name = "my_cool_robot"

    def __init__(self, config: MyCoolRobotConfig):
        super().__init__(config)
        self.bus = FeetechMotorsBus(
            port=self.config.port,
            motors={
                "joint_1": Motor(1, "sts3250", MotorNormMode.RANGE_M100_100),
                "joint_2": Motor(2, "sts3215", MotorNormMode.RANGE_M100_100),
                "joint_3": Motor(3, "sts3215", MotorNormMode.RANGE_M100_100),
                "joint_4": Motor(4, "sts3215", MotorNormMode.RANGE_M100_100),
                "joint_5": Motor(5, "sts3215", MotorNormMode.RANGE_M100_100),
            },
            calibration=self.calibration,
        )
        self.cameras = make_cameras_from_configs(config.cameras)
```
<!-- prettier-ignore-end -->

## Step 2: Define Observation and Action Features

These two properties define the _interface contract_ between your robot and tools that consume it (such as data collection or learning pipelines).

> [!WARNING]
> Note that these properties must be callable even if the robot is not yet connected, so avoid relying on runtime hardware state to define them.

### `observation_features`

This property should return a dictionary describing the structure of sensor outputs from your robot. The keys match what `get_observation()` returns, and the values describe either the shape (for arrays/images) or the type (for simple values).

Example for our 5-DoF arm with one camera:

<!-- prettier-ignore-start -->
```python
@property
def _motors_ft(self) -> dict[str, type]:
    return {
        "joint_1.pos": float,
        "joint_2.pos": float,
        "joint_3.pos": float,
        "joint_4.pos": float,
        "joint_5.pos": float,
    }

@property
def _cameras_ft(self) -> dict[str, tuple]:
    return {cam: (self.cameras[cam].height, self.cameras[cam].width, 3) for cam in self.cameras}

@property
def observation_features(self) -> dict:
    return {**self._motors_ft, **self._cameras_ft}
```
<!-- prettier-ignore-end -->

In this case, observations consist of a simple dict storing each motor's position and a camera image.

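To make the contract concrete, here is a plain-Python sketch (no hardware, not LeRobot code) of what the merged `observation_features` dict would look like for this 5-DoF arm, assuming one 480x640 camera:

```python
# Illustrative sketch: motor features map keys to types,
# camera features map keys to (height, width, channels) shapes.
motors_ft = {f"joint_{i}.pos": float for i in range(1, 6)}
cameras_ft = {"cam_1": (480, 640, 3)}

# observation_features merges both, exactly like the property above.
observation_features = {**motors_ft, **cameras_ft}
print(observation_features["joint_1.pos"])  # <class 'float'>
print(observation_features["cam_1"])        # (480, 640, 3)
```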
### `action_features`

This property describes the commands your robot expects via `send_action()`. Again, keys must match the expected input format, and values define the shape/type of each command.

Here, we simply reuse the joints' proprioceptive features (`self._motors_ft`) from `observation_features`: the action sent will simply be the goal position for each motor.

<!-- prettier-ignore-start -->
```python
@property
def action_features(self) -> dict:
    return self._motors_ft
```
<!-- prettier-ignore-end -->

## Step 3: Handle Connection and Disconnection

These methods should handle opening and closing communication with your hardware (e.g. serial ports, CAN interfaces, USB devices, cameras).

### `is_connected`

This property should simply reflect whether communication with the robot's hardware is established. When this property is `True`, it should be possible to read and write to the hardware using `get_observation()` and `send_action()`.

<!-- prettier-ignore-start -->
```python
@property
def is_connected(self) -> bool:
    return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
```
<!-- prettier-ignore-end -->

### `connect()`

This method should establish communication with the hardware. Moreover, if your robot needs calibration and is not calibrated, it should start a calibration procedure by default. If your robot needs some specific configuration, it should also be applied here.

<!-- prettier-ignore-start -->
```python
def connect(self, calibrate: bool = True) -> None:
    self.bus.connect()
    if not self.is_calibrated and calibrate:
        self.calibrate()

    for cam in self.cameras.values():
        cam.connect()

    self.configure()
```
<!-- prettier-ignore-end -->

### `disconnect()`

This method should gracefully terminate communication with the hardware: free any related resources (threads or processes), close ports, etc.

Here, our `MotorsBus` and `Camera` classes already handle this, so we just need to call their own `disconnect()` methods:

<!-- prettier-ignore-start -->
```python
def disconnect(self) -> None:
    self.bus.disconnect()
    for cam in self.cameras.values():
        cam.disconnect()
```
<!-- prettier-ignore-end -->

## Step 4: Support Calibration and Configuration

LeRobot supports saving and loading calibration data automatically. This is useful for joint offsets, zero positions, or sensor alignment.

> Note that depending on your hardware, this may not apply. If that's the case, you can simply leave these methods as no-ops:

<!-- prettier-ignore-start -->
```python
@property
def is_calibrated(self) -> bool:
    return True

def calibrate(self) -> None:
    pass
```
<!-- prettier-ignore-end -->

### `is_calibrated`

This should reflect whether your robot has the required calibration loaded.

<!-- prettier-ignore-start -->
```python
@property
def is_calibrated(self) -> bool:
    return self.bus.is_calibrated
```
<!-- prettier-ignore-end -->

### `calibrate()`

The goal of calibration is twofold:

- Know the physical range of motion of each motor in order to only send commands within this range.
- Normalize raw motor positions to sensible continuous values (e.g. percentages, degrees) instead of arbitrary discrete values that depend on the specific motor used and will not replicate elsewhere.

It should implement the calibration logic (if relevant) and update the `self.calibration` dictionary. If you are using Feetech or Dynamixel motors, our bus interfaces already include methods to help with this.

<!-- prettier-ignore-start -->
```python
def calibrate(self) -> None:
    self.bus.disable_torque()
    for motor in self.bus.motors:
        self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)

    input(f"Move {self} to the middle of its range of motion and press ENTER....")
    homing_offsets = self.bus.set_half_turn_homings()

    print(
        "Move all joints sequentially through their entire ranges "
        "of motion.\nRecording positions. Press ENTER to stop..."
    )
    range_mins, range_maxes = self.bus.record_ranges_of_motion()

    self.calibration = {}
    for motor, m in self.bus.motors.items():
        self.calibration[motor] = MotorCalibration(
            id=m.id,
            drive_mode=0,
            homing_offset=homing_offsets[motor],
            range_min=range_mins[motor],
            range_max=range_maxes[motor],
        )

    self.bus.write_calibration(self.calibration)
    self._save_calibration()
    print("Calibration saved to", self.calibration_fpath)
```
<!-- prettier-ignore-end -->

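The bus handles normalization internally, but to build intuition for what the recorded ranges are used for: with `MotorNormMode.RANGE_M100_100`, a raw encoder tick in `[range_min, range_max]` is mapped linearly to `[-100, 100]`. A rough illustration of that idea (not LeRobot's actual normalization code):

```python
def to_range_m100_100(raw: float, range_min: float, range_max: float) -> float:
    """Linearly map a raw encoder tick to [-100, 100] (illustration only)."""
    span = range_max - range_min
    return (raw - range_min) / span * 200.0 - 100.0

print(to_range_m100_100(0, 0, 4095))     # -100.0
print(to_range_m100_100(4095, 0, 4095))  # 100.0
```

This is why calibration must record the real range of motion first: without `range_min`/`range_max`, the normalized values would not be comparable across motors or robots.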
### `configure()`

Use this to set up any configuration for your hardware (servo control modes, controller gains, etc.). This should usually be run at connection time and be idempotent.

<!-- prettier-ignore-start -->
```python
def configure(self) -> None:
    with self.bus.torque_disabled():
        self.bus.configure_motors()
        for motor in self.bus.motors:
            self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
            self.bus.write("P_Coefficient", motor, 16)
            self.bus.write("I_Coefficient", motor, 0)
            self.bus.write("D_Coefficient", motor, 32)
```
<!-- prettier-ignore-end -->

## Step 5: Implement Sensors Reading and Action Sending

These are the most important runtime functions: the core I/O loop.

### `get_observation()`

Returns a dictionary of sensor values from the robot. These typically include motor states, camera frames, various sensors, etc. In the LeRobot framework, these observations are what will be fed to a policy in order to predict the actions to take. The dictionary keys and structure must match `observation_features`.

<!-- prettier-ignore-start -->
```python
def get_observation(self) -> dict[str, Any]:
    if not self.is_connected:
        raise ConnectionError(f"{self} is not connected.")

    # Read arm position
    obs_dict = self.bus.sync_read("Present_Position")
    obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}

    # Capture images from cameras
    for cam_key, cam in self.cameras.items():
        obs_dict[cam_key] = cam.async_read()

    return obs_dict
```
<!-- prettier-ignore-end -->

### `send_action()`

Takes a dictionary that matches `action_features` and sends it to your hardware. You can add safety limits (clipping, smoothing) and return what was actually sent.

For simplicity, we won't modify the actions in this example.

<!-- prettier-ignore-start -->
```python
def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
    goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}

    # Send goal position to the arm
    self.bus.sync_write("Goal_Position", goal_pos)

    return action
```
<!-- prettier-ignore-end -->

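If you do want safety limits in `send_action()`, a simple option is to clamp each goal position to its calibrated range before writing it to the bus. A minimal sketch, using hypothetical per-key bounds (this helper is not part of LeRobot):

```python
def clip_action(action: dict[str, float], bounds: dict[str, tuple[float, float]]) -> dict[str, float]:
    """Clamp each goal position to its allowed (low, high) range; keys without bounds pass through."""
    clipped = {}
    for key, value in action.items():
        low, high = bounds.get(key, (float("-inf"), float("inf")))
        clipped[key] = min(max(value, low), high)
    return clipped

# Example: joint_1 is limited to [-100, 100]
bounds = {"joint_1.pos": (-100.0, 100.0)}
print(clip_action({"joint_1.pos": 150.0}, bounds))  # {'joint_1.pos': 100.0}
```

Returning the clipped dictionary from `send_action()` lets callers (and dataset recording) know what was actually sent.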
## Adding a Teleoperator

For implementing teleoperation devices, we also provide a [`Teleoperator`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/teleoperators/teleoperator.py) base class. This class is very similar to the `Robot` base class and likewise doesn't assume anything about form factor.

The main differences are in the I/O functions: a teleoperator produces actions via `get_action` and can receive feedback actions via `send_feedback`. Feedback can be anything controllable on the teleoperation device that helps the person operating it understand the consequences of the actions sent: think motion/force feedback on a leader arm, or vibrations on a gamepad, for example. To implement a teleoperator, you can follow this same tutorial and adapt it for these two methods.

## Using Your Own `LeRobot` Devices 🔌

You can easily extend `lerobot` with your own custom hardware—be it a camera, robot, or teleoperation device—by creating a separate, installable Python package. If you follow a few simple conventions, the `lerobot` command-line tools (like `lerobot-teleop` and `lerobot-record`) will **automatically discover and integrate your creations** without requiring any changes to the `lerobot` source code.

This guide outlines the conventions your plugin must follow.

### The 4 Core Conventions

To ensure your custom device is discoverable, you must adhere to the following four rules.

#### 1\. Create an Installable Package with a Specific Prefix

Your project must be a standard, installable Python package. Crucially, the name of your package (as defined in `pyproject.toml` or `setup.py`) must begin with one of these prefixes:

- `lerobot_robot_` for a robot.
- `lerobot_camera_` for a camera.
- `lerobot_teleoperator_` for a teleoperation device.

This prefix system is how `lerobot` automatically finds your plugin in the Python environment.

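To see why a naming prefix is enough, here is a sketch of prefix-based discovery using only the standard library. This is an illustration of the idea, not LeRobot's actual discovery code:

```python
import pkgutil

# The three plugin prefixes from Convention #1.
PLUGIN_PREFIXES = ("lerobot_robot_", "lerobot_camera_", "lerobot_teleoperator_")

def discover_plugin_modules() -> list[str]:
    """List installed top-level modules whose names match a plugin prefix."""
    return [
        module.name
        for module in pkgutil.iter_modules()
        if module.name.startswith(PLUGIN_PREFIXES)  # str.startswith accepts a tuple
    ]

# Returns e.g. ['lerobot_teleoperator_my_awesome_teleop'] once such a package is installed;
# an empty list if no plugins are present in the environment.
print(discover_plugin_modules())
```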
#### 2\. Follow the `SomethingConfig`/`Something` Naming Pattern

Your device's implementation class must be named after its configuration class, simply by removing the `Config` suffix.

- **Config Class:** `MyAwesomeTeleopConfig`
- **Device Class:** `MyAwesomeTeleop`

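This naming pattern makes the device class name mechanically derivable from the config class name. A one-line sketch of that derivation (illustrative, not LeRobot's internal helper):

```python
def device_class_name(config_class_name: str) -> str:
    """Derive the device class name from its config class name (Convention #2)."""
    if not config_class_name.endswith("Config"):
        raise ValueError(f"{config_class_name!r} does not end with 'Config'")
    return config_class_name.removesuffix("Config")

print(device_class_name("MyAwesomeTeleopConfig"))  # MyAwesomeTeleop
```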
#### 3\. Place Your Files in a Predictable Structure

The device class (`MyAwesomeTeleop`) must be located in a predictable module relative to its configuration class (`MyAwesomeTeleopConfig`). `lerobot` will automatically search in these locations:

- In the **same module** as the config class.
- In a **submodule named after the device** (e.g., `my_awesome_teleop.py`).

The recommended and simplest structure is to place them in separate, clearly named files within the same directory.

#### 4\. Expose Classes in `__init__.py`

Your package's `__init__.py` file should import and expose both the configuration and the device classes, making them easily accessible.

### Putting It All Together: A Complete Example

Let's create a new teleoperator called `my_awesome_teleop`.

#### Directory Structure

Here is what the project folder should look like. The package name, `lerobot_teleoperator_my_awesome_teleop`, follows **Convention \#1**.

```
lerobot_teleoperator_my_awesome_teleop/
├── pyproject.toml  # (or setup.py) lists lerobot as a dependency
└── lerobot_teleoperator_my_awesome_teleop/
    ├── __init__.py
    ├── config_my_awesome_teleop.py
    └── my_awesome_teleop.py
```

#### File Contents

- **`config_my_awesome_teleop.py`**: Defines the configuration class. Note the `Config` suffix (**Convention \#2**).

```python
from dataclasses import dataclass

from lerobot.teleoperators.config import TeleoperatorConfig


@TeleoperatorConfig.register_subclass("my_awesome_teleop")
@dataclass
class MyAwesomeTeleopConfig(TeleoperatorConfig):
    # Your configuration fields go here
    port: str = "192.168.1.1"
```

- **`my_awesome_teleop.py`**: Implements the device. The class name `MyAwesomeTeleop` matches its config class name (**Convention \#2**). This file structure adheres to **Convention \#3**.

```python
from lerobot.teleoperators.teleoperator import Teleoperator

from .config_my_awesome_teleop import MyAwesomeTeleopConfig


class MyAwesomeTeleop(Teleoperator):
    config_class = MyAwesomeTeleopConfig
    name = "my_awesome_teleop"

    def __init__(self, config: MyAwesomeTeleopConfig):
        super().__init__(config)
        self.config = config

    # Your device logic (e.g., connect) goes here
```

- **`__init__.py`**: Exposes the key classes (**Convention \#4**).

```python
from .config_my_awesome_teleop import MyAwesomeTeleopConfig
from .my_awesome_teleop import MyAwesomeTeleop
```

### Installation and Usage

1. **Install your new plugin in your Python environment.** You can install your local plugin package using `pip`'s editable mode, or from PyPI.

```bash
# Locally:
# navigate to your plugin's root directory and install it
cd lerobot_teleoperator_my_awesome_teleop
pip install -e .

# From PyPI
pip install lerobot_teleoperator_my_awesome_teleop
```

2. **Use it directly from the command line.** Now, you can use your custom device by referencing its type.

```bash
lerobot-teleoperate --teleop.type=my_awesome_teleop
# ... other arguments
```

And that's it\! Your custom device is now fully integrated.

### Looking for an example?

Check out these two packages from the community:

- https://github.com/SpesRobotics/lerobot-robot-xarm
- https://github.com/SpesRobotics/lerobot-teleoperator-teleop

## Wrapping Up

Once your robot class is complete, you can leverage the LeRobot ecosystem:

- Control your robot with the available teleoperators, or directly integrate your own teleoperation device
- Record training data and visualize it
- Integrate it into RL or imitation learning pipelines

Don't hesitate to reach out to the community for help on our [Discord](https://discord.gg/s3KuuzsPFb) 🤗

lerobot/docs/source/introduction_processors.mdx

# Introduction to Processors

In robotics, there's a fundamental mismatch between the data that robots and humans produce and what machine learning models expect.
Robots output raw sensor data like camera images and joint positions that need normalization, batching, and device placement before models can process them.
Language instructions from humans must be tokenized into numerical representations, and different robots use different coordinate systems that need standardization.

The challenge extends to model outputs as well.
Models might output end-effector positions while robots need joint-space commands, or teleoperators produce relative movements while robots expect absolute commands.
Model predictions are often normalized and need conversion back to real-world scales.

Cross-domain translation adds another layer of complexity.
Training data from one robot setup needs adaptation for deployment on different hardware, models trained with specific camera configurations must work with new arrangements, and datasets with different naming conventions need harmonization.

**That's where processors come in.** They serve as universal translators that bridge these gaps, ensuring seamless data flow from sensors to models to actuators.
Processors handle all the preprocessing and postprocessing steps needed to convert raw environment data into model-ready inputs and vice versa.

This means that your favorite policy can be used like this:

```python
import torch

from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.policies.factory import make_pre_post_processors
from lerobot.policies.your_policy import YourPolicy

dataset = LeRobotDataset("hf_user/dataset", episodes=[0])
sample = dataset[10]

model = YourPolicy.from_pretrained("hf_user/model")
model.eval()
model.to("cuda")

preprocessor, postprocessor = make_pre_post_processors(
    model.config, pretrained_path="hf_user/model", dataset_stats=dataset.meta.stats
)

preprocessed_sample = preprocessor(sample)
action = model.select_action(preprocessed_sample)
postprocessed_action = postprocessor(action)
```

## What are Processors?

In robotics, data comes in many forms: images from cameras, joint positions from sensors, text instructions from users, and more. Each type of data requires specific transformations before a model can use it effectively. Models need this data to be:

- **Normalized**: Scaled to appropriate ranges for neural network processing
- **Batched**: Organized with proper dimensions for batch processing
- **Tokenized**: Text converted to numerical representations
- **Device-placed**: Moved to the right hardware (CPU/GPU)
- **Type-converted**: Cast to appropriate data types

Processors handle these transformations through composable, reusable steps that can be chained together into pipelines. Think of them as a modular assembly line where each station performs a specific transformation on your data.

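As a toy illustration of the first two bullets, here is what normalization and batching look like in plain Python (made-up mean/std values, lists standing in for tensors):

```python
def normalize(values: list[float], mean: float, std: float) -> list[float]:
    """Scale values to a model-friendly range: (x - mean) / std."""
    return [(v - mean) / std for v in values]

def batch(values: list[float]) -> list[list[float]]:
    """Add a leading batch dimension, as models expect batched input."""
    return [values]

state = [0.5, -0.3, 0.1]
print(batch(normalize(state, mean=0.0, std=1.0)))  # [[0.5, -0.3, 0.1]]
```

Real processor steps do the same kind of work on tensors, plus device placement and dtype conversion.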
## Core Concepts

### EnvTransition: The Universal Data Container

The `EnvTransition` is the fundamental data structure that flows through all processors.
It's a typed dictionary that represents a complete robot-environment interaction:

- **OBSERVATION**: All sensor data (images, states, proprioception)
- **ACTION**: The action to execute or that was executed
- **REWARD**: Reinforcement learning signal
- **DONE/TRUNCATED**: Episode boundary indicators
- **INFO**: Arbitrary metadata
- **COMPLEMENTARY_DATA**: Task descriptions, indices, padding flags, inter-step data

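To make the shape of a transition tangible, here is a plain-dict sketch with one field per bullet above. The key names and values are illustrative only; the real `EnvTransition` is a typed dictionary defined in `lerobot.processor`:

```python
# Illustrative transition-like dict (not the actual EnvTransition TypedDict).
transition = {
    "observation": {"observation.state": [0.5, -0.3]},  # sensor data
    "action": [0.2, 0.1],                               # action taken/to take
    "reward": 0.0,                                      # RL signal
    "done": False,                                      # episode ended?
    "truncated": False,                                 # episode cut short?
    "info": {},                                         # arbitrary metadata
    "complementary_data": {"task": "pick up the cube"}, # task description, indices, ...
}
print(sorted(transition))
```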
### ProcessorStep: The Building Block

A `ProcessorStep` is a single transformation unit that processes transitions. It's an abstract base class with two required methods:

```python
from lerobot.processor import ProcessorStep, EnvTransition


class MyProcessorStep(ProcessorStep):
    """Example processor step - inherit and implement abstract methods."""

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        """Transform the transition - REQUIRED abstract method."""
        # Your processing logic here
        return transition

    def transform_features(self, features):
        """Declare how this step transforms feature shapes/types - REQUIRED abstract method."""
        return features  # Most processors return features unchanged
```

`__call__` is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`.

`transform_features` declares how this step transforms feature shapes and types.

### DataProcessorPipeline: The Generic Orchestrator

The `DataProcessorPipeline[TInput, TOutput]` chains multiple `ProcessorStep` instances together:

```python
from lerobot.processor import RobotProcessorPipeline, PolicyProcessorPipeline

# For robot hardware (unbatched data)
robot_processor = RobotProcessorPipeline[RobotAction, RobotAction](
    steps=[step1, step2, step3],
    name="robot_pipeline",
)

# For model training/inference (batched data)
policy_processor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
    steps=[step1, step2, step3],
    name="policy_pipeline",
)
```

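The orchestration itself is simple: apply each step in order, feeding each step's output into the next. A minimal stand-in showing the pattern (plain Python, not the LeRobot class):

```python
class MiniPipeline:
    """Apply a list of callable steps in order; each takes and returns a transition-like dict."""

    def __init__(self, steps):
        self.steps = steps

    def __call__(self, transition: dict) -> dict:
        for step in self.steps:
            transition = step(transition)
        return transition

def scale_action(t: dict) -> dict:
    return {**t, "action": [a * 2 for a in t["action"]]}

def add_offset(t: dict) -> dict:
    return {**t, "action": [a + 1 for a in t["action"]]}

pipeline = MiniPipeline([scale_action, add_offset])
print(pipeline({"action": [1.0, 2.0]}))  # {'action': [3.0, 5.0]}
```

Note that step order matters: swapping the two steps above would yield `[4.0, 6.0]` instead.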
## RobotProcessorPipeline vs PolicyProcessorPipeline

The key distinction is in the data structures they handle:

| Aspect          | RobotProcessorPipeline                       | PolicyProcessorPipeline                  |
| --------------- | -------------------------------------------- | ---------------------------------------- |
| **Input**       | `dict[str, Any]` - Individual robot values   | `dict[str, Any]` - Batched tensors       |
| **Output**      | `dict[str, Any]` - Individual robot commands | `torch.Tensor` - Policy predictions      |
| **Use Case**    | Real-time robot control                      | Model training/inference                 |
| **Data Format** | Unbatched, heterogeneous                     | Batched, homogeneous                     |
| **Examples**    | `{"joint_1": 0.5}`                           | `{"observation.state": tensor([[0.5]])}` |

**Use `RobotProcessorPipeline`** for robot hardware interfaces:

```python
# Robot data structures: dict[str, Any] for observations and actions
robot_obs: dict[str, Any] = {
    "joint_1": 0.5,          # Individual joint values
    "joint_2": -0.3,
    "camera_0": image_array  # Raw camera data
}

robot_action: dict[str, Any] = {
    "joint_1": 0.2,  # Target joint positions
    "joint_2": 0.1,
    "gripper": 0.8
}
```

**Use `PolicyProcessorPipeline`** for model training and batch processing:

```python
# Policy data structures: batch dicts and tensors
policy_batch: dict[str, Any] = {
    "observation.state": torch.tensor([[0.5, -0.3]]),  # Batched states
    "observation.images.camera0": torch.tensor(...),   # Batched images
    "action": torch.tensor([[0.2, 0.1, 0.8]])          # Batched actions
}

policy_action: torch.Tensor = torch.tensor([[0.2, 0.1, 0.8]])  # Model output tensor
```

## Converter Functions

LeRobot provides converter functions to bridge different data formats in `lerobot.processor.converters`. These functions handle the crucial translations between robot hardware data structures, policy model formats, and the internal `EnvTransition` representation that flows through processor pipelines.

| Category                       | Function                      | Description                     |
| ------------------------------ | ----------------------------- | ------------------------------- |
| **Robot Hardware Converters**  | `robot_action_to_transition`  | Robot dict → EnvTransition      |
|                                | `observation_to_transition`   | Robot obs → EnvTransition       |
|                                | `transition_to_robot_action`  | EnvTransition → Robot dict      |
| **Policy/Training Converters** | `batch_to_transition`         | Batch dict → EnvTransition      |
|                                | `transition_to_batch`         | EnvTransition → Batch dict      |
|                                | `policy_action_to_transition` | Policy tensor → EnvTransition   |
|                                | `transition_to_policy_action` | EnvTransition → Policy tensor   |
| **Utilities**                  | `create_transition`           | Build transitions with defaults |
|                                | `identity_transition`         | Pass-through converter          |

The key insight is that **robot hardware converters** work with individual values and dictionaries, while **policy/training converters** work with batched tensors and model outputs. The converter functions automatically handle the structural differences, so your processor steps can focus on the core transformations without worrying about data format compatibility.

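To illustrate the structural difference the converters absorb, here is a toy version of the robot↔policy packing and unpacking, using plain lists in place of tensors (these helpers are illustrative, not LeRobot's converters):

```python
def robot_obs_to_batch(obs: dict) -> dict:
    """Pack individual joint values into a single batched 'observation.state' vector."""
    joints = [v for k, v in sorted(obs.items()) if k.startswith("joint_")]
    return {"observation.state": [joints]}  # leading batch dimension of 1

def batch_action_to_robot(action_batch: list, joint_names: list) -> dict:
    """Unpack a batched action vector back into per-joint robot commands."""
    return dict(zip(joint_names, action_batch[0]))

print(robot_obs_to_batch({"joint_1": 0.5, "joint_2": -0.3}))
# {'observation.state': [[0.5, -0.3]]}
print(batch_action_to_robot([[0.2, 0.1]], ["joint_1", "joint_2"]))
# {'joint_1': 0.2, 'joint_2': 0.1}
```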
## Processor Examples

The following examples demonstrate real-world processor configurations. Here is an example processor pair for policy training and inference:

```python
# Training data preprocessing (optimized order for GPU performance)
training_preprocessor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
    steps=[
        RenameObservationsProcessorStep(rename_map={}),     # Standardize keys
        AddBatchDimensionProcessorStep(),                   # Add batch dims
        TokenizerProcessorStep(tokenizer_name="...", ...),  # Tokenize language
        DeviceProcessorStep(device="cuda"),                 # Move to GPU first
        NormalizerProcessorStep(features=..., stats=...),   # Normalize on GPU
    ]
)

# Model output postprocessing
training_postprocessor = PolicyProcessorPipeline[torch.Tensor, torch.Tensor](
    steps=[
        DeviceProcessorStep(device="cpu"),                   # Move to CPU
        UnnormalizerProcessorStep(features=..., stats=...),  # Denormalize
    ],
    to_transition=policy_action_to_transition,
    to_output=transition_to_policy_action,
)
```


### An interaction between a robot and a policy with processors

The most common real-world scenario combines both pipeline types: robot hardware generates observations that need policy preprocessing, and policy outputs need robot-compatible postprocessing:

```python
# Real deployment: Robot sensors → Model → Robot commands
with torch.no_grad():
    while not done:
        raw_obs = robot.get_observation()  # dict[str, Any]

        # Preprocess the raw robot observation for the policy
        policy_input = policy_preprocessor(raw_obs)  # Batched dict

        policy_output = policy.select_action(policy_input)  # Policy tensor

        # Postprocess the policy output into a robot-compatible action
        policy_action = policy_postprocessor(policy_output)

        robot.send_action(policy_action)
```


## Feature Contracts: Shape and Type Transformation

Processors don't just transform data - they can also **change the data structure itself**. The `transform_features()` method declares these changes, which is crucial for dataset recording and policy creation.

### Why Feature Contracts Matter

When building datasets or policies, LeRobot needs to know:

- **What data fields will exist** after processing
- **What shapes and types** each field will have
- **How to configure models** for the expected data structure

```python
# Example: A processor that concatenates a computed velocity to observations
class VelocityProcessor(ObservationProcessorStep):
    def observation(self, obs):
        new_obs = obs.copy()
        if "observation.state" in obs:
            # Concatenate the computed velocity field to the state
            state = obs["observation.state"]
            new_obs["observation.state"] = torch.cat([state, self._compute_velocity(state)], dim=-1)
        return new_obs

    def transform_features(self, features):
        """Declare that the state doubles in size once velocity is appended."""
        state_feature = features[PipelineFeatureType.OBSERVATION].get("observation.state")
        if state_feature:
            double_shape = (state_feature.shape[0] * 2,) if state_feature.shape else (2,)
            features[PipelineFeatureType.OBSERVATION]["observation.state"] = PolicyFeature(
                type=FeatureType.STATE, shape=double_shape
            )
        return features
```

### Feature Specification Functions

`create_initial_features()` and `aggregate_pipeline_dataset_features()` solve a critical dataset creation problem: determining the exact final data structure before any data is processed.
Since processor pipelines can add new features (like velocity fields), change tensor shapes (like cropping images), or rename keys, datasets need to know the complete output specification upfront to allocate proper storage and define schemas.
These functions work together: `create_initial_features()` starts from the robot hardware specifications, and `aggregate_pipeline_dataset_features()` then simulates the entire pipeline transformation to compute the final feature dictionary passed to `LeRobotDataset.create()`, ensuring perfect alignment between what processors output and what datasets expect to store.

```python
from lerobot.datasets.pipeline_features import (
    aggregate_pipeline_dataset_features,
    create_initial_features,
)

# Start with robot's raw features
initial_features = create_initial_features(
    observation=robot.observation_features,  # {"joint_1.pos": float, "camera_0": (480,640,3)}
    action=robot.action_features,            # {"joint_1.pos": float, "gripper.pos": float}
)

# Apply processor pipeline to compute final features
final_features = aggregate_pipeline_dataset_features(
    pipeline=my_processor_pipeline,
    initial_features=initial_features,
    use_videos=True,
)

# Use for dataset creation
dataset = LeRobotDataset.create(
    repo_id="my_dataset",
    features=final_features,  # Knows exactly what data to expect
    ...
)
```

## Common Processor Steps

LeRobot provides many registered processor steps. Here are the most commonly used core processors:

### Essential Processors

- **`normalizer_processor`**: Normalize observations/actions using dataset statistics (mean/std or min/max)
- **`device_processor`**: Move tensors to CPU/GPU with optional dtype conversion
- **`to_batch_processor`**: Add batch dimensions to transitions for model compatibility
- **`rename_observations_processor`**: Rename observation keys using mapping dictionaries
- **`tokenizer_processor`**: Tokenize natural language task descriptions into tokens and attention masks

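As an illustration of what `normalizer_processor` and its inverse compute, here is a minimal mean/std sketch in plain Python. The real steps operate on torch tensors with per-feature statistics stored alongside the dataset; this only shows the arithmetic:

```python
# Minimal sketch of mean/std (de)normalization, the core idea behind
# normalizer_processor and unnormalizer_processor. Illustration only.
def normalize(values: list[float], mean: float, std: float, eps: float = 1e-8) -> list[float]:
    return [(v - mean) / (std + eps) for v in values]

def unnormalize(values: list[float], mean: float, std: float, eps: float = 1e-8) -> list[float]:
    return [v * (std + eps) + mean for v in values]

normed = normalize([1.0, 3.0], mean=2.0, std=1.0)
print(normed)                          # values centered around 0
print(unnormalize(normed, 2.0, 1.0))   # round-trips back to the originals
```

The `eps` term guards against division by zero for constant features, which is why the unnormalizer must use the same statistics (and the same `eps`) as the normalizer to round-trip exactly.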

### Next Steps

- **[Implement Your Own Processor](./implement_your_own_processor)** - Create custom processor steps
- **[Debug Your Pipeline](./debug_processor_pipeline)** - Troubleshoot and optimize pipelines
- **[Processors for Robots and Teleoperators](./processors_robots_teleop)** - Real-world integration patterns

## Summary

Processors solve the data translation problem in robotics by providing:

- **Modular transformations**: Composable, reusable processing steps
- **Type safety**: Generic pipelines with compile-time checking
- **Performance optimization**: GPU-accelerated operations
- **Robot/Policy distinction**: Separate pipelines for different data structures
- **Comprehensive ecosystem**: 30+ registered processors for common tasks

The key insight: `RobotProcessorPipeline` handles unbatched robot hardware data, while `PolicyProcessorPipeline` handles batched model data. Choose the right tool for your data structure!
lerobot/docs/source/koch.mdx
ADDED
# Koch v1.1

In the steps below, we explain how to assemble the Koch v1.1 robot.

## Order and assemble the parts

Follow the sourcing and assembling instructions provided in this [README](https://github.com/jess-moss/koch-v1-1). This will guide you through setting up both the follower and leader arms, as shown in the image below.

For a visual walkthrough of the assembly process, you can refer to [this video tutorial](https://youtu.be/8nQIg9BwwTk).

> [!WARNING]
> Since the production of this video, we simplified the configuration phase. Because of this, two things differ from the instructions in that video:
>
> - Don't plug in all the motor cables right away; wait until you are instructed to do so in [Configure the motors](#configure-the-motors).
> - Don't screw the controller board (PCB) to the base right away; wait until you are instructed to do so in [Configure the motors](#configure-the-motors).

## Install LeRobot 🤗

To install LeRobot, follow our [Installation Guide](./installation)

In addition to these instructions, you need to install the Dynamixel SDK:

```bash
pip install -e ".[dynamixel]"
```

## Configure the motors

### 1. Find the USB ports associated with each arm

To find the port for each bus servo adapter, run this script:

```bash
lerobot-find-port
```

<hfoptions id="example">
<hfoption id="Mac">

Example output:

```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the USB cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the USB cable.
```

Here, the detected port is `/dev/tty.usbmodem575E0032081`, corresponding to your leader or follower arm.

</hfoption>
<hfoption id="Linux">

On Linux, you might need to give access to the USB ports by running:

```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```

Example output:

```
Finding all available ports for the MotorBus.
['/dev/ttyACM0', '/dev/ttyACM1']
Remove the usb cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/ttyACM1
Reconnect the USB cable.
```

Here, the detected port is `/dev/ttyACM1`, corresponding to your leader or follower arm.

</hfoption>
</hfoptions>
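Under the hood, the port is identified by diffing the device list before and after you unplug the cable. A plain-Python sketch of that idea (the port names are hypothetical examples, not tied to your machine):

```python
# Sketch of how a find-port workflow identifies a device: list serial ports,
# unplug the arm, list again, and the missing entry is the arm's port.
def find_port(before: list[str], after: list[str]) -> str:
    """Return the single device present before unplugging but absent after."""
    removed = set(before) - set(after)
    if len(removed) != 1:
        raise RuntimeError(f"Expected exactly one removed port, got {sorted(removed)}")
    return removed.pop()

before = ["/dev/ttyACM0", "/dev/ttyACM1"]
after = ["/dev/ttyACM0"]           # after unplugging the arm's USB cable
print(find_port(before, after))    # -> /dev/ttyACM1
```

This is also why the script asks you to unplug only one cable at a time: removing two devices at once makes the diff ambiguous.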

### 2. Set the motor IDs and baudrates

Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to assign a unique id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.

To that end, we first need to connect the controller to each motor individually in order to set these parameters. Since we write them to the non-volatile section of the motors' internal memory (EEPROM), we only need to do this once.

If you are repurposing motors from another robot, you will probably also need to perform this step, as the ids and baudrate likely won't match.

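A toy model of why both settings matter (illustration only; the real bus is handled by the Dynamixel SDK):

```python
# Toy serial bus: a ping only reaches motors whose baudrate matches, and
# two motors sharing an id both reply, colliding on the bus. Illustration only.
def ping(motors: list[tuple[int, int]], motor_id: int, baudrate: int) -> list[int]:
    """Return ids of motors that answer a ping at the given id/baudrate."""
    return [mid for mid, baud in motors if mid == motor_id and baud == baudrate]

fresh = [(1, 57_600), (1, 57_600)]        # two brand-new motors, both id 1
print(ping(fresh, 1, 57_600))             # [1, 1] -> two replies collide

configured = [(1, 1_000_000), (2, 1_000_000)]
print(ping(configured, 2, 1_000_000))     # [2] -> exactly one motor answers
print(ping(configured, 2, 57_600))        # []  -> no answer: baudrate mismatch
```

This is why the setup script asks you to connect one motor at a time: with a single motor on the bus there is no ambiguity about which device receives the new id.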
#### Follower

Connect the USB cable from your computer and the 5V power supply to the follower arm's controller board. Then, run the following command or the API example with the port you found in the previous step. You'll also need to give your follower arm a name with the `id` parameter.

For a visual reference on how to set the motor ids please refer to [this video](https://huggingface.co/docs/lerobot/en/so101#setup-motors-video) where we follow the process for the SO101 arm.

<hfoptions id="setup_motors">
<hfoption id="Command">

```bash
lerobot-setup-motors \
    --robot.type=koch_follower \
    --robot.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.robots.koch_follower import KochFollower, KochFollowerConfig

config = KochFollowerConfig(
    port="/dev/tty.usbmodem575E0031751",
    id="my_awesome_follower_arm",
)
follower = KochFollower(config)
follower.setup_motors()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>


You should see the following instruction.

```
Connect the controller board to the 'gripper' motor only and press enter.
```

As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. As you press `[Enter]`, the script will automatically set the id and baudrate for that motor.

<details>
<summary>Troubleshooting</summary>

If you get an error at that point, check your cables and make sure they are plugged in properly:

<ul>
<li>Power supply</li>
<li>USB cable between your computer and the controller board</li>
<li>The 3-pin cable from the controller board to the motor</li>
</ul>

If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).

</details>

You should then see the following message:

```
'gripper' motor id set to 6
```

Followed by the next instruction:

```
Connect the controller board to the 'wrist_roll' motor only and press enter.
```

You can disconnect the 3-pin cable from the controller board, but you can leave the other end connected to the gripper motor, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.

Repeat the operation for each motor as instructed.

> [!TIP]
> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.

When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.

#### Leader

Do the same steps for the leader arm but modify the command or script accordingly.

<hfoptions id="setup_motors">
<hfoption id="Command">

```bash
lerobot-setup-motors \
    --teleop.type=koch_leader \
    --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.koch_leader import KochLeader, KochLeaderConfig

config = KochLeaderConfig(
    port="/dev/tty.usbmodem575E0031751",
    id="my_awesome_leader_arm",
)
leader = KochLeader(config)
leader.setup_motors()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>


## Calibrate

Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
The calibration process is very important because it allows a neural network trained on one robot to work on another.

#### Follower

Run the following command or API example to calibrate the follower arm:

<hfoptions id="calibrate_follower">
<hfoption id="Command">

```bash
lerobot-calibrate \
    --robot.type=koch_follower \
    --robot.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
    --robot.id=my_awesome_follower_arm # <- Give the robot a unique name
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower

config = KochFollowerConfig(
    port="/dev/tty.usbmodem585A0076891",
    id="my_awesome_follower_arm",
)

follower = KochFollower(config)
follower.connect(calibrate=False)
follower.calibrate()
follower.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

We unified the calibration method for most robots. Thus, the calibration steps for this Koch arm are the same as the steps for the SO100 and SO101. First, we have to move the robot to the position where each joint is in the middle of its range, then we press `Enter`. Secondly, we move all joints through their full range of motion. A video of this same process for the SO101 as reference can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).

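Sweeping each joint through its full range lets the software record that joint's raw encoder limits, so any pose can be expressed on a normalized scale that transfers between arms. A sketch of the idea (the tick values and the normalized range are illustrative assumptions, not LeRobot's exact internals):

```python
# Sketch of what range-of-motion calibration enables: mapping raw encoder
# ticks to a normalized range so the same pose reads the same on any arm.
# Tick values below are made up for illustration.
def normalize_position(raw: int, range_min: int, range_max: int) -> float:
    """Map a raw encoder reading to [-100, 100] using the recorded range."""
    return (raw - range_min) / (range_max - range_min) * 200.0 - 100.0

# Suppose calibration recorded ticks 1024..3072 for this joint's full range:
print(normalize_position(2048, 1024, 3072))  # 0.0 -> middle of the range
print(normalize_position(3072, 1024, 3072))  # 100.0 -> one end of the range
```

Two arms with different raw limits thus report identical normalized values for the same physical pose, which is what makes leader/follower teleoperation and policy transfer possible.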

#### Leader

Do the same steps to calibrate the leader arm by running the following command or API example:

<hfoptions id="calibrate_leader">
<hfoption id="Command">

```bash
lerobot-calibrate \
    --teleop.type=koch_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
    --teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader

config = KochLeaderConfig(
    port="/dev/tty.usbmodem575E0031751",
    id="my_awesome_leader_arm",
)

leader = KochLeader(config)
leader.connect(calibrate=False)
leader.calibrate()
leader.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
lerobot/docs/source/lekiwi.mdx
ADDED
# LeKiwi

In the steps below, we explain how to assemble the LeKiwi mobile robot.

## Source the parts

Follow this [README](https://github.com/SIGRobotics-UIUC/LeKiwi). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts.
It also provides advice if it's your first time printing or if you don't own a 3D printer.

### Wired version

If you have the **wired** LeKiwi version, you can skip the installation of the Raspberry Pi and setting up SSH. You can also run all commands directly on your PC for both the LeKiwi scripts and the leader arm scripts for teleoperating.

## Install software on Pi

Now we have to set up the remote PC that will run on the LeKiwi robot. This is normally a Raspberry Pi, but it can be any PC that runs on 5V and has enough USB ports (2 or more) for the cameras and the motor control board.

### Install OS

For setting up the Raspberry Pi and its SD-card see: [Setup PI](https://www.raspberrypi.com/documentation/computers/getting-started.html). It explains how to download the [Imager](https://www.raspberrypi.com/software/) to install Raspberry Pi OS or Ubuntu.

### Setup SSH

After setting up your Pi, you should enable and set up [SSH](https://www.raspberrypi.com/news/coding-on-raspberry-pi-remotely-with-visual-studio-code/) (Secure Shell Protocol) so you can log in to the Pi from your laptop without requiring a screen, keyboard, and mouse on the Pi. A great tutorial on how to do this can be found [here](https://www.raspberrypi.com/documentation/computers/remote-access.html#ssh). Logging into your Pi can be done from your Command Prompt (cmd) or, if you use VSCode, with [this](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) extension.

### Install LeRobot on Pi 🤗

On your Raspberry Pi, install LeRobot using our [Installation Guide](./installation)

In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your Pi:

```bash
pip install -e ".[lekiwi]"
```

## Install LeRobot locally

If you have already installed LeRobot on your laptop/PC you can skip this step; otherwise, please follow along as we do the same steps we did on the Pi.

Follow our [Installation Guide](./installation)

In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your laptop/PC:

```bash
pip install -e ".[lekiwi]"
```

Great :hugs:! You are now done installing LeRobot, and we can begin assembling the SO100/SO101 arms and the mobile base :robot:.
Every time you now want to use LeRobot, you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.

# Step-by-Step Assembly Instructions

First, we will assemble the two SO100/SO101 arms: one to attach to the mobile base and one for teleoperation. Then we will assemble the mobile base. The instructions for assembling can be found on these two pages:

- [Assemble SO101](./so101#step-by-step-assembly-instructions)
- [Assemble LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi/blob/main/Assembly.md)

### Find the USB ports associated with motor board

To find the port for each bus servo adapter, run this script:

```bash
lerobot-find-port
```

<hfoptions id="example">
<hfoption id="Mac">

Example output:

```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081']
Remove the USB cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the USB cable.
```

Here, the detected port is `/dev/tty.usbmodem575E0032081`, corresponding to your board.

</hfoption>
<hfoption id="Linux">

On Linux, you might need to give access to the USB ports by running:

```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```

Example output:

```
Finding all available ports for the MotorBus.
['/dev/ttyACM0']
Remove the usb cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/ttyACM0
Reconnect the USB cable.
```

Here, the detected port is `/dev/ttyACM0`, corresponding to your board.

</hfoption>
</hfoptions>

### Configure motors

The instructions for configuring the motors can be found in the SO101 [docs](./so101#configure-the-motors). Besides the ids for the arm motors, we also need to set the motor ids for the mobile base. These need to be in a specific order to work. Below is an image of the motor ids and motor mounting positions for the mobile base. Note that we only use one motor control board on LeKiwi. This means the motor ids for the wheels are 7, 8 and 9.

You can run this command to set up the motors for LeKiwi. It will first set up the motors for the arm (ids 6 down to 1) and then the motors for the wheels (ids 9, 8, 7):

```bash
lerobot-setup-motors \
    --robot.type=lekiwi \
    --robot.port=/dev/tty.usbmodem58760431551 # <- paste here the port found at previous step
```

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/motor_ids.webp" alt="Motor ID's for mobile robot" title="Motor ID's for mobile robot" width="60%">

### Troubleshoot communication

If you are having trouble connecting to the Mobile SO100, follow these steps to diagnose and resolve the issue.

#### 1. Verify IP Address Configuration

Make sure that the correct IP for the Pi is used in the commands or in your code. To check the Raspberry Pi's IP address, run (on the Pi command line):

```bash
hostname -I
```

#### 2. Check if Pi is reachable from laptop/pc

Try pinging the Raspberry Pi from your laptop:

```bash
ping <your_pi_ip_address>
```

If the ping fails:

- Ensure the Pi is powered on and connected to the same network.
- Check if SSH is enabled on the Pi.

#### 3. Try SSH connection

If you can't SSH into the Pi, it might not be properly connected. Use:

```bash
ssh <your_pi_user_name>@<your_pi_ip_address>
```

If you get a connection error:

- Ensure SSH is enabled on the Pi by running:
  ```bash
  sudo raspi-config
  ```
  Then navigate to: **Interfacing Options -> SSH** and enable it.

### Calibration
|
| 168 |
+
|
| 169 |
+
Now we have to calibrate the leader arm and the follower arm. The wheel motors don't have to be calibrated.
|
| 170 |
+
The calibration process is very important because it allows a neural network trained on one robot to work on another.
|
| 171 |
+
|
| 172 |
+
### Calibrate follower arm (on mobile base)
|
| 173 |
+
|
| 174 |
+
Make sure the arm is connected to the Raspberry Pi and run this script or API example (on the Raspberry Pi via SSH) to launch calibration of the follower arm:
|
| 175 |
+
|
| 176 |
+
```bash
|
| 177 |
+
lerobot-calibrate \
|
| 178 |
+
--robot.type=lekiwi \
|
| 179 |
+
--robot.id=my_awesome_kiwi # <- Give the robot a unique name
|
| 180 |
+
```

We unified the calibration method for most robots, so the calibration steps for this SO100 arm are the same as those for the Koch and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. A reference video of this same process for the SO101 can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).

### Wired version

If you have the **wired** LeKiwi version, please run all commands on your laptop.

### Calibrate leader arm

Then, calibrate the leader arm (which is attached to the laptop/PC) by running the following command or API example on your laptop:

<hfoptions id="calibrate_leader">
<hfoption id="Command">

```bash
lerobot-calibrate \
    --teleop.type=so100_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
    --teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so_leader import SO100LeaderConfig, SO100Leader

config = SO100LeaderConfig(
    port="/dev/tty.usbmodem58760431551",
    id="my_awesome_leader_arm",
)

leader = SO100Leader(config)
leader.connect(calibrate=False)
leader.calibrate()
leader.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

## Teleoperate LeKiwi

> [!TIP]
> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.

To teleoperate, SSH into your Raspberry Pi, run `conda activate lerobot`, and then this command:

```bash
python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi
```

Then on your laptop, also run `conda activate lerobot` and run the API example. Make sure you set the correct `remote_ip` and `port` in `examples/lekiwi/teleoperate.py`.

```bash
python examples/lekiwi/teleoperate.py
```

You should see something like this on your laptop: `[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.` Now you can move the leader arm and use the keyboard to drive the base: (w,a,s,d) to move forward, left, backward, right, (z,x) to turn left or right, and (r,f) to increase or decrease the speed of the mobile robot. There are three speed modes, see the tables below:

| Speed Mode | Linear Speed (m/s) | Rotation Speed (deg/s) |
| ---------- | ------------------ | ---------------------- |
| Fast       | 0.4                | 90                     |
| Medium     | 0.25               | 60                     |
| Slow       | 0.1                | 30                     |

| Key | Action         |
| --- | -------------- |
| W   | Move forward   |
| A   | Move left      |
| S   | Move backward  |
| D   | Move right     |
| Z   | Turn left      |
| X   | Turn right     |
| R   | Increase speed |
| F   | Decrease speed |
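
For intuition only, the key-to-velocity logic described by the tables above can be sketched in plain Python. This is an illustrative sketch, not the actual lerobot client code (the real mapping lives in the LeKiwi client and its config); the function name and the `{x, y, theta}` command shape are assumptions.

```python
# Hypothetical sketch of the teleop key mapping described above.
# Speed values mirror the table; names are illustrative, not the lerobot API.
SPEED_MODES = [
    {"name": "fast", "linear": 0.4, "rotation": 90},
    {"name": "medium", "linear": 0.25, "rotation": 60},
    {"name": "slow", "linear": 0.1, "rotation": 30},
]

def command_for_key(key: str, mode_index: int) -> dict:
    """Translate a pressed key into linear (m/s) and angular (deg/s) targets."""
    mode = SPEED_MODES[mode_index]
    lin, rot = mode["linear"], mode["rotation"]
    mapping = {
        "w": {"x": lin, "y": 0.0, "theta": 0.0},   # forward
        "s": {"x": -lin, "y": 0.0, "theta": 0.0},  # backward
        "a": {"x": 0.0, "y": lin, "theta": 0.0},   # left
        "d": {"x": 0.0, "y": -lin, "theta": 0.0},  # right
        "z": {"x": 0.0, "y": 0.0, "theta": rot},   # turn left
        "x": {"x": 0.0, "y": 0.0, "theta": -rot},  # turn right
    }
    # Unmapped keys command zero velocity.
    return mapping.get(key.lower(), {"x": 0.0, "y": 0.0, "theta": 0.0})

print(command_for_key("w", 1))  # medium-speed forward command
```

Pressing `r`/`f` would then simply move `mode_index` up or down the `SPEED_MODES` list.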

> [!TIP]
> If you use a different keyboard, you can change the keys for each command in the [`LeKiwiClientConfig`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/lekiwi/config_lekiwi.py).

### Wired version

If you have the **wired** LeKiwi version, please run all commands on your laptop.

## Record a dataset

Once you're familiar with teleoperation, you can record your first dataset.

We use the Hugging Face Hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).

Add your token to the CLI by running this command:

```bash
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
```

Then store your Hugging Face repository name in a variable:

```bash
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
```

Now you can record a dataset. To record episodes and upload your dataset to the Hub, execute this API example tailored for LeKiwi. Make sure to first adapt the `remote_ip`, `repo_id`, `port` and `task` in the script. If you would like to run the script for longer, you can increase `NB_CYCLES_CLIENT_CONNECTION`.

```bash
python examples/lekiwi/record.py
```

#### Dataset upload

Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test), whose URL you can obtain by running:

```bash
echo https://huggingface.co/datasets/${HF_USER}/so101_test
```

Your dataset will be automatically tagged with `LeRobot` so the community can find it easily, and you can also add custom tags (for example `tutorial`).

You can look for other LeRobot datasets on the Hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).

#### Tips for gathering data

Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible in the cameras. A good rule of thumb is that you should be able to do the task yourself by looking only at the camera images.

In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variation during data collection, such as additional grasp locations, different grasping techniques, and altered camera positions.

Avoid adding too much variation too quickly, as it may hinder your results.

If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.

#### Troubleshooting

- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).

## Replay an episode

To replay an episode, run the API example below. Make sure to change `remote_ip`, `port`, the `LeRobotDataset` repo id, and the episode index.

```bash
python examples/lekiwi/replay.py
```

Congrats 🎉, your robot is all set to learn a task on its own. Start training it with the training part of this tutorial: [Getting started with real-world robots](./il_robots)

## Evaluate your policy

To evaluate your policy, run the `evaluate.py` API example. Make sure to change `remote_ip`, `port`, and the model.

```bash
python examples/lekiwi/evaluate.py
```

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).

lerobot/docs/source/lerobot-dataset-v3.mdx

# LeRobotDataset v3.0

`LeRobotDataset v3.0` is a standardized format for robot learning data. It provides unified access to multi-modal time-series data, sensorimotor signals and multi-camera video, as well as rich metadata for indexing, search, and visualization on the Hugging Face Hub.

This guide will show you how to:

- Understand the v3.0 design and directory layout
- Record a dataset and push it to the Hub
- Load datasets for training with `LeRobotDataset`
- Stream datasets without downloading using `StreamingLeRobotDataset`
- Apply image transforms for data augmentation during training
- Migrate existing `v2.1` datasets to `v3.0`

## What’s new in `v3`

- **File-based storage**: Many episodes per Parquet/MP4 file (v2 used one file per episode).
- **Relational metadata**: Episode boundaries and lookups are resolved through metadata, not filenames.
- **Hub-native streaming**: Consume datasets directly from the Hub with `StreamingLeRobotDataset`.
- **Lower file-system pressure**: Fewer, larger files ⇒ faster initialization and fewer issues at scale.
- **Unified organization**: Clean directory layout with consistent path templates across data and videos.

## Installation

`LeRobotDataset v3.0` will be included in `lerobot >= 0.4.0`.

Until that stable release, you can use the main branch by following the [build from source instructions](./installation#from-source).

## Record a dataset

Run the command below to record a dataset with the SO-101 and push it to the Hub:

```bash
lerobot-record \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem585A0076841 \
    --robot.id=my_awesome_follower_arm \
    --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
    --teleop.type=so101_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm \
    --display_data=true \
    --dataset.repo_id=${HF_USER}/record-test \
    --dataset.num_episodes=5 \
    --dataset.single_task="Grab the black cube"
```

See the [recording guide](./il_robots#record-a-dataset) for more details.

## Format design

A core v3 principle is **decoupling storage from the user API**: data is stored efficiently (few large files), while the public API exposes intuitive episode-level access.

`v3` has three pillars:

1. **Tabular data**: Low-dimensional, high-frequency signals (states, actions, timestamps) stored in **Apache Parquet**. Access is memory-mapped or streamed via the `datasets` stack.
2. **Visual data**: Camera frames concatenated and encoded into **MP4**. Frames from the same episode are grouped; videos are sharded per camera for practical sizes.
3. **Metadata**: JSON/Parquet records describing the schema (feature names, dtypes, shapes), frame rates, normalization stats, and **episode segmentation** (start/end offsets into shared Parquet/MP4 files).

> To scale to millions of episodes, tabular rows and video frames from multiple episodes are **concatenated** into larger files. Episode-specific views are reconstructed **via metadata**, not file boundaries.
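
To make the idea concrete, here is a minimal sketch of how an episode view can be resolved from shared files through metadata offsets. The metadata rows, field names, and helper below are illustrative assumptions, not the actual lerobot implementation or schema.

```python
# Illustrative sketch: episode views are slices into shared files,
# resolved through per-episode metadata (field names are hypothetical).
episodes_meta = [
    {"episode_index": 0, "data_file": "file-0000.parquet", "from": 0, "to": 250},
    {"episode_index": 1, "data_file": "file-0000.parquet", "from": 250, "to": 430},
    {"episode_index": 2, "data_file": "file-0001.parquet", "from": 0, "to": 300},
]

def episode_view(episode_index: int) -> dict:
    """Return the shard and row range that reconstruct one episode."""
    meta = episodes_meta[episode_index]
    return {"file": meta["data_file"], "rows": range(meta["from"], meta["to"])}

view = episode_view(1)
print(view["file"], len(view["rows"]))  # file-0000.parquet 180
```

Note that episodes 0 and 1 live in the same Parquet shard; only the row offsets differ.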

<div style="display:flex; justify-content:center; gap:12px; flex-wrap:wrap;">
  <figure style="margin:0; text-align:center;">
    <img
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobotdataset-v3/asset1datasetv3.png"
      alt="LeRobotDataset v3 diagram"
      width="220"
    />
    <figcaption style="font-size:0.9em; color:#666;">
      From episode-based to file-based datasets
    </figcaption>
  </figure>
</div>

### Directory layout (simplified)

- **`meta/info.json`**: canonical schema (features, shapes/dtypes), FPS, codebase version, and **path templates** to locate data/video shards.
- **`meta/stats.json`**: global feature statistics (mean/std/min/max) used for normalization; exposed as `dataset.meta.stats`.
- **`meta/tasks.jsonl`**: natural-language task descriptions mapped to integer IDs for task-conditioned policies.
- **`meta/episodes/`**: per-episode records (lengths, tasks, offsets) stored as **chunked Parquet** for scalability.
- **`data/`**: frame-by-frame **Parquet** shards; each file typically contains **many episodes**.
- **`videos/`**: **MP4** shards per camera; each file typically contains **many episodes**.
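
As a rough illustration of the path templates mentioned for `meta/info.json`, resolving a template to a concrete shard path is just string formatting. The template string below is an invented example, not copied from a real `info.json`.

```python
# Hypothetical path template of the kind stored in meta/info.json.
data_path_template = "data/chunk-{chunk_index:03d}/file-{file_index:04d}.parquet"

def resolve_shard(template: str, chunk_index: int, file_index: int) -> str:
    """Fill a path template with concrete chunk/file indices."""
    return template.format(chunk_index=chunk_index, file_index=file_index)

print(resolve_shard(data_path_template, 0, 12))
# data/chunk-000/file-0012.parquet
```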

## Load a dataset for training

`LeRobotDataset` returns Python dictionaries of PyTorch tensors and integrates with `torch.utils.data.DataLoader`. Here is a code example showing its use:

```python
import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset

repo_id = "yaak-ai/L2D-v3"

# 1) Load from the Hub (cached locally)
dataset = LeRobotDataset(repo_id)

# 2) Random access by index
sample = dataset[100]
print(sample)
# {
#     'observation.state': tensor([...]),
#     'action': tensor([...]),
#     'observation.images.front_left': tensor([C, H, W]),
#     'timestamp': tensor(1.234),
#     ...
# }

# 3) Temporal windows via delta_timestamps (seconds relative to t)
delta_timestamps = {
    "observation.images.front_left": [-0.2, -0.1, 0.0]  # 0.2s before, 0.1s before, and the current frame
}

dataset = LeRobotDataset(repo_id, delta_timestamps=delta_timestamps)

# Accessing an index now returns a stack for the specified key(s)
sample = dataset[100]
print(sample["observation.images.front_left"].shape)  # [T, C, H, W], where T=3

# 4) Wrap with a DataLoader for training
batch_size = 16
data_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)

device = "cuda" if torch.cuda.is_available() else "cpu"
for batch in data_loader:
    observations = batch["observation.state"].to(device)
    actions = batch["action"].to(device)
    images = batch["observation.images.front_left"].to(device)
    # model.forward(batch)
```

## Stream a dataset (no downloads)

Use `StreamingLeRobotDataset` to iterate directly from the Hub without local copies. This lets you stream large datasets without downloading them to disk or loading them into memory, and is a key feature of the new dataset format.

```python
from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset

repo_id = "yaak-ai/L2D-v3"
dataset = StreamingLeRobotDataset(repo_id)  # streams directly from the Hub
```
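
Streaming datasets in general keep only a small buffer of samples in memory rather than the whole dataset. The following is a generic sketch of that buffered-shuffle idea, not lerobot's implementation; it only illustrates why streaming can trade perfect shuffling for bounded memory.

```python
import random
from typing import Iterable, Iterator

def buffered_shuffle(stream: Iterable, buffer_size: int, seed: int = 0) -> Iterator:
    """Yield items in pseudo-random order while holding at most
    `buffer_size` items in memory (the usual trick behind streaming shuffles)."""
    rng = random.Random(seed)
    buffer = []
    for item in stream:
        buffer.append(item)
        if len(buffer) >= buffer_size:
            # Emit a random element from the small in-memory buffer.
            yield buffer.pop(rng.randrange(len(buffer)))
    while buffer:  # drain the remaining items
        yield buffer.pop(rng.randrange(len(buffer)))

shuffled = list(buffered_shuffle(range(10), buffer_size=4))
print(sorted(shuffled) == list(range(10)))  # True: same items, new order
```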

<div style="display:flex; justify-content:center; gap:12px; flex-wrap:wrap;">
  <figure style="margin:0; text-align:center;">
    <img
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobotdataset-v3/streaming-lerobot.png"
      alt="StreamingLeRobotDataset"
      width="520"
    />
    <figcaption style="font-size:0.9em; color:#666;">
      Stream directly from the Hub for on-the-fly training.
    </figcaption>
  </figure>
</div>

## Image transforms

Image transforms are data augmentations applied to camera frames during training to improve model robustness and generalization. LeRobot supports various transforms including brightness, contrast, saturation, hue, and sharpness adjustments.

### Using transforms during dataset creation/recording

Currently, transforms are applied during **training time only**, not during recording. When you create or record a dataset, the raw images are stored without transforms. This allows you to experiment with different augmentations later without re-recording data.

### Adding transforms to existing datasets (API)

Use the `image_transforms` parameter when loading a dataset for training:

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.datasets.transforms import ImageTransforms, ImageTransformsConfig, ImageTransformConfig

# Option 1: Use default transform configuration (disabled by default)
transforms_config = ImageTransformsConfig(
    enable=True,  # Enable transforms
    max_num_transforms=3,  # Apply up to 3 transforms per frame
    random_order=False,  # Apply in standard order
)
transforms = ImageTransforms(transforms_config)

dataset = LeRobotDataset(
    repo_id="your-username/your-dataset",
    image_transforms=transforms
)

# Option 2: Create a custom transform configuration
custom_transforms_config = ImageTransformsConfig(
    enable=True,
    max_num_transforms=2,
    random_order=True,
    tfs={
        "brightness": ImageTransformConfig(
            weight=1.0,
            type="ColorJitter",
            kwargs={"brightness": (0.7, 1.3)}  # Adjust brightness range
        ),
        "contrast": ImageTransformConfig(
            weight=2.0,  # Higher weight = more likely to be selected
            type="ColorJitter",
            kwargs={"contrast": (0.8, 1.2)}
        ),
        "sharpness": ImageTransformConfig(
            weight=0.5,  # Lower weight = less likely to be selected
            type="SharpnessJitter",
            kwargs={"sharpness": (0.3, 2.0)}
        ),
    }
)

dataset = LeRobotDataset(
    repo_id="your-username/your-dataset",
    image_transforms=ImageTransforms(custom_transforms_config)
)

# Option 3: Use pure torchvision transforms
from torchvision.transforms import v2

torchvision_transforms = v2.Compose([
    v2.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
    v2.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),
])

dataset = LeRobotDataset(
    repo_id="your-username/your-dataset",
    image_transforms=torchvision_transforms
)
```

### Available transform types

LeRobot provides several transform types:

- **`ColorJitter`**: Adjusts brightness, contrast, saturation, and hue
- **`SharpnessJitter`**: Randomly adjusts image sharpness
- **`Identity`**: No transformation (useful for testing)

You can also use any `torchvision.transforms.v2` transform by passing it directly to the `image_transforms` parameter.

### Configuration options

- **`enable`**: Enable/disable transforms (default: `False`)
- **`max_num_transforms`**: Maximum number of transforms applied per frame (default: `3`)
- **`random_order`**: Apply transforms in random order vs. standard order (default: `False`)
- **`weight`**: Sampling probability for each transform (higher = more likely; if the weights do not sum to 1, they are normalized)
- **`kwargs`**: Transform-specific parameters (e.g., brightness range)
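
For intuition, the weight normalization described above amounts to dividing each weight by the sum of all weights. A generic sketch (not the lerobot implementation):

```python
def normalized_weights(weights: dict) -> dict:
    """Turn arbitrary positive weights into sampling probabilities."""
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

# With the Option 2 weights above: contrast is sampled 4x as often as sharpness.
probs = normalized_weights({"brightness": 1.0, "contrast": 2.0, "sharpness": 0.5})
print(probs)  # brightness ~0.286, contrast ~0.571, sharpness ~0.143
```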

### Visualizing transforms

Use the visualization script to preview how transforms affect your data:

```bash
lerobot-imgtransform-viz \
    --repo-id=your-username/your-dataset \
    --output-dir=./transform_examples \
    --n-examples=5
```

This saves example images showing the effect of each transform, helping you tune parameters.

### Best practices

- **Start conservative**: Begin with small ranges (e.g., brightness 0.9-1.1) and increase gradually
- **Test first**: Use the visualization script to ensure transforms look reasonable
- **Monitor training**: Strong augmentations can hurt performance if too aggressive
- **Match your domain**: If your robot operates in varying lighting, use brightness/contrast transforms
- **Combine wisely**: Using too many transforms simultaneously can make training unstable

## Migrate `v2.1` → `v3.0`

A converter aggregates per-episode files into larger shards and writes episode offsets/metadata. Convert your dataset using the instructions below.

```bash
# Pre-release build with v3 support:
pip install "https://github.com/huggingface/lerobot/archive/33cad37054c2b594ceba57463e8f11ee374fa93c.zip"

# Convert an existing v2.1 dataset hosted on the Hub:
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
```

**What it does**

- Aggregates Parquet files: `episode-0000.parquet`, `episode-0001.parquet`, … → **`file-0000.parquet`**, …
- Aggregates MP4 files: `episode-0000.mp4`, `episode-0001.mp4`, … → **`file-0000.mp4`**, …
- Updates `meta/episodes/*` (chunked Parquet) with per-episode lengths, tasks, and byte/frame offsets.

## Common Issues

### Always call `finalize()` before pushing

When creating or recording datasets, you **must** call `dataset.finalize()` to properly close the Parquet writers. See [PR #1903](https://github.com/huggingface/lerobot/pull/1903) for more details.

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Create dataset and record episodes
dataset = LeRobotDataset.create(...)

for episode in range(num_episodes):
    # Record frames
    for frame in episode_data:
        dataset.add_frame(frame)
    dataset.save_episode()

# Call finalize() when done recording and before push_to_hub()
dataset.finalize()  # Closes parquet writers, writes metadata footers
dataset.push_to_hub()
```

**Why is this necessary?**

Dataset v3.0 uses incremental Parquet writing with buffered metadata for efficiency. The `finalize()` method:

- Flushes any buffered episode metadata to disk
- Closes the Parquet writers to write footer metadata; otherwise the Parquet files will be corrupt
- Ensures the dataset is valid for loading

Without calling `finalize()`, your Parquet files will be incomplete and the dataset won't load properly.
|
lerobot/docs/source/libero.mdx
ADDED
|
@@ -0,0 +1,171 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
# LIBERO
|
| 2 |
+
|
| 3 |
+
**LIBERO** is a benchmark designed to study **lifelong robot learning**. The idea is that robots won’t just be pretrained once in a factory, they’ll need to keep learning and adapting with their human users over time. This ongoing adaptation is called **lifelong learning in decision making (LLDM)**, and it’s a key step toward building robots that become truly personalized helpers.
|
| 4 |
+
|
| 5 |
+
- 📄 [LIBERO paper](https://arxiv.org/abs/2306.03310)
|
| 6 |
+
- 💻 [Original LIBERO repo](https://github.com/Lifelong-Robot-Learning/LIBERO)
|
| 7 |
+
|
| 8 |
+
To make progress on this challenge, LIBERO provides a set of standardized tasks that focus on **knowledge transfer**: how well a robot can apply what it has already learned to new situations. By evaluating on LIBERO, different algorithms can be compared fairly and researchers can build on each other’s work.
|
| 9 |
+
|
| 10 |
+
LIBERO includes **five task suites**:
|
| 11 |
+
|
| 12 |
+
- **LIBERO-Spatial (`libero_spatial`)** – tasks that require reasoning about spatial relations.
|
| 13 |
+
- **LIBERO-Object (`libero_object`)** – tasks centered on manipulating different objects.
|
| 14 |
+
- **LIBERO-Goal (`libero_goal`)** – goal-conditioned tasks where the robot must adapt to changing targets.
|
| 15 |
+
- **LIBERO-90 (`libero_90`)** – 90 short-horizon tasks from the LIBERO-100 collection.
|
| 16 |
+
- **LIBERO-Long (`libero_10`)** – 10 long-horizon tasks from the LIBERO-100 collection.
|
| 17 |
+
|
| 18 |
+
Together, these suites cover **130 tasks**, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time, and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.
|
| 19 |
+
|
| 20 |
+

|
| 21 |
+
|
| 22 |
+
## Evaluating with LIBERO
|
| 23 |
+
|
| 24 |
+
At **LeRobot**, we ported [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO) into our framework and used it mainly to **evaluate [SmolVLA](https://huggingface.co/docs/lerobot/en/smolvla)**, our lightweight Vision-Language-Action model.
|
| 25 |
+
|
| 26 |
+
LIBERO is now part of our **multi-eval supported simulation**, meaning you can benchmark your policies either on a **single suite of tasks** or across **multiple suites at once** with just a flag.
|
| 27 |
+
|
| 28 |
+
To Install LIBERO, after following LeRobot official instructions, just do:
|
| 29 |
+
`pip install -e ".[libero]"`
|
| 30 |
+
|
| 31 |
+
### Single-suite evaluation
|
| 32 |
+
|
| 33 |
+
Evaluate a policy on one LIBERO suite:
|
| 34 |
+
|
| 35 |
+
```bash
|
| 36 |
+
lerobot-eval \
|
| 37 |
+
--policy.path="your-policy-id" \
|
| 38 |
+
--env.type=libero \
|
| 39 |
+
--env.task=libero_object \
|
| 40 |
+
--eval.batch_size=2 \
|
| 41 |
+
--eval.n_episodes=3
|
| 42 |
+
```
|
| 43 |
+
|
| 44 |
+
- `--env.task` picks the suite (`libero_object`, `libero_spatial`, etc.).
|
| 45 |
+
- `--eval.batch_size` controls how many environments run in parallel.
|
| 46 |
+
- `--eval.n_episodes` sets how many episodes to run in total.
|
| 47 |
+
|
| 48 |
+
---
|
| 49 |
+
|
| 50 |
+
### Multi-suite evaluation
|
| 51 |
+
|
| 52 |
+
Benchmark a policy across multiple suites at once:
|
| 53 |
+
|
| 54 |
+
```bash
|
| 55 |
+
lerobot-eval \
|
| 56 |
+
--policy.path="your-policy-id" \
|
| 57 |
+
--env.type=libero \
|
| 58 |
+
--env.task=libero_object,libero_spatial \
|
| 59 |
+
--eval.batch_size=1 \
|
| 60 |
+
--eval.n_episodes=2
|
| 61 |
+
```
|
| 62 |
+
|
| 63 |
+
- Pass a comma-separated list to `--env.task` for multi-suite evaluation.
|
| 64 |
+
|
| 65 |
+
### Control Mode

LIBERO now supports two control modes: relative and absolute. This matters because different VLA checkpoints are trained to output different action parameterizations, so the control mode must match the checkpoint.

You can switch between them with `env.control_mode = "relative"` or `env.control_mode = "absolute"`.
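To make the distinction concrete, here is a minimal sketch of the two parameterizations (plain illustrative Python, not the actual LIBERO wrapper code):

```python
# Illustrative sketch of the two control parameterizations (not LIBERO's actual code).
# A 7-dim action: 3 position terms, 3 orientation terms, 1 gripper term.

def apply_relative(current_pose, action):
    """Relative mode: the action is a delta added to the current end-effector pose."""
    new_pose = [p + a for p, a in zip(current_pose, action[:6])]
    return new_pose, action[6]  # gripper command passed through

def apply_absolute(current_pose, action):
    """Absolute mode: the action directly specifies the target end-effector pose."""
    return list(action[:6]), action[6]

pose = [0.1, 0.0, 0.2, 0.0, 0.0, 0.0]
act = [0.05, 0.0, -0.1, 0.0, 0.0, 0.0, 1.0]
rel_pose, _ = apply_relative(pose, act)  # moves from the current pose
abs_pose, _ = apply_absolute(pose, act)  # jumps to the commanded pose
```

A checkpoint trained on deltas will behave erratically when evaluated in absolute mode, and vice versa, which is why the flag has to match the training data.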
### Policy inputs and outputs

When using LIBERO through LeRobot, policies interact with the environment via **observations** and **actions**:

- **Observations**
  - `observation.state` – proprioceptive features (agent state).
  - `observation.images.image` – main camera view (`agentview_image`).
  - `observation.images.image2` – wrist camera view (`robot0_eye_in_hand_image`).

⚠️ **Note:** LeRobot enforces the `observation.images.*` prefix for any multi-modal visual features. Always ensure that your policy config's `input_features` use the same keys, and that your dataset metadata keys follow this convention during evaluation.
If your data contains different keys, you must rename the observations to match what the policy expects, since the keys are encoded inside the normalization statistics layer.
This will be fixed with the upcoming Pipeline PR.

- **Actions**
  - Continuous control values in a `Box(-1, 1, shape=(7,))` space.
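In practice, the renaming mentioned above amounts to a key remap applied to each observation before it reaches the policy. A minimal sketch (the `KEY_MAP` and `remap_observation` helper are hypothetical, not part of LeRobot; only the target key names come from the convention above):

```python
# Hypothetical helper: rename raw LIBERO keys to the LeRobot `observation.images.*` convention.
KEY_MAP = {
    "agentview_image": "observation.images.image",
    "robot0_eye_in_hand_image": "observation.images.image2",
    "state": "observation.state",
}

def remap_observation(obs: dict) -> dict:
    """Return a new observation dict whose keys match the policy's input_features."""
    return {KEY_MAP.get(key, key): value for key, value in obs.items()}

raw = {"agentview_image": "main_cam", "robot0_eye_in_hand_image": "wrist_cam", "state": [0.0] * 8}
remapped = remap_observation(raw)
# keys are now: observation.images.image, observation.images.image2, observation.state
```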
We also provide a notebook for quick testing: _Training with LIBERO_.
## Training with LIBERO

When training on LIBERO tasks, make sure your dataset parquet and metadata keys follow the LeRobot convention.

The environment expects:

- `observation.state` → 8-dim agent state
- `observation.images.image` → main camera (`agentview_image`)
- `observation.images.image2` → wrist camera (`robot0_eye_in_hand_image`)

⚠️ Cleaning the dataset upfront is **cleaner and more efficient** than remapping keys inside the code.
To avoid potential mismatches and key errors, we provide a **preprocessed LIBERO dataset** that is fully compatible with the current LeRobot codebase and requires no additional manipulation:
👉 [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero)

For reference, here is the **original dataset** published by Physical Intelligence:
👉 [physical-intelligence/libero](https://huggingface.co/datasets/physical-intelligence/libero)

---

### Example training command

```bash
lerobot-train \
  --policy.type=smolvla \
  --policy.repo_id=${HF_USER}/libero-test \
  --policy.load_vlm_weights=true \
  --dataset.repo_id=HuggingFaceVLA/libero \
  --env.type=libero \
  --env.task=libero_10 \
  --output_dir=./outputs/ \
  --steps=100000 \
  --batch_size=4 \
  --eval.batch_size=1 \
  --eval.n_episodes=1 \
  --eval_freq=1000
```
---

### Note on rendering

LeRobot uses MuJoCo for simulation. You need to set the rendering backend before training or evaluation:

- `export MUJOCO_GL=egl` → for headless servers (e.g. HPC, cloud)
## Reproducing π₀.₅ results

We reproduce the results of π₀.₅ on the LIBERO benchmark using the LeRobot implementation. We take the Physical Intelligence LIBERO base model (`pi05_libero`) and finetune it for an additional 6k steps in bfloat16, with a batch size of 256 on 8 H100 GPUs, using the [HuggingFace LIBERO dataset](https://huggingface.co/datasets/HuggingFaceVLA/libero).

The finetuned model can be found here:

- **π₀.₅ LIBERO**: [lerobot/pi05_libero_finetuned](https://huggingface.co/lerobot/pi05_libero_finetuned)

We then evaluate the finetuned model using the LeRobot LIBERO implementation, by running the following command:

```bash
lerobot-eval \
  --output_dir=./eval_logs/ \
  --env.type=libero \
  --env.task=libero_spatial,libero_object,libero_goal,libero_10 \
  --eval.batch_size=1 \
  --eval.n_episodes=10 \
  --policy.path=lerobot/pi05_libero_finetuned \
  --policy.n_action_steps=10 \
  --env.max_parallel_tasks=1
```

**Note:** We set `n_action_steps=10`, similar to the original OpenPI implementation.

### Results

We obtain the following results on the LIBERO benchmark:

| Model    | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average  |
| -------- | -------------- | ------------- | ----------- | --------- | -------- |
| **π₀.₅** | 97.0           | 99.0          | 98.0        | 96.0      | **97.5** |

These results are consistent with the original [results](https://github.com/Physical-Intelligence/openpi/tree/main/examples/libero#results) reported by Physical Intelligence:

| Model    | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average   |
| -------- | -------------- | ------------- | ----------- | --------- | --------- |
| **π₀.₅** | 98.8           | 98.2          | 98.0        | 92.4      | **96.85** |
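The Average column in both tables is the plain arithmetic mean of the four suite success rates, which is easy to verify:

```python
# Success rates from the two tables above; "Average" is their arithmetic mean.
lerobot_scores = [97.0, 99.0, 98.0, 96.0]  # spatial, object, goal, libero_10
openpi_scores = [98.8, 98.2, 98.0, 92.4]

def mean(xs):
    return sum(xs) / len(xs)

print(round(mean(lerobot_scores), 2))  # 97.5
print(round(mean(openpi_scores), 2))   # 96.85
```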
lerobot/docs/source/metaworld.mdx
# Meta-World

Meta-World is a well-designed, open-source simulation benchmark for multi-task and meta-reinforcement learning in continuous-control robotic manipulation. It gives researchers a shared, realistic playground to test whether algorithms can _learn many different tasks_ and _generalize quickly to new ones_ — two central challenges for real-world robotics.

- 📄 [MetaWorld paper](https://arxiv.org/pdf/1910.10897)
- 💻 [Original MetaWorld repo](https://github.com/Farama-Foundation/Metaworld)



## Why Meta-World matters

- **Diverse, realistic tasks.** Meta-World bundles a large suite of simulated manipulation tasks (50 in the MT50 suite) using everyday objects and a common tabletop Sawyer arm. This diversity exposes algorithms to a wide variety of dynamics, contacts and goal specifications while keeping a consistent control and observation structure.
- **Focus on generalization and multi-task learning.** By evaluating across task distributions that share structure but differ in goals and objects, Meta-World reveals whether an agent truly learns transferable skills rather than overfitting to a narrow task.
- **Standardized evaluation protocol.** It provides clear evaluation modes and difficulty splits, so different methods can be compared fairly across easy, medium, hard and very-hard regimes.
- **Empirical insight.** Past evaluations on Meta-World show impressive progress on some fronts, but also highlight that current multi-task and meta-RL methods still struggle with large, diverse task sets. That gap points to important research directions.

## What it enables in LeRobot

In LeRobot, you can evaluate any policy or vision-language-action (VLA) model on Meta-World tasks and get a clear success-rate measure. The integration is designed to be straightforward:

- We provide a LeRobot-ready dataset for Meta-World (MT50) on the HF Hub: [lerobot/metaworld_mt50](https://huggingface.co/datasets/lerobot/metaworld_mt50).
- This dataset is formatted for the MT50 evaluation that uses all 50 tasks (the most challenging multi-task setting).
- MT50 gives the policy a one-hot task vector and uses fixed object/goal positions for consistency.

- Task descriptions and the exact keys required for evaluation are available in the repo/dataset — use these to ensure your policy outputs the right success signals.
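The one-hot task conditioning mentioned above is just a 50-dim indicator vector appended to the observation; a sketch (illustrative only, the environment constructs it for you):

```python
MT50_NUM_TASKS = 50

def one_hot_task(task_index, num_tasks=MT50_NUM_TASKS):
    """Indicator vector telling the policy which of the 50 tasks it is solving."""
    vec = [0.0] * num_tasks
    vec[task_index] = 1.0
    return vec

task_vec = one_hot_task(3)  # 50 entries, a single 1.0 at index 3
```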
## Quick start — train a SmolVLA policy on Meta-World

Example command to train a SmolVLA policy on a subset of tasks:

```bash
lerobot-train \
  --policy.type=smolvla \
  --policy.repo_id=${HF_USER}/metaworld-test \
  --policy.load_vlm_weights=true \
  --dataset.repo_id=lerobot/metaworld_mt50 \
  --env.type=metaworld \
  --env.task=assembly-v3,dial-turn-v3,handle-press-side-v3 \
  --output_dir=./outputs/ \
  --steps=100000 \
  --batch_size=4 \
  --eval.batch_size=1 \
  --eval.n_episodes=1 \
  --eval_freq=1000
```

Notes:

- `--env.task` accepts explicit task lists (comma separated) or difficulty groups (e.g., `env.task="hard"`).
- Adjust `batch_size`, `steps`, and `eval_freq` to match your compute budget.
- **Gymnasium Assertion Error**: if you encounter an error like
  `AssertionError: ['human', 'rgb_array', 'depth_array']` when running MetaWorld environments, it comes from a version mismatch between MetaWorld and your installed Gymnasium.
  We recommend using:

```bash
pip install "gymnasium==1.1.0"
```

to ensure proper compatibility.
## Quick start — evaluate a trained policy

To evaluate a trained policy on the Meta-World medium difficulty split:

```bash
lerobot-eval \
  --policy.path="your-policy-id" \
  --env.type=metaworld \
  --env.task=medium \
  --eval.batch_size=1 \
  --eval.n_episodes=2
```

This will run episodes and return per-task success rates using the standard Meta-World evaluation keys.

## Practical tips

- If you care about generalization, run on the full MT50 suite — it’s intentionally challenging and reveals strengths/weaknesses better than a few narrow tasks.
- Use the one-hot task conditioning for multi-task training (MT10 / MT50 conventions) so policies have explicit task context.
- Inspect the dataset task descriptions and the `info["is_success"]` keys when writing post-processing or logging so your success metrics line up with the benchmark.
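Aggregating per-task success rates from episode `info` dicts can be sketched like this (the `info["is_success"]` key follows the convention noted above; the aggregation helper itself is illustrative, not LeRobot code):

```python
from collections import defaultdict

def per_task_success_rates(episodes):
    """episodes: iterable of (task_name, info) pairs from finished rollouts."""
    totals = defaultdict(int)
    successes = defaultdict(int)
    for task, info in episodes:
        totals[task] += 1
        successes[task] += int(bool(info.get("is_success", False)))
    return {task: successes[task] / totals[task] for task in totals}

episodes = [
    ("assembly-v3", {"is_success": True}),
    ("assembly-v3", {"is_success": False}),
    ("dial-turn-v3", {"is_success": True}),
]
rates = per_task_success_rates(episodes)  # assembly-v3: 0.5, dial-turn-v3: 1.0
```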
lerobot/docs/source/multi_gpu_training.mdx
# Multi-GPU Training

This guide shows you how to train policies on multiple GPUs using [Hugging Face Accelerate](https://huggingface.co/docs/accelerate).

## Installation

First, ensure you have accelerate installed:

```bash
pip install accelerate
```

## Training with Multiple GPUs

You can launch training in two ways:

### Option 1: Without config (specify parameters directly)

You can specify all parameters directly in the command without running `accelerate config`:

```bash
accelerate launch \
  --multi_gpu \
  --num_processes=2 \
  $(which lerobot-train) \
  --dataset.repo_id=${HF_USER}/my_dataset \
  --policy.type=act \
  --policy.repo_id=${HF_USER}/my_trained_policy \
  --output_dir=outputs/train/act_multi_gpu \
  --job_name=act_multi_gpu \
  --wandb.enable=true
```

**Key accelerate parameters:**

- `--multi_gpu`: Enable multi-GPU training
- `--num_processes=2`: Number of GPUs to use
- `--mixed_precision=fp16`: Use fp16 mixed precision (or `bf16` if supported)

### Option 2: Using accelerate config

If you prefer to save your configuration, you can optionally configure accelerate for your hardware setup by running:

```bash
accelerate config
```

This interactive setup will ask you questions about your training environment (number of GPUs, mixed precision settings, etc.) and save the configuration for future use. For a simple multi-GPU setup on a single machine, you can use these recommended settings:

- Compute environment: This machine
- Number of machines: 1
- Number of processes: (number of GPUs you want to use)
- GPU ids to use: (leave empty to use all)
- Mixed precision: fp16 or bf16 (recommended for faster training)

Then launch training with:

```bash
accelerate launch $(which lerobot-train) \
  --dataset.repo_id=${HF_USER}/my_dataset \
  --policy.type=act \
  --policy.repo_id=${HF_USER}/my_trained_policy \
  --output_dir=outputs/train/act_multi_gpu \
  --job_name=act_multi_gpu \
  --wandb.enable=true
```
## How It Works

When you launch training with accelerate:

1. **Automatic detection**: LeRobot automatically detects if it's running under accelerate
2. **Data distribution**: Your batch is automatically split across GPUs
3. **Gradient synchronization**: Gradients are synchronized across GPUs during backpropagation
4. **Single process logging**: Only the main process logs to wandb and saves checkpoints

## Learning Rate and Training Steps Scaling

**Important:** LeRobot does **NOT** automatically scale learning rates or training steps based on the number of GPUs. This gives you full control over your training hyperparameters.

### Why No Automatic Scaling?

Many distributed training frameworks automatically scale the learning rate by the number of GPUs (e.g., `lr = base_lr × num_gpus`).
However, LeRobot keeps the learning rate exactly as you specify it.

### When and How to Scale

If you want to scale your hyperparameters when using multiple GPUs, you should do it manually:

**Learning Rate Scaling:**

```bash
# Example: 2 GPUs with linear LR scaling
# Base LR: 1e-4, with 2 GPUs -> 2e-4
accelerate launch --num_processes=2 $(which lerobot-train) \
  --optimizer.lr=2e-4 \
  --dataset.repo_id=lerobot/pusht \
  --policy=act
```

**Training Steps Scaling:**

Since the effective batch size increases with multiple GPUs (`batch_size × num_gpus`), you may want to reduce the number of training steps proportionally:

```bash
# Example: 2 GPUs with an effective batch size 2x larger
# Original: batch_size=8, steps=100000
# With 2 GPUs: batch_size=8 (16 in total), steps=50000
accelerate launch --num_processes=2 $(which lerobot-train) \
  --batch_size=8 \
  --steps=50000 \
  --dataset.repo_id=lerobot/pusht \
  --policy=act
```
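The manual scaling rules above boil down to simple arithmetic; a sketch (not LeRobot code, just the two rules spelled out):

```python
def scale_for_gpus(base_lr, base_steps, per_gpu_batch, num_gpus):
    """Linear LR scaling and proportional step reduction for multi-GPU runs."""
    return {
        "lr": base_lr * num_gpus,                        # linear LR scaling
        "steps": base_steps // num_gpus,                 # keep total samples seen constant
        "effective_batch_size": per_gpu_batch * num_gpus,
    }

cfg = scale_for_gpus(base_lr=1e-4, base_steps=100_000, per_gpu_batch=8, num_gpus=2)
# lr=2e-4, steps=50000, effective_batch_size=16, matching the examples above
```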
## Notes

- The `--policy.use_amp` flag in `lerobot-train` is only used when **not** running with accelerate. When using accelerate, mixed precision is controlled by accelerate's configuration.
- Training logs, checkpoints, and hub uploads are only done by the main process to avoid conflicts. Non-main processes have console logging disabled to prevent duplicate output.
- The effective batch size is `batch_size × num_gpus`. If you use 4 GPUs with `--batch_size=8`, your effective batch size is 32.
- Learning rate scheduling is handled correctly across multiple processes: LeRobot sets `step_scheduler_with_optimizer=False` to prevent accelerate from adjusting scheduler steps based on the number of processes.
- When saving or pushing models, LeRobot automatically unwraps the model from accelerate's distributed wrapper to ensure compatibility.
- WandB integration automatically initializes only on the main process, preventing multiple runs from being created.

For more advanced configurations and troubleshooting, see the [Accelerate documentation](https://huggingface.co/docs/accelerate). If you want to learn more about how to train on a large number of GPUs, check out this awesome guide: [Ultrascale Playbook](https://huggingface.co/spaces/nanotron/ultrascale-playbook).
lerobot/docs/source/notebooks.mdx
# 🤗 LeRobot Notebooks

This repository contains example notebooks for using LeRobot. These notebooks demonstrate how to train policies on real or simulation datasets using standardized policies.

---

### Training ACT

[ACT](https://huggingface.co/papers/2304.13705) (Action Chunking Transformer) is a transformer-based policy architecture for imitation learning that processes robot states and camera inputs to generate smooth, chunked action sequences.

We provide a ready-to-run Google Colab notebook to help you train ACT policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.

| Notebook                                                                                                | Colab                                                                                                                                                                             |
| :------------------------------------------------------------------------------------------------------ | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Train ACT with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) |

Expected training time for 100k steps: ~1.5 hours on an NVIDIA A100 GPU with a batch size of `64`.

### Training SmolVLA

[SmolVLA](https://huggingface.co/papers/2506.01844) is a small but efficient Vision-Language-Action model developed by Hugging Face, compact at 450M parameters.

We provide a ready-to-run Google Colab notebook to help you train SmolVLA policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.

| Notebook                                                                                                        | Colab                                                                                                                                                                                     |
| :-------------------------------------------------------------------------------------------------------------- | :---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| [Train SmolVLA with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb) | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb) |

Expected training time for 20k steps: ~5 hours on an NVIDIA A100 GPU with a batch size of `64`.
lerobot/docs/source/peft_training.mdx
# Parameter efficient fine-tuning with 🤗 PEFT

[🤗 PEFT](https://github.com/huggingface/peft) (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting
large pretrained models, such as pre-trained policies (e.g., SmolVLA, π₀, ...), to new tasks without training all
of the model's parameters, while yielding comparable performance.

Install the `lerobot[peft]` optional package to enable PEFT support.

To read about all the possible methods of adaptation, please refer to the [🤗 PEFT docs](https://huggingface.co/docs/peft/index).

## Training SmolVLA

In this section we'll show you how to train a pre-trained SmolVLA policy with PEFT on the LIBERO dataset.
For brevity we're only training on the `libero_spatial` subset. We will use `lerobot/smolvla_base` as the model
to parameter-efficiently fine-tune:

```
lerobot-train \
  --policy.path=lerobot/smolvla_base \
  --policy.repo_id=your_hub_name/my_libero_smolvla \
  --dataset.repo_id=HuggingFaceVLA/libero \
  --policy.output_features=null \
  --policy.input_features=null \
  --policy.optimizer_lr=1e-3 \
  --policy.scheduler_decay_lr=1e-4 \
  --env.type=libero \
  --env.task=libero_spatial \
  --steps=100000 \
  --batch_size=32 \
  --peft.method_type=LORA \
  --peft.r=64
```

Note the `--peft.method_type` parameter that lets you select which PEFT method to use. Here we use
[LoRA](https://huggingface.co/docs/peft/main/en/package_reference/lora) (Low-Rank Adaptation), which is probably the most
popular fine-tuning method to date. Low-rank adaptation means that we only fine-tune a matrix with comparably low rank
instead of the full weight matrix. This rank can be specified using the `--peft.r` parameter. The higher the rank,
the closer you get to full fine-tuning.
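To see why this is parameter-efficient: a rank-r adapter trains two small matrices, B of shape (d_out, r) and A of shape (r, d_in), instead of updating the full (d_out, d_in) weight. A back-of-the-envelope sketch (the dimensions are illustrative, not SmolVLA's actual sizes):

```python
def lora_param_counts(d_in, d_out, r):
    """Trainable parameters: full fine-tuning vs. a rank-r LoRA adapter."""
    full = d_in * d_out        # the whole weight matrix
    lora = r * (d_in + d_out)  # B is (d_out, r), A is (r, d_in)
    return full, lora

full, lora = lora_param_counts(d_in=960, d_out=960, r=64)
# full=921600, lora=122880 -> the adapter trains 7.5x fewer parameters for this layer
```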
There are more complex methods that have more parameters. These are not yet supported; feel free to raise an issue
if you want to see a specific PEFT method supported.

By default, PEFT will target the `q_proj` and `v_proj` layers of the LM expert in SmolVLA. It will also target the
state and action projection matrices, as they are most likely task-dependent. If you need to target different layers,
you can use `--peft.target_modules` to specify which layers to target. You can refer to the respective PEFT method's
documentation to see what inputs are supported (e.g., [LoRA's target_modules documentation](https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraConfig.target_modules)).
Usually a list of suffixes or a regex is supported. For example, to target the MLPs of the `lm_expert` instead of
the `q` and `v` projections, use:

```
--peft.target_modules='(model\.vlm_with_expert\.lm_expert\..*\.(down|gate|up)_proj|.*\.(state_proj|action_in_proj|action_out_proj|action_time_mlp_in|action_time_mlp_out))'
```

In case you need to fully fine-tune a layer instead of just adapting it, you can supply a list of layer suffixes
to the `--peft.full_training_modules` parameter:

```
--peft.full_training_modules=["state_proj"]
```

The learning rate and the scheduled target learning rate can usually be scaled by a factor of 10 compared to the
learning rate used for full fine-tuning (e.g., 1e-4 normally, so 1e-3 using LoRA).
lerobot/docs/source/phone_teleop.mdx
# Phone

Use your phone (iOS or Android) to control your robot.

**In this guide you'll learn:**

- How to connect an iOS/Android phone
- How phone pose is mapped to robot end‑effector (EE) targets
- How to tweak safety limits, gripper control, and IK settings

To use a phone to control your robot, install the relevant dependencies with:

```bash
pip install lerobot[phone]
```

## Get started

### Supported platforms

- iOS: Uses the HEBI Mobile I/O app (ARKit pose + buttons). Download the app first and open it; the examples will discover it on your network and stream the phone pose and inputs.
- Android: Uses the `teleop` package (WebXR). When you start the Python process, it prints a local URL. Open the link on your phone, tap Start, then use Move to stream pose.

Links:

- Android WebXR library: [`teleop` on PyPI](https://pypi.org/project/teleop/)
- iOS app: [HEBI Mobile I/O](https://docs.hebi.us/tools.html#mobile-io)

### Phone orientation and controls

- Orientation: hold the phone with the screen facing up and the top edge pointing in the same direction as the robot gripper. This ensures calibration aligns the phone’s frame with the robot frame so motion feels natural; see the image below for reference.
- Enable/disable:
  - iOS: Hold `B1` to enable teleoperation, release to stop. The first press captures a reference pose.
  - Android: Press and hold the `Move` button, release to stop. The first press captures a reference pose.
- Gripper control:
  - iOS: Analog input `A3` controls the gripper as a velocity input.
  - Android: Buttons `A` and `B` act as increment/decrement (A opens, B closes). You can tune the velocity in the `GripperVelocityToJoint` step.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/phone_teleop.webp" alt="Phone teleop orientation" title="Phone teleop orientation" width="40%">
### Step 1: Choose the platform
|
| 42 |
+
|
| 43 |
+
Modify the examples to use `PhoneOS.IOS` or `PhoneOS.ANDROID` in `PhoneConfig`. The API is identical across platforms, only the input source differs. All examples are under `examples/` and have `phone_so100_*.py` variants.
|
| 44 |
+
|
| 45 |
+
Teleoperation example:
|
| 46 |
+
|
| 47 |
+
```python
|
| 48 |
+
from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
|
| 49 |
+
|
| 50 |
+
teleop_config = PhoneConfig(phone_os=PhoneOS.IOS) # or PhoneOS.ANDROID
|
| 51 |
+
teleop_device = Phone(teleop_config)
|
| 52 |
+
```
|
| 53 |
+
|
| 54 |
+
### Step 2: Connect and calibrate
|
| 55 |
+
|
| 56 |
+
When `Phone(teleop_config)` is created and `connect()` is called, calibration is prompted automatically. Hold the phone in the orientation described above, then:
|
| 57 |
+
|
| 58 |
+
- iOS: press and hold `B1` to capture the reference pose.
|
| 59 |
+
- Android: press `Move` button on the WebXR page to capture the reference pose.
|
| 60 |
+
|
| 61 |
+
Why calibrate? We capture the current pose so subsequent poses are expressed in a robot aligned frame. When you again press the button to enable control, the position is recaptured to avoid drift when your phone is repositioned while it was disabled.
|
| 62 |
+
|
| 63 |
+
### Step 3: Run an example
|
| 64 |
+
|
| 65 |
+
Run on of the examples scripts to teleoperate, record a dataset, replay a dataset or evaluate a policy.
|
| 66 |
+
|
| 67 |
+
All scripts assume you configured your robot (e.g., SO-100 follower) and set the correct serial port.
|
| 68 |
+
|
| 69 |
+
Additionally you need to **copy the urdf of the robot to the examples folder**. For the examples in this tutorial (Using SO100/SO101) it is highly recommended to use the urdf in the [SO-ARM100 repo](https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf)
|
| 70 |
+
|
| 71 |
+
- Run this example to teleoperate:
|
| 72 |
+
|
| 73 |
+
```bash
|
| 74 |
+
python examples/phone_to_so100/teleoperate.py
|
| 75 |
+
```
|
| 76 |
+
|
| 77 |
+
After running the example:
|
| 78 |
+
|
| 79 |
+
- Android: after starting the script, open the printed local URL on your phone, tap Start, then press and hold Move.
|
| 80 |
+
- iOS: open HEBI Mobile I/O first; B1 enables motion. A3 controls the gripper.
|
| 81 |
+
|
| 82 |
+
Additionally you can customize mapping or safety limits by editing the processor steps shown in the examples. You can also remap inputs (e.g., use a different analog input) or adapt the pipeline to other robots (e.g., LeKiwi) by modifying the input and kinematics steps. More about this in the [Processors for Robots and Teleoperators](./processors_robots_teleop) guide.
|
| 83 |
+
|
| 84 |
+
- Run this example to record a dataset, which saves absolute end-effector observations and actions:

```bash
python examples/phone_to_so100/record.py
```

- Run this example to replay recorded episodes:

```bash
python examples/phone_to_so100/replay.py
```

- Run this example to evaluate a pretrained policy:

```bash
python examples/phone_to_so100/evaluate.py
```

### Important pipeline steps and options

- Kinematics are used in multiple steps. We use [Placo](https://github.com/Rhoban/placo), a wrapper around Pinocchio, to handle our kinematics. We construct the kinematics object by passing the robot's URDF and the target frame, setting `target_frame_name` to the gripper frame.

```python
kinematics_solver = RobotKinematics(
    urdf_path="./SO101/so101_new_calib.urdf",
    target_frame_name="gripper_frame_link",
    joint_names=list(robot.bus.motors.keys()),
)
```

- The `MapPhoneActionToRobotAction` step converts the calibrated phone pose and inputs into target deltas and gripper commands. Its outputs are shown below:

```python
action["enabled"] = enabled
action["target_x"] = -pos[1] if enabled else 0.0
action["target_y"] = pos[0] if enabled else 0.0
action["target_z"] = pos[2] if enabled else 0.0
action["target_wx"] = rotvec[1] if enabled else 0.0
action["target_wy"] = rotvec[0] if enabled else 0.0
action["target_wz"] = -rotvec[2] if enabled else 0.0
action["gripper_vel"] = gripper_vel  # Still send gripper action when disabled
```

- The `EEReferenceAndDelta` step converts target deltas into an absolute desired EE pose, storing a reference pose on enable. The `end_effector_step_sizes` are the per-axis step sizes for the EE pose and can be modified to change the motion speed.

```python
EEReferenceAndDelta(
    kinematics=kinematics_solver,
    end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
    motor_names=list(robot.bus.motors.keys()),
    use_latched_reference=True,
),
```
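
To make the latching behavior concrete, here is a minimal plain-Python sketch of the idea (class and method names are hypothetical, not the actual LeRobot step):

```python
# Minimal sketch of the latched-reference idea: on the rising edge of
# `enabled`, store the current EE position as the reference, then add
# scaled phone deltas to it. Names are hypothetical, not LeRobot's API.

class LatchedReference:
    def __init__(self, step_sizes):
        self.step_sizes = step_sizes  # per-axis scale, e.g. {"x": 0.5, ...}
        self.reference = None
        self.was_enabled = False

    def absolute_target(self, current_ee, delta, enabled):
        if enabled and not self.was_enabled:
            # Rising edge: latch the current pose as the new reference.
            self.reference = dict(current_ee)
        self.was_enabled = enabled
        if not enabled or self.reference is None:
            return dict(current_ee)  # hold position while disabled
        return {axis: self.reference[axis] + self.step_sizes[axis] * delta[axis]
                for axis in self.reference}
```

Because the reference is re-latched on every enable, repositioning the phone while disabled does not move the robot.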

- The `EEBoundsAndSafety` step clamps EE motion to a workspace and checks for large EE step jumps to ensure safety. The `end_effector_bounds` define the workspace and can be modified to change it; `max_ee_step_m` is the maximum allowed EE step per frame and can be modified to change the safety limit.

```python
EEBoundsAndSafety(
    end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
    max_ee_step_m=0.10,
)
```
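
The clamping logic amounts to limiting the per-frame step and then boxing the result into the workspace. A minimal sketch, with hypothetical names and a simplified position-only interface:

```python
# Minimal sketch of the safety step for positions only: limit the
# per-frame jump, then clamp into the workspace box. Names hypothetical.

def clamp_ee_target(target, previous, bounds_min, bounds_max, max_step):
    safe = []
    for t, p, lo, hi in zip(target, previous, bounds_min, bounds_max):
        step = max(-max_step, min(max_step, t - p))  # rate-limit the jump
        safe.append(max(lo, min(hi, p + step)))      # keep inside workspace
    return safe
```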

- The `GripperVelocityToJoint` step turns a velocity-like gripper input into an absolute gripper position using the current measured state. The `speed_factor` is the factor by which the velocity is multiplied.

```python
GripperVelocityToJoint(speed_factor=20.0)
```
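
The integration itself is a one-liner; a minimal sketch (hypothetical function name and joint range, assuming the position is commanded once per frame):

```python
# Minimal sketch: integrate a velocity-like gripper command into an
# absolute position, clamped to the joint range (hypothetical 0-100).

def gripper_target(current_pos, gripper_vel, speed_factor, lo=0.0, hi=100.0):
    return max(lo, min(hi, current_pos + speed_factor * gripper_vel))
```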

#### Different IK initial guesses

The kinematic steps use different IK initial guesses: either the current measured joints or the previous IK solution.

- Closed loop (used in record/eval): set `initial_guess_current_joints=True` so IK starts from the measured joints each frame.

```python
InverseKinematicsEEToJoints(
    kinematics=kinematics_solver,
    motor_names=list(robot.bus.motors.keys()),
    initial_guess_current_joints=True,  # closed loop
)
```

- Open loop (used in replay): set `initial_guess_current_joints=False` so IK continues from the previous IK solution rather than the measured state. This preserves action stability when replaying without feedback.

```python
InverseKinematicsEEToJoints(
    kinematics=kinematics_solver,
    motor_names=list(robot.bus.motors.keys()),
    initial_guess_current_joints=False,  # open loop
)
```
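
Why the initial guess matters can be seen on a toy 2-link planar arm, where IK has two solution branches (elbow up/down) and the guess selects the branch. This is a didactic sketch, unrelated to Placo's actual solver:

```python
import math

# Toy 2-link planar arm (link lengths 1): analytic IK has two solution
# branches (elbow-up / elbow-down), and the initial guess selects the
# branch. Didactic sketch only, unrelated to Placo's actual solver.
L1, L2 = 1.0, 1.0

def fk(q1, q2):
    return (L1 * math.cos(q1) + L2 * math.cos(q1 + q2),
            L1 * math.sin(q1) + L2 * math.sin(q1 + q2))

def ik(x, y, q_guess):
    c2 = (x * x + y * y - L1 * L1 - L2 * L2) / (2 * L1 * L2)
    c2 = max(-1.0, min(1.0, c2))  # clamp for targets at the reach limit
    candidates = []
    for q2 in (math.acos(c2), -math.acos(c2)):  # the two elbow branches
        q1 = math.atan2(y, x) - math.atan2(L2 * math.sin(q2),
                                           L1 + L2 * math.cos(q2))
        candidates.append((q1, q2))
    # Return the branch closest to the initial guess.
    return min(candidates,
               key=lambda q: (q[0] - q_guess[0]) ** 2 + (q[1] - q_guess[1]) ** 2)

prev = ik(1.20, 0.8, (0.0, 1.0))  # first frame: guess from measured joints
q = ik(1.25, 0.8, prev)           # next frame: warm-start from previous solution
```

Warm-starting from the previous solution keeps successive frames on the same branch, which is what makes open-loop replay stable; starting from the measured joints instead ties the solution to the robot's actual state.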

### Pipeline steps explained

- MapPhoneActionToRobotAction: converts the calibrated phone pose and inputs into target deltas and a gripper command. Motion is gated by an enable signal (B1 on iOS, Move on Android).
- EEReferenceAndDelta: latches a reference EE pose on enable and combines it with target deltas to produce an absolute desired EE pose each frame. When disabled, it keeps sending the last commanded pose.
- EEBoundsAndSafety: clamps the EE pose to a workspace and rate-limits jumps for safety. Also declares the `action.ee.*` features.
- InverseKinematicsEEToJoints: turns an EE pose into joint positions with IK. `initial_guess_current_joints=True` is recommended for closed-loop control; set it to `False` for open-loop replay stability.
- GripperVelocityToJoint: integrates a velocity-like gripper input into an absolute gripper position using the current measured state.
- ForwardKinematicsJointsToEE: computes `observation.state.ee.*` from the observed joints for logging and for training on EE state.

### Troubleshooting

- iOS not discovered: ensure HEBI Mobile I/O is open and your laptop and phone are on the same network.
- Android URL not reachable: make sure you used `https` rather than `http`, use the exact IP printed by the script, and tell your browser to proceed past the self-signed certificate warning.
- Motion feels inverted: adjust the sign flips in `MapPhoneActionToRobotAction` or swap axes to match your setup.
lerobot/docs/source/pi0.mdx
# π₀ (Pi0)

π₀ is a **Vision-Language-Action model for general robot control** from Physical Intelligence. The LeRobot implementation is adapted from their open-source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.

## Model Overview

π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi0). Unlike traditional robot programs, which are narrow specialists built for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-pi0%20(1).png"
  alt="An overview of Pi0"
  width="85%"
/>

### The Vision for Physical Intelligence

As described by Physical Intelligence, while AI has achieved remarkable success in digital domains, from chess-playing to drug discovery, human intelligence still dramatically outpaces AI in the physical world. To paraphrase Moravec's paradox, winning a game of chess is an "easy" problem for AI, but folding a shirt or cleaning up a table requires solving some of the most difficult engineering problems ever conceived. π₀ represents a first step toward artificial physical intelligence that lets users simply ask robots to perform any task, just as they can with large language models.

### Architecture and Approach

π₀ combines several key innovations:

- **Flow Matching**: Augments pre-trained VLMs with continuous action outputs via flow matching (a variant of diffusion models)
- **Cross-Embodiment Training**: Trained on data from 8 distinct robot platforms, including UR5e, Bimanual UR5e, Franka, Bimanual Trossen, Bimanual ARX, Mobile Trossen, and Mobile Fibocom
- **Internet-Scale Pre-training**: Inherits semantic knowledge from a pre-trained 3B-parameter Vision-Language Model
- **High-Frequency Control**: Outputs motor commands at up to 50 Hz for real-time dexterous manipulation
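
The flow-matching objective behind the first bullet can be illustrated with a scalar toy example (a generic conditional flow-matching sketch, not π₀'s actual training code):

```python
# Conditional flow matching, scalar toy version: pick noise x0 and a
# ground-truth action x1, form the straight-line interpolant
# x_t = (1 - t) * x0 + t * x1, and train the network to predict the
# constant path velocity x1 - x0 at (x_t, t).

def flow_matching_target(x0, x1, t):
    x_t = (1.0 - t) * x0 + t * x1  # point on the probability path
    target_v = x1 - x0             # velocity the network should regress to
    return x_t, target_v

# At inference, actions are produced by integrating the learned velocity
# field from noise; with the true velocity, Euler integration recovers x1.
def integrate(x0, x1, steps=10):
    x = x0
    for i in range(steps):
        _, v = flow_matching_target(x0, x1, i / steps)
        x += v / steps
    return x
```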

## Installation Requirements

1. Install LeRobot by following our [Installation Guide](./installation).
2. Install the π₀ dependencies by running:

```bash
pip install -e ".[pi]"
```

> [!NOTE]
> For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
>
> This will be fixed in the next patch release.

## Training Data and Capabilities

π₀ is trained on the largest robot interaction dataset to date, combining three key data sources:

1. **Internet-Scale Pre-training**: Vision-language data from the web for semantic understanding
2. **Open X-Embodiment Dataset**: Open-source robot manipulation datasets
3. **Physical Intelligence Dataset**: A large and diverse dataset of dexterous tasks across 8 distinct robots

## Usage

To use π₀ in LeRobot, specify the policy type as:

```python
policy.type=pi0
```

## Training

For training π₀, you can use the standard LeRobot training script with the appropriate configuration:

```bash
python src/lerobot/scripts/lerobot_train.py \
  --dataset.repo_id=your_dataset \
  --policy.type=pi0 \
  --output_dir=./outputs/pi0_training \
  --job_name=pi0_training \
  --policy.pretrained_path=lerobot/pi0_base \
  --policy.repo_id=your_repo_id \
  --policy.compile_model=true \
  --policy.gradient_checkpointing=true \
  --policy.dtype=bfloat16 \
  --policy.freeze_vision_encoder=false \
  --policy.train_expert_only=false \
  --steps=3000 \
  --policy.device=cuda \
  --batch_size=32
```

### Key Training Parameters

- **`--policy.compile_model=true`**: Enables model compilation for faster training
- **`--policy.gradient_checkpointing=true`**: Reduces memory usage significantly during training
- **`--policy.dtype=bfloat16`**: Use mixed-precision training for efficiency
- **`--batch_size=32`**: Batch size for training; adapt this based on your GPU memory
- **`--policy.pretrained_path=lerobot/pi0_base`**: The base π₀ model you want to finetune; options are:
  - [lerobot/pi0_base](https://huggingface.co/lerobot/pi0_base)
  - [lerobot/pi0_libero](https://huggingface.co/lerobot/pi0_libero) (specifically trained on the Libero dataset)

### Training Parameters Explained

| Parameter               | Default | Description                                 |
| ----------------------- | ------- | ------------------------------------------- |
| `freeze_vision_encoder` | `false` | Do not freeze the vision encoder            |
| `train_expert_only`     | `false` | Do not freeze the VLM; train all parameters |

**💡 Tip**: Setting `train_expert_only=true` freezes the VLM and trains only the action expert and projections, allowing finetuning with reduced memory usage.

## License

This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).

lerobot/docs/source/pi05.mdx
# π₀.₅ (Pi05) Policy

π₀.₅ is a **Vision-Language-Action model with open-world generalization** from Physical Intelligence. The LeRobot implementation is adapted from their open-source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.

## Model Overview

π₀.₅ represents a significant evolution from π₀, developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi05) to address a central challenge in robotics: **open-world generalization**. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.

### The Generalization Challenge

As Physical Intelligence explains, the fundamental challenge isn't agility or dexterity but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:

- **Physical Level**: Understanding how to pick up a spoon (by the handle) or a plate (by the edge), even with unseen objects in cluttered environments
- **Semantic Level**: Understanding task semantics, such as where to put clothes and shoes (the laundry hamper, not the bed) and which tools are appropriate for cleaning spills
- **Environmental Level**: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals

### Co-Training on Heterogeneous Data

The breakthrough innovation in π₀.₅ is **co-training on heterogeneous data sources**. The model learns from:

1. **Multimodal Web Data**: Image captioning, visual question answering, object detection
2. **Verbal Instructions**: Humans coaching robots through complex tasks step by step
3. **Subtask Commands**: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
4. **Cross-Embodiment Robot Data**: Data from various robot platforms with different capabilities
5. **Multi-Environment Data**: Static robots deployed across many different homes
6. **Mobile Manipulation Data**: ~400 hours of mobile robot demonstrations

This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.

## Installation Requirements

1. Install LeRobot by following our [Installation Guide](./installation).
2. Install the π₀.₅ dependencies by running:

```bash
pip install -e ".[pi]"
```

> [!NOTE]
> For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
>
> This will be fixed in the next patch release.

## Usage

To use π₀.₅ in your LeRobot configuration, specify the policy type as:

```python
policy.type=pi05
```

## Training

### Training Command Example

Here is a complete training command for finetuning the base π₀.₅ model on your own dataset:

```bash
python src/lerobot/scripts/lerobot_train.py \
  --dataset.repo_id=your_dataset \
  --policy.type=pi05 \
  --output_dir=./outputs/pi05_training \
  --job_name=pi05_training \
  --policy.repo_id=your_repo_id \
  --policy.pretrained_path=lerobot/pi05_base \
  --policy.compile_model=true \
  --policy.gradient_checkpointing=true \
  --wandb.enable=true \
  --policy.dtype=bfloat16 \
  --policy.freeze_vision_encoder=false \
  --policy.train_expert_only=false \
  --steps=3000 \
  --policy.device=cuda \
  --batch_size=32
```

### Key Training Parameters

- **`--policy.compile_model=true`**: Enables model compilation for faster training
- **`--policy.gradient_checkpointing=true`**: Reduces memory usage significantly during training
- **`--policy.dtype=bfloat16`**: Use mixed-precision training for efficiency
- **`--batch_size=32`**: Batch size for training; adapt this based on your GPU memory
- **`--policy.pretrained_path=lerobot/pi05_base`**: The base π₀.₅ model you want to finetune; options are:
  - [lerobot/pi05_base](https://huggingface.co/lerobot/pi05_base)
  - [lerobot/pi05_libero](https://huggingface.co/lerobot/pi05_libero) (specifically trained on the Libero dataset)

### Training Parameters Explained

| Parameter               | Default | Description                                 |
| ----------------------- | ------- | ------------------------------------------- |
| `freeze_vision_encoder` | `false` | Do not freeze the vision encoder            |
| `train_expert_only`     | `false` | Do not freeze the VLM; train all parameters |

**💡 Tip**: Setting `train_expert_only=true` freezes the VLM and trains only the action expert and projections, allowing finetuning with reduced memory usage.

If your dataset was not converted with `quantiles` statistics, you can convert it with the following command:

```bash
python src/lerobot/datasets/v30/augment_dataset_quantile_stats.py \
  --repo-id=your_dataset
```

Alternatively, train pi05 with this normalization mapping: `--policy.normalization_mapping='{"ACTION": "MEAN_STD", "STATE": "MEAN_STD", "VISUAL": "IDENTITY"}'`
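
The difference between the two options can be sketched in plain Python: quantile normalization maps the 1st-99th percentile range to [-1, 1], so a single outlier does not squash the rest of the data (a generic illustration, not LeRobot's internal code):

```python
# Quantile normalization maps the 1st-99th percentile range to [-1, 1],
# so a single outlier barely shifts the scaling; mean/std (or min/max)
# scaling would squash all other values instead.

def quantile(sorted_vals, q):
    # Linear-interpolation quantile over pre-sorted data.
    idx = q * (len(sorted_vals) - 1)
    lo = int(idx)
    hi = min(lo + 1, len(sorted_vals) - 1)
    frac = idx - lo
    return sorted_vals[lo] * (1.0 - frac) + sorted_vals[hi] * frac

def normalize_quantiles(values, q_low=0.01, q_high=0.99):
    s = sorted(values)
    lo, hi = quantile(s, q_low), quantile(s, q_high)
    return [2.0 * (v - lo) / (hi - lo) - 1.0 for v in values]
```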

## Performance Results

### Libero Benchmark Results

π₀.₅ has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the Libero base model for an additional 6k steps on the Libero dataset and compared the results to the OpenPI reference results.

| Benchmark          | LeRobot Implementation | OpenPI Reference |
| ------------------ | ---------------------- | ---------------- |
| **Libero Spatial** | 97.0%                  | 98.8%            |
| **Libero Object**  | 99.0%                  | 98.2%            |
| **Libero Goal**    | 98.0%                  | 98.0%            |
| **Libero 10**      | 96.0%                  | 92.4%            |
| **Average**        | 97.5%                  | 96.85%           |

These results demonstrate π₀.₅'s strong generalization capabilities across diverse robotic manipulation tasks. To reproduce them, follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.

## License

This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).

lerobot/docs/source/pi0fast.mdx
# π₀-FAST (Pi0-FAST)

π₀-FAST is a **Vision-Language-Action model for general robot control** that uses autoregressive next-token prediction to model continuous robot actions.

## Model Overview

π₀-FAST combines the power of Vision-Language Models with a novel action tokenization approach called **FAST (Frequency-space Action Sequence Tokenization)**. This enables training autoregressive VLAs on highly dexterous tasks that are impossible with standard binning-based discretization, while training **up to 5x faster** than diffusion-based approaches like π₀.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-pifast.png"
  alt="An overview of Pi0-FAST"
  width="85%"
/>

### Why FAST?

Standard approaches to robot action tokenization use simple per-dimension, per-timestep binning schemes. While passable for simple behaviors, this rapidly breaks down for complex, dexterous skills that require precision and high-frequency control.

FAST solves this by compressing action sequences with signal-processing techniques, producing a dense sequence of action tokens that can be predicted autoregressively, just like language tokens.

### How FAST Tokenization Works

The FAST tokenizer compresses action sequences through the following steps:

1. **Normalize**: Take a continuous action chunk of shape `(H, D)`, where `H` is the horizon and `D` is the action dimension. Normalize it using one of the supported normalization methods (quantiles are recommended to handle outliers).

2. **Discrete Cosine Transform (DCT)**: Apply the DCT (via scipy) to each action dimension separately. The DCT is a compression transform commonly used in image and audio codecs (JPEG, MP3).

3. **Quantization**: Round and remove insignificant coefficients for each action dimension, producing a sparse frequency matrix.

4. **Flatten**: Flatten the matrix into a 1D vector, with low-frequency components first.

5. **Byte Pair Encoding (BPE)**: Train a BPE tokenizer to compress the DCT coefficients into dense action tokens, typically achieving **10x compression** over prior tokenization approaches.

This approach can transform **any existing VLM** into a VLA by training it to predict these FAST tokens.
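
Steps 2-4 can be sketched in pure Python for a single action dimension (a didactic illustration; the real tokenizer uses scipy's DCT and adds BPE on top):

```python
import math

# Sketch of FAST steps 2-4 on a single action dimension: DCT-II, then
# scale-and-round quantization, which zeroes insignificant coefficients.
# Didactic only: the real tokenizer uses scipy's DCT and BPE on top.

def dct_ii(signal):
    n = len(signal)
    return [sum(x * math.cos(math.pi * k * (2 * i + 1) / (2 * n))
                for i, x in enumerate(signal))
            for k in range(n)]

def fast_coefficients(action_chunk, scale=10.0):
    # Low-frequency coefficients come first, so this quantized list is
    # already in the "low frequencies first" ordering of step 4.
    return [round(c * scale) for c in dct_ii(action_chunk)]
```

For a smooth (here constant) action chunk, almost all energy lands in the first coefficient and everything else quantizes to zero, which is where the compression comes from.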

## Installation Requirements

1. Install LeRobot by following our [Installation Guide](./installation).
2. Install the π₀-FAST dependencies by running:

```bash
pip install -e ".[pi]"
```

> [!NOTE]
> For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
>
> This will be fixed in the next patch release.

## Training a Custom FAST Tokenizer

You have two options for the FAST tokenizer:

1. **Use the pre-trained tokenizer**: The `physical-intelligence/fast` tokenizer was trained on 1M+ real robot action sequences and works as a general-purpose tokenizer.

2. **Train your own tokenizer**: For maximum performance on your specific dataset, you can finetune the tokenizer on your own data.

### Training Your Own Tokenizer

```bash
lerobot-train-tokenizer \
  --repo_id "user/my-lerobot-dataset" \
  --action_horizon 10 \
  --encoded_dims "0:6" \
  --vocab_size 1024 \
  --scale 10.0 \
  --normalization_mode QUANTILES \
  --output_dir "./my_fast_tokenizer" \
  --push_to_hub \
  --hub_repo_id "username/my-action-tokenizer"
```

### Key Tokenizer Parameters

| Parameter              | Description                                                                       | Default      |
| ---------------------- | --------------------------------------------------------------------------------- | ------------ |
| `--repo_id`            | LeRobot dataset repository ID                                                     | Required     |
| `--action_horizon`     | Number of future actions in each chunk                                            | `10`         |
| `--encoded_dims`       | Comma-separated dimension ranges to encode (e.g., `"0:6,7:23"`)                   | `"0:6,7:23"` |
| `--vocab_size`         | BPE vocabulary size                                                               | `1024`       |
| `--scale`              | DCT scaling factor for quantization                                               | `10.0`       |
| `--normalization_mode` | Normalization mode (`MEAN_STD`, `MIN_MAX`, `QUANTILES`, `QUANTILE10`, `IDENTITY`) | `QUANTILES`  |
| `--sample_fraction`    | Fraction of chunks to sample per episode                                          | `0.1`        |

## Usage

To use π₀-FAST in LeRobot, specify the policy type as:

```python
policy.type=pi0_fast
```

## Training

For training π₀-FAST, you can use the LeRobot training script:

```bash
lerobot-train \
  --dataset.repo_id=your_dataset \
  --policy.type=pi0_fast \
  --output_dir=./outputs/pi0fast_training \
  --job_name=pi0fast_training \
  --policy.pretrained_path=lerobot/pi0_fast_base \
  --policy.dtype=bfloat16 \
  --policy.gradient_checkpointing=true \
  --policy.chunk_size=10 \
  --policy.n_action_steps=10 \
  --policy.max_action_tokens=256 \
  --steps=100000 \
  --batch_size=4 \
  --policy.device=cuda
```

### Key Training Parameters

| Parameter                              | Description                                        | Default                      |
| -------------------------------------- | -------------------------------------------------- | ---------------------------- |
| `--policy.gradient_checkpointing=true` | Reduces memory usage significantly during training | `false`                      |
| `--policy.dtype=bfloat16`              | Use mixed-precision training for efficiency        | `float32`                    |
| `--policy.chunk_size`                  | Number of action steps to predict (action horizon) | `50`                         |
| `--policy.n_action_steps`              | Number of action steps to execute                  | `50`                         |
| `--policy.max_action_tokens`           | Maximum number of FAST tokens per action chunk     | `256`                        |
| `--policy.action_tokenizer_name`       | FAST tokenizer to use                              | `physical-intelligence/fast` |
| `--policy.compile_model=true`          | Enable torch.compile for faster training           | `false`                      |

## Inference

### KV-Caching for Fast Inference

π₀-FAST supports **KV-caching**, a widely used optimization in LLM inference. It caches the key-value pairs from the attention mechanism, avoiding redundant computation during autoregressive decoding.

```python
# KV-caching is enabled by default
policy.use_kv_cache=true
```

### Inference Example

```python
from lerobot.policies.pi0_fast import PI0FastPolicy

# Load the policy
policy = PI0FastPolicy.from_pretrained("your-model-path")

# During inference
actions = policy.predict_action_chunk(batch)
```

## Model Architecture

π₀-FAST uses a PaliGemma-based architecture:

- **Vision Encoder**: SigLIP vision tower for image understanding
- **Language Model**: Gemma 2B for processing language instructions and predicting action tokens

The model takes images, text instructions, and the robot state as input, and outputs discrete FAST tokens that are decoded back into continuous actions.

## Configuration Options

| Parameter            | Description                                     | Default    |
| -------------------- | ----------------------------------------------- | ---------- |
| `paligemma_variant`  | VLM backbone variant (`gemma_300m`, `gemma_2b`) | `gemma_2b` |
| `max_state_dim`      | Maximum state vector dimension (padded)         | `32`       |
| `max_action_dim`     | Maximum action vector dimension (padded)        | `32`       |
| `temperature`        | Sampling temperature (0.0 for greedy)           | `0.0`      |
| `max_decoding_steps` | Maximum decoding steps                          | `256`      |
| `use_kv_cache`       | Enable KV caching for faster inference          | `true`     |

## Comparison with π₀

| Feature               | π₀                        | π₀-FAST                      |
| --------------------- | ------------------------- | ---------------------------- |
| Action Representation | Flow Matching (Diffusion) | Autoregressive Tokens (FAST) |
| Training Speed        | 1x                        | **5x faster**                |
| Dexterity             | High                      | High                         |
| Inference Method      | Iterative Denoising       | Autoregressive Decoding      |
| KV-Caching            | N/A                       | Supported                    |

## Reproducing π₀-FAST results

We reproduce the results of π₀-FAST on the LIBERO benchmark using the LeRobot implementation. We take the LeRobot π₀-FAST base model [lerobot/pi0fast-base](https://huggingface.co/lerobot/pi0fast-base) and finetune it for an additional 40k steps in bfloat16, with a batch size of 256 on 8 H100 GPUs, using the [HuggingFace LIBERO dataset](https://huggingface.co/datasets/HuggingFaceVLA/libero).

The finetuned model can be found here:

- **π₀-FAST LIBERO**: [lerobot/pi0fast-libero](https://huggingface.co/lerobot/pi0fast-libero)

It was trained with the following command:

```bash
lerobot-train \
  --dataset.repo_id=lerobot/libero \
  --output_dir=outputs/libero_pi0fast \
  --job_name=libero_pi0fast \
  --policy.path=lerobot/pi0fast_base \
  --policy.dtype=bfloat16 \
  --steps=100000 \
  --save_freq=20000 \
  --batch_size=4 \
  --policy.device=cuda \
  --policy.scheduler_warmup_steps=4000 \
  --policy.scheduler_decay_steps=100000 \
  --policy.scheduler_decay_lr=1e-5 \
  --policy.gradient_checkpointing=true \
  --policy.chunk_size=10 \
  --policy.n_action_steps=10 \
  --policy.max_action_tokens=256 \
  --policy.empty_cameras=1
```
|
| 210 |
+
|
| 211 |
+
We then evaluate the finetuned model using the LeRobot LIBERO implementation, by running the following command:
|
| 212 |
+
|
| 213 |
+
```bash
|
| 214 |
+
tasks="libero_object,libero_spatial,libero_goal,libero_10"
|
| 215 |
+
lerobot-eval \
|
| 216 |
+
--policy.path=lerobot/pi0fast-libero \
|
| 217 |
+
--policy.max_action_tokens=256 \
|
| 218 |
+
--env.type=libero \
|
| 219 |
+
--policy.gradient_checkpointing=false \
|
| 220 |
+
--env.task=${tasks} \
|
| 221 |
+
--eval.batch_size=1 \
|
| 222 |
+
--eval.n_episodes=1 \
|
| 223 |
+
--rename_map='{"observation.images.image":"observation.images.base_0_rgb","observation.images.image2":"observation.images.left_wrist_0_rgb"}'
|
| 224 |
+
```
|
| 225 |
+
|
| 226 |
+
**Note:** We set `n_action_steps=10`, similar to the original OpenPI implementation.
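The `--rename_map` option remaps observation keys so the LIBERO environment's camera names match the names the policy was trained with. A toy sketch of that key remapping (plain Python illustrating the idea, not the LeRobot implementation):

```python
def rename_keys(observation: dict, rename_map: dict) -> dict:
    # Keys present in rename_map are renamed; all other keys pass through unchanged
    return {rename_map.get(k, k): v for k, v in observation.items()}

obs = {"observation.images.image": "frame0", "observation.state": [0.0]}
renamed = rename_keys(obs, {"observation.images.image": "observation.images.base_0_rgb"})
print(sorted(renamed))  # ['observation.images.base_0_rgb', 'observation.state']
```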

### Results

We obtain the following results on the LIBERO benchmark:

| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
| ----------- | -------------- | ------------- | ----------- | --------- | -------- |
| **π₀-fast** | 70.0 | 100.0 | 100.0 | 60.0 | **82.5** |

The full evaluation output folder, including videos, is available [here](https://drive.google.com/drive/folders/1HXpwPTRm4hx6g1sF2P7OOqGG0TwPU7LQ?usp=sharing).

## License

This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).

## References

- [FAST: Efficient Robot Action Tokenization](https://www.physicalintelligence.company/research/fast) - Physical Intelligence Blog
- [OpenPI Repository](https://github.com/Physical-Intelligence/openpi) - Original implementation
- [FAST Tokenizer on Hugging Face](https://huggingface.co/physical-intelligence/fast) - Pre-trained tokenizer
lerobot/docs/source/policy_act_README.md
ADDED

@@ -0,0 +1,14 @@

## Paper

https://tonyzhaozh.github.io/aloha

## Citation

```bibtex
@article{zhao2023learning,
  title={Learning fine-grained bimanual manipulation with low-cost hardware},
  author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
  journal={arXiv preprint arXiv:2304.13705},
  year={2023}
}
```
lerobot/docs/source/policy_diffusion_README.md
ADDED

@@ -0,0 +1,14 @@

## Paper

https://diffusion-policy.cs.columbia.edu

## Citation

```bibtex
@article{chi2024diffusionpolicy,
  author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
  title = {Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
  journal = {The International Journal of Robotics Research},
  year = {2024}
}
```
lerobot/docs/source/policy_groot_README.md
ADDED

@@ -0,0 +1,27 @@

## Research Paper

Paper: https://research.nvidia.com/labs/gear/gr00t-n1_5/

## Repository

Code: https://github.com/NVIDIA/Isaac-GR00T

## Citation

```bibtex
@inproceedings{gr00tn1_2025,
  archivePrefix = {arxiv},
  eprint = {2503.14734},
  title = {{GR00T} {N1}: An Open Foundation Model for Generalist Humanoid Robots},
  author = {NVIDIA and Johan Bjorck and Fernando Castañeda and Nikita Cherniadev and Xingye Da and Runyu Ding and Linxi "Jim" Fan and Yu Fang and Dieter Fox and Fengyuan Hu and Spencer Huang and Joel Jang and Zhenyu Jiang and Jan Kautz and Kaushil Kundalia and Lawrence Lao and Zhiqi Li and Zongyu Lin and Kevin Lin and Guilin Liu and Edith Llontop and Loic Magne and Ajay Mandlekar and Avnish Narayan and Soroush Nasiriany and Scott Reed and You Liang Tan and Guanzhi Wang and Zu Wang and Jing Wang and Qi Wang and Jiannan Xiang and Yuqi Xie and Yinzhen Xu and Zhenjia Xu and Seonghyeon Ye and Zhiding Yu and Ao Zhang and Hao Zhang and Yizhou Zhao and Ruijie Zheng and Yuke Zhu},
  month = {March},
  year = {2025},
  booktitle = {ArXiv Preprint}
}
```

## Additional Resources

Blog: https://developer.nvidia.com/isaac/gr00t

Hugging Face Model: https://huggingface.co/nvidia/GR00T-N1.5-3B
lerobot/docs/source/policy_smolvla_README.md
ADDED

@@ -0,0 +1,14 @@

## Paper

https://arxiv.org/abs/2506.01844

## Citation

```bibtex
@article{shukor2025smolvla,
  title={SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics},
  author={Shukor, Mustafa and Aubakirova, Dana and Capuano, Francesco and Kooijmans, Pepijn and Palma, Steven and Zouitine, Adil and Aractingi, Michel and Pascal, Caroline and Russi, Martino and Marafioti, Andres and Alibert, Simon and Cord, Matthieu and Wolf, Thomas and Cadene, Remi},
  journal={arXiv preprint arXiv:2506.01844},
  year={2025}
}
```
lerobot/docs/source/policy_tdmpc_README.md
ADDED

@@ -0,0 +1,14 @@

## Paper

https://www.nicklashansen.com/td-mpc/

## Citation

```bibtex
@inproceedings{Hansen2022tdmpc,
  title={Temporal Difference Learning for Model Predictive Control},
  author={Nicklas Hansen and Xiaolong Wang and Hao Su},
  booktitle={ICML},
  year={2022}
}
```
lerobot/docs/source/policy_vqbet_README.md
ADDED

@@ -0,0 +1,14 @@

## Paper

https://sjlee.cc/vq-bet/

## Citation

```bibtex
@article{lee2024behavior,
  title={Behavior generation with latent actions},
  author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
  journal={arXiv preprint arXiv:2403.03181},
  year={2024}
}
```
lerobot/docs/source/policy_walloss_README.md
ADDED

@@ -0,0 +1,45 @@

# WALL-OSS

This repository contains the Hugging Face port of [**WALL-OSS**](https://x2robot.com/en/research/68bc2cde8497d7f238dde690), a Vision-Language-Action model for cross-embodiment robotic control based on Qwen2.5-VL with flow matching/FAST action prediction.

---

## Model Overview

| Feature | Description |
| ------------------ | ----------------------------------------------------- |
| Base Model | Qwen2.5-VL (Vision-Language Model) |
| Action Prediction | Flow Matching (diffusion) or FAST (discrete tokens) |
| Architecture | Mixture of Experts (MoE) with action-specific routing |
| Multi-Modal Inputs | Vision (images/videos), Language, Proprioception |

---

## Additional Resources

Paper: https://arxiv.org/pdf/2509.11766

Official Repository: https://github.com/X-Square-Robot/wall-x

Hugging Face: https://huggingface.co/x-square-robot

---

## Citation

If you use this work, please cite:

```bibtex
@article{zhai2025igniting,
  title = {Igniting VLMs Toward the Embodied Space},
  author = {Zhai, Andy and Liu, Brae and Fang, Bruno and Cai, Chalse and Ma, Ellie and Yin, Ethan and Wang, Hao and Zhou, Hugo and Wang, James and Shi, Lights and Liang, Lucy and Wang, Make and Wang, Qian and Gan, Roy and Yu, Ryan and Li, Shalfun and Liu, Starrick and Chen, Sylas and Chen, Vincent and Xu, Zach},
  journal = {arXiv preprint arXiv:2509.11766},
  year = {2025}
}
```

---

## License

This model follows the **Apache 2.0 License**, consistent with the original [WallX repository](https://github.com/X-Square-Robot/wall-x).
lerobot/docs/source/porting_datasets_v3.mdx
ADDED

@@ -0,0 +1,321 @@
# Porting Large Datasets to LeRobot Dataset v3.0

This tutorial explains how to port large-scale robotic datasets to the LeRobot Dataset v3.0 format. We'll use the **DROID 1.0.1** dataset as our primary example, which demonstrates handling multi-terabyte datasets with thousands of shards across SLURM clusters.

## File Organization: v2.1 vs v3.0

Dataset v3.0 fundamentally changes how data is organized and stored:

**v2.1 Structure (Episode-based)**:

```
dataset/
├── data/chunk-000/episode_000000.parquet
├── data/chunk-000/episode_000001.parquet
├── videos/chunk-000/camera/episode_000000.mp4
└── meta/episodes.jsonl
```

**v3.0 Structure (File-based)**:

```
dataset/
├── data/chunk-000/file-000.parquet          # Multiple episodes per file
├── videos/camera/chunk-000/file-000.mp4     # Consolidated video chunks
└── meta/episodes/chunk-000/file-000.parquet # Structured metadata
```

This transition from individual episode files to file-based chunks dramatically improves performance and reduces storage overhead.

## What's New in Dataset v3.0

Dataset v3.0 introduces significant improvements for handling large datasets:

### 🏗️ **Enhanced File Organization**

- **File-based structure**: Episodes are grouped into chunked files rather than stored as individual episode files
- **Configurable file sizes** for data and video files
- **Improved storage efficiency**: Better compression and reduced overhead

### 📊 **Modern Metadata Management**

- **Parquet-based metadata**: JSON Lines replaced with the more efficient parquet format
- **Structured episode access**: Direct pandas DataFrame access via `dataset.meta.episodes`
- **Per-episode statistics**: Enhanced statistics tracking at the episode level

### 🚀 **Performance Enhancements**

- **Memory-mapped access**: Improved RAM usage through PyArrow memory mapping
- **Faster loading**: Significantly reduced dataset initialization time
- **Better scalability**: Designed for datasets with millions of episodes

## Prerequisites

Before porting large datasets, ensure you have:

- **LeRobot installed** with v3.0 support. Follow our [Installation Guide](./installation).
- **Sufficient storage**: Raw datasets can be very large (e.g., DROID requires 2TB)
- **Cluster access** (recommended for large datasets): SLURM or a similar job scheduler
- **Dataset-specific dependencies**: For DROID, you'll need TensorFlow Dataset utilities

## Understanding the DROID Dataset

[DROID 1.0.1](https://droid-dataset.github.io/droid/the-droid-dataset) is an excellent example of a large-scale robotic dataset:

- **Size**: 1.7TB (RLDS format), 8.7TB (raw data)
- **Structure**: 2048 pre-defined TensorFlow dataset shards
- **Content**: 76,000+ robot manipulation trajectories from Franka Emika Panda robots
- **Scope**: Real-world manipulation tasks across multiple environments and objects
- **Format**: Originally in TensorFlow Records/RLDS format, requiring conversion to the LeRobot format
- **Hosting**: Google Cloud Storage with public access via `gsutil`

The dataset contains diverse manipulation demonstrations with:

- Multiple camera views (wrist camera, exterior cameras)
- Natural language task descriptions
- Robot proprioceptive state and actions
- Success/failure annotations

### DROID Features Schema

```python
DROID_FEATURES = {
    # Episode markers
    "is_first": {"dtype": "bool", "shape": (1,)},
    "is_last": {"dtype": "bool", "shape": (1,)},
    "is_terminal": {"dtype": "bool", "shape": (1,)},

    # Language instructions
    "language_instruction": {"dtype": "string", "shape": (1,)},
    "language_instruction_2": {"dtype": "string", "shape": (1,)},
    "language_instruction_3": {"dtype": "string", "shape": (1,)},

    # Robot state
    "observation.state.gripper_position": {"dtype": "float32", "shape": (1,)},
    "observation.state.cartesian_position": {"dtype": "float32", "shape": (6,)},
    "observation.state.joint_position": {"dtype": "float32", "shape": (7,)},

    # Camera observations
    "observation.images.wrist_left": {"dtype": "image"},
    "observation.images.exterior_1_left": {"dtype": "image"},
    "observation.images.exterior_2_left": {"dtype": "image"},

    # Actions
    "action.gripper_position": {"dtype": "float32", "shape": (1,)},
    "action.cartesian_position": {"dtype": "float32", "shape": (6,)},
    "action.joint_position": {"dtype": "float32", "shape": (7,)},

    # Standard LeRobot format
    "observation.state": {"dtype": "float32", "shape": (8,)},  # joints + gripper
    "action": {"dtype": "float32", "shape": (8,)},  # joints + gripper
}
```
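Per the schema above, the standard `observation.state` and `action` vectors are 8-dimensional: the 7 joint positions concatenated with the gripper position. A minimal sketch of that packing (our own illustration, not the actual porting script):

```python
def pack_state(joint_position, gripper_position):
    # 7 joint values + 1 gripper value -> 8-dim vector, matching the schema above
    assert len(joint_position) == 7 and len(gripper_position) == 1
    return list(joint_position) + list(gripper_position)

state = pack_state([0.1] * 7, [0.5])
print(len(state))  # 8
```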

## Approach 1: Single Computer Porting

### Step 1: Install Dependencies

For DROID specifically:

```bash
pip install tensorflow
pip install tensorflow_datasets
```

For other datasets, install the appropriate readers for your source format.

### Step 2: Download Raw Data

Download DROID from Google Cloud Storage using `gsutil`:

```bash
# Install Google Cloud SDK if not already installed
# https://cloud.google.com/sdk/docs/install

# Download the full RLDS dataset (1.7TB)
gsutil -m cp -r gs://gresearch/robotics/droid/1.0.1 /your/data/

# Or download just the 100-episode sample (2GB) for testing
gsutil -m cp -r gs://gresearch/robotics/droid_100 /your/data/
```

> [!WARNING]
> Large datasets require substantial time and storage:
>
> - **Full DROID (1.7TB)**: Several days to download depending on bandwidth
> - **Processing time**: 7+ days for local porting of the full dataset
> - **Upload time**: 3+ days to push to the Hugging Face Hub
> - **Local storage**: ~400GB for the processed LeRobot format

### Step 3: Port the Dataset

```bash
python examples/port_datasets/port_droid.py \
--raw-dir /your/data/droid/1.0.1 \
--repo-id your_id/droid_1.0.1 \
--push-to-hub
```

### Development and Testing

For development, you can port a single shard:

```bash
python examples/port_datasets/port_droid.py \
--raw-dir /your/data/droid/1.0.1 \
--repo-id your_id/droid_1.0.1_test \
--num-shards 2048 \
--shard-index 0
```

This approach works for smaller datasets or testing, but large datasets require cluster computing.
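Sharded porting scales because each worker processes a disjoint slice of the 2048 shards. A rough sketch of one way a launcher might assign shards to workers (a hypothetical helper for illustration, not the actual scripts):

```python
def shards_for_worker(total_shards: int, num_workers: int, worker_index: int) -> list[int]:
    # Worker w takes every num_workers-th shard starting at its own index,
    # so together the workers cover every shard exactly once.
    return list(range(worker_index, total_shards, num_workers))

# e.g. 2048 shards split across 512 workers:
print(shards_for_worker(2048, 512, 0))  # [0, 512, 1024, 1536]
```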

## Approach 2: SLURM Cluster Porting (Recommended)

For large datasets like DROID, parallel processing across multiple nodes dramatically reduces processing time.

### Step 1: Install Cluster Dependencies

```bash
pip install datatrove # Hugging Face's distributed processing library
```

### Step 2: Configure Your SLURM Environment

Find your partition information:

```bash
sinfo --format="%R" # List available partitions
sinfo -N -p your_partition -h -o "%N cpus=%c mem=%m" # Check resources
```

Choose a **CPU partition** - no GPU is needed for dataset porting.

### Step 3: Launch Parallel Porting Jobs

```bash
python examples/port_datasets/slurm_port_shards.py \
--raw-dir /your/data/droid/1.0.1 \
--repo-id your_id/droid_1.0.1 \
--logs-dir /your/logs \
--job-name port_droid \
--partition your_partition \
--workers 2048 \
--cpus-per-task 8 \
--mem-per-cpu 1950M
```

#### Parameter Guidelines

- **`--workers`**: Number of parallel jobs (max 2048 for DROID's shard count)
- **`--cpus-per-task`**: 8 CPUs recommended for frame-encoding parallelization
- **`--mem-per-cpu`**: ~16GB total RAM (8×1950M) for loading raw frames

> [!TIP]
> Start with fewer workers (e.g., 100) to test your cluster configuration before launching thousands of jobs.

### Step 4: Monitor Progress

Check running jobs:

```bash
squeue -u $USER
```

Monitor overall progress:

```bash
jobs_status /your/logs
```

Inspect individual job logs:

```bash
less /your/logs/port_droid/slurm_jobs/JOB_ID_WORKER_ID.out
```

Debug failed jobs:

```bash
failed_logs /your/logs/port_droid
```

### Step 5: Aggregate Shards

Once all porting jobs complete:

```bash
python examples/port_datasets/slurm_aggregate_shards.py \
--repo-id your_id/droid_1.0.1 \
--logs-dir /your/logs \
--job-name aggr_droid \
--partition your_partition \
--workers 2048 \
--cpus-per-task 8 \
--mem-per-cpu 1950M
```

### Step 6: Upload to Hub

```bash
python examples/port_datasets/slurm_upload.py \
--repo-id your_id/droid_1.0.1 \
--logs-dir /your/logs \
--job-name upload_droid \
--partition your_partition \
--workers 50 \
--cpus-per-task 4 \
--mem-per-cpu 1950M
```

> [!NOTE]
> Upload uses fewer workers (50) since it's network-bound rather than compute-bound.

## Dataset v3.0 File Structure

Your completed dataset will have this modern structure:

```
dataset/
├── meta/
│   ├── episodes/
│   │   └── chunk-000/
│   │       └── file-000.parquet # Episode metadata
│   ├── tasks.parquet            # Task definitions
│   ├── stats.json               # Aggregated statistics
│   └── info.json                # Dataset information
├── data/
│   └── chunk-000/
│       └── file-000.parquet     # Consolidated episode data
└── videos/
    └── camera_key/
        └── chunk-000/
            └── file-000.mp4     # Consolidated video files
```
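Following the zero-padded naming shown above, chunk and file indices map to paths in a predictable way. A small sketch of formatting such paths (our own illustration, not a LeRobot API):

```python
def data_file_path(chunk_index: int, file_index: int) -> str:
    # Mirrors the naming above, e.g. data/chunk-000/file-000.parquet
    return f"data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"

print(data_file_path(0, 0))  # data/chunk-000/file-000.parquet
```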

This replaces the old episode-per-file structure with efficient, optimally sized chunks.

## Migrating from Dataset v2.1

If you have existing datasets in v2.1 format, use the migration tool:

```bash
python src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py \
--repo-id your_id/existing_dataset
```

This automatically:

- Converts the file structure to the v3.0 format
- Migrates metadata from JSON Lines to parquet
- Aggregates statistics and creates per-episode stats
- Updates version information

## Performance Benefits

Dataset v3.0 provides significant improvements for large datasets:

- **Faster loading**: 3-5x reduction in initialization time
- **Memory efficiency**: Better RAM usage through memory mapping
- **Scalable processing**: Handles millions of episodes efficiently
- **Storage optimization**: Reduced file count and improved compression
lerobot/docs/source/processors_robots_teleop.mdx
ADDED

@@ -0,0 +1,151 @@
# Processors for Robots and Teleoperators

This guide shows how to build and modify processing pipelines that connect teleoperators (e.g., a phone) to robots and datasets. Pipelines standardize conversions between different action/observation spaces so you can swap teleops and robots without rewriting glue code.

We use the Phone to SO‑100 follower examples for concreteness, but the same patterns apply to other robots.

**What you'll learn**

- Absolute vs. relative EE control: what each means, the trade‑offs, and how to choose for your task.
- Three-pipeline pattern: how to map teleop actions → dataset actions → robot commands, and robot observations → dataset observations.
- Adapters (`to_transition` / `to_output`): how these convert raw dicts to `EnvTransition` and back to reduce boilerplate.
- Dataset feature contracts: how steps declare features via `transform_features(...)`, and how to aggregate/merge them for recording.
- Choosing a representation: when to store joints, absolute EE poses, or relative EE deltas, and how that affects training.
- Pipeline customization guidance: how to swap robots/URDFs safely and tune bounds, step sizes, and options like IK initialization.

### Absolute vs relative EE control

The examples in this guide use absolute end-effector (EE) poses because they are easy to reason about. In practice, relative EE deltas or joint positions are often preferred as learning features.

With processors, you choose the learning features you want to use for your policy. These could be joint positions/velocities, absolute EE poses, or relative EE deltas. You can also choose to store other features, such as joint torques, motor currents, etc.

## Three pipelines

We often compose three pipelines. Depending on your setup, some can be empty if the action and observation spaces already match.
Each pipeline handles a different conversion between action and observation spaces:

1. Pipeline 1: teleop action space → dataset action space (phone pose → EE targets)
2. Pipeline 2: dataset action space → robot command space (EE targets → joints)
3. Pipeline 3: robot observation space → dataset observation space (joints → EE pose)

Below are the three pipelines that we use in the phone to SO-100 follower examples:

```python
phone_to_robot_ee_pose_processor = RobotProcessorPipeline[RobotAction, RobotAction](  # teleop -> dataset action
    steps=[
        MapPhoneActionToRobotAction(platform=teleop_config.phone_os),
        EEReferenceAndDelta(
            kinematics=kinematics_solver,
            end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
            motor_names=list(robot.bus.motors.keys()),
        ),
        EEBoundsAndSafety(
            end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
            max_ee_step_m=0.20,
        ),
        GripperVelocityToJoint(),
    ],
    to_transition=robot_action_to_transition,
    to_output=transition_to_robot_action,
)

robot_ee_to_joints_processor = RobotProcessorPipeline[RobotAction, RobotAction](  # dataset action -> robot
    steps=[
        InverseKinematicsEEToJoints(
            kinematics=kinematics_solver,
            motor_names=list(robot.bus.motors.keys()),
            initial_guess_current_joints=True,
        ),
    ],
    to_transition=robot_action_to_transition,
    to_output=transition_to_robot_action,
)

robot_joints_to_ee_pose = RobotProcessorPipeline[RobotObservation, RobotObservation](  # robot obs -> dataset obs
    steps=[
        ForwardKinematicsJointsToEE(kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()))
    ],
    to_transition=observation_to_transition,
    to_output=transition_to_observation,
)
```
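Conceptually, a pipeline like these just threads a value through its steps in order: convert the raw input, run each step, convert back. A toy sketch of that composition pattern (plain Python, not the actual `RobotProcessorPipeline`):

```python
class ToyPipeline:
    """Minimal illustration: convert input, run steps in order, convert back."""

    def __init__(self, steps, to_transition, to_output):
        self.steps = steps
        self.to_transition = to_transition
        self.to_output = to_output

    def __call__(self, raw):
        transition = self.to_transition(raw)
        for step in self.steps:
            transition = step(transition)
        return self.to_output(transition)

# Toy usage: scale an "action", then clamp it, mimicking a delta + safety-bounds chain
pipeline = ToyPipeline(
    steps=[lambda t: {**t, "x": t["x"] * 2}, lambda t: {**t, "x": min(t["x"], 1.0)}],
    to_transition=lambda raw: {"x": raw},
    to_output=lambda t: t["x"],
)
print(pipeline(0.75))  # 1.0 (scaled to 1.5, then clamped)
```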

## Why to_transition / to_output

To convert from the robot/teleoperator to the pipeline and back, we use the `to_transition` and `to_output` pipeline adapters.
They standardize conversions to reduce boilerplate code, and form the bridge between the robot's and teleoperator's raw dictionaries and the pipeline's `EnvTransition` format.
In the phone to SO-100 follower examples we use the following adapters:

- `robot_action_to_transition`: transforms the teleop action dict into a pipeline transition.
- `transition_to_robot_action`: transforms the pipeline transition into a robot action dict.
- `observation_to_transition`: transforms the robot observation dict into a pipeline transition.
- `transition_to_observation`: transforms the pipeline transition into an observation dict.

Check out [src/lerobot/processor/converters.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/processor/converters.py) for more details.

## Dataset feature contracts

Dataset features are determined by the keys saved in the dataset. Each step can declare which features it modifies in a contract called `transform_features(...)`. Once you build a processor, you can aggregate all of these features with `aggregate_pipeline_dataset_features()` and merge multiple feature dicts with `combine_feature_dicts(...)`.

Below is an example of how we declare features with the `transform_features` method in the phone to SO-100 follower examples:

```python
def transform_features(
    self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
    # We only use the EE pose in the dataset, so we don't need the joint positions
    for n in self.motor_names:
        features[PipelineFeatureType.ACTION].pop(f"{n}.pos", None)
    # We specify the dataset features of this step that we want stored in the dataset
    for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
        features[PipelineFeatureType.ACTION][f"ee.{k}"] = PolicyFeature(
            type=FeatureType.STATE, shape=(1,)
        )
    return features
```
|
| 101 |
+
|
| 102 |
+
Here we declare what PolicyFeatures we modify in this step, so we know what features we can expect when we run the processor. These features can then be aggregated and used to create the dataset features.
|
| 103 |
+
|
| 104 |
+
Below is an example of how we aggregate and merge features in the phone to SO-100 record example:
|
| 105 |
+
|
| 106 |
+
```python
|
| 107 |
+
features=combine_feature_dicts(
|
| 108 |
+
# Run the feature contract of the pipelines
|
| 109 |
+
# This tells you how the features would look like after the pipeline steps
|
| 110 |
+
aggregate_pipeline_dataset_features(
|
| 111 |
+
pipeline=phone_to_robot_ee_pose_processor,
|
| 112 |
+
initial_features=create_initial_features(action=phone.action_features), # <- Action features we can expect, these come from our teleop device (phone) and action processor
|
| 113 |
+
use_videos=True,
|
| 114 |
+
),
|
| 115 |
+
aggregate_pipeline_dataset_features(
|
| 116 |
+
pipeline=robot_joints_to_ee_pose,
|
| 117 |
+
initial_features=create_initial_features(observation=robot.observation_features), # <- Observation features we can expect, these come from our robot and observation processor
|
| 118 |
+
use_videos=True,
|
| 119 |
+
patterns=["observation.state.ee"], # <- Here you could optionally filter the features we want to store in the dataset, with a specific pattern
|
| 120 |
+
|
| 121 |
+
),
|
| 122 |
+
),
|
| 123 |
+
```
|
| 124 |
+
|
| 125 |
+
How it works:
|
| 126 |
+
|
| 127 |
+
- `aggregate_pipeline_dataset_features(...)`: applies `transform_features` across the pipeline and filters by patterns (images included when `use_videos=True`, and state features included when `patterns` is specified).
|
| 128 |
+
- `combine_feature_dicts(...)`: combine multiple feature dicts.
|
| 129 |
+
- Recording with `record_loop(...)` uses `build_dataset_frame(...)` to build frames consistent with `dataset.features` before we call `add_frame(...)` to add the frame to the dataset.
|
| 130 |
+
|
| 131 |
+
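As a plain-Python sketch (hypothetical helper names and dict shapes, not the actual LeRobot API), the contract-then-aggregate flow boils down to threading a feature dict through each step's contract:

```python
# Toy version of the feature-contract flow; names and dict shapes are illustrative.
def drop_joint_features(features: dict) -> dict:
    # Like the real step above: remove per-joint "<motor>.pos" entries.
    return {k: v for k, v in features.items() if not k.endswith(".pos")}

def add_ee_features(features: dict) -> dict:
    # Declare the end-effector keys this step will write.
    out = dict(features)
    for k in ["x", "y", "z"]:
        out[f"ee.{k}"] = {"type": "STATE", "shape": (1,)}
    return out

def aggregate(steps, initial_features: dict) -> dict:
    # Spirit of aggregate_pipeline_dataset_features: fold each step's contract in order.
    feats = dict(initial_features)
    for step in steps:
        feats = step(feats)
    return feats

feats = aggregate(
    [drop_joint_features, add_ee_features],
    {"shoulder_pan.pos": {"type": "STATE", "shape": (1,)}},
)
```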
## Guidance when customizing robot pipelines

You can store any of the following features as your action/observation space:

- Joint positions
- Absolute EE poses
- Relative EE deltas
- Other features: joint velocities, torques, etc.

Pick what you want to use for your policy's action and observation space and configure or modify the pipelines and steps accordingly.
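For example, if you choose relative EE deltas as your action space, the conversion from a sequence of absolute poses is just a per-step difference (an illustrative sketch, not a LeRobot helper):

```python
def poses_to_deltas(poses: list[list[float]]) -> list[list[float]]:
    # Each delta is the element-wise difference between consecutive absolute poses.
    return [
        [cur - prev for cur, prev in zip(c, p)]
        for p, c in zip(poses, poses[1:])
    ]

# Two 2-DoF example poses -> one delta per consecutive pair.
deltas = poses_to_deltas([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0]])
```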
### Different robots

- You can easily reuse pipelines. For example, to use another robot with phone teleop, modify the examples and swap in your own robot's `RobotKinematics` (URDF) and `motor_names`. Additionally, make sure `target_frame_name` points to your gripper/wrist frame.

### Safety first

- When changing pipelines, start with tight bounds and implement safety steps when working with real robots.
- It's advised to start in simulation first and then move to real robots.

That's it! We hope this guide helps you get started with customizing your robot pipelines. If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.
lerobot/docs/source/reachy2.mdx
ADDED
# Reachy 2

Reachy 2 is an open-source humanoid robot made by Pollen Robotics, specifically designed for the development of embodied AI and real-world applications.
Check out the [Pollen Robotics website](https://www.pollen-robotics.com/reachy/), or access the [Reachy 2 documentation](https://docs.pollen-robotics.com/) for more information on the platform!

## Teleoperate Reachy 2

Currently, there are two ways to teleoperate Reachy 2:

- Pollen Robotics' VR teleoperation (not included in LeRobot).
- Robot-to-robot teleoperation (use one Reachy 2 to control another).

## Reachy 2 Simulation

**(Linux only)** You can run Reachy 2 in simulation (Gazebo or MuJoCo) using the provided [Docker image](https://hub.docker.com/r/pollenrobotics/reachy2_core).

1. Install [Docker Engine](https://docs.docker.com/engine/).
2. Run (for MuJoCo):

```bash
docker run --rm -it \
  --name reachy \
  --privileged \
  --network host \
  --ipc host \
  --device-cgroup-rule='c 189:* rwm' \
  --group-add audio \
  -e ROS_DOMAIN_ID="$ROS_DOMAIN_ID" \
  -e DISPLAY="$DISPLAY" \
  -e RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity}]: {message}" \
  -e REACHY2_CORE_SERVICE_FAKE="${REACHY2_CORE_SERVICE_FAKE:-true}" \
  -v /dev:/dev \
  -v "$HOME/.reachy_config":/home/reachy/.reachy_config_override \
  -v "$HOME/.reachy.log":/home/reachy/.ros/log \
  -v /usr/lib/x86_64-linux-gnu:/opt/host-libs \
  --entrypoint /package/launch.sh \
  pollenrobotics/reachy2_core:1.7.5.9_deploy \
  start_rviz:=true start_sdk_server:=true mujoco:=true
```

> [!NOTE]
> If MuJoCo runs slowly (low simulation frequency), add `-e LD_LIBRARY_PATH="/opt/host-libs:$LD_LIBRARY_PATH" \` to the previous command to improve performance:
>
> ```bash
> docker run --rm -it \
>   --name reachy \
>   --privileged \
>   --network host \
>   --ipc host \
>   --device-cgroup-rule='c 189:* rwm' \
>   --group-add audio \
>   -e ROS_DOMAIN_ID="$ROS_DOMAIN_ID" \
>   -e DISPLAY="$DISPLAY" \
>   -e RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity}]: {message}" \
>   -e REACHY2_CORE_SERVICE_FAKE="${REACHY2_CORE_SERVICE_FAKE:-true}" \
>   -e LD_LIBRARY_PATH="/opt/host-libs:$LD_LIBRARY_PATH" \
>   -v /dev:/dev \
>   -v "$HOME/.reachy_config":/home/reachy/.reachy_config_override \
>   -v "$HOME/.reachy.log":/home/reachy/.ros/log \
>   -v /usr/lib/x86_64-linux-gnu:/opt/host-libs \
>   --entrypoint /package/launch.sh \
>   pollenrobotics/reachy2_core:1.7.5.9_deploy \
>   start_rviz:=true start_sdk_server:=true mujoco:=true
> ```

## Setup

### Prerequisites

- On your robot, check that the **service images** meet the minimum versions:
  - **reachy2-core >= 1.7.5.2**
  - **webrtc >= 2.0.1.1**

Then, if you want to use VR teleoperation:

- Install the [Reachy 2 teleoperation application](https://docs.pollen-robotics.com/teleoperation/teleoperation-introduction/discover-teleoperation/). Use version **>= v1.2.0**.

We recommend using two computers: one for teleoperation (Windows required) and another for recording with LeRobot.

### Install LeRobot

Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.

Install LeRobot with the Reachy 2 dependencies:

```bash
pip install -e ".[reachy2]"
```

### (Optional but recommended) Install pollen_data_acquisition_server

How you manage Reachy 2 recording sessions is up to you, but the **easiest** way is to use this server so you can control sessions directly from the VR teleoperation app.

> **Note:** Currently, only the VR teleoperation application works as a client for this server, so this step primarily targets teleoperation. You're free to develop custom clients to manage sessions to suit your needs.

In your LeRobot environment, install the server from source:

```bash
git clone https://github.com/pollen-robotics/pollen_data_acquisition_server.git
cd pollen_data_acquisition_server
pip install -e .
```

Find the [pollen_data_acquisition_server documentation here](https://github.com/pollen-robotics/pollen_data_acquisition_server).

## Step 1: Recording

### Get Reachy 2's IP address

Before starting teleoperation and data recording, find the [robot's IP address](https://docs.pollen-robotics.com/getting-started/setup-reachy2/connect-reachy2/).
We strongly recommend connecting all devices (PC and robot) via **Ethernet**.

### Launch recording

There are two ways to manage recording sessions when using the Reachy 2 VR teleoperation application:

- **Using the data acquisition server (recommended for VR teleop)**: the VR app orchestrates sessions (via the server, it tells LeRobot when to create datasets and start/stop episodes) while also controlling the robot's motions.
- **Using LeRobot's record script**: LeRobot owns session control and decides when to start/stop episodes. If you also use the VR teleop app, it is used only for motion control.

### Option 1: Using the Pollen data acquisition server (recommended for VR teleop)

Make sure you have installed pollen_data_acquisition_server, as explained in the Setup section.

Launch the data acquisition server so you can manage your session directly from the teleoperation application:

```bash
python -m pollen_data_acquisition_server.server
```

Then open the teleoperation application and choose "Data acquisition session".
You can then set up your session by following the screens displayed.

> Even without the VR app, you can use the `pollen_data_acquisition_server` with your own client implementation.

### Option 2: Using lerobot.record

Reachy 2 is fully supported by LeRobot's recording features.
If you choose this option but still want to use the VR teleoperation application, select "Standard session" in the app.

**Example: start a recording without the mobile base:**
First add reachy2 and reachy2_teleoperator to the imports of the record script. Then you can use the following command:

```bash
lerobot-record \
    --robot.type=reachy2 \
    --robot.ip_address=192.168.0.200 \
    --robot.id=r2-0000 \
    --robot.use_external_commands=true \
    --robot.with_mobile_base=false \
    --teleop.type=reachy2_teleoperator \
    --teleop.ip_address=192.168.0.200 \
    --teleop.with_mobile_base=false \
    --robot.with_torso_camera=true \
    --dataset.repo_id=pollen_robotics/record_test \
    --dataset.single_task="Reachy 2 recording test" \
    --dataset.num_episodes=1 \
    --dataset.episode_time_s=5 \
    --dataset.fps=15 \
    --dataset.push_to_hub=true \
    --dataset.private=true \
    --display_data=true
```

#### Specific Options

**Extended setup overview (all options included):**

```bash
lerobot-record \
    --robot.type=reachy2 \
    --robot.ip_address=192.168.0.200 \
    --robot.use_external_commands=true \
    --robot.with_mobile_base=true \
    --robot.with_l_arm=true \
    --robot.with_r_arm=true \
    --robot.with_neck=true \
    --robot.with_antennas=true \
    --robot.with_left_teleop_camera=true \
    --robot.with_right_teleop_camera=true \
    --robot.with_torso_camera=false \
    --robot.camera_width=640 \
    --robot.camera_height=480 \
    --robot.disable_torque_on_disconnect=false \
    --robot.max_relative_target=5.0 \
    --teleop.type=reachy2_teleoperator \
    --teleop.ip_address=192.168.0.200 \
    --teleop.use_present_position=false \
    --teleop.with_mobile_base=false \
    --teleop.with_l_arm=true \
    --teleop.with_r_arm=true \
    --teleop.with_neck=true \
    --teleop.with_antennas=true \
    --dataset.repo_id=pollen_robotics/record_test \
    --dataset.single_task="Reachy 2 recording test" \
    --dataset.num_episodes=1 \
    --dataset.episode_time_s=5 \
    --dataset.fps=15 \
    --dataset.push_to_hub=true \
    --dataset.private=true \
    --display_data=true
```

##### `--robot.use_external_commands`

Determines whether LeRobot's `robot.send_action()` sends commands to the robot.
**Must** be set to false while using the VR teleoperation application, as the app already sends commands.

##### `--teleop.use_present_position`

Determines whether the teleoperator reads the goal position or the present position of the robot.
Must be set to true if a compliant Reachy 2 is used to control another one.

##### Use the relevant parts

From our initial tests, recording **all** joints when only some are moving can reduce model quality with certain policies.
To avoid this, you can exclude specific parts from recording and replay using:

```bash
--robot.with_<part>=false
```

with `<part>` being one of: `mobile_base`, `l_arm`, `r_arm`, `neck`, `antennas`.
This determines whether the corresponding part is recorded in the observations; it defaults to true if not set.

By default, **all parts are recorded**.

The same per-part mechanism is available in `reachy2_teleoperator` as well:

```bash
--teleop.with_<part>=false
```

with `<part>` being one of: `mobile_base`, `l_arm`, `r_arm`, `neck`, `antennas`.
This determines whether the corresponding part is recorded in the actions; it defaults to true if not set.

> **Important:** In a given session, the **enabled parts must match** on both the robot and the teleoperator.
> For example, if the robot runs with `--robot.with_mobile_base=false`, the teleoperator must disable the same part with `--teleop.with_mobile_base=false`.
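Since mismatched parts are an easy mistake to make, you could sanity-check your flag sets before launching. This helper is a hypothetical illustration, not part of LeRobot:

```python
# Hypothetical pre-flight check: verify that the enabled parts match between
# the robot and teleoperator configurations.
PARTS = ["mobile_base", "l_arm", "r_arm", "neck", "antennas"]

def mismatched_parts(robot_cfg: dict, teleop_cfg: dict) -> list[str]:
    # Flags default to True when unset, mirroring the CLI behavior described above.
    return [
        p for p in PARTS
        if robot_cfg.get(f"with_{p}", True) != teleop_cfg.get(f"with_{p}", True)
    ]

ok = mismatched_parts({"with_mobile_base": False}, {"with_mobile_base": False})
bad = mismatched_parts({"with_mobile_base": False}, {})
```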
##### Use the relevant cameras

You can do the same for **cameras**. Enable or disable each camera with default parameters using:

```bash
--robot.with_left_teleop_camera=<true|false> \
--robot.with_right_teleop_camera=<true|false> \
--robot.with_torso_camera=<true|false>
```

By default, no camera is recorded: all camera arguments are set to `false`.
If you want, you can use custom `width` and `height` parameters for Reachy 2's cameras using the `--robot.camera_width` and `--robot.camera_height` arguments:

```bash
--robot.camera_width=1920 \
--robot.camera_height=1080
```

This changes the resolution of all 3 default robot cameras (the ones enabled by the boolean arguments above).

If you want, you can add cameras other than the robot's built-in ones as usual with:

```bash
--robot.cameras="{ extra: {type: opencv, index_or_path: 42, width: 640, height: 480, fps: 30}}" \
```

## Step 2: Replay

Make sure the robot is configured with the same parts as the dataset:

```bash
lerobot-replay \
    --robot.type=reachy2 \
    --robot.ip_address=192.168.0.200 \
    --robot.use_external_commands=false \
    --robot.with_mobile_base=false \
    --dataset.repo_id=pollen_robotics/record_test \
    --dataset.episode=0
```

## Step 3: Train

```bash
lerobot-train \
    --dataset.repo_id=pollen_robotics/record_test \
    --policy.type=act \
    --output_dir=outputs/train/reachy2_test \
    --job_name=reachy2 \
    --policy.device=mps \
    --wandb.enable=true \
    --policy.repo_id=pollen_robotics/record_test_policy
```

## Step 4: Evaluate

```bash
lerobot-eval \
    --robot.type=reachy2 \
    --robot.ip_address=192.168.0.200 \
    --dataset.repo_id=pollen_robotics/eval_record_test \
    --dataset.single_task="Evaluate reachy2 policy" \
    --dataset.num_episodes=10 \
    --policy.path=outputs/train/reachy2_test/checkpoints/last/pretrained_model
```
lerobot/docs/source/rtc.mdx
ADDED
# Real-Time Chunking (RTC)

Real-Time Chunking (RTC) is an inference-time method that allows large, flow-matching based robotic policies, such as [Pi0](./pi0), [Pi0.5](./pi05), and [SmolVLA](./smolvla), to produce smooth, continuous, and reactive motion despite having high inference latency.

These policies generate chunks of future actions (e.g., 50 steps at a time) instead of single actions.
Because the models are large, producing each chunk takes longer than the time it takes the robot to execute it.
Naively executing chunks leads to problems such as pauses, jerky transitions, or sudden changes in strategy whenever the next chunk arrives late or disagrees with the previously executed actions.

RTC solves this by asynchronously generating the next chunk while the robot continues executing the current one, and by guiding the new chunk so it aligns smoothly with the portion of the previous chunk that has already been executed.

## How RTC Works (simplified)

RTC lets the robot think ahead while it's still moving. While the robot is carrying out one chunk of actions, RTC starts creating the next chunk early.
But since the robot has already moved a bit by the time the new chunk is ready, RTC has to make sure the new chunk still lines up smoothly with what the robot is currently doing.

To do this, RTC treats the beginning of the new chunk like an inpainting or "fill-in-the-gaps" problem:
it gently adjusts the first part of the new chunk so it blends naturally with the robot's ongoing motion. The result is no pauses and no sudden jumps.

In technical terms, RTC adds a guidance term to the flow-matching denoising process that forces the overlapping timesteps of the new chunk to stay close to the executed portion of the previous chunk, typically using a soft transition mask.
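As a rough illustration of that idea (not LeRobot's actual guidance math, which acts inside the denoising loop), a soft mask that decays across the overlap pulls the early steps of the new chunk toward the previously planned actions:

```python
import math

def soft_mask(overlap_len: int) -> list[float]:
    # Exponential decay from 1.0 (trust the old plan) toward ~0 (trust the new chunk).
    return [math.exp(-3.0 * i / max(overlap_len - 1, 1)) for i in range(overlap_len)]

def blend(new_chunk: list[float], prev_tail: list[float]) -> list[float]:
    # Pull overlapping steps of the new chunk toward the previous chunk's tail.
    w = soft_mask(len(prev_tail))
    out = list(new_chunk)
    for i, (n, p) in enumerate(zip(new_chunk, prev_tail)):
        out[i] = w[i] * p + (1 - w[i]) * n
    return out

# The first overlapping step sticks to the old plan; later steps follow the new chunk.
blended = blend([0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0])
```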
## Quick Start

### Installation

RTC is built into LeRobot. Just install the policy dependencies you need:

```bash
# For Pi0 or Pi0.5
pip install -e ".[pi]"

# For SmolVLA
pip install -e ".[smolvla]"
```

### Using RTC with Pi0

You can find a complete reference implementation in [eval_with_real_robot.py](examples/rtc/eval_with_real_robot.py).
The snippet below provides a simplified pseudo-example of how RTC operates with Pi0 in your pipeline:

```python
from lerobot.policies.pi0 import PI0Policy, PI0Config
from lerobot.configs.types import RTCAttentionSchedule
from lerobot.policies.rtc.configuration_rtc import RTCConfig
from lerobot.policies.rtc.action_queue import ActionQueue

# Load Pi0 with RTC enabled
policy_cfg = PI0Config()

# Enable RTC
policy_cfg.rtc_config = RTCConfig(
    enabled=True,
    execution_horizon=10,  # How many steps to blend with the previous chunk
    max_guidance_weight=10.0,  # How strongly to enforce consistency
    prefix_attention_schedule=RTCAttentionSchedule.EXP,  # Exponential blend
)

# Load the policy
policy = PI0Policy.from_pretrained("lerobot/pi0_base", policy_cfg=policy_cfg, device="cuda")

# Now use predict_action_chunk with RTC parameters
inference_delay = 4  # Steps of inference latency; estimate this from the policy's measured latency

# Initialize the action queue
action_queue = ActionQueue(policy_cfg.rtc_config)

# Start in a separate thread with the following function
def get_actions():
    while True:
        if should_get_actions:
            prev_actions = action_queue.get_left_over()
            obs = get_robot_observations(robot)

            # Generate actions WITH RTC
            actions = policy.predict_action_chunk(
                obs,
                inference_delay=inference_delay,
                prev_chunk_left_over=prev_actions,
            )

            action_queue.merge(actions, actions, inference_delay)

for step in range(num_steps):
    action = action_queue.get()

    # Execute the next action from the queue
    execute_actions(action)
```

## Key Parameters

`RTCConfig` has the following parameters to tune:

**`execution_horizon`**: How many timesteps from the previous chunk to maintain consistency with. Higher values mean smoother transitions but potentially less reactivity.

Typical values: 8-12 steps

```python
RTCConfig(execution_horizon=10)
```

**`max_guidance_weight`**: How strongly to enforce consistency with the previous chunk. This is a hyperparameter that can be tuned to balance the smoothness of transitions against the reactivity of the policy. For 10-step flow matching (SmolVLA, Pi0, Pi0.5), a value of 10.0 is optimal.

**`prefix_attention_schedule`**: How to weight consistency across the overlap region.

- `LINEAR`: Linear decay from inference_delay to execution_horizon
- `EXP`: Exponential decay (recommended for getting started)
- `ONES`: Full weight across the entire execution_horizon
- `ZEROS`: Binary (full weight up to inference_delay, then zero)

**`inference_delay`**: How many timesteps of inference latency your system has. This is passed to `predict_action_chunk()` rather than the config, since it may vary at runtime.
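A simple way to estimate it (our assumption, not an official formula) is to round the measured inference latency up to whole control steps at your control rate:

```python
import math

def inference_delay_steps(latency_s: float, control_hz: float) -> int:
    # e.g. 120 ms of latency at a 30 Hz control rate spans 4 control steps.
    return math.ceil(latency_s * control_hz)

delay = inference_delay_steps(0.12, 30)
```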
## Testing RTC Offline

Before running on a real robot, test RTC with dataset samples to visualize how it works:

```bash
python examples/rtc/eval_dataset.py \
    --policy.path=lerobot/pi0_libero_finetuned \
    --dataset.repo_id=HuggingFaceVLA/libero \
    --rtc.execution_horizon=10 \
    --rtc.max_guidance_weight=10.0 \
    --device=cuda
```

The script generates a visualization of the denoising process, comparing standard generation (left) with RTC (right). In the RTC plots, you can see how the first few steps (blue/purple lines) are guided to match the red ground-truth trajectory (the previous chunk's tail), ensuring a smooth transition between chunks.

<p align="center">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/flow_matching.png"
    alt="Denoising steps with and without RTC"
    width="100%"
  />
</p>

## Testing RTC with a Real Robot

```bash
python examples/rtc/eval_with_real_robot.py \
    --policy.path=${HF_USERNAME}/policy_repo_id \
    --robot.type=so100_follower \
    --robot.port=/dev/tty.usbmodem58FA0834591 \
    --robot.cameras="{ gripper: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
    --task="Move green small object into the purple platform" \
    --duration=120 \
    --device=cuda
```

## How It Differs from Async Inference in LeRobot

Both RTC and [async inference](./async) improve real-time robot control, but they solve different problems.

| Aspect        | Async Inference                               | RTC                                                 |
| ------------- | --------------------------------------------- | --------------------------------------------------- |
| **Problem**   | Idle frames while waiting for inference       | Discontinuities between action chunks               |
| **Solution**  | Decouple prediction from execution            | Guide new chunks to continue smoothly from previous |
| **Benefit**   | No waiting, continuous action                 | Smooth transitions, natural motion                  |
| **Best Used** | With large models with high inference latency | With flow-matching based policies                   |

**Use both together** for maximum smoothness and reactivity!

## Advanced: Debug Tracking

RTC includes built-in debug tracking to help you understand what's happening during inference:

```python
# Enable debug tracking
policy_cfg.rtc_config.debug = True
policy_cfg.rtc_config.debug_maxlen = 100

# After inference, access debug data
debug_data = policy.rtc_processor.get_debug_data()

# Visualize denoising steps, corrections, etc.
from lerobot.policies.rtc.debug_visualizer import RTCDebugVisualizer
visualizer = RTCDebugVisualizer()
# ... create plots
```

See `examples/rtc/eval_dataset.py` for a complete example of visualization.

## References

- [Smooth-As-Butter Robot Policies](https://alexander-soare.github.io/robotics/2025/08/05/smooth-as-butter-robot-policies.html) - Excellent technical explanation with real robot results
- [Physical Intelligence - Real-Time Chunking](https://www.physicalintelligence.company/research/real_time_chunking) - Original paper and research
- [Kinetix RTC Implementation](https://github.com/Physical-Intelligence/real-time-chunking-kinetix) - Reference implementation from Physical Intelligence
lerobot/docs/source/sarm.mdx
ADDED
|
@@ -0,0 +1,592 @@
# SARM: Stage-Aware Reward Modeling

SARM (Stage-Aware Reward Modeling) is a video-based reward modeling framework for long-horizon robot manipulation tasks. This guide covers how to train SARM reward models and optionally use them with Reward-Aligned Behavior Cloning (RA-BC).

**Paper**: [SARM: Stage-Aware Reward Modeling for Long Horizon Robot Manipulation](https://arxiv.org/abs/2509.25358)

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-sarm.png"
  alt="An overview of SARM"
  width="80%"
/>

## Why Reward Models?

Standard behavior cloning treats all demonstration frames equally, but real-world robot datasets are messy: they contain hesitations, corrections, and variable-quality trajectories. Reward models address this by learning a generalizable notion of **task progress** from demonstrations: given video frames and a task description, they predict how close the robot is to completing the task (0→1). This learned "progress signal" can be used in multiple ways; two promising applications are (1) **weighted imitation learning** (RA-BC), where high-progress frames receive more weight during policy training, and (2) **reinforcement learning**, where the reward model provides dense rewards for online or offline policy improvement.

## Overview

SARM has the following features:

1. **Stage-aware architecture**: Jointly predicts the high-level task stage and fine-grained progress within each stage
2. **Subtask annotations**: Uses natural language subtask annotations to derive consistent progress labels
3. **Temporal proportions**: Computes dataset-level priors (α̅_k) for each subtask to normalize progress across variable-length demonstrations

SARM trains on a compact **stage+tau** target for each frame:

- **stage**: integer stage index `k ∈ {0, ..., K-1}`
- **τ (tau)**: within-stage progress `τ ∈ [0, 1]`
- **target encoding**: `y = k + τ` (this is what the dataset processor produces)

At inference time (and in downstream RA-BC), SARM converts the raw `k + τ` value into a **normalized progress** in `[0, 1]` using dataset-level **temporal proportions** `α̅_k` (stored in `meta/temporal_proportions_*.json`).

This matches **Formula (2)** from the paper:

```
progress_t = P_{k-1} + α̅_k × τ_t
```

Where:

- `τ_t = (t - s_k) / (e_k - s_k)` is the within-subtask normalized time
- `P_{k-1}` is the cumulative prior (sum of previous subtask proportions)
- `α̅_k` is the temporal proportion for subtask k

This ensures identical task states map to consistent progress values, even across demonstrations of different lengths.
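As a concrete sketch of this normalization (the helper below is illustrative, not part of the LeRobot API), Formula (2) can be applied to a raw `k + τ` value like this:

```python
# Illustrative only: decode y = k + tau and apply Formula (2) with
# dataset-level temporal proportions (as stored in temporal_proportions_*.json).

def normalize_progress(y: float, proportions: list[float]) -> float:
    """proportions[k] is the prior alpha_k for subtask k (values sum to 1)."""
    k = min(int(y), len(proportions) - 1)  # stage index
    tau = y - k                            # within-stage progress tau
    p_prev = sum(proportions[:k])          # cumulative prior P_{k-1}
    return p_prev + proportions[k] * tau   # Formula (2)

# Three subtasks taking 20%, 50%, and 30% of a typical episode:
props = [0.2, 0.5, 0.3]
print(normalize_progress(0.5, props))  # halfway through stage 0 -> 0.1
print(normalize_progress(1.0, props))  # start of stage 1 -> 0.2
print(normalize_progress(2.5, props))  # halfway through stage 2 -> ~0.85
```

Note how a frame halfway through a short first subtask maps to 0.1 rather than to a raw 0.5, which is exactly the length-normalization the paper motivates.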

## Inputs and Targets (What the new code expects)

SARM is trained through its processor (`src/lerobot/policies/sarm/processor_sarm.py`), which:

- **Encodes** images and task text with CLIP (ViT-B/32) into `video_features` and `text_features`
- **Pads/truncates** robot state into `state_features` (up to `max_state_dim`)
- **Builds targets** as `sparse_targets` (and `dense_targets` in `dense_only`/`dual`) using the stage+tau encoding `y = k + τ`
- **Masks rewind frames** using a per-sample `lengths` tensor (rewind is a training-time augmentation)

At minimum, each training sample needs:

- `task` (string): task description
- `policy.image_key` images and `policy.state_key` states from the dataset
---

## Set Up Your Environment

1. Install LeRobot by following our [Installation Guide](./installation).
2. Install SARM dependencies by running:

```bash
pip install -e ".[sarm]"
```

---

## Annotation Modes

You can choose from **3 annotation modes** that determine how progress labels are computed:

| Mode           | Annotations Required | Heads                        | Use Case                                                     |
| -------------- | -------------------- | ---------------------------- | ------------------------------------------------------------ |
| `single_stage` | None                 | Sparse only                  | Simple tasks, quick experiments, no VLM needed               |
| `dense_only`   | Dense (VLM)          | Dual (sparse auto-generated) | Detailed subtask tracking without defining high-level stages |
| `dual`         | Sparse + Dense (VLM) | Dual                         | Full SARM paper setup with both granularities                |

### Mode Details

<hfoptions id="mode_explanation">
<hfoption id="single_stage">

**No annotations required.** The entire episode is treated as a single stage called `"task"`, and progress is linear from 0 to 1 over the episode duration.

- **Sparse head**: 1 stage ("task"), linear progress
- **Dense head**: Not used
- **Best for**: Simple tasks, quick experiments, or when VLM annotation is not available

Workflow:

```
1. Train SARM → 2. Visualize predictions → 3. (Optional) Train policy with RA-BC
```

</hfoption>
<hfoption id="dense_only">

**Only dense (fine-grained) annotations from a VLM.** The sparse head automatically uses a single `"task"` stage covering the full episode, while the dense head learns detailed subtask progression.

- **Sparse head**: 1 stage ("task"), linear progress (auto-generated)
- **Dense head**: Multiple fine-grained stages from VLM annotations
- **Best for**: When you want detailed subtask tracking but don't need to define high-level stages

Workflow:

```
1. Annotate (dense) → 2. Verify → 3. Train SARM → 4. Visualize → 5. (Optional) Train policy with RA-BC
```

</hfoption>
<hfoption id="dual">

**Both sparse and dense annotations from a VLM.** Full dual-head mode as described in the SARM paper, with both high-level (sparse) and fine-grained (dense) stage predictions.

- **Sparse head**: High-level stages from VLM annotations
- **Dense head**: Fine-grained stages from VLM annotations
- **Best for**: Complex multi-stage tasks where both granularities are useful

Workflow:

```
1. Annotate (sparse+dense) → 2. Verify → 3. Train SARM → 4. Visualize → 5. (Optional) Train policy with RA-BC
```

</hfoption>
</hfoptions>

---

## Step 1: Subtask Annotation

<hfoptions id="annotation_mode">
<hfoption id="single_stage">

**No annotation required!** Skip this step entirely. The model will use the episode's task description and compute linear progress automatically.

</hfoption>
<hfoption id="dense_only">

Generate **dense (fine-grained) annotations only** using a VLM. The sparse stage will be auto-generated.

```bash
python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
    --repo-id your-username/your-dataset \
    --dense-only \
    --dense-subtasks "Bring robot arms up from starting position,Grab near side and do 1st fold,Grab side and do 2nd fold,Grab side and do 3rd fold to finish folding" \
    --video-key observation.images.base \
    --num-workers 4 \
    --push-to-hub
```

**What gets saved:**

- `meta/temporal_proportions_sparse.json` - Auto-generated sparse proportions (`{"task": 1.0}`)
- `meta/temporal_proportions_dense.json` - Dense temporal proportions
- Per-episode columns in `episodes/*.parquet`:
  - `dense_subtask_names`, `dense_subtask_start_frames`, `dense_subtask_end_frames`
  - (also time-based columns: `dense_subtask_start_times`, `dense_subtask_end_times`)

</hfoption>
<hfoption id="dual">

Generate **both sparse (high-level) and dense (fine-grained) annotations** using a VLM.

```bash
python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
    --repo-id your-username/your-dataset \
    --sparse-subtasks "Bring arms up from starting position,Fold the towel (3 folds in total)" \
    --dense-subtasks "Bring robot arms up from starting position,Grab near side and do 1st fold,Grab side and do 2nd fold,Grab side and do 3rd fold to finish folding" \
    --video-key observation.images.base \
    --num-workers 4 \
    --push-to-hub
```

**What gets saved:**

- `meta/temporal_proportions_sparse.json` - Sparse temporal proportions
- `meta/temporal_proportions_dense.json` - Dense temporal proportions
- Per-episode columns in `episodes/*.parquet`:
  - `sparse_subtask_names`, `sparse_subtask_start_frames`, `sparse_subtask_end_frames`
  - `dense_subtask_names`, `dense_subtask_start_frames`, `dense_subtask_end_frames`
  - (also time-based columns: `*_subtask_start_times`, `*_subtask_end_times`)

</hfoption>
</hfoptions>

### Annotation Arguments

| Argument               | Description                                                                     |
| ---------------------- | ------------------------------------------------------------------------------- |
| `--repo-id`            | HuggingFace dataset repository ID                                               |
| `--sparse-subtasks`    | Comma-separated list of high-level subtask names                                |
| `--dense-subtasks`     | Comma-separated list of fine-grained subtask names                              |
| `--dense-only`         | Generate only dense annotations (auto-creates sparse "task" stage)              |
| `--video-key`          | Camera/video key to use (e.g., `observation.images.top`)                        |
| `--num-workers`        | Number of parallel GPU workers (default: 1)                                     |
| `--episodes`           | Specific episode indices to annotate (default: all)                             |
| `--skip-existing`      | Skip episodes that already have annotations                                     |
| `--model`              | VLM model (default: `Qwen/Qwen3-VL-30B-A3B-Instruct`)                           |
| `--num-visualizations` | Number of episodes to visualize after annotation (default: 5, set to 0 to skip) |

> **Note**: After annotation completes, 5 episodes are automatically visualized by default. Use `--num-visualizations 0` to skip this step.

---

## Step 2: Verify Annotations

<hfoptions id="verify_mode">
<hfoption id="single_stage">

**No verification needed!** Skip this step.

</hfoption>
<hfoption id="dense_only">

Visualize annotations using the `--visualize-only` flag:

```bash
python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
    --repo-id your-username/your-dataset \
    --visualize-only \
    --visualize-type dense \
    --num-visualizations 5 \
    --video-key observation.images.base \
    --output-dir ./subtask_viz
```

</hfoption>
<hfoption id="dual">

Visualize annotations using the `--visualize-only` flag:

```bash
python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
    --repo-id your-username/your-dataset \
    --visualize-only \
    --visualize-type both \
    --num-visualizations 5 \
    --video-key observation.images.base \
    --output-dir ./subtask_viz
```

</hfoption>
</hfoptions>

This generates visualizations showing video frames with subtask boundaries overlaid and a timeline of subtasks.

### Visualization Arguments

| Argument               | Description                                                    |
| ---------------------- | -------------------------------------------------------------- |
| `--visualize-only`     | Only visualize existing annotations (no generation)            |
| `--num-visualizations` | Number of episodes to visualize (default: 5)                   |
| `--visualize-type`     | Type of annotations to visualize: `sparse`, `dense`, or `both` |

**Tip**: If annotations are inaccurate, adjust your subtask descriptions to be more specific and re-run.

---
## Step 3: Train SARM

<hfoptions id="train_mode">
<hfoption id="single_stage">

Train with **no annotations** - uses linear progress from 0 to 1:

```bash
python src/lerobot/scripts/lerobot_train.py \
    --dataset.repo_id=your-username/your-dataset \
    --policy.type=sarm \
    --policy.annotation_mode=single_stage \
    --policy.image_key=observation.images.base \
    --output_dir=outputs/train/sarm_single \
    --batch_size=32 \
    --steps=5000 \
    --wandb.enable=true \
    --wandb.project=sarm \
    --policy.repo_id=your-username/your-model-name
```

</hfoption>
<hfoption id="dense_only">

Train with **dense annotations only** (sparse auto-generated):

```bash
python src/lerobot/scripts/lerobot_train.py \
    --dataset.repo_id=your-username/your-dataset \
    --policy.type=sarm \
    --policy.annotation_mode=dense_only \
    --policy.image_key=observation.images.base \
    --output_dir=outputs/train/sarm_dense \
    --batch_size=32 \
    --steps=5000 \
    --wandb.enable=true \
    --wandb.project=sarm \
    --policy.repo_id=your-username/your-model-name
```

</hfoption>
<hfoption id="dual">

Train with **both sparse and dense annotations**:

```bash
python src/lerobot/scripts/lerobot_train.py \
    --dataset.repo_id=your-username/your-dataset \
    --policy.type=sarm \
    --policy.annotation_mode=dual \
    --policy.image_key=observation.images.base \
    --output_dir=outputs/train/sarm_dual \
    --batch_size=32 \
    --steps=5000 \
    --wandb.enable=true \
    --wandb.project=sarm \
    --policy.repo_id=your-username/your-model-name
```

</hfoption>
</hfoptions>

### Multi-GPU Training

Add `accelerate launch --multi_gpu --num_processes=4` in front of the training command to use multiple GPUs.

### Training Arguments

| Argument                   | Description                                                       | Default                  |
| -------------------------- | ----------------------------------------------------------------- | ------------------------ |
| `--policy.annotation_mode` | `single_stage`, `dense_only`, or `dual`                           | `single_stage`           |
| `--policy.image_key`       | Camera key for images                                             | `observation.images.top` |
| `--policy.state_key`       | Key for joint states                                              | `observation.state`      |
| `--policy.n_obs_steps`     | Observation history steps (total obs frames = `n_obs_steps + 1`)  | `8`                      |
| `--policy.frame_gap`       | Gap (in frames) between sampled observations (at 30 fps: 30 ≈ 1s) | `30`                     |
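To make the last two defaults concrete, here is a small illustrative sketch of which frame indices an observation window would cover (this assumes evenly spaced history clamped at the episode start; the actual sampler may handle episode boundaries differently):

```python
def sampled_obs_indices(t: int, n_obs_steps: int = 8, frame_gap: int = 30) -> list[int]:
    """Indices of the n_obs_steps + 1 observation frames ending at frame t,
    spaced frame_gap frames apart and clamped at frame 0."""
    return [max(0, t - i * frame_gap) for i in range(n_obs_steps, -1, -1)]

# At 30 fps the defaults look back one second per step, i.e. 8 seconds of history:
print(sampled_obs_indices(300))  # [60, 90, 120, 150, 180, 210, 240, 270, 300]
print(sampled_obs_indices(30))   # early in the episode: earlier slots clamp to frame 0
```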

---
## Step 4: Visualize Predictions

Use `compute_rabc_weights.py` with `--visualize-only` to visualize model predictions (and, if available, annotation-derived targets) without writing a parquet file.

<hfoptions id="viz_mode">
<hfoption id="single_stage">

```bash
python src/lerobot/policies/sarm/compute_rabc_weights.py \
    --dataset-repo-id your-username/your-dataset \
    --reward-model-path your-username/sarm-model \
    --visualize-only \
    --num-visualizations 5 \
    --head-mode sparse \
    --output-dir ./sarm_viz
```

</hfoption>
<hfoption id="dense_only">

```bash
python src/lerobot/policies/sarm/compute_rabc_weights.py \
    --dataset-repo-id your-username/your-dataset \
    --reward-model-path your-username/sarm-model \
    --visualize-only \
    --num-visualizations 5 \
    --head-mode dense \
    --output-dir ./sarm_viz
```

</hfoption>
<hfoption id="dual">

```bash
python src/lerobot/policies/sarm/compute_rabc_weights.py \
    --dataset-repo-id your-username/your-dataset \
    --reward-model-path your-username/sarm-model \
    --visualize-only \
    --num-visualizations 5 \
    --head-mode both \
    --output-dir ./sarm_viz
```

</hfoption>
</hfoptions>

The visualization shows:

- **Progress plot**: Predicted progress (and optional annotation-derived “GT” when available and `--stride 1`)
- **Stage probabilities**: Stacked area plot of predicted stage probabilities
- **Sample frames**: Key frames from the episode with progress/stage labels

### Visualization Arguments

| Argument               | Description                                               |
| ---------------------- | --------------------------------------------------------- |
| `--visualize-only`     | Only visualize predictions (no RA-BC computation)         |
| `--num-visualizations` | Number of episodes to visualize (default: 5)              |
| `--head-mode`          | SARM head to use: `sparse`, `dense`, or `both`            |
| `--stride`             | Compute every N frames, interpolate the rest (default: 1) |

---
## Step 5 (Optional): Train Policy with RA-BC

Reward-Aligned Behavior Cloning (RA-BC) uses the trained SARM model to weight training samples based on predicted progress improvement. This requires two steps:

1. **Precompute progress values** for all frames using the trained SARM model
2. **Train policy** with RA-BC weighting using the precomputed values

### How RA-BC Works

For each training sample, RA-BC computes the progress delta:

```
r_i = φ(o_{t+Δ}) - φ(o_t)
```

Where `φ` is the SARM progress prediction and `Δ` is the policy's `chunk_size`. Samples with positive progress (good demonstrations) get higher weights, while samples with negative or zero progress get down-weighted.

The weighting follows **Equations 8-9** from the paper:

- **Soft weight**: `w̃_i = clip((r_i − (μ − 2σ)) / (4σ + ε), 0, 1)`
- **Final weight**: `w_i = 𝟙{r_i > κ} + 𝟙{0 ≤ r_i ≤ κ} × w̃_i`
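The two equations above can be sketched in NumPy as follows (illustrative only; the actual implementation lives in the training code, and `μ`, `σ` here are batch statistics of the deltas):

```python
import numpy as np

def rabc_weights(deltas: np.ndarray, kappa: float = 0.01, eps: float = 1e-8) -> np.ndarray:
    """Per-sample weights from progress deltas r_i, mirroring Eqs. 8-9."""
    mu, sigma = deltas.mean(), deltas.std()
    soft = np.clip((deltas - (mu - 2 * sigma)) / (4 * sigma + eps), 0.0, 1.0)  # Eq. 8
    full = (deltas > kappa).astype(float)                  # r_i > kappa       -> w = 1
    mid = ((deltas >= 0) & (deltas <= kappa)).astype(float)
    return full + mid * soft                               # r_i < 0           -> w = 0

deltas = np.array([-0.02, 0.0, 0.005, 0.03, 0.05])
print(rabc_weights(deltas))  # negative delta gets 0, deltas above kappa get 1
```

Samples between 0 and κ land on the soft ramp, so borderline demonstrations are down-weighted rather than discarded outright.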

### Step 5a: Compute SARM Progress Values

First, run the SARM model on all frames in your dataset to compute progress values:

```bash
python src/lerobot/policies/sarm/compute_rabc_weights.py \
    --dataset-repo-id your-username/your-dataset \
    --reward-model-path your-username/sarm-model \
    --head-mode sparse \
    --num-visualizations 5 \
    --push-to-hub
```

This script:

- Processes all frames and computes progress values
- Saves progress values to a parquet file next to the dataset on disk (defaults to `<dataset_root>/sarm_progress.parquet`)
- Generates visualizations of the first N episodes (default: 5)

**Arguments:**

| Argument               | Description                                        | Default    |
| ---------------------- | -------------------------------------------------- | ---------- |
| `--reward-model-path`  | Path to trained SARM model                         | (required) |
| `--head-mode`          | SARM head to use: `sparse`, `dense`, or `both`     | `sparse`   |
| `--device`             | Device for inference                               | `cuda`     |
| `--visualize-only`     | Only visualize predictions (no RA-BC computation)  | `false`    |
| `--num-visualizations` | Number of episodes to visualize (set to 0 to skip) | `5`        |

**Output format** (`sarm_progress.parquet`):

| Column            | Description                                    |
| ----------------- | ---------------------------------------------- |
| `index`           | Global frame index in dataset                  |
| `episode_index`   | Episode number                                 |
| `frame_index`     | Local frame index within episode               |
| `progress_sparse` | Sparse head progress value [0, 1]              |
| `progress_dense`  | Dense head progress value [0, 1] (if computed) |
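For a quick sanity check of the precomputed file, progress should rise from roughly 0 to roughly 1 within each episode. The helper and sample values below are illustrative (column names as in the table above):

```python
import pandas as pd

def episode_progress_range(df: pd.DataFrame, col: str = "progress_sparse") -> pd.DataFrame:
    """First and last progress value per episode (should go ~0 -> ~1)."""
    g = df.sort_values("frame_index").groupby("episode_index")[col]
    return pd.DataFrame({"start": g.first(), "end": g.last()})

# With the real file: df = pd.read_parquet("<dataset_root>/sarm_progress.parquet")
df = pd.DataFrame({
    "episode_index":   [0, 0, 0, 1, 1],
    "frame_index":     [0, 1, 2, 0, 1],
    "progress_sparse": [0.02, 0.5, 0.97, 0.01, 0.99],
})
print(episode_progress_range(df))
```

Episodes whose final progress stays far below 1 are worth re-inspecting with the Step 4 visualizations before relying on them for RA-BC.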

### Step 5b: Train Policy with RA-BC

Once you have the progress file, train your policy with RA-BC weighting. The progress file is auto-detected from the dataset path (`sarm_progress.parquet`). RA-BC currently supports PI0, PI0.5, and SmolVLA:

```bash
python src/lerobot/scripts/lerobot_train.py \
    --dataset.repo_id=your-username/your-dataset \
    --policy.type=pi0 \
    --use_rabc=true \
    --rabc_head_mode=sparse \
    --rabc_kappa=0.01 \
    --output_dir=outputs/train/policy_rabc \
    --batch_size=32 \
    --steps=40000
```

The training script automatically:

- Loads the precomputed progress values from the parquet file
- Uses the policy's `chunk_size` to compute progress deltas (Δ)
- Computes sample weights based on progress improvement
- Applies weighted loss during training

**RA-BC Arguments:**

| Argument               | Description                                                | Default                            |
| ---------------------- | ---------------------------------------------------------- | ---------------------------------- |
| `--use_rabc`           | Enable RA-BC sample weighting                              | `false`                            |
| `--rabc_progress_path` | Path to progress parquet file (auto-detected from dataset) | `sarm_progress.parquet` in dataset |
| `--rabc_head_mode`     | Which SARM head's progress to use: `sparse` or `dense`     | `sparse`                           |
| `--rabc_kappa`         | Threshold κ for high-quality samples                       | `0.01`                             |

### Tuning RA-BC Kappa

The `kappa` parameter is the threshold that determines which samples get full weight (w=1). Understanding how to tune it is critical for RA-BC to work effectively.

**How the weighting works:**

| Condition           | Weight                  |
| ------------------- | ----------------------- |
| `delta > kappa`     | 1.0 (hard threshold)    |
| `0 ≤ delta ≤ kappa` | Soft weight from Eq. 8  |
| `delta < 0`         | 0.0 (negative progress) |

**Diagnosing kappa issues:**

Monitor these WandB metrics during training:

| Metric             | Healthy Range | Problem Indicator         |
| ------------------ | ------------- | ------------------------- |
| `rabc_mean_weight` | 0.3 - 0.8     | ≈ 1.0 means kappa too low |
| `rabc_delta_mean`  | > 0           | Should be positive        |
| `rabc_delta_std`   | > 0           | Variance in data quality  |

**If `rabc_mean_weight ≈ 1.0`:** Your kappa is too low. Most samples have `delta > kappa` and bypass the soft weighting entirely, so RA-BC becomes equivalent to vanilla BC.

**Setting kappa based on your data:**

The default `kappa=0.01` was tuned for the paper's T-shirt folding task (~90s episodes at 30fps). For your dataset, check the logged `rabc_delta_mean` and `rabc_delta_std`:

```
# If delta_mean ≈ 0.03 and delta_std ≈ 0.02:
# Most deltas fall in range [0.01, 0.05]

# Option 1: Set kappa = delta_mean (medium selectivity)
--rabc_kappa=0.03

# Option 2: Set kappa = delta_mean + delta_std (high selectivity)
--rabc_kappa=0.05

# Option 3: Set kappa = delta_mean + 2*delta_std (very selective)
--rabc_kappa=0.07
```

**When RA-BC may not help:**

If your dataset is already high quality (consistent progress across all demonstrations), RA-BC won't provide much benefit since there's nothing to filter.

### Multi-GPU Training with RA-BC

```bash
accelerate launch \
    --multi_gpu \
    --num_processes=4 \
    src/lerobot/scripts/lerobot_train.py \
    --dataset.repo_id=your-username/your-dataset \
    --policy.type=pi0 \
    --use_rabc=true \
    --rabc_kappa=0.01 \
    --output_dir=outputs/train/policy_rabc \
    --batch_size=32 \
    --steps=40000
```

---
## Tips & Best Practices

### Choosing a Mode

- **Start with `single_stage`** for quick experiments - no annotation overhead
- Use **`dense_only`** when you want detailed progress tracking but tasks don't have clear high-level stages
- Use **`dual`** for complex tasks where both coarse and fine-grained progress is meaningful

### Annotation Quality

1. **Be specific with subtask names**: Instead of "fold", use "grab near side and fold toward center"
2. **Verify with visualization**: Always check a few episodes before training
3. **Consistent naming**: Use the same subtask names across all episodes

### RA-BC

1. **Train SARM first**: RA-BC quality depends entirely on SARM quality
2. **Monitor `rabc_mean_weight`**: If it's ≈ 1.0, increase kappa (see [Tuning RA-BC Kappa](#tuning-ra-bc-kappa))

---

## Citation

```bibtex
@article{chen2025sarm,
  title={SARM: Stage-Aware Reward Modeling for Long Horizon Robot Manipulation},
  author={Chen, Qianzhong and Yu, Justin and Schwager, Mac and Abbeel, Pieter and Shentu, Yide and Wu, Philipp},
  journal={arXiv preprint arXiv:2509.25358},
  year={2025}
}
```
lerobot/docs/source/smolvla.mdx
ADDED
|
@@ -0,0 +1,116 @@
# SmolVLA

SmolVLA is Hugging Face’s lightweight foundation model for robotics. Designed for easy fine-tuning on LeRobot datasets, it helps accelerate your development!

<p align="center">
  <img
    src="https://cdn-uploads.huggingface.co/production/uploads/640e21ef3c82bd463ee5a76d/aooU0a3DMtYmy_1IWMaIM.png"
    alt="SmolVLA architecture."
    width="500"
  />
  <br />
  <em>
    Figure 1. SmolVLA takes as input (i) multiple camera views, (ii) the
    robot’s current sensorimotor state, and (iii) a natural language
    instruction, encoded into contextual features used to condition the action
    expert when generating an action chunk.
  </em>
</p>

## Set Up Your Environment

1. Install LeRobot by following our [Installation Guide](./installation).
2. Install SmolVLA dependencies by running:

```bash
pip install -e ".[smolvla]"
```

## Collect a dataset

SmolVLA is a base model, so fine-tuning on your own data is required for optimal performance in your setup.
We recommend recording ~50 episodes of your task as a starting point. Follow our guide to get started: [Recording a Dataset](./il_robots)

<Tip>

In your dataset, make sure to have enough demonstrations for each variation you introduce (e.g. the cube position on the table in a cube pick-and-place task).

We recommend checking out the dataset linked below, which was used in the [SmolVLA paper](https://huggingface.co/papers/2506.01844), for reference:

🔗 [SVLA SO100 PickPlace](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2Flerobot%2Fsvla_so100_pickplace%2Fepisode_0)

In this dataset, we recorded 50 episodes across 5 distinct cube positions, collecting 10 episodes of pick-and-place interactions for each position. This structure, repeating each variation several times, helped the model generalize better. We tried a similar dataset with 25 episodes, which was not enough and led to poor performance, so data quality and quantity are definitely key.
Once your dataset is available on the Hub, you can use our fine-tuning script to adapt SmolVLA to your application.

</Tip>

## Finetune SmolVLA on your data

Use [`smolvla_base`](https://hf.co/lerobot/smolvla_base), our pretrained 450M model, and fine-tune it on your data.
Training the model for 20k steps takes roughly ~4 hours on a single A100 GPU. You should tune the number of steps based on performance and your use case.

If you don't have a GPU, you can train using our notebook on [Google Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb)

Pass your dataset to the training script using `--dataset.repo_id`. If you want to test your installation, run the following command, which uses one of the datasets we collected for the [SmolVLA paper](https://huggingface.co/papers/2506.01844).

```bash
cd lerobot && lerobot-train \
  --policy.path=lerobot/smolvla_base \
  --dataset.repo_id=${HF_USER}/mydataset \
  --batch_size=64 \
  --steps=20000 \
  --output_dir=outputs/train/my_smolvla \
  --job_name=my_smolvla_training \
  --policy.device=cuda \
  --wandb.enable=true
```
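To get a feel for what 20k steps at batch size 64 means relative to a ~50-episode dataset, here is a rough back-of-the-envelope calculation. The frames-per-episode figure is a hypothetical placeholder; check your dataset's actual length (e.g. on its dataset card) before relying on it.

```python
# Back-of-envelope: how many passes over the dataset 20k training steps make.
steps = 20_000
batch_size = 64
episodes = 50
frames_per_episode = 300  # hypothetical; depends on FPS and episode length

samples_seen = steps * batch_size               # frames drawn during training
dataset_frames = episodes * frames_per_episode  # total frames in the dataset
epochs = samples_seen / dataset_frames

print(f"~{epochs:.0f} passes over the dataset")
```

If this number is very large, the model will see each frame many times, which is one reason dataset diversity (distinct positions, as above) matters more than raw step count.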

<Tip>
  You can start with a small batch size and increase it incrementally, if the
  GPU allows it, as long as loading times remain short.
</Tip>

Fine-tuning is an art. For a complete overview of the fine-tuning options, run

```bash
lerobot-train --help
```

<p align="center">
  <img
    src="https://cdn-uploads.huggingface.co/production/uploads/640e21ef3c82bd463ee5a76d/S-3vvVCulChREwHDkquoc.gif"
    alt="Comparison of SmolVLA across task variations."
    width="500"
  />
  <br />
  <em>
    Figure 2: Comparison of SmolVLA across task variations. From left to right:
    (1) pick-place cube counting, (2) pick-place cube counting, (3) pick-place
    cube counting under perturbations, and (4) generalization on pick-and-place
    of the lego block with real-world SO101.
  </em>
</p>

## Evaluate the finetuned model and run it in real-time

As when recording an episode, it is recommended that you are logged in to the Hugging Face Hub; you can follow the corresponding steps in [Record a dataset](./il_robots).
Once you are logged in, you can run inference in your setup with:

```bash
lerobot-record \
  --robot.type=so101_follower \
  --robot.port=/dev/ttyACM0 \ # <- Use your port
  --robot.id=my_blue_follower_arm \ # <- Use your robot id
  --robot.cameras="{ front: {type: opencv, index_or_path: 8, width: 640, height: 480, fps: 30}}" \ # <- Use your cameras
  --dataset.single_task="Grasp a lego block and put it in the bin." \ # <- Use the same task description you used in your dataset recording
  --dataset.repo_id=${HF_USER}/eval_DATASET_NAME_test \ # <- This will be the dataset name on HF Hub
  --dataset.episode_time_s=50 \
  --dataset.num_episodes=10 \
  # <- Teleop optional if you want to teleoperate in between episodes \
  # --teleop.type=so100_leader \
  # --teleop.port=/dev/ttyACM0 \
  # --teleop.id=my_red_leader_arm \
  --policy.path=HF_USER/FINETUNE_MODEL_NAME # <- Use your fine-tuned model
```

Depending on your evaluation setup, you can configure the duration and the number of episodes to record for your evaluation suite.
lerobot/docs/source/so100.mdx
ADDED
|
@@ -0,0 +1,640 @@
# SO-100

In the steps below, we explain how to assemble the SO-100 robot.

## Source the parts

Follow this [README](https://github.com/TheRobotStudio/SO-ARM100/blob/main/SO100.md). It contains the bill of materials with a link to source the parts, the instructions to 3D print the parts, and advice if it's your first time printing or if you don't own a 3D printer.

## Install LeRobot 🤗

To install LeRobot, follow our [Installation Guide](./installation)

In addition to these instructions, you need to install the Feetech SDK:

```bash
pip install -e ".[feetech]"
```

## Configure the motors

**Note:**
Unlike the SO-101, the motor connectors are not easily accessible once the arm is assembled, so the configuration step must be done beforehand.

### 1. Find the USB ports associated with each arm

To find the port for each bus servo adapter, run this script:

```bash
lerobot-find-port
```

<hfoptions id="example">
<hfoption id="Mac">

Example output:

```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the USB cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the USB cable.
```

The port found here is `/dev/tty.usbmodem575E0032081`, corresponding to your leader or follower arm.

</hfoption>
<hfoption id="Linux">

On Linux, you might need to give access to the USB ports by running:

```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```
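Note that these `chmod` changes do not survive a reboot. A sketch of a persistent alternative is a udev rule (the file name below is a convention; any name under `/etc/udev/rules.d/` works):

```
# /etc/udev/rules.d/99-lerobot-serial.rules
KERNEL=="ttyACM[0-9]*", MODE="0666"
```

After creating the file, reload the rules with `sudo udevadm control --reload-rules && sudo udevadm trigger`, or simply replug the adapter.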

Example output:

```
Finding all available ports for the MotorBus.
['/dev/ttyACM0', '/dev/ttyACM1']
Remove the usb cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/ttyACM1
Reconnect the USB cable.
```

The port found here is `/dev/ttyACM1`, corresponding to your leader or follower arm.

</hfoption>
</hfoptions>
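If you prefer to inspect the ports programmatically, a small sketch using `pyserial` (`pip install pyserial`; this is an alternative to `lerobot-find-port`, not part of LeRobot itself):

```python
def current_ports() -> set[str]:
    """Serial device paths currently visible to the OS (requires pyserial)."""
    from serial.tools import list_ports
    return {p.device for p in list_ports.comports()}

def motorsbus_port(before: set[str], after: set[str]) -> set[str]:
    """The port(s) that vanished between two snapshots: your MotorsBus."""
    return before - after

# Usage: snapshot, unplug the adapter, snapshot again, diff.
#   before = current_ports()
#   input("Unplug the USB cable from your MotorsBus and press Enter...")
#   print(motorsbus_port(before, current_ports()))
```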

### 2. Set the motor ids and baudrates

Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication between the motors and the controller to work properly, we first need to assign a distinct id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate; to talk to each other, the controller and all the motors need to be configured with the same baudrate.

To that end, we first need to connect the controller to each motor individually in order to set these parameters. Since we write them to the non-volatile section of the motors' internal memory (EEPROM), we only need to do this once.

If you are repurposing motors from another robot, you will probably also need to perform this step, as the ids and baudrate likely won't match.
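For orientation, the SO-100 has six motors with ids 1 through 6, numbered from the base up; the setup script below walks you from the gripper (id 6) down to the shoulder pan (id 1). A sketch of the resulting mapping; the intermediate joint names are the conventional SO-100 ones and are an assumption here, as is the baudrate value (1 Mbps is common for these Feetech motors):

```python
# SO-100 bus id map, numbered base -> gripper. Intermediate joint names
# and the baudrate are assumptions; only gripper=6 and shoulder_pan=1
# are stated explicitly in the setup walkthrough.
MOTOR_IDS = {
    "shoulder_pan": 1,
    "shoulder_lift": 2,
    "elbow_flex": 3,
    "wrist_flex": 4,
    "wrist_roll": 5,
    "gripper": 6,
}
BAUDRATE = 1_000_000  # must be identical on the controller and every motor
```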

#### Follower

Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then, run the following command or the API example with the port you found in the previous step. You'll also need to give your follower arm a name with the `id` parameter.

For a visual reference on how to set the motor ids, please refer to [this video](https://huggingface.co/docs/lerobot/en/so101#setup-motors-video), where we follow the process for the SO101 arm.

<hfoptions id="setup_motors">
<hfoption id="Command">

```bash
lerobot-setup-motors \
    --robot.type=so100_follower \
    --robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.robots.so_follower import SO100Follower, SO100FollowerConfig

config = SO100FollowerConfig(
    port="/dev/tty.usbmodem585A0076841",
    id="my_awesome_follower_arm",
)
follower = SO100Follower(config)
follower.setup_motors()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

You should see the following instruction:

```
Connect the controller board to the 'gripper' motor only and press enter.
```

As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. As you press `[Enter]`, the script will automatically set the id and baudrate for that motor.

<details>
<summary>Troubleshooting</summary>

If you get an error at that point, check your cables and make sure they are plugged in properly:

<ul>
  <li>Power supply</li>
  <li>USB cable between your computer and the controller board</li>
  <li>The 3-pin cable from the controller board to the motor</li>
</ul>

If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).

</details>

You should then see the following message:

```
'gripper' motor id set to 6
```

Followed by the next instruction:

```
Connect the controller board to the 'wrist_roll' motor only and press enter.
```

You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.

Repeat the operation for each motor as instructed.

> [!TIP]
> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.

When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.

#### Leader

Do the same steps for the leader arm.

<hfoptions id="setup_motors">
<hfoption id="Command">

```bash
lerobot-setup-motors \
    --teleop.type=so100_leader \
    --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so_leader import SO100Leader, SO100LeaderConfig

config = SO100LeaderConfig(
    port="/dev/tty.usbmodem585A0076841",
    id="my_awesome_leader_arm",
)
leader = SO100Leader(config)
leader.setup_motors()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

## Step-by-Step Assembly Instructions

## Remove the gears of the 6 leader motors

<details>
<summary><strong>Video removing gears</strong></summary>

<div class="video-container">
  <video controls width="600">
    <source
      src="https://github.com/user-attachments/assets/0c95b88c-5b85-413d-ba19-aee2f864f2a7"
      type="video/mp4"
    />
  </video>
</div>

</details>

Follow the video to remove the gears. You only need to remove the gears from the leader arm's motors. As a result, you will only use the motors' position encoding, and the reduced friction makes the leader arm easier to operate.

### Clean Parts

Remove all support material from the 3D-printed parts. The easiest way to do this is with a small screwdriver to get underneath the support material.

### Additional Guidance

<details>
<summary><strong>Video assembling arms</strong></summary>

<div class="video-container">
  <video controls width="600">
    <source
      src="https://github.com/user-attachments/assets/488a39de-0189-4461-9de3-05b015f90cca"
      type="video/mp4"
    />
  </video>
</div>

</details>

**Note:**
This video provides visual guidance for assembling the arms, but it doesn't specify when or how to do the wiring. Inserting the cables beforehand is much easier than doing it afterward. The first arm may take a bit more than an hour to assemble, but once you get used to it, you can assemble the second arm in under an hour.

---

### First Motor

**Step 2: Insert Wires**

- Insert two wires into the first motor.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_1.webp"
  style="height:300px;"
/>

**Step 3: Install in Base**

- Place the first motor into the base.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_2.webp"
  style="height:300px;"
/>

**Step 4: Secure Motor**

- Fasten the motor with 4 screws: two from the bottom and two from the top.

**Step 5: Attach Motor Holder**

- Slide over the first motor holder and fasten it using two screws (one on each side).

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_4.webp"
  style="height:300px;"
/>

**Step 6: Attach Motor Horns**

- Install both motor horns, securing the top horn with a screw. Try not to move the motor position when attaching the motor horn, especially for the leader arms, where we removed the gears.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_5.webp"
  style="height:300px;"
/>

<details>
<summary>
  <strong>Video adding motor horn</strong>
</summary>
<video src="https://github.com/user-attachments/assets/ef3391a4-ad05-4100-b2bd-1699bf86c969"></video>
</details>

**Step 7: Attach Shoulder Part**

- Route one wire to the back of the robot and the other to the left or towards you (see photo).
- Attach the shoulder part.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_6.webp"
  style="height:300px;"
/>

**Step 8: Secure Shoulder**

- Tighten the shoulder part with 4 screws on top and 4 on the bottom
  _(access the bottom holes by turning the shoulder)._

---

### Second Motor Assembly

**Step 9: Install Motor 2**

- Slide the second motor in from the top and link the wire from motor 1 to motor 2.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_8.webp"
  style="height:300px;"
/>

**Step 10: Attach Shoulder Holder**

- Add the shoulder motor holder.
- Ensure the wire from motor 1 to motor 2 goes behind the holder while the other wire is routed upward (see photo).
- This part can be tight to assemble; you can use a workbench as in the image or a similar setup to push the part around the motor.

<div style="display: flex;">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_9.webp"
    style="height:250px;"
  />
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_10.webp"
    style="height:250px;"
  />
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_12.webp"
    style="height:250px;"
  />
</div>

**Step 11: Secure Motor 2**

- Fasten the second motor with 4 screws.

**Step 12: Attach Motor Horn**

- Attach both motor horns to motor 2, again using the horn screw.

**Step 13: Attach Base**

- Install the base attachment using 2 screws.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_11.webp" style="height:300px;">

**Step 14: Attach Upper Arm**

- Attach the upper arm with 4 screws on each side.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_13.webp" style="height:300px;">

---

### Third Motor Assembly

**Step 15: Install Motor 3**

- Route the motor cable from motor 2 through the cable holder to motor 3, then secure motor 3 with 4 screws.

**Step 16: Attach Motor Horn**

- Attach both motor horns to motor 3 and again secure one with a horn screw.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_14.webp"
  style="height:300px;"
/>

**Step 17: Attach Forearm**

- Connect the forearm to motor 3 using 4 screws on each side.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_15.webp"
  style="height:300px;"
/>

---

### Fourth Motor Assembly

**Step 18: Install Motor 4**

- Slide in motor 4, attach the cable from motor 3, and secure the cable in its holder with a screw.

<div style="display: flex;">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_16.webp"
    style="height:300px;"
  />
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_19.webp"
    style="height:300px;"
  />
</div>

**Step 19: Attach Motor Holder 4**

- Install the fourth motor holder (a tight fit). Ensure one wire is routed upward and the wire from motor 3 is routed downward (see photo).

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_17.webp"
  style="height:300px;"
/>

**Step 20: Secure Motor 4 & Attach Horn**

- Fasten motor 4 with 4 screws and attach its motor horns, using a horn screw for one of them.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_18.webp"
  style="height:300px;"
/>

---
### Wrist Assembly
|
| 423 |
+
|
| 424 |
+
**Step 21: Install Motor 5**
|
| 425 |
+
|
| 426 |
+
- Insert motor 5 into the wrist holder and secure it with 2 front screws.
|
| 427 |
+
|
| 428 |
+
<img
|
| 429 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_20.webp"
|
| 430 |
+
style="height:300px;"
|
| 431 |
+
/>
|
| 432 |
+
|
| 433 |
+
**Step 22: Attach Wrist**
|
| 434 |
+
|
| 435 |
+
- Connect the wire from motor 4 to motor 5. And already insert the other wire for the gripper.
|
| 436 |
+
- Secure the wrist to motor 4 using 4 screws on both sides.
|
| 437 |
+
|
| 438 |
+
<img
|
| 439 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_22.webp"
|
| 440 |
+
style="height:300px;"
|
| 441 |
+
/>
|
| 442 |
+
|
| 443 |
+
**Step 23: Attach Wrist Horn**
|
| 444 |
+
|
| 445 |
+
- Install only one motor horn on the wrist motor and secure it with a horn screw.
|
| 446 |
+
|
| 447 |
+
<img
|
| 448 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_23.webp"
|
| 449 |
+
style="height:300px;"
|
| 450 |
+
/>
|
| 451 |
+
|
| 452 |
+
---
|
| 453 |
+
|
| 454 |
+
### Follower Configuration
|
| 455 |
+
|
| 456 |
+
**Step 24: Attach Gripper**
|
| 457 |
+
|
| 458 |
+
- Attach the gripper to motor 5.
|
| 459 |
+
|
| 460 |
+
<img
|
| 461 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_24.webp"
|
| 462 |
+
style="height:300px;"
|
| 463 |
+
/>
|
| 464 |
+
|
| 465 |
+
**Step 25: Install Gripper Motor**
|
| 466 |
+
|
| 467 |
+
- Insert the gripper motor, connect the motor wire from motor 5 to motor 6, and secure it with 3 screws on each side.
|
| 468 |
+
|
| 469 |
+
<img
|
| 470 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_25.webp"
|
| 471 |
+
style="height:300px;"
|
| 472 |
+
/>
|
| 473 |
+
|
| 474 |
+
**Step 26: Attach Gripper Horn & Claw**
|
| 475 |
+
|
| 476 |
+
- Attach the motor horns and again use a horn screw.
|
| 477 |
+
- Install the gripper claw and secure it with 4 screws on both sides.
|
| 478 |
+
|
| 479 |
+
<img
|
| 480 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_26.webp"
|
| 481 |
+
style="height:300px;"
|
| 482 |
+
/>
|
| 483 |
+
|
| 484 |
+
**Step 27: Mount Controller**
|
| 485 |
+
|
| 486 |
+
- Attach the motor controller to the back of the robot.
|
| 487 |
+
|
| 488 |
+
<div style="display: flex;">
|
| 489 |
+
<img
|
| 490 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_27.webp"
|
| 491 |
+
style="height:300px;"
|
| 492 |
+
/>
|
| 493 |
+
<img
|
| 494 |
+
src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_28.webp"
|
| 495 |
+
style="height:300px;"
|
| 496 |
+
/>
|
| 497 |
+
</div>
|
| 498 |
+
|
| 499 |
+
_Assembly complete – proceed to Leader arm assembly._
|
| 500 |
+
|
| 501 |
+
---
|
| 502 |
+
|
| 503 |
+
### Leader Configuration

For the leader configuration, perform **Steps 1–23**. Make sure you have removed the motor gears from the motors.

**Step 24: Attach Leader Holder**

- Mount the leader holder onto the wrist and secure it with a screw.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_29.webp"
  style="height:300px;"
/>

**Step 25: Attach Handle**

- Attach the handle to motor 5 using 4 screws.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_30.webp"
  style="height:300px;"
/>

**Step 26: Install Gripper Motor**

- Insert the gripper motor, secure it with 3 screws on each side, attach a motor horn using a horn screw, and connect the motor wire.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_31.webp"
  style="height:300px;"
/>

**Step 27: Attach Trigger**

- Attach the follower trigger with 4 screws.

<img
  src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_32.webp"
  style="height:300px;"
/>

**Step 28: Mount Controller**

- Attach the motor controller to the back of the robot.

<div style="display: flex;">
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_27.webp"
    style="height:300px;"
  />
  <img
    src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so100_assembly_28.webp"
    style="height:300px;"
  />
</div>

## Calibrate

Next, you'll need to calibrate your robot to ensure that the leader and follower arms report the same position values when they are in the same physical position.
Calibration is important because it allows a neural network trained on one robot to work on another.

#### Follower

Run the following command or API example to calibrate the follower arm:

<hfoptions id="calibrate_follower">
<hfoption id="Command">

```bash
# Replace the port with your robot's port, and give the arm a unique name via --robot.id
lerobot-calibrate \
    --robot.type=so100_follower \
    --robot.port=/dev/tty.usbmodem58760431551 \
    --robot.id=my_awesome_follower_arm
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.robots.so_follower import SO100FollowerConfig, SO100Follower

config = SO100FollowerConfig(
    port="/dev/tty.usbmodem585A0076891",
    id="my_awesome_follower_arm",
)

follower = SO100Follower(config)
follower.connect(calibrate=False)
follower.calibrate()
follower.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

We unified the calibration method for most robots, so the calibration steps for this SO100 arm are the same as for the Koch and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. A reference video of this process on the SO101 can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).

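Once both arms are calibrated, a quick sanity check is to pose them identically and compare the positions they report, which should agree within a small tolerance. The sketch below is a hypothetical helper (not part of LeRobot): on real hardware you would fill the two dictionaries with readings from the arms instead of the sample values used here.

```python
def positions_match(leader_pos, follower_pos, tol=5.0):
    """Return True if every shared joint differs by at most `tol`
    (in the normalized position units used after calibration)."""
    joints = leader_pos.keys() & follower_pos.keys()
    return all(abs(leader_pos[j] - follower_pos[j]) <= tol for j in joints)

# Sample readings (hypothetical values); on real hardware you would read
# these from the leader and follower after posing both arms identically.
leader = {"shoulder_pan.pos": 0.4, "elbow_flex.pos": -12.0, "gripper.pos": 30.1}
follower = {"shoulder_pan.pos": 1.1, "elbow_flex.pos": -11.2, "gripper.pos": 28.9}

print(positions_match(leader, follower))  # -> True for a well-calibrated pair
```

If the check fails for a joint, re-run calibration and make sure that joint was swept through its full range of motion.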
#### Leader

Repeat the same steps to calibrate the leader arm, using the following command or API example:

<hfoptions id="calibrate_leader">
<hfoption id="Command">

```bash
# Replace the port with your robot's port, and give the arm a unique name via --teleop.id
lerobot-calibrate \
    --teleop.type=so100_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so_leader import SO100LeaderConfig, SO100Leader

config = SO100LeaderConfig(
    port="/dev/tty.usbmodem58760431551",
    id="my_awesome_leader_arm",
)

leader = SO100Leader(config)
leader.connect(calibrate=False)
leader.calibrate()
leader.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
lerobot/docs/source/so101.mdx
ADDED
# SO-101

In the steps below, we explain how to assemble our flagship robot, the SO-101.

## Source the parts

Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with links to source the parts, as well as instructions to 3D print them.
It also offers advice if this is your first time printing, or if you don't own a 3D printer.

## Install LeRobot 🤗

To install LeRobot, follow our [Installation Guide](./installation)

In addition to these instructions, you need to install the Feetech SDK:

```bash
pip install -e ".[feetech]"
```

## Step-by-Step Assembly Instructions

The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader arm, however, uses three differently geared motor variants so that it can both sustain its own weight and be moved without requiring much force. The table below shows which motor is needed for which joint.

| Leader-Arm Axis     | Motor | Gear Ratio |
| ------------------- | :---: | :--------: |
| Base / Shoulder Pan |   1   |  1 / 191   |
| Shoulder Lift       |   2   |  1 / 345   |
| Elbow Flex          |   3   |  1 / 191   |
| Wrist Flex          |   4   |  1 / 147   |
| Wrist Roll          |   5   |  1 / 147   |
| Gripper             |   6   |  1 / 147   |

## Configure the motors

### 1. Find the USB ports associated with each arm

To find the port for each bus servo adapter, connect it to your computer via USB and plug in its power supply. Then run the following script and disconnect the adapter when prompted:

```bash
lerobot-find-port
```

<hfoptions id="example">
<hfoption id="Mac">

Example output:

```
Finding all available ports for the MotorBus.
['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
Remove the USB cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/tty.usbmodem575E0032081
Reconnect the USB cable.
```

Here the detected port is `/dev/tty.usbmodem575E0032081`; note down which arm (leader or follower) it corresponds to.

</hfoption>
<hfoption id="Linux">

On Linux, you might need to give access to the USB ports by running:

```bash
sudo chmod 666 /dev/ttyACM0
sudo chmod 666 /dev/ttyACM1
```

Example output:

```
Finding all available ports for the MotorBus.
['/dev/ttyACM0', '/dev/ttyACM1']
Remove the USB cable from your MotorsBus and press Enter when done.

[...Disconnect corresponding leader or follower arm and press Enter...]

The port of this MotorsBus is /dev/ttyACM1
Reconnect the USB cable.
```

Here the detected port is `/dev/ttyACM1`; note down which arm (leader or follower) it corresponds to.

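The `chmod` fix does not survive a reboot. To make the permission persistent, you can add a udev rule; this is standard Linux administration rather than a LeRobot feature, and the vendor/product IDs below are placeholders — read the real ones for your adapter from `lsusb` or `udevadm info --name=/dev/ttyACM0`.

```
# /etc/udev/rules.d/99-lerobot-serial.rules
# Replace idVendor/idProduct with the values reported for your serial adapter.
SUBSYSTEM=="tty", ATTRS{idVendor}=="1a86", ATTRS{idProduct}=="7523", MODE="0666"
```

After creating the file, reload the rules with `sudo udevadm control --reload-rules && sudo udevadm trigger` and replug the adapter.
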
</hfoption>
</hfoptions>

### 2. Set the motor IDs and baudrates

Each motor is identified by a unique ID on the bus. Brand-new motors usually ship with a default ID of `1`. For communication to work properly between the motors and the controller, we first need to assign a unique ID to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate; to talk to each other, the controller and all the motors need to be configured with the same baudrate.

To that end, we first connect the controller to each motor individually to set these parameters. Since they are written to the non-volatile section of the motors' internal memory (EEPROM), this only needs to be done once.

If you are repurposing motors from another robot, you will probably also need to perform this step, as the IDs and baudrate are unlikely to match.

The video below shows the sequence of steps for setting the motor IDs.

##### Setup motors video

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/setup_motors_so101_2.mp4"
      type="video/mp4"
    />
  </video>
</div>

#### Follower

Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then run the following command, or the API example, with the port you found in the previous step. You'll also need to give your follower arm a name with the `id` parameter.

<hfoptions id="setup_motors">
<hfoption id="Command">

```bash
lerobot-setup-motors \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.robots.so_follower import SO101Follower, SO101FollowerConfig

config = SO101FollowerConfig(
    port="/dev/tty.usbmodem585A0076841",
    id="my_awesome_follower_arm",
)
follower = SO101Follower(config)
follower.setup_motors()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

You should see the following instruction:

```bash
Connect the controller board to the 'gripper' motor only and press enter.
```

As instructed, plug in the gripper's motor. Make sure it is the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. When you press `[Enter]`, the script automatically sets the ID and baudrate for that motor.

<details>
<summary>Troubleshooting</summary>

If you get an error at this point, check your cables and make sure they are plugged in properly:

<ul>
<li>Power supply</li>
<li>USB cable between your computer and the controller board</li>
<li>The 3-pin cable from the controller board to the motor</li>
</ul>

If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).

</details>

You should then see the following message:

```bash
'gripper' motor id set to 6
```

Followed by the next instruction:

```bash
Connect the controller board to the 'wrist_roll' motor only and press enter.
```

You can disconnect the 3-pin cable from the controller board, but leave it connected to the gripper motor on the other end, as it is already in the right place. Now plug another 3-pin cable into the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that it isn't daisy-chained to any other motor.

Repeat the operation for each motor as instructed.

> [!TIP]
> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.

When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.

#### Leader

Do the same steps for the leader arm.

<hfoptions id="setup_motors">
<hfoption id="Command">

```bash
lerobot-setup-motors \
    --teleop.type=so101_leader \
    --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so_leader import SO101Leader, SO101LeaderConfig

config = SO101LeaderConfig(
    port="/dev/tty.usbmodem585A0076841",
    id="my_awesome_leader_arm",
)
leader = SO101Leader(config)
leader.setup_motors()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

### Clean Parts

Remove all support material from the 3D-printed parts. The easiest way to do this is with a small screwdriver, used to get underneath the support material.

It is advisable to connect a 3-pin cable to each motor right after placing it, before continuing the assembly.

### Joint 1

- Place the first motor into the base.
- Fasten the motor with 4 M2x6mm screws (the smallest screws), two from the top and two from the bottom.
- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
- Install both motor horns, securing the top horn with an M3x6mm screw.
- Attach the shoulder part.
- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom.
- Add the shoulder motor holder.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint1_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

### Joint 2

- Slide the second motor in from the top.
- Fasten the second motor with 4 M2x6mm screws.
- Attach both motor horns to motor 2, again using the M3x6mm horn screw.
- Attach the upper arm with 4 M3x6mm screws on each side.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint2_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

### Joint 3

- Insert motor 3 and fasten it using 4 M2x6mm screws.
- Attach both motor horns to motor 3, again securing one with an M3x6mm horn screw.
- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint3_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

### Joint 4

- Slide over motor holder 4.
- Slide in motor 4.
- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, using an M3x6mm horn screw.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint4_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

### Joint 5

- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
- Install only one motor horn on the wrist motor and secure it with an M3x6mm horn screw.
- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Joint5_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

### Gripper / Handle

<hfoptions id="assembly">
<hfoption id="Follower">

- Attach the gripper to motor 5: fasten it to the motor horn on the wrist using 4 M3x6mm screws.
- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
- Attach the motor horns, again using an M3x6mm horn screw.
- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Gripper_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

</hfoption>
<hfoption id="Leader">

- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
- Attach the handle to motor 5 using 1 M2x6mm screw.
- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, and attach a motor horn using an M3x6mm horn screw.
- Attach the follower trigger with 4 M3x6mm screws.

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/Leader_v2.mp4"
      type="video/mp4"
    />
  </video>
</div>

</hfoption>
</hfoptions>

## Calibrate

Next, you'll need to calibrate your robot to ensure that the leader and follower arms report the same position values when they are in the same physical position.
Calibration is important because it allows a neural network trained on one robot to work on another.

#### Follower

Run the following command or API example to calibrate the follower arm:

<hfoptions id="calibrate_follower">
<hfoption id="Command">

```bash
# Replace the port with your robot's port, and give the arm a unique name via --robot.id
lerobot-calibrate \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem58760431551 \
    --robot.id=my_awesome_follower_arm
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.robots.so_follower import SO101FollowerConfig, SO101Follower

config = SO101FollowerConfig(
    port="/dev/tty.usbmodem585A0076891",
    id="my_awesome_follower_arm",
)

follower = SO101Follower(config)
follower.connect(calibrate=False)
follower.calibrate()
follower.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

The video below shows how to perform the calibration. First, move the robot to the position where all joints are in the middle of their ranges. Then, after pressing `Enter`, move each joint through its full range of motion.

##### Calibration video

<div class="video-container">
  <video controls width="600">
    <source
      src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibrate_so101_2.mp4"
      type="video/mp4"
    />
  </video>
</div>

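Conceptually, sweeping each joint lets the software record the extremes of motion as a running minimum and maximum per joint. The snippet below is a simplified, self-contained illustration of that idea, not the actual LeRobot implementation (the raw encoder values are made up):

```python
def record_range(samples):
    """Track the observed min/max per joint over a stream of raw readings."""
    ranges = {}
    for reading in samples:
        for joint, value in reading.items():
            lo, hi = ranges.get(joint, (value, value))
            ranges[joint] = (min(lo, value), max(hi, value))
    return ranges

# Simulated raw encoder readings while the wrist is moved back and forth.
sweep = [
    {"wrist_flex": 1024},
    {"wrist_flex": 512},
    {"wrist_flex": 3071},
    {"wrist_flex": 2048},
]
print(record_range(sweep))  # -> {'wrist_flex': (512, 3071)}
```

This is why it matters to move every joint all the way to both of its limits: any range you skip is never observed and the stored bounds end up too narrow.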
#### Leader

Repeat the same steps to calibrate the leader arm, using the following command or API example:

<hfoptions id="calibrate_leader">
<hfoption id="Command">

```bash
# Replace the port with your robot's port, and give the arm a unique name via --teleop.id
lerobot-calibrate \
    --teleop.type=so101_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm
```

</hfoption>
<hfoption id="API example">

<!-- prettier-ignore-start -->
```python
from lerobot.teleoperators.so_leader import SO101LeaderConfig, SO101Leader

config = SO101LeaderConfig(
    port="/dev/tty.usbmodem58760431551",
    id="my_awesome_leader_arm",
)

leader = SO101Leader(config)
leader.connect(calibrate=False)
leader.calibrate()
leader.disconnect()
```
<!-- prettier-ignore-end -->

</hfoption>
</hfoptions>

Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)

> [!TIP]
> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).