-
-_Watch this tutorial from the LeRobot team to learn how ACT works: [LeRobot ACT Tutorial](https://www.youtube.com/watch?v=ft73x0LfGpM)_
-
-## Model Overview
-
-Action Chunking with Transformers (ACT) was introduced in the paper [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://arxiv.org/abs/2304.13705) by Zhao et al. The policy was designed to enable precise, contact-rich manipulation tasks using affordable hardware and minimal demonstration data.
-
-### Why ACT is Great for Beginners
-
-ACT stands out as an excellent starting point for several reasons:
-
-- **Fast Training**: Trains in a few hours on a single GPU
-- **Lightweight**: Only ~80M parameters, making it efficient and easy to work with
-- **Data Efficient**: Often achieves high success rates with just 50 demonstrations
-
-### Architecture
-
-ACT uses a transformer-based architecture with three main components:
-
-1. **Vision Backbone**: ResNet-18 processes images from multiple camera viewpoints
-2. **Transformer Encoder**: Synthesizes information from camera features, joint positions, and a learned latent variable
-3. **Transformer Decoder**: Generates coherent action sequences using cross-attention
-
-The policy takes as input:
-
-- Multiple RGB images (e.g., from wrist cameras, front/top cameras)
-- Current robot joint positions
-- A latent style variable `z` (learned during training, set to zero during inference)
-
-And outputs a chunk of `k` future actions.
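-
-To make the inputs and outputs concrete, here is a minimal inference sketch. The module path, checkpoint, camera key, and dimensions below are illustrative assumptions (check the checkpoint's `config.json` for the exact features it expects):
-
-```python
-import torch
-from lerobot.policies.act.modeling_act import ACTPolicy
-
-# Load a pretrained ACT policy from the Hub (checkpoint name is an example).
-policy = ACTPolicy.from_pretrained("lerobot/act_aloha_sim_transfer_cube_human")
-policy.eval()
-
-# Dummy observation mirroring the inputs listed above.
-batch = {
-    "observation.state": torch.zeros(1, 14),                # joint positions
-    "observation.images.top": torch.zeros(1, 3, 480, 640),  # RGB image in [0, 1]
-}
-
-with torch.no_grad():
-    action = policy.select_action(batch)  # returns one action from the predicted chunk
-
-print(action.shape)  # (1, action_dim)
-```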
-
-## Installation Requirements
-
-1. Install LeRobot by following our [Installation Guide](./installation).
-2. ACT is included in the base LeRobot installation, so no additional dependencies are needed!
-
-## Training ACT
-
-ACT works seamlessly with the standard LeRobot training pipeline. Here's a complete example for training ACT on your dataset:
-
-```bash
-lerobot-train \
- --dataset.repo_id=${HF_USER}/your_dataset \
- --policy.type=act \
- --output_dir=outputs/train/act_your_dataset \
- --job_name=act_your_dataset \
- --policy.device=cuda \
- --wandb.enable=true \
- --policy.repo_id=${HF_USER}/act_policy
-```
-
-### Training Tips
-
-1. **Start with defaults**: ACT's default hyperparameters work well for most tasks
-2. **Training duration**: Expect a few hours for 100k training steps on a single GPU
-3. **Batch size**: Start with batch size 8 and adjust based on your GPU memory
-
-### Train using Google Colab
-
-If your local computer doesn't have a powerful GPU, you can utilize Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
-
-## Evaluating ACT
-
-Once training is complete, you can evaluate your ACT policy using the `lerobot-record` command with your trained policy. This will run inference and record evaluation episodes:
-
-```bash
-lerobot-record \
- --robot.type=so100_follower \
- --robot.port=/dev/ttyACM0 \
- --robot.id=my_robot \
- --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
- --display_data=true \
- --dataset.repo_id=${HF_USER}/eval_act_your_dataset \
- --dataset.num_episodes=10 \
- --dataset.single_task="Your task description" \
- --policy.path=${HF_USER}/act_policy
-```
diff --git a/lerobot/docs/source/async.mdx b/lerobot/docs/source/async.mdx
deleted file mode 100644
index 732041a19299f5c96c888bb06a8c49e4fd41703e..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/async.mdx
+++ /dev/null
@@ -1,312 +0,0 @@
-# Asynchronous Inference
-
-With [SmolVLA](https://huggingface.co/papers/2506.01844) we introduced a new way to run inference on real-world robots: **decoupling action prediction from action execution**.
-In this tutorial, we'll show how to use asynchronous inference (_async inference_) with a finetuned version of SmolVLA, and with any of the policies supported by LeRobot.
-
-**What you'll learn:**
-
-1. Why asynchronous inference matters and how it compares to more traditional sequential inference.
-2. How to spin up a `PolicyServer` and connect a `RobotClient`, either on the same machine or over the network.
-3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy.
-
-If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)!
-
-In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions, eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
-This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.
-
----
-
-## Getting started with async inference
-
-You can read more information on asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment.
-
-First, install `lerobot` with the `async` extra, which pulls in the additional dependencies required for async inference.
-
-```shell
-pip install -e ".[async]"
-```
-
-Then, spin up a policy server (in another terminal, or on a separate machine), specifying the host address and port the client will connect to:
-
-```shell
-python -m lerobot.async_inference.policy_server \
- --host=127.0.0.1 \
- --port=8080
-```
-
-This will start a policy server listening on `127.0.0.1:8080` (`localhost`, port 8080). At this stage the policy server is empty: all information about which policy to run, and with which parameters, is provided during the first handshake with the client. Spin up a client with:
-
-```shell
-python -m lerobot.async_inference.robot_client \
- --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
- --robot.type=so100_follower \ # ROBOT: your robot type
- --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
- --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
- --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
- --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
- --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
- --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
- --policy_device=mps \ # POLICY: the device to run the policy on, on the server
- --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
- --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
- --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
- --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
-```
-
-In summary, you need to specify instructions for:
-
-- `SERVER`: the address and port of the policy server
-- `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot
- `POLICY`: the type of policy to run, and the name/path of the checkpoint on the server. You also need to specify which device the server should use, and how many actions to output at once (capped at the policy's maximum).
-- `CLIENT`: the threshold for the chunk size before sending a new observation to the server, and the function to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters.
-
-Importantly,
-
-- `actions_per_chunk` and `chunk_size_threshold` are key parameters to tune for your setup.
- `aggregate_fn_name` is the function used to aggregate actions on the overlapping portions of consecutive chunks. You can either add a new one to the registry of functions, or add your own in `robot_client.py` (see [here](NOTE:addlinktoLOC), and the sketch after this list)
-- `debug_visualize_queue_size` is a useful tool to tune the `CLIENT` parameters.
-
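-As an illustration, a weighted-average aggregator can blend the overlapping timesteps of the previous chunk and the freshly received one, ramping trust towards the newer predictions. This is a minimal sketch of the idea, not the exact signature used in `robot_client.py`:
-
-```python
-import torch
-
-def weighted_average(old_chunk: torch.Tensor, new_chunk: torch.Tensor) -> torch.Tensor:
-    """Blend two (overlap_len, action_dim) tensors covering the same timesteps."""
-    overlap = old_chunk.shape[0]
-    # Weight 0 -> keep the old plan; weight 1 -> fully trust the new chunk.
-    w = torch.linspace(0.0, 1.0, overlap).unsqueeze(-1)
-    return (1.0 - w) * old_chunk + w * new_chunk
-```
-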
-## Done! You should see your robot moving around by now 😉
-
-## Async vs. synchronous inference
-
-Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_: frames during which the robot sits still, waiting for the policy's output (a new action chunk).
-In turn, execution is plagued by visible real-time lags, where the robot simply stops acting due to the lack of available actions.
-With robotics models growing in size, this problem risks becoming only more severe.
-
-
-_Figure: synchronous inference makes the robot idle while the policy is computing the next chunk of actions._
-
-To overcome this, we designed async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames.
-Crucially, with async inference, the next action chunk is computed _before_ the current one is exhausted, resulting in no idleness.
-Higher adaptability is ensured by aggregating the different action chunks on overlapping portions, obtaining an up-to-date plan and a tighter control loop.
-
-
-_Figure: asynchronous inference results in no idleness because the next chunk is computed before the current chunk is exhausted._
-
----
-
-## Start the Policy Server
-
-Policy servers are wrappers around a `PreTrainedPolicy`, interfacing it with observations coming from a robot client.
-Policy servers are initialized as empty containers which are populated with the requested policy specified in the initial handshake between the robot client and the policy server.
-As such, spinning up a policy server is as easy as specifying the host address and port. If you're running the policy server on the same machine as the robot client, you can use `localhost` as the host address.
-
-
-
-```bash
-python -m lerobot.async_inference.policy_server \
- --host=127.0.0.1 \
- --port=8080
-```
-
-
-
-
-```python
-from lerobot.async_inference.configs import PolicyServerConfig
-from lerobot.async_inference.policy_server import serve
-
-config = PolicyServerConfig(
- host="localhost",
- port=8080,
-)
-serve(config)
-```
-
-
-
-
-
-This listens on `localhost:8080` for an incoming connection from the associated `RobotClient`, which will communicate which policy to run during the first client-server handshake.
-
----
-
-## Launch the Robot Client
-
-`RobotClient` is a wrapper around a `Robot` instance that connects to the (possibly remote) `PolicyServer`.
-The `RobotClient` streams observations to the `PolicyServer` and receives action chunks produced by running inference on the server (which we assume has better computational resources than the robot controller).
-
-
-
-```bash
-python -m lerobot.async_inference.robot_client \
- --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
- --robot.type=so100_follower \ # ROBOT: your robot type
- --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
- --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
- --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
- --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
- --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
- --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
- --policy_device=mps \ # POLICY: the device to run the policy on, on the server
- --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
- --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
- --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
- --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
-```
-
-
-
-
-```python
-import threading
-from lerobot.robots.so100_follower import SO100FollowerConfig
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-from lerobot.async_inference.configs import RobotClientConfig
-from lerobot.async_inference.robot_client import RobotClient
-from lerobot.async_inference.helpers import visualize_action_queue_size
-
-# 1. Create the robot instance
-# Check the cameras available in your setup by running `lerobot-find-cameras`.
-# These cameras must match the ones expected by the policy:
-# check the config.json on the Hub for the policy you are using.
-camera_cfg = {
- "top": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
- "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30)
-}
-
-robot_cfg = SO100FollowerConfig(
- port="/dev/tty.usbmodem585A0076841",
- id="follower_so100",
- cameras=camera_cfg
-)
-
-# 2. Create client configuration
-client_cfg = RobotClientConfig(
- robot=robot_cfg,
- server_address="localhost:8080",
- policy_device="mps",
- policy_type="smolvla",
- pretrained_name_or_path="/smolvla_async",
- chunk_size_threshold=0.5,
- actions_per_chunk=50, # make sure this is less than the max actions of the policy
-)
-
-# 3. Create and start client
-client = RobotClient(client_cfg)
-
-# 4. Specify the task
-task = "Don't do anything, stay still"
-
-if client.start():
- # Start action receiver thread
- action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True)
- action_receiver_thread.start()
-
- try:
- # Run the control loop
- client.control_loop(task)
- except KeyboardInterrupt:
- client.stop()
- action_receiver_thread.join()
- # (Optionally) plot the action queue size
- visualize_action_queue_size(client.action_queue_size)
-```
-
-
-
-
-
-The following two parameters are key in every setup:
-
-
-| Hyperparameter         | Default | What it does                                                                              |
-| ---------------------- | ------- | ----------------------------------------------------------------------------------------- |
-| `actions_per_chunk`    | 50      | How many actions the policy outputs at once. Typical values: 10-50.                        |
-| `chunk_size_threshold` | 0.7     | Fraction of the queue below which the client sends a fresh observation. Value in [0, 1]. |
-
-Different values of `actions_per_chunk` and `chunk_size_threshold` result in different behaviours.
-
-On the one hand, increasing `actions_per_chunk` reduces the likelihood of running out of actions to execute, as more actions remain available while the new chunk is being computed.
-However, larger values of `actions_per_chunk` may also result in less precise actions, due to the compounding errors that come with predicting actions over longer timespans.
-
-On the other hand, increasing `chunk_size_threshold` sends observations to the `PolicyServer` for inference more often, producing more frequently updated action chunks that overlap on significant portions. This results in high adaptability: in the limit, one action chunk is predicted for each observation, and each chunk is only marginally consumed before a new one is produced.
-This also puts more pressure on the inference pipeline, as a consequence of the many requests. Conversely, values of `chunk_size_threshold` close to 0.0 collapse to the synchronous edge case, whereby a new observation is only sent out once the current chunk is exhausted.
-
-We found the default values of `actions_per_chunk` and `chunk_size_threshold` to work well in the experiments we developed for the [SmolVLA paper](https://huggingface.co/papers/2506.01844), but recommend experimenting with different values to find the best fit for your setup.
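-
-To make the interplay of these two parameters concrete, here is a toy sketch of the client-side logic; it is an illustration of the queue mechanics, not the actual `RobotClient` implementation:
-
-```python
-from collections import deque
-
-ACTIONS_PER_CHUNK = 50
-CHUNK_SIZE_THRESHOLD = 0.7
-
-queue: deque = deque(range(ACTIONS_PER_CHUNK))  # pretend a first chunk was received
-
-def should_request_new_chunk() -> bool:
-    # Send a fresh observation once the queue drains below the threshold fraction.
-    return len(queue) <= CHUNK_SIZE_THRESHOLD * ACTIONS_PER_CHUNK
-
-for step in range(30):
-    if should_request_new_chunk():
-        print(f"step {step}: queue={len(queue)} -> send observation to server")
-    if queue:
-        queue.popleft()  # keep executing actions while inference runs remotely
-```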
-
-### Tuning async inference for your setup
-
-1. **Choose your computational resources carefully.** [PI0](https://huggingface.co/lerobot/pi0) occupies 14GB of memory at inference time, while [SmolVLA](https://huggingface.co/lerobot/smolvla_base) requires only ~2GB. Identify the best computational resource for your use case, keeping in mind that smaller policies require fewer resources. The combination of policy and device (CPU, Apple Silicon MPS, or the CUDA cores of a given NVIDIA GPU) directly impacts the average inference latency you should expect.
-2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle: it keeps stepping through its current action queue. If the two processes run at fundamentally different speeds, the client may end up with an empty queue (for instance, at 30 fps, a 0.5 s inference round-trip consumes ~15 queued actions). You should therefore reduce your fps if you consistently run out of actions in the queue.
-3. **Adjust `chunk_size_threshold`.**
-   - Values closer to `0.0` result in almost-sequential behavior; values closer to `1.0` send an observation at nearly every step (more bandwidth; relies on a good world model).
-   - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` with `--debug_visualize_queue_size` set to `True`. This plots the evolution of the action queue size at runtime, which you can use to find the `chunk_size_threshold` that works best for your setup.
-
-
-_Figure: the action queue size plotted at runtime when the `--debug_visualize_queue_size` flag is passed, for various levels of `chunk_size_threshold` (`g` in the SmolVLA paper)._
-
-
----
-
-## Conclusion
-
-Asynchronous inference represents a significant advancement in real-time robotics control, addressing the fundamental challenge of inference latency that has long plagued robotics applications. Through this tutorial, you've learned how to implement a complete async inference pipeline that eliminates idle frames and enables smoother, more reactive robot behaviors.
-
-**Key Takeaways:**
-
-- **Paradigm Shift**: Async inference decouples action prediction from execution, allowing robots to continue acting while new action chunks are computed in parallel
-- **Performance Benefits**: Eliminates "wait-for-inference" lags that are inherent in synchronous approaches, becoming increasingly important as policy models grow larger
-- **Flexible Architecture**: The server-client design enables distributed computing, where inference can run on powerful remote hardware while maintaining real-time robot control
-- **Tunable Parameters**: Success depends on properly configuring `actions_per_chunk` and `chunk_size_threshold` for your specific hardware, policy, and task requirements
-- **Universal Compatibility**: Works with all LeRobot-supported policies, from lightweight ACT models to vision-language models like SmolVLA
-
-Start experimenting with the default parameters, monitor your action queue sizes, and iteratively refine your setup to achieve optimal performance for your specific use case.
-If you want to discuss this further, hop into our [Discord community](https://discord.gg/s3KuuzsPFb), or open an issue on our [GitHub repository](https://github.com/lerobot/lerobot/issues).
diff --git a/lerobot/docs/source/backwardcomp.mdx b/lerobot/docs/source/backwardcomp.mdx
deleted file mode 100644
index a0546eee722d123bcb7ab8ebaeedcd27b936f6f6..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/backwardcomp.mdx
+++ /dev/null
@@ -1,151 +0,0 @@
-# Backward compatibility
-
-## Policy Normalization Migration (PR #1452)
-
-**Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components.
-
-### What changed?
-
-| | Before PR #1452 | After PR #1452 |
-| -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
-| **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components |
-| **Model State Dict** | Contains normalization statistics | **Clean weights only** - no normalization parameters |
-| **Usage** | `policy(batch)` handles everything | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` |
-
-### Impact on existing models
-
-- Models trained **before** PR #1452 have normalization embedded in their weights
-- These models need migration to work with the new `PolicyProcessorPipeline` system
-- The migration extracts normalization statistics and creates separate processor pipelines
-
-### Migrating old models
-
-Use the migration script to convert models with embedded normalization:
-
-```shell
-python src/lerobot/processor/migrate_policy_normalization.py \
- --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
- --push-to-hub \
- --branch migrated
-```
-
-The script:
-
-1. **Extracts** normalization statistics from model weights
-2. **Creates** external preprocessor and postprocessor pipelines
-3. **Removes** normalization layers from model weights
-4. **Saves** clean model + processor pipelines
-5. **Pushes** to Hub with automatic PR creation
-
-### Using migrated models
-
-```python
-# New usage pattern (after migration)
-from lerobot.policies.factory import make_policy, make_pre_post_processors
-
-# Load model and processors separately
-policy = make_policy(config, ds_meta=dataset.meta)
-preprocessor, postprocessor = make_pre_post_processors(
- policy_cfg=config,
- dataset_stats=dataset.meta.stats
-)
-
-# Process data through pipeline
-processed_batch = preprocessor(raw_batch)
-action = policy.select_action(processed_batch)
-final_action = postprocessor(action)
-```
-
-## Hardware API redesign
-
-PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot calibration but is **not backward-compatible**. Below is an overview of what changed and how you can continue to work with datasets created before this pull request.
-
-### What changed?
-
-| | Before PR #777 | After PR #777 |
-| --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
-| **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `-100...100` Gripper: `0...100` |
-| **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In the middle of the range for each joint** |
-| **Boundary handling** | Software safeguards to detect ±180° wrap-arounds | No wrap-around logic needed due to mid-range zero |
-
----
-
-### Impact on existing datasets
-
-- Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
- - Joint angles are offset and incorrectly normalized.
-- Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
-
-### Using datasets made with the previous calibration system
-
-We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`.
-Below we take you through the modifications that are done in the example script to make the previous calibration datasets work.
-
-```diff
-+ key = f"{name.removeprefix('main_')}.pos"
- action[key] = action_array[i].item()
-+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
-+ action["elbow_flex.pos"] -= 90
-```
-
-Let's break this down.
-The new codebase uses a `.pos` suffix for position entries, and the `main_` prefix has been removed:
-
-
-```python
-key = f"{name.removeprefix('main_')}.pos"
-```
-
-
-For `"shoulder_lift"` (id = 2), the 0 position is changed by -90 degrees and the direction is reversed compared to old calibration/code.
-
-
-```python
-action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
-```
-
-
-For `"elbow_flex"` (id = 3), the 0 position is changed by -90 degrees compared to old calibration/code.
-
-
-```python
-action["elbow_flex.pos"] -= 90
-```
-
-
-To use degrees for normalization, we then set the `--robot.use_degrees` option to `true`:
-
-```diff
-python examples/backward_compatibility/replay.py \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem5A460814411 \
- --robot.id=blue \
-+ --robot.use_degrees=true \
- --dataset.repo_id=my_dataset_id \
- --dataset.episode=0
-```
-
-### Using policies trained with the previous calibration system
-
-Policies output actions in the same format as the datasets (`torch.Tensors`). Therefore, the same transformations should be applied.
-
-To find these transformations, we recommend first replaying an episode of the dataset your policy was trained on, following the section above.
-Then, add the same transformations to your inference script (shown here for the `record.py` script):
-
-```diff
-action_values = predict_action(
- observation_frame,
- policy,
- get_safe_torch_device(policy.config.device),
- policy.config.use_amp,
- task=single_task,
- robot_type=robot.robot_type,
- )
- action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)}
-
-+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
-+ action["elbow_flex.pos"] -= 90
- robot.send_action(action)
-```
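-
-If you prefer, the per-joint fixes above can be collected into a small helper and reused in both replay and inference scripts. A minimal sketch (the helper name is ours, not part of LeRobot):
-
-```python
-def convert_old_calibration_action(action: dict) -> dict:
-    """Map an action recorded with the old calibration to the new convention."""
-    action = dict(action)  # avoid mutating the caller's dict
-    action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
-    action["elbow_flex.pos"] -= 90
-    return action
-```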
-
-If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb).
diff --git a/lerobot/docs/source/bring_your_own_policies.mdx b/lerobot/docs/source/bring_your_own_policies.mdx
deleted file mode 100644
index df1401fac82dccb9f7d4d5c3f13dbf7cf4ae0b82..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/bring_your_own_policies.mdx
+++ /dev/null
@@ -1,175 +0,0 @@
-# Bring Your Own Policies
-
-This tutorial explains how to integrate your own custom policy implementations into the LeRobot ecosystem, allowing you to leverage all LeRobot tools for training, evaluation, and deployment while using your own algorithms.
-
-## Step 1: Create a Policy Package
-
-Your custom policy should be organized as an installable Python package following LeRobot's plugin conventions.
-
-### Package Structure
-
-Create a package with the prefix `lerobot_policy_` (IMPORTANT!) followed by your policy name:
-
-```bash
-lerobot_policy_my_custom_policy/
-├── pyproject.toml
-└── src/
- └── lerobot_policy_my_custom_policy/
- ├── __init__.py
- ├── configuration_my_custom_policy.py
- ├── modeling_my_custom_policy.py
- └── processor_my_custom_policy.py
-```
-
-### Package Configuration
-
-Set up your `pyproject.toml`:
-
-```toml
-[project]
-name = "lerobot_policy_my_custom_policy"
-version = "0.1.0"
-dependencies = [
- # your policy-specific dependencies
-]
-requires-python = ">= 3.11"
-
-[build-system]
-# Any PEP 517 build backend works; hatchling is shown here as an example.
-build-backend = "hatchling.build"
-requires = ["hatchling"]
-```
-
-## Step 2: Define the Policy Configuration
-
-Create a configuration class that inherits from `PreTrainedConfig` and registers your policy type:
-
-```python
-# configuration_my_custom_policy.py
-from dataclasses import dataclass, field
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.configs.types import NormalizationMode
-
-@PreTrainedConfig.register_subclass("my_custom_policy")
-@dataclass
-class MyCustomPolicyConfig(PreTrainedConfig):
- """Configuration class for MyCustomPolicy.
-
- Args:
- n_obs_steps: Number of observation steps to use as input
- horizon: Action prediction horizon
- n_action_steps: Number of action steps to execute
- hidden_dim: Hidden dimension for the policy network
- # Add your policy-specific parameters here
- """
-    # Example hyperparameters matching the docstring above (values are illustrative):
-    n_obs_steps: int = 1
-    horizon: int = 16
-    n_action_steps: int = 8
-    hidden_dim: int = 512
-
- def __post_init__(self):
- super().__post_init__()
- # Add any validation logic here
-
- def validate_features(self) -> None:
- """Validate input/output feature compatibility."""
- # Implement validation logic for your policy's requirements
- pass
-```
-
-## Step 3: Implement the Policy Class
-
-Create your policy implementation by inheriting from LeRobot's base `PreTrainedPolicy` class:
-
-```python
-# modeling_my_custom_policy.py
-import torch
-import torch.nn as nn
-from typing import Dict, Any
-
-from lerobot.policies.pretrained import PreTrainedPolicy
-from .configuration_my_custom_policy import MyCustomPolicyConfig
-
-class MyCustomPolicy(PreTrainedPolicy):
- config_class = MyCustomPolicyConfig
- name = "my_custom_policy"
-
-    def __init__(self, config: MyCustomPolicyConfig, dataset_stats: Dict[str, Any] = None):
-        super().__init__(config, dataset_stats)
-        # Build your networks here (encoders, action head, ...).
-
-    # Implement the abstract methods of `PreTrainedPolicy` (the exact set may
-    # vary with your LeRobot version); typically something like:
-    def get_optim_params(self) -> dict:
-        ...
-
-    def reset(self):
-        """Called on environment reset, e.g. to clear cached action queues."""
-        ...
-
-    def forward(self, batch: Dict[str, Any]):
-        """Compute the training loss and metrics for a batch."""
-        ...
-
-    def select_action(self, batch: Dict[str, Any]) -> torch.Tensor:
-        """Return the next action to execute at inference time."""
-        ...
-```
-
-## Step 4: Add Data Processors
-
-Create processor functions:
-
-```python
-# processor_my_custom_policy.py
-from typing import Any
-
-# Note: import paths may vary with your LeRobot version.
-from lerobot.processor import PolicyAction, PolicyProcessorPipeline
-
-
-def make_my_custom_policy_pre_post_processors(
- config,
-) -> tuple[
- PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- PolicyProcessorPipeline[PolicyAction, PolicyAction],
-]:
- """Create preprocessing and postprocessing functions for your policy."""
- pass # Define your preprocessing and postprocessing logic here
-
-```
-
-## Step 5: Package Initialization
-
-Expose your classes in the package's `__init__.py`:
-
-```python
-# __init__.py
-"""Custom policy package for LeRobot."""
-
-try:
- import lerobot # noqa: F401
-except ImportError:
- raise ImportError(
- "lerobot is not installed. Please install lerobot to use this policy package."
- )
-
-from .configuration_my_custom_policy import MyCustomPolicyConfig
-from .modeling_my_custom_policy import MyCustomPolicy
-from .processor_my_custom_policy import make_my_custom_policy_pre_post_processors
-
-__all__ = [
- "MyCustomPolicyConfig",
- "MyCustomPolicy",
- "make_my_custom_policy_pre_post_processors",
-]
-```
-
-## Step 6: Installation and Usage
-
-### Install Your Policy Package
-
-```bash
-cd lerobot_policy_my_custom_policy
-pip install -e .
-
-# Or install from PyPI if published
-pip install lerobot_policy_my_custom_policy
-```
-
-### Use Your Policy
-
-Once installed, your policy automatically integrates with LeRobot's training and evaluation tools:
-
-```bash
-lerobot-train \
- --policy.type my_custom_policy \
- --env.type pusht \
- --steps 200000
-```
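-
-You can also sanity-check that your plugin is discoverable from Python. A minimal sketch, assuming `PreTrainedConfig` exposes the standard choice-registry lookup that `register_subclass` writes to:
-
-```python
-import lerobot_policy_my_custom_policy  # registers "my_custom_policy" on import
-from lerobot.configs.policies import PreTrainedConfig
-
-# Look up the config class registered under the chosen type name.
-print(PreTrainedConfig.get_choice_class("my_custom_policy"))
-```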
-
-## Examples and Community Contributions
-
-Check out these example policy implementations:
-
-- [DiTFlow Policy](https://github.com/danielsanjosepro/lerobot_policy_ditflow) - Diffusion Transformer policy with flow-matching objective. Try it out in this example: [DiTFlow Example](https://github.com/danielsanjosepro/test_lerobot_policy_ditflow)
-
-Share your policy implementations with the community! 🤗
diff --git a/lerobot/docs/source/cameras.mdx b/lerobot/docs/source/cameras.mdx
deleted file mode 100644
index 98205ce105bb2191f67da993cc6d85e0f7873b42..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/cameras.mdx
+++ /dev/null
@@ -1,206 +0,0 @@
-# Cameras
-
-LeRobot offers multiple options for video capture, including phone cameras, built-in laptop cameras, external webcams, and Intel RealSense cameras. To efficiently record frames from most cameras, you can use either the `OpenCVCamera` or `RealSenseCamera` class. For additional compatibility details on the `OpenCVCamera` class, refer to the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
-
-### Finding your camera
-
-To instantiate a camera, you need a camera identifier. This identifier might change if you reboot your computer or re-plug your camera, a behavior that mostly depends on your operating system.
-
-To find the camera indices of the cameras plugged into your system, run the following script:
-
-```bash
-lerobot-find-cameras opencv # or realsense for Intel Realsense cameras
-```
-
-The output will look something like this if you have two cameras connected:
-
-```
---- Detected Cameras ---
-Camera #0:
- Name: OpenCV Camera @ 0
- Type: OpenCV
- Id: 0
- Backend api: AVFOUNDATION
- Default stream profile:
- Format: 16.0
- Width: 1920
- Height: 1080
- Fps: 15.0
---------------------
-(more cameras ...)
-```
-
-> [!WARNING]
-> When using Intel RealSense cameras on `macOS`, you may get the following [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`. This can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras on `macOS` is unstable.
-
-## Use Cameras
-
-Below are two examples demonstrating how to work with the API:
-
-- **Asynchronous frame capture** using an OpenCV-based camera
-- **Color and depth capture** using an Intel RealSense camera
-
-
-
-
-
-```python
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
-from lerobot.cameras.configs import ColorMode, Cv2Rotation
-
-# Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation.
-config = OpenCVCameraConfig(
- index_or_path=0,
- fps=15,
- width=1920,
- height=1080,
- color_mode=ColorMode.RGB,
- rotation=Cv2Rotation.NO_ROTATION
-)
-
-# Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default).
-camera = OpenCVCamera(config)
-camera.connect()
-
-# Read frames asynchronously in a loop via `async_read(timeout_ms)`
-try:
- for i in range(10):
- frame = camera.async_read(timeout_ms=200)
- print(f"Async frame {i} shape:", frame.shape)
-finally:
- camera.disconnect()
-```
-
-
-
-
-
-
-```python
-from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
-from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
-from lerobot.cameras.configs import ColorMode, Cv2Rotation
-
-# Create a `RealSenseCameraConfig` specifying your camera’s serial number and enabling depth.
-config = RealSenseCameraConfig(
- serial_number_or_name="233522074606",
- fps=15,
- width=640,
- height=480,
- color_mode=ColorMode.RGB,
- use_depth=True,
- rotation=Cv2Rotation.NO_ROTATION
-)
-
-# Instantiate and connect a `RealSenseCamera` with warm-up read (default).
-camera = RealSenseCamera(config)
-camera.connect()
-
-# Capture a color frame via `read()` and a depth map via `read_depth()`.
-try:
- color_frame = camera.read()
- depth_map = camera.read_depth()
- print("Color frame shape:", color_frame.shape)
- print("Depth map shape:", depth_map.shape)
-finally:
- camera.disconnect()
-```
-
-
-
-
-
-## Use your phone
-
-
-
-
-To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
-
-- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
-- Sign in both devices with the same Apple ID.
-- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
-
-For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
-
-Your iPhone should be detected automatically when running the camera setup script in the next section.
-
-
-
-
-If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera:
-
-1. _Install `v4l2loopback-dkms` and `v4l-utils`_. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
-
-
-```bash
-sudo apt install v4l2loopback-dkms v4l-utils
-```
-
-
-2. _Install [DroidCam](https://droidcam.app) on your phone_. This app is available for both iOS and Android.
-3. _Install [OBS Studio](https://obsproject.com)_. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
-
-
-```bash
-flatpak install flathub com.obsproject.Studio
-```
-
-
-4. _Install the DroidCam OBS plugin_. This plugin integrates DroidCam with OBS Studio. Install it with:
-
-
-```bash
-flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
-```
-
-
-5. _Start OBS Studio_. Launch with:
-
-
-```bash
-flatpak run com.obsproject.Studio
-```
-
-
-6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
-7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
-8. _Start virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
-9. _Verify the virtual camera setup_. Use `v4l2-ctl` to list the devices:
-
-
-```bash
-v4l2-ctl --list-devices
-```
-
-
-You should see an entry like:
-
-```
-VirtualCam (platform:v4l2loopback-000):
-/dev/video1
-```
-
-10. _Check the camera resolution_. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
-
-
-```bash
-v4l2-ctl -d /dev/video1 --get-fmt-video
-```
-
-
-You should see an entry like:
-
-```
->>> Format Video Capture:
->>> Width/Height : 640/480
->>> Pixel Format : 'YUYV' (YUYV 4:2:2)
-```
-
-Troubleshooting: if the resolution is not correct, you will have to delete the virtual camera port and try again, as the resolution cannot be changed afterwards.
-
-If everything is set up correctly, you can proceed with the rest of the tutorial.
-
-
-
diff --git a/lerobot/docs/source/contributing.md b/lerobot/docs/source/contributing.md
deleted file mode 100644
index f939e75f21a8badb5c40f527abd0e098fe9bc472..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/contributing.md
+++ /dev/null
@@ -1 +0,0 @@
-../../CONTRIBUTING.md
\ No newline at end of file
diff --git a/lerobot/docs/source/debug_processor_pipeline.mdx b/lerobot/docs/source/debug_processor_pipeline.mdx
deleted file mode 100644
index d39eda92c2405ea51cfe20fbb0acf8e3f1f71049..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/debug_processor_pipeline.mdx
+++ /dev/null
@@ -1,299 +0,0 @@
-# Debug Your Processor Pipeline
-
-Processor pipelines can be complex, especially when chaining multiple transformation steps.
-Unlike simple function calls, pipelines lack natural observability: you can't easily see what happens
-between each step or where things go wrong.
-This guide provides debugging tools and techniques specifically designed to address these challenges
-and help you understand data flow through your pipelines.
-
-We'll explore three complementary debugging approaches: **hooks** for runtime monitoring, **step-through debugging** for detailed inspection, and **feature validation** for catching structural mismatches. Each serves a different purpose and together they provide complete visibility into your pipeline's behavior.
-
-## Understanding Hooks
-
-Hooks are functions that get called at specific points during pipeline execution.
-They provide a way to inspect, monitor, or modify data without changing your pipeline code.
-Think of them as "event listeners" for your pipeline.
-
-### What is a Hook?
-
-A hook is a callback function that gets automatically invoked at specific moments during pipeline execution.
-The concept comes from event-driven programming: imagine being able to "hook into" the pipeline's execution flow to observe or react to what's happening.
-
-Think of hooks like inserting checkpoints into your pipeline. Every time the pipeline reaches one of these checkpoints, it pauses briefly to call your hook function, giving you a chance to inspect the current state, log information, and validate data.
-
-A hook is simply a function that accepts two parameters:
-
-- `step_idx: int` - The index of the current processing step (0, 1, 2, etc.)
-- `transition: EnvTransition` - The data transition at that point in the pipeline
-
-The beauty of hooks is their non-invasive nature: you can add monitoring, validation, or debugging logic without changing a single line of your pipeline code. The pipeline remains clean and focused on its core logic, while hooks handle the cross-cutting concerns like logging, monitoring, and debugging.
-
-### Before vs After Hooks
-
-The pipeline supports two types of hooks:
-
-- **Before hooks** (`register_before_step_hook`) - Called before each step executes
-- **After hooks** (`register_after_step_hook`) - Called after each step completes
-
-```python
-def before_hook(step_idx: int, transition: EnvTransition):
- """Called before step processes the transition."""
- print(f"About to execute step {step_idx}")
- # Useful for: logging, validation, setup
-
-def after_hook(step_idx: int, transition: EnvTransition):
- """Called after step has processed the transition."""
- print(f"Completed step {step_idx}")
- # Useful for: monitoring results, cleanup, debugging
-
-processor.register_before_step_hook(before_hook)
-processor.register_after_step_hook(after_hook)
-```
-
-### Implementing a NaN Detection Hook
-
-Here's a practical example of a hook that detects NaN values:
-
-```python
-def check_nans(step_idx: int, transition: EnvTransition):
- """Check for NaN values in observations."""
- obs = transition.get(TransitionKey.OBSERVATION)
- if obs:
- for key, value in obs.items():
- if isinstance(value, torch.Tensor) and torch.isnan(value).any():
- print(f"NaN detected in {key} at step {step_idx}")
-
-# Register the hook to run after each step
-processor.register_after_step_hook(check_nans)
-
-# Process your data - the hook will be called automatically
-output = processor(input_data)
-
-# Remove the hook when done debugging
-processor.unregister_after_step_hook(check_nans)
-```
-
-### How Hooks Work Internally
-
-Understanding the internal mechanism helps you use hooks more effectively. The pipeline maintains two separate lists: one for before-step hooks and another for after-step hooks. When you register a hook, it's simply appended to the appropriate list.
-
-During execution, the pipeline follows a strict sequence: for each processing step, it first calls all before-hooks in registration order, then executes the actual step transformation, and finally calls all after-hooks in registration order. This creates a predictable, sandwich-like structure around each step.
-
-The key insight is that hooks don't change the core pipeline logic—they're purely additive. The pipeline's `_forward` method orchestrates this dance between hooks and processing steps, ensuring that your debugging or monitoring code runs at exactly the right moments without interfering with the main data flow.
-
-Here's a simplified view of how the pipeline executes hooks:
-
-```python
-class DataProcessorPipeline:
- def __init__(self):
- self.steps = [...]
- self.before_step_hooks = [] # List of before hooks
- self.after_step_hooks = [] # List of after hooks
-
- def _forward(self, transition):
- """Internal method that processes the transition through all steps."""
- for step_idx, processor_step in enumerate(self.steps):
- # 1. Call all BEFORE hooks
- for hook in self.before_step_hooks:
- hook(step_idx, transition)
-
- # 2. Execute the actual processing step
- transition = processor_step(transition)
-
- # 3. Call all AFTER hooks
- for hook in self.after_step_hooks:
- hook(step_idx, transition)
-
- return transition
-
- def register_before_step_hook(self, hook_fn):
- self.before_step_hooks.append(hook_fn)
-
- def register_after_step_hook(self, hook_fn):
- self.after_step_hooks.append(hook_fn)
-```
-
-### Execution Flow
-
-The execution flow looks like this:
-
-```
-Input → Before Hook → Step 0 → After Hook → Before Hook → Step 1 → After Hook → ... → Output
-```
-
-For example, with 3 steps and both hook types:
-
-```python
-def timing_before(step_idx, transition):
- print(f"⏱️ Starting step {step_idx}")
-
-def validation_after(step_idx, transition):
- print(f"✅ Completed step {step_idx}")
-
-processor.register_before_step_hook(timing_before)
-processor.register_after_step_hook(validation_after)
-
-# This will output:
-# ⏱️ Starting step 0
-# ✅ Completed step 0
-# ⏱️ Starting step 1
-# ✅ Completed step 1
-# ⏱️ Starting step 2
-# ✅ Completed step 2
-```
-
-### Multiple Hooks
-
-You can register multiple hooks of the same type - they execute in the order registered:
-
-```python
-def log_shapes(step_idx: int, transition: EnvTransition):
- obs = transition.get(TransitionKey.OBSERVATION)
- if obs:
- print(f"Step {step_idx} observation shapes:")
- for key, value in obs.items():
- if isinstance(value, torch.Tensor):
- print(f" {key}: {value.shape}")
-
-processor.register_after_step_hook(check_nans) # Executes first
-processor.register_after_step_hook(log_shapes) # Executes second
-
-# Both hooks will be called after each step in registration order
-output = processor(input_data)
-```
-
-While hooks are excellent for monitoring specific issues (like NaN detection) or gathering metrics during normal pipeline execution, sometimes you need to dive deeper. When you want to understand exactly what happens at each step or debug complex transformation logic, step-through debugging provides the detailed inspection you need.
-
-## Step-Through Debugging
-
-Step-through debugging is like having a slow-motion replay for your pipeline. Instead of watching your data get transformed in one quick blur from input to output, you can pause and examine what happens after each individual step.
-
-This approach is particularly valuable when you're trying to understand a complex pipeline, debug unexpected behavior, or verify that each transformation is working as expected. Unlike hooks, which are great for automated monitoring, step-through debugging gives you manual, interactive control over the inspection process.
-
-The `step_through()` method is a generator that yields the transition state after each processing step, allowing you to inspect intermediate results. Think of it as creating a series of snapshots of your data as it flows through the pipeline—each snapshot shows you exactly what your data looks like after one more transformation has been applied.
-
-### How Step-Through Works
-
-The `step_through()` method fundamentally changes how the pipeline executes. Instead of running all steps in sequence and only returning the final result, it transforms the pipeline into an iterator that yields intermediate results.
-
-Here's what happens internally: the method starts by converting your input data into the pipeline's internal transition format, then yields this initial state. Next, it applies the first processing step and yields the result. Then it applies the second step to that result and yields again, and so on. Each `yield` gives you a complete snapshot of the transition at that point.
-
-This generator pattern is powerful because it's lazy—the pipeline only computes the next step when you ask for it. This means you can stop at any point, inspect the current state thoroughly, and decide whether to continue. You're not forced to run the entire pipeline just to debug one problematic step.
-
-Instead of running the entire pipeline and only seeing the final result, `step_through()` pauses after each step and gives you the intermediate transition:
-
-```python
-# This creates a generator that yields intermediate states
-for i, intermediate_result in enumerate(processor.step_through(input_data)):
- print(f"=== After step {i} ===")
-
- # Inspect the observation at this stage
- obs = intermediate_result.get(TransitionKey.OBSERVATION)
- if obs:
- for key, value in obs.items():
- if isinstance(value, torch.Tensor):
- print(f"{key}: shape={value.shape}, dtype={value.dtype}")
-```
-
-### Interactive Debugging with Breakpoints
-
-You can add breakpoints in the step-through loop to interactively debug:
-
-```python
-# Step through the pipeline with debugging
-for i, intermediate in enumerate(processor.step_through(data)):
- print(f"Step {i}: {processor.steps[i].__class__.__name__}")
-
- # Set a breakpoint to inspect the current state
- breakpoint() # Debugger will pause here
-
- # You can now inspect 'intermediate' in the debugger:
- # - Check tensor shapes and values
- # - Verify expected transformations
- # - Look for unexpected changes
-```
-
-During the debugger session, you can:
-
-- Examine `intermediate[TransitionKey.OBSERVATION]` to see observation data
-- Check `intermediate[TransitionKey.ACTION]` for action transformations
-- Inspect any part of the transition to understand what each step does
-
-Step-through debugging is perfect for understanding the _data_ transformations, but what about the _structure_ of that data? While hooks and step-through help you debug runtime behavior, you also need to ensure your pipeline produces data in the format expected by downstream components. This is where feature contract validation comes in.
-
-## Validating Feature Contracts
-
-Feature contracts define what data structure your pipeline expects as input and produces as output.
-Validating these contracts helps catch mismatches early.
-
-### Understanding Feature Contracts
-
-Each processor step has a `transform_features()` method that describes how it changes the data structure:
-
-```python
-# Get the expected output features from your pipeline
-initial_features = {
- PipelineFeatureType.OBSERVATION: {
- "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(7,)),
- "observation.image": PolicyFeature(type=FeatureType.IMAGE, shape=(3, 224, 224))
- },
- PipelineFeatureType.ACTION: {
- "action": PolicyFeature(type=FeatureType.ACTION, shape=(4,))
- }
-}
-
-# Check what your pipeline will output
-output_features = processor.transform_features(initial_features)
-
-print("Input features:")
-for feature_type, features in initial_features.items():
- print(f" {feature_type}:")
- for key, feature in features.items():
- print(f" {key}: {feature.type.value}, shape={feature.shape}")
-
-print("\nOutput features:")
-for feature_type, features in output_features.items():
- print(f" {feature_type}:")
- for key, feature in features.items():
- print(f" {key}: {feature.type.value}, shape={feature.shape}")
-```
-
-### Verifying Expected Features
-
-Check that your pipeline produces the features you expect:
-
-```python
-# Define what features you expect the pipeline to produce
-expected_keys = ["observation.state", "observation.image", "action"]
-
-print("Validating feature contract...")
-for expected_key in expected_keys:
- found = False
- for feature_type, features in output_features.items():
- if expected_key in features:
- feature = features[expected_key]
- print(f"✅ {expected_key}: {feature.type.value}, shape={feature.shape}")
- found = True
- break
-
- if not found:
- print(f"❌ Missing expected feature: {expected_key}")
-```
-
-This validation helps ensure your pipeline will work correctly with downstream components that expect specific data structures.
-
-## Summary
-
-Now that you understand the three debugging approaches, you can tackle any pipeline issue systematically:
-
-1. **Hooks** - For runtime monitoring and validation without modifying pipeline code
-2. **Step-through** - For inspecting intermediate states and understanding transformations
-3. **Feature validation** - For ensuring data structure contracts are met
-
-**When to use each approach:**
-
-- Start with **step-through debugging** when you need to understand what your pipeline does or when something unexpected happens
-- Add **hooks** for continuous monitoring during development and production to catch issues automatically
-- Use **feature validation** before deployment to ensure your pipeline works with downstream components
-
-These three tools work together to give you the complete observability that complex pipelines naturally lack. With hooks watching for issues, step-through helping you understand behavior, and feature validation ensuring compatibility, you'll be able to debug any pipeline confidently and efficiently.
diff --git a/lerobot/docs/source/earthrover_mini_plus.mdx b/lerobot/docs/source/earthrover_mini_plus.mdx
deleted file mode 100644
index a05ec46f6c29d929c612c23b0d109de49f68ae6f..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/earthrover_mini_plus.mdx
+++ /dev/null
@@ -1,225 +0,0 @@
-# EarthRover Mini Plus
-
-The EarthRover Mini Plus is a fully open source mobile robot that connects through the cloud using the Frodobots SDK. This lets you control the robot and record datasets for training AI models.
-
-## What You Need
-
-### Hardware
-
-- EarthRover Mini robot
-- Computer with Python 3.10 or newer
-- Internet connection
-
-### Setting Up the Frodobots SDK
-
-The robot needs the [Frodobots SDK](https://github.com/frodobots-org/earth-rovers-sdk) running on your computer. Here's how:
-
-1. Download and install the SDK:
-
-```bash
-git clone https://github.com/frodobots-org/earth-rovers-sdk.git
-cd earth-rovers-sdk
-pip install -r requirements.txt
-```
-
-2. Save Credentials:
-
-Create a `.env` file containing the SDK API key and bot name provided by the Frodobots team:
-
-```bash
-SDK_API_TOKEN=your_sdk_api_token_here
-BOT_SLUG=your_bot_slug_here
-CHROME_EXECUTABLE_PATH=/path/to/chrome_or_chromium
-# Default value is MAP_ZOOM_LEVEL=18 https://wiki.openstreetmap.org/wiki/Zoom_levels
-MAP_ZOOM_LEVEL=18
-MISSION_SLUG=your_mission_slug_here
-# Image quality between 0.1 and 1.0 (default: 0.8)
-# Recommended: 0.8 for better performance
-IMAGE_QUALITY=0.8
-# Image format: jpeg, png or webp (default: png)
-# Recommended: jpeg for better performance and lower bandwidth usage
-IMAGE_FORMAT=jpeg
-```
-
-3. Start the SDK:
-
-```bash
-hypercorn main:app --reload
-```
-
-4. Open your web browser and go to `http://localhost:8000`, then click "Join"
-
-The SDK gives you:
-
-- Live video from front and rear cameras
-
-> [!IMPORTANT]
-> The SDK must be running before you can use the robot.
-
-## Install LeRobot
-
-Follow our [Installation Guide](./installation) to install LeRobot.
-
-In addition to the base installation, install the EarthRover Mini dependencies:
-
-```bash
-pip install -e .
-```
-
-## How It Works
-
-The robot uses the internet to communicate:
-
-- **Movement commands**: Sent through the SDK
-- **Camera video**: Received from the SDK
-- **Robot info**: Battery, location, speed from the SDK
-
-You don't need to plug anything in - it all works through the SDK.
-
-## Calibration
-
-No calibration needed! The robot is ready to use as soon as the SDK is running.
-
-## Controlling the Robot
-
-You control the robot using your keyboard - just like playing a video game with WASD keys.
-
-### Keyboard Controls
-
-| Key | Action |
-| --- | -------------------------------- |
-| W | Move forward |
-| S | Move backward |
-| A | Turn left (with forward motion) |
-| D | Turn right (with forward motion) |
-| Q | Rotate left in place |
-| E | Rotate right in place |
-| X | Stop all movement |
-| +/= | Increase speed |
-| - | Decrease speed |
-| ESC | Disconnect |
-
-### Speed Settings
-
-You can adjust how fast the robot moves:
-
-- **Forward/backward speed**: Default is full speed (1.0)
-- **Turning speed**: Default is full speed (1.0)
-- **Speed changes**: Use +/- keys to adjust by 0.1 each time
-
-### Try It Out
-
-Test driving the robot before recording data:
-
-```python
-from lerobot.robots.earthrover_mini_plus import EarthRoverMiniPlus, EarthRoverMiniPlusConfig
-from lerobot.teleoperators.keyboard import KeyboardRoverTeleop, KeyboardRoverTeleopConfig
-
-# Initialize robot
-robot_config = EarthRoverMiniPlusConfig()
-robot = EarthRoverMiniPlus(robot_config)
-
-# Initialize teleoperator
-teleop_config = KeyboardRoverTeleopConfig(
- linear_speed=1.0,
- angular_speed=1.0,
- speed_increment=0.1
-)
-teleop = KeyboardRoverTeleop(teleop_config)
-
-# Connect
-robot.connect()
-teleop.connect()
-
-# Teleoperate (use keyboard controls)
-try:
- while True:
- action = teleop.get_action()
- robot.send_action(action)
-except KeyboardInterrupt:
- pass
-finally:
- robot.disconnect()
- teleop.disconnect()
-```
-
-> [!TIP]
-> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
-
-## Recording Data
-
-Once you can drive the robot well, you can start recording data to train AI models. The system records:
-
-- **What you do**: How you move the robot (forward, backward, turning)
-- **What the robot sees**:
- - Videos from both cameras
- - Robot speed and direction
- - Battery level and location
- - GPS position and signal
- - Other sensor data
-- **When it happened**: Timestamps for everything
-
-### Setting Up Hugging Face
-
-We use Hugging Face to store your data online. First, log in with your token from [Hugging Face settings](https://huggingface.co/settings/tokens):
-
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Store your Hugging Face username:
-
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-### Start Recording
-
-Use the standard recording command:
-
-```bash
-python src/lerobot/scripts/lerobot_record.py \
- --robot.type=earthrover_mini_plus \
- --teleop.type=keyboard_rover \
- --dataset.repo_id=your_username/dataset_name \
- --dataset.num_episodes=2 \
- --dataset.fps=10 \
- --dataset.single_task="Navigate around obstacles" \
- --display_data=true
-```
-
-Replace `your_username/dataset_name` with your Hugging Face username and a name for your dataset.
-
-### What Gets Saved
-
-Your dataset includes:
-
-**Your Actions (2 things)**:
-
-- How much you moved forward/backward
-- How much you turned left/right
-
-**Robot Observations (12 things)**:
-
-- Front camera video
-- Rear camera video
-- Current speed
-- Battery level
-- Which way the robot is facing
-- GPS location (latitude, longitude, signal strength)
-- Network signal strength
-- Vibration level
-- Lamp status (on/off)
-
-### Where Your Data Goes
-
-On your computer: `~/.cache/huggingface/lerobot/{repo-id}`
-
-After recording, your data automatically uploads to your Hugging Face page:
-
-```bash
-echo https://huggingface.co/datasets/${HF_USER}/dataset_name
-```
-
-Your dataset will be tagged with `LeRobot` for community discovery.
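-
-You can load the recorded dataset back for inspection or training. A minimal sketch, assuming the dataset class lives at its usual LeRobot import path:
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-# Loads from the local cache, or pulls from the Hub if not cached
-dataset = LeRobotDataset("your_username/dataset_name")
-print(dataset.num_episodes, dataset.fps)
-print(dataset[0].keys())  # action, observation and timestamp features
-```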
diff --git a/lerobot/docs/source/env_processor.mdx b/lerobot/docs/source/env_processor.mdx
deleted file mode 100644
index 7f5cee2832b1311f70fa6833b4d15a2effbce904..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/env_processor.mdx
+++ /dev/null
@@ -1,418 +0,0 @@
-# Environment Processors
-
-Environment processors are a critical layer in LeRobot's data processing architecture that handle **environment-specific** transformations, separate from policy-specific processing. This separation of concerns enables cleaner code, better modularity, and easier experimentation with different environments and policies.
-
-## Why Environment Processors?
-
-When working with different robot environments (LIBERO, MetaWorld, Aloha, etc.), each environment often has unique data formats, coordinate systems, and conventions that need standardization **before** policy processing. Without environment processors, these transformations would be:
-
-1. **Hardcoded in environment code** - Making it difficult to experiment with different state representations
-2. **Duplicated across policies** - Each policy would need to handle environment-specific quirks
-3. **Mixed with policy logic** - Violating separation of concerns and making debugging harder
-
-Environment processors solve this by providing a **dedicated processing layer** between raw environment observations and policy inputs.
-
-## The Processing Pipeline
-
-Here's how data flows through the complete processing pipeline during evaluation:
-
-```python
-# In lerobot_eval.py rollout() function:
-
-# 1. Raw environment observation (numpy arrays, various formats)
-raw_observation = env.step(action)
-
-# 2. Convert numpy to torch, normalize images [0,1]
-observation = preprocess_observation(raw_observation)
-
-# 3. Add task metadata (for multi-task environments)
-observation = add_envs_task(env, observation)
-
-# 4. ENVIRONMENT-SPECIFIC preprocessing (NEW!)
-# - Flatten robot states
-# - Rotate images to match dataset conventions
-# - Handle environment-specific coordinate systems
-observation = env_preprocessor(observation)
-
-# 5. POLICY-SPECIFIC preprocessing
-# - Normalize with dataset statistics
-# - Add batch dimensions
-# - Move to GPU
-# - Tokenize language instructions
-observation = preprocessor(observation)
-
-# 6. Policy inference
-action = policy.select_action(observation)
-
-# 7. POLICY-SPECIFIC postprocessing
-# - Unnormalize actions
-# - Remove batch dimensions
-action = postprocessor(action)
-
-# 8. ENVIRONMENT-SPECIFIC postprocessing (NEW!)
-# - Convert action formats if needed
-# - Apply environment-specific constraints
-action_transition = {"action": action}
-action_transition = env_postprocessor(action_transition)
-action = action_transition["action"]
-
-# 9. Execute in environment
-env.step(action)
-```
-
-## The Benefits
-
-### 1. **Separation of Concerns**
-
-Environment processors handle transformations specific to the **environment's data format**, while policy processors handle transformations specific to the **model's requirements**.
-
-```python
-# ❌ Before: Mixed concerns
-class LiberoVLAPolicy:
- def preprocess(self, obs):
- # Environment-specific: Flatten robot state (shouldn't be in policy!)
- state = self._flatten_robot_state(obs["robot_state"])
- # Policy-specific: Normalize with dataset stats
- state = self.normalizer(state)
- return state
-
-# ✅ After: Clear separation
-# Environment processor: Handles LIBERO's nested robot state
-env_preprocessor = LiberoProcessorStep() # Flattens robot_state
-
-# Policy processor: Handles model requirements
-policy_preprocessor = NormalizerProcessorStep(stats=dataset_stats)
-```
-
-### 2. **Flexibility and Reusability**
-
-The same policy can work with different environment processors, and the same environment processor can work with different policies:
-
-```python
-# Use SmolVLA policy with LIBERO environment
-libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
-smolvla_preprocessor, smolvla_postprocessor = make_pre_post_processors(smolvla_cfg)
-
-# Or use ACT policy with the same LIBERO environment
-libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg)
-act_preprocessor, act_postprocessor = make_pre_post_processors(act_cfg)
-```
-
-### 3. **Easier Experimentation**
-
-Want to try different state representations for LIBERO? Just create a new processor:
-
-```python
-# Original: 8D state (pos + quat→axisangle + gripper)
-@ProcessorStepRegistry.register("libero_processor")
-class LiberoProcessorStep(ObservationProcessorStep):
-    def _process_observation(self, obs):
-        robot_state = obs["observation.robot_state"]
-        eef_pos = robot_state["eef"]["pos"]  # 3D
-        eef_axisangle = quat2axisangle(robot_state["eef"]["quat"])  # 3D
-        gripper = robot_state["gripper"]["qpos"]  # 2D
-        state = torch.cat([eef_pos, eef_axisangle, gripper], dim=-1)  # 8D
-        return state
-
-# Experiment: Add velocity for better control
-@ProcessorStepRegistry.register("libero_velocity_processor")
-class LiberoVelocityProcessorStep(ObservationProcessorStep):
-    def _process_observation(self, obs):
-        # Include velocities for a 14D state
-        robot_state = obs["observation.robot_state"]
-        eef_pos = robot_state["eef"]["pos"]  # 3D
-        eef_axisangle = quat2axisangle(robot_state["eef"]["quat"])  # 3D
-        eef_vel = robot_state["eef"]["vel"]  # 3D (NEW)
-        gripper_pos = robot_state["gripper"]["qpos"]  # 2D
-        gripper_vel = robot_state["gripper"]["qvel"]  # 3D (NEW)
-        state = torch.cat([eef_pos, eef_axisangle, eef_vel,
-                           gripper_pos, gripper_vel], dim=-1)  # 14D
-        return state
-```
-
-### 4. **Cleaner Environment Code**
-
-Environments expose **all available data** without needing to know what downstream models will use:
-
-```python
-# LIBERO environment exposes full robot state
-observation = {
- "pixels": {"image": img, "image2": img2},
- "robot_state": {
- "eef": {"pos": ..., "quat": ..., "vel": ..., "mat": ..., "axisangle": ...},
- "gripper": {"qpos": ..., "qvel": ...},
- "joints": {"pos": ..., "vel": ...}
- }
-}
-
-# Environment processor decides what to use
-# Policy processor handles model-specific transformations
-```
-
-## Using Environment Processors
-
-### Factory Function
-
-The `make_env_pre_post_processors` function follows the same pattern as `make_pre_post_processors` for policies:
-
-```python
-from lerobot.envs.factory import make_env_pre_post_processors
-from lerobot.envs.configs import LiberoEnv, PushtEnv
-
-# For LIBERO: Returns LiberoProcessorStep in preprocessor
-libero_cfg = LiberoEnv(task="libero_spatial", camera_name=["agentview"])
-env_preprocessor, env_postprocessor = make_env_pre_post_processors(libero_cfg)
-
-# For other environments: Returns identity processors (no-op)
-pusht_cfg = PushtEnv()
-env_preprocessor, env_postprocessor = make_env_pre_post_processors(pusht_cfg)
-```
-
-### Implementation in `envs/factory.py`
-
-```python
-def make_env_pre_post_processors(
- env_cfg: EnvConfig,
-) -> tuple[
- PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
-]:
- """
- Create preprocessor and postprocessor pipelines for environment observations.
-
- Args:
- env_cfg: The configuration of the environment.
-
- Returns:
- A tuple containing:
- - preprocessor: Pipeline that processes environment observations
- - postprocessor: Pipeline that processes environment outputs
- """
- # For LIBERO environments, add the LiberoProcessorStep to preprocessor
- if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
- preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
- else:
- # For all other environments, return an identity preprocessor
- preprocessor = PolicyProcessorPipeline(steps=[])
-
- # Postprocessor is currently identity for all environments
- # Future: Could add environment-specific action transformations
- postprocessor = PolicyProcessorPipeline(steps=[])
-
- return preprocessor, postprocessor
-```
-
-### Integration in Evaluation
-
-In `lerobot_eval.py`, the environment processors are created once and used throughout:
-
-```python
-def eval_main(cfg: EvalPipelineConfig):
- # Create environment
- envs = make_env(cfg.env, n_envs=cfg.eval.batch_size)
-
- # Create policy
- policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env)
-
- # Create policy processors
- preprocessor, postprocessor = make_pre_post_processors(
- policy_cfg=cfg.policy,
- pretrained_path=cfg.policy.pretrained_path,
- )
-
- # Create environment processors (NEW!)
- env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env)
-
- # Run evaluation with both processor types
- eval_policy_all(
- envs=envs,
- policy=policy,
- env_preprocessor=env_preprocessor, # Environment-specific
- env_postprocessor=env_postprocessor, # Environment-specific
- preprocessor=preprocessor, # Policy-specific
- postprocessor=postprocessor, # Policy-specific
- n_episodes=cfg.eval.n_episodes,
- )
-```
-
-## Example: LIBERO Environment Processor
-
-The `LiberoProcessorStep` demonstrates a real-world environment processor:
-
-```python
-import torch
-from dataclasses import dataclass
-
-from lerobot.processor.pipeline import ObservationProcessorStep
-
-@dataclass
-@ProcessorStepRegistry.register(name="libero_processor")
-class LiberoProcessorStep(ObservationProcessorStep):
- """
- Processes LIBERO observations into the LeRobot format.
-
- **State Processing:**
- - Extracts end-effector position (3D)
- - Converts quaternion to axis-angle representation (3D)
- - Extracts gripper joint positions (2D)
- - Concatenates into 8D state vector
-
- **Image Processing:**
- - Rotates images 180° to match HuggingFaceVLA/libero convention
- """
-
- def _process_observation(self, observation):
- processed_obs = observation.copy()
-
- # Process images: Flip 180° for camera convention
- for key in list(processed_obs.keys()):
- if key.startswith("observation.images."):
- img = processed_obs[key]
- img = torch.flip(img, dims=[2, 3]) # Flip H and W
- processed_obs[key] = img
-
- # Process robot_state: Flatten to 8D vector
- if "observation.robot_state" in processed_obs:
- robot_state = processed_obs.pop("observation.robot_state")
-
- eef_pos = robot_state["eef"]["pos"] # (B, 3)
- eef_quat = robot_state["eef"]["quat"] # (B, 4)
- gripper_qpos = robot_state["gripper"]["qpos"] # (B, 2)
-
- # Convert quaternion to axis-angle
- eef_axisangle = self._quat2axisangle(eef_quat) # (B, 3)
-
- # Concatenate into single state vector
- state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1)
- state = state.float()
-
- processed_obs["observation.state"] = state
-
- return processed_obs
-```
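-
-For reference, the `_quat2axisangle` helper used above boils down to standard quaternion math. A minimal sketch, assuming an `(x, y, z, w)` quaternion layout:
-
-```python
-import torch
-
-def quat2axisangle(quat: torch.Tensor) -> torch.Tensor:
-    """Convert (B, 4) quaternions in (x, y, z, w) order to (B, 3) axis-angle."""
-    xyz, w = quat[..., :3], quat[..., 3:].clamp(-1.0, 1.0)
-    angle = 2.0 * torch.acos(w)  # rotation angle in radians
-    sin_half = torch.sqrt((1.0 - w * w).clamp(min=1e-12))
-    return xyz / sin_half * angle  # unit axis scaled by the angle
-```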
-
-### Why These Transformations?
-
-1. **Image Rotation**: The HuggingFaceVLA/libero dataset has images rotated 180° from the raw LIBERO simulator. The processor handles this convention mismatch so policies trained on the dataset work seamlessly.
-
-2. **State Flattening**: The raw LIBERO environment exposes nested dictionaries with all available state information (position, quaternion, velocity, matrix representation, etc.). The processor:
- - Selects the relevant components (pos, quat, gripper)
- - Converts quaternion to axis-angle (more suitable for learning)
- - Flattens to a single 8D vector that policies expect
-
-3. **Flexibility**: The environment still exposes **all** raw data. If you want to try different state representations (e.g., including velocities, using matrix representation instead of axis-angle), you can create a new processor without modifying the environment code.
-
-## Adding Environment Processors for New Environments
-
-To add environment processors for a new environment:
-
-### 1. Create the Processor Step
-
-```python
-# In src/lerobot/processor/env_processor.py
-
-@dataclass
-@ProcessorStepRegistry.register(name="myenv_processor")
-class MyEnvProcessorStep(ObservationProcessorStep):
- """Process observations from MyEnv."""
-
- def _process_observation(self, observation):
- processed = observation.copy()
-
- # Your environment-specific transformations
- if "myenv.specific.state" in processed:
- state = processed.pop("myenv.specific.state")
- # Transform to standard format
- processed["observation.state"] = self._transform_state(state)
-
- return processed
-```
-
-### 2. Update the Factory
-
-```python
-# In src/lerobot/envs/factory.py
-
-def make_env_pre_post_processors(env_cfg: EnvConfig):
- if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type:
- preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()])
- elif isinstance(env_cfg, MyEnvConfig) or "myenv" in env_cfg.type:
- preprocessor = PolicyProcessorPipeline(steps=[MyEnvProcessorStep()])
- else:
- preprocessor = PolicyProcessorPipeline(steps=[])
-
- postprocessor = PolicyProcessorPipeline(steps=[])
- return preprocessor, postprocessor
-```
-
-### 3. Use in Evaluation
-
-No changes needed! The evaluation script automatically uses the appropriate processor:
-
-```bash
-# --env.type=myenv automatically selects MyEnvProcessorStep
-lerobot-eval \
-    --policy.path=lerobot/my_policy \
-    --env.type=myenv \
-    --eval.n_episodes=10
-```
-
-## Future: Environment Postprocessors
-
-Currently, postprocessors are identity (no-op) for all environments. Future use cases include:
-
-### Action Space Transformations
-
-```python
-@dataclass
-class MyEnvActionPostprocessor(ProcessorStep):
- """Convert policy actions to environment-specific format."""
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- action = transition["action"]
-
- # Example: Convert from Cartesian to joint space
- if self.action_space == "joint":
- action = self.ik_solver(action)
-
- # Example: Apply environment-specific safety limits
- action = torch.clamp(action, self.min_action, self.max_action)
-
- transition["action"] = action
- return transition
-```
-
-### Coordinate System Conversions
-
-```python
-@dataclass
-class CoordinateTransformPostprocessor(ProcessorStep):
- """Transform actions between coordinate systems."""
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- action = transition["action"]
-
- # Example: Policy outputs in world frame, env expects base frame
- action = self.world_to_base_transform(action)
-
- transition["action"] = action
- return transition
-```
-
-## Best Practices
-
-1. **Keep environment processors simple**: They should only handle environment-specific data format issues, not complex learning-related transformations.
-
-2. **Use policy processors for model requirements**: Normalization, batching, device placement, and tokenization belong in policy processors.
-
-3. **Expose all data from environments**: Let processors decide what to use rather than hardcoding choices in the environment.
-
-4. **Document conventions**: Clearly document any coordinate system conventions, camera orientations, or data formats that your processor handles.
-
-5. **Test independently**: Environment processors should be testable without loading full policies or environments.
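-
-For example, a processor step can be exercised with a plain dictionary of tensors, with no environment or policy loaded. A minimal sketch (the import path is an assumption):
-
-```python
-import torch
-
-from lerobot.processor.env_processor import LiberoProcessorStep  # assumed module path
-
-def test_libero_state_flattening():
-    step = LiberoProcessorStep()
-    obs = {
-        "observation.robot_state": {
-            "eef": {"pos": torch.zeros(1, 3), "quat": torch.tensor([[0.0, 0.0, 0.0, 1.0]])},
-            "gripper": {"qpos": torch.zeros(1, 2)},
-        }
-    }
-    out = step._process_observation(obs)
-    assert out["observation.state"].shape == (1, 8)  # pos (3) + axis-angle (3) + gripper (2)
-```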
-
-## Summary
-
-Environment processors provide a **clean separation** between environment-specific data transformations and policy-specific model requirements. This architecture:
-
-- ✅ Enables easy experimentation with different state representations
-- ✅ Allows policies to work seamlessly across different environments
-- ✅ Keeps environment code focused on simulation/hardware interface
-- ✅ Makes processor pipelines more maintainable and debuggable
-- ✅ Follows the single responsibility principle
-
-The key insight: **Environments define data formats, processors standardize them, policies consume standardized data.** Each layer has a clear, focused responsibility.
diff --git a/lerobot/docs/source/envhub.mdx b/lerobot/docs/source/envhub.mdx
deleted file mode 100644
index f19aef6c67ac036a98b65005531976a211864833..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/envhub.mdx
+++ /dev/null
@@ -1,431 +0,0 @@
-# Loading Environments from the Hub
-
-The **EnvHub** feature allows you to load simulation environments directly from the Hugging Face Hub with a single line of code. This unlocks a powerful new model for collaboration: instead of environments being locked away inside monolithic libraries, anyone can publish custom environments and share them with the community.
-
-## What is EnvHub?
-
-EnvHub lets you create custom robotics simulation environments with your own robot models and scenarios, and make them easily usable by anyone through the LeRobot framework.
-
-EnvHub packages are stored on the Hugging Face Hub, and can be seamlessly pulled and used in your AI robotics projects through LeRobot with a single line of code.
-
-Thanks to EnvHub, you can:
-
-1. **Create and publish environments** to the Hugging Face Hub as Git repositories, and distribute complex physics simulations without packaging hassles
-2. **Load environments** dynamically, without installing them as packages
-3. **Version and track** environment changes using Git semantics
-4. **Discover** new simulation tasks shared by the community
-
-This design means you can go from discovering an interesting environment on the Hub to running experiments in seconds, or create your own custom robot and environment without worrying about dependency conflicts or complex installation procedures.
-
-When you create an EnvHub package, you can build anything you want inside it and use any simulation tool you like: this is your own space to play with. The only requirement is that the package contains an `env.py` file defining the environment, so LeRobot can load and use it.
-
-This `env.py` file needs to expose a small API so LeRobot can load and run it. In particular, you must provide a `make_env(n_envs: int = 1, use_async_envs: bool = False)` or `make_env(n_envs: int = 1, use_async_envs: bool = False, cfg: EnvConfig)` function, which is the main entry point for LeRobot. It should return one of:
-
-- A `gym.vector.VectorEnv` (most common)
-- A single `gym.Env` (will be automatically wrapped)
-- A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)
-
-You can also pass an `EnvConfig` object to `make_env` to configure the environment (e.g. the number of environments, task, camera name, initial states, control mode, episode length, etc.).
-
-Finally, your environment must implement the standard `gym.vector.VectorEnv` interface so it works with LeRobot, including methods like `reset` and `step`.
-
-## Quick Start
-
-Loading an environment from the Hub is as simple as:
-
-```python
-from lerobot.envs.factory import make_env
-
-# Load a hub environment (requires explicit consent to run remote code)
-env = make_env("lerobot/cartpole-env", trust_remote_code=True)
-```
-
-> [!WARNING]
-> **Security Notice**: Loading environments from the Hub executes Python code from third-party repositories. Only use `trust_remote_code=True` with repositories you trust. We strongly recommend pinning to a specific commit hash for reproducibility and security.
-
-## Repository Structure
-
-To make your environment loadable from the Hub, your repository must contain at minimum:
-
-### Required Files
-
-**`env.py`** (or custom Python file)
-
-- Must expose a `make_env(n_envs: int, use_async_envs: bool)` function
-- This function should return one of:
- - A `gym.vector.VectorEnv` (most common)
- - A single `gym.Env` (will be automatically wrapped)
- - A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks)
-
-### Optional Files
-
-**`requirements.txt`**
-
-- List any additional dependencies your environment needs
-- Users will need to install these manually before loading your environment
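-
-For example, a minimal `requirements.txt` (the pins below are illustrative):
-
-```
-gymnasium>=0.29
-numpy>=1.24
-```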
-
-**`README.md`**
-
-- Document your environment: what task it implements, observation/action spaces, rewards, etc.
-- Include usage examples and any special setup instructions
-
-**`.gitignore`**
-
-- Exclude unnecessary files from your repository
-
-### Example Repository Structure
-
-```
-my-environment-repo/
-├── env.py # Main environment definition (required)
-├── requirements.txt # Dependencies (optional)
-├── README.md # Documentation (recommended)
-├── assets/ # Images, videos, etc. (optional)
-│ └── demo.gif
-└── configs/ # Config files if needed (optional)
- └── task_config.yaml
-```
-
-## Creating Your Environment Repository
-
-### Step 1: Define Your Environment
-
-Create an `env.py` file with a `make_env` function:
-
-```python
-# env.py
-import gymnasium as gym
-
-def make_env(n_envs: int = 1, use_async_envs: bool = False):
- """
- Create vectorized environments for your custom task.
-
- Args:
- n_envs: Number of parallel environments
- use_async_envs: Whether to use AsyncVectorEnv or SyncVectorEnv
-
- Returns:
- gym.vector.VectorEnv or dict mapping suite names to vectorized envs
- """
- def _make_single_env():
- # Create your custom environment
- return gym.make("CartPole-v1")
-
- # Choose vector environment type
- env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
-
- # Create vectorized environment
- vec_env = env_cls([_make_single_env for _ in range(n_envs)])
-
- return vec_env
-```
-
-### Step 2: Test Locally
-
-Before uploading, test your environment locally:
-
-```python
-from lerobot.envs.utils import _load_module_from_path, _call_make_env, _normalize_hub_result
-
-# Load your module
-module = _load_module_from_path("./env.py")
-
-# Test the make_env function
-result = _call_make_env(module, n_envs=2, use_async_envs=False)
-normalized = _normalize_hub_result(result)
-
-# Verify it works
-suite_name = next(iter(normalized))
-env = normalized[suite_name][0]
-obs, info = env.reset()
-print(f"Observation shape: {obs.shape if hasattr(obs, 'shape') else type(obs)}")
-env.close()
-```
-
-### Step 3: Upload to the Hub
-
-Upload your repository to Hugging Face:
-
-```bash
-# Install huggingface_hub if needed
-pip install huggingface_hub
-
-# Login to Hugging Face
-huggingface-cli login
-
-# Create a new repository
-huggingface-cli repo create my-custom-env --type space --org my-org
-
-# Initialize git and push
-git init
-git add .
-git commit -m "Initial environment implementation"
-git remote add origin https://huggingface.co/my-org/my-custom-env
-git push -u origin main
-```
-
-Alternatively, use the `huggingface_hub` Python API:
-
-```python
-from huggingface_hub import HfApi
-
-api = HfApi()
-
-# Create repository
-api.create_repo("my-custom-env", repo_type="space")
-
-# Upload files
-api.upload_folder(
- folder_path="./my-env-folder",
- repo_id="username/my-custom-env",
- repo_type="space",
-)
-```
-
-## Loading Environments from the Hub
-
-### Basic Usage
-
-```python
-from lerobot.envs.factory import make_env
-
-# Load from the hub
-envs_dict = make_env(
- "username/my-custom-env",
- n_envs=4,
- trust_remote_code=True
-)
-
-# Access the environment
-suite_name = next(iter(envs_dict))
-env = envs_dict[suite_name][0]
-
-# Use it like any gym environment
-obs, info = env.reset()
-action = env.action_space.sample()
-obs, reward, terminated, truncated, info = env.step(action)
-```
-
-### Advanced: Pinning to Specific Versions
-
-For reproducibility and security, pin to a specific Git revision:
-
-```python
-# Pin to a specific branch
-env = make_env("username/my-env@main", trust_remote_code=True)
-
-# Pin to a specific commit (recommended for papers/experiments)
-env = make_env("username/my-env@abc123def456", trust_remote_code=True)
-
-# Pin to a tag
-env = make_env("username/my-env@v1.0.0", trust_remote_code=True)
-```
-
-### Custom File Paths
-
-If your environment definition is not in `env.py`:
-
-```python
-# Load from a custom file
-env = make_env("username/my-env:custom_env.py", trust_remote_code=True)
-
-# Combine with version pinning
-env = make_env("username/my-env@v1.0:envs/task_a.py", trust_remote_code=True)
-```
-
-### Async Environments
-
-For better performance with multiple environments:
-
-```python
-envs_dict = make_env(
- "username/my-env",
- n_envs=8,
- use_async_envs=True, # Use AsyncVectorEnv for parallel execution
- trust_remote_code=True
-)
-```
-
-## URL Format Reference
-
-The hub URL format supports several patterns:
-
-| Pattern | Description | Example |
-| -------------------- | ------------------------------ | -------------------------------------- |
-| `user/repo` | Load `env.py` from main branch | `make_env("lerobot/pusht-env")` |
-| `user/repo@revision` | Load from specific revision | `make_env("lerobot/pusht-env@main")` |
-| `user/repo:path` | Load custom file | `make_env("lerobot/envs:pusht.py")` |
-| `user/repo@rev:path` | Revision + custom file | `make_env("lerobot/envs@v1:pusht.py")` |
-
-## Multi-Task Environments
-
-For benchmarks with multiple tasks (like LIBERO), return a nested dictionary:
-
-```python
-def make_env(n_envs: int = 1, use_async_envs: bool = False):
- env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv
-
- # Return dict: {suite_name: {task_id: VectorEnv}}
- return {
- "suite_1": {
- 0: env_cls([lambda: gym.make("Task1-v0") for _ in range(n_envs)]),
- 1: env_cls([lambda: gym.make("Task2-v0") for _ in range(n_envs)]),
- },
- "suite_2": {
- 0: env_cls([lambda: gym.make("Task3-v0") for _ in range(n_envs)]),
- }
- }
-```
-
-## Security Considerations
-
-
-> [!IMPORTANT]
-> The `trust_remote_code=True` flag is required to execute environment code from the Hub. This is by design for security.
-
-When loading environments from the Hub:
-
-1. **Review the code first**: Visit the repository and inspect `env.py` before loading
-2. **Pin to commits**: Use specific commit hashes for reproducibility
-3. **Check dependencies**: Review `requirements.txt` for suspicious packages
-4. **Use trusted sources**: Prefer official organizations or well-known researchers
-5. **Sandbox if needed**: Run untrusted code in isolated environments (containers, VMs)
-
-Example of safe usage:
-
-```python
-# ❌ BAD: Loading without inspection
-env = make_env("random-user/untrusted-env", trust_remote_code=True)
-
-# ✅ GOOD: Review code, then pin to specific commit
-# 1. Visit https://huggingface.co/trusted-org/verified-env
-# 2. Review the env.py file
-# 3. Copy the commit hash
-env = make_env("trusted-org/verified-env@a1b2c3d4", trust_remote_code=True)
-```
-
-## Example: CartPole from the Hub
-
-Here's a complete example using the reference CartPole environment:
-
-```python
-from lerobot.envs.factory import make_env
-import numpy as np
-
-# Load the environment
-envs_dict = make_env("lerobot/cartpole-env", n_envs=4, trust_remote_code=True)
-
-# Get the vectorized environment
-suite_name = next(iter(envs_dict))
-env = envs_dict[suite_name][0]
-
-# Run a simple episode
-obs, info = env.reset()
-done = np.zeros(env.num_envs, dtype=bool)
-total_reward = np.zeros(env.num_envs)
-
-while not done.all():
- # Random policy
- action = env.action_space.sample()
- obs, reward, terminated, truncated, info = env.step(action)
- total_reward += reward
- done = terminated | truncated
-
-print(f"Average reward: {total_reward.mean():.2f}")
-env.close()
-```
-
-## Benefits of EnvHub
-
-### For Environment Authors
-
-- **Easy distribution**: No PyPI packaging required
-- **Version control**: Use Git for environment versioning
-- **Rapid iteration**: Push updates instantly
-- **Documentation**: Hub README renders beautifully
-- **Community**: Reach LeRobot users directly
-
-### For Researchers
-
-- **Quick experiments**: Load any environment in one line
-- **Reproducibility**: Pin to specific commits
-- **Discovery**: Browse environments on the Hub
-- **No conflicts**: No need to install conflicting packages
-
-### For the Community
-
-- **Growing ecosystem**: More diverse simulation tasks
-- **Standardization**: Common `make_env` API
-- **Collaboration**: Fork and improve existing environments
-- **Accessibility**: Lower barrier to sharing research
-
-## Troubleshooting
-
-### "Refusing to execute remote code"
-
-You must explicitly pass `trust_remote_code=True`:
-
-```python
-env = make_env("user/repo", trust_remote_code=True)
-```
-
-### "Module X not found"
-
-The hub environment has dependencies you need to install:
-
-```bash
-# Check the repo's requirements.txt and install dependencies
-pip install gymnasium numpy
-```
-
-### "make_env not found in module"
-
-Your `env.py` must expose a `make_env` function:
-
-```python
-def make_env(n_envs: int, use_async_envs: bool):
- # Your implementation
- pass
-```
-
-### Environment returns wrong type
-
-The `make_env` function must return:
-
-- A `gym.vector.VectorEnv`, or
-- A single `gym.Env`, or
-- A dict `{suite_name: {task_id: VectorEnv}}`
-
-## Best Practices
-
-1. **Document your environment**: Include observation/action space descriptions, reward structure, and termination conditions in your README
-2. **Add requirements.txt**: List all dependencies with versions
-3. **Test thoroughly**: Verify your environment works locally before pushing
-4. **Use semantic versioning**: Tag releases with version numbers (see the example after this list)
-5. **Add examples**: Include usage examples in your README
-6. **Keep it simple**: Minimize dependencies when possible
-7. **License your work**: Add a LICENSE file to clarify usage terms
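-
-For item 4, tagging a release uses standard Git commands, and the tag becomes a loadable revision on the Hub:
-
-```bash
-git tag v1.0.0
-git push origin v1.0.0
-
-# Users can then pin to the tag:
-# make_env("username/my-custom-env@v1.0.0", trust_remote_code=True)
-```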
-
-## Future Directions
-
-The EnvHub ecosystem enables exciting possibilities:
-
-- **GPU-accelerated physics**: Share Isaac Gym or Brax environments
-- **Photorealistic rendering**: Distribute environments with advanced graphics
-- **Multi-agent scenarios**: Complex interaction tasks
-- **Real-world simulators**: Digital twins of physical setups
-- **Procedural generation**: Infinite task variations
-- **Domain randomization**: Pre-configured DR pipelines
-
-As more researchers and developers contribute, the diversity and quality of available environments will grow, benefiting the entire robotics learning community.
-
-## See Also
-
-- [Hugging Face Hub Documentation](https://huggingface.co/docs/hub/en/index)
-- [Gymnasium Documentation](https://gymnasium.farama.org/index.html)
-- [Example Hub Environment](https://huggingface.co/lerobot/cartpole-env)
diff --git a/lerobot/docs/source/envhub_isaaclab_arena.mdx b/lerobot/docs/source/envhub_isaaclab_arena.mdx
deleted file mode 100644
index efeec2bd10c65bc6dc9a034208fd9435b7807214..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/envhub_isaaclab_arena.mdx
+++ /dev/null
@@ -1,510 +0,0 @@
-# NVIDIA IsaacLab Arena & LeRobot
-
-LeRobot EnvHub now supports **GPU-accelerated simulation** with IsaacLab Arena for policy evaluation at scale.
-Train and evaluate imitation learning policies with high-fidelity simulation — all integrated into the LeRobot ecosystem.
-
-
-
-[IsaacLab Arena](https://github.com/isaac-sim/IsaacLab-Arena) integrates with NVIDIA IsaacLab to provide:
-
-- 🤖 **Humanoid embodiments**: GR1, G1, Galileo with various configurations
-- 🎯 **Manipulation & loco-manipulation tasks**: Door opening, pick-and-place, button pressing, and more
-- ⚡ **GPU-accelerated rollouts**: Parallel environment execution on NVIDIA GPUs
-- 🖼️ **RTX Rendering**: Evaluate vision-based policies with realistic rendering, reflections and refractions
-- 📦 **LeRobot-compatible datasets**: Ready for training with GR00T N1x, PI0, SmolVLA, ACT, and Diffusion policies
-- 🔄 **EnvHub integration**: Load environments from HuggingFace EnvHub with one line
-
-## Installation
-
-### Prerequisites
-
-Hardware requirements are shared with Isaac Sim, and are detailed in [Isaac Sim Requirements](https://docs.isaacsim.omniverse.nvidia.com/5.1.0/installation/requirements.html).
-
-- NVIDIA GPU with CUDA support
-- NVIDIA driver compatible with IsaacSim 5.1.0
-- Linux (Ubuntu 22.04 / 24.04)
-
-### Setup
-
-```bash
-# 1. Create conda environment
-conda create -y -n lerobot-arena python=3.11
-conda activate lerobot-arena
-conda install -y -c conda-forge ffmpeg=7.1.1
-
-# 2. Install Isaac Sim 5.1.0
-pip install "isaacsim[all,extscache]==5.1.0" --extra-index-url https://pypi.nvidia.com
-
-# Accept NVIDIA EULA (required)
-export ACCEPT_EULA=Y
-export PRIVACY_CONSENT=Y
-
-# 3. Install IsaacLab 2.3.0
-git clone https://github.com/isaac-sim/IsaacLab.git
-cd IsaacLab
-git checkout v2.3.0
-./isaaclab.sh -i
-cd ..
-
-# 4. Install IsaacLab Arena
-git clone https://github.com/isaac-sim/IsaacLab-Arena.git
-cd IsaacLab-Arena
-git checkout release/0.1.1
-pip install -e .
-cd ..
-
-
-# 5. Install LeRobot
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-pip install -e .
-cd ..
-
-
-# 6. Install additional dependencies
-pip install onnxruntime==1.23.2 lightwheel-sdk==1.0.1 vuer[all]==0.0.70 qpsolvers==4.8.1
-pip install numpy==1.26.0 # Isaac Sim 5.1 depends on numpy==1.26.0, this will be fixed in next release
-```
-
-## Evaluating Policies
-
-### Pre-trained Policies
-
-The following trained policies are available:
-
-| Policy | Architecture | Task | Link |
-| :-------------------------- | :----------- | :------------ | :----------------------------------------------------------------------- |
-| pi05-arena-gr1-microwave | PI0.5 | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/pi05-arena-gr1-microwave) |
-| smolvla-arena-gr1-microwave | SmolVLA | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/smolvla-arena-gr1-microwave) |
-
-### Evaluate SmolVLA
-
-```bash
-pip install -e ".[smolvla]"
-pip install numpy==1.26.0 # revert numpy to version 1.26
-```
-
-```bash
-lerobot-eval \
- --policy.path=nvidia/smolvla-arena-gr1-microwave \
- --env.type=isaaclab_arena \
- --env.hub_path=nvidia/isaaclab-arena-envs \
- --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \
- --policy.device=cuda \
- --env.environment=gr1_microwave \
- --env.embodiment=gr1_pink \
- --env.object=mustard_bottle \
- --env.headless=false \
- --env.enable_cameras=true \
- --env.video=true \
- --env.video_length=10 \
- --env.video_interval=15 \
- --env.state_keys=robot_joint_pos \
- --env.camera_keys=robot_pov_cam_rgb \
- --trust_remote_code=True \
- --eval.batch_size=1
-```
-
-### Evaluate PI0.5
-
-```bash
-pip install -e ".[pi]"
-pip install numpy==1.26.0 # revert numpy to version 1.26
-```
-
-PI0.5 requires disabling torch compile for evaluation:
-
-```bash
-TORCH_COMPILE_DISABLE=1 TORCHINDUCTOR_DISABLE=1 lerobot-eval \
- --policy.path=nvidia/pi05-arena-gr1-microwave \
- --env.type=isaaclab_arena \
- --env.hub_path=nvidia/isaaclab-arena-envs \
- --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \
- --policy.device=cuda \
- --env.environment=gr1_microwave \
- --env.embodiment=gr1_pink \
- --env.object=mustard_bottle \
- --env.headless=false \
- --env.enable_cameras=true \
- --env.video=true \
- --env.video_length=15 \
- --env.video_interval=15 \
- --env.state_keys=robot_joint_pos \
- --env.camera_keys=robot_pov_cam_rgb \
- --trust_remote_code=True \
- --eval.batch_size=1
-```
-
-
-> [!TIP]
-> To change the number of parallel environments, use the `--eval.batch_size` flag.
-
-
-### What to Expect
-
-During evaluation, you will see a progress bar showing the running success rate:
-
-```
-Stepping through eval batches: 8%|██████▍ | 4/50 [00:45<08:06, 10.58s/it, running_success_rate=25.0%]
-```
-
-### Video Recording
-
-To enable video recording during evaluation, add the following flags to your command:
-
-```bash
---env.video=true \
---env.video_length=15 \
---env.video_interval=15
-```
-
-For more details on video recording, see the [IsaacLab Recording Documentation](https://isaac-sim.github.io/IsaacLab/main/source/how-to/record_video.html).
-
-
-> [!WARNING]
-> When running headless with `--env.headless=true`, you must also enable cameras explicitly for camera-enabled environments:
-
-```bash
---env.headless=true --env.enable_cameras=true
-```
-
-
-
-### Output Directory
-
-Evaluation videos are saved to the output directory with the following structure:
-
-```
-outputs/eval/<date>/<time>_<env_type>_<policy_name>/videos/<environment>_<env_index>/eval_episode_<episode_index>.mp4
-```
-
-For example:
-
-```
-outputs/eval/2026-01-02/14-38-01_isaaclab_arena_smolvla/videos/gr1_microwave_0/eval_episode_0.mp4
-```
-
-## Training Policies
-
-To learn more about training policies with LeRobot, please refer to the training documentation:
-
-- [SmolVLA](./smolvla)
-- [Pi0.5](./pi05)
-- [GR00T N1.5](./groot)
-
-Sample IsaacLab Arena datasets are available on HuggingFace Hub for experimentation:
-
-| Dataset | Description | Frames |
-| :-------------------------------------------------------------------------------------------------------- | :------------------------- | :----- |
-| [Arena-GR1-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-GR1-Manipulation-Task-v3) | GR1 microwave manipulation | ~4K |
-| [Arena-G1-Loco-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-G1-Loco-Manipulation-Task) | G1 loco-manipulation | ~4K |
-
-## Environment Configuration
-
-### Full Configuration Options
-
-```python
-from lerobot.envs.configs import IsaaclabArenaEnv
-
-config = IsaaclabArenaEnv(
- # Environment selection
- environment="gr1_microwave", # Task environment
- embodiment="gr1_pink", # Robot embodiment
- object="power_drill", # Object to manipulate
-
- # Simulation settings
- episode_length=300, # Max steps per episode
- headless=True, # Run without GUI
- device="cuda:0", # GPU device
- seed=42, # Random seed
-
- # Observation configuration
- state_keys="robot_joint_pos", # State observation keys (comma-separated)
- camera_keys="robot_pov_cam_rgb", # Camera observation keys (comma-separated)
- state_dim=54, # Expected state dimension
- action_dim=36, # Expected action dimension
- camera_height=512, # Camera image height
- camera_width=512, # Camera image width
- enable_cameras=True, # Enable camera observations
-
- # Video recording
- video=False, # Enable video recording
- video_length=100, # Frames per video
- video_interval=200, # Steps between recordings
-
- # Advanced
- mimic=False, # Enable mimic mode
- teleop_device=None, # Teleoperation device
- disable_fabric=False, # Disable fabric optimization
- enable_pinocchio=True, # Enable Pinocchio for IK
-)
-```
-
-### Using Environment Hub directly for advanced usage
-
-Create a file called `test_env_load_arena.py` or [download from the EnvHub](https://huggingface.co/nvidia/isaaclab-arena-envs/blob/main/tests/test_env_load_arena.py):
-
-```python
-import logging
-from dataclasses import asdict
-from pprint import pformat
-import torch
-import tqdm
-from lerobot.configs import parser
-from lerobot.configs.eval import EvalPipelineConfig
-
-
-@parser.wrap()
-def main(cfg: EvalPipelineConfig):
- """Run random action rollout for IsaacLab Arena environment."""
- logging.info(pformat(asdict(cfg)))
-
- from lerobot.envs.factory import make_env
-
- env_dict = make_env(
- cfg.env,
- n_envs=cfg.env.num_envs,
- trust_remote_code=True,
- )
- env = next(iter(env_dict.values()))[0]
- env.reset()
- for _ in tqdm.tqdm(range(cfg.env.episode_length)):
- with torch.inference_mode():
- actions = env.action_space.sample()
- obs, rewards, terminated, truncated, info = env.step(actions)
- if terminated.any() or truncated.any():
- obs, info = env.reset()
- env.close()
-
-
-if __name__ == "__main__":
- main()
-```
-
-Run with:
-
-```bash
-python test_env_load_arena.py \
- --env.environment=g1_locomanip_pnp \
- --env.embodiment=gr1_pink \
- --env.object=cracker_box \
- --env.num_envs=4 \
- --env.enable_cameras=true \
- --env.seed=1000 \
- --env.video=true \
- --env.video_length=10 \
- --env.video_interval=15 \
- --env.headless=false \
- --env.hub_path=nvidia/isaaclab-arena-envs \
- --env.type=isaaclab_arena
-```
-
-## Creating New Environments
-
-First create a new IsaacLab Arena environment by following the [IsaacLab Arena Documentation](https://isaac-sim.github.io/IsaacLab-Arena/release/0.1.1/index.html).
-
-Clone our EnvHub repo:
-
-```bash
-git clone https://huggingface.co/nvidia/isaaclab-arena-envs
-```
-
-Modify the `example_envs.yaml` file based on your new environment.
-[Upload](./envhub#step-3-upload-to-the-hub) your modified repo to HuggingFace EnvHub.
-
-> [!NOTE]
-> Your IsaacLab Arena environment code must be locally available during evaluation. Users can clone your environment repository separately, or you can bundle the environment code and assets directly in your EnvHub repo.
-
-Then, when evaluating, use your new environment:
-
-```bash
-lerobot-eval \
-    --env.hub_path=<your_username>/isaaclab-arena-envs \
-    --env.environment=<your_new_environment> \
-    ...other flags...
-```
-
-We look forward to your contributions!
-
-## Troubleshooting
-
-### CUDA out of memory
-
-Reduce `batch_size` or use a GPU with more VRAM:
-
-```bash
---eval.batch_size=1
-```
-
-### EULA not accepted
-
-Set environment variables before running:
-
-```bash
-export ACCEPT_EULA=Y
-export PRIVACY_CONSENT=Y
-```
-
-### Video recording not working
-
-Enable cameras when running headless:
-
-```bash
---env.video=true --env.enable_cameras=true --env.headless=true
-```
-
-### Policy output dimension mismatch
-
-Ensure `action_dim` matches your policy:
-
-```bash
---env.action_dim=36
-```
-
-### libGLU.so.1 Errors during Isaac Sim initialization
-
-This error typically occurs on headless machines. Ensure the following system libraries are installed:
-
-```bash
-sudo apt update && sudo apt install -y libglu1-mesa libxt6
-```
-
-## See Also
-
-- [EnvHub Documentation](./envhub) - General EnvHub usage
-- [IsaacLab Arena GitHub](https://github.com/isaac-sim/IsaacLab-Arena)
-- [IsaacLab Documentation](https://isaac-sim.github.io/IsaacLab/)
-
-## Lightwheel LW-BenchHub
-
-[Lightwheel](https://www.lightwheel.ai) is bringing `Lightwheel-Libero-Tasks` and `Lightwheel-RoboCasa-Tasks` with 268 tasks to the LeRobot ecosystem.
-LW-BenchHub collects and generates large-scale datasets via teleoperation that comply with the LeRobot specification, enabling out-of-the-box training and evaluation workflows.
-With the unified interface provided by EnvHub, developers can quickly build end-to-end experimental pipelines.
-
-### Install
-
-Assuming you followed the [Installation](#installation) steps, you can install LW-BenchHub with:
-
-```bash
-conda install pinocchio -c conda-forge -y
-pip install numpy==1.26.0 # revert numpy to version 1.26
-
-sudo apt-get install git-lfs && git lfs install
-
-git clone https://github.com/LightwheelAI/lw_benchhub
-cd lw_benchhub
-git lfs pull  # Ensure LFS files (e.g., .usd assets) are downloaded
-
-pip install -e .
-```
-
-For more detailed instructions, please refer to the [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/usage/Installation).
-
-### Lightwheel Tasks Dataset
-
-LW-BenchHub datasets are available on HuggingFace Hub:
-
-| Dataset | Description | Tasks | Frames |
-| :------------------------------------------------------------------------------------------------------------ | :---------------------- | :---- | :----- |
-| [Lightwheel-Tasks-X7S](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-X7S) | X7S LIBERO and RoboCasa | 117 | ~10.3M |
-| [Lightwheel-Tasks-Double-Piper](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-Double-Piper) | Double-Piper LIBERO | 130 | ~6.0M |
-| [Lightwheel-Tasks-G1-Controller](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-Controller) | G1-Controller LIBERO | 62 | ~2.7M |
-| [Lightwheel-Tasks-G1-WBC](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-WBC) | G1-WBC RoboCasa | 32 | ~1.5M |
-
-For training policies, refer to the [Training Policies](#training-policies) section.
-
-### Evaluating Policies
-
-#### Pre-trained Policies
-
-The following trained policies are available:
-
-| Policy | Architecture | Task | Layout | Robot | Link |
-| :----------------------- | :----------- | :----------------------------- | :--------- | :-------------- | :------------------------------------------------------------------------------------ |
-| smolvla-double-piper-pnp | SmolVLA | L90K1PutTheBlackBowlOnThePlate | libero-1-1 | DoublePiper-Abs | [HuggingFace](https://huggingface.co/LightwheelAI/smolvla-double-piper-pnp/tree/main) |
-
-#### Evaluate SmolVLA
-
-```bash
-lerobot-eval \
- --policy.path=LightwheelAI/smolvla-double-piper-pnp \
- --env.type=isaaclab_arena \
- --rename_map='{"observation.images.left_hand_camera_rgb": "observation.images.left_hand", "observation.images.right_hand_camera_rgb": "observation.images.right_hand", "observation.images.first_person_camera_rgb": "observation.images.first_person"}' \
- --env.hub_path=LightwheelAI/lw_benchhub_env \
- --env.kwargs='{"config_path": "configs/envhub/example.yml"}' \
- --trust_remote_code=true \
- --env.state_keys=joint_pos \
- --env.action_dim=12 \
- --env.camera_keys=left_hand_camera_rgb,right_hand_camera_rgb,first_person_camera_rgb \
- --policy.device=cuda \
- --eval.batch_size=10 \
- --eval.n_episodes=100
-```
-
-### Environment Configuration
-
-Evaluation can be quickly launched by modifying the `robot`, `task`, and `layout` settings in the configuration file.
-
-#### Full Configuration Options
-
-```yml
-# =========================
-# Basic Settings
-# =========================
-disable_fabric: false
-device: cuda:0
-sensitivity: 1.0
-step_hz: 50
-enable_cameras: true
-execute_mode: eval
-episode_length_s: 20.0 # Episode length in seconds, increase if episodes timeout during eval
-
-# =========================
-# Robot Settings
-# =========================
-robot: DoublePiper-Abs # Robot type, DoublePiper-Abs, X7S-Abs, G1-Controller or G1-Controller-DecoupledWBC
-robot_scale: 1.0
-
-# =========================
-# Task & Scene Settings
-# =========================
-task: L90K1PutTheBlackBowlOnThePlate # Task name
-scene_backend: robocasa
-task_backend: robocasa
-debug_assets: null
-layout: libero-1-1 # Layout and style ID
-sources:
- - objaverse
- - lightwheel
- - aigen_objs
-object_projects: []
-usd_simplify: false
-seed: 42
-
-# =========================
-# Object Placement Retry Settings
-# =========================
-max_scene_retry: 4
-max_object_placement_retry: 3
-
-resample_objects_placement_on_reset: true
-resample_robot_placement_on_reset: true
-
-# =========================
-# Replay Configuration Settings
-# =========================
-replay_cfgs:
- add_camera_to_observation: true
- render_resolution: [640, 480]
-```
-
-### See Also
-
-- [LW-BenchHub GitHub](https://github.com/LightwheelAI/LW-BenchHub)
-- [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/)
diff --git a/lerobot/docs/source/envhub_leisaac.mdx b/lerobot/docs/source/envhub_leisaac.mdx
deleted file mode 100644
index f2c6f79d570f1d1f474cf9db4b9930057d75627b..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/envhub_leisaac.mdx
+++ /dev/null
@@ -1,302 +0,0 @@
-# LeIsaac × LeRobot EnvHub
-
-LeRobot EnvHub now supports **imitation learning in simulation** with LeIsaac.
-Spin up everyday manipulation tasks, teleoperate the robot, collect demos, push them to the Hub, and train policies in LeRobot — all in one loop.
-
-[LeIsaac](https://github.com/LightwheelAI/leisaac) integrates with IsaacLab and the SO101 Leader/Follower setup to provide:
-
-- 🕹️ **Teleoperation-first workflows** for data collection
-- 📦 **Built-in data conversion** ready for LeRobot training
-- 🤖 **Everyday skills** like picking oranges, lifting cubes, cleaning tables, and folding cloth
-- ☁️ **Ongoing upgrades** from [LightWheel](https://lightwheel.ai/): cloud simulation, EnvHub support, Sim2Real tooling, and more
-
-Below you’ll find the currently supported LeIsaac tasks exposed through LeRobot EnvHub.
-
-## Available Environments
-
-The following table lists all available tasks and environments in the LeIsaac × LeRobot EnvHub. You can also get the latest list of environments by running the following command:
-
-```bash
-python scripts/environments/list_envs.py
-```
-
-| Task | Environment ID | Task Description | Related Robot |
-| :--- | :--- | :--- | :--- |
-| Pick Orange | [LeIsaac-SO101-PickOrange-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/pick_orange_env_cfg.py) <br> [LeIsaac-SO101-PickOrange-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/direct/pick_orange_env.py) | Pick three oranges and put them into the plate, then reset the arm to rest state. | Single-Arm SO101 Follower |
-| Lift Cube | [LeIsaac-SO101-LiftCube-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/lift_cube_env_cfg.py) <br> [LeIsaac-SO101-LiftCube-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/direct/lift_cube_env.py) | Lift the red cube up. | Single-Arm SO101 Follower |
-| Clean Toy Table | [LeIsaac-SO101-CleanToyTable-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_env_cfg.py) <br> [LeIsaac-SO101-CleanToyTable-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/direct/clean_toy_table_bi_arm_env.py) | Pick two letter e objects into the box, and reset the arm to rest state. | Single-Arm SO101 Follower |
-| Fold Cloth | [LeIsaac-SO101-FoldCloth-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/direct/fold_cloth_bi_arm_env.py) | Fold the cloth, and reset the arm to rest state. _Note: Only the DirectEnv supports `check_success` in this task._ | Bi-Arm SO101 Follower |
-
-## Load LeIsaac directly in LeRobot with one line of code
-
-> EnvHub: Share LeIsaac environments through HuggingFace
-
-[EnvHub](https://huggingface.co/docs/lerobot/envhub) is our reproducible environment hub: spin up a packaged simulation with one line, experiment immediately, and publish your own tasks for the community.
-
-LeIsaac offers EnvHub support so you can consume or share tasks with only a few commands.
-
-
-
-## Getting Started: Environment Setup
-
-Run the following commands to set up your environment:
-
-```bash
-# Refer to Getting Started/Installation to install leisaac first
-conda create -n leisaac_envhub python=3.11
-conda activate leisaac_envhub
-
-conda install -c "nvidia/label/cuda-12.8.1" cuda-toolkit
-pip install -U torch==2.7.0 torchvision==0.22.0 --index-url https://download.pytorch.org/whl/cu128
-pip install 'leisaac[isaaclab] @ git+https://github.com/LightwheelAI/leisaac.git#subdirectory=source/leisaac' --extra-index-url https://pypi.nvidia.com
-
-# Install lerobot
-pip install lerobot==0.4.1
-
-# Fix numpy version
-pip install numpy==1.26.0
-```
-
-## Usage Example
-
-EnvHub exposes every LeIsaac-supported task through a uniform interface. The examples below load `so101_pick_orange` and demonstrate a random-action rollout and interactive teleoperation.
-
-### Random Action
-
-
-
-```python
-# envhub_random_action.py
-
-import torch
-from lerobot.envs.factory import make_env
-
-# Load from the hub
-envs_dict = make_env("LightwheelAI/leisaac_env:envs/so101_pick_orange.py", n_envs=1, trust_remote_code=True)
-
-# Access the environment
-suite_name = next(iter(envs_dict))
-sync_vector_env = envs_dict[suite_name][0]
-# retrieve the isaac environment from the sync vector env
-env = sync_vector_env.envs[0].unwrapped
-
-# Use it like any gym environment
-obs, info = env.reset()
-
-# Run until interrupted (Ctrl+C), then close the env cleanly
-try:
-    while True:
-        action = torch.tensor(env.action_space.sample())
-        obs, reward, terminated, truncated, info = env.step(action)
-        if terminated or truncated:
-            obs, info = env.reset()
-except KeyboardInterrupt:
-    pass
-
-env.close()
-```
-
-
-
-```bash
-python envhub_random_action.py
-```
-
-You should see the SO101 arm swinging under purely random commands.
-
-### Teleoperation
-
-LeRobot’s teleoperation stack can drive the simulated arm.
-
-Connect the SO101 Leader controller, then run the calibration command below.
-
-```bash
-lerobot-calibrate \
- --teleop.type=so101_leader \
- --teleop.port=/dev/ttyACM0 \
- --teleop.id=leader
-```
-
-And then launch the teleop script.
-
-
-
-```python
-# envhub_teleop_example.py
-
-import logging
-import time
-import gymnasium as gym
-
-from dataclasses import asdict, dataclass
-from pprint import pformat
-
-from lerobot.teleoperators import ( # noqa: F401
- Teleoperator,
- TeleoperatorConfig,
- make_teleoperator_from_config,
- so_leader,
- bi_so_leader,
-)
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.utils import init_logging
-from lerobot.envs.factory import make_env
-
-
-@dataclass
-class TeleoperateConfig:
- teleop: TeleoperatorConfig
- env_name: str = "so101_pick_orange"
- fps: int = 60
-
-
-@dataclass
-class EnvWrap:
- env: gym.Env
-
-
-def make_env_from_leisaac(env_name: str = "so101_pick_orange"):
- envs_dict = make_env(
- f'LightwheelAI/leisaac_env:envs/{env_name}.py',
- n_envs=1,
- trust_remote_code=True
- )
- suite_name = next(iter(envs_dict))
- sync_vector_env = envs_dict[suite_name][0]
- env = sync_vector_env.envs[0].unwrapped
-
- return env
-
-
-def teleop_loop(teleop: Teleoperator, env: gym.Env, fps: int):
- from leisaac.devices.action_process import preprocess_device_action
- from leisaac.assets.robots.lerobot import SO101_FOLLOWER_MOTOR_LIMITS
- from leisaac.utils.env_utils import dynamic_reset_gripper_effort_limit_sim
-
- env_wrap = EnvWrap(env=env)
-
- obs, info = env.reset()
- while True:
- loop_start = time.perf_counter()
- if env.cfg.dynamic_reset_gripper_effort_limit:
- dynamic_reset_gripper_effort_limit_sim(env, 'so101leader')
-
- raw_action = teleop.get_action()
- processed_action = preprocess_device_action(
- dict(
- so101_leader=True,
- joint_state={
- k.removesuffix(".pos"): v for k, v in raw_action.items()},
- motor_limits=SO101_FOLLOWER_MOTOR_LIMITS),
- env_wrap
- )
- obs, reward, terminated, truncated, info = env.step(processed_action)
- if terminated or truncated:
- obs, info = env.reset()
-
- dt_s = time.perf_counter() - loop_start
- precise_sleep(max(1 / fps - dt_s, 0.0))
- loop_s = time.perf_counter() - loop_start
- print(f"\ntime: {loop_s * 1e3:.2f}ms ({1 / loop_s:.0f} Hz)")
-
-
-def teleoperate(cfg: TeleoperateConfig):
- init_logging()
- logging.info(pformat(asdict(cfg)))
-
- teleop = make_teleoperator_from_config(cfg.teleop)
- env = make_env_from_leisaac(cfg.env_name)
-
- teleop.connect()
- if hasattr(env, 'initialize'):
- env.initialize()
- try:
- teleop_loop(teleop=teleop, env=env, fps=cfg.fps)
- except KeyboardInterrupt:
- pass
- finally:
- teleop.disconnect()
- env.close()
-
-
-def main():
- teleoperate(TeleoperateConfig(
- teleop=so_leader.SO101LeaderConfig(
- port="/dev/ttyACM0",
- id='leader',
- use_degrees=False,
- ),
- env_name="so101_pick_orange",
- fps=60,
- ))
-
-
-if __name__ == "__main__":
- main()
-
-```
-
-
-
-```bash
-python envhub_teleop_example.py
-```
-
-Running the script lets you operate the simulated arm using the physical Leader device.
-
-## ☁️ Cloud Simulation (No GPU Required)
-
-Don’t have a local GPU or the right drivers? No problem! You can run LeIsaac entirely in the cloud with zero setup.
-LeIsaac works out-of-the-box on **NVIDIA Brev**, giving you a fully configured environment directly in your browser.
-
-👉 **Start here:** [https://lightwheelai.github.io/leisaac/docs/cloud_simulation/nvidia_brev](https://lightwheelai.github.io/leisaac/docs/cloud_simulation/nvidia_brev)
-
-Once your instance is deployed, simply open the link for **port 80 (HTTP)** to launch **Visual Studio Code Server** (default password: `password`). From there, you can run simulations, edit code, and visualize IsaacLab environments — all from your web browser.
-
-**No GPU, no drivers, no local installation. Just click and run.**
-
-## Additional Notes
-
-We keep EnvHub coverage aligned with the LeIsaac task suite. The following tasks are currently supported:
-
-- `so101_pick_orange`
-- `so101_lift_cube`
-- `so101_clean_toytable`
-- `bi_so101_fold_cloth`
-
-Switch tasks by targeting a different script when calling `make_env`, for example:
-
-```python
-envs_dict_pick_orange = make_env("LightwheelAI/leisaac_env:envs/so101_pick_orange.py", n_envs=1, trust_remote_code=True)
-envs_dict_lift_cube = make_env("LightwheelAI/leisaac_env:envs/so101_lift_cube.py", n_envs=1, trust_remote_code=True)
-envs_dict_clean_toytable = make_env("LightwheelAI/leisaac_env:envs/so101_clean_toytable.py", n_envs=1, trust_remote_code=True)
-envs_dict_fold_cloth = make_env("LightwheelAI/leisaac_env:envs/bi_so101_fold_cloth.py", n_envs=1, trust_remote_code=True)
-```
-
-Note: when working with `bi_so101_fold_cloth`, call `initialize()` immediately after retrieving the env before performing any other operations:
-
-```python
-from lerobot.envs.factory import make_env
-
-# Load from the hub
-envs_dict = make_env("LightwheelAI/leisaac_env:envs/bi_so101_fold_cloth.py", n_envs=1, trust_remote_code=True)
-
-# Access the environment
-suite_name = next(iter(envs_dict))
-sync_vector_env = envs_dict[suite_name][0]
-# retrieve the isaac environment from the sync vector env
-env = sync_vector_env.envs[0].unwrapped
-
-# NOTE: initialize() first
-env.initialize()
-
-# other operation with env...
-```
-
-
diff --git a/lerobot/docs/source/feetech.mdx b/lerobot/docs/source/feetech.mdx
deleted file mode 100644
index 777f8a5093521a93470a1e6a5356c443f45fb0d8..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/feetech.mdx
+++ /dev/null
@@ -1,71 +0,0 @@
-# Feetech Motor Firmware Update
-
-This tutorial guides you through updating the firmware of Feetech motors using the official Feetech software.
-
-## Prerequisites
-
-- Windows computer (Feetech software is only available for Windows)
-- Feetech motor control board
-- USB cable to connect the control board to your computer
-- Feetech motors connected to the control board
-
-## Step 1: Download Feetech Software
-
-1. Visit the official Feetech software download page: [https://www.feetechrc.com/software.html](https://www.feetechrc.com/software.html)
-2. Download the latest version of the Feetech debugging software (FD)
-3. Install the software on your Windows computer
-
-## Step 2: Hardware Setup
-
-1. Connect your Feetech motors to the motor control board
-2. Connect the motor control board to your Windows computer via USB cable
-3. Ensure power is supplied to the motors
-
-## Step 3: Configure Connection
-
-1. Launch the Feetech debugging software
-2. Select the correct COM port from the port dropdown menu
- - If unsure which port to use, check Windows Device Manager under "Ports (COM & LPT)"
-3. Set the appropriate baud rate (typically 1000000 for most Feetech motors)
-4. Click "Open" to establish communication with the control board
-
-## Step 4: Scan for Motors
-
-1. Once connected, click the "Search" button to detect all connected motors
-2. The software will automatically discover and list all motors on the bus
-3. Each motor will appear with its ID number
-
-## Step 5: Update Firmware
-
-For each motor you want to update:
-
-1. **Select the motor** from the list by clicking on it
-2. **Click on Upgrade tab**:
-3. **Click on Online button**:
- - If a firmware update is available, it will be displayed in the box
-4. **Click on Upgrade button**:
- - The update progress will be displayed
-
-## Step 6: Verify Update
-
-1. After the update completes, the software should automatically refresh the motor information
-2. Verify that the firmware version has been updated to the expected version
-
-## Important Notes
-
-⚠️ **Warning**: Do not disconnect power or USB during a firmware update; doing so can brick the motor.
-
-## Bonus: Motor Debugging on Linux/macOS
-
-For debugging purposes only, you can use the open-source Feetech Debug Tool:
-
-- **Repository**: [FT_SCServo_Debug_Qt](https://github.com/CarolinePascal/FT_SCServo_Debug_Qt/tree/fix/port-search-timer)
-
-### Installation Instructions
-
-Follow the instructions in the repository to install the tool: on Ubuntu you can install it directly, while on macOS you need to build it from source.
-
-**Limitations:**
-
-- This tool is for debugging and parameter adjustment only
-- Firmware updates must still be done on Windows with official Feetech software
diff --git a/lerobot/docs/source/groot.mdx b/lerobot/docs/source/groot.mdx
deleted file mode 100644
index 5022b50b85ef21a55efbcc98b19bf5fac81e73c2..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/groot.mdx
+++ /dev/null
@@ -1,131 +0,0 @@
-# GR00T N1.5 Policy
-
-GR00T N1.5 is an open foundation model from NVIDIA designed for generalized humanoid robot reasoning and skills. It is a cross-embodiment model that accepts multimodal input, including language and images, to perform manipulation tasks in diverse environments.
-
-This document outlines the specifics of its integration and usage within the LeRobot framework.
-
-## Model Overview
-
-NVIDIA Isaac GR00T N1.5 is an upgraded version of the GR00T N1 foundation model. It is built to improve generalization and language-following abilities for humanoid robots.
-
-Developers and researchers can post-train GR00T N1.5 with their own real or synthetic data to adapt it for specific humanoid robots or tasks.
-
-GR00T N1.5 (specifically the GR00T-N1.5-3B model) is built using pre-trained vision and language encoders. It utilizes a flow matching action transformer to model a chunk of actions, conditioned on vision, language, and proprioception.
-
-Its strong performance comes from being trained on an expansive and diverse humanoid dataset, which includes:
-
-- Real captured data from robots.
-- Synthetic data generated using NVIDIA Isaac GR00T Blueprint.
-- Internet-scale video data.
-
-This approach allows the model to be highly adaptable through post-training for specific embodiments, tasks, and environments.
-
-## Installation Requirements
-
-As of today, GR00T N1.5 requires Flash Attention for its internal operation.
-
-We are working on making this optional, but in the meantime this requires an extra installation step, and the policy can only be used on CUDA-enabled devices.
-
-1. Follow the Environment Setup section of our [Installation Guide](./installation). **Attention:** don't install `lerobot` in this step.
-2. Install [Flash Attention](https://github.com/Dao-AILab/flash-attention) by running:
-
-```bash
-# Check https://pytorch.org/get-started/locally/ for your system
-pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX
-pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies
-pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation
-python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')"
-```
-
-3. Install LeRobot by running:
-
-```bash
-pip install "lerobot[groot]"
-```
-
-## Usage
-
-To use GR00T in your LeRobot configuration, specify the policy type as:
-
-```bash
---policy.type=groot
-```
-
-## Training
-
-### Training Command Example
-
-Here's a complete training command for finetuning the base GR00T model on your own dataset:
-
-```bash
-# Using a multi-GPU setup
-accelerate launch \
- --multi_gpu \
- --num_processes=$NUM_GPUS \
- $(which lerobot-train) \
- --output_dir=$OUTPUT_DIR \
- --save_checkpoint=true \
- --batch_size=$BATCH_SIZE \
- --steps=$NUM_STEPS \
- --save_freq=$SAVE_FREQ \
- --log_freq=$LOG_FREQ \
- --policy.push_to_hub=true \
- --policy.type=groot \
- --policy.repo_id=$REPO_ID \
- --policy.tune_diffusion_model=false \
- --dataset.repo_id=$DATASET_ID \
- --wandb.enable=true \
- --wandb.disable_artifact=true \
- --job_name=$JOB_NAME
-```
-
-## Performance Results
-
-### Libero Benchmark Results
-
-> [!NOTE]
-> Follow our instructions for Libero usage: [Libero](./libero)
-
-GR00T has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the GR00T N1.5 model for 30k steps on the Libero dataset and compared the results to the GR00T reference results.
-
-| Benchmark | LeRobot Implementation | GR00T Reference |
-| ------------------ | ---------------------- | --------------- |
-| **Libero Spatial** | 82.0% | 92.0% |
-| **Libero Object** | 99.0% | 92.0% |
-| **Libero Long** | 82.0% | 76.0% |
-| **Average** | 87.0% | 87.0% |
-
-These results demonstrate GR00T's strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.
-
-### Evaluate in your hardware setup
-
-Once you have trained your model with your parameters, you can run inference on your downstream task. Follow the instructions in [Imitation Learning for Robots](./il_robots). For example:
-
-```bash
-lerobot-record \
- --robot.type=bi_so_follower \
- --robot.left_arm_port=/dev/ttyACM1 \
- --robot.right_arm_port=/dev/ttyACM0 \
- --robot.id=bimanual_follower \
- --robot.cameras='{ right: {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30},
- left: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
- top: {"type": "opencv", "index_or_path": 4, "width": 640, "height": 480, "fps": 30},
- }' \
- --display_data=true \
- --dataset.repo_id=/eval_groot-bimanual \
- --dataset.num_episodes=10 \
- --dataset.single_task="Grab and handover the red cube to the other arm" \
- --dataset.episode_time_s=30 \
- --dataset.reset_time_s=10 \
- --policy.path=/groot-bimanual # your trained model
-```
-
-## License
-
-This model follows the **Apache 2.0 License**, consistent with the original [GR00T repository](https://github.com/NVIDIA/Isaac-GR00T).
diff --git a/lerobot/docs/source/hilserl.mdx b/lerobot/docs/source/hilserl.mdx
deleted file mode 100644
index dacaa960aa73c93460aaaa294fb60d54c594e971..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/hilserl.mdx
+++ /dev/null
@@ -1,923 +0,0 @@
-# HIL-SERL Real Robot Training Workflow Guide
-
-In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient Reinforcement Learning (HIL-SERL) workflow using LeRobot. You will master training a policy with RL on a real robot in just a few hours.
-
-HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you'll use a gamepad to provide interventions and control the robot during the learning process.
-
-It combines three key ingredients:
-
-1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.
-
-2. **On-robot actor / learner loop with human interventions:** a distributed Soft Actor Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.
-
-3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing and WandB monitoring keep the data useful and the hardware safe.
-
-Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.
-
-*Figure: HIL-SERL workflow, Luo et al. 2024*
-
-This guide provides step-by-step instructions for training a policy on a real robot with LeRobot's HIL-SERL implementation.
-
-## What do I need?
-
-- A gamepad (recommended) or keyboard to control the robot
-- A Nvidia GPU
-- A real robot with follower and leader arms (the leader arm is optional if you use the keyboard or the gamepad)
-- A URDF file for the robot for the kinematics package (check `lerobot/model/kinematics.py`)
-
-## What kind of tasks can I train?
-
-One can use HIL-SERL to train on a variety of manipulation tasks. Some recommendations:
-
-- Start with a simple task to understand how the system works.
- - Push cube to a goal region
- - Pick and lift cube with the gripper
-- Avoid extremely long-horizon tasks. Focus on tasks that can be completed in 5-10 seconds.
-- Once you have a good idea of how the system works, you can try more complex tasks and longer horizons.
- - Pick and place cube
- - Bimanual tasks to pick objects with two arms
- - Hand-over tasks to transfer objects from one arm to another
- - Go crazy!
-
-## Install LeRobot with HIL-SERL
-
-To install LeRobot with HIL-SERL, you need to install the `hilserl` extra.
-
-```bash
-pip install -e ".[hilserl]"
-```
-
-## Real Robot Training Workflow
-
-### Understanding Configuration
-
-The training process begins with proper configuration for the HILSerl environment. The main configuration class is `GymManipulatorConfig` in `lerobot/rl/gym_manipulator.py`, which contains nested `HILSerlRobotEnvConfig` and `DatasetConfig`. The configuration is organized into focused, nested sub-configs:
-
-
-```python
-class GymManipulatorConfig:
- env: HILSerlRobotEnvConfig # Environment configuration (nested)
- dataset: DatasetConfig # Dataset recording/replay configuration (nested)
- mode: str | None = None # "record", "replay", or None (for training)
- device: str = "cpu" # Compute device
-
-class HILSerlRobotEnvConfig(EnvConfig):
- robot: RobotConfig | None = None # Main robot agent (defined in `lerobot/robots`)
- teleop: TeleoperatorConfig | None = None # Teleoperator agent, e.g., gamepad or leader arm
- processor: HILSerlProcessorConfig # Processing pipeline configuration (nested)
- name: str = "real_robot" # Environment name
- task: str | None = None # Task identifier
- fps: int = 10 # Control frequency
-
-# Nested processor configuration
-class HILSerlProcessorConfig:
- control_mode: str = "gamepad" # Control mode
- observation: ObservationConfig | None = None # Observation processing settings
- image_preprocessing: ImagePreprocessingConfig | None = None # Image crop/resize settings
- gripper: GripperConfig | None = None # Gripper control and penalty settings
- reset: ResetConfig | None = None # Environment reset and timing settings
- inverse_kinematics: InverseKinematicsConfig | None = None # IK processing settings
- reward_classifier: RewardClassifierConfig | None = None # Reward classifier settings
- max_gripper_pos: float | None = 100.0 # Maximum gripper position
-
-# Sub-configuration classes
-class ObservationConfig:
- add_joint_velocity_to_observation: bool = False # Add joint velocities to state
- add_current_to_observation: bool = False # Add motor currents to state
- display_cameras: bool = False # Display camera feeds during execution
-
-class ImagePreprocessingConfig:
- crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None # Image cropping parameters
- resize_size: tuple[int, int] | None = None # Target image size
-
-class GripperConfig:
- use_gripper: bool = True # Enable gripper control
- gripper_penalty: float = 0.0 # Penalty for inappropriate gripper usage
-
-class ResetConfig:
- fixed_reset_joint_positions: Any | None = None # Joint positions for reset
- reset_time_s: float = 5.0 # Time to wait during reset
- control_time_s: float = 20.0 # Maximum episode duration
- terminate_on_success: bool = True # Whether to terminate episodes on success detection
-
-class InverseKinematicsConfig:
- urdf_path: str | None = None # Path to robot URDF file
- target_frame_name: str | None = None # End-effector frame name
- end_effector_bounds: dict[str, list[float]] | None = None # EE workspace bounds
- end_effector_step_sizes: dict[str, float] | None = None # EE step sizes per axis
-
-class RewardClassifierConfig:
- pretrained_path: str | None = None # Path to pretrained reward classifier
- success_threshold: float = 0.5 # Success detection threshold
- success_reward: float = 1.0 # Reward value for successful episodes
-
-# Dataset configuration
-class DatasetConfig:
- repo_id: str # LeRobot dataset repository ID
- task: str # Task identifier
- root: str | None = None # Local dataset root directory
- num_episodes_to_record: int = 5 # Number of episodes for recording
- replay_episode: int | None = None # Episode index for replay
- push_to_hub: bool = False # Whether to push datasets to Hub
-```
-
-
-### Processor Pipeline Architecture
-
-HIL-SERL uses a modular processor pipeline architecture that processes robot observations and actions through a series of composable steps. The pipeline is divided into two main components (a minimal sketch of the step-chaining pattern follows the two lists below):
-
-#### Environment Processor Pipeline
-
-The environment processor (`env_processor`) handles incoming observations and environment state:
-
-1. **VanillaObservationProcessorStep**: Converts raw robot observations into standardized format
-2. **JointVelocityProcessorStep** (optional): Adds joint velocity information to observations
-3. **MotorCurrentProcessorStep** (optional): Adds motor current readings to observations
-4. **ForwardKinematicsJointsToEE** (optional): Computes end-effector pose from joint positions
-5. **ImageCropResizeProcessorStep** (optional): Crops and resizes camera images
-6. **TimeLimitProcessorStep** (optional): Enforces episode time limits
-7. **GripperPenaltyProcessorStep** (optional): Applies penalties for inappropriate gripper usage
-8. **RewardClassifierProcessorStep** (optional): Automated reward detection using vision models
-9. **AddBatchDimensionProcessorStep**: Converts data to batch format for neural network processing
-10. **DeviceProcessorStep**: Moves data to the specified compute device (CPU/GPU)
-
-#### Action Processor Pipeline
-
-The action processor (`action_processor`) handles outgoing actions and human interventions:
-
-1. **AddTeleopActionAsComplimentaryDataStep**: Captures teleoperator actions for logging
-2. **AddTeleopEventsAsInfoStep**: Records intervention events and episode control signals
-3. **InterventionActionProcessorStep**: Handles human interventions and episode termination
-4. **Inverse Kinematics Pipeline** (when enabled):
- - **MapDeltaActionToRobotActionStep**: Converts delta actions to robot action format
- - **EEReferenceAndDelta**: Computes end-effector reference and delta movements
- - **EEBoundsAndSafety**: Enforces workspace safety bounds
- - **InverseKinematicsEEToJoints**: Converts end-effector actions to joint targets
- - **GripperVelocityToJoint**: Handles gripper control commands
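-
-For intuition, here is a minimal sketch of the composable-step pattern these pipelines follow. The step and pipeline classes below are hypothetical stand-ins for illustration, not the actual LeRobot classes:
-
-```python
-from typing import Any
-
-Transition = dict[str, Any]
-
-
-class ClipActionStep:
-    """Hypothetical step: clip each action dimension to safety bounds."""
-
-    def __init__(self, low: float, high: float):
-        self.low, self.high = low, high
-
-    def __call__(self, t: Transition) -> Transition:
-        t["action"] = [min(max(a, self.low), self.high) for a in t["action"]]
-        return t
-
-
-class LogActionStep:
-    """Hypothetical step: print the processed action for debugging."""
-
-    def __call__(self, t: Transition) -> Transition:
-        print("action ->", t["action"])
-        return t
-
-
-class Pipeline:
-    """Run steps in declaration order, each transforming the transition dict."""
-
-    def __init__(self, steps):
-        self.steps = list(steps)
-
-    def __call__(self, t: Transition) -> Transition:
-        for step in self.steps:
-            t = step(t)
-        return t
-
-
-action_processor = Pipeline([ClipActionStep(-1.0, 1.0), LogActionStep()])
-action_processor({"action": [0.3, 2.5, -1.7]})  # prints: action -> [0.3, 1.0, -1.0]
-```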
-
-#### Configuration Examples
-
-**Basic Observation Processing**:
-
-```json
-{
- "env": {
- "processor": {
- "observation": {
- "add_joint_velocity_to_observation": true,
- "add_current_to_observation": false,
- "display_cameras": false
- }
- }
- }
-}
-```
-
-**Image Processing**:
-
-```json
-{
- "env": {
- "processor": {
- "image_preprocessing": {
- "crop_params_dict": {
- "observation.images.front": [180, 250, 120, 150],
- "observation.images.side": [180, 207, 180, 200]
- },
- "resize_size": [128, 128]
- }
- }
- }
-}
-```
-
-**Inverse Kinematics Setup**:
-
-```json
-{
- "env": {
- "processor": {
- "inverse_kinematics": {
- "urdf_path": "path/to/robot.urdf",
- "target_frame_name": "end_effector",
- "end_effector_bounds": {
- "min": [0.16, -0.08, 0.03],
- "max": [0.24, 0.2, 0.1]
- },
- "end_effector_step_sizes": {
- "x": 0.02,
- "y": 0.02,
- "z": 0.02
- }
- }
- }
- }
-}
-```
-
-### Advanced Observation Processing
-
-The HIL-SERL framework supports additional observation processing features that can improve policy learning:
-
-#### Joint Velocity Processing
-
-Enable joint velocity estimation to provide the policy with motion information:
-
-```json
-{
- "env": {
- "processor": {
- "observation": {
- "add_joint_velocity_to_observation": true
- }
- }
- }
-}
-```
-
-This processor:
-
-- Estimates joint velocities using finite differences between consecutive joint position readings (see the sketch after this list)
-- Adds velocity information to the observation state vector
-- Useful for policies that need motion awareness for dynamic tasks
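-
-A minimal sketch of the finite-difference estimate, with illustrative numbers (the actual processor step handles this internally):
-
-```python
-import numpy as np
-
-
-def estimate_joint_velocity(prev_pos: np.ndarray, curr_pos: np.ndarray, dt: float) -> np.ndarray:
-    """Finite-difference velocity estimate between two consecutive readings."""
-    return (curr_pos - prev_pos) / dt
-
-
-# Example at a 10 Hz control loop (dt = 0.1 s):
-prev = np.array([10.0, -5.0, 30.0])  # joint positions at step t-1
-curr = np.array([11.0, -5.5, 30.0])  # joint positions at step t
-vel = estimate_joint_velocity(prev, curr, dt=0.1)
-state = np.concatenate([curr, vel])  # augmented observation state vector
-```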
-
-#### Motor Current Processing
-
-Monitor motor currents to detect contact forces and load conditions:
-
-```json
-{
- "env": {
- "processor": {
- "observation": {
- "add_current_to_observation": true
- }
- }
- }
-}
-```
-
-This processor:
-
-- Reads motor current values from the robot's control system
-- Adds current measurements to the observation state vector
-- Helps detect contact events, object weights, and mechanical resistance (see the sketch after this list)
-- Useful for contact-rich manipulation tasks
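-
-As a rough illustration of why current readings are useful: a spike in current on any motor often indicates contact or load. The threshold below is a made-up value; real units and magnitudes depend on your motors:
-
-```python
-import numpy as np
-
-CONTACT_CURRENT_THRESHOLD = 300.0  # illustrative raw units; tune per motor
-
-
-def likely_in_contact(currents: np.ndarray) -> bool:
-    """Flag likely contact when any motor current exceeds the threshold."""
-    return bool(np.any(np.abs(currents) > CONTACT_CURRENT_THRESHOLD))
-
-
-print(likely_in_contact(np.array([120.0, 480.0, 95.0])))  # True: motor 2 is loaded
-```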
-
-#### Combined Observation Processing
-
-You can enable multiple observation processing features simultaneously:
-
-```json
-{
- "env": {
- "processor": {
- "observation": {
- "add_joint_velocity_to_observation": true,
- "add_current_to_observation": true,
- "display_cameras": false
- }
- }
- }
-}
-```
-
-**Note**: Enabling additional observation features increases the state space dimensionality, which may require adjusting your policy network architecture and potentially collecting more training data.
-
-### Finding Robot Workspace Bounds
-
-Before collecting demonstrations, you need to determine the appropriate operational bounds for your robot.
-
-This helps simplify the problem of learning on the real robot in two ways: 1) by limiting the robot's operational space to a specific region that solves the task and avoids unnecessary or unsafe exploration, and 2) by allowing training in end-effector space rather than joint space. Empirically, learning in joint space for reinforcement learning in manipulation is often a harder problem - some tasks are nearly impossible to learn in joint space but become learnable when the action space is transformed to end-effector coordinates.
-
-**Using lerobot-find-joint-limits**
-
-This script helps you find the safe operational bounds for your robot's end-effector. Given a follower and leader arm pair, you can use the script to find the bounds for the follower arm that will be applied during training.
-Bounding the action space reduces the agent's redundant exploration and guarantees safety.
-
-```bash
-lerobot-find-joint-limits \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.id=black \
- --teleop.type=so100_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=blue
-```
-
-**Workflow**
-
-1. Run the script and move the robot through the space that solves the task
-2. The script will record the minimum and maximum end-effector positions and joint angles, and print them to the console, for example:
- ```
- Max ee position [0.2417 0.2012 0.1027]
- Min ee position [0.1663 -0.0823 0.0336]
- Max joint positions [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
- Min joint positions [-20.0, -20.0, -20.0, -20.0, -20.0, -20.0]
- ```
-3. Use these values in the configuration of your teleoperation device (TeleoperatorConfig) under the `end_effector_bounds` field
-
-**Example Configuration**
-
-```json
-"end_effector_bounds": {
- "max": [0.24, 0.20, 0.10],
- "min": [0.16, -0.08, 0.03]
-}
-```
-
-### Collecting Demonstrations
-
-With the bounds defined, you can safely collect demonstrations for training. Training with an off-policy RL algorithm lets us use the offline datasets we collect to improve the efficiency of the learning process.
-
-**Setting Up Record Mode**
-
-Create a configuration file for recording demonstrations (or edit an existing one like [env_config.json](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/env_config.json)):
-
-1. Set `mode` to `"record"` at the root level
-2. Specify a unique `repo_id` for your dataset in the `dataset` section (e.g., "username/task_name")
-3. Set `num_episodes_to_record` in the `dataset` section to the number of demonstrations you want to collect
-4. Set `env.processor.image_preprocessing.crop_params_dict` to `{}` initially (we'll determine crops later)
-5. Configure `env.robot`, `env.teleop`, and other hardware settings in the `env` section
-
-Example configuration section:
-
-```json
-{
- "env": {
- "type": "gym_manipulator",
- "name": "real_robot",
- "fps": 10,
- "processor": {
- "control_mode": "gamepad",
- "observation": {
- "display_cameras": false
- },
- "image_preprocessing": {
- "crop_params_dict": {},
- "resize_size": [128, 128]
- },
- "gripper": {
- "use_gripper": true,
- "gripper_penalty": 0.0
- },
- "reset": {
- "reset_time_s": 5.0,
- "control_time_s": 20.0
- }
- },
- "robot": {
- // ... robot configuration ...
- },
- "teleop": {
- // ... teleoperator configuration ...
- }
- },
- "dataset": {
- "repo_id": "username/pick_lift_cube",
- "root": null,
- "task": "pick_and_lift",
- "num_episodes_to_record": 15,
- "replay_episode": 0,
- "push_to_hub": true
- },
- "mode": "record",
- "device": "cpu"
-}
-```
-
-### Using a Teleoperation Device
-
-Along with your robot, you will need a teleoperation device to control it in order to collect datasets of your task and perform interventions during the online training.
-We support using a gamepad, a keyboard, or the leader arm of the robot.
-
-HIL-SERL learns actions in the end-effector space of the robot. Therefore, the teleoperation will control the end-effector's x,y,z displacements.
-
-For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class `SO100FollowerEndEffector` and its configuration `SO100FollowerEndEffectorConfig` for the default parameters related to the end-effector space.
-
-
-```python
-class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
- """Configuration for the SO100FollowerEndEffector robot."""
-
- # Default bounds for the end-effector position (in meters)
- end_effector_bounds: dict[str, list[float]] = field( # bounds for the end-effector in x,y,z direction
- default_factory=lambda: {
- "min": [-1.0, -1.0, -1.0], # min x, y, z
- "max": [1.0, 1.0, 1.0], # max x, y, z
- }
- )
-
- max_gripper_pos: float = 50 # maximum gripper position that the gripper will be open at
-
- end_effector_step_sizes: dict[str, float] = field( # maximum step size for the end-effector in x,y,z direction
- default_factory=lambda: {
- "x": 0.02,
- "y": 0.02,
- "z": 0.02,
- }
- )
-```
-
-
-The `Teleoperator` defines the teleoperation device. You can check the list of available teleoperators in `lerobot/teleoperators`.
-
-**Setting up the Gamepad**
-
-The gamepad provides a very convenient way to control the robot and the episode state.
-
-To setup the gamepad, you need to set the `control_mode` to `"gamepad"` and define the `teleop` section in the configuration file.
-
-```json
-{
- "env": {
- "teleop": {
- "type": "gamepad",
- "use_gripper": true
- },
- "processor": {
- "control_mode": "gamepad",
- "gripper": {
- "use_gripper": true
- }
- }
- }
-}
-```
-
-*Figure: Gamepad button mapping for robot control and episode management*
-
-**Setting up the SO101 leader**
-
-The SO101 leader arm has reduced gearing that allows it to move along with and track the follower arm during exploration, which makes taking over much smoother than with the gearless SO100.
-
-To setup the SO101 leader, you need to set the `control_mode` to `"leader"` and define the `teleop` section in the configuration file.
-
-```json
-{
- "env": {
- "teleop": {
- "type": "so101_leader",
- "port": "/dev/tty.usbmodem585A0077921",
- "use_degrees": true
- },
- "processor": {
- "control_mode": "leader",
- "gripper": {
- "use_gripper": true
- }
- }
- }
-}
-```
-
-In order to annotate the success/failure of the episode, **you will need** to use a keyboard to press `s` for success, `esc` for failure.
-During the online training, press `space` to take over the policy and `space` again to give the control back to the policy.
-
-*Video: SO101 leader teleoperation example. The leader tracks the follower; press `space` to intervene.*
-
-**Recording Demonstrations**
-
-Start the recording process; an example config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_so100.json):
-
-```bash
-python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
-```
-
-During recording:
-
-1. The robot will reset to the initial position defined in the configuration file `env.processor.reset.fixed_reset_joint_positions`
-2. Complete the task successfully
-3. The episode ends with a reward of 1 when you press the "success" button
-4. If the time limit is reached, or the fail button is pressed, the episode ends with a reward of 0
-5. You can rerecord an episode by pressing the "rerecord" button
-6. The process automatically continues to the next episode
-7. After recording all episodes, the dataset is pushed to the Hugging Face Hub (optional) and saved locally
-
-### Processing the Dataset
-
-After collecting demonstrations, process them to determine optimal camera crops.
-Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.
-
-Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. Good ROI selection should:
-
-- Include only the essential workspace where the task happens
-- Capture the robot's end-effector and all objects involved in the task
-- Exclude unnecessary background elements and distractions
-
-Note: If you already know the crop parameters, you can skip this step and just set the `crop_params_dict` in the configuration file during recording.
-
-**Determining Crop Parameters**
-
-Use the `crop_dataset_roi.py` script to interactively select regions of interest in your camera images:
-
-```bash
-python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
-```
-
-1. For each camera view, the script will display the first frame
-2. Draw a rectangle around the relevant workspace area
-3. Press 'c' to confirm the selection
-4. Repeat for all camera views
-5. The script outputs cropping parameters and creates a new cropped dataset
-
-Example output:
-
-```
-Selected Rectangular Regions of Interest (top, left, height, width):
-observation.images.side: [180, 207, 180, 200]
-observation.images.front: [180, 250, 120, 150]
-```
-
-*Figure: Interactive cropping tool for selecting regions of interest*
-
-**Updating Configuration**
-
-Add these crop parameters to your training configuration:
-
-```json
-{
- "env": {
- "processor": {
- "image_preprocessing": {
- "crop_params_dict": {
- "observation.images.side": [180, 207, 180, 200],
- "observation.images.front": [180, 250, 120, 150]
- },
- "resize_size": [128, 128]
- }
- }
- }
-}
-```
-
-**Recommended image resolution**
-
-Most vision-based policies have been validated on square inputs of either **128×128** (default) or **64×64** pixels. We therefore advise setting the `resize_size` parameter to `[128, 128]`, or `[64, 64]` if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.
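-
-To make the geometry concrete, here is a hedged sketch of how the `(top, left, height, width)` crop parameters and `resize_size` are applied to a frame; the actual processor step may differ in details such as antialiasing:
-
-```python
-import torch
-from torchvision.transforms.functional import crop, resize
-
-# Values printed by the ROI tool above: (top, left, height, width)
-crop_params = {"observation.images.front": (180, 250, 120, 150)}
-resize_size = (128, 128)
-
-img = torch.rand(3, 480, 640)  # dummy camera frame (C, H, W)
-top, left, height, width = crop_params["observation.images.front"]
-img = resize(crop(img, top, left, height, width), list(resize_size))
-print(img.shape)  # torch.Size([3, 128, 128])
-```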
-
-### Training a Reward Classifier
-
-The reward classifier plays an important role in the HIL-SERL workflow by automating reward assignment and automatically detecting episode success. Instead of manually defining reward functions or relying on human feedback for every timestep, the reward classifier learns to predict success/failure from visual observations. This enables the RL algorithm to learn efficiently by providing consistent and automated reward signals based on the robot's camera inputs.
-
-This section explains how to train a reward classifier for LeRobot's human-in-the-loop reinforcement learning implementation. Reward classifiers learn to predict the reward value of a given state, which can then be used in an RL setup to train a policy.
-
-**Note**: Training a reward classifier is optional. You can start the first round of RL experiments by annotating success manually with your gamepad or keyboard device.
-
-The reward classifier implementation in `modeling_classifier.py` uses a pretrained vision model to process the images. It can output either a single value for binary rewards to predict success/fail cases or multiple values for multi-class settings.
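-
-Conceptually, the binary case looks like the hedged sketch below: a pretrained backbone plus a small head that outputs a success probability, thresholded into a sparse reward. This is an illustration using torchvision's ResNet-18, not the actual `modeling_classifier.py` code (the documented default backbone is a ResNet-10 variant):
-
-```python
-import torch
-import torch.nn as nn
-from torchvision.models import resnet18
-
-
-class BinaryRewardClassifier(nn.Module):
-    """Illustrative reward classifier: pretrained backbone + small MLP head."""
-
-    def __init__(self, hidden_dim: int = 256):
-        super().__init__()
-        backbone = resnet18(weights="IMAGENET1K_V1")
-        backbone.fc = nn.Identity()  # keep the 512-d features
-        self.backbone = backbone
-        self.head = nn.Sequential(
-            nn.Linear(512, hidden_dim), nn.ReLU(), nn.Linear(hidden_dim, 1)
-        )
-
-    def forward(self, image: torch.Tensor) -> torch.Tensor:
-        return torch.sigmoid(self.head(self.backbone(image)))  # success probability
-
-
-clf = BinaryRewardClassifier()
-prob = clf(torch.rand(1, 3, 128, 128))
-reward = 1.0 if prob.item() > 0.5 else 0.0  # success_threshold = 0.5
-```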
-
-**Collecting a Dataset for the reward classifier**
-
-Before training, you need to collect a dataset with labeled examples. The `record_dataset` function in `gym_manipulator.py` enables the process of collecting a dataset of observations, actions, and rewards.
-
-To collect a dataset, you need to modify some parameters in the environment configuration based on HILSerlRobotEnvConfig.
-
-```bash
-python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
-```
-
-**Key Parameters for Data Collection**
-
-- **mode**: set it to `"record"` to collect a dataset (at root level)
-- **dataset.repo_id**: `"hf_username/dataset_name"`, name of the dataset and repo on the hub
-- **dataset.num_episodes_to_record**: Number of episodes to record
-- **env.processor.reset.terminate_on_success**: Whether to automatically terminate episodes when success is detected (default: `true`)
-- **env.fps**: Number of frames per second to record
-- **dataset.push_to_hub**: Whether to push the dataset to the hub
-
-The `env.processor.reset.terminate_on_success` parameter allows you to control episode termination behavior. When set to `false`, episodes will continue even after success is detected, allowing you to collect more positive examples with the reward=1 label. This is crucial for training reward classifiers as it provides more success state examples in your dataset. When set to `true` (default), episodes terminate immediately upon success detection.
-
-**Important**: For reward classifier training, set `terminate_on_success: false` to collect sufficient positive examples. For regular HIL-SERL training, keep it as `true` to enable automatic episode termination when the task is completed successfully.
-
-Example configuration section for data collection:
-
-```json
-{
- "env": {
- "type": "gym_manipulator",
- "name": "real_robot",
- "fps": 10,
- "processor": {
- "reset": {
- "reset_time_s": 5.0,
- "control_time_s": 20.0,
- "terminate_on_success": false
- },
- "gripper": {
- "use_gripper": true
- }
- },
- "robot": {
- // ... robot configuration ...
- },
- "teleop": {
- // ... teleoperator configuration ...
- }
- },
- "dataset": {
- "repo_id": "hf_username/dataset_name",
- "dataset_root": "data/your_dataset",
- "task": "reward_classifier_task",
- "num_episodes_to_record": 20,
- "replay_episode": null,
- "push_to_hub": true
- },
- "mode": "record",
- "device": "cpu"
-}
-```
-
-**Reward Classifier Configuration**
-
-The reward classifier is configured using `configuration_classifier.py`. Here are the key parameters:
-
-- **model_name**: Base model architecture (e.g., we mainly use `"helper2424/resnet10"`)
-- **model_type**: `"cnn"` or `"transformer"`
-- **num_cameras**: Number of camera inputs
-- **num_classes**: Number of output classes (typically 2 for binary success/failure)
-- **hidden_dim**: Size of hidden representation
-- **dropout_rate**: Regularization parameter
-- **learning_rate**: Learning rate for optimizer
-
-Example configuration for training the [reward classifier](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/reward_classifier_train_config.json):
-
-```json
-{
- "policy": {
- "type": "reward_classifier",
- "model_name": "helper2424/resnet10",
- "model_type": "cnn",
- "num_cameras": 2,
- "num_classes": 2,
- "hidden_dim": 256,
- "dropout_rate": 0.1,
- "learning_rate": 1e-4,
- "device": "cuda",
- "use_amp": true,
- "input_features": {
- "observation.images.front": {
- "type": "VISUAL",
- "shape": [3, 128, 128]
- },
- "observation.images.side": {
- "type": "VISUAL",
- "shape": [3, 128, 128]
- }
- }
- }
-}
-```
-
-**Training the Classifier**
-
-To train the classifier, use the `train.py` script with your configuration:
-
-```bash
-lerobot-train --config_path path/to/reward_classifier_train_config.json
-```
-
-**Deploying and Testing the Model**
-
-To use your trained reward classifier, configure the `HILSerlRobotEnvConfig` to use your model:
-
-
-```python
-config = GymManipulatorConfig(
- env=HILSerlRobotEnvConfig(
- processor=HILSerlProcessorConfig(
- reward_classifier=RewardClassifierConfig(
- pretrained_path="path_to_your_pretrained_trained_model"
- )
- ),
- # Other environment parameters
- ),
- dataset=DatasetConfig(...),
- mode=None # For training
-)
-```
-
-
-Or set the arguments in the JSON config file:
-
-```json
-{
- "env": {
- "processor": {
- "reward_classifier": {
- "pretrained_path": "path_to_your_pretrained_model",
- "success_threshold": 0.7,
- "success_reward": 1.0
- },
- "reset": {
- "terminate_on_success": true
- }
- }
- }
-}
-```
-
-Run `gym_manipulator.py` to test the model.
-
-```bash
-python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
-```
-
-The reward classifier will automatically provide rewards based on the visual input from the robot's cameras.
-
-**Example Workflow for training the reward classifier**
-
-1. **Create the configuration files**:
- Create the necessary json configuration files for the reward classifier and the environment. Check the examples [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/reward_classifier/config.json).
-
-2. **Collect a dataset**:
-
- ```bash
- python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
- ```
-
-3. **Train the classifier**:
-
- ```bash
- lerobot-train --config_path src/lerobot/configs/reward_classifier_train_config.json
- ```
-
-4. **Test the classifier**:
- ```bash
- python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
- ```
-
-### Training with Actor-Learner
-
-The LeRobot system uses a distributed actor-learner architecture for training. This architecture decouples robot interactions from the learning process, allowing them to run concurrently without blocking each other. The actor server handles robot observations and actions, sending interaction data to the learner server. The learner server performs gradient descent and periodically updates the actor's policy weights. You will need to start two processes: a learner and an actor.
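-
-As a mental model of this decoupling, here is a minimal single-process sketch using a thread and a queue; the real implementation runs two separate processes that communicate over gRPC:
-
-```python
-import queue
-import threading
-import time
-
-transitions: queue.Queue = queue.Queue()
-policy_version = 0
-
-
-def actor():
-    """Collects rollouts at the robot's own pace and ships transitions."""
-    for step in range(100):
-        transitions.put({"obs": step, "action": step % 3})
-        time.sleep(0.01)  # stands in for real robot interaction time
-
-
-def learner():
-    """Consumes transitions and performs gradient updates concurrently."""
-    global policy_version
-    while (batch := transitions.get()) is not None:
-        # a gradient step on `batch` would go here
-        policy_version += 1  # updated weights are periodically pushed back
-
-
-t = threading.Thread(target=learner)
-t.start()
-actor()
-transitions.put(None)  # signal the learner to stop
-t.join()
-print(f"learner performed {policy_version} updates")
-```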
-
-**Configuration Setup**
-
-Create a training configuration file (example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/train_config.json)). The training config is based on the main `TrainRLServerPipelineConfig` class in `lerobot/configs/train.py`.
-
-1. Configure the policy settings (`type="sac"`, `device`, etc.)
-2. Set `dataset` to your cropped dataset
-3. Configure environment settings with crop parameters
-4. Check the other parameters related to SAC in [configuration_sac.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/sac/configuration_sac.py#L79).
-5. Verify that the `policy` config is correct with the right `input_features` and `output_features` for your task.
-
-**Starting the Learner**
-
-First, start the learner server process:
-
-```bash
-python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
-```
-
-The learner:
-
-- Initializes the policy network
-- Prepares replay buffers
-- Opens a `gRPC` server to communicate with actors
-- Processes transitions and updates the policy
-
-**Starting the Actor**
-
-In a separate terminal, start the actor process with the same configuration:
-
-```bash
-python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
-```
-
-The actor:
-
-- Connects to the learner via `gRPC`
-- Initializes the environment
-- Executes rollouts of the policy to collect experience
-- Sends transitions to the learner
-- Receives updated policy parameters
-
-**Training Flow**
-
-The training proceeds automatically:
-
-1. The actor executes the policy in the environment
-2. Transitions are collected and sent to the learner
-3. The learner updates the policy based on these transitions
-4. Updated policy parameters are sent back to the actor
-5. The process continues until the specified step limit is reached
-
-**Human in the Loop**
-
-- The key to learning efficiently is human intervention: providing corrective feedback and completing the task helps guide the policy's learning and exploration.
-- To perform human interventions, you can press the upper right trigger button on the gamepad (or the `space` key on the keyboard). This will pause the policy actions and allow you to take over.
-- A successful experiment is one where the human has to intervene at the start but then reduces the amount of interventions as the policy improves. You can monitor the intervention rate in the `wandb` dashboard.
-
-*Figure: Example showing how human interventions help guide policy learning over time*
-
-- The figure plots the episodic reward over interaction steps, showing the effect of human interventions on policy learning.
-- The orange curve is an experiment without any human interventions, while the pink and blue curves are experiments with human interventions.
-- We can observe that the number of steps needed for the policy to start achieving the maximum reward is cut by a quarter when human interventions are present.
-
-**Monitoring and Debugging**
-
-If you have `wandb.enable` set to `true` in your configuration, you can monitor training progress in real-time through the [Weights & Biases](https://wandb.ai/site/) dashboard.
-
-### Guide to Human Interventions
-
-The learning process is very sensitive to the intervention strategy. It will take a few runs to understand how to intervene effectively. Some tips and hints:
-
-- Allow the policy to explore for a few episodes at the start of training.
-- Avoid intervening for long periods of time. Try to intervene in situations where the robot's behaviour needs correcting because it has gone off track.
-- Once the policy starts achieving the task, even if it's not perfect, you can limit your interventions to quick, simple actions, like a single grasping command.
-
-The ideal behaviour is that your intervention rate should drop gradually during training as shown in the figure below.
-
-*Figure: Plot of the intervention rate during a training run on a pick-and-lift cube task*
-
-### Key hyperparameters to tune
-
-Some configuration values have a disproportionate impact on training stability and speed (a config sketch follows the list):
-
-- **`temperature_init`** (`policy.temperature_init`) – initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is `1e-2`. We observed that setting it too high can make human interventions ineffective and slow down learning.
-- **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) – interval in _seconds_ between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
-- **`storage_device`** (`policy.storage_device`) – device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to `"cuda"` (instead of the default `"cpu"`). Keeping the weights on-GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
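-
-To make the dotted paths concrete, here is a hypothetical sketch of these overrides as a Python dict mirroring the JSON config; verify the exact nesting against `TrainRLServerPipelineConfig` in your version:
-
-```python
-# Hypothetical config excerpt mirroring the dotted paths above
-policy_overrides = {
-    "policy": {
-        "temperature_init": 1e-2,
-        "storage_device": "cuda",
-        "actor_learner_config": {"policy_parameters_push_frequency": 2},
-    }
-}
-```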
-
-Congrats 🎉, you have finished this tutorial!
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
-
-Paper citation:
-
-```
-@article{luo2024precise,
- title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
- author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
- journal={arXiv preprint arXiv:2410.21845},
- year={2024}
-}
-```
diff --git a/lerobot/docs/source/hilserl_sim.mdx b/lerobot/docs/source/hilserl_sim.mdx
deleted file mode 100644
index 3af01f55a333e8e28468d0757bf1c1c658cbb659..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/hilserl_sim.mdx
+++ /dev/null
@@ -1,154 +0,0 @@
-# Train RL in Simulation
-
-This guide explains how to use the `gym_hil` simulation environments as an alternative to real robots when working with the LeRobot framework for Human-In-the-Loop (HIL) reinforcement learning.
-
-`gym_hil` is a package that provides Gymnasium-compatible simulation environments specifically designed for Human-In-the-Loop reinforcement learning. These environments allow you to:
-
-- Train policies in simulation to test the RL stack before training on real robots
-- Collect demonstrations in sim using external devices like gamepads or keyboards
-- Perform human interventions during policy learning
-
-Currently, the main environment is a Franka Panda robot simulation based on MuJoCo, with tasks like picking up a cube.
-
-## Installation
-
-First, install the `gym_hil` package within the LeRobot environment:
-
-```bash
-pip install -e ".[hilserl]"
-```
-
-## What do I need?
-
-- A gamepad or keyboard to control the robot
-- A Nvidia GPU
-
-## Configuration
-
-To use `gym_hil` with LeRobot, you need to create a configuration file. An example is provided [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/env_config.json). Key configuration sections include:
-
-### Environment Type and Task
-
-```json
-{
- "env": {
- "type": "gym_manipulator",
- "name": "gym_hil",
- "task": "PandaPickCubeGamepad-v0",
- "fps": 10
- },
- "device": "cuda"
-}
-```
-
-Available tasks (a quick random-action smoke test follows the list):
-
-- `PandaPickCubeBase-v0`: Basic environment
-- `PandaPickCubeGamepad-v0`: With gamepad control
-- `PandaPickCubeKeyboard-v0`: With keyboard control
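-
-To sanity-check your install before wiring up LeRobot, you can drive one of these environments with random actions. The environment ID format below is an assumption based on the task names; check `gym_hil`'s registry if it differs:
-
-```python
-import gymnasium as gym
-import gym_hil  # noqa: F401  (importing registers the environments)
-
-env = gym.make("gym_hil/PandaPickCubeBase-v0")  # assumed ID format
-obs, info = env.reset()
-for _ in range(100):
-    action = env.action_space.sample()  # random exploration
-    obs, reward, terminated, truncated, info = env.step(action)
-    if terminated or truncated:
-        obs, info = env.reset()
-env.close()
-```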
-
-### Processor Configuration
-
-```json
-{
- "env": {
- "processor": {
- "control_mode": "gamepad",
- "gripper": {
- "use_gripper": true,
- "gripper_penalty": -0.02
- },
- "reset": {
- "control_time_s": 15.0,
- "fixed_reset_joint_positions": [
- 0.0, 0.195, 0.0, -2.43, 0.0, 2.62, 0.785
- ]
- },
- "inverse_kinematics": {
- "end_effector_step_sizes": {
- "x": 0.025,
- "y": 0.025,
- "z": 0.025
- }
- }
- }
- }
-}
-```
-
-Important parameters:
-
-- `gripper.gripper_penalty`: Penalty for excessive gripper movement
-- `gripper.use_gripper`: Whether to enable gripper control
-- `inverse_kinematics.end_effector_step_sizes`: Size of the steps in the x,y,z axes of the end-effector
-- `control_mode`: Set to `"gamepad"` to use a gamepad controller
-
-## Running with HIL RL of LeRobot
-
-### Basic Usage
-
-To run the environment, set `mode` to `null`:
-
-```bash
-python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
-```
-
-### Recording a Dataset
-
-To collect a dataset, set the mode to `record` while defining the `repo_id` and the number of episodes to record:
-
-```json
-{
- "env": {
- "type": "gym_manipulator",
- "name": "gym_hil",
- "task": "PandaPickCubeGamepad-v0"
- },
- "dataset": {
- "repo_id": "username/sim_dataset",
- "root": null,
- "task": "pick_cube",
- "num_episodes_to_record": 10,
- "replay_episode": null,
- "push_to_hub": true
- },
- "mode": "record"
-}
-```
-
-```bash
-python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
-```
-
-### Training a Policy
-
-To train a policy, check out the configuration example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/train_config.json) and run the actor and learner servers:
-
-```bash
-python -m lerobot.rl.actor --config_path path/to/train_gym_hil_env.json
-```
-
-In a different terminal, run the learner server:
-
-```bash
-python -m lerobot.rl.learner --config_path path/to/train_gym_hil_env.json
-```
-
-The simulation environment provides a safe and repeatable way to develop and test your Human-In-the-Loop reinforcement learning components before deploying to real robots.
-
-Congrats 🎉, you have finished this tutorial!
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
-
-Paper citation:
-
-```
-@article{luo2024precise,
- title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
- author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
- journal={arXiv preprint arXiv:2410.21845},
- year={2024}
-}
-```
diff --git a/lerobot/docs/source/hope_jr.mdx b/lerobot/docs/source/hope_jr.mdx
deleted file mode 100644
index 91e4e608d3e05c4a885d5aacbd694882a4de8ba8..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/hope_jr.mdx
+++ /dev/null
@@ -1,277 +0,0 @@
-# HopeJR
-
-## Prerequisites
-
-- [Hardware Setup](https://github.com/TheRobotStudio/HOPEJr)
-
-## Install LeRobot
-
-Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.
-
-Install LeRobot with HopeJR dependencies:
-
-```bash
-pip install -e ".[hopejr]"
-```
-
-## Device Configuration
-
-Before starting calibration and operation, you need to identify the USB ports for each HopeJR component. Run this script to find the USB ports for the arm, hand, glove, and exoskeleton:
-
-```bash
-lerobot-find-port
-```
-
-This will display the available USB ports and their associated devices. Make note of the port paths (e.g., `/dev/tty.usbmodem58760433331`, `/dev/tty.usbmodem11301`) as you'll need to specify them in the `--robot.port` and `--teleop.port` parameters when recording data, replaying episodes, or running teleoperation scripts.
-
-## Step 1: Calibration
-
-Before performing teleoperation, HopeJR's limbs need to be calibrated. Calibration files will be saved in `~/.cache/huggingface/lerobot/calibration`.
-
-### 1.1 Calibrate Robot Hand
-
-```bash
-lerobot-calibrate \
- --robot.type=hope_jr_hand \
- --robot.port=/dev/tty.usbmodem58760432281 \
- --robot.id=blue \
- --robot.side=right
-```
-
-When running the calibration script, a calibration GUI will pop up. Finger joints are named as follows:
-
-**Thumb**:
-
-- **CMC**: base joint connecting thumb to hand
-- **MCP**: knuckle joint
-- **PIP**: first finger joint
-- **DIP**: fingertip joint
-
-**Index, Middle, Ring, and Pinky fingers**:
-
-- **Radial flexor**: Moves base of finger towards the thumb
-- **Ulnar flexor**: Moves base of finger towards the pinky
-- **PIP/DIP**: Flexes the distal and proximal phalanx of the finger
-
-Each one of these will need to be calibrated individually via the GUI.
-Note that ulnar and radial flexors should have ranges of the same size (but with different offsets) in order to get symmetric movement.
-
-Use the calibration interface to set the range boundaries for each joint.
-
-Once you have set the appropriate boundaries for all joints, click "Save" to save the calibration values to the motors.
-
-### 1.2 Calibrate Teleoperator Glove
-
-```bash
-lerobot-calibrate \
- --teleop.type=homunculus_glove \
- --teleop.port=/dev/tty.usbmodem11201 \
- --teleop.id=red \
- --teleop.side=right
-```
-
-Move each finger through its full range of motion, starting from the thumb.
-
-```
-Move thumb through its entire range of motion.
-Recording positions. Press ENTER to stop...
-
--------------------------------------------
-NAME | MIN | POS | MAX
-thumb_cmc | 1790 | 1831 | 1853
-thumb_mcp | 1497 | 1514 | 1528
-thumb_pip | 1466 | 1496 | 1515
-thumb_dip | 1463 | 1484 | 1514
-```
-
-Continue with each finger:
-
-```
-Move middle through its entire range of motion.
-Recording positions. Press ENTER to stop...
-
--------------------------------------------
-NAME | MIN | POS | MAX
-middle_mcp_abduction | 1598 | 1718 | 1820
-middle_mcp_flexion | 1512 | 1658 | 2136
-middle_dip | 1484 | 1500 | 1547
-```
-
-Once calibration is complete, the system will save the calibration to `/Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_glove/red.json`
-
-### 1.3 Calibrate Robot Arm
-
-```bash
-lerobot-calibrate \
- --robot.type=hope_jr_arm \
- --robot.port=/dev/tty.usbserial-1110 \
- --robot.id=white
-```
-
-This will open a calibration GUI where you can set the range limits for each motor. The arm motions are organized as follows:
-
-- **Shoulder**: pitch, yaw, and roll
-- **Elbow**: flex
-- **Wrist**: pitch, yaw, and roll
-
-
-Use the calibration interface to set the range boundaries for each joint. Move each joint through its full range of motion and adjust the minimum and maximum values accordingly. Once you have set the appropriate boundaries for all joints, save the calibration.
-
-### 1.4 Calibrate Teleoperator Exoskeleton
-
-```bash
-lerobot-calibrate \
- --teleop.type=homunculus_arm \
- --teleop.port=/dev/tty.usbmodem11201 \
- --teleop.id=black
-```
-
-The exoskeleton allows one to control the robot arm. During calibration, you'll be prompted to move all joints through their full range of motion:
-
-```
-Move all joints through their entire range of motion.
-Recording positions. Press ENTER to stop...
-
--------------------------------------------
-NAME | MIN | POS | MAX
-shoulder_pitch | 586 | 736 | 895
-shoulder_yaw | 1257 | 1374 | 1390
-shoulder_roll | 449 | 1034 | 2564
-elbow_flex | 3023 | 3117 | 3134
-wrist_roll | 3073 | 3096 | 3147
-wrist_yaw | 2143 | 2171 | 2185
-wrist_pitch | 1975 | 1993 | 2074
-Calibration saved to /Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_arm/black.json
-```
-
-## Step 2: Teleoperation
-
-Due to global variable conflicts in the Feetech middleware, teleoperation for arm and hand must run in separate shell sessions:
-
-### Hand
-
-```bash
-lerobot-teleoperate \
- --robot.type=hope_jr_hand \
- --robot.port=/dev/tty.usbmodem58760432281 \
- --robot.id=blue \
- --robot.side=right \
- --teleop.type=homunculus_glove \
- --teleop.port=/dev/tty.usbmodem11201 \
- --teleop.id=red \
- --teleop.side=right \
- --display_data=true \
- --fps=30
-```
-
-### Arm
-
-```bash
-lerobot-teleoperate \
- --robot.type=hope_jr_arm \
- --robot.port=/dev/tty.usbserial-1110 \
- --robot.id=white \
- --teleop.type=homunculus_arm \
- --teleop.port=/dev/tty.usbmodem11201 \
- --teleop.id=black \
- --display_data=true \
- --fps=30
-```
-
-## Step 3: Record, Replay, Train
-
-Record, Replay, and Train with HopeJR are still experimental.
-
-### Record
-
-This step records a dataset; an example can be seen [here](https://huggingface.co/datasets/nepyope/hand_record_test_with_video_data/settings).
-
-```bash
-lerobot-record \
- --robot.type=hope_jr_hand \
- --robot.port=/dev/tty.usbmodem58760432281 \
- --robot.id=right \
- --robot.side=right \
- --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
- --teleop.type=homunculus_glove \
- --teleop.port=/dev/tty.usbmodem1201 \
- --teleop.id=right \
- --teleop.side=right \
- --dataset.repo_id=nepyope/hand_record_test_with_video_data \
- --dataset.single_task="Hand recording test with video data" \
- --dataset.num_episodes=1 \
- --dataset.episode_time_s=5 \
- --dataset.push_to_hub=true \
- --dataset.private=true \
- --display_data=true
-```
-
-### Replay
-
-```bash
-lerobot-replay \
- --robot.type=hope_jr_hand \
- --robot.port=/dev/tty.usbmodem58760432281 \
- --robot.id=right \
- --robot.side=right \
- --dataset.repo_id=nepyope/hand_record_test_with_camera \
- --dataset.episode=0
-```
-
-### Train
-
-```bash
-lerobot-train \
- --dataset.repo_id=nepyope/hand_record_test_with_video_data \
- --policy.type=act \
- --output_dir=outputs/train/hopejr_hand \
- --job_name=hopejr \
- --policy.device=mps \
- --wandb.enable=true \
- --policy.repo_id=nepyope/hand_test_policy
-```
-
-### Evaluate
-
-An example training run can be viewed [here](https://wandb.ai/tino/lerobot/runs/rp0k8zvw?nw=nwusertino).
-
-```bash
-lerobot-record \
- --robot.type=hope_jr_hand \
- --robot.port=/dev/tty.usbmodem58760432281 \
- --robot.id=right \
- --robot.side=right \
- --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
- --display_data=false \
- --dataset.repo_id=nepyope/eval_hopejr \
- --dataset.single_task="Evaluate hopejr hand policy" \
- --dataset.num_episodes=10 \
- --policy.path=outputs/train/hopejr_hand/checkpoints/last/pretrained_model
-```
diff --git a/lerobot/docs/source/il_robots.mdx b/lerobot/docs/source/il_robots.mdx
deleted file mode 100644
index c977616f6f84bb9a491d849c0e3d54a7531221ae..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/il_robots.mdx
+++ /dev/null
@@ -1,620 +0,0 @@
-# Imitation Learning on Real-World Robots
-
-This tutorial will explain how to train a neural network to control a real robot autonomously.
-
-**You'll learn:**
-
-1. How to record and visualize your dataset.
-2. How to train a policy using your data and prepare it for evaluation.
-3. How to evaluate your policy and visualize the results.
-
-By following these steps, you'll be able to replicate tasks such as picking up a Lego block and placing it in a bin with a high success rate, as shown in the video below.
-
-_Video: pick up Lego block task_
-
-This tutorial isn’t tied to a specific robot: we walk you through the commands and API snippets you can adapt for any supported platform.
-
-During data collection, you’ll use a teleoperation device, such as a leader arm or keyboard, to teleoperate the robot and record its motion trajectories.
-
-Once you’ve gathered enough trajectories, you’ll train a neural network to imitate these trajectories and deploy the trained model so your robot can perform the task autonomously.
-
-If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.
-
-## Set up and Calibrate
-
-If you haven't yet set up and calibrated your robot and teleop device, please do so by following the robot-specific tutorial.
-
-## Teleoperate
-
-In this example, we’ll demonstrate how to teleoperate the SO101 robot. For each command, we also provide a corresponding API example.
-
-Note that the `id` associated with a robot is used to store the calibration file. It's important to use the same `id` when teleoperating, recording, and evaluating when using the same setup.
-
-
-
-```bash
-lerobot-teleoperate \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.id=my_awesome_follower_arm \
- --teleop.type=so101_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=my_awesome_leader_arm
-```
-
-
-
-
-```python
-from lerobot.teleoperators.so_leader import SO101LeaderConfig, SO101Leader
-from lerobot.robots.so_follower import SO101FollowerConfig, SO101Follower
-
-robot_config = SO101FollowerConfig(
- port="/dev/tty.usbmodem58760431541",
- id="my_red_robot_arm",
-)
-
-teleop_config = SO101LeaderConfig(
- port="/dev/tty.usbmodem58760431551",
- id="my_blue_leader_arm",
-)
-
-robot = SO101Follower(robot_config)
-teleop_device = SO101Leader(teleop_config)
-robot.connect()
-teleop_device.connect()
-
-while True:
-    action = teleop_device.get_action()  # read joint positions from the leader arm
-    robot.send_action(action)  # mirror them on the follower arm
-```
-
-
-
-
-
-The teleoperate command will automatically:
-
-1. Identify any missing calibrations and initiate the calibration procedure.
-2. Connect the robot and teleop device and start teleoperation.
-
-## Cameras
-
-To add cameras to your setup, follow this [Guide](./cameras#setup-cameras).
-
-## Teleoperate with cameras
-
-With `rerun`, you can teleoperate again while simultaneously visualizing the camera feeds and joint positions. In this example, we’re using the Koch arm.
-
-
-
-```bash
-lerobot-teleoperate \
- --robot.type=koch_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.id=my_awesome_follower_arm \
- --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
- --teleop.type=koch_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=my_awesome_leader_arm \
- --display_data=true
-```
-
-
-
-
-```python
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
-from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower
-
-camera_config = {
- "front": OpenCVCameraConfig(index_or_path=0, width=1920, height=1080, fps=30)
-}
-
-robot_config = KochFollowerConfig(
- port="/dev/tty.usbmodem585A0076841",
- id="my_red_robot_arm",
- cameras=camera_config
-)
-
-teleop_config = KochLeaderConfig(
- port="/dev/tty.usbmodem58760431551",
- id="my_blue_leader_arm",
-)
-
-robot = KochFollower(robot_config)
-teleop_device = KochLeader(teleop_config)
-robot.connect()
-teleop_device.connect()
-
-while True:
-    observation = robot.get_observation()  # read camera frames and joint positions (e.g. for visualization)
- action = teleop_device.get_action()
- robot.send_action(action)
-```
-
-
-
-
-
-## Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset.
-
-We use the Hugging Face Hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
-
-Add your token to the CLI by running this command:
-
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Then store your Hugging Face repository name in a variable:
-
-```bash
-HF_USER=$(hf auth whoami | head -n 1)
-echo $HF_USER
-```
-
-Now you can record a dataset. To record 5 episodes and upload your dataset to the hub, adapt the code below for your robot and execute the command or API example.
-
-
-
-```bash
-lerobot-record \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem585A0076841 \
- --robot.id=my_awesome_follower_arm \
- --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
- --teleop.type=so101_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=my_awesome_leader_arm \
- --display_data=true \
- --dataset.repo_id=${HF_USER}/record-test \
- --dataset.num_episodes=5 \
- --dataset.single_task="Grab the black cube"
-```
-
-
-
-
-```python
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.datasets.utils import hw_to_dataset_features
-from lerobot.robots.so_follower import SO100Follower, SO100FollowerConfig
-from lerobot.teleoperators.so_leader.config_so100_leader import SO100LeaderConfig
-from lerobot.teleoperators.so_leader.so100_leader import SO100Leader
-from lerobot.utils.control_utils import init_keyboard_listener
-from lerobot.utils.utils import log_say
-from lerobot.utils.visualization_utils import init_rerun
-from lerobot.scripts.lerobot_record import record_loop
-from lerobot.processor import make_default_processors
-
-NUM_EPISODES = 5
-FPS = 30
-EPISODE_TIME_SEC = 60
-RESET_TIME_SEC = 10
-TASK_DESCRIPTION = "My task description"
-
-# Create robot configuration
-robot_config = SO100FollowerConfig(
- id="my_awesome_follower_arm",
- cameras={
- "front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS) # Optional: fourcc="MJPG" for troubleshooting OpenCV async error.
- },
- port="/dev/tty.usbmodem58760434471",
-)
-
-teleop_config = SO100LeaderConfig(
- id="my_awesome_leader_arm",
- port="/dev/tty.usbmodem585A0077581",
-)
-
-# Initialize the robot and teleoperator
-robot = SO100Follower(robot_config)
-teleop = SO100Leader(teleop_config)
-
-# Configure the dataset features
-action_features = hw_to_dataset_features(robot.action_features, "action")
-obs_features = hw_to_dataset_features(robot.observation_features, "observation")
-dataset_features = {**action_features, **obs_features}
-
-# Create the dataset
-dataset = LeRobotDataset.create(
- repo_id="/",
- fps=FPS,
- features=dataset_features,
- robot_type=robot.name,
- use_videos=True,
- image_writer_threads=4,
-)
-
-# Initialize the keyboard listener and rerun visualization
-_, events = init_keyboard_listener()
-init_rerun(session_name="recording")
-
-# Connect the robot and teleoperator
-robot.connect()
-teleop.connect()
-
-# Create the required processors
-teleop_action_processor, robot_action_processor, robot_observation_processor = make_default_processors()
-
-episode_idx = 0
-while episode_idx < NUM_EPISODES and not events["stop_recording"]:
- log_say(f"Recording episode {episode_idx + 1} of {NUM_EPISODES}")
-
- record_loop(
- robot=robot,
- events=events,
- fps=FPS,
- teleop_action_processor=teleop_action_processor,
- robot_action_processor=robot_action_processor,
- robot_observation_processor=robot_observation_processor,
- teleop=teleop,
- dataset=dataset,
- control_time_s=EPISODE_TIME_SEC,
- single_task=TASK_DESCRIPTION,
- display_data=True,
- )
-
- # Reset the environment if not stopping or re-recording
- if not events["stop_recording"] and (episode_idx < NUM_EPISODES - 1 or events["rerecord_episode"]):
- log_say("Reset the environment")
- record_loop(
- robot=robot,
- events=events,
- fps=FPS,
- teleop_action_processor=teleop_action_processor,
- robot_action_processor=robot_action_processor,
- robot_observation_processor=robot_observation_processor,
- teleop=teleop,
- control_time_s=RESET_TIME_SEC,
- single_task=TASK_DESCRIPTION,
- display_data=True,
- )
-
- if events["rerecord_episode"]:
- log_say("Re-recording episode")
- events["rerecord_episode"] = False
- events["exit_early"] = False
- dataset.clear_episode_buffer()
- continue
-
- dataset.save_episode()
- episode_idx += 1
-
-# Clean up
-log_say("Stop recording")
-robot.disconnect()
-teleop.disconnect()
-dataset.push_to_hub()
-```
-
-
-
-
-
-#### Dataset upload
-
-Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. `https://huggingface.co/datasets/${HF_USER}/so101_test`), which you can obtain by running:
-
-```bash
-echo https://huggingface.co/datasets/${HF_USER}/so101_test
-```
-
-Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (for example, `tutorial`).
-
-You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
-
-You can also push your local dataset to the Hub manually, running:
-
-```bash
-huggingface-cli upload ${HF_USER}/record-test ~/.cache/huggingface/lerobot/{repo-id} --repo-type dataset
-```
-
-#### Record function
-
-The `record` function provides a suite of tools for capturing and managing data during robot operation:
-
-##### 1. Data Storage
-
-- Data is saved to disk during recording using the `LeRobotDataset` format (see the loading sketch after this list).
-- By default, the dataset is pushed to your Hugging Face page after recording.
- - To disable uploading, use `--dataset.push_to_hub=False`.
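-
-Once recorded, the dataset can be loaded back for inspection. A minimal sketch, assuming the `record-test` dataset from above (the attributes used here also appear in the replay example later in this tutorial):
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-# Loads from the local cache (~/.cache/huggingface/lerobot) or the Hub
-dataset = LeRobotDataset("<hf_user>/record-test")  # replace <hf_user> with your username
-
-print(dataset.num_frames)  # total number of recorded frames
-print(dataset.fps)  # recording frequency
-print(dataset.features["action"]["names"])  # motor names that make up the action
-```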
-
-##### 2. Checkpointing and Resuming
-
-- Checkpoints are automatically created during recording.
-- If an issue occurs, you can resume by re-running the same command with `--resume=true` (see the sketch after this list). When resuming a recording, `--dataset.num_episodes` must be set to the **number of additional episodes to be recorded**, not to the targeted total number of episodes in the dataset!
-- To start recording from scratch, **manually delete** the dataset directory.
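-
-For example, to append five more episodes to the `record-test` dataset recorded above, re-run the same command with the resume flag (a sketch; ports and ids must match your setup):
-
-```bash
-lerobot-record \
-    --robot.type=so101_follower \
-    --robot.port=/dev/tty.usbmodem585A0076841 \
-    --robot.id=my_awesome_follower_arm \
-    --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
-    --teleop.type=so101_leader \
-    --teleop.port=/dev/tty.usbmodem58760431551 \
-    --teleop.id=my_awesome_leader_arm \
-    --dataset.repo_id=${HF_USER}/record-test \
-    --dataset.num_episodes=5 \
-    --dataset.single_task="Grab the black cube" \
-    --resume=true
-```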
-
-##### 3. Recording Parameters
-
-Set the flow of data recording using command-line arguments:
-
-- `--dataset.episode_time_s=60`
- Duration of each data recording episode (default: **60 seconds**).
-- `--dataset.reset_time_s=60`
- Duration for resetting the environment after each episode (default: **60 seconds**).
-- `--dataset.num_episodes=50`
- Total number of episodes to record (default: **50**).
-
-##### 4. Keyboard Controls During Recording
-
-Control the data recording flow using keyboard shortcuts:
-
-- Press **Right Arrow (`→`)**: End the current episode (or the reset period) early and move on to the next.
-- Press **Left Arrow (`←`)**: Cancel the current episode and re-record it.
-- Press **Escape (`ESC`)**: Immediately stop the session, encode videos, and upload the dataset.
-
-#### Tips for gathering data
-
-Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible in the cameras. A good rule of thumb is that you should be able to do the task yourself by looking only at the camera images.
-
-In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
-
-Avoid adding too much variation too quickly, as it may hinder your results.
-
-If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
-
-#### Troubleshooting
-
-- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
-
-## Visualize a dataset
-
-If you uploaded your dataset to the hub with `--dataset.push_to_hub=true` (the default), you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, given by:
-
-```bash
-echo ${HF_USER}/so101_test
-```
-
-## Replay an episode
-
-A useful feature is the `replay` function, which allows you to replay any episode that you've recorded or episodes from any dataset out there. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.
-
-You can replay the first episode on your robot with either the command below or with the API example:
-
-
-
-```bash
-lerobot-replay \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.id=my_awesome_follower_arm \
- --dataset.repo_id=${HF_USER}/record-test \
- --dataset.episode=0 # choose the episode you want to replay
-```
-
-
-
-
-```python
-import time
-
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.robots.so_follower.config_so100_follower import SO100FollowerConfig
-from lerobot.robots.so_follower.so100_follower import SO100Follower
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.utils import log_say
-
-episode_idx = 0
-
-robot_config = SO100FollowerConfig(port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm")
-
-robot = SO100Follower(robot_config)
-robot.connect()
-
-dataset = LeRobotDataset("/", episodes=[episode_idx])
-actions = dataset.hf_dataset.select_columns("action")
-
-log_say(f"Replaying episode {episode_idx}")
-for idx in range(dataset.num_frames):
- t0 = time.perf_counter()
-
- action = {
- name: float(actions[idx]["action"][i]) for i, name in enumerate(dataset.features["action"]["names"])
- }
- robot.send_action(action)
-
- precise_sleep(max(1.0 / dataset.fps - (time.perf_counter() - t0), 0.0))
-
-robot.disconnect()
-```
-
-
-
-
-
-Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on an Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).
-
-## Train a policy
-
-To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/lerobot_train.py) script. A few arguments are required. Here is an example command:
-
-```bash
-lerobot-train \
- --dataset.repo_id=${HF_USER}/so101_test \
- --policy.type=act \
- --output_dir=outputs/train/act_so101_test \
- --job_name=act_so101_test \
- --policy.device=cuda \
- --wandb.enable=true \
- --policy.repo_id=${HF_USER}/my_policy
-```
-
-Let's explain the command:
-
-1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
-2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
-3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
-4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
-
-Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.
-
-To resume training from a checkpoint, below is an example command to resume from the `last` checkpoint of the `act_so101_test` policy:
-
-```bash
-lerobot-train \
- --config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
- --resume=true
-```
-
-If you do not want to push your model to the hub after training, use `--policy.push_to_hub=false`.
-
-Additionally, you can provide extra `tags`, specify a `license`, or make the model repo private by adding, for example: `--policy.private=true --policy.tags=\[ppo,rl\] --policy.license=mit`. A combined command is sketched below.
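-
-A sketch of a full training command combining these options (tag values are illustrative):
-
-```bash
-lerobot-train \
-    --dataset.repo_id=${HF_USER}/so101_test \
-    --policy.type=act \
-    --output_dir=outputs/train/act_so101_test \
-    --job_name=act_so101_test \
-    --policy.device=cuda \
-    --policy.repo_id=${HF_USER}/my_policy \
-    --policy.private=true \
-    --policy.tags=\[act,tutorial\] \
-    --policy.license=mit
-```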
-
-#### Train using Google Colab
-
-If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
-
-#### Upload policy checkpoints
-
-Once training is done, upload the latest checkpoint with:
-
-```bash
-huggingface-cli upload ${HF_USER}/act_so101_test \
- outputs/train/act_so101_test/checkpoints/last/pretrained_model
-```
-
-You can also upload intermediate checkpoints with:
-
-```bash
-CKPT=010000
-huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
- outputs/train/act_so101_test/checkpoints/${CKPT}/pretrained_model
-```
-
-## Run inference and evaluate your policy
-
-You can use the `record` script from [`lerobot-record`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/lerobot_record.py) with a policy checkpoint as input, to run inference and evaluate your policy. For instance, run this command or API example to run inference and record 10 evaluation episodes:
-
-
-
-```bash
-lerobot-record \
- --robot.type=so100_follower \
- --robot.port=/dev/ttyACM1 \
- --robot.cameras="{ up: {type: opencv, index_or_path: /dev/video10, width: 640, height: 480, fps: 30}, side: {type: intelrealsense, serial_number_or_name: 233522074606, width: 640, height: 480, fps: 30}}" \
- --robot.id=my_awesome_follower_arm \
- --display_data=false \
- --dataset.repo_id=${HF_USER}/eval_so100 \
- --dataset.single_task="Put lego brick into the transparent box" \
- # <- Teleop optional if you want to teleoperate in between episodes \
- # --teleop.type=so100_leader \
- # --teleop.port=/dev/ttyACM0 \
- # --teleop.id=my_awesome_leader_arm \
- --policy.path=${HF_USER}/my_policy
-```
-
-
-
-
-```python
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.datasets.utils import hw_to_dataset_features
-from lerobot.policies.act.modeling_act import ACTPolicy
-from lerobot.policies.factory import make_pre_post_processors
-from lerobot.robots.so_follower.config_so100_follower import SO100FollowerConfig
-from lerobot.robots.so_follower.so100_follower import SO100Follower
-from lerobot.scripts.lerobot_record import record_loop
-from lerobot.utils.control_utils import init_keyboard_listener
-from lerobot.utils.utils import log_say
-from lerobot.utils.visualization_utils import init_rerun
-
-
-NUM_EPISODES = 5
-FPS = 30
-EPISODE_TIME_SEC = 60
-TASK_DESCRIPTION = "My task description"
-HF_MODEL_ID = "/"
-HF_DATASET_ID = "/"
-
-# Create the robot configuration
-camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
-robot_config = SO100FollowerConfig(
- port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm", cameras=camera_config
-)
-
-# Initialize the robot
-robot = SO100Follower(robot_config)
-
-# Initialize the policy
-policy = ACTPolicy.from_pretrained(HF_MODEL_ID)
-
-# Configure the dataset features
-action_features = hw_to_dataset_features(robot.action_features, "action")
-obs_features = hw_to_dataset_features(robot.observation_features, "observation")
-dataset_features = {**action_features, **obs_features}
-
-# Create the dataset
-dataset = LeRobotDataset.create(
- repo_id=HF_DATASET_ID,
- fps=FPS,
- features=dataset_features,
- robot_type=robot.name,
- use_videos=True,
- image_writer_threads=4,
-)
-
-# Initialize the keyboard listener and rerun visualization
-_, events = init_keyboard_listener()
-init_rerun(session_name="recording")
-
-# Connect the robot
-robot.connect()
-
-preprocessor, postprocessor = make_pre_post_processors(
- policy_cfg=policy,
- pretrained_path=HF_MODEL_ID,
- dataset_stats=dataset.meta.stats,
-)
-
-for episode_idx in range(NUM_EPISODES):
- log_say(f"Running inference, recording eval episode {episode_idx + 1} of {NUM_EPISODES}")
-
- # Run the policy inference loop
- record_loop(
- robot=robot,
- events=events,
- fps=FPS,
- policy=policy,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- dataset=dataset,
- control_time_s=EPISODE_TIME_SEC,
- single_task=TASK_DESCRIPTION,
- display_data=True,
- )
-
- dataset.save_episode()
-
-# Clean up
-robot.disconnect()
-dataset.push_to_hub()
-```
-
-
-
-
-
-As you can see, it's almost the same command as the one previously used to record your training dataset. Two things changed:
-
-1. There is an additional `--policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
-2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).
diff --git a/lerobot/docs/source/implement_your_own_processor.mdx b/lerobot/docs/source/implement_your_own_processor.mdx
deleted file mode 100644
index f59ff3f0bf3c5f383b894088f6e5e3da6b17df22..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/implement_your_own_processor.mdx
+++ /dev/null
@@ -1,273 +0,0 @@
-# Implement your own Robot Processor
-
-In this tutorial, you'll learn how to implement your own Robot Processor.
-It begins by exploring the need for a custom processor, then uses the `NormalizerProcessorStep` as the running example to explain how to implement, configure, and serialize a processor. Finally, it lists all helper processors that ship with LeRobot.
-
-## Why would you need a custom processor?
-
-In most cases, when reading raw data from sensors or when models output actions, you need to process this data to make it compatible with your target system. For example, a common need is normalizing data ranges to make them suitable for neural networks.
-
-LeRobot's `NormalizerProcessorStep` handles this crucial task:
-
-```python
-# Input: raw joint positions in [0, 180] degrees
-raw_action = torch.tensor([90.0, 45.0, 135.0])
-
-# After processing: normalized to [-1, 1] range for model training
-normalizer = NormalizerProcessorStep(features=features, norm_map=norm_map, stats=dataset_stats)
-normalized_result = normalizer(transition)
-# ...
-```
-
-Other common processing needs include:
-
-- **Device placement**: Moving tensors between CPU/GPU and converting data types
-- **Format conversion**: Transforming between different data structures
-- **Batching**: Adding/removing batch dimensions for model compatibility
-- **Safety constraints**: Applying limits to robot commands
-
-```python
-# Example pipeline combining multiple processors
-pipeline = PolicyProcessorPipeline([
- RenameObservationsProcessorStep(rename_map={}),
- AddBatchDimensionProcessorStep(),
- NormalizerProcessorStep(features=features, stats=stats),
- DeviceProcessorStep(device="cuda"),
- # ...
-])
-```
-
-LeRobot provides a pipeline mechanism to implement sequences of processing steps for both input data and output actions, making it easy to compose these transformations in the right order for optimal performance.
-
-## How to implement your own processor?
-
-We'll use the `NormalizerProcessorStep` as our main example because it demonstrates essential processor patterns including state management, configuration serialization, and tensor handling that you'll commonly need.
-
-Prepare the sequence of processing steps necessary for your problem. A processor step is a class that implements the following methods (a minimal skeleton is sketched after the list):
-
-- `__call__`: implements the processing step for the input transition.
-- `get_config`: gets the configuration of the processor step.
-- `state_dict`: gets the state of the processor step.
-- `load_state_dict`: loads the state of the processor step.
-- `reset`: resets the state of the processor step.
-- `transform_features`: declares the modification to the feature space made by the processor step.
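-
-A minimal skeleton of such a step, assuming `ProcessorStep` and `ProcessorStepRegistry` can be imported from `lerobot.processor` (check the module for the exact names); the no-op bodies are placeholders for your own logic:
-
-```python
-from dataclasses import dataclass
-from typing import Any
-
-import torch
-
-from lerobot.processor import ProcessorStep, ProcessorStepRegistry  # assumed import path
-
-
-@dataclass
-@ProcessorStepRegistry.register("my_org/identity_step")
-class IdentityStep(ProcessorStep):
-    """A no-op step showing the required interface."""
-
-    def __call__(self, transition):
-        return transition.copy()  # copy, then transform as needed
-
-    def get_config(self) -> dict[str, Any]:
-        return {}  # JSON-serializable parameters only
-
-    def state_dict(self) -> dict[str, torch.Tensor]:
-        return {}  # tensor state only
-
-    def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
-        pass  # no tensor state to restore
-
-    def reset(self) -> None:
-        pass  # no per-episode state to clear
-
-    def transform_features(self, features):
-        return features  # feature space unchanged
-```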
-
-### Implement the `__call__` method
-
-The `__call__` method is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`. Here's how the `NormalizerProcessorStep` works:
-
-```python
-@dataclass
-@ProcessorStepRegistry.register("normalizer_processor")
-class NormalizerProcessorStep(ProcessorStep):
- """Normalize observations/actions using dataset statistics."""
-
- features: dict[str, PolicyFeature]
- norm_map: dict[FeatureType, NormalizationMode]
- stats: dict[str, dict[str, Any]] | None = None
- eps: float = 1e-8
- _tensor_stats: dict = field(default_factory=dict, init=False, repr=False)
-
- def __post_init__(self):
- """Convert stats to tensors for efficient computation."""
- self.stats = self.stats or {}
- self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=torch.float32)
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- new_transition = transition.copy()
- # Normalize observations
- # ...
- # Normalize action
- # ...
- return new_transition
-```
-
-See the full implementation in `src/lerobot/processor/normalize_processor.py` for complete details.
-
-**Key principles:**
-
-- **Always use `transition.copy()`** to avoid side effects
-- **Handle both observations and actions** consistently
-- **Separate config from state**: `get_config()` returns JSON-serializable params, `state_dict()` returns tensors
-- **Convert stats to tensors** in `__post_init__()` for efficient computation
-
-### Configuration and State Management
-
-Processors support serialization through three methods that separate configuration from tensor state. The `NormalizerProcessorStep` demonstrates this well: it carries dataset statistics (tensors) in its state and hyperparameters in its config:
-
-```python
-# Continuing the NormalizerProcessorStep example...
-
-def get_config(self) -> dict[str, Any]:
- """JSON-serializable configuration (no tensors)."""
- return {
- "eps": self.eps,
- "features": {k: {"type": v.type.value, "shape": v.shape} for k, v in self.features.items()},
- "norm_map": {ft.value: nm.value for ft, nm in self.norm_map.items()},
- # ...
- }
-
-def state_dict(self) -> dict[str, torch.Tensor]:
- """Tensor state only (e.g., dataset statistics)."""
- flat: dict[str, torch.Tensor] = {}
- for key, sub in self._tensor_stats.items():
- for stat_name, tensor in sub.items():
- flat[f"{key}.{stat_name}"] = tensor.cpu() # Always save to CPU
- return flat
-
-def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
- """Restore tensor state at runtime."""
- self._tensor_stats.clear()
- for flat_key, tensor in state.items():
- key, stat_name = flat_key.rsplit(".", 1)
- # Load to processor's configured device
- self._tensor_stats.setdefault(key, {})[stat_name] = tensor.to(
- dtype=torch.float32, device=self.device
- )
- # ...
-```
-
-**Usage:**
-
-```python
-# Save (e.g., inside a policy)
-config = normalizer.get_config()
-tensors = normalizer.state_dict()
-
-# Restore (e.g., loading a pretrained policy)
-new_normalizer = NormalizerProcessorStep(**config)
-new_normalizer.load_state_dict(tensors)
-# Now new_normalizer has the same stats and configuration
-```
-
-### Transform features
-
-The `transform_features` method defines how your processor transforms feature names and shapes. This is crucial for policy configuration and debugging.
-
-For `NormalizerProcessorStep`, features are typically preserved unchanged since normalization doesn't alter keys or shapes:
-
-```python
-def transform_features(self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """Normalization preserves all feature definitions."""
- return features # No changes to feature structure
- # ...
-```
-
-When your processor renames or reshapes data, implement this method to reflect the mapping for downstream components. For example, a simple rename processor:
-
-```python
-def transform_features(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
- # Simple renaming
- if "pixels" in features:
- features["observation.image"] = features.pop("pixels")
-
- # Pattern-based renaming
- for key in list(features.keys()):
- if key.startswith("env_state."):
- suffix = key[len("env_state."):]
- features[f"observation.{suffix}"] = features.pop(key)
- # ...
-
- return features
-```
-
-**Key principles:**
-
-- Use `features.pop(old_key)` to remove and get the old feature
-- Use `features[new_key] = old_feature` to add the renamed feature
-- Always return the modified features dictionary
-- Document transformations clearly in the docstring
-
-### Using overrides
-
-You can override step parameters at load-time using `overrides`. This is handy for non-serializable objects or site-specific settings. It works both in policy factories and with `DataProcessorPipeline.from_pretrained(...)`.
-
-**Foundational model adaptation**: This is particularly useful when working with foundational pretrained policies where you rarely have access to the original training statistics. You can inject your own dataset statistics to adapt the normalizer to your specific robot or environment data.
-
-Example: during policy evaluation on the robot, override the device and rename map.
-Use this to run a policy trained on CUDA on a CPU-only robot, or to remap camera keys when the robot uses different names than the dataset.
-
-Direct usage with `from_pretrained`:
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.processor import RobotProcessorPipeline
-
-# Load a foundational policy trained on diverse robot data
-# but adapt normalization to your specific robot/environment
-new_stats = LeRobotDataset(repo_id="username/my-dataset").meta.stats
-processor = RobotProcessorPipeline.from_pretrained(
- "huggingface/foundational-robot-policy", # Pretrained foundation model
- overrides={
- "normalizer_processor": {"stats": new_stats}, # Inject your robot's statistics
- "device_processor": {"device": "cuda:0"}, # registry name for registered steps
- "rename_processor": {"rename_map": robot_key_map}, # Map your robot's observation keys
- # ...
- },
-)
-```
-
-## Best Practices
-
-Based on analysis of all LeRobot processor implementations, here are the key patterns and practices:
-
-### 1. **Safe Data Handling**
-
-Always create copies of input data to avoid unintended side effects. Use `transition.copy()` and `observation.copy()` rather than modifying data in-place. This prevents your processor from accidentally affecting other components in the pipeline.
-
-Check for required data before processing and handle missing data gracefully. If your processor expects certain keys (like `"pixels"` for image processing), validate their presence first. For optional data, use safe access patterns like `transition.get()` and handle `None` values appropriately.
-
-When data validation fails, provide clear, actionable error messages that help users understand what went wrong and how to fix it.
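-
-A minimal sketch of these patterns, assuming a dict-like transition with an `"observation"` entry and a step that expects a `"pixels"` key (key names are illustrative):
-
-```python
-from typing import Any
-
-
-def safe_process(transition: dict[str, Any]) -> dict[str, Any]:
-    """Sketch of safe data handling for a processor step."""
-    new_transition = transition.copy()  # never mutate the input in place
-
-    observation = new_transition.get("observation")
-    if observation is None:
-        return new_transition  # optional data missing: early return, no error
-
-    observation = observation.copy()
-    if "pixels" not in observation:
-        # Clear, actionable error: state what was expected and what was found
-        raise KeyError(f"Expected 'pixels' in observation, got: {sorted(observation)}")
-
-    # ... process observation["pixels"] here ...
-    new_transition["observation"] = observation
-    return new_transition
-```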
-
-### 2. **Choose Appropriate Base Classes**
-
-LeRobot provides specialized base classes that reduce boilerplate code and ensure consistency. Use `ObservationProcessorStep` when you only need to modify observations, `ActionProcessorStep` for action-only processing, and `RobotActionProcessorStep` specifically for dictionary-based robot actions.
-
-Only inherit directly from `ProcessorStep` when you need full control over the entire transition or when processing multiple transition components simultaneously. The specialized base classes handle the transition management for you and provide type safety.
-
-### 3. **Registration and Naming**
-
-Register your processors with descriptive, namespaced names using `@ProcessorStepRegistry.register()`. Use organization prefixes like `"robotics_lab/safety_clipper"` or `"acme_corp/vision_enhancer"` to avoid naming conflicts. Avoid generic names like `"processor"` or `"step"` that could clash with other implementations.
-
-Good registration makes your processors discoverable and enables clean serialization/deserialization when saving and loading pipelines.
-
-### 4. **State Management Patterns**
-
-Distinguish between configuration parameters (JSON-serializable values) and internal state (tensors, buffers). Use dataclass fields with `init=False, repr=False` for internal state that shouldn't appear in the constructor or string representation.
-
-Implement the `reset()` method to clear internal state between episodes. This is crucial for stateful processors that accumulate data over time, like moving averages or temporal filters.
-
-Remember that `get_config()` should only return JSON-serializable configuration, while `state_dict()` handles tensor state separately.
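-
-For instance, a stateful step keeping a running mean over recent actions might separate configuration from state like this (a sketch; `window` is configuration, the buffer is internal state):
-
-```python
-from dataclasses import dataclass, field
-from typing import Any
-
-import torch
-
-
-@dataclass
-class RunningMeanSketch:
-    """Sketch of config/state separation for a stateful step."""
-
-    window: int = 10  # configuration: JSON-serializable
-    _buffer: list[torch.Tensor] = field(default_factory=list, init=False, repr=False)
-
-    def update(self, action: torch.Tensor) -> torch.Tensor:
-        self._buffer = (self._buffer + [action])[-self.window :]
-        return torch.stack(self._buffer).mean(dim=0)
-
-    def get_config(self) -> dict[str, Any]:
-        return {"window": self.window}  # config only, no tensors
-
-    def reset(self) -> None:
-        self._buffer.clear()  # clear accumulated state between episodes
-```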
-
-### 5. **Input Validation and Error Handling**
-
-Validate input types and shapes before processing. Check tensor properties like `dtype` and dimensions to ensure compatibility with your algorithms. For robot actions, verify that required pose components or joint values are present and within expected ranges.
-
-Use early returns for edge cases where no processing is needed. Provide clear, descriptive error messages that include the expected vs. actual data types or shapes. This makes debugging much easier for users.
-
-### 6. **Device and Dtype Awareness**
-
-Design your processors to automatically adapt to the device and dtype of input tensors. Internal tensors (like normalization statistics) should match the input tensor's device and dtype to ensure compatibility with multi-GPU training, mixed precision, and distributed setups.
-
-Implement a `to()` method that moves your processor's internal state to the specified device. Check device/dtype compatibility at runtime and automatically migrate internal state when needed. This pattern enables seamless operation across different hardware configurations without manual intervention.
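-
-A sketch of this pattern for a step holding normalization statistics (a simplified version of what `NormalizerProcessorStep` does internally):
-
-```python
-import torch
-
-
-class DeviceAwareSketch:
-    """Sketch of device/dtype adaptation for internal tensor state."""
-
-    def __init__(self, mean: torch.Tensor, std: torch.Tensor):
-        self.mean = mean
-        self.std = std
-
-    def to(self, device: torch.device | str) -> "DeviceAwareSketch":
-        # Explicit migration, e.g. when the whole pipeline changes device
-        self.mean = self.mean.to(device)
-        self.std = self.std.to(device)
-        return self
-
-    def __call__(self, x: torch.Tensor) -> torch.Tensor:
-        # Automatic adaptation: match the input's device and dtype at runtime
-        if self.mean.device != x.device or self.mean.dtype != x.dtype:
-            self.mean = self.mean.to(device=x.device, dtype=x.dtype)
-            self.std = self.std.to(device=x.device, dtype=x.dtype)
-        return (x - self.mean) / self.std
-```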
-
-## Conclusion
-
-You now have all the tools to implement custom processors in LeRobot! The key steps are:
-
-1. **Define your processor** as a dataclass with the required methods (`__call__`, `get_config`, `state_dict`, `load_state_dict`, `reset`, `transform_features`)
-2. **Register it** using `@ProcessorStepRegistry.register("name")` for discoverability
-3. **Integrate it** into a `DataProcessorPipeline` with other processing steps
-4. **Use base classes** like `ObservationProcessorStep` when possible to reduce boilerplate
-5. **Implement device/dtype awareness** to support multi-GPU and mixed precision setups
-
-The processor system is designed to be modular and composable, allowing you to build complex data processing pipelines from simple, focused components. Whether you're preprocessing sensor data for training or post-processing model outputs for robot execution, custom processors give you the flexibility to handle any data transformation your robotics application requires.
-
-Key principles for robust processors:
-
-- **Device/dtype adaptation**: Internal tensors should match input tensors
-- **Clear error messages**: Help users understand what went wrong
-- **Base class usage**: Leverage specialized base classes to reduce boilerplate
-- **Feature contracts**: Declare data structure changes with `transform_features()`
-
-Start simple, test thoroughly, and ensure your processors work seamlessly across different hardware configurations!
diff --git a/lerobot/docs/source/index.mdx b/lerobot/docs/source/index.mdx
deleted file mode 100644
index 5f214f9a2dcca17df60ba56842e0556d08737cfb..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/index.mdx
+++ /dev/null
@@ -1,23 +0,0 @@
-# LeRobot
-
-**State-of-the-art machine learning for real-world robotics**
-
-🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier for entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
-
-🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.
-
-🤗 LeRobot already provides a set of pretrained models, datasets with human collected demonstrations, and simulated environments so that everyone can get started.
-
-🤗 LeRobot hosts pretrained models and datasets on the [LeRobot Hugging Face page](https://huggingface.co/lerobot).
-
-Join the LeRobot community on [Discord](https://discord.gg/s3KuuzsPFb)
diff --git a/lerobot/docs/source/installation.mdx b/lerobot/docs/source/installation.mdx
deleted file mode 100644
index 3506466af9e289631962fb1f897f5e761cc0a418..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/installation.mdx
+++ /dev/null
@@ -1,127 +0,0 @@
-# Installation
-
-## Install [`miniforge`](https://conda-forge.org/download/)
-
-```bash
-wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
-bash Miniforge3-$(uname)-$(uname -m).sh
-```
-
-## Environment Setup
-
-Create a virtual environment with Python 3.10, using conda:
-
-```bash
-conda create -y -n lerobot python=3.10
-```
-
-Then activate your conda environment; you have to do this each time you open a shell to use lerobot:
-
-```bash
-conda activate lerobot
-```
-
-When using `conda`, install `ffmpeg` in your environment:
-
-```bash
-conda install ffmpeg -c conda-forge
-```
-
-> [!TIP]
-> This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
->
-> - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
->
-> ```bash
-> conda install ffmpeg=7.1.1 -c conda-forge
-> ```
->
-> - _[On Linux only]_ If you want to bring your own ffmpeg: Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure the ffmpeg binary used by your install is the one you compiled (check with `which ffmpeg`).
-
-## Install LeRobot 🤗
-
-### From Source
-
-First, clone the repository and navigate into the directory:
-
-```bash
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-```
-
-Then, install the library in editable mode. This is useful if you plan to contribute to the code.
-
-```bash
-pip install -e .
-```
-
-### Installation from PyPI
-
-**Core Library:**
-Install the base package with:
-
-```bash
-pip install lerobot
-```
-
-_This installs only the default dependencies._
-
-**Extra Features:**
-To install additional functionality, use one of the following:
-
-```bash
-pip install 'lerobot[all]' # All available features
-pip install 'lerobot[aloha,pusht]' # Specific features (Aloha & Pusht)
-pip install 'lerobot[feetech]' # Feetech motor support
-```
-
-_Replace `[...]` with your desired features._
-
-**Available Tags:**
-For a full list of optional dependencies, see:
-https://pypi.org/project/lerobot/
-
-> [!NOTE]
-> For lerobot 0.4.0, if you want to install the `pi` extra, you will have to do: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`
-
-### Troubleshooting
-
-If you encounter build errors, you may need to install additional dependencies: `cmake`, `build-essential`, and the ffmpeg development libraries.
-To install these on Linux, run:
-
-```bash
-sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
-```
-
-For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
-
-## Optional dependencies
-
-LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`.
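-
-For example, to install the Aloha simulation and Feetech motor extras together:
-
-```bash
-pip install -e ".[aloha,feetech]"
-```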
-
-### Simulations
-
-Install environment packages: `aloha` ([gym-aloha](https://github.com/huggingface/gym-aloha)) or `pusht` ([gym-pusht](https://github.com/huggingface/gym-pusht)).
-Example:
-
-```bash
-pip install -e ".[aloha]" # or "[pusht]" for example
-```
-
-### Motor Control
-
-For Koch v1.1, install the Dynamixel SDK; for SO100/SO101/Moss, install the Feetech SDK.
-
-```bash
-pip install -e ".[feetech]" # or "[dynamixel]" for example
-```
-
-### Experiment Tracking
-
-To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
-
-```bash
-wandb login
-```
-
-You can now assemble your robot if it's not ready yet; look for your robot type on the left. Then follow the link below to use LeRobot with your robot.
diff --git a/lerobot/docs/source/integrate_hardware.mdx b/lerobot/docs/source/integrate_hardware.mdx
deleted file mode 100644
index c37c96612402c998dd07df1d000edf0a8a9a952d..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/integrate_hardware.mdx
+++ /dev/null
@@ -1,476 +0,0 @@
-# Bring Your Own Hardware
-
-This tutorial will explain how to integrate your own robot design into the LeRobot ecosystem and have it access all of our tools (data collection, control pipelines, policy training and inference).
-
-To that end, we provide the [`Robot`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/robot.py) base class in LeRobot, which specifies a standard interface for physical robot integration. Let's see how to implement it.
-
-## Prerequisites
-
-- Your own robot which exposes a communication interface (e.g. serial, CAN, TCP)
-- A way to read sensor data and send motor commands programmatically, e.g. manufacturer's SDK or API, or your own protocol implementation.
-- LeRobot installed in your environment. Follow our [Installation Guide](./installation).
-
-## Choose your motors
-
-If you're using Feetech or Dynamixel motors, LeRobot provides built-in bus interfaces:
-
-- [`FeetechMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/feetech.py) – for controlling Feetech servos
-- [`DynamixelMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/dynamixel.py) – for controlling Dynamixel servos
-
-Please refer to the [`MotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/motors_bus.py) abstract class to learn about its API.
-For a good example of how it can be used, have a look at our own [SO101 follower implementation](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/so_follower/so101_follower/so101_follower.py).
-
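-As a minimal sketch of the bus API (port and motor model are illustrative; the same `sync_read`/`sync_write` calls appear in the I/O examples later in this tutorial):
-
-```python
-from lerobot.motors import Motor, MotorNormMode
-from lerobot.motors.feetech import FeetechMotorsBus
-
-bus = FeetechMotorsBus(
-    port="/dev/tty.usbmodem58760431541",  # illustrative port
-    motors={"joint_1": Motor(1, "sts3215", MotorNormMode.RANGE_M100_100)},
-)
-bus.connect()
-
-positions = bus.sync_read("Present_Position")  # e.g. {"joint_1": ...}
-bus.sync_write("Goal_Position", positions)  # echo the current positions back
-
-bus.disconnect()
-```
-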
-Use these if compatible. Otherwise, you'll need to find or write a Python interface (not covered in this tutorial):
-
-- Find an existing SDK in Python (or use bindings to C/C++)
-- Or implement a basic communication wrapper (e.g., via pyserial, socket, or CANopen)
-
-You're not alone—many community contributions use custom boards or firmware!
-
-For Feetech and Dynamixel, we currently support these servos:
-
-- Feetech:
-  - STS & SMS series (protocol 0): `sts3215`, `sts3250`, `sm8512bl`
-  - SCS series (protocol 1): `scs0009`
-- Dynamixel (protocol 2.0 only): `xl330-m077`, `xl330-m288`, `xl430-w250`, `xm430-w350`, `xm540-w270`, `xc430-w150`
-
-If you are using Feetech or Dynamixel servos that are not in this list, you can add those in the [Feetech table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/tables.py) or [Dynamixel table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/tables.py). Depending on the model, this will require you to add model-specific information. In most cases though, there shouldn't be a lot of additions to do.
-
-In the next sections, we'll use a `FeetechMotorsBus` as the motors interface for the examples. Replace it and adapt to your motors if necessary.
-
-## Step 1: Subclass the `Robot` Interface
-
-You’ll first need to specify the config class and a string identifier (`name`) for your robot. If your robot has settings that you'd like to be able to change easily (e.g. port/address, baudrate), they should go here.
-
-Here, we'll add the port name and one camera by default for our robot:
-
-
-```python
-from dataclasses import dataclass, field
-
-from lerobot.cameras import CameraConfig
-from lerobot.cameras.opencv import OpenCVCameraConfig
-from lerobot.robots import RobotConfig
-
-
-@RobotConfig.register_subclass("my_cool_robot")
-@dataclass
-class MyCoolRobotConfig(RobotConfig):
- port: str
- cameras: dict[str, CameraConfig] = field(
-        default_factory=lambda: {
- "cam_1": OpenCVCameraConfig(
- index_or_path=2,
- fps=30,
- width=480,
- height=640,
- ),
- }
- )
-```
-
-
-See the [Cameras tutorial](./cameras) to understand how to detect and add your camera.
-
-Next, we'll create our actual robot class which inherits from `Robot`. This abstract class defines a contract you must follow for your robot to be usable with the rest of the LeRobot tools.
-
-Here we'll create a simple 5-DoF robot with one camera. It could be a simple arm, but notice that the `Robot` abstract class does not assume anything about your robot's form factor. You can let your imagination run wild when designing new robots!
-
-
-```python
-from lerobot.cameras import make_cameras_from_configs
-from lerobot.motors import Motor, MotorNormMode
-from lerobot.motors.feetech import FeetechMotorsBus
-from lerobot.robots import Robot
-
-class MyCoolRobot(Robot):
- config_class = MyCoolRobotConfig
- name = "my_cool_robot"
-
- def __init__(self, config: MyCoolRobotConfig):
- super().__init__(config)
- self.bus = FeetechMotorsBus(
- port=self.config.port,
- motors={
- "joint_1": Motor(1, "sts3250", MotorNormMode.RANGE_M100_100),
- "joint_2": Motor(2, "sts3215", MotorNormMode.RANGE_M100_100),
- "joint_3": Motor(3, "sts3215", MotorNormMode.RANGE_M100_100),
- "joint_4": Motor(4, "sts3215", MotorNormMode.RANGE_M100_100),
- "joint_5": Motor(5, "sts3215", MotorNormMode.RANGE_M100_100),
- },
- calibration=self.calibration,
- )
- self.cameras = make_cameras_from_configs(config.cameras)
-```
-
-
-## Step 2: Define Observation and Action Features
-
-These two properties define the _interface contract_ between your robot and tools that consume it (such as data collection or learning pipelines).
-
-> [!WARNING]
-> Note that these properties must be callable even if the robot is not yet connected, so avoid relying on runtime hardware state to define them.
-
-### `observation_features`
-
-This property should return a dictionary describing the structure of sensor outputs from your robot. The keys match what `get_observation()` returns, and the values describe either the shape (for arrays/images) or the type (for simple values).
-
-Example for our 5-DoF arm with one camera:
-
-
-```python
-@property
-def _motors_ft(self) -> dict[str, type]:
- return {
- "joint_1.pos": float,
- "joint_2.pos": float,
- "joint_3.pos": float,
- "joint_4.pos": float,
- "joint_5.pos": float,
- }
-
-@property
-def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.cameras[cam].height, self.cameras[cam].width, 3) for cam in self.cameras
- }
-
-@property
-def observation_features(self) -> dict:
- return {**self._motors_ft, **self._cameras_ft}
-```
-
-
-In this case, observations consist of a simple dict storing each motor's position and a camera image.
-
-### `action_features`
-
-This property describes the commands your robot expects via `send_action()`. Again, keys must match the expected input format, and values define the shape/type of each command.
-
-Here, we simply use the same joint proprioceptive features (`self._motors_ft`) as with `observation_features`: the action sent will simply be the goal position for each motor.
-
-
-```python
-@property
-def action_features(self) -> dict:
- return self._motors_ft
-```
-
-
-## Step 3: Handle Connection and Disconnection
-
-These methods should handle opening and closing communication with your hardware (e.g. serial ports, CAN interfaces, USB devices, cameras).
-
-### `is_connected`
-
-This property should simply reflect that communication with the robot's hardware is established. When this property is `True`, it should be possible to read and write to the hardware using `get_observation()` and `send_action()`.
-
-
-```python
-@property
-def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-```
-
-
-### `connect()`
-
-This method should establish communication with the hardware. Moreover, if your robot needs calibration and is not calibrated, it should start a calibration procedure by default. If your robot needs some specific configuration, this should also be called here.
-
-
-```python
-def connect(self, calibrate: bool = True) -> None:
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- self.calibrate()
-
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
-```
-
-
-### `disconnect()`
-
-This method should gracefully terminate communication with the hardware: free any related resources (threads or processes), close ports, etc.
-
-Here, we already handle this in our `MotorsBus` and `Camera` classes so we just need to call their own `disconnect()` methods:
-
-
-```python
-def disconnect(self) -> None:
- self.bus.disconnect()
- for cam in self.cameras.values():
- cam.disconnect()
-```
-
-
-## Step 4: Support Calibration and Configuration
-
-LeRobot supports saving and loading calibration data automatically. This is useful for joint offsets, zero positions, or sensor alignment.
-
-> Note that depending on your hardware, this may not apply. If that's the case, you can simply leave these methods as no-ops:
-
-
-```python
-@property
-def is_calibrated(self) -> bool:
- return True
-
-def calibrate(self) -> None:
- pass
-```
-
-
-### `is_calibrated`
-
-This should reflect whether your robot has the required calibration loaded.
-
-
-```python
-@property
-def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-```
-
-
-### `calibrate()`
-
-The goal of the calibration is twofold:
-
-- Know the physical range of motion of each motor in order to only send commands within this range.
-- Normalize raw motor positions to sensible continuous values (e.g. percentages, degrees) instead of arbitrary discrete values that depend on the specific motor used and will not replicate elsewhere.
-
-It should implement the logic for calibration (if relevant) and update the `self.calibration` dictionary. If you are using Feetech or Dynamixel motors, our bus interfaces already include methods to help with this.
-
-
-```python
-def calibrate(self) -> None:
- self.bus.disable_torque()
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
-
- input(f"Move {self} to the middle of its range of motion and press ENTER....")
- homing_offsets = self.bus.set_half_turn_homings()
-
- print(
- "Move all joints sequentially through their entire ranges "
- "of motion.\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self.bus.record_ranges_of_motion()
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=0,
- homing_offset=homing_offsets[motor],
- range_min=range_mins[motor],
- range_max=range_maxes[motor],
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-```
-
-
-### `configure()`
-
-Use this to set up any configuration for your hardware (servos control modes, controller gains, etc.). This should usually be run at connection time and be idempotent.
-
-
-```python
-def configure(self) -> None:
- with self.bus.torque_disabled():
- self.bus.configure_motors()
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
- self.bus.write("P_Coefficient", motor, 16)
- self.bus.write("I_Coefficient", motor, 0)
- self.bus.write("D_Coefficient", motor, 32)
-```
-
-
-## Step 5: Implement Sensors Reading and Action Sending
-
-These are the most important runtime functions: the core I/O loop.
-
-### `get_observation()`
-
-Returns a dictionary of sensor values from the robot. These typically include motor states, camera frames, various sensors, etc. In the LeRobot framework, these observations are what will be fed to a policy in order to predict the actions to take. The dictionary keys and structure must match `observation_features`.
-
-
-```python
-def get_observation(self) -> dict[str, Any]:
- if not self.is_connected:
- raise ConnectionError(f"{self} is not connected.")
-
- # Read arm position
- obs_dict = self.bus.sync_read("Present_Position")
- obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- obs_dict[cam_key] = cam.async_read()
-
- return obs_dict
-```
-
-
-### `send_action()`
-
-Takes a dictionary that matches `action_features`, and sends it to your hardware. You can add safety limits (clipping, smoothing) and return what was actually sent.
-
-For simplicity, we won't be adding any modification of the actions in our example here.
-
-
-```python
-def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
- goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}
-
- # Send goal position to the arm
- self.bus.sync_write("Goal_Position", goal_pos)
-
- return action
-```
-
-
-## Adding a Teleoperator
-
-For implementing teleoperation devices, we also provide a [`Teleoperator`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/teleoperators/teleoperator.py) base class. This class is very similar to the `Robot` base class and also doesn't assume anything on form factor.
-
-The main differences are in the I/O functions: a teleoperator produces actions via `get_action` and can receive feedback actions via `send_feedback`. Feedback could be anything controllable on the teleoperation device that helps the person controlling it understand the consequences of the actions sent: think motion/force feedback on a leader arm, or vibrations on a gamepad controller. To implement a teleoperator, you can follow this same tutorial and adapt it for these two methods, as in the sketch below.
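-
-A minimal sketch of just these two hooks with a hypothetical gamepad device (a real implementation would subclass `Teleoperator` and also provide the connection, calibration, and configuration methods shown for robots above):
-
-```python
-from typing import Any
-
-
-class GamepadTeleopSketch:
-    """Illustrates only the teleoperator I/O hooks."""
-
-    def get_action(self) -> dict[str, Any]:
-        # Read the device and map it to a dict matching the robot's action_features
-        x, y = 0.0, 0.0  # placeholder for a real joystick read
-        return {"joint_1.pos": x, "joint_2.pos": y}
-
-    def send_feedback(self, feedback: dict[str, Any]) -> None:
-        # e.g. rumble the controller in proportion to a measured gripper force
-        strength = feedback.get("gripper.force", 0.0)  # hypothetical feedback key
-        print(f"rumble strength: {strength:.2f}")  # placeholder for a real rumble call
-```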
-
-## Using Your Own `LeRobot` Devices 🔌
-
-You can easily extend `lerobot` with your own custom hardware—be it a camera, robot, or teleoperation device—by creating a separate, installable Python package. If you follow a few simple conventions, the `lerobot` command-line tools (like `lerobot-teleop` and `lerobot-record`) will **automatically discover and integrate your creations** without requiring any changes to the `lerobot` source code.
-
-This guide outlines the conventions your plugin must follow.
-
-### The 4 Core Conventions
-
-To ensure your custom device is discoverable, you must adhere to the following four rules.
-
-#### 1\. Create an Installable Package with a Specific Prefix
-
-Your project must be a standard, installable Python package. Crucially, the name of your package (as defined in `pyproject.toml` or `setup.py`) must begin with one of these prefixes:
-
-- `lerobot_robot_` for a robot.
-- `lerobot_camera_` for a camera.
-- `lerobot_teleoperator_` for a teleoperation device.
-
-This prefix system is how `lerobot` automatically finds your plugin in the Python environment.
-
-#### 2. Follow the `SomethingConfig`/`Something` Naming Pattern
-
-Your device's implementation class must be named after its configuration class, simply by removing the `Config` suffix.
-
-- **Config Class:** `MyAwesomeTeleopConfig`
-- **Device Class:** `MyAwesomeTeleop`
-
-#### 3. Place Your Files in a Predictable Structure
-
-The device class (`MyAwesomeTeleop`) must be located in a predictable module relative to its configuration class (`MyAwesomeTeleopConfig`). `lerobot` will automatically search in these locations:
-
-- In the **same module** as the config class.
-- In a **submodule named after the device** (e.g., `my_awesome_teleop.py`).
-
-The recommended and simplest structure is to place them in separate, clearly named files within the same directory.
-
-#### 4. Expose Classes in `__init__.py`
-
-Your package's `__init__.py` file should import and expose both the configuration and the device classes, making them easily accessible.
-
-### Putting It All Together: A Complete Example
-
-Let's create a new teleoperator called `my_awesome_teleop`.
-
-#### Directory Structure
-
-Here is what the project folder should look like. The package name, `lerobot_teleoperator_my_awesome_teleop`, follows **Convention #1**.
-
-```
-lerobot_teleoperator_my_awesome_teleop/
-├── pyproject.toml # (or setup.py) lists lerobot as a dependency
-└── lerobot_teleoperator_my_awesome_teleop/
- ├── __init__.py
- ├── config_my_awesome_teleop.py
- └── my_awesome_teleop.py
-```
-
-#### File Contents
-
-- **`config_my_awesome_teleop.py`**: Defines the configuration class. Note the `Config` suffix (**Convention #2**).
-
-  ```python
-  from dataclasses import dataclass
-
-  from lerobot.teleoperators.config import TeleoperatorConfig
-
-  @TeleoperatorConfig.register_subclass("my_awesome_teleop")
-  @dataclass
-  class MyAwesomeTeleopConfig(TeleoperatorConfig):
-      # Your configuration fields go here
-      port: str = "192.168.1.1"
-  ```
-
-- **`my_awesome_teleop.py`**: Implements the device. The class name `MyAwesomeTeleop` matches its config class name (**Convention #2**). This file structure adheres to **Convention #3**.
-
-  ```python
-  from lerobot.teleoperators.teleoperator import Teleoperator
-
-  from .config_my_awesome_teleop import MyAwesomeTeleopConfig
-
-  class MyAwesomeTeleop(Teleoperator):
-      config_class = MyAwesomeTeleopConfig
-      name = "my_awesome_teleop"
-
-      def __init__(self, config: MyAwesomeTeleopConfig):
-          super().__init__(config)
-          self.config = config
-
-      # Your device logic (e.g., connect) goes here
-  ```
-
-- **`__init__.py`**: Exposes the key classes (**Convention #4**).
-
-  ```python
-  from .config_my_awesome_teleop import MyAwesomeTeleopConfig
-  from .my_awesome_teleop import MyAwesomeTeleop
-  ```
-
-### Installation and Usage
-
-1. **Install your new plugin in your Python environment.** You can install your local plugin package using `pip`'s editable mode or from PyPI.
-
-   ```bash
-   # Locally
-   # Navigate to your plugin's root directory and install it
-   cd lerobot_teleoperator_my_awesome_teleop
-   pip install -e .
-
-   # From PyPI
-   pip install lerobot_teleoperator_my_awesome_teleop
-   ```
-
-2. **Use it directly from the command line.** Now, you can use your custom device by referencing its type.
-
-   ```bash
-   lerobot-teleoperate --teleop.type=my_awesome_teleop \
-     # ... other arguments
-   ```
-
-And that's it! Your custom device is now fully integrated.
-
-### Looking for an example?
-
-Check out these two packages from the community:
-
-- https://github.com/SpesRobotics/lerobot-robot-xarm
-- https://github.com/SpesRobotics/lerobot-teleoperator-teleop
-
-## Wrapping Up
-
-Once your robot class is complete, you can leverage the LeRobot ecosystem:
-
-- Control your robot with available teleoperators, or integrate your own teleoperation device directly
-- Record training data and visualize it
-- Integrate it into RL or imitation learning pipelines
-
-Don't hesitate to reach out to the community for help on our [Discord](https://discord.gg/s3KuuzsPFb) 🤗
diff --git a/lerobot/docs/source/introduction_processors.mdx b/lerobot/docs/source/introduction_processors.mdx
deleted file mode 100644
index cd579b22d150c57fda0ef3303a60e08067d11f03..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/introduction_processors.mdx
+++ /dev/null
@@ -1,314 +0,0 @@
-# Introduction to Processors
-
-In robotics, there's a fundamental mismatch between the data that robots and humans produce and what machine learning models expect.
-Robots output raw sensor data like camera images and joint positions that need normalization, batching, and device placement before models can process them.
-Language instructions from humans must be tokenized into numerical representations, and different robots use different coordinate systems that need standardization.
-
-The challenge extends to model outputs as well.
-Models might output end-effector positions while robots need joint-space commands, or teleoperators produce relative movements while robots expect absolute commands.
-Model predictions are often normalized and need conversion back to real-world scales.
-
-Cross-domain translation adds another layer of complexity.
-Training data from one robot setup needs adaptation for deployment on different hardware, models trained with specific camera configurations must work with new arrangements, and datasets with different naming conventions need harmonization.
-
-**That's where processors come in.** They serve as universal translators that bridge these gaps, ensuring seamless data flow from sensors to models to actuators.
-Processors handle all the preprocessing and postprocessing steps needed to convert raw environment data into model-ready inputs and vice versa.
-
-This means that your favorite policy can be used like this:
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.policies.factory import make_pre_post_processors
-from lerobot.policies.your_policy import YourPolicy
-
-dataset = LeRobotDataset("hf_user/dataset", episodes=[0])
-sample = dataset[10]
-
-model = YourPolicy.from_pretrained("hf_user/model")
-model.eval()
-model.to("cuda")
-
-preprocessor, postprocessor = make_pre_post_processors(
-    model.config,
-    pretrained_path="hf_user/model",
-    dataset_stats=dataset.meta.stats,
-)
-
-preprocessed_sample = preprocessor(sample)
-action = model.select_action(preprocessed_sample)
-postprocessed_action = postprocessor(action)
-```
-
-## What are Processors?
-
-In robotics, data comes in many forms: images from cameras, joint positions from sensors, text instructions from users, and more. Each type of data requires specific transformations before a model can use it effectively. Models need this data to be:
-
-- **Normalized**: Scaled to appropriate ranges for neural network processing
-- **Batched**: Organized with proper dimensions for batch processing
-- **Tokenized**: Text converted to numerical representations
-- **Device-placed**: Moved to the right hardware (CPU/GPU)
-- **Type-converted**: Cast to appropriate data types
-
-Processors handle these transformations through composable, reusable steps that can be chained together into pipelines. Think of them as a modular assembly line where each station performs a specific transformation on your data.
-
-## Core Concepts
-
-### EnvTransition: The Universal Data Container
-
-The `EnvTransition` is the fundamental data structure that flows through all processors.
-It's a typed dictionary that represents a complete robot-environment interaction:
-
-- **OBSERVATION**: All sensor data (images, states, proprioception)
-- **ACTION**: The action to execute or that was executed
-- **REWARD**: Reinforcement learning signal
-- **DONE/TRUNCATED**: Episode boundary indicators
-- **INFO**: Arbitrary metadata
-- **COMPLEMENTARY_DATA**: Task descriptions, indices, padding flags, inter-step data
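-
-For illustration, here's a minimal sketch that builds a transition with the `create_transition` helper listed in the converters table below (the keys and shapes are made up):
-
-```python
-import torch
-
-from lerobot.processor.converters import create_transition
-
-transition = create_transition(
-    observation={"observation.state": torch.zeros(6)},
-    action=torch.zeros(6),
-    reward=0.0,
-)
-```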
-
-### ProcessorStep: The Building Block
-
-A `ProcessorStep` is a single transformation unit that processes transitions. It's an abstract base class with two required methods:
-
-```python
-from lerobot.processor import ProcessorStep, EnvTransition
-
-class MyProcessorStep(ProcessorStep):
-    """Example processor step - inherit and implement abstract methods."""
-
-    def __call__(self, transition: EnvTransition) -> EnvTransition:
-        """Transform the transition - REQUIRED abstract method."""
-        # Your processing logic here
-        return transition
-
-    def transform_features(self, features):
-        """Declare how this step transforms feature shapes/types - REQUIRED abstract method."""
-        return features  # Most processors return features unchanged
-```
-
-`__call__` is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`.
-
-`transform_features` is used to declare how this step transforms feature shapes/types.
-
-### DataProcessorPipeline: The Generic Orchestrator
-
-The `DataProcessorPipeline[TInput, TOutput]` chains multiple `ProcessorStep` instances together:
-
-```python
-from lerobot.processor import RobotProcessorPipeline, PolicyProcessorPipeline
-
-# For robot hardware (unbatched data)
-robot_processor = RobotProcessorPipeline[RobotAction, RobotAction](
-    steps=[step1, step2, step3],
-    name="robot_pipeline"
-)
-
-# For model training/inference (batched data)
-policy_processor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
-    steps=[step1, step2, step3],
-    name="policy_pipeline"
-)
-```
-
-## RobotProcessorPipeline vs PolicyProcessorPipeline
-
-The key distinction is in the data structures they handle:
-
-| Aspect | RobotProcessorPipeline | PolicyProcessorPipeline |
-| --------------- | -------------------------------------------- | ---------------------------------------- |
-| **Input** | `dict[str, Any]` - Individual robot values | `dict[str, Any]` - Batched tensors |
-| **Output** | `dict[str, Any]` - Individual robot commands | `torch.Tensor` - Policy predictions |
-| **Use Case** | Real-time robot control | Model training/inference |
-| **Data Format** | Unbatched, heterogeneous | Batched, homogeneous |
-| **Examples** | `{"joint_1": 0.5}` | `{"observation.state": tensor([[0.5]])}` |
-
-**Use `RobotProcessorPipeline`** for robot hardware interfaces:
-
-```python
-# Robot data structures: dict[str, Any] for observations and actions
-robot_obs: dict[str, Any] = {
-    "joint_1": 0.5,           # Individual joint values
-    "joint_2": -0.3,
-    "camera_0": image_array,  # Raw camera data
-}
-
-robot_action: dict[str, Any] = {
-    "joint_1": 0.2,  # Target joint positions
-    "joint_2": 0.1,
-    "gripper": 0.8,
-}
-```
-
-**Use `PolicyProcessorPipeline`** for model training and batch processing:
-
-```python
-# Policy data structures: batch dicts and tensors
-policy_batch: dict[str, Any] = {
-    "observation.state": torch.tensor([[0.5, -0.3]]),  # Batched states
-    "observation.images.camera0": torch.tensor(...),   # Batched images
-    "action": torch.tensor([[0.2, 0.1, 0.8]]),         # Batched actions
-}
-
-policy_action: torch.Tensor = torch.tensor([[0.2, 0.1, 0.8]])  # Model output tensor
-```
-
-## Converter Functions
-
-LeRobot provides converter functions to bridge different data formats in `lerobot.processor.converters`. These functions handle the crucial translations between robot hardware data structures, policy model formats, and the internal `EnvTransition` representation that flows through processor pipelines.
-
-| Category | Function | Description |
-| ------------------------------ | ----------------------------- | ------------------------------- |
-| **Robot Hardware Converters** | `robot_action_to_transition` | Robot dict → EnvTransition |
-| | `observation_to_transition` | Robot obs → EnvTransition |
-| | `transition_to_robot_action` | EnvTransition → Robot dict |
-| **Policy/Training Converters** | `batch_to_transition` | Batch dict → EnvTransition |
-| | `transition_to_batch` | EnvTransition → Batch dict |
-| | `policy_action_to_transition` | Policy tensor → EnvTransition |
-| | `transition_to_policy_action` | EnvTransition → Policy tensor |
-| **Utilities** | `create_transition` | Build transitions with defaults |
-| | `identity_transition` | Pass-through converter |
-
-The key insight is that **robot hardware converters** work with individual values and dictionaries, while **policy/training converters** work with batched tensors and model outputs. The converter functions automatically handle the structural differences, so your processor steps can focus on the core transformations without worrying about data format compatibility.
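-
-As a rough sketch (mirroring the `to_transition`/`to_output` arguments shown in the pipeline examples below), you can wire these converters into a pipeline so that steps see an `EnvTransition` while callers keep working with plain robot dicts:
-
-```python
-from lerobot.processor import RobotProcessorPipeline
-from lerobot.processor.converters import (
-    robot_action_to_transition,
-    transition_to_robot_action,
-)
-
-# With no steps, this pipeline is an identity: dict -> transition -> dict
-action_pipeline = RobotProcessorPipeline(
-    steps=[],
-    to_transition=robot_action_to_transition,
-    to_output=transition_to_robot_action,
-)
-
-sent = action_pipeline({"joint_1.pos": 0.2, "gripper.pos": 0.8})
-print(sent)  # {'joint_1.pos': 0.2, 'gripper.pos': 0.8}
-```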
-
-## Processor Examples
-
-The following examples demonstrate real-world processor configurations for policy training and inference.
-
-Here is an example processor for policy training and inference:
-
-```python
-# Training data preprocessing (optimized order for GPU performance)
-training_preprocessor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
-    steps=[
-        RenameObservationsProcessorStep(rename_map={}),     # Standardize keys
-        AddBatchDimensionProcessorStep(),                   # Add batch dims
-        TokenizerProcessorStep(tokenizer_name="...", ...),  # Tokenize language
-        DeviceProcessorStep(device="cuda"),                 # Move to GPU first
-        NormalizerProcessorStep(features=..., stats=...),   # Normalize on GPU
-    ]
-)
-
-# Model output postprocessing
-training_postprocessor = PolicyProcessorPipeline[torch.Tensor, torch.Tensor](
-    steps=[
-        DeviceProcessorStep(device="cpu"),                   # Move to CPU
-        UnnormalizerProcessorStep(features=..., stats=...),  # Denormalize
-    ],
-    to_transition=policy_action_to_transition,
-    to_output=transition_to_policy_action,
-)
-```
-
-### An interaction between a robot and a policy with processors
-
-The most common real-world scenario combines both pipeline types: robot hardware generates observations that need policy processing, and policy outputs need robot-compatible postprocessing:
-
-```python
-# Real deployment: Robot sensors → Model → Robot commands
-with torch.no_grad():
-    while not done:
-        raw_obs = robot.get_observation()  # dict[str, Any]
-
-        # (optional) robot observation → policy observation adapter goes here
-
-        policy_input = policy_preprocessor(raw_obs)  # Batched dict
-
-        policy_output = policy.select_action(policy_input)  # Policy tensor
-
-        policy_action = policy_postprocessor(policy_output)
-
-        # (optional) policy action → robot action adapter goes here
-
-        robot.send_action(policy_action)
-```
-
-## Feature Contracts: Shape and Type Transformation
-
-Processors don't just transform data; they can also **change the data structure itself**. The `transform_features()` method declares these changes, which is crucial for dataset recording and policy creation.
-
-### Why Feature Contracts Matter
-
-When building datasets or policies, LeRobot needs to know:
-
-- **What data fields will exist** after processing
-- **What shapes and types** each field will have
-- **How to configure models** for the expected data structure
-
-```python
-# Example: A processor that adds velocity to observations
-import torch
-
-class VelocityProcessor(ObservationProcessorStep):
-    def observation(self, obs):
-        new_obs = obs.copy()
-        if "observation.state" in obs:
-            state = obs["observation.state"]
-            # Concatenate the computed velocity onto the state, doubling its
-            # dimensionality (consistent with transform_features below)
-            new_obs["observation.state"] = torch.cat([state, self._compute_velocity(state)], dim=-1)
-        return new_obs
-
-    def transform_features(self, features):
-        """Declare the new velocity field we're adding."""
-        state_feature = features[PipelineFeatureType.OBSERVATION].get("observation.state")
-        if state_feature:
-            double_shape = (state_feature.shape[0] * 2,) if state_feature.shape else (2,)
-            features[PipelineFeatureType.OBSERVATION]["observation.state"] = PolicyFeature(
-                type=FeatureType.STATE, shape=double_shape
-            )
-        return features
-```
-
-### Feature Specification Functions
-
-`create_initial_features()` and `aggregate_pipeline_dataset_features()` solve a critical dataset creation problem: determining the exact final data structure before any data is processed.
-Since processor pipelines can add new features (like velocity fields), change tensor shapes (like cropping images), or rename keys, datasets need to know the complete output specification upfront to allocate proper storage and define schemas.
-These functions work together: `create_initial_features()` starts from the robot hardware specification, and `aggregate_pipeline_dataset_features()` simulates the entire pipeline transformation to compute the final feature dictionary passed to `LeRobotDataset.create()`. This ensures perfect alignment between what processors output and what datasets expect to store.
-
-```python
-from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features, create_initial_features
-
-# Start with robot's raw features
-initial_features = create_initial_features(
-    observation=robot.observation_features,  # {"joint_1.pos": float, "camera_0": (480,640,3)}
-    action=robot.action_features             # {"joint_1.pos": float, "gripper.pos": float}
-)
-
-# Apply processor pipeline to compute final features
-final_features = aggregate_pipeline_dataset_features(
-    pipeline=my_processor_pipeline,
-    initial_features=initial_features,
-    use_videos=True
-)
-
-# Use for dataset creation
-dataset = LeRobotDataset.create(
-    repo_id="my_dataset",
-    features=final_features,  # Knows exactly what data to expect
-    ...
-)
-```
-
-## Common Processor Steps
-
-LeRobot provides many registered processor steps. Here are the most commonly used core processors:
-
-### Essential Processors
-
-- **`normalizer_processor`**: Normalize observations/actions using dataset statistics (mean/std or min/max)
-- **`device_processor`**: Move tensors to CPU/GPU with optional dtype conversion
-- **`to_batch_processor`**: Add batch dimensions to transitions for model compatibility
-- **`rename_observations_processor`**: Rename observation keys using mapping dictionaries
-- **`tokenizer_processor`**: Tokenize natural language task descriptions into tokens and attention masks
-
-### Next Steps
-
-- **[Implement Your Own Processor](./implement_your_own_processor)** - Create custom processor steps
-- **[Debug Your Pipeline](./debug_processor_pipeline)** - Troubleshoot and optimize pipelines
-- **[Processors for Robots and Teleoperators](./processors_robots_teleop)** - Real-world integration patterns
-
-## Summary
-
-Processors solve the data translation problem in robotics by providing:
-
-- **Modular transformations**: Composable, reusable processing steps
-- **Type safety**: Generic pipelines with static type checking
-- **Performance optimization**: GPU-accelerated operations
-- **Robot/Policy distinction**: Separate pipelines for different data structures
-- **Comprehensive ecosystem**: 30+ registered processors for common tasks
-
-The key insight: `RobotProcessorPipeline` handles unbatched robot hardware data, while `PolicyProcessorPipeline` handles batched model data. Choose the right tool for your data structure!
diff --git a/lerobot/docs/source/koch.mdx b/lerobot/docs/source/koch.mdx
deleted file mode 100644
index b31e0b100007f2eb70c428f5ca0e4017b769d725..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/koch.mdx
+++ /dev/null
@@ -1,283 +0,0 @@
-# Koch v1.1
-
-In the steps below, we explain how to assemble the Koch v1.1 robot.
-
-## Order and assemble the parts
-
-Follow the sourcing and assembling instructions provided in this [README](https://github.com/jess-moss/koch-v1-1). This will guide you through setting up both the follower and leader arms, as shown in the image below.
-
-For a visual walkthrough of the assembly process, you can refer to [this video tutorial](https://youtu.be/8nQIg9BwwTk).
-
-> [!WARNING]
-> Since the production of this video, we simplified the configuration phase. Because of this, two things differ from the instructions in that video:
->
-> - Don't plug in all the motor cables right away and wait to be instructed to do so in [Configure the motors](#configure-the-motors).
-> - Don't screw in the controller board (PCB) to the base right away and wait for being instructed to do so in [Configure the motors](#configure-the-motors).
-
-## Install LeRobot 🤗
-
-To install LeRobot, follow our [Installation Guide](./installation).
-
-In addition to these instructions, you need to install the Dynamixel SDK:
-
-```bash
-pip install -e ".[dynamixel]"
-```
-
-## Configure the motors
-
-### 1. Find the USB ports associated with each arm
-
-To find the port for each bus servo adapter, run this script:
-
-```bash
-lerobot-find-port
-```
-
-
-
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the USB cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/tty.usbmodem575E0032081`, corresponding to your leader or follower arm.
-
-
-
-
-On Linux, you might need to give access to the USB ports by running:
-
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM1
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/ttyACM1`, corresponding to your leader or follower arm.
-
-
-
-
-### 2. Set the motor ids and baudrates
-
-Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For communication to work properly between the motors and the controller, we first need to assign a unique id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
-
-To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
-
-If you are repurposing motors from another robot, you will probably also need to perform this step, as the ids and baudrate likely won't match.
-
-#### Follower
-
-Connect the USB cable from your computer and the 5V power supply to the follower arm's controller board. Then, run the following command or the API example with the port you got from the previous step. You'll also need to give your follower arm a name with the `id` parameter.
-
-For a visual reference on how to set the motor ids please refer to [this video](https://huggingface.co/docs/lerobot/en/so101#setup-motors-video) where we follow the process for the SO101 arm.
-
-
-
-
-```bash
-lerobot-setup-motors \
- --robot.type=koch_follower \
- --robot.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
-```
-
-
-
-
-
-```python
-from lerobot.robots.koch_follower import KochFollower, KochFollowerConfig
-
-config = KochFollowerConfig(
-    port="/dev/tty.usbmodem575E0031751",
-    id="my_awesome_follower_arm",
-)
-follower = KochFollower(config)
-follower.setup_motors()
-```
-
-
-
-
-
-You should see the following instruction.
-
-```
-Connect the controller board to the 'gripper' motor only and press enter.
-```
-
-As instructed, plug in the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. As you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
-
-
-**Troubleshooting**
-
-If you get an error at that point, check your cables and make sure they are plugged in properly:
-
-
-- Power supply
-- USB cable between your computer and the controller board
-- The 3-pin cable from the controller board to the motor
-
-If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
-
-
-
-You should then see the following message:
-
-```
-'gripper' motor id set to 6
-```
-
-Followed by the next instruction:
-
-```
-Connect the controller board to the 'wrist_roll' motor only and press enter.
-```
-
-You can disconnect the 3-pin cable from the controller board, but leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug another 3-pin cable into the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that it isn't daisy-chained to any other motor.
-
-Repeat the operation for each motor as instructed.
-
-> [!TIP]
-> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
-
-When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
-
-#### Leader
-
-Do the same steps for the leader arm but modify the command or script accordingly.
-
-
-
-
-```bash
-lerobot-setup-motors \
- --teleop.type=koch_leader \
- --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
-```
-
-
-
-
-
-```python
-from lerobot.teleoperators.koch_leader import KochLeader, KochLeaderConfig
-
-config = KochLeaderConfig(
-    port="/dev/tty.usbmodem575E0031751",
-    id="my_awesome_leader_arm",
-)
-leader = KochLeader(config)
-leader.setup_motors()
-```
-
-
-
-
-
-## Calibrate
-
-Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
-The calibration process is very important because it allows a neural network trained on one robot to work on another.
-
-#### Follower
-
-Run the following command or API example to calibrate the follower arm:
-
-
-
-
-```bash
-# Use your robot's port, and give the robot a unique name with --robot.id
-lerobot-calibrate \
-    --robot.type=koch_follower \
-    --robot.port=/dev/tty.usbmodem58760431551 \
-    --robot.id=my_awesome_follower_arm
-```
-
-
-
-
-
-```python
-from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower
-
-config = KochFollowerConfig(
-    port="/dev/tty.usbmodem585A0076891",
-    id="my_awesome_follower_arm",
-)
-
-follower = KochFollower(config)
-follower.connect(calibrate=False)
-follower.calibrate()
-follower.disconnect()
-```
-
-
-
-
-
-We unified the calibration method for most robots. Thus, the calibration steps for this Koch arm are the same as the steps for the SO100 and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. A video of this same process for the SO101, for reference, can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).
-
-#### Leader
-
-Do the same steps to calibrate the leader arm, run the following command or API example:
-
-
-
-
-```bash
-# Use your leader arm's port, and give it a unique name with --teleop.id
-lerobot-calibrate \
-    --teleop.type=koch_leader \
-    --teleop.port=/dev/tty.usbmodem58760431551 \
-    --teleop.id=my_awesome_leader_arm
-```
-
-
-
-
-
-```python
-from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
-
-config = KochLeaderConfig(
-    port="/dev/tty.usbmodem575E0031751",
-    id="my_awesome_leader_arm",
-)
-
-leader = KochLeader(config)
-leader.connect(calibrate=False)
-leader.calibrate()
-leader.disconnect()
-```
-
-
-
-
-
-Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/lerobot/docs/source/lekiwi.mdx b/lerobot/docs/source/lekiwi.mdx
deleted file mode 100644
index 43f7c4596bb9a4e27c0877de948e9f75ff76300e..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/lekiwi.mdx
+++ /dev/null
@@ -1,337 +0,0 @@
-# LeKiwi
-
-In the steps below, we explain how to assemble the LeKiwi mobile robot.
-
-## Source the parts
-
-Follow this [README](https://github.com/SIGRobotics-UIUC/LeKiwi). It contains the bill of materials, with links to source the parts, instructions to 3D print them, and advice in case it's your first time printing or you don't own a 3D printer.
-
-### Wired version
-
-If you have the **wired** LeKiwi version, you can skip the installation of the Raspberry Pi and setting up SSH. You can also run all commands directly on your PC for both the LeKiwi scripts and the leader arm scripts for teleoperating.
-
-## Install software on Pi
-
-Now we have to set up the remote PC that will run on the LeKiwi robot. This is normally a Raspberry Pi, but it can be any PC that runs on 5V and has enough USB ports (2 or more) for the cameras and the motor control board.
-
-### Install OS
-
-To set up the Raspberry Pi and its SD card, see: [Setup Pi](https://www.raspberrypi.com/documentation/computers/getting-started.html). It explains how to download the [Imager](https://www.raspberrypi.com/software/) and install Raspberry Pi OS or Ubuntu.
-
-### Setup SSH
-
-After setting up your Pi, you should enable and set up [SSH](https://www.raspberrypi.com/news/coding-on-raspberry-pi-remotely-with-visual-studio-code/) (Secure Shell Protocol) so you can log in to the Pi from your laptop without requiring a screen, keyboard, and mouse on the Pi. A great tutorial on how to do this can be found [here](https://www.raspberrypi.com/documentation/computers/remote-access.html#ssh). Logging into your Pi can be done from your Command Prompt (cmd) or, if you use VSCode, with [this](https://marketplace.visualstudio.com/items?itemName=ms-vscode-remote.remote-ssh) extension.
-
-### Install LeRobot on Pi 🤗
-
-On your Raspberry Pi install LeRobot using our [Installation Guide](./installation)
-
-In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your Pi:
-
-```bash
-pip install -e ".[lekiwi]"
-```
-
-## Install LeRobot locally
-
-If you have already installed LeRobot on your laptop/PC, you can skip this step; otherwise, please follow along as we do the same steps we did on the Pi.
-
-Follow our [Installation Guide](./installation)
-
-In addition to these instructions, you need to install the Feetech SDK & ZeroMQ on your laptop/pc:
-
-```bash
-pip install -e ".[lekiwi]"
-```
-
-Great :hugs:! You are now done installing LeRobot, and we can begin assembling the SO100/SO101 arms and the mobile base :robot:.
-Every time you now want to use LeRobot, you can go to the `~/lerobot` folder where we installed LeRobot and run one of the commands.
-
-# Step-by-Step Assembly Instructions
-
-First, we will assemble the two SO100/SO101 arms: one to attach to the mobile base and one for teleoperation. Then we will assemble the mobile base. The instructions for assembling can be found on these two pages:
-
-- [Assemble SO101](./so101#step-by-step-assembly-instructions)
-- [Assemble LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi/blob/main/Assembly.md)
-
-### Find the USB ports associated with motor board
-
-To find the port for each bus servo adapter, run this script:
-
-```bash
-lerobot-find-port
-```
-
-
-
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081']
-Remove the USB cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/tty.usbmodem575E0032081`, corresponding to your board.
-
-
-
-
-On Linux, you might need to give access to the USB ports by running:
-
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM0
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/ttyACM0`, corresponding to your board.
-
-
-
-
-### Configure motors
-
-The instructions for configuring the motors can be found in the SO101 [docs](./so101#configure-the-motors). Besides the ids for the arm motors, we also need to set the motor ids for the mobile base. These need to be in a specific order to work. Below is an image of the motor ids and mounting positions for the mobile base. Note that we only use one motor control board on LeKiwi. This means the motor ids for the wheels are 7, 8 and 9.
-
-You can run this command to set up the motors for LeKiwi. It will first set up the arm motors (ids 6 down to 1) and then the wheel motors (ids 9, 8, 7):
-
-```bash
-lerobot-setup-motors \
- --robot.type=lekiwi \
- --robot.port=/dev/tty.usbmodem58760431551 # <- paste here the port found at previous step
-```
-
-
-
-### Troubleshoot communication
-
-If you are having trouble connecting to the Mobile SO100, follow these steps to diagnose and resolve the issue.
-
-#### 1. Verify IP Address Configuration
-
-Make sure that the correct IP for the Pi is used in the commands or in your code. To check the Raspberry Pi's IP address, run (on the Pi command line):
-
-```bash
-hostname -I
-```
-
-#### 2. Check if Pi is reachable from laptop/pc
-
-Try pinging the Raspberry Pi from your laptop:
-
-```bash
-ping <ip_address_of_pi>
-```
-
-If the ping fails:
-
-- Ensure the Pi is powered on and connected to the same network.
-- Check if SSH is enabled on the Pi.
-
-#### 3. Try SSH connection
-
-If you can't SSH into the Pi, it might not be properly connected. Use:
-
-```bash
-ssh <username>@<ip_address_of_pi>
-```
-
-If you get a connection error:
-
-- Ensure SSH is enabled on the Pi by running:
- ```bash
- sudo raspi-config
- ```
- Then navigate to: **Interfacing Options -> SSH** and enable it.
-
-### Calibration
-
-Now we have to calibrate the leader arm and the follower arm. The wheel motors don't have to be calibrated.
-The calibration process is very important because it allows a neural network trained on one robot to work on another.
-
-### Calibrate follower arm (on mobile base)
-
-Make sure the arm is connected to the Raspberry Pi and run this script or API example (on the Raspberry Pi via SSH) to launch calibration of the follower arm:
-
-```bash
-lerobot-calibrate \
- --robot.type=lekiwi \
- --robot.id=my_awesome_kiwi # <- Give the robot a unique name
-```
-
-We unified the calibration method for most robots; thus, the calibration steps for this SO100 arm are the same as for the Koch and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. A video of this same process for the SO101, for reference, can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).
-
-### Wired version
-
-If you have the **wired** LeKiwi version, please run all commands on your laptop.
-
-### Calibrate leader arm
-
-Then, to calibrate the leader arm (which is attached to the laptop/PC), run the following command or API example on your laptop:
-
-
-
-
-```bash
-# Use your leader arm's port, and give it a unique name with --teleop.id
-lerobot-calibrate \
    --teleop.type=so100_leader \
    --teleop.port=/dev/tty.usbmodem58760431551 \
    --teleop.id=my_awesome_leader_arm
-```
-
-
-
-
-
-```python
-from lerobot.teleoperators.so_leader import SO100LeaderConfig, SO100Leader
-
-config = SO100LeaderConfig(
-    port="/dev/tty.usbmodem58760431551",
-    id="my_awesome_leader_arm",
-)
-
-leader = SO100Leader(config)
-leader.connect(calibrate=False)
-leader.calibrate()
-leader.disconnect()
-```
-
-
-
-
-
-## Teleoperate LeKiwi
-
-> [!TIP]
-> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
-
-To teleoperate, SSH into your Raspberry Pi, and run `conda activate lerobot` and this command:
-
-```bash
-python -m lerobot.robots.lekiwi.lekiwi_host --robot.id=my_awesome_kiwi
-```
-
-Then on your laptop, also run `conda activate lerobot` and run the API example. Make sure you set the correct `remote_ip` and `port` in `examples/lekiwi/teleoperate.py`.
-
-```bash
-python examples/lekiwi/teleoperate.py
-```
-
-You should see something like this on your laptop: `[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.` Now you can move the leader arm and use the keyboard (w, a, s, d) to drive forward, left, backward, and right, (z, x) to turn left or right, and (r, f) to increase or decrease the speed of the mobile robot. There are three speed modes, see the table below:
-
-| Speed Mode | Linear Speed (m/s) | Rotation Speed (deg/s) |
-| ---------- | ------------------ | ---------------------- |
-| Fast | 0.4 | 90 |
-| Medium | 0.25 | 60 |
-| Slow | 0.1 | 30 |
-
-| Key | Action |
-| --- | -------------- |
-| W | Move forward |
-| A | Move left |
-| S | Move backward |
-| D | Move right |
-| Z | Turn left |
-| X | Turn right |
-| R | Increase speed |
-| F | Decrease speed |
-
-> [!TIP]
-> If you use a different keyboard, you can change the keys for each command in the [`LeKiwiClientConfig`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/lekiwi/config_lekiwi.py).
-
-### Wired version
-
-If you have the **wired** LeKiwi version, please run all commands on your laptop.
-
-## Record a dataset
-
-Once you're familiar with teleoperation, you can record your first dataset.
-
-We use the Hugging Face hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token; this token can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
-
-Add your token to the CLI by running this command:
-
-```bash
-huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
-```
-
-Then store your Hugging Face repository name in a variable:
-
-```bash
-HF_USER=$(huggingface-cli whoami | head -n 1)
-echo $HF_USER
-```
-
-Now you can record a dataset. To record episodes and upload your dataset to the hub, execute this API example tailored for LeKiwi. Make sure to first adapt the `remote_ip`, `repo_id`, `port` and `task` in the script. If you would like to run the script for longer you can increase `NB_CYCLES_CLIENT_CONNECTION`.
-
-```bash
-python examples/lekiwi/record.py
-```
-
-#### Dataset upload
-
-Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/so101_test), which you can obtain by running:
-
-```bash
-echo https://huggingface.co/datasets/${HF_USER}/so101_test
-```
-
-Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
-
-You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
-
-#### Tips for gathering data
-
-Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible in the cameras. A good rule of thumb is that you should be able to do the task yourself by only looking at the camera images.
-
-In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
-
-Avoid adding too much variation too quickly, as it may hinder your results.
-
-If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
-
-#### Troubleshooting
-
-- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
-
-## Replay an episode
-
-To replay an episode, run the API example below; make sure to change `remote_ip`, `port`, the `LeRobotDataset` repo id, and the episode index.
-
-```bash
-python examples/lekiwi/replay.py
-```
-
-Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following the training part of this tutorial: [Getting started with real-world robots](./il_robots)
-
-## Evaluate your policy
-
-To evaluate your policy, run the `evaluate.py` API example; make sure to change `remote_ip`, `port`, and the model path.
-
-```bash
-python examples/lekiwi/evaluate.py
-```
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/lerobot/docs/source/lerobot-dataset-v3.mdx b/lerobot/docs/source/lerobot-dataset-v3.mdx
deleted file mode 100644
index 1071074381c35f062d139980948bf0621c8b26ed..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/lerobot-dataset-v3.mdx
+++ /dev/null
@@ -1,314 +0,0 @@
-# LeRobotDataset v3.0
-
-`LeRobotDataset v3.0` is a standardized format for robot learning data. It provides unified access to multi-modal time-series data, sensorimotor signals and multi‑camera video, as well as rich metadata for indexing, search, and visualization on the Hugging Face Hub.
-
-This guide will show you how to:
-
-- Understand the v3.0 design and directory layout
-- Record a dataset and push it to the Hub
-- Load datasets for training with `LeRobotDataset`
-- Stream datasets without downloading using `StreamingLeRobotDataset`
-- Apply image transforms for data augmentation during training
-- Migrate existing `v2.1` datasets to `v3.0`
-
-## What’s new in `v3`
-
-- **File-based storage**: Many episodes per Parquet/MP4 file (v2 used one file per episode).
-- **Relational metadata**: Episode boundaries and lookups are resolved through metadata, not filenames.
-- **Hub-native streaming**: Consume datasets directly from the Hub with `StreamingLeRobotDataset`.
-- **Lower file-system pressure**: Fewer, larger files ⇒ faster initialization and fewer issues at scale.
-- **Unified organization**: Clean directory layout with consistent path templates across data and videos.
-
-## Installation
-
-`LeRobotDataset v3.0` will be included in `lerobot >= 0.4.0`.
-
-Until that stable release, you can use the main branch by following the [build from source instructions](./installation#from-source).
-
-## Record a dataset
-
-Run the command below to record a dataset with the SO-101 and push to the Hub:
-
-```bash
-lerobot-record \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem585A0076841 \
- --robot.id=my_awesome_follower_arm \
- --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
- --teleop.type=so101_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=my_awesome_leader_arm \
- --display_data=true \
- --dataset.repo_id=${HF_USER}/record-test \
- --dataset.num_episodes=5 \
- --dataset.single_task="Grab the black cube"
-```
-
-See the [recording guide](./il_robots#record-a-dataset) for more details.
-
-## Format design
-
-A core v3 principle is **decoupling storage from the user API**: data is stored efficiently (few large files), while the public API exposes intuitive episode-level access.
-
-`v3` has three pillars:
-
-1. **Tabular data**: Low‑dimensional, high‑frequency signals (states, actions, timestamps) stored in **Apache Parquet**. Access is memory‑mapped or streamed via the `datasets` stack.
-2. **Visual data**: Camera frames concatenated and encoded into **MP4**. Frames from the same episode are grouped; videos are sharded per camera for practical sizes.
-3. **Metadata**: JSON/Parquet records describing schema (feature names, dtypes, shapes), frame rates, normalization stats, and **episode segmentation** (start/end offsets into shared Parquet/MP4 files).
-
-> To scale to millions of episodes, tabular rows and video frames from multiple episodes are **concatenated** into larger files. Episode‑specific views are reconstructed **via metadata**, not file boundaries.
-
-
-_From episode‑based to file‑based datasets._
-
-### Directory layout (simplified)
-
-- **`meta/info.json`**: canonical schema (features, shapes/dtypes), FPS, codebase version, and **path templates** to locate data/video shards.
-- **`meta/stats.json`**: global feature statistics (mean/std/min/max) used for normalization; exposed as `dataset.meta.stats`.
-- **`meta/tasks.jsonl`**: natural‑language task descriptions mapped to integer IDs for task‑conditioned policies.
-- **`meta/episodes/`**: per‑episode records (lengths, tasks, offsets) stored as **chunked Parquet** for scalability.
-- **`data/`**: frame‑by‑frame **Parquet** shards; each file typically contains **many episodes**.
-- **`videos/`**: **MP4** shards per camera; each file typically contains **many episodes**.
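-
-Much of this metadata is surfaced directly on a loaded dataset; a quick sketch using the attributes mentioned above:
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-dataset = LeRobotDataset("yaak-ai/L2D-v3")
-print(dataset.fps)                # frame rate from meta/info.json
-print(dataset.meta.stats.keys())  # normalization stats from meta/stats.json
-print(dataset.num_episodes, dataset.num_frames)
-```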
-
-## Load a dataset for training
-
-`LeRobotDataset` returns Python dictionaries of PyTorch tensors and integrates with `torch.utils.data.DataLoader`. Here is a code example showing its use:
-
-```python
-import torch
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-repo_id = "yaak-ai/L2D-v3"
-
-# 1) Load from the Hub (cached locally)
-dataset = LeRobotDataset(repo_id)
-
-# 2) Random access by index
-sample = dataset[100]
-print(sample)
-# {
-#     'observation.state': tensor([...]),
-#     'action': tensor([...]),
-#     'observation.images.front_left': tensor([C, H, W]),
-#     'timestamp': tensor(1.234),
-#     ...
-# }
-
-# 3) Temporal windows via delta_timestamps (seconds relative to t)
-delta_timestamps = {
-    "observation.images.front_left": [-0.2, -0.1, 0.0]  # 0.2s before, 0.1s before, and the current frame
-}
-
-dataset = LeRobotDataset(repo_id, delta_timestamps=delta_timestamps)
-
-# Accessing an index now returns a stack for the specified key(s)
-sample = dataset[100]
-print(sample["observation.images.front_left"].shape)  # [T, C, H, W], where T=3
-
-# 4) Wrap with a DataLoader for training
-batch_size = 16
-data_loader = torch.utils.data.DataLoader(dataset, batch_size=batch_size)
-
-device = "cuda" if torch.cuda.is_available() else "cpu"
-for batch in data_loader:
-    observations = batch["observation.state"].to(device)
-    actions = batch["action"].to(device)
-    images = batch["observation.images.front_left"].to(device)
-    # model.forward(batch)
-```
-
-## Stream a dataset (no downloads)
-
-Use `StreamingLeRobotDataset` to iterate directly from the Hub without local copies. This lets you stream large datasets without downloading them to disk or loading them into memory, and is a key feature of the new dataset format.
-
-```python
-from lerobot.datasets.streaming_dataset import StreamingLeRobotDataset
-
-repo_id = "yaak-ai/L2D-v3"
-dataset = StreamingLeRobotDataset(repo_id) # streams directly from the Hub
-```
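-
-A streaming dataset is consumed as an iterator rather than by random indexing; a minimal usage sketch:
-
-```python
-# Iterate over frames as they are streamed from the Hub
-for i, frame in enumerate(dataset):
-    print(frame["action"].shape)
-    if i == 2:  # stop after a few frames for this sketch
-        break
-```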
-
-
-_Stream directly from the Hub for on‑the‑fly training._
-
-## Image transforms
-
-Image transforms are data augmentations applied to camera frames during training to improve model robustness and generalization. LeRobot supports various transforms including brightness, contrast, saturation, hue, and sharpness adjustments.
-
-### Using transforms during dataset creation/recording
-
-Currently, transforms are applied during **training time only**, not during recording. When you create or record a dataset, the raw images are stored without transforms. This allows you to experiment with different augmentations later without re-recording data.
-
-### Adding transforms to existing datasets (API)
-
-Use the `image_transforms` parameter when loading a dataset for training:
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.datasets.transforms import ImageTransforms, ImageTransformsConfig, ImageTransformConfig
-
-# Option 1: Use default transform configuration (disabled by default)
-transforms_config = ImageTransformsConfig(
-    enable=True,           # Enable transforms
-    max_num_transforms=3,  # Apply up to 3 transforms per frame
-    random_order=False,    # Apply in standard order
-)
-transforms = ImageTransforms(transforms_config)
-
-dataset = LeRobotDataset(
-    repo_id="your-username/your-dataset",
-    image_transforms=transforms
-)
-
-# Option 2: Create custom transform configuration
-custom_transforms_config = ImageTransformsConfig(
-    enable=True,
-    max_num_transforms=2,
-    random_order=True,
-    tfs={
-        "brightness": ImageTransformConfig(
-            weight=1.0,
-            type="ColorJitter",
-            kwargs={"brightness": (0.7, 1.3)}  # Adjust brightness range
-        ),
-        "contrast": ImageTransformConfig(
-            weight=2.0,  # Higher weight = more likely to be selected
-            type="ColorJitter",
-            kwargs={"contrast": (0.8, 1.2)}
-        ),
-        "sharpness": ImageTransformConfig(
-            weight=0.5,  # Lower weight = less likely to be selected
-            type="SharpnessJitter",
-            kwargs={"sharpness": (0.3, 2.0)}
-        ),
-    }
-)
-
-dataset = LeRobotDataset(
-    repo_id="your-username/your-dataset",
-    image_transforms=ImageTransforms(custom_transforms_config)
-)
-
-# Option 3: Use pure torchvision transforms
-from torchvision.transforms import v2
-
-torchvision_transforms = v2.Compose([
-    v2.ColorJitter(brightness=0.2, contrast=0.2, saturation=0.2, hue=0.1),
-    v2.GaussianBlur(kernel_size=3, sigma=(0.1, 2.0)),
-])
-
-dataset = LeRobotDataset(
-    repo_id="your-username/your-dataset",
-    image_transforms=torchvision_transforms
-)
-```
-
-### Available transform types
-
-LeRobot provides several transform types:
-
-- **`ColorJitter`**: Adjusts brightness, contrast, saturation, and hue
-- **`SharpnessJitter`**: Randomly adjusts image sharpness
-- **`Identity`**: No transformation (useful for testing)
-
-You can also use any `torchvision.transforms.v2` transform by passing it directly to the `image_transforms` parameter.
-
-### Configuration options
-
-- **`enable`**: Enable/disable transforms (default: `False`)
-- **`max_num_transforms`**: Maximum number of transforms applied per frame (default: `3`)
-- **`random_order`**: Apply transforms in random order vs. standard order (default: `False`)
-- **`weight`**: Sampling probability for each transform (higher = more likely; if the weights don't sum to 1, they are normalized)
-- **`kwargs`**: Transform-specific parameters (e.g., brightness range)
-
-### Visualizing transforms
-
-Use the visualization script to preview how transforms affect your data:
-
-```bash
-lerobot-imgtransform-viz \
- --repo-id=your-username/your-dataset \
- --output-dir=./transform_examples \
- --n-examples=5
-```
-
-This saves example images showing the effect of each transform, helping you tune parameters.
-
-### Best practices
-
-- **Start conservative**: Begin with small ranges (e.g., brightness 0.9-1.1) and increase gradually
-- **Test first**: Use the visualization script to ensure transforms look reasonable
-- **Monitor training**: Strong augmentations can hurt performance if too aggressive
-- **Match your domain**: If your robot operates in varying lighting, use brightness/contrast transforms
-- **Combine wisely**: Using too many transforms simultaneously can make training unstable
-
-## Migrate `v2.1` → `v3.0`
-
-A converter aggregates per‑episode files into larger shards and writes episode offsets/metadata. Convert your dataset using the instructions below.
-
-```bash
-# Pre-release build with v3 support:
-pip install "https://github.com/huggingface/lerobot/archive/33cad37054c2b594ceba57463e8f11ee374fa93c.zip"
-
-# Convert an existing v2.1 dataset hosted on the Hub:
-python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<your_dataset_repo_id>
-```
-
-**What it does**
-
-- Aggregates parquet files: `episode-0000.parquet`, `episode-0001.parquet`, … → **`file-0000.parquet`**, …
-- Aggregates mp4 files: `episode-0000.mp4`, `episode-0001.mp4`, … → **`file-0000.mp4`**, …
-- Updates `meta/episodes/*` (chunked Parquet) with per‑episode lengths, tasks, and byte/frame offsets.
-
-## Common Issues
-
-### Always call `finalize()` before pushing
-
-When creating or recording datasets, you **must** call `dataset.finalize()` to properly close parquet writers. See the [PR #1903](https://github.com/huggingface/lerobot/pull/1903) for more details.
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-# Create dataset and record episodes
-dataset = LeRobotDataset.create(...)
-
-for episode in range(num_episodes):
-    # Record frames
-    for frame in episode_data:
-        dataset.add_frame(frame)
-    dataset.save_episode()
-
-# Call finalize() when done recording and before push_to_hub()
-dataset.finalize()  # Closes parquet writers, writes metadata footers
-dataset.push_to_hub()
-```
-
-**Why is this necessary?**
-
-Dataset v3.0 uses incremental parquet writing with buffered metadata for efficiency. The `finalize()` method:
-
-- Flushes any buffered episode metadata to disk
-- Closes parquet writers to write footer metadata, otherwise the parquet files will be corrupt
-- Ensures the dataset is valid for loading
-
-Without calling `finalize()`, your parquet files will be incomplete and the dataset won't load properly.
diff --git a/lerobot/docs/source/libero.mdx b/lerobot/docs/source/libero.mdx
deleted file mode 100644
index f900369adaf532e98faf50fe797b6b00bbdcacf7..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/libero.mdx
+++ /dev/null
@@ -1,171 +0,0 @@
-# LIBERO
-
-**LIBERO** is a benchmark designed to study **lifelong robot learning**. The idea is that robots won’t just be pretrained once in a factory; they’ll need to keep learning and adapting with their human users over time. This ongoing adaptation is called **lifelong learning in decision making (LLDM)**, and it’s a key step toward building robots that become truly personalized helpers.
-
-- 📄 [LIBERO paper](https://arxiv.org/abs/2306.03310)
-- 💻 [Original LIBERO repo](https://github.com/Lifelong-Robot-Learning/LIBERO)
-
-To make progress on this challenge, LIBERO provides a set of standardized tasks that focus on **knowledge transfer**: how well a robot can apply what it has already learned to new situations. By evaluating on LIBERO, different algorithms can be compared fairly and researchers can build on each other’s work.
-
-LIBERO includes **five task suites**:
-
-- **LIBERO-Spatial (`libero_spatial`)** – tasks that require reasoning about spatial relations.
-- **LIBERO-Object (`libero_object`)** – tasks centered on manipulating different objects.
-- **LIBERO-Goal (`libero_goal`)** – goal-conditioned tasks where the robot must adapt to changing targets.
-- **LIBERO-90 (`libero_90`)** – 90 short-horizon tasks from the LIBERO-100 collection.
-- **LIBERO-Long (`libero_10`)** – 10 long-horizon tasks from the LIBERO-100 collection.
-
-Together, these suites cover **130 tasks**, ranging from simple object manipulations to complex multi-step scenarios. LIBERO is meant to grow over time, and to serve as a shared benchmark where the community can test and improve lifelong learning algorithms.
-
-
-
-## Evaluating with LIBERO
-
-At **LeRobot**, we ported [LIBERO](https://github.com/Lifelong-Robot-Learning/LIBERO) into our framework and used it mainly to **evaluate [SmolVLA](https://huggingface.co/docs/lerobot/en/smolvla)**, our lightweight Vision-Language-Action model.
-
-LIBERO is now part of our **multi-eval supported simulation**, meaning you can benchmark your policies either on a **single suite of tasks** or across **multiple suites at once** with just a flag.
-
-To install LIBERO, after following the official LeRobot installation instructions, just run:
-`pip install -e ".[libero]"`
-
-### Single-suite evaluation
-
-Evaluate a policy on one LIBERO suite:
-
-```bash
-lerobot-eval \
- --policy.path="your-policy-id" \
- --env.type=libero \
- --env.task=libero_object \
- --eval.batch_size=2 \
- --eval.n_episodes=3
-```
-
-- `--env.task` picks the suite (`libero_object`, `libero_spatial`, etc.).
-- `--eval.batch_size` controls how many environments run in parallel.
-- `--eval.n_episodes` sets how many episodes to run in total.
-
----
-
-### Multi-suite evaluation
-
-Benchmark a policy across multiple suites at once:
-
-```bash
-lerobot-eval \
- --policy.path="your-policy-id" \
- --env.type=libero \
- --env.task=libero_object,libero_spatial \
- --eval.batch_size=1 \
- --eval.n_episodes=2
-```
-
-- Pass a comma-separated list to `--env.task` for multi-suite evaluation.
-
-### Control Mode
-
-LIBERO now supports two control modes: relative and absolute. This matters because different VLA checkpoints are trained to output actions under different control parameterizations.
-You can switch between them with `env.control_mode = "relative"` or `env.control_mode = "absolute"`.
-
-### Policy inputs and outputs
-
-When using LIBERO through LeRobot, policies interact with the environment via **observations** and **actions**:
-
-- **Observations**
- - `observation.state` – proprioceptive features (agent state).
- - `observation.images.image` – main camera view (`agentview_image`).
- - `observation.images.image2` – wrist camera view (`robot0_eye_in_hand_image`).
-
- ⚠️ **Note:** LeRobot enforces the `.images.*` prefix for any multi-modal visual features. Always ensure that your policy config's `input_features` uses the same naming keys, and that your dataset metadata keys follow this convention during evaluation.
- If your data contains different keys, you must rename the observations to match what the policy expects (see the sketch after this list), since the naming keys are encoded inside the normalization statistics layer.
- This will be fixed with the upcoming Pipeline PR.
-
-- **Actions**
- - Continuous control values in a `Box(-1, 1, shape=(7,))` space.
-
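-A minimal renaming sketch (the helper and mapping here are illustrative, not a LeRobot API; the key names follow the convention above):
-
-```python
-# Illustrative only: map dataset keys to the names the policy expects.
-RENAME_MAP = {
-    "agentview_image": "observation.images.image",
-    "robot0_eye_in_hand_image": "observation.images.image2",
-}
-
-def rename_observation(obs: dict) -> dict:
-    return {RENAME_MAP.get(key, key): value for key, value in obs.items()}
-```
-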
-We also provide a notebook for quick testing.
-
-## Training with LIBERO
-
-When training on LIBERO tasks, make sure your dataset parquet and metadata keys follow the LeRobot convention.
-
-The environment expects:
-
-- `observation.state` → 8-dim agent state
-- `observation.images.image` → main camera (`agentview_image`)
-- `observation.images.image2` → wrist camera (`robot0_eye_in_hand_image`)
-
-⚠️ Preparing the dataset upfront is **cleaner and more efficient** than remapping keys inside the code.
-To avoid potential mismatches and key errors, we provide a **preprocessed LIBERO dataset** that is fully compatible with the current LeRobot codebase and requires no additional manipulation:
-👉 [HuggingFaceVLA/libero](https://huggingface.co/datasets/HuggingFaceVLA/libero)
-
-For reference, here is the **original dataset** published by Physical Intelligence:
-👉 [physical-intelligence/libero](https://huggingface.co/datasets/physical-intelligence/libero)
-
----
-
-### Example training command
-
-```bash
-lerobot-train \
- --policy.type=smolvla \
- --policy.repo_id=${HF_USER}/libero-test \
- --policy.load_vlm_weights=true \
- --dataset.repo_id=HuggingFaceVLA/libero \
- --env.type=libero \
- --env.task=libero_10 \
- --output_dir=./outputs/ \
- --steps=100000 \
- --batch_size=4 \
- --eval.batch_size=1 \
- --eval.n_episodes=1 \
- --eval_freq=1000
-```
-
----
-
-### Note on rendering
-
-LeRobot uses MuJoCo for simulation. You need to set the rendering backend before training or evaluation:
-
-- `export MUJOCO_GL=egl` → for headless servers (e.g. HPC, cloud)
-
-## Reproducing π₀.₅ results
-
-We reproduce the results of π₀.₅ on the LIBERO benchmark using the LeRobot implementation. We take the Physical Intelligence LIBERO base model (`pi05_libero`) and finetune it for an additional 6k steps in bfloat16, with a batch size of 256 on 8 H100 GPUs, using the [HuggingFace LIBERO dataset](https://huggingface.co/datasets/HuggingFaceVLA/libero).
-
-The finetuned model can be found here:
-
-- **π₀.₅ LIBERO**: [lerobot/pi05_libero_finetuned](https://huggingface.co/lerobot/pi05_libero_finetuned)
-
-We then evaluate the finetuned model using the LeRobot LIBERO implementation, by running the following command:
-
-```bash
-lerobot-eval \
- --env.type=libero \
- --env.task=libero_spatial,libero_object,libero_goal,libero_10 \
- --eval.batch_size=1 \
- --eval.n_episodes=10 \
- --policy.path=lerobot/pi05_libero_finetuned \
- --policy.n_action_steps=10 \
- --output_dir=./eval_logs/ \
- --env.max_parallel_tasks=1
-```
-
-**Note:** We set `n_action_steps=10`, similar to the original OpenPI implementation.
-
-### Results
-
-We obtain the following results on the LIBERO benchmark:
-
-| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
-| -------- | -------------- | ------------- | ----------- | --------- | -------- |
-| **π₀.₅** | 97.0 | 99.0 | 98.0 | 96.0 | **97.5** |
-
-These results are consistent with the original [results](https://github.com/Physical-Intelligence/openpi/tree/main/examples/libero#results) reported by Physical Intelligence:
-
-| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
-| -------- | -------------- | ------------- | ----------- | --------- | --------- |
-| **π₀.₅** | 98.8 | 98.2 | 98.0 | 92.4 | **96.85** |
diff --git a/lerobot/docs/source/metaworld.mdx b/lerobot/docs/source/metaworld.mdx
deleted file mode 100644
index 205cd6db44e6e7e925fc834c7099f51d8175d9ce..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/metaworld.mdx
+++ /dev/null
@@ -1,80 +0,0 @@
-# Meta-World
-
-Meta-World is a well-designed, open-source simulation benchmark for multi-task and meta reinforcement learning in continuous-control robotic manipulation. It gives researchers a shared, realistic playground to test whether algorithms can _learn many different tasks_ and _generalize quickly to new ones_ — two central challenges for real-world robotics.
-
-- 📄 [MetaWorld paper](https://arxiv.org/pdf/1910.10897)
-- 💻 [Original MetaWorld repo](https://github.com/Farama-Foundation/Metaworld)
-
-
-
-## Why Meta-World matters
-
-- **Diverse, realistic tasks.** Meta-World bundles a large suite of simulated manipulation tasks (50 in the MT50 suite) using everyday objects and a common tabletop Sawyer arm. This diversity exposes algorithms to a wide variety of dynamics, contacts and goal specifications while keeping a consistent control and observation structure.
-- **Focus on generalization and multi-task learning.** By evaluating across task distributions that share structure but differ in goals and objects, Meta-World reveals whether an agent truly learns transferable skills rather than overfitting to a narrow task.
-- **Standardized evaluation protocol.** It provides clear evaluation modes and difficulty splits, so different methods can be compared fairly across easy, medium, hard and very-hard regimes.
-- **Empirical insight.** Past evaluations on Meta-World show impressive progress on some fronts, but also highlight that current multi-task and meta-RL methods still struggle with large, diverse task sets. That gap points to important research directions.
-
-## What it enables in LeRobot
-
-In LeRobot, you can evaluate any policy or vision-language-action (VLA) model on Meta-World tasks and get a clear success-rate measure. The integration is designed to be straightforward:
-
-- We provide a LeRobot-ready dataset for Meta-World (MT50) on the HF Hub: `https://huggingface.co/datasets/lerobot/metaworld_mt50`.
- - This dataset is formatted for the MT50 evaluation that uses all 50 tasks (the most challenging multi-task setting).
- - MT50 gives the policy a one-hot task vector and uses fixed object/goal positions for consistency.
-
-- Task descriptions and the exact keys required for evaluation are available in the repo/dataset — use these to ensure your policy outputs the right success signals.
-
-## Quick start: train a SmolVLA policy on Meta-World
-
-Example command to train a SmolVLA policy on a subset of tasks:
-
-```bash
-lerobot-train \
- --policy.type=smolvla \
- --policy.repo_id=${HF_USER}/metaworld-test \
- --policy.load_vlm_weights=true \
- --dataset.repo_id=lerobot/metaworld_mt50 \
- --env.type=metaworld \
- --env.task=assembly-v3,dial-turn-v3,handle-press-side-v3 \
- --output_dir=./outputs/ \
- --steps=100000 \
- --batch_size=4 \
- --eval.batch_size=1 \
- --eval.n_episodes=1 \
- --eval_freq=1000
-```
-
-Notes:
-
-- `--env.task` accepts explicit task lists (comma separated) or difficulty groups (e.g., `env.task="hard"`).
-- Adjust `batch_size`, `steps`, and `eval_freq` to match your compute budget.
-- **Gymnasium Assertion Error**: if you encounter an error like
- `AssertionError: ['human', 'rgb_array', 'depth_array']` when running MetaWorld environments, this comes from a mismatch between MetaWorld and your Gymnasium version.
- We recommend using:
-
-```bash
- pip install "gymnasium==1.1.0"
-pip install "gymnasium==1.1.0"
-
-to ensure proper compatibility.
-
-## Quick start — evaluate a trained policy
-
-To evaluate a trained policy on the Meta-World medium difficulty split:
-
-```bash
-lerobot-eval \
- --policy.path="your-policy-id" \
- --env.type=metaworld \
- --env.task=medium \
- --eval.batch_size=1 \
- --eval.n_episodes=2
-```
-
-This will run episodes and return per-task success rates using the standard Meta-World evaluation keys.
-
-## Practical tips
-
-- If you care about generalization, run on the full MT50 suite — it’s intentionally challenging and reveals strengths/weaknesses better than a few narrow tasks.
-- Use the one-hot task conditioning for multi-task training (MT10 / MT50 conventions) so policies have explicit task context.
-- Inspect the dataset task descriptions and the `info["is_success"]` keys when writing post-processing or logging so your success metrics line up with the benchmark; see the sketch below.
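-
-A minimal logging sketch (`env` and `policy` are placeholders for your own Gymnasium-style Meta-World environment and trained policy, not specific LeRobot APIs):
-
-```python
-# Placeholder sketch: assumes `env` and `policy` are already constructed.
-n_episodes = 10
-successes = 0
-for _ in range(n_episodes):
-    obs, info = env.reset()
-    done = False
-    while not done:
-        action = policy.select_action(obs)
-        obs, reward, terminated, truncated, info = env.step(action)
-        done = terminated or truncated
-    successes += int(info.get("is_success", False))
-print(f"Success rate: {successes / n_episodes:.2%}")
-```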
diff --git a/lerobot/docs/source/multi_gpu_training.mdx b/lerobot/docs/source/multi_gpu_training.mdx
deleted file mode 100644
index af89a4a188d53987f8beee626a96e99530c2928e..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/multi_gpu_training.mdx
+++ /dev/null
@@ -1,125 +0,0 @@
-# Multi-GPU Training
-
-This guide shows you how to train policies on multiple GPUs using [Hugging Face Accelerate](https://huggingface.co/docs/accelerate).
-
-## Installation
-
-First, ensure you have accelerate installed:
-
-```bash
-pip install accelerate
-```
-
-## Training with Multiple GPUs
-
-You can launch training in two ways:
-
-### Option 1: Without config (specify parameters directly)
-
-You can specify all parameters directly in the command without running `accelerate config`:
-
-```bash
-accelerate launch \
- --multi_gpu \
- --num_processes=2 \
- $(which lerobot-train) \
- --dataset.repo_id=${HF_USER}/my_dataset \
- --policy.type=act \
- --policy.repo_id=${HF_USER}/my_trained_policy \
- --output_dir=outputs/train/act_multi_gpu \
- --job_name=act_multi_gpu \
- --wandb.enable=true
-```
-
-**Key accelerate parameters:**
-
-- `--multi_gpu`: Enable multi-GPU training
-- `--num_processes=2`: Number of GPUs to use
-- `--mixed_precision=fp16`: Use fp16 mixed precision (or `bf16` if supported)
-
-### Option 2: Using accelerate config
-
-If you prefer to save your configuration, you can optionally configure accelerate for your hardware setup by running:
-
-```bash
-accelerate config
-```
-
-This interactive setup will ask you questions about your training environment (number of GPUs, mixed precision settings, etc.) and saves the configuration for future use. For a simple multi-GPU setup on a single machine, you can use these recommended settings:
-
-- Compute environment: This machine
-- Number of machines: 1
-- Number of processes: (number of GPUs you want to use)
-- GPU ids to use: (leave empty to use all)
-- Mixed precision: fp16 or bf16 (recommended for faster training)
-
-Then launch training with:
-
-```bash
-accelerate launch $(which lerobot-train) \
- --dataset.repo_id=${HF_USER}/my_dataset \
- --policy.type=act \
- --policy.repo_id=${HF_USER}/my_trained_policy \
- --output_dir=outputs/train/act_multi_gpu \
- --job_name=act_multi_gpu \
- --wandb.enable=true
-```
-
-## How It Works
-
-When you launch training with accelerate:
-
-1. **Automatic detection**: LeRobot automatically detects if it's running under accelerate
-2. **Data distribution**: Your batch is automatically split across GPUs
-3. **Gradient synchronization**: Gradients are synchronized across GPUs during backpropagation
-4. **Single process logging**: Only the main process logs to wandb and saves checkpoints
-
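-Conceptually, this mirrors a plain 🤗 Accelerate training loop. The sketch below is illustrative (a toy model, not LeRobot's actual training code):
-
-```python
-import torch
-from accelerate import Accelerator
-
-accelerator = Accelerator()  # detects the distributed setup created by `accelerate launch`
-model = torch.nn.Linear(10, 2)
-optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
-dataloader = torch.utils.data.DataLoader(torch.randn(64, 10), batch_size=8)
-
-# prepare() wraps the model, optimizer, and dataloader for the current process/GPU
-model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
-
-for batch in dataloader:  # each process receives its own shard of the data
-    loss = model(batch).sum()
-    accelerator.backward(loss)  # gradients are synchronized across GPUs here
-    optimizer.step()
-    optimizer.zero_grad()
-
-if accelerator.is_main_process:  # only the main process logs and saves
-    print("done")
-```
-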
-## Learning Rate and Training Steps Scaling
-
-**Important:** LeRobot does **NOT** automatically scale learning rates or training steps based on the number of GPUs. This gives you full control over your training hyperparameters.
-
-### Why No Automatic Scaling?
-
-Many distributed training frameworks automatically scale the learning rate by the number of GPUs (e.g., `lr = base_lr × num_gpus`).
-However, LeRobot keeps the learning rate exactly as you specify it.
-
-### When and How to Scale
-
-If you want to scale your hyperparameters when using multiple GPUs, you should do it manually:
-
-**Learning Rate Scaling:**
-
-```bash
-# Example: 2 GPUs with linear LR scaling
-# Base LR: 1e-4, with 2 GPUs -> 2e-4
-accelerate launch --num_processes=2 $(which lerobot-train) \
- --optimizer.lr=2e-4 \
- --dataset.repo_id=lerobot/pusht \
- --policy=act
-```
-
-**Training Steps Scaling:**
-
-Since the effective batch size `bs` increases with multiple GPUs (batch_size × num_gpus), you may want to reduce the number of training steps proportionally:
-
-```bash
-# Example: 2 GPUs with effective batch size 2x larger
-# Original: batch_size=8, steps=100000
-# With 2 GPUs: batch_size=8 (16 in total), steps=50000
-accelerate launch --num_processes=2 $(which lerobot-train) \
- --batch_size=8 \
- --steps=50000 \
- --dataset.repo_id=lerobot/pusht \
- --policy=act
-```
-
-## Notes
-
-- The `--policy.use_amp` flag in `lerobot-train` is only used when **not** running with accelerate. When using accelerate, mixed precision is controlled by accelerate's configuration.
-- Training logs, checkpoints, and hub uploads are only done by the main process to avoid conflicts. Non-main processes have console logging disabled to prevent duplicate output.
-- The effective batch size is `batch_size × num_gpus`. If you use 4 GPUs with `--batch_size=8`, your effective batch size is 32.
-- Learning rate scheduling is handled correctly across multiple processes—LeRobot sets `step_scheduler_with_optimizer=False` to prevent accelerate from adjusting scheduler steps based on the number of processes.
-- When saving or pushing models, LeRobot automatically unwraps the model from accelerate's distributed wrapper to ensure compatibility.
-- WandB integration automatically initializes only on the main process, preventing multiple runs from being created.
-
-For more advanced configurations and troubleshooting, see the [Accelerate documentation](https://huggingface.co/docs/accelerate). If you want to learn more about how to train on a large number of GPUs, checkout this awesome guide: [Ultrascale Playbook](https://huggingface.co/spaces/nanotron/ultrascale-playbook).
diff --git a/lerobot/docs/source/notebooks.mdx b/lerobot/docs/source/notebooks.mdx
deleted file mode 100644
index 34b45f80f2d83391b033082e5cb609ca7ed666ec..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/notebooks.mdx
+++ /dev/null
@@ -1,29 +0,0 @@
-# 🤗 LeRobot Notebooks
-
-This repository contains example notebooks for using LeRobot. These notebooks demonstrate how to train policies on real or simulation datasets using standardized policies.
-
----
-
-### Training ACT
-
-[ACT](https://huggingface.co/papers/2304.13705) (Action Chunking Transformer) is a transformer-based policy architecture for imitation learning that processes robot states and camera inputs to generate smooth, chunked action sequences.
-
-We provide a ready-to-run Google Colab notebook to help you train ACT policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.
-
-| Notebook | Colab |
-| :------------------------------------------------------------------------------------------------------ | :-------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| [Train ACT with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-act.ipynb) |
-
-Expected training time for 100k steps: ~1.5 hours on an NVIDIA A100 GPU with a batch size of `64`.
-
-### Training SmolVLA
-
-[SmolVLA](https://huggingface.co/papers/2506.01844) is a small but efficient Vision-Language-Action model developed by Hugging Face, with a compact 450M-parameter footprint.
-
-We provide a ready-to-run Google Colab notebook to help you train SmolVLA policies using datasets from the Hugging Face Hub, with optional logging to Weights & Biases.
-
-| Notebook | Colab |
-| :-------------------------------------------------------------------------------------------------------------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
-| [Train SmolVLA with LeRobot](https://github.com/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb) | [](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb) |
-
-Expected training time for 20k steps: ~5 hours on an NVIDIA A100 GPU with a batch size of `64`.
diff --git a/lerobot/docs/source/peft_training.mdx b/lerobot/docs/source/peft_training.mdx
deleted file mode 100644
index e0d249731a68931a927c92659d8510163070f0ba..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/peft_training.mdx
+++ /dev/null
@@ -1,62 +0,0 @@
-# Parameter efficient fine-tuning with 🤗 PEFT
-
-[🤗 PEFT](https://github.com/huggingface/peft) (Parameter-Efficient Fine-Tuning) is a library for efficiently adapting
-large pretrained models such as pre-trained policies (e.g., SmolVLA, π₀, ...) to new tasks without training all
-of the model's parameters while yielding comparable performance.
-
-Install the `lerobot[peft]` optional package to enable PEFT support.
-
-To read about all the possible methods of adaption, please refer to the [🤗 PEFT docs](https://huggingface.co/docs/peft/index).
-
-## Training SmolVLA
-
-In this section we'll show you how to train a pre-trained SmolVLA policy with PEFT on the LIBERO dataset.
-For brevity we're only training on the `libero_spatial` subset. We will use `lerobot/smolvla_base` as the model
-to parameter-efficiently fine-tune:
-
-```
-lerobot-train \
- --policy.path=lerobot/smolvla_base \
- --policy.repo_id=your_hub_name/my_libero_smolvla \
- --dataset.repo_id=HuggingFaceVLA/libero \
- --policy.output_features=null \
- --policy.input_features=null \
- --policy.optimizer_lr=1e-3 \
- --policy.scheduler_decay_lr=1e-4 \
- --env.type=libero \
- --env.task=libero_spatial \
- --steps=100000 \
- --batch_size=32 \
- --peft.method_type=LORA \
- --peft.r=64
-```
-
-Note the `--peft.method_type` parameter that lets you select which PEFT method to use. Here we use
-[LoRA](https://huggingface.co/docs/peft/main/en/package_reference/lora) (Low-Rank Adaptation), which is probably the most
-popular fine-tuning method to date. Low-rank adaptation means that we only fine-tune a matrix of comparatively low rank
-instead of the full weight matrix. This rank can be specified using the `--peft.r` parameter: the higher the rank,
-the closer you get to full fine-tuning.
-
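-For reference, here is roughly the PEFT configuration those flags correspond to (a sketch; LeRobot builds the actual config from the `--peft.*` arguments):
-
-```python
-from peft import LoraConfig
-
-# Roughly what --peft.method_type=LORA --peft.r=64 configures.
-lora_config = LoraConfig(
-    r=64,                                 # rank of the low-rank update matrices
-    target_modules=["q_proj", "v_proj"],  # default attention projections (see below)
-)
-```
-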
-More complex methods with additional parameters are not yet supported; feel free to raise an issue
-if you want to see a specific PEFT method supported.
-
-By default, PEFT will target the `q_proj` and `v_proj` layers of the LM expert in SmolVLA. It will also target the
-state and action projection matrices as they are most likely task-dependent. If you need to target different layers
-you can use `--peft.target_modules` to specify which layers to target. You can refer to the respective PEFT method's
-documentation to see what inputs are supported, (e.g., [LoRA's target_modules documentation](https://huggingface.co/docs/peft/main/en/package_reference/lora#peft.LoraConfig.target_modules)).
-Usually a list of suffixes or a regex are supported. For example, to target the MLPs of the `lm_expert` instead of
-the `q` and `v` projections, use:
-
-```
---peft.target_modules='(model\.vlm_with_expert\.lm_expert\..*\.(down|gate|up)_proj|.*\.(state_proj|action_in_proj|action_out_proj|action_time_mlp_in|action_time_mlp_out))'
-```
-
-In case you need to fully fine-tune a layer instead of just adapting it, you can supply a list of layer suffixes
-to the `--peft.full_training_modules` parameter:
-
-```
---peft.full_training_modules=["state_proj"]
-```
-
-The learning rate and the scheduled target learning rate can usually be scaled by a factor of 10 compared to the
-learning rate used for full fine-tuning (e.g., 1e-4 normal, so 1e-3 using LoRA).
diff --git a/lerobot/docs/source/phone_teleop.mdx b/lerobot/docs/source/phone_teleop.mdx
deleted file mode 100644
index f4850faa9d213524e449f477ef5c733bd203b2db..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/phone_teleop.mdx
+++ /dev/null
@@ -1,191 +0,0 @@
-# Phone
-
-Use your phone (iOS or Android) to control your robot.
-
-**In this guide you'll learn:**
-
-- How to connect an iOS/Android phone
-- How phone pose is mapped to robot end‑effector (EE) targets
-- How to tweak safety limits, gripper control, and IK settings
-
-To use phone to control your robot, install the relevant dependencies with:
-
-```bash
-pip install lerobot[phone]
-```
-
-## Get started
-
-### Supported platforms
-
-- iOS: Uses the HEBI Mobile I/O app (ARKit pose + buttons). Download and open the app first; the examples will then discover it on your network and stream the phone pose and inputs.
-- Android: Uses the `teleop` package (WebXR). When you start the Python process, it prints a local URL. Open the link on your phone, tap Start, then use Move to stream pose.
-
-Links:
-
-- Android WebXR library: [`teleop` on PyPI](https://pypi.org/project/teleop/)
-- iOS app: [HEBI Mobile I/O](https://docs.hebi.us/tools.html#mobile-io)
-
-### Phone orientation and controls
-
-- Orientation: hold the phone with the screen facing up and the top edge pointing in the same direction as the robot gripper. This ensures calibration aligns the phone’s frame with the robot frame so motion feels natural; see the image below for reference.
-- Enable/disable:
- - iOS: Hold `B1` to enable teleoperation, release to stop. The first press captures a reference pose.
- - Android: Press and hold the `Move` button, release to stop. The first press captures a reference pose.
-- Gripper control:
- - iOS: Analog input `A3` controls the gripper as velocity input.
- - Android: Buttons `A` and `B` act like increment/decrement (A opens, B closes). You can tune velocity in the `GripperVelocityToJoint` step.
-
-
-
-### Step 1: Choose the platform
-
-Modify the examples to use `PhoneOS.IOS` or `PhoneOS.ANDROID` in `PhoneConfig`. The API is identical across platforms; only the input source differs. All examples are under `examples/` and have `phone_so100_*.py` variants.
-
-Teleoperation example:
-
-```python
-from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
-from lerobot.teleoperators.phone.phone import Phone  # import path assumed; adjust to your version
-
-teleop_config = PhoneConfig(phone_os=PhoneOS.IOS)  # or PhoneOS.ANDROID
-teleop_device = Phone(teleop_config)
-```
-
-### Step 2: Connect and calibrate
-
-When `Phone(teleop_config)` is created and `connect()` is called, calibration is prompted automatically. Hold the phone in the orientation described above, then:
-
-- iOS: press and hold `B1` to capture the reference pose.
-- Android: press `Move` button on the WebXR page to capture the reference pose.
-
-Why calibrate? We capture the current pose so that subsequent poses are expressed in a robot-aligned frame. When you press the button again to enable control, the position is recaptured to avoid drift from the phone having been repositioned while control was disabled.
-
-### Step 3: Run an example
-
-Run one of the example scripts to teleoperate, record a dataset, replay a dataset, or evaluate a policy.
-
-All scripts assume you configured your robot (e.g., SO-100 follower) and set the correct serial port.
-
-Additionally, you need to **copy the robot's URDF to the examples folder**. For the examples in this tutorial (using SO100/SO101), it is highly recommended to use the URDF from the [SO-ARM100 repo](https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf).
-
-- Run this example to teleoperate:
-
- ```bash
- python examples/phone_to_so100/teleoperate.py
- ```
-
-After running the example:
-
-- Android: after starting the script, open the printed local URL on your phone, tap Start, then press and hold Move.
-- iOS: open HEBI Mobile I/O first; B1 enables motion. A3 controls the gripper.
-
-Additionally you can customize mapping or safety limits by editing the processor steps shown in the examples. You can also remap inputs (e.g., use a different analog input) or adapt the pipeline to other robots (e.g., LeKiwi) by modifying the input and kinematics steps. More about this in the [Processors for Robots and Teleoperators](./processors_robots_teleop) guide.
-
-- Run this example to record a dataset, which saves absolute end effector observations and actions:
-
- ```bash
- python examples/phone_to_so100/record.py
- ```
-
-- Run this example to replay recorded episodes:
-
- ```bash
- python examples/phone_to_so100/replay.py
- ```
-
-- Run this example to evaluate a pretrained policy:
-
- ```bash
- python examples/phone_to_so100/evaluate.py
- ```
-
-### Important pipeline steps and options
-
-- Kinematics are used in multiple steps. We use [Placo](https://github.com/Rhoban/placo), a wrapper around Pinocchio, to handle our kinematics. We construct the kinematics object by passing the robot's URDF and target frame. We set `target_frame_name` to the gripper frame.
-
- ```python
- kinematics_solver = RobotKinematics(
- urdf_path="./SO101/so101_new_calib.urdf",
- target_frame_name="gripper_frame_link",
- joint_names=list(robot.bus.motors.keys()),
- )
- ```
-
-- The `MapPhoneActionToRobotAction` step converts the calibrated phone pose and inputs into target deltas and gripper commands; its outputs are shown below.
-
- ```python
- action["enabled"] = enabled
- action["target_x"] = -pos[1] if enabled else 0.0
- action["target_y"] = pos[0] if enabled else 0.0
- action["target_z"] = pos[2] if enabled else 0.0
- action["target_wx"] = rotvec[1] if enabled else 0.0
- action["target_wy"] = rotvec[0] if enabled else 0.0
- action["target_wz"] = -rotvec[2] if enabled else 0.0
- action["gripper_vel"] = gripper_vel # Still send gripper action when disabled
- ```
-
-- The `EEReferenceAndDelta` step converts target deltas into an absolute desired EE pose, storing a reference on enable. The `end_effector_step_sizes` are the step sizes for the EE pose and can be modified to change the motion speed.
-
- ```python
- EEReferenceAndDelta(
- kinematics=kinematics_solver,
- end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5},
- motor_names=list(robot.bus.motors.keys()),
- use_latched_reference=True,
- ),
- ```
-
-- The `EEBoundsAndSafety` step clamps EE motion to a workspace and checks for large EE step jumps to ensure safety. The `end_effector_bounds` define the workspace for the EE pose, and `max_ee_step_m` sets the per-step limit; both can be modified to adjust the safety behavior.
-
- ```python
- EEBoundsAndSafety(
- end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]},
- max_ee_step_m=0.10,
- )
- ```
-
-- The `GripperVelocityToJoint` step turns a velocity‑like gripper input into an absolute gripper position using the current measured state. The `speed_factor` is the factor by which the velocity is multiplied.
-
- ```python
- GripperVelocityToJoint(speed_factor=20.0)
- ```
-
-#### Different IK initial guesses
-
-We use different IK initial guesses in the kinematic steps. As initial guess either the current measured joints or the previous IK solution is used.
-
-- Closed loop (used in record/eval): sets `initial_guess_current_joints=True` so IK starts from the measured joints each frame.
-
- ```python
- InverseKinematicsEEToJoints(
- kinematics=kinematics_solver,
- motor_names=list(robot.bus.motors.keys()),
- initial_guess_current_joints=True, # closed loop
- )
- ```
-
-- Open loop (used in replay): sets `initial_guess_current_joints=False` so IK continues from the previous IK solution rather than the measured state. This preserves action stability when we replay without feedback.
-
- ```python
- InverseKinematicsEEToJoints(
- kinematics=kinematics_solver,
- motor_names=list(robot.bus.motors.keys()),
- initial_guess_current_joints=False, # open loop
- )
- ```
-
-### Pipeline steps explained
-
-- MapPhoneActionToRobotAction: converts calibrated phone pose and inputs into target deltas and a gripper command. Motion is gated by an enable signal (B1 on iOS, Move on Android).
-- EEReferenceAndDelta: latches a reference EE pose on enable and combines it with target deltas to produce an absolute desired EE pose each frame. When disabled, it keeps sending the last commanded pose.
-- EEBoundsAndSafety: clamps the EE pose to a workspace and rate‑limits jumps for safety. Also declares `action.ee.*` features.
-- InverseKinematicsEEToJoints: turns an EE pose into joint positions with IK. `initial_guess_current_joints=True` is recommended for closed‑loop control; set `False` for open‑loop replay for stability.
-- GripperVelocityToJoint: integrates a velocity‑like gripper input into an absolute gripper position using the current measured state.
-- ForwardKinematicsJointsToEE: computes `observation.state.ee.*` from observed joints for logging and training on EE state.
-
-### Troubleshooting
-
-- iOS not discovered: ensure HEBI Mobile I/O is open and your laptop/phone are on the same network.
-- Android URL not reachable: make sure you opened the `https` URL (not `http`), use the exact IP printed by the script, and tell your browser to proceed past the self-signed certificate warning.
-- Motion feels inverted: adjust the sign flips in `MapPhoneActionToRobotAction` or swap axes to match your setup.
diff --git a/lerobot/docs/source/pi0.mdx b/lerobot/docs/source/pi0.mdx
deleted file mode 100644
index 16dfa822dbf495c1b5c83f2211faaf59ccd3bd8a..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/pi0.mdx
+++ /dev/null
@@ -1,101 +0,0 @@
-# π₀ (Pi0)
-
-π₀ is a **Vision-Language-Action model for general robot control**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
-
-## Model Overview
-
-π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi0). Unlike traditional robot programs that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
-
-
-
-### The Vision for Physical Intelligence
-
-As described by Physical Intelligence, while AI has achieved remarkable success in digital domains, from chess-playing to drug discovery, human intelligence still dramatically outpaces AI in the physical world. To paraphrase Moravec's paradox, winning a game of chess represents an "easy" problem for AI, but folding a shirt or cleaning up a table requires solving some of the most difficult engineering problems ever conceived. π₀ represents a first step toward developing artificial physical intelligence that enables users to simply ask robots to perform any task they want, just like they can with large language models.
-
-### Architecture and Approach
-
-π₀ combines several key innovations:
-
-- **Flow Matching**: Uses a novel method to augment pre-trained VLMs with continuous action outputs via flow matching (a variant of diffusion models)
-- **Cross-Embodiment Training**: Trained on data from 8 distinct robot platforms including UR5e, Bimanual UR5e, Franka, Bimanual Trossen, Bimanual ARX, Mobile Trossen, and Mobile Fibocom
-- **Internet-Scale Pre-training**: Inherits semantic knowledge from a pre-trained 3B parameter Vision-Language Model
-- **High-Frequency Control**: Outputs motor commands at up to 50 Hz for real-time dexterous manipulation
-
-## Installation Requirements
-
-1. Install LeRobot by following our [Installation Guide](./installation).
-2. Install Pi0 dependencies by running:
-
- ```bash
- pip install -e ".[pi]"
- ```
-
- > [!NOTE]
- > For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
- >
- > This will be solved in the next patch release
-
-## Training Data and Capabilities
-
-π₀ is trained on the largest robot interaction dataset to date, combining three key data sources:
-
-1. **Internet-Scale Pre-training**: Vision-language data from the web for semantic understanding
-2. **Open X-Embodiment Dataset**: Open-source robot manipulation datasets
-3. **Physical Intelligence Dataset**: Large and diverse dataset of dexterous tasks across 8 distinct robots
-
-## Usage
-
-To use π₀ in LeRobot, specify the policy type as:
-
-```python
-policy.type=pi0
-```
-
-## Training
-
-For training π₀, you can use the standard LeRobot training script with the appropriate configuration:
-
-```bash
-python src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your_dataset \
- --policy.type=pi0 \
- --output_dir=./outputs/pi0_training \
- --job_name=pi0_training \
- --policy.pretrained_path=lerobot/pi0_base \
- --policy.repo_id=your_repo_id \
- --policy.compile_model=true \
- --policy.gradient_checkpointing=true \
- --policy.dtype=bfloat16 \
- --policy.freeze_vision_encoder=false \
- --policy.train_expert_only=false \
- --steps=3000 \
- --policy.device=cuda \
- --batch_size=32
-```
-
-### Key Training Parameters
-
-- **`--policy.compile_model=true`**: Enables model compilation for faster training
-- **`--policy.gradient_checkpointing=true`**: Reduces memory usage significantly during training
-- **`--policy.dtype=bfloat16`**: Use mixed precision training for efficiency
-- **`--batch_size=32`**: Batch size for training, adapt this based on your GPU memory
-- **`--policy.pretrained_path=lerobot/pi0_base`**: The base π₀ model you want to finetune, options are:
- - [lerobot/pi0_base](https://huggingface.co/lerobot/pi0_base)
- - [lerobot/pi0_libero](https://huggingface.co/lerobot/pi0_libero) (specifically trained on the Libero dataset)
-
-### Training Parameters Explained
-
-| Parameter | Default | Description |
-| ----------------------- | ------- | ------------------------------------------- |
-| `freeze_vision_encoder` | `false` | Do not freeze the vision encoder |
-| `train_expert_only` | `false` | Do not freeze the VLM, train all parameters |
-
-**💡 Tip**: Setting `train_expert_only=true` freezes the VLM and trains only the action expert and projections, allowing finetuning with reduced memory usage.
-
-## License
-
-This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
diff --git a/lerobot/docs/source/pi05.mdx b/lerobot/docs/source/pi05.mdx
deleted file mode 100644
index 36193a512391a4e438ee5935a12468a06c440f12..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/pi05.mdx
+++ /dev/null
@@ -1,123 +0,0 @@
-# π₀.₅ (Pi05) Policy
-
-π₀.₅ is a **Vision-Language-Action model with open-world generalization**, from Physical Intelligence. The LeRobot implementation is adapted from their open source [OpenPI](https://github.com/Physical-Intelligence/openpi) repository.
-
-## Model Overview
-
-π₀.₅ represents a significant evolution from π₀, developed by [Physical Intelligence](https://www.physicalintelligence.company/blog/pi05) to address a big challenge in robotics: **open-world generalization**. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
-
-### The Generalization Challenge
-
-As Physical Intelligence explains, the fundamental challenge isn't performing feats of agility or dexterity, but generalization: the ability to correctly perform tasks in new settings with new objects. Consider a robot cleaning different homes: each home has different objects in different places. Generalization must occur at multiple levels:
-
-- **Physical Level**: Understanding how to pick up a spoon (by the handle) or plate (by the edge), even with unseen objects in cluttered environments
-- **Semantic Level**: Understanding task semantics, where to put clothes and shoes (laundry hamper, not on the bed), and what tools are appropriate for cleaning spills
-- **Environmental Level**: Adapting to "messy" real-world environments like homes, grocery stores, offices, and hospitals
-
-### Co-Training on Heterogeneous Data
-
-The breakthrough innovation in π₀.₅ is **co-training on heterogeneous data sources**. The model learns from:
-
-1. **Multimodal Web Data**: Image captioning, visual question answering, object detection
-2. **Verbal Instructions**: Humans coaching robots through complex tasks step-by-step
-3. **Subtask Commands**: High-level semantic behavior labels (e.g., "pick up the pillow" for an unmade bed)
-4. **Cross-Embodiment Robot Data**: Data from various robot platforms with different capabilities
-5. **Multi-Environment Data**: Static robots deployed across many different homes
-6. **Mobile Manipulation Data**: ~400 hours of mobile robot demonstrations
-
-This diverse training mixture creates a "curriculum" that enables generalization across physical, visual, and semantic levels simultaneously.
-
-## Installation Requirements
-
-1. Install LeRobot by following our [Installation Guide](./installation).
-2. Install Pi0.5 dependencies by running:
-
- ```bash
- pip install -e ".[pi]"
- ```
-
- > [!NOTE]
- > For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
- >
- > This will be solved in the next patch release
-
-## Usage
-
-To use π₀.₅ in your LeRobot configuration, specify the policy type as:
-
-```python
-policy.type=pi05
-```
-
-## Training
-
-### Training Command Example
-
-Here's a complete training command for finetuning the base π₀.₅ model on your own dataset:
-
-```bash
-python src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your_dataset \
- --policy.type=pi05 \
- --output_dir=./outputs/pi05_training \
- --job_name=pi05_training \
- --policy.repo_id=your_repo_id \
- --policy.pretrained_path=lerobot/pi05_base \
- --policy.compile_model=true \
- --policy.gradient_checkpointing=true \
- --wandb.enable=true \
- --policy.dtype=bfloat16 \
- --policy.freeze_vision_encoder=false \
- --policy.train_expert_only=false \
- --steps=3000 \
- --policy.device=cuda \
- --batch_size=32
-```
-
-### Key Training Parameters
-
-- **`--policy.compile_model=true`**: Enables model compilation for faster training
-- **`--policy.gradient_checkpointing=true`**: Reduces memory usage significantly during training
-- **`--policy.dtype=bfloat16`**: Use mixed precision training for efficiency
-- **`--batch_size=32`**: Batch size for training, adapt this based on your GPU memory
-- **`--policy.pretrained_path=lerobot/pi05_base`**: The base π₀.₅ model you want to finetune, options are:
- - [lerobot/pi05_base](https://huggingface.co/lerobot/pi05_base)
- - [lerobot/pi05_libero](https://huggingface.co/lerobot/pi05_libero) (specifically trained on the Libero dataset)
-
-### Training Parameters Explained
-
-| Parameter | Default | Description |
-| ----------------------- | ------- | ------------------------------------------- |
-| `freeze_vision_encoder` | `false` | Do not freeze the vision encoder |
-| `train_expert_only` | `false` | Do not freeze the VLM, train all parameters |
-
-**💡 Tip**: Setting `train_expert_only=true` freezes the VLM and trains only the action expert and projections, allowing finetuning with reduced memory usage.
-
-If your dataset is not converted with `quantiles`, you can convert it with the following command:
-
-```bash
-python src/lerobot/datasets/v30/augment_dataset_quantile_stats.py \
- --repo-id=your_dataset
-```
-
-Or train pi05 with this normalization mapping: `--policy.normalization_mapping='{"ACTION": "MEAN_STD", "STATE": "MEAN_STD", "VISUAL": "IDENTITY"}'`
-
-## Performance Results
-
-### Libero Benchmark Results
-
-π₀.₅ has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the libero base model for an additional 6k steps on the Libero dataset and compared the results to the OpenPI reference results.
-
-| Benchmark | LeRobot Implementation | OpenPI Reference |
-| ------------------ | ---------------------- | ---------------- |
-| **Libero Spatial** | 97.0% | 98.8% |
-| **Libero Object** | 99.0% | 98.2% |
-| **Libero Goal** | 98.0% | 98.0% |
-| **Libero 10** | 96.0% | 92.4% |
-| **Average** | 97.5% | 96.85% |
-
-These results demonstrate π₀.₅'s strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.
-
-## License
-
-This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
diff --git a/lerobot/docs/source/pi0fast.mdx b/lerobot/docs/source/pi0fast.mdx
deleted file mode 100644
index d69e79977d034d6278cef1a647f7a0ae0f82419a..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/pi0fast.mdx
+++ /dev/null
@@ -1,246 +0,0 @@
-# π₀-FAST (Pi0-FAST)
-
-π₀-FAST is a **Vision-Language-Action model for general robot control** that uses autoregressive next-token prediction to model continuous robot actions.
-
-## Model Overview
-
-π₀-FAST combines the power of Vision-Language Models with a novel action tokenization approach called **FAST (Frequency-space Action Sequence Tokenization)**. This enables training autoregressive VLAs on highly dexterous tasks that are impossible with standard binning-based discretization, while training **up to 5x faster** than diffusion-based approaches like π₀.
-
-
-
-### Why FAST?
-
-Standard approaches for robot action tokenization use simple per-dimension, per-timestep binning schemes. While passable for simple behaviors, this rapidly breaks down for complex and dexterous skills that require precision and high-frequency control.
-
-FAST solves this by compressing action sequences using signal processing techniques, resulting in a dense sequence of action tokens that can be predicted autoregressively—just like language tokens.
-
-### How FAST Tokenization Works
-
-The FAST tokenizer compresses action sequences through the following steps:
-
-1. **Normalize**: Take a continuous action chunk of shape `(H, D)` where `H` is the horizon and `D` is the action dimension. Normalize using one of the supported normalization methods (Quantiles recommended to handle outliers).
-
-2. **Discrete Cosine Transform (DCT)**: Apply DCT (via scipy) to each action dimension separately. DCT is a compression algorithm commonly used in image and audio codecs (JPEG, MP3).
-
-3. **Quantization**: Round and remove insignificant coefficients for each action dimension, producing a sparse frequency matrix.
-
-4. **Flatten**: Flatten the matrix into a 1D vector, with low-frequency components first.
-
-5. **Byte Pair Encoding (BPE)**: Train a BPE tokenizer to compress the DCT coefficients into dense action tokens, typically achieving **10x compression** over prior tokenization approaches.
-
-This approach can transform **any existing VLM** into a VLA by training it to predict these FAST tokens.
-
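-To make the compression pipeline concrete, here is a rough sketch of steps 2-4 for a single normalized chunk (the `scale` mirrors the tokenizer's `--scale` parameter below; BPE and the exact coefficient-pruning rule are omitted):
-
-```python
-import numpy as np
-from scipy.fft import dct
-
-def fast_compress(chunk: np.ndarray, scale: float = 10.0) -> np.ndarray:
-    """Sketch: one normalized action chunk of shape (H, D) -> integer coefficient sequence."""
-    coeffs = dct(chunk, axis=0, norm="ortho")         # step 2: DCT per action dimension
-    quantized = np.round(coeffs * scale).astype(int)  # step 3: small coefficients round to zero
-    return quantized.reshape(-1)                      # step 4: flatten, low frequencies first
-```
-
-In the real tokenizer, this integer sequence is then compressed with BPE into the final action tokens.
-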
-## Installation Requirements
-
-1. Install LeRobot by following our [Installation Guide](./installation).
-2. Install π₀-FAST dependencies by running:
-
- ```bash
- pip install -e ".[pi]"
- ```
-
- > [!NOTE]
- > For lerobot 0.4.0, if you want to install the `pi` extra, you will have to run: `pip install "lerobot[pi]@git+https://github.com/huggingface/lerobot.git"`.
- >
- > This will be solved in the next patch release
-
-## Training a Custom FAST Tokenizer
-
-You have two options for the FAST tokenizer:
-
-1. **Use the pre-trained tokenizer**: The `physical-intelligence/fast` tokenizer was trained on 1M+ real robot action sequences and works as a general-purpose tokenizer.
-
-2. **Train your own tokenizer**: For maximum performance on your specific dataset, you can finetune the tokenizer on your own data.
-
-### Training Your Own Tokenizer
-
-```bash
-lerobot-train-tokenizer \
- --repo_id "user/my-lerobot-dataset" \
- --action_horizon 10 \
- --encoded_dims "0:6" \
- --vocab_size 1024 \
- --scale 10.0 \
- --normalization_mode QUANTILES \
- --output_dir "./my_fast_tokenizer" \
- --push_to_hub \
- --hub_repo_id "username/my-action-tokenizer"
-```
-
-### Key Tokenizer Parameters
-
-| Parameter | Description | Default |
-| ---------------------- | --------------------------------------------------------------------------------- | ------------ |
-| `--repo_id` | LeRobot dataset repository ID | Required |
-| `--action_horizon` | Number of future actions in each chunk | `10` |
-| `--encoded_dims` | Comma-separated dimension ranges to encode (e.g., `"0:6,7:23"`) | `"0:6,7:23"` |
-| `--vocab_size` | BPE vocabulary size | `1024` |
-| `--scale` | DCT scaling factor for quantization | `10.0` |
-| `--normalization_mode` | Normalization mode (`MEAN_STD`, `MIN_MAX`, `QUANTILES`, `QUANTILE10`, `IDENTITY`) | `QUANTILES` |
-| `--sample_fraction` | Fraction of chunks to sample per episode | `0.1` |
-
-## Usage
-
-To use π₀-FAST in LeRobot, specify the policy type as:
-
-```python
-policy.type=pi0_fast
-```
-
-## Training
-
-For training π₀-FAST, you can use the LeRobot training script:
-
-```bash
-lerobot-train \
- --dataset.repo_id=your_dataset \
- --policy.type=pi0_fast \
- --output_dir=./outputs/pi0fast_training \
- --job_name=pi0fast_training \
- --policy.pretrained_path=lerobot/pi0_fast_base \
- --policy.dtype=bfloat16 \
- --policy.gradient_checkpointing=true \
- --policy.chunk_size=10 \
- --policy.n_action_steps=10 \
- --policy.max_action_tokens=256 \
- --steps=100000 \
- --batch_size=4 \
- --policy.device=cuda
-```
-
-### Key Training Parameters
-
-| Parameter | Description | Default |
-| -------------------------------------- | -------------------------------------------------- | ---------------------------- |
-| `--policy.gradient_checkpointing=true` | Reduces memory usage significantly during training | `false` |
-| `--policy.dtype=bfloat16` | Use mixed precision training for efficiency | `float32` |
-| `--policy.chunk_size` | Number of action steps to predict (action horizon) | `50` |
-| `--policy.n_action_steps` | Number of action steps to execute | `50` |
-| `--policy.max_action_tokens` | Maximum number of FAST tokens per action chunk | `256` |
-| `--policy.action_tokenizer_name` | FAST tokenizer to use | `physical-intelligence/fast` |
-| `--policy.compile_model=true` | Enable torch.compile for faster training | `false` |
-
-## Inference
-
-### KV-Caching for Fast Inference
-
-π₀-FAST supports **KV-caching**, a widely used optimization in LLM inference. This caches the key-value pairs from the attention mechanism, avoiding redundant computation during autoregressive decoding.
-
-```python
-# KV-caching is enabled by default
-policy.use_kv_cache=true
-```
-
-### Inference Example
-
-```python
-from lerobot.policies.pi0_fast import PI0FastPolicy, PI0FastConfig
-
-# Load the policy
-policy = PI0FastPolicy.from_pretrained("your-model-path")
-
-# During inference
-actions = policy.predict_action_chunk(batch)
-```
-
-## Model Architecture
-
-π₀-FAST uses a PaliGemma-based architecture:
-
-- **Vision Encoder**: SigLIP vision tower for image understanding
-- **Language Model**: Gemma 2B for processing language instructions and predicting action tokens
-
-The model takes images, text instructions, and robot state as input, and outputs discrete FAST tokens that are decoded back to continuous actions.
-
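-Schematically, decoding inverts the compression sketch from the tokenization section (BPE decoding and de-normalization omitted; this is an illustration, not the library's decoder):
-
-```python
-import numpy as np
-from scipy.fft import idct
-
-def fast_decompress(tokens: np.ndarray, horizon: int, dim: int, scale: float = 10.0) -> np.ndarray:
-    # Inverse of fast_compress above: integer coefficients -> (H, D) normalized actions.
-    coeffs = tokens.reshape(horizon, dim).astype(float) / scale
-    return idct(coeffs, axis=0, norm="ortho")
-```
-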
-## Configuration Options
-
-| Parameter | Description | Default |
-| -------------------- | ----------------------------------------------- | ---------- |
-| `paligemma_variant` | VLM backbone variant (`gemma_300m`, `gemma_2b`) | `gemma_2b` |
-| `max_state_dim` | Maximum state vector dimension (padded) | `32` |
-| `max_action_dim` | Maximum action vector dimension (padded) | `32` |
-| `temperature` | Sampling temperature (0.0 for greedy) | `0.0` |
-| `max_decoding_steps` | Maximum decoding steps | `256` |
-| `use_kv_cache` | Enable KV caching for faster inference | `true` |
-
-## Comparison with π₀
-
-| Feature | π₀ | π₀-FAST |
-| --------------------- | ------------------------- | ---------------------------- |
-| Action Representation | Flow Matching (Diffusion) | Autoregressive Tokens (FAST) |
-| Training Speed | 1x | **5x faster** |
-| Dexterity | High | High |
-| Inference Method | Iterative Denoising | Autoregressive Decoding |
-| KV-Caching | N/A | Supported |
-
-## Reproducing π₀Fast results
-
-We reproduce the results of π₀-FAST on the LIBERO benchmark using the LeRobot implementation. We take the LeRobot π₀-FAST base model [lerobot/pi0fast-base](https://huggingface.co/lerobot/pi0fast-base) and finetune it for an additional 40k steps in bfloat16, with a batch size of 256 on 8 H100 GPUs, using the [HuggingFace LIBERO dataset](https://huggingface.co/datasets/HuggingFaceVLA/libero).
-
-The finetuned model can be found here:
-
-- **π₀Fast LIBERO**: [lerobot/pi0fast-libero](https://huggingface.co/lerobot/pi0fast-libero)
-
-With the following training command:
-
-```bash
-lerobot-train \
- --dataset.repo_id=HuggingFaceVLA/libero \
- --output_dir=outputs/libero_pi0fast \
- --job_name=libero_pi0fast \
- --policy.path=lerobot/pi0fast_base \
- --policy.dtype=bfloat16 \
- --steps=100000 \
- --save_freq=20000 \
- --batch_size=4 \
- --policy.device=cuda \
- --policy.scheduler_warmup_steps=4000 \
- --policy.scheduler_decay_steps=100000 \
- --policy.scheduler_decay_lr=1e-5 \
- --policy.gradient_checkpointing=true \
- --policy.chunk_size=10 \
- --policy.n_action_steps=10 \
- --policy.max_action_tokens=256 \
- --policy.empty_cameras=1 \
-```
-
-We then evaluate the finetuned model using the LeRobot LIBERO implementation, by running the following command:
-
-```bash
-tasks="libero_object,libero_spatial,libero_goal,libero_10"
-lerobot-eval \
- --policy.path=lerobot/pi0fast-libero \
- --policy.max_action_tokens=256 \
- --env.type=libero \
- --policy.gradient_checkpointing=false \
- --env.task=${tasks} \
- --eval.batch_size=1 \
- --eval.n_episodes=1 \
- --rename_map='{"observation.images.image":"observation.images.base_0_rgb","observation.images.image2":"observation.images.left_wrist_0_rgb"}'
-```
-
-**Note:** We set `n_action_steps=10`, similar to the original OpenPI implementation.
-
-### Results
-
-We obtain the following results on the LIBERO benchmark:
-
-| Model | LIBERO Spatial | LIBERO Object | LIBERO Goal | LIBERO 10 | Average |
-| ----------- | -------------- | ------------- | ----------- | --------- | -------- |
-| **π₀-fast** | 70.0 | 100.0 | 100.0 | 60.0 | **82.5** |
-
-The full evaluation output folder, including videos, is available [here](https://drive.google.com/drive/folders/1HXpwPTRm4hx6g1sF2P7OOqGG0TwPU7LQ?usp=sharing)
-
-## License
-
-This model follows the **Apache 2.0 License**, consistent with the original [OpenPI repository](https://github.com/Physical-Intelligence/openpi).
-
-## References
-
-- [FAST: Efficient Robot Action Tokenization](https://www.physicalintelligence.company/research/fast) - Physical Intelligence Blog
-- [OpenPI Repository](https://github.com/Physical-Intelligence/openpi) - Original implementation
-- [FAST Tokenizer on Hugging Face](https://huggingface.co/physical-intelligence/fast) - Pre-trained tokenizer
diff --git a/lerobot/docs/source/policy_act_README.md b/lerobot/docs/source/policy_act_README.md
deleted file mode 100644
index ed884402c4c1094c69308aeeeed2ceb577e94628..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_act_README.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Paper
-
-https://tonyzhaozh.github.io/aloha
-
-## Citation
-
-```bibtex
-@article{zhao2023learning,
- title={Learning fine-grained bimanual manipulation with low-cost hardware},
- author={Zhao, Tony Z and Kumar, Vikash and Levine, Sergey and Finn, Chelsea},
- journal={arXiv preprint arXiv:2304.13705},
- year={2023}
-}
-```
diff --git a/lerobot/docs/source/policy_diffusion_README.md b/lerobot/docs/source/policy_diffusion_README.md
deleted file mode 100644
index b8493afe0dbb363f69cd813f69150b77fc3f44d7..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_diffusion_README.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Paper
-
-https://diffusion-policy.cs.columbia.edu
-
-## Citation
-
-```bibtex
-@article{chi2024diffusionpolicy,
- author = {Cheng Chi and Zhenjia Xu and Siyuan Feng and Eric Cousineau and Yilun Du and Benjamin Burchfiel and Russ Tedrake and Shuran Song},
- title ={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
- journal = {The International Journal of Robotics Research},
- year = {2024},
-}
-```
diff --git a/lerobot/docs/source/policy_groot_README.md b/lerobot/docs/source/policy_groot_README.md
deleted file mode 100644
index c2e435d9e7a91c6644cb0e626947ae60f7cf888c..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_groot_README.md
+++ /dev/null
@@ -1,27 +0,0 @@
-## Research Paper
-
-Paper: https://research.nvidia.com/labs/gear/gr00t-n1_5/
-
-## Repository
-
-Code: https://github.com/NVIDIA/Isaac-GR00T
-
-## Citation
-
-```bibtex
-@inproceedings{gr00tn1_2025,
- archivePrefix = {arxiv},
- eprint = {2503.14734},
- title = {{GR00T} {N1}: An Open Foundation Model for Generalist Humanoid Robots},
- author = {NVIDIA and Johan Bjorck and Fernando Castañeda and Nikita Cherniadev and Xingye Da and Runyu Ding and Linxi "Jim" Fan and Yu Fang and Dieter Fox and Fengyuan Hu and Spencer Huang and Joel Jang and Zhenyu Jiang and Jan Kautz and Kaushil Kundalia and Lawrence Lao and Zhiqi Li and Zongyu Lin and Kevin Lin and Guilin Liu and Edith Llontop and Loic Magne and Ajay Mandlekar and Avnish Narayan and Soroush Nasiriany and Scott Reed and You Liang Tan and Guanzhi Wang and Zu Wang and Jing Wang and Qi Wang and Jiannan Xiang and Yuqi Xie and Yinzhen Xu and Zhenjia Xu and Seonghyeon Ye and Zhiding Yu and Ao Zhang and Hao Zhang and Yizhou Zhao and Ruijie Zheng and Yuke Zhu},
- month = {March},
- year = {2025},
- booktitle = {ArXiv Preprint},
-}
-```
-
-## Additional Resources
-
-Blog: https://developer.nvidia.com/isaac/gr00t
-
-Hugging Face Model: https://huggingface.co/nvidia/GR00T-N1.5-3B
diff --git a/lerobot/docs/source/policy_smolvla_README.md b/lerobot/docs/source/policy_smolvla_README.md
deleted file mode 100644
index 2e83a080c19b5c23aa48f80b254cbb28191f3a99..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_smolvla_README.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Paper
-
-https://arxiv.org/abs/2506.01844
-
-## Citation
-
-```bibtex
-@article{shukor2025smolvla,
- title={SmolVLA: A Vision-Language-Action Model for Affordable and Efficient Robotics},
- author={Shukor, Mustafa and Aubakirova, Dana and Capuano, Francesco and Kooijmans, Pepijn and Palma, Steven and Zouitine, Adil and Aractingi, Michel and Pascal, Caroline and Russi, Martino and Marafioti, Andres and Alibert, Simon and Cord, Matthieu and Wolf, Thomas and Cadene, Remi},
- journal={arXiv preprint arXiv:2506.01844},
- year={2025}
-}
-```
diff --git a/lerobot/docs/source/policy_tdmpc_README.md b/lerobot/docs/source/policy_tdmpc_README.md
deleted file mode 100644
index 6a9eb295a3ac08f166aeae96209bdf1c7b0c3995..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_tdmpc_README.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Paper
-
-https://www.nicklashansen.com/td-mpc/
-
-## Citation
-
-```bibtex
-@inproceedings{Hansen2022tdmpc,
- title={Temporal Difference Learning for Model Predictive Control},
- author={Nicklas Hansen and Xiaolong Wang and Hao Su},
- booktitle={ICML},
- year={2022}
-}
-```
diff --git a/lerobot/docs/source/policy_vqbet_README.md b/lerobot/docs/source/policy_vqbet_README.md
deleted file mode 100644
index 1d1aa29aa65590c3465543aad288a7705151caab..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_vqbet_README.md
+++ /dev/null
@@ -1,14 +0,0 @@
-## Paper
-
-https://sjlee.cc/vq-bet/
-
-## Citation
-
-```bibtex
-@article{lee2024behavior,
- title={Behavior generation with latent actions},
- author={Lee, Seungjae and Wang, Yibin and Etukuru, Haritheja and Kim, H Jin and Shafiullah, Nur Muhammad Mahi and Pinto, Lerrel},
- journal={arXiv preprint arXiv:2403.03181},
- year={2024}
-}
-```
diff --git a/lerobot/docs/source/policy_walloss_README.md b/lerobot/docs/source/policy_walloss_README.md
deleted file mode 100644
index 26e9d122a261cfb69c40eb3adcafa4f7117cb115..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/policy_walloss_README.md
+++ /dev/null
@@ -1,45 +0,0 @@
-# WALL-OSS
-
-This repository contains the Hugging Face port of [**WALL-OSS**](https://x2robot.com/en/research/68bc2cde8497d7f238dde690), a Vision-Language-Action model for cross-embodiment robotic control based on Qwen2.5-VL with flow matching/FAST action prediction.
-
----
-
-## Model Overview
-
-| Feature | Description |
-| ------------------ | ----------------------------------------------------- |
-| Base Model | Qwen2.5-VL (Vision-Language Model) |
-| Action Prediction | Flow Matching (diffusion) or FAST (discrete tokens) |
-| Architecture | Mixture of Experts (MoE) with action-specific routing |
-| Multi-Modal Inputs | Vision (images/videos), Language, Proprioception |
-
----
-
-## Additional Resources
-
-Paper: https://arxiv.org/pdf/2509.11766
-
-Official Repository: https://github.com/X-Square-Robot/wall-x
-
-Hugging Face: https://huggingface.co/x-square-robot
-
----
-
-## Citation
-
-If you use this work, please cite:
-
-```bibtex
-@article{zhai2025igniting,
- title = {Igniting VLMs Toward the Embodied Space},
- author = {Zhai, Andy and Liu, Brae and Fang, Bruno and Cai, Chalse and Ma, Ellie and Yin, Ethan and Wang, Hao and Zhou, Hugo and Wang, James and Shi, Lights and Liang, Lucy and Wang, Make and Wang, Qian and Gan, Roy and Yu, Ryan and Li, Shalfun and Liu, Starrick and Chen, Sylas and Chen, Vincent and Xu, Zach},
- journal = {arXiv preprint arXiv:2509.11766},
- year = {2025}
-}
-```
-
----
-
-## License
-
-This model follows the **Apache 2.0 License**, consistent with the original [WallX repository](https://github.com/X-Square-Robot/wall-x).
diff --git a/lerobot/docs/source/porting_datasets_v3.mdx b/lerobot/docs/source/porting_datasets_v3.mdx
deleted file mode 100644
index ff5088e2d702e2b2f6de69b0fb0e98ae4dff524d..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/porting_datasets_v3.mdx
+++ /dev/null
@@ -1,321 +0,0 @@
-# Porting Large Datasets to LeRobot Dataset v3.0
-
-This tutorial explains how to port large-scale robotic datasets to the LeRobot Dataset v3.0 format. We'll use the **DROID 1.0.1** dataset as our primary example, which demonstrates handling multi-terabyte datasets with thousands of shards across SLURM clusters.
-
-## File Organization: v2.1 vs v3.0
-
-Dataset v3.0 fundamentally changes how data is organized and stored:
-
-**v2.1 Structure (Episode-based)**:
-
-```
-dataset/
-├── data/chunk-000/episode_000000.parquet
-├── data/chunk-000/episode_000001.parquet
-├── videos/chunk-000/camera/episode_000000.mp4
-└── meta/episodes.jsonl
-```
-
-**v3.0 Structure (File-based)**:
-
-```
-dataset/
-├── data/chunk-000/file-000.parquet # Multiple episodes per file
-├── videos/camera/chunk-000/file-000.mp4 # Consolidated video chunks
-└── meta/episodes/chunk-000/file-000.parquet # Structured metadata
-```
-
-This transition from individual episode files to file-based chunks dramatically improves performance and reduces storage overhead.
-
-## What's New in Dataset v3.0
-
-Dataset v3.0 introduces significant improvements for handling large datasets:
-
-### 🏗️ **Enhanced File Organization**
-
-- **File-based structure**: Episodes are now grouped into chunked files rather than individual episode files
-- **Configurable file sizes**: target file sizes for data and video chunks can be tuned
-- **Improved storage efficiency**: Better compression and reduced overhead
-
-### 📊 **Modern Metadata Management**
-
-- **Parquet-based metadata**: Replaced JSON Lines with efficient parquet format
-- **Structured episode access**: Direct pandas DataFrame access via `dataset.meta.episodes`
-- **Per-episode statistics**: Enhanced statistics tracking at episode level
-
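-For example, here is a minimal sketch of inspecting the new metadata (the exact import path may vary slightly across LeRobot versions):
-
-```python
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-dataset = LeRobotDataset("your_id/droid_1.0.1")
-
-# In v3.0, episode metadata is a pandas DataFrame with one row per episode
-episodes = dataset.meta.episodes
-print(episodes.columns.tolist())
-print(episodes.head())
-```
-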
-### 🚀 **Performance Enhancements**
-
-- **Memory-mapped access**: Improved RAM usage through PyArrow memory mapping
-- **Faster loading**: Significantly reduced dataset initialization time
-- **Better scalability**: Designed for datasets with millions of episodes
-
-## Prerequisites
-
-Before porting large datasets, ensure you have:
-
-- **LeRobot installed** with v3.0 support. Follow our [Installation Guide](./installation).
-- **Sufficient storage**: Raw datasets can be very large (e.g., DROID requires 2TB)
-- **Cluster access** (recommended for large datasets): SLURM or similar job scheduler
-- **Dataset-specific dependencies**: For DROID, you'll need TensorFlow Dataset utilities
-
-## Understanding the DROID Dataset
-
-[DROID 1.0.1](https://droid-dataset.github.io/droid/the-droid-dataset) is an excellent example of a large-scale robotic dataset:
-
-- **Size**: 1.7TB (RLDS format), 8.7TB (raw data)
-- **Structure**: 2048 pre-defined TensorFlow dataset shards
-- **Content**: 76,000+ robot manipulation trajectories from Franka Emika Panda robots
-- **Scope**: Real-world manipulation tasks across multiple environments and objects
-- **Format**: Originally in TensorFlow Records/RLDS format, requiring conversion to LeRobot format
-- **Hosting**: Google Cloud Storage with public access via `gsutil`
-
-The dataset contains diverse manipulation demonstrations with:
-
-- Multiple camera views (wrist camera, exterior cameras)
-- Natural language task descriptions
-- Robot proprioceptive state and actions
-- Success/failure annotations
-
-### DROID Features Schema
-
-```python
-DROID_FEATURES = {
- # Episode markers
- "is_first": {"dtype": "bool", "shape": (1,)},
- "is_last": {"dtype": "bool", "shape": (1,)},
- "is_terminal": {"dtype": "bool", "shape": (1,)},
-
- # Language instructions
- "language_instruction": {"dtype": "string", "shape": (1,)},
- "language_instruction_2": {"dtype": "string", "shape": (1,)},
- "language_instruction_3": {"dtype": "string", "shape": (1,)},
-
- # Robot state
- "observation.state.gripper_position": {"dtype": "float32", "shape": (1,)},
- "observation.state.cartesian_position": {"dtype": "float32", "shape": (6,)},
- "observation.state.joint_position": {"dtype": "float32", "shape": (7,)},
-
- # Camera observations
- "observation.images.wrist_left": {"dtype": "image"},
- "observation.images.exterior_1_left": {"dtype": "image"},
- "observation.images.exterior_2_left": {"dtype": "image"},
-
- # Actions
- "action.gripper_position": {"dtype": "float32", "shape": (1,)},
- "action.cartesian_position": {"dtype": "float32", "shape": (6,)},
- "action.joint_position": {"dtype": "float32", "shape": (7,)},
-
- # Standard LeRobot format
- "observation.state": {"dtype": "float32", "shape": (8,)}, # joints + gripper
- "action": {"dtype": "float32", "shape": (8,)}, # joints + gripper
-}
-```
-
-## Approach 1: Single Computer Porting
-
-### Step 1: Install Dependencies
-
-For DROID specifically:
-
-```bash
-pip install tensorflow
-pip install tensorflow_datasets
-```
-
-For other datasets, install the appropriate readers for your source format.
-
-### Step 2: Download Raw Data
-
-Download DROID from Google Cloud Storage using `gsutil`:
-
-```bash
-# Install Google Cloud SDK if not already installed
-# https://cloud.google.com/sdk/docs/install
-
-# Download the full RLDS dataset (1.7TB)
-gsutil -m cp -r gs://gresearch/robotics/droid/1.0.1 /your/data/
-
-# Or download just the 100-episode sample (2GB) for testing
-gsutil -m cp -r gs://gresearch/robotics/droid_100 /your/data/
-```
-
-> [!WARNING]
-> Large datasets require substantial time and storage:
->
-> - **Full DROID (1.7TB)**: Several days to download depending on bandwidth
-> - **Processing time**: 7+ days for local porting of full dataset
-> - **Upload time**: 3+ days to push to Hugging Face Hub
-> - **Local storage**: ~400GB for processed LeRobot format
-
-### Step 3: Port the Dataset
-
-```bash
-python examples/port_datasets/port_droid.py \
- --raw-dir /your/data/droid/1.0.1 \
- --repo-id your_id/droid_1.0.1 \
- --push-to-hub
-```
-
-### Development and Testing
-
-For development, you can port a single shard:
-
-```bash
-python examples/port_datasets/port_droid.py \
- --raw-dir /your/data/droid/1.0.1 \
- --repo-id your_id/droid_1.0.1_test \
- --num-shards 2048 \
- --shard-index 0
-```
-
-This approach works for smaller datasets or testing, but large datasets require cluster computing.
-
-## Approach 2: SLURM Cluster Porting (Recommended)
-
-For large datasets like DROID, parallel processing across multiple nodes dramatically reduces processing time.
-
-### Step 1: Install Cluster Dependencies
-
-```bash
-pip install datatrove # Hugging Face's distributed processing library
-```
-
-### Step 2: Configure Your SLURM Environment
-
-Find your partition information:
-
-```bash
-sinfo --format="%R" # List available partitions
-sinfo -N -p your_partition -h -o "%N cpus=%c mem=%m" # Check resources
-```
-
-Choose a **CPU partition** - no GPU needed for dataset porting.
-
-### Step 3: Launch Parallel Porting Jobs
-
-```bash
-python examples/port_datasets/slurm_port_shards.py \
- --raw-dir /your/data/droid/1.0.1 \
- --repo-id your_id/droid_1.0.1 \
- --logs-dir /your/logs \
- --job-name port_droid \
- --partition your_partition \
- --workers 2048 \
- --cpus-per-task 8 \
- --mem-per-cpu 1950M
-```
-
-#### Parameter Guidelines
-
-- **`--workers`**: Number of parallel jobs (max 2048 for DROID's shard count)
-- **`--cpus-per-task`**: 8 CPUs recommended for frame encoding parallelization
-- **`--mem-per-cpu`**: ~16GB total RAM (8×1950M) for loading raw frames
-
-> [!TIP]
-> Start with fewer workers (e.g., 100) to test your cluster configuration before launching thousands of jobs.
-
-### Step 4: Monitor Progress
-
-Check running jobs:
-
-```bash
-squeue -u $USER
-```
-
-Monitor overall progress:
-
-```bash
-jobs_status /your/logs
-```
-
-Inspect individual job logs:
-
-```bash
-less /your/logs/port_droid/slurm_jobs/JOB_ID_WORKER_ID.out
-```
-
-Debug failed jobs:
-
-```bash
-failed_logs /your/logs/port_droid
-```
-
-### Step 5: Aggregate Shards
-
-Once all porting jobs complete:
-
-```bash
-python examples/port_datasets/slurm_aggregate_shards.py \
- --repo-id your_id/droid_1.0.1 \
- --logs-dir /your/logs \
- --job-name aggr_droid \
- --partition your_partition \
- --workers 2048 \
- --cpus-per-task 8 \
- --mem-per-cpu 1950M
-```
-
-### Step 6: Upload to Hub
-
-```bash
-python examples/port_datasets/slurm_upload.py \
- --repo-id your_id/droid_1.0.1 \
- --logs-dir /your/logs \
- --job-name upload_droid \
- --partition your_partition \
- --workers 50 \
- --cpus-per-task 4 \
- --mem-per-cpu 1950M
-```
-
-> [!NOTE]
-> Upload uses fewer workers (50) since it's network-bound rather than compute-bound.
-
-## Dataset v3.0 File Structure
-
-Your completed dataset will have this modern structure:
-
-```
-dataset/
-├── meta/
-│ ├── episodes/
-│ │ └── chunk-000/
-│ │ └── file-000.parquet # Episode metadata
-│ ├── tasks.parquet # Task definitions
-│ ├── stats.json # Aggregated statistics
-│ └── info.json # Dataset information
-├── data/
-│ └── chunk-000/
-│ └── file-000.parquet # Consolidated episode data
-└── videos/
- └── camera_key/
- └── chunk-000/
- └── file-000.mp4 # Consolidated video files
-```
-
-This replaces the old episode-per-file structure with efficient, optimally-sized chunks.
-
-## Migrating from Dataset v2.1
-
-If you have existing datasets in v2.1 format, use the migration tool:
-
-```bash
-python src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py \
- --repo-id your_id/existing_dataset
-```
-
-This automatically:
-
-- Converts file structure to v3.0 format
-- Migrates metadata from JSON Lines to parquet
-- Aggregates statistics and creates per-episode stats
-- Updates version information
-
-## Performance Benefits
-
-Dataset v3.0 provides significant improvements for large datasets:
-
-- **Faster loading**: 3-5x reduction in initialization time
-- **Memory efficiency**: Better RAM usage through memory mapping
-- **Scalable processing**: Handles millions of episodes efficiently
-- **Storage optimization**: Reduced file count and improved compression
diff --git a/lerobot/docs/source/processors_robots_teleop.mdx b/lerobot/docs/source/processors_robots_teleop.mdx
deleted file mode 100644
index a033d03484890d02cecccc25b79d5653e125fea2..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/processors_robots_teleop.mdx
+++ /dev/null
@@ -1,151 +0,0 @@
-# Processors for Robots and Teleoperators
-
-This guide shows how to build and modify processing pipelines that connect teleoperators (e.g., phone) to robots and datasets. Pipelines standardize conversions between different action/observation spaces so you can swap teleops and robots without rewriting glue code.
-
-We use the Phone to SO‑100 follower examples for concreteness, but the same patterns apply to other robots.
-
-**What you'll learn**
-
-- Absolute vs. relative EE control: What each means, trade‑offs, and how to choose for your task.
-- Three-pipeline pattern: How to map teleop actions → dataset actions → robot commands, and robot observations → dataset observations.
-- Adapters (`to_transition` / `to_output`): How these convert raw dicts to `EnvTransition` and back to reduce boilerplate.
-- Dataset feature contracts: How steps declare features via `transform_features(...)`, and how to aggregate/merge them for recording.
-- Choosing a representation: When to store joints, absolute EE poses, or relative EE deltas—and how that affects training.
-- Pipeline customization guidance: How to swap robots/URDFs safely and tune bounds, step sizes, and options like IK initialization.
-
-### Absolute vs relative EE control
-
-The examples in this guide use absolute end effector (EE) poses because they are easy to reason about. In practice, relative EE deltas or joint positions are often preferred as learning features.
-
-With processors, you choose the learning features for your policy. These could be joint positions/velocities, absolute EE poses, or relative EE deltas. You can also choose to store other features, such as joint torques, motor currents, etc.
-
-## Three pipelines
-
-We often compose three pipelines. Depending on your setup, some can be empty if action and observation spaces already match.
-Each of these pipelines handles a different conversion between action and observation spaces. Below is a quick explanation of each pipeline.
-
-1. Pipeline 1: Teleop action space → dataset action space (phone pose → EE targets)
-2. Pipeline 2: Dataset action space → robot command space (EE targets → joints)
-3. Pipeline 3: Robot observation space → dataset observation space (joints → EE pose)
-
-Below is an example of the three pipelines that we use in the phone to SO-100 follower examples:
-
-```python
-phone_to_robot_ee_pose_processor = RobotProcessorPipeline[RobotAction, RobotAction]( # teleop -> dataset action
- steps=[
- MapPhoneActionToRobotAction(platform=teleop_config.phone_os),
- EEReferenceAndDelta(
- kinematics=kinematics_solver, end_effector_step_sizes={"x": 0.5, "y": 0.5, "z": 0.5}, motor_names=list(robot.bus.motors.keys()),
- ),
- EEBoundsAndSafety(
- end_effector_bounds={"min": [-1.0, -1.0, -1.0], "max": [1.0, 1.0, 1.0]}, max_ee_step_m=0.20,
- ),
- GripperVelocityToJoint(),
- ],
- to_transition=robot_action_to_transition,
- to_output=transition_to_robot_action,
-)
-
-robot_ee_to_joints_processor = RobotProcessorPipeline[RobotAction, RobotAction]( # dataset action -> robot
- steps=[
- InverseKinematicsEEToJoints(
- kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()), initial_guess_current_joints=True,
- ),
- ],
- to_transition=robot_action_to_transition,
- to_output=transition_to_robot_action,
-)
-
-robot_joints_to_ee_pose = RobotProcessorPipeline[RobotObservation, RobotObservation]( # robot obs -> dataset obs
- steps=[
- ForwardKinematicsJointsToEE(kinematics=kinematics_solver, motor_names=list(robot.bus.motors.keys()))
- ],
- to_transition=observation_to_transition,
- to_output=transition_to_observation,
-)
-```
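-
-Assuming pipeline instances are called directly on action/observation dicts (as in the record examples), here is an illustrative sketch of one teleop/record loop iteration chaining the three pipelines; the `phone` and `robot` I/O calls follow the standard teleoperator/robot APIs:
-
-```python
-# One illustrative control-loop iteration
-teleop_action = phone.get_action()                                # teleop action space
-dataset_action = phone_to_robot_ee_pose_processor(teleop_action)  # -> EE targets (dataset action space)
-joint_command = robot_ee_to_joints_processor(dataset_action)      # -> joint-space robot command
-robot.send_action(joint_command)
-
-raw_obs = robot.get_observation()                                 # joint-space observation
-dataset_obs = robot_joints_to_ee_pose(raw_obs)                    # -> EE-pose observation (dataset obs space)
-```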
-
-## Why to_transition / to_output
-
-To convert from robot/teleoperator to pipeline and back, we use the `to_transition` and `to_output` pipeline adapters.
-They standardize conversions to reduce boilerplate code, and form the bridge between the robot and teleoperators raw dictionaries and the pipeline’s `EnvTransition` format.
-In the phone to SO-100 follower examples we use the following adapters:
-
-- `robot_action_to_transition`: transforms the teleop action dict to a pipeline transition.
-- `transition_to_robot_action`: transforms the pipeline transition to a robot action dict.
-- `observation_to_transition`: transforms the robot observation dict to a pipeline transition.
-- `transition_to_observation`: transforms the pipeline transition to an observation dict.
-
-Check out [src/lerobot/processor/converters.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/processor/converters.py) for more details.
-
-## Dataset feature contracts
-
-Dataset features are determined by the keys saved in the dataset. Each step can declare what features it modifies in a contract called `transform_features(...)`. Once you build a processor, the processor can then aggregate all of these features with `aggregate_pipeline_dataset_features()` and merge multiple feature dicts with `combine_feature_dicts(...)`.
-
-Below is an example of how we declare features with the `transform_features` method in the phone to SO-100 follower examples:
-
-```python
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- # We only use the ee pose in the dataset, so we don't need the joint positions
- for n in self.motor_names:
- features[PipelineFeatureType.ACTION].pop(f"{n}.pos", None)
- # We specify the dataset features of this step that we want to be stored in the dataset
- for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
- features[PipelineFeatureType.ACTION][f"ee.{k}"] = PolicyFeature(
- type=FeatureType.STATE, shape=(1,)
- )
- return features
-```
-
-Here we declare which `PolicyFeature`s this step modifies, so we know what features to expect when we run the processor. These features can then be aggregated and used to create the dataset features.
-
-Below is an example of how we aggregate and merge features in the phone to SO-100 record example:
-
-```python
-features=combine_feature_dicts(
- # Run the feature contract of the pipelines
- # This tells you how the features would look like after the pipeline steps
- aggregate_pipeline_dataset_features(
- pipeline=phone_to_robot_ee_pose_processor,
- initial_features=create_initial_features(action=phone.action_features), # <- Action features we can expect, these come from our teleop device (phone) and action processor
- use_videos=True,
- ),
- aggregate_pipeline_dataset_features(
- pipeline=robot_joints_to_ee_pose,
- initial_features=create_initial_features(observation=robot.observation_features), # <- Observation features we can expect, these come from our robot and observation processor
- use_videos=True,
- patterns=["observation.state.ee"], # <- Here you could optionally filter the features we want to store in the dataset, with a specific pattern
-
- ),
- ),
-```
-
-How it works:
-
-- `aggregate_pipeline_dataset_features(...)`: applies `transform_features` across the pipeline and filters by patterns (images included when `use_videos=True`, and state features included when `patterns` is specified).
-- `combine_feature_dicts(...)`: combines multiple feature dicts into one.
-- Recording with `record_loop(...)` uses `build_dataset_frame(...)` to build frames consistent with `dataset.features` before we call `add_frame(...)` to add the frame to the dataset.
-
-## Guidance when customizing robot pipelines
-
-You can store any of the following features as your action/observation space:
-
-- Joint positions
-- Absolute EE poses
-- Relative EE deltas
-- Other features: joint velocity, torques, etc.
-
-Pick what you want to use for your policy action and observation space and configure/modify the pipelines and steps accordingly.
-
-### Different robots
-
-- Pipelines are easy to reuse. For example, to use another robot with phone teleop, modify the examples and swap the robot's `RobotKinematics` (URDF) and `motor_names` for your own. Additionally, ensure `target_frame_name` points to your gripper/wrist frame.
-
-### Safety first
-
-- When changing pipelines, start with tight bounds and implement safety steps when working with real robots.
-- It's advised to start in simulation first and then move to real robots.
-
-That's it! We hope this guide helps you get started with customizing your robot pipelines. If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.
diff --git a/lerobot/docs/source/reachy2.mdx b/lerobot/docs/source/reachy2.mdx
deleted file mode 100644
index 031934e6f7e3b430fe6ab531b46a97a71f3aa155..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/reachy2.mdx
+++ /dev/null
@@ -1,303 +0,0 @@
-# Reachy 2
-
-Reachy 2 is an open-source humanoid robot made by Pollen Robotics, specifically designed for the development of embodied AI and real-world applications.
-Check out [Pollen Robotics website](https://www.pollen-robotics.com/reachy/), or access [Reachy 2 documentation](https://docs.pollen-robotics.com/) for more information on the platform!
-
-## Teleoperate Reachy 2
-
-Currently, there are two ways to teleoperate Reachy 2:
-
-- Pollen Robotics’ VR teleoperation (not included in LeRobot).
-- Robot-to-robot teleoperation (use one Reachy 2 to control another).
-
-## Reachy 2 Simulation
-
-**(Linux only)** You can run Reachy 2 in simulation (Gazebo or MuJoCo) using the provided [Docker image](https://hub.docker.com/r/pollenrobotics/reachy2_core).
-
-1. Install [Docker Engine](https://docs.docker.com/engine/).
-2. Run (for MuJoCo):
-
-```
-docker run --rm -it \
- --name reachy \
- --privileged \
- --network host \
- --ipc host \
- --device-cgroup-rule='c 189:* rwm' \
- --group-add audio \
- -e ROS_DOMAIN_ID="$ROS_DOMAIN_ID" \
- -e DISPLAY="$DISPLAY" \
- -e RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity}]: {message}" \
- -e REACHY2_CORE_SERVICE_FAKE="${REACHY2_CORE_SERVICE_FAKE:-true}" \
- -v /dev:/dev \
- -v "$HOME/.reachy_config":/home/reachy/.reachy_config_override \
- -v "$HOME/.reachy.log":/home/reachy/.ros/log \
- -v /usr/lib/x86_64-linux-gnu:/opt/host-libs \
- --entrypoint /package/launch.sh \
- pollenrobotics/reachy2_core:1.7.5.9_deploy \
- start_rviz:=true start_sdk_server:=true mujoco:=true
-```
-
-> [!NOTE]
-> If MuJoCo runs slowly (low simulation frequency), append `-e LD_LIBRARY_PATH="/opt/host-libs:$LD_LIBRARY_PATH" \` to the previous command to improve performance:
->
-> ```
-> docker run --rm -it \
-> --name reachy \
-> --privileged \
-> --network host \
-> --ipc host \
-> --device-cgroup-rule='c 189:* rwm' \
-> --group-add audio \
-> -e ROS_DOMAIN_ID="$ROS_DOMAIN_ID" \
-> -e DISPLAY="$DISPLAY" \
-> -e RCUTILS_CONSOLE_OUTPUT_FORMAT="[{severity}]: {message}" \
-> -e REACHY2_CORE_SERVICE_FAKE="${REACHY2_CORE_SERVICE_FAKE:-true}" \
-> -e LD_LIBRARY_PATH="/opt/host-libs:$LD_LIBRARY_PATH" \
-> -v /dev:/dev \
-> -v "$HOME/.reachy_config":/home/reachy/.reachy_config_override \
-> -v "$HOME/.reachy.log":/home/reachy/.ros/log \
-> -v /usr/lib/x86_64-linux-gnu:/opt/host-libs \
-> --entrypoint /package/launch.sh \
-> pollenrobotics/reachy2_core:1.7.5.9_deploy \
-> start_rviz:=true start_sdk_server:=true mujoco:=true
-> ```
-
-## Setup
-
-### Prerequisites
-
-- On your robot, check that the **service images** meet the minimum versions:
- - **reachy2-core >= 1.7.5.2**
- - **webrtc >= 2.0.1.1**
-
-Then, if you want to use VR teleoperation:
-
-- Install the [Reachy 2 teleoperation application](https://docs.pollen-robotics.com/teleoperation/teleoperation-introduction/discover-teleoperation/).
- Use version **>=v1.2.0**
-
-We recommend using two computers: one for teleoperation (Windows required) and another for recording with LeRobot.
-
-### Install LeRobot
-
-Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.
-
-Install LeRobot with Reachy 2 dependencies:
-
-```bash
-pip install -e ".[reachy2]"
-```
-
-### (Optional but recommended) Install pollen_data_acquisition_server
-
-How you manage Reachy 2 recording sessions is up to you, but the **easiest** way is to use this server so you can control sessions directly from the VR teleoperation app.
-
-> **Note:** Currently, only the VR teleoperation application works as a client for this server, so this step primarily targets teleoperation. You’re free to develop custom clients to manage sessions to suit your needs.
-
-In your LeRobot environment, install the server from source:
-
-```bash
-git clone https://github.com/pollen-robotics/pollen_data_acquisition_server.git
-cd pollen_data_acquisition_server
-pip install -e .
-```
-
-Find the [pollen_data_acquisition_server documentation here](https://github.com/pollen-robotics/pollen_data_acquisition_server).
-
-## Step 1: Recording
-
-### Get Reachy 2 IP address
-
-Before starting teleoperation and data recording, find the [robot's IP address](https://docs.pollen-robotics.com/getting-started/setup-reachy2/connect-reachy2/).
-We strongly recommend connecting all devices (PC and robot) via **Ethernet**.
-
-### Launch recording
-
-There are two ways to manage recording sessions when using the Reachy 2 VR teleoperation application:
-
-- **Using the data acquisition server (recommended for VR teleop)**: The VR app orchestrates sessions (via the server, it tells LeRobot when to create datasets and start/stop episodes) while also controlling the robot’s motions.
-- **Using LeRobot’s record script**: LeRobot owns session control and decides when to start/stop episodes. If you also use the VR teleop app, it’s only for motion control.
-
-### Option 1: Using Pollen data acquisition server (recommended for VR teleop)
-
-Make sure you have installed pollen_data_acquisition_server, as explained in the Setup section.
-
-Launch the data acquisition server to be able to manage your session directly from the teleoperation application:
-
-```bash
-python -m pollen_data_acquisition_server.server
-```
-
-Then get into the teleoperation application and choose "Data acquisition session".
-You can then set up your session by following the on-screen instructions.
-
-> Even without the VR app, you can use the `pollen_data_acquisition_server` with your own client implementation.
-
-### Option 2: Using lerobot.record
-
-Reachy 2 is fully supported by LeRobot’s recording features.
-If you choose this option but still want to use the VR teleoperation application, select "Standard session" in the app.
-
-**Example: start a recording without the mobile base:**
-First, add `reachy2` and `reachy2_teleoperator` to the imports of the record script. Then you can use the following command:
-
-```bash
-lerobot-record \
- --robot.type=reachy2 \
- --robot.ip_address=192.168.0.200 \
- --robot.id=r2-0000 \
- --robot.use_external_commands=true \
- --robot.with_mobile_base=false \
- --teleop.type=reachy2_teleoperator \
- --teleop.ip_address=192.168.0.200 \
- --teleop.with_mobile_base=false \
- --robot.with_torso_camera=true \
- --dataset.repo_id=pollen_robotics/record_test \
- --dataset.single_task="Reachy 2 recording test" \
- --dataset.num_episodes=1 \
- --dataset.episode_time_s=5 \
- --dataset.fps=15 \
- --dataset.push_to_hub=true \
- --dataset.private=true \
- --display_data=true
-```
-
-#### Specific Options
-
-**Extended setup overview (all options included):**
-
-```bash
-lerobot-record \
- --robot.type=reachy2 \
- --robot.ip_address=192.168.0.200 \
- --robot.use_external_commands=true \
- --robot.with_mobile_base=true \
- --robot.with_l_arm=true \
- --robot.with_r_arm=true \
- --robot.with_neck=true \
- --robot.with_antennas=true \
- --robot.with_left_teleop_camera=true \
- --robot.with_right_teleop_camera=true \
- --robot.with_torso_camera=false \
- --robot.camera_width=640 \
- --robot.camera_height=480 \
- --robot.disable_torque_on_disconnect=false \
- --robot.max_relative_target=5.0 \
- --teleop.type=reachy2_teleoperator \
- --teleop.ip_address=192.168.0.200 \
- --teleop.use_present_position=false \
- --teleop.with_mobile_base=false \
- --teleop.with_l_arm=true \
- --teleop.with_r_arm=true \
- --teleop.with_neck=true \
- --teleop.with_antennas=true \
- --dataset.repo_id=pollen_robotics/record_test \
- --dataset.single_task="Reachy 2 recording test" \
- --dataset.num_episodes=1 \
- --dataset.episode_time_s=5 \
- --dataset.fps=15 \
- --dataset.push_to_hub=true \
- --dataset.private=true \
- --display_data=true
-```
-
-##### `--robot.use_external_commands`
-
-Determines whether the robot is driven by external commands, in which case LeRobot's `robot.send_action()` does not send commands to the robot.
-**Must** be set to `true` while using the VR teleoperation application, as the app already sends commands.
-
-##### `--teleop.use_present_position`
-
-Determines whether the teleoperator reads the goal position or the present position of the robot.
-Must be set to `true` if a compliant Reachy 2 is used to control another one.
-
-##### Use the relevant parts
-
-From our initial tests, recording **all** joints when only some are moving can reduce model quality with certain policies.
-To avoid this, you can exclude specific parts from recording and replay using:
-
-```bash
---robot.with_<part>=false
-```
-
-with `<part>` being one of: `mobile_base`, `l_arm`, `r_arm`, `neck`, `antennas`.
-It determines whether the corresponding part is recorded in the observations. Defaults to `true` if not set.
-
-By default, **all parts are recorded**.
-
-The same per-part mechanism is available in `reachy2_teleoperator` as well.
-
-```bash
---teleop.with_<part>=false
-```
-
-with `<part>` being one of: `mobile_base`, `l_arm`, `r_arm`, `neck`, `antennas`.
-Determines whether the corresponding part is recorded in the actions. Defaults to `true` if not set.
-
-> **Important:** In a given session, the **enabled parts must match** on both the robot and the teleoperator.
-> For example, if the robot runs with `--robot.with_mobile_base=false`, the teleoperator must disable the same part with `--teleop.with_mobile_base=false`.
-
-##### Use the relevant cameras
-
-You can do the same for **cameras**. Enable or disable each camera with default parameters using:
-
-```bash
---robot.with_left_teleop_camera= \
---robot.with_right_teleop_camera= \
---robot.with_torso_camera=
-```
-
-By default, no camera is recorded; all camera arguments are set to `false`.
-If you want, you can use custom `width` and `height` parameters for Reachy 2's cameras using the `--robot.camera_width` and `--robot.camera_height` arguments:
-
-```bash
---robot.camera_width=1920 \
---robot.camera_height=1080
-```
-
-This will change the resolution of all 3 default robot cameras (enabled by the boolean arguments above).
-
-You can also add cameras beyond the robot's built-in ones as usual with:
-
-```bash
---robot.cameras="{ extra: {type: opencv, index_or_path: 42, width: 640, height: 480, fps: 30}}" \
-```
-
-## Step 2: Replay
-
-Make sure the robot is configured with the same parts as the dataset:
-
-```bash
-lerobot-replay \
- --robot.type=reachy2 \
- --robot.ip_address=192.168.0.200 \
- --robot.use_external_commands=false \
- --robot.with_mobile_base=false \
- --dataset.repo_id=pollen_robotics/record_test \
- --dataset.episode=0
-```
-
-## Step 3: Train
-
-```bash
-lerobot-train \
- --dataset.repo_id=pollen_robotics/record_test \
- --policy.type=act \
- --output_dir=outputs/train/reachy2_test \
- --job_name=reachy2 \
- --policy.device=mps \
- --wandb.enable=true \
- --policy.repo_id=pollen_robotics/record_test_policy
-```
-
-## Step 4: Evaluate
-
-```bash
-lerobot-eval \
- --robot.type=reachy2 \
- --robot.ip_address=192.168.0.200 \
- --dataset.repo_id=pollen_robotics/eval_record_test \
- --dataset.single_task="Evaluate reachy2 policy" \
- --dataset.num_episodes=10 \
- --policy.path=outputs/train/reachy2_test/checkpoints/last/pretrained_model
-```
diff --git a/lerobot/docs/source/rtc.mdx b/lerobot/docs/source/rtc.mdx
deleted file mode 100644
index 729519768032534d3599719a7191709bf63cb08e..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/rtc.mdx
+++ /dev/null
@@ -1,188 +0,0 @@
-# Real-Time Chunking (RTC)
-
-Real-Time Chunking (RTC) is an inference-time method that allows large, flow-matching based robotic policies, such as [Pi0](./pi0), [Pi0.5](./pi05), and [SmolVLA](./smolvla), to produce smooth, continuous, and reactive motion despite having high inference latency.
-
-These policies generate chunks of future actions (e.g., 50 steps at a time) instead of single actions.
-Because the models are large, producing each chunk takes longer than the time it takes the robot to execute it.
-Naively executing chunks leads to problems such as pauses, jerky transitions, or sudden changes in strategy whenever the next chunk arrives late or disagrees with the previously executed actions.
-
-RTC solves this by asynchronously generating the next chunk while the robot continues executing the current one, and by guiding the new chunk so it aligns smoothly with the portion of the previous chunk that has already been executed.
-
-## How RTC Works (simplified)
-
-RTC lets the robot think ahead while it’s still moving. When the robot is carrying out one chunk of actions, RTC starts creating the next chunk early.
-But since the robot has already moved a bit by the time the new chunk is ready, RTC has to make sure the new chunk still lines up smoothly with what the robot is currently doing.
-
-To do this, RTC treats the beginning of the new chunk like an inpainting or “fill-in-the-gaps” problem:
-it gently adjusts the first part of the new chunk so it blends naturally with the robot’s ongoing motion. The result is no pauses, no sudden jumps.
-
-In technical terms, RTC adds a guidance term to the flow-matching denoising process that forces the overlapping timesteps of the new chunk to stay close to the executed portion of the previous chunk, typically using a soft transition mask.
-
-## Quick Start
-
-### Installation
-
-RTC is built into LeRobot. Just install the policy dependencies you need:
-
-```bash
-# For Pi0 or Pi0.5
-pip install -e ".[pi]"
-
-# For SmolVLA
-pip install -e ".[smolvla]"
-```
-
-### Using RTC with Pi0
-
-You can find a complete reference implementation in [eval_with_real_robot.py](examples/rtc/eval_with_real_robot.py).
-The snippet below provides a simplified pseudo-example of how RTC operates with Pi0 in your pipeline:
-
-```python
-from lerobot.policies.pi0 import PI0Policy, PI0Config
-from lerobot.configs.types import RTCAttentionSchedule
-from lerobot.policies.rtc.configuration_rtc import RTCConfig
-from lerobot.policies.rtc.action_queue import ActionQueue
-
-# Load Pi0 with RTC enabled
-policy_cfg = PI0Config()
-
-# Enable RTC
-policy_cfg.rtc_config = RTCConfig(
- enabled=True,
- execution_horizon=10, # How many steps to blend with previous chunk
- max_guidance_weight=10.0, # How strongly to enforce consistency
- prefix_attention_schedule=RTCAttentionSchedule.EXP, # Exponential blend
-)
-
-# Load the policy
-policy = PI0Policy.from_pretrained("lerobot/pi0_base", policy_cfg=policy_cfg, device="cuda")
-
-# Now use predict_action_chunk with RTC parameters
-inference_delay = 4  # Steps of inference latency; estimate this from the measured inference latency of the policy
-
-# Initialize the action queue
-action_queue = ActionQueue(policy_cfg.rtc_config)
-
-# Run this function in a separate thread. Note: should_get_actions,
-# get_robot_observations and execute_actions are placeholders for your
-# own scheduling logic, robot I/O and execution code.
-def get_actions():
-    while True:
-        if should_get_actions:
-
- prev_actions = action_queue.get_left_over()
- obs = get_robot_observations(robot)
-
- # Generate actions WITH RTC
- actions = policy.predict_action_chunk(
- obs,
- inference_delay=inference_delay,
- prev_chunk_left_over=prev_actions,
- )
-
- action_queue.merge(
- actions, actions, inference_delay
- )
-
-for step in range(num_steps):
-    action = action_queue.get()
-
-    # Execute the next action popped from the queue
-    execute_actions(action)
-```
-
-## Key Parameters
-
-`RTCConfig` has the following parameters to tune:
-
-**`execution_horizon`**: How many timesteps from the previous chunk to maintain consistency with. Higher values mean smoother transitions but potentially less reactivity.
-
-Typical values: 8-12 steps
-
-```python
-RTCConfig(execution_horizon=10)
-```
-
-**`max_guidance_weight`**: How strongly to enforce consistency with the previous chunk. This hyperparameter balances the smoothness of transitions against the reactivity of the policy. For 10-step flow matching (SmolVLA, Pi0, Pi0.5), a value of 10.0 works well.
-
-**`prefix_attention_schedule`**: How to weight consistency across the overlap region.
-
-- `LINEAR`: Linear decay from inference_delay to execution_horizon
-- `EXP`: Exponential decay (recommended for getting started)
-- `ONES`: Full weight across entire execution_horizon
-- `ZEROS`: Binary (full weight up to inference_delay, then zero)
-
-**`inference_delay`**: How many timesteps of inference latency your system has. This is passed to `predict_action_chunk()` rather than the config, since it may vary at runtime.
-
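-To build intuition for the schedules, here is an illustrative sketch (not LeRobot's implementation) of how per-timestep consistency weights could be laid out over the overlap region:
-
-```python
-import numpy as np
-
-def prefix_weights(schedule: str, inference_delay: int, execution_horizon: int) -> np.ndarray:
-    """Illustrative consistency weights for overlap timesteps 0..execution_horizon-1."""
-    d, h = inference_delay, execution_horizon
-    t = np.arange(h, dtype=float)
-    if schedule == "ONES":    # full weight across the entire horizon
-        return np.ones(h)
-    if schedule == "ZEROS":   # binary: full weight up to the delay, then zero
-        return (t < d).astype(float)
-    if schedule == "LINEAR":  # linear decay from inference_delay to execution_horizon
-        w = np.clip((h - t) / max(h - d, 1), 0.0, 1.0)
-        w[t < d] = 1.0        # steps already executed stay fully constrained
-        return w
-    if schedule == "EXP":     # exponential decay after the delay
-        w = np.exp(-np.clip(t - d, 0.0, None))
-        w[t < d] = 1.0
-        return w
-    raise ValueError(f"unknown schedule: {schedule}")
-
-print(prefix_weights("EXP", inference_delay=4, execution_horizon=10).round(2))
-```
-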
-## Testing RTC Offline
-
-Before running on a real robot, test RTC with dataset samples to visualize how it works:
-
-```bash
-python examples/rtc/eval_dataset.py \
- --policy.path=lerobot/pi0_libero_finetuned \
- --dataset.repo_id=HuggingFaceVLA/libero \
- --rtc.execution_horizon=10 \
- --rtc.max_guidance_weight=10.0 \
- --device=cuda
-```
-
-The script generates a visualization of the denoising process, comparing standard generation (left) with RTC (right). In the RTC plots, you can see how the first few steps (blue/purple lines) are guided to match the red ground truth trajectory (previous chunk's tail), ensuring a smooth transition between chunks.
-
-
-
-
-
-## Testing RTC with a Real Robot
-
-```bash
-python examples/rtc/eval_with_real_robot.py \
- --policy.path=${HF_USERNAME}/policy_repo_id \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem58FA0834591 \
- --robot.cameras="{ gripper: {type: opencv, index_or_path: 1, width: 640, height: 480, fps: 30}, front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
- --task="Move green small object into the purple platform" \
- --duration=120 \
- --device=cuda
-```
-
-## How It Differs from the Async Inference in LeRobot
-
-Both RTC and [async inference](./async) improve real-time robot control, but they solve different problems.
-
-| Aspect | Async Inference | RTC |
-| ------------- | -------------------------------------------------------------------------- | --------------------------------------------------- |
-| **Problem** | Idle frames while waiting for inference | Discontinuities between action chunks |
-| **Solution** | Decouple prediction from execution | Guide new chunks to continue smoothly from previous |
-| **Benefit** | No waiting, continuous action | Smooth transitions, natural motion |
-| **Best Used** | Large models with high inference latency | Flow-matching based policies |
-
-**Use both together** for maximum smoothness and reactivity!
-
-## Advanced: Debug Tracking
-
-RTC includes built-in debug tracking to help you understand what's happening during inference:
-
-```python
-# Enable debug tracking
-policy_cfg.rtc_config.debug = True
-policy_cfg.rtc_config.debug_maxlen = 100
-
-# After inference, access debug data
-debug_data = policy.rtc_processor.get_debug_data()
-
-# Visualize denoising steps, corrections, etc.
-from lerobot.policies.rtc.debug_visualizer import RTCDebugVisualizer
-visualizer = RTCDebugVisualizer()
-# ... create plots
-```
-
-See `examples/rtc/eval_dataset.py` for a complete example of visualization.
-
-## References
-
-- [Smooth-As-Butter Robot Policies](https://alexander-soare.github.io/robotics/2025/08/05/smooth-as-butter-robot-policies.html) - Excellent technical explanation with real robot results
-- [Physical Intelligence - Real-Time Chunking](https://www.physicalintelligence.company/research/real_time_chunking) - Original paper and research
-- [Kinetix RTC Implementation](https://github.com/Physical-Intelligence/real-time-chunking-kinetix) - Reference implementation from Physical Intelligence
diff --git a/lerobot/docs/source/sarm.mdx b/lerobot/docs/source/sarm.mdx
deleted file mode 100644
index 81a04d2bd562274a0a237d5d81431948f5afb01e..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/sarm.mdx
+++ /dev/null
@@ -1,592 +0,0 @@
-# SARM: Stage-Aware Reward Modeling
-
-SARM (Stage-Aware Reward Modeling) is a video-based reward modeling framework for long-horizon robot manipulation tasks. This guide covers how to train SARM reward models and optionally use them with Reward-Aligned Behavior Cloning (RA-BC).
-
-**Paper**: [SARM: Stage-Aware Reward Modeling for Long Horizon Robot Manipulation](https://arxiv.org/abs/2509.25358)
-
-
-
-## Why Reward Models?
-
-Standard behavior cloning treats all demonstration frames equally, but real-world robot datasets are messy: they contain hesitations, corrections, and variable-quality trajectories. Reward models solve this by learning a generalizable notion of **task progress** from demonstrations: given video frames and a task description, they predict how close the robot is to completing the task (0→1). This learned "progress signal" can be used in multiple ways; two promising applications are (1) **weighted imitation learning** (RA-BC), where high-progress frames receive more weight during policy training, and (2) **reinforcement learning**, where the reward model provides dense rewards for online or offline policy improvement.
-
-## Overview
-
-SARM has following features:
-
-1. **Stage-aware architecture**: Jointly predicts the high-level task stage and fine-grained progress within each stage
-2. **Subtask annotations**: Uses natural language subtask annotations to derive consistent progress labels
-3. **Temporal proportions**: Computes dataset-level priors (α̅\_k) for each subtask to normalize progress across variable-length demonstrations
-
-SARM trains on a compact **stage+tau** target for each frame:
-
-- **stage**: integer stage index `k ∈ {0, ..., K-1}`
-- **τ (tau)**: within-stage progress `τ ∈ [0, 1]`
-- **target encoding**: `y = k + τ` (this is what the dataset processor produces)
-
-At inference time (and in downstream RA-BC), SARM converts the raw `k + τ` value into a **normalized progress** in `[0, 1]` using dataset-level **temporal proportions** `α̅_k` (stored in `meta/temporal_proportions_*.json`).
-
-This matches **Formula (2)** from the paper:
-
-```
-progress_t = P_{k-1} + α̅_k × τ_t
-```
-
-Where:
-
-- `τ_t = (t - s_k) / (e_k - s_k)` is within-subtask normalized time
-- `P_{k-1}` is cumulative prior (sum of previous subtask proportions)
-- `α̅_k` is the temporal proportion for subtask k
-
-This ensures identical task states map to consistent progress values, even across demonstrations of different lengths.
-
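-As a worked illustration of Formula (2), here is a minimal sketch (the subtask proportions are hypothetical):
-
-```python
-# Convert a raw stage+tau prediction y = k + tau into normalized progress
-# using dataset-level temporal proportions alpha_bar (Formula 2).
-alpha_bar = [0.2, 0.3, 0.5]  # hypothetical proportions for 3 subtasks, summing to 1
-
-def normalized_progress(y: float) -> float:
-    k = min(int(y), len(alpha_bar) - 1)  # stage index (clamped for y = K at task end)
-    tau = y - k                          # within-stage progress in [0, 1]
-    prefix = sum(alpha_bar[:k])          # P_{k-1}: cumulative prior of earlier subtasks
-    return prefix + alpha_bar[k] * tau
-
-print(normalized_progress(1.5))  # 0.2 + 0.3 * 0.5 = 0.35
-```
-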
-## Inputs and Targets (What the new code expects)
-
-SARM is trained through its processor (`src/lerobot/policies/sarm/processor_sarm.py`), which:
-
-- **Encodes** images and task text with CLIP (ViT-B/32) into `video_features` and `text_features`
-- **Pads/truncates** robot state into `state_features` (up to `max_state_dim`)
-- **Builds targets** as `sparse_targets` (and `dense_targets` in `dense_only`/`dual`) using the stage+tau encoding `y = k + τ`
-- **Masks rewind frames** using a per-sample `lengths` tensor (rewind is a training-time augmentation)
-
-At minimum, each training sample needs:
-
-- `task` (string): task description
-- `policy.image_key` images and `policy.state_key` states from the dataset
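-For orientation, a single processed sample might look roughly like this (field names come from the processor above; the shapes are assumptions, with CLIP ViT-B/32 producing 512-dimensional embeddings):
-
-```python
-sample = {
-    "task": "fold the towel",
-    "video_features": ...,   # e.g. (n_obs_steps + 1, 512) CLIP image embeddings
-    "text_features": ...,    # e.g. (512,) CLIP text embedding of the task
-    "state_features": ...,   # e.g. (n_obs_steps + 1, max_state_dim) padded robot state
-    "sparse_targets": ...,   # per-frame stage+tau target y = k + tau
-    "lengths": ...,          # count of valid (non-rewind) frames used for masking
-}
-```
-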
-
----
-
-## Annotation Modes
-
-You can choose from **3 annotation modes** that determine how progress labels are computed:
-
-| Mode | Annotations Required | Heads | Use Case |
-| -------------- | -------------------- | ---------------------------- | ------------------------------------------------------------ |
-| `single_stage` | None | Sparse only | Simple tasks, quick experiments, no VLM needed |
-| `dense_only` | Dense (VLM) | Dual (sparse auto-generated) | Detailed subtask tracking without defining high-level stages |
-| `dual` | Sparse + Dense (VLM) | Dual | Full SARM paper setup with both granularities |
-
-### Mode Details
-
-
-
-
-**No annotations required.** The entire episode is treated as a single stage called `"task"`, and progress is linear from 0 to 1 over the episode duration.
-
-- **Sparse head**: 1 stage ("task"), linear progress
-- **Dense head**: Not used
-- **Best for**: Simple tasks, quick experiments, or when VLM annotation is not available
-
-Workflow:
-
-```
-1. Train SARM → 2. Visualize predictions → 3. (Optional) Train policy with RA-BC
-```
-
-
-
-
-**Only dense (fine-grained) annotations from a VLM.** The sparse head automatically uses a single `"task"` stage covering the full episode, while the dense head learns detailed subtask progression.
-
-- **Sparse head**: 1 stage ("task"), linear progress (auto-generated)
-- **Dense head**: Multiple fine-grained stages from VLM annotations
-- **Best for**: When you want detailed subtask tracking but don't need to define high-level stages
-
-Workflow:
-
-```
-1. Annotate (dense) → 2. Verify → 3. Train SARM → 4. Visualize → 5. (Optional) Train policy with RA-BC
-```
-
-
-
-
-**Both sparse and dense annotations from VLM.** Full dual-head mode as described in the SARM paper, with both high-level (sparse) and fine-grained (dense) stage predictions.
-
-- **Sparse head**: High-level stages from VLM annotations
-- **Dense head**: Fine-grained stages from VLM annotations
-- **Best for**: Complex multi-stage tasks where both granularities are useful
-
-Workflow:
-
-```
-1. Annotate (sparse+dense) → 2. Verify → 3. Train SARM → 4. Visualize → 5. (Optional) Train policy with RA-BC
-```
-
-
-
-
-## Set Up Your Environment
-
-1. Install LeRobot by following our [Installation Guide](./installation).
-2. Install SARM dependencies by running:
-
-```bash
-pip install -e ".[sarm]"
-```
-
----
-
-## Step 1: Subtask Annotation
-
-
-
-
-**No annotation required!** Skip this step entirely. The model will use the episode's task description and compute linear progress automatically.
-
-
-
-
-Generate **dense (fine-grained) annotations only** using a VLM. The sparse stage will be auto-generated.
-
-```bash
-python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
- --repo-id your-username/your-dataset \
- --dense-only \
- --dense-subtasks "Bring robot arms up from starting position,Grab near side and do 1st fold,Grab side and do 2nd fold,Grab side and do 3rd fold to finish folding" \
- --video-key observation.images.base \
- --num-workers 4 \
- --push-to-hub
-```
-
-**What gets saved:**
-
-- `meta/temporal_proportions_sparse.json` - Auto-generated sparse proportions (`{"task": 1.0}`)
-- `meta/temporal_proportions_dense.json` - Dense temporal proportions
-- Per-episode columns in `episodes/*.parquet`:
- - `dense_subtask_names`, `dense_subtask_start_frames`, `dense_subtask_end_frames`
- - (also time-based columns: `dense_subtask_start_times`, `dense_subtask_end_times`)
-
-
-
-
-Generate **both sparse (high-level) and dense (fine-grained) annotations** using a VLM.
-
-```bash
-python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
- --repo-id your-username/your-dataset \
- --sparse-subtasks "Bring arms up from starting position,Fold the towel (3 folds in total)" \
- --dense-subtasks "Bring robot arms up from starting position,Grab near side and do 1st fold,Grab side and do 2nd fold,Grab side and do 3rd fold to finish folding" \
- --video-key observation.images.base \
- --num-workers 4 \
- --push-to-hub
-```
-
-**What gets saved:**
-
-- `meta/temporal_proportions_sparse.json` - Sparse temporal proportions
-- `meta/temporal_proportions_dense.json` - Dense temporal proportions
-- Per-episode columns in `episodes/*.parquet`:
- - `sparse_subtask_names`, `sparse_subtask_start_frames`, `sparse_subtask_end_frames`
- - `dense_subtask_names`, `dense_subtask_start_frames`, `dense_subtask_end_frames`
- - (also time-based columns: `*_subtask_start_times`, `*_subtask_end_times`)
-
-
-
-
-### Annotation Arguments
-
-| Argument | Description |
-| ---------------------- | ------------------------------------------------------------------------------- |
-| `--repo-id` | HuggingFace dataset repository ID |
-| `--sparse-subtasks` | Comma-separated list of high-level subtask names |
-| `--dense-subtasks` | Comma-separated list of fine-grained subtask names |
-| `--dense-only` | Generate only dense annotations (auto-creates sparse "task" stage) |
-| `--video-key` | Camera/video key to use (e.g., `observation.images.top`) |
-| `--num-workers` | Number of parallel GPU workers (default: 1) |
-| `--episodes` | Specific episode indices to annotate (default: all) |
-| `--skip-existing` | Skip episodes that already have annotations |
-| `--model` | VLM model (default: `Qwen/Qwen3-VL-30B-A3B-Instruct`) |
-| `--num-visualizations` | Number of episodes to visualize after annotation (default: 5, set to 0 to skip) |
-
-> **Note**: After annotation completes, 5 episodes are automatically visualized by default. Use `--num-visualizations 0` to skip this step.
-
----
-
-## Step 2: Verify Annotations
-
-
-
-
-**No verification needed!** Skip this step.
-
-
-
-
-Visualize annotations using the `--visualize-only` flag:
-
-```bash
-python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
- --repo-id your-username/your-dataset \
- --visualize-only \
- --visualize-type dense \
- --num-visualizations 5 \
- --video-key observation.images.base \
- --output-dir ./subtask_viz
-```
-
-
-
-
-Visualize annotations using the `--visualize-only` flag:
-
-```bash
-python src/lerobot/data_processing/sarm_annotations/subtask_annotation.py \
- --repo-id your-username/your-dataset \
- --visualize-only \
- --visualize-type both \
- --num-visualizations 5 \
- --video-key observation.images.base \
- --output-dir ./subtask_viz
-```
-
-
-
-
-This generates visualizations showing video frames with subtask boundaries overlaid and a timeline of the subtasks.
-
-### Visualization Arguments
-
-| Argument | Description |
-| ---------------------- | -------------------------------------------------------------- |
-| `--visualize-only` | Only visualize existing annotations (no generation) |
-| `--num-visualizations` | Number of episodes to visualize (default: 5) |
-| `--visualize-type` | Type of annotations to visualize: `sparse`, `dense`, or `both` |
-
-**Tip**: If annotations are inaccurate, adjust your subtask descriptions to be more specific and re-run.
-
----
-
-## Step 3: Train SARM
-
-
-
-
-Train with **no annotations** - uses linear progress from 0 to 1:
-
-```bash
-python src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your-username/your-dataset \
- --policy.type=sarm \
- --policy.annotation_mode=single_stage \
- --policy.image_key=observation.images.base \
- --output_dir=outputs/train/sarm_single \
- --batch_size=32 \
- --steps=5000 \
- --wandb.enable=true \
- --wandb.project=sarm \
- --policy.repo_id=your-username/your-model-name
-```
-
-
-
-
-Train with **dense annotations only** (sparse auto-generated):
-
-```bash
-python src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your-username/your-dataset \
- --policy.type=sarm \
- --policy.annotation_mode=dense_only \
- --policy.image_key=observation.images.base \
- --output_dir=outputs/train/sarm_dense \
- --batch_size=32 \
- --steps=5000 \
- --wandb.enable=true \
- --wandb.project=sarm \
- --policy.repo_id=your-username/your-model-name
-```
-
-
-
-
-Train with **both sparse and dense annotations**:
-
-```bash
-python src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your-username/your-dataset \
- --policy.type=sarm \
- --policy.annotation_mode=dual \
- --policy.image_key=observation.images.base \
- --output_dir=outputs/train/sarm_dual \
- --batch_size=32 \
- --steps=5000 \
- --wandb.enable=true \
- --wandb.project=sarm \
- --policy.repo_id=your-username/your-model-name
-```
-
-
-
-
-### Multi-GPU Training
-
-Add `accelerate launch --multi_gpu --num_processes=4` to use multiple GPUs for training.
-
-### Training Arguments
-
-| Argument | Description | Default |
-| -------------------------- | ----------------------------------------------------------------- | ------------------------ |
-| `--policy.annotation_mode` | `single_stage`, `dense_only`, or `dual` | `single_stage` |
-| `--policy.image_key` | Camera key for images | `observation.images.top` |
-| `--policy.state_key` | Key for joint states | `observation.state` |
-| `--policy.n_obs_steps` | Observation history steps (total obs frames = `n_obs_steps + 1`) | `8` |
-| `--policy.frame_gap`       | Gap (in frames) between sampled observations (at 30 fps, 30 frames ≈ 1 s) | `30`                     |
-
----
-
-## Step 4: Visualize Predictions
-
-Use `compute_rabc_weights.py` with `--visualize-only` to visualize model predictions (and, if available, annotation-derived targets) without writing a parquet file.
-
-
-
-
-```bash
-python src/lerobot/policies/sarm/compute_rabc_weights.py \
- --dataset-repo-id your-username/your-dataset \
- --reward-model-path your-username/sarm-model \
- --visualize-only \
- --num-visualizations 5 \
- --head-mode sparse \
- --output-dir ./sarm_viz
-```
-
-
-
-
-```bash
-python src/lerobot/policies/sarm/compute_rabc_weights.py \
- --dataset-repo-id your-username/your-dataset \
- --reward-model-path your-username/sarm-model \
- --visualize-only \
- --num-visualizations 5 \
- --head-mode dense \
- --output-dir ./sarm_viz
-```
-
-
-
-
-```bash
-python src/lerobot/policies/sarm/compute_rabc_weights.py \
- --dataset-repo-id your-username/your-dataset \
- --reward-model-path your-username/sarm-model \
- --visualize-only \
- --num-visualizations 5 \
- --head-mode both \
- --output-dir ./sarm_viz
-```
-
-
-
-
-The visualization shows:
-
-- **Progress plot**: Predicted progress (and optional annotation-derived “GT” when available and `--stride 1`)
-- **Stage probabilities**: Stacked area plot of predicted stage probabilities
-- **Sample frames**: Key frames from the episode with progress/stage labels
-
-### Visualization Arguments
-
-| Argument | Description |
-| ---------------------- | --------------------------------------------------------- |
-| `--visualize-only` | Only visualize predictions (no RABC computation) |
-| `--num-visualizations` | Number of episodes to visualize (default: 5) |
-| `--head-mode` | SARM head to use: `sparse`, `dense`, or `both` |
-| `--stride` | Compute every N frames, interpolate the rest (default: 1) |
-
----
-
-## Step 5 (Optional): Train Policy with RA-BC
-
-Reward-Aligned Behavior Cloning (RA-BC) uses the trained SARM model to weight training samples based on predicted progress improvement. This requires two steps:
-
-1. **Precompute progress values** for all frames using the trained SARM model
-2. **Train policy** with RA-BC weighting using the precomputed values
-
-### How RA-BC Works
-
-For each training sample, RA-BC computes the progress delta:
-
-```
-r_i = φ(o_{t+Δ}) - φ(o_t)
-```
-
-Where `φ` is the SARM progress prediction and `Δ` is the policy's `chunk_size`. Samples with positive progress (good demonstrations) get higher weights, while samples with negative or zero progress get down-weighted.
-
-The weighting follows **Equations 8-9** from the paper:
-
-- **Soft weight**: `w̃_i = clip((r_i − (μ − 2σ)) / (4σ + ε), 0, 1)`
-- **Final weight**: `w_i = 𝟙{r_i > κ} + 𝟙{0 ≤ r_i ≤ κ} × w̃_i`
-
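-In code form, the weighting could look like the following sketch (illustrative; taking μ and σ over a batch of deltas is an assumption here):
-
-```python
-import numpy as np
-
-def rabc_weights(deltas: np.ndarray, kappa: float = 0.01, eps: float = 1e-8) -> np.ndarray:
-    """Illustrative Eq. 8-9 weighting; deltas are per-sample progress improvements r_i."""
-    mu, sigma = deltas.mean(), deltas.std()
-    soft = np.clip((deltas - (mu - 2.0 * sigma)) / (4.0 * sigma + eps), 0.0, 1.0)  # Eq. 8
-    # Eq. 9: full weight above kappa, soft weight on [0, kappa], zero for negative progress
-    return np.where(deltas > kappa, 1.0, np.where(deltas >= 0.0, soft, 0.0))
-
-print(rabc_weights(np.array([0.05, 0.004, -0.01])))
-```
-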
-### Step 5a: Compute SARM Progress Values
-
-First, run the SARM model on all frames in your dataset to compute progress values:
-
-```bash
-python src/lerobot/policies/sarm/compute_rabc_weights.py \
- --dataset-repo-id your-username/your-dataset \
- --reward-model-path your-username/sarm-model \
- --head-mode sparse \
- --num-visualizations 5 \
- --push-to-hub
-```
-
-This script:
-
-- Processes all frames and computes progress values
-- Saves progress values to a parquet file next to the dataset on disk (defaults to `sarm_progress.parquet` in the dataset root)
-- Generates visualizations of the first N episodes (default: 5)
-
-**Arguments:**
-
-| Argument | Description | Default |
-| ---------------------- | -------------------------------------------------------------- | ---------- |
-| `--reward-model-path` | Path to trained SARM model | (required) |
-| `--head-mode` | SARM head to use: `sparse`, `dense`, or `both` | `sparse` |
-| `--device` | Device for inference | `cuda` |
-| `--visualize-only` | Only visualize predictions (no RA-BC computation) | `false` |
-| `--num-visualizations` | Number of episodes to visualize (default: 5, set to 0 to skip) | `5` |
-
-**Output format** (`sarm_progress.parquet`):
-
-| Column | Description |
-| ----------------- | ---------------------------------------------- |
-| `index` | Global frame index in dataset |
-| `episode_index` | Episode number |
-| `frame_index` | Local frame index within episode |
-| `progress_sparse` | Sparse head progress value [0, 1] |
-| `progress_dense` | Dense head progress value [0, 1] (if computed) |
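-
-To sanity-check the output, you can load the parquet file and recompute progress deltas yourself. A minimal sketch (the path and `chunk_size` are placeholders; use your dataset root and your policy's `chunk_size`):
-
-```python
-import pandas as pd
-
-df = pd.read_parquet("path/to/dataset/sarm_progress.parquet")  # placeholder path
-chunk_size = 50  # placeholder; use your policy's chunk_size
-
-# r_i = phi(o_{t+chunk_size}) - phi(o_t), computed per episode
-df = df.sort_values(["episode_index", "frame_index"])
-shifted = df.groupby("episode_index")["progress_sparse"].shift(-chunk_size)
-df["delta"] = shifted - df["progress_sparse"]
-print(df["delta"].describe())
-```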
-
-### Step 5b: Train Policy with RA-BC
-
-Once you have the progress file, train your policy with RA-BC weighting. The progress file is auto-detected from the dataset path (`sarm_progress.parquet`). Currently, PI0, PI0.5, and SmolVLA support RA-BC:
-
-```bash
-python src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your-username/your-dataset \
- --policy.type=pi0 \
- --use_rabc=true \
- --rabc_head_mode=sparse \
- --rabc_kappa=0.01 \
- --output_dir=outputs/train/policy_rabc \
- --batch_size=32 \
- --steps=40000
-```
-
-The training script automatically:
-
-- Loads the precomputed progress values from the parquet file
-- Uses the policy's `chunk_size` to compute progress deltas (Δ)
-- Computes sample weights based on progress improvement
-- Applies weighted loss during training
-
-**RA-BC Arguments:**
-
-| Argument | Description | Default |
-| ---------------------- | ---------------------------------------------------------- | ---------------------------------- |
-| `--use_rabc` | Enable RA-BC sample weighting | `false` |
-| `--rabc_progress_path` | Path to progress parquet file (auto-detected from dataset) | `sarm_progress.parquet` in dataset |
-| `--rabc_head_mode` | Which SARM head's progress to use: `sparse` or `dense` | `sparse` |
-| `--rabc_kappa` | Threshold κ for high-quality samples | `0.01` |
-
-### Tuning RA-BC Kappa
-
-The `kappa` parameter is the threshold that determines which samples get full weight (w=1). Understanding how to tune it is critical for RA-BC to work effectively.
-
-**How the weighting works:**
-
-| Condition | Weight |
-| ------------------- | ----------------------- |
-| `delta > kappa` | 1.0 (hard threshold) |
-| `0 ≤ delta ≤ kappa` | Soft weight from Eq. 8 |
-| `delta < 0` | 0.0 (negative progress) |
-
-**Diagnosing kappa issues:**
-
-Monitor these WandB metrics during training:
-
-| Metric | Healthy Range | Problem Indicator |
-| ------------------ | ------------- | ------------------------- |
-| `rabc_mean_weight` | 0.3 - 0.8 | ≈ 1.0 means kappa too low |
-| `rabc_delta_mean`  | > 0           | ≤ 0 means progress mostly stalls or regresses |
-| `rabc_delta_std`   | > 0           | ≈ 0 means no quality variance to exploit      |
-
-**If `rabc_mean_weight ≈ 1.0`:** Your kappa is too low. Most samples have `delta > kappa` and bypass the soft-weighting entirely. RA-BC becomes equivalent to vanilla BC.
-
-**Setting kappa based on your data:**
-
-The default `kappa=0.01` was tuned for the paper's T-shirt folding task (~90s episodes at 30fps). For your dataset, check the logged `rabc_delta_mean` and `rabc_delta_std`:
-
-```
-# If delta_mean ≈ 0.03 and delta_std ≈ 0.02:
-# Most deltas fall in range [0.01, 0.05]
-
-# Option 1: Set kappa = delta_mean (medium selectivity)
---rabc_kappa=0.03
-
-# Option 2: Set kappa = delta_mean + delta_std (high selectivity)
---rabc_kappa=0.05
-
-# Option 3: Set kappa = delta_mean + 2*delta_std (very selective)
---rabc_kappa=0.07
-```
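-
-Building on the delta computation sketched in Step 5a, you can estimate these candidates directly from your data (paths and `chunk_size` are placeholders):
-
-```python
-import pandas as pd
-
-df = pd.read_parquet("path/to/dataset/sarm_progress.parquet")  # placeholder path
-chunk_size = 50  # placeholder; use your policy's chunk_size
-df = df.sort_values(["episode_index", "frame_index"])
-deltas = (
-    df.groupby("episode_index")["progress_sparse"].shift(-chunk_size) - df["progress_sparse"]
-).dropna().to_numpy()
-
-mu, sigma = deltas.mean(), deltas.std()
-for label, kappa in [("medium", mu), ("high", mu + sigma), ("very high", mu + 2 * sigma)]:
-    full = (deltas > kappa).mean()
-    print(f"{label} selectivity: kappa={kappa:.3f}, full-weight fraction={full:.2%}")
-```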
-
-**When RA-BC may not help:**
-
-If your dataset is already high quality (consistent progress across all demonstrations), RA-BC won't provide much benefit since there's nothing to filter.
-
-### Multi-GPU Training with RA-BC
-
-```bash
-accelerate launch \
- --multi_gpu \
- --num_processes=4 \
- src/lerobot/scripts/lerobot_train.py \
- --dataset.repo_id=your-username/your-dataset \
- --policy.type=pi0 \
- --use_rabc=true \
- --rabc_kappa=0.01 \
- --output_dir=outputs/train/policy_rabc \
- --batch_size=32 \
- --steps=40000
-```
-
----
-
-## Tips & Best Practices
-
-### Choosing a Mode
-
-- **Start with `single_stage`** for quick experiments - no annotation overhead
-- Use **`dense_only`** when you want detailed progress tracking but tasks don't have clear high-level stages
-- Use **`dual`** for complex tasks where both coarse and fine-grained progress is meaningful
-
-### Annotation Quality
-
-1. **Be specific with subtask names**: Instead of "fold", use "grab near side and fold toward center"
-2. **Verify with visualization**: Always check a few episodes before training
-3. **Consistent naming**: Use the same subtask names across all episodes
-
-### RA-BC
-
-1. **Train SARM first**: RA-BC quality depends entirely on SARM quality
-2. **Monitor `rabc_mean_weight`**: If it's ≈ 1.0, increase kappa (see [Tuning RA-BC Kappa](#tuning-ra-bc-kappa))
-
----
-
-## Citation
-
-```bibtex
-@article{chen2025sarm,
- title={SARM: Stage-Aware Reward Modeling for Long Horizon Robot Manipulation},
- author={Chen, Qianzhong and Yu, Justin and Schwager, Mac and Abbeel, Pieter and Shentu, Yide and Wu, Philipp},
- journal={arXiv preprint arXiv:2509.25358},
- year={2025}
-}
-```
diff --git a/lerobot/docs/source/smolvla.mdx b/lerobot/docs/source/smolvla.mdx
deleted file mode 100644
index a9a498d6476b6f46157465ab3c4f5ab1c9ab1c5d..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/smolvla.mdx
+++ /dev/null
@@ -1,116 +0,0 @@
-# SmolVLA
-
-SmolVLA is Hugging Face’s lightweight foundation model for robotics. Designed for easy fine-tuning on LeRobot datasets, it helps accelerate your development!
-
-
-
-
-
- Figure 1. SmolVLA takes as input (i) multiple camera views, (ii) the
- robot’s current sensorimotor state, and (iii) a natural language
- instruction, encoded into contextual features used to condition the action
- expert when generating an action chunk.
-
-
-
-## Set Up Your Environment
-
-1. Install LeRobot by following our [Installation Guide](./installation).
-2. Install SmolVLA dependencies by running:
-
- ```bash
- pip install -e ".[smolvla]"
- ```
-
-## Collect a dataset
-
-SmolVLA is a base model, so fine-tuning on your own data is required for optimal performance in your setup.
-We recommend recording ~50 episodes of your task as a starting point. Follow our guide to get started: [Recording a Dataset](./il_robots)
-
-
-
-In your dataset, make sure you have enough demonstrations for each variation you introduce (e.g. each cube position on the table for a cube pick-and-place task).
-
-For reference, we recommend checking out the dataset linked below, which was used in the [SmolVLA paper](https://huggingface.co/papers/2506.01844):
-
-🔗 [SVLA SO100 PickPlace](https://huggingface.co/spaces/lerobot/visualize_dataset?path=%2Flerobot%2Fsvla_so100_pickplace%2Fepisode_0)
-
-In this dataset, we recorded 50 episodes across 5 distinct cube positions, collecting 10 episodes of pick-and-place interactions per position. Repeating each variation several times helped the model generalize better. We tried a similar dataset with only 25 episodes, and it was not enough, leading to poor performance: data quality and quantity are key.
-Once your dataset is available on the Hub, you can use our finetuning script to adapt SmolVLA to your application.
-
-
-
-## Finetune SmolVLA on your data
-
-Use [`smolvla_base`](https://hf.co/lerobot/smolvla_base), our pretrained 450M model, and fine-tune it on your data.
-Training the model for 20k steps takes roughly 4 hrs on a single A100 GPU. Tune the number of steps based on performance and your use case.
-
-If you don't have a GPU, you can train using our notebook on [Google Colab](https://colab.research.google.com/github/huggingface/notebooks/blob/main/lerobot/training-smolvla.ipynb).
-
-Pass your dataset to the training script using `--dataset.repo_id`. If you want to test your installation, run the following command, which uses one of the datasets we collected for the [SmolVLA Paper](https://huggingface.co/papers/2506.01844).
-
-```bash
-cd lerobot && lerobot-train \
- --policy.path=lerobot/smolvla_base \
- --dataset.repo_id=${HF_USER}/mydataset \
- --batch_size=64 \
- --steps=20000 \
- --output_dir=outputs/train/my_smolvla \
- --job_name=my_smolvla_training \
- --policy.device=cuda \
- --wandb.enable=true
-```
-
-
- You can start with a small batch size and increase it incrementally, if the
- GPU allows it, as long as loading times remain short.
-
-
-Fine-tuning is an art. For a complete overview of the options for finetuning, run
-
-```bash
-lerobot-train --help
-```
-
-
-
-
-
- Figure 2: Comparison of SmolVLA across task variations. From left to right:
- (1) pick-place cube counting, (2) pick-place cube counting, (3) pick-place
- cube counting under perturbations, and (4) generalization on pick-and-place
- of the lego block with real-world SO101.
-
-
-
-## Evaluate the finetuned model and run it in real-time
-
-As when recording an episode, it is recommended that you are logged in to the Hugging Face Hub; you can follow the corresponding steps in [Record a dataset](./il_robots).
-Once you are logged in, you can run inference in your setup by doing:
-
-```bash
-lerobot-record \
- --robot.type=so101_follower \
- --robot.port=/dev/ttyACM0 \ # <- Use your port
- --robot.id=my_blue_follower_arm \ # <- Use your robot id
- --robot.cameras="{ front: {type: opencv, index_or_path: 8, width: 640, height: 480, fps: 30}}" \ # <- Use your cameras
- --dataset.single_task="Grasp a lego block and put it in the bin." \ # <- Use the same task description you used in your dataset recording
- --dataset.repo_id=${HF_USER}/eval_DATASET_NAME_test \ # <- This will be the dataset name on HF Hub
- --dataset.episode_time_s=50 \
- --dataset.num_episodes=10 \
- # <- Teleop optional if you want to teleoperate in between episodes \
- # --teleop.type=so100_leader \
- # --teleop.port=/dev/ttyACM0 \
- # --teleop.id=my_red_leader_arm \
- --policy.path=${HF_USER}/FINETUNE_MODEL_NAME # <- Use your fine-tuned model
-```
-
-Depending on your evaluation setup, you can configure the duration and the number of episodes to record for your evaluation suite.
diff --git a/lerobot/docs/source/so100.mdx b/lerobot/docs/source/so100.mdx
deleted file mode 100644
index e2df8ef69c83e2f09bda362fa98366513f2bbdc2..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/so100.mdx
+++ /dev/null
@@ -1,640 +0,0 @@
-# SO-100
-
-In the steps below, we explain how to assemble the SO-100 robot.
-
-## Source the parts
-
-Follow this [README](https://github.com/TheRobotStudio/SO-ARM100/blob/main/SO100.md). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts, along with advice if it's your first time printing or if you don't own a 3D printer.
-
-## Install LeRobot 🤗
-
-To install LeRobot, follow our [Installation Guide](./installation)
-
-In addition to these instructions, you need to install the Feetech SDK:
-
-```bash
-pip install -e ".[feetech]"
-```
-
-## Configure the motors
-
-**Note:**
-Unlike the SO-101, the motor connectors are not easily accessible once the arm is assembled, so the configuration step must be done beforehand.
-
-### 1. Find the USB ports associated with each arm
-
-To find the port for each bus servo adapter, run this script:
-
-```bash
-lerobot-find-port
-```
-
-
-
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the USB cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/tty.usbmodem575E0032081`, corresponding to your leader or follower arm.
-
-
-
-
-On Linux, you might need to give access to the USB ports by running:
-
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM1
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/ttyACM1`, corresponding to your leader or follower arm.
-
-
-
-
-### 2. Set the motor ids and baudrates
-
-Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to set a unique, different id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
-
-To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
-
-If you are repurposing motors from another robot, you will probably also need to perform this step as the ids and baudrate likely won't match.
-
-#### Follower
-
-Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then, run the following command or the API example with the port you got from the previous step. You'll also need to give your follower arm a name with the `id` parameter.
-
-For a visual reference on how to set the motor ids, please refer to [this video](https://huggingface.co/docs/lerobot/en/so101#setup-motors-video), where we follow the process for the SO101 arm.
-
-
-
-
-```bash
-lerobot-setup-motors \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
-```
-
-
-
-
-
-```python
-from lerobot.robots.so_follower import SO100Follower, SO100FollowerConfig
-
-config = SO100FollowerConfig(
- port="/dev/tty.usbmodem585A0076841",
- id="my_awesome_follower_arm",
-)
-follower = SO100Follower(config)
-follower.setup_motors()
-```
-
-
-
-
-
-You should see the following instruction
-
-```
-Connect the controller board to the 'gripper' motor only and press enter.
-```
-
-As instructed, plug the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. As you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
-
-
-Troubleshooting
-
-If you get an error at that point, check your cables and make sure they are plugged in properly:
-
-
-
-- Power supply
-- USB cable between your computer and the controller board
-- The 3-pin cable from the controller board to the motor
-
-
-If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
-
-
-
-You should then see the following message:
-
-```
-'gripper' motor id set to 6
-```
-
-Followed by the next instruction:
-
-```
-Connect the controller board to the 'wrist_roll' motor only and press enter.
-```
-
-You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.
-
-Repeat the operation for each motor as instructed.
-
-> [!TIP]
-> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
-
-When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
-
-#### Leader
-
-Do the same steps for the leader arm.
-
-
-
-```bash
-lerobot-setup-motors \
- --teleop.type=so100_leader \
- --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
-```
-
-
-
-
-```python
-from lerobot.teleoperators.so_leader import SO100Leader, SO100LeaderConfig
-
-config = SO100LeaderConfig(
- port="/dev/tty.usbmodem585A0076841",
- id="my_awesome_leader_arm",
-)
-leader = SO100Leader(config)
-leader.setup_motors()
-```
-
-
-
-
-
-## Step-by-Step Assembly Instructions
-
-### Remove the gears of the 6 leader motors
-
-
-Video removing gears
-
-
-
-
-
-
-
-Follow the video to remove the gears from the leader arm's motors. With the gears removed, only the motor's position encoder is used, which reduces friction and makes the leader arm easier to operate.
-
-### Clean Parts
-
-Remove all support material from the 3D-printed parts. The easiest way to do this is using a small screwdriver to get underneath the support material.
-
-### Additional Guidance
-
-
-Video assembling arms
-
-
-
-
-
-
-
-**Note:**
-This video provides visual guidance for assembling the arms, but it doesn't specify when or how to do the wiring. Inserting the cables beforehand is much easier than doing it afterward. The first arm may take a bit more than 1 hour to assemble, but once you get used to it, you can assemble the second arm in under 1 hour.
-
----
-
-### First Motor
-
-**Step 2: Insert Wires**
-
-- Insert two wires into the first motor.
-
-
-
-**Step 3: Install in Base**
-
-- Place the first motor into the base.
-
-
-
-**Step 4: Secure Motor**
-
-- Fasten the motor with 4 screws. Two from the bottom and two from the top.
-
-**Step 5: Attach Motor Holder**
-
-- Slide over the first motor holder and fasten it using two screws (one on each side).
-
-
-
-**Step 6: Attach Motor Horns**
-
-- Install both motor horns, securing the top horn with a screw. Try not to move the motor position when attaching the motor horn, especially for the leader arms, where we removed the gears.
-
-
-
-
-
- Video adding motor horn
-
-
-
-
-**Step 7: Attach Shoulder Part**
-
-- Route one wire to the back of the robot and the other to the left or towards you (see photo).
-- Attach the shoulder part.
-
-
-
-**Step 8: Secure Shoulder**
-
-- Tighten the shoulder part with 4 screws on top and 4 on the bottom
- _(access bottom holes by turning the shoulder)._
-
----
-
-### Second Motor Assembly
-
-**Step 9: Install Motor 2**
-
-- Slide the second motor in from the top and link the wire from motor 1 to motor 2.
-
-
-
-**Step 10: Attach Shoulder Holder**
-
-- Add the shoulder motor holder.
-- Ensure the wire from motor 1 to motor 2 goes behind the holder while the other wire is routed upward (see photo).
-- This part can be tight to assemble; you can use a workbench, as shown in the image, or a similar setup to push the part around the motor.
-
-
-
-
-
-
-
-**Step 11: Secure Motor 2**
-
-- Fasten the second motor with 4 screws.
-
-**Step 12: Attach Motor Horn**
-
-- Attach both motor horns to motor 2, again using the horn screw.
-
-**Step 13: Attach Base**
-
-- Install the base attachment using 2 screws.
-
-
-
-**Step 14: Attach Upper Arm**
-
-- Attach the upper arm with 4 screws on each side.
-
-
-
----
-
-### Third Motor Assembly
-
-**Step 15: Install Motor 3**
-
-- Route the motor cable from motor 2 through the cable holder to motor 3, then secure motor 3 with 4 screws.
-
-**Step 16: Attach Motor Horn**
-
-- Attach both motor horns to motor 3 and secure one again with a horn screw.
-
-
-
-**Step 17: Attach Forearm**
-
-- Connect the forearm to motor 3 using 4 screws on each side.
-
-
-
----
-
-### Fourth Motor Assembly
-
-**Step 18: Install Motor 4**
-
-- Slide in motor 4, attach the cable from motor 3, and secure the cable in its holder with a screw.
-
-
-
-
-
-
-**Step 19: Attach Motor Holder 4**
-
-- Install the fourth motor holder (a tight fit). Ensure one wire is routed upward and the wire from motor 3 is routed downward (see photo).
-
-
-
-**Step 20: Secure Motor 4 & Attach Horn**
-
-- Fasten motor 4 with 4 screws and attach its motor horns, using a horn screw for one of them.
-
-
-
----
-
-### Wrist Assembly
-
-**Step 21: Install Motor 5**
-
-- Insert motor 5 into the wrist holder and secure it with 2 front screws.
-
-
-
-**Step 22: Attach Wrist**
-
-- Connect the wire from motor 4 to motor 5, and already insert the other wire for the gripper.
-- Secure the wrist to motor 4 using 4 screws on both sides.
-
-
-
-**Step 23: Attach Wrist Horn**
-
-- Install only one motor horn on the wrist motor and secure it with a horn screw.
-
-
-
----
-
-### Follower Configuration
-
-**Step 24: Attach Gripper**
-
-- Attach the gripper to motor 5.
-
-
-
-**Step 25: Install Gripper Motor**
-
-- Insert the gripper motor, connect the motor wire from motor 5 to motor 6, and secure it with 3 screws on each side.
-
-
-
-**Step 26: Attach Gripper Horn & Claw**
-
-- Attach the motor horns and again use a horn screw.
-- Install the gripper claw and secure it with 4 screws on both sides.
-
-
-
-**Step 27: Mount Controller**
-
-- Attach the motor controller to the back of the robot.
-
-
-
-
-
-
-_Assembly complete – proceed to Leader arm assembly._
-
----
-
-### Leader Configuration
-
-For the leader configuration, perform **Steps 1–23**. Make sure that you removed the motor gears from the motors.
-
-**Step 24: Attach Leader Holder**
-
-- Mount the leader holder onto the wrist and secure it with a screw.
-
-
-
-**Step 25: Attach Handle**
-
-- Attach the handle to motor 5 using 4 screws.
-
-
-
-**Step 26: Install Gripper Motor**
-
-- Insert the gripper motor, secure it with 3 screws on each side, attach a motor horn using a horn screw, and connect the motor wire.
-
-
-
-**Step 27: Attach Trigger**
-
-- Attach the follower trigger with 4 screws.
-
-
-
-**Step 28: Mount Controller**
-
-- Attach the motor controller to the back of the robot.
-
-
-
-
-
-
-## Calibrate
-
-Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
-The calibration process is very important because it allows a neural network trained on one robot to work on another.
-
-#### Follower
-
-Run the following command or API example to calibrate the follower arm:
-
-
-
-
-```bash
-lerobot-calibrate \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
- --robot.id=my_awesome_follower_arm # <- Give the robot a unique name
-```
-
-
-
-
-
-```python
-from lerobot.robots.so_follower import SO100FollowerConfig, SO100Follower
-
-config = SO100FollowerConfig(
- port="/dev/tty.usbmodem585A0076891",
- id="my_awesome_follower_arm",
-)
-
-follower = SO100Follower(config)
-follower.connect(calibrate=False)
-follower.calibrate()
-follower.disconnect()
-```
-
-
-
-
-
-We unified the calibration method for most robots; the calibration steps for this SO100 arm are the same as for the Koch and SO101. First, move the robot to the position where each joint is in the middle of its range, then press `Enter`. Second, move all joints through their full range of motion. A reference video of this process for the SO101 can be found [here](https://huggingface.co/docs/lerobot/en/so101#calibration-video).
-
-#### Leader
-
-Do the same steps to calibrate the leader arm, run the following command or API example:
-
-
-
-
-```bash
-lerobot-calibrate \
- --teleop.type=so100_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
- --teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
-```
-
-
-
-
-
-```python
-from lerobot.teleoperators.so_leader import SO100LeaderConfig, SO100Leader
-
-config = SO100LeaderConfig(
- port="/dev/tty.usbmodem58760431551",
- id="my_awesome_leader_arm",
-)
-
-leader = SO100Leader(config)
-leader.connect(calibrate=False)
-leader.calibrate()
-leader.disconnect()
-```
-
-
-
-
-
-Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/lerobot/docs/source/so101.mdx b/lerobot/docs/source/so101.mdx
deleted file mode 100644
index 3714a01a65bbdd35d9e70a81c538a27b6ee949bd..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/so101.mdx
+++ /dev/null
@@ -1,436 +0,0 @@
-# SO-101
-
-In the steps below, we explain how to assemble our flagship robot, the SO-101.
-
-## Source the parts
-
-Follow this [README](https://github.com/TheRobotStudio/SO-ARM100). It contains the bill of materials, with a link to source the parts, as well as the instructions to 3D print the parts,
-along with advice if it's your first time printing or if you don't own a 3D printer.
-
-## Install LeRobot 🤗
-
-To install LeRobot, follow our [Installation Guide](./installation)
-
-In addition to these instructions, you need to install the Feetech SDK:
-
-```bash
-pip install -e ".[feetech]"
-```
-
-## Step-by-Step Assembly Instructions
-
-The follower arm uses 6x STS3215 motors with 1/345 gearing. The leader, however, uses three differently geared motor types to make sure it can both sustain its own weight and be moved without requiring much force. The table below shows which motor is needed for which joint.
-
-| Leader-Arm Axis | Motor | Gear Ratio |
-| ------------------- | :---: | :--------: |
-| Base / Shoulder Pan | 1 | 1 / 191 |
-| Shoulder Lift | 2 | 1 / 345 |
-| Elbow Flex | 3 | 1 / 191 |
-| Wrist Flex | 4 | 1 / 147 |
-| Wrist Roll | 5 | 1 / 147 |
-| Gripper | 6 | 1 / 147 |
-
-## Configure the motors
-
-### 1. Find the USB ports associated with each arm
-
-To find the port for each bus servo adapter, connect the MotorBus to your computer via USB and plug in its power supply. Then run the following script, and disconnect the MotorBus when prompted:
-
-```bash
-lerobot-find-port
-```
-
-
-
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/tty.usbmodem575E0032081', '/dev/tty.usbmodem575E0031751']
-Remove the USB cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/tty.usbmodem575E0032081
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/tty.usbmodem575E0032081`, corresponding to your leader or follower arm.
-
-
-
-
-On Linux, you might need to give access to the USB ports by running:
-
-```bash
-sudo chmod 666 /dev/ttyACM0
-sudo chmod 666 /dev/ttyACM1
-```
-
-Example output:
-
-```
-Finding all available ports for the MotorBus.
-['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the usb cable from your MotorsBus and press Enter when done.
-
-[...Disconnect corresponding leader or follower arm and press Enter...]
-
-The port of this MotorsBus is /dev/ttyACM1
-Reconnect the USB cable.
-```
-
-Here, the found port is `/dev/ttyACM1`, corresponding to your leader or follower arm.
-
-
-
-
-### 2. Set the motor ids and baudrates
-
-Each motor is identified by a unique id on the bus. When brand new, motors usually come with a default id of `1`. For the communication to work properly between the motors and the controller, we first need to set a unique, different id to each motor. Additionally, the speed at which data is transmitted on the bus is determined by the baudrate. In order to talk to each other, the controller and all the motors need to be configured with the same baudrate.
-
-To that end, we first need to connect to each motor individually with the controller in order to set these. Since we will write these parameters in the non-volatile section of the motors' internal memory (EEPROM), we'll only need to do this once.
-
-If you are repurposing motors from another robot, you will probably also need to perform this step as the ids and baudrate likely won't match.
-
-The video below shows the sequence of steps for setting the motor ids.
-
-##### Setup motors video
-
-
-
-
-
-#### Follower
-
-Connect the USB cable from your computer and the power supply to the follower arm's controller board. Then, run the following command or the API example with the port you got from the previous step. You'll also need to give your follower arm a name with the `id` parameter.
-
-
-
-
-```bash
-lerobot-setup-motors \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem585A0076841 # <- paste here the port found at previous step
-```
-
-
-
-
-
-```python
-from lerobot.robots.so_follower import SO101Follower, SO101FollowerConfig
-
-config = SO101FollowerConfig(
- port="/dev/tty.usbmodem585A0076841",
- id="my_awesome_follower_arm",
-)
-follower = SO101Follower(config)
-follower.setup_motors()
-```
-
-
-
-
-
-You should see the following instruction
-
-```bash
-Connect the controller board to the 'gripper' motor only and press enter.
-```
-
-As instructed, plug the gripper's motor. Make sure it's the only motor connected to the board, and that the motor itself is not yet daisy-chained to any other motor. As you press `[Enter]`, the script will automatically set the id and baudrate for that motor.
-
-
-Troubleshooting
-
-If you get an error at that point, check your cables and make sure they are plugged in properly:
-
-
-
-- Power supply
-- USB cable between your computer and the controller board
-- The 3-pin cable from the controller board to the motor
-
-
-If you are using a Waveshare controller board, make sure that the two jumpers are set on the `B` channel (USB).
-
-
-
-You should then see the following message:
-
-```bash
-'gripper' motor id set to 6
-```
-
-Followed by the next instruction:
-
-```bash
-Connect the controller board to the 'wrist_roll' motor only and press enter.
-```
-
-You can disconnect the 3-pin cable from the controller board, but you can leave it connected to the gripper motor on the other end, as it will already be in the right place. Now, plug in another 3-pin cable to the wrist roll motor and connect it to the controller board. As with the previous motor, make sure it is the only motor connected to the board and that the motor itself isn't connected to any other one.
-
-Repeat the operation for each motor as instructed.
-
-> [!TIP]
-> Check your cabling at each step before pressing Enter. For instance, the power supply cable might disconnect as you manipulate the board.
-
-When you are done, the script will simply finish, at which point the motors are ready to be used. You can now plug the 3-pin cable from each motor to the next one, and the cable from the first motor (the 'shoulder pan' with id=1) to the controller board, which can now be attached to the base of the arm.
-
-#### Leader
-
-Do the same steps for the leader arm.
-
-
-
-
-```bash
-lerobot-setup-motors \
- --teleop.type=so101_leader \
- --teleop.port=/dev/tty.usbmodem575E0031751 # <- paste here the port found at previous step
-```
-
-
-
-
-
-```python
-from lerobot.teleoperators.so_leader import SO101Leader, SO101LeaderConfig
-
-config = SO101LeaderConfig(
- port="/dev/tty.usbmodem585A0076841",
- id="my_awesome_leader_arm",
-)
-leader = SO101Leader(config)
-leader.setup_motors()
-```
-
-
-
-
-
-### Clean Parts
-
-Remove all support material from the 3D-printed parts. The easiest way to do this is using a small screwdriver to get underneath the support material.
-
-It is advisable to plug a 3-pin cable into each motor right after placing it, before continuing assembly.
-
-### Joint 1
-
-- Place the first motor into the base.
-- Fasten the motor with 4 M2x6mm screws (smallest screws). Two from the top and two from the bottom.
-- Slide over the first motor holder and fasten it using two M2x6mm screws (one on each side).
-- Install both motor horns, securing the top horn with a M3x6mm screw.
-- Attach the shoulder part.
-- Tighten the shoulder part with 4 M3x6mm screws on top and 4 M3x6mm screws on the bottom.
-- Add the shoulder motor holder.
-
-
-
-
-
-### Joint 2
-
-- Slide the second motor in from the top.
-- Fasten the second motor with 4 M2x6mm screws.
-- Attach both motor horns to motor 2, again use the M3x6mm horn screw.
-- Attach the upper arm with 4 M3x6mm screws on each side.
-
-
-
-
-
-### Joint 3
-
-- Insert motor 3 and fasten using 4 M2x6mm screws
-- Attach both motor horns to motor 3 and secure one again with a M3x6mm horn screw.
-- Connect the forearm to motor 3 using 4 M3x6mm screws on each side.
-
-
-
-
-
-### Joint 4
-
-- Slide over motor holder 4.
-- Slide in motor 4.
-- Fasten motor 4 with 4 M2x6mm screws and attach its motor horns, use a M3x6mm horn screw.
-
-
-
-
-
-### Joint 5
-
-- Insert motor 5 into the wrist holder and secure it with 2 M2x6mm front screws.
-- Install only one motor horn on the wrist motor and secure it with a M3x6mm horn screw.
-- Secure the wrist to motor 4 using 4 M3x6mm screws on both sides.
-
-
-
-
-
-### Gripper / Handle
-
-
-
-
-- Attach the gripper to motor 5: attach it to the motor horn on the wrist using 4 M3x6mm screws.
-- Insert the gripper motor and secure it with 2 M2x6mm screws on each side.
-- Attach the motor horns and again use a M3x6mm horn screw.
-- Install the gripper claw and secure it with 4 M3x6mm screws on both sides.
-
-
-
-
-
-
-
-
-- Mount the leader holder onto the wrist and secure it with 4 M3x6mm screws.
-- Attach the handle to motor 5 using 1 M2x6mm screw.
-- Insert the gripper motor, secure it with 2 M2x6mm screws on each side, attach a motor horn using a M3x6mm horn screw.
-- Attach the follower trigger with 4 M3x6mm screws.
-
-
-
-
-
-
-
-
-## Calibrate
-
-Next, you'll need to calibrate your robot to ensure that the leader and follower arms have the same position values when they are in the same physical position.
-The calibration process is very important because it allows a neural network trained on one robot to work on another.
-
-#### Follower
-
-Run the following command or API example to calibrate the follower arm:
-
-
-
-
-```bash
-lerobot-calibrate \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
- --robot.id=my_awesome_follower_arm # <- Give the robot a unique name
-```
-
-
-
-
-
-```python
-from lerobot.robots.so_follower import SO101FollowerConfig, SO101Follower
-
-config = SO101FollowerConfig(
- port="/dev/tty.usbmodem585A0076891",
- id="my_awesome_follower_arm",
-)
-
-follower = SO101Follower(config)
-follower.connect(calibrate=False)
-follower.calibrate()
-follower.disconnect()
-```
-
-
-
-
-
-The video below shows how to perform the calibration. First, move the robot to the position where all joints are in the middle of their ranges. Then, after pressing `Enter`, move each joint through its full range of motion.
-
-##### Calibration video
-
-
-
-
-
-#### Leader
-
-Do the same steps to calibrate the leader arm, run the following command or API example:
-
-
-
-
-```bash
-lerobot-calibrate \
- --teleop.type=so101_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \ # <- The port of your robot
- --teleop.id=my_awesome_leader_arm # <- Give the robot a unique name
-```
-
-
-
-
-
-```python
-from lerobot.teleoperators.so_leader import SO101LeaderConfig, SO101Leader
-
-config = SO101LeaderConfig(
- port="/dev/tty.usbmodem58760431551",
- id="my_awesome_leader_arm",
-)
-
-leader = SO101Leader(config)
-leader.connect(calibrate=False)
-leader.calibrate()
-leader.disconnect()
-```
-
-
-
-
-
-Congrats 🎉, your robot is all set to learn a task on its own. Start training it by following this tutorial: [Getting started with real-world robots](./il_robots)
-
-> [!TIP]
-> If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
diff --git a/lerobot/docs/source/unitree_g1.mdx b/lerobot/docs/source/unitree_g1.mdx
deleted file mode 100644
index d2986fc119350c8e1cc515b8ac10bd1ae8cbfbc0..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/unitree_g1.mdx
+++ /dev/null
@@ -1,203 +0,0 @@
-# Unitree G1
-
-This guide covers the complete setup process for the Unitree G1 humanoid, from initial connection to running gr00t_wbc locomotion.
-
-## About
-
-We support both the 29-DOF and 23-DOF G1 EDU versions. We introduce:
-
-- **`unitree_g1` robot class** handling low-level read/write from/to the humanoid
-- **ZMQ socket bridge** for remote communication and camera streaming, allowing for remote policy deployment over WLAN, Ethernet, or directly on the robot
-- **Locomotion policies** from NVIDIA gr00t and Amazon FAR Holosoma
-- **Simulation mode** for testing policies in MuJoCo without the physical robot
-
----
-
-## Part 1: Connection Guide
-
-### Step 1: Configure Ethernet Interface
-
-Set a static IP on the same subnet as the robot:
-
-```bash
-# Replace 'enp131s0' with your ethernet interface name (check with `ip a`)
-sudo ip addr flush dev enp131s0
-sudo ip addr add 192.168.123.200/24 dev enp131s0
-sudo ip link set enp131s0 up
-```
-
-**Note**: The G1's Ethernet IP is fixed at `192.168.123.164`. Your computer must use `192.168.123.x` with x ≠ 164.
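-
-Before moving on, it's worth confirming the link is up (the robot should reply on its fixed IP):
-
-```bash
-# Verify connectivity to the G1 over Ethernet
-ping -c 3 192.168.123.164
-```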
-
-### Step 2: SSH into the Robot
-
-```bash
-ssh unitree@192.168.123.164
-# Password: 123
-```
-
-You should now be connected to the G1's Orin.
-
----
-
-## Part 2: Enable WiFi on the Robot
-
-`wlan0` is disabled by default on the G1. To enable it:
-
-### Step 1: Enable WiFi Hardware
-
-```bash
-sudo rfkill unblock wifi
-sudo rfkill unblock all
-
-# Bring up wlan0
-sudo ip link set wlan0 up
-
-# Enable NetworkManager control of wlan0
-sudo nmcli radio wifi on
-sudo nmcli device set wlan0 managed yes
-sudo systemctl restart NetworkManager
-```
-
-### Step 2: Enable Internet Forwarding
-
-**On your laptop:**
-
-```bash
-# Enable IP forwarding
-sudo sysctl -w net.ipv4.ip_forward=1
-
-# Set up NAT (replace wlp132s0f0 with your WiFi interface)
-sudo iptables -t nat -A POSTROUTING -o wlp132s0f0 -s 192.168.123.0/24 -j MASQUERADE
-sudo iptables -A FORWARD -i wlp132s0f0 -o enp131s0 -m state --state RELATED,ESTABLISHED -j ACCEPT
-sudo iptables -A FORWARD -i enp131s0 -o wlp132s0f0 -j ACCEPT
-```
-
-**On the G1:**
-
-```bash
-# Add laptop as default gateway
-sudo ip route del default 2>/dev/null || true
-sudo ip route add default via 192.168.123.200 dev eth0
-echo "nameserver 8.8.8.8" | sudo tee /etc/resolv.conf
-
-# Test connection
-ping -c 3 8.8.8.8
-```
-
-### Step 3: Connect to WiFi Network
-
-```bash
-# List available networks
-nmcli device wifi list
-
-# Connect to your WiFi (example)
-sudo nmcli connection add type wifi ifname wlan0 con-name "YourNetwork" ssid "YourNetwork"
-sudo nmcli connection modify "YourNetwork" wifi-sec.key-mgmt wpa-psk
-sudo nmcli connection modify "YourNetwork" wifi-sec.psk "YourPassword"
-sudo nmcli connection modify "YourNetwork" connection.autoconnect yes
-sudo nmcli connection up "YourNetwork"
-
-# Check WiFi IP address
-ip a show wlan0
-```
-
-### Step 4: SSH Over WiFi
-
-Once connected to WiFi, note the robot's IP address and disconnect the Ethernet cable. You can now SSH over WiFi:
-
-```bash
-ssh unitree@<robot_wifi_ip>
-# Password: 123
-```
-
-Replace `<robot_wifi_ip>` with your robot's actual WiFi IP address.
-
----
-
-## Part 3: Robot Server Setup
-
-### Step 1: Install LeRobot on the Orin
-
-SSH into the robot and install LeRobot:
-
-```bash
-ssh unitree@<robot_wifi_ip>
-
-conda create -y -n lerobot python=3.10
-conda activate lerobot
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-pip install -e '.[unitree_g1]'
-git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
-cd unitree_sdk2_python && pip install -e .
-```
-
-**Note**: The Unitree SDK requires CycloneDDS v0.10.2 to be installed. See the [Unitree SDK documentation](https://github.com/unitreerobotics/unitree_sdk2_python) for details.
-
-### Step 2: Run the Robot Server
-
-On the robot:
-
-```bash
-python src/lerobot/robots/unitree_g1/run_g1_server.py
-```
-
-**Important**: Keep this terminal running. The server must be active for remote control.
-
----
-
-## Part 4: Controlling the robot
-
-With the robot server running, you can now control the robot remotely. Let's launch a locomotion policy:
-
-### Step 1: Install LeRobot on your machine
-
-```bash
-conda create -y -n lerobot python=3.10
-conda activate lerobot
-git clone https://github.com/huggingface/lerobot.git
-cd lerobot
-pip install -e '.[unitree_g1]'
-git clone https://github.com/unitreerobotics/unitree_sdk2_python.git
-cd unitree_sdk2_python && pip install -e .
-```
-
-### Step 2: Update Robot IP in Config
-
-Edit the config file to match your robot's WiFi IP:
-
-```python
-# In src/lerobot/robots/unitree_g1/config_unitree_g1.py
-robot_ip: str = "" # Replace with your robot's WiFi IP.
-```
-
-### Step 3: Run the Locomotion Policy
-
-```bash
-# Run GR00T locomotion controller
-python examples/unitree_g1/gr00t_locomotion.py --repo-id "nepyope/GR00T-WholeBodyControl_g1"
-
-# Run Holosoma locomotion controller
-python examples/unitree_g1/holosoma_locomotion.py
-```
-
-Press `Ctrl+C` to stop the policy.
-
----
-
-## Running in Simulation Mode (MuJoCo)
-
-You can test policies in MuJoCo before unleashing them on the physical robot. To do so, simply set `is_simulation=True` in the config.
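-
-For example (mirroring the config snippet above; the exact field location in `config_unitree_g1.py` is assumed from the text):
-
-```python
-# In src/lerobot/robots/unitree_g1/config_unitree_g1.py
-is_simulation: bool = True  # run policies against MuJoCo instead of the real robot
-```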
-
-## Additional Resources
-
-- [Unitree SDK Documentation](https://github.com/unitreerobotics/unitree_sdk2_python)
-- [GR00T-WholeBodyControl](https://github.com/NVlabs/GR00T-WholeBodyControl)
-- [Holosoma](https://github.com/amazon-far/holosoma)
-- [LeRobot Documentation](https://github.com/huggingface/lerobot)
-- [Unitree_IL_Lerobot](https://github.com/unitreerobotics/unitree_IL_lerobot)
-
----
-
-_Last updated: December 2025_
diff --git a/lerobot/docs/source/using_dataset_tools.mdx b/lerobot/docs/source/using_dataset_tools.mdx
deleted file mode 100644
index 9b9885abddb78802b4e3ad6b26fe3d84bb958a90..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/using_dataset_tools.mdx
+++ /dev/null
@@ -1,203 +0,0 @@
-# Using Dataset Tools
-
-This guide covers the dataset tools utilities available in LeRobot for modifying and editing existing datasets.
-
-## Overview
-
-LeRobot provides several utilities for manipulating datasets:
-
-1. **Delete Episodes** - Remove specific episodes from a dataset
-2. **Split Dataset** - Divide a dataset into multiple smaller datasets
-3. **Merge Datasets** - Combine multiple datasets into one. The datasets must have identical features, and episodes are concatenated in the order specified in `repo_ids`
-4. **Add Features** - Add new features to a dataset
-5. **Remove Features** - Remove features from a dataset
-6. **Convert to Video** - Convert image-based datasets to video format for efficient storage
-
-The core implementation is in `lerobot.datasets.dataset_tools`.
-An example script detailing how to use the tools API is available in `examples/dataset/use_dataset_tools.py`.
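-
-As a rough sketch of the Python API (the function name below is assumed to mirror the CLI operation names; treat `examples/dataset/use_dataset_tools.py` as the authoritative reference):
-
-```python
-from lerobot.datasets.dataset_tools import delete_episodes  # name assumed from the operation list
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-# Load a dataset and drop three episodes; see the example script for exact signatures.
-dataset = LeRobotDataset("lerobot/pusht")
-filtered = delete_episodes(dataset, episode_indices=[0, 2, 5])
-```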
-
-## Command-Line Tool: lerobot-edit-dataset
-
-`lerobot-edit-dataset` is a command-line script for editing datasets. It can be used to delete episodes, split datasets, merge datasets, add features, remove features, and convert image datasets to video format.
-
-Run `lerobot-edit-dataset --help` for more information on the configuration of each operation.
-
-### Usage Examples
-
-#### Delete Episodes
-
-Remove specific episodes from a dataset. This is useful for filtering out undesired data.
-
-```bash
-# Delete episodes 0, 2, and 5 (modifies original dataset)
-lerobot-edit-dataset \
- --repo_id lerobot/pusht \
- --operation.type delete_episodes \
- --operation.episode_indices "[0, 2, 5]"
-
-# Delete episodes and save to a new dataset (preserves original dataset)
-lerobot-edit-dataset \
- --repo_id lerobot/pusht \
- --new_repo_id lerobot/pusht_after_deletion \
- --operation.type delete_episodes \
- --operation.episode_indices "[0, 2, 5]"
-```
-
-#### Split Dataset
-
-Divide a dataset into multiple subsets.
-
-```bash
-# Split by fractions (e.g. 80% train, 10% test, 10% val)
-lerobot-edit-dataset \
- --repo_id lerobot/pusht \
- --operation.type split \
- --operation.splits '{"train": 0.8, "test": 0.1, "val": 0.1}'
-
-# Split by specific episode indices
-lerobot-edit-dataset \
- --repo_id lerobot/pusht \
- --operation.type split \
- --operation.splits '{"task1": [0, 1, 2, 3], "task2": [4, 5]}'
-```
-
-There are no constraints on the split names; they can be chosen freely by the user. The resulting datasets are saved under the repo id with the split name appended, e.g. `lerobot/pusht_train`, `lerobot/pusht_task1`, `lerobot/pusht_task2`.
-
-#### Merge Datasets
-
-Combine multiple datasets into a single dataset.
-
-```bash
-# Merge train and validation splits back into one dataset
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_merged \
- --operation.type merge \
- --operation.repo_ids "['lerobot/pusht_train', 'lerobot/pusht_val']"
-```
-
-#### Remove Features
-
-Remove features from a dataset.
-
-```bash
-# Remove a camera feature
-lerobot-edit-dataset \
- --repo_id lerobot/pusht \
- --operation.type remove_feature \
- --operation.feature_names "['observation.images.top']"
-```
-
-#### Convert to Video
-
-Convert an image-based dataset to video format, creating a new LeRobotDataset where images are stored as videos. This is useful for reducing storage requirements and improving data loading performance. The new dataset will have the exact same structure as the original, but with images encoded as MP4 videos in the proper LeRobot format.
-
-```bash
-# Local-only: Save to a custom output directory (no hub push)
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_image \
- --operation.type convert_to_video \
- --operation.output_dir /path/to/output/pusht_video
-
-# Save with new repo_id (local storage)
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_image \
- --new_repo_id lerobot/pusht_video \
- --operation.type convert_to_video
-
-# Convert and push to Hugging Face Hub
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_image \
- --new_repo_id lerobot/pusht_video \
- --operation.type convert_to_video \
- --push_to_hub true
-
-# Convert with custom video codec and quality settings
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_image \
- --operation.type convert_to_video \
- --operation.output_dir outputs/pusht_video \
- --operation.vcodec libsvtav1 \
- --operation.pix_fmt yuv420p \
- --operation.g 2 \
- --operation.crf 30
-
-# Convert only specific episodes
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_image \
- --operation.type convert_to_video \
- --operation.output_dir outputs/pusht_video \
- --operation.episode_indices "[0, 1, 2, 5, 10]"
-
-# Convert with multiple workers for parallel processing
-lerobot-edit-dataset \
- --repo_id lerobot/pusht_image \
- --operation.type convert_to_video \
- --operation.output_dir outputs/pusht_video \
- --operation.num_workers 8
-```
-
-**Parameters:**
-
-- `output_dir`: Custom output directory (optional - by default uses `new_repo_id` or `{repo_id}_video`)
-- `vcodec`: Video codec to use - options: `h264`, `hevc`, `libsvtav1` (default: `libsvtav1`)
-- `pix_fmt`: Pixel format - options: `yuv420p`, `yuv444p` (default: `yuv420p`)
-- `g`: Group of pictures (GOP) size - lower values give better quality but larger files (default: 2)
-- `crf`: Constant rate factor - lower values give better quality but larger files, 0 is lossless (default: 30)
-- `fast_decode`: Fast decode tuning option (default: 0)
-- `episode_indices`: List of specific episodes to convert (default: all episodes)
-- `num_workers`: Number of parallel workers for processing (default: 4)
-
-**Note:** The resulting dataset will be a proper LeRobotDataset with all cameras encoded as videos in the `videos/` directory, with parquet files containing only metadata (no raw image data). All episodes, stats, and tasks are preserved.
-
-### Push to Hub
-
-Add the `--push_to_hub true` flag to any command to automatically upload the resulting dataset to the Hugging Face Hub:
-
-```bash
-lerobot-edit-dataset \
- --repo_id lerobot/pusht \
- --new_repo_id lerobot/pusht_after_deletion \
- --operation.type delete_episodes \
- --operation.episode_indices "[0, 2, 5]" \
- --push_to_hub true
-```
-
-There is also a tool for adding features to a dataset; it is not yet covered by `lerobot-edit-dataset`.
-
-# Dataset Visualization
-
-## Online Visualization
-
-When you record a dataset using `lerobot`, it automatically uploads to the Hugging Face Hub unless you specify otherwise. To view the dataset online, use our **LeRobot Dataset Visualizer**, available at:
-https://huggingface.co/spaces/lerobot/visualize_dataset
-
-## Local Visualization
-
-You can also visualize episodes from a dataset locally using our command-line tool.
-
-**From the Hugging Face Hub:**
-
-```bash
-lerobot-dataset-viz \
- --repo-id lerobot/pusht \
- --episode-index 0
-```
-
-**From a local folder:**
-Add the `--root` option and set `--mode local`. For example, to search in `./my_local_data_dir/lerobot/pusht`:
-
-```bash
-lerobot-dataset-viz \
- --repo-id lerobot/pusht \
- --root ./my_local_data_dir \
- --mode local \
- --episode-index 0
-```
-
-Once executed, the tool opens `rerun.io` and displays the camera streams, robot states, and actions for the selected episode.
-
-For advanced usage—including visualizing datasets stored on a remote server—run:
-
-```bash
-lerobot-dataset-viz --help
-```
diff --git a/lerobot/src/lerobot/cameras/opencv/__init__.py b/lerobot/src/lerobot/cameras/opencv/__init__.py
deleted file mode 100644
index bb7c12a7aa99a2f615ebe326dcc72226d7f48485..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/opencv/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .camera_opencv import OpenCVCamera
-from .configuration_opencv import OpenCVCameraConfig
-
-__all__ = ["OpenCVCamera", "OpenCVCameraConfig"]
diff --git a/lerobot/src/lerobot/cameras/opencv/camera_opencv.py b/lerobot/src/lerobot/cameras/opencv/camera_opencv.py
deleted file mode 100644
index 2026bad42e9ff325560d748669742ffb00ee5168..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/opencv/camera_opencv.py
+++ /dev/null
@@ -1,541 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Provides the OpenCVCamera class for capturing frames from cameras using OpenCV.
-"""
-
-import logging
-import math
-import os
-import platform
-import time
-from pathlib import Path
-from threading import Event, Lock, Thread
-from typing import Any
-
-from numpy.typing import NDArray # type: ignore # TODO: add type stubs for numpy.typing
-
-# Fix MSMF hardware transform compatibility for Windows before importing cv2
-if platform.system() == "Windows" and "OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS" not in os.environ:
- os.environ["OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS"] = "0"
-import cv2 # type: ignore # TODO: add type stubs for OpenCV
-
-from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
-
-from ..camera import Camera
-from ..utils import get_cv2_backend, get_cv2_rotation
-from .configuration_opencv import ColorMode, OpenCVCameraConfig
-
-# NOTE(Steven): The maximum opencv device index depends on your operating system. For instance,
-# if you have 3 cameras, they should be associated to index 0, 1, and 2. This is the case
-# on MacOS. However, on Ubuntu, the indices can be arbitrary, such as 6, 16, 23.
-# When you change the USB port or reboot the computer, the operating system might
-# treat the same cameras as new devices. Thus we select a higher bound to search indices.
-MAX_OPENCV_INDEX = 60
-
-logger = logging.getLogger(__name__)
-
-
-class OpenCVCamera(Camera):
- """
- Manages camera interactions using OpenCV for efficient frame recording.
-
- This class provides a high-level interface to connect to, configure, and read
- frames from cameras compatible with OpenCV's VideoCapture. It supports both
- synchronous and asynchronous frame reading.
-
- An OpenCVCamera instance requires a camera index (e.g., 0) or a device path
- (e.g., '/dev/video0' on Linux). Camera indices can be unstable across reboots
- or port changes, especially on Linux. Use the provided utility script to find
- available camera indices or paths:
- ```bash
- lerobot-find-cameras opencv
- ```
-
- The camera's default settings (FPS, resolution, color mode) are used unless
- overridden in the configuration.
-
- Example:
- ```python
- from lerobot.cameras.opencv import OpenCVCamera
- from lerobot.cameras.configuration_opencv import OpenCVCameraConfig, ColorMode, Cv2Rotation
-
- # Basic usage with camera index 0
- config = OpenCVCameraConfig(index_or_path=0)
- camera = OpenCVCamera(config)
- camera.connect()
-
- # Read 1 frame synchronously
- color_image = camera.read()
- print(color_image.shape)
-
- # Read 1 frame asynchronously
- async_image = camera.async_read()
-
- # When done, properly disconnect the camera using
- camera.disconnect()
-
- # Example with custom settings
- custom_config = OpenCVCameraConfig(
- index_or_path='/dev/video0', # Or use an index
- fps=30,
- width=1280,
- height=720,
- color_mode=ColorMode.RGB,
- rotation=Cv2Rotation.ROTATE_90
- )
- custom_camera = OpenCVCamera(custom_config)
- # ... connect, read, disconnect ...
- ```
- """
-
- def __init__(self, config: OpenCVCameraConfig):
- """
- Initializes the OpenCVCamera instance.
-
- Args:
- config: The configuration settings for the camera.
- """
- super().__init__(config)
-
- self.config = config
- self.index_or_path = config.index_or_path
-
- self.fps = config.fps
- self.color_mode = config.color_mode
- self.warmup_s = config.warmup_s
-
- self.videocapture: cv2.VideoCapture | None = None
-
- self.thread: Thread | None = None
- self.stop_event: Event | None = None
- self.frame_lock: Lock = Lock()
- self.latest_frame: NDArray[Any] | None = None
- self.new_frame_event: Event = Event()
-
- self.rotation: int | None = get_cv2_rotation(config.rotation)
- self.backend: int = get_cv2_backend()
-
- if self.height and self.width:
- self.capture_width, self.capture_height = self.width, self.height
- if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]:
- self.capture_width, self.capture_height = self.height, self.width
-
- def __str__(self) -> str:
- return f"{self.__class__.__name__}({self.index_or_path})"
-
- @property
- def is_connected(self) -> bool:
- """Checks if the camera is currently connected and opened."""
- return isinstance(self.videocapture, cv2.VideoCapture) and self.videocapture.isOpened()
-
- def connect(self, warmup: bool = True) -> None:
- """
- Connects to the OpenCV camera specified in the configuration.
-
- Initializes the OpenCV VideoCapture object, sets desired camera properties
- (FPS, width, height), and performs initial checks.
-
- Raises:
- DeviceAlreadyConnectedError: If the camera is already connected.
- ConnectionError: If the specified camera index/path is not found or the camera is found but fails to open.
- RuntimeError: If the camera opens but fails to apply requested FPS/resolution settings.
- """
- if self.is_connected:
- raise DeviceAlreadyConnectedError(f"{self} is already connected.")
-
- # Use 1 thread for OpenCV operations to avoid potential conflicts or
- # blocking in multi-threaded applications, especially during data collection.
- cv2.setNumThreads(1)
-
- self.videocapture = cv2.VideoCapture(self.index_or_path, self.backend)
-
- if not self.videocapture.isOpened():
- self.videocapture.release()
- self.videocapture = None
- raise ConnectionError(
- f"Failed to open {self}.Run `lerobot-find-cameras opencv` to find available cameras."
- )
-
- self._configure_capture_settings()
-
- if warmup:
- start_time = time.time()
- while time.time() - start_time < self.warmup_s:
- self.read()
- time.sleep(0.1)
-
- logger.info(f"{self} connected.")
-
- def _configure_capture_settings(self) -> None:
- """
- Applies the specified FOURCC, FPS, width, and height settings to the connected camera.
-
- This method attempts to set the camera properties via OpenCV. It checks if
- the camera successfully applied the settings and raises an error if not.
- FOURCC is set first (if specified) as it can affect the available FPS and resolution options.
-
-        The desired FOURCC, FPS, width, and height are taken from `self.config`. FPS, width,
-        and height left as None are read back from the camera instead of being set; FOURCC is
-        only applied when explicitly specified.
-
- Raises:
- RuntimeError: If the camera fails to set any of the specified properties
- to the requested value.
- DeviceNotConnectedError: If the camera is not connected when attempting
- to configure settings.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"Cannot configure settings for {self} as it is not connected.")
-
- # Set FOURCC first (if specified) as it can affect available FPS/resolution options
- if self.config.fourcc is not None:
- self._validate_fourcc()
- if self.videocapture is None:
- raise DeviceNotConnectedError(f"{self} videocapture is not initialized")
-
- default_width = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_WIDTH)))
- default_height = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
-
-        if self.width is None or self.height is None:
-            self.capture_width, self.capture_height = default_width, default_height
-            self.width, self.height = default_width, default_height
-            if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]:
-                self.width, self.height = default_height, default_width
-        else:
-            self._validate_width_and_height()
-
- if self.fps is None:
- self.fps = self.videocapture.get(cv2.CAP_PROP_FPS)
- else:
- self._validate_fps()
-
- def _validate_fps(self) -> None:
- """Validates and sets the camera's frames per second (FPS)."""
-
- if self.videocapture is None:
- raise DeviceNotConnectedError(f"{self} videocapture is not initialized")
-
- if self.fps is None:
- raise ValueError(f"{self} FPS is not set")
-
- success = self.videocapture.set(cv2.CAP_PROP_FPS, float(self.fps))
- actual_fps = self.videocapture.get(cv2.CAP_PROP_FPS)
- # Use math.isclose for robust float comparison
- if not success or not math.isclose(self.fps, actual_fps, rel_tol=1e-3):
- raise RuntimeError(f"{self} failed to set fps={self.fps} ({actual_fps=}).")
-
- def _validate_fourcc(self) -> None:
- """Validates and sets the camera's FOURCC code."""
-
- fourcc_code = cv2.VideoWriter_fourcc(*self.config.fourcc)
-
- if self.videocapture is None:
- raise DeviceNotConnectedError(f"{self} videocapture is not initialized")
-
- success = self.videocapture.set(cv2.CAP_PROP_FOURCC, fourcc_code)
- actual_fourcc_code = self.videocapture.get(cv2.CAP_PROP_FOURCC)
-
- # Convert actual FOURCC code back to string for comparison
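-        # (OpenCV packs the four ASCII characters into one 32-bit integer,
-        # least-significant byte first, so we unpack it byte by byte.)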
- actual_fourcc_code_int = int(actual_fourcc_code)
- actual_fourcc = "".join([chr((actual_fourcc_code_int >> 8 * i) & 0xFF) for i in range(4)])
-
- if not success or actual_fourcc != self.config.fourcc:
- logger.warning(
- f"{self} failed to set fourcc={self.config.fourcc} (actual={actual_fourcc}, success={success}). "
- f"Continuing with default format."
- )
-
- def _validate_width_and_height(self) -> None:
- """Validates and sets the camera's frame capture width and height."""
-
- if self.videocapture is None:
- raise DeviceNotConnectedError(f"{self} videocapture is not initialized")
-
- if self.capture_width is None or self.capture_height is None:
- raise ValueError(f"{self} capture_width or capture_height is not set")
-
- width_success = self.videocapture.set(cv2.CAP_PROP_FRAME_WIDTH, float(self.capture_width))
- height_success = self.videocapture.set(cv2.CAP_PROP_FRAME_HEIGHT, float(self.capture_height))
-
- actual_width = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_WIDTH)))
- if not width_success or self.capture_width != actual_width:
- raise RuntimeError(
- f"{self} failed to set capture_width={self.capture_width} ({actual_width=}, {width_success=})."
- )
-
- actual_height = int(round(self.videocapture.get(cv2.CAP_PROP_FRAME_HEIGHT)))
- if not height_success or self.capture_height != actual_height:
- raise RuntimeError(
- f"{self} failed to set capture_height={self.capture_height} ({actual_height=}, {height_success=})."
- )
-
- @staticmethod
- def find_cameras() -> list[dict[str, Any]]:
- """
- Detects available OpenCV cameras connected to the system.
-
- On Linux, it scans '/dev/video*' paths. On other systems (like macOS, Windows),
- it checks indices from 0 up to `MAX_OPENCV_INDEX`.
-
- Returns:
- List[Dict[str, Any]]: A list of dictionaries,
- where each dictionary contains 'type', 'id' (port index or path),
- and the default profile properties (width, height, fps, format).
- """
- found_cameras_info = []
-
- targets_to_scan: list[str | int]
- if platform.system() == "Linux":
- possible_paths = sorted(Path("/dev").glob("video*"), key=lambda p: p.name)
- targets_to_scan = [str(p) for p in possible_paths]
- else:
- targets_to_scan = [int(i) for i in range(MAX_OPENCV_INDEX)]
-
- for target in targets_to_scan:
- camera = cv2.VideoCapture(target)
- if camera.isOpened():
- default_width = int(camera.get(cv2.CAP_PROP_FRAME_WIDTH))
- default_height = int(camera.get(cv2.CAP_PROP_FRAME_HEIGHT))
- default_fps = camera.get(cv2.CAP_PROP_FPS)
- default_format = camera.get(cv2.CAP_PROP_FORMAT)
-
- # Get FOURCC code and convert to string
- default_fourcc_code = camera.get(cv2.CAP_PROP_FOURCC)
- default_fourcc_code_int = int(default_fourcc_code)
- default_fourcc = "".join([chr((default_fourcc_code_int >> 8 * i) & 0xFF) for i in range(4)])
-
- camera_info = {
- "name": f"OpenCV Camera @ {target}",
- "type": "OpenCV",
- "id": target,
- "backend_api": camera.getBackendName(),
- "default_stream_profile": {
- "format": default_format,
- "fourcc": default_fourcc,
- "width": default_width,
- "height": default_height,
- "fps": default_fps,
- },
- }
-
- found_cameras_info.append(camera_info)
- camera.release()
-
- return found_cameras_info
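-
-    # e.g., from a script (a minimal usage sketch):
-    #     for cam in OpenCVCamera.find_cameras():
-    #         print(cam["id"], cam["default_stream_profile"])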
-
- def read(self, color_mode: ColorMode | None = None) -> NDArray[Any]:
- """
- Reads a single frame synchronously from the camera.
-
- This is a blocking call. It waits for the next available frame from the
- camera hardware via OpenCV.
-
- Args:
- color_mode (Optional[ColorMode]): If specified, overrides the default
- color mode (`self.color_mode`) for this read operation (e.g.,
- request RGB even if default is BGR).
-
- Returns:
- np.ndarray: The captured frame as a NumPy array in the format
- (height, width, channels), using the specified or default
- color mode and applying any configured rotation.
-
- Raises:
- DeviceNotConnectedError: If the camera is not connected.
- RuntimeError: If reading the frame from the camera fails or if the
- received frame dimensions don't match expectations before rotation.
- ValueError: If an invalid `color_mode` is requested.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- start_time = time.perf_counter()
-
- if self.videocapture is None:
- raise DeviceNotConnectedError(f"{self} videocapture is not initialized")
-
- ret, frame = self.videocapture.read()
-
- if not ret or frame is None:
- raise RuntimeError(f"{self} read failed (status={ret}).")
-
- processed_frame = self._postprocess_image(frame, color_mode)
-
- read_duration_ms = (time.perf_counter() - start_time) * 1e3
- logger.debug(f"{self} read took: {read_duration_ms:.1f}ms")
-
- return processed_frame
-
- def _postprocess_image(self, image: NDArray[Any], color_mode: ColorMode | None = None) -> NDArray[Any]:
- """
- Applies color conversion, dimension validation, and rotation to a raw frame.
-
- Args:
- image (np.ndarray): The raw image frame (expected BGR format from OpenCV).
- color_mode (Optional[ColorMode]): The target color mode (RGB or BGR). If None,
- uses the instance's default `self.color_mode`.
-
- Returns:
- np.ndarray: The processed image frame.
-
- Raises:
- ValueError: If the requested `color_mode` is invalid.
- RuntimeError: If the raw frame dimensions do not match the configured
- `width` and `height`.
- """
- requested_color_mode = self.color_mode if color_mode is None else color_mode
-
- if requested_color_mode not in (ColorMode.RGB, ColorMode.BGR):
- raise ValueError(
- f"Invalid color mode '{requested_color_mode}'. Expected {ColorMode.RGB} or {ColorMode.BGR}."
- )
-
- h, w, c = image.shape
-
- if h != self.capture_height or w != self.capture_width:
- raise RuntimeError(
- f"{self} frame width={w} or height={h} do not match configured width={self.capture_width} or height={self.capture_height}."
- )
-
- if c != 3:
- raise RuntimeError(f"{self} frame channels={c} do not match expected 3 channels (RGB/BGR).")
-
- processed_image = image
- if requested_color_mode == ColorMode.RGB:
- processed_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
-
- if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE, cv2.ROTATE_180]:
- processed_image = cv2.rotate(processed_image, self.rotation)
-
- return processed_image
-
- def _read_loop(self) -> None:
- """
- Internal loop run by the background thread for asynchronous reading.
-
- On each iteration:
- 1. Reads a color frame
- 2. Stores result in latest_frame (thread-safe)
- 3. Sets new_frame_event to notify listeners
-
- Stops on DeviceNotConnectedError, logs other errors and continues.
- """
- if self.stop_event is None:
- raise RuntimeError(f"{self}: stop_event is not initialized before starting read loop.")
-
- while not self.stop_event.is_set():
- try:
- color_image = self.read()
-
- with self.frame_lock:
- self.latest_frame = color_image
- self.new_frame_event.set()
-
- except DeviceNotConnectedError:
- break
- except Exception as e:
- logger.warning(f"Error reading frame in background thread for {self}: {e}")
-
- def _start_read_thread(self) -> None:
- """Starts or restarts the background read thread if it's not running."""
-        if self.stop_event is not None:
-            self.stop_event.set()
-        if self.thread is not None and self.thread.is_alive():
-            self.thread.join(timeout=0.1)
-
- self.stop_event = Event()
- self.thread = Thread(target=self._read_loop, args=(), name=f"{self}_read_loop")
- self.thread.daemon = True
- self.thread.start()
-
- def _stop_read_thread(self) -> None:
- """Signals the background read thread to stop and waits for it to join."""
- if self.stop_event is not None:
- self.stop_event.set()
-
- if self.thread is not None and self.thread.is_alive():
- self.thread.join(timeout=2.0)
-
- self.thread = None
- self.stop_event = None
-
- def async_read(self, timeout_ms: float = 200) -> NDArray[Any]:
- """
- Reads the latest available frame asynchronously.
-
- This method retrieves the most recent frame captured by the background
- read thread. It does not block waiting for the camera hardware directly,
- but may wait up to timeout_ms for the background thread to provide a frame.
-
- Args:
- timeout_ms (float): Maximum time in milliseconds to wait for a frame
- to become available. Defaults to 200ms (0.2 seconds).
-
- Returns:
- np.ndarray: The latest captured frame as a NumPy array in the format
- (height, width, channels), processed according to configuration.
-
- Raises:
- DeviceNotConnectedError: If the camera is not connected.
- TimeoutError: If no frame becomes available within the specified timeout.
- RuntimeError: If an unexpected error occurs.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- if self.thread is None or not self.thread.is_alive():
- self._start_read_thread()
-
- if not self.new_frame_event.wait(timeout=timeout_ms / 1000.0):
- thread_alive = self.thread is not None and self.thread.is_alive()
- raise TimeoutError(
- f"Timed out waiting for frame from camera {self} after {timeout_ms} ms. "
- f"Read thread alive: {thread_alive}."
- )
-
- with self.frame_lock:
- frame = self.latest_frame
- self.new_frame_event.clear()
-
- if frame is None:
- raise RuntimeError(f"Internal error: Event set but no frame available for {self}.")
-
- return frame
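-
-    # A minimal polling-loop sketch (hypothetical configuration values):
-    #     camera = OpenCVCamera(OpenCVCameraConfig(index_or_path=0))
-    #     camera.connect()
-    #     try:
-    #         while True:
-    #             frame = camera.async_read(timeout_ms=200)  # latest frame, no direct hardware wait
-    #             ...  # hand `frame` to a recorder or policy
-    #     finally:
-    #         camera.disconnect()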
-
- def disconnect(self) -> None:
- """
- Disconnects from the camera and cleans up resources.
-
- Stops the background read thread (if running) and releases the OpenCV
- VideoCapture object.
-
- Raises:
- DeviceNotConnectedError: If the camera is already disconnected.
- """
- if not self.is_connected and self.thread is None:
- raise DeviceNotConnectedError(f"{self} not connected.")
-
- if self.thread is not None:
- self._stop_read_thread()
-
- if self.videocapture is not None:
- self.videocapture.release()
- self.videocapture = None
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/cameras/opencv/configuration_opencv.py b/lerobot/src/lerobot/cameras/opencv/configuration_opencv.py
deleted file mode 100644
index 88ce873432972b561fa7d68062d4b50c7d3efd04..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/opencv/configuration_opencv.py
+++ /dev/null
@@ -1,85 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from pathlib import Path
-
-from ..configs import CameraConfig, ColorMode, Cv2Rotation
-
-__all__ = ["OpenCVCameraConfig", "ColorMode", "Cv2Rotation"]
-
-
-@CameraConfig.register_subclass("opencv")
-@dataclass
-class OpenCVCameraConfig(CameraConfig):
- """Configuration class for OpenCV-based camera devices or video files.
-
- This class provides configuration options for cameras accessed through OpenCV,
- supporting both physical camera devices and video files. It includes settings
- for resolution, frame rate, color mode, and image rotation.
-
- Example configurations:
- ```python
- # Basic configurations
-    OpenCVCameraConfig(0, 30, 1280, 720)  # 1280x720 @ 30FPS
-    OpenCVCameraConfig("/dev/video4", 60, 640, 480)  # 640x480 @ 60FPS
-
-    # Advanced configurations with FOURCC format
-    OpenCVCameraConfig(0, 30, 640, 480, rotation=Cv2Rotation.ROTATE_90, fourcc="MJPG")  # With 90° rotation and MJPG format
-    OpenCVCameraConfig(0, 30, 1280, 720, fourcc="YUYV")  # With YUYV format
- ```
-
- Attributes:
- index_or_path: Either an integer representing the camera device index,
- or a Path object pointing to a video file.
- fps: Requested frames per second for the color stream.
- width: Requested frame width in pixels for the color stream.
- height: Requested frame height in pixels for the color stream.
- color_mode: Color mode for image output (RGB or BGR). Defaults to RGB.
- rotation: Image rotation setting (0°, 90°, 180°, or 270°). Defaults to no rotation.
-        warmup_s: Time spent reading frames before connect() returns (in seconds).
- fourcc: FOURCC code for video format (e.g., "MJPG", "YUYV", "I420"). Defaults to None (auto-detect).
-
- Note:
- - Only 3-channel color output (RGB/BGR) is currently supported.
-        - FOURCC codes must be 4-character strings (e.g., "MJPG", "YUYV"). Some common FOURCC codes: https://learn.microsoft.com/en-us/windows/win32/medfound/video-fourccs#fourcc-constants
- - Setting FOURCC can help achieve higher frame rates on some cameras.
- """
-
- index_or_path: int | Path
- color_mode: ColorMode = ColorMode.RGB
- rotation: Cv2Rotation = Cv2Rotation.NO_ROTATION
- warmup_s: int = 1
- fourcc: str | None = None
-
- def __post_init__(self) -> None:
- if self.color_mode not in (ColorMode.RGB, ColorMode.BGR):
- raise ValueError(
- f"`color_mode` is expected to be {ColorMode.RGB.value} or {ColorMode.BGR.value}, but {self.color_mode} is provided."
- )
-
- if self.rotation not in (
- Cv2Rotation.NO_ROTATION,
- Cv2Rotation.ROTATE_90,
- Cv2Rotation.ROTATE_180,
- Cv2Rotation.ROTATE_270,
- ):
- raise ValueError(
- f"`rotation` is expected to be in {(Cv2Rotation.NO_ROTATION, Cv2Rotation.ROTATE_90, Cv2Rotation.ROTATE_180, Cv2Rotation.ROTATE_270)}, but {self.rotation} is provided."
- )
-
- if self.fourcc is not None and (not isinstance(self.fourcc, str) or len(self.fourcc) != 4):
- raise ValueError(
- f"`fourcc` must be a 4-character string (e.g., 'MJPG', 'YUYV'), but '{self.fourcc}' is provided."
- )
diff --git a/lerobot/src/lerobot/cameras/reachy2_camera/__init__.py b/lerobot/src/lerobot/cameras/reachy2_camera/__init__.py
deleted file mode 100644
index cc9d87f781dd48b14349d5d22fa5d2cf31367430..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/reachy2_camera/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .configuration_reachy2_camera import Reachy2CameraConfig
-from .reachy2_camera import Reachy2Camera
diff --git a/lerobot/src/lerobot/cameras/reachy2_camera/configuration_reachy2_camera.py b/lerobot/src/lerobot/cameras/reachy2_camera/configuration_reachy2_camera.py
deleted file mode 100644
index ba1535042d03483ad15cbf7450d098b1de4a3140..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/reachy2_camera/configuration_reachy2_camera.py
+++ /dev/null
@@ -1,80 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..configs import CameraConfig, ColorMode
-
-__all__ = ["CameraConfig", "ColorMode", "Reachy2CameraConfig"]
-
-
-@CameraConfig.register_subclass("reachy2_camera")
-@dataclass
-class Reachy2CameraConfig(CameraConfig):
- """Configuration class for Reachy 2 camera devices.
-
- This class provides configuration options for Reachy 2 cameras,
- supporting both the teleop and depth cameras. It includes settings
- for resolution, frame rate, color mode, and the selection of the cameras.
-
- Example configurations:
- ```python
- # Basic configurations
- Reachy2CameraConfig(
- name="teleop",
- image_type="left",
- ip_address="192.168.0.200", # IP address of the robot
- port=50065, # Port of the camera server
- width=640,
- height=480,
- fps=30, # Not configurable for Reachy 2 cameras
- color_mode=ColorMode.RGB,
- ) # Left teleop camera, 640x480 @ 30FPS
- ```
-
- Attributes:
- name: Name of the camera device. Can be "teleop" or "depth".
- image_type: Type of image stream. For "teleop" camera, can be "left" or "right".
- For "depth" camera, can be "rgb" or "depth". (depth is not supported yet)
- fps: Requested frames per second for the color stream. Not configurable for Reachy 2 cameras.
- width: Requested frame width in pixels for the color stream.
- height: Requested frame height in pixels for the color stream.
- color_mode: Color mode for image output (RGB or BGR). Defaults to RGB.
- ip_address: IP address of the robot. Defaults to "localhost".
- port: Port number for the camera server. Defaults to 50065.
-
- Note:
- - Only 3-channel color output (RGB/BGR) is currently supported.
- """
-
- name: str
- image_type: str
- color_mode: ColorMode = ColorMode.RGB
- ip_address: str | None = "localhost"
- port: int = 50065
-
- def __post_init__(self) -> None:
- if self.name not in ["teleop", "depth"]:
- raise ValueError(f"`name` is expected to be 'teleop' or 'depth', but {self.name} is provided.")
- if (self.name == "teleop" and self.image_type not in ["left", "right"]) or (
- self.name == "depth" and self.image_type not in ["rgb", "depth"]
- ):
- raise ValueError(
- f"`image_type` is expected to be 'left' or 'right' for teleop camera, and 'rgb' or 'depth' for depth camera, but {self.image_type} is provided."
- )
-
-        if self.color_mode not in (ColorMode.RGB, ColorMode.BGR):
-            raise ValueError(
-                f"`color_mode` is expected to be {ColorMode.RGB.value} or {ColorMode.BGR.value}, but {self.color_mode} is provided."
-            )
diff --git a/lerobot/src/lerobot/cameras/reachy2_camera/reachy2_camera.py b/lerobot/src/lerobot/cameras/reachy2_camera/reachy2_camera.py
deleted file mode 100644
index b681d0f2a90f05609d87dcfcf6ba720926f70aef..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/reachy2_camera/reachy2_camera.py
+++ /dev/null
@@ -1,220 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Provides the Reachy2Camera class for capturing frames from Reachy 2 cameras using Reachy 2's CameraManager.
-"""
-
-from __future__ import annotations
-
-import logging
-import os
-import platform
-import time
-from typing import TYPE_CHECKING, Any
-
-from numpy.typing import NDArray # type: ignore # TODO: add type stubs for numpy.typing
-
-# Fix MSMF hardware transform compatibility for Windows before importing cv2
-if platform.system() == "Windows" and "OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS" not in os.environ:
- os.environ["OPENCV_VIDEOIO_MSMF_ENABLE_HW_TRANSFORMS"] = "0"
-import cv2 # type: ignore # TODO: add type stubs for OpenCV
-import numpy as np # type: ignore # TODO: add type stubs for numpy
-
-from lerobot.utils.import_utils import _reachy2_sdk_available
-
-if TYPE_CHECKING or _reachy2_sdk_available:
- from reachy2_sdk.media.camera import CameraView
- from reachy2_sdk.media.camera_manager import CameraManager
-else:
- CameraManager = None
-
- class CameraView:
- LEFT = 0
- RIGHT = 1
-
-
-from lerobot.utils.errors import DeviceNotConnectedError
-
-from ..camera import Camera
-from .configuration_reachy2_camera import ColorMode, Reachy2CameraConfig
-
-logger = logging.getLogger(__name__)
-
-
-class Reachy2Camera(Camera):
- """
- Manages Reachy 2 camera using Reachy 2 CameraManager.
-
- This class provides a high-level interface to connect to, configure, and read
- frames from Reachy 2 cameras. It supports both synchronous and asynchronous
- frame reading.
-
-    A Reachy2Camera instance requires a camera name (e.g., "teleop") and an image
- type (e.g., "left") to be specified in the configuration.
-
- The camera's default settings (FPS, resolution, color mode) are used unless
- overridden in the configuration.
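-
-    Example (a minimal sketch; the IP address below is a placeholder):
-    ```python
-    from lerobot.cameras.reachy2_camera import Reachy2Camera, Reachy2CameraConfig
-
-    config = Reachy2CameraConfig(name="teleop", image_type="left", ip_address="192.168.0.200")
-    camera = Reachy2Camera(config)
-    camera.connect()
-    frame = camera.read()
-    camera.disconnect()
-    ```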
- """
-
- def __init__(self, config: Reachy2CameraConfig):
- """
- Initializes the Reachy2Camera instance.
-
- Args:
- config: The configuration settings for the camera.
- """
- super().__init__(config)
-
- self.config = config
-
- self.color_mode = config.color_mode
-
- self.cam_manager: CameraManager | None = None
-
- def __str__(self) -> str:
- return f"{self.__class__.__name__}({self.config.name}, {self.config.image_type})"
-
-    @property
-    def is_connected(self) -> bool:
-        """Checks if the camera is currently connected and opened."""
-        if self.cam_manager is None or not self.cam_manager._grpc_connected:
-            return False
-        if self.config.name == "teleop":
-            return bool(self.cam_manager.teleop)
-        if self.config.name == "depth":
-            return bool(self.cam_manager.depth)
-        raise ValueError(f"Invalid camera name '{self.config.name}'. Expected 'teleop' or 'depth'.")
-
- def connect(self, warmup: bool = True) -> None:
- """
- Connects to the Reachy2 CameraManager as specified in the configuration.
-
-        Raises:
-            DeviceNotConnectedError: If the reachy2_sdk package is not available.
-        """
-        if CameraManager is None:
-            raise DeviceNotConnectedError(
-                f"Could not connect to {self}: the reachy2_sdk package is not installed."
-            )
-        self.cam_manager = CameraManager(host=self.config.ip_address, port=self.config.port)
-        self.cam_manager.initialize_cameras()
-
- logger.info(f"{self} connected.")
-
- @staticmethod
- def find_cameras() -> list[dict[str, Any]]:
- """
- Detection not implemented for Reachy2 cameras.
- """
- raise NotImplementedError("Camera detection is not implemented for Reachy2 cameras.")
-
- def read(self, color_mode: ColorMode | None = None) -> NDArray[Any]:
- """
- Reads a single frame synchronously from the camera.
-
- This is a blocking call.
-
- Args:
- color_mode (Optional[ColorMode]): If specified, overrides the default
- color mode (`self.color_mode`) for this read operation (e.g.,
- request RGB even if default is BGR).
-
- Returns:
- np.ndarray: The captured frame as a NumPy array in the format
- (height, width, channels), using the specified or default
- color mode and applying any configured rotation.
- """
- start_time = time.perf_counter()
-
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- if self.cam_manager is None:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
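-        # Empty placeholder returned when no frame is available; callers can detect
-        # this case via frame.size == 0.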
- frame: NDArray[Any] = np.empty((0, 0, 3), dtype=np.uint8)
-
- if self.config.name == "teleop" and hasattr(self.cam_manager, "teleop"):
- if self.config.image_type == "left":
- frame = self.cam_manager.teleop.get_frame(
- CameraView.LEFT, size=(self.config.width, self.config.height)
- )[0]
- elif self.config.image_type == "right":
- frame = self.cam_manager.teleop.get_frame(
- CameraView.RIGHT, size=(self.config.width, self.config.height)
- )[0]
- elif self.config.name == "depth" and hasattr(self.cam_manager, "depth"):
- if self.config.image_type == "depth":
- frame = self.cam_manager.depth.get_depth_frame()[0]
- elif self.config.image_type == "rgb":
- frame = self.cam_manager.depth.get_frame(size=(self.config.width, self.config.height))[0]
- else:
- raise ValueError(f"Invalid camera name '{self.config.name}'. Expected 'teleop' or 'depth'.")
-
- if frame is None:
- return np.empty((0, 0, 3), dtype=np.uint8)
-
-        if self.config.color_mode == ColorMode.RGB:
- frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
-
- read_duration_ms = (time.perf_counter() - start_time) * 1e3
- logger.debug(f"{self} read took: {read_duration_ms:.1f}ms")
-
- return frame
-
- def async_read(self, timeout_ms: float = 200) -> NDArray[Any]:
- """
- Reads the latest available frame.
-
- This method retrieves the most recent frame available in Reachy 2's low-level software.
-
- Args:
- timeout_ms (float): Maximum time in milliseconds to wait for a frame
- to become available. Defaults to 200ms (0.2 seconds).
-
- Returns:
- np.ndarray: The latest captured frame as a NumPy array in the format
- (height, width, channels), processed according to configuration.
-
- Raises:
- DeviceNotConnectedError: If the camera is not connected.
- TimeoutError: If no frame becomes available within the specified timeout.
- RuntimeError: If an unexpected error occurs.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- frame = self.read()
-
- if frame is None:
- raise RuntimeError(f"Internal error: No frame available for {self}.")
-
- return frame
-
- def disconnect(self) -> None:
- """
-        Disconnects from the Reachy2 CameraManager and cleans up resources.
-
- Raises:
- DeviceNotConnectedError: If the camera is already disconnected.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} not connected.")
-
- if self.cam_manager is not None:
- self.cam_manager.disconnect()
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/cameras/realsense/__init__.py b/lerobot/src/lerobot/cameras/realsense/__init__.py
deleted file mode 100644
index bc5184a99bc33c17d0c759dc5a561ce800f5a278..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/realsense/__init__.py
+++ /dev/null
@@ -1,16 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .camera_realsense import RealSenseCamera
-from .configuration_realsense import RealSenseCameraConfig
diff --git a/lerobot/src/lerobot/cameras/realsense/camera_realsense.py b/lerobot/src/lerobot/cameras/realsense/camera_realsense.py
deleted file mode 100644
index e4b8c3164c7f9b44bd2ed24a73ddfdf7d1961d6d..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/realsense/camera_realsense.py
+++ /dev/null
@@ -1,568 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Provides the RealSenseCamera class for capturing frames from Intel RealSense cameras.
-"""
-
-import logging
-import time
-from threading import Event, Lock, Thread
-from typing import Any
-
-import cv2 # type: ignore # TODO: add type stubs for OpenCV
-import numpy as np # type: ignore # TODO: add type stubs for numpy
-from numpy.typing import NDArray # type: ignore # TODO: add type stubs for numpy.typing
-
-try:
- import pyrealsense2 as rs # type: ignore # TODO: add type stubs for pyrealsense2
-except Exception as e:
- logging.info(f"Could not import realsense: {e}")
-
-from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
-
-from ..camera import Camera
-from ..configs import ColorMode
-from ..utils import get_cv2_rotation
-from .configuration_realsense import RealSenseCameraConfig
-
-logger = logging.getLogger(__name__)
-
-
-class RealSenseCamera(Camera):
- """
- Manages interactions with Intel RealSense cameras for frame and depth recording.
-
- This class provides an interface similar to `OpenCVCamera` but tailored for
- RealSense devices, leveraging the `pyrealsense2` library. It uses the camera's
- unique serial number for identification, offering more stability than device
- indices, especially on Linux. It also supports capturing depth maps alongside
- color frames.
-
- Use the provided utility script to find available camera indices and default profiles:
- ```bash
- lerobot-find-cameras realsense
- ```
-
- A `RealSenseCamera` instance requires a configuration object specifying the
- camera's serial number or a unique device name. If using the name, ensure only
- one camera with that name is connected.
-
- The camera's default settings (FPS, resolution, color mode) from the stream
- profile are used unless overridden in the configuration.
-
- Example:
- ```python
- from lerobot.cameras.realsense import RealSenseCamera, RealSenseCameraConfig
- from lerobot.cameras import ColorMode, Cv2Rotation
-
- # Basic usage with serial number
- config = RealSenseCameraConfig(serial_number_or_name="0123456789") # Replace with actual SN
- camera = RealSenseCamera(config)
- camera.connect()
-
- # Read 1 frame synchronously
- color_image = camera.read()
- print(color_image.shape)
-
- # Read 1 frame asynchronously
- async_image = camera.async_read()
-
- # When done, properly disconnect the camera using
- camera.disconnect()
-
- # Example with depth capture and custom settings
- custom_config = RealSenseCameraConfig(
- serial_number_or_name="0123456789", # Replace with actual SN
- fps=30,
- width=1280,
- height=720,
- color_mode=ColorMode.BGR, # Request BGR output
- rotation=Cv2Rotation.NO_ROTATION,
- use_depth=True
- )
- depth_camera = RealSenseCamera(custom_config)
- depth_camera.connect()
-
- # Read 1 depth frame
- depth_map = depth_camera.read_depth()
-
- # Example using a unique camera name
- name_config = RealSenseCameraConfig(serial_number_or_name="Intel RealSense D435") # If unique
- name_camera = RealSenseCamera(name_config)
- # ... connect, read, disconnect ...
- ```
- """
-
- def __init__(self, config: RealSenseCameraConfig):
- """
- Initializes the RealSenseCamera instance.
-
- Args:
- config: The configuration settings for the camera.
- """
-
- super().__init__(config)
-
- self.config = config
-
- if config.serial_number_or_name.isdigit():
- self.serial_number = config.serial_number_or_name
- else:
- self.serial_number = self._find_serial_number_from_name(config.serial_number_or_name)
-
- self.fps = config.fps
- self.color_mode = config.color_mode
- self.use_depth = config.use_depth
- self.warmup_s = config.warmup_s
-
- self.rs_pipeline: rs.pipeline | None = None
- self.rs_profile: rs.pipeline_profile | None = None
-
- self.thread: Thread | None = None
- self.stop_event: Event | None = None
- self.frame_lock: Lock = Lock()
- self.latest_frame: NDArray[Any] | None = None
- self.new_frame_event: Event = Event()
-
- self.rotation: int | None = get_cv2_rotation(config.rotation)
-
- if self.height and self.width:
- self.capture_width, self.capture_height = self.width, self.height
- if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]:
- self.capture_width, self.capture_height = self.height, self.width
-
- def __str__(self) -> str:
- return f"{self.__class__.__name__}({self.serial_number})"
-
- @property
- def is_connected(self) -> bool:
- """Checks if the camera pipeline is started and streams are active."""
- return self.rs_pipeline is not None and self.rs_profile is not None
-
- def connect(self, warmup: bool = True) -> None:
- """
- Connects to the RealSense camera specified in the configuration.
-
- Initializes the RealSense pipeline, configures the required streams (color
- and optionally depth), starts the pipeline, and validates the actual stream settings.
-
- Raises:
- DeviceAlreadyConnectedError: If the camera is already connected.
- ValueError: If the configuration is invalid (e.g., missing serial/name, name not unique).
- ConnectionError: If the camera is found but fails to start the pipeline or no RealSense devices are detected at all.
- RuntimeError: If the pipeline starts but fails to apply requested settings.
- """
- if self.is_connected:
- raise DeviceAlreadyConnectedError(f"{self} is already connected.")
-
- self.rs_pipeline = rs.pipeline()
- rs_config = rs.config()
- self._configure_rs_pipeline_config(rs_config)
-
- try:
- self.rs_profile = self.rs_pipeline.start(rs_config)
- except RuntimeError as e:
- self.rs_profile = None
- self.rs_pipeline = None
- raise ConnectionError(
- f"Failed to open {self}.Run `lerobot-find-cameras realsense` to find available cameras."
- ) from e
-
- self._configure_capture_settings()
-
-        if warmup:
-            # NOTE(Steven): RS cameras need a bit of time to warm up before the first read.
-            # If we don't wait, the first read from the warmup will raise.
-            time.sleep(1)
-            start_time = time.time()
-            while time.time() - start_time < self.warmup_s:
-                self.read()
-                time.sleep(0.1)
-
- logger.info(f"{self} connected.")
-
- @staticmethod
- def find_cameras() -> list[dict[str, Any]]:
- """
- Detects available Intel RealSense cameras connected to the system.
-
- Returns:
- List[Dict[str, Any]]: A list of dictionaries,
- where each dictionary contains 'type', 'id' (serial number), 'name',
- firmware version, USB type, and other available specs, and the default profile properties (width, height, fps, format).
-
- Raises:
- OSError: If pyrealsense2 is not installed.
- ImportError: If pyrealsense2 is not installed.
- """
- found_cameras_info = []
- context = rs.context()
- devices = context.query_devices()
-
- for device in devices:
- camera_info = {
- "name": device.get_info(rs.camera_info.name),
- "type": "RealSense",
- "id": device.get_info(rs.camera_info.serial_number),
- "firmware_version": device.get_info(rs.camera_info.firmware_version),
- "usb_type_descriptor": device.get_info(rs.camera_info.usb_type_descriptor),
- "physical_port": device.get_info(rs.camera_info.physical_port),
- "product_id": device.get_info(rs.camera_info.product_id),
- "product_line": device.get_info(rs.camera_info.product_line),
- }
-
- # Get stream profiles for each sensor
- sensors = device.query_sensors()
- for sensor in sensors:
- profiles = sensor.get_stream_profiles()
-
- for profile in profiles:
- if profile.is_video_stream_profile() and profile.is_default():
- vprofile = profile.as_video_stream_profile()
- stream_info = {
- "stream_type": vprofile.stream_name(),
- "format": vprofile.format().name,
- "width": vprofile.width(),
- "height": vprofile.height(),
- "fps": vprofile.fps(),
- }
- camera_info["default_stream_profile"] = stream_info
-
- found_cameras_info.append(camera_info)
-
- return found_cameras_info
-
- def _find_serial_number_from_name(self, name: str) -> str:
- """Finds the serial number for a given unique camera name."""
- camera_infos = self.find_cameras()
- found_devices = [cam for cam in camera_infos if str(cam["name"]) == name]
-
- if not found_devices:
- available_names = [cam["name"] for cam in camera_infos]
- raise ValueError(
- f"No RealSense camera found with name '{name}'. Available camera names: {available_names}"
- )
-
-        if len(found_devices) > 1:
-            # find_cameras() stores the serial number under the "id" key.
-            serial_numbers = [dev["id"] for dev in found_devices]
-            raise ValueError(
-                f"Multiple RealSense cameras found with name '{name}'. "
-                f"Please use a unique serial number instead. Found SNs: {serial_numbers}"
-            )
-
-        serial_number = str(found_devices[0]["id"])
-        return serial_number
-
- def _configure_rs_pipeline_config(self, rs_config: Any) -> None:
- """Creates and configures the RealSense pipeline configuration object."""
- rs.config.enable_device(rs_config, self.serial_number)
-
- if self.width and self.height and self.fps:
- rs_config.enable_stream(
- rs.stream.color, self.capture_width, self.capture_height, rs.format.rgb8, self.fps
- )
- if self.use_depth:
- rs_config.enable_stream(
- rs.stream.depth, self.capture_width, self.capture_height, rs.format.z16, self.fps
- )
- else:
- rs_config.enable_stream(rs.stream.color)
- if self.use_depth:
- rs_config.enable_stream(rs.stream.depth)
-
- def _configure_capture_settings(self) -> None:
- """Sets fps, width, and height from device stream if not already configured.
-
- Uses the color stream profile to update unset attributes. Handles rotation by
- swapping width/height when needed. Original capture dimensions are always stored.
-
- Raises:
- DeviceNotConnectedError: If device is not connected.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"Cannot validate settings for {self} as it is not connected.")
-
- if self.rs_profile is None:
- raise RuntimeError(f"{self}: rs_profile must be initialized before use.")
-
- stream = self.rs_profile.get_stream(rs.stream.color).as_video_stream_profile()
-
- if self.fps is None:
- self.fps = stream.fps()
-
-        if self.width is None or self.height is None:
-            actual_width = int(round(stream.width()))
-            actual_height = int(round(stream.height()))
-            self.capture_width, self.capture_height = actual_width, actual_height
-            self.width, self.height = actual_width, actual_height
-            if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE]:
-                self.width, self.height = actual_height, actual_width
-
- def read_depth(self, timeout_ms: int = 200) -> NDArray[Any]:
- """
- Reads a single frame (depth) synchronously from the camera.
-
- This is a blocking call. It waits for a coherent set of frames (depth)
- from the camera hardware via the RealSense pipeline.
-
- Args:
- timeout_ms (int): Maximum time in milliseconds to wait for a frame. Defaults to 200ms.
-
- Returns:
- np.ndarray: The depth map as a NumPy array (height, width)
- of type `np.uint16` (raw depth values in millimeters) and rotation.
-
- Raises:
- DeviceNotConnectedError: If the camera is not connected.
- RuntimeError: If reading frames from the pipeline fails or frames are invalid.
- """
-
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
- if not self.use_depth:
- raise RuntimeError(
- f"Failed to capture depth frame '.read_depth()'. Depth stream is not enabled for {self}."
- )
-
- start_time = time.perf_counter()
-
- if self.rs_pipeline is None:
- raise RuntimeError(f"{self}: rs_pipeline must be initialized before use.")
-
- ret, frame = self.rs_pipeline.try_wait_for_frames(timeout_ms=timeout_ms)
-
- if not ret or frame is None:
- raise RuntimeError(f"{self} read_depth failed (status={ret}).")
-
- depth_frame = frame.get_depth_frame()
- depth_map = np.asanyarray(depth_frame.get_data())
-
- depth_map_processed = self._postprocess_image(depth_map, depth_frame=True)
-
- read_duration_ms = (time.perf_counter() - start_time) * 1e3
- logger.debug(f"{self} read took: {read_duration_ms:.1f}ms")
-
- return depth_map_processed
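-
-    # e.g., converting the raw uint16 depth map to meters (a sketch; 0.001 m per unit is
-    # the common RealSense depth scale, but query the device's depth sensor for the exact value):
-    #     depth_m = camera.read_depth().astype(np.float32) * 0.001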
-
- def read(self, color_mode: ColorMode | None = None, timeout_ms: int = 200) -> NDArray[Any]:
- """
- Reads a single frame (color) synchronously from the camera.
-
- This is a blocking call. It waits for a coherent set of frames (color)
- from the camera hardware via the RealSense pipeline.
-
-        Args:
-            color_mode (Optional[ColorMode]): If specified, overrides the default
-                color mode (`self.color_mode`) for this read operation.
-            timeout_ms (int): Maximum time in milliseconds to wait for a frame. Defaults to 200ms.
-
- Returns:
- np.ndarray: The captured color frame as a NumPy array
- (height, width, channels), processed according to `color_mode` and rotation.
-
- Raises:
- DeviceNotConnectedError: If the camera is not connected.
- RuntimeError: If reading frames from the pipeline fails or frames are invalid.
- ValueError: If an invalid `color_mode` is requested.
- """
-
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- start_time = time.perf_counter()
-
- if self.rs_pipeline is None:
- raise RuntimeError(f"{self}: rs_pipeline must be initialized before use.")
-
- ret, frame = self.rs_pipeline.try_wait_for_frames(timeout_ms=timeout_ms)
-
- if not ret or frame is None:
- raise RuntimeError(f"{self} read failed (status={ret}).")
-
- color_frame = frame.get_color_frame()
- color_image_raw = np.asanyarray(color_frame.get_data())
-
- color_image_processed = self._postprocess_image(color_image_raw, color_mode)
-
- read_duration_ms = (time.perf_counter() - start_time) * 1e3
- logger.debug(f"{self} read took: {read_duration_ms:.1f}ms")
-
- return color_image_processed
-
- def _postprocess_image(
- self, image: NDArray[Any], color_mode: ColorMode | None = None, depth_frame: bool = False
- ) -> NDArray[Any]:
- """
-        Applies color conversion, dimension validation, and rotation to a raw frame.
-
-        Args:
-            image (np.ndarray): The raw frame. Color frames arrive in RGB format from
-                RealSense; depth frames are single-channel uint16.
-            color_mode (Optional[ColorMode]): The target color mode (RGB or BGR). If None,
-                uses the instance's default `self.color_mode`. Ignored for depth frames.
-            depth_frame (bool): Whether `image` is a depth map rather than a color frame.
-
- Returns:
- np.ndarray: The processed image frame according to `self.color_mode` and `self.rotation`.
-
- Raises:
- ValueError: If the requested `color_mode` is invalid.
- RuntimeError: If the raw frame dimensions do not match the configured
- `width` and `height`.
- """
-
-        requested_color_mode = self.color_mode if color_mode is None else color_mode
-
-        if requested_color_mode not in (ColorMode.RGB, ColorMode.BGR):
-            raise ValueError(
-                f"Invalid requested color mode '{requested_color_mode}'. Expected {ColorMode.RGB} or {ColorMode.BGR}."
-            )
-
-        if depth_frame:
-            h, w = image.shape
-        else:
-            h, w, c = image.shape
-            if c != 3:
-                raise RuntimeError(f"{self} frame channels={c} do not match expected 3 channels (RGB/BGR).")
-
-        if h != self.capture_height or w != self.capture_width:
-            raise RuntimeError(
-                f"{self} frame width={w} or height={h} do not match configured width={self.capture_width} or height={self.capture_height}."
-            )
-
-        processed_image = image
-        if not depth_frame and requested_color_mode == ColorMode.BGR:
-            processed_image = cv2.cvtColor(image, cv2.COLOR_RGB2BGR)
-
- if self.rotation in [cv2.ROTATE_90_CLOCKWISE, cv2.ROTATE_90_COUNTERCLOCKWISE, cv2.ROTATE_180]:
- processed_image = cv2.rotate(processed_image, self.rotation)
-
- return processed_image
-
- def _read_loop(self) -> None:
- """
- Internal loop run by the background thread for asynchronous reading.
-
- On each iteration:
- 1. Reads a color frame with 500ms timeout
- 2. Stores result in latest_frame (thread-safe)
- 3. Sets new_frame_event to notify listeners
-
- Stops on DeviceNotConnectedError, logs other errors and continues.
- """
- if self.stop_event is None:
- raise RuntimeError(f"{self}: stop_event is not initialized before starting read loop.")
-
- while not self.stop_event.is_set():
- try:
- color_image = self.read(timeout_ms=500)
-
- with self.frame_lock:
- self.latest_frame = color_image
- self.new_frame_event.set()
-
- except DeviceNotConnectedError:
- break
- except Exception as e:
- logger.warning(f"Error reading frame in background thread for {self}: {e}")
-
- def _start_read_thread(self) -> None:
- """Starts or restarts the background read thread if it's not running."""
-        if self.stop_event is not None:
-            self.stop_event.set()
-        if self.thread is not None and self.thread.is_alive():
-            self.thread.join(timeout=0.1)
-
- self.stop_event = Event()
- self.thread = Thread(target=self._read_loop, args=(), name=f"{self}_read_loop")
- self.thread.daemon = True
- self.thread.start()
-
- def _stop_read_thread(self) -> None:
- """Signals the background read thread to stop and waits for it to join."""
- if self.stop_event is not None:
- self.stop_event.set()
-
- if self.thread is not None and self.thread.is_alive():
- self.thread.join(timeout=2.0)
-
- self.thread = None
- self.stop_event = None
-
- # NOTE(Steven): Missing implementation for depth for now
- def async_read(self, timeout_ms: float = 200) -> NDArray[Any]:
- """
- Reads the latest available frame data (color) asynchronously.
-
- This method retrieves the most recent color frame captured by the background
- read thread. It does not block waiting for the camera hardware directly,
- but may wait up to timeout_ms for the background thread to provide a frame.
-
- Args:
- timeout_ms (float): Maximum time in milliseconds to wait for a frame
- to become available. Defaults to 200ms (0.2 seconds).
-
- Returns:
- np.ndarray:
- The latest captured frame data (color image), processed according to configuration.
-
- Raises:
- DeviceNotConnectedError: If the camera is not connected.
- TimeoutError: If no frame data becomes available within the specified timeout.
- RuntimeError: If the background thread died unexpectedly or another error occurs.
- """
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- if self.thread is None or not self.thread.is_alive():
- self._start_read_thread()
-
- if not self.new_frame_event.wait(timeout=timeout_ms / 1000.0):
- thread_alive = self.thread is not None and self.thread.is_alive()
- raise TimeoutError(
- f"Timed out waiting for frame from camera {self} after {timeout_ms} ms. "
- f"Read thread alive: {thread_alive}."
- )
-
- with self.frame_lock:
- frame = self.latest_frame
- self.new_frame_event.clear()
-
- if frame is None:
- raise RuntimeError(f"Internal error: Event set but no frame available for {self}.")
-
- return frame
-
- def disconnect(self) -> None:
- """
- Disconnects from the camera, stops the pipeline, and cleans up resources.
-
- Stops the background read thread (if running) and stops the RealSense pipeline.
-
- Raises:
- DeviceNotConnectedError: If the camera is already disconnected (pipeline not running).
- """
-
- if not self.is_connected and self.thread is None:
- raise DeviceNotConnectedError(
- f"Attempted to disconnect {self}, but it appears already disconnected."
- )
-
- if self.thread is not None:
- self._stop_read_thread()
-
- if self.rs_pipeline is not None:
- self.rs_pipeline.stop()
- self.rs_pipeline = None
- self.rs_profile = None
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/cameras/realsense/configuration_realsense.py b/lerobot/src/lerobot/cameras/realsense/configuration_realsense.py
deleted file mode 100644
index e981b35341e004a528c8bfeac9ef2c0f0542fdd4..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/realsense/configuration_realsense.py
+++ /dev/null
@@ -1,82 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..configs import CameraConfig, ColorMode, Cv2Rotation
-
-
-@CameraConfig.register_subclass("intelrealsense")
-@dataclass
-class RealSenseCameraConfig(CameraConfig):
- """Configuration class for Intel RealSense cameras.
-
- This class provides specialized configuration options for Intel RealSense cameras,
- including support for depth sensing and device identification via serial number or name.
-
- Example configurations for Intel RealSense D405:
- ```python
- # Basic configurations
- RealSenseCameraConfig("0123456789", 30, 1280, 720) # 1280x720 @ 30FPS
- RealSenseCameraConfig("0123456789", 60, 640, 480) # 640x480 @ 60FPS
-
- # Advanced configurations
- RealSenseCameraConfig("0123456789", 30, 640, 480, use_depth=True) # With depth sensing
- RealSenseCameraConfig("0123456789", 30, 640, 480, rotation=Cv2Rotation.ROTATE_90) # With 90° rotation
- ```
-
- Attributes:
- fps: Requested frames per second for the color stream.
- width: Requested frame width in pixels for the color stream.
- height: Requested frame height in pixels for the color stream.
- serial_number_or_name: Unique serial number or human-readable name to identify the camera.
- color_mode: Color mode for image output (RGB or BGR). Defaults to RGB.
- use_depth: Whether to enable depth stream. Defaults to False.
- rotation: Image rotation setting (0°, 90°, 180°, or 270°). Defaults to no rotation.
-        warmup_s: Time spent reading frames before connect() returns (in seconds).
-
- Note:
- - Either name or serial_number must be specified.
- - Depth stream configuration (if enabled) will use the same FPS as the color stream.
- - The actual resolution and FPS may be adjusted by the camera to the nearest supported mode.
- - For `fps`, `width` and `height`, either all of them need to be set, or none of them.
- """
-
- serial_number_or_name: str
- color_mode: ColorMode = ColorMode.RGB
- use_depth: bool = False
- rotation: Cv2Rotation = Cv2Rotation.NO_ROTATION
- warmup_s: int = 1
-
- def __post_init__(self) -> None:
- if self.color_mode not in (ColorMode.RGB, ColorMode.BGR):
- raise ValueError(
- f"`color_mode` is expected to be {ColorMode.RGB.value} or {ColorMode.BGR.value}, but {self.color_mode} is provided."
- )
-
- if self.rotation not in (
- Cv2Rotation.NO_ROTATION,
- Cv2Rotation.ROTATE_90,
- Cv2Rotation.ROTATE_180,
- Cv2Rotation.ROTATE_270,
- ):
- raise ValueError(
- f"`rotation` is expected to be in {(Cv2Rotation.NO_ROTATION, Cv2Rotation.ROTATE_90, Cv2Rotation.ROTATE_180, Cv2Rotation.ROTATE_270)}, but {self.rotation} is provided."
- )
-
- values = (self.fps, self.width, self.height)
- if any(v is not None for v in values) and any(v is None for v in values):
- raise ValueError(
- "For `fps`, `width` and `height`, either all of them need to be set, or none of them."
- )
diff --git a/lerobot/src/lerobot/cameras/zmq/__init__.py b/lerobot/src/lerobot/cameras/zmq/__init__.py
deleted file mode 100644
index 963a16ba2b04aed1d55942f6f0afcdf1114c4e2a..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/zmq/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .camera_zmq import ZMQCamera
-from .configuration_zmq import ZMQCameraConfig
-
-__all__ = ["ZMQCamera", "ZMQCameraConfig"]
diff --git a/lerobot/src/lerobot/cameras/zmq/camera_zmq.py b/lerobot/src/lerobot/cameras/zmq/camera_zmq.py
deleted file mode 100644
index d561aa8431d8bd21bb46fa1b92f65233eea899dc..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/zmq/camera_zmq.py
+++ /dev/null
@@ -1,235 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-ZMQCamera - Captures frames from remote cameras via ZeroMQ using JSON protocol in the
-following format:
- {
- "timestamps": {"camera_name": float},
- "images": {"camera_name": ""}
- }
-"""
-
-import base64
-import json
-import logging
-import time
-from threading import Event, Lock, Thread
-from typing import Any
-
-import cv2
-import numpy as np
-from numpy.typing import NDArray
-
-from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
-
-from ..camera import Camera
-from ..configs import ColorMode
-from .configuration_zmq import ZMQCameraConfig
-
-logger = logging.getLogger(__name__)
-
-
-class ZMQCamera(Camera):
- """
- Example usage:
- ```python
- from lerobot.cameras.zmq import ZMQCamera, ZMQCameraConfig
-
- config = ZMQCameraConfig(server_address="192.168.123.164", port=5555, camera_name="head_camera")
- camera = ZMQCamera(config)
- camera.connect()
- frame = camera.read()
- camera.disconnect()
- ```
- """
-
- def __init__(self, config: ZMQCameraConfig):
- super().__init__(config)
- import zmq
-
- self.config = config
- self.server_address = config.server_address
- self.port = config.port
- self.camera_name = config.camera_name
- self.color_mode = config.color_mode
- self.timeout_ms = config.timeout_ms
-
- self.context: zmq.Context | None = None
- self.socket: zmq.Socket | None = None
- self._connected = False
-
- self.thread: Thread | None = None
- self.stop_event: Event | None = None
- self.frame_lock: Lock = Lock()
- self.latest_frame: NDArray[Any] | None = None
- self.new_frame_event: Event = Event()
-
- def __str__(self) -> str:
- return f"ZMQCamera({self.camera_name}@{self.server_address}:{self.port})"
-
- @property
- def is_connected(self) -> bool:
- return self._connected and self.context is not None and self.socket is not None
-
- def connect(self, warmup: bool = True) -> None:
- """Connect to ZMQ camera server."""
- if self.is_connected:
- raise DeviceAlreadyConnectedError(f"{self} is already connected.")
-
- logger.info(f"Connecting to {self}...")
-
- try:
- import zmq
-
- self.context = zmq.Context()
- self.socket = self.context.socket(zmq.SUB)
- self.socket.setsockopt_string(zmq.SUBSCRIBE, "")
- self.socket.setsockopt(zmq.RCVTIMEO, self.timeout_ms)
- self.socket.setsockopt(zmq.CONFLATE, True)
- self.socket.connect(f"tcp://{self.server_address}:{self.port}")
- self._connected = True
-
- # Auto-detect resolution
- if self.width is None or self.height is None:
- h, w = self.read().shape[:2]
- self.height = h
- self.width = w
- logger.info(f"{self} resolution: {w}x{h}")
-
- logger.info(f"{self} connected.")
-
- if warmup:
- time.sleep(0.1)
-
- except Exception as e:
- self._cleanup()
- raise RuntimeError(f"Failed to connect to {self}: {e}") from e
-
- def _cleanup(self):
- """Clean up ZMQ resources."""
- self._connected = False
- if self.socket:
- self.socket.close()
- self.socket = None
- if self.context:
- self.context.term()
- self.context = None
-
- @staticmethod
- def find_cameras() -> list[dict[str, Any]]:
- """ZMQ cameras require manual configuration (server address/port)."""
- return []
-
- def read(self, color_mode: ColorMode | None = None) -> NDArray[Any]:
- """
- Read a single frame from the ZMQ camera.
-
- Returns:
- np.ndarray: Decoded frame (height, width, 3)
- """
- if not self.is_connected or self.socket is None:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- try:
- message = self.socket.recv_string()
- except Exception as e:
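-            # pyzmq raises zmq.Again when RCVTIMEO expires; it is matched by
-            # name here so zmq need not be imported at module scope.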
- if type(e).__name__ == "Again":
- raise TimeoutError(f"{self} timeout after {self.timeout_ms}ms") from e
- raise
-
- # Decode JSON message
- data = json.loads(message)
-
- if "images" not in data:
- raise RuntimeError(f"{self} invalid message: missing 'images' key")
-
- images = data["images"]
-
- # Get image by camera name or first available
- if self.camera_name in images:
- img_b64 = images[self.camera_name]
- elif images:
- img_b64 = next(iter(images.values()))
- else:
- raise RuntimeError(f"{self} no images in message")
-
- # Decode base64 JPEG
- img_bytes = base64.b64decode(img_b64)
- frame = cv2.imdecode(np.frombuffer(img_bytes, np.uint8), cv2.IMREAD_COLOR)
-
- if frame is None:
- raise RuntimeError(f"{self} failed to decode image")
-
- return frame
-
- def _read_loop(self) -> None:
- while self.stop_event and not self.stop_event.is_set():
- try:
- frame = self.read()
- with self.frame_lock:
- self.latest_frame = frame
- self.new_frame_event.set()
- except DeviceNotConnectedError:
- break
- except TimeoutError:
- pass
- except Exception as e:
- logger.warning(f"Read error: {e}")
-
- def _start_read_thread(self) -> None:
- if self.thread and self.thread.is_alive():
- return
- self.stop_event = Event()
- self.thread = Thread(target=self._read_loop, daemon=True)
- self.thread.start()
-
- def _stop_read_thread(self) -> None:
- if self.stop_event:
- self.stop_event.set()
- if self.thread and self.thread.is_alive():
- self.thread.join(timeout=2.0)
- self.thread = None
- self.stop_event = None
-
- def async_read(self, timeout_ms: float = 10000) -> NDArray[Any]:
- """Read latest frame asynchronously (non-blocking)."""
- if not self.is_connected:
- raise DeviceNotConnectedError(f"{self} is not connected.")
-
- if not self.thread or not self.thread.is_alive():
- self._start_read_thread()
-
- if not self.new_frame_event.wait(timeout=timeout_ms / 1000.0):
- raise TimeoutError(f"{self} async_read timeout after {timeout_ms}ms")
-
- with self.frame_lock:
- frame = self.latest_frame
- self.new_frame_event.clear()
-
- if frame is None:
- raise RuntimeError(f"{self} no frame available")
-
- return frame
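-
-    # Typical consumer loop (illustrative): async_read() lazily starts a
-    # background thread that keeps calling read() and returns the freshest frame.
-    #
-    #   cam.connect()
-    #   while recording:  # hypothetical loop condition
-    #       frame = cam.async_read(timeout_ms=200)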
-
- def disconnect(self) -> None:
- """Disconnect from ZMQ camera."""
- if not self.is_connected and not self.thread:
- raise DeviceNotConnectedError(f"{self} not connected.")
-
- self._stop_read_thread()
- self._cleanup()
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/cameras/zmq/configuration_zmq.py b/lerobot/src/lerobot/cameras/zmq/configuration_zmq.py
deleted file mode 100644
index 569e37fd30622af9fb983ebd2896a71f969adfb8..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/zmq/configuration_zmq.py
+++ /dev/null
@@ -1,46 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..configs import CameraConfig, ColorMode
-
-__all__ = ["ZMQCameraConfig", "ColorMode"]
-
-
-@CameraConfig.register_subclass("zmq")
-@dataclass
-class ZMQCameraConfig(CameraConfig):
- server_address: str
- port: int = 5555
- camera_name: str = "zmq_camera"
- color_mode: ColorMode = ColorMode.RGB
- timeout_ms: int = 5000
-
- def __post_init__(self) -> None:
- if self.color_mode not in (ColorMode.RGB, ColorMode.BGR):
- raise ValueError(
- f"`color_mode` is expected to be {ColorMode.RGB.value} or {ColorMode.BGR.value}, but {self.color_mode} is provided."
- )
-
- if self.timeout_ms <= 0:
- raise ValueError(f"`timeout_ms` must be positive, but {self.timeout_ms} is provided.")
-
- if not self.server_address:
- raise ValueError("`server_address` cannot be empty.")
-
- if self.port <= 0 or self.port > 65535:
- raise ValueError(f"`port` must be between 1 and 65535, but {self.port} is provided.")
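-
-# Validation examples (illustrative): misconfigured values fail at construction.
-#   ZMQCameraConfig(server_address="")                       # ValueError: empty address
-#   ZMQCameraConfig(server_address="10.0.0.5", port=70000)   # ValueError: port out of range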
diff --git a/lerobot/src/lerobot/cameras/zmq/image_server.py b/lerobot/src/lerobot/cameras/zmq/image_server.py
deleted file mode 100644
index 87436bb7b84dd9fe07a1b48d9a8aa30b79e50a55..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/cameras/zmq/image_server.py
+++ /dev/null
@@ -1,114 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Streams camera images over ZMQ.
-Uses lerobot's OpenCVCamera for capture, encodes frames as base64 JPEG, and publishes them over a ZMQ PUB socket.
-"""
-
-import base64
-import contextlib
-import json
-import logging
-import time
-from collections import deque
-
-import cv2
-import numpy as np
-import zmq
-
-from lerobot.cameras.configs import ColorMode
-from lerobot.cameras.opencv import OpenCVCamera, OpenCVCameraConfig
-
-logger = logging.getLogger(__name__)
-
-
-def encode_image(image: np.ndarray, quality: int = 80) -> str:
- """Encode RGB image to base64 JPEG string."""
- _, buffer = cv2.imencode(".jpg", image, [int(cv2.IMWRITE_JPEG_QUALITY), quality])
- return base64.b64encode(buffer).decode("utf-8")
-
-
-class ImageServer:
- def __init__(self, config: dict, port: int = 5555):
- self.fps = config.get("fps", 30)
- self.cameras: dict[str, OpenCVCamera] = {}
-
- for name, cfg in config.get("cameras", {}).items():
- shape = cfg.get("shape", [480, 640])
- cam_config = OpenCVCameraConfig(
- index_or_path=cfg.get("device_id", 0),
- fps=self.fps,
- width=shape[1],
- height=shape[0],
- color_mode=ColorMode.RGB,
- )
- camera = OpenCVCamera(cam_config)
- camera.connect()
- self.cameras[name] = camera
- logger.info(f"Camera {name}: {shape[1]}x{shape[0]}")
-
- # ZMQ PUB socket
- self.context = zmq.Context()
- self.socket = self.context.socket(zmq.PUB)
- self.socket.setsockopt(zmq.SNDHWM, 20)
- self.socket.setsockopt(zmq.LINGER, 0)
- self.socket.bind(f"tcp://*:{port}")
-
- logger.info(f"ImageServer running on port {port}")
-
- def run(self):
- frame_count = 0
- frame_times = deque(maxlen=60)
-
- try:
- while True:
- t0 = time.time()
-
- # Build message
- message = {"timestamps": {}, "images": {}}
- for name, cam in self.cameras.items():
- frame = cam.read() # Returns RGB
- message["timestamps"][name] = time.time()
- message["images"][name] = encode_image(frame)
-
- # Send as JSON string (suppress if buffer full)
- with contextlib.suppress(zmq.Again):
- self.socket.send_string(json.dumps(message), zmq.NOBLOCK)
-
- frame_count += 1
- frame_times.append(time.time() - t0)
-
- if frame_count % 60 == 0:
- logger.debug(f"FPS: {len(frame_times) / sum(frame_times):.1f}")
-
- sleep = (1.0 / self.fps) - (time.time() - t0)
- if sleep > 0:
- time.sleep(sleep)
-
- except KeyboardInterrupt:
- pass
- finally:
- for cam in self.cameras.values():
- cam.disconnect()
- self.socket.close()
- self.context.term()
-
-
-if __name__ == "__main__":
- logging.basicConfig(level=logging.INFO)
- config = {"fps": 30, "cameras": {"head_camera": {"device_id": 4, "shape": [480, 640]}}}
- ImageServer(config, port=5555).run()
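-
-# Pairing with the client side (illustrative): run this file on the camera host,
-# then connect a ZMQCamera from the consumer machine ("<server-ip>" is a placeholder):
-#   from lerobot.cameras.zmq import ZMQCamera, ZMQCameraConfig
-#
-#   cam = ZMQCamera(ZMQCameraConfig(server_address="<server-ip>", port=5555, camera_name="head_camera"))
-#   cam.connect()
-#   frame = cam.read()  # (H, W, 3) array decoded from the base64 JPEG payload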
diff --git a/lerobot/src/lerobot/data_processing/sarm_annotations/__init__.py b/lerobot/src/lerobot/data_processing/sarm_annotations/__init__.py
deleted file mode 100644
index 2a07d1b051e63468ab496d3325eb3c74b7ccc22d..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/data_processing/sarm_annotations/__init__.py
+++ /dev/null
@@ -1,13 +0,0 @@
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
diff --git a/lerobot/src/lerobot/data_processing/sarm_annotations/subtask_annotation.py b/lerobot/src/lerobot/data_processing/sarm_annotations/subtask_annotation.py
deleted file mode 100644
index 99e403cefd5043826aece3273acfaa9220e6ac99..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/data_processing/sarm_annotations/subtask_annotation.py
+++ /dev/null
@@ -1,1202 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-SARM Subtask Annotation using local GPU (Qwen3-VL).
-
-This script implements the annotation approach from the SARM paper using local GPU inference:
-"SARM: Stage-Aware Reward Modeling for Long Horizon Robot Manipulation"
-Paper: https://arxiv.org/pdf/2509.25358
-
-What it does:
-1. Takes videos from a LeRobot dataset
-2. Uses Qwen3-VL running locally on GPU to identify when subtasks occur
-3. Saves subtask timestamps to the dataset metadata
-4. Optionally pushes the annotated dataset to HuggingFace Hub
-
-SARM trains reward models that predict:
- - Stage: Which subtask is currently being executed (discrete classification)
- - Progress: How far along the subtask we are (continuous 0-1)
-
-Supports three annotation modes:
- 1. No annotations (no args): Auto-creates single sparse "task" stage covering full episode.
- Use with SARM config annotation_mode="single_stage" for simple tasks.
-
- 2. Dense-only (--dense-only --dense-subtasks): Dense annotations from VLM, auto-generated
- single sparse "task" stage. Use with annotation_mode="dense_only".
-
- 3. Dual mode (--sparse-subtasks + --dense-subtasks): Both sparse and dense annotations
- from VLM. Use with annotation_mode="dual".
-
-Requirements:
- - GPU with sufficient VRAM (16GB+ recommended for 30B model)
-  - `pip install transformers torch qwen-vl-utils`
-
-Run with:
-```bash
-python examples/dataset_annotation/subtask_annotation.py \
- --repo-id your-username/your-dataset \
- --sparse-subtasks "Do ..." \
- --dense-subtasks "Do task 1, Do task 2, Do task 3" \
- --video-key observation.images.base \
- --push-to-hub
-```
-"""
-
-import argparse
-import json
-import multiprocessing as mp
-import random
-import re
-import subprocess
-import tempfile
-import textwrap
-import time
-from concurrent.futures import ProcessPoolExecutor, as_completed
-from pathlib import Path
-from typing import Any
-
-import cv2
-import numpy as np
-import pandas as pd
-import torch
-from pydantic import BaseModel, Field
-from transformers import AutoProcessor, Qwen3VLMoeForConditionalGeneration
-
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-
-
-# Pydantic Models for SARM Subtask Annotation
-class Timestamp(BaseModel):
- """Timestamp in MM:SS or SS format"""
-
- start: str = Field(description="Start timestamp (MM:SS or just seconds)")
- end: str = Field(description="End timestamp (MM:SS or just seconds)")
-
-
-class Subtask(BaseModel):
- """Individual subtask/stage - must use EXACT names from provided list"""
-
- name: str = Field(description="Subtask name - MUST match one from the predefined list exactly")
- timestamps: Timestamp
-
-
-class SubtaskAnnotation(BaseModel):
- """Complete annotation for a robot manipulation episode"""
-
- subtasks: list[Subtask] = Field(description="List of all subtasks in temporal order")
-
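-# Parsing example (illustrative): validating a VLM response against the schema above.
-#   raw = {"subtasks": [{"name": "grasp", "timestamps": {"start": "00:00", "end": "00:05"}}]}
-#   ann = SubtaskAnnotation.model_validate(raw)
-#   ann.subtasks[0].name  # -> "grasp"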
-
-def compute_temporal_proportions(
- annotations: dict[int, Any], fps: int = 30, subtask_order: list[str] | None = None
-) -> dict[str, float]:
- """
- Compute dataset-level temporal proportions (priors) for each subtask.
-
- Implements SARM Paper Formula (1): ᾱ_k = (1/M) × Σ_i (L_{i,k} / T_i)
-
- Args:
- annotations: Dict mapping episode index to SubtaskAnnotation object.
- fps: Frames per second (unused, kept for API compatibility)
- subtask_order: Optional list defining the output order of subtasks.
-
- Returns:
- Dict mapping subtask name to its temporal proportion (ᾱ_k), ordered by subtask_order if provided.
- """
- subtask_proportions: dict[str, list[float]] = {}
-
- for annotation in annotations.values():
- total_duration = 0
- durations: dict[str, int] = {}
-
- for subtask in annotation.subtasks:
- start_parts = subtask.timestamps.start.split(":")
- end_parts = subtask.timestamps.end.split(":")
-
- start_seconds = (
- int(start_parts[0]) * 60 + int(start_parts[1])
- if len(start_parts) == 2
- else int(start_parts[0])
- )
- end_seconds = (
- int(end_parts[0]) * 60 + int(end_parts[1]) if len(end_parts) == 2 else int(end_parts[0])
- )
-
- duration = end_seconds - start_seconds
- durations[subtask.name] = duration
- total_duration += duration
-
- if total_duration > 0:
- for name, duration in durations.items():
- if name not in subtask_proportions:
- subtask_proportions[name] = []
- subtask_proportions[name].append(duration / total_duration)
-
- if not subtask_proportions:
- return {}
-
- avg_proportions = {name: sum(props) / len(props) for name, props in subtask_proportions.items()}
-
- total = sum(avg_proportions.values())
- if total > 0:
- avg_proportions = {name: prop / total for name, prop in avg_proportions.items()}
-
- # Reorder according to subtask_order if provided
- if subtask_order:
- avg_proportions = {
- name: avg_proportions.get(name, 0.0) for name in subtask_order if name in avg_proportions
- }
-
- return avg_proportions
-
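-# Worked example of formula (1) (illustrative, M=2 episodes):
-#   ep0: grasp 00:00-00:04, place 00:04-00:10  ->  L/T = 0.4, 0.6
-#   ep1: grasp 00:00-00:02, place 00:02-00:10  ->  L/T = 0.2, 0.8
-# giving priors alpha_grasp = (0.4 + 0.2) / 2 = 0.3 and alpha_place = (0.6 + 0.8) / 2 = 0.7.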
-
-def create_sarm_prompt(subtask_list: list[str]) -> str:
- subtask_str = "\n".join([f" - {name}" for name in subtask_list])
-
- return textwrap.dedent(f"""\
- # Role
- You are a Robotics Vision System specializing in temporal action localization for robot manipulation. Your job is to segment a single demonstration video into distinct, non-overlapping atomic actions from a fixed subtask list.
-
- # Subtask Label Set (Closed Vocabulary)
- You must strictly identify the video segments using ONLY the following labels. Do not create new labels or modify existing ones:
-
- [
- {subtask_str}
- ]
-
- The video shows one successful execution of all subtasks in a logical order.
-
- # Ground-Truth Semantics (Very Important)
- Use **visual state changes** to define when a subtask starts and ends. Do NOT assume equal durations for the subtasks.
-
- - A subtask **starts** at the first frame where the robot's motion clearly initiates that subtask.
- - A subtask **ends** at the first frame where that specific action is visually completed and the manipulated object reaches a temporary, stable configuration.
-
- If there are short pauses or micro-motions that don't clearly correspond to a new subtask, they belong to the **current** subtask.
-
- # Hard Constraints & Logic
- 1. **Continuous Coverage (No Gaps):**
- - The entire video duration from "00:00" to the final timestamp must be covered by subtasks.
- - There can be no gaps between subtasks.
- - If there is any idle or ambiguous time between clear actions, extend the *preceding* subtask to cover it.
-
- 2. **Boundary Consistency:**
- - The `"end"` timestamp of one subtask must be exactly equal to the `"start"` timestamp of the next subtask.
- - Boundaries must coincide with a real visual state transition, not just a convenient time split.
-
- 3. **Chronological Order, One Occurrence Each:**
- - This is a single successful demonstration.
- - Each subtask from the vocabulary appears **exactly once**, in the correct logical order.
- - **Durations may be very different** between subtasks. Never assume they are similar lengths. Base all boundaries only on the video.
-
- 4. **Reject Uniform Segmentation (Important):**
- - Do NOT simply divide the video into equal or nearly equal time chunks.
- - If your boundaries would result in subtasks with similar durations (e.g. all around 5 seconds), treat this as evidence that your segmentation is wrong and refine the boundaries.
- - Only use nearly equal durations if the video truly shows each subtask taking the same amount of time (this is very rare).
-
- 5. **Timestamps:**
- - Timestamps must be in `"MM:SS"` format.
- - The first subtask always starts at `"00:00"`.
- - The last subtask ends at the final visible frame of the video.
-
- # Step 1 — Textual Timeline (must do this first)
-        First, write an extensive and detailed textual timeline describing what happens in the video with approximate timestamps.
-        For each subtask, include:
-        - its name,
-        - an approximate start and end time,
-        - a description of the visual event at the boundary (e.g. "shirt fully folded to the left", "robot rotates folded shirt 90 degrees").
-
- Format this as a bullet list.
-
- # Step 2 — JSON Output (final answer)
- After the textual timeline, output **only** valid JSON with this structure.
- The JSON **must** be consistent with the textual timeline above:
-
- {{
- "subtasks": [
- {{
- "name": "EXACT_NAME_FROM_LIST",
- "timestamps": {{
- "start": "MM:SS",
- "end": "MM:SS"
- }}
- }},
- {{
- "name": "EXACT_NAME_FROM_LIST",
- "timestamps": {{
- "start": "MM:SS",
- "end": "MM:SS"
- }}
- }}
- ]
- }}
-
- Do not add any extra keys to the JSON.
- """)
-
-
-class VideoAnnotator:
- """Annotates robot manipulation videos using local Qwen3-VL model on GPU"""
-
- def __init__(
- self,
- subtask_list: list[str],
- model_name: str = "Qwen/Qwen3-VL-30B-A3B-Instruct",
- device: str = "cuda",
- torch_dtype: torch.dtype = torch.bfloat16,
- model: Qwen3VLMoeForConditionalGeneration | None = None, # noqa: F821
- processor: AutoProcessor | None = None, # noqa: F821
- ):
- """
- Initialize the video annotator with local model.
-
- Args:
- subtask_list: List of allowed subtask names (for consistency)
- model_name: Hugging Face model name (default: Qwen/Qwen3-VL-30B-A3B-Instruct)
- device: Device to use (cuda, cpu)
- torch_dtype: Data type for model (bfloat16, float16, float32)
- model: Pre-loaded model instance (optional, to share between annotators)
- processor: Pre-loaded processor instance (optional, to share between annotators)
- """
- self.subtask_list = subtask_list
- self.prompt = create_sarm_prompt(subtask_list)
- self.device = device
-
- # Use provided model/processor or load new ones
- if model is not None and processor is not None:
- self.model = model
- self.processor = processor
- print(f"Using shared model on {device}")
- else:
- from transformers import AutoProcessor, Qwen3VLMoeForConditionalGeneration
-
- print(f"Loading model: {model_name}...")
-
- self.model = Qwen3VLMoeForConditionalGeneration.from_pretrained(
- model_name, torch_dtype=torch_dtype, device_map=device, trust_remote_code=True
- )
-
- self.processor = AutoProcessor.from_pretrained(model_name, trust_remote_code=True)
-
- print(f"Model loaded successfully on {device}")
-
- def extract_episode_segment(
- self, file_path: Path, start_timestamp: float, end_timestamp: float, target_fps: int = 1
- ) -> Path:
- """
- Extract a specific episode segment from concatenated video.
- Uses minimal compression to preserve quality for local inference.
-
- Args:
- file_path: Path to the concatenated video file
- start_timestamp: Starting timestamp in seconds (within this video file)
- end_timestamp: Ending timestamp in seconds (within this video file)
- target_fps: Target FPS (default: 1 for faster processing)
-
- Returns:
- Path to extracted video file
- """
- # Create temporary file for extracted video
- with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp_file:
- tmp_path = Path(tmp_file.name)
-
- try:
- # Check if ffmpeg is available
- subprocess.run( # nosec B607
- ["ffmpeg", "-version"], stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True
- )
- except (subprocess.CalledProcessError, FileNotFoundError) as err:
- raise RuntimeError("ffmpeg not found, cannot extract episode segment") from err
-
- try:
- # Calculate duration
- duration = end_timestamp - start_timestamp
-
- print(f"Extracting episode: {start_timestamp:.1f}s-{end_timestamp:.1f}s ({duration:.1f}s)")
-
- # Use ffmpeg to extract segment with minimal quality loss
- cmd = [
- "ffmpeg",
- "-i",
- str(file_path),
- "-ss",
- str(start_timestamp),
- "-t",
- str(duration),
- "-r",
- str(target_fps),
- "-c:v",
- "libx264",
- "-preset",
- "ultrafast",
- "-crf",
- "23",
- "-an",
- "-y",
- str(tmp_path),
- ]
-
- subprocess.run(cmd, stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL, check=True)
-
- # Verify the output file was created and is not empty
- if not tmp_path.exists() or tmp_path.stat().st_size == 0:
- print("Video extraction failed (0 bytes) - skipping episode")
- if tmp_path.exists():
- tmp_path.unlink()
- raise RuntimeError("FFmpeg produced empty video file")
-
- # Show extraction results
- file_size_mb = tmp_path.stat().st_size / (1024 * 1024)
-
- # Fail if file is too small (< 100KB likely means extraction failed)
- if file_size_mb < 0.1:
- print(f"Extracted video too small ({file_size_mb:.2f}MB) - skipping episode")
- tmp_path.unlink()
- raise RuntimeError(f"Video extraction produced invalid file ({file_size_mb:.2f}MB)")
-
- print(f"Extracted: {file_size_mb:.1f}MB ({target_fps} FPS)")
-
- return tmp_path
-
- except subprocess.CalledProcessError as e:
- raise RuntimeError(f"ffmpeg failed ({e})") from e
-
- def annotate(
- self,
- file_path: str | Path,
- fps: int,
- start_timestamp: float = 0.0,
- end_timestamp: float | None = None,
- max_retries: int = 3,
- ) -> SubtaskAnnotation:
- """Annotate a video segment using local GPU."""
- from qwen_vl_utils import process_vision_info
-
- file_path = Path(file_path)
-
- if end_timestamp is None:
- cap = cv2.VideoCapture(str(file_path))
- end_timestamp = int(cap.get(cv2.CAP_PROP_FRAME_COUNT)) / (cap.get(cv2.CAP_PROP_FPS) or 1)
- cap.release()
-
- duration = end_timestamp - start_timestamp
- duration_str = f"{int(duration // 60):02d}:{int(duration % 60):02d}"
-
- extracted_path = self.extract_episode_segment(file_path, start_timestamp, end_timestamp, 1)
- is_extracted = extracted_path != file_path
-
- try:
- messages = [
- {"role": "system", "content": [{"type": "text", "text": self.prompt}]},
- {
- "role": "user",
- "content": [
- {"type": "video", "video": str(extracted_path), "fps": 1.0},
- {
- "type": "text",
- "text": f"Video is {duration_str} (~{duration:.1f}s). Follow instructions.",
- },
- ],
- },
- ]
-
- for attempt in range(max_retries):
- try:
- text = self.processor.apply_chat_template(
- messages, tokenize=False, add_generation_prompt=True
- )
- image_inputs, video_inputs = process_vision_info(messages)
- inputs = self.processor(
- text=[text],
- images=image_inputs,
- videos=video_inputs,
- padding=True,
- return_tensors="pt",
- ).to(self.device)
-
- with torch.no_grad():
- generated_ids = self.model.generate(
- **inputs, max_new_tokens=1024, do_sample=True, temperature=0.7
- )
-
- response = self.processor.batch_decode(
- [out[len(inp) :] for inp, out in zip(inputs.input_ids, generated_ids, strict=True)],
- skip_special_tokens=True,
- )[0].strip()
-
- # Extract JSON
- if "```json" in response:
- response = response.split("```json")[1].split("```")[0]
- elif "```" in response:
- response = response.split("```")[1].split("```")[0]
-
- try:
- return SubtaskAnnotation.model_validate(json.loads(response))
- except json.JSONDecodeError:
- match = re.search(r"\{.*\}", response, re.DOTALL)
- if match:
- return SubtaskAnnotation.model_validate(json.loads(match.group()))
- raise ValueError("No JSON found") from None
- except Exception as e:
- if attempt == max_retries - 1:
- raise RuntimeError(f"Failed after {max_retries} attempts") from e
- time.sleep(1)
- finally:
- if is_extracted and extracted_path.exists():
- extracted_path.unlink()
-
-
-def display_annotation(annotation: SubtaskAnnotation, episode_idx: int, fps: int, prefix: str = ""):
- """Display annotation summary."""
- subtask_summary = ", ".join(
- f"{s.name}({s.timestamps.start}-{s.timestamps.end})" for s in annotation.subtasks
- )
- print(f"Episode {episode_idx} {prefix}: {len(annotation.subtasks)} subtasks - {subtask_summary}")
-
-
-def timestamp_to_seconds(timestamp: str) -> float:
- """Convert MM:SS or SS timestamp to seconds"""
- parts = timestamp.split(":")
- if len(parts) == 2:
- return int(parts[0]) * 60 + int(parts[1])
- else:
- return int(parts[0])
-
-
-def extract_frame(video_path: Path, timestamp: float) -> np.ndarray | None:
- """Extract a single frame from video at given timestamp."""
- cap = cv2.VideoCapture(str(video_path))
- if not cap.isOpened():
- return None
- cap.set(cv2.CAP_PROP_POS_MSEC, timestamp * 1000)
- ret, frame = cap.read()
- cap.release()
- return cv2.cvtColor(frame, cv2.COLOR_BGR2RGB) if ret else None
-
-
-def draw_timeline(ax, subtasks, total_duration, colors):
- """Draw a timeline with color-coded subtask segments."""
- import matplotlib.patches as mpatches
-
- bar_height, bar_y = 0.6, 0.5
-
- for i, subtask in enumerate(subtasks):
- start = timestamp_to_seconds(subtask.timestamps.start)
- end = timestamp_to_seconds(subtask.timestamps.end)
- color = colors[i % len(colors)]
-
- rect = mpatches.FancyBboxPatch(
- (start, bar_y - bar_height / 2),
- end - start,
- bar_height,
- boxstyle="round,pad=0.02,rounding_size=0.1",
- facecolor=color,
- edgecolor="white",
- linewidth=1.5,
- alpha=0.85,
- )
- ax.add_patch(rect)
-
- # Add label if segment is wide enough
- duration = end - start
- if duration > total_duration * 0.06:
- ax.text(
- (start + end) / 2,
- bar_y,
- subtask.name,
- ha="center",
- va="center",
- fontsize=8,
- fontweight="bold",
- color="white",
- rotation=0 if duration > total_duration * 0.12 else 45,
- )
-
- if i > 0:
- ax.axvline(x=start, ymin=0.1, ymax=0.9, color="white", linestyle="--", linewidth=1.5, alpha=0.7)
-
- ax.axvline(x=0, ymin=0.1, ymax=0.9, color="#00ff00", linestyle="-", linewidth=2, alpha=0.9)
- if subtasks:
- ax.axvline(
- x=timestamp_to_seconds(subtasks[-1].timestamps.end),
- ymin=0.1,
- ymax=0.9,
- color="white",
- linestyle="--",
- linewidth=1.5,
- alpha=0.7,
- )
-
- ax.set_xlim(-total_duration * 0.02, total_duration * 1.02)
- ax.set_ylim(-0.1, 1.1)
- ax.set_xlabel("Time (seconds)", fontsize=10, color="white", labelpad=5)
- for spine in ["top", "right", "left"]:
- ax.spines[spine].set_visible(False)
- ax.spines["bottom"].set_color("#444444")
- ax.tick_params(axis="x", colors="#888888", labelsize=8)
- ax.tick_params(axis="y", left=False, labelleft=False)
-
-
-def visualize_episode(
- ep_idx: int,
- annotation: SubtaskAnnotation,
- video_path: Path,
- video_start: float,
- video_end: float,
- output_path: Path,
- video_key: str,
- ann_type: str,
-):
- """Create visualization for a single episode with frames and timeline."""
- import matplotlib.pyplot as plt
-
- if annotation is None:
- print(f"No {ann_type} annotation for episode {ep_idx}")
- return
-
- subtasks = annotation.subtasks
- if not subtasks:
- print(f"No subtasks for episode {ep_idx}")
- return
-
- colors = plt.cm.tab10(np.linspace(0, 1, max(len(subtasks), 10)))
- total_duration = timestamp_to_seconds(subtasks[-1].timestamps.end)
-
- # Extract middle frame from each subtask
- sample_frames, frame_times = [], []
- for subtask in subtasks:
- start = timestamp_to_seconds(subtask.timestamps.start)
- end = timestamp_to_seconds(subtask.timestamps.end)
- mid = (start + end) / 2
- frame_times.append(mid)
- sample_frames.append(extract_frame(video_path, video_start + mid))
-
- # Create figure
- fig_width = max(16, len(subtasks) * 2.5)
- fig = plt.figure(figsize=(fig_width, 10))
- fig.patch.set_facecolor("#1a1a2e")
-
- gs = fig.add_gridspec(
- 2,
- max(len(subtasks), 1),
- height_ratios=[2, 1],
- hspace=0.3,
- wspace=0.1,
- left=0.05,
- right=0.95,
- top=0.88,
- bottom=0.1,
- )
-
- fig.suptitle(
- f"Episode {ep_idx} - {ann_type.capitalize()} Annotations",
- fontsize=18,
- fontweight="bold",
- color="white",
- y=0.96,
- )
- fig.text(
- 0.5,
- 0.91,
- f"Camera: {video_key} | Duration: {video_end - video_start:.1f}s | {len(subtasks)} subtasks",
- ha="center",
- fontsize=11,
- color="#888888",
- )
-
- # Plot frames
- for i, (frame, subtask) in enumerate(zip(sample_frames, subtasks, strict=True)):
- ax = fig.add_subplot(gs[0, i])
- ax.set_facecolor("#16213e")
- if frame is not None:
- ax.imshow(frame)
- else:
- ax.text(
- 0.5, 0.5, "N/A", ha="center", va="center", fontsize=12, color="white", transform=ax.transAxes
- )
- ax.set_title(subtask.name, fontsize=10, fontweight="bold", color=colors[i % len(colors)], pad=8)
- ax.axis("off")
- ax.text(
- 0.5,
- -0.08,
- f"t={frame_times[i]:.1f}s",
- ha="center",
- fontsize=9,
- color="#888888",
- transform=ax.transAxes,
- )
-
- # Plot timeline
- ax_timeline = fig.add_subplot(gs[1, :])
- ax_timeline.set_facecolor("#16213e")
- draw_timeline(ax_timeline, subtasks, total_duration, colors)
-
- output_path.parent.mkdir(parents=True, exist_ok=True)
- plt.savefig(output_path, dpi=150, facecolor=fig.get_facecolor(), edgecolor="none", bbox_inches="tight")
- plt.close()
- print(f"Saved: {output_path}")
-
-
-def visualize_annotations(
- dataset: LeRobotDataset,
- sparse_annotations: dict[int, SubtaskAnnotation],
- dense_annotations: dict[int, SubtaskAnnotation] | None,
- video_key: str,
- output_dir: Path,
- num_episodes: int = 5,
- annotation_type: str = "sparse",
- episode_indices: list[int] | None = None,
-):
- """
- Visualize subtask annotations for a set of episodes.
-
- Args:
- dataset: LeRobotDataset instance
- sparse_annotations: Dict mapping episode index to sparse annotations
- dense_annotations: Dict mapping episode index to dense annotations (or None)
- video_key: Camera/video key to use
- output_dir: Directory to save visualization images
- num_episodes: Number of episodes to visualize (ignored if episode_indices provided)
- annotation_type: "sparse", "dense", or "both"
- episode_indices: Specific episode indices to visualize (optional)
- """
- # Determine available episodes based on annotation type
- if annotation_type == "sparse":
- available = set(sparse_annotations.keys())
- elif annotation_type == "dense":
- available = set(dense_annotations.keys()) if dense_annotations else set()
- else: # both
- sparse_set = set(sparse_annotations.keys())
- dense_set = set(dense_annotations.keys()) if dense_annotations else set()
- available = sparse_set | dense_set
-
- if not available:
- print("Error: No annotations found to visualize.")
- return
-
- # Select episodes to visualize
- if episode_indices:
- episodes = sorted([e for e in episode_indices if e in available])
- missing = set(episode_indices) - available
- if missing:
- print(f"Episodes not found in annotations: {sorted(missing)}")
- else:
- episodes = sorted(random.sample(list(available), min(num_episodes, len(available))))
- print(f"Visualizing {len(episodes)} episodes: {episodes}")
- output_dir.mkdir(parents=True, exist_ok=True)
-
- # Generate visualizations
- for i, ep_idx in enumerate(episodes, 1):
- print(f"Processing episode {ep_idx} ({i}/{len(episodes)})")
- video_path = dataset.root / dataset.meta.get_video_file_path(ep_idx, video_key)
- if not video_path.exists():
- print(f"Video not found: {video_path}")
- continue
-
- video_start = float(dataset.meta.episodes[f"videos/{video_key}/from_timestamp"][ep_idx])
- video_end = float(dataset.meta.episodes[f"videos/{video_key}/to_timestamp"][ep_idx])
-
- if annotation_type == "both":
- # Visualize both sparse and dense
- for ann_type, annotations in [("sparse", sparse_annotations), ("dense", dense_annotations)]:
- if annotations and ep_idx in annotations:
- output_path = output_dir / f"episode_{ep_idx:04d}_{ann_type}.png"
- visualize_episode(
- ep_idx,
- annotations.get(ep_idx),
- video_path,
- video_start,
- video_end,
- output_path,
- video_key,
- ann_type,
- )
- else:
- annotations = sparse_annotations if annotation_type == "sparse" else dense_annotations
- if annotations and ep_idx in annotations:
- output_path = output_dir / f"episode_{ep_idx:04d}_{annotation_type}.png"
- visualize_episode(
- ep_idx,
- annotations.get(ep_idx),
- video_path,
- video_start,
- video_end,
- output_path,
- video_key,
- annotation_type,
- )
-
- print(f"Visualizations saved to: {output_dir.absolute()}")
-
-
-def save_annotations_to_dataset(
- dataset_path: Path, annotations: dict[int, SubtaskAnnotation], fps: int, prefix: str = "sparse"
-):
- """Save annotations to LeRobot dataset parquet format."""
- from lerobot.datasets.utils import DEFAULT_EPISODES_PATH, load_episodes
-
- episodes_dataset = load_episodes(dataset_path)
- if not episodes_dataset or len(episodes_dataset) == 0:
- return
-
- episodes_df = episodes_dataset.to_pandas()
- cols = [
- f"{prefix}_{c}"
- for c in [
- "subtask_names",
- "subtask_start_times",
- "subtask_end_times",
- "subtask_start_frames",
- "subtask_end_frames",
- ]
- ]
- for col in cols:
- episodes_df[col] = None
-
- for ep_idx, ann in annotations.items():
- if ep_idx >= len(episodes_df):
- continue
- names, starts, ends, start_frames, end_frames = [], [], [], [], []
- for s in ann.subtasks:
- names.append(s.name)
- st, et = timestamp_to_seconds(s.timestamps.start), timestamp_to_seconds(s.timestamps.end)
- starts.append(st)
- ends.append(et)
- start_frames.append(int(st * fps))
- end_frames.append(int(et * fps))
- episodes_df.at[ep_idx, cols[0]] = names
- episodes_df.at[ep_idx, cols[1]] = starts
- episodes_df.at[ep_idx, cols[2]] = ends
- episodes_df.at[ep_idx, cols[3]] = start_frames
- episodes_df.at[ep_idx, cols[4]] = end_frames
-
- # Group by file and write
- for ep_idx in episodes_df.index:
- key = (
- episodes_df.loc[ep_idx, "meta/episodes/chunk_index"],
- episodes_df.loc[ep_idx, "meta/episodes/file_index"],
- )
- path = dataset_path / DEFAULT_EPISODES_PATH.format(chunk_index=key[0], file_index=key[1])
- if path.exists():
- file_df = pd.read_parquet(path)
- for col in cols + (
- [
- "subtask_names",
- "subtask_start_times",
- "subtask_end_times",
- "subtask_start_frames",
- "subtask_end_frames",
- ]
- if prefix == "sparse"
- else []
- ):
- if col not in file_df.columns:
- file_df[col] = None
- if ep_idx in annotations:
- for col in cols:
- file_df.at[ep_idx, col] = episodes_df.loc[ep_idx, col]
- if prefix == "sparse": # Legacy columns
- for i, legacy in enumerate(
- [
- "subtask_names",
- "subtask_start_times",
- "subtask_end_times",
- "subtask_start_frames",
- "subtask_end_frames",
- ]
- ):
- file_df.at[ep_idx, legacy] = episodes_df.loc[ep_idx, cols[i]]
- file_df.to_parquet(path, engine="pyarrow", compression="snappy")
-
-
-def generate_auto_sparse_annotations(
- dataset: LeRobotDataset, episode_indices: list[int], video_key: str
-) -> dict[int, SubtaskAnnotation]:
- """Auto-generate single 'task' stage annotations for all episodes."""
- annotations = {}
- for ep_idx in episode_indices:
- start = float(dataset.meta.episodes[f"videos/{video_key}/from_timestamp"][ep_idx])
- end = float(dataset.meta.episodes[f"videos/{video_key}/to_timestamp"][ep_idx])
- duration = end - start
- end_str = f"{int(duration // 60):02d}:{int(duration % 60):02d}"
- annotations[ep_idx] = SubtaskAnnotation(
- subtasks=[Subtask(name="task", timestamps=Timestamp(start="00:00", end=end_str))]
- )
- return annotations
-
-
-def load_annotations_from_dataset(dataset_path: Path, prefix: str = "sparse") -> dict[int, SubtaskAnnotation]:
- """Load annotations from LeRobot dataset parquet files."""
- from lerobot.datasets.utils import load_episodes
-
- episodes_dataset = load_episodes(dataset_path)
- if not episodes_dataset or len(episodes_dataset) == 0:
- return {}
-
- col_names = f"{prefix}_subtask_names"
- col_start = f"{prefix}_subtask_start_times"
- col_end = f"{prefix}_subtask_end_times"
-
- # Fall back to legacy columns for sparse
- if col_names not in episodes_dataset.column_names:
- if prefix == "sparse" and "subtask_names" in episodes_dataset.column_names:
- col_names, col_start, col_end = "subtask_names", "subtask_start_times", "subtask_end_times"
- else:
- return {}
-
- df = episodes_dataset.to_pandas()
- annotations = {}
- for ep_idx in df.index:
- names = df.loc[ep_idx, col_names]
- if names is None or (isinstance(names, float) and pd.isna(names)):
- continue
- starts, ends = df.loc[ep_idx, col_start], df.loc[ep_idx, col_end]
- annotations[int(ep_idx)] = SubtaskAnnotation(
- subtasks=[
- Subtask(
- name=n,
- timestamps=Timestamp(
- start=f"{int(s) // 60:02d}:{int(s) % 60:02d}",
- end=f"{int(e) // 60:02d}:{int(e) % 60:02d}",
- ),
- )
- for n, s, e in zip(names, starts, ends, strict=True)
- ]
- )
- return annotations
-
-
-def process_single_episode(
- ep_idx: int,
- dataset_root: Path,
- dataset_meta,
- video_key: str,
- fps: int,
- annotator: VideoAnnotator,
-) -> tuple[int, SubtaskAnnotation | None, str | None]:
- """Process a single episode annotation."""
- try:
- video_path = dataset_root / dataset_meta.get_video_file_path(ep_idx, video_key)
- if not video_path.exists():
- return ep_idx, None, f"Video not found: {video_path}"
-
- start = float(dataset_meta.episodes[f"videos/{video_key}/from_timestamp"][ep_idx])
- end = float(dataset_meta.episodes[f"videos/{video_key}/to_timestamp"][ep_idx])
- return ep_idx, annotator.annotate(video_path, fps, start, end), None
- except Exception as e:
- return ep_idx, None, str(e)
-
-
-def worker_process_episodes(
- worker_id: int,
- gpu_id: int,
- episode_indices: list[int],
- repo_id: str,
- video_key: str,
- sparse_subtask_list: list[str],
- dense_subtask_list: list[str] | None,
- model_name: str,
- torch_dtype: torch.dtype,
-) -> tuple[dict, dict | None]:
- """Worker for parallel processing across GPUs."""
- device = f"cuda:{gpu_id}"
- dataset = LeRobotDataset(repo_id, download_videos=False)
-
- sparse_annotator = VideoAnnotator(sparse_subtask_list, model_name, device, torch_dtype)
- dense_annotator = (
- VideoAnnotator(
- dense_subtask_list,
- model_name,
- device,
- torch_dtype,
- sparse_annotator.model,
- sparse_annotator.processor,
- )
- if dense_subtask_list
- else None
- )
-
-    sparse_annotations = {}
-    dense_annotations = {} if dense_subtask_list else None
-
- for ep_idx in episode_indices:
- _, sparse_ann, err = process_single_episode(
- ep_idx, dataset.root, dataset.meta, video_key, dataset.fps, sparse_annotator
- )
- if sparse_ann:
- sparse_annotations[ep_idx] = sparse_ann
-
- if dense_annotator:
- _, dense_ann, _ = process_single_episode(
- ep_idx, dataset.root, dataset.meta, video_key, dataset.fps, dense_annotator
- )
- if dense_ann:
- dense_annotations[ep_idx] = dense_ann
-
- return sparse_annotations, dense_annotations
-
-
-def main():
- parser = argparse.ArgumentParser(description="SARM-style subtask annotation using local GPU (Qwen3-VL)")
- parser.add_argument("--repo-id", type=str, required=True, help="HuggingFace dataset repository ID")
- parser.add_argument(
- "--sparse-subtasks", type=str, default=None, help="Comma-separated sparse subtask names"
- )
- parser.add_argument(
- "--dense-subtasks", type=str, default=None, help="Comma-separated dense subtask names"
- )
- parser.add_argument(
- "--dense-only", action="store_true", help="Dense-only mode with auto-generated sparse 'task' stage"
- )
- parser.add_argument("--episodes", type=int, nargs="+", default=None, help="Episode indices to annotate")
- parser.add_argument("--model", type=str, default="Qwen/Qwen3-VL-30B-A3B-Instruct", help="VLM model")
- parser.add_argument("--skip-existing", action="store_true", help="Skip already annotated episodes")
- parser.add_argument("--video-key", type=str, default=None, help="Video key (default: first available)")
- parser.add_argument("--push-to-hub", action="store_true", help="Push to HuggingFace Hub")
- parser.add_argument("--output-repo-id", type=str, default=None, help="Output repo ID for push")
- parser.add_argument("--device", type=str, default="cuda", help="Device (cuda/cpu)")
- parser.add_argument("--dtype", type=str, default="bfloat16", choices=["bfloat16", "float16", "float32"])
- parser.add_argument("--num-workers", type=int, default=1, help="Parallel workers for multi-GPU")
- parser.add_argument("--gpu-ids", type=int, nargs="+", default=None, help="GPU IDs to use")
- # Visualization options
- parser.add_argument(
- "--visualize-only",
- action="store_true",
- help="Only visualize existing annotations (no generation)",
- )
- parser.add_argument(
- "--num-visualizations",
- type=int,
- default=5,
- help="Number of episodes to visualize (default: 5)",
- )
- parser.add_argument(
- "--visualize-type",
- type=str,
- default="sparse",
- choices=["sparse", "dense", "both"],
- help="Type of annotations to visualize (default: sparse)",
- )
- parser.add_argument(
- "--output-dir",
- type=str,
- default="./subtask_viz",
- help="Output directory for visualizations (default: ./subtask_viz)",
- )
-
- args = parser.parse_args()
-
- # Load dataset first (needed for both annotation and visualization)
- print(f"Loading dataset: {args.repo_id}")
- dataset = LeRobotDataset(args.repo_id, download_videos=True)
- fps = dataset.fps
-
- if not dataset.meta.video_keys:
- raise ValueError("No video keys found")
-
- video_key = (
- args.video_key if args.video_key in (dataset.meta.video_keys or []) else dataset.meta.video_keys[0]
- )
- print(f"Using camera: {video_key}, FPS: {fps}")
-
- # Handle visualization-only mode
- if args.visualize_only:
- print("Visualization-only mode")
- sparse_annotations = load_annotations_from_dataset(dataset.root, prefix="sparse")
- dense_annotations = load_annotations_from_dataset(dataset.root, prefix="dense")
-
- if not sparse_annotations and not dense_annotations:
- return print("Error: No annotations found. Run annotation first.")
-
- print(f"Found {len(sparse_annotations)} sparse, {len(dense_annotations)} dense annotations")
-
- visualize_annotations(
- dataset=dataset,
- sparse_annotations=sparse_annotations,
- dense_annotations=dense_annotations if dense_annotations else None,
- video_key=video_key,
- output_dir=Path(args.output_dir),
- num_episodes=args.num_visualizations,
- annotation_type=args.visualize_type,
- episode_indices=args.episodes,
- )
- return
-
- # Validate arguments for annotation mode
- if args.dense_only and not args.dense_subtasks:
- return print("Error: --dense-only requires --dense-subtasks")
- if args.dense_subtasks and not args.sparse_subtasks and not args.dense_only:
- return print("Error: --dense-subtasks requires --sparse-subtasks or --dense-only")
-
- sparse_subtask_list = (
- [s.strip() for s in args.sparse_subtasks.split(",")] if args.sparse_subtasks else None
- )
- dense_subtask_list = [s.strip() for s in args.dense_subtasks.split(",")] if args.dense_subtasks else None
- auto_sparse = sparse_subtask_list is None
- dense_mode = dense_subtask_list is not None
- torch_dtype = {"bfloat16": torch.bfloat16, "float16": torch.float16, "float32": torch.float32}[args.dtype]
-
- # Determine episodes
- episode_indices = args.episodes or list(range(dataset.meta.total_episodes))
-
- existing_annotations = load_annotations_from_dataset(dataset.root, prefix="sparse")
- if args.skip_existing:
- episode_indices = [ep for ep in episode_indices if ep not in existing_annotations]
-
- if not episode_indices:
- return print("All episodes already annotated!")
- print(f"Annotating {len(episode_indices)} episodes")
-
- # GPU setup
- gpu_ids = args.gpu_ids or list(
- range(min(args.num_workers, torch.cuda.device_count() if torch.cuda.is_available() else 1))
- )
- args.num_workers = len(gpu_ids)
-
- sparse_annotations = existing_annotations.copy()
- dense_annotations = {} if dense_mode else None
-
- # Auto-sparse mode
- if auto_sparse:
- sparse_annotations.update(generate_auto_sparse_annotations(dataset, episode_indices, video_key))
- save_annotations_to_dataset(dataset.root, sparse_annotations, fps, prefix="sparse")
- print(f"Auto-generated {len(episode_indices)} sparse 'task' annotations")
-
- # VLM annotation (for sparse if not auto, and for dense)
- need_vlm = (not auto_sparse) or dense_mode
-
- if need_vlm:
- if args.num_workers > 1 and not auto_sparse:
- # Parallel processing
- print(f"Parallel processing with {args.num_workers} workers")
- episodes_per_worker = [[] for _ in range(args.num_workers)]
- for i, ep_idx in enumerate(episode_indices):
- episodes_per_worker[i % args.num_workers].append(ep_idx)
-
- with ProcessPoolExecutor(
- max_workers=args.num_workers, mp_context=mp.get_context("spawn")
- ) as executor:
- futures = [
- executor.submit(
- worker_process_episodes,
- w,
- gpu_ids[w],
- episodes_per_worker[w],
- args.repo_id,
- video_key,
- sparse_subtask_list,
- dense_subtask_list,
- args.model,
- torch_dtype,
- )
- for w in range(args.num_workers)
- if episodes_per_worker[w]
- ]
-
- for future in as_completed(futures):
- try:
- worker_sparse, worker_dense = future.result()
- sparse_annotations.update(worker_sparse)
- if dense_mode and worker_dense:
- dense_annotations.update(worker_dense)
- save_annotations_to_dataset(dataset.root, sparse_annotations, fps, prefix="sparse")
- if dense_mode:
- save_annotations_to_dataset(dataset.root, dense_annotations, fps, prefix="dense")
- except Exception as e:
- raise RuntimeError(f"Worker failed: {e}") from e
- else:
- # Sequential processing
- sparse_annotator = (
- VideoAnnotator(sparse_subtask_list, args.model, args.device, torch_dtype)
- if not auto_sparse and sparse_subtask_list
- else None
- )
- dense_annotator = (
- VideoAnnotator(
- dense_subtask_list,
- args.model,
- args.device,
- torch_dtype,
- sparse_annotator.model if sparse_annotator else None,
- sparse_annotator.processor if sparse_annotator else None,
- )
- if dense_mode
- else None
- )
-
- for i, ep_idx in enumerate(episode_indices):
- print(f"Episode {ep_idx} ({i + 1}/{len(episode_indices)})")
-
- if sparse_annotator:
- _, sparse_ann, err = process_single_episode(
- ep_idx, dataset.root, dataset.meta, video_key, fps, sparse_annotator
- )
- if sparse_ann:
- sparse_annotations[ep_idx] = sparse_ann
- save_annotations_to_dataset(dataset.root, sparse_annotations, fps, prefix="sparse")
- elif err:
- print(f"Sparse failed: {err}")
-
- if dense_annotator:
- _, dense_ann, err = process_single_episode(
- ep_idx, dataset.root, dataset.meta, video_key, fps, dense_annotator
- )
- if dense_ann:
- dense_annotations[ep_idx] = dense_ann
- save_annotations_to_dataset(dataset.root, dense_annotations, fps, prefix="dense")
- elif err:
- print(f"Dense failed: {err}")
-
- # Save temporal proportions
- def save_proportions(annotations, prefix, subtask_list=None, is_auto=False):
- props: dict[str, float] = (
- {"task": 1.0} if is_auto else compute_temporal_proportions(annotations, fps, subtask_list)
- )
- path = dataset.root / "meta" / f"temporal_proportions_{prefix}.json"
- path.parent.mkdir(parents=True, exist_ok=True)
- with open(path, "w") as f:
- json.dump(props, f, indent=2)
- print(f"Saved {prefix} temporal proportions")
-
- save_proportions(sparse_annotations, "sparse", sparse_subtask_list, auto_sparse)
- if dense_mode and dense_annotations:
- save_proportions(dense_annotations, "dense", dense_subtask_list)
-
- print(f"\nComplete! {len(sparse_annotations)} sparse, {len(dense_annotations or {})} dense annotations")
-
- # Visualize annotations after generation
- if args.num_visualizations > 0:
- print(f"\nGenerating {args.num_visualizations} visualizations...")
- visualize_type = "both" if dense_mode else "sparse"
- visualize_annotations(
- dataset=dataset,
- sparse_annotations=sparse_annotations,
- dense_annotations=dense_annotations,
- video_key=video_key,
- output_dir=Path(args.output_dir),
- num_episodes=args.num_visualizations,
- annotation_type=visualize_type,
- )
-
- if args.push_to_hub:
- try:
- dataset.push_to_hub(push_videos=True)
- print(f"Pushed to {args.output_repo_id or args.repo_id}")
- except Exception as e:
- print(f"Push failed: {e}")
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/datasets/push_dataset_to_hub/utils.py b/lerobot/src/lerobot/datasets/push_dataset_to_hub/utils.py
deleted file mode 100644
index 970196d378b033c34b56a6750a87f3611ba3f968..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/datasets/push_dataset_to_hub/utils.py
+++ /dev/null
@@ -1,73 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import datasets
-import torch
-
-
-# TODO(aliberts): remove
-def calculate_episode_data_index(hf_dataset: datasets.Dataset) -> dict[str, torch.Tensor]:
- """
- Calculate episode data index for the provided HuggingFace Dataset. Relies on episode_index column of hf_dataset.
-
- Parameters:
- - hf_dataset (datasets.Dataset): A HuggingFace dataset containing the episode index.
-
- Returns:
- - episode_data_index: A dictionary containing the data index for each episode. The dictionary has two keys:
- - "from": A tensor containing the starting index of each episode.
- - "to": A tensor containing the ending index of each episode.
- """
- episode_data_index = {"from": [], "to": []}
-
- current_episode = None
-    # The episode_index is a list of integers, each representing the episode index of the corresponding example.
-    # For instance, the following is a valid episode_index:
-    #   [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2]
-    #
-    # Below, we iterate through the episode_index and populate the episode_data_index dictionary with the starting and
-    # ending index of each episode. For the episode_index above, the episode_data_index dictionary will look like this:
-    #   {
-    #       "from": [0, 3, 7],
-    #       "to": [3, 7, 12]
-    #   }
- if len(hf_dataset) == 0:
- episode_data_index = {
- "from": torch.tensor([]),
- "to": torch.tensor([]),
- }
- return episode_data_index
- for idx, episode_idx in enumerate(hf_dataset["episode_index"]):
- if episode_idx != current_episode:
- # We encountered a new episode, so we append its starting location to the "from" list
- episode_data_index["from"].append(idx)
- # If this is not the first episode, we append the ending location of the previous episode to the "to" list
- if current_episode is not None:
- episode_data_index["to"].append(idx)
- # Let's keep track of the current episode index
- current_episode = episode_idx
- else:
- # We are still in the same episode, so there is nothing for us to do here
- pass
- # We have reached the end of the dataset, so we append the ending location of the last episode to the "to" list
- episode_data_index["to"].append(idx + 1)
-
- for k in ["from", "to"]:
- episode_data_index[k] = torch.tensor(episode_data_index[k])
-
- return episode_data_index
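-
-# Usage sketch (illustrative), mirroring the docstring example:
-#   import datasets
-#   ds = datasets.Dataset.from_dict({"episode_index": [0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 2]})
-#   calculate_episode_data_index(ds)
-#   # -> {"from": tensor([0, 3, 7]), "to": tensor([3, 7, 12])}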
diff --git a/lerobot/src/lerobot/datasets/v30/augment_dataset_quantile_stats.py b/lerobot/src/lerobot/datasets/v30/augment_dataset_quantile_stats.py
deleted file mode 100644
index 83a60c7442f4db0a9173f716ac765dcc208f5e66..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/datasets/v30/augment_dataset_quantile_stats.py
+++ /dev/null
@@ -1,260 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script augments existing LeRobot datasets with quantile statistics.
-
-Most datasets created before the quantile feature was added do not contain
-quantile statistics (q01, q10, q50, q90, q99) in their metadata. This script:
-
-1. Loads an existing LeRobot dataset in v3.0 format
-2. Checks if it already contains quantile statistics
-3. If missing, computes quantile statistics for all features
-4. Updates the dataset metadata with the new quantile statistics
-
-Usage:
-
-```bash
-python src/lerobot/datasets/v30/augment_dataset_quantile_stats.py \
-    --repo-id=lerobot/pusht
-```
-"""
-
-import argparse
-import concurrent.futures
-import logging
-from pathlib import Path
-
-import numpy as np
-import torch
-from huggingface_hub import HfApi
-from requests import HTTPError
-from tqdm import tqdm
-
-from lerobot.datasets.compute_stats import DEFAULT_QUANTILES, aggregate_stats, get_feature_stats
-from lerobot.datasets.lerobot_dataset import CODEBASE_VERSION, LeRobotDataset
-from lerobot.datasets.utils import write_stats
-from lerobot.utils.utils import init_logging
-
-
-def has_quantile_stats(stats: dict[str, dict] | None, quantile_list_keys: list[str] | None = None) -> bool:
- """Check if dataset statistics already contain quantile information.
-
- Args:
-        stats: Dataset statistics dictionary
-        quantile_list_keys: Quantile stat keys to look for; defaults to keys derived from DEFAULT_QUANTILES
-
- Returns:
- True if quantile statistics are present, False otherwise
- """
- if quantile_list_keys is None:
- quantile_list_keys = [f"q{int(q * 100):02d}" for q in DEFAULT_QUANTILES]
-
- if stats is None:
- return False
-
- for feature_stats in stats.values():
- if any(q_key in feature_stats for q_key in quantile_list_keys):
- return True
-
- return False
-
-
-def process_single_episode(dataset: LeRobotDataset, episode_idx: int) -> dict:
- """Process a single episode and return its statistics.
-
- Args:
- dataset: The LeRobot dataset
- episode_idx: Index of the episode to process
-
- Returns:
- Dictionary containing episode statistics
- """
- logging.info(f"Computing stats for episode {episode_idx}")
-
- start_idx = dataset.meta.episodes[episode_idx]["dataset_from_index"]
- end_idx = dataset.meta.episodes[episode_idx]["dataset_to_index"]
-
- collected_data: dict[str, list] = {}
- for idx in range(start_idx, end_idx):
- item = dataset[idx]
- for key, value in item.items():
- if key not in dataset.features:
- continue
-
- if key not in collected_data:
- collected_data[key] = []
- collected_data[key].append(value)
-
- ep_stats = {}
- for key, data_list in collected_data.items():
- if dataset.features[key]["dtype"] == "string":
- continue
-
- data = torch.stack(data_list).cpu().numpy()
- if dataset.features[key]["dtype"] in ["image", "video"]:
- if data.dtype == np.uint8:
- data = data.astype(np.float32) / 255.0
-
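-            # Image/video frames stack to (N, C, H, W) here, so reducing over axes (0, 2, 3)
-            # keeps one statistic per channel (the leading dim kept by keepdims is squeezed below)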
- axes_to_reduce = (0, 2, 3)
- keepdims = True
- else:
- axes_to_reduce = 0
- keepdims = data.ndim == 1
-
- ep_stats[key] = get_feature_stats(
- data, axis=axes_to_reduce, keepdims=keepdims, quantile_list=DEFAULT_QUANTILES
- )
-
- if dataset.features[key]["dtype"] in ["image", "video"]:
- ep_stats[key] = {
- k: v if k == "count" else np.squeeze(v, axis=0) for k, v in ep_stats[key].items()
- }
-
- return ep_stats
-
-
-def compute_quantile_stats_for_dataset(dataset: LeRobotDataset) -> dict[str, dict]:
- """Compute quantile statistics for all episodes in the dataset.
-
- Args:
- dataset: The LeRobot dataset to compute statistics for
-
- Returns:
- Dictionary containing aggregated statistics with quantiles
-
- Note:
- Video decoding operations are not thread-safe, so we process episodes sequentially
- when video keys are present. For datasets without videos, we use parallel processing
- with ThreadPoolExecutor for better performance.
- """
- logging.info(f"Computing quantile statistics for dataset with {dataset.num_episodes} episodes")
-
- episode_stats_list = []
- has_videos = len(dataset.meta.video_keys) > 0
-
- if has_videos:
- logging.info("Dataset contains video keys - using sequential processing for thread safety")
- for episode_idx in tqdm(range(dataset.num_episodes), desc="Processing episodes"):
- ep_stats = process_single_episode(dataset, episode_idx)
- episode_stats_list.append(ep_stats)
- else:
- logging.info("Dataset has no video keys - using parallel processing for better performance")
- max_workers = min(dataset.num_episodes, 16)
-
- with concurrent.futures.ThreadPoolExecutor(max_workers=max_workers) as executor:
- future_to_episode = {
- executor.submit(process_single_episode, dataset, episode_idx): episode_idx
- for episode_idx in range(dataset.num_episodes)
- }
-
- episode_results = {}
- with tqdm(total=dataset.num_episodes, desc="Processing episodes") as pbar:
- for future in concurrent.futures.as_completed(future_to_episode):
- episode_idx = future_to_episode[future]
- ep_stats = future.result()
- episode_results[episode_idx] = ep_stats
- pbar.update(1)
-
- for episode_idx in range(dataset.num_episodes):
- if episode_idx in episode_results:
- episode_stats_list.append(episode_results[episode_idx])
-
- if not episode_stats_list:
- raise ValueError("No episode data found for computing statistics")
-
- logging.info(f"Aggregating statistics from {len(episode_stats_list)} episodes")
- return aggregate_stats(episode_stats_list)
-
-
-def augment_dataset_with_quantile_stats(
- repo_id: str,
- root: str | Path | None = None,
- overwrite: bool = False,
-) -> None:
- """Augment a dataset with quantile statistics if they are missing.
-
- Args:
- repo_id: Repository ID of the dataset
- root: Local root directory for the dataset
- overwrite: Overwrite existing quantile statistics if they already exist
- """
- logging.info(f"Loading dataset: {repo_id}")
- dataset = LeRobotDataset(
- repo_id=repo_id,
- root=root,
- )
-
- if not overwrite and has_quantile_stats(dataset.meta.stats):
- logging.info("Dataset already contains quantile statistics. No action needed.")
- return
-
- logging.info("Dataset does not contain quantile statistics. Computing them now...")
-
- new_stats = compute_quantile_stats_for_dataset(dataset)
-
- logging.info("Updating dataset metadata with new quantile statistics")
- dataset.meta.stats = new_stats
-
- write_stats(new_stats, dataset.meta.root)
-
- logging.info("Successfully updated dataset with quantile statistics")
- dataset.push_to_hub()
-
- hub_api = HfApi()
- try:
- hub_api.delete_tag(repo_id, tag=CODEBASE_VERSION, repo_type="dataset")
- except HTTPError as e:
- logging.info(f"tag={CODEBASE_VERSION} probably doesn't exist. Skipping exception ({e})")
- hub_api.create_tag(repo_id, tag=CODEBASE_VERSION, revision=None, repo_type="dataset")
-
-
-def main():
- """Main function to run the augmentation script."""
- parser = argparse.ArgumentParser(description="Augment LeRobot dataset with quantile statistics")
-
- parser.add_argument(
- "--repo-id",
- type=str,
- required=True,
- help="Repository ID of the dataset (e.g., 'lerobot/pusht')",
- )
-
- parser.add_argument(
- "--root",
- type=str,
- help="Local root directory for the dataset",
- )
- parser.add_argument(
- "--overwrite",
- action="store_true",
- help="Overwrite existing quantile statistics if they already exist",
- )
-
- args = parser.parse_args()
- root = Path(args.root) if args.root else None
-
- init_logging()
-
- augment_dataset_with_quantile_stats(
- repo_id=args.repo_id,
- root=root,
- overwrite=args.overwrite,
- )
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py b/lerobot/src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py
deleted file mode 100644
index b3198053ba7c6773fbdc93ef4dd6bdb4ba50c524..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py
+++ /dev/null
@@ -1,571 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script will help you convert any LeRobot dataset already pushed to the hub from codebase version 2.1 to
-3.0. It will:
-
-- Consolidate the per-episode parquet files into larger chunked files under `data/`.
-- Concatenate the per-episode mp4 files into larger chunked files under `videos/`.
-- Convert `episodes.jsonl`, `episodes_stats.jsonl` and `tasks.jsonl` into parquet files under `meta/`.
-- Update `codebase_version` and the file layout entries in `info.json`.
-- Push this new version to the hub on the 'main' branch and tag it with "v3.0".
-
-Usage:
-
-Convert a dataset from the hub:
-```bash
-python src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py \
- --repo-id=lerobot/pusht
-```
-
-Convert a local dataset (works in place):
-```bash
-python src/lerobot/datasets/v30/convert_dataset_v21_to_v30.py \
- --repo-id=lerobot/pusht \
-    --root=/path/to/local/dataset/directory \
- --push-to-hub=false
-```
-
-"""
-
-import argparse
-import logging
-import shutil
-from pathlib import Path
-from typing import Any
-
-import jsonlines
-import pandas as pd
-import pyarrow as pa
-import tqdm
-from datasets import Dataset, Features, Image
-from huggingface_hub import HfApi, snapshot_download
-from requests import HTTPError
-
-from lerobot.datasets.compute_stats import aggregate_stats
-from lerobot.datasets.lerobot_dataset import CODEBASE_VERSION, LeRobotDataset
-from lerobot.datasets.utils import (
- DEFAULT_CHUNK_SIZE,
- DEFAULT_DATA_FILE_SIZE_IN_MB,
- DEFAULT_DATA_PATH,
- DEFAULT_VIDEO_FILE_SIZE_IN_MB,
- DEFAULT_VIDEO_PATH,
- LEGACY_EPISODES_PATH,
- LEGACY_EPISODES_STATS_PATH,
- LEGACY_TASKS_PATH,
- cast_stats_to_numpy,
- flatten_dict,
- get_file_size_in_mb,
- get_parquet_file_size_in_mb,
- get_parquet_num_frames,
- load_info,
- update_chunk_file_indices,
- write_episodes,
- write_info,
- write_stats,
- write_tasks,
-)
-from lerobot.datasets.video_utils import concatenate_video_files, get_video_duration_in_s
-from lerobot.utils.constants import HF_LEROBOT_HOME
-from lerobot.utils.utils import init_logging
-
-V21 = "v2.1"
-V30 = "v3.0"
-
-"""
--------------------------
-OLD
-data/chunk-000/episode_000000.parquet
-
-NEW
-data/chunk-000/file_000.parquet
--------------------------
-OLD
-videos/chunk-000/CAMERA/episode_000000.mp4
-
-NEW
-videos/CAMERA/chunk-000/file_000.mp4
--------------------------
-OLD
-episodes.jsonl
-{"episode_index": 1, "tasks": ["Put the blue block in the green bowl"], "length": 266}
-
-NEW
-meta/episodes/chunk-000/episodes_000.parquet
-episode_index | video_chunk_index | video_file_index | data_chunk_index | data_file_index | tasks | length
--------------------------
-OLD
-tasks.jsonl
-{"task_index": 1, "task": "Put the blue block in the green bowl"}
-
-NEW
-meta/tasks/chunk-000/file_000.parquet
-task_index | task
--------------------------
-OLD
-episodes_stats.jsonl
-
-NEW
-meta/episodes_stats/chunk-000/file_000.parquet
-episode_index | mean | std | min | max
--------------------------
-UPDATE
-meta/info.json
--------------------------
-"""
-
-
-def load_jsonlines(fpath: Path) -> list[Any]:
- with jsonlines.open(fpath, "r") as reader:
- return list(reader)
-
-
-def legacy_load_episodes(local_dir: Path) -> dict:
- episodes = load_jsonlines(local_dir / LEGACY_EPISODES_PATH)
- return {item["episode_index"]: item for item in sorted(episodes, key=lambda x: x["episode_index"])}
-
-
-def legacy_load_episodes_stats(local_dir: Path) -> dict:
- episodes_stats = load_jsonlines(local_dir / LEGACY_EPISODES_STATS_PATH)
- return {
- item["episode_index"]: cast_stats_to_numpy(item["stats"])
- for item in sorted(episodes_stats, key=lambda x: x["episode_index"])
- }
-
-
-def legacy_load_tasks(local_dir: Path) -> tuple[dict, dict]:
- tasks = load_jsonlines(local_dir / LEGACY_TASKS_PATH)
- tasks = {item["task_index"]: item["task"] for item in sorted(tasks, key=lambda x: x["task_index"])}
- task_to_task_index = {task: task_index for task_index, task in tasks.items()}
- return tasks, task_to_task_index
-
-
-def validate_local_dataset_version(local_path: Path) -> None:
- """Validate that the local dataset has the expected v2.1 version."""
- info = load_info(local_path)
- dataset_version = info.get("codebase_version", "unknown")
- if dataset_version != V21:
- raise ValueError(
- f"Local dataset has codebase version '{dataset_version}', expected '{V21}'. "
- f"This script is specifically for converting v2.1 datasets to v3.0."
- )
-
-
-def convert_tasks(root, new_root):
- logging.info(f"Converting tasks from {root} to {new_root}")
- tasks, _ = legacy_load_tasks(root)
- task_indices = tasks.keys()
- task_strings = tasks.values()
- df_tasks = pd.DataFrame({"task_index": task_indices}, index=task_strings)
- write_tasks(df_tasks, new_root)
-
-
-def concat_data_files(paths_to_cat, new_root, chunk_idx, file_idx, image_keys):
- # TODO(rcadene): to save RAM use Dataset.from_parquet(file) and concatenate_datasets
- dataframes = [pd.read_parquet(file) for file in paths_to_cat]
- # Concatenate all DataFrames along rows
- concatenated_df = pd.concat(dataframes, ignore_index=True)
-
- path = new_root / DEFAULT_DATA_PATH.format(chunk_index=chunk_idx, file_index=file_idx)
- path.parent.mkdir(parents=True, exist_ok=True)
-
- if len(image_keys) > 0:
- schema = pa.Schema.from_pandas(concatenated_df)
- features = Features.from_arrow_schema(schema)
- for key in image_keys:
- features[key] = Image()
- schema = features.arrow_schema
- else:
- schema = None
-
- concatenated_df.to_parquet(path, index=False, schema=schema)
-
-
-def convert_data(root: Path, new_root: Path, data_file_size_in_mb: int):
- data_dir = root / "data"
- ep_paths = sorted(data_dir.glob("*/*.parquet"))
-
- image_keys = get_image_keys(root)
-
- ep_idx = 0
- chunk_idx = 0
- file_idx = 0
- size_in_mb = 0
- num_frames = 0
- paths_to_cat = []
- episodes_metadata = []
-
- logging.info(f"Converting data files from {len(ep_paths)} episodes")
-
- for ep_path in tqdm.tqdm(ep_paths, desc="convert data files"):
- ep_size_in_mb = get_parquet_file_size_in_mb(ep_path)
- ep_num_frames = get_parquet_num_frames(ep_path)
- ep_metadata = {
- "episode_index": ep_idx,
- "data/chunk_index": chunk_idx,
- "data/file_index": file_idx,
- "dataset_from_index": num_frames,
- "dataset_to_index": num_frames + ep_num_frames,
- }
- size_in_mb += ep_size_in_mb
- num_frames += ep_num_frames
- episodes_metadata.append(ep_metadata)
- ep_idx += 1
-
- if size_in_mb < data_file_size_in_mb:
- paths_to_cat.append(ep_path)
- continue
-
- if paths_to_cat:
- concat_data_files(paths_to_cat, new_root, chunk_idx, file_idx, image_keys)
-
- # Reset for the next file
- size_in_mb = ep_size_in_mb
- paths_to_cat = [ep_path]
-
- chunk_idx, file_idx = update_chunk_file_indices(chunk_idx, file_idx, DEFAULT_CHUNK_SIZE)
-
- # Write remaining data if any
- if paths_to_cat:
- concat_data_files(paths_to_cat, new_root, chunk_idx, file_idx, image_keys)
-
- return episodes_metadata
-
-
-def get_video_keys(root):
- info = load_info(root)
- features = info["features"]
- video_keys = [key for key, ft in features.items() if ft["dtype"] == "video"]
- return video_keys
-
-
-def get_image_keys(root):
- info = load_info(root)
- features = info["features"]
- image_keys = [key for key, ft in features.items() if ft["dtype"] == "image"]
- return image_keys
-
-
-def convert_videos(root: Path, new_root: Path, video_file_size_in_mb: int):
- logging.info(f"Converting videos from {root} to {new_root}")
-
- video_keys = get_video_keys(root)
- if len(video_keys) == 0:
- return None
-
- video_keys = sorted(video_keys)
-
- eps_metadata_per_cam = []
- for camera in video_keys:
- eps_metadata = convert_videos_of_camera(root, new_root, camera, video_file_size_in_mb)
- eps_metadata_per_cam.append(eps_metadata)
-
- num_eps_per_cam = [len(eps_cam_map) for eps_cam_map in eps_metadata_per_cam]
-    if len(set(num_eps_per_cam)) != 1:
-        raise ValueError(f"Not all cameras have the same number of episodes ({num_eps_per_cam}).")
-
-    episodes_metadata = []
- num_cameras = len(video_keys)
- num_episodes = num_eps_per_cam[0]
- for ep_idx in tqdm.tqdm(range(num_episodes), desc="convert videos"):
- # Sanity check
- ep_ids = [eps_metadata_per_cam[cam_idx][ep_idx]["episode_index"] for cam_idx in range(num_cameras)]
- ep_ids += [ep_idx]
- if len(set(ep_ids)) != 1:
- raise ValueError(f"All episode indices need to match ({ep_ids}).")
-
- ep_dict = {}
- for cam_idx in range(num_cameras):
- ep_dict.update(eps_metadata_per_cam[cam_idx][ep_idx])
-        episodes_metadata.append(ep_dict)
-
-    return episodes_metadata
-
-
-def convert_videos_of_camera(root: Path, new_root: Path, video_key: str, video_file_size_in_mb: int):
- # Access old paths to mp4
- videos_dir = root / "videos"
- ep_paths = sorted(videos_dir.glob(f"*/{video_key}/*.mp4"))
-
- ep_idx = 0
- chunk_idx = 0
- file_idx = 0
- size_in_mb = 0
- duration_in_s = 0.0
- paths_to_cat = []
- episodes_metadata = []
-
- for ep_path in tqdm.tqdm(ep_paths, desc=f"convert videos of {video_key}"):
- ep_size_in_mb = get_file_size_in_mb(ep_path)
- ep_duration_in_s = get_video_duration_in_s(ep_path)
-
- # Check if adding this episode would exceed the limit
- if size_in_mb + ep_size_in_mb >= video_file_size_in_mb and len(paths_to_cat) > 0:
- # Size limit would be exceeded, save current accumulation WITHOUT this episode
- concatenate_video_files(
- paths_to_cat,
- new_root
- / DEFAULT_VIDEO_PATH.format(video_key=video_key, chunk_index=chunk_idx, file_index=file_idx),
- )
-
- # Update episodes metadata for the file we just saved
- for i, _ in enumerate(paths_to_cat):
- past_ep_idx = ep_idx - len(paths_to_cat) + i
- episodes_metadata[past_ep_idx][f"videos/{video_key}/chunk_index"] = chunk_idx
- episodes_metadata[past_ep_idx][f"videos/{video_key}/file_index"] = file_idx
-
- # Move to next file and start fresh with current episode
- chunk_idx, file_idx = update_chunk_file_indices(chunk_idx, file_idx, DEFAULT_CHUNK_SIZE)
- size_in_mb = 0
- duration_in_s = 0.0
- paths_to_cat = []
-
- # Add current episode metadata
- ep_metadata = {
- "episode_index": ep_idx,
- f"videos/{video_key}/chunk_index": chunk_idx, # Will be updated when file is saved
- f"videos/{video_key}/file_index": file_idx, # Will be updated when file is saved
- f"videos/{video_key}/from_timestamp": duration_in_s,
- f"videos/{video_key}/to_timestamp": duration_in_s + ep_duration_in_s,
- }
- episodes_metadata.append(ep_metadata)
-
- # Add current episode to accumulation
- paths_to_cat.append(ep_path)
- size_in_mb += ep_size_in_mb
- duration_in_s += ep_duration_in_s
- ep_idx += 1
-
- # Write remaining videos if any
- if paths_to_cat:
- concatenate_video_files(
- paths_to_cat,
- new_root
- / DEFAULT_VIDEO_PATH.format(video_key=video_key, chunk_index=chunk_idx, file_index=file_idx),
- )
-
- # Update episodes metadata for the final file
- for i, _ in enumerate(paths_to_cat):
- past_ep_idx = ep_idx - len(paths_to_cat) + i
- episodes_metadata[past_ep_idx][f"videos/{video_key}/chunk_index"] = chunk_idx
- episodes_metadata[past_ep_idx][f"videos/{video_key}/file_index"] = file_idx
-
- return episodes_metadata
-
-
-def generate_episode_metadata_dict(
- episodes_legacy_metadata, episodes_metadata, episodes_stats, episodes_videos=None
-):
- num_episodes = len(episodes_metadata)
- episodes_legacy_metadata_vals = list(episodes_legacy_metadata.values())
- episodes_stats_vals = list(episodes_stats.values())
- episodes_stats_keys = list(episodes_stats.keys())
-
- for i in range(num_episodes):
- ep_legacy_metadata = episodes_legacy_metadata_vals[i]
- ep_metadata = episodes_metadata[i]
- ep_stats = episodes_stats_vals[i]
-
- ep_ids_set = {
- ep_legacy_metadata["episode_index"],
- ep_metadata["episode_index"],
- episodes_stats_keys[i],
- }
-
- if episodes_videos is None:
- ep_video = {}
- else:
- ep_video = episodes_videos[i]
- ep_ids_set.add(ep_video["episode_index"])
-
- if len(ep_ids_set) != 1:
- raise ValueError(f"Number of episodes is not the same ({ep_ids_set}).")
-
- ep_dict = {**ep_metadata, **ep_video, **ep_legacy_metadata, **flatten_dict({"stats": ep_stats})}
- ep_dict["meta/episodes/chunk_index"] = 0
- ep_dict["meta/episodes/file_index"] = 0
- yield ep_dict
-
-
-def convert_episodes_metadata(root, new_root, episodes_metadata, episodes_video_metadata=None):
- logging.info(f"Converting episodes metadata from {root} to {new_root}")
-
- episodes_legacy_metadata = legacy_load_episodes(root)
- episodes_stats = legacy_load_episodes_stats(root)
-
- num_eps_set = {len(episodes_legacy_metadata), len(episodes_metadata)}
- if episodes_video_metadata is not None:
- num_eps_set.add(len(episodes_video_metadata))
-
- if len(num_eps_set) != 1:
- raise ValueError(f"Number of episodes is not the same ({num_eps_set}).")
-
- ds_episodes = Dataset.from_generator(
- lambda: generate_episode_metadata_dict(
- episodes_legacy_metadata, episodes_metadata, episodes_stats, episodes_video_metadata
- )
- )
- write_episodes(ds_episodes, new_root)
-
- stats = aggregate_stats(list(episodes_stats.values()))
- write_stats(stats, new_root)
-
-
-def convert_info(root, new_root, data_file_size_in_mb, video_file_size_in_mb):
- info = load_info(root)
- info["codebase_version"] = V30
- del info["total_chunks"]
- del info["total_videos"]
- info["data_files_size_in_mb"] = data_file_size_in_mb
- info["video_files_size_in_mb"] = video_file_size_in_mb
- info["data_path"] = DEFAULT_DATA_PATH
- info["video_path"] = DEFAULT_VIDEO_PATH if info["video_path"] is not None else None
- info["fps"] = int(info["fps"])
- logging.info(f"Converting info from {root} to {new_root}")
- for key in info["features"]:
- if info["features"][key]["dtype"] == "video":
- # already has fps in video_info
- continue
- info["features"][key]["fps"] = info["fps"]
- write_info(info, new_root)
-
-
-def convert_dataset(
- repo_id: str,
- branch: str | None = None,
- data_file_size_in_mb: int | None = None,
- video_file_size_in_mb: int | None = None,
- root: str | Path | None = None,
- push_to_hub: bool = True,
- force_conversion: bool = False,
-):
- if data_file_size_in_mb is None:
- data_file_size_in_mb = DEFAULT_DATA_FILE_SIZE_IN_MB
- if video_file_size_in_mb is None:
- video_file_size_in_mb = DEFAULT_VIDEO_FILE_SIZE_IN_MB
-
- # First check if the dataset already has a v3.0 version
- if root is None and not force_conversion:
- try:
- print("Trying to download v3.0 version of the dataset from the hub...")
- snapshot_download(repo_id, repo_type="dataset", revision=V30, local_dir=HF_LEROBOT_HOME / repo_id)
- return
- except Exception:
- print("Dataset does not have an uploaded v3.0 version. Continuing with conversion.")
-
- # Set root based on whether local dataset path is provided
- use_local_dataset = False
- root = HF_LEROBOT_HOME / repo_id if root is None else Path(root) / repo_id
- if root.exists():
- validate_local_dataset_version(root)
- use_local_dataset = True
- print(f"Using local dataset at {root}")
-
- old_root = root.parent / f"{root.name}_old"
- new_root = root.parent / f"{root.name}_v30"
-
- # Handle old_root cleanup if both old_root and root exist
- if old_root.is_dir() and root.is_dir():
- shutil.rmtree(str(root))
- shutil.move(str(old_root), str(root))
-
- if new_root.is_dir():
- shutil.rmtree(new_root)
-
- if not use_local_dataset:
- snapshot_download(
- repo_id,
- repo_type="dataset",
- revision=V21,
- local_dir=root,
- )
-
- convert_info(root, new_root, data_file_size_in_mb, video_file_size_in_mb)
- convert_tasks(root, new_root)
- episodes_metadata = convert_data(root, new_root, data_file_size_in_mb)
- episodes_videos_metadata = convert_videos(root, new_root, video_file_size_in_mb)
- convert_episodes_metadata(root, new_root, episodes_metadata, episodes_videos_metadata)
-
- shutil.move(str(root), str(old_root))
- shutil.move(str(new_root), str(root))
-
- if push_to_hub:
- hub_api = HfApi()
- try:
- hub_api.delete_tag(repo_id, tag=CODEBASE_VERSION, repo_type="dataset")
- except HTTPError as e:
- print(f"tag={CODEBASE_VERSION} probably doesn't exist. Skipping exception ({e})")
- hub_api.delete_files(
- delete_patterns=["data/chunk*/episode_*", "meta/*.jsonl", "videos/chunk*"],
- repo_id=repo_id,
- revision=branch,
- repo_type="dataset",
- )
- hub_api.create_tag(repo_id, tag=CODEBASE_VERSION, revision=branch, repo_type="dataset")
-
- LeRobotDataset(repo_id).push_to_hub()
-
-
-if __name__ == "__main__":
- init_logging()
- parser = argparse.ArgumentParser()
- parser.add_argument(
- "--repo-id",
- type=str,
- required=True,
- help="Repository identifier on Hugging Face: a community or a user name `/` the name of the dataset "
- "(e.g. `lerobot/pusht`, `cadene/aloha_sim_insertion_human`).",
- )
- parser.add_argument(
- "--branch",
- type=str,
- default=None,
- help="Repo branch to push your dataset. Defaults to the main branch.",
- )
- parser.add_argument(
- "--data-file-size-in-mb",
- type=int,
- default=None,
- help="File size in MB. Defaults to 100 for data and 500 for videos.",
- )
- parser.add_argument(
- "--video-file-size-in-mb",
- type=int,
- default=None,
- help="File size in MB. Defaults to 100 for data and 500 for videos.",
- )
- parser.add_argument(
- "--root",
- type=str,
- default=None,
- help="Local directory to use for downloading/writing the dataset.",
- )
- parser.add_argument(
- "--push-to-hub",
- type=lambda input: input.lower() == "true",
- default=True,
- help="Push the converted dataset to the hub.",
- )
- parser.add_argument(
- "--force-conversion",
- action="store_true",
- help="Force conversion even if the dataset already has a v3.0 version.",
- )
-
- args = parser.parse_args()
- convert_dataset(**vars(args))
diff --git a/lerobot/src/lerobot/motors/dynamixel/__init__.py b/lerobot/src/lerobot/motors/dynamixel/__init__.py
deleted file mode 100644
index 38b770cd6a0947395f4c0f6e2f37bbdc8120d965..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/motors/dynamixel/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .dynamixel import DriveMode, DynamixelMotorsBus, OperatingMode, TorqueMode
-from .tables import *
diff --git a/lerobot/src/lerobot/motors/dynamixel/dynamixel.py b/lerobot/src/lerobot/motors/dynamixel/dynamixel.py
deleted file mode 100644
index fbc63fef8e999e1756dd8c05784f0c4dce545d39..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/motors/dynamixel/dynamixel.py
+++ /dev/null
@@ -1,264 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# TODO(aliberts): Should we implement FastSyncRead/Write?
-# https://github.com/ROBOTIS-GIT/DynamixelSDK/pull/643
-# https://github.com/ROBOTIS-GIT/DynamixelSDK/releases/tag/3.8.2
-# https://emanual.robotis.com/docs/en/dxl/protocol2/#fast-sync-read-0x8a
-# -> Need to check compatibility across models
-
-import logging
-from copy import deepcopy
-from enum import Enum
-
-from lerobot.motors.encoding_utils import decode_twos_complement, encode_twos_complement
-
-from ..motors_bus import Motor, MotorCalibration, MotorsBus, NameOrID, Value, get_address
-from .tables import (
- AVAILABLE_BAUDRATES,
- MODEL_BAUDRATE_TABLE,
- MODEL_CONTROL_TABLE,
- MODEL_ENCODING_TABLE,
- MODEL_NUMBER_TABLE,
- MODEL_RESOLUTION,
-)
-
-PROTOCOL_VERSION = 2.0
-DEFAULT_BAUDRATE = 1_000_000
-DEFAULT_TIMEOUT_MS = 1000
-
-NORMALIZED_DATA = ["Goal_Position", "Present_Position"]
-
-logger = logging.getLogger(__name__)
-
-
-class OperatingMode(Enum):
- # DYNAMIXEL only controls current(torque) regardless of speed and position. This mode is ideal for a
- # gripper or a system that only uses current(torque) control or a system that has additional
- # velocity/position controllers.
- CURRENT = 0
-
- # This mode controls velocity. This mode is identical to the Wheel Mode(endless) from existing DYNAMIXEL.
- # This mode is ideal for wheel-type robots.
- VELOCITY = 1
-
-    # This mode controls position. It is identical to the Joint Mode of existing DYNAMIXEL motors. The
-    # operating position range is limited by the Max Position Limit(48) and the Min Position Limit(52).
-    # This mode is ideal for articulated robots whose joints each rotate less than 360 degrees.
- POSITION = 3
-
-    # This mode controls position. It is identical to the Multi-turn Position Control of existing
-    # DYNAMIXEL motors. 512 turns are supported (-256[rev] ~ 256[rev]). This mode is ideal for multi-turn
-    # wrists, conveyor systems, or any system that requires an additional reduction gear. Note that Max
-    # Position Limit(48) and Min Position Limit(52) are not used in Extended Position Control Mode.
- EXTENDED_POSITION = 4
-
- # This mode controls both position and current(torque). Up to 512 turns are supported (-256[rev] ~
- # 256[rev]). This mode is ideal for a system that requires both position and current control such as
- # articulated robots or grippers.
- CURRENT_POSITION = 5
-
- # This mode directly controls PWM output. (Voltage Control Mode)
- PWM = 16
-
-
-class DriveMode(Enum):
- NON_INVERTED = 0
- INVERTED = 1
-
-
-class TorqueMode(Enum):
- ENABLED = 1
- DISABLED = 0
-
-
-def _split_into_byte_chunks(value: int, length: int) -> list[int]:
- import dynamixel_sdk as dxl
-
- if length == 1:
- data = [value]
- elif length == 2:
- data = [dxl.DXL_LOBYTE(value), dxl.DXL_HIBYTE(value)]
- elif length == 4:
- data = [
- dxl.DXL_LOBYTE(dxl.DXL_LOWORD(value)),
- dxl.DXL_HIBYTE(dxl.DXL_LOWORD(value)),
- dxl.DXL_LOBYTE(dxl.DXL_HIWORD(value)),
- dxl.DXL_HIBYTE(dxl.DXL_HIWORD(value)),
-        ]
-    else:
-        raise ValueError(f"Unsupported length: {length}. Expected 1, 2 or 4 bytes.")
-    return data
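-
-# e.g. _split_into_byte_chunks(0x12345678, 4) -> [0x78, 0x56, 0x34, 0x12], i.e. little-endian
-# byte order as expected on the Dynamixel wire protocol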
-
-
-class DynamixelMotorsBus(MotorsBus):
- """
- The Dynamixel implementation for a MotorsBus. It relies on the python dynamixel sdk to communicate with
- the motors. For more info, see the Dynamixel SDK Documentation:
- https://emanual.robotis.com/docs/en/software/dynamixel/dynamixel_sdk/sample_code/python_read_write_protocol_2_0/#python-read-write-protocol-20
- """
-
- apply_drive_mode = False
- available_baudrates = deepcopy(AVAILABLE_BAUDRATES)
- default_baudrate = DEFAULT_BAUDRATE
- default_timeout = DEFAULT_TIMEOUT_MS
- model_baudrate_table = deepcopy(MODEL_BAUDRATE_TABLE)
- model_ctrl_table = deepcopy(MODEL_CONTROL_TABLE)
- model_encoding_table = deepcopy(MODEL_ENCODING_TABLE)
- model_number_table = deepcopy(MODEL_NUMBER_TABLE)
- model_resolution_table = deepcopy(MODEL_RESOLUTION)
- normalized_data = deepcopy(NORMALIZED_DATA)
-
- def __init__(
- self,
- port: str,
- motors: dict[str, Motor],
- calibration: dict[str, MotorCalibration] | None = None,
- ):
- super().__init__(port, motors, calibration)
- import dynamixel_sdk as dxl
-
- self.port_handler = dxl.PortHandler(self.port)
- self.packet_handler = dxl.PacketHandler(PROTOCOL_VERSION)
- self.sync_reader = dxl.GroupSyncRead(self.port_handler, self.packet_handler, 0, 0)
- self.sync_writer = dxl.GroupSyncWrite(self.port_handler, self.packet_handler, 0, 0)
- self._comm_success = dxl.COMM_SUCCESS
- self._no_error = 0x00
-
- def _assert_protocol_is_compatible(self, instruction_name: str) -> None:
- pass
-
- def _handshake(self) -> None:
- self._assert_motors_exist()
-
- def _find_single_motor(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]:
- model = self.motors[motor].model
- search_baudrates = (
- [initial_baudrate] if initial_baudrate is not None else self.model_baudrate_table[model]
- )
-
- for baudrate in search_baudrates:
- self.set_baudrate(baudrate)
- id_model = self.broadcast_ping()
- if id_model:
- found_id, found_model = next(iter(id_model.items()))
- expected_model_nb = self.model_number_table[model]
- if found_model != expected_model_nb:
- raise RuntimeError(
- f"Found one motor on {baudrate=} with id={found_id} but it has a "
- f"model number '{found_model}' different than the one expected: '{expected_model_nb}'. "
- f"Make sure you are connected only connected to the '{motor}' motor (model '{model}')."
- )
- return baudrate, found_id
-
- raise RuntimeError(f"Motor '{motor}' (model '{model}') was not found. Make sure it is connected.")
-
- def configure_motors(self, return_delay_time=0) -> None:
- # By default, Dynamixel motors have a 500µs delay response time (corresponding to a value of 250 on
- # the 'Return_Delay_Time' address). We ensure this is reduced to the minimum of 2µs (value of 0).
- for motor in self.motors:
- self.write("Return_Delay_Time", motor, return_delay_time)
-
- @property
- def is_calibrated(self) -> bool:
- return self.calibration == self.read_calibration()
-
- def read_calibration(self) -> dict[str, MotorCalibration]:
- offsets = self.sync_read("Homing_Offset", normalize=False)
- mins = self.sync_read("Min_Position_Limit", normalize=False)
- maxes = self.sync_read("Max_Position_Limit", normalize=False)
- drive_modes = self.sync_read("Drive_Mode", normalize=False)
-
- calibration = {}
- for motor, m in self.motors.items():
- calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=drive_modes[motor],
- homing_offset=offsets[motor],
- range_min=mins[motor],
- range_max=maxes[motor],
- )
-
- return calibration
-
- def write_calibration(self, calibration_dict: dict[str, MotorCalibration], cache: bool = True) -> None:
- for motor, calibration in calibration_dict.items():
- self.write("Homing_Offset", motor, calibration.homing_offset)
- self.write("Min_Position_Limit", motor, calibration.range_min)
- self.write("Max_Position_Limit", motor, calibration.range_max)
-
- if cache:
- self.calibration = calibration_dict
-
- def disable_torque(self, motors: str | list[str] | None = None, num_retry: int = 0) -> None:
- for motor in self._get_motors_list(motors):
- self.write("Torque_Enable", motor, TorqueMode.DISABLED.value, num_retry=num_retry)
-
- def _disable_torque(self, motor_id: int, model: str, num_retry: int = 0) -> None:
- addr, length = get_address(self.model_ctrl_table, model, "Torque_Enable")
- self._write(addr, length, motor_id, TorqueMode.DISABLED.value, num_retry=num_retry)
-
- def enable_torque(self, motors: str | list[str] | None = None, num_retry: int = 0) -> None:
- for motor in self._get_motors_list(motors):
- self.write("Torque_Enable", motor, TorqueMode.ENABLED.value, num_retry=num_retry)
-
- def _encode_sign(self, data_name: str, ids_values: dict[int, int]) -> dict[int, int]:
- for id_ in ids_values:
- model = self._id_to_model(id_)
- encoding_table = self.model_encoding_table.get(model)
- if encoding_table and data_name in encoding_table:
- n_bytes = encoding_table[data_name]
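-                # e.g. a Homing_Offset of -953 stored over n_bytes=4 encodes to 0xFFFFFC47
-                # (two's complement)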
- ids_values[id_] = encode_twos_complement(ids_values[id_], n_bytes)
-
- return ids_values
-
- def _decode_sign(self, data_name: str, ids_values: dict[int, int]) -> dict[int, int]:
- for id_ in ids_values:
- model = self._id_to_model(id_)
- encoding_table = self.model_encoding_table.get(model)
- if encoding_table and data_name in encoding_table:
- n_bytes = encoding_table[data_name]
- ids_values[id_] = decode_twos_complement(ids_values[id_], n_bytes)
-
- return ids_values
-
- def _get_half_turn_homings(self, positions: dict[NameOrID, Value]) -> dict[NameOrID, Value]:
- """
- On Dynamixel Motors:
- Present_Position = Actual_Position + Homing_Offset
- """
- half_turn_homings = {}
- for motor, pos in positions.items():
- model = self._get_motor_model(motor)
- max_res = self.model_resolution_table[model] - 1
- half_turn_homings[motor] = int(max_res / 2) - pos
-
- return half_turn_homings
-
- def _split_into_byte_chunks(self, value: int, length: int) -> list[int]:
- return _split_into_byte_chunks(value, length)
-
- def broadcast_ping(self, num_retry: int = 0, raise_on_error: bool = False) -> dict[int, int] | None:
- for n_try in range(1 + num_retry):
- data_list, comm = self.packet_handler.broadcastPing(self.port_handler)
- if self._is_comm_success(comm):
- break
- logger.debug(f"Broadcast ping failed on port '{self.port}' ({n_try=})")
- logger.debug(self.packet_handler.getTxRxResult(comm))
-
- if not self._is_comm_success(comm):
- if raise_on_error:
- raise ConnectionError(self.packet_handler.getTxRxResult(comm))
-
- return
-
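-        # dynamixel_sdk's broadcastPing maps each responding id to [model_number, firmware_version],
-        # so data[0] below is the motor's model number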
- return {id_: data[0] for id_, data in data_list.items()}
diff --git a/lerobot/src/lerobot/motors/dynamixel/tables.py b/lerobot/src/lerobot/motors/dynamixel/tables.py
deleted file mode 100644
index 904cc3ae1a529017c0b7d33a47cd41c6806e1524..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/motors/dynamixel/tables.py
+++ /dev/null
@@ -1,199 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# TODO(Steven): Consider doing the following:
-# from enum import Enum
-# class MyControlTableKey(Enum):
-# ID = "ID"
-# GOAL_SPEED = "Goal_Speed"
-# ...
-#
-# MY_CONTROL_TABLE ={
-# MyControlTableKey.ID.value: (5,1)
-# MyControlTableKey.GOAL_SPEED.value: (46, 2)
-# ...
-# }
-# This allows me do to:
-# bus.write(MyControlTableKey.GOAL_SPEED, ...)
-# Instead of:
-# bus.write("Goal_Speed", ...)
-# This is important for two reasons:
-# 1. The linter will tell me if I'm trying to use an invalid key, instead of me realizing when I get the RunTimeError
-# 2. We can change the value of the MyControlTableKey enums without impacting the client code
-
-
-# {data_name: (address, size_byte)}
-# https://emanual.robotis.com/docs/en/dxl/x/{MODEL}/#control-table
-X_SERIES_CONTROL_TABLE = {
- "Model_Number": (0, 2),
- "Model_Information": (2, 4),
- "Firmware_Version": (6, 1),
- "ID": (7, 1),
- "Baud_Rate": (8, 1),
- "Return_Delay_Time": (9, 1),
- "Drive_Mode": (10, 1),
- "Operating_Mode": (11, 1),
- "Secondary_ID": (12, 1),
- "Protocol_Type": (13, 1),
- "Homing_Offset": (20, 4),
- "Moving_Threshold": (24, 4),
- "Temperature_Limit": (31, 1),
- "Max_Voltage_Limit": (32, 2),
- "Min_Voltage_Limit": (34, 2),
- "PWM_Limit": (36, 2),
- "Current_Limit": (38, 2),
- "Acceleration_Limit": (40, 4),
- "Velocity_Limit": (44, 4),
- "Max_Position_Limit": (48, 4),
- "Min_Position_Limit": (52, 4),
- "Shutdown": (63, 1),
- "Torque_Enable": (64, 1),
- "LED": (65, 1),
- "Status_Return_Level": (68, 1),
- "Registered_Instruction": (69, 1),
- "Hardware_Error_Status": (70, 1),
- "Velocity_I_Gain": (76, 2),
- "Velocity_P_Gain": (78, 2),
- "Position_D_Gain": (80, 2),
- "Position_I_Gain": (82, 2),
- "Position_P_Gain": (84, 2),
- "Feedforward_2nd_Gain": (88, 2),
- "Feedforward_1st_Gain": (90, 2),
- "Bus_Watchdog": (98, 1),
- "Goal_PWM": (100, 2),
- "Goal_Current": (102, 2),
- "Goal_Velocity": (104, 4),
- "Profile_Acceleration": (108, 4),
- "Profile_Velocity": (112, 4),
- "Goal_Position": (116, 4),
- "Realtime_Tick": (120, 2),
- "Moving": (122, 1),
- "Moving_Status": (123, 1),
- "Present_PWM": (124, 2),
- "Present_Current": (126, 2),
- "Present_Velocity": (128, 4),
- "Present_Position": (132, 4),
- "Velocity_Trajectory": (136, 4),
- "Position_Trajectory": (140, 4),
- "Present_Input_Voltage": (144, 2),
- "Present_Temperature": (146, 1),
-}
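-
-# Each entry maps a register name to an (address, size_in_bytes) pair, e.g.:
-#   addr, length = X_SERIES_CONTROL_TABLE["Goal_Position"]  # -> (116, 4)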
-
-# https://emanual.robotis.com/docs/en/dxl/x/{MODEL}/#baud-rate8
-X_SERIES_BAUDRATE_TABLE = {
- 9_600: 0,
- 57_600: 1,
- 115_200: 2,
- 1_000_000: 3,
- 2_000_000: 4,
- 3_000_000: 5,
- 4_000_000: 6,
-}
-
-# {data_name: size_byte}
-X_SERIES_ENCODINGS_TABLE = {
- "Homing_Offset": X_SERIES_CONTROL_TABLE["Homing_Offset"][1],
- "Goal_PWM": X_SERIES_CONTROL_TABLE["Goal_PWM"][1],
- "Goal_Current": X_SERIES_CONTROL_TABLE["Goal_Current"][1],
- "Goal_Velocity": X_SERIES_CONTROL_TABLE["Goal_Velocity"][1],
- "Goal_Position": X_SERIES_CONTROL_TABLE["Goal_Position"][1],
- "Present_Position": X_SERIES_CONTROL_TABLE["Present_Position"][1],
- "Present_PWM": X_SERIES_CONTROL_TABLE["Present_PWM"][1],
- "Present_Current": X_SERIES_CONTROL_TABLE["Present_Current"][1],
- "Present_Velocity": X_SERIES_CONTROL_TABLE["Present_Velocity"][1],
-}
-
-MODEL_ENCODING_TABLE = {
- "x_series": X_SERIES_ENCODINGS_TABLE,
- "xl330-m077": X_SERIES_ENCODINGS_TABLE,
- "xl330-m288": X_SERIES_ENCODINGS_TABLE,
- "xl430-w250": X_SERIES_ENCODINGS_TABLE,
- "xm430-w350": X_SERIES_ENCODINGS_TABLE,
- "xm540-w270": X_SERIES_ENCODINGS_TABLE,
- "xc430-w150": X_SERIES_ENCODINGS_TABLE,
-}
-
-# {model: model_resolution}
-# https://emanual.robotis.com/docs/en/dxl/x/{MODEL}/#specifications
-MODEL_RESOLUTION = {
- "x_series": 4096,
- "xl330-m077": 4096,
- "xl330-m288": 4096,
- "xl430-w250": 4096,
- "xm430-w350": 4096,
- "xm540-w270": 4096,
- "xc430-w150": 4096,
-}
-
-# {model: model_number}
-# https://emanual.robotis.com/docs/en/dxl/x/{MODEL}/#control-table-of-eeprom-area
-MODEL_NUMBER_TABLE = {
- "xl330-m077": 1190,
- "xl330-m288": 1200,
- "xl430-w250": 1060,
- "xm430-w350": 1020,
- "xm540-w270": 1120,
- "xc430-w150": 1070,
-}
-
-# {model: available_operating_modes}
-# https://emanual.robotis.com/docs/en/dxl/x/{MODEL}/#operating-mode11
-MODEL_OPERATING_MODES = {
- "xl330-m077": [0, 1, 3, 4, 5, 16],
- "xl330-m288": [0, 1, 3, 4, 5, 16],
- "xl430-w250": [1, 3, 4, 16],
- "xm430-w350": [0, 1, 3, 4, 5, 16],
- "xm540-w270": [0, 1, 3, 4, 5, 16],
- "xc430-w150": [1, 3, 4, 16],
-}
-
-MODEL_CONTROL_TABLE = {
- "x_series": X_SERIES_CONTROL_TABLE,
- "xl330-m077": X_SERIES_CONTROL_TABLE,
- "xl330-m288": X_SERIES_CONTROL_TABLE,
- "xl430-w250": X_SERIES_CONTROL_TABLE,
- "xm430-w350": X_SERIES_CONTROL_TABLE,
- "xm540-w270": X_SERIES_CONTROL_TABLE,
- "xc430-w150": X_SERIES_CONTROL_TABLE,
-}
-
-MODEL_BAUDRATE_TABLE = {
- "x_series": X_SERIES_BAUDRATE_TABLE,
- "xl330-m077": X_SERIES_BAUDRATE_TABLE,
- "xl330-m288": X_SERIES_BAUDRATE_TABLE,
- "xl430-w250": X_SERIES_BAUDRATE_TABLE,
- "xm430-w350": X_SERIES_BAUDRATE_TABLE,
- "xm540-w270": X_SERIES_BAUDRATE_TABLE,
- "xc430-w150": X_SERIES_BAUDRATE_TABLE,
-}
-
-AVAILABLE_BAUDRATES = [
- 9_600,
- 19_200,
- 38_400,
- 57_600,
- 115_200,
- 230_400,
- 460_800,
- 500_000,
- 576_000,
- 921_600,
- 1_000_000,
- 1_152_000,
- 2_000_000,
- 2_500_000,
- 3_000_000,
- 3_500_000,
- 4_000_000,
-]
diff --git a/lerobot/src/lerobot/motors/feetech/__init__.py b/lerobot/src/lerobot/motors/feetech/__init__.py
deleted file mode 100644
index 33992c51d2ceb32f2480d8d3a727f9a9d75bab5d..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/motors/feetech/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .feetech import DriveMode, FeetechMotorsBus, OperatingMode, TorqueMode
-from .tables import *
diff --git a/lerobot/src/lerobot/motors/feetech/feetech.py b/lerobot/src/lerobot/motors/feetech/feetech.py
deleted file mode 100644
index 98cde209c44158a244e4da03e756d2774fa0fb93..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/motors/feetech/feetech.py
+++ /dev/null
@@ -1,455 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from copy import deepcopy
-from enum import Enum
-from pprint import pformat
-
-from lerobot.motors.encoding_utils import decode_sign_magnitude, encode_sign_magnitude
-
-from ..motors_bus import Motor, MotorCalibration, MotorsBus, NameOrID, Value, get_address
-from .tables import (
- FIRMWARE_MAJOR_VERSION,
- FIRMWARE_MINOR_VERSION,
- MODEL_BAUDRATE_TABLE,
- MODEL_CONTROL_TABLE,
- MODEL_ENCODING_TABLE,
- MODEL_NUMBER,
- MODEL_NUMBER_TABLE,
- MODEL_PROTOCOL,
- MODEL_RESOLUTION,
- SCAN_BAUDRATES,
-)
-
-DEFAULT_PROTOCOL_VERSION = 0
-DEFAULT_BAUDRATE = 1_000_000
-DEFAULT_TIMEOUT_MS = 1000
-
-NORMALIZED_DATA = ["Goal_Position", "Present_Position"]
-
-logger = logging.getLogger(__name__)
-
-
-class OperatingMode(Enum):
-    # Position servo mode
-    POSITION = 0
-    # Constant speed mode, controlled by parameter 0x2e; the highest bit (15) is the direction bit
-    VELOCITY = 1
-    # PWM open-loop speed regulation mode, controlled by the running-time parameter 0x2c; bit 11 is the
-    # direction bit
-    PWM = 2
-    # Step servo mode; the step count is given by parameter 0x2a, and the highest bit (15) is the
-    # direction bit
-    STEP = 3
-
-
-class DriveMode(Enum):
- NON_INVERTED = 0
- INVERTED = 1
-
-
-class TorqueMode(Enum):
- ENABLED = 1
- DISABLED = 0
-
-
-def _split_into_byte_chunks(value: int, length: int) -> list[int]:
- import scservo_sdk as scs
-
- if length == 1:
- data = [value]
- elif length == 2:
- data = [scs.SCS_LOBYTE(value), scs.SCS_HIBYTE(value)]
- elif length == 4:
- data = [
- scs.SCS_LOBYTE(scs.SCS_LOWORD(value)),
- scs.SCS_HIBYTE(scs.SCS_LOWORD(value)),
- scs.SCS_LOBYTE(scs.SCS_HIWORD(value)),
- scs.SCS_HIBYTE(scs.SCS_HIWORD(value)),
-        ]
-    else:
-        raise ValueError(f"Unsupported length: {length}. Expected 1, 2 or 4 bytes.")
-    return data
-
-
-def patch_setPacketTimeout(self, packet_length): # noqa: N802
- """
- HACK: This patches the PortHandler behavior to set the correct packet timeouts.
-
- It fixes https://gitee.com/ftservo/SCServoSDK/issues/IBY2S6
- The bug is fixed on the official Feetech SDK repo (https://gitee.com/ftservo/FTServo_Python)
-    but because that version is not published on PyPI, we rely on the (unofficial) one that is, which
-    needs patching.
- """
- self.packet_start_time = self.getCurrentTime()
- self.packet_timeout = (self.tx_time_per_byte * packet_length) + (self.tx_time_per_byte * 3.0) + 50
-
-
-class FeetechMotorsBus(MotorsBus):
- """
-    The FeetechMotorsBus class allows efficient reads and writes to the attached motors. It relies on the
-    Python feetech sdk to communicate with the motors, which is itself based on the dynamixel sdk.
- """
-
- apply_drive_mode = True
- available_baudrates = deepcopy(SCAN_BAUDRATES)
- default_baudrate = DEFAULT_BAUDRATE
- default_timeout = DEFAULT_TIMEOUT_MS
- model_baudrate_table = deepcopy(MODEL_BAUDRATE_TABLE)
- model_ctrl_table = deepcopy(MODEL_CONTROL_TABLE)
- model_encoding_table = deepcopy(MODEL_ENCODING_TABLE)
- model_number_table = deepcopy(MODEL_NUMBER_TABLE)
- model_resolution_table = deepcopy(MODEL_RESOLUTION)
- normalized_data = deepcopy(NORMALIZED_DATA)
-
- def __init__(
- self,
- port: str,
- motors: dict[str, Motor],
- calibration: dict[str, MotorCalibration] | None = None,
- protocol_version: int = DEFAULT_PROTOCOL_VERSION,
- ):
- super().__init__(port, motors, calibration)
- self.protocol_version = protocol_version
- self._assert_same_protocol()
- import scservo_sdk as scs
-
- self.port_handler = scs.PortHandler(self.port)
- # HACK: monkeypatch
- self.port_handler.setPacketTimeout = patch_setPacketTimeout.__get__(
- self.port_handler, scs.PortHandler
- )
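-        # (`__get__` binds the patched module-level function to this PortHandler instance,
-        # shadowing the SDK's buggy method)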
- self.packet_handler = scs.PacketHandler(protocol_version)
- self.sync_reader = scs.GroupSyncRead(self.port_handler, self.packet_handler, 0, 0)
- self.sync_writer = scs.GroupSyncWrite(self.port_handler, self.packet_handler, 0, 0)
- self._comm_success = scs.COMM_SUCCESS
- self._no_error = 0x00
-
- if any(MODEL_PROTOCOL[model] != self.protocol_version for model in self.models):
- raise ValueError(f"Some motors are incompatible with protocol_version={self.protocol_version}")
-
- def _assert_same_protocol(self) -> None:
- if any(MODEL_PROTOCOL[model] != self.protocol_version for model in self.models):
- raise RuntimeError("Some motors use an incompatible protocol.")
-
- def _assert_protocol_is_compatible(self, instruction_name: str) -> None:
- if instruction_name == "sync_read" and self.protocol_version == 1:
- raise NotImplementedError(
- "'Sync Read' is not available with Feetech motors using Protocol 1. Use 'Read' sequentially instead."
- )
- if instruction_name == "broadcast_ping" and self.protocol_version == 1:
- raise NotImplementedError(
- "'Broadcast Ping' is not available with Feetech motors using Protocol 1. Use 'Ping' sequentially instead."
- )
-
- def _assert_same_firmware(self) -> None:
- firmware_versions = self._read_firmware_version(self.ids, raise_on_error=True)
- if len(set(firmware_versions.values())) != 1:
- raise RuntimeError(
- "Some Motors use different firmware versions:"
- f"\n{pformat(firmware_versions)}\n"
- "Update their firmware first using Feetech's software. "
- "Visit https://www.feetechrc.com/software."
- )
-
- def _handshake(self) -> None:
- self._assert_motors_exist()
- self._assert_same_firmware()
-
- def _find_single_motor(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]:
- if self.protocol_version == 0:
- return self._find_single_motor_p0(motor, initial_baudrate)
- else:
- return self._find_single_motor_p1(motor, initial_baudrate)
-
- def _find_single_motor_p0(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]:
- model = self.motors[motor].model
- search_baudrates = (
- [initial_baudrate] if initial_baudrate is not None else self.model_baudrate_table[model]
- )
- expected_model_nb = self.model_number_table[model]
-
- for baudrate in search_baudrates:
- self.set_baudrate(baudrate)
- id_model = self.broadcast_ping()
- if id_model:
- found_id, found_model = next(iter(id_model.items()))
- if found_model != expected_model_nb:
- raise RuntimeError(
- f"Found one motor on {baudrate=} with id={found_id} but it has a "
- f"model number '{found_model}' different than the one expected: '{expected_model_nb}'. "
- f"Make sure you are connected only connected to the '{motor}' motor (model '{model}')."
- )
- return baudrate, found_id
-
- raise RuntimeError(f"Motor '{motor}' (model '{model}') was not found. Make sure it is connected.")
-
- def _find_single_motor_p1(self, motor: str, initial_baudrate: int | None = None) -> tuple[int, int]:
- import scservo_sdk as scs
-
- model = self.motors[motor].model
- search_baudrates = (
- [initial_baudrate] if initial_baudrate is not None else self.model_baudrate_table[model]
- )
- expected_model_nb = self.model_number_table[model]
-
- for baudrate in search_baudrates:
- self.set_baudrate(baudrate)
- for id_ in range(scs.MAX_ID + 1):
- found_model = self.ping(id_)
- if found_model is not None:
- if found_model != expected_model_nb:
- raise RuntimeError(
- f"Found one motor on {baudrate=} with id={id_} but it has a "
- f"model number '{found_model}' different than the one expected: '{expected_model_nb}'. "
- f"Make sure you are connected only connected to the '{motor}' motor (model '{model}')."
- )
- return baudrate, id_
-
- raise RuntimeError(f"Motor '{motor}' (model '{model}') was not found. Make sure it is connected.")
-
- def configure_motors(self, return_delay_time=0, maximum_acceleration=254, acceleration=254) -> None:
- for motor in self.motors:
- # By default, Feetech motors have a 500µs delay response time (corresponding to a value of 250 on
- # the 'Return_Delay_Time' address). We ensure this is reduced to the minimum of 2µs (value of 0).
- self.write("Return_Delay_Time", motor, return_delay_time)
- # Set 'Maximum_Acceleration' to 254 to speedup acceleration and deceleration of the motors.
- if self.protocol_version == 0:
- self.write("Maximum_Acceleration", motor, maximum_acceleration)
- self.write("Acceleration", motor, acceleration)
-
- @property
- def is_calibrated(self) -> bool:
- motors_calibration = self.read_calibration()
- if set(motors_calibration) != set(self.calibration):
- return False
-
- same_ranges = all(
- self.calibration[motor].range_min == cal.range_min
- and self.calibration[motor].range_max == cal.range_max
- for motor, cal in motors_calibration.items()
- )
- if self.protocol_version == 1:
- return same_ranges
-
- same_offsets = all(
- self.calibration[motor].homing_offset == cal.homing_offset
- for motor, cal in motors_calibration.items()
- )
- return same_ranges and same_offsets
-
- def read_calibration(self) -> dict[str, MotorCalibration]:
- offsets, mins, maxes = {}, {}, {}
- for motor in self.motors:
- mins[motor] = self.read("Min_Position_Limit", motor, normalize=False)
- maxes[motor] = self.read("Max_Position_Limit", motor, normalize=False)
- offsets[motor] = (
- self.read("Homing_Offset", motor, normalize=False) if self.protocol_version == 0 else 0
- )
-
- calibration = {}
- for motor, m in self.motors.items():
- calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=0,
- homing_offset=offsets[motor],
- range_min=mins[motor],
- range_max=maxes[motor],
- )
-
- return calibration
-
- def write_calibration(self, calibration_dict: dict[str, MotorCalibration], cache: bool = True) -> None:
- for motor, calibration in calibration_dict.items():
- if self.protocol_version == 0:
- self.write("Homing_Offset", motor, calibration.homing_offset)
- self.write("Min_Position_Limit", motor, calibration.range_min)
- self.write("Max_Position_Limit", motor, calibration.range_max)
-
- if cache:
- self.calibration = calibration_dict
-
- def _get_half_turn_homings(self, positions: dict[NameOrID, Value]) -> dict[NameOrID, Value]:
- """
- On Feetech Motors:
- Present_Position = Actual_Position - Homing_Offset
- """
- half_turn_homings = {}
- for motor, pos in positions.items():
- model = self._get_motor_model(motor)
- max_res = self.model_resolution_table[model] - 1
- half_turn_homings[motor] = pos - int(max_res / 2)
-
- return half_turn_homings
-
- def disable_torque(self, motors: str | list[str] | None = None, num_retry: int = 0) -> None:
- for motor in self._get_motors_list(motors):
- self.write("Torque_Enable", motor, TorqueMode.DISABLED.value, num_retry=num_retry)
- self.write("Lock", motor, 0, num_retry=num_retry)
-
- def _disable_torque(self, motor_id: int, model: str, num_retry: int = 0) -> None:
- addr, length = get_address(self.model_ctrl_table, model, "Torque_Enable")
- self._write(addr, length, motor_id, TorqueMode.DISABLED.value, num_retry=num_retry)
- addr, length = get_address(self.model_ctrl_table, model, "Lock")
- self._write(addr, length, motor_id, 0, num_retry=num_retry)
-
- def enable_torque(self, motors: str | list[str] | None = None, num_retry: int = 0) -> None:
- for motor in self._get_motors_list(motors):
- self.write("Torque_Enable", motor, TorqueMode.ENABLED.value, num_retry=num_retry)
- self.write("Lock", motor, 1, num_retry=num_retry)
-
- def _encode_sign(self, data_name: str, ids_values: dict[int, int]) -> dict[int, int]:
- for id_ in ids_values:
- model = self._id_to_model(id_)
- encoding_table = self.model_encoding_table.get(model)
- if encoding_table and data_name in encoding_table:
- sign_bit = encoding_table[data_name]
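-                # e.g. with sign_bit=15, a value of -100 encodes to 0x8064
-                # (magnitude 100 with the sign bit set)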
- ids_values[id_] = encode_sign_magnitude(ids_values[id_], sign_bit)
-
- return ids_values
-
- def _decode_sign(self, data_name: str, ids_values: dict[int, int]) -> dict[int, int]:
- for id_ in ids_values:
- model = self._id_to_model(id_)
- encoding_table = self.model_encoding_table.get(model)
- if encoding_table and data_name in encoding_table:
- sign_bit = encoding_table[data_name]
- ids_values[id_] = decode_sign_magnitude(ids_values[id_], sign_bit)
-
- return ids_values
-
- def _split_into_byte_chunks(self, value: int, length: int) -> list[int]:
- return _split_into_byte_chunks(value, length)
-
- def _broadcast_ping(self) -> tuple[dict[int, int], int]:
- import scservo_sdk as scs
-
- data_list = {}
-
- status_length = 6
-
- rx_length = 0
- wait_length = status_length * scs.MAX_ID
-
- txpacket = [0] * 6
-
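- # Time to transmit one byte, in ms, assuming 10 bits per byte on the wire
- # (1 start + 8 data + 1 stop); e.g. 0.01 ms/byte at 1,000,000 baud.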
- tx_time_per_byte = (1000.0 / self.port_handler.getBaudRate()) * 10.0
-
- txpacket[scs.PKT_ID] = scs.BROADCAST_ID
- txpacket[scs.PKT_LENGTH] = 2
- txpacket[scs.PKT_INSTRUCTION] = scs.INST_PING
-
- result = self.packet_handler.txPacket(self.port_handler, txpacket)
- if result != scs.COMM_SUCCESS:
- self.port_handler.is_using = False
- return data_list, result
-
- # set rx timeout
- self.port_handler.setPacketTimeoutMillis((wait_length * tx_time_per_byte) + (3.0 * scs.MAX_ID) + 16.0)
-
- rxpacket = []
- while not self.port_handler.isPacketTimeout() and rx_length < wait_length:
- rxpacket += self.port_handler.readPort(wait_length - rx_length)
- rx_length = len(rxpacket)
-
- self.port_handler.is_using = False
-
- if rx_length == 0:
- return data_list, scs.COMM_RX_TIMEOUT
-
- while True:
- if rx_length < status_length:
- return data_list, scs.COMM_RX_CORRUPT
-
- # find packet header
- for idx in range(0, (rx_length - 1)):
- if (rxpacket[idx] == 0xFF) and (rxpacket[idx + 1] == 0xFF):
- break
-
- if idx == 0: # found at the beginning of the packet
- # calculate checksum
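- # The status checksum is the bitwise complement of the byte sum between
- # the 0xFF 0xFF header and the checksum byte, truncated to one byte.
- # E.g. a byte sum of 0x1234 yields ~0x34 & 0xFF = 0xCB.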
- checksum = 0
- for idx in range(2, status_length - 1): # except header & checksum
- checksum += rxpacket[idx]
-
- checksum = ~checksum & 0xFF
- if rxpacket[status_length - 1] == checksum:
- result = scs.COMM_SUCCESS
- data_list[rxpacket[scs.PKT_ID]] = rxpacket[scs.PKT_ERROR]
-
- del rxpacket[0:status_length]
- rx_length = rx_length - status_length
-
- if rx_length == 0:
- return data_list, result
- else:
- result = scs.COMM_RX_CORRUPT
- # remove header (0xFF 0xFF)
- del rxpacket[0:2]
- rx_length = rx_length - 2
- else:
- # remove unnecessary packets
- del rxpacket[0:idx]
- rx_length = rx_length - idx
-
- def broadcast_ping(self, num_retry: int = 0, raise_on_error: bool = False) -> dict[int, int] | None:
- self._assert_protocol_is_compatible("broadcast_ping")
- for n_try in range(1 + num_retry):
- ids_status, comm = self._broadcast_ping()
- if self._is_comm_success(comm):
- break
- logger.debug(f"Broadcast ping failed on port '{self.port}' ({n_try=})")
- logger.debug(self.packet_handler.getTxRxResult(comm))
-
- if not self._is_comm_success(comm):
- if raise_on_error:
- raise ConnectionError(self.packet_handler.getTxRxResult(comm))
- return
-
- ids_errors = {id_: status for id_, status in ids_status.items() if self._is_error(status)}
- if ids_errors:
- display_dict = {id_: self.packet_handler.getRxPacketError(err) for id_, err in ids_errors.items()}
- logger.error(f"Some motors found returned an error status:\n{pformat(display_dict, indent=4)}")
-
- return self._read_model_number(list(ids_status), raise_on_error)
-
- def _read_firmware_version(self, motor_ids: list[int], raise_on_error: bool = False) -> dict[int, str]:
- firmware_versions = {}
- for id_ in motor_ids:
- firm_ver_major, comm, error = self._read(
- *FIRMWARE_MAJOR_VERSION, id_, raise_on_error=raise_on_error
- )
- if not self._is_comm_success(comm) or self._is_error(error):
- continue
-
- firm_ver_minor, comm, error = self._read(
- *FIRMWARE_MINOR_VERSION, id_, raise_on_error=raise_on_error
- )
- if not self._is_comm_success(comm) or self._is_error(error):
- continue
-
- firmware_versions[id_] = f"{firm_ver_major}.{firm_ver_minor}"
-
- return firmware_versions
-
- def _read_model_number(self, motor_ids: list[int], raise_on_error: bool = False) -> dict[int, int]:
- model_numbers = {}
- for id_ in motor_ids:
- model_nb, comm, error = self._read(*MODEL_NUMBER, id_, raise_on_error=raise_on_error)
- if not self._is_comm_success(comm) or self._is_error(error):
- continue
-
- model_numbers[id_] = model_nb
-
- return model_numbers
diff --git a/lerobot/src/lerobot/motors/feetech/tables.py b/lerobot/src/lerobot/motors/feetech/tables.py
deleted file mode 100644
index e26d24226275d0330254ca4b1ab028d7b7bfa850..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/motors/feetech/tables.py
+++ /dev/null
@@ -1,256 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-FIRMWARE_MAJOR_VERSION = (0, 1)
-FIRMWARE_MINOR_VERSION = (1, 1)
-MODEL_NUMBER = (3, 2)
-
-# TODO(Steven): Consider doing the following:
-# from enum import Enum
-# class MyControlTableKey(Enum):
-# ID = "ID"
-# GOAL_SPEED = "Goal_Speed"
-# ...
-#
-# MY_CONTROL_TABLE ={
-# MyControlTableKey.ID.value: (5,1)
-# MyControlTableKey.GOAL_SPEED.value: (46, 2)
-# ...
-# }
- # This allows me to do:
-# bus.write(MyControlTableKey.GOAL_SPEED, ...)
-# Instead of:
-# bus.write("Goal_Speed", ...)
-# This is important for two reasons:
- # 1. The linter will tell me if I'm trying to use an invalid key, instead of me realizing when I get the RuntimeError
-# 2. We can change the value of the MyControlTableKey enums without impacting the client code
-
-# data_name: (address, size_byte)
-# http://doc.feetech.cn/#/prodinfodownload?srcType=FT-SMS-STS-emanual-229f4476422d4059abfb1cb0
-STS_SMS_SERIES_CONTROL_TABLE = {
- # EPROM
- "Firmware_Major_Version": FIRMWARE_MAJOR_VERSION, # read-only
- "Firmware_Minor_Version": FIRMWARE_MINOR_VERSION, # read-only
- "Model_Number": MODEL_NUMBER, # read-only
- "ID": (5, 1),
- "Baud_Rate": (6, 1),
- "Return_Delay_Time": (7, 1),
- "Response_Status_Level": (8, 1),
- "Min_Position_Limit": (9, 2),
- "Max_Position_Limit": (11, 2),
- "Max_Temperature_Limit": (13, 1),
- "Max_Voltage_Limit": (14, 1),
- "Min_Voltage_Limit": (15, 1),
- "Max_Torque_Limit": (16, 2),
- "Phase": (18, 1),
- "Unloading_Condition": (19, 1),
- "LED_Alarm_Condition": (20, 1),
- "P_Coefficient": (21, 1),
- "D_Coefficient": (22, 1),
- "I_Coefficient": (23, 1),
- "Minimum_Startup_Force": (24, 2),
- "CW_Dead_Zone": (26, 1),
- "CCW_Dead_Zone": (27, 1),
- "Protection_Current": (28, 2),
- "Angular_Resolution": (30, 1),
- "Homing_Offset": (31, 2),
- "Operating_Mode": (33, 1),
- "Protective_Torque": (34, 1),
- "Protection_Time": (35, 1),
- "Overload_Torque": (36, 1),
- "Velocity_closed_loop_P_proportional_coefficient": (37, 1),
- "Over_Current_Protection_Time": (38, 1),
- "Velocity_closed_loop_I_integral_coefficient": (39, 1),
- # SRAM
- "Torque_Enable": (40, 1),
- "Acceleration": (41, 1),
- "Goal_Position": (42, 2),
- "Goal_Time": (44, 2),
- "Goal_Velocity": (46, 2),
- "Torque_Limit": (48, 2),
- "Lock": (55, 1),
- "Present_Position": (56, 2), # read-only
- "Present_Velocity": (58, 2), # read-only
- "Present_Load": (60, 2), # read-only
- "Present_Voltage": (62, 1), # read-only
- "Present_Temperature": (63, 1), # read-only
- "Status": (65, 1), # read-only
- "Moving": (66, 1), # read-only
- "Present_Current": (69, 2), # read-only
- "Goal_Position_2": (71, 2), # read-only
- # Factory
- "Moving_Velocity": (80, 1),
- "Moving_Velocity_Threshold": (80, 1),
- "DTs": (81, 1), # (ms)
- "Velocity_Unit_factor": (82, 1),
- "Hts": (83, 1), # (ns) valid for firmware >= 2.54, other versions keep 0
- "Maximum_Velocity_Limit": (84, 1),
- "Maximum_Acceleration": (85, 1),
- "Acceleration_Multiplier ": (86, 1), # Acceleration multiplier in effect when acceleration is 0
-}
-
-# http://doc.feetech.cn/#/prodinfodownload?srcType=FT-SCSCL-emanual-cbcc8ab2e3384282a01d4bf3
-SCS_SERIES_CONTROL_TABLE = {
- # EPROM
- "Firmware_Major_Version": FIRMWARE_MAJOR_VERSION, # read-only
- "Firmware_Minor_Version": FIRMWARE_MINOR_VERSION, # read-only
- "Model_Number": MODEL_NUMBER, # read-only
- "ID": (5, 1),
- "Baud_Rate": (6, 1),
- "Return_Delay_Time": (7, 1),
- "Response_Status_Level": (8, 1),
- "Min_Position_Limit": (9, 2),
- "Max_Position_Limit": (11, 2),
- "Max_Temperature_Limit": (13, 1),
- "Max_Voltage_Limit": (14, 1),
- "Min_Voltage_Limit": (15, 1),
- "Max_Torque_Limit": (16, 2),
- "Phase": (18, 1),
- "Unloading_Condition": (19, 1),
- "LED_Alarm_Condition": (20, 1),
- "P_Coefficient": (21, 1),
- "D_Coefficient": (22, 1),
- "I_Coefficient": (23, 1),
- "Minimum_Startup_Force": (24, 2),
- "CW_Dead_Zone": (26, 1),
- "CCW_Dead_Zone": (27, 1),
- "Protective_Torque": (37, 1),
- "Protection_Time": (38, 1),
- # SRAM
- "Torque_Enable": (40, 1),
- "Acceleration": (41, 1),
- "Goal_Position": (42, 2),
- "Running_Time": (44, 2),
- "Goal_Velocity": (46, 2),
- "Lock": (48, 1),
- "Present_Position": (56, 2), # read-only
- "Present_Velocity": (58, 2), # read-only
- "Present_Load": (60, 2), # read-only
- "Present_Voltage": (62, 1), # read-only
- "Present_Temperature": (63, 1), # read-only
- "Sync_Write_Flag": (64, 1), # read-only
- "Status": (65, 1), # read-only
- "Moving": (66, 1), # read-only
- # Factory
- "PWM_Maximum_Step": (78, 1),
- "Moving_Velocity_Threshold*50": (79, 1),
- "DTs": (80, 1), # (ms)
- "Minimum_Velocity_Limit*50": (81, 1),
- "Maximum_Velocity_Limit*50": (82, 1),
- "Acceleration_2": (83, 1), # don't know what that is
-}
-
-STS_SMS_SERIES_BAUDRATE_TABLE = {
- 1_000_000: 0,
- 500_000: 1,
- 250_000: 2,
- 128_000: 3,
- 115_200: 4,
- 57_600: 5,
- 38_400: 6,
- 19_200: 7,
-}
-
-SCS_SERIES_BAUDRATE_TABLE = {
- 1_000_000: 0,
- 500_000: 1,
- 250_000: 2,
- 128_000: 3,
- 115_200: 4,
- 57_600: 5,
- 38_400: 6,
- 19_200: 7,
-}
-
-MODEL_CONTROL_TABLE = {
- "sts_series": STS_SMS_SERIES_CONTROL_TABLE,
- "scs_series": SCS_SERIES_CONTROL_TABLE,
- "sms_series": STS_SMS_SERIES_CONTROL_TABLE,
- "sts3215": STS_SMS_SERIES_CONTROL_TABLE,
- "sts3250": STS_SMS_SERIES_CONTROL_TABLE,
- "scs0009": SCS_SERIES_CONTROL_TABLE,
- "sm8512bl": STS_SMS_SERIES_CONTROL_TABLE,
-}
-
-MODEL_RESOLUTION = {
- "sts_series": 4096,
- "sms_series": 4096,
- "scs_series": 1024,
- "sts3215": 4096,
- "sts3250": 4096,
- "sm8512bl": 4096,
- "scs0009": 1024,
-}
-
-MODEL_BAUDRATE_TABLE = {
- "sts_series": STS_SMS_SERIES_BAUDRATE_TABLE,
- "sms_series": STS_SMS_SERIES_BAUDRATE_TABLE,
- "scs_series": SCS_SERIES_BAUDRATE_TABLE,
- "sm8512bl": STS_SMS_SERIES_BAUDRATE_TABLE,
- "sts3215": STS_SMS_SERIES_BAUDRATE_TABLE,
- "sts3250": STS_SMS_SERIES_BAUDRATE_TABLE,
- "scs0009": SCS_SERIES_BAUDRATE_TABLE,
-}
-
-# Sign-Magnitude encoding bits
-STS_SMS_SERIES_ENCODINGS_TABLE = {
- "Homing_Offset": 11,
- "Goal_Position": 15,
- "Goal_Velocity": 15,
- "Goal_Speed": 15,
- "Present_Position": 15,
- "Present_Velocity": 15,
- "Present_Speed": 15,
-}
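-
-# Illustrative example of the encoding above: with the sign bit at position 15,
-# a value of -100 is encoded as 100 | (1 << 15) = 32868, and 32868 decodes back
-# to -100; non-negative values pass through unchanged.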
-
-MODEL_ENCODING_TABLE = {
- "sts_series": STS_SMS_SERIES_ENCODINGS_TABLE,
- "sms_series": STS_SMS_SERIES_ENCODINGS_TABLE,
- "scs_series": {},
- "sts3215": STS_SMS_SERIES_ENCODINGS_TABLE,
- "sts3250": STS_SMS_SERIES_ENCODINGS_TABLE,
- "sm8512bl": STS_SMS_SERIES_ENCODINGS_TABLE,
- "scs0009": {},
-}
-
-SCAN_BAUDRATES = [
- 4_800,
- 9_600,
- 14_400,
- 19_200,
- 38_400,
- 57_600,
- 115_200,
- 128_000,
- 250_000,
- 500_000,
- 1_000_000,
-]
-
-MODEL_NUMBER_TABLE = {
- "sts3215": 777,
- "sts3250": 2825,
- "sm8512bl": 11272,
- "scs0009": 1284,
-}
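-
-# Illustrative: an STS3215 reports model number 777 (0x0309), which this table
-# maps back to the "sts3215" model string.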
-
-MODEL_PROTOCOL = {
- "sts_series": 0,
- "sms_series": 0,
- "scs_series": 1,
- "sts3215": 0,
- "sts3250": 0,
- "sm8512bl": 0,
- "scs0009": 1,
-}
diff --git a/lerobot/src/lerobot/policies/act/README.md b/lerobot/src/lerobot/policies/act/README.md
deleted file mode 100644
index 04602009852778a28be44647b6e7ba445dae3a95..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/policies/act/README.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/policy_act_README.md
\ No newline at end of file
diff --git a/lerobot/src/lerobot/policies/act/configuration_act.py b/lerobot/src/lerobot/policies/act/configuration_act.py
deleted file mode 100644
index 5c6fdf4275d6a5d40a7eebe443c8a756d3f0fc21..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/policies/act/configuration_act.py
+++ /dev/null
@@ -1,186 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 Tony Z. Zhao and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass, field
-
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.configs.types import NormalizationMode
-from lerobot.optim.optimizers import AdamWConfig
-
-
-@PreTrainedConfig.register_subclass("act")
-@dataclass
-class ACTConfig(PreTrainedConfig):
- """Configuration class for the Action Chunking Transformers policy.
-
- Defaults are configured for training on bimanual Aloha tasks like "insertion" or "transfer".
-
- The parameters you will most likely need to change are the ones which depend on the environment / sensors.
- Those are: `input_shapes` and `output_shapes`.
-
- Notes on the inputs and outputs:
- - Either:
- - At least one key starting with "observation.image" is required as an input.
- AND/OR
- - The key "observation.environment_state" is required as input.
- - If there are multiple keys beginning with "observation.images." they are treated as multiple camera
- views. Right now we only support all images having the same shape.
- - May optionally work without an "observation.state" key for the proprioceptive robot state.
- - "action" is required as an output key.
-
- Args:
- n_obs_steps: Number of environment steps worth of observations to pass to the policy (takes the
- current step and additional steps going back).
- chunk_size: The size of the action prediction "chunks" in units of environment steps.
- n_action_steps: The number of action steps to run in the environment for one invocation of the policy.
- This should be no greater than the chunk size. For example, if the chunk size is 100, you may
- set this to 50. This would mean that the model predicts 100 steps worth of actions, runs 50 in the
- environment, and throws the other 50 out.
- input_shapes: A dictionary defining the shapes of the input data for the policy. The key represents
- the input data name, and the value is a list indicating the dimensions of the corresponding data.
- For example, "observation.image" refers to an input from a camera with dimensions [3, 96, 96],
- indicating it has three color channels and 96x96 resolution. Importantly, `input_shapes` doesn't
- include batch dimension or temporal dimension.
- output_shapes: A dictionary defining the shapes of the output data for the policy. The key represents
- the output data name, and the value is a list indicating the dimensions of the corresponding data.
- For example, "action" refers to an output shape of [14], indicating 14-dimensional actions.
- Importantly, `output_shapes` doesn't include batch dimension or temporal dimension.
- input_normalization_modes: A dictionary with key representing the modality (e.g. "observation.state"),
- and the value specifies the normalization mode to apply. The two available modes are "mean_std"
- which subtracts the mean and divides by the standard deviation, and "min_max" which rescales to a
- [-1, 1] range.
- output_normalization_modes: Similar dictionary to `input_normalization_modes`, but used to unnormalize to the
- original scale. Note that this is also used for normalizing the training targets.
- vision_backbone: Name of the torchvision resnet backbone to use for encoding images.
- pretrained_backbone_weights: Pretrained weights from torchvision to initialize the backbone.
- `None` means no pretrained weights.
- replace_final_stride_with_dilation: Whether to replace the ResNet's final 2x2 stride with a dilated
- convolution.
- pre_norm: Whether to use "pre-norm" in the transformer blocks.
- dim_model: The transformer blocks' main hidden dimension.
- n_heads: The number of heads to use in the transformer blocks' multi-head attention.
- dim_feedforward: The dimension to expand the transformer's hidden dimension to in the feed-forward
- layers.
- feedforward_activation: The activation to use in the transformer block's feed-forward layers.
- n_encoder_layers: The number of transformer layers to use for the transformer encoder.
- n_decoder_layers: The number of transformer layers to use for the transformer decoder.
- use_vae: Whether to use a variational objective during training. This introduces another transformer
- which is used as the VAE's encoder (not to be confused with the transformer encoder - see
- documentation in the policy class).
- latent_dim: The VAE's latent dimension.
- n_vae_encoder_layers: The number of transformer layers to use for the VAE's encoder.
- temporal_ensemble_coeff: Coefficient for the exponential weighting scheme to apply for temporal
- ensembling. Defaults to None which means temporal ensembling is not used. `n_action_steps` must be
- 1 when using this feature, as inference needs to happen at every step to form an ensemble. For
- more information on how ensembling works, please see `ACTTemporalEnsembler`.
- dropout: Dropout to use in the transformer layers (see code for details).
- kl_weight: The weight to use for the KL-divergence component of the loss if the variational objective
- is enabled. Loss is then calculated as: `reconstruction_loss + kl_weight * kld_loss`.
- """
-
- # Input / output structure.
- n_obs_steps: int = 1
- chunk_size: int = 100
- n_action_steps: int = 100
-
- normalization_mapping: dict[str, NormalizationMode] = field(
- default_factory=lambda: {
- "VISUAL": NormalizationMode.MEAN_STD,
- "STATE": NormalizationMode.MEAN_STD,
- "ACTION": NormalizationMode.MEAN_STD,
- }
- )
-
- # Architecture.
- # Vision backbone.
- vision_backbone: str = "resnet18"
- pretrained_backbone_weights: str | None = "ResNet18_Weights.IMAGENET1K_V1"
- replace_final_stride_with_dilation: bool = False
- # Transformer layers.
- pre_norm: bool = False
- dim_model: int = 512
- n_heads: int = 8
- dim_feedforward: int = 3200
- feedforward_activation: str = "relu"
- n_encoder_layers: int = 4
- # Note: Although the original ACT implementation has 7 for `n_decoder_layers`, there is a bug in the code
- # that means only the first layer is used. Here we match the original implementation by setting this to 1.
- # See this issue https://github.com/tonyzhaozh/act/issues/25#issue-2258740521.
- n_decoder_layers: int = 1
- # VAE.
- use_vae: bool = True
- latent_dim: int = 32
- n_vae_encoder_layers: int = 4
-
- # Inference.
- # Note: the value used in ACT when temporal ensembling is enabled is 0.01.
- temporal_ensemble_coeff: float | None = None
-
- # Training and loss computation.
- dropout: float = 0.1
- kl_weight: float = 10.0
-
- # Training preset
- optimizer_lr: float = 1e-5
- optimizer_weight_decay: float = 1e-4
- optimizer_lr_backbone: float = 1e-5
-
- def __post_init__(self):
- super().__post_init__()
-
- """Input validation (not exhaustive)."""
- if not self.vision_backbone.startswith("resnet"):
- raise ValueError(
- f"`vision_backbone` must be one of the ResNet variants. Got {self.vision_backbone}."
- )
- if self.temporal_ensemble_coeff is not None and self.n_action_steps > 1:
- raise NotImplementedError(
- "`n_action_steps` must be 1 when using temporal ensembling. This is "
- "because the policy needs to be queried every step to compute the ensembled action."
- )
- if self.n_action_steps > self.chunk_size:
- raise ValueError(
- f"The chunk size is the upper bound for the number of action steps per model invocation. Got "
- f"{self.n_action_steps} for `n_action_steps` and {self.chunk_size} for `chunk_size`."
- )
- if self.n_obs_steps != 1:
- raise ValueError(
- f"Multiple observation steps not handled yet. Got `nobs_steps={self.n_obs_steps}`"
- )
-
- def get_optimizer_preset(self) -> AdamWConfig:
- return AdamWConfig(
- lr=self.optimizer_lr,
- weight_decay=self.optimizer_weight_decay,
- )
-
- def get_scheduler_preset(self) -> None:
- return None
-
- def validate_features(self) -> None:
- if not self.image_features and not self.env_state_feature:
- raise ValueError("You must provide at least one image or the environment state among the inputs.")
-
- @property
- def observation_delta_indices(self) -> None:
- return None
-
- @property
- def action_delta_indices(self) -> list:
- return list(range(self.chunk_size))
-
- @property
- def reward_delta_indices(self) -> None:
- return None
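-
-
-# Minimal usage sketch (illustrative, not part of the original module): halving
-# the default action horizon while keeping the rest of the architecture.
-#
-# config = ACTConfig(chunk_size=50, n_action_steps=25)
-# assert config.action_delta_indices == list(range(50))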
diff --git a/lerobot/src/lerobot/policies/act/modeling_act.py b/lerobot/src/lerobot/policies/act/modeling_act.py
deleted file mode 100644
index 1c67af9caf15d5deafcdb5a0d3b5feafc5dd72e3..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/policies/act/modeling_act.py
+++ /dev/null
@@ -1,746 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 Tony Z. Zhao and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Action Chunking Transformer Policy
-
-As per Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware (https://huggingface.co/papers/2304.13705).
-The majority of changes here involve removing unused code, unifying naming, and adding helpful comments.
-"""
-
-import math
-from collections import deque
-from collections.abc import Callable
-from itertools import chain
-
-import einops
-import numpy as np
-import torch
-import torch.nn.functional as F # noqa: N812
-import torchvision
-from torch import Tensor, nn
-from torchvision.models._utils import IntermediateLayerGetter
-from torchvision.ops.misc import FrozenBatchNorm2d
-
-from lerobot.policies.act.configuration_act import ACTConfig
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.utils.constants import ACTION, OBS_ENV_STATE, OBS_IMAGES, OBS_STATE
-
-
-class ACTPolicy(PreTrainedPolicy):
- """
- Action Chunking Transformer Policy as per Learning Fine-Grained Bimanual Manipulation with Low-Cost
- Hardware (paper: https://huggingface.co/papers/2304.13705, code: https://github.com/tonyzhaozh/act)
- """
-
- config_class = ACTConfig
- name = "act"
-
- def __init__(
- self,
- config: ACTConfig,
- **kwargs,
- ):
- """
- Args:
- config: Policy configuration class instance or None, in which case the default instantiation of
- the configuration class is used.
- """
- super().__init__(config)
- config.validate_features()
- self.config = config
-
- self.model = ACT(config)
-
- if config.temporal_ensemble_coeff is not None:
- self.temporal_ensembler = ACTTemporalEnsembler(config.temporal_ensemble_coeff, config.chunk_size)
-
- self.reset()
-
- def get_optim_params(self) -> dict:
- # TODO(aliberts, rcadene): As of now, lr_backbone == lr
- # Should we remove this and just `return self.parameters()`?
- return [
- {
- "params": [
- p
- for n, p in self.named_parameters()
- if not n.startswith("model.backbone") and p.requires_grad
- ]
- },
- {
- "params": [
- p
- for n, p in self.named_parameters()
- if n.startswith("model.backbone") and p.requires_grad
- ],
- "lr": self.config.optimizer_lr_backbone,
- },
- ]
-
- def reset(self):
- """This should be called whenever the environment is reset."""
- if self.config.temporal_ensemble_coeff is not None:
- self.temporal_ensembler.reset()
- else:
- self._action_queue = deque([], maxlen=self.config.n_action_steps)
-
- @torch.no_grad()
- def select_action(self, batch: dict[str, Tensor]) -> Tensor:
- """Select a single action given environment observations.
-
- This method wraps `select_actions` in order to return one action at a time for execution in the
- environment. It works by managing the actions in a queue and only calling `select_actions` when the
- queue is empty.
- """
- self.eval() # keeping the policy in eval mode as it could be set to train mode while queue is consumed
-
- if self.config.temporal_ensemble_coeff is not None:
- actions = self.predict_action_chunk(batch)
- action = self.temporal_ensembler.update(actions)
- return action
-
- # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
- # querying the policy.
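- # E.g. with the default chunk_size=100 and n_action_steps=100, the model is
- # queried once every 100 environment steps.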
- if len(self._action_queue) == 0:
- actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]
-
- # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
- # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
- self._action_queue.extend(actions.transpose(0, 1))
- return self._action_queue.popleft()
-
- @torch.no_grad()
- def predict_action_chunk(self, batch: dict[str, Tensor]) -> Tensor:
- """Predict a chunk of actions given environment observations."""
- self.eval()
-
- if self.config.image_features:
- batch = dict(batch) # shallow copy so that adding a key doesn't modify the original
- batch[OBS_IMAGES] = [batch[key] for key in self.config.image_features]
-
- actions = self.model(batch)[0]
- return actions
-
- def forward(self, batch: dict[str, Tensor]) -> tuple[Tensor, dict]:
- """Run the batch through the model and compute the loss for training or validation."""
- if self.config.image_features:
- batch = dict(batch) # shallow copy so that adding a key doesn't modify the original
- batch[OBS_IMAGES] = [batch[key] for key in self.config.image_features]
-
- actions_hat, (mu_hat, log_sigma_x2_hat) = self.model(batch)
-
- l1_loss = (
- F.l1_loss(batch[ACTION], actions_hat, reduction="none") * ~batch["action_is_pad"].unsqueeze(-1)
- ).mean()
-
- loss_dict = {"l1_loss": l1_loss.item()}
- if self.config.use_vae:
- # Calculate Dₖₗ(latent_pdf || standard_normal). Note: After computing the KL-divergence for
- # each dimension independently, we sum over the latent dimension to get the total
- # KL-divergence per batch element, then take the mean over the batch.
- # (See App. B of https://huggingface.co/papers/1312.6114 for more details).
- mean_kld = (
- (-0.5 * (1 + log_sigma_x2_hat - mu_hat.pow(2) - (log_sigma_x2_hat).exp())).sum(-1).mean()
- )
- loss_dict["kld_loss"] = mean_kld.item()
- loss = l1_loss + mean_kld * self.config.kl_weight
- else:
- loss = l1_loss
-
- return loss, loss_dict
-
-
-class ACTTemporalEnsembler:
- def __init__(self, temporal_ensemble_coeff: float, chunk_size: int) -> None:
- """Temporal ensembling as described in Algorithm 2 of https://huggingface.co/papers/2304.13705.
-
- The weights are calculated as wᵢ = exp(-temporal_ensemble_coeff * i) where w₀ is the oldest action.
- They are then normalized to sum to 1 by dividing by Σwᵢ. Here's some intuition around how the
- coefficient works:
- - Setting it to 0 uniformly weighs all actions.
- - Setting it positive gives more weight to older actions.
- - Setting it negative gives more weight to newer actions.
- NOTE: The default value for `temporal_ensemble_coeff` used by the original ACT work is 0.01. This
- results in older actions being weighed more highly than newer actions (the experiments documented in
- https://github.com/huggingface/lerobot/pull/319 hint at why highly weighing new actions might be
- detrimental: doing so aggressively may diminish the benefits of action chunking).
-
- Here we use an online method for computing the average rather than caching a history of actions in
- order to compute the average offline. For a simple 1D sequence it looks something like:
-
- ```
- import torch
-
- seq = torch.linspace(8, 8.5, 100)
- print(seq)
-
- m = 0.01
- exp_weights = torch.exp(-m * torch.arange(len(seq)))
- print(exp_weights)
-
- # Calculate offline
- avg = (exp_weights * seq).sum() / exp_weights.sum()
- print("offline", avg)
-
- # Calculate online
- for i, item in enumerate(seq):
- if i == 0:
- avg = item
- continue
- avg *= exp_weights[:i].sum()
- avg += item * exp_weights[i]
- avg /= exp_weights[: i + 1].sum()
- print("online", avg)
- ```
- """
- self.chunk_size = chunk_size
- self.ensemble_weights = torch.exp(-temporal_ensemble_coeff * torch.arange(chunk_size))
- self.ensemble_weights_cumsum = torch.cumsum(self.ensemble_weights, dim=0)
- self.reset()
-
- def reset(self):
- """Resets the online computation variables."""
- self.ensembled_actions = None
- # (chunk_size,) count of how many actions are in the ensemble for each time step in the sequence.
- self.ensembled_actions_count = None
-
- def update(self, actions: Tensor) -> Tensor:
- """
- Takes a (batch, chunk_size, action_dim) sequence of actions, updates the temporal ensemble for all
- time steps, and pops/returns the next batch of actions in the sequence.
- """
- self.ensemble_weights = self.ensemble_weights.to(device=actions.device)
- self.ensemble_weights_cumsum = self.ensemble_weights_cumsum.to(device=actions.device)
- if self.ensembled_actions is None:
- # Initializes `self.ensembled_actions` to the sequence of actions predicted during the first
- # time step of the episode.
- self.ensembled_actions = actions.clone()
- # Note: The last dimension is unsqueezed to make sure we can broadcast properly for tensor
- # operations later.
- self.ensembled_actions_count = torch.ones(
- (self.chunk_size, 1), dtype=torch.long, device=self.ensembled_actions.device
- )
- else:
- # self.ensembled_actions will have shape (batch_size, chunk_size - 1, action_dim). Compute
- # the online update for those entries.
- self.ensembled_actions *= self.ensemble_weights_cumsum[self.ensembled_actions_count - 1]
- self.ensembled_actions += actions[:, :-1] * self.ensemble_weights[self.ensembled_actions_count]
- self.ensembled_actions /= self.ensemble_weights_cumsum[self.ensembled_actions_count]
- self.ensembled_actions_count = torch.clamp(self.ensembled_actions_count + 1, max=self.chunk_size)
- # The last action, which has no prior online average, needs to get concatenated onto the end.
- self.ensembled_actions = torch.cat([self.ensembled_actions, actions[:, -1:]], dim=1)
- self.ensembled_actions_count = torch.cat(
- [self.ensembled_actions_count, torch.ones_like(self.ensembled_actions_count[-1:])]
- )
- # "Consume" the first action.
- action, self.ensembled_actions, self.ensembled_actions_count = (
- self.ensembled_actions[:, 0],
- self.ensembled_actions[:, 1:],
- self.ensembled_actions_count[1:],
- )
- return action
-
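-# Illustrative usage of the ensembler (hypothetical `policy` and `batch`):
-#
-# ensembler = ACTTemporalEnsembler(temporal_ensemble_coeff=0.01, chunk_size=100)
-# chunk = policy.predict_action_chunk(batch)  # (B, 100, action_dim)
-# action = ensembler.update(chunk)            # (B, action_dim) for this step
-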
-
-class ACT(nn.Module):
- """Action Chunking Transformer: The underlying neural network for ACTPolicy.
-
- Note: In this code we use the terms `vae_encoder`, `encoder`, `decoder`. The meanings are as follows.
- - The `vae_encoder` is, as per the literature around variational auto-encoders (VAE), the part of the
- model that encodes the target data (a sequence of actions), and the condition (the robot
- joint-space).
- - A transformer with an `encoder` (not the VAE encoder) and `decoder` (not the VAE decoder) with
- cross-attention is used as the VAE decoder. For these terms, we drop the `vae_` prefix because we
- have an option to train this model without the variational objective (in which case we drop the
- `vae_encoder` altogether, and nothing about this model has anything to do with a VAE).
-
- Transformer
- Used alone for inference
- (acts as VAE decoder
- during training)
- ┌───────────────────────┐
- │ Outputs │
- │ ▲ │
- │ ┌─────►┌───────┐ │
- ┌──────┐ │ │ │Transf.│ │
- │ │ │ ├─────►│decoder│ │
- ┌────┴────┐ │ │ │ │ │ │
- │ │ │ │ ┌───┴───┬─►│ │ │
- │ VAE │ │ │ │ │ └───────┘ │
- │ encoder │ │ │ │Transf.│ │
- │ │ │ │ │encoder│ │
- └───▲─────┘ │ │ │ │ │
- │ │ │ └▲──▲─▲─┘ │
- │ │ │ │ │ │ │
- inputs └─────┼──┘ │ image emb. │
- │ state emb. │
- └───────────────────────┘
- """
-
- def __init__(self, config: ACTConfig):
- # BERT style VAE encoder with input tokens [cls, robot_state, *action_sequence].
- # The cls token forms the parameters of the latent distribution (concatenated as [*means, *log_variances]).
- super().__init__()
- self.config = config
-
- if self.config.use_vae:
- self.vae_encoder = ACTEncoder(config, is_vae_encoder=True)
- self.vae_encoder_cls_embed = nn.Embedding(1, config.dim_model)
- # Projection layer for joint-space configuration to hidden dimension.
- if self.config.robot_state_feature:
- self.vae_encoder_robot_state_input_proj = nn.Linear(
- self.config.robot_state_feature.shape[0], config.dim_model
- )
- # Projection layer for action (joint-space target) to hidden dimension.
- self.vae_encoder_action_input_proj = nn.Linear(
- self.config.action_feature.shape[0],
- config.dim_model,
- )
- # Projection layer from the VAE encoder's output to the latent distribution's parameter space.
- self.vae_encoder_latent_output_proj = nn.Linear(config.dim_model, config.latent_dim * 2)
- # Fixed sinusoidal positional embedding for the input to the VAE encoder. Unsqueeze for batch
- # dimension.
- num_input_token_encoder = 1 + config.chunk_size
- if self.config.robot_state_feature:
- num_input_token_encoder += 1
- self.register_buffer(
- "vae_encoder_pos_enc",
- create_sinusoidal_pos_embedding(num_input_token_encoder, config.dim_model).unsqueeze(0),
- )
-
- # Backbone for image feature extraction.
- if self.config.image_features:
- backbone_model = getattr(torchvision.models, config.vision_backbone)(
- replace_stride_with_dilation=[False, False, config.replace_final_stride_with_dilation],
- weights=config.pretrained_backbone_weights,
- norm_layer=FrozenBatchNorm2d,
- )
- # Note: The assumption here is that we are using a ResNet model (and hence layer4 is the final
- # feature map).
- # Note: The forward method of this returns a dict: {"feature_map": output}.
- self.backbone = IntermediateLayerGetter(backbone_model, return_layers={"layer4": "feature_map"})
-
- # Transformer (acts as VAE decoder when training with the variational objective).
- self.encoder = ACTEncoder(config)
- self.decoder = ACTDecoder(config)
-
- # Transformer encoder input projections. The tokens will be structured like
- # [latent, (robot_state), (env_state), (image_feature_map_pixels)].
- if self.config.robot_state_feature:
- self.encoder_robot_state_input_proj = nn.Linear(
- self.config.robot_state_feature.shape[0], config.dim_model
- )
- if self.config.env_state_feature:
- self.encoder_env_state_input_proj = nn.Linear(
- self.config.env_state_feature.shape[0], config.dim_model
- )
- self.encoder_latent_input_proj = nn.Linear(config.latent_dim, config.dim_model)
- if self.config.image_features:
- self.encoder_img_feat_input_proj = nn.Conv2d(
- backbone_model.fc.in_features, config.dim_model, kernel_size=1
- )
- # Transformer encoder positional embeddings.
- n_1d_tokens = 1 # for the latent
- if self.config.robot_state_feature:
- n_1d_tokens += 1
- if self.config.env_state_feature:
- n_1d_tokens += 1
- self.encoder_1d_feature_pos_embed = nn.Embedding(n_1d_tokens, config.dim_model)
- if self.config.image_features:
- self.encoder_cam_feat_pos_embed = ACTSinusoidalPositionEmbedding2d(config.dim_model // 2)
-
- # Transformer decoder.
- # Learnable positional embedding for the transformer's decoder (in the style of DETR object queries).
- self.decoder_pos_embed = nn.Embedding(config.chunk_size, config.dim_model)
-
- # Final action regression head on the output of the transformer's decoder.
- self.action_head = nn.Linear(config.dim_model, self.config.action_feature.shape[0])
-
- self._reset_parameters()
-
- def _reset_parameters(self):
- """Xavier-uniform initialization of the transformer parameters as in the original code."""
- for p in chain(self.encoder.parameters(), self.decoder.parameters()):
- if p.dim() > 1:
- nn.init.xavier_uniform_(p)
-
- def forward(self, batch: dict[str, Tensor]) -> tuple[Tensor, tuple[Tensor, Tensor] | tuple[None, None]]:
- """A forward pass through the Action Chunking Transformer (with optional VAE encoder).
-
- `batch` should have the following structure:
- {
- [robot_state_feature] (optional): (B, state_dim) batch of robot states.
-
- [image_features]: (B, n_cameras, C, H, W) batch of images.
- AND/OR
- [env_state_feature]: (B, env_dim) batch of environment states.
-
- [action_feature] (optional, only if training with VAE): (B, chunk_size, action dim) batch of actions.
- }
-
- Returns:
- (B, chunk_size, action_dim) batch of action sequences
- Tuple containing the latent PDF's parameters (mean, log(σ²)) both as (B, L) tensors where L is the
- latent dimension.
- """
- if self.config.use_vae and self.training:
- assert ACTION in batch, (
- "actions must be provided when using the variational objective in training mode."
- )
-
- batch_size = batch[OBS_IMAGES][0].shape[0] if OBS_IMAGES in batch else batch[OBS_ENV_STATE].shape[0]
-
- # Prepare the latent for input to the transformer encoder.
- if self.config.use_vae and ACTION in batch and self.training:
- # Prepare the input to the VAE encoder: [cls, *joint_space_configuration, *action_sequence].
- cls_embed = einops.repeat(
- self.vae_encoder_cls_embed.weight, "1 d -> b 1 d", b=batch_size
- ) # (B, 1, D)
- if self.config.robot_state_feature:
- robot_state_embed = self.vae_encoder_robot_state_input_proj(batch[OBS_STATE])
- robot_state_embed = robot_state_embed.unsqueeze(1) # (B, 1, D)
- action_embed = self.vae_encoder_action_input_proj(batch[ACTION]) # (B, S, D)
-
- if self.config.robot_state_feature:
- vae_encoder_input = [cls_embed, robot_state_embed, action_embed] # (B, S+2, D)
- else:
- vae_encoder_input = [cls_embed, action_embed]
- vae_encoder_input = torch.cat(vae_encoder_input, axis=1)
-
- # Prepare fixed positional embedding.
- # Note: detach() shouldn't be necessary but leaving it the same as the original code just in case.
- pos_embed = self.vae_encoder_pos_enc.clone().detach() # (1, S+2, D)
-
- # Prepare key padding mask for the transformer encoder. We have 1 or 2 extra tokens at the start of the
- # sequence, depending on whether we use the input states or not (cls and robot state).
- # False means not a padding token.
- cls_joint_is_pad = torch.full(
- (batch_size, 2 if self.config.robot_state_feature else 1),
- False,
- device=batch[OBS_STATE].device,
- )
- key_padding_mask = torch.cat(
- [cls_joint_is_pad, batch["action_is_pad"]], axis=1
- ) # (bs, seq+1 or 2)
-
- # Forward pass through VAE encoder to get the latent PDF parameters.
- cls_token_out = self.vae_encoder(
- vae_encoder_input.permute(1, 0, 2),
- pos_embed=pos_embed.permute(1, 0, 2),
- key_padding_mask=key_padding_mask,
- )[0] # select the class token, with shape (B, D)
- latent_pdf_params = self.vae_encoder_latent_output_proj(cls_token_out)
- mu = latent_pdf_params[:, : self.config.latent_dim]
- # This is 2log(sigma). Done this way to match the original implementation.
- log_sigma_x2 = latent_pdf_params[:, self.config.latent_dim :]
-
- # Sample the latent with the reparameterization trick.
- latent_sample = mu + log_sigma_x2.div(2).exp() * torch.randn_like(mu)
- else:
- # When not using the VAE encoder, we set the latent to be all zeros.
- mu = log_sigma_x2 = None
- # TODO(rcadene, alexander-soare): remove call to `.to` to speedup forward ; precompute and use buffer
- latent_sample = torch.zeros([batch_size, self.config.latent_dim], dtype=torch.float32).to(
- batch[OBS_STATE].device
- )
-
- # Prepare transformer encoder inputs.
- encoder_in_tokens = [self.encoder_latent_input_proj(latent_sample)]
- encoder_in_pos_embed = list(self.encoder_1d_feature_pos_embed.weight.unsqueeze(1))
- # Robot state token.
- if self.config.robot_state_feature:
- encoder_in_tokens.append(self.encoder_robot_state_input_proj(batch[OBS_STATE]))
- # Environment state token.
- if self.config.env_state_feature:
- encoder_in_tokens.append(self.encoder_env_state_input_proj(batch[OBS_ENV_STATE]))
-
- if self.config.image_features:
- # For a list of images, the H and W may vary but H*W is constant.
- # NOTE: If modifying this section, verify on MPS devices that
- # gradients remain stable (no explosions or NaNs).
- for img in batch[OBS_IMAGES]:
- cam_features = self.backbone(img)["feature_map"]
- cam_pos_embed = self.encoder_cam_feat_pos_embed(cam_features).to(dtype=cam_features.dtype)
- cam_features = self.encoder_img_feat_input_proj(cam_features)
-
- # Rearrange features to (sequence, batch, dim).
- cam_features = einops.rearrange(cam_features, "b c h w -> (h w) b c")
- cam_pos_embed = einops.rearrange(cam_pos_embed, "b c h w -> (h w) b c")
-
- # Extend immediately instead of accumulating and concatenating
- # Convert to list to extend properly
- encoder_in_tokens.extend(list(cam_features))
- encoder_in_pos_embed.extend(list(cam_pos_embed))
-
- # Stack all tokens along the sequence dimension.
- encoder_in_tokens = torch.stack(encoder_in_tokens, axis=0)
- encoder_in_pos_embed = torch.stack(encoder_in_pos_embed, axis=0)
-
- # Forward pass through the transformer modules.
- encoder_out = self.encoder(encoder_in_tokens, pos_embed=encoder_in_pos_embed)
- # TODO(rcadene, alexander-soare): remove call to `device` ; precompute and use buffer
- decoder_in = torch.zeros(
- (self.config.chunk_size, batch_size, self.config.dim_model),
- dtype=encoder_in_pos_embed.dtype,
- device=encoder_in_pos_embed.device,
- )
- decoder_out = self.decoder(
- decoder_in,
- encoder_out,
- encoder_pos_embed=encoder_in_pos_embed,
- decoder_pos_embed=self.decoder_pos_embed.weight.unsqueeze(1),
- )
-
- # Move back to (B, S, C).
- decoder_out = decoder_out.transpose(0, 1)
-
- actions = self.action_head(decoder_out)
-
- return actions, (mu, log_sigma_x2)
-
-
-class ACTEncoder(nn.Module):
- """Convenience module for running multiple encoder layers, maybe followed by normalization."""
-
- def __init__(self, config: ACTConfig, is_vae_encoder: bool = False):
- super().__init__()
- self.is_vae_encoder = is_vae_encoder
- num_layers = config.n_vae_encoder_layers if self.is_vae_encoder else config.n_encoder_layers
- self.layers = nn.ModuleList([ACTEncoderLayer(config) for _ in range(num_layers)])
- self.norm = nn.LayerNorm(config.dim_model) if config.pre_norm else nn.Identity()
-
- def forward(
- self, x: Tensor, pos_embed: Tensor | None = None, key_padding_mask: Tensor | None = None
- ) -> Tensor:
- for layer in self.layers:
- x = layer(x, pos_embed=pos_embed, key_padding_mask=key_padding_mask)
- x = self.norm(x)
- return x
-
-
-class ACTEncoderLayer(nn.Module):
- def __init__(self, config: ACTConfig):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(config.dim_model, config.n_heads, dropout=config.dropout)
-
- # Feed forward layers.
- self.linear1 = nn.Linear(config.dim_model, config.dim_feedforward)
- self.dropout = nn.Dropout(config.dropout)
- self.linear2 = nn.Linear(config.dim_feedforward, config.dim_model)
-
- self.norm1 = nn.LayerNorm(config.dim_model)
- self.norm2 = nn.LayerNorm(config.dim_model)
- self.dropout1 = nn.Dropout(config.dropout)
- self.dropout2 = nn.Dropout(config.dropout)
-
- self.activation = get_activation_fn(config.feedforward_activation)
- self.pre_norm = config.pre_norm
-
- def forward(self, x, pos_embed: Tensor | None = None, key_padding_mask: Tensor | None = None) -> Tensor:
- skip = x
- if self.pre_norm:
- x = self.norm1(x)
- q = k = x if pos_embed is None else x + pos_embed
- x = self.self_attn(q, k, value=x, key_padding_mask=key_padding_mask)
- x = x[0] # note: [0] to select just the output, not the attention weights
- x = skip + self.dropout1(x)
- if self.pre_norm:
- skip = x
- x = self.norm2(x)
- else:
- x = self.norm1(x)
- skip = x
- x = self.linear2(self.dropout(self.activation(self.linear1(x))))
- x = skip + self.dropout2(x)
- if not self.pre_norm:
- x = self.norm2(x)
- return x
-
-
-class ACTDecoder(nn.Module):
- def __init__(self, config: ACTConfig):
- """Convenience module for running multiple decoder layers followed by normalization."""
- super().__init__()
- self.layers = nn.ModuleList([ACTDecoderLayer(config) for _ in range(config.n_decoder_layers)])
- self.norm = nn.LayerNorm(config.dim_model)
-
- def forward(
- self,
- x: Tensor,
- encoder_out: Tensor,
- decoder_pos_embed: Tensor | None = None,
- encoder_pos_embed: Tensor | None = None,
- ) -> Tensor:
- for layer in self.layers:
- x = layer(
- x, encoder_out, decoder_pos_embed=decoder_pos_embed, encoder_pos_embed=encoder_pos_embed
- )
- if self.norm is not None:
- x = self.norm(x)
- return x
-
-
-class ACTDecoderLayer(nn.Module):
- def __init__(self, config: ACTConfig):
- super().__init__()
- self.self_attn = nn.MultiheadAttention(config.dim_model, config.n_heads, dropout=config.dropout)
- self.multihead_attn = nn.MultiheadAttention(config.dim_model, config.n_heads, dropout=config.dropout)
-
- # Feed forward layers.
- self.linear1 = nn.Linear(config.dim_model, config.dim_feedforward)
- self.dropout = nn.Dropout(config.dropout)
- self.linear2 = nn.Linear(config.dim_feedforward, config.dim_model)
-
- self.norm1 = nn.LayerNorm(config.dim_model)
- self.norm2 = nn.LayerNorm(config.dim_model)
- self.norm3 = nn.LayerNorm(config.dim_model)
- self.dropout1 = nn.Dropout(config.dropout)
- self.dropout2 = nn.Dropout(config.dropout)
- self.dropout3 = nn.Dropout(config.dropout)
-
- self.activation = get_activation_fn(config.feedforward_activation)
- self.pre_norm = config.pre_norm
-
- def maybe_add_pos_embed(self, tensor: Tensor, pos_embed: Tensor | None) -> Tensor:
- return tensor if pos_embed is None else tensor + pos_embed
-
- def forward(
- self,
- x: Tensor,
- encoder_out: Tensor,
- decoder_pos_embed: Tensor | None = None,
- encoder_pos_embed: Tensor | None = None,
- ) -> Tensor:
- """
- Args:
- x: (Decoder Sequence, Batch, Channel) tensor of input tokens.
- encoder_out: (Encoder Sequence, B, C) output features from the last layer of the encoder we are
- cross-attending with.
- encoder_pos_embed: (ES, 1, C) positional embedding for keys (from the encoder).
- decoder_pos_embed: (DS, 1, C) positional embedding for the queries (from the decoder).
- Returns:
- (DS, B, C) tensor of decoder output features.
- """
- skip = x
- if self.pre_norm:
- x = self.norm1(x)
- q = k = self.maybe_add_pos_embed(x, decoder_pos_embed)
- x = self.self_attn(q, k, value=x)[0] # select just the output, not the attention weights
- x = skip + self.dropout1(x)
- if self.pre_norm:
- skip = x
- x = self.norm2(x)
- else:
- x = self.norm1(x)
- skip = x
- x = self.multihead_attn(
- query=self.maybe_add_pos_embed(x, decoder_pos_embed),
- key=self.maybe_add_pos_embed(encoder_out, encoder_pos_embed),
- value=encoder_out,
- )[0] # select just the output, not the attention weights
- x = skip + self.dropout2(x)
- if self.pre_norm:
- skip = x
- x = self.norm3(x)
- else:
- x = self.norm2(x)
- skip = x
- x = self.linear2(self.dropout(self.activation(self.linear1(x))))
- x = skip + self.dropout3(x)
- if not self.pre_norm:
- x = self.norm3(x)
- return x
-
-
-def create_sinusoidal_pos_embedding(num_positions: int, dimension: int) -> Tensor:
- """1D sinusoidal positional embeddings as in Attention is All You Need.
-
- Args:
- num_positions: Number of token positions required.
- Returns: (num_positions, dimension) position embeddings (the first dimension indexes positions).
-
- """
-
- def get_position_angle_vec(position):
- return [position / np.power(10000, 2 * (hid_j // 2) / dimension) for hid_j in range(dimension)]
-
- sinusoid_table = np.array([get_position_angle_vec(pos_i) for pos_i in range(num_positions)])
- sinusoid_table[:, 0::2] = np.sin(sinusoid_table[:, 0::2]) # dim 2i
- sinusoid_table[:, 1::2] = np.cos(sinusoid_table[:, 1::2]) # dim 2i+1
- return torch.from_numpy(sinusoid_table).float()
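-
-# Sanity check (illustrative): position 0 maps to angle 0 in every dimension, so
-# the first row of the returned table is [sin(0), cos(0), ...] = [0, 1, 0, 1, ...].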
-
-
-class ACTSinusoidalPositionEmbedding2d(nn.Module):
- """2D sinusoidal positional embeddings similar to what's presented in Attention Is All You Need.
-
- The variation is that the position indices are normalized in [0, 2π] (not quite: the lower bound is 1/H
- for the vertical direction, and 1/W for the horizontal direction).
- """
-
- def __init__(self, dimension: int):
- """
- Args:
- dimension: The desired dimension of the embeddings.
- """
- super().__init__()
- self.dimension = dimension
- self._two_pi = 2 * math.pi
- self._eps = 1e-6
- # Inverse "common ratio" for the geometric progression in sinusoid frequencies.
- self._temperature = 10000
-
- def forward(self, x: Tensor) -> Tensor:
- """
- Args:
- x: A (B, C, H, W) batch of 2D feature map to generate the embeddings for.
- Returns:
- A (1, C, H, W) batch of corresponding sinusoidal positional embeddings.
- """
- not_mask = torch.ones_like(x[0, :1]) # (1, H, W)
- # Note: These are like range(1, H+1) and range(1, W+1) respectively, but in most implementations
- # they would be range(0, H) and range(0, W). Keeping it as is to match the original code.
- y_range = not_mask.cumsum(1, dtype=torch.float32)
- x_range = not_mask.cumsum(2, dtype=torch.float32)
-
- # "Normalize" the position index such that it ranges in [0, 2π].
- # Note: Adding epsilon on the denominator should not be needed as all values of y_range and x_range
- # are non-zero by construction. This is an artifact of the original code.
- y_range = y_range / (y_range[:, -1:, :] + self._eps) * self._two_pi
- x_range = x_range / (x_range[:, :, -1:] + self._eps) * self._two_pi
-
- inverse_frequency = self._temperature ** (
- 2 * (torch.arange(self.dimension, dtype=torch.float32, device=x.device) // 2) / self.dimension
- )
-
- x_range = x_range.unsqueeze(-1) / inverse_frequency # (1, H, W, 1)
- y_range = y_range.unsqueeze(-1) / inverse_frequency # (1, H, W, 1)
-
- # Note: this stack then flatten operation results in interleaved sine and cosine terms.
- # pos_embed_x and pos_embed_y are (1, H, W, C // 2).
- pos_embed_x = torch.stack((x_range[..., 0::2].sin(), x_range[..., 1::2].cos()), dim=-1).flatten(3)
- pos_embed_y = torch.stack((y_range[..., 0::2].sin(), y_range[..., 1::2].cos()), dim=-1).flatten(3)
- pos_embed = torch.cat((pos_embed_y, pos_embed_x), dim=3).permute(0, 3, 1, 2) # (1, C, H, W)
-
- return pos_embed
-
-
-def get_activation_fn(activation: str) -> Callable:
- """Return an activation function given a string."""
- if activation == "relu":
- return F.relu
- if activation == "gelu":
- return F.gelu
- if activation == "glu":
- return F.glu
- raise RuntimeError(f"activation should be relu/gelu/glu, not {activation}.")
diff --git a/lerobot/src/lerobot/policies/act/processor_act.py b/lerobot/src/lerobot/policies/act/processor_act.py
deleted file mode 100644
index 1dedf8a99dc60ed2bafe862646e69f77021bcf9e..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/policies/act/processor_act.py
+++ /dev/null
@@ -1,85 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 Tony Z. Zhao and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from typing import Any
-
-import torch
-
-from lerobot.policies.act.configuration_act import ACTConfig
-from lerobot.processor import (
- AddBatchDimensionProcessorStep,
- DeviceProcessorStep,
- NormalizerProcessorStep,
- PolicyAction,
- PolicyProcessorPipeline,
- RenameObservationsProcessorStep,
- UnnormalizerProcessorStep,
-)
-from lerobot.processor.converters import policy_action_to_transition, transition_to_policy_action
-from lerobot.utils.constants import POLICY_POSTPROCESSOR_DEFAULT_NAME, POLICY_PREPROCESSOR_DEFAULT_NAME
-
-
-def make_act_pre_post_processors(
- config: ACTConfig,
- dataset_stats: dict[str, dict[str, torch.Tensor]] | None = None,
-) -> tuple[
- PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- PolicyProcessorPipeline[PolicyAction, PolicyAction],
-]:
- """Creates the pre- and post-processing pipelines for the ACT policy.
-
- The pre-processing pipeline handles normalization, batching, and device placement for the model inputs.
- The post-processing pipeline handles unnormalization and moves the model outputs back to the CPU.
-
- Args:
- config (ACTConfig): The ACT policy configuration object.
- dataset_stats (dict[str, dict[str, torch.Tensor]] | None): A dictionary containing dataset
- statistics (e.g., mean and std) used for normalization. Defaults to None.
-
- Returns:
- tuple[PolicyProcessorPipeline[dict[str, Any], dict[str, Any]], PolicyProcessorPipeline[PolicyAction, PolicyAction]]: A tuple containing the
- pre-processor pipeline and the post-processor pipeline.
- """
-
- input_steps = [
- RenameObservationsProcessorStep(rename_map={}),
- AddBatchDimensionProcessorStep(),
- DeviceProcessorStep(device=config.device),
- NormalizerProcessorStep(
- features={**config.input_features, **config.output_features},
- norm_map=config.normalization_mapping,
- stats=dataset_stats,
- device=config.device,
- ),
- ]
- output_steps = [
- UnnormalizerProcessorStep(
- features=config.output_features, norm_map=config.normalization_mapping, stats=dataset_stats
- ),
- DeviceProcessorStep(device="cpu"),
- ]
-
- return (
- PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
- steps=input_steps,
- name=POLICY_PREPROCESSOR_DEFAULT_NAME,
- ),
- PolicyProcessorPipeline[PolicyAction, PolicyAction](
- steps=output_steps,
- name=POLICY_POSTPROCESSOR_DEFAULT_NAME,
- to_transition=policy_action_to_transition,
- to_output=transition_to_policy_action,
- ),
- )
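-
-
-# Minimal usage sketch (illustrative; `cfg`, `stats`, `raw_batch`, and
-# `policy_action` are hypothetical placeholders):
-#
-# preprocessor, postprocessor = make_act_pre_post_processors(cfg, stats)
-# model_inputs = preprocessor(raw_batch)  # rename, add batch dim, to device, normalize
-# action = postprocessor(policy_action)   # unnormalize, move back to CPU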
diff --git a/lerobot/src/lerobot/policies/diffusion/configuration_diffusion.py b/lerobot/src/lerobot/policies/diffusion/configuration_diffusion.py
deleted file mode 100644
index 0aab8040daa399926107bb14e69edddce3f2544c..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/policies/diffusion/configuration_diffusion.py
+++ /dev/null
@@ -1,238 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 Columbia Artificial Intelligence, Robotics Lab,
-# and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass, field
-
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.configs.types import NormalizationMode
-from lerobot.optim.optimizers import AdamConfig
-from lerobot.optim.schedulers import DiffuserSchedulerConfig
-
-
-@PreTrainedConfig.register_subclass("diffusion")
-@dataclass
-class DiffusionConfig(PreTrainedConfig):
- """Configuration class for DiffusionPolicy.
-
- Defaults are configured for training with PushT providing proprioceptive and single camera observations.
-
- The parameters you will most likely need to change are the ones which depend on the environment / sensors.
- Those are: `input_shapes` and `output_shapes`.
-
- Notes on the inputs and outputs:
- - "observation.state" is required as an input key.
- - Either:
- - At least one key starting with "observation.image" is required as an input.
- AND/OR
- - The key "observation.environment_state" is required as input.
- - If there are multiple keys beginning with "observation.image" they are treated as multiple camera
- views. Right now we only support all images having the same shape.
- - "action" is required as an output key.
-
- Args:
- n_obs_steps: Number of environment steps worth of observations to pass to the policy (takes the
- current step and additional steps going back).
- horizon: Diffusion model action prediction size as detailed in `DiffusionPolicy.select_action`.
- n_action_steps: The number of action steps to run in the environment for one invocation of the policy.
- See `DiffusionPolicy.select_action` for more details.
- input_shapes: A dictionary defining the shapes of the input data for the policy. The key represents
- the input data name, and the value is a list indicating the dimensions of the corresponding data.
- For example, "observation.image" refers to an input from a camera with dimensions [3, 96, 96],
- indicating it has three color channels and 96x96 resolution. Importantly, `input_shapes` doesn't
- include batch dimension or temporal dimension.
- output_shapes: A dictionary defining the shapes of the output data for the policy. The key represents
- the output data name, and the value is a list indicating the dimensions of the corresponding data.
- For example, "action" refers to an output shape of [14], indicating 14-dimensional actions.
- Importantly, `output_shapes` doesn't include batch dimension or temporal dimension.
- input_normalization_modes: A dictionary with key representing the modality (e.g. "observation.state"),
- and the value specifies the normalization mode to apply. The two available modes are "mean_std"
- which subtracts the mean and divides by the standard deviation and "min_max" which rescale in a
- [-1, 1] range.
-        output_normalization_modes: Similar dictionary as `input_normalization_modes`, but used to unnormalize to the
- original scale. Note that this is also used for normalizing the training targets.
- vision_backbone: Name of the torchvision resnet backbone to use for encoding images.
- crop_shape: (H, W) shape to crop images to as a preprocessing step for the vision backbone. Must fit
- within the image size. If None, no cropping is done.
- crop_is_random: Whether the crop should be random at training time (it's always a center crop in eval
- mode).
- pretrained_backbone_weights: Pretrained weights from torchvision to initialize the backbone.
- `None` means no pretrained weights.
- use_group_norm: Whether to replace batch normalization with group normalization in the backbone.
- The group sizes are set to be about 16 (to be precise, feature_dim // 16).
- spatial_softmax_num_keypoints: Number of keypoints for SpatialSoftmax.
-        use_separate_rgb_encoder_per_camera: Whether to use a separate RGB encoder for each camera view.
- down_dims: Feature dimension for each stage of temporal downsampling in the diffusion modeling Unet.
- You may provide a variable number of dimensions, therefore also controlling the degree of
- downsampling.
- kernel_size: The convolutional kernel size of the diffusion modeling Unet.
- n_groups: Number of groups used in the group norm of the Unet's convolutional blocks.
- diffusion_step_embed_dim: The Unet is conditioned on the diffusion timestep via a small non-linear
- network. This is the output dimension of that network, i.e., the embedding dimension.
- use_film_scale_modulation: FiLM (https://huggingface.co/papers/1709.07871) is used for the Unet conditioning.
-            Bias modulation is used by default, while this parameter indicates whether to also use scale
- modulation.
- noise_scheduler_type: Name of the noise scheduler to use. Supported options: ["DDPM", "DDIM"].
- num_train_timesteps: Number of diffusion steps for the forward diffusion schedule.
- beta_schedule: Name of the diffusion beta schedule as per DDPMScheduler from Hugging Face diffusers.
- beta_start: Beta value for the first forward-diffusion step.
- beta_end: Beta value for the last forward-diffusion step.
- prediction_type: The type of prediction that the diffusion modeling Unet makes. Choose from "epsilon"
- or "sample". These have equivalent outcomes from a latent variable modeling perspective, but
- "epsilon" has been shown to work better in many deep neural network settings.
- clip_sample: Whether to clip the sample to [-`clip_sample_range`, +`clip_sample_range`] for each
- denoising step at inference time. WARNING: you will need to make sure your action-space is
- normalized to fit within this range.
- clip_sample_range: The magnitude of the clipping range as described above.
- num_inference_steps: Number of reverse diffusion steps to use at inference time (steps are evenly
- spaced). If not provided, this defaults to be the same as `num_train_timesteps`.
- do_mask_loss_for_padding: Whether to mask the loss when there are copy-padded actions. See
- `LeRobotDataset` and `load_previous_and_future_frames` for more information. Note, this defaults
- to False as the original Diffusion Policy implementation does the same.
- """
-
- # Inputs / output structure.
- n_obs_steps: int = 2
- horizon: int = 16
- n_action_steps: int = 8
-
- normalization_mapping: dict[str, NormalizationMode] = field(
- default_factory=lambda: {
- "VISUAL": NormalizationMode.MEAN_STD,
- "STATE": NormalizationMode.MIN_MAX,
- "ACTION": NormalizationMode.MIN_MAX,
- }
- )
-
- # The original implementation doesn't sample frames for the last 7 steps,
- # which avoids excessive padding and leads to improved training results.
- drop_n_last_frames: int = 7 # horizon - n_action_steps - n_obs_steps + 1
-
- # Architecture / modeling.
- # Vision backbone.
- vision_backbone: str = "resnet18"
- crop_shape: tuple[int, int] | None = (84, 84)
- crop_is_random: bool = True
- pretrained_backbone_weights: str | None = None
- use_group_norm: bool = True
- spatial_softmax_num_keypoints: int = 32
- use_separate_rgb_encoder_per_camera: bool = False
- # Unet.
- down_dims: tuple[int, ...] = (512, 1024, 2048)
- kernel_size: int = 5
- n_groups: int = 8
- diffusion_step_embed_dim: int = 128
- use_film_scale_modulation: bool = True
- # Noise scheduler.
- noise_scheduler_type: str = "DDPM"
- num_train_timesteps: int = 100
- beta_schedule: str = "squaredcos_cap_v2"
- beta_start: float = 0.0001
- beta_end: float = 0.02
- prediction_type: str = "epsilon"
- clip_sample: bool = True
- clip_sample_range: float = 1.0
-
- # Inference
- num_inference_steps: int | None = None
-
- # Loss computation
- do_mask_loss_for_padding: bool = False
-
- # Training presets
- optimizer_lr: float = 1e-4
- optimizer_betas: tuple = (0.95, 0.999)
- optimizer_eps: float = 1e-8
- optimizer_weight_decay: float = 1e-6
- scheduler_name: str = "cosine"
- scheduler_warmup_steps: int = 500
-
- def __post_init__(self):
- super().__post_init__()
-
-        # Input validation (not exhaustive).
- if not self.vision_backbone.startswith("resnet"):
- raise ValueError(
- f"`vision_backbone` must be one of the ResNet variants. Got {self.vision_backbone}."
- )
-
- supported_prediction_types = ["epsilon", "sample"]
- if self.prediction_type not in supported_prediction_types:
- raise ValueError(
- f"`prediction_type` must be one of {supported_prediction_types}. Got {self.prediction_type}."
- )
- supported_noise_schedulers = ["DDPM", "DDIM"]
- if self.noise_scheduler_type not in supported_noise_schedulers:
- raise ValueError(
- f"`noise_scheduler_type` must be one of {supported_noise_schedulers}. "
- f"Got {self.noise_scheduler_type}."
- )
-
- # Check that the horizon size and U-Net downsampling is compatible.
- # U-Net downsamples by 2 with each stage.
- downsampling_factor = 2 ** len(self.down_dims)
- if self.horizon % downsampling_factor != 0:
- raise ValueError(
- "The horizon should be an integer multiple of the downsampling factor (which is determined "
- f"by `len(down_dims)`). Got {self.horizon=} and {self.down_dims=}"
- )
-
- def get_optimizer_preset(self) -> AdamConfig:
- return AdamConfig(
- lr=self.optimizer_lr,
- betas=self.optimizer_betas,
- eps=self.optimizer_eps,
- weight_decay=self.optimizer_weight_decay,
- )
-
- def get_scheduler_preset(self) -> DiffuserSchedulerConfig:
- return DiffuserSchedulerConfig(
- name=self.scheduler_name,
- num_warmup_steps=self.scheduler_warmup_steps,
- )
-
- def validate_features(self) -> None:
- if len(self.image_features) == 0 and self.env_state_feature is None:
- raise ValueError("You must provide at least one image or the environment state among the inputs.")
-
- if self.crop_shape is not None:
- for key, image_ft in self.image_features.items():
- if self.crop_shape[0] > image_ft.shape[1] or self.crop_shape[1] > image_ft.shape[2]:
- raise ValueError(
- f"`crop_shape` should fit within the images shapes. Got {self.crop_shape} "
- f"for `crop_shape` and {image_ft.shape} for "
- f"`{key}`."
- )
-
- # Check that all input images have the same shape.
- if len(self.image_features) > 0:
- first_image_key, first_image_ft = next(iter(self.image_features.items()))
- for key, image_ft in self.image_features.items():
- if image_ft.shape != first_image_ft.shape:
- raise ValueError(
- f"`{key}` does not match `{first_image_key}`, but we expect all image shapes to match."
- )
-
- @property
- def observation_delta_indices(self) -> list:
- return list(range(1 - self.n_obs_steps, 1))
-
- @property
- def action_delta_indices(self) -> list:
- return list(range(1 - self.n_obs_steps, 1 - self.n_obs_steps + self.horizon))
-
- @property
- def reward_delta_indices(self) -> None:
- return None
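-
-# Editor's worked example (illustrative, not part of the original file; assumes the
-# base config's defaults suffice): with n_obs_steps=2, horizon=16, n_action_steps=8,
-#   DiffusionConfig().observation_delta_indices  # -> [-1, 0]
-#   DiffusionConfig().action_delta_indices       # -> [-1, 0, 1, ..., 14]
-#   DiffusionConfig().drop_n_last_frames         # -> 7 == 16 - 8 - 2 + 1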
diff --git a/lerobot/src/lerobot/policies/diffusion/modeling_diffusion.py b/lerobot/src/lerobot/policies/diffusion/modeling_diffusion.py
deleted file mode 100644
index d3f22ae991a9b38c4ac0e13cd024fe14c731c58a..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/policies/diffusion/modeling_diffusion.py
+++ /dev/null
@@ -1,764 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 Columbia Artificial Intelligence, Robotics Lab,
-# and The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Diffusion Policy as per "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion"
-
-TODO(alexander-soare):
- - Remove reliance on diffusers for DDPMScheduler and LR scheduler.
-"""
-
-import math
-from collections import deque
-from collections.abc import Callable
-
-import einops
-import numpy as np
-import torch
-import torch.nn.functional as F # noqa: N812
-import torchvision
-from diffusers.schedulers.scheduling_ddim import DDIMScheduler
-from diffusers.schedulers.scheduling_ddpm import DDPMScheduler
-from torch import Tensor, nn
-
-from lerobot.policies.diffusion.configuration_diffusion import DiffusionConfig
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.policies.utils import (
- get_device_from_parameters,
- get_dtype_from_parameters,
- get_output_shape,
- populate_queues,
-)
-from lerobot.utils.constants import ACTION, OBS_ENV_STATE, OBS_IMAGES, OBS_STATE
-
-
-class DiffusionPolicy(PreTrainedPolicy):
- """
- Diffusion Policy as per "Diffusion Policy: Visuomotor Policy Learning via Action Diffusion"
- (paper: https://huggingface.co/papers/2303.04137, code: https://github.com/real-stanford/diffusion_policy).
- """
-
- config_class = DiffusionConfig
- name = "diffusion"
-
- def __init__(
- self,
- config: DiffusionConfig,
- **kwargs,
- ):
- """
- Args:
-            config: Policy configuration class instance. Note that dataset statistics for
-                normalization are handled by the processor pipeline, not by the policy itself.
- """
- super().__init__(config)
- config.validate_features()
- self.config = config
-
- # queues are populated during rollout of the policy, they contain the n latest observations and actions
- self._queues = None
-
- self.diffusion = DiffusionModel(config)
-
- self.reset()
-
- def get_optim_params(self) -> dict:
- return self.diffusion.parameters()
-
- def reset(self):
- """Clear observation and action queues. Should be called on `env.reset()`"""
- self._queues = {
- OBS_STATE: deque(maxlen=self.config.n_obs_steps),
- ACTION: deque(maxlen=self.config.n_action_steps),
- }
- if self.config.image_features:
- self._queues[OBS_IMAGES] = deque(maxlen=self.config.n_obs_steps)
- if self.config.env_state_feature:
- self._queues[OBS_ENV_STATE] = deque(maxlen=self.config.n_obs_steps)
-
- @torch.no_grad()
- def predict_action_chunk(self, batch: dict[str, Tensor], noise: Tensor | None = None) -> Tensor:
- """Predict a chunk of actions given environment observations."""
- # stack n latest observations from the queue
- batch = {k: torch.stack(list(self._queues[k]), dim=1) for k in batch if k in self._queues}
- actions = self.diffusion.generate_actions(batch, noise=noise)
-
- return actions
-
- @torch.no_grad()
- def select_action(self, batch: dict[str, Tensor], noise: Tensor | None = None) -> Tensor:
- """Select a single action given environment observations.
-
- This method handles caching a history of observations and an action trajectory generated by the
- underlying diffusion model. Here's how it works:
- - `n_obs_steps` steps worth of observations are cached (for the first steps, the observation is
- copied `n_obs_steps` times to fill the cache).
- - The diffusion model generates `horizon` steps worth of actions.
- - `n_action_steps` worth of actions are actually kept for execution, starting from the current step.
- Schematically this looks like:
- ----------------------------------------------------------------------------------------------
- (legend: o = n_obs_steps, h = horizon, a = n_action_steps)
- |timestep | n-o+1 | n-o+2 | ..... | n | ..... | n+a-1 | n+a | ..... | n-o+h |
- |observation is used | YES | YES | YES | YES | NO | NO | NO | NO | NO |
- |action is generated | YES | YES | YES | YES | YES | YES | YES | YES | YES |
- |action is used | NO | NO | NO | YES | YES | YES | NO | NO | NO |
- ----------------------------------------------------------------------------------------------
- Note that this means we require: `n_action_steps <= horizon - n_obs_steps + 1`. Also, note that
-        "horizon" may not be the best name to describe what the variable actually means, because this period is
- actually measured from the first observation which (if `n_obs_steps` > 1) happened in the past.
- """
- # NOTE: for offline evaluation, we have action in the batch, so we need to pop it out
- if ACTION in batch:
- batch.pop(ACTION)
-
- if self.config.image_features:
- batch = dict(batch) # shallow copy so that adding a key doesn't modify the original
- batch[OBS_IMAGES] = torch.stack([batch[key] for key in self.config.image_features], dim=-4)
- # NOTE: It's important that this happens after stacking the images into a single key.
- self._queues = populate_queues(self._queues, batch)
-
- if len(self._queues[ACTION]) == 0:
- actions = self.predict_action_chunk(batch, noise=noise)
- self._queues[ACTION].extend(actions.transpose(0, 1))
-
- action = self._queues[ACTION].popleft()
- return action
-
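-    # Editor's note (illustrative, not part of the original file): one diffusion call
-    # is amortized over `n_action_steps` environment steps. Sketch, where `obs_batch`
-    # and `num_env_steps` are hypothetical stand-ins for a real observation pipeline:
-    #   policy.reset()
-    #   for _ in range(num_env_steps):
-    #       action = policy.select_action(obs_batch)  # the model only runs when the queue empties
-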
- def forward(self, batch: dict[str, Tensor]) -> tuple[Tensor, None]:
- """Run the batch through the model and compute the loss for training or validation."""
- if self.config.image_features:
- batch = dict(batch) # shallow copy so that adding a key doesn't modify the original
- batch[OBS_IMAGES] = torch.stack([batch[key] for key in self.config.image_features], dim=-4)
- loss = self.diffusion.compute_loss(batch)
- # no output_dict so returning None
- return loss, None
-
-
-def _make_noise_scheduler(name: str, **kwargs: dict) -> DDPMScheduler | DDIMScheduler:
- """
- Factory for noise scheduler instances of the requested type. All kwargs are passed
- to the scheduler.
- """
- if name == "DDPM":
- return DDPMScheduler(**kwargs)
- elif name == "DDIM":
- return DDIMScheduler(**kwargs)
- else:
- raise ValueError(f"Unsupported noise scheduler type {name}")
-
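-# Editor's sketch (illustrative, not part of the original file): mirrors the call in
-# DiffusionModel.__init__ below, e.g.
-#   _make_noise_scheduler("DDIM", num_train_timesteps=100,
-#                         beta_schedule="squaredcos_cap_v2", prediction_type="epsilon")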
-
-class DiffusionModel(nn.Module):
- def __init__(self, config: DiffusionConfig):
- super().__init__()
- self.config = config
-
- # Build observation encoders (depending on which observations are provided).
- global_cond_dim = self.config.robot_state_feature.shape[0]
- if self.config.image_features:
- num_images = len(self.config.image_features)
- if self.config.use_separate_rgb_encoder_per_camera:
- encoders = [DiffusionRgbEncoder(config) for _ in range(num_images)]
- self.rgb_encoder = nn.ModuleList(encoders)
- global_cond_dim += encoders[0].feature_dim * num_images
- else:
- self.rgb_encoder = DiffusionRgbEncoder(config)
- global_cond_dim += self.rgb_encoder.feature_dim * num_images
- if self.config.env_state_feature:
- global_cond_dim += self.config.env_state_feature.shape[0]
-
- self.unet = DiffusionConditionalUnet1d(config, global_cond_dim=global_cond_dim * config.n_obs_steps)
-
- self.noise_scheduler = _make_noise_scheduler(
- config.noise_scheduler_type,
- num_train_timesteps=config.num_train_timesteps,
- beta_start=config.beta_start,
- beta_end=config.beta_end,
- beta_schedule=config.beta_schedule,
- clip_sample=config.clip_sample,
- clip_sample_range=config.clip_sample_range,
- prediction_type=config.prediction_type,
- )
-
- if config.num_inference_steps is None:
- self.num_inference_steps = self.noise_scheduler.config.num_train_timesteps
- else:
- self.num_inference_steps = config.num_inference_steps
-
- # ========= inference ============
- def conditional_sample(
- self,
- batch_size: int,
- global_cond: Tensor | None = None,
- generator: torch.Generator | None = None,
- noise: Tensor | None = None,
- ) -> Tensor:
- device = get_device_from_parameters(self)
- dtype = get_dtype_from_parameters(self)
-
- # Sample prior.
- sample = (
- noise
- if noise is not None
- else torch.randn(
- size=(batch_size, self.config.horizon, self.config.action_feature.shape[0]),
- dtype=dtype,
- device=device,
- generator=generator,
- )
- )
-
- self.noise_scheduler.set_timesteps(self.num_inference_steps)
-
- for t in self.noise_scheduler.timesteps:
- # Predict model output.
- model_output = self.unet(
- sample,
- torch.full(sample.shape[:1], t, dtype=torch.long, device=sample.device),
- global_cond=global_cond,
- )
-            # Compute the previous sample: x_t -> x_{t-1}
- sample = self.noise_scheduler.step(model_output, t, sample, generator=generator).prev_sample
-
- return sample
-
- def _prepare_global_conditioning(self, batch: dict[str, Tensor]) -> Tensor:
- """Encode image features and concatenate them all together along with the state vector."""
- batch_size, n_obs_steps = batch[OBS_STATE].shape[:2]
- global_cond_feats = [batch[OBS_STATE]]
- # Extract image features.
- if self.config.image_features:
- if self.config.use_separate_rgb_encoder_per_camera:
- # Combine batch and sequence dims while rearranging to make the camera index dimension first.
- images_per_camera = einops.rearrange(batch[OBS_IMAGES], "b s n ... -> n (b s) ...")
- img_features_list = torch.cat(
- [
- encoder(images)
- for encoder, images in zip(self.rgb_encoder, images_per_camera, strict=True)
- ]
- )
- # Separate batch and sequence dims back out. The camera index dim gets absorbed into the
- # feature dim (effectively concatenating the camera features).
- img_features = einops.rearrange(
- img_features_list, "(n b s) ... -> b s (n ...)", b=batch_size, s=n_obs_steps
- )
- else:
- # Combine batch, sequence, and "which camera" dims before passing to shared encoder.
- img_features = self.rgb_encoder(
- einops.rearrange(batch[OBS_IMAGES], "b s n ... -> (b s n) ...")
- )
- # Separate batch dim and sequence dim back out. The camera index dim gets absorbed into the
- # feature dim (effectively concatenating the camera features).
- img_features = einops.rearrange(
- img_features, "(b s n) ... -> b s (n ...)", b=batch_size, s=n_obs_steps
- )
- global_cond_feats.append(img_features)
-
- if self.config.env_state_feature:
- global_cond_feats.append(batch[OBS_ENV_STATE])
-
- # Concatenate features then flatten to (B, global_cond_dim).
- return torch.cat(global_cond_feats, dim=-1).flatten(start_dim=1)
-
- def generate_actions(self, batch: dict[str, Tensor], noise: Tensor | None = None) -> Tensor:
- """
- This function expects `batch` to have:
- {
- "observation.state": (B, n_obs_steps, state_dim)
-
- "observation.images": (B, n_obs_steps, num_cameras, C, H, W)
- AND/OR
- "observation.environment_state": (B, n_obs_steps, environment_dim)
- }
- """
- batch_size, n_obs_steps = batch[OBS_STATE].shape[:2]
- assert n_obs_steps == self.config.n_obs_steps
-
- # Encode image features and concatenate them all together along with the state vector.
- global_cond = self._prepare_global_conditioning(batch) # (B, global_cond_dim)
-
- # run sampling
- actions = self.conditional_sample(batch_size, global_cond=global_cond, noise=noise)
-
- # Extract `n_action_steps` steps worth of actions (from the current observation).
- start = n_obs_steps - 1
- end = start + self.config.n_action_steps
- actions = actions[:, start:end]
-
- return actions
-
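-    # Editor's worked example (illustrative, not part of the original file): with the
-    # defaults n_obs_steps=2, horizon=16, n_action_steps=8, the slice above keeps
-    # actions[:, 1:9] -- the 8 steps starting at the current observation.
-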
- def compute_loss(self, batch: dict[str, Tensor]) -> Tensor:
- """
- This function expects `batch` to have (at least):
- {
- "observation.state": (B, n_obs_steps, state_dim)
-
- "observation.images": (B, n_obs_steps, num_cameras, C, H, W)
- AND/OR
- "observation.environment_state": (B, n_obs_steps, environment_dim)
-
- "action": (B, horizon, action_dim)
- "action_is_pad": (B, horizon)
- }
- """
- # Input validation.
- assert set(batch).issuperset({OBS_STATE, ACTION, "action_is_pad"})
- assert OBS_IMAGES in batch or OBS_ENV_STATE in batch
- n_obs_steps = batch[OBS_STATE].shape[1]
- horizon = batch[ACTION].shape[1]
- assert horizon == self.config.horizon
- assert n_obs_steps == self.config.n_obs_steps
-
- # Encode image features and concatenate them all together along with the state vector.
- global_cond = self._prepare_global_conditioning(batch) # (B, global_cond_dim)
-
- # Forward diffusion.
- trajectory = batch[ACTION]
- # Sample noise to add to the trajectory.
- eps = torch.randn(trajectory.shape, device=trajectory.device)
- # Sample a random noising timestep for each item in the batch.
- timesteps = torch.randint(
- low=0,
- high=self.noise_scheduler.config.num_train_timesteps,
- size=(trajectory.shape[0],),
- device=trajectory.device,
- ).long()
- # Add noise to the clean trajectories according to the noise magnitude at each timestep.
- noisy_trajectory = self.noise_scheduler.add_noise(trajectory, eps, timesteps)
-
- # Run the denoising network (that might denoise the trajectory, or attempt to predict the noise).
- pred = self.unet(noisy_trajectory, timesteps, global_cond=global_cond)
-
- # Compute the loss.
- # The target is either the original trajectory, or the noise.
- if self.config.prediction_type == "epsilon":
- target = eps
- elif self.config.prediction_type == "sample":
- target = batch[ACTION]
- else:
- raise ValueError(f"Unsupported prediction type {self.config.prediction_type}")
-
- loss = F.mse_loss(pred, target, reduction="none")
-
- # Mask loss wherever the action is padded with copies (edges of the dataset trajectory).
- if self.config.do_mask_loss_for_padding:
- if "action_is_pad" not in batch:
- raise ValueError(
- "You need to provide 'action_is_pad' in the batch when "
- f"{self.config.do_mask_loss_for_padding=}."
- )
- in_episode_bound = ~batch["action_is_pad"]
- loss = loss * in_episode_bound.unsqueeze(-1)
-
- return loss.mean()
-
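-# Editor's note (illustrative, not part of the original file): under "epsilon"
-# prediction the network is trained to recover the sampled noise `eps` from
-# `noisy_trajectory`. Shapes involved: trajectory / eps / noisy_trajectory / pred are
-# all (B, horizon, action_dim); timesteps is (B,) integers in [0, num_train_timesteps).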
-
-class SpatialSoftmax(nn.Module):
- """
- Spatial Soft Argmax operation described in "Deep Spatial Autoencoders for Visuomotor Learning" by Finn et al.
- (https://huggingface.co/papers/1509.06113). A minimal port of the robomimic implementation.
-
- At a high level, this takes 2D feature maps (from a convnet/ViT) and returns the "center of mass"
- of activations of each channel, i.e., keypoints in the image space for the policy to focus on.
-
- Example: take feature maps of size (512x10x12). We generate a grid of normalized coordinates (10x12x2):
- -----------------------------------------------------
- | (-1., -1.) | (-0.82, -1.) | ... | (1., -1.) |
- | (-1., -0.78) | (-0.82, -0.78) | ... | (1., -0.78) |
- | ... | ... | ... | ... |
- | (-1., 1.) | (-0.82, 1.) | ... | (1., 1.) |
- -----------------------------------------------------
- This is achieved by applying channel-wise softmax over the activations (512x120) and computing the dot
- product with the coordinates (120x2) to get expected points of maximal activation (512x2).
-
- The example above results in 512 keypoints (corresponding to the 512 input channels). We can optionally
-    provide num_kp != None to control the number of keypoints. This is achieved by first applying a learnable
- linear mapping (in_channels, H, W) -> (num_kp, H, W).
- """
-
- def __init__(self, input_shape, num_kp=None):
- """
- Args:
- input_shape (list): (C, H, W) input feature map shape.
- num_kp (int): number of keypoints in output. If None, output will have the same number of channels as input.
- """
- super().__init__()
-
- assert len(input_shape) == 3
- self._in_c, self._in_h, self._in_w = input_shape
-
- if num_kp is not None:
- self.nets = torch.nn.Conv2d(self._in_c, num_kp, kernel_size=1)
- self._out_c = num_kp
- else:
- self.nets = None
- self._out_c = self._in_c
-
- # we could use torch.linspace directly but that seems to behave slightly differently than numpy
- # and causes a small degradation in pc_success of pre-trained models.
- pos_x, pos_y = np.meshgrid(np.linspace(-1.0, 1.0, self._in_w), np.linspace(-1.0, 1.0, self._in_h))
- pos_x = torch.from_numpy(pos_x.reshape(self._in_h * self._in_w, 1)).float()
- pos_y = torch.from_numpy(pos_y.reshape(self._in_h * self._in_w, 1)).float()
- # register as buffer so it's moved to the correct device.
- self.register_buffer("pos_grid", torch.cat([pos_x, pos_y], dim=1))
-
- def forward(self, features: Tensor) -> Tensor:
- """
- Args:
- features: (B, C, H, W) input feature maps.
- Returns:
- (B, K, 2) image-space coordinates of keypoints.
- """
- if self.nets is not None:
- features = self.nets(features)
-
- # [B, K, H, W] -> [B * K, H * W] where K is number of keypoints
- features = features.reshape(-1, self._in_h * self._in_w)
- # 2d softmax normalization
- attention = F.softmax(features, dim=-1)
- # [B * K, H * W] x [H * W, 2] -> [B * K, 2] for spatial coordinate mean in x and y dimensions
- expected_xy = attention @ self.pos_grid
- # reshape to [B, K, 2]
- feature_keypoints = expected_xy.view(-1, self._out_c, 2)
-
- return feature_keypoints
-
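-# Editor's shape check (illustrative, not part of the original file):
-#   pool = SpatialSoftmax(input_shape=(512, 10, 12), num_kp=32)
-#   pool(torch.randn(8, 512, 10, 12)).shape  # -> torch.Size([8, 32, 2])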
-
-class DiffusionRgbEncoder(nn.Module):
- """Encodes an RGB image into a 1D feature vector.
-
- Includes the ability to normalize and crop the image first.
- """
-
- def __init__(self, config: DiffusionConfig):
- super().__init__()
- # Set up optional preprocessing.
- if config.crop_shape is not None:
- self.do_crop = True
- # Always use center crop for eval
- self.center_crop = torchvision.transforms.CenterCrop(config.crop_shape)
- if config.crop_is_random:
- self.maybe_random_crop = torchvision.transforms.RandomCrop(config.crop_shape)
- else:
- self.maybe_random_crop = self.center_crop
- else:
- self.do_crop = False
-
- # Set up backbone.
- backbone_model = getattr(torchvision.models, config.vision_backbone)(
- weights=config.pretrained_backbone_weights
- )
- # Note: This assumes that the layer4 feature map is children()[-3]
- # TODO(alexander-soare): Use a safer alternative.
- self.backbone = nn.Sequential(*(list(backbone_model.children())[:-2]))
- if config.use_group_norm:
- if config.pretrained_backbone_weights:
- raise ValueError(
- "You can't replace BatchNorm in a pretrained model without ruining the weights!"
- )
- self.backbone = _replace_submodules(
- root_module=self.backbone,
- predicate=lambda x: isinstance(x, nn.BatchNorm2d),
- func=lambda x: nn.GroupNorm(num_groups=x.num_features // 16, num_channels=x.num_features),
- )
-
- # Set up pooling and final layers.
- # Use a dry run to get the feature map shape.
- # The dummy input should take the number of image channels from `config.image_features` and it should
- # use the height and width from `config.crop_shape` if it is provided, otherwise it should use the
- # height and width from `config.image_features`.
-
- # Note: we have a check in the config class to make sure all images have the same shape.
- images_shape = next(iter(config.image_features.values())).shape
- dummy_shape_h_w = config.crop_shape if config.crop_shape is not None else images_shape[1:]
- dummy_shape = (1, images_shape[0], *dummy_shape_h_w)
- feature_map_shape = get_output_shape(self.backbone, dummy_shape)[1:]
-
- self.pool = SpatialSoftmax(feature_map_shape, num_kp=config.spatial_softmax_num_keypoints)
- self.feature_dim = config.spatial_softmax_num_keypoints * 2
- self.out = nn.Linear(config.spatial_softmax_num_keypoints * 2, self.feature_dim)
- self.relu = nn.ReLU()
-
- def forward(self, x: Tensor) -> Tensor:
- """
- Args:
- x: (B, C, H, W) image tensor with pixel values in [0, 1].
- Returns:
- (B, D) image feature.
- """
- # Preprocess: maybe crop (if it was set up in the __init__).
- if self.do_crop:
- if self.training: # noqa: SIM108
- x = self.maybe_random_crop(x)
- else:
- # Always use center crop for eval.
- x = self.center_crop(x)
- # Extract backbone feature.
- x = torch.flatten(self.pool(self.backbone(x)), start_dim=1)
- # Final linear layer with non-linearity.
- x = self.relu(self.out(x))
- return x
-
-
-def _replace_submodules(
- root_module: nn.Module, predicate: Callable[[nn.Module], bool], func: Callable[[nn.Module], nn.Module]
-) -> nn.Module:
- """
- Args:
- root_module: The module for which the submodules need to be replaced
-        predicate: Takes a module as an argument and must return True if that module is to be replaced.
- func: Takes a module as an argument and returns a new module to replace it with.
- Returns:
- The root module with its submodules replaced.
- """
- if predicate(root_module):
- return func(root_module)
-
- replace_list = [k.split(".") for k, m in root_module.named_modules(remove_duplicate=True) if predicate(m)]
- for *parents, k in replace_list:
- parent_module = root_module
- if len(parents) > 0:
- parent_module = root_module.get_submodule(".".join(parents))
- if isinstance(parent_module, nn.Sequential):
- src_module = parent_module[int(k)]
- else:
- src_module = getattr(parent_module, k)
- tgt_module = func(src_module)
- if isinstance(parent_module, nn.Sequential):
- parent_module[int(k)] = tgt_module
- else:
- setattr(parent_module, k, tgt_module)
-    # Verify that all modules matching the predicate have been replaced.
- assert not any(predicate(m) for _, m in root_module.named_modules(remove_duplicate=True))
- return root_module
-
-
-class DiffusionSinusoidalPosEmb(nn.Module):
- """1D sinusoidal positional embeddings as in Attention is All You Need."""
-
- def __init__(self, dim: int):
- super().__init__()
- self.dim = dim
-
- def forward(self, x: Tensor) -> Tensor:
- device = x.device
- half_dim = self.dim // 2
- emb = math.log(10000) / (half_dim - 1)
- emb = torch.exp(torch.arange(half_dim, device=device) * -emb)
- emb = x.unsqueeze(-1) * emb.unsqueeze(0)
- emb = torch.cat((emb.sin(), emb.cos()), dim=-1)
- return emb
-
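-# Editor's shape check (illustrative, not part of the original file):
-#   DiffusionSinusoidalPosEmb(128)(torch.tensor([0.0, 10.0])).shape  # -> torch.Size([2, 128])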
-
-class DiffusionConv1dBlock(nn.Module):
- """Conv1d --> GroupNorm --> Mish"""
-
- def __init__(self, inp_channels, out_channels, kernel_size, n_groups=8):
- super().__init__()
-
- self.block = nn.Sequential(
- nn.Conv1d(inp_channels, out_channels, kernel_size, padding=kernel_size // 2),
- nn.GroupNorm(n_groups, out_channels),
- nn.Mish(),
- )
-
- def forward(self, x):
- return self.block(x)
-
-
-class DiffusionConditionalUnet1d(nn.Module):
- """A 1D convolutional UNet with FiLM modulation for conditioning.
-
- Note: this removes local conditioning as compared to the original diffusion policy code.
- """
-
- def __init__(self, config: DiffusionConfig, global_cond_dim: int):
- super().__init__()
-
- self.config = config
-
- # Encoder for the diffusion timestep.
- self.diffusion_step_encoder = nn.Sequential(
- DiffusionSinusoidalPosEmb(config.diffusion_step_embed_dim),
- nn.Linear(config.diffusion_step_embed_dim, config.diffusion_step_embed_dim * 4),
- nn.Mish(),
- nn.Linear(config.diffusion_step_embed_dim * 4, config.diffusion_step_embed_dim),
- )
-
- # The FiLM conditioning dimension.
- cond_dim = config.diffusion_step_embed_dim + global_cond_dim
-
- # In channels / out channels for each downsampling block in the Unet's encoder. For the decoder, we
- # just reverse these.
- in_out = [(config.action_feature.shape[0], config.down_dims[0])] + list(
- zip(config.down_dims[:-1], config.down_dims[1:], strict=True)
- )
-
- # Unet encoder.
- common_res_block_kwargs = {
- "cond_dim": cond_dim,
- "kernel_size": config.kernel_size,
- "n_groups": config.n_groups,
- "use_film_scale_modulation": config.use_film_scale_modulation,
- }
- self.down_modules = nn.ModuleList([])
- for ind, (dim_in, dim_out) in enumerate(in_out):
- is_last = ind >= (len(in_out) - 1)
- self.down_modules.append(
- nn.ModuleList(
- [
- DiffusionConditionalResidualBlock1d(dim_in, dim_out, **common_res_block_kwargs),
- DiffusionConditionalResidualBlock1d(dim_out, dim_out, **common_res_block_kwargs),
- # Downsample as long as it is not the last block.
- nn.Conv1d(dim_out, dim_out, 3, 2, 1) if not is_last else nn.Identity(),
- ]
- )
- )
-
- # Processing in the middle of the auto-encoder.
- self.mid_modules = nn.ModuleList(
- [
- DiffusionConditionalResidualBlock1d(
- config.down_dims[-1], config.down_dims[-1], **common_res_block_kwargs
- ),
- DiffusionConditionalResidualBlock1d(
- config.down_dims[-1], config.down_dims[-1], **common_res_block_kwargs
- ),
- ]
- )
-
- # Unet decoder.
- self.up_modules = nn.ModuleList([])
- for ind, (dim_out, dim_in) in enumerate(reversed(in_out[1:])):
- is_last = ind >= (len(in_out) - 1)
- self.up_modules.append(
- nn.ModuleList(
- [
- # dim_in * 2, because it takes the encoder's skip connection as well
- DiffusionConditionalResidualBlock1d(dim_in * 2, dim_out, **common_res_block_kwargs),
- DiffusionConditionalResidualBlock1d(dim_out, dim_out, **common_res_block_kwargs),
- # Upsample as long as it is not the last block.
- nn.ConvTranspose1d(dim_out, dim_out, 4, 2, 1) if not is_last else nn.Identity(),
- ]
- )
- )
-
- self.final_conv = nn.Sequential(
- DiffusionConv1dBlock(config.down_dims[0], config.down_dims[0], kernel_size=config.kernel_size),
- nn.Conv1d(config.down_dims[0], config.action_feature.shape[0], 1),
- )
-
- def forward(self, x: Tensor, timestep: Tensor | int, global_cond=None) -> Tensor:
- """
- Args:
- x: (B, T, input_dim) tensor for input to the Unet.
- timestep: (B,) tensor of (timestep_we_are_denoising_from - 1).
- global_cond: (B, global_cond_dim)
- Returns:
- (B, T, input_dim) diffusion model prediction.
- """
- # For 1D convolutions we'll need feature dimension first.
- x = einops.rearrange(x, "b t d -> b d t")
-
- timesteps_embed = self.diffusion_step_encoder(timestep)
-
- # If there is a global conditioning feature, concatenate it to the timestep embedding.
- if global_cond is not None:
-            global_feature = torch.cat([timesteps_embed, global_cond], dim=-1)
- else:
- global_feature = timesteps_embed
-
- # Run encoder, keeping track of skip features to pass to the decoder.
- encoder_skip_features: list[Tensor] = []
- for resnet, resnet2, downsample in self.down_modules:
- x = resnet(x, global_feature)
- x = resnet2(x, global_feature)
- encoder_skip_features.append(x)
- x = downsample(x)
-
- for mid_module in self.mid_modules:
- x = mid_module(x, global_feature)
-
- # Run decoder, using the skip features from the encoder.
- for resnet, resnet2, upsample in self.up_modules:
- x = torch.cat((x, encoder_skip_features.pop()), dim=1)
- x = resnet(x, global_feature)
- x = resnet2(x, global_feature)
- x = upsample(x)
-
- x = self.final_conv(x)
-
- x = einops.rearrange(x, "b d t -> b t d")
- return x
-
-
-class DiffusionConditionalResidualBlock1d(nn.Module):
- """ResNet style 1D convolutional block with FiLM modulation for conditioning."""
-
- def __init__(
- self,
- in_channels: int,
- out_channels: int,
- cond_dim: int,
- kernel_size: int = 3,
- n_groups: int = 8,
- # Set to True to do scale modulation with FiLM as well as bias modulation (defaults to False meaning
- # FiLM just modulates bias).
- use_film_scale_modulation: bool = False,
- ):
- super().__init__()
-
- self.use_film_scale_modulation = use_film_scale_modulation
- self.out_channels = out_channels
-
- self.conv1 = DiffusionConv1dBlock(in_channels, out_channels, kernel_size, n_groups=n_groups)
-
- # FiLM modulation (https://huggingface.co/papers/1709.07871) outputs per-channel bias and (maybe) scale.
- cond_channels = out_channels * 2 if use_film_scale_modulation else out_channels
- self.cond_encoder = nn.Sequential(nn.Mish(), nn.Linear(cond_dim, cond_channels))
-
- self.conv2 = DiffusionConv1dBlock(out_channels, out_channels, kernel_size, n_groups=n_groups)
-
- # A final convolution for dimension matching the residual (if needed).
- self.residual_conv = (
- nn.Conv1d(in_channels, out_channels, 1) if in_channels != out_channels else nn.Identity()
- )
-
- def forward(self, x: Tensor, cond: Tensor) -> Tensor:
- """
- Args:
- x: (B, in_channels, T)
- cond: (B, cond_dim)
- Returns:
- (B, out_channels, T)
- """
- out = self.conv1(x)
-
- # Get condition embedding. Unsqueeze for broadcasting to `out`, resulting in (B, out_channels, 1).
- cond_embed = self.cond_encoder(cond).unsqueeze(-1)
- if self.use_film_scale_modulation:
- # Treat the embedding as a list of scales and biases.
- scale = cond_embed[:, : self.out_channels]
- bias = cond_embed[:, self.out_channels :]
- out = scale * out + bias
- else:
- # Treat the embedding as biases.
- out = out + cond_embed
-
- out = self.conv2(out)
- out = out + self.residual_conv(x)
- return out
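-
-# Editor's sketch (illustrative, not part of the original file): FiLM conditioning in
-# the block above. With use_film_scale_modulation=True, the condition embedding is
-# split into a per-channel (scale, bias) pair:
-#   block = DiffusionConditionalResidualBlock1d(
-#       in_channels=14, out_channels=64, cond_dim=256, use_film_scale_modulation=True
-#   )
-#   block(torch.randn(2, 14, 16), torch.randn(2, 256)).shape  # -> torch.Size([2, 64, 16])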
diff --git a/lerobot/src/lerobot/processor/__init__.py b/lerobot/src/lerobot/processor/__init__.py
deleted file mode 100644
index 6b44800abf5a6123b2ab119d18e80d075bc99c20..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/__init__.py
+++ /dev/null
@@ -1,131 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .batch_processor import AddBatchDimensionProcessorStep
-from .converters import (
- batch_to_transition,
- create_transition,
- transition_to_batch,
-)
-from .core import (
- EnvAction,
- EnvTransition,
- PolicyAction,
- RobotAction,
- RobotObservation,
- TransitionKey,
-)
-from .delta_action_processor import MapDeltaActionToRobotActionStep, MapTensorToDeltaActionDictStep
-from .device_processor import DeviceProcessorStep
-from .factory import (
- make_default_processors,
- make_default_robot_action_processor,
- make_default_robot_observation_processor,
- make_default_teleop_action_processor,
-)
-from .gym_action_processor import (
- Numpy2TorchActionProcessorStep,
- Torch2NumpyActionProcessorStep,
-)
-from .hil_processor import (
- AddTeleopActionAsComplimentaryDataStep,
- AddTeleopEventsAsInfoStep,
- GripperPenaltyProcessorStep,
- ImageCropResizeProcessorStep,
- InterventionActionProcessorStep,
- RewardClassifierProcessorStep,
- TimeLimitProcessorStep,
-)
-from .normalize_processor import NormalizerProcessorStep, UnnormalizerProcessorStep, hotswap_stats
-from .observation_processor import VanillaObservationProcessorStep
-from .pipeline import (
- ActionProcessorStep,
- ComplementaryDataProcessorStep,
- DataProcessorPipeline,
- DoneProcessorStep,
- IdentityProcessorStep,
- InfoProcessorStep,
- ObservationProcessorStep,
- PolicyActionProcessorStep,
- PolicyProcessorPipeline,
- ProcessorKwargs,
- ProcessorStep,
- ProcessorStepRegistry,
- RewardProcessorStep,
- RobotActionProcessorStep,
- RobotProcessorPipeline,
- TruncatedProcessorStep,
-)
-from .policy_robot_bridge import (
- PolicyActionToRobotActionProcessorStep,
- RobotActionToPolicyActionProcessorStep,
-)
-from .rename_processor import RenameObservationsProcessorStep
-from .tokenizer_processor import ActionTokenizerProcessorStep, TokenizerProcessorStep
-
-__all__ = [
- "ActionProcessorStep",
- "AddTeleopActionAsComplimentaryDataStep",
- "AddTeleopEventsAsInfoStep",
- "ComplementaryDataProcessorStep",
- "batch_to_transition",
- "create_transition",
- "DeviceProcessorStep",
- "DoneProcessorStep",
- "EnvAction",
- "EnvTransition",
- "GripperPenaltyProcessorStep",
- "hotswap_stats",
- "IdentityProcessorStep",
- "ImageCropResizeProcessorStep",
- "InfoProcessorStep",
- "InterventionActionProcessorStep",
- "make_default_processors",
- "make_default_teleop_action_processor",
- "make_default_robot_action_processor",
- "make_default_robot_observation_processor",
- "MapDeltaActionToRobotActionStep",
- "MapTensorToDeltaActionDictStep",
- "NormalizerProcessorStep",
- "Numpy2TorchActionProcessorStep",
- "ObservationProcessorStep",
- "PolicyAction",
- "PolicyActionProcessorStep",
- "PolicyProcessorPipeline",
- "ProcessorKwargs",
- "ProcessorStep",
- "ProcessorStepRegistry",
- "RobotAction",
- "RobotActionProcessorStep",
- "RobotObservation",
- "RenameObservationsProcessorStep",
- "RewardClassifierProcessorStep",
- "RewardProcessorStep",
- "DataProcessorPipeline",
- "TimeLimitProcessorStep",
- "AddBatchDimensionProcessorStep",
- "RobotProcessorPipeline",
- "TokenizerProcessorStep",
- "ActionTokenizerProcessorStep",
- "Torch2NumpyActionProcessorStep",
- "RobotActionToPolicyActionProcessorStep",
- "PolicyActionToRobotActionProcessorStep",
- "transition_to_batch",
- "TransitionKey",
- "TruncatedProcessorStep",
- "UnnormalizerProcessorStep",
- "VanillaObservationProcessorStep",
-]
diff --git a/lerobot/src/lerobot/processor/core.py b/lerobot/src/lerobot/processor/core.py
deleted file mode 100644
index 393be2af744fb4ccaac9eec4db5b696341f82c09..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/core.py
+++ /dev/null
@@ -1,56 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from __future__ import annotations
-
-from enum import Enum
-from typing import Any, TypeAlias, TypedDict
-
-import numpy as np
-import torch
-
-
-class TransitionKey(str, Enum):
- """Keys for accessing EnvTransition dictionary components."""
-
- # TODO(Steven): Use consts
- OBSERVATION = "observation"
- ACTION = "action"
- REWARD = "reward"
- DONE = "done"
- TRUNCATED = "truncated"
- INFO = "info"
- COMPLEMENTARY_DATA = "complementary_data"
-
-
-PolicyAction: TypeAlias = torch.Tensor
-RobotAction: TypeAlias = dict[str, Any]
-EnvAction: TypeAlias = np.ndarray
-RobotObservation: TypeAlias = dict[str, Any]
-
-
-EnvTransition = TypedDict(
- "EnvTransition",
- {
- TransitionKey.OBSERVATION.value: RobotObservation | None,
- TransitionKey.ACTION.value: PolicyAction | RobotAction | EnvAction | None,
- TransitionKey.REWARD.value: float | torch.Tensor | None,
- TransitionKey.DONE.value: bool | torch.Tensor | None,
- TransitionKey.TRUNCATED.value: bool | torch.Tensor | None,
- TransitionKey.INFO.value: dict[str, Any] | None,
- TransitionKey.COMPLEMENTARY_DATA.value: dict[str, Any] | None,
- },
-)
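-
-# Editor's sketch (illustrative, not part of the original file): a minimal, fully
-# populated transition using the keys defined by TransitionKey above.
-#   transition: EnvTransition = {
-#       "observation": {"observation.state": torch.zeros(6)},
-#       "action": torch.zeros(4),  # a PolicyAction
-#       "reward": 0.0,
-#       "done": False,
-#       "truncated": False,
-#       "info": {},
-#       "complementary_data": {},
-#   }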
diff --git a/lerobot/src/lerobot/processor/delta_action_processor.py b/lerobot/src/lerobot/processor/delta_action_processor.py
deleted file mode 100644
index 91912ae7da2423421b5226b83e7409c8a9e44edf..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/delta_action_processor.py
+++ /dev/null
@@ -1,143 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from lerobot.configs.types import FeatureType, PipelineFeatureType, PolicyFeature
-
-from .core import PolicyAction, RobotAction
-from .pipeline import ActionProcessorStep, ProcessorStepRegistry, RobotActionProcessorStep
-
-
-@ProcessorStepRegistry.register("map_tensor_to_delta_action_dict")
-@dataclass
-class MapTensorToDeltaActionDictStep(ActionProcessorStep):
- """
- Maps a flat action tensor from a policy to a structured delta action dictionary.
-
- This step is typically used after a policy outputs a continuous action vector.
- It decomposes the vector into named components for delta movements of the
- end-effector (x, y, z) and optionally the gripper.
-
- Attributes:
- use_gripper: If True, assumes the 4th element of the tensor is the
- gripper action.
- """
-
- use_gripper: bool = True
-
- def action(self, action: PolicyAction) -> RobotAction:
- if not isinstance(action, PolicyAction):
- raise ValueError("Only PolicyAction is supported for this processor")
-
- if action.dim() > 1:
- action = action.squeeze(0)
-
- # TODO (maractingi): add rotation
- delta_action = {
- "delta_x": action[0].item(),
- "delta_y": action[1].item(),
- "delta_z": action[2].item(),
- }
- if self.use_gripper:
- delta_action["gripper"] = action[3].item()
- return delta_action
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- for axis in ["x", "y", "z"]:
- features[PipelineFeatureType.ACTION][f"delta_{axis}"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- if self.use_gripper:
- features[PipelineFeatureType.ACTION]["gripper"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
- return features
-
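-# Editor's sketch (illustrative, not part of the original file):
-#   step = MapTensorToDeltaActionDictStep(use_gripper=True)
-#   step.action(torch.tensor([0.01, 0.0, -0.02, 1.0]))
-#   # -> {"delta_x": 0.01, "delta_y": 0.0, "delta_z": -0.02, "gripper": 1.0}
-#   # (values up to float32 rounding)
-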
-
-@ProcessorStepRegistry.register("map_delta_action_to_robot_action")
-@dataclass
-class MapDeltaActionToRobotActionStep(RobotActionProcessorStep):
- """
- Maps delta actions from teleoperators to robot target actions for inverse kinematics.
-
- This step converts a dictionary of delta movements (e.g., from a gamepad)
- into a target action format that includes an "enabled" flag and target
- end-effector positions. It also handles scaling and noise filtering.
-
- Attributes:
- position_scale: A factor to scale the delta position inputs.
- noise_threshold: The magnitude below which delta inputs are considered noise
- and do not trigger an "enabled" state.
- """
-
- # Scale factors for delta movements
- position_scale: float = 1.0
- noise_threshold: float = 1e-3 # 1 mm threshold to filter out noise
-
- def action(self, action: RobotAction) -> RobotAction:
- # NOTE (maractingi): Action can be a dict from the teleop_devices or a tensor from the policy
- # TODO (maractingi): changing this target_xyz naming convention from the teleop_devices
- delta_x = action.pop("delta_x")
- delta_y = action.pop("delta_y")
- delta_z = action.pop("delta_z")
- gripper = action.pop("gripper")
-
- # Determine if the teleoperator is actively providing input
- # Consider enabled if any significant movement delta is detected
- position_magnitude = (delta_x**2 + delta_y**2 + delta_z**2) ** 0.5 # Use Euclidean norm for position
- enabled = position_magnitude > self.noise_threshold # Small threshold to avoid noise
-
- # Scale the deltas appropriately
- scaled_delta_x = delta_x * self.position_scale
- scaled_delta_y = delta_y * self.position_scale
- scaled_delta_z = delta_z * self.position_scale
-
- # For gamepad/keyboard, we don't have rotation input, so set to 0
- # These could be extended in the future for more sophisticated teleoperators
- target_wx = 0.0
- target_wy = 0.0
- target_wz = 0.0
-
- # Update action with robot target format
- action = {
- "enabled": enabled,
- "target_x": scaled_delta_x,
- "target_y": scaled_delta_y,
- "target_z": scaled_delta_z,
- "target_wx": target_wx,
- "target_wy": target_wy,
- "target_wz": target_wz,
- "gripper_vel": float(gripper),
- }
-
- return action
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
-        # The gripper input key is "gripper" (see `action` above), not "delta_gripper".
-        for axis in ["x", "y", "z"]:
-            features[PipelineFeatureType.ACTION].pop(f"delta_{axis}", None)
-        features[PipelineFeatureType.ACTION].pop("gripper", None)
-
-        for feat in [
-            "enabled",
-            "target_x",
-            "target_y",
-            "target_z",
-            "target_wx",
-            "target_wy",
-            "target_wz",
-            "gripper_vel",
-        ]:
-            features[PipelineFeatureType.ACTION][feat] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- return features
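-
-# Editor's worked example (illustrative, not part of the original file): with the
-# default position_scale=1.0 and noise_threshold=1e-3, an input of
-#   {"delta_x": 0.005, "delta_y": 0.0, "delta_z": 0.0, "gripper": 0.0}
-# has a position magnitude of 5 mm > 1 mm, so `action` returns
-#   {"enabled": True, "target_x": 0.005, "target_y": 0.0, "target_z": 0.0,
-#    "target_wx": 0.0, "target_wy": 0.0, "target_wz": 0.0, "gripper_vel": 0.0}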
diff --git a/lerobot/src/lerobot/processor/device_processor.py b/lerobot/src/lerobot/processor/device_processor.py
deleted file mode 100644
index 5042a4379628ee094c0121b9fc0faff42b557e02..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/device_processor.py
+++ /dev/null
@@ -1,194 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script defines a processor step for moving environment transition data to a specific torch device and casting
-its floating-point precision.
-"""
-
-from dataclasses import dataclass
-from typing import Any
-
-import torch
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-from lerobot.utils.utils import get_safe_torch_device
-
-from .core import EnvTransition, PolicyAction, TransitionKey
-from .pipeline import ProcessorStep, ProcessorStepRegistry
-
-
-@ProcessorStepRegistry.register("device_processor")
-@dataclass
-class DeviceProcessorStep(ProcessorStep):
- """
- Processor step to move all tensors within an `EnvTransition` to a specified device and optionally cast their
- floating-point data type.
-
- This is crucial for preparing data for model training or inference on hardware like GPUs.
-
- Attributes:
- device: The target device for tensors (e.g., "cpu", "cuda", "cuda:0").
- float_dtype: The target floating-point dtype as a string (e.g., "float32", "float16", "bfloat16").
- If None, the dtype is not changed.
- """
-
- device: str = "cpu"
- float_dtype: str | None = None
-
- DTYPE_MAPPING = {
- "float16": torch.float16,
- "float32": torch.float32,
- "float64": torch.float64,
- "bfloat16": torch.bfloat16,
- "half": torch.float16,
- "float": torch.float32,
- "double": torch.float64,
- }
-
- def __post_init__(self):
- """
- Initializes the processor by converting string configurations to torch objects.
-
- This method sets up the `torch.device`, determines if transfers can be non-blocking, and validates the
- `float_dtype` string, converting it to a `torch.dtype` object.
- """
- self.tensor_device: torch.device = get_safe_torch_device(self.device)
- # Update device string in case a specific GPU was selected (e.g., "cuda" -> "cuda:0")
- self.device = self.tensor_device.type
- self.non_blocking = "cuda" in str(self.device)
-
- # Validate and convert float_dtype string to torch dtype
- if self.float_dtype is not None:
- if self.float_dtype not in self.DTYPE_MAPPING:
- raise ValueError(
- f"Invalid float_dtype '{self.float_dtype}'. Available options: {list(self.DTYPE_MAPPING.keys())}"
- )
- self._target_float_dtype = self.DTYPE_MAPPING[self.float_dtype]
- else:
- self._target_float_dtype = None
-
- def _process_tensor(self, tensor: torch.Tensor) -> torch.Tensor:
- """
- Moves a single tensor to the target device and casts its dtype.
-
- Handles multi-GPU scenarios by not moving a tensor if it's already on a different CUDA device than
- the target, which is useful when using frameworks like Accelerate.
-
- Args:
- tensor: The input torch.Tensor.
-
- Returns:
- The processed tensor on the correct device and with the correct dtype.
- """
- # Determine target device
- if tensor.is_cuda and self.tensor_device.type == "cuda":
- # Both tensor and target are on GPU - preserve tensor's GPU placement.
- # This handles multi-GPU scenarios where Accelerate has already placed
- # tensors on the correct GPU for each process.
- target_device = tensor.device
- else:
- # Either tensor is on CPU, or we're configured for CPU.
- # In both cases, use the configured device.
- target_device = self.tensor_device
-
- # MPS workaround: Convert float64 to float32 since MPS doesn't support float64
- if target_device.type == "mps" and tensor.dtype == torch.float64:
- tensor = tensor.to(dtype=torch.float32)
-
- # Only move if necessary
- if tensor.device != target_device:
- tensor = tensor.to(target_device, non_blocking=self.non_blocking)
-
- # Convert float dtype if specified and tensor is floating point
- if self._target_float_dtype is not None and tensor.is_floating_point():
- tensor = tensor.to(dtype=self._target_float_dtype)
-
- return tensor
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """
- Applies device and dtype conversion to all tensors in an environment transition.
-
- It iterates through the transition, finds all `torch.Tensor` objects (including those nested in
- dictionaries like `observation`), and processes them.
-
- Args:
- transition: The input `EnvTransition` object.
-
- Returns:
- A new `EnvTransition` object with all tensors moved to the target device and dtype.
- """
- new_transition = transition.copy()
- action = new_transition.get(TransitionKey.ACTION)
-
- if action is not None and not isinstance(action, PolicyAction):
-            raise ValueError(f"Action must be a PolicyAction when it is not None, got {type(action)}")
-
- simple_tensor_keys = [
- TransitionKey.ACTION,
- TransitionKey.REWARD,
- TransitionKey.DONE,
- TransitionKey.TRUNCATED,
- ]
-
- dict_tensor_keys = [
- TransitionKey.OBSERVATION,
- TransitionKey.COMPLEMENTARY_DATA,
- ]
-
- # Process simple, top-level tensors
- for key in simple_tensor_keys:
- value = transition.get(key)
- if isinstance(value, torch.Tensor):
- new_transition[key] = self._process_tensor(value)
-
- # Process tensors nested within dictionaries
- for key in dict_tensor_keys:
- data_dict = transition.get(key)
- if data_dict is not None:
- new_data_dict = {
- k: self._process_tensor(v) if isinstance(v, torch.Tensor) else v
- for k, v in data_dict.items()
- }
- new_transition[key] = new_data_dict
-
- return new_transition
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the serializable configuration of the processor.
-
- Returns:
- A dictionary containing the device and float_dtype settings.
- """
- return {"device": self.device, "float_dtype": self.float_dtype}
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Returns the input features unchanged.
-
- Device and dtype transformations do not alter the fundamental definition of the features (e.g., shape).
-
- Args:
- features: A dictionary of policy features.
-
- Returns:
- The original dictionary of policy features.
- """
- return features
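-
-# Editor's sketch (illustrative, not part of the original file; assumes a CUDA device
-# is available and `transition` is an EnvTransition):
-#   step = DeviceProcessorStep(device="cuda", float_dtype="float16")
-#   transition = step(transition)  # tensors moved to the GPU, floats cast to float16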
diff --git a/lerobot/src/lerobot/processor/env_processor.py b/lerobot/src/lerobot/processor/env_processor.py
deleted file mode 100644
index 84e5bbf64707adeaa045730509cf69b8402dc41e..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/env_processor.py
+++ /dev/null
@@ -1,230 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass
-
-import torch
-
-from lerobot.configs.types import FeatureType, PipelineFeatureType, PolicyFeature
-from lerobot.utils.constants import OBS_IMAGES, OBS_PREFIX, OBS_STATE, OBS_STR
-
-from .pipeline import ObservationProcessorStep, ProcessorStepRegistry
-
-
-@dataclass
-@ProcessorStepRegistry.register(name="libero_processor")
-class LiberoProcessorStep(ObservationProcessorStep):
- """
- Processes LIBERO observations into the LeRobot format.
-
- This step handles the specific observation structure from LIBERO environments,
- which includes nested robot_state dictionaries and image observations.
-
- **State Processing:**
- - Processes the `robot_state` dictionary which contains nested end-effector,
- gripper, and joint information.
- - Extracts and concatenates:
- - End-effector position (3D)
- - End-effector quaternion converted to axis-angle (3D)
- - Gripper joint positions (2D)
- - Maps the concatenated state to `"observation.state"`.
-
- **Image Processing:**
- - Rotates images by 180 degrees by flipping both height and width dimensions.
- - This accounts for the HuggingFaceVLA/libero camera orientation convention.
- """
-
- def _process_observation(self, observation):
- """
- Processes both image and robot_state observations from LIBERO.
- """
- processed_obs = observation.copy()
- for key in list(processed_obs.keys()):
- if key.startswith(f"{OBS_IMAGES}."):
- img = processed_obs[key]
-
- # Flip both H and W
- img = torch.flip(img, dims=[2, 3])
-
- processed_obs[key] = img
- # Process robot_state into a flat state vector
- observation_robot_state_str = OBS_PREFIX + "robot_state"
- if observation_robot_state_str in processed_obs:
- robot_state = processed_obs.pop(observation_robot_state_str)
-
- # Extract components
-            eef_pos = robot_state["eef"]["pos"]  # (B, 3)
-            eef_quat = robot_state["eef"]["quat"]  # (B, 4)
-            gripper_qpos = robot_state["gripper"]["qpos"]  # (B, 2)
-
- # Convert quaternion to axis-angle
- eef_axisangle = self._quat2axisangle(eef_quat) # (B, 3)
- # Concatenate into a single state vector
- state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1)
-
- # ensure float32
- state = state.float()
- if state.dim() == 1:
- state = state.unsqueeze(0)
-
- processed_obs[OBS_STATE] = state
- return processed_obs
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Transforms feature keys from the LIBERO format to the LeRobot standard.
- """
- new_features: dict[PipelineFeatureType, dict[str, PolicyFeature]] = {}
-
- # copy over non-STATE features
- for ft, feats in features.items():
- if ft != PipelineFeatureType.STATE:
- new_features[ft] = feats.copy()
-
- # rebuild STATE features
- state_feats = {}
-
-        # add our new flattened state: concatenated end-effector position (3),
-        # axis-angle (3), and gripper qpos (2), stored as float32 by _process_observation
-        state_feats[OBS_STATE] = PolicyFeature(
-            type=FeatureType.STATE,
-            shape=(8,),  # [eef_pos(3), axis_angle(3), gripper(2)]
-        )
-
- new_features[PipelineFeatureType.STATE] = state_feats
-
- return new_features
-
- def observation(self, observation):
- return self._process_observation(observation)
-
- def _quat2axisangle(self, quat: torch.Tensor) -> torch.Tensor:
- """
- Convert batched quaternions to axis-angle format.
- Only accepts torch tensors of shape (B, 4).
-
- Args:
- quat (Tensor): (B, 4) tensor of quaternions in (x, y, z, w) format
-
- Returns:
- Tensor: (B, 3) axis-angle vectors
-
- Raises:
- TypeError: if input is not a torch tensor
- ValueError: if shape is not (B, 4)
- """
-
- if not isinstance(quat, torch.Tensor):
- raise TypeError(f"_quat2axisangle expected a torch.Tensor, got {type(quat)}")
-
- if quat.ndim != 2 or quat.shape[1] != 4:
- raise ValueError(f"_quat2axisangle expected shape (B, 4), got {tuple(quat.shape)}")
-
- quat = quat.to(dtype=torch.float32)
- device = quat.device
- batch_size = quat.shape[0]
-
- w = quat[:, 3].clamp(-1.0, 1.0)
-
- den = torch.sqrt(torch.clamp(1.0 - w * w, min=0.0))
-
- result = torch.zeros((batch_size, 3), device=device)
-
- mask = den > 1e-10
-
- if mask.any():
- angle = 2.0 * torch.acos(w[mask]) # (M,)
- axis = quat[mask, :3] / den[mask].unsqueeze(1)
- result[mask] = axis * angle.unsqueeze(1)
-
- return result
-
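-# --- Usage sketch (illustrative, not part of the original module) ---
-# A minimal LIBERO-style observation run through the step above; key names follow
-# the OBS_* constants imported at the top of this module.
-_libero_step = LiberoProcessorStep()
-_libero_obs = {
-    f"{OBS_IMAGES}.front": torch.zeros(1, 3, 224, 224),  # (B, C, H, W); flipped in H and W
-    OBS_PREFIX + "robot_state": {
-        "eef": {"pos": torch.zeros(1, 3), "quat": torch.tensor([[0.0, 0.0, 0.0, 1.0]])},
-        "gripper": {"qpos": torch.zeros(1, 2)},
-    },
-}
-_libero_out = _libero_step.observation(_libero_obs)  # adds "observation.state" of shape (1, 8)
-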
-
-@dataclass
-@ProcessorStepRegistry.register(name="isaaclab_arena_processor")
-class IsaaclabArenaProcessorStep(ObservationProcessorStep):
- """
- Processes IsaacLab Arena observations into LeRobot format.
-
- **State Processing:**
- - Extracts state components from obs["policy"] based on `state_keys`.
- - Concatenates into a flat vector mapped to "observation.state".
-
- **Image Processing:**
- - Extracts images from obs["camera_obs"] based on `camera_keys`.
- - Converts from (B, H, W, C) uint8 to (B, C, H, W) float32 [0, 1].
- - Maps to "observation.images.".
- """
-
- # Configurable from IsaacLabEnv config / cli args: --env.state_keys="robot_joint_pos,left_eef_pos"
- state_keys: tuple[str, ...]
-
- # Configurable from IsaacLabEnv config / cli args: --env.camera_keys="robot_pov_cam_rgb"
- camera_keys: tuple[str, ...]
-
- def _process_observation(self, observation):
- """
- Processes both image and policy state observations from IsaacLab Arena.
- """
- processed_obs = {}
-
- if f"{OBS_STR}.camera_obs" in observation:
- camera_obs = observation[f"{OBS_STR}.camera_obs"]
-
- for cam_name, img in camera_obs.items():
- if cam_name not in self.camera_keys:
- continue
-
- img = img.permute(0, 3, 1, 2).contiguous()
- if img.dtype == torch.uint8:
- img = img.float() / 255.0
- elif img.dtype != torch.float32:
- img = img.float()
-
- processed_obs[f"{OBS_IMAGES}.{cam_name}"] = img
-
- # Process policy state -> observation.state
- if f"{OBS_STR}.policy" in observation:
- policy_obs = observation[f"{OBS_STR}.policy"]
-
- # Collect state components in order
- state_components = []
- for key in self.state_keys:
- if key in policy_obs:
- component = policy_obs[key]
- # Flatten extra dims: (B, N, M) -> (B, N*M)
- if component.dim() > 2:
- batch_size = component.shape[0]
- component = component.view(batch_size, -1)
- state_components.append(component)
-
- if state_components:
- state = torch.cat(state_components, dim=-1)
- state = state.float()
- processed_obs[OBS_STATE] = state
-
- return processed_obs
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """Not used for policy evaluation."""
- return features
-
- def observation(self, observation):
- return self._process_observation(observation)
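-
-# --- Usage sketch (illustrative, not part of the original module) ---
-# The step only keeps the cameras and state entries it was configured with; the
-# key names below are placeholders.
-_arena_step = IsaaclabArenaProcessorStep(
-    state_keys=("robot_joint_pos",), camera_keys=("robot_pov_cam_rgb",)
-)
-_arena_out = _arena_step.observation({
-    f"{OBS_STR}.camera_obs": {"robot_pov_cam_rgb": torch.zeros(1, 480, 640, 3, dtype=torch.uint8)},
-    f"{OBS_STR}.policy": {"robot_joint_pos": torch.zeros(1, 7)},
-})
-# -> {"observation.images.robot_pov_cam_rgb": (1, 3, 480, 640) float32 in [0, 1],
-#     "observation.state": (1, 7) float32}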
diff --git a/lerobot/src/lerobot/processor/factory.py b/lerobot/src/lerobot/processor/factory.py
deleted file mode 100644
index 66860a37548580bddbc9654043e961e5042213de..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/factory.py
+++ /dev/null
@@ -1,62 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .converters import (
- observation_to_transition,
- robot_action_observation_to_transition,
- transition_to_observation,
- transition_to_robot_action,
-)
-from .core import RobotAction, RobotObservation
-from .pipeline import IdentityProcessorStep, RobotProcessorPipeline
-
-
-def make_default_teleop_action_processor() -> RobotProcessorPipeline[
- tuple[RobotAction, RobotObservation], RobotAction
-]:
- teleop_action_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
- steps=[IdentityProcessorStep()],
- to_transition=robot_action_observation_to_transition,
- to_output=transition_to_robot_action,
- )
- return teleop_action_processor
-
-
-def make_default_robot_action_processor() -> RobotProcessorPipeline[
- tuple[RobotAction, RobotObservation], RobotAction
-]:
- robot_action_processor = RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction](
- steps=[IdentityProcessorStep()],
- to_transition=robot_action_observation_to_transition,
- to_output=transition_to_robot_action,
- )
- return robot_action_processor
-
-
-def make_default_robot_observation_processor() -> RobotProcessorPipeline[RobotObservation, RobotObservation]:
- robot_observation_processor = RobotProcessorPipeline[RobotObservation, RobotObservation](
- steps=[IdentityProcessorStep()],
- to_transition=observation_to_transition,
- to_output=transition_to_observation,
- )
- return robot_observation_processor
-
-
-def make_default_processors():
- teleop_action_processor = make_default_teleop_action_processor()
- robot_action_processor = make_default_robot_action_processor()
- robot_observation_processor = make_default_robot_observation_processor()
- return (teleop_action_processor, robot_action_processor, robot_observation_processor)
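-
-# --- Usage sketch (illustrative, not part of the original module) ---
-# Hedged example: assumes pipelines are invoked by calling them directly and that
-# RobotAction/RobotObservation are plain dicts, as the generic parameters suggest.
-_teleop_ap, _robot_ap, _robot_op = make_default_processors()
-_same_action = _robot_ap(({"shoulder_pan.pos": 0.0}, {}))  # identity pass-through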
diff --git a/lerobot/src/lerobot/processor/gym_action_processor.py b/lerobot/src/lerobot/processor/gym_action_processor.py
deleted file mode 100644
index 15e0c579f7ff32322c990bc88d44c721d7d93e8a..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/gym_action_processor.py
+++ /dev/null
@@ -1,97 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-
-from .converters import to_tensor
-from .core import EnvAction, EnvTransition, PolicyAction, TransitionKey
-from .pipeline import ActionProcessorStep, ProcessorStep, ProcessorStepRegistry
-
-
-@ProcessorStepRegistry.register("torch2numpy_action_processor")
-@dataclass
-class Torch2NumpyActionProcessorStep(ActionProcessorStep):
- """
- Converts a PyTorch tensor action to a NumPy array.
-
- This step is useful when the output of a policy (typically a torch.Tensor)
- needs to be passed to an environment or component that expects a NumPy array.
-
- Attributes:
- squeeze_batch_dim: If True, removes the first dimension of the array
- if it is of size 1. This is useful for converting a
- batched action of size (1, D) to a single action of size (D,).
- """
-
- squeeze_batch_dim: bool = True
-
- def action(self, action: PolicyAction) -> EnvAction:
- if not isinstance(action, PolicyAction):
- raise TypeError(
- f"Expected PolicyAction or None, got {type(action).__name__}. "
- "Use appropriate processor for non-tensor actions."
- )
-
- numpy_action = action.detach().cpu().numpy()
-
- # Remove batch dimensions but preserve action dimensions.
- # Only squeeze if there's a batch dimension (first dim == 1).
- if (
- self.squeeze_batch_dim
- and numpy_action.shape
- and len(numpy_action.shape) > 1
- and numpy_action.shape[0] == 1
- ):
- numpy_action = numpy_action.squeeze(0)
-
- return numpy_action
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
-
-
-@ProcessorStepRegistry.register("numpy2torch_action_processor")
-@dataclass
-class Numpy2TorchActionProcessorStep(ProcessorStep):
- """Converts a NumPy array action to a PyTorch tensor when action is present."""
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Converts numpy action to torch tensor if action exists, otherwise passes through."""
-
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- action = new_transition.get(TransitionKey.ACTION)
- if action is not None:
- if not isinstance(action, EnvAction):
- raise TypeError(
- f"Expected np.ndarray or None, got {type(action).__name__}. "
- "Use appropriate processor for non-tensor actions."
- )
- torch_action = to_tensor(action, dtype=None) # Preserve original dtype
- new_transition[TransitionKey.ACTION] = torch_action
-
- return new_transition
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
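-
-# --- Usage sketch (illustrative, not part of the original module) ---
-import numpy as np
-import torch
-
-# Round trip between the two steps above: a batched (1, D) policy action is squeezed
-# to a flat (D,) numpy action, and a numpy action becomes a tensor, dtype preserved.
-_np_action = Torch2NumpyActionProcessorStep().action(torch.zeros(1, 4))  # shape (4,)
-_torch_out = Numpy2TorchActionProcessorStep()({TransitionKey.ACTION: np.zeros(4, dtype=np.float32)})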
diff --git a/lerobot/src/lerobot/processor/hil_processor.py b/lerobot/src/lerobot/processor/hil_processor.py
deleted file mode 100644
index 1e5ad0ef30028c061e5d7c9bec42f2d765024ce5..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/hil_processor.py
+++ /dev/null
@@ -1,596 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import math
-import time
-from dataclasses import dataclass
-from typing import Any, Protocol, TypeVar, runtime_checkable
-
-import numpy as np
-import torch
-import torchvision.transforms.functional as F # noqa: N812
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-from lerobot.teleoperators.teleoperator import Teleoperator
-from lerobot.teleoperators.utils import TeleopEvents
-
-from .core import EnvTransition, PolicyAction, TransitionKey
-from .pipeline import (
- ComplementaryDataProcessorStep,
- InfoProcessorStep,
- ObservationProcessorStep,
- ProcessorStep,
- ProcessorStepRegistry,
- TruncatedProcessorStep,
-)
-
-GRIPPER_KEY = "gripper"
-DISCRETE_PENALTY_KEY = "discrete_penalty"
-TELEOP_ACTION_KEY = "teleop_action"
-
-
-@runtime_checkable
-class HasTeleopEvents(Protocol):
- """
- Minimal protocol for objects that provide teleoperation events.
-
- This protocol defines the `get_teleop_events()` method, allowing processor
- steps to interact with teleoperators that support event-based controls
- (like episode termination or success flagging) without needing to know the
- teleoperator's specific class.
- """
-
- def get_teleop_events(self) -> dict[str, Any]:
- """
- Get extra control events from the teleoperator.
-
- Returns:
- A dictionary containing control events such as:
- - `is_intervention`: bool - Whether the human is currently intervening.
- - `terminate_episode`: bool - Whether to terminate the current episode.
- - `success`: bool - Whether the episode was successful.
- - `rerecord_episode`: bool - Whether to rerecord the episode.
- """
- ...
-
-
-# Type variable constrained to Teleoperator subclasses that also implement events
-TeleopWithEvents = TypeVar("TeleopWithEvents", bound=Teleoperator)
-
-
-def _check_teleop_with_events(teleop: Teleoperator) -> None:
- """
- Runtime check that a teleoperator implements the `HasTeleopEvents` protocol.
-
- Args:
- teleop: The teleoperator instance to check.
-
- Raises:
- TypeError: If the teleoperator does not have a `get_teleop_events` method.
- """
- if not isinstance(teleop, HasTeleopEvents):
- raise TypeError(
- f"Teleoperator {type(teleop).__name__} must implement get_teleop_events() method. "
- f"Compatible teleoperators: GamepadTeleop, KeyboardEndEffectorTeleop"
- )
-
-
-@ProcessorStepRegistry.register("add_teleop_action_as_complementary_data")
-@dataclass
-class AddTeleopActionAsComplimentaryDataStep(ComplementaryDataProcessorStep):
- """
- Adds the raw action from a teleoperator to the transition's complementary data.
-
- This is useful for human-in-the-loop scenarios where the human's input needs to
- be available to downstream processors, for example, to override a policy's action
- during an intervention.
-
- Attributes:
- teleop_device: The teleoperator instance to get the action from.
- """
-
- teleop_device: Teleoperator
-
- def complementary_data(self, complementary_data: dict) -> dict:
- """
- Retrieves the teleoperator's action and adds it to the complementary data.
-
- Args:
- complementary_data: The incoming complementary data dictionary.
-
- Returns:
- A new dictionary with the teleoperator action added under the
- `teleop_action` key.
- """
- new_complementary_data = dict(complementary_data)
- new_complementary_data[TELEOP_ACTION_KEY] = self.teleop_device.get_action()
- return new_complementary_data
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
-
-
-@ProcessorStepRegistry.register("add_teleop_action_as_info")
-@dataclass
-class AddTeleopEventsAsInfoStep(InfoProcessorStep):
- """
- Adds teleoperator control events (e.g., terminate, success) to the transition's info.
-
- This step extracts control events from teleoperators that support event-based
- interaction, making these signals available to other parts of the system.
-
- Attributes:
- teleop_device: An instance of a teleoperator that implements the
- `HasTeleopEvents` protocol.
- """
-
- teleop_device: TeleopWithEvents
-
- def __post_init__(self):
- """Validates that the provided teleoperator supports events after initialization."""
- _check_teleop_with_events(self.teleop_device)
-
- def info(self, info: dict) -> dict:
- """
- Retrieves teleoperator events and updates the info dictionary.
-
- Args:
- info: The incoming info dictionary.
-
- Returns:
- A new dictionary including the teleoperator events.
- """
- new_info = dict(info)
-
- teleop_events = self.teleop_device.get_teleop_events()
- new_info.update(teleop_events)
- return new_info
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
-
-
-@ProcessorStepRegistry.register("image_crop_resize_processor")
-@dataclass
-class ImageCropResizeProcessorStep(ObservationProcessorStep):
- """
- Crops and/or resizes image observations.
-
- This step iterates through all image keys in an observation dictionary and applies
- the specified transformations. It handles device placement, moving tensors to the
- CPU if necessary for operations not supported on certain accelerators like MPS.
-
- Attributes:
- crop_params_dict: A dictionary mapping image keys to cropping parameters
- (top, left, height, width).
- resize_size: A tuple (height, width) to resize all images to.
- """
-
- crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None
- resize_size: tuple[int, int] | None = None
-
- def observation(self, observation: dict) -> dict:
- """
- Applies cropping and resizing to all images in the observation dictionary.
-
- Args:
- observation: The observation dictionary, potentially containing image tensors.
-
- Returns:
- A new observation dictionary with transformed images.
- """
- if self.resize_size is None and not self.crop_params_dict:
- return observation
-
- new_observation = dict(observation)
-
- # Process all image keys in the observation
- for key in observation:
- if "image" not in key:
- continue
-
- image = observation[key]
- device = image.device
- # NOTE (maractingi): No mps kernel for crop and resize, so we need to move to cpu
- if device.type == "mps":
- image = image.cpu()
- # Crop if crop params are provided for this key
- if self.crop_params_dict is not None and key in self.crop_params_dict:
- crop_params = self.crop_params_dict[key]
- image = F.crop(image, *crop_params)
- if self.resize_size is not None:
- image = F.resize(image, self.resize_size)
- image = image.clamp(0.0, 1.0)
- new_observation[key] = image.to(device)
-
- return new_observation
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the configuration of the step for serialization.
-
- Returns:
- A dictionary with the crop parameters and resize dimensions.
- """
- return {
- "crop_params_dict": self.crop_params_dict,
- "resize_size": self.resize_size,
- }
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Updates the image feature shapes in the policy features dictionary if resizing is applied.
-
- Args:
- features: The policy features dictionary.
-
- Returns:
- The updated policy features dictionary with new image shapes.
- """
- if self.resize_size is None:
- return features
- for key in features[PipelineFeatureType.OBSERVATION]:
- if "image" in key:
- nb_channel = features[PipelineFeatureType.OBSERVATION][key].shape[0]
- features[PipelineFeatureType.OBSERVATION][key] = PolicyFeature(
- type=features[PipelineFeatureType.OBSERVATION][key].type,
- shape=(nb_channel, *self.resize_size),
- )
- return features
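-
-# --- Usage sketch (illustrative, not part of the original module) ---
-# Crop a 480x640 wrist image to a 240x320 window anchored at the top-left corner,
-# then resize to 128x128; only keys containing "image" are touched.
-_crop_step = ImageCropResizeProcessorStep(
-    crop_params_dict={"observation.images.wrist": (0, 0, 240, 320)},  # top, left, height, width
-    resize_size=(128, 128),
-)
-_crop_out = _crop_step.observation({"observation.images.wrist": torch.zeros(1, 3, 480, 640)})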
-
-
-@dataclass
-@ProcessorStepRegistry.register("time_limit_processor")
-class TimeLimitProcessorStep(TruncatedProcessorStep):
- """
- Tracks episode steps and enforces a time limit by truncating the episode.
-
- Attributes:
- max_episode_steps: The maximum number of steps allowed per episode.
- current_step: The current step count for the active episode.
- """
-
- max_episode_steps: int
- current_step: int = 0
-
- def truncated(self, truncated: bool) -> bool:
- """
- Increments the step counter and sets the truncated flag if the time limit is reached.
-
- Args:
- truncated: The incoming truncated flag.
-
- Returns:
- True if the episode step limit is reached, otherwise the incoming value.
- """
- self.current_step += 1
- if self.current_step >= self.max_episode_steps:
- truncated = True
- # TODO (steven): missing an else truncated = False?
- return truncated
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the configuration of the step for serialization.
-
- Returns:
- A dictionary containing the `max_episode_steps`.
- """
- return {
- "max_episode_steps": self.max_episode_steps,
- }
-
- def reset(self) -> None:
- """Resets the step counter, typically called at the start of a new episode."""
- self.current_step = 0
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
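-
-# --- Usage sketch (illustrative, not part of the original module) ---
-# After max_episode_steps calls the step forces truncation; reset() starts a new episode.
-_limit = TimeLimitProcessorStep(max_episode_steps=2)
-assert _limit.truncated(False) is False  # step 1: below the limit
-assert _limit.truncated(False) is True  # step 2: limit reached
-_limit.reset()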
-
-
-@dataclass
-@ProcessorStepRegistry.register("gripper_penalty_processor")
-class GripperPenaltyProcessorStep(ComplementaryDataProcessorStep):
- """
- Applies a penalty for inefficient gripper usage.
-
- This step penalizes actions that attempt to close an already closed gripper or
- open an already open one, based on position thresholds.
-
- Attributes:
- penalty: The negative reward value to apply.
- max_gripper_pos: The maximum position value for the gripper, used for normalization.
- """
-
- penalty: float = -0.01
- max_gripper_pos: float = 30.0
-
- def complementary_data(self, complementary_data: dict) -> dict:
- """
- Calculates the gripper penalty and adds it to the complementary data.
-
- Args:
- complementary_data: The incoming complementary data, which should contain
- raw joint positions.
-
- Returns:
- A new complementary data dictionary with the `discrete_penalty` key added.
- """
- action = self.transition.get(TransitionKey.ACTION)
-
- raw_joint_positions = complementary_data.get("raw_joint_positions")
- if raw_joint_positions is None:
- return complementary_data
-
- current_gripper_pos = raw_joint_positions.get(GRIPPER_KEY, None)
- if current_gripper_pos is None:
- return complementary_data
-
- # Gripper action is a PolicyAction at this stage
- gripper_action = action[-1].item()
- gripper_action_normalized = gripper_action / self.max_gripper_pos
-
- # Normalize gripper state and action
- gripper_state_normalized = current_gripper_pos / self.max_gripper_pos
-
- # Calculate penalty boolean as in original
- gripper_penalty_bool = (gripper_state_normalized < 0.5 and gripper_action_normalized > 0.5) or (
- gripper_state_normalized > 0.75 and gripper_action_normalized < 0.5
- )
-
- gripper_penalty = self.penalty * int(gripper_penalty_bool)
-
- # Create new complementary data with penalty info
- new_complementary_data = dict(complementary_data)
- new_complementary_data[DISCRETE_PENALTY_KEY] = gripper_penalty
-
- return new_complementary_data
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the configuration of the step for serialization.
-
- Returns:
- A dictionary containing the penalty value and max gripper position.
- """
- return {
- "penalty": self.penalty,
- "max_gripper_pos": self.max_gripper_pos,
- }
-
- def reset(self) -> None:
- """Resets the processor's internal state."""
- pass
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
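-
-# --- Worked example (illustrative, not part of the original module) ---
-# With the defaults (penalty=-0.01, max_gripper_pos=30.0): a nearly closed gripper
-# receiving an "open" command is penalized, mirroring the boolean above.
-_state_norm, _action_norm = 3.0 / 30.0, 24.0 / 30.0  # normalized state 0.1, action 0.8
-_penalized = (_state_norm < 0.5 and _action_norm > 0.5) or (_state_norm > 0.75 and _action_norm < 0.5)
-_penalty = -0.01 * int(_penalized)  # -> -0.01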
-
-
-@dataclass
-@ProcessorStepRegistry.register("intervention_action_processor")
-class InterventionActionProcessorStep(ProcessorStep):
- """
- Handles human intervention, overriding policy actions and managing episode termination.
-
- When an intervention is detected (via teleoperator events in the `info` dict),
- this step replaces the policy's action with the human's teleoperated action.
- It also processes signals to terminate the episode or flag success.
-
- Attributes:
- use_gripper: Whether to include the gripper in the teleoperated action.
- terminate_on_success: If True, automatically sets the `done` flag when a
- `success` event is received.
- """
-
- use_gripper: bool = False
- terminate_on_success: bool = True
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """
- Processes the transition to handle interventions.
-
- Args:
- transition: The incoming environment transition.
-
- Returns:
- The modified transition, potentially with an overridden action, updated
- reward, and termination status.
- """
- action = transition.get(TransitionKey.ACTION)
- if not isinstance(action, PolicyAction):
- raise ValueError(f"Action should be a PolicyAction type got {type(action)}")
-
- # Get intervention signals from complementary data
- info = transition.get(TransitionKey.INFO, {})
- complementary_data = transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
- teleop_action = complementary_data.get(TELEOP_ACTION_KEY, {})
- is_intervention = info.get(TeleopEvents.IS_INTERVENTION, False)
- terminate_episode = info.get(TeleopEvents.TERMINATE_EPISODE, False)
- success = info.get(TeleopEvents.SUCCESS, False)
- rerecord_episode = info.get(TeleopEvents.RERECORD_EPISODE, False)
-
- new_transition = transition.copy()
-
- # Override action if intervention is active
- if is_intervention and teleop_action is not None:
- if isinstance(teleop_action, dict):
- # Convert teleop_action dict to tensor format
- action_list = [
- teleop_action.get("delta_x", 0.0),
- teleop_action.get("delta_y", 0.0),
- teleop_action.get("delta_z", 0.0),
- ]
- if self.use_gripper:
- action_list.append(teleop_action.get(GRIPPER_KEY, 1.0))
- elif isinstance(teleop_action, np.ndarray):
- action_list = teleop_action.tolist()
- else:
- action_list = teleop_action
-
- teleop_action_tensor = torch.tensor(action_list, dtype=action.dtype, device=action.device)
- new_transition[TransitionKey.ACTION] = teleop_action_tensor
-
- # Handle episode termination
- new_transition[TransitionKey.DONE] = bool(terminate_episode) or (
- self.terminate_on_success and success
- )
- new_transition[TransitionKey.REWARD] = float(success)
-
- # Update info with intervention metadata
- info = new_transition.get(TransitionKey.INFO, {})
- info[TeleopEvents.IS_INTERVENTION] = is_intervention
- info[TeleopEvents.RERECORD_EPISODE] = rerecord_episode
- info[TeleopEvents.SUCCESS] = success
- new_transition[TransitionKey.INFO] = info
-
- # Update complementary data with teleop action
- complementary_data = new_transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
- complementary_data[TELEOP_ACTION_KEY] = new_transition.get(TransitionKey.ACTION)
- new_transition[TransitionKey.COMPLEMENTARY_DATA] = complementary_data
-
- return new_transition
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the configuration of the step for serialization.
-
- Returns:
- A dictionary containing the step's configuration attributes.
- """
- return {
- "use_gripper": self.use_gripper,
- "terminate_on_success": self.terminate_on_success,
- }
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
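-
-# --- Usage sketch (illustrative, not part of the original module) ---
-# During an intervention the human's delta action replaces the policy action, and a
-# `success` event terminates the episode with reward 1.0.
-_intervene = InterventionActionProcessorStep(use_gripper=True)
-_intervene_out = _intervene({
-    TransitionKey.ACTION: torch.zeros(4),
-    TransitionKey.INFO: {TeleopEvents.IS_INTERVENTION: True, TeleopEvents.SUCCESS: True},
-    TransitionKey.COMPLEMENTARY_DATA: {TELEOP_ACTION_KEY: {"delta_x": 0.1, "gripper": 1.0}},
-})
-# _intervene_out[TransitionKey.ACTION] == tensor([0.1, 0.0, 0.0, 1.0]); DONE True, REWARD 1.0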
-
-
-@dataclass
-@ProcessorStepRegistry.register("reward_classifier_processor")
-class RewardClassifierProcessorStep(ProcessorStep):
- """
- Applies a pretrained reward classifier to image observations to predict success.
-
- This step uses a model to determine if the current state is successful, updating
- the reward and potentially terminating the episode.
-
- Attributes:
- pretrained_path: Path to the pretrained reward classifier model.
- device: The device to run the classifier on.
- success_threshold: The probability threshold to consider a prediction as successful.
- success_reward: The reward value to assign on success.
- terminate_on_success: If True, terminates the episode upon successful classification.
- reward_classifier: The loaded classifier model instance.
- """
-
- pretrained_path: str | None = None
- device: str = "cpu"
- success_threshold: float = 0.5
- success_reward: float = 1.0
- terminate_on_success: bool = True
-
- reward_classifier: Any = None
-
- def __post_init__(self):
- """Initializes the reward classifier model after the dataclass is created."""
- if self.pretrained_path is not None:
- from lerobot.policies.sac.reward_model.modeling_classifier import Classifier
-
- self.reward_classifier = Classifier.from_pretrained(self.pretrained_path)
- self.reward_classifier.to(self.device)
- self.reward_classifier.eval()
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """
- Processes a transition, applying the reward classifier to its image observations.
-
- Args:
- transition: The incoming environment transition.
-
- Returns:
- The modified transition with an updated reward and done flag based on the
- classifier's prediction.
- """
- new_transition = transition.copy()
- observation = new_transition.get(TransitionKey.OBSERVATION)
- if observation is None or self.reward_classifier is None:
- return new_transition
-
- # Extract images from observation
- images = {key: value for key, value in observation.items() if "image" in key}
-
- if not images:
- return new_transition
-
- # Run reward classifier
- start_time = time.perf_counter()
- with torch.inference_mode():
- success = self.reward_classifier.predict_reward(images, threshold=self.success_threshold)
-
- classifier_frequency = 1 / (time.perf_counter() - start_time)
-
- # Calculate reward and termination
- reward = new_transition.get(TransitionKey.REWARD, 0.0)
- terminated = new_transition.get(TransitionKey.DONE, False)
-
- if math.isclose(success, 1, abs_tol=1e-2):
- reward = self.success_reward
- if self.terminate_on_success:
- terminated = True
-
- # Update transition
- new_transition[TransitionKey.REWARD] = reward
- new_transition[TransitionKey.DONE] = terminated
-
- # Update info with classifier frequency
- info = new_transition.get(TransitionKey.INFO, {})
- info["reward_classifier_frequency"] = classifier_frequency
- new_transition[TransitionKey.INFO] = info
-
- return new_transition
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the configuration of the step for serialization.
-
- Returns:
- A dictionary containing the step's configuration attributes.
- """
- return {
- "device": self.device,
- "success_threshold": self.success_threshold,
- "success_reward": self.success_reward,
- "terminate_on_success": self.terminate_on_success,
- }
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
diff --git a/lerobot/src/lerobot/processor/migrate_policy_normalization.py b/lerobot/src/lerobot/processor/migrate_policy_normalization.py
deleted file mode 100644
index b45d96cca1155d60db3727c9598e4cc493c96854..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/migrate_policy_normalization.py
+++ /dev/null
@@ -1,769 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-A generic script to migrate LeRobot policies with built-in normalization layers to the new
-pipeline-based processor system.
-
-This script performs the following steps:
-1. Loads a pretrained policy model and its configuration from a local path or the
- Hugging Face Hub.
-2. Scans the model's state dictionary to extract normalization statistics (e.g., mean,
- std, min, max) for all features.
-3. Creates two new processor pipelines:
- - A preprocessor that normalizes inputs (observations) and outputs (actions).
- - A postprocessor that unnormalizes outputs (actions) for inference.
-4. Removes the original normalization layers from the model's state dictionary,
- creating a "clean" model.
-5. Saves the new clean model, the preprocessor, the postprocessor, and a generated
- model card to a new directory.
-6. Optionally pushes all the new artifacts to the Hugging Face Hub.
-
-Usage:
- python src/lerobot/processor/migrate_policy_normalization.py \
- --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
- --push-to-hub \
- --branch main
-
-Note: This script now uses the modern `make_pre_post_processors` and `make_policy_config`
-factory functions from `lerobot.policies.factory` to create processors and configurations,
-ensuring consistency with the current codebase.
-
-The script extracts normalization statistics from the old model's state_dict, creates clean
-processor pipelines using the factory functions, and saves a migrated model that is compatible
-with the new PolicyProcessorPipeline architecture.
-"""
-
-import argparse
-import json
-import os
-from pathlib import Path
-from typing import Any
-
-import torch
-from huggingface_hub import HfApi, hf_hub_download
-from safetensors.torch import load_file as load_safetensors
-
-from lerobot.configs.types import FeatureType, NormalizationMode, PolicyFeature
-from lerobot.policies.factory import get_policy_class, make_policy_config, make_pre_post_processors
-from lerobot.utils.constants import ACTION
-
-
-def extract_normalization_stats(state_dict: dict[str, torch.Tensor]) -> dict[str, dict[str, torch.Tensor]]:
- """
- Scans a model's state_dict to find and extract normalization statistics.
-
- This function identifies keys corresponding to normalization layers (e.g., those
- for mean, std, min, max) based on a set of predefined patterns and organizes
- them into a nested dictionary.
-
- Args:
- state_dict: The state dictionary of a pretrained policy model.
-
- Returns:
- A nested dictionary where outer keys are feature names (e.g.,
- 'observation.state') and inner keys are statistic types ('mean', 'std'),
- mapping to their corresponding tensor values.
- """
- stats = {}
-
- # Define patterns to match and their prefixes to remove
- normalization_patterns = [
- "normalize_inputs.buffer_",
- "unnormalize_outputs.buffer_",
- "normalize_targets.buffer_",
- "normalize.", # Must come after normalize_* patterns
- "unnormalize.", # Must come after unnormalize_* patterns
- "input_normalizer.",
- "output_normalizer.",
- "normalalize_inputs.",
- "unnormalize_outputs.",
- "normalize_targets.",
- "unnormalize_targets.",
- ]
-
- # Process each key in state_dict
- for key, tensor in state_dict.items():
- # Try each pattern
- for pattern in normalization_patterns:
- if key.startswith(pattern):
- # Extract the remaining part after the pattern
- remaining = key[len(pattern) :]
- parts = remaining.split(".")
-
- # Need at least feature name and stat type
- if len(parts) >= 2:
- # Last part is the stat type (mean, std, min, max, etc.)
- stat_type = parts[-1]
- # Everything else is the feature name
- feature_name = ".".join(parts[:-1]).replace("_", ".")
-
- # Add to stats
- if feature_name not in stats:
- stats[feature_name] = {}
- stats[feature_name][stat_type] = tensor.clone()
-
- # Only process the first matching pattern
- break
-
- return stats
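-
-# Worked example (illustrative, not part of the original script): a single buffer key
-# is split into a feature name and a stat type, with underscores in the feature part
-# mapped back to dots.
-_demo_stats = extract_normalization_stats(
-    {"normalize_inputs.buffer_observation_state.mean": torch.zeros(7)}
-)
-assert "mean" in _demo_stats["observation.state"]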
-
-
-def detect_features_and_norm_modes(
- config: dict[str, Any], stats: dict[str, dict[str, torch.Tensor]]
-) -> tuple[dict[str, PolicyFeature], dict[FeatureType, NormalizationMode]]:
- """
- Infers policy features and normalization modes from the model config and stats.
-
- This function first attempts to find feature definitions and normalization
- mappings directly from the policy's configuration file. If this information is
- not present, it infers it from the extracted normalization statistics, using
- tensor shapes to determine feature shapes and the presence of specific stat
- keys (e.g., 'mean'/'std' vs 'min'/'max') to determine the normalization mode.
- It applies sensible defaults if inference is not possible.
-
- Args:
- config: The policy's configuration dictionary from `config.json`.
- stats: The normalization statistics extracted from the model's state_dict.
-
- Returns:
- A tuple containing:
- - A dictionary mapping feature names to `PolicyFeature` objects.
- - A dictionary mapping `FeatureType` enums to `NormalizationMode` enums.
- """
- features = {}
- norm_modes = {}
-
- # First, check if there's a normalization_mapping in the config
- if "normalization_mapping" in config:
- print(f"Found normalization_mapping in config: {config['normalization_mapping']}")
- # Extract normalization modes from config
- for feature_type_str, mode_str in config["normalization_mapping"].items():
- # Convert string to FeatureType enum
- try:
- if feature_type_str == "VISUAL":
- feature_type = FeatureType.VISUAL
- elif feature_type_str == "STATE":
- feature_type = FeatureType.STATE
- elif feature_type_str == "ACTION":
- feature_type = FeatureType.ACTION
- else:
- print(f"Warning: Unknown feature type '{feature_type_str}', skipping")
- continue
- except (AttributeError, ValueError):
- print(f"Warning: Could not parse feature type '{feature_type_str}', skipping")
- continue
-
- # Convert string to NormalizationMode enum
- try:
- if mode_str == "MEAN_STD":
- mode = NormalizationMode.MEAN_STD
- elif mode_str == "MIN_MAX":
- mode = NormalizationMode.MIN_MAX
- elif mode_str == "IDENTITY":
- mode = NormalizationMode.IDENTITY
- else:
- print(
- f"Warning: Unknown normalization mode '{mode_str}' for feature type '{feature_type_str}'"
- )
- continue
- except (AttributeError, ValueError):
- print(f"Warning: Could not parse normalization mode '{mode_str}', skipping")
- continue
-
- norm_modes[feature_type] = mode
-
- # Try to extract from config
- if "features" in config:
- for key, feature_config in config["features"].items():
- shape = feature_config.get("shape", feature_config.get("dim"))
- shape = (shape,) if isinstance(shape, int) else tuple(shape)
-
- # Determine feature type
- if "image" in key or "visual" in key:
- feature_type = FeatureType.VISUAL
- elif "state" in key:
- feature_type = FeatureType.STATE
- elif ACTION in key:
- feature_type = FeatureType.ACTION
- else:
- feature_type = FeatureType.STATE # Default
-
- features[key] = PolicyFeature(feature_type, shape)
-
- # If no features in config, infer from stats
- if not features:
- for key, stat_dict in stats.items():
- # Get shape from any stat tensor
- tensor = next(iter(stat_dict.values()))
- shape = tuple(tensor.shape)
-
- # Determine feature type based on key
- if "image" in key or "visual" in key or "pixels" in key:
- feature_type = FeatureType.VISUAL
- elif "state" in key or "joint" in key or "position" in key:
- feature_type = FeatureType.STATE
- elif ACTION in key:
- feature_type = FeatureType.ACTION
- else:
- feature_type = FeatureType.STATE
-
- features[key] = PolicyFeature(feature_type, shape)
-
- # If normalization modes weren't in config, determine based on available stats
- if not norm_modes:
- for key, stat_dict in stats.items():
- if key in features:
- if "mean" in stat_dict and "std" in stat_dict:
- feature_type = features[key].type
- if feature_type not in norm_modes:
- norm_modes[feature_type] = NormalizationMode.MEAN_STD
- elif "min" in stat_dict and "max" in stat_dict:
- feature_type = features[key].type
- if feature_type not in norm_modes:
- norm_modes[feature_type] = NormalizationMode.MIN_MAX
-
- # Default normalization modes if not detected
- if FeatureType.VISUAL not in norm_modes:
- norm_modes[FeatureType.VISUAL] = NormalizationMode.MEAN_STD
- if FeatureType.STATE not in norm_modes:
- norm_modes[FeatureType.STATE] = NormalizationMode.MIN_MAX
- if FeatureType.ACTION not in norm_modes:
- norm_modes[FeatureType.ACTION] = NormalizationMode.MEAN_STD
-
- return features, norm_modes
-
-
-def remove_normalization_layers(state_dict: dict[str, torch.Tensor]) -> dict[str, torch.Tensor]:
- """
- Creates a new state_dict with all normalization-related layers removed.
-
- This function filters the original state dictionary, excluding any keys that
- match a set of predefined patterns associated with normalization modules.
-
- Args:
- state_dict: The original model state dictionary.
-
- Returns:
- A new state dictionary containing only the core model weights, without
- any normalization parameters.
- """
- new_state_dict = {}
-
- # Patterns to remove
- remove_patterns = [
- "normalize_inputs.",
- "unnormalize_outputs.",
- "normalize_targets.", # Added pattern for target normalization
- "normalize.",
- "unnormalize.",
- "input_normalizer.",
- "output_normalizer.",
- "normalizer.",
- ]
-
- for key, tensor in state_dict.items():
- should_remove = any(pattern in key for pattern in remove_patterns)
- if not should_remove:
- new_state_dict[key] = tensor
-
- return new_state_dict
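-
-# Worked example (illustrative, not part of the original script): only keys matching
-# a normalization pattern are dropped; core model weights survive untouched.
-_demo_clean = remove_normalization_layers({
-    "model.backbone.weight": torch.zeros(1),
-    "normalize_inputs.buffer_observation_state.mean": torch.zeros(7),
-})
-assert list(_demo_clean) == ["model.backbone.weight"]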
-
-
-def clean_state_dict(
- state_dict: dict[str, torch.Tensor], remove_str: str = "._orig_mod"
-) -> dict[str, torch.Tensor]:
- """
- Remove a substring (e.g. '._orig_mod') from all keys in a state dict.
-
- Args:
- state_dict (dict): The original state dict.
- remove_str (str): The substring to remove from the keys.
-
- Returns:
- dict: A new state dict with cleaned keys.
- """
- new_state_dict = {}
- for k, v in state_dict.items():
- new_k = k.replace(remove_str, "")
- new_state_dict[new_k] = v
- return new_state_dict
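-
-# Worked example (illustrative, not part of the original script): torch.compile
-# prefixes submodules with "._orig_mod"; stripping it restores the key layout
-# expected by load_state_dict.
-_demo_keys = clean_state_dict({"model._orig_mod.layer.weight": torch.zeros(1)})
-assert list(_demo_keys) == ["model.layer.weight"]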
-
-
-def load_state_dict_with_missing_key_handling(
- policy: torch.nn.Module,
- state_dict: dict[str, torch.Tensor],
- policy_type: str,
- known_missing_keys_whitelist: dict[str, list[str]],
-) -> list[str]:
- """
- Load state dict into policy with graceful handling of missing keys.
-
- This function loads the state dict with strict=False, filters out whitelisted
- missing keys, and provides detailed reporting about any issues found.
-
- Args:
- policy: The policy model to load the state dict into.
- state_dict: The cleaned state dictionary to load.
- policy_type: The type of policy (used for whitelist lookup).
- known_missing_keys_whitelist: Dictionary mapping policy types to lists of
- known acceptable missing keys.
-
- Returns:
- List of problematic missing keys that weren't in the whitelist.
- """
- # Load the cleaned state dict with strict=False to capture missing/unexpected keys
- load_result = policy.load_state_dict(state_dict, strict=False)
-
- # Check for missing keys
- missing_keys = load_result.missing_keys
- unexpected_keys = load_result.unexpected_keys
-
- # Filter out whitelisted missing keys
- policy_type_lower = policy_type.lower()
- whitelisted_keys = known_missing_keys_whitelist.get(policy_type_lower, [])
- problematic_missing_keys = [key for key in missing_keys if key not in whitelisted_keys]
-
- if missing_keys:
- if problematic_missing_keys:
- print(f"WARNING: Found {len(problematic_missing_keys)} unexpected missing keys:")
- for key in problematic_missing_keys:
- print(f" - {key}")
-
- if len(missing_keys) > len(problematic_missing_keys):
- whitelisted_missing = [key for key in missing_keys if key in whitelisted_keys]
- print(f"INFO: Found {len(whitelisted_missing)} expected missing keys (whitelisted):")
- for key in whitelisted_missing:
- print(f" - {key}")
-
- if unexpected_keys:
- print(f"WARNING: Found {len(unexpected_keys)} unexpected keys:")
- for key in unexpected_keys:
- print(f" - {key}")
-
- if not missing_keys and not unexpected_keys:
- print("Successfully loaded cleaned state dict into policy model (all keys matched)")
- else:
- print("State dict loaded with some missing/unexpected keys (see details above)")
-
- return problematic_missing_keys
-
-
-def convert_features_to_policy_features(features_dict: dict[str, dict]) -> dict[str, PolicyFeature]:
- """
- Converts a feature dictionary from the old config format to the new `PolicyFeature` format.
-
- Args:
- features_dict: The feature dictionary in the old format, where values are
- simple dictionaries (e.g., `{"shape": [7]}`).
-
- Returns:
- A dictionary mapping feature names to `PolicyFeature` dataclass objects.
- """
- converted_features = {}
-
- for key, feature_dict in features_dict.items():
- # Determine feature type based on key
- if "image" in key or "visual" in key:
- feature_type = FeatureType.VISUAL
- elif "state" in key:
- feature_type = FeatureType.STATE
- elif ACTION in key:
- feature_type = FeatureType.ACTION
- else:
- feature_type = FeatureType.STATE
-
- # Get shape from feature dict
- shape = feature_dict.get("shape", feature_dict.get("dim"))
- shape = (shape,) if isinstance(shape, int) else tuple(shape) if shape is not None else ()
-
- converted_features[key] = PolicyFeature(feature_type, shape)
-
- return converted_features
-
-
-def display_migration_summary_with_warnings(problematic_missing_keys: list[str]) -> None:
- """
- Display final migration summary with warnings about problematic missing keys.
-
- Args:
- problematic_missing_keys: List of missing keys that weren't in the whitelist.
- """
- if not problematic_missing_keys:
- return
-
- print("\n" + "=" * 60)
- print("IMPORTANT: MIGRATION COMPLETED WITH WARNINGS")
- print("=" * 60)
- print(
- f"The migration was successful, but {len(problematic_missing_keys)} unexpected missing keys were found:"
- )
- print()
- for key in problematic_missing_keys:
- print(f" - {key}")
- print()
- print("These missing keys may indicate:")
- print(" • The model architecture has changed")
- print(" • Some components were not properly saved in the original model")
- print(" • The migration script needs to be updated for this policy type")
- print()
- print("What to do next:")
- print(" 1. Test your migrated model carefully to ensure it works as expected")
- print(" 2. If you encounter issues, please open an issue at:")
- print(" https://github.com/huggingface/lerobot/issues")
- print(" 3. Include this migration log and the missing keys listed above")
- print()
- print("If the model works correctly despite these warnings, the missing keys")
- print("might be expected for your policy type and can be added to the whitelist.")
- print("=" * 60)
-
-
-def load_model_from_hub(
- repo_id: str, revision: str | None = None
-) -> tuple[dict[str, torch.Tensor], dict[str, Any], dict[str, Any] | None]:
- """
- Downloads and loads a model's state_dict and configs from the Hugging Face Hub.
-
- Args:
- repo_id: The repository ID on the Hub (e.g., 'lerobot/aloha').
- revision: The specific git revision (branch, tag, or commit hash) to use.
-
- Returns:
- A tuple containing the model's state dictionary, the policy configuration,
- and the training configuration (None if train_config.json is not found).
- """
- # Download files.
- safetensors_path = hf_hub_download(repo_id=repo_id, filename="model.safetensors", revision=revision)
-
- config_path = hf_hub_download(repo_id=repo_id, filename="config.json", revision=revision)
-
- # Load state_dict
- state_dict = load_safetensors(safetensors_path)
-
- # Load config
- with open(config_path) as f:
- config = json.load(f)
-
- # Try to load train_config (optional)
- train_config = None
- try:
- train_config_path = hf_hub_download(repo_id=repo_id, filename="train_config.json", revision=revision)
- with open(train_config_path) as f:
- train_config = json.load(f)
- except FileNotFoundError:
- print("train_config.json not found - continuing without training configuration")
-
- return state_dict, config, train_config
-
-
-def main():
- parser = argparse.ArgumentParser(
- description="Migrate policy models with normalization layers to new pipeline system"
- )
- parser.add_argument(
- "--pretrained-path",
- type=str,
- required=True,
- help="Path to pretrained model (hub repo or local directory)",
- )
- parser.add_argument(
- "--output-dir",
- type=str,
- default=None,
- help="Output directory for migrated model (default: same as pretrained-path)",
- )
- parser.add_argument("--push-to-hub", action="store_true", help="Push migrated model to hub")
- parser.add_argument(
- "--hub-repo-id",
- type=str,
- default=None,
- help="Hub repository ID for pushing (default: same as pretrained-path)",
- )
- parser.add_argument("--revision", type=str, default=None, help="Revision of the model to load")
- parser.add_argument("--private", action="store_true", help="Make the hub repository private")
- parser.add_argument(
- "--branch",
- type=str,
- default=None,
- help="Git branch to use when pushing to hub. If specified, a PR will be created automatically (default: push directly to main)",
- )
-
- args = parser.parse_args()
-
- # Load model and config
- print(f"Loading model from {args.pretrained_path}...")
- if os.path.isdir(args.pretrained_path):
- # Local directory
- state_dict = load_safetensors(os.path.join(args.pretrained_path, "model.safetensors"))
- with open(os.path.join(args.pretrained_path, "config.json")) as f:
- config = json.load(f)
-
- # Try to load train_config (optional)
- train_config = None
- train_config_path = os.path.join(args.pretrained_path, "train_config.json")
- if os.path.exists(train_config_path):
- with open(train_config_path) as f:
- train_config = json.load(f)
- else:
- print("train_config.json not found - continuing without training configuration")
- else:
- # Hub repository
- state_dict, config, train_config = load_model_from_hub(args.pretrained_path, args.revision)
-
- # Extract normalization statistics
- print("Extracting normalization statistics...")
- stats = extract_normalization_stats(state_dict)
-
- print(f"Found normalization statistics for: {list(stats.keys())}")
-
- # Detect input features and normalization modes
- print("Detecting features and normalization modes...")
- features, norm_map = detect_features_and_norm_modes(config, stats)
-
- print(f"Detected features: {list(features.keys())}")
- print(f"Normalization modes: {norm_map}")
-
- # Remove normalization layers from state_dict
- print("Removing normalization layers from model...")
- new_state_dict = remove_normalization_layers(state_dict)
- new_state_dict = clean_state_dict(new_state_dict, remove_str="._orig_mod")
-
- removed_keys = set(state_dict.keys()) - set(new_state_dict.keys())
- if removed_keys:
- print(f"Removed {len(removed_keys)} normalization layer keys")
-
- # Determine output path
- if args.output_dir:
- output_dir = Path(args.output_dir)
- else:
- if os.path.isdir(args.pretrained_path):
- output_dir = Path(args.pretrained_path).parent / f"{Path(args.pretrained_path).name}_migrated"
- else:
- output_dir = Path(f"./{args.pretrained_path.replace('/', '_')}_migrated")
-
- output_dir.mkdir(parents=True, exist_ok=True)
-
- # Extract policy type from config
- if "type" not in config:
- raise ValueError("Policy type not found in config.json. The config must contain a 'type' field.")
-
- policy_type = config["type"]
- print(f"Detected policy type: {policy_type}")
-
- # Clean up config - remove fields that shouldn't be passed to config constructor
- cleaned_config = dict(config)
-
- # Remove fields that are not part of the config class constructors
- fields_to_remove = ["normalization_mapping", "type"]
- for field in fields_to_remove:
- if field in cleaned_config:
- print(f"Removing '{field}' field from config")
- del cleaned_config[field]
-
- # Convert input_features and output_features to PolicyFeature objects if they exist
- if "input_features" in cleaned_config:
- cleaned_config["input_features"] = convert_features_to_policy_features(
- cleaned_config["input_features"]
- )
- if "output_features" in cleaned_config:
- cleaned_config["output_features"] = convert_features_to_policy_features(
- cleaned_config["output_features"]
- )
-
- # Add normalization mapping to config
- cleaned_config["normalization_mapping"] = norm_map
-
- # Create policy configuration using the factory
- print(f"Creating {policy_type} policy configuration...")
- policy_config = make_policy_config(policy_type, **cleaned_config)
-
- # Create policy instance using the factory
- print(f"Instantiating {policy_type} policy...")
- policy_class = get_policy_class(policy_type)
- policy = policy_class(policy_config)
-
-    # Define a whitelist of missing keys that are acceptable for certain policy types (e.g., tied weights)
- known_missing_keys_whitelist = {
- "pi0": ["model.paligemma_with_expert.paligemma.model.language_model.embed_tokens.weight"],
- # Add other policy types and their known missing keys here as needed
- }
-
- # Load state dict with graceful missing key handling
- problematic_missing_keys = load_state_dict_with_missing_key_handling(
- policy=policy,
- state_dict=new_state_dict,
- policy_type=policy_type,
- known_missing_keys_whitelist=known_missing_keys_whitelist,
- )
- policy.to(torch.float32)
- # Create preprocessor and postprocessor using the factory
- print("Creating preprocessor and postprocessor using make_pre_post_processors...")
- preprocessor, postprocessor = make_pre_post_processors(policy_cfg=policy_config, dataset_stats=stats)
-
- # Determine hub repo ID if pushing to hub
- hub_repo_id = None
- if args.push_to_hub:
- if args.hub_repo_id:
- hub_repo_id = args.hub_repo_id
- else:
- if not os.path.isdir(args.pretrained_path):
- # Use same repo with "_migrated" suffix
- hub_repo_id = f"{args.pretrained_path}_migrated"
- else:
- raise ValueError("--hub-repo-id must be specified when pushing local model to hub")
-
- # Save all components to local directory first
- print(f"Saving preprocessor to {output_dir}...")
- preprocessor.save_pretrained(output_dir)
-
- print(f"Saving postprocessor to {output_dir}...")
- postprocessor.save_pretrained(output_dir)
-
- print(f"Saving model to {output_dir}...")
- policy.save_pretrained(output_dir)
-
- # Generate and save model card
- print("Generating model card...")
- # Get metadata from original config
- dataset_repo_id = "unknown"
- if train_config is not None:
- dataset_repo_id = train_config.get("repo_id", "unknown")
- license = config.get("license", "apache-2.0")
-
- tags = config.get("tags", ["robotics", "lerobot", policy_type]) or ["robotics", "lerobot", policy_type]
- tags = set(tags).union({"robotics", "lerobot", policy_type})
- tags = list(tags)
-
- # Generate model card
- card = policy.generate_model_card(
- dataset_repo_id=dataset_repo_id, model_type=policy_type, license=license, tags=tags
- )
-
- # Save model card locally
- card.save(str(output_dir / "README.md"))
- print(f"Model card saved to {output_dir / 'README.md'}")
- # Push all files to hub in a single operation if requested
- if args.push_to_hub and hub_repo_id:
- api = HfApi()
-
- # Determine if we should create a PR (automatically if branch is specified)
- create_pr = args.branch is not None
- target_location = f"branch '{args.branch}'" if args.branch else "main branch"
-
- print(f"Pushing all migrated files to {hub_repo_id} on {target_location}...")
-
- # Upload all files in a single commit with automatic PR creation if branch specified
- commit_message = "Migrate policy to PolicyProcessorPipeline system"
- commit_description = None
-
- if create_pr:
- # Separate commit description for PR body
- commit_description = """**Automated Policy Migration to PolicyProcessorPipeline**
-
-This PR migrates your model to the new LeRobot policy format using the modern PolicyProcessorPipeline architecture.
-
-## What Changed
-
-### **New Architecture - PolicyProcessorPipeline**
-Your model now uses external PolicyProcessorPipeline components for data processing instead of built-in normalization layers. This provides:
-- **Modularity**: Separate preprocessing and postprocessing pipelines
-- **Flexibility**: Easy to swap, configure, and debug processing steps
-- **Compatibility**: Works with the latest LeRobot ecosystem
-
-### **Normalization Extraction**
-We've extracted normalization statistics from your model's state_dict and removed the built-in normalization layers:
-- **Extracted patterns**: `normalize_inputs.*`, `unnormalize_outputs.*`, `normalize.*`, `unnormalize.*`, `input_normalizer.*`, `output_normalizer.*`
-- **Statistics preserved**: Mean, std, min, max values for all features
-- **Clean model**: State dict now contains only core model weights
-
-### **Files Added**
-- **preprocessor_config.json**: Configuration for input preprocessing pipeline
-- **postprocessor_config.json**: Configuration for output postprocessing pipeline
-- **model.safetensors**: Clean model weights without normalization layers
-- **config.json**: Updated model configuration
-- **train_config.json**: Training configuration
-- **README.md**: Updated model card with migration information
-
-### **Benefits**
-- **Backward Compatible**: Your model behavior remains identical
-- **Future Ready**: Compatible with latest LeRobot features and updates
-- **Debuggable**: Easy to inspect and modify processing steps
-- **Portable**: Processors can be shared and reused across models
-
-### **Usage**
-```python
-# Load your migrated model
-from lerobot.policies import get_policy_class
-from lerobot.processor import PolicyProcessorPipeline
-
-# The preprocessor and postprocessor are now external
-preprocessor = PolicyProcessorPipeline.from_pretrained("your-model-repo", config_filename="preprocessor_config.json")
-postprocessor = PolicyProcessorPipeline.from_pretrained("your-model-repo", config_filename="postprocessor_config.json")
-policy = get_policy_class("your-policy-type").from_pretrained("your-model-repo")
-
-# Process data through the pipeline
-processed_batch = preprocessor(raw_batch)
-action = policy(processed_batch)
-final_action = postprocessor(action)
-```
-
-*Generated automatically by the LeRobot policy migration script*"""
-
- upload_kwargs = {
- "repo_id": hub_repo_id,
- "folder_path": output_dir,
- "repo_type": "model",
- "commit_message": commit_message,
- "revision": args.branch,
- "create_pr": create_pr,
- "allow_patterns": ["*.json", "*.safetensors", "*.md"],
- "ignore_patterns": ["*.tmp", "*.log"],
- }
-
- # Add commit_description for PR body if creating PR
- if create_pr and commit_description:
- upload_kwargs["commit_description"] = commit_description
-
- api.upload_folder(**upload_kwargs)
-
- if create_pr:
- print("All files pushed and pull request created successfully!")
- else:
- print("All files pushed to main branch successfully!")
-
- print("\nMigration complete!")
- print(f"Migrated model saved to: {output_dir}")
- if args.push_to_hub and hub_repo_id:
- if args.branch:
- print(
- f"Successfully pushed all files to branch '{args.branch}' and created PR on https://huggingface.co/{hub_repo_id}"
- )
- else:
- print(f"Successfully pushed to https://huggingface.co/{hub_repo_id}")
- if args.branch:
- print(f"\nView the branch at: https://huggingface.co/{hub_repo_id}/tree/{args.branch}")
- print(
- f"View the PR at: https://huggingface.co/{hub_repo_id}/discussions (look for the most recent PR)"
- )
- else:
- print(f"\nView the changes at: https://huggingface.co/{hub_repo_id}")
-
- # Display final summary about any problematic missing keys
- display_migration_summary_with_warnings(problematic_missing_keys)
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/processor/normalize_processor.py b/lerobot/src/lerobot/processor/normalize_processor.py
deleted file mode 100644
index 309f8c95463f618a991ede40e13d32f56de37d1b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/normalize_processor.py
+++ /dev/null
@@ -1,560 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from __future__ import annotations
-
-from copy import deepcopy
-from dataclasses import dataclass, field
-from typing import Any
-
-import torch
-from torch import Tensor
-
-from lerobot.configs.types import FeatureType, NormalizationMode, PipelineFeatureType, PolicyFeature
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.utils.constants import ACTION
-
-from .converters import from_tensor_to_numpy, to_tensor
-from .core import EnvTransition, PolicyAction, TransitionKey
-from .pipeline import PolicyProcessorPipeline, ProcessorStep, ProcessorStepRegistry, RobotObservation
-
-
-@dataclass
-class _NormalizationMixin:
- """
- A mixin class providing core functionality for normalization and unnormalization.
-
- This class manages normalization statistics (`stats`), converts them to tensors for
- efficient computation, handles device placement, and implements the logic for
- applying normalization transformations (mean/std and min/max). It is designed to
- be inherited by concrete `ProcessorStep` implementations and should not be used
- directly.
-
- **Stats Override Preservation:**
- When stats are explicitly provided during construction (e.g., via overrides in
- `DataProcessorPipeline.from_pretrained()`), they are preserved even when
- `load_state_dict()` is called. This allows users to override normalization
- statistics from saved models while keeping the rest of the model state intact.
-
- Examples:
- ```python
- # Common use case: Override with dataset stats
- from lerobot.datasets import LeRobotDataset
-
- dataset = LeRobotDataset("my_dataset")
- pipeline = DataProcessorPipeline.from_pretrained(
- "model_path", overrides={"normalizer_processor": {"stats": dataset.meta.stats}}
- )
- # dataset.meta.stats will be used, not the stats from the saved model
-
- # Custom stats override
- custom_stats = {"action": {"mean": [0.0], "std": [1.0]}}
- pipeline = DataProcessorPipeline.from_pretrained(
- "model_path", overrides={"normalizer_processor": {"stats": custom_stats}}
- )
- ```
-
- Attributes:
- features: A dictionary mapping feature names to `PolicyFeature` objects, defining
- the data structure to be processed.
- norm_map: A dictionary mapping `FeatureType` to `NormalizationMode`, specifying
- which normalization method to use for each type of feature.
- stats: A dictionary containing the normalization statistics (e.g., mean, std,
- min, max) for each feature.
- device: The PyTorch device on which to store and perform tensor operations.
- eps: A small epsilon value to prevent division by zero in normalization
- calculations.
- normalize_observation_keys: An optional set of keys to selectively apply
- normalization to specific observation features.
- _tensor_stats: An internal dictionary holding the normalization statistics as
- PyTorch tensors.
- _stats_explicitly_provided: Internal flag tracking whether stats were explicitly
- provided during construction (used for override preservation).
- """
-
- features: dict[str, PolicyFeature]
- norm_map: dict[FeatureType, NormalizationMode]
- stats: dict[str, dict[str, Any]] | None = None
- device: torch.device | str | None = None
- dtype: torch.dtype | None = None
- eps: float = 1e-8
- normalize_observation_keys: set[str] | None = None
-
- _tensor_stats: dict[str, dict[str, Tensor]] = field(default_factory=dict, init=False, repr=False)
- _stats_explicitly_provided: bool = field(default=False, init=False, repr=False)
-
- def __post_init__(self):
- """
- Initializes the mixin after dataclass construction.
-
- This method handles the robust deserialization of `features` and `norm_map`
- from JSON-compatible formats (where enums become strings and tuples become
- lists) and converts the provided `stats` dictionary into a dictionary of
- tensors (`_tensor_stats`) on the specified device.
- """
- # Track if stats were explicitly provided (not None and not empty)
- self._stats_explicitly_provided = self.stats is not None and bool(self.stats)
- # Robust JSON deserialization handling (guard empty maps).
- if self.features:
- first_val = next(iter(self.features.values()))
- if isinstance(first_val, dict):
- reconstructed = {}
- for key, ft_dict in self.features.items():
- reconstructed[key] = PolicyFeature(
- type=FeatureType(ft_dict["type"]), shape=tuple(ft_dict["shape"])
- )
- self.features = reconstructed
-
- # if keys are strings (JSON), rebuild enum map
- if self.norm_map and all(isinstance(k, str) for k in self.norm_map):
- reconstructed = {}
- for ft_type_str, norm_mode_str in self.norm_map.items():
- reconstructed[FeatureType(ft_type_str)] = NormalizationMode(norm_mode_str)
- self.norm_map = reconstructed
-
- # Convert stats to tensors and move to the target device once during initialization.
- self.stats = self.stats or {}
- if self.dtype is None:
- self.dtype = torch.float32
- self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=self.dtype)
-
- def to(
- self, device: torch.device | str | None = None, dtype: torch.dtype | None = None
- ) -> _NormalizationMixin:
- """
-        Moves the processor's normalization stats to the specified device and/or dtype.
-
-        Args:
-            device: The target PyTorch device.
-            dtype: The target dtype for the statistics tensors.
-
-        Returns:
-            The instance of the class, allowing for method chaining.
- """
- if device is not None:
- self.device = device
- if dtype is not None:
- self.dtype = dtype
- self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=self.dtype)
- return self
-
- def state_dict(self) -> dict[str, Tensor]:
- """
- Returns the normalization statistics as a flat state dictionary.
-
- All tensors are moved to the CPU before being returned, which is standard practice
- for saving state dictionaries.
-
- Returns:
- A flat dictionary mapping from `'feature_name.stat_name'` to the
- corresponding statistics tensor on the CPU.
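-
-        Example:
-            A sketch of the flat layout (keys are illustrative):
-            `{"observation.state.mean": tensor, "observation.state.std": tensor}`.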
- """
- flat: dict[str, Tensor] = {}
- for key, sub in self._tensor_stats.items():
- for stat_name, tensor in sub.items():
- flat[f"{key}.{stat_name}"] = tensor.cpu() # Always save to CPU
- return flat
-
- def load_state_dict(self, state: dict[str, Tensor]) -> None:
- """
- Loads normalization statistics from a state dictionary.
-
- The loaded tensors are moved to the processor's configured device.
-
- **Stats Override Preservation:**
- If stats were explicitly provided during construction (e.g., via overrides in
- `DataProcessorPipeline.from_pretrained()`), they are preserved and the state
- dictionary is ignored. This allows users to override normalization statistics
- while still loading the rest of the model state.
-
- This behavior is crucial for scenarios where users want to adapt a pretrained
- model to a new dataset with different statistics without retraining the entire
- model.
-
- Args:
- state: A flat state dictionary with keys in the format
- `'feature_name.stat_name'`.
-
- Note:
- When stats are preserved due to explicit provision, only the tensor
- representation is updated to ensure consistency with the current device
- and dtype settings.
- """
- # If stats were explicitly provided during construction, preserve them
- if self._stats_explicitly_provided and self.stats is not None:
- # Don't load from state_dict, keep the explicitly provided stats
- # But ensure _tensor_stats is properly initialized
- self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=self.dtype) # type: ignore[assignment]
- return
-
- # Normal behavior: load stats from state_dict
- self._tensor_stats.clear()
- for flat_key, tensor in state.items():
- key, stat_name = flat_key.rsplit(".", 1)
- # Load to the processor's configured device.
- self._tensor_stats.setdefault(key, {})[stat_name] = tensor.to(
- dtype=torch.float32, device=self.device
- )
-
- # Reconstruct the original stats dict from tensor stats for compatibility with to() method
- # and other functions that rely on self.stats
- self.stats = {}
- for key, tensor_dict in self._tensor_stats.items():
- self.stats[key] = {}
- for stat_name, tensor in tensor_dict.items():
- # Convert tensor back to python/numpy format
- self.stats[key][stat_name] = from_tensor_to_numpy(tensor)
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns a serializable dictionary of the processor's configuration.
-
- This method is used when saving the processor to disk, ensuring that its
- configuration can be reconstructed later.
-
- Returns:
- A JSON-serializable dictionary containing the configuration.
- """
- config = {
- "eps": self.eps,
- "features": {
- key: {"type": ft.type.value, "shape": ft.shape} for key, ft in self.features.items()
- },
- "norm_map": {ft_type.value: norm_mode.value for ft_type, norm_mode in self.norm_map.items()},
- }
- if self.normalize_observation_keys is not None:
- config["normalize_observation_keys"] = sorted(self.normalize_observation_keys)
- return config
-
- def _normalize_observation(self, observation: RobotObservation, inverse: bool) -> dict[str, Tensor]:
- """
- Applies (un)normalization to all relevant features in an observation dictionary.
-
- Args:
- observation: The observation dictionary to process.
- inverse: If `True`, applies unnormalization; otherwise, applies normalization.
-
- Returns:
- A new observation dictionary with the transformed tensor values.
- """
- new_observation = dict(observation)
- for key, feature in self.features.items():
- if self.normalize_observation_keys is not None and key not in self.normalize_observation_keys:
- continue
- if feature.type != FeatureType.ACTION and key in new_observation:
- # Convert to tensor but preserve original dtype for adaptation logic
- tensor = torch.as_tensor(new_observation[key])
- new_observation[key] = self._apply_transform(tensor, key, feature.type, inverse=inverse)
- return new_observation
-
- def _normalize_action(self, action: Tensor, inverse: bool) -> Tensor:
-        """
- Applies (un)normalization to an action tensor.
-
- Args:
- action: The action tensor to process.
- inverse: If `True`, applies unnormalization; otherwise, applies normalization.
-
- Returns:
- The transformed action tensor.
- """
- processed_action = self._apply_transform(action, ACTION, FeatureType.ACTION, inverse=inverse)
- return processed_action
-
- def _apply_transform(
- self, tensor: Tensor, key: str, feature_type: FeatureType, *, inverse: bool = False
- ) -> Tensor:
- """
- Core logic to apply a normalization or unnormalization transformation to a tensor.
-
- This method selects the appropriate normalization mode based on the feature type
- and applies the corresponding mathematical operation.
-
- Normalization Modes:
- - MEAN_STD: Centers data around zero with unit variance.
- - MIN_MAX: Scales data to [-1, 1] range using actual min/max values.
- - QUANTILES: Scales data to [-1, 1] range using 1st and 99th percentiles (q01/q99).
- - QUANTILE10: Scales data to [-1, 1] range using 10th and 90th percentiles (q10/q90).
-
- Args:
- tensor: The input tensor to transform.
- key: The feature key corresponding to the tensor.
- feature_type: The `FeatureType` of the tensor.
- inverse: If `True`, applies the inverse transformation (unnormalization).
-
- Returns:
- The transformed tensor.
-
- Raises:
- ValueError: If an unsupported normalization mode is encountered.
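-
-        Example:
-            A worked MIN_MAX case with illustrative stats min=0 and max=10:
-            an input of 2.5 maps to 2 * (2.5 - 0) / 10 - 1 = -0.5, and the
-            inverse maps -0.5 back to (-0.5 + 1) / 2 * 10 + 0 = 2.5.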
- """
- norm_mode = self.norm_map.get(feature_type, NormalizationMode.IDENTITY)
- if norm_mode == NormalizationMode.IDENTITY or key not in self._tensor_stats:
- return tensor
-
- if norm_mode not in (
- NormalizationMode.MEAN_STD,
- NormalizationMode.MIN_MAX,
- NormalizationMode.QUANTILES,
- NormalizationMode.QUANTILE10,
- ):
- raise ValueError(f"Unsupported normalization mode: {norm_mode}")
-
- # For Accelerate compatibility: Ensure stats are on the same device and dtype as the input tensor
- if self._tensor_stats and key in self._tensor_stats:
- first_stat = next(iter(self._tensor_stats[key].values()))
- if first_stat.device != tensor.device or first_stat.dtype != tensor.dtype:
- self.to(device=tensor.device, dtype=tensor.dtype)
-
- stats = self._tensor_stats[key]
-
- if norm_mode == NormalizationMode.MEAN_STD:
- mean = stats.get("mean", None)
- std = stats.get("std", None)
- if mean is None or std is None:
- raise ValueError(
- "MEAN_STD normalization mode requires mean and std stats, please update the dataset with the correct stats"
- )
-
- # Avoid division by zero by adding a small epsilon.
- denom = std + self.eps
- if inverse:
- return tensor * std + mean
- return (tensor - mean) / denom
-
- if norm_mode == NormalizationMode.MIN_MAX:
- min_val = stats.get("min", None)
- max_val = stats.get("max", None)
- if min_val is None or max_val is None:
- raise ValueError(
- "MIN_MAX normalization mode requires min and max stats, please update the dataset with the correct stats"
- )
-
- denom = max_val - min_val
- # When min_val == max_val, substitute the denominator with a small epsilon
- # to prevent division by zero. This consistently maps an input equal to
- # min_val to -1, ensuring a stable transformation.
- denom = torch.where(
- denom == 0, torch.tensor(self.eps, device=tensor.device, dtype=tensor.dtype), denom
- )
- if inverse:
- # Map from [-1, 1] back to [min, max]
- return (tensor + 1) / 2 * denom + min_val
- # Map from [min, max] to [-1, 1]
- return 2 * (tensor - min_val) / denom - 1
-
- if norm_mode == NormalizationMode.QUANTILES:
- q01 = stats.get("q01", None)
- q99 = stats.get("q99", None)
- if q01 is None or q99 is None:
- raise ValueError(
- "QUANTILES normalization mode requires q01 and q99 stats, please update the dataset with the correct stats using the `augment_dataset_quantile_stats.py` script"
- )
-
- denom = q99 - q01
- # Avoid division by zero by adding epsilon when quantiles are identical
- denom = torch.where(
- denom == 0, torch.tensor(self.eps, device=tensor.device, dtype=tensor.dtype), denom
- )
- if inverse:
- return (tensor + 1.0) * denom / 2.0 + q01
- return 2.0 * (tensor - q01) / denom - 1.0
-
- if norm_mode == NormalizationMode.QUANTILE10:
- q10 = stats.get("q10", None)
- q90 = stats.get("q90", None)
- if q10 is None or q90 is None:
- raise ValueError(
- "QUANTILE10 normalization mode requires q10 and q90 stats, please update the dataset with the correct stats using the `augment_dataset_quantile_stats.py` script"
- )
-
- denom = q90 - q10
- # Avoid division by zero by adding epsilon when quantiles are identical
- denom = torch.where(
- denom == 0, torch.tensor(self.eps, device=tensor.device, dtype=tensor.dtype), denom
- )
- if inverse:
- return (tensor + 1.0) * denom / 2.0 + q10
- return 2.0 * (tensor - q10) / denom - 1.0
-
-        # Defensive fallback; every supported mode above either returns or raises.
-        return tensor
-
-
-@dataclass
-@ProcessorStepRegistry.register(name="normalizer_processor")
-class NormalizerProcessorStep(_NormalizationMixin, ProcessorStep):
- """
- A processor step that applies normalization to observations and actions in a transition.
-
- This class uses the logic from `_NormalizationMixin` to perform forward normalization
- (e.g., scaling data to have zero mean and unit variance, or to the range [-1, 1]).
- It is typically used in the pre-processing pipeline before feeding data to a policy.
- """
-
- @classmethod
- def from_lerobot_dataset(
- cls,
- dataset: LeRobotDataset,
- features: dict[str, PolicyFeature],
- norm_map: dict[FeatureType, NormalizationMode],
- *,
- normalize_observation_keys: set[str] | None = None,
- eps: float = 1e-8,
- device: torch.device | str | None = None,
- ) -> NormalizerProcessorStep:
- """
- Creates a `NormalizerProcessorStep` instance using statistics from a `LeRobotDataset`.
-
- Args:
- dataset: The dataset from which to extract normalization statistics.
- features: The feature definition for the processor.
- norm_map: The mapping from feature types to normalization modes.
- normalize_observation_keys: An optional set of observation keys to normalize.
- eps: A small epsilon value for numerical stability.
- device: The target device for the processor.
-
- Returns:
- A new instance of `NormalizerProcessorStep`.
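-
-        Example:
-            ```python
-            # Minimal sketch; the dataset name, features, and norm_map are assumptions.
-            dataset = LeRobotDataset("user/my_dataset")
-            normalizer = NormalizerProcessorStep.from_lerobot_dataset(
-                dataset,
-                features={"action": PolicyFeature(type=FeatureType.ACTION, shape=(7,))},
-                norm_map={FeatureType.ACTION: NormalizationMode.MEAN_STD},
-            )
-            ```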
- """
- return cls(
- features=features,
- norm_map=norm_map,
- stats=dataset.meta.stats,
- normalize_observation_keys=normalize_observation_keys,
- eps=eps,
- device=device,
- )
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- new_transition = transition.copy()
-
- # Handle observation normalization.
- observation = new_transition.get(TransitionKey.OBSERVATION)
- if observation is not None:
- new_transition[TransitionKey.OBSERVATION] = self._normalize_observation(
- observation, inverse=False
- )
-
- # Handle action normalization.
- action = new_transition.get(TransitionKey.ACTION)
-
- if action is None:
- return new_transition
-
- if not isinstance(action, PolicyAction):
- raise ValueError(f"Action should be a PolicyAction type got {type(action)}")
-
- new_transition[TransitionKey.ACTION] = self._normalize_action(action, inverse=False)
-
- return new_transition
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
-
-
-@dataclass
-@ProcessorStepRegistry.register(name="unnormalizer_processor")
-class UnnormalizerProcessorStep(_NormalizationMixin, ProcessorStep):
- """
- A processor step that applies unnormalization to observations and actions.
-
- This class inverts the normalization process, scaling data back to its original
- range. It is typically used in the post-processing pipeline to convert a policy's
- normalized action output into a format that can be executed by a robot or
- environment.
- """
-
- @classmethod
- def from_lerobot_dataset(
- cls,
- dataset: LeRobotDataset,
- features: dict[str, PolicyFeature],
- norm_map: dict[FeatureType, NormalizationMode],
- *,
- device: torch.device | str | None = None,
- ) -> UnnormalizerProcessorStep:
- """
- Creates an `UnnormalizerProcessorStep` using statistics from a `LeRobotDataset`.
-
- Args:
- dataset: The dataset from which to extract normalization statistics.
- features: The feature definition for the processor.
- norm_map: The mapping from feature types to normalization modes.
- device: The target device for the processor.
-
- Returns:
- A new instance of `UnnormalizerProcessorStep`.
- """
- return cls(features=features, norm_map=norm_map, stats=dataset.meta.stats, device=device)
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- new_transition = transition.copy()
-
- # Handle observation unnormalization.
- observation = new_transition.get(TransitionKey.OBSERVATION)
- if observation is not None:
- new_transition[TransitionKey.OBSERVATION] = self._normalize_observation(observation, inverse=True)
-
- # Handle action unnormalization.
- action = new_transition.get(TransitionKey.ACTION)
-
- if action is None:
- return new_transition
- if not isinstance(action, PolicyAction):
- raise ValueError(f"Action should be a PolicyAction type got {type(action)}")
-
- new_transition[TransitionKey.ACTION] = self._normalize_action(action, inverse=True)
-
- return new_transition
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
-
-
-def hotswap_stats(
- policy_processor: PolicyProcessorPipeline, stats: dict[str, dict[str, Any]]
-) -> PolicyProcessorPipeline:
- """
- Replaces normalization statistics in an existing `PolicyProcessorPipeline` instance.
-
- This function creates a deep copy of the provided pipeline and updates the
- statistics of any `NormalizerProcessorStep` or `UnnormalizerProcessorStep` it
- contains. This is useful for adapting a trained policy to a new environment or
- dataset with different data distributions without having to reconstruct the entire
- pipeline.
-
- Args:
- policy_processor: The policy processor pipeline to modify.
- stats: The new dictionary of normalization statistics to apply.
-
- Returns:
- A new `PolicyProcessorPipeline` instance with the updated statistics.
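-
-    Example:
-        ```python
-        # Minimal sketch; `postprocessor` and the dataset name are assumptions.
-        new_stats = LeRobotDataset("user/new_dataset").meta.stats
-        adapted_postprocessor = hotswap_stats(postprocessor, new_stats)
-        ```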
- """
- rp = deepcopy(policy_processor)
- for step in rp.steps:
- if isinstance(step, _NormalizationMixin):
- step.stats = stats
- # Re-initialize tensor_stats on the correct device.
- step._tensor_stats = to_tensor(stats, device=step.device, dtype=step.dtype) # type: ignore[assignment]
- return rp
diff --git a/lerobot/src/lerobot/processor/observation_processor.py b/lerobot/src/lerobot/processor/observation_processor.py
deleted file mode 100644
index ec893ab041fc2f452a421f2a835851fc849e454c..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/observation_processor.py
+++ /dev/null
@@ -1,206 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from dataclasses import dataclass
-
-import einops
-import numpy as np
-import torch
-from torch import Tensor
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-from lerobot.utils.constants import OBS_ENV_STATE, OBS_IMAGE, OBS_IMAGES, OBS_STATE, OBS_STR
-
-from .pipeline import ObservationProcessorStep, ProcessorStepRegistry
-
-
-@dataclass
-@ProcessorStepRegistry.register(name="observation_processor")
-class VanillaObservationProcessorStep(ObservationProcessorStep):
- """
- Processes standard Gymnasium observations into the LeRobot format.
-
- This step handles both image and state data from a typical observation dictionary,
- preparing it for use in a LeRobot policy.
-
- **Image Processing:**
- - Converts channel-last (H, W, C), `uint8` images to channel-first (C, H, W),
- `float32` tensors.
- - Normalizes pixel values from the [0, 255] range to [0, 1].
- - Adds a batch dimension if one is not already present.
- - Recognizes a single image under the key `"pixels"` and maps it to
- `"observation.image"`.
- - Recognizes a dictionary of images under the key `"pixels"` and maps them
- to `"observation.images.{camera_name}"`.
-
- **State Processing:**
- - Maps the `"environment_state"` key to `"observation.environment_state"`.
- - Maps the `"agent_pos"` key to `"observation.state"`.
- - Converts NumPy arrays to PyTorch tensors.
- - Adds a batch dimension if one is not already present.
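-
-    Example:
-        ```python
-        # Minimal sketch of a gym-style observation; shapes are illustrative.
-        obs = {
-            "pixels": np.zeros((96, 96, 3), dtype=np.uint8),
-            "agent_pos": np.zeros(7, dtype=np.float32),
-        }
-        out = VanillaObservationProcessorStep().observation(obs)
-        # out["observation.image"].shape == (1, 3, 96, 96)
-        # out["observation.state"].shape == (1, 7)
-        ```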
- """
-
- def _process_single_image(self, img: np.ndarray) -> Tensor:
- """
- Processes a single NumPy image array into a channel-first, normalized tensor.
-
- Args:
- img: A NumPy array representing the image, expected to be in channel-last
- (H, W, C) format with a `uint8` dtype.
-
- Returns:
- A `float32` PyTorch tensor in channel-first (B, C, H, W) format, with
- pixel values normalized to the [0, 1] range.
-
- Raises:
- ValueError: If the input image does not appear to be in channel-last
- format or is not of `uint8` dtype.
- """
- # Convert to tensor
- img_tensor = torch.from_numpy(img)
-
- # Add batch dimension if needed
- if img_tensor.ndim == 3:
- img_tensor = img_tensor.unsqueeze(0)
-
- # Validate image format
- _, h, w, c = img_tensor.shape
- if not (c < h and c < w):
- raise ValueError(f"Expected channel-last images, but got shape {img_tensor.shape}")
-
- if img_tensor.dtype != torch.uint8:
- raise ValueError(f"Expected torch.uint8 images, but got {img_tensor.dtype}")
-
- # Convert to channel-first format
- img_tensor = einops.rearrange(img_tensor, "b h w c -> b c h w").contiguous()
-
- # Convert to float32 and normalize to [0, 1]
- img_tensor = img_tensor.type(torch.float32) / 255.0
-
- return img_tensor
-
- def _process_observation(self, observation):
- """
- Processes both image and state observations.
- """
-
- processed_obs = observation.copy()
-
- if "pixels" in processed_obs:
- pixels = processed_obs.pop("pixels")
-
- if isinstance(pixels, dict):
- imgs = {f"{OBS_IMAGES}.{key}": img for key, img in pixels.items()}
- else:
- imgs = {OBS_IMAGE: pixels}
-
- for imgkey, img in imgs.items():
- processed_obs[imgkey] = self._process_single_image(img)
-
- if "environment_state" in processed_obs:
- env_state_np = processed_obs.pop("environment_state")
- env_state = torch.from_numpy(env_state_np).float()
- if env_state.dim() == 1:
- env_state = env_state.unsqueeze(0)
- processed_obs[OBS_ENV_STATE] = env_state
-
- if "agent_pos" in processed_obs:
- agent_pos_np = processed_obs.pop("agent_pos")
- agent_pos = torch.from_numpy(agent_pos_np).float()
- if agent_pos.dim() == 1:
- agent_pos = agent_pos.unsqueeze(0)
- processed_obs[OBS_STATE] = agent_pos
-
- return processed_obs
-
- def observation(self, observation):
- return self._process_observation(observation)
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Transforms feature keys from the Gym standard to the LeRobot standard.
-
- This method standardizes the feature dictionary by renaming keys according
- to LeRobot's conventions, ensuring that policies can be constructed correctly.
- It handles various raw key formats, including those with an "observation." prefix.
-
- **Renaming Rules:**
- - `pixels` or `observation.pixels` -> `observation.image`
- - `pixels.{cam}` or `observation.pixels.{cam}` -> `observation.images.{cam}`
- - `environment_state` or `observation.environment_state` -> `observation.environment_state`
- - `agent_pos` or `observation.agent_pos` -> `observation.state`
-
- Args:
- features: The policy features dictionary with Gym-style keys.
-
- Returns:
- The policy features dictionary with standardized LeRobot keys.
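-
-        Example:
-            A sketch of the renaming (feature values are illustrative):
-            `{"pixels.cam1": feat, "agent_pos": feat}` maps to
-            `{"observation.images.cam1": feat, "observation.state": feat}`.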
- """
- # Build a new features mapping keyed by the same FeatureType buckets
- # We assume callers already placed features in the correct FeatureType.
- new_features: dict[PipelineFeatureType, dict[str, PolicyFeature]] = {ft: {} for ft in features}
-
- exact_pairs = {
- "pixels": OBS_IMAGE,
- "environment_state": OBS_ENV_STATE,
- "agent_pos": OBS_STATE,
- }
-
- prefix_pairs = {
- "pixels.": f"{OBS_IMAGES}.",
- }
-
- # Iterate over all incoming feature buckets and normalize/move each entry
- for src_ft, bucket in features.items():
- for key, feat in list(bucket.items()):
- handled = False
-
- # Prefix-based rules (e.g. pixels.cam1 -> OBS_IMAGES.cam1)
- for old_prefix, new_prefix in prefix_pairs.items():
- prefixed_old = f"{OBS_STR}.{old_prefix}"
- if key.startswith(prefixed_old):
- suffix = key[len(prefixed_old) :]
- new_key = f"{new_prefix}{suffix}"
- new_features[src_ft][new_key] = feat
- handled = True
- break
-
- if key.startswith(old_prefix):
- suffix = key[len(old_prefix) :]
- new_key = f"{new_prefix}{suffix}"
- new_features[src_ft][new_key] = feat
- handled = True
- break
-
- if handled:
- continue
-
- # Exact-name rules (pixels, environment_state, agent_pos)
- for old, new in exact_pairs.items():
- if key == old or key == f"{OBS_STR}.{old}":
- new_key = new
- new_features[src_ft][new_key] = feat
- handled = True
- break
-
- if handled:
- continue
-
- # Default: keep key in the same source FeatureType bucket
- new_features[src_ft][key] = feat
-
- return new_features
diff --git a/lerobot/src/lerobot/processor/pipeline.py b/lerobot/src/lerobot/processor/pipeline.py
deleted file mode 100644
index 37ad25dd7c1e1ad08e051d24ec801afd001b74b9..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/pipeline.py
+++ /dev/null
@@ -1,1716 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This module defines a generic, sequential data processing pipeline framework, primarily designed for
-transforming robotics data (observations, actions, rewards, etc.).
-
-The core components are:
-- ProcessorStep: An abstract base class for a single data transformation operation.
-- ProcessorStepRegistry: A mechanism to register and retrieve ProcessorStep classes by name.
-- DataProcessorPipeline: A class that chains multiple ProcessorStep instances together to form a complete
- data processing workflow. It integrates with the Hugging Face Hub for easy sharing and versioning of
- pipelines, including their configuration and state.
-- Specialized abstract ProcessorStep subclasses (e.g., ObservationProcessorStep, ActionProcessorStep)
- to simplify the creation of steps that target specific parts of a data transition.
-"""
-
-from __future__ import annotations
-
-import importlib
-import json
-import os
-import re
-from abc import ABC, abstractmethod
-from collections.abc import Callable, Iterable, Sequence
-from copy import deepcopy
-from dataclasses import dataclass, field
-from pathlib import Path
-from typing import Any, Generic, TypeAlias, TypedDict, TypeVar, cast
-
-import torch
-from huggingface_hub import hf_hub_download
-from safetensors.torch import load_file, save_file
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-from lerobot.utils.hub import HubMixin
-
-from .converters import batch_to_transition, create_transition, transition_to_batch
-from .core import EnvAction, EnvTransition, PolicyAction, RobotAction, RobotObservation, TransitionKey
-
-# Generic type variables for pipeline input and output.
-TInput = TypeVar("TInput")
-TOutput = TypeVar("TOutput")
-
-
-class ProcessorStepRegistry:
- """A registry for ProcessorStep classes to allow instantiation from a string name.
-
- This class provides a way to map string identifiers to `ProcessorStep` classes,
- which is useful for deserializing pipelines from configuration files without
-    hardcoding class imports.
- """
-
- _registry: dict[str, type] = {}
-
- @classmethod
- def register(cls, name: str | None = None):
- """A class decorator to register a ProcessorStep.
-
- Args:
- name: The name to register the class under. If None, the class's `__name__` is used.
-
- Returns:
- A decorator function that registers the class and returns it.
-
- Raises:
- ValueError: If a step with the same name is already registered.
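-
-        Example:
-            ```python
-            # Minimal sketch; "my_step" and MyStep are assumptions.
-            @ProcessorStepRegistry.register(name="my_step")
-            class MyStep(ProcessorStep):
-                def __call__(self, transition):
-                    return transition
-
-                def transform_features(self, features):
-                    return features
-
-            assert ProcessorStepRegistry.get("my_step") is MyStep
-            ```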
- """
-
- def decorator(step_class: type) -> type:
- """The actual decorator that performs the registration."""
- registration_name = name if name is not None else step_class.__name__
-
- if registration_name in cls._registry:
- raise ValueError(
- f"Processor step '{registration_name}' is already registered. "
- f"Use a different name or unregister the existing one first."
- )
-
- cls._registry[registration_name] = step_class
- # Store the registration name on the class for easy lookup during serialization.
- step_class._registry_name = registration_name
- return step_class
-
- return decorator
-
- @classmethod
- def get(cls, name: str) -> type:
- """Retrieves a processor step class from the registry by its name.
-
- Args:
- name: The name of the step to retrieve.
-
- Returns:
- The processor step class corresponding to the given name.
-
- Raises:
- KeyError: If the name is not found in the registry.
- """
- if name not in cls._registry:
- available = list(cls._registry.keys())
- raise KeyError(
- f"Processor step '{name}' not found in registry. "
- f"Available steps: {available}. "
- f"Make sure the step is registered using @ProcessorStepRegistry.register()"
- )
- return cls._registry[name]
-
- @classmethod
- def unregister(cls, name: str) -> None:
- """Removes a processor step from the registry.
-
- Args:
- name: The name of the step to unregister.
- """
- cls._registry.pop(name, None)
-
- @classmethod
- def list(cls) -> list[str]:
- """Returns a list of all registered processor step names."""
- return list(cls._registry.keys())
-
- @classmethod
- def clear(cls) -> None:
- """Clears all processor steps from the registry."""
- cls._registry.clear()
-
-
-class ProcessorStep(ABC):
- """Abstract base class for a single step in a data processing pipeline.
-
- Each step must implement the `__call__` method to perform its transformation
- on a data transition and the `transform_features` method to describe how it
- alters the shape or type of data features.
-
- Subclasses can optionally be stateful by implementing `state_dict` and `load_state_dict`.
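-
-    Example:
-        ```python
-        # Minimal sketch of a stateless step; the class name and factor are assumptions.
-        class ScaleActionStep(ProcessorStep):
-            def __init__(self, factor: float = 2.0):
-                self.factor = factor
-
-            def __call__(self, transition):
-                new_transition = transition.copy()
-                action = new_transition.get(TransitionKey.ACTION)
-                if action is not None:
-                    new_transition[TransitionKey.ACTION] = action * self.factor
-                return new_transition
-
-            def transform_features(self, features):
-                return features
-        ```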
- """
-
- _current_transition: EnvTransition | None = None
-
- @property
- def transition(self) -> EnvTransition:
- """Provides access to the most recent transition being processed.
-
- This is useful for steps that need to access other parts of the transition
- data beyond their primary target (e.g., an action processing step that
- needs to look at the observation).
-
- Raises:
- ValueError: If accessed before the step has been called with a transition.
- """
- if self._current_transition is None:
- raise ValueError("Transition is not set. Make sure to call the step with a transition first.")
- return self._current_transition
-
- @abstractmethod
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Processes an environment transition.
-
- This method should contain the core logic of the processing step.
-
- Args:
- transition: The input data transition to be processed.
-
- Returns:
- The processed transition.
- """
- return transition
-
- def get_config(self) -> dict[str, Any]:
- """Returns the configuration of the step for serialization.
-
- Returns:
- A JSON-serializable dictionary of configuration parameters.
- """
- return {}
-
- def state_dict(self) -> dict[str, torch.Tensor]:
- """Returns the state of the step (e.g., learned parameters, running means).
-
- Returns:
- A dictionary mapping state names to tensors.
- """
- return {}
-
- def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
- """Loads the step's state from a state dictionary.
-
- Args:
- state: A dictionary of state tensors.
- """
- return None
-
- def reset(self) -> None:
- """Resets the internal state of the processor step, if any."""
- return None
-
- @abstractmethod
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """Defines how this step modifies the description of pipeline features.
-
- This method is used to track changes in data shapes, dtypes, or modalities
- as data flows through the pipeline, without needing to process actual data.
-
- Args:
- features: A dictionary describing the input features for observations, actions, etc.
-
- Returns:
- A dictionary describing the output features after this step's transformation.
- """
- return features
-
-
-class ProcessorKwargs(TypedDict, total=False):
- """A TypedDict for optional keyword arguments used in pipeline construction."""
-
- to_transition: Callable[[dict[str, Any]], EnvTransition] | None
- to_output: Callable[[EnvTransition], Any] | None
- name: str | None
- before_step_hooks: list[Callable[[int, EnvTransition], None]] | None
- after_step_hooks: list[Callable[[int, EnvTransition], None]] | None
-
-
-class ProcessorMigrationError(Exception):
- """Raised when a model needs migration to the processor format"""
-
- def __init__(self, model_path: str | Path, migration_command: str, original_error: str):
- self.model_path = model_path
- self.migration_command = migration_command
- self.original_error = original_error
- super().__init__(
- f"Model '{model_path}' requires migration to processor format. "
- f"Run: {migration_command}\n\nOriginal error: {original_error}"
- )
-
-
-@dataclass
-class DataProcessorPipeline(HubMixin, Generic[TInput, TOutput]):
- """A sequential pipeline for processing data, integrated with the Hugging Face Hub.
-
- This class chains together multiple `ProcessorStep` instances to form a complete
- data processing workflow. It's generic, allowing for custom input and output types,
- which are handled by the `to_transition` and `to_output` converters.
-
- Attributes:
- steps: A sequence of `ProcessorStep` objects that make up the pipeline.
- name: A descriptive name for the pipeline.
- to_transition: A function to convert raw input data into the standardized `EnvTransition` format.
- to_output: A function to convert the final `EnvTransition` into the desired output format.
- before_step_hooks: A list of functions to be called before each step is executed.
- after_step_hooks: A list of functions to be called after each step is executed.
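-
-    Example:
-        ```python
-        # Minimal sketch; the step instance and `raw_batch` are assumptions.
-        pipeline = DataProcessorPipeline(
-            steps=[VanillaObservationProcessorStep()],
-            name="env_preprocessor",
-        )
-        processed_batch = pipeline(raw_batch)
-        ```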
- """
-
- steps: Sequence[ProcessorStep] = field(default_factory=list)
- name: str = "DataProcessorPipeline"
-
- to_transition: Callable[[TInput], EnvTransition] = field(
- default_factory=lambda: cast(Callable[[TInput], EnvTransition], batch_to_transition), repr=False
- )
- to_output: Callable[[EnvTransition], TOutput] = field(
- default_factory=lambda: cast(Callable[[EnvTransition], TOutput], transition_to_batch),
- repr=False,
- )
-
- before_step_hooks: list[Callable[[int, EnvTransition], None]] = field(default_factory=list, repr=False)
- after_step_hooks: list[Callable[[int, EnvTransition], None]] = field(default_factory=list, repr=False)
-
- def __call__(self, data: TInput) -> TOutput:
- """Processes input data through the full pipeline.
-
- Args:
- data: The input data to process.
-
- Returns:
- The processed data in the specified output format.
- """
- transition = self.to_transition(data)
- transformed_transition = self._forward(transition)
- return self.to_output(transformed_transition)
-
- def _forward(self, transition: EnvTransition) -> EnvTransition:
- """Executes all processing steps and hooks in sequence.
-
- Args:
- transition: The initial `EnvTransition` object.
-
- Returns:
- The final `EnvTransition` after all steps have been applied.
- """
- for idx, processor_step in enumerate(self.steps):
- # Execute pre-hooks
- for hook in self.before_step_hooks:
- hook(idx, transition)
-
- transition = processor_step(transition)
-
- # Execute post-hooks
- for hook in self.after_step_hooks:
- hook(idx, transition)
- return transition
-
- def step_through(self, data: TInput) -> Iterable[EnvTransition]:
- """Processes data step-by-step, yielding the transition at each stage.
-
- This is a generator method useful for debugging and inspecting the intermediate
- state of the data as it passes through the pipeline.
-
- Args:
- data: The input data.
-
- Yields:
- The `EnvTransition` object, starting with the initial state and then after
- each processing step.
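-
-        Example:
-            ```python
-            # Debugging sketch; `pipeline` and `batch` are assumptions.
-            for idx, transition in enumerate(pipeline.step_through(batch)):
-                print(idx, list(transition.keys()))
-            ```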
- """
- transition = self.to_transition(data)
-
- # Yield the initial state before any processing.
- yield transition
-
- for processor_step in self.steps:
- transition = processor_step(transition)
- yield transition
-
- def _save_pretrained(self, save_directory: Path, **kwargs):
- """Internal method to comply with `HubMixin`'s saving mechanism.
-
- This method does the actual saving work and is called by HubMixin.save_pretrained.
- """
- config_filename = kwargs.pop("config_filename", None)
-
- # Sanitize the pipeline name to create a valid filename prefix.
- sanitized_name = re.sub(r"[^a-zA-Z0-9_]", "_", self.name.lower())
-
- if config_filename is None:
- config_filename = f"{sanitized_name}.json"
-
- config: dict[str, Any] = {
- "name": self.name,
- "steps": [],
- }
-
- # Iterate through each step to build its configuration entry.
- for step_index, processor_step in enumerate(self.steps):
- registry_name = getattr(processor_step.__class__, "_registry_name", None)
-
- step_entry: dict[str, Any] = {}
- # Prefer registry name for portability, otherwise fall back to full class path.
- if registry_name:
- step_entry["registry_name"] = registry_name
- else:
- step_entry["class"] = (
- f"{processor_step.__class__.__module__}.{processor_step.__class__.__name__}"
- )
-
- # Save step configuration if `get_config` is implemented.
- if hasattr(processor_step, "get_config"):
- step_entry["config"] = processor_step.get_config()
-
- # Save step state if `state_dict` is implemented and returns a non-empty dict.
- if hasattr(processor_step, "state_dict"):
- state = processor_step.state_dict()
- if state:
- # Clone tensors to avoid modifying the original state.
- cloned_state = {key: tensor.clone() for key, tensor in state.items()}
-
- # Create a unique filename for the state file.
- if registry_name:
- state_filename = f"{sanitized_name}_step_{step_index}_{registry_name}.safetensors"
- else:
- state_filename = f"{sanitized_name}_step_{step_index}.safetensors"
-
- save_file(cloned_state, os.path.join(str(save_directory), state_filename))
- step_entry["state_file"] = state_filename
-
- config["steps"].append(step_entry)
-
- # Write the main configuration JSON file.
- with open(os.path.join(str(save_directory), config_filename), "w") as file_pointer:
- json.dump(config, file_pointer, indent=2)
-
- def save_pretrained(
- self,
- save_directory: str | Path | None = None,
- *,
- repo_id: str | None = None,
- push_to_hub: bool = False,
- card_kwargs: dict[str, Any] | None = None,
- config_filename: str | None = None,
- **push_to_hub_kwargs,
- ):
- """Saves the pipeline's configuration and state to a directory.
-
- This method creates a JSON configuration file that defines the pipeline's structure
- (name and steps). For each stateful step, it also saves a `.safetensors` file
- containing its state dictionary.
-
- Args:
- save_directory: The directory where the pipeline will be saved. If None, saves to
- HF_LEROBOT_HOME/processors/{sanitized_pipeline_name}.
- repo_id: ID of your repository on the Hub. Used only if `push_to_hub=True`.
- push_to_hub: Whether or not to push your object to the Hugging Face Hub after saving it.
- card_kwargs: Additional arguments passed to the card template to customize the card.
- config_filename: The name of the JSON configuration file. If None, a name is
- generated from the pipeline's `name` attribute.
-            **push_to_hub_kwargs: Additional keyword arguments passed along to the push_to_hub method.
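-
-        Example:
-            ```python
-            # Local save sketch; the output path is an assumption.
-            pipeline.save_pretrained(
-                "outputs/processors/my_pipeline", config_filename="processor.json"
-            )
-            ```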
- """
- if save_directory is None:
- # Use default directory in HF_LEROBOT_HOME
- from lerobot.utils.constants import HF_LEROBOT_HOME
-
- sanitized_name = re.sub(r"[^a-zA-Z0-9_]", "_", self.name.lower())
- save_directory = HF_LEROBOT_HOME / "processors" / sanitized_name
-
- # For direct saves (not through hub), handle config_filename
- if not push_to_hub and config_filename is not None:
- # Call _save_pretrained directly with config_filename
- save_directory = Path(save_directory)
- save_directory.mkdir(parents=True, exist_ok=True)
- self._save_pretrained(save_directory, config_filename=config_filename)
- return None
-
- # Pass config_filename through kwargs for _save_pretrained when using hub
- if config_filename is not None:
- push_to_hub_kwargs["config_filename"] = config_filename
-
- # Call parent's save_pretrained which will call our _save_pretrained
- return super().save_pretrained(
- save_directory=save_directory,
- repo_id=repo_id,
- push_to_hub=push_to_hub,
- card_kwargs=card_kwargs,
- **push_to_hub_kwargs,
- )
-
- @classmethod
- def from_pretrained(
- cls,
- pretrained_model_name_or_path: str | Path,
- config_filename: str,
- *,
- force_download: bool = False,
- resume_download: bool | None = None,
- proxies: dict[str, str] | None = None,
- token: str | bool | None = None,
- cache_dir: str | Path | None = None,
- local_files_only: bool = False,
- revision: str | None = None,
- overrides: dict[str, Any] | None = None,
- to_transition: Callable[[TInput], EnvTransition] | None = None,
- to_output: Callable[[EnvTransition], TOutput] | None = None,
- **kwargs,
- ) -> DataProcessorPipeline[TInput, TOutput]:
- """Loads a pipeline from a local directory, single file, or Hugging Face Hub repository.
-
- This method implements a simplified loading pipeline with intelligent migration detection:
-
- **Simplified Loading Strategy**:
- 1. **Config Loading** (_load_config):
- - **Directory**: Load specified config_filename from directory
- - **Single file**: Load file directly (config_filename ignored)
- - **Hub repository**: Download specified config_filename from Hub
-
- 2. **Config Validation** (_validate_loaded_config):
- - Format validation: Ensure config is valid processor format
- - Migration detection: Guide users to migrate old LeRobot models
- - Clear errors: Provide actionable error messages
-
- 3. **Step Construction** (_build_steps_with_overrides):
- - Class resolution: Registry lookup or dynamic imports
- - Override merging: User parameters override saved config
- - State loading: Load .safetensors files for stateful steps
-
- 4. **Override Validation** (_validate_overrides_used):
- - Ensure all user overrides were applied (catch typos)
- - Provide helpful error messages with available keys
-
- **Migration Detection**:
- - **Smart detection**: Analyzes JSON files to detect old LeRobot models
- - **Precise targeting**: Avoids false positives on other HuggingFace models
- - **Clear guidance**: Provides exact migration command to run
- - **Error mode**: Always raises ProcessorMigrationError for clear user action
-
- **Loading Examples**:
- ```python
- # Directory loading
- pipeline = DataProcessorPipeline.from_pretrained("/models/my_model", config_filename="processor.json")
-
- # Single file loading
- pipeline = DataProcessorPipeline.from_pretrained(
- "/models/my_model/processor.json", config_filename="processor.json"
- )
-
- # Hub loading
- pipeline = DataProcessorPipeline.from_pretrained("user/repo", config_filename="processor.json")
-
- # Multiple configs (preprocessor/postprocessor)
- preprocessor = DataProcessorPipeline.from_pretrained(
- "model", config_filename="policy_preprocessor.json"
- )
- postprocessor = DataProcessorPipeline.from_pretrained(
- "model", config_filename="policy_postprocessor.json"
- )
- ```
-
- **Override System**:
- - **Key matching**: Use registry names or class names as override keys
- - **Config merging**: User overrides take precedence over saved config
- - **Validation**: Ensure all override keys match actual steps (catch typos)
- - **Example**: overrides={"NormalizeStep": {"device": "cuda"}}
-
- Args:
- pretrained_model_name_or_path: The identifier of the repository on the Hugging Face Hub,
- a path to a local directory, or a path to a single config file.
- config_filename: The name of the pipeline's JSON configuration file. Always required
- to prevent ambiguity when multiple configs exist (e.g., preprocessor vs postprocessor).
- force_download: Whether to force (re)downloading the files.
- resume_download: Whether to resume a previously interrupted download.
- proxies: A dictionary of proxy servers to use.
- token: The token to use as HTTP bearer authorization for private Hub repositories.
- cache_dir: The path to a specific cache folder to store downloaded files.
- local_files_only: If True, avoid downloading files from the Hub.
- revision: The specific model version to use (e.g., a branch name, tag name, or commit id).
- overrides: A dictionary to override the configuration of specific steps. Keys should
- match the step's class name or registry name.
- to_transition: A custom function to convert input data to `EnvTransition`.
- to_output: A custom function to convert the final `EnvTransition` to the output format.
- **kwargs: Additional arguments (not used).
-
- Returns:
- An instance of `DataProcessorPipeline` loaded with the specified configuration and state.
-
- Raises:
- FileNotFoundError: If the config file cannot be found.
- ValueError: If configuration is ambiguous or instantiation fails.
- ImportError: If a step's class cannot be imported.
- KeyError: If an override key doesn't match any step in the pipeline.
- ProcessorMigrationError: If the model requires migration to processor format.
- """
- model_id = str(pretrained_model_name_or_path)
- hub_download_kwargs = {
- "force_download": force_download,
- "resume_download": resume_download,
- "proxies": proxies,
- "token": token,
- "cache_dir": cache_dir,
- "local_files_only": local_files_only,
- "revision": revision,
- }
-
- # 1. Load configuration using simplified 3-way logic
- loaded_config, base_path = cls._load_config(model_id, config_filename, hub_download_kwargs)
-
- # 2. Validate configuration and handle migration
- cls._validate_loaded_config(model_id, loaded_config, config_filename)
-
- # 3. Build steps with overrides
- steps, validated_overrides = cls._build_steps_with_overrides(
- loaded_config, overrides or {}, model_id, base_path, hub_download_kwargs
- )
-
- # 4. Validate that all overrides were used
- cls._validate_overrides_used(validated_overrides, loaded_config)
-
- # 5. Construct and return the final pipeline instance
- return cls(
- steps=steps,
- name=loaded_config.get("name", "DataProcessorPipeline"),
- to_transition=to_transition or cast(Callable[[TInput], EnvTransition], batch_to_transition),
- to_output=to_output or cast(Callable[[EnvTransition], TOutput], transition_to_batch),
- )
-
- @classmethod
- def _load_config(
- cls,
- model_id: str,
- config_filename: str,
- hub_download_kwargs: dict[str, Any],
- ) -> tuple[dict[str, Any], Path]:
- """Load configuration from local file or Hugging Face Hub.
-
- This method implements a super-simplified 3-way loading strategy:
-
- 1. **Local directory**: Load config_filename from directory
- - Example: model_id="/models/my_model", config_filename="processor.json"
- - Loads: "/models/my_model/processor.json"
-
- 2. **Single file**: Load file directly (ignore config_filename)
- - Example: model_id="/models/my_model/processor.json"
- - Loads: "/models/my_model/processor.json" (config_filename ignored)
-
- 3. **Hub repository**: Download config_filename from Hub
- - Example: model_id="user/repo", config_filename="processor.json"
- - Downloads and loads: config_filename from Hub repo
-
- **Benefits of Explicit config_filename**:
- - No auto-detection complexity or edge cases
- - No risk of loading wrong config (preprocessor vs postprocessor)
- - Consistent behavior across local and Hub usage
- - Clear, predictable errors
-
- Args:
- model_id: The model identifier (Hub repo ID, local directory, or file path)
- config_filename: The explicit config filename to load (always required)
- hub_download_kwargs: Parameters for hf_hub_download (tokens, cache, etc.)
-
- Returns:
- Tuple of (loaded_config, base_path)
- - loaded_config: Parsed JSON config dict (always loaded, never None)
- - base_path: Directory containing config file (for state file resolution)
-
- Raises:
- FileNotFoundError: If config file cannot be found locally or on Hub
- """
- model_path = Path(model_id)
-
- if model_path.is_dir():
- # Directory: load specified config from directory
- config_path = model_path / config_filename
- if not config_path.exists():
- # Check for migration before giving clear error
- if cls._should_suggest_migration(model_path):
- cls._suggest_processor_migration(model_id, f"Config file '{config_filename}' not found")
- raise FileNotFoundError(
- f"Config file '{config_filename}' not found in directory '{model_id}'"
- )
-
- with open(config_path) as f:
- return json.load(f), model_path
-
- elif model_path.is_file():
- # File: load file directly (config_filename is ignored for single files)
- with open(model_path) as f:
- return json.load(f), model_path.parent
-
- else:
- # Hub: download specified config
- try:
- config_path = hf_hub_download(
- repo_id=model_id,
- filename=config_filename,
- repo_type="model",
- **hub_download_kwargs,
- )
-
- with open(config_path) as f:
- return json.load(f), Path(config_path).parent
-
- except Exception as e:
- raise FileNotFoundError(
- f"Could not find '{config_filename}' on the HuggingFace Hub at '{model_id}'"
- ) from e
-
- @classmethod
- def _validate_loaded_config(
- cls, model_id: str, loaded_config: dict[str, Any], config_filename: str
- ) -> None:
- """Validate that a config was loaded and is a valid processor config.
-
- This method validates processor config format with intelligent migration detection:
-
- **Config Format Validation**:
- - Use _is_processor_config() to validate structure
- - Must have "steps" field with list of step configurations
- - Each step needs "class" or "registry_name"
- - If validation fails AND local directory: Check for migration need
- - If migration needed: Raise ProcessorMigrationError with command
- - If no migration: Raise ValueError with helpful error message
-
- **Migration Detection Logic**:
- - Only triggered for local directories (not Hub repos)
- - Analyzes all JSON files in directory to detect old LeRobot models
- - Provides exact migration command with model path
-
- Args:
- model_id: The model identifier (used for migration detection)
- loaded_config: The loaded config dictionary (guaranteed non-None)
- config_filename: The config filename that was loaded (for error messages)
-
- Raises:
- ValueError: If config format is invalid
- ProcessorMigrationError: If model needs migration to processor format
- """
- # Validate that this is actually a processor config
- if not cls._is_processor_config(loaded_config):
- if Path(model_id).is_dir() and cls._should_suggest_migration(Path(model_id)):
- cls._suggest_processor_migration(
- model_id,
- f"Config file '{config_filename}' is not a valid processor configuration",
- )
- raise ValueError(
- f"Config file '{config_filename}' is not a valid processor configuration. "
- f"Expected a config with 'steps' field, but got: {list(loaded_config.keys())}"
- )
-
- @classmethod
- def _build_steps_with_overrides(
- cls,
- loaded_config: dict[str, Any],
- overrides: dict[str, Any],
- model_id: str,
- base_path: Path | None,
- hub_download_kwargs: dict[str, Any],
- ) -> tuple[list[ProcessorStep], set[str]]:
- """Build all processor steps with overrides and state loading.
-
- This method orchestrates the complete step construction pipeline:
-
- **For each step in loaded_config["steps"]**:
-
- 1. **Class Resolution** (via _resolve_step_class):
- - **If "registry_name" exists**: Look up in ProcessorStepRegistry
- Example: {"registry_name": "normalize_step"} -> Get registered class
- - **Else use "class" field**: Dynamic import from full module path
- Example: {"class": "lerobot.processor.normalize.NormalizeStep"}
- - **Result**: (step_class, step_key) where step_key is used for overrides
-
- 2. **Step Instantiation** (via _instantiate_step):
- - **Merge configs**: saved_config + user_overrides
- - **Override priority**: User overrides take precedence over saved config
- - **Example**: saved={"mean": 0.0}, override={"mean": 1.0} -> final={"mean": 1.0}
- - **Result**: Instantiated ProcessorStep object
-
- 3. **State Loading** (via _load_step_state):
- - **If step has "state_file"**: Load tensor state from .safetensors
- - **Local first**: Check base_path/state_file.safetensors
- - **Hub fallback**: Download state file if not found locally
- - **Optional**: Only load if step has load_state_dict method
-
- 4. **Override Tracking**:
- - **Track used overrides**: Remove step_key from remaining set
- - **Purpose**: Validate all user overrides were applied (detect typos)
-
- **Error Handling**:
- - Class resolution errors -> ImportError with helpful message
- - Instantiation errors -> ValueError with config details
- - State loading errors -> Propagated from load_state_dict
-
- Args:
- loaded_config: The loaded processor configuration (must have "steps" field)
- overrides: User-provided parameter overrides (keyed by class/registry name)
- model_id: The model identifier (needed for Hub state file downloads)
- base_path: Local directory path for finding state files
- hub_download_kwargs: Parameters for hf_hub_download (tokens, cache, etc.)
-
- Returns:
- Tuple of (instantiated_steps_list, unused_override_keys)
- - instantiated_steps_list: List of ready-to-use ProcessorStep instances
- - unused_override_keys: Override keys that didn't match any step (for validation)
-
- Raises:
- ImportError: If a step class cannot be imported or found in registry
- ValueError: If a step cannot be instantiated with its configuration
- """
- steps: list[ProcessorStep] = []
- override_keys = set(overrides.keys())
-
- for step_entry in loaded_config["steps"]:
- # 1. Get step class and key
- step_class, step_key = cls._resolve_step_class(step_entry)
-
- # 2. Instantiate step with overrides
- step_instance = cls._instantiate_step(step_entry, step_class, step_key, overrides)
-
- # 3. Load step state if available
- cls._load_step_state(step_instance, step_entry, model_id, base_path, hub_download_kwargs)
-
- # 4. Mark this step's override key as consumed (discard is a no-op if absent)
- override_keys.discard(step_key)
-
- steps.append(step_instance)
-
- return steps, override_keys
-
- @classmethod
- def _resolve_step_class(cls, step_entry: dict[str, Any]) -> tuple[type[ProcessorStep], str]:
- """Resolve step class from registry or import path.
-
- This method implements a two-tier resolution strategy:
-
- **Tier 1: Registry-based resolution** (preferred):
- - **If "registry_name" in step_entry**: Look up in ProcessorStepRegistry
- - **Advantage**: Faster, no imports needed, guaranteed compatibility
- - **Example**: {"registry_name": "normalize_step"} -> Get pre-registered class
- - **Error**: KeyError if registry_name not found -> Convert to ImportError
-
- **Tier 2: Dynamic import fallback**:
- - **Else use "class" field**: Full module.ClassName import path
- - **Process**: Split "module.path.ClassName" into module + class parts
- - **Import**: Use importlib.import_module() + getattr()
- - **Example**: "lerobot.processor.normalize.NormalizeStep"
- a. Import module: "lerobot.processor.normalize"
- b. Get class: getattr(module, "NormalizeStep")
- - **step_key**: Use class_name ("NormalizeStep") for overrides
-
- **Override Key Strategy**:
- - Registry steps: Use registry_name ("normalize_step")
- - Import steps: Use class_name ("NormalizeStep")
- - This allows users to override with: {"normalize_step": {...}} or {"NormalizeStep": {...}}
-
- **Error Handling**:
- - Registry KeyError -> ImportError with registry context
- - Import/Attribute errors -> ImportError with helpful suggestions
- - All errors include troubleshooting guidance
-
- Args:
- step_entry: The step configuration dictionary (must have "registry_name" or "class")
-
- Returns:
- Tuple of (step_class, step_key)
- - step_class: The resolved ProcessorStep class (ready for instantiation)
- - step_key: The key used for user overrides (registry_name or class_name)
-
- Raises:
- ImportError: If step class cannot be loaded from registry or import path
- """
- if "registry_name" in step_entry:
- try:
- step_class = ProcessorStepRegistry.get(step_entry["registry_name"])
- return step_class, step_entry["registry_name"]
- except KeyError as e:
- raise ImportError(f"Failed to load processor step from registry. {str(e)}") from e
- else:
- # Fallback to dynamic import using the full class path
- full_class_path = step_entry["class"]
- module_path, class_name = full_class_path.rsplit(".", 1)
-
- try:
- module = importlib.import_module(module_path)
- step_class = getattr(module, class_name)
- return step_class, class_name
- except (ImportError, AttributeError) as e:
- raise ImportError(
- f"Failed to load processor step '{full_class_path}'. "
- f"Make sure the module '{module_path}' is installed and contains class '{class_name}'. "
- f"Consider registering the step using @ProcessorStepRegistry.register() for better portability. "
- f"Error: {str(e)}"
- ) from e
-
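- # Illustrative resolution sketch (step class and module path hypothetical):
- #
- #     @ProcessorStepRegistry.register("normalize_step")
- #     class NormalizeStep(ProcessorStep): ...
- #
- #     DataProcessorPipeline._resolve_step_class({"registry_name": "normalize_step"})
- #     # -> (NormalizeStep, "normalize_step")   via registry lookup
- #     DataProcessorPipeline._resolve_step_class({"class": "my_pkg.steps.NormalizeStep"})
- #     # -> (NormalizeStep, "NormalizeStep")    via dynamic import
-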
- @classmethod
- def _instantiate_step(
- cls,
- step_entry: dict[str, Any],
- step_class: type[ProcessorStep],
- step_key: str,
- overrides: dict[str, Any],
- ) -> ProcessorStep:
- """Instantiate a single processor step with config overrides.
-
- This method handles the configuration merging and instantiation logic:
-
- **Configuration Merging Strategy**:
- 1. **Extract saved config**: Get step_entry.get("config", {}) from saved pipeline
- - Example: {"config": {"mean": 0.0, "std": 1.0}}
- 2. **Extract user overrides**: Get overrides.get(step_key, {}) for this step
- - Example: overrides = {"NormalizeStep": {"mean": 2.0, "device": "cuda"}}
- 3. **Merge with priority**: {**saved_cfg, **step_overrides}
- - **Override priority**: User values override saved values
- - **Result**: {"mean": 2.0, "std": 1.0, "device": "cuda"}
-
- **Instantiation Process**:
- - **Call constructor**: step_class(**merged_cfg)
- - **Example**: NormalizeStep(mean=2.0, std=1.0, device="cuda")
-
- **Error Handling**:
- - **Any exception during instantiation**: Convert to ValueError
- - **Include context**: step name, attempted config, original error
- - **Purpose**: Help users debug configuration issues
- - **Common causes**:
- a. Invalid parameter types (str instead of float)
- b. Missing required parameters
- c. Incompatible parameter combinations
-
- Args:
- step_entry: The step configuration from saved config (contains "config" dict)
- step_class: The step class to instantiate (already resolved)
- step_key: The key used for overrides ("registry_name" or class name)
- overrides: User-provided parameter overrides (keyed by step_key)
-
- Returns:
- The instantiated processor step (ready for use)
-
- Raises:
- ValueError: If step cannot be instantiated, with detailed error context
- """
- try:
- saved_cfg = step_entry.get("config", {})
- step_overrides = overrides.get(step_key, {})
- merged_cfg = {**saved_cfg, **step_overrides}
- return step_class(**merged_cfg)
- except Exception as e:
- step_name = step_entry.get("registry_name", step_entry.get("class", "Unknown"))
- raise ValueError(
- f"Failed to instantiate processor step '{step_name}' with config: {step_entry.get('config', {})}. "
- f"Error: {str(e)}"
- ) from e
-
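- # Illustrative merge sketch (values hypothetical): user overrides win key by key
- # over the saved config.
- #
- #     saved_cfg = {"mean": 0.0, "std": 1.0}
- #     step_overrides = {"mean": 2.0, "device": "cuda"}
- #     {**saved_cfg, **step_overrides}
- #     # -> {"mean": 2.0, "std": 1.0, "device": "cuda"}
-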
- @classmethod
- def _load_step_state(
- cls,
- step_instance: ProcessorStep,
- step_entry: dict[str, Any],
- model_id: str,
- base_path: Path | None,
- hub_download_kwargs: dict[str, Any],
- ) -> None:
- """Load state dictionary for a processor step if available.
-
- This method implements conditional state loading with local/Hub fallback:
-
- **Precondition Checks** (early return if not met):
- 1. **"state_file" in step_entry**: Step config specifies a state file
- - **If missing**: Step has no saved state (e.g., stateless transforms)
- 2. **hasattr(step_instance, "load_state_dict")**: Step supports state loading
- - **If missing**: Step doesn't implement state loading (rare)
-
- **State File Resolution Strategy**:
- 1. **Local file priority**: Check base_path/state_filename exists
- - **Advantage**: Faster, no network calls
- - **Example**: "/models/my_model/normalize_step_0.safetensors"
- - **Use case**: Loading from local saved model directory
-
- 2. **Hub download fallback**: Download state file from repository
- - **When triggered**: Local file not found or base_path is None
- - **Process**: Use hf_hub_download with same parameters as config
- - **Example**: Download "normalize_step_0.safetensors" from "user/repo"
- - **Result**: Downloaded to local cache, path returned
-
- **State Loading Process**:
- - **Load tensors**: Use safetensors.torch.load_file()
- - **Apply to step**: Call step_instance.load_state_dict(tensor_dict)
- - **In-place modification**: Updates step's internal tensor state
-
- **Common state file examples**:
- - "normalize_step_0.safetensors" - normalization statistics
- - "custom_step_1.safetensors" - learned parameters
- - "tokenizer_step_2.safetensors" - vocabulary embeddings
-
- Args:
- step_instance: The step instance to load state into (must have load_state_dict)
- step_entry: The step configuration dictionary (may contain "state_file")
- model_id: The model identifier (used for Hub downloads if needed)
- base_path: Local directory path for finding state files (None for Hub-only)
- hub_download_kwargs: Parameters for hf_hub_download (tokens, cache, etc.)
-
- Note:
- This method modifies step_instance in-place and returns None.
- If state loading fails, exceptions from load_state_dict propagate.
- """
- if "state_file" not in step_entry or not hasattr(step_instance, "load_state_dict"):
- return
-
- state_filename = step_entry["state_file"]
-
- # Try local file first
- if base_path and (base_path / state_filename).exists():
- state_path = str(base_path / state_filename)
- else:
- # Download from Hub
- state_path = hf_hub_download(
- repo_id=model_id,
- filename=state_filename,
- repo_type="model",
- **hub_download_kwargs,
- )
-
- step_instance.load_state_dict(load_file(state_path))
-
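- # Illustrative sketch (paths hypothetical): a state file sitting next to the
- # config is loaded directly; otherwise the same filename is fetched from the Hub.
- #
- #     from safetensors.torch import load_file
- #     step.load_state_dict(load_file("/models/my_model/normalize_step_0.safetensors"))
-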
- @classmethod
- def _validate_overrides_used(
- cls, remaining_override_keys: set[str], loaded_config: dict[str, Any]
- ) -> None:
- """Validate that all provided overrides were used.
-
- This method ensures user overrides are valid to catch typos and configuration errors:
-
- **Validation Logic**:
- 1. **If remaining_override_keys is empty**: All overrides were used -> Success
- - **Early return**: No validation needed
- - **Normal case**: User provided correct override keys
-
- 2. **If remaining_override_keys has entries**: Some overrides unused -> Error
- - **Root cause**: User provided keys that don't match any step
- - **Common issues**:
- a. Typos in step names ("NormalizStep" vs "NormalizeStep")
- b. Using wrong key type (class name vs registry name)
- c. Step doesn't exist in saved pipeline
-
- **Helpful Error Generation**:
- - **Extract available keys**: Build list of valid override keys from config
- a. **Registry steps**: Use "registry_name" directly
- b. **Import steps**: Extract class name from "class" field
- - Example: "lerobot.processor.normalize.NormalizeStep" -> "NormalizeStep"
- - **Error message includes**:
- a. Invalid keys provided by user
- b. List of valid keys they can use
- c. Guidance about registry vs class names
-
- **Override Key Resolution Rules**:
- - Steps with "registry_name": Use registry_name for overrides
- - Steps with "class": Use final class name for overrides
- - Users must match these exact keys in their overrides dict
-
- Args:
- remaining_override_keys: Override keys that weren't matched to any step
- loaded_config: The loaded processor configuration (contains "steps" list)
-
- Raises:
- KeyError: If any override keys were not used, with helpful error message
- """
- if not remaining_override_keys:
- return
-
- available_keys = [
- step.get("registry_name") or step["class"].rsplit(".", 1)[1] for step in loaded_config["steps"]
- ]
-
- raise KeyError(
- f"Override keys {list(remaining_override_keys)} do not match any step in the saved configuration. "
- f"Available step keys: {available_keys}. "
- f"Make sure override keys match exact step class names or registry names."
- )
-
- @classmethod
- def _should_suggest_migration(cls, model_path: Path) -> bool:
- """Check if directory has JSON files but no processor configs.
-
- This method implements smart migration detection to avoid false positives:
-
- **Decision Logic**:
- 1. **No JSON files found**: Return False
- - **Reason**: Empty directory or only non-config files
- - **Example**: Directory with only .safetensors, .md files
- - **Action**: No migration needed
-
- 2. **JSON files exist**: Analyze each file
- - **Goal**: Determine if ANY file is a valid processor config
- - **Process**:
- a. Try to parse each .json file
- b. Skip files with JSON parse errors (malformed)
- c. Check if parsed config passes _is_processor_config()
- - **If ANY valid processor found**: Return False (no migration)
- - **If NO valid processors found**: Return True (migration needed)
-
- **Examples**:
- - **No migration**: ["processor.json", "config.json"] where processor.json is valid
- - **Migration needed**: ["config.json", "train.json"] where both are model configs
- - **No migration**: [] (empty directory)
- - **Migration needed**: ["old_model_config.json"] with old LeRobot format
-
- **Why this works**:
- - **Precise detection**: Only suggests migration for actual old LeRobot models
- - **Avoids false positives**: Won't trigger on other HuggingFace model types
- - **Graceful handling**: Ignores malformed JSON files
-
- Args:
- model_path: Path to local directory to analyze
-
- Returns:
- True if directory has JSON configs but none are processor configs (migration needed)
- False if no JSON files or at least one valid processor config exists
- """
- json_files = list(model_path.glob("*.json"))
- if len(json_files) == 0:
- return False
-
- # Check if any JSON file is a processor config
- for json_file in json_files:
- try:
- with open(json_file) as f:
- config = json.load(f)
-
- if cls._is_processor_config(config):
- return False # Found at least one processor config, no migration needed
-
- except (json.JSONDecodeError, OSError):
- # Skip files that can't be parsed as JSON
- continue
-
- # Have JSON files but no processor configs - suggest migration
- return True
-
- @classmethod
- def _is_processor_config(cls, config: dict) -> bool:
- """Check if config follows DataProcessorPipeline format.
-
- This method validates the processor configuration structure:
-
- **Required Structure Validation**:
- 1. **"steps" field existence**: Must have top-level "steps" key
- - **If missing**: Not a processor config (e.g., model config, train config)
- - **Example invalid**: {"type": "act", "hidden_dim": 256}
-
- 2. **"steps" field type**: Must be a list, not other types
- - **If not list**: Invalid format
- - **Example invalid**: {"steps": "some_string"} or {"steps": {"key": "value"}}
-
- 3. **Empty steps validation**: Empty list is valid
- - **If len(steps) == 0**: Return True immediately
- - **Use case**: Empty processor pipeline (no-op)
- - **Example valid**: {"name": "EmptyProcessor", "steps": []}
-
- **Individual Step Validation** (for non-empty steps):
- For each step in the steps list:
- 1. **Step type**: Must be a dictionary
- - **If not dict**: Invalid step format
- - **Example invalid**: ["string_step", 123, true]
-
- 2. **Step identifier**: Must have either "class" OR "registry_name"
- - **"registry_name"**: Registered step (preferred)
- Example: {"registry_name": "normalize_step", "config": {...}}
- - **"class"**: Full import path
- Example: {"class": "lerobot.processor.normalize.NormalizeStep"}
- - **If neither**: Invalid step (can't resolve class)
- - **If both**: Also valid (registry_name takes precedence)
-
- **Valid Processor Config Examples**:
- - {"steps": []} - Empty processor
- - {"steps": [{"registry_name": "normalize"}]} - Registry step
- - {"steps": [{"class": "my.module.Step"}]} - Import step
- - {"name": "MyProcessor", "steps": [...]} - With name
-
- **Invalid Config Examples**:
- - {"type": "act"} - Missing "steps"
- - {"steps": "normalize"} - Steps not a list
- - {"steps": [{}]} - Step missing class/registry_name
- - {"steps": ["string"]} - Step not a dict
-
- Args:
- config: The configuration dictionary to validate
-
- Returns:
- True if config follows valid DataProcessorPipeline format, False otherwise
- """
- # Must have a "steps" field with a list of step configurations
- if not isinstance(config.get("steps"), list):
- return False
-
- steps = config["steps"]
- if len(steps) == 0:
- return True # Empty processor is valid
-
- # Each step must be a dict with either "class" or "registry_name"
- for step in steps:
- if not isinstance(step, dict):
- return False
- if not ("class" in step or "registry_name" in step):
- return False
-
- return True
-
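- # Illustrative checks (configs hypothetical):
- #
- #     _is_processor_config({"steps": []})                           # True (empty pipeline)
- #     _is_processor_config({"steps": [{"registry_name": "norm"}]})  # True
- #     _is_processor_config({"type": "act", "hidden_dim": 256})      # False (no "steps")
- #     _is_processor_config({"steps": [{}]})                         # False (unresolvable step)
-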
- @classmethod
- def _suggest_processor_migration(cls, model_path: str | Path, original_error: str) -> None:
- """Raise migration error when we detect JSON files but no processor configs.
-
- This method is called when migration detection determines that a model
- directory contains configuration files but none are valid processor configs.
- This typically indicates an old LeRobot model that needs migration.
-
- **When this is called**:
- - User tries to load DataProcessorPipeline from local directory
- - Directory contains JSON configuration files
- - None of the JSON files follow processor config format
- - _should_suggest_migration() returned True
-
- **Migration Command Generation**:
- - Constructs exact command user needs to run
- - Uses the migration script: migrate_policy_normalization.py
- - Includes the model path automatically
- - Example: "python src/lerobot/processor/migrate_policy_normalization.py --pretrained-path /models/old_model"
-
- **Error Structure**:
- - **Always raises**: ProcessorMigrationError (never returns)
- - **Includes**: model_path, migration_command, original_error
- - **Purpose**: Force user attention to migration need
- - **User experience**: Clear actionable error with exact command to run
-
- **Migration Process**:
- The suggested command will:
- 1. Extract normalization stats from old model
- 2. Create new processor configs (preprocessor + postprocessor)
- 3. Remove normalization layers from model
- 4. Save migrated model with processor pipeline
-
- Args:
- model_path: Path to the model directory needing migration
- original_error: The error that triggered migration detection (for context)
-
- Raises:
- ProcessorMigrationError: Always raised (this method never returns normally)
- """
- migration_command = (
- f"python src/lerobot/processor/migrate_policy_normalization.py --pretrained-path {model_path}"
- )
-
- raise ProcessorMigrationError(model_path, migration_command, original_error)
-
- def __len__(self) -> int:
- """Returns the number of steps in the pipeline."""
- return len(self.steps)
-
- def __getitem__(self, idx: int | slice) -> ProcessorStep | DataProcessorPipeline[TInput, TOutput]:
- """Retrieves a step or a sub-pipeline by index or slice.
-
- Args:
- idx: An integer index or a slice object.
-
- Returns:
- A `ProcessorStep` if `idx` is an integer, or a new `DataProcessorPipeline`
- containing the sliced steps.
- """
- if isinstance(idx, slice):
- # Return a new pipeline instance with the sliced steps.
- return DataProcessorPipeline(
- steps=self.steps[idx],
- name=self.name,
- to_transition=self.to_transition,
- to_output=self.to_output,
- before_step_hooks=self.before_step_hooks.copy(),
- after_step_hooks=self.after_step_hooks.copy(),
- )
- return self.steps[idx]
-
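- # Illustrative sketch: integer indexing returns a single step, while slicing
- # returns a new pipeline that keeps the converters and copies of the hooks.
- #
- #     first_step = pipeline[0]   # a ProcessorStep
- #     head = pipeline[:2]        # a DataProcessorPipeline with the first two steps
-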
- def register_before_step_hook(self, fn: Callable[[int, EnvTransition], None]):
- """Registers a function to be called before each step.
-
- Args:
- fn: A callable that accepts the step index and the current transition.
- """
- self.before_step_hooks.append(fn)
-
- def unregister_before_step_hook(self, fn: Callable[[int, EnvTransition], None]):
- """Unregisters a 'before_step' hook.
-
- Args:
- fn: The exact function object that was previously registered.
-
- Raises:
- ValueError: If the hook is not found in the list.
- """
- try:
- self.before_step_hooks.remove(fn)
- except ValueError:
- raise ValueError(
- f"Hook {fn} not found in before_step_hooks. Make sure to pass the exact same function reference."
- ) from None
-
- def register_after_step_hook(self, fn: Callable[[int, EnvTransition], None]):
- """Registers a function to be called after each step.
-
- Args:
- fn: A callable that accepts the step index and the current transition.
- """
- self.after_step_hooks.append(fn)
-
- def unregister_after_step_hook(self, fn: Callable[[int, EnvTransition], None]):
- """Unregisters an 'after_step' hook.
-
- Args:
- fn: The exact function object that was previously registered.
-
- Raises:
- ValueError: If the hook is not found in the list.
- """
- try:
- self.after_step_hooks.remove(fn)
- except ValueError:
- raise ValueError(
- f"Hook {fn} not found in after_step_hooks. Make sure to pass the exact same function reference."
- ) from None
-
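- # Illustrative hook sketch: log the step index before each step runs, then
- # detach the hook with the same function object.
- #
- #     def log_step(i: int, transition: EnvTransition) -> None:
- #         print(f"running step {i}")
- #
- #     pipeline.register_before_step_hook(log_step)
- #     pipeline.unregister_before_step_hook(log_step)
-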
- def reset(self):
- """Resets the state of all stateful steps in the pipeline."""
- for step in self.steps:
- if hasattr(step, "reset"):
- step.reset()
-
- def __repr__(self) -> str:
- """Provides a concise string representation of the pipeline."""
- step_names = [step.__class__.__name__ for step in self.steps]
-
- if not step_names:
- steps_repr = "steps=0: []"
- elif len(step_names) <= 3:
- steps_repr = f"steps={len(step_names)}: [{', '.join(step_names)}]"
- else:
- # For long pipelines, show the first, second, and last steps.
- displayed = f"{step_names[0]}, {step_names[1]}, ..., {step_names[-1]}"
- steps_repr = f"steps={len(step_names)}: [{displayed}]"
-
- parts = [f"name='{self.name}'", steps_repr]
-
- return f"DataProcessorPipeline({', '.join(parts)})"
-
- def __post_init__(self):
- """Validates that all provided steps are instances of `ProcessorStep`."""
- for i, step in enumerate(self.steps):
- if not isinstance(step, ProcessorStep):
- raise TypeError(f"Step {i} ({type(step).__name__}) must inherit from ProcessorStep")
-
- def transform_features(
- self, initial_features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """Applies feature transformations from all steps sequentially.
-
- This method propagates a feature description dictionary through each step's
- `transform_features` method, allowing the pipeline to statically determine
- the output feature specification without processing any real data.
-
- Args:
- initial_features: A dictionary describing the initial features.
-
- Returns:
- The final feature description after all transformations.
- """
- features: dict[PipelineFeatureType, dict[str, PolicyFeature]] = deepcopy(initial_features)
-
- for step in self.steps:
- features = step.transform_features(features)
- return features
-
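- # Illustrative sketch (feature names hypothetical): the spec is threaded through
- # every step without processing any data.
- #
- #     spec = pipeline.transform_features({
- #         PipelineFeatureType.OBSERVATION: {
- #             "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(6,))
- #         },
- #         PipelineFeatureType.ACTION: {},
- #     })
-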
- # Convenience methods for processing individual parts of a transition.
- def process_observation(self, observation: RobotObservation) -> RobotObservation:
- """Processes only the observation part of a transition through the pipeline.
-
- Args:
- observation: The observation dictionary.
-
- Returns:
- The processed observation dictionary.
- """
- transition: EnvTransition = create_transition(observation=observation)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.OBSERVATION]
-
- def process_action(
- self, action: PolicyAction | RobotAction | EnvAction
- ) -> PolicyAction | RobotAction | EnvAction:
- """Processes only the action part of a transition through the pipeline.
-
- Args:
- action: The action data.
-
- Returns:
- The processed action.
- """
- transition: EnvTransition = create_transition(action=action)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.ACTION]
-
- def process_reward(self, reward: float | torch.Tensor) -> float | torch.Tensor:
- """Processes only the reward part of a transition through the pipeline.
-
- Args:
- reward: The reward value.
-
- Returns:
- The processed reward.
- """
- transition: EnvTransition = create_transition(reward=reward)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.REWARD]
-
- def process_done(self, done: bool | torch.Tensor) -> bool | torch.Tensor:
- """Processes only the done flag of a transition through the pipeline.
-
- Args:
- done: The done flag.
-
- Returns:
- The processed done flag.
- """
- transition: EnvTransition = create_transition(done=done)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.DONE]
-
- def process_truncated(self, truncated: bool | torch.Tensor) -> bool | torch.Tensor:
- """Processes only the truncated flag of a transition through the pipeline.
-
- Args:
- truncated: The truncated flag.
-
- Returns:
- The processed truncated flag.
- """
- transition: EnvTransition = create_transition(truncated=truncated)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.TRUNCATED]
-
- def process_info(self, info: dict[str, Any]) -> dict[str, Any]:
- """Processes only the info dictionary of a transition through the pipeline.
-
- Args:
- info: The info dictionary.
-
- Returns:
- The processed info dictionary.
- """
- transition: EnvTransition = create_transition(info=info)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.INFO]
-
- def process_complementary_data(self, complementary_data: dict[str, Any]) -> dict[str, Any]:
- """Processes only the complementary data part of a transition through the pipeline.
-
- Args:
- complementary_data: The complementary data dictionary.
-
- Returns:
- The processed complementary data dictionary.
- """
- transition: EnvTransition = create_transition(complementary_data=complementary_data)
- transformed_transition = self._forward(transition)
- return transformed_transition[TransitionKey.COMPLEMENTARY_DATA]
-
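- # Illustrative sketch (payloads hypothetical): each helper wraps its payload in
- # a fresh transition, runs the full pipeline, and unwraps the same field.
- #
- #     obs = pipeline.process_observation({"observation.state": state_tensor})
- #     act = pipeline.process_action(action_tensor)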
-
-# Type aliases for semantic clarity.
-RobotProcessorPipeline: TypeAlias = DataProcessorPipeline[TInput, TOutput]
-PolicyProcessorPipeline: TypeAlias = DataProcessorPipeline[TInput, TOutput]
-
-
-class ObservationProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that specifically targets the observation in a transition."""
-
- @abstractmethod
- def observation(self, observation: RobotObservation) -> RobotObservation:
- """Processes an observation dictionary. Subclasses must implement this method.
-
- Args:
- observation: The input observation dictionary from the transition.
-
- Returns:
- The processed observation dictionary.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `observation` method to the transition's observation."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- observation = new_transition.get(TransitionKey.OBSERVATION)
- if observation is None or not isinstance(observation, dict):
- raise ValueError("ObservationProcessorStep requires an observation in the transition.")
-
- processed_observation = self.observation(observation.copy())
- new_transition[TransitionKey.OBSERVATION] = processed_observation
- return new_transition
-
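- # Illustrative subclass sketch (step hypothetical): only the `observation` hook
- # needs to be implemented; `__call__` handles the transition bookkeeping.
- #
- #     class PrefixKeysStep(ObservationProcessorStep):
- #         def observation(self, observation: RobotObservation) -> RobotObservation:
- #             return {f"observation.{k}": v for k, v in observation.items()}
- #
- #         def transform_features(self, features):
- #             return features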
-
-class ActionProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that specifically targets the action in a transition."""
-
- @abstractmethod
- def action(
- self, action: PolicyAction | RobotAction | EnvAction
- ) -> PolicyAction | RobotAction | EnvAction:
- """Processes an action. Subclasses must implement this method.
-
- Args:
- action: The input action from the transition.
-
- Returns:
- The processed action.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `action` method to the transition's action."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- action = new_transition.get(TransitionKey.ACTION)
- if action is None:
- raise ValueError("ActionProcessorStep requires an action in the transition.")
-
- processed_action = self.action(action)
- new_transition[TransitionKey.ACTION] = processed_action
- return new_transition
-
-
-class RobotActionProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` for processing a `RobotAction` (a dictionary)."""
-
- @abstractmethod
- def action(self, action: RobotAction) -> RobotAction:
- """Processes a `RobotAction`. Subclasses must implement this method.
-
- Args:
- action: The input `RobotAction` dictionary.
-
- Returns:
- The processed `RobotAction`.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `action` method to the transition's action, ensuring it's a `RobotAction`."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- action = new_transition.get(TransitionKey.ACTION)
- if action is None or not isinstance(action, dict):
- raise ValueError(f"Action should be a RobotAction type (dict), but got {type(action)}")
-
- processed_action = self.action(action.copy())
- new_transition[TransitionKey.ACTION] = processed_action
- return new_transition
-
-
-class PolicyActionProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` for processing a `PolicyAction` (a tensor or dict of tensors)."""
-
- @abstractmethod
- def action(self, action: PolicyAction) -> PolicyAction:
- """Processes a `PolicyAction`. Subclasses must implement this method.
-
- Args:
- action: The input `PolicyAction`.
-
- Returns:
- The processed `PolicyAction`.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `action` method to the transition's action, ensuring it's a `PolicyAction`."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- action = new_transition.get(TransitionKey.ACTION)
- if not isinstance(action, PolicyAction):
- raise ValueError(f"Action should be a PolicyAction type (tensor), but got {type(action)}")
-
- processed_action = self.action(action)
- new_transition[TransitionKey.ACTION] = processed_action
- return new_transition
-
-
-class RewardProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that specifically targets the reward in a transition."""
-
- @abstractmethod
- def reward(self, reward: float | torch.Tensor) -> float | torch.Tensor:
- """Processes a reward. Subclasses must implement this method.
-
- Args:
- reward: The input reward from the transition.
-
- Returns:
- The processed reward.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `reward` method to the transition's reward."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- reward = new_transition.get(TransitionKey.REWARD)
- if reward is None:
- raise ValueError("RewardProcessorStep requires a reward in the transition.")
-
- processed_reward = self.reward(reward)
- new_transition[TransitionKey.REWARD] = processed_reward
- return new_transition
-
-
-class DoneProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that specifically targets the 'done' flag in a transition."""
-
- @abstractmethod
- def done(self, done: bool | torch.Tensor) -> bool | torch.Tensor:
- """Processes a 'done' flag. Subclasses must implement this method.
-
- Args:
- done: The input 'done' flag from the transition.
-
- Returns:
- The processed 'done' flag.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `done` method to the transition's 'done' flag."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- done = new_transition.get(TransitionKey.DONE)
- if done is None:
- raise ValueError("DoneProcessorStep requires a done flag in the transition.")
-
- processed_done = self.done(done)
- new_transition[TransitionKey.DONE] = processed_done
- return new_transition
-
-
-class TruncatedProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that specifically targets the 'truncated' flag in a transition."""
-
- @abstractmethod
- def truncated(self, truncated: bool | torch.Tensor) -> bool | torch.Tensor:
- """Processes a 'truncated' flag. Subclasses must implement this method.
-
- Args:
- truncated: The input 'truncated' flag from the transition.
-
- Returns:
- The processed 'truncated' flag.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `truncated` method to the transition's 'truncated' flag."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- truncated = new_transition.get(TransitionKey.TRUNCATED)
- if truncated is None:
- raise ValueError("TruncatedProcessorStep requires a truncated flag in the transition.")
-
- processed_truncated = self.truncated(truncated)
- new_transition[TransitionKey.TRUNCATED] = processed_truncated
- return new_transition
-
-
-class InfoProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that specifically targets the 'info' dictionary in a transition."""
-
- @abstractmethod
- def info(self, info: dict[str, Any]) -> dict[str, Any]:
- """Processes an 'info' dictionary. Subclasses must implement this method.
-
- Args:
- info: The input 'info' dictionary from the transition.
-
- Returns:
- The processed 'info' dictionary.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `info` method to the transition's 'info' dictionary."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- info = new_transition.get(TransitionKey.INFO)
- if info is None or not isinstance(info, dict):
- raise ValueError("InfoProcessorStep requires an info dictionary in the transition.")
-
- processed_info = self.info(info.copy())
- new_transition[TransitionKey.INFO] = processed_info
- return new_transition
-
-
-class ComplementaryDataProcessorStep(ProcessorStep, ABC):
- """An abstract `ProcessorStep` that targets the 'complementary_data' in a transition."""
-
- @abstractmethod
- def complementary_data(self, complementary_data: dict[str, Any]) -> dict[str, Any]:
- """Processes a 'complementary_data' dictionary. Subclasses must implement this method.
-
- Args:
- complementary_data: The input 'complementary_data' from the transition.
-
- Returns:
- The processed 'complementary_data' dictionary.
- """
- ...
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Applies the `complementary_data` method to the transition's data."""
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- complementary_data = new_transition.get(TransitionKey.COMPLEMENTARY_DATA)
- if complementary_data is None or not isinstance(complementary_data, dict):
- raise ValueError("ComplementaryDataProcessorStep requires complementary data in the transition.")
-
- processed_complementary_data = self.complementary_data(complementary_data.copy())
- new_transition[TransitionKey.COMPLEMENTARY_DATA] = processed_complementary_data
- return new_transition
-
-
-class IdentityProcessorStep(ProcessorStep):
- """A no-op processor step that returns the input transition and features unchanged.
-
- This can be useful as a placeholder or for debugging purposes.
- """
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """Returns the transition without modification."""
- return transition
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """Returns the features without modification."""
- return features
diff --git a/lerobot/src/lerobot/processor/policy_robot_bridge.py b/lerobot/src/lerobot/processor/policy_robot_bridge.py
deleted file mode 100644
index 42289ae8828c40bad4c984b35e47e8cc4db1c78b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/policy_robot_bridge.py
+++ /dev/null
@@ -1,69 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import asdict, dataclass
-from typing import Any
-
-import torch
-
-from lerobot.configs.types import FeatureType, PipelineFeatureType, PolicyFeature
-from lerobot.processor import ActionProcessorStep, PolicyAction, ProcessorStepRegistry, RobotAction
-from lerobot.utils.constants import ACTION
-
-
-@dataclass
-@ProcessorStepRegistry.register("robot_action_to_policy_action_processor")
-class RobotActionToPolicyActionProcessorStep(ActionProcessorStep):
- """Processor step to map a dictionary to a tensor action."""
-
- motor_names: list[str]
-
- def action(self, action: RobotAction) -> PolicyAction:
- if len(self.motor_names) != len(action):
- raise ValueError(f"Action must have {len(self.motor_names)} elements, got {len(action)}")
- return torch.tensor([action[f"{name}.pos"] for name in self.motor_names])
-
- def get_config(self) -> dict[str, Any]:
- return asdict(self)
-
- def transform_features(self, features):
- features[PipelineFeatureType.ACTION][ACTION] = PolicyFeature(
- type=FeatureType.ACTION, shape=(len(self.motor_names),)
- )
- return features
-
-
-@dataclass
-@ProcessorStepRegistry.register("policy_action_to_robot_action_processor")
-class PolicyActionToRobotActionProcessorStep(ActionProcessorStep):
- """Processor step to map a policy action to a robot action."""
-
- motor_names: list[str]
-
- def action(self, action: PolicyAction) -> RobotAction:
- if len(self.motor_names) != len(action):
- raise ValueError(f"Action must have {len(self.motor_names)} elements, got {len(action)}")
- return {f"{name}.pos": action[i] for i, name in enumerate(self.motor_names)}
-
- def get_config(self) -> dict[str, Any]:
- return asdict(self)
-
- def transform_features(self, features):
- for name in self.motor_names:
- features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
- return features
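- # Illustrative round-trip sketch (motor names hypothetical): the two steps are
- # inverses over the same motor ordering.
- #
- #     to_policy = RobotActionToPolicyActionProcessorStep(motor_names=["shoulder", "elbow"])
- #     to_robot = PolicyActionToRobotActionProcessorStep(motor_names=["shoulder", "elbow"])
- #     vec = to_policy.action({"shoulder.pos": 0.1, "elbow.pos": 0.2})  # tensor([0.1000, 0.2000])
- #     to_robot.action(vec)  # {"shoulder.pos": tensor(0.1000), "elbow.pos": tensor(0.2000)}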
diff --git a/lerobot/src/lerobot/processor/rename_processor.py b/lerobot/src/lerobot/processor/rename_processor.py
deleted file mode 100644
index f0aa4070b865319b1ef8363afe68502016dad400..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/rename_processor.py
+++ /dev/null
@@ -1,93 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from copy import deepcopy
-from dataclasses import dataclass, field
-from typing import Any
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-
-from .pipeline import ObservationProcessorStep, ProcessorStepRegistry
-
-
-@dataclass
-@ProcessorStepRegistry.register(name="rename_observations_processor")
-class RenameObservationsProcessorStep(ObservationProcessorStep):
- """
- A processor step that renames keys in an observation dictionary.
-
- This step is useful for creating a standardized data interface by mapping keys
- from an environment's format to the format expected by a LeRobot policy or
- other downstream components.
-
- Attributes:
- rename_map: A dictionary mapping from old key names to new key names.
- Keys present in an observation that are not in this map will
- be kept with their original names.
- """
-
- rename_map: dict[str, str] = field(default_factory=dict)
-
- def observation(self, observation):
- processed_obs = {}
- for key, value in observation.items():
- if key in self.rename_map:
- processed_obs[self.rename_map[key]] = value
- else:
- processed_obs[key] = value
-
- return processed_obs
-
- def get_config(self) -> dict[str, Any]:
- return {"rename_map": self.rename_map}
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """Transforms:
- - Each key in the observation that appears in `rename_map` is renamed to its value.
- - Keys not in `rename_map` remain unchanged.
- """
- new_features: dict[PipelineFeatureType, dict[str, PolicyFeature]] = features.copy()
- new_features[PipelineFeatureType.OBSERVATION] = {
- self.rename_map.get(k, k): v for k, v in features[PipelineFeatureType.OBSERVATION].items()
- }
- return new_features
-
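- # Illustrative sketch (key names hypothetical): mapped keys are renamed, all
- # other keys pass through unchanged.
- #
- #     step = RenameObservationsProcessorStep(rename_map={"pixels": "observation.image"})
- #     step.observation({"pixels": img, "robot_state": state})
- #     # -> {"observation.image": img, "robot_state": state}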
-
-def rename_stats(stats: dict[str, dict[str, Any]], rename_map: dict[str, str]) -> dict[str, dict[str, Any]]:
- """
- Renames the top-level keys in a statistics dictionary using a provided mapping.
-
- This is a helper function typically used to keep normalization statistics
- consistent with renamed observation or action features. It performs a defensive
- deep copy to avoid modifying the original `stats` dictionary.
-
- Args:
- stats: A nested dictionary of statistics, where top-level keys are
- feature names (e.g., `{"observation.state": {"mean": 0.5}}`).
- rename_map: A dictionary mapping old feature names to new feature names.
-
- Returns:
- A new statistics dictionary with its top-level keys renamed. Returns an
- empty dictionary if the input `stats` is empty.
- """
- if not stats:
- return {}
- renamed: dict[str, dict[str, Any]] = {}
- for old_key, sub_stats in stats.items():
- new_key = rename_map.get(old_key, old_key)
- renamed[new_key] = deepcopy(sub_stats) if sub_stats is not None else {}
- return renamed
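-
- # Illustrative sketch (stats hypothetical): keep normalization statistics
- # aligned with renamed features without mutating the original dictionary.
- #
- #     rename_stats({"pixels": {"mean": 0.5}}, {"pixels": "observation.image"})
- #     # -> {"observation.image": {"mean": 0.5}}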
diff --git a/lerobot/src/lerobot/processor/tokenizer_processor.py b/lerobot/src/lerobot/processor/tokenizer_processor.py
deleted file mode 100644
index 9eb93db1f5f414bbbf6f45517e375de837f62c44..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/processor/tokenizer_processor.py
+++ /dev/null
@@ -1,530 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-This script defines a processor for tokenizing natural language instructions from an environment transition.
-
-It uses a tokenizer from the Hugging Face `transformers` library to convert task descriptions (text) into
-token IDs and attention masks, which are then added to the observation dictionary.
-"""
-
-from __future__ import annotations
-
-import logging
-from dataclasses import dataclass, field
-from typing import TYPE_CHECKING, Any
-
-import torch
-
-from lerobot.configs.types import FeatureType, PipelineFeatureType, PolicyFeature
-from lerobot.utils.constants import (
- ACTION_TOKEN_MASK,
- ACTION_TOKENS,
- OBS_LANGUAGE_ATTENTION_MASK,
- OBS_LANGUAGE_TOKENS,
-)
-from lerobot.utils.import_utils import _transformers_available
-
-from .core import EnvTransition, RobotObservation, TransitionKey
-from .pipeline import ActionProcessorStep, ObservationProcessorStep, ProcessorStepRegistry
-
-# Conditional import for type checking and lazy loading
-if TYPE_CHECKING or _transformers_available:
- from transformers import AutoProcessor, AutoTokenizer
-else:
- AutoProcessor = None
- AutoTokenizer = None
-
-
-@dataclass
-@ProcessorStepRegistry.register(name="tokenizer_processor")
-class TokenizerProcessorStep(ObservationProcessorStep):
- """
- Processor step to tokenize a natural language task description.
-
- This step extracts a task string from the `complementary_data` of an `EnvTransition`,
- tokenizes it using a Hugging Face `transformers` tokenizer, and adds the resulting
- token IDs and attention mask to the `observation` dictionary.
-
- Requires the `transformers` library to be installed.
-
- Attributes:
- tokenizer_name: The name of a pretrained tokenizer from the Hugging Face Hub (e.g., "bert-base-uncased").
- tokenizer: A pre-initialized tokenizer object. If provided, `tokenizer_name` is ignored.
- max_length: The maximum length to pad or truncate sequences to.
- task_key: The key in `complementary_data` where the task string is stored.
- padding_side: The side to pad on ('left' or 'right').
- padding: The padding strategy ('max_length', 'longest', etc.).
- truncation: Whether to truncate sequences longer than `max_length`.
- input_tokenizer: The internal tokenizer instance, loaded during initialization.
- """
-
- tokenizer_name: str | None = None
- tokenizer: Any | None = None # Use `Any` for compatibility without a hard dependency
- max_length: int = 512
- task_key: str = "task"
- padding_side: str = "right"
- padding: str = "max_length"
- truncation: bool = True
-
- # Internal tokenizer instance (not part of the config)
- input_tokenizer: Any = field(default=None, init=False, repr=False)
-
- def __post_init__(self):
- """
- Initializes the tokenizer after the dataclass is created.
-
- It checks for the availability of the `transformers` library and loads the tokenizer
- either from a provided object or by name from the Hugging Face Hub.
-
- Raises:
- ImportError: If the `transformers` library is not installed.
- ValueError: If neither `tokenizer` nor `tokenizer_name` is provided.
- """
- if not _transformers_available:
- raise ImportError(
- "The 'transformers' library is not installed. "
- "Please install it with `pip install 'lerobot[transformers-dep]'` to use TokenizerProcessorStep."
- )
-
- if self.tokenizer is not None:
- # Use provided tokenizer object directly
- self.input_tokenizer = self.tokenizer
- elif self.tokenizer_name is not None:
- if AutoTokenizer is None:
- raise ImportError("AutoTokenizer is not available")
- self.input_tokenizer = AutoTokenizer.from_pretrained(self.tokenizer_name)
- else:
- raise ValueError(
- "Either 'tokenizer' or 'tokenizer_name' must be provided. "
- "Pass a tokenizer object directly or a tokenizer name to auto-load."
- )
-
- def get_task(self, transition: EnvTransition) -> list[str] | None:
- """
- Extracts the task description(s) from the transition's complementary data.
-
- Args:
- transition: The environment transition.
-
- Returns:
- A list of task strings, or None if the task is neither a string nor a list of strings.
-
- Raises:
- ValueError: If the complementary data is None or the extracted task is None.
- """
- complementary_data = transition.get(TransitionKey.COMPLEMENTARY_DATA)
- if complementary_data is None:
- raise ValueError("Complementary data is None so no task can be extracted from it")
-
- task = complementary_data[self.task_key]
- if task is None:
- raise ValueError("Task extracted from Complementary data is None")
-
- # Standardize to a list of strings for the tokenizer
- if isinstance(task, str):
- return [task]
- elif isinstance(task, list) and all(isinstance(t, str) for t in task):
- return task
-
- return None
-
- def observation(self, observation: RobotObservation) -> RobotObservation:
- """
- Tokenizes the task description and adds it to the observation dictionary.
-
- This method retrieves the task, tokenizes it, moves the resulting tensors to the
- same device as other data in the transition, and updates the observation.
-
- Args:
- observation: The original observation dictionary.
-
- Returns:
- The updated observation dictionary including token IDs and an attention mask.
- """
- task = self.get_task(self.transition)
- if task is None:
- raise ValueError("Task cannot be None")
-
- # Tokenize the task (this will create CPU tensors)
- tokenized_prompt = self._tokenize_text(task)
-
- # Detect the device from existing tensors in the transition to ensure consistency
- target_device = self._detect_device(self.transition)
-
- # Move new tokenized tensors to the detected device
- if target_device is not None:
- tokenized_prompt = {
- k: v.to(target_device) if isinstance(v, torch.Tensor) else v
- for k, v in tokenized_prompt.items()
- }
-
- # Create a new observation dict to avoid modifying the original in place
- new_observation = dict(observation)
-
- # Add tokenized data to the observation
- new_observation[OBS_LANGUAGE_TOKENS] = tokenized_prompt["input_ids"]
- new_observation[OBS_LANGUAGE_ATTENTION_MASK] = tokenized_prompt["attention_mask"].to(dtype=torch.bool)
-
- return new_observation
-
- def _detect_device(self, transition: EnvTransition) -> torch.device | None:
- """
- Detects the torch.device from existing tensors in the transition.
-
- It checks tensors in the observation dictionary first, then the action tensor.
-
- Args:
- transition: The environment transition.
-
- Returns:
- The detected `torch.device`, or None if no tensors are found.
- """
- # Check observation tensors first (most likely place to find tensors)
- observation = transition.get(TransitionKey.OBSERVATION)
- if observation:
- for value in observation.values():
- if isinstance(value, torch.Tensor):
- return value.device
-
- # Fallback to checking the action tensor
- action = transition.get(TransitionKey.ACTION)
- if isinstance(action, torch.Tensor):
- return action.device
-
- return None # No tensors found; tokenized tensors will stay on CPU
-
- def _tokenize_text(self, text: str | list[str]) -> dict[str, torch.Tensor]:
- """
- A wrapper around the tokenizer call.
-
- Args:
- text: A string or list of strings to tokenize.
-
- Returns:
- A dictionary containing tokenized 'input_ids' and 'attention_mask' as PyTorch tensors.
- """
- return self.input_tokenizer(
- text,
- max_length=self.max_length,
- truncation=self.truncation,
- padding=self.padding,
- padding_side=self.padding_side,
- return_tensors="pt",
- )
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the serializable configuration of the processor.
-
- Note: The tokenizer object itself is not serialized. If the processor was initialized
- with a tokenizer name, that name will be included in the config.
-
- Returns:
- A dictionary with the processor's configuration parameters.
- """
- config = {
- "max_length": self.max_length,
- "task_key": self.task_key,
- "padding_side": self.padding_side,
- "padding": self.padding,
- "truncation": self.truncation,
- }
-
- # Only save tokenizer_name if it was used to create the tokenizer
- if self.tokenizer_name is not None and self.tokenizer is None:
- config["tokenizer_name"] = self.tokenizer_name
-
- return config
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Adds feature definitions for the language tokens and attention mask.
-
- This updates the policy features dictionary to include the new data added to the
- observation, ensuring downstream components are aware of their shape and type.
-
- Args:
- features: The dictionary of existing policy features.
-
- Returns:
- The updated dictionary of policy features.
- """
- # Add a feature for the token IDs if it doesn't already exist
- if OBS_LANGUAGE_TOKENS not in features[PipelineFeatureType.OBSERVATION]:
- features[PipelineFeatureType.OBSERVATION][OBS_LANGUAGE_TOKENS] = PolicyFeature(
- type=FeatureType.LANGUAGE, shape=(self.max_length,)
- )
-
- # Add a feature for the attention mask if it doesn't already exist
- if OBS_LANGUAGE_ATTENTION_MASK not in features[PipelineFeatureType.OBSERVATION]:
- features[PipelineFeatureType.OBSERVATION][OBS_LANGUAGE_ATTENTION_MASK] = PolicyFeature(
- type=FeatureType.LANGUAGE, shape=(self.max_length,)
- )
-
- return features
-
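- # Illustrative usage sketch (tokenizer name and task string hypothetical): the
- # task carried in complementary_data becomes token and mask entries in the
- # observation.
- #
- #     step = TokenizerProcessorStep(tokenizer_name="bert-base-uncased", max_length=64)
- #     transition = create_transition(
- #         observation={"observation.state": state},
- #         complementary_data={"task": "pick up the red cube"},
- #     )
- #     out = step(transition)
- #     out[TransitionKey.OBSERVATION][OBS_LANGUAGE_TOKENS].shape  # (1, 64)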
-
-@dataclass
-@ProcessorStepRegistry.register(name="action_tokenizer_processor")
-class ActionTokenizerProcessorStep(ActionProcessorStep):
- """
- Processor step to tokenize action data using a fast action tokenizer.
-
- This step takes action tensors from an `EnvTransition`, tokenizes them using
- a Hugging Face `transformers` AutoProcessor (such as the Physical Intelligence "fast" tokenizer),
- and returns the tokenized action.
-
- Requires the `transformers` library to be installed.
-
- Attributes:
- action_tokenizer_name: The name of a pretrained processor from the Hugging Face Hub (e.g., "physical-intelligence/fast").
- action_tokenizer_input_object: A pre-initialized processor/tokenizer object. If provided, `action_tokenizer_name` is ignored.
- trust_remote_code: Whether to trust remote code when loading the tokenizer (required for some tokenizers).
- max_action_tokens: The maximum number of action tokens to pad or truncate to.
- fast_skip_tokens: The number of special tokens skipped at the end of the PaliGemma vocabulary when mapping action tokens.
- paligemma_tokenizer_name: The name of a pretrained PaliGemma tokenizer from the Hugging Face Hub (e.g., "google/paligemma-3b-pt-224").
- action_tokenizer: The internal tokenizer/processor instance, loaded during initialization.
- """
-
- action_tokenizer_name: str | None = None
- action_tokenizer_input_object: Any | None = None
- trust_remote_code: bool = True
- max_action_tokens: int = 256
- fast_skip_tokens: int = 128
- paligemma_tokenizer_name: str = "google/paligemma-3b-pt-224"
- # Internal tokenizer instance (not part of the config)
- action_tokenizer: Any = field(default=None, init=False, repr=False)
- _paligemma_tokenizer: Any = field(default=None, init=False, repr=False)
-
- def __post_init__(self):
- """
- Initializes the action tokenizer after the dataclass is created.
-
- It checks for the availability of the `transformers` library and loads the tokenizer
- either from a provided object or by name from the Hugging Face Hub.
-
- Raises:
- ImportError: If the `transformers` library is not installed.
- ValueError: If neither `action_tokenizer_input_object` nor `action_tokenizer_name` is provided.
- """
- if not _transformers_available:
- raise ImportError(
- "The 'transformers' library is not installed. "
- "Please install it with `pip install 'lerobot[transformers-dep]'` to use ActionTokenizerProcessorStep."
- )
-
- if self.action_tokenizer_input_object is not None:
- self.action_tokenizer = self.action_tokenizer_input_object
-
- elif self.action_tokenizer_name is not None:
- if AutoProcessor is None:
- raise ImportError("AutoProcessor is not available")
- self.action_tokenizer = AutoProcessor.from_pretrained(
- self.action_tokenizer_name, trust_remote_code=self.trust_remote_code
- )
- else:
- raise ValueError(
- "Either 'action_tokenizer' or 'action_tokenizer_name' must be provided. "
- "Pass a tokenizer object directly or a tokenizer name to auto-load."
- )
-
- self._paligemma_tokenizer = AutoTokenizer.from_pretrained(
- self.paligemma_tokenizer_name,
- trust_remote_code=self.trust_remote_code,
- add_eos_token=True,
- add_bos_token=False,
- )
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- """
- Applies action tokenization to the transition.
-
- This overrides the base class to handle both tokens and mask.
-
- Args:
- transition: The input transition with action data.
-
- Returns:
- The processed transition with tokenized actions and mask in complementary data.
- """
- self._current_transition = transition.copy()
- new_transition = self._current_transition
-
- action = new_transition.get(TransitionKey.ACTION)
- if action is None:
- # During inference, no action is available, skip tokenization
- return new_transition
-
- # Tokenize and get both tokens and mask
- tokens, mask = self._tokenize_action(action)
-
- # Store mask in complementary data
- complementary_data = new_transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
- if complementary_data is None:
- complementary_data = {}
- complementary_data[ACTION_TOKEN_MASK] = mask
- complementary_data[ACTION_TOKENS] = tokens
- new_transition[TransitionKey.COMPLEMENTARY_DATA] = complementary_data
- return new_transition
-
- def _act_tokens_to_paligemma_tokens(self, tokens: torch.Tensor) -> torch.Tensor:
- """
- Converts action tokens to PaliGemma tokens.
- """
- return self._paligemma_tokenizer.vocab_size - 1 - self.fast_skip_tokens - tokens
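- # Worked example (illustrative, not part of the original file): with a
- # hypothetical vocab_size V = 257152 and fast_skip_tokens = 128, a fast
- # action token t maps to id V - 1 - 128 - t, so t = 0 -> 257023,
- # t = 1 -> 257022, and so on. Action tokens thus occupy the top of the
- # PaliGemma vocabulary while the final `fast_skip_tokens` ids are skipped.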
-
- def _tokenize_action(self, action: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
- """
- Tokenizes the action tensor and creates a mask.
-
- Args:
- action: The input action tensor to tokenize. Shape: (B, H, action_dim) or (H, action_dim).
- A 1-D input is treated as a single sample, and the batch dimension is squeezed from the outputs.
-
- Returns:
- A tuple of (tokens, mask) where:
- - tokens: Tensor of token IDs with shape (B, max_action_tokens)
- - mask: Boolean mask with shape (B, max_action_tokens), True for real tokens, False for padding
- """
- if action is None:
- raise ValueError("Action cannot be None")
-
- # Get the device and dtype of the input action
- device = action.device if isinstance(action, torch.Tensor) else None
-
- # Handle single sample (add batch dimension)
- single_sample = action.dim() == 1
- if single_sample:
- action = action.unsqueeze(0)
-
- batch_size = action.shape[0]
-
- # Tokenize the action batch
- # The fast tokenizer expects action data and returns token IDs
- tokens_list = []
- masks_list = []
-
- for i in range(batch_size):
- # Tokenize single action (move to CPU first as tokenizer uses scipy which requires numpy)
- action_cpu = action[i : i + 1].cpu()
- tokens = self.action_tokenizer(action_cpu)
-
- # The tokenizer may return a list or array; convert it to a long tensor
- if not isinstance(tokens, torch.Tensor):
- tokens = torch.tensor(tokens, dtype=torch.long, device=action.device)
- else:
- # Move tokens back to the same device as the input action
- tokens = tokens.to(device=action.device)
-
- # Flatten to 1D if needed
- if tokens.dim() > 1:
- tokens = tokens.flatten()
-
- bos_id = self._paligemma_tokenizer.bos_token_id
- # add bos
- tokens = torch.cat(
- [
- torch.tensor([bos_id], device=action.device),
- torch.tensor(
- self._paligemma_tokenizer.encode("Action: ", add_special_tokens=False),
- device=action.device,
- ),
- self._act_tokens_to_paligemma_tokens(tokens),
- torch.tensor(self._paligemma_tokenizer.encode("|"), device=action.device),
- ]
- )
-
- # Truncate or pad to max_action_tokens
- if len(tokens) > self.max_action_tokens:
- logging.warning(
- f"Token length ({len(tokens)}) exceeds max length ({self.max_action_tokens}), truncating. "
- "Consider increasing the `max_action_tokens` in your model config if this happens frequently."
- )
- tokens = tokens[: self.max_action_tokens]
- mask = torch.ones(self.max_action_tokens, dtype=torch.bool, device=action.device)
- else:
- mask = torch.cat(
- [
- torch.ones(len(tokens), dtype=torch.bool, device=action.device),
- torch.zeros(
- self.max_action_tokens - len(tokens), dtype=torch.bool, device=action.device
- ),
- ]
- )
- # Pad tokens with zeros
- tokens = torch.nn.functional.pad(tokens, (0, self.max_action_tokens - len(tokens)), value=0)
-
- tokens_list.append(tokens)
- masks_list.append(mask)
-
- # Stack into batched tensors
- tokens_batch = torch.stack(tokens_list, dim=0) # (B, max_action_tokens)
- masks_batch = torch.stack(masks_list, dim=0) # (B, max_action_tokens)
-
- # Remove batch dimension if input was single sample
- if single_sample:
- tokens_batch = tokens_batch.squeeze(0)
- masks_batch = masks_batch.squeeze(0)
-
- # Move to the same device as the input
- if device is not None:
- tokens_batch = tokens_batch.to(device)
- masks_batch = masks_batch.to(device)
-
- return tokens_batch, masks_batch
-
- def action(self, action: torch.Tensor) -> torch.Tensor:
- """
- This method is not used since we override __call__.
- Required by ActionProcessorStep ABC.
- """
- tokens, _ = self._tokenize_action(action)
- return tokens
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the serializable configuration of the processor.
-
- Note: The tokenizer object itself is not serialized. If the processor was initialized
- with a tokenizer name, that name will be included in the config.
-
- Returns:
- A dictionary with the processor's configuration parameters.
- """
- config = {
- "trust_remote_code": self.trust_remote_code,
- "max_action_tokens": self.max_action_tokens,
- }
-
- # Only save tokenizer_name if it was used to create the tokenizer
- if self.action_tokenizer_name is not None and self.action_tokenizer_input_object is None:
- config["action_tokenizer_name"] = self.action_tokenizer_name
-
- return config
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Updates feature definitions to reflect tokenized actions.
-
- This updates the policy features dictionary to indicate that the action
- has been tokenized into a sequence of token IDs with shape (max_action_tokens,).
-
- Args:
- features: The dictionary of existing policy features.
-
- Returns:
- The updated dictionary of policy features.
- """
- return features
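-
-
-# Illustrative sketch (not part of the original file): the fixed-shape contract
-# of `_tokenize_action`. The tokenizer name matches the example in the class
-# docstring; downloading it requires network access.
-def _demo_action_tokenization_shapes() -> None:
-    step = ActionTokenizerProcessorStep(action_tokenizer_name="physical-intelligence/fast")
-    action = torch.randn(2, 50, 6)  # (B, horizon, action_dim)
-    tokens, mask = step._tokenize_action(action)
-    assert tokens.shape == (2, step.max_action_tokens)
-    assert mask.shape == (2, step.max_action_tokens)
-    assert mask.dtype == torch.bool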
diff --git a/lerobot/src/lerobot/rl/actor.py b/lerobot/src/lerobot/rl/actor.py
deleted file mode 100644
index 81f4170d21456b77f0ccdbb244f60d048f1f3302..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/actor.py
+++ /dev/null
@@ -1,738 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Actor server runner for distributed HILSerl robot policy training.
-
-This script implements the actor component of the distributed HILSerl architecture.
-It executes the policy in the robot environment, collects experience,
-and sends transitions to the learner server for policy updates.
-
-Examples of usage:
-
-- Start an actor server for real robot training with human-in-the-loop intervention:
-```bash
-python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
-```
-
-**NOTE**: The actor server requires a running learner server to connect to. Ensure the learner
-server is started before launching the actor.
-
-**NOTE**: Human intervention is key to HILSerl training. Press the upper right trigger button on the
-gamepad to take control of the robot during training. Initially intervene frequently, then gradually
-reduce interventions as the policy improves.
-
-**WORKFLOW**:
-1. Determine robot workspace bounds using `lerobot-find-joint-limits`
-2. Record demonstrations with `gym_manipulator.py` in record mode
-3. Process the dataset and determine camera crops with `crop_dataset_roi.py`
-4. Start the learner server with the training configuration
-5. Start this actor server with the same configuration
-6. Use human interventions to guide policy learning
-
-For more details on the complete HILSerl training workflow, see:
-https://github.com/michel-aractingi/lerobot-hilserl-guide
-"""
-
-import logging
-import os
-import time
-from functools import lru_cache
-from queue import Empty
-
-import grpc
-import torch
-from torch import nn
-from torch.multiprocessing import Event, Queue
-
-from lerobot.cameras import opencv # noqa: F401
-from lerobot.configs import parser
-from lerobot.configs.train import TrainRLServerPipelineConfig
-from lerobot.policies.factory import make_policy
-from lerobot.policies.sac.modeling_sac import SACPolicy
-from lerobot.processor import TransitionKey
-from lerobot.rl.process import ProcessSignalHandler
-from lerobot.rl.queue import get_last_item_from_queue
-from lerobot.robots import so_follower # noqa: F401
-from lerobot.teleoperators import gamepad, so_leader # noqa: F401
-from lerobot.teleoperators.utils import TeleopEvents
-from lerobot.transport import services_pb2, services_pb2_grpc
-from lerobot.transport.utils import (
- bytes_to_state_dict,
- grpc_channel_options,
- python_object_to_bytes,
- receive_bytes_in_chunks,
- send_bytes_in_chunks,
- transitions_to_bytes,
-)
-from lerobot.utils.random_utils import set_seed
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.transition import (
- Transition,
- move_state_dict_to_device,
- move_transition_to_device,
-)
-from lerobot.utils.utils import (
- TimerManager,
- get_safe_torch_device,
- init_logging,
-)
-
-from .gym_manipulator import (
- create_transition,
- make_processors,
- make_robot_env,
- step_env_and_process_transition,
-)
-
-# Main entry point
-
-
-@parser.wrap()
-def actor_cli(cfg: TrainRLServerPipelineConfig):
- cfg.validate()
- display_pid = False
- if not use_threads(cfg):
- import torch.multiprocessing as mp
-
- mp.set_start_method("spawn")
- display_pid = True
-
- # Create logs directory to ensure it exists
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"actor_{cfg.job_name}.log")
-
- # Initialize logging with explicit log file
- init_logging(log_file=log_file, display_pid=display_pid)
- logging.info(f"Actor logging initialized, writing to {log_file}")
-
- is_threaded = use_threads(cfg)
- shutdown_event = ProcessSignalHandler(is_threaded, display_pid=display_pid).shutdown_event
-
- learner_client, grpc_channel = learner_service_client(
- host=cfg.policy.actor_learner_config.learner_host,
- port=cfg.policy.actor_learner_config.learner_port,
- )
-
- logging.info("[ACTOR] Establishing connection with Learner")
- if not establish_learner_connection(learner_client, shutdown_event):
- logging.error("[ACTOR] Failed to establish connection with Learner")
- return
-
- if not use_threads(cfg):
- # With multithreading the channel can be reused; with processes each child must create its own
- grpc_channel.close()
- grpc_channel = None
-
- logging.info("[ACTOR] Connection with Learner established")
-
- parameters_queue = Queue()
- transitions_queue = Queue()
- interactions_queue = Queue()
-
- concurrency_entity = None
- if use_threads(cfg):
- from threading import Thread
-
- concurrency_entity = Thread
- else:
- from multiprocessing import Process
-
- concurrency_entity = Process
-
- receive_policy_process = concurrency_entity(
- target=receive_policy,
- args=(cfg, parameters_queue, shutdown_event, grpc_channel),
- daemon=True,
- )
-
- transitions_process = concurrency_entity(
- target=send_transitions,
- args=(cfg, transitions_queue, shutdown_event, grpc_channel),
- daemon=True,
- )
-
- interactions_process = concurrency_entity(
- target=send_interactions,
- args=(cfg, interactions_queue, shutdown_event, grpc_channel),
- daemon=True,
- )
-
- transitions_process.start()
- interactions_process.start()
- receive_policy_process.start()
-
- act_with_policy(
- cfg=cfg,
- shutdown_event=shutdown_event,
- parameters_queue=parameters_queue,
- transitions_queue=transitions_queue,
- interactions_queue=interactions_queue,
- )
- logging.info("[ACTOR] Policy process joined")
-
- logging.info("[ACTOR] Closing queues")
- transitions_queue.close()
- interactions_queue.close()
- parameters_queue.close()
-
- transitions_process.join()
- logging.info("[ACTOR] Transitions process joined")
- interactions_process.join()
- logging.info("[ACTOR] Interactions process joined")
- receive_policy_process.join()
- logging.info("[ACTOR] Receive policy process joined")
-
- logging.info("[ACTOR] join queues")
- transitions_queue.cancel_join_thread()
- interactions_queue.cancel_join_thread()
- parameters_queue.cancel_join_thread()
-
- logging.info("[ACTOR] queues closed")
-
-
-# Core algorithm functions
-
-
-def act_with_policy(
- cfg: TrainRLServerPipelineConfig,
- shutdown_event: Event, # type: ignore
- parameters_queue: Queue,
- transitions_queue: Queue,
- interactions_queue: Queue,
-):
- """
- Executes policy interaction within the environment.
-
- This function rolls out the policy in the environment, collecting interaction data and pushing it to a queue for streaming to the learner.
- Once an episode is completed, updated network parameters received from the learner are retrieved from a queue and loaded into the network.
-
- Args:
- cfg: Configuration settings for the interaction process.
- shutdown_event: Event to check if the process should shutdown.
- parameters_queue: Queue to receive updated network parameters from the learner.
- transitions_queue: Queue to send transitions to the learner.
- interactions_queue: Queue to send interactions to the learner.
- """
- # Initialize logging for multiprocessing
- if not use_threads(cfg):
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"actor_policy_{os.getpid()}.log")
- init_logging(log_file=log_file, display_pid=True)
- logging.info("Actor policy process logging initialized")
-
- logging.info("make_env online")
-
- online_env, teleop_device = make_robot_env(cfg=cfg.env)
- env_processor, action_processor = make_processors(online_env, teleop_device, cfg.env, cfg.policy.device)
-
- set_seed(cfg.seed)
- device = get_safe_torch_device(cfg.policy.device, log=True)
-
- torch.backends.cudnn.benchmark = True
- torch.backends.cuda.matmul.allow_tf32 = True
-
- logging.info("make_policy")
-
- # Instantiate the policy in both the actor and learner processes.
- # To avoid sending a SACPolicy object over the network, we create a policy instance on both
- # sides; the learner then sends updated parameters every n steps to refresh the actor's copy.
- policy: SACPolicy = make_policy(
- cfg=cfg.policy,
- env_cfg=cfg.env,
- )
- policy = policy.eval()
- assert isinstance(policy, nn.Module)
-
- obs, info = online_env.reset()
- env_processor.reset()
- action_processor.reset()
-
- # Process initial observation
- transition = create_transition(observation=obs, info=info)
- transition = env_processor(transition)
-
- # NOTE: For the moment we only handle the single-environment case
- sum_reward_episode = 0
- list_transition_to_send_to_learner = []
- episode_intervention = False
- # Add counters for intervention rate calculation
- episode_intervention_steps = 0
- episode_total_steps = 0
-
- policy_timer = TimerManager("Policy inference", log=False)
-
- for interaction_step in range(cfg.policy.online_steps):
- start_time = time.perf_counter()
- if shutdown_event.is_set():
- logging.info("[ACTOR] Shutting down act_with_policy")
- return
-
- observation = {
- k: v for k, v in transition[TransitionKey.OBSERVATION].items() if k in cfg.policy.input_features
- }
-
- # Time policy inference and check if it meets FPS requirement
- with policy_timer:
- # Extract observation from transition for policy
- action = policy.select_action(batch=observation)
- policy_fps = policy_timer.fps_last
-
- log_policy_frequency_issue(policy_fps=policy_fps, cfg=cfg, interaction_step=interaction_step)
-
- # Use the new step function
- new_transition = step_env_and_process_transition(
- env=online_env,
- transition=transition,
- action=action,
- env_processor=env_processor,
- action_processor=action_processor,
- )
-
- # Extract values from processed transition
- next_observation = {
- k: v
- for k, v in new_transition[TransitionKey.OBSERVATION].items()
- if k in cfg.policy.input_features
- }
-
- # Teleop action is the action that was executed in the environment
- # It is either the action from the teleop device or the action from the policy
- executed_action = new_transition[TransitionKey.COMPLEMENTARY_DATA]["teleop_action"]
-
- reward = new_transition[TransitionKey.REWARD]
- done = new_transition.get(TransitionKey.DONE, False)
- truncated = new_transition.get(TransitionKey.TRUNCATED, False)
-
- sum_reward_episode += float(reward)
- episode_total_steps += 1
-
- # Check for intervention from transition info
- intervention_info = new_transition[TransitionKey.INFO]
- if intervention_info.get(TeleopEvents.IS_INTERVENTION, False):
- episode_intervention = True
- episode_intervention_steps += 1
-
- complementary_info = {
- "discrete_penalty": torch.tensor(
- [new_transition[TransitionKey.COMPLEMENTARY_DATA].get("discrete_penalty", 0.0)]
- ),
- }
- # Create transition for learner (convert to old format)
- list_transition_to_send_to_learner.append(
- Transition(
- state=observation,
- action=executed_action,
- reward=reward,
- next_state=next_observation,
- done=done,
- truncated=truncated,
- complementary_info=complementary_info,
- )
- )
-
- # Update transition for next iteration
- transition = new_transition
-
- if done or truncated:
- logging.info(f"[ACTOR] Global step {interaction_step}: Episode reward: {sum_reward_episode}")
-
- update_policy_parameters(policy=policy, parameters_queue=parameters_queue, device=device)
-
- if len(list_transition_to_send_to_learner) > 0:
- push_transitions_to_transport_queue(
- transitions=list_transition_to_send_to_learner,
- transitions_queue=transitions_queue,
- )
- list_transition_to_send_to_learner = []
-
- stats = get_frequency_stats(policy_timer)
- policy_timer.reset()
-
- # Calculate intervention rate
- intervention_rate = 0.0
- if episode_total_steps > 0:
- intervention_rate = episode_intervention_steps / episode_total_steps
-
- # Send episodic reward to the learner
- interactions_queue.put(
- python_object_to_bytes(
- {
- "Episodic reward": sum_reward_episode,
- "Interaction step": interaction_step,
- "Episode intervention": int(episode_intervention),
- "Intervention rate": intervention_rate,
- **stats,
- }
- )
- )
-
- # Reset intervention counters and environment
- sum_reward_episode = 0.0
- episode_intervention = False
- episode_intervention_steps = 0
- episode_total_steps = 0
-
- # Reset environment and processors
- obs, info = online_env.reset()
- env_processor.reset()
- action_processor.reset()
-
- # Process initial observation
- transition = create_transition(observation=obs, info=info)
- transition = env_processor(transition)
-
- if cfg.env.fps is not None:
- dt_time = time.perf_counter() - start_time
- precise_sleep(max(1 / cfg.env.fps - dt_time, 0.0))
-
-
-# Communication Functions - Group all gRPC/messaging functions
-
-
-def establish_learner_connection(
- stub: services_pb2_grpc.LearnerServiceStub,
- shutdown_event: Event, # type: ignore
- attempts: int = 30,
-):
- """Establish a connection with the learner.
-
- Args:
- stub (services_pb2_grpc.LearnerServiceStub): The stub to use for the connection.
- shutdown_event (Event): The event to check if the connection should be established.
- attempts (int): The number of attempts to establish the connection.
- Returns:
- bool: True if the connection is established, False otherwise.
- """
- for _ in range(attempts):
- if shutdown_event.is_set():
- logging.info("[ACTOR] Shutting down establish_learner_connection")
- return False
-
- # Force a connection attempt and check state
- try:
- logging.info("[ACTOR] Send ready message to Learner")
- if stub.Ready(services_pb2.Empty()) == services_pb2.Empty():
- return True
- except grpc.RpcError as e:
- logging.error(f"[ACTOR] Waiting for Learner to be ready... {e}")
- time.sleep(2)
- return False
-
-
-@lru_cache(maxsize=1)
-def learner_service_client(
- host: str = "127.0.0.1",
- port: int = 50051,
-) -> tuple[services_pb2_grpc.LearnerServiceStub, grpc.Channel]:
- """
- Returns a client for the learner service.
-
- gRPC uses HTTP/2, a binary protocol that multiplexes requests over a single connection,
- so we create only one client and reuse it.
- """
-
- channel = grpc.insecure_channel(
- f"{host}:{port}",
- grpc_channel_options(),
- )
- stub = services_pb2_grpc.LearnerServiceStub(channel)
- logging.info("[ACTOR] Learner service client created")
- return stub, channel
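-
-
-# Illustrative usage (sketch, not part of the original file): because of
-# @lru_cache(maxsize=1), repeated calls with the same arguments return the
-# exact same (stub, channel) pair instead of opening a new connection:
-#
-#   stub_a, channel_a = learner_service_client(host="127.0.0.1", port=50051)
-#   stub_b, channel_b = learner_service_client(host="127.0.0.1", port=50051)
-#   assert stub_a is stub_b and channel_a is channel_b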
-
-
-def receive_policy(
- cfg: TrainRLServerPipelineConfig,
- parameters_queue: Queue,
- shutdown_event: Event, # type: ignore
- learner_client: services_pb2_grpc.LearnerServiceStub | None = None,
- grpc_channel: grpc.Channel | None = None,
-):
- """Receive parameters from the learner.
-
- Args:
- cfg (TrainRLServerPipelineConfig): The configuration for the actor.
- parameters_queue (Queue): The queue to receive the parameters.
- shutdown_event (Event): The event to check if the process should shutdown.
- """
- logging.info("[ACTOR] Start receiving parameters from the Learner")
- if not use_threads(cfg):
- # Create a process-specific log file
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"actor_receive_policy_{os.getpid()}.log")
-
- # Initialize logging with explicit log file
- init_logging(log_file=log_file, display_pid=True)
- logging.info("Actor receive policy process logging initialized")
-
- # Setup process handlers to handle shutdown signal
- # But use shutdown event from the main process
- _ = ProcessSignalHandler(use_threads=False, display_pid=True)
-
- if grpc_channel is None or learner_client is None:
- learner_client, grpc_channel = learner_service_client(
- host=cfg.policy.actor_learner_config.learner_host,
- port=cfg.policy.actor_learner_config.learner_port,
- )
-
- try:
- iterator = learner_client.StreamParameters(services_pb2.Empty())
- receive_bytes_in_chunks(
- iterator,
- parameters_queue,
- shutdown_event,
- log_prefix="[ACTOR] parameters",
- )
-
- except grpc.RpcError as e:
- logging.error(f"[ACTOR] gRPC error: {e}")
-
- if not use_threads(cfg):
- grpc_channel.close()
- logging.info("[ACTOR] Received policy loop stopped")
-
-
-def send_transitions(
- cfg: TrainRLServerPipelineConfig,
- transitions_queue: Queue,
- shutdown_event: Event, # type: ignore
- learner_client: services_pb2_grpc.LearnerServiceStub | None = None,
- grpc_channel: grpc.Channel | None = None,
- ) -> None:
- """
- Sends transitions to the learner.
-
- This function continuously retrieves messages from the queue and processes:
-
- - Transition Data:
- - A batch of transitions (observation, action, reward, next observation) is collected.
- - Transitions are moved to the CPU and serialized using PyTorch.
- - The serialized data is wrapped in a `services_pb2.Transition` message and sent to the learner.
- """
-
- if not use_threads(cfg):
- # Create a process-specific log file
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"actor_transitions_{os.getpid()}.log")
-
- # Initialize logging with explicit log file
- init_logging(log_file=log_file, display_pid=True)
- logging.info("Actor transitions process logging initialized")
-
- if grpc_channel is None or learner_client is None:
- learner_client, grpc_channel = learner_service_client(
- host=cfg.policy.actor_learner_config.learner_host,
- port=cfg.policy.actor_learner_config.learner_port,
- )
-
- try:
- learner_client.SendTransitions(
- transitions_stream(
- shutdown_event, transitions_queue, cfg.policy.actor_learner_config.queue_get_timeout
- )
- )
- except grpc.RpcError as e:
- logging.error(f"[ACTOR] gRPC error: {e}")
-
- logging.info("[ACTOR] Finished streaming transitions")
-
- if not use_threads(cfg):
- grpc_channel.close()
- logging.info("[ACTOR] Transitions process stopped")
-
-
-def send_interactions(
- cfg: TrainRLServerPipelineConfig,
- interactions_queue: Queue,
- shutdown_event: Event, # type: ignore
- learner_client: services_pb2_grpc.LearnerServiceStub | None = None,
- grpc_channel: grpc.Channel | None = None,
- ) -> None:
- """
- Sends interactions to the learner.
-
- This function continuously retrieves messages from the queue and processes:
-
- - Interaction Messages:
- - Contains useful statistics about episodic rewards and policy timings.
- - The message is serialized using `pickle` and sent to the learner.
- """
-
- if not use_threads(cfg):
- # Create a process-specific log file
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"actor_interactions_{os.getpid()}.log")
-
- # Initialize logging with explicit log file
- init_logging(log_file=log_file, display_pid=True)
- logging.info("Actor interactions process logging initialized")
-
- # Setup process handlers to handle shutdown signal
- # But use shutdown event from the main process
- _ = ProcessSignalHandler(use_threads=False, display_pid=True)
-
- if grpc_channel is None or learner_client is None:
- learner_client, grpc_channel = learner_service_client(
- host=cfg.policy.actor_learner_config.learner_host,
- port=cfg.policy.actor_learner_config.learner_port,
- )
-
- try:
- learner_client.SendInteractions(
- interactions_stream(
- shutdown_event, interactions_queue, cfg.policy.actor_learner_config.queue_get_timeout
- )
- )
- except grpc.RpcError as e:
- logging.error(f"[ACTOR] gRPC error: {e}")
-
- logging.info("[ACTOR] Finished streaming interactions")
-
- if not use_threads(cfg):
- grpc_channel.close()
- logging.info("[ACTOR] Interactions process stopped")
-
-
-def transitions_stream(shutdown_event: Event, transitions_queue: Queue, timeout: float) -> services_pb2.Empty: # type: ignore
- while not shutdown_event.is_set():
- try:
- message = transitions_queue.get(block=True, timeout=timeout)
- except Empty:
- logging.debug("[ACTOR] Transition queue is empty")
- continue
-
- yield from send_bytes_in_chunks(
- message, services_pb2.Transition, log_prefix="[ACTOR] Send transitions"
- )
-
- return services_pb2.Empty()
-
-
-def interactions_stream(
- shutdown_event: Event,
- interactions_queue: Queue,
- timeout: float, # type: ignore
-) -> services_pb2.Empty:
- while not shutdown_event.is_set():
- try:
- message = interactions_queue.get(block=True, timeout=timeout)
- except Empty:
- logging.debug("[ACTOR] Interaction queue is empty")
- continue
-
- yield from send_bytes_in_chunks(
- message,
- services_pb2.InteractionMessage,
- log_prefix="[ACTOR] Send interactions",
- )
-
- return services_pb2.Empty()
-
-
-# Policy functions
-
-
-def update_policy_parameters(policy: SACPolicy, parameters_queue: Queue, device):
- bytes_state_dict = get_last_item_from_queue(parameters_queue, block=False)
- if bytes_state_dict is not None:
- logging.info("[ACTOR] Load new parameters from Learner.")
- state_dicts = bytes_to_state_dict(bytes_state_dict)
-
- # TODO: check encoder parameter synchronization possible issues:
- # 1. When shared_encoder=True, we're loading stale encoder params from actor's state_dict
- # instead of the updated encoder params from critic (which is optimized separately)
- # 2. When freeze_vision_encoder=True, we waste bandwidth sending/loading frozen params
- # 3. Need to handle encoder params correctly for both actor and discrete_critic
- # Potential fixes:
- # - Send critic's encoder state when shared_encoder=True
- # - Skip encoder params entirely when freeze_vision_encoder=True
- # - Ensure discrete_critic gets correct encoder state (currently uses encoder_critic)
-
- # Load actor state dict
- actor_state_dict = move_state_dict_to_device(state_dicts["policy"], device=device)
- policy.actor.load_state_dict(actor_state_dict)
-
- # Load discrete critic if present
- if hasattr(policy, "discrete_critic") and "discrete_critic" in state_dicts:
- discrete_critic_state_dict = move_state_dict_to_device(
- state_dicts["discrete_critic"], device=device
- )
- policy.discrete_critic.load_state_dict(discrete_critic_state_dict)
- logging.info("[ACTOR] Loaded discrete critic parameters from Learner.")
-
-
-# Utilities functions
-
-
-def push_transitions_to_transport_queue(transitions: list, transitions_queue):
- """Send transitions to learner in smaller chunks to avoid network issues.
-
- Args:
- transitions: List of transitions to send
- message_queue: Queue to send messages to learner
- chunk_size: Size of each chunk to send
- """
- transition_to_send_to_learner = []
- for transition in transitions:
- tr = move_transition_to_device(transition=transition, device="cpu")
- for key, value in tr["state"].items():
- if torch.isnan(value).any():
- logging.warning(f"Found NaN values in transition {key}")
-
- transition_to_send_to_learner.append(tr)
-
- transitions_queue.put(transitions_to_bytes(transition_to_send_to_learner))
-
-
-def get_frequency_stats(timer: TimerManager) -> dict[str, float]:
- """Get the frequency statistics of the policy.
-
- Args:
- timer (TimerManager): The timer with collected metrics.
-
- Returns:
- dict[str, float]: The frequency statistics of the policy.
- """
- stats = {}
- if timer.count > 1:
- avg_fps = timer.fps_avg
- p90_fps = timer.fps_percentile(90)
- logging.debug(f"[ACTOR] Average policy frame rate: {avg_fps}")
- logging.debug(f"[ACTOR] Policy frame rate 90th percentile: {p90_fps}")
- stats = {
- "Policy frequency [Hz]": avg_fps,
- "Policy frequency 90th-p [Hz]": p90_fps,
- }
- return stats
-
-
-def log_policy_frequency_issue(policy_fps: float, cfg: TrainRLServerPipelineConfig, interaction_step: int):
- if policy_fps < cfg.env.fps:
- logging.warning(
- f"[ACTOR] Policy FPS {policy_fps:.1f} below required {cfg.env.fps} at step {interaction_step}"
- )
-
-
-def use_threads(cfg: TrainRLServerPipelineConfig) -> bool:
- return cfg.policy.concurrency.actor == "threads"
-
-
-if __name__ == "__main__":
- actor_cli()
diff --git a/lerobot/src/lerobot/rl/buffer.py b/lerobot/src/lerobot/rl/buffer.py
deleted file mode 100644
index f558b6375fbc584c39a1f76bba3e9758f0c5fa66..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/buffer.py
+++ /dev/null
@@ -1,834 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import functools
-from collections.abc import Callable, Sequence
-from contextlib import suppress
-from typing import TypedDict
-
-import torch
-import torch.nn.functional as F # noqa: N812
-from tqdm import tqdm
-
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.utils.constants import ACTION, DONE, OBS_IMAGE, REWARD
-from lerobot.utils.transition import Transition
-
-
-class BatchTransition(TypedDict):
- state: dict[str, torch.Tensor]
- action: torch.Tensor
- reward: torch.Tensor
- next_state: dict[str, torch.Tensor]
- done: torch.Tensor
- truncated: torch.Tensor
- complementary_info: dict[str, torch.Tensor | float | int] | None
-
-
-def random_crop_vectorized(images: torch.Tensor, output_size: tuple) -> torch.Tensor:
- """
- Perform a per-image random crop over a batch of images in a vectorized way.
- """
- B, C, H, W = images.shape # noqa: N806
- crop_h, crop_w = output_size
-
- if crop_h > H or crop_w > W:
- raise ValueError(
- f"Requested crop size ({crop_h}, {crop_w}) is bigger than the image size ({H}, {W})."
- )
-
- tops = torch.randint(0, H - crop_h + 1, (B,), device=images.device)
- lefts = torch.randint(0, W - crop_w + 1, (B,), device=images.device)
-
- rows = torch.arange(crop_h, device=images.device).unsqueeze(0) + tops.unsqueeze(1)
- cols = torch.arange(crop_w, device=images.device).unsqueeze(0) + lefts.unsqueeze(1)
-
- rows = rows.unsqueeze(2).expand(-1, -1, crop_w) # (B, crop_h, crop_w)
- cols = cols.unsqueeze(1).expand(-1, crop_h, -1) # (B, crop_h, crop_w)
-
- images_hwcn = images.permute(0, 2, 3, 1) # (B, H, W, C)
-
- # Gather pixels
- cropped_hwcn = images_hwcn[torch.arange(B, device=images.device).view(B, 1, 1), rows, cols, :]
- # cropped_hwcn => (B, crop_h, crop_w, C)
-
- cropped = cropped_hwcn.permute(0, 3, 1, 2) # (B, C, crop_h, crop_w)
- return cropped
-
-
-def random_shift(images: torch.Tensor, pad: int = 4):
- """Vectorized random shift, imgs: (B,C,H,W), pad: #pixels"""
- _, _, h, w = images.shape
- images = F.pad(input=images, pad=(pad, pad, pad, pad), mode="replicate")
- return random_crop_vectorized(images=images, output_size=(h, w))
-
-
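-# Illustrative sketch (not part of the original file): DrQ-style augmentation.
-# `random_shift` pads each image by `pad` pixels with edge replication and then
-# takes an independent random crop back to the original (H, W), so shapes are
-# preserved while the content is spatially jittered.
-def _demo_random_shift() -> None:
-    images = torch.zeros(8, 3, 64, 64)
-    shifted = random_shift(images, pad=4)
-    assert shifted.shape == (8, 3, 64, 64)
-
-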
-class ReplayBuffer:
- def __init__(
- self,
- capacity: int,
- device: str = "cuda:0",
- state_keys: Sequence[str] | None = None,
- image_augmentation_function: Callable | None = None,
- use_drq: bool = True,
- storage_device: str = "cpu",
- optimize_memory: bool = False,
- ):
- """
- Replay buffer for storing transitions.
- It allocates its storage tensors (on `storage_device`) when the first transition is added.
- NOTE: If you encounter memory issues, you can use the `optimize_memory` flag to save memory,
- or the `storage_device` flag to store the buffer on a different device.
- Args:
- capacity (int): Maximum number of transitions to store in the buffer.
- device (str): The device where the tensors will be moved when sampling ("cuda:0" or "cpu").
- state_keys (List[str]): The list of keys that appear in `state` and `next_state`.
- image_augmentation_function (Optional[Callable]): A function that takes a batch of images
- and returns a batch of augmented images. If None, a default augmentation function is used.
- use_drq (bool): Whether to apply the default DrQ-style image augmentation when sampling from the buffer.
- storage_device: The device (e.g. "cpu" or "cuda:0") where the data will be stored.
- Using "cpu" can help save GPU memory.
- optimize_memory (bool): If True, optimizes memory by not storing duplicate next_states when
- they can be derived from states. This is useful for large datasets where next_state[i] = state[i+1].
- """
- if capacity <= 0:
- raise ValueError("Capacity must be greater than 0.")
-
- self.capacity = capacity
- self.device = device
- self.storage_device = storage_device
- self.position = 0
- self.size = 0
- self.initialized = False
- self.optimize_memory = optimize_memory
-
- # Track episode boundaries for memory optimization
- self.episode_ends = torch.zeros(capacity, dtype=torch.bool, device=storage_device)
-
- # If no state_keys provided, default to an empty list
- self.state_keys = state_keys if state_keys is not None else []
-
- self.image_augmentation_function = image_augmentation_function
-
- if image_augmentation_function is None:
- base_function = functools.partial(random_shift, pad=4)
- self.image_augmentation_function = torch.compile(base_function)
- self.use_drq = use_drq
-
- def _initialize_storage(
- self,
- state: dict[str, torch.Tensor],
- action: torch.Tensor,
- complementary_info: dict[str, torch.Tensor] | None = None,
- ):
- """Initialize the storage tensors based on the first transition."""
- # Determine shapes from the first transition
- state_shapes = {key: val.squeeze(0).shape for key, val in state.items()}
- action_shape = action.squeeze(0).shape
-
- # Pre-allocate tensors for storage
- self.states = {
- key: torch.empty((self.capacity, *shape), device=self.storage_device)
- for key, shape in state_shapes.items()
- }
- self.actions = torch.empty((self.capacity, *action_shape), device=self.storage_device)
- self.rewards = torch.empty((self.capacity,), device=self.storage_device)
-
- if not self.optimize_memory:
- # Standard approach: store states and next_states separately
- self.next_states = {
- key: torch.empty((self.capacity, *shape), device=self.storage_device)
- for key, shape in state_shapes.items()
- }
- else:
- # Memory-optimized approach: don't allocate next_states buffer
- # Just create a reference to states for consistent API
- self.next_states = self.states # Just a reference for API consistency
-
- self.dones = torch.empty((self.capacity,), dtype=torch.bool, device=self.storage_device)
- self.truncateds = torch.empty((self.capacity,), dtype=torch.bool, device=self.storage_device)
-
- # Initialize storage for complementary_info
- self.has_complementary_info = complementary_info is not None
- self.complementary_info_keys = []
- self.complementary_info = {}
-
- if self.has_complementary_info:
- self.complementary_info_keys = list(complementary_info.keys())
- # Pre-allocate tensors for each key in complementary_info
- for key, value in complementary_info.items():
- if isinstance(value, torch.Tensor):
- value_shape = value.squeeze(0).shape
- self.complementary_info[key] = torch.empty(
- (self.capacity, *value_shape), device=self.storage_device
- )
- elif isinstance(value, (int | float)):
- # Handle scalar values similar to reward
- self.complementary_info[key] = torch.empty((self.capacity,), device=self.storage_device)
- else:
- raise ValueError(f"Unsupported type {type(value)} for complementary_info[{key}]")
-
- self.initialized = True
-
- def __len__(self):
- return self.size
-
- def add(
- self,
- state: dict[str, torch.Tensor],
- action: torch.Tensor,
- reward: float,
- next_state: dict[str, torch.Tensor],
- done: bool,
- truncated: bool,
- complementary_info: dict[str, torch.Tensor] | None = None,
- ):
- """Saves a transition, ensuring tensors are stored on the designated storage device."""
- # Initialize storage if this is the first transition
- if not self.initialized:
- self._initialize_storage(state=state, action=action, complementary_info=complementary_info)
-
- # Store the transition in pre-allocated tensors
- for key in self.states:
- self.states[key][self.position].copy_(state[key].squeeze(dim=0))
-
- if not self.optimize_memory:
- # Only store next_states if not optimizing memory
- self.next_states[key][self.position].copy_(next_state[key].squeeze(dim=0))
-
- self.actions[self.position].copy_(action.squeeze(dim=0))
- self.rewards[self.position] = reward
- self.dones[self.position] = done
- self.truncateds[self.position] = truncated
-
- # Handle complementary_info if provided and storage is initialized
- if complementary_info is not None and self.has_complementary_info:
- # Store the complementary_info
- for key in self.complementary_info_keys:
- if key in complementary_info:
- value = complementary_info[key]
- if isinstance(value, torch.Tensor):
- self.complementary_info[key][self.position].copy_(value.squeeze(dim=0))
- elif isinstance(value, (int | float)):
- self.complementary_info[key][self.position] = value
-
- self.position = (self.position + 1) % self.capacity
- self.size = min(self.size + 1, self.capacity)
-
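-    # Illustrative usage (sketch, not part of the original file). Values are
-    # hypothetical; tensors carry a leading batch dimension of 1, which `add`
-    # squeezes before storing:
-    #
-    #   buffer.add(
-    #       state={"observation.state": torch.zeros(1, 7)},
-    #       action=torch.zeros(1, 6),
-    #       reward=0.0,
-    #       next_state={"observation.state": torch.zeros(1, 7)},
-    #       done=False,
-    #       truncated=False,
-    #   )
-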
- def sample(self, batch_size: int) -> BatchTransition:
- """Sample a random batch of transitions and collate them into batched tensors."""
- if not self.initialized:
- raise RuntimeError("Cannot sample from an empty buffer. Add transitions first.")
-
- batch_size = min(batch_size, self.size)
- high = max(0, self.size - 1) if self.optimize_memory and self.size < self.capacity else self.size
-
- # Random indices for sampling - create on the same device as storage
- idx = torch.randint(low=0, high=high, size=(batch_size,), device=self.storage_device)
-
- # Identify image keys that need augmentation
- image_keys = [k for k in self.states if k.startswith(OBS_IMAGE)] if self.use_drq else []
-
- # Create batched state and next_state
- batch_state = {}
- batch_next_state = {}
-
- # First pass: load all state tensors to target device
- for key in self.states:
- batch_state[key] = self.states[key][idx].to(self.device)
-
- if not self.optimize_memory:
- # Standard approach - load next_states directly
- batch_next_state[key] = self.next_states[key][idx].to(self.device)
- else:
- # Memory-optimized approach - get next_state from the next index
- next_idx = (idx + 1) % self.capacity
- batch_next_state[key] = self.states[key][next_idx].to(self.device)
-
- # Apply image augmentation in a batched way if needed
- if self.use_drq and image_keys:
- # Concatenate all images from state and next_state
- all_images = []
- for key in image_keys:
- all_images.append(batch_state[key])
- all_images.append(batch_next_state[key])
-
- # Optimization: Batch all images and apply augmentation once
- all_images_tensor = torch.cat(all_images, dim=0)
- augmented_images = self.image_augmentation_function(all_images_tensor)
-
- # Split the augmented images back to their sources
- for i, key in enumerate(image_keys):
- # Calculate offsets for the current image key:
- # For each key, we have 2*batch_size images (batch_size for states, batch_size for next_states)
- # States start at index i*2*batch_size and take up batch_size slots
- batch_state[key] = augmented_images[i * 2 * batch_size : (i * 2 + 1) * batch_size]
- # Next states start after the states at index (i*2+1)*batch_size and also take up batch_size slots
- batch_next_state[key] = augmented_images[(i * 2 + 1) * batch_size : (i + 1) * 2 * batch_size]
-
- # Sample other tensors
- batch_actions = self.actions[idx].to(self.device)
- batch_rewards = self.rewards[idx].to(self.device)
- batch_dones = self.dones[idx].to(self.device).float()
- batch_truncateds = self.truncateds[idx].to(self.device).float()
-
- # Sample complementary_info if available
- batch_complementary_info = None
- if self.has_complementary_info:
- batch_complementary_info = {}
- for key in self.complementary_info_keys:
- batch_complementary_info[key] = self.complementary_info[key][idx].to(self.device)
-
- return BatchTransition(
- state=batch_state,
- action=batch_actions,
- reward=batch_rewards,
- next_state=batch_next_state,
- done=batch_dones,
- truncated=batch_truncateds,
- complementary_info=batch_complementary_info,
- )
-
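-    # Worked layout example (illustrative, not part of the original file): with
-    # image_keys = ["observation.image.a", "observation.image.b"] and batch
-    # size B, the tensor passed to the augmentation function is laid out as
-    #   [state_a (B), next_state_a (B), state_b (B), next_state_b (B)]
-    # so key i recovers its states at [2*i*B : (2*i+1)*B] and its next_states
-    # at [(2*i+1)*B : (2*i+2)*B].
-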
- def get_iterator(
- self,
- batch_size: int,
- async_prefetch: bool = True,
- queue_size: int = 2,
- ):
- """
- Creates an infinite iterator that yields batches of transitions.
- Will automatically restart when internal iterator is exhausted.
-
- Args:
- batch_size (int): Size of batches to sample
- async_prefetch (bool): Whether to use asynchronous prefetching with threads (default: True)
- queue_size (int): Number of batches to prefetch (default: 2)
-
- Yields:
- BatchTransition: Batched transitions
- """
- while True: # Create an infinite loop
- if async_prefetch:
- # Get the standard iterator
- iterator = self._get_async_iterator(queue_size=queue_size, batch_size=batch_size)
- else:
- iterator = self._get_naive_iterator(batch_size=batch_size, queue_size=queue_size)
-
- # Yield all items from the iterator
- with suppress(StopIteration):
- yield from iterator
-
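-    # Illustrative usage (sketch, not part of the original file):
-    #
-    #   iterator = buffer.get_iterator(batch_size=256, async_prefetch=True)
-    #   for _ in range(num_updates):  # num_updates is hypothetical
-    #       batch = next(iterator)  # a BatchTransition already on buffer.device
-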
- def _get_async_iterator(self, batch_size: int, queue_size: int = 2):
- """
- Create an iterator that continuously yields prefetched batches in a
- background thread. The design is intentionally simple and avoids busy
- waiting / complex state management.
-
- Args:
- batch_size (int): Size of batches to sample.
- queue_size (int): Maximum number of prefetched batches to keep in
- memory.
-
- Yields:
- BatchTransition: A batch sampled from the replay buffer.
- """
- import queue
- import threading
-
- data_queue: queue.Queue = queue.Queue(maxsize=queue_size)
- shutdown_event = threading.Event()
-
- def producer() -> None:
- """Continuously put sampled batches into the queue until shutdown."""
- while not shutdown_event.is_set():
- try:
- batch = self.sample(batch_size)
- # The timeout ensures the thread unblocks if the queue is full
- # and the shutdown event gets set meanwhile.
- data_queue.put(batch, block=True, timeout=0.5)
- except queue.Full:
- # Queue is full – loop again (will re-check shutdown_event)
- continue
- except Exception:
- # Surface any unexpected error and terminate the producer.
- shutdown_event.set()
-
- producer_thread = threading.Thread(target=producer, daemon=True)
- producer_thread.start()
-
- try:
- while not shutdown_event.is_set():
- try:
- yield data_queue.get(block=True)
- except Exception:
- # If the producer already set the shutdown flag we exit.
- if shutdown_event.is_set():
- break
- finally:
- shutdown_event.set()
- # Drain the queue quickly to help the thread exit if it's blocked on `put`.
- while not data_queue.empty():
- _ = data_queue.get_nowait()
- # Give the producer thread a bit of time to finish.
- producer_thread.join(timeout=1.0)
-
- def _get_naive_iterator(self, batch_size: int, queue_size: int = 2):
- """
- Creates a simple non-threaded iterator that yields batches.
-
- Args:
- batch_size (int): Size of batches to sample
- queue_size (int): Number of initial batches to prefetch
-
- Yields:
- BatchTransition: Batch transitions
- """
- import collections
-
- queue = collections.deque()
-
- def enqueue(n):
- for _ in range(n):
- data = self.sample(batch_size)
- queue.append(data)
-
- enqueue(queue_size)
- while queue:
- yield queue.popleft()
- enqueue(1)
-
- @classmethod
- def from_lerobot_dataset(
- cls,
- lerobot_dataset: LeRobotDataset,
- device: str = "cuda:0",
- state_keys: Sequence[str] | None = None,
- capacity: int | None = None,
- image_augmentation_function: Callable | None = None,
- use_drq: bool = True,
- storage_device: str = "cpu",
- optimize_memory: bool = False,
- ) -> "ReplayBuffer":
- """
- Convert a LeRobotDataset into a ReplayBuffer.
-
- Args:
- lerobot_dataset (LeRobotDataset): The dataset to convert.
- device (str): The device for sampling tensors. Defaults to "cuda:0".
- state_keys (Sequence[str] | None): The list of keys that appear in `state` and `next_state`.
- capacity (int | None): Buffer capacity. If None, uses dataset length.
- image_augmentation_function (Callable | None): Function for image augmentation.
- If None, uses default random shift with pad=4.
- use_drq (bool): Whether to use DrQ image augmentation when sampling.
- storage_device (str): Device for storing tensor data. Using "cpu" saves GPU memory.
- optimize_memory (bool): If True, reduces memory usage by not duplicating state data.
-
- Returns:
- ReplayBuffer: The replay buffer with dataset transitions.
- """
- if capacity is None:
- capacity = len(lerobot_dataset)
-
- if capacity < len(lerobot_dataset):
- raise ValueError(
- "The capacity of the ReplayBuffer must be greater than or equal to the length of the LeRobotDataset."
- )
-
- # Create replay buffer with image augmentation and DrQ settings
- replay_buffer = cls(
- capacity=capacity,
- device=device,
- state_keys=state_keys,
- image_augmentation_function=image_augmentation_function,
- use_drq=use_drq,
- storage_device=storage_device,
- optimize_memory=optimize_memory,
- )
-
- # Convert dataset to transitions
- list_transition = cls._lerobotdataset_to_transitions(dataset=lerobot_dataset, state_keys=state_keys)
-
- # Initialize the buffer with the first transition to set up storage tensors
- if list_transition:
- first_transition = list_transition[0]
- first_state = {k: v.to(device) for k, v in first_transition["state"].items()}
- first_action = first_transition[ACTION].to(device)
-
- # Get complementary info if available
- first_complementary_info = None
- if (
- "complementary_info" in first_transition
- and first_transition["complementary_info"] is not None
- ):
- first_complementary_info = {
- k: v.to(device) for k, v in first_transition["complementary_info"].items()
- }
-
- replay_buffer._initialize_storage(
- state=first_state, action=first_action, complementary_info=first_complementary_info
- )
-
- # Fill the buffer with all transitions
- for data in list_transition:
- for k, v in data.items():
- if isinstance(v, dict):
- for key, tensor in v.items():
- v[key] = tensor.to(storage_device)
- elif isinstance(v, torch.Tensor):
- data[k] = v.to(storage_device)
-
- action = data[ACTION]
-
- replay_buffer.add(
- state=data["state"],
- action=action,
- reward=data["reward"],
- next_state=data["next_state"],
- done=data["done"],
- truncated=False, # NOTE: Truncation is not supported yet in LeRobotDataset
- complementary_info=data.get("complementary_info", None),
- )
-
- return replay_buffer
-
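-    # Illustrative usage (sketch, not part of the original file; the repo_id is
-    # hypothetical and the dataset must exist locally or on the Hub):
-    #
-    #   dataset = LeRobotDataset("lerobot/pusht")
-    #   buffer = ReplayBuffer.from_lerobot_dataset(
-    #       dataset,
-    #       device="cpu",
-    #       state_keys=["observation.state"],
-    #       storage_device="cpu",
-    #   )
-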
- def to_lerobot_dataset(
- self,
- repo_id: str,
- fps=1,
- root=None,
- task_name="from_replay_buffer",
- ) -> LeRobotDataset:
- """
- Converts all transitions in this ReplayBuffer into a single LeRobotDataset object.
- """
- if self.size == 0:
- raise ValueError("The replay buffer is empty. Cannot convert to a dataset.")
-
- # Create features dictionary for the dataset
- features = {
- "index": {"dtype": "int64", "shape": [1]}, # global index across episodes
- "episode_index": {"dtype": "int64", "shape": [1]}, # which episode
- "frame_index": {"dtype": "int64", "shape": [1]}, # index inside an episode
- "timestamp": {"dtype": "float32", "shape": [1]}, # for now we store dummy
- "task_index": {"dtype": "int64", "shape": [1]},
- }
-
- # Add "action"
- sample_action = self.actions[0]
- act_info = guess_feature_info(t=sample_action, name=ACTION)
- features[ACTION] = act_info
-
- # Add "reward" and "done"
- features[REWARD] = {"dtype": "float32", "shape": (1,)}
- features[DONE] = {"dtype": "bool", "shape": (1,)}
-
- # Add state keys
- for key in self.states:
- sample_val = self.states[key][0]
- f_info = guess_feature_info(t=sample_val, name=key)
- features[key] = f_info
-
- # Add complementary_info keys if available
- if self.has_complementary_info:
- for key in self.complementary_info_keys:
- sample_val = self.complementary_info[key][0]
- if isinstance(sample_val, torch.Tensor) and sample_val.ndim == 0:
- sample_val = sample_val.unsqueeze(0)
- f_info = guess_feature_info(t=sample_val, name=f"complementary_info.{key}")
- features[f"complementary_info.{key}"] = f_info
-
- # Create an empty LeRobotDataset
- lerobot_dataset = LeRobotDataset.create(
- repo_id=repo_id,
- fps=fps,
- root=root,
- robot_type=None,
- features=features,
- use_videos=True,
- )
-
- # Start writing images if needed
- lerobot_dataset.start_image_writer(num_processes=0, num_threads=3)
-
- # Convert transitions into episodes and frames
-
- for idx in range(self.size):
- actual_idx = (self.position - self.size + idx) % self.capacity
-
- frame_dict = {}
-
- # Fill the data for state keys
- for key in self.states:
- frame_dict[key] = self.states[key][actual_idx].cpu()
-
- # Fill action, reward, done
- frame_dict[ACTION] = self.actions[actual_idx].cpu()
- frame_dict[REWARD] = torch.tensor([self.rewards[actual_idx]], dtype=torch.float32).cpu()
- frame_dict[DONE] = torch.tensor([self.dones[actual_idx]], dtype=torch.bool).cpu()
- frame_dict["task"] = task_name
-
- # Add complementary_info if available
- if self.has_complementary_info:
- for key in self.complementary_info_keys:
- val = self.complementary_info[key][actual_idx]
- # Convert tensors to CPU
- if isinstance(val, torch.Tensor):
- if val.ndim == 0:
- val = val.unsqueeze(0)
- frame_dict[f"complementary_info.{key}"] = val.cpu()
- # Non-tensor values can be used directly
- else:
- frame_dict[f"complementary_info.{key}"] = val
-
- # Add to the dataset's buffer
- lerobot_dataset.add_frame(frame_dict)
-
- # If we reached an episode boundary, call save_episode, reset counters
- if self.dones[actual_idx] or self.truncateds[actual_idx]:
- lerobot_dataset.save_episode()
-
- # Save any remaining frames in the buffer
- if lerobot_dataset.episode_buffer["size"] > 0:
- lerobot_dataset.save_episode()
-
- lerobot_dataset.stop_image_writer()
- lerobot_dataset.finalize()
-
- return lerobot_dataset
-
- @staticmethod
- def _lerobotdataset_to_transitions(
- dataset: LeRobotDataset,
- state_keys: Sequence[str] | None = None,
- ) -> list[Transition]:
- """
- Convert a LeRobotDataset into a list of RL (s, a, r, s', done) transitions.
-
- Args:
- dataset (LeRobotDataset):
- The dataset to convert. Each item in the dataset is expected to have
- at least the following keys:
- {
- "action": ...
- "next.reward": ...
- "next.done": ...
- "episode_index": ...
- }
- plus whatever your 'state_keys' specify.
-
- state_keys (Sequence[str] | None):
- The dataset keys to include in 'state' and 'next_state'. Their names
- will be kept as-is in the output transitions. E.g.
- ["observation.state", "observation.environment_state"].
- If None, you must handle or define default keys.
-
- Returns:
- transitions (List[Transition]):
- A list of Transition dictionaries with the same length as `dataset`.
- """
- if state_keys is None:
- raise ValueError("State keys must be provided when converting LeRobotDataset to Transitions.")
-
- transitions = []
- num_frames = len(dataset)
-
- # Check if the dataset has "next.done" key
- sample = dataset[0]
- has_done_key = DONE in sample
-
- # Check for complementary_info keys
- complementary_info_keys = [key for key in sample if key.startswith("complementary_info.")]
- has_complementary_info = len(complementary_info_keys) > 0
-
- # If not, we need to infer it from episode boundaries
- if not has_done_key:
- print("'next.done' key not found in dataset. Inferring from episode boundaries...")
-
- for i in tqdm(range(num_frames)):
- current_sample = dataset[i]
-
- # ----- 1) Current state -----
- current_state: dict[str, torch.Tensor] = {}
- for key in state_keys:
- val = current_sample[key]
- current_state[key] = val.unsqueeze(0) # Add batch dimension
-
- # ----- 2) Action -----
- action = current_sample[ACTION].unsqueeze(0) # Add batch dimension
-
- # ----- 3) Reward and done -----
- reward = float(current_sample[REWARD].item()) # ensure float
-
- # Determine done flag - use next.done if available, otherwise infer from episode boundaries
- if has_done_key:
- done = bool(current_sample[DONE].item()) # ensure bool
- else:
- # If this is the last frame or if next frame is in a different episode, mark as done
- done = False
- if i == num_frames - 1:
- done = True
- elif i < num_frames - 1:
- next_sample = dataset[i + 1]
- if next_sample["episode_index"] != current_sample["episode_index"]:
- done = True
-
- # TODO: (azouitine) Handle truncation (using the same value as done for now)
- truncated = done
-
- # ----- 4) Next state -----
- # If not done and the next sample is in the same episode, we pull the next sample's state.
- # Otherwise (done=True or next sample crosses to a new episode), next_state = current_state.
- next_state = current_state # default
- if not done and (i < num_frames - 1):
- next_sample = dataset[i + 1]
- if next_sample["episode_index"] == current_sample["episode_index"]:
- # Build next_state from the same keys
- next_state_data: dict[str, torch.Tensor] = {}
- for key in state_keys:
- val = next_sample[key]
- next_state_data[key] = val.unsqueeze(0) # Add batch dimension
- next_state = next_state_data
-
- # ----- 5) Complementary info (if available) -----
- complementary_info = None
- if has_complementary_info:
- complementary_info = {}
- for key in complementary_info_keys:
- # Strip the "complementary_info." prefix to get the actual key
- clean_key = key[len("complementary_info.") :]
- val = current_sample[key]
- # Handle tensor and non-tensor values differently
- if isinstance(val, torch.Tensor):
- complementary_info[clean_key] = val.unsqueeze(0) # Add batch dimension
- else:
- # TODO: (azouitine) Check if it's necessary to convert to tensor
- # For non-tensor values, use directly
- complementary_info[clean_key] = val
-
- # ----- Construct the Transition -----
- transition = Transition(
- state=current_state,
- action=action,
- reward=reward,
- next_state=next_state,
- done=done,
- truncated=truncated,
- complementary_info=complementary_info,
- )
- transitions.append(transition)
-
- return transitions
-
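-# Usage sketch (hedged): this staticmethod is assumed to belong to the ReplayBuffer
-# class defined above (learner.py imports it via `from lerobot.rl.buffer import
-# ReplayBuffer`). The repo id below is illustrative.
-#
-#   dataset = LeRobotDataset(repo_id="user/pick_place_demos")
-#   transitions = ReplayBuffer._lerobotdataset_to_transitions(
-#       dataset, state_keys=["observation.state"]
-#   )
-#   # transitions[0] -> {"state": ..., "action": ..., "reward": ..., "next_state": ..., "done": ...}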
-
-# Utility function to guess shapes/dtypes from a tensor
-def guess_feature_info(t: torch.Tensor, name: str):
-    """
-    Return a dictionary with the 'dtype' and 'shape' for a given tensor.
-
-    A tensor with exactly 3 dimensions whose first dimension is 1 or 3 is
-    treated as a (C, H, W) image; everything else defaults to numeric float32.
-    The `name` argument is currently unused and is kept for API symmetry.
-    """
-
- shape = tuple(t.shape)
- # Basic guess: if we have exactly 3 dims and shape[0] in {1, 3}, guess 'image'
- if len(shape) == 3 and shape[0] in [1, 3]:
- return {
- "dtype": "image",
- "shape": shape,
- }
- else:
- # Otherwise treat as numeric
- return {
- "dtype": "float32",
- "shape": shape,
- }
-
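-# Example (illustrative): a (3, 128, 128) tensor is classified as an image,
-# while a 7-dim joint-state vector falls through to float32.
-#   guess_feature_info(torch.zeros(3, 128, 128), "observation.images.front")
-#   -> {"dtype": "image", "shape": (3, 128, 128)}
-#   guess_feature_info(torch.zeros(7), "observation.state")
-#   -> {"dtype": "float32", "shape": (7,)}
-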
-
-def concatenate_batch_transitions(
- left_batch_transitions: BatchTransition, right_batch_transition: BatchTransition
-) -> BatchTransition:
- """
- Concatenates two BatchTransition objects into one.
-
- This function merges the right BatchTransition into the left one by concatenating
- all corresponding tensors along dimension 0. The operation modifies the left_batch_transitions
- in place and also returns it.
-
- Args:
- left_batch_transitions (BatchTransition): The first batch to concatenate and the one
- that will be modified in place.
- right_batch_transition (BatchTransition): The second batch to append to the first one.
-
- Returns:
- BatchTransition: The concatenated batch (same object as left_batch_transitions).
-
- Warning:
- This function modifies the left_batch_transitions object in place.
- """
- # Concatenate state fields
- left_batch_transitions["state"] = {
- key: torch.cat(
- [left_batch_transitions["state"][key], right_batch_transition["state"][key]],
- dim=0,
- )
- for key in left_batch_transitions["state"]
- }
-
- # Concatenate basic fields
- left_batch_transitions[ACTION] = torch.cat(
- [left_batch_transitions[ACTION], right_batch_transition[ACTION]], dim=0
- )
- left_batch_transitions["reward"] = torch.cat(
- [left_batch_transitions["reward"], right_batch_transition["reward"]], dim=0
- )
-
- # Concatenate next_state fields
- left_batch_transitions["next_state"] = {
- key: torch.cat(
- [left_batch_transitions["next_state"][key], right_batch_transition["next_state"][key]],
- dim=0,
- )
- for key in left_batch_transitions["next_state"]
- }
-
- # Concatenate done and truncated fields
- left_batch_transitions["done"] = torch.cat(
- [left_batch_transitions["done"], right_batch_transition["done"]], dim=0
- )
- left_batch_transitions["truncated"] = torch.cat(
- [left_batch_transitions["truncated"], right_batch_transition["truncated"]],
- dim=0,
- )
-
- # Handle complementary_info
- left_info = left_batch_transitions.get("complementary_info")
- right_info = right_batch_transition.get("complementary_info")
-
- # Only process if right_info exists
- if right_info is not None:
- # Initialize left complementary_info if needed
- if left_info is None:
- left_batch_transitions["complementary_info"] = right_info
- else:
- # Concatenate each field
- for key in right_info:
- if key in left_info:
- left_info[key] = torch.cat([left_info[key], right_info[key]], dim=0)
- else:
- left_info[key] = right_info[key]
-
- return left_batch_transitions
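-
-
-# Usage sketch (hedged): merging two sampled batches in place; keys follow the
-# BatchTransition layout handled above.
-#   merged = concatenate_batch_transitions(batch_a, batch_b)
-#   assert merged is batch_a  # the left operand is modified in place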
diff --git a/lerobot/src/lerobot/rl/crop_dataset_roi.py b/lerobot/src/lerobot/rl/crop_dataset_roi.py
deleted file mode 100644
index 8281f716857cc2e0875da604f161490b0ff6e3d8..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/crop_dataset_roi.py
+++ /dev/null
@@ -1,326 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import argparse
-import json
-from copy import deepcopy
-from pathlib import Path
-
-import cv2
-import torch
-import torchvision.transforms.functional as F # type: ignore # noqa: N812
-from tqdm import tqdm # type: ignore
-
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.utils.constants import DONE, REWARD
-
-
-def select_rect_roi(img):
- """
- Allows the user to draw a rectangular ROI on the image.
-
- The user must click and drag to draw the rectangle.
- - While dragging, the rectangle is dynamically drawn.
- - On mouse button release, the rectangle is fixed.
- - Press 'c' to confirm the selection.
- - Press 'r' to reset the selection.
- - Press ESC to cancel.
-
- Returns:
- A tuple (top, left, height, width) representing the rectangular ROI,
- or None if no valid ROI is selected.
- """
- # Create a working copy of the image
- clone = img.copy()
- working_img = clone.copy()
-
- roi = None # Will store the final ROI as (top, left, height, width)
- drawing = False
- index_x, index_y = -1, -1 # Initial click coordinates
-
- def mouse_callback(event, x, y, flags, param):
- nonlocal index_x, index_y, drawing, roi, working_img
-
- if event == cv2.EVENT_LBUTTONDOWN:
- # Start drawing: record starting coordinates
- drawing = True
- index_x, index_y = x, y
-
- elif event == cv2.EVENT_MOUSEMOVE:
- if drawing:
- # Compute the top-left and bottom-right corners regardless of drag direction
- top = min(index_y, y)
- left = min(index_x, x)
- bottom = max(index_y, y)
- right = max(index_x, x)
- # Show a temporary image with the current rectangle drawn
- temp = working_img.copy()
- cv2.rectangle(temp, (left, top), (right, bottom), (0, 255, 0), 2)
- cv2.imshow("Select ROI", temp)
-
- elif event == cv2.EVENT_LBUTTONUP:
- # Finish drawing
- drawing = False
- top = min(index_y, y)
- left = min(index_x, x)
- bottom = max(index_y, y)
- right = max(index_x, x)
- height = bottom - top
- width = right - left
- roi = (top, left, height, width) # (top, left, height, width)
- # Draw the final rectangle on the working image and display it
- working_img = clone.copy()
- cv2.rectangle(working_img, (left, top), (right, bottom), (0, 255, 0), 2)
- cv2.imshow("Select ROI", working_img)
-
- # Create the window and set the callback
- cv2.namedWindow("Select ROI")
- cv2.setMouseCallback("Select ROI", mouse_callback)
- cv2.imshow("Select ROI", working_img)
-
- print("Instructions for ROI selection:")
- print(" - Click and drag to draw a rectangular ROI.")
- print(" - Press 'c' to confirm the selection.")
- print(" - Press 'r' to reset and draw again.")
- print(" - Press ESC to cancel the selection.")
-
- # Wait until the user confirms with 'c', resets with 'r', or cancels with ESC
- while True:
- key = cv2.waitKey(1) & 0xFF
- # Confirm ROI if one has been drawn
- if key == ord("c") and roi is not None:
- break
- # Reset: clear the ROI and restore the original image
- elif key == ord("r"):
- working_img = clone.copy()
- roi = None
- cv2.imshow("Select ROI", working_img)
- # Cancel selection for this image
- elif key == 27: # ESC key
- roi = None
- break
-
- cv2.destroyWindow("Select ROI")
- return roi
-
-
-def select_square_roi_for_images(images: dict) -> dict:
- """
-    For each image in the provided dictionary, open a window to allow the user
-    to select a rectangular ROI. Despite the function name, the ROI need not be
-    square. Returns a dictionary mapping each key to a tuple
-    (top, left, height, width) representing the ROI.
-
-    Args:
- images (dict): Dictionary where keys are identifiers and values are OpenCV images.
-
- Returns:
- dict: Mapping of image keys to the selected rectangular ROI.
- """
- selected_rois = {}
-
- for key, img in images.items():
- if img is None:
- print(f"Image for key '{key}' is None, skipping.")
- continue
-
- print(f"\nSelect rectangular ROI for image with key: '{key}'")
- roi = select_rect_roi(img)
-
- if roi is None:
- print(f"No valid ROI selected for '{key}'.")
- else:
- selected_rois[key] = roi
- print(f"ROI for '{key}': {roi}")
-
- return selected_rois
-
-
-def get_image_from_lerobot_dataset(dataset: LeRobotDataset):
- """
-    Extract the images from the first frame of the dataset so they can be used for ROI selection.
- """
- row = dataset[0]
- image_dict = {}
- for k in row:
- if "image" in k:
- image_dict[k] = deepcopy(row[k])
- return image_dict
-
-
-def convert_lerobot_dataset_to_cropped_lerobot_dataset(
- original_dataset: LeRobotDataset,
- crop_params_dict: dict[str, tuple[int, int, int, int]],
- new_repo_id: str,
- new_dataset_root: str,
- resize_size: tuple[int, int] = (128, 128),
- push_to_hub: bool = False,
- task: str = "",
-) -> LeRobotDataset:
- """
- Converts an existing LeRobotDataset by iterating over its episodes and frames,
- applying cropping and resizing to image observations, and saving a new dataset
- with the transformed data.
-
- Args:
- original_dataset (LeRobotDataset): The source dataset.
- crop_params_dict (Dict[str, Tuple[int, int, int, int]]):
- A dictionary mapping observation keys to crop parameters (top, left, height, width).
- new_repo_id (str): Repository id for the new dataset.
- new_dataset_root (str): The root directory where the new dataset will be written.
-        resize_size (Tuple[int, int], optional): The target size (height, width) after cropping.
-            Defaults to (128, 128).
-        push_to_hub (bool, optional): Whether to push the new dataset to the Hugging Face Hub.
-            Defaults to False.
-        task (str, optional): Natural-language task description stored with every frame.
-            Defaults to "".
-
- Returns:
- LeRobotDataset: A new LeRobotDataset where the specified image observations have been cropped
- and resized.
- """
- # 1. Create a new (empty) LeRobotDataset for writing.
- new_dataset = LeRobotDataset.create(
- repo_id=new_repo_id,
- fps=int(original_dataset.fps),
- root=new_dataset_root,
- robot_type=original_dataset.meta.robot_type,
- features=original_dataset.meta.info["features"],
- use_videos=len(original_dataset.meta.video_keys) > 0,
- )
-
- # Update the metadata for every image key that will be cropped:
- # (Here we simply set the shape to be the final resize_size.)
- for key in crop_params_dict:
- if key in new_dataset.meta.info["features"]:
- new_dataset.meta.info["features"][key]["shape"] = [3] + list(resize_size)
-
- # TODO: Directly modify the mp4 video + meta info features, instead of recreating a dataset
- prev_episode_index = 0
- for frame_idx in tqdm(range(len(original_dataset))):
- frame = original_dataset[frame_idx]
-
- # Create a copy of the frame to add to the new dataset
- new_frame = {}
- for key, value in frame.items():
- if key in ("task_index", "timestamp", "episode_index", "frame_index", "index", "task"):
- continue
-            if key in (DONE, REWARD):
-                # Ensure scalar reward/done values carry a batch dimension.
-                value = value.unsqueeze(0)
-
- if key in crop_params_dict:
- top, left, height, width = crop_params_dict[key]
- # Apply crop then resize.
- cropped = F.crop(value, top, left, height, width)
- value = F.resize(cropped, resize_size)
- value = value.clamp(0, 1)
- if key.startswith("complementary_info") and isinstance(value, torch.Tensor) and value.dim() == 0:
- value = value.unsqueeze(0)
- new_frame[key] = value
-
- new_frame["task"] = task
- new_dataset.add_frame(new_frame)
-
- if frame["episode_index"].item() != prev_episode_index:
- # Save the episode
- new_dataset.save_episode()
- prev_episode_index = frame["episode_index"].item()
-
- # Save the last episode
- new_dataset.save_episode()
-
- if push_to_hub:
- new_dataset.push_to_hub()
-
- return new_dataset
-
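-# Usage sketch (hedged; repo ids, paths, and ROI values are illustrative):
-#   rois = {"observation.images.front": (40, 60, 320, 320)}
-#   convert_lerobot_dataset_to_cropped_lerobot_dataset(
-#       original_dataset=LeRobotDataset("user/demos"),
-#       crop_params_dict=rois,
-#       new_repo_id="user/demos_cropped_resized",
-#       new_dataset_root="data/demos_cropped_resized",
-#   )
-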
-
-if __name__ == "__main__":
- parser = argparse.ArgumentParser(description="Crop rectangular ROIs from a LeRobot dataset.")
- parser.add_argument(
- "--repo-id",
- type=str,
- default="lerobot",
- help="The repository id of the LeRobot dataset to process.",
- )
- parser.add_argument(
- "--root",
- type=str,
- default=None,
- help="The root directory of the LeRobot dataset.",
- )
- parser.add_argument(
- "--crop-params-path",
- type=str,
- default=None,
- help="The path to the JSON file containing the ROIs.",
- )
- parser.add_argument(
- "--push-to-hub",
- action="store_true",
- help="Whether to push the new dataset to the hub.",
- )
- parser.add_argument(
- "--task",
- type=str,
- default="",
- help="The natural language task to describe the dataset.",
- )
- parser.add_argument(
- "--new-repo-id",
- type=str,
- default=None,
- help="The repository id for the new cropped and resized dataset. If not provided, it defaults to `repo_id` + '_cropped_resized'.",
- )
- args = parser.parse_args()
-
- dataset = LeRobotDataset(repo_id=args.repo_id, root=args.root)
-
- images = get_image_from_lerobot_dataset(dataset)
- images = {k: v.cpu().permute(1, 2, 0).numpy() for k, v in images.items()}
- images = {k: (v * 255).astype("uint8") for k, v in images.items()}
-
- if args.crop_params_path is None:
- rois = select_square_roi_for_images(images)
- else:
- with open(args.crop_params_path) as f:
- rois = json.load(f)
-
- # Print the selected rectangular ROIs
- print("\nSelected Rectangular Regions of Interest (top, left, height, width):")
- for key, roi in rois.items():
- print(f"{key}: {roi}")
-
- new_repo_id = args.new_repo_id if args.new_repo_id else args.repo_id + "_cropped_resized"
-
- if args.new_repo_id:
- new_dataset_name = args.new_repo_id.split("/")[-1]
- # Parent 1: HF user, Parent 2: HF LeRobot Home
- new_dataset_root = dataset.root.parent.parent / new_dataset_name
- else:
- new_dataset_root = Path(str(dataset.root) + "_cropped_resized")
-
- cropped_resized_dataset = convert_lerobot_dataset_to_cropped_lerobot_dataset(
- original_dataset=dataset,
- crop_params_dict=rois,
- new_repo_id=new_repo_id,
- new_dataset_root=new_dataset_root,
- resize_size=(128, 128),
- push_to_hub=args.push_to_hub,
- task=args.task,
- )
-
- meta_dir = new_dataset_root / "meta"
- meta_dir.mkdir(exist_ok=True)
-
- with open(meta_dir / "crop_params.json", "w") as f:
- json.dump(rois, f, indent=4)
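-
-# Example crop_params.json (illustrative): one entry per camera key, each ROI
-# given as [top, left, height, width] in pixels of the original image.
-# {
-#     "observation.images.front": [40, 60, 320, 320],
-#     "observation.images.wrist": [0, 0, 240, 240]
-# }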
diff --git a/lerobot/src/lerobot/rl/eval_policy.py b/lerobot/src/lerobot/rl/eval_policy.py
deleted file mode 100644
index d7baa41cc1f20ebaaa074475582ea7ed75587c79..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/eval_policy.py
+++ /dev/null
@@ -1,75 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-
-from lerobot.cameras import opencv # noqa: F401
-from lerobot.configs import parser
-from lerobot.configs.train import TrainRLServerPipelineConfig
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.policies.factory import make_policy
-from lerobot.robots import ( # noqa: F401
- RobotConfig,
- make_robot_from_config,
- so_follower,
-)
-from lerobot.teleoperators import (
- gamepad, # noqa: F401
- so_leader, # noqa: F401
-)
-
-from .gym_manipulator import make_robot_env
-
-logging.basicConfig(level=logging.INFO)
-
-
-def eval_policy(env, policy, n_episodes):
- sum_reward_episode = []
- for _ in range(n_episodes):
- obs, _ = env.reset()
- episode_reward = 0.0
- while True:
- action = policy.select_action(obs)
- obs, reward, terminated, truncated, _ = env.step(action)
- episode_reward += reward
- if terminated or truncated:
- break
- sum_reward_episode.append(episode_reward)
-
- logging.info(f"Success after 20 steps {sum_reward_episode}")
- logging.info(f"success rate {sum(sum_reward_episode) / len(sum_reward_episode)}")
-
-
-@parser.wrap()
-def main(cfg: TrainRLServerPipelineConfig):
- env_cfg = cfg.env
-    env, _ = make_robot_env(env_cfg)  # make_robot_env returns (env, teleop_device)
- dataset_cfg = cfg.dataset
- dataset = LeRobotDataset(repo_id=dataset_cfg.repo_id)
- dataset_meta = dataset.meta
-
- policy = make_policy(
- cfg=cfg.policy,
- # env_cfg=cfg.env,
- ds_meta=dataset_meta,
- )
- policy = policy.from_pretrained(env_cfg.pretrained_policy_name_or_path)
- policy.eval()
-
- eval_policy(env, policy=policy, n_episodes=10)
-
-
-if __name__ == "__main__":
- main()
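-
-# Example invocation (hedged; mirrors the learner's CLI and assumes a HIL-SERL
-# config with `env.pretrained_policy_name_or_path` set):
-#   python -m lerobot.rl.eval_policy --config_path path/to/train_config_hilserl.json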
diff --git a/lerobot/src/lerobot/rl/gym_manipulator.py b/lerobot/src/lerobot/rl/gym_manipulator.py
deleted file mode 100644
index 7eb1a646934679b3dcb55f0d464de3cade4a83a9..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/gym_manipulator.py
+++ /dev/null
@@ -1,771 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from dataclasses import dataclass
-from typing import Any
-
-import gymnasium as gym
-import numpy as np
-import torch
-
-from lerobot.cameras import opencv # noqa: F401
-from lerobot.configs import parser
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.envs.configs import HILSerlRobotEnvConfig
-from lerobot.model.kinematics import RobotKinematics
-from lerobot.processor import (
- AddBatchDimensionProcessorStep,
- AddTeleopActionAsComplimentaryDataStep,
- AddTeleopEventsAsInfoStep,
- DataProcessorPipeline,
- DeviceProcessorStep,
- EnvTransition,
- GripperPenaltyProcessorStep,
- ImageCropResizeProcessorStep,
- InterventionActionProcessorStep,
- MapDeltaActionToRobotActionStep,
- MapTensorToDeltaActionDictStep,
- Numpy2TorchActionProcessorStep,
- RewardClassifierProcessorStep,
- RobotActionToPolicyActionProcessorStep,
- RobotObservation,
- TimeLimitProcessorStep,
- Torch2NumpyActionProcessorStep,
- TransitionKey,
- VanillaObservationProcessorStep,
- create_transition,
-)
-from lerobot.processor.converters import identity_transition
-from lerobot.robots import ( # noqa: F401
- RobotConfig,
- make_robot_from_config,
- so_follower,
-)
-from lerobot.robots.robot import Robot
-from lerobot.robots.so_follower.robot_kinematic_processor import (
- EEBoundsAndSafety,
- EEReferenceAndDelta,
- ForwardKinematicsJointsToEEObservation,
- GripperVelocityToJoint,
- InverseKinematicsRLStep,
-)
-from lerobot.teleoperators import (
- gamepad, # noqa: F401
- keyboard, # noqa: F401
- make_teleoperator_from_config,
- so_leader, # noqa: F401
-)
-from lerobot.teleoperators.teleoperator import Teleoperator
-from lerobot.teleoperators.utils import TeleopEvents
-from lerobot.utils.constants import ACTION, DONE, OBS_IMAGES, OBS_STATE, REWARD
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.utils import log_say
-
-from .joint_observations_processor import JointVelocityProcessorStep, MotorCurrentProcessorStep
-
-logging.basicConfig(level=logging.INFO)
-
-
-@dataclass
-class DatasetConfig:
- """Configuration for dataset creation and management."""
-
- repo_id: str
- task: str
- root: str | None = None
- num_episodes_to_record: int = 5
- replay_episode: int | None = None
- push_to_hub: bool = False
-
-
-@dataclass
-class GymManipulatorConfig:
- """Main configuration for gym manipulator environment."""
-
- env: HILSerlRobotEnvConfig
- dataset: DatasetConfig
- mode: str | None = None # Either "record", "replay", None
- device: str = "cpu"
-
-
-def reset_follower_position(robot_arm: Robot, target_position: np.ndarray) -> None:
- """Reset robot arm to target position using smooth trajectory."""
- current_position_dict = robot_arm.bus.sync_read("Present_Position")
- current_position = np.array(
- [current_position_dict[name] for name in current_position_dict], dtype=np.float32
- )
-    trajectory = torch.from_numpy(
-        np.linspace(current_position, target_position, 50)
-    )  # NOTE: 50 interpolation steps is an arbitrary choice
- for pose in trajectory:
- action_dict = dict(zip(current_position_dict, pose, strict=False))
- robot_arm.bus.sync_write("Goal_Position", action_dict)
- precise_sleep(0.015)
-
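-# Example (illustrative joint values for a 6-DOF arm): glide back to a stored
-# home pose over ~0.75 s (50 interpolation steps x 15 ms).
-#   reset_follower_position(robot, np.array([0.0, -90.0, 90.0, 0.0, 0.0, 0.0], dtype=np.float32))
-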
-
-class RobotEnv(gym.Env):
- """Gym environment for robotic control with human intervention support."""
-
- def __init__(
- self,
- robot,
- use_gripper: bool = False,
- display_cameras: bool = False,
- reset_pose: list[float] | None = None,
- reset_time_s: float = 5.0,
- ) -> None:
- """Initialize robot environment with configuration options.
-
- Args:
- robot: Robot interface for hardware communication.
- use_gripper: Whether to include gripper in action space.
- display_cameras: Whether to show camera feeds during execution.
- reset_pose: Joint positions for environment reset.
- reset_time_s: Time to wait during reset.
- """
- super().__init__()
-
- self.robot = robot
- self.display_cameras = display_cameras
-
- # Connect to the robot if not already connected.
- if not self.robot.is_connected:
- self.robot.connect()
-
- # Episode tracking.
- self.current_step = 0
- self.episode_data = None
-
- self._joint_names = [f"{key}.pos" for key in self.robot.bus.motors]
- self._image_keys = self.robot.cameras.keys()
-
- self.reset_pose = reset_pose
- self.reset_time_s = reset_time_s
-
- self.use_gripper = use_gripper
-
- self._joint_names = list(self.robot.bus.motors.keys())
- self._raw_joint_positions = None
-
- self._setup_spaces()
-
- def _get_observation(self) -> RobotObservation:
- """Get current robot observation including joint positions and camera images."""
- obs_dict = self.robot.get_observation()
-        raw_joint_positions = {f"{name}.pos": obs_dict[f"{name}.pos"] for name in self._joint_names}
-        joint_positions = np.array([raw_joint_positions[f"{name}.pos"] for name in self._joint_names])
-
-        images = {key: obs_dict[key] for key in self._image_keys}
-
-        return {"agent_pos": joint_positions, "pixels": images, **raw_joint_positions}
-
- def _setup_spaces(self) -> None:
- """Configure observation and action spaces based on robot capabilities."""
- current_observation = self._get_observation()
-
- observation_spaces = {}
-
- # Define observation spaces for images and other states.
- if current_observation is not None and "pixels" in current_observation:
- prefix = OBS_IMAGES
- observation_spaces = {
- f"{prefix}.{key}": gym.spaces.Box(
- low=0, high=255, shape=current_observation["pixels"][key].shape, dtype=np.uint8
- )
- for key in current_observation["pixels"]
- }
-
- if current_observation is not None:
- agent_pos = current_observation["agent_pos"]
- observation_spaces[OBS_STATE] = gym.spaces.Box(
- low=0,
- high=10,
- shape=agent_pos.shape,
- dtype=np.float32,
- )
-
- self.observation_space = gym.spaces.Dict(observation_spaces)
-
- # Define the action space for joint positions along with setting an intervention flag.
- action_dim = 3
- bounds = {}
- bounds["min"] = -np.ones(action_dim)
- bounds["max"] = np.ones(action_dim)
-
- if self.use_gripper:
- action_dim += 1
- bounds["min"] = np.concatenate([bounds["min"], [0]])
- bounds["max"] = np.concatenate([bounds["max"], [2]])
-
- self.action_space = gym.spaces.Box(
- low=bounds["min"],
- high=bounds["max"],
- shape=(action_dim,),
- dtype=np.float32,
- )
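-        # Resulting action layout (summary): [dx, dy, dz] each in [-1, 1], plus an
-        # optional gripper command in [0, 2] when use_gripper is True (1.0 serves
-        # as the "stay" command; see the neutral action in control_loop).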
-
- def reset(
- self, *, seed: int | None = None, options: dict[str, Any] | None = None
- ) -> tuple[RobotObservation, dict[str, Any]]:
- """Reset environment to initial state.
-
- Args:
- seed: Random seed for reproducibility.
- options: Additional reset options.
-
- Returns:
- Tuple of (observation, info) dictionaries.
- """
- # Reset the robot
- # self.robot.reset()
- start_time = time.perf_counter()
- if self.reset_pose is not None:
- log_say("Reset the environment.", play_sounds=True)
- reset_follower_position(self.robot, np.array(self.reset_pose))
- log_say("Reset the environment done.", play_sounds=True)
-
- precise_sleep(max(self.reset_time_s - (time.perf_counter() - start_time), 0.0))
-
- super().reset(seed=seed, options=options)
-
- # Reset episode tracking variables.
- self.current_step = 0
- self.episode_data = None
- obs = self._get_observation()
- self._raw_joint_positions = {f"{key}.pos": obs[f"{key}.pos"] for key in self._joint_names}
- return obs, {TeleopEvents.IS_INTERVENTION: False}
-
- def step(self, action) -> tuple[RobotObservation, float, bool, bool, dict[str, Any]]:
- """Execute one environment step with given action."""
- joint_targets_dict = {f"{key}.pos": action[i] for i, key in enumerate(self.robot.bus.motors.keys())}
-
- self.robot.send_action(joint_targets_dict)
-
- obs = self._get_observation()
-
- self._raw_joint_positions = {f"{key}.pos": obs[f"{key}.pos"] for key in self._joint_names}
-
- if self.display_cameras:
- self.render()
-
- self.current_step += 1
-
- reward = 0.0
- terminated = False
- truncated = False
-
- return (
- obs,
- reward,
- terminated,
- truncated,
- {TeleopEvents.IS_INTERVENTION: False},
- )
-
- def render(self) -> None:
- """Display robot camera feeds."""
- import cv2
-
- current_observation = self._get_observation()
- if current_observation is not None:
- image_keys = [key for key in current_observation if "image" in key]
-
- for key in image_keys:
- cv2.imshow(key, cv2.cvtColor(current_observation[key].numpy(), cv2.COLOR_RGB2BGR))
- cv2.waitKey(1)
-
- def close(self) -> None:
- """Close environment and disconnect robot."""
- if self.robot.is_connected:
- self.robot.disconnect()
-
- def get_raw_joint_positions(self) -> dict[str, float]:
- """Get raw joint positions."""
- return self._raw_joint_positions
-
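-# Minimal usage sketch (hedged): note that step() expects one target per motor in
-# joint space; during training these targets come from the action pipeline's IK.
-#   env = RobotEnv(robot, use_gripper=True)
-#   obs, info = env.reset()
-#   obs, reward, terminated, truncated, info = env.step(obs["agent_pos"])  # hold position
-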
-
-def make_robot_env(cfg: HILSerlRobotEnvConfig) -> tuple[gym.Env, Any]:
- """Create robot environment from configuration.
-
- Args:
- cfg: Environment configuration.
-
- Returns:
- Tuple of (gym environment, teleoperator device).
- """
- # Check if this is a GymHIL simulation environment
- if cfg.name == "gym_hil":
- assert cfg.robot is None and cfg.teleop is None, "GymHIL environment does not support robot or teleop"
- import gym_hil # noqa: F401
-
- # Extract gripper settings with defaults
- use_gripper = cfg.processor.gripper.use_gripper if cfg.processor.gripper is not None else True
- gripper_penalty = cfg.processor.gripper.gripper_penalty if cfg.processor.gripper is not None else 0.0
-
- env = gym.make(
- f"gym_hil/{cfg.task}",
- image_obs=True,
- render_mode="human",
- use_gripper=use_gripper,
- gripper_penalty=gripper_penalty,
- )
-
- return env, None
-
- # Real robot environment
- assert cfg.robot is not None, "Robot config must be provided for real robot environment"
- assert cfg.teleop is not None, "Teleop config must be provided for real robot environment"
-
- robot = make_robot_from_config(cfg.robot)
- teleop_device = make_teleoperator_from_config(cfg.teleop)
- teleop_device.connect()
-
- # Create base environment with safe defaults
- use_gripper = cfg.processor.gripper.use_gripper if cfg.processor.gripper is not None else True
- display_cameras = (
- cfg.processor.observation.display_cameras if cfg.processor.observation is not None else False
- )
- reset_pose = cfg.processor.reset.fixed_reset_joint_positions if cfg.processor.reset is not None else None
-
- env = RobotEnv(
- robot=robot,
- use_gripper=use_gripper,
- display_cameras=display_cameras,
- reset_pose=reset_pose,
- )
-
- return env, teleop_device
-
-
-def make_processors(
- env: gym.Env, teleop_device: Teleoperator | None, cfg: HILSerlRobotEnvConfig, device: str = "cpu"
-) -> tuple[
- DataProcessorPipeline[EnvTransition, EnvTransition], DataProcessorPipeline[EnvTransition, EnvTransition]
-]:
- """Create environment and action processors.
-
- Args:
- env: Robot environment instance.
- teleop_device: Teleoperator device for intervention.
- cfg: Processor configuration.
- device: Target device for computations.
-
- Returns:
- Tuple of (environment processor, action processor).
- """
- terminate_on_success = (
- cfg.processor.reset.terminate_on_success if cfg.processor.reset is not None else True
- )
-
- if cfg.name == "gym_hil":
- action_pipeline_steps = [
- InterventionActionProcessorStep(terminate_on_success=terminate_on_success),
- Torch2NumpyActionProcessorStep(),
- ]
-
- env_pipeline_steps = [
- Numpy2TorchActionProcessorStep(),
- VanillaObservationProcessorStep(),
- AddBatchDimensionProcessorStep(),
- DeviceProcessorStep(device=device),
- ]
-
- return DataProcessorPipeline(
- steps=env_pipeline_steps, to_transition=identity_transition, to_output=identity_transition
- ), DataProcessorPipeline(
- steps=action_pipeline_steps, to_transition=identity_transition, to_output=identity_transition
- )
-
- # Full processor pipeline for real robot environment
- # Get robot and motor information for kinematics
- motor_names = list(env.robot.bus.motors.keys())
-
- # Set up kinematics solver if inverse kinematics is configured
- kinematics_solver = None
- if cfg.processor.inverse_kinematics is not None:
- kinematics_solver = RobotKinematics(
- urdf_path=cfg.processor.inverse_kinematics.urdf_path,
- target_frame_name=cfg.processor.inverse_kinematics.target_frame_name,
- joint_names=motor_names,
- )
-
- env_pipeline_steps = [VanillaObservationProcessorStep()]
-
- if cfg.processor.observation is not None:
- if cfg.processor.observation.add_joint_velocity_to_observation:
- env_pipeline_steps.append(JointVelocityProcessorStep(dt=1.0 / cfg.fps))
- if cfg.processor.observation.add_current_to_observation:
- env_pipeline_steps.append(MotorCurrentProcessorStep(robot=env.robot))
-
- if kinematics_solver is not None:
- env_pipeline_steps.append(
- ForwardKinematicsJointsToEEObservation(
- kinematics=kinematics_solver,
- motor_names=motor_names,
- )
- )
-
- if cfg.processor.image_preprocessing is not None:
- env_pipeline_steps.append(
- ImageCropResizeProcessorStep(
- crop_params_dict=cfg.processor.image_preprocessing.crop_params_dict,
- resize_size=cfg.processor.image_preprocessing.resize_size,
- )
- )
-
- # Add time limit processor if reset config exists
- if cfg.processor.reset is not None:
- env_pipeline_steps.append(
- TimeLimitProcessorStep(max_episode_steps=int(cfg.processor.reset.control_time_s * cfg.fps))
- )
-
- # Add gripper penalty processor if gripper config exists and enabled
- if cfg.processor.gripper is not None and cfg.processor.gripper.use_gripper:
- env_pipeline_steps.append(
- GripperPenaltyProcessorStep(
- penalty=cfg.processor.gripper.gripper_penalty,
- max_gripper_pos=cfg.processor.max_gripper_pos,
- )
- )
-
- if (
- cfg.processor.reward_classifier is not None
- and cfg.processor.reward_classifier.pretrained_path is not None
- ):
- env_pipeline_steps.append(
- RewardClassifierProcessorStep(
- pretrained_path=cfg.processor.reward_classifier.pretrained_path,
- device=device,
- success_threshold=cfg.processor.reward_classifier.success_threshold,
- success_reward=cfg.processor.reward_classifier.success_reward,
- terminate_on_success=terminate_on_success,
- )
- )
-
- env_pipeline_steps.append(AddBatchDimensionProcessorStep())
- env_pipeline_steps.append(DeviceProcessorStep(device=device))
-
- action_pipeline_steps = [
- AddTeleopActionAsComplimentaryDataStep(teleop_device=teleop_device),
- AddTeleopEventsAsInfoStep(teleop_device=teleop_device),
- InterventionActionProcessorStep(
- use_gripper=cfg.processor.gripper.use_gripper if cfg.processor.gripper is not None else False,
- terminate_on_success=terminate_on_success,
- ),
- ]
-
- # Replace InverseKinematicsProcessor with new kinematic processors
- if cfg.processor.inverse_kinematics is not None and kinematics_solver is not None:
- # Add EE bounds and safety processor
- inverse_kinematics_steps = [
- MapTensorToDeltaActionDictStep(
- use_gripper=cfg.processor.gripper.use_gripper if cfg.processor.gripper is not None else False
- ),
- MapDeltaActionToRobotActionStep(),
- EEReferenceAndDelta(
- kinematics=kinematics_solver,
- end_effector_step_sizes=cfg.processor.inverse_kinematics.end_effector_step_sizes,
- motor_names=motor_names,
- use_latched_reference=False,
- use_ik_solution=True,
- ),
- EEBoundsAndSafety(
- end_effector_bounds=cfg.processor.inverse_kinematics.end_effector_bounds,
- ),
- GripperVelocityToJoint(
- clip_max=cfg.processor.max_gripper_pos,
- speed_factor=1.0,
- discrete_gripper=True,
- ),
- InverseKinematicsRLStep(
- kinematics=kinematics_solver, motor_names=motor_names, initial_guess_current_joints=False
- ),
- ]
- action_pipeline_steps.extend(inverse_kinematics_steps)
- action_pipeline_steps.append(RobotActionToPolicyActionProcessorStep(motor_names=motor_names))
-
- return DataProcessorPipeline(
- steps=env_pipeline_steps, to_transition=identity_transition, to_output=identity_transition
- ), DataProcessorPipeline(
- steps=action_pipeline_steps, to_transition=identity_transition, to_output=identity_transition
- )
-
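-# Pipeline order for the real-robot case (summary of the construction above;
-# bracketed steps are optional depending on the processor config):
-#   env:    VanillaObservation -> [joint velocity] -> [motor current] -> [FK EE obs]
-#           -> [crop/resize] -> [time limit] -> [gripper penalty] -> [reward classifier]
-#           -> AddBatchDimension -> Device
-#   action: teleop action/events -> intervention
-#           -> [delta-EE mapping -> EE bounds -> gripper-to-joint -> IK -> policy action]
-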
-
-def step_env_and_process_transition(
- env: gym.Env,
- transition: EnvTransition,
- action: torch.Tensor,
- env_processor: DataProcessorPipeline[EnvTransition, EnvTransition],
- action_processor: DataProcessorPipeline[EnvTransition, EnvTransition],
-) -> EnvTransition:
- """
- Execute one step with processor pipeline.
-
- Args:
- env: The robot environment
- transition: Current transition state
- action: Action to execute
- env_processor: Environment processor
- action_processor: Action processor
-
- Returns:
- Processed transition with updated state.
- """
-
- # Create action transition
- transition[TransitionKey.ACTION] = action
- transition[TransitionKey.OBSERVATION] = (
- env.get_raw_joint_positions() if hasattr(env, "get_raw_joint_positions") else {}
- )
- processed_action_transition = action_processor(transition)
- processed_action = processed_action_transition[TransitionKey.ACTION]
-
- obs, reward, terminated, truncated, info = env.step(processed_action)
-
- reward = reward + processed_action_transition[TransitionKey.REWARD]
- terminated = terminated or processed_action_transition[TransitionKey.DONE]
- truncated = truncated or processed_action_transition[TransitionKey.TRUNCATED]
- complementary_data = processed_action_transition[TransitionKey.COMPLEMENTARY_DATA].copy()
- new_info = processed_action_transition[TransitionKey.INFO].copy()
- new_info.update(info)
-
- new_transition = create_transition(
- observation=obs,
- action=processed_action,
- reward=reward,
- done=terminated,
- truncated=truncated,
- info=new_info,
- complementary_data=complementary_data,
- )
- new_transition = env_processor(new_transition)
-
- return new_transition
-
-
-def control_loop(
- env: gym.Env,
- env_processor: DataProcessorPipeline[EnvTransition, EnvTransition],
- action_processor: DataProcessorPipeline[EnvTransition, EnvTransition],
- teleop_device: Teleoperator,
- cfg: GymManipulatorConfig,
-) -> None:
- """Main control loop for robot environment interaction.
- if cfg.mode == "record": then a dataset will be created and recorded
-
- Args:
- env: The robot environment
- env_processor: Environment processor
- action_processor: Action processor
- teleop_device: Teleoperator device
- cfg: gym_manipulator configuration
- """
- dt = 1.0 / cfg.env.fps
-
- print(f"Starting control loop at {cfg.env.fps} FPS")
- print("Controls:")
- print("- Use gamepad/teleop device for intervention")
- print("- When not intervening, robot will stay still")
- print("- Press Ctrl+C to exit")
-
- # Reset environment and processors
- obs, info = env.reset()
- complementary_data = (
- {"raw_joint_positions": info.pop("raw_joint_positions")} if "raw_joint_positions" in info else {}
- )
- env_processor.reset()
- action_processor.reset()
-
- # Process initial observation
- transition = create_transition(observation=obs, info=info, complementary_data=complementary_data)
- transition = env_processor(data=transition)
-
- # Determine if gripper is used
- use_gripper = cfg.env.processor.gripper.use_gripper if cfg.env.processor.gripper is not None else True
-
- dataset = None
- if cfg.mode == "record":
- action_features = teleop_device.action_features
- features = {
- ACTION: action_features,
- REWARD: {"dtype": "float32", "shape": (1,), "names": None},
- DONE: {"dtype": "bool", "shape": (1,), "names": None},
- }
- if use_gripper:
- features["complementary_info.discrete_penalty"] = {
- "dtype": "float32",
- "shape": (1,),
- "names": ["discrete_penalty"],
- }
-
- for key, value in transition[TransitionKey.OBSERVATION].items():
- if key == OBS_STATE:
- features[key] = {
- "dtype": "float32",
- "shape": value.squeeze(0).shape,
- "names": None,
- }
- if "image" in key:
- features[key] = {
- "dtype": "video",
- "shape": value.squeeze(0).shape,
- "names": ["channels", "height", "width"],
- }
-
- # Create dataset
- dataset = LeRobotDataset.create(
- cfg.dataset.repo_id,
- cfg.env.fps,
- root=cfg.dataset.root,
- use_videos=True,
- image_writer_threads=4,
- image_writer_processes=0,
- features=features,
- )
-
- episode_idx = 0
- episode_step = 0
- episode_start_time = time.perf_counter()
-
- while episode_idx < cfg.dataset.num_episodes_to_record:
- step_start_time = time.perf_counter()
-
- # Create a neutral action (no movement)
- neutral_action = torch.tensor([0.0, 0.0, 0.0], dtype=torch.float32)
- if use_gripper:
- neutral_action = torch.cat([neutral_action, torch.tensor([1.0])]) # Gripper stay
-
- # Use the new step function
- transition = step_env_and_process_transition(
- env=env,
- transition=transition,
- action=neutral_action,
- env_processor=env_processor,
- action_processor=action_processor,
- )
- terminated = transition.get(TransitionKey.DONE, False)
- truncated = transition.get(TransitionKey.TRUNCATED, False)
-
- if cfg.mode == "record":
- observations = {
- k: v.squeeze(0).cpu()
- for k, v in transition[TransitionKey.OBSERVATION].items()
- if isinstance(v, torch.Tensor)
- }
- # Use teleop_action if available, otherwise use the action from the transition
- action_to_record = transition[TransitionKey.COMPLEMENTARY_DATA].get(
- "teleop_action", transition[TransitionKey.ACTION]
- )
- frame = {
- **observations,
- ACTION: action_to_record.cpu(),
- REWARD: np.array([transition[TransitionKey.REWARD]], dtype=np.float32),
- DONE: np.array([terminated or truncated], dtype=bool),
- }
- if use_gripper:
- discrete_penalty = transition[TransitionKey.COMPLEMENTARY_DATA].get("discrete_penalty", 0.0)
- frame["complementary_info.discrete_penalty"] = np.array([discrete_penalty], dtype=np.float32)
-
- if dataset is not None:
- frame["task"] = cfg.dataset.task
- dataset.add_frame(frame)
-
- episode_step += 1
-
- # Handle episode termination
- if terminated or truncated:
- episode_time = time.perf_counter() - episode_start_time
- logging.info(
- f"Episode ended after {episode_step} steps in {episode_time:.1f}s with reward {transition[TransitionKey.REWARD]}"
- )
- episode_step = 0
- episode_idx += 1
-
- if dataset is not None:
- if transition[TransitionKey.INFO].get(TeleopEvents.RERECORD_EPISODE, False):
- logging.info(f"Re-recording episode {episode_idx}")
- dataset.clear_episode_buffer()
- episode_idx -= 1
- else:
- logging.info(f"Saving episode {episode_idx}")
- dataset.save_episode()
-
-            # Reset for new episode
-            obs, info = env.reset()
-            env_processor.reset()
-            action_processor.reset()
-            episode_start_time = time.perf_counter()
-
- transition = create_transition(observation=obs, info=info)
- transition = env_processor(transition)
-
- # Maintain fps timing
- precise_sleep(max(dt - (time.perf_counter() - step_start_time), 0.0))
-
- if dataset is not None and cfg.dataset.push_to_hub:
- logging.info("Pushing dataset to hub")
- dataset.push_to_hub()
-
-
-def replay_trajectory(
- env: gym.Env, action_processor: DataProcessorPipeline, cfg: GymManipulatorConfig
-) -> None:
- """Replay recorded trajectory on robot environment."""
- assert cfg.dataset.replay_episode is not None, "Replay episode must be provided for replay"
-
- dataset = LeRobotDataset(
- cfg.dataset.repo_id,
- root=cfg.dataset.root,
- episodes=[cfg.dataset.replay_episode],
- download_videos=False,
- )
- episode_frames = dataset.hf_dataset.filter(lambda x: x["episode_index"] == cfg.dataset.replay_episode)
- actions = episode_frames.select_columns(ACTION)
-
- _, info = env.reset()
-
- for action_data in actions:
- start_time = time.perf_counter()
- transition = create_transition(
- observation=env.get_raw_joint_positions() if hasattr(env, "get_raw_joint_positions") else {},
- action=action_data[ACTION],
- )
- transition = action_processor(transition)
- env.step(transition[TransitionKey.ACTION])
- precise_sleep(max(1 / cfg.env.fps - (time.perf_counter() - start_time), 0.0))
-
-
-@parser.wrap()
-def main(cfg: GymManipulatorConfig) -> None:
- """Main entry point for gym manipulator script."""
- env, teleop_device = make_robot_env(cfg.env)
- env_processor, action_processor = make_processors(env, teleop_device, cfg.env, cfg.device)
-
- print("Environment observation space:", env.observation_space)
- print("Environment action space:", env.action_space)
- print("Environment processor:", env_processor)
- print("Action processor:", action_processor)
-
- if cfg.mode == "replay":
- replay_trajectory(env, action_processor, cfg)
-        return
-
- control_loop(env, env_processor, action_processor, teleop_device, cfg)
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/rl/joint_observations_processor.py b/lerobot/src/lerobot/rl/joint_observations_processor.py
deleted file mode 100644
index 7287c8a33545eb0f5c5c4bf5d76a38fe4b176699..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/joint_observations_processor.py
+++ /dev/null
@@ -1,211 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import Any
-
-import torch
-
-from lerobot.configs.types import PipelineFeatureType, PolicyFeature
-from lerobot.processor.pipeline import (
- ObservationProcessorStep,
- ProcessorStepRegistry,
-)
-from lerobot.robots import Robot
-from lerobot.utils.constants import OBS_STATE
-
-
-@dataclass
-@ProcessorStepRegistry.register("joint_velocity_processor")
-class JointVelocityProcessorStep(ObservationProcessorStep):
- """
- Calculates and appends joint velocity information to the observation state.
-
- This step computes the velocity of each joint by calculating the finite
- difference between the current and the last observed joint positions. The
- resulting velocity vector is then concatenated to the original state vector.
-
- Attributes:
- dt: The time step (delta time) in seconds between observations, used for
- calculating velocity.
- last_joint_positions: Stores the joint positions from the previous step
- to enable velocity calculation.
- """
-
- dt: float = 0.1
-
- last_joint_positions: torch.Tensor | None = None
-
- def observation(self, observation: dict) -> dict:
- """
- Computes joint velocities and adds them to the observation state.
-
- Args:
- observation: The input observation dictionary, expected to contain
- an `observation.state` key with joint positions.
-
- Returns:
- A new observation dictionary with the `observation.state` tensor
- extended to include joint velocities.
-
- Raises:
- ValueError: If `observation.state` is not found in the observation.
- """
- # Get current joint positions (assuming they're in observation.state)
- current_positions = observation.get(OBS_STATE)
- if current_positions is None:
- raise ValueError(f"{OBS_STATE} is not in observation")
-
- # Initialize last joint positions if not already set
- if self.last_joint_positions is None:
- self.last_joint_positions = current_positions.clone()
- joint_velocities = torch.zeros_like(current_positions)
- else:
- # Compute velocities
- joint_velocities = (current_positions - self.last_joint_positions) / self.dt
-
- self.last_joint_positions = current_positions.clone()
-
- # Extend observation with velocities
- extended_state = torch.cat([current_positions, joint_velocities], dim=-1)
-
- # Create new observation dict
- new_observation = dict(observation)
- new_observation[OBS_STATE] = extended_state
-
- return new_observation
-
- def get_config(self) -> dict[str, Any]:
- """
- Returns the configuration of the step for serialization.
-
- Returns:
- A dictionary containing the time step `dt`.
- """
- return {
- "dt": self.dt,
- }
-
- def reset(self) -> None:
- """Resets the internal state, clearing the last known joint positions."""
- self.last_joint_positions = None
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Updates the `observation.state` feature to reflect the added velocities.
-
- This method doubles the size of the first dimension of the `observation.state`
- shape to account for the concatenation of position and velocity vectors.
-
- Args:
- features: The policy features dictionary.
-
- Returns:
- The updated policy features dictionary.
- """
- if OBS_STATE in features[PipelineFeatureType.OBSERVATION]:
- original_feature = features[PipelineFeatureType.OBSERVATION][OBS_STATE]
- # Double the shape to account for positions + velocities
- new_shape = (original_feature.shape[0] * 2,) + original_feature.shape[1:]
-
- features[PipelineFeatureType.OBSERVATION][OBS_STATE] = PolicyFeature(
- type=original_feature.type, shape=new_shape
- )
- return features
-
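-# Usage sketch (hedged): append finite-difference joint velocities at 30 FPS.
-#   step = JointVelocityProcessorStep(dt=1.0 / 30)
-#   out = step.observation({OBS_STATE: torch.zeros(1, 6)})  # OBS_STATE -> shape (1, 12)
-#   step.reset()  # clear the cached positions between episodes
-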
-
-@dataclass
-@ProcessorStepRegistry.register("current_processor")
-class MotorCurrentProcessorStep(ObservationProcessorStep):
- """
- Reads motor currents from a robot and appends them to the observation state.
-
- This step queries the robot's hardware interface to get the present current
- for each motor and concatenates this information to the existing state vector.
-
- Attributes:
- robot: An instance of a `lerobot` Robot class that provides access to
- the hardware bus.
- """
-
- robot: Robot | None = None
-
- def observation(self, observation: dict) -> dict:
- """
- Fetches motor currents and adds them to the observation state.
-
- Args:
- observation: The input observation dictionary.
-
- Returns:
- A new observation dictionary with the `observation.state` tensor
- extended to include motor currents.
-
- Raises:
- ValueError: If the `robot` attribute has not been set.
- """
- # Get current values from robot state
- if self.robot is None:
- raise ValueError("Robot is not set")
-
- present_current_dict = self.robot.bus.sync_read("Present_Current") # type: ignore[attr-defined]
- motor_currents = torch.tensor(
- [present_current_dict[name] for name in self.robot.bus.motors], # type: ignore[attr-defined]
- dtype=torch.float32,
- ).unsqueeze(0)
-
- current_state = observation.get(OBS_STATE)
- if current_state is None:
- return observation
-
- extended_state = torch.cat([current_state, motor_currents], dim=-1)
-
- # Create new observation dict
- new_observation = dict(observation)
- new_observation[OBS_STATE] = extended_state
-
- return new_observation
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- """
- Updates the `observation.state` feature to reflect the added motor currents.
-
- This method increases the size of the first dimension of the `observation.state`
- shape by the number of motors in the robot.
-
- Args:
- features: The policy features dictionary.
-
- Returns:
- The updated policy features dictionary.
- """
- if OBS_STATE in features[PipelineFeatureType.OBSERVATION] and self.robot is not None:
- original_feature = features[PipelineFeatureType.OBSERVATION][OBS_STATE]
- # Add motor current dimensions to the original state shape
- num_motors = 0
- if hasattr(self.robot, "bus") and hasattr(self.robot.bus, "motors"): # type: ignore[attr-defined]
- num_motors = len(self.robot.bus.motors) # type: ignore[attr-defined]
-
- if num_motors > 0:
- new_shape = (original_feature.shape[0] + num_motors,) + original_feature.shape[1:]
- features[PipelineFeatureType.OBSERVATION][OBS_STATE] = PolicyFeature(
- type=original_feature.type, shape=new_shape
- )
- return features
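-
-
-# Usage sketch (hedged): requires a connected robot whose bus exposes a
-# "Present_Current" register, as sync_read above assumes.
-#   step = MotorCurrentProcessorStep(robot=robot)
-#   out = step.observation({OBS_STATE: torch.zeros(1, 6)})  # shape (1, 6 + num_motors)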
diff --git a/lerobot/src/lerobot/rl/learner.py b/lerobot/src/lerobot/rl/learner.py
deleted file mode 100644
index c56d38a8cf9b4298d16bf1ac8bd83253945a7f54..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/learner.py
+++ /dev/null
@@ -1,1203 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team.
-# All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""
-Learner server runner for distributed HILSerl robot policy training.
-
-This script implements the learner component of the distributed HILSerl architecture.
-It initializes the policy network, maintains replay buffers, and updates
-the policy based on transitions received from the actor server.
-
-Examples of usage:
-
-- Start a learner server for training:
-```bash
-python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
-```
-
-**NOTE**: Start the learner server before launching the actor server. The learner opens a gRPC server
-to communicate with actors.
-
-**NOTE**: Training progress can be monitored through Weights & Biases if wandb.enable is set to true
-in your configuration.
-
-**WORKFLOW**:
-1. Create training configuration with proper policy, dataset, and environment settings
-2. Start this learner server with the configuration
-3. Start an actor server with the same configuration
-4. Monitor training progress through wandb dashboard
-
-For more details on the complete HILSerl training workflow, see:
-https://github.com/michel-aractingi/lerobot-hilserl-guide
-"""
-
-import logging
-import os
-import shutil
-import time
-from concurrent.futures import ThreadPoolExecutor
-from pathlib import Path
-from pprint import pformat
-from typing import Any
-
-import grpc
-import torch
-from termcolor import colored
-from torch import nn
-from torch.multiprocessing import Queue
-from torch.optim.optimizer import Optimizer
-
-from lerobot.cameras import opencv # noqa: F401
-from lerobot.configs import parser
-from lerobot.configs.train import TrainRLServerPipelineConfig
-from lerobot.datasets.factory import make_dataset
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.policies.factory import make_policy
-from lerobot.policies.sac.modeling_sac import SACPolicy
-from lerobot.rl.buffer import ReplayBuffer, concatenate_batch_transitions
-from lerobot.rl.process import ProcessSignalHandler
-from lerobot.rl.wandb_utils import WandBLogger
-from lerobot.robots import so_follower # noqa: F401
-from lerobot.teleoperators import gamepad, so_leader # noqa: F401
-from lerobot.teleoperators.utils import TeleopEvents
-from lerobot.transport import services_pb2_grpc
-from lerobot.transport.utils import (
- MAX_MESSAGE_SIZE,
- bytes_to_python_object,
- bytes_to_transitions,
- state_to_bytes,
-)
-from lerobot.utils.constants import (
- ACTION,
- CHECKPOINTS_DIR,
- LAST_CHECKPOINT_LINK,
- PRETRAINED_MODEL_DIR,
- TRAINING_STATE_DIR,
-)
-from lerobot.utils.random_utils import set_seed
-from lerobot.utils.train_utils import (
- get_step_checkpoint_dir,
- load_training_state as utils_load_training_state,
- save_checkpoint,
- update_last_checkpoint,
-)
-from lerobot.utils.transition import move_state_dict_to_device, move_transition_to_device
-from lerobot.utils.utils import (
- format_big_number,
- get_safe_torch_device,
- init_logging,
-)
-
-from .learner_service import MAX_WORKERS, SHUTDOWN_TIMEOUT, LearnerService
-
-
-@parser.wrap()
-def train_cli(cfg: TrainRLServerPipelineConfig):
- if not use_threads(cfg):
- import torch.multiprocessing as mp
-
- mp.set_start_method("spawn")
-
- # Use the job_name from the config
- train(
- cfg,
- job_name=cfg.job_name,
- )
-
- logging.info("[LEARNER] train_cli finished")
-
-
-def train(cfg: TrainRLServerPipelineConfig, job_name: str | None = None):
- """
- Main training function that initializes and runs the training process.
-
- Args:
- cfg (TrainRLServerPipelineConfig): The training configuration
- job_name (str | None, optional): Job name for logging. Defaults to None.
- """
-
- cfg.validate()
-
- if job_name is None:
- job_name = cfg.job_name
-
- if job_name is None:
- raise ValueError("Job name must be specified either in config or as a parameter")
-
- display_pid = False
- if not use_threads(cfg):
- display_pid = True
-
- # Create logs directory to ensure it exists
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"learner_{job_name}.log")
-
- # Initialize logging with explicit log file
- init_logging(log_file=log_file, display_pid=display_pid)
- logging.info(f"Learner logging initialized, writing to {log_file}")
- logging.info(pformat(cfg.to_dict()))
-
- # Setup WandB logging if enabled
-    if cfg.wandb.enable and cfg.wandb.project:
-        wandb_logger = WandBLogger(cfg)
- else:
- wandb_logger = None
- logging.info(colored("Logs will be saved locally.", "yellow", attrs=["bold"]))
-
- # Handle resume logic
- cfg = handle_resume_logic(cfg)
-
- set_seed(seed=cfg.seed)
-
- torch.backends.cudnn.benchmark = True
- torch.backends.cuda.matmul.allow_tf32 = True
-
- is_threaded = use_threads(cfg)
- shutdown_event = ProcessSignalHandler(is_threaded, display_pid=display_pid).shutdown_event
-
- start_learner_threads(
- cfg=cfg,
- wandb_logger=wandb_logger,
- shutdown_event=shutdown_event,
- )
-
-
-def start_learner_threads(
- cfg: TrainRLServerPipelineConfig,
- wandb_logger: WandBLogger | None,
-    shutdown_event: Any,  # Event
-) -> None:
- """
- Start the learner threads for training.
-
- Args:
- cfg (TrainRLServerPipelineConfig): Training configuration
- wandb_logger (WandBLogger | None): Logger for metrics
- shutdown_event: Event to signal shutdown
- """
- # Create multiprocessing queues
- transition_queue = Queue()
- interaction_message_queue = Queue()
- parameters_queue = Queue()
-
- concurrency_entity = None
-
- if use_threads(cfg):
- from threading import Thread
-
- concurrency_entity = Thread
- else:
- from torch.multiprocessing import Process
-
- concurrency_entity = Process
-
- communication_process = concurrency_entity(
- target=start_learner,
- args=(
- parameters_queue,
- transition_queue,
- interaction_message_queue,
- shutdown_event,
- cfg,
- ),
- daemon=True,
- )
- communication_process.start()
-
- add_actor_information_and_train(
- cfg=cfg,
- wandb_logger=wandb_logger,
- shutdown_event=shutdown_event,
- transition_queue=transition_queue,
- interaction_message_queue=interaction_message_queue,
- parameters_queue=parameters_queue,
- )
- logging.info("[LEARNER] Training process stopped")
-
- logging.info("[LEARNER] Closing queues")
- transition_queue.close()
- interaction_message_queue.close()
- parameters_queue.close()
-
- communication_process.join()
- logging.info("[LEARNER] Communication process joined")
-
- logging.info("[LEARNER] join queues")
- transition_queue.cancel_join_thread()
- interaction_message_queue.cancel_join_thread()
- parameters_queue.cancel_join_thread()
-
- logging.info("[LEARNER] queues closed")
-
-
-# Core algorithm functions
-
-
-def add_actor_information_and_train(
- cfg: TrainRLServerPipelineConfig,
- wandb_logger: WandBLogger | None,
- shutdown_event: any, # Event,
- transition_queue: Queue,
- interaction_message_queue: Queue,
- parameters_queue: Queue,
-):
- """
- Handles data transfer from the actor to the learner, manages training updates,
- and logs training progress in an online reinforcement learning setup.
-
- This function continuously:
- - Transfers transitions from the actor to the replay buffer.
- - Logs received interaction messages.
- - Ensures training begins only when the replay buffer has a sufficient number of transitions.
- - Samples batches from the replay buffer and performs multiple critic updates.
- - Periodically updates the actor, critic, and temperature optimizers.
- - Logs training statistics, including loss values and optimization frequency.
-
- NOTE: This function doesn't have a single responsibility and should be split into
- multiple functions in the future. We kept it monolithic because of Python's GIL:
- splitting the work across threads divided performance by roughly 200x, so a single
- thread does all the work.
-
- Args:
- cfg (TrainRLServerPipelineConfig): Configuration object containing hyperparameters.
- wandb_logger (WandBLogger | None): Logger for tracking training progress.
- shutdown_event (Event): Event to signal shutdown.
- transition_queue (Queue): Queue for receiving transitions from the actor.
- interaction_message_queue (Queue): Queue for receiving interaction messages from the actor.
- parameters_queue (Queue): Queue for sending policy parameters to the actor.
- """
- # Extract all configuration variables at the beginning; this improves the speed of
- # the training loop by about 7%.
- device = get_safe_torch_device(try_device=cfg.policy.device, log=True)
- storage_device = get_safe_torch_device(try_device=cfg.policy.storage_device)
- clip_grad_norm_value = cfg.policy.grad_clip_norm
- online_step_before_learning = cfg.policy.online_step_before_learning
- utd_ratio = cfg.policy.utd_ratio
- fps = cfg.env.fps
- log_freq = cfg.log_freq
- save_freq = cfg.save_freq
- policy_update_freq = cfg.policy.policy_update_freq
- policy_parameters_push_frequency = cfg.policy.actor_learner_config.policy_parameters_push_frequency
- saving_checkpoint = cfg.save_checkpoint
- online_steps = cfg.policy.online_steps
- async_prefetch = cfg.policy.async_prefetch
-
- # Initialize logging for multiprocessing
- if not use_threads(cfg):
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"learner_train_process_{os.getpid()}.log")
- init_logging(log_file=log_file, display_pid=True)
- logging.info("Initialized logging for actor information and training process")
-
- logging.info("Initializing policy")
-
- policy: SACPolicy = make_policy(
- cfg=cfg.policy,
- env_cfg=cfg.env,
- )
-
- assert isinstance(policy, nn.Module)
-
- policy.train()
-
- push_actor_policy_to_queue(parameters_queue=parameters_queue, policy=policy)
-
- last_time_policy_pushed = time.time()
-
- optimizers, lr_scheduler = make_optimizers_and_scheduler(cfg=cfg, policy=policy)
-
- # If we are resuming, we need to load the training state
- resume_optimization_step, resume_interaction_step = load_training_state(cfg=cfg, optimizers=optimizers)
-
- log_training_info(cfg=cfg, policy=policy)
-
- replay_buffer = initialize_replay_buffer(cfg, device, storage_device)
- batch_size = cfg.batch_size
- offline_replay_buffer = None
-
- if cfg.dataset is not None:
- offline_replay_buffer = initialize_offline_replay_buffer(
- cfg=cfg,
- device=device,
- storage_device=storage_device,
- )
- batch_size: int = batch_size // 2  # We will sample from both replay buffers
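- # e.g. with cfg.batch_size == 256 and an offline dataset configured, each
- # optimization step samples 128 online + 128 offline transitions and
- # concatenates them in the training loop below.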
-
- logging.info("Starting learner thread")
- interaction_message = None
- optimization_step = resume_optimization_step if resume_optimization_step is not None else 0
- interaction_step_shift = resume_interaction_step if resume_interaction_step is not None else 0
-
- dataset_repo_id = None
- if cfg.dataset is not None:
- dataset_repo_id = cfg.dataset.repo_id
-
- # Initialize iterators
- online_iterator = None
- offline_iterator = None
-
- # NOTE: THIS IS THE MAIN LOOP OF THE LEARNER
- while True:
- # Exit the training loop if shutdown is requested
- if shutdown_event is not None and shutdown_event.is_set():
- logging.info("[LEARNER] Shutdown signal received. Exiting...")
- break
-
- # Process all available transitions sent by the actor server and add them to the replay buffer
- process_transitions(
- transition_queue=transition_queue,
- replay_buffer=replay_buffer,
- offline_replay_buffer=offline_replay_buffer,
- device=device,
- dataset_repo_id=dataset_repo_id,
- shutdown_event=shutdown_event,
- )
-
- # Process all available interaction messages sent by the actor server
- interaction_message = process_interaction_messages(
- interaction_message_queue=interaction_message_queue,
- interaction_step_shift=interaction_step_shift,
- wandb_logger=wandb_logger,
- shutdown_event=shutdown_event,
- )
-
- # Wait until the replay buffer has enough samples to start training
- if len(replay_buffer) < online_step_before_learning:
- continue
-
- if online_iterator is None:
- online_iterator = replay_buffer.get_iterator(
- batch_size=batch_size, async_prefetch=async_prefetch, queue_size=2
- )
-
- if offline_replay_buffer is not None and offline_iterator is None:
- offline_iterator = offline_replay_buffer.get_iterator(
- batch_size=batch_size, async_prefetch=async_prefetch, queue_size=2
- )
-
- time_for_one_optimization_step = time.time()
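- # UTD (update-to-data) sketch: with utd_ratio == 2 (hypothetical value), the
- # loop below performs one critic update and the block after it performs the
- # final one, i.e. utd_ratio critic updates per optimization step; only the
- # last update's losses are logged.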
- for _ in range(utd_ratio - 1):
- # Sample from the iterators
- batch = next(online_iterator)
-
- if dataset_repo_id is not None:
- batch_offline = next(offline_iterator)
- batch = concatenate_batch_transitions(
- left_batch_transitions=batch, right_batch_transition=batch_offline
- )
-
- actions = batch[ACTION]
- rewards = batch["reward"]
- observations = batch["state"]
- next_observations = batch["next_state"]
- done = batch["done"]
- check_nan_in_transition(observations=observations, actions=actions, next_state=next_observations)
-
- observation_features, next_observation_features = get_observation_features(
- policy=policy, observations=observations, next_observations=next_observations
- )
-
- # Create a batch dictionary with all required elements for the forward method
- forward_batch = {
- ACTION: actions,
- "reward": rewards,
- "state": observations,
- "next_state": next_observations,
- "done": done,
- "observation_feature": observation_features,
- "next_observation_feature": next_observation_features,
- "complementary_info": batch["complementary_info"],
- }
-
- # Use the forward method for critic loss
- critic_output = policy.forward(forward_batch, model="critic")
-
- # Main critic optimization
- loss_critic = critic_output["loss_critic"]
- optimizers["critic"].zero_grad()
- loss_critic.backward()
- critic_grad_norm = torch.nn.utils.clip_grad_norm_(
- parameters=policy.critic_ensemble.parameters(), max_norm=clip_grad_norm_value
- )
- optimizers["critic"].step()
-
- # Discrete critic optimization (if available)
- if policy.config.num_discrete_actions is not None:
- discrete_critic_output = policy.forward(forward_batch, model="discrete_critic")
- loss_discrete_critic = discrete_critic_output["loss_discrete_critic"]
- optimizers["discrete_critic"].zero_grad()
- loss_discrete_critic.backward()
- discrete_critic_grad_norm = torch.nn.utils.clip_grad_norm_(
- parameters=policy.discrete_critic.parameters(), max_norm=clip_grad_norm_value
- )
- optimizers["discrete_critic"].step()
-
- # Update target networks (main and discrete)
- policy.update_target_networks()
-
- # Sample for the last update in the UTD ratio
- batch = next(online_iterator)
-
- if dataset_repo_id is not None:
- batch_offline = next(offline_iterator)
- batch = concatenate_batch_transitions(
- left_batch_transitions=batch, right_batch_transition=batch_offline
- )
-
- actions = batch[ACTION]
- rewards = batch["reward"]
- observations = batch["state"]
- next_observations = batch["next_state"]
- done = batch["done"]
-
- check_nan_in_transition(observations=observations, actions=actions, next_state=next_observations)
-
- observation_features, next_observation_features = get_observation_features(
- policy=policy, observations=observations, next_observations=next_observations
- )
-
- # Create a batch dictionary with all required elements for the forward method
- forward_batch = {
- ACTION: actions,
- "reward": rewards,
- "state": observations,
- "next_state": next_observations,
- "done": done,
- "observation_feature": observation_features,
- "next_observation_feature": next_observation_features,
- }
-
- critic_output = policy.forward(forward_batch, model="critic")
-
- loss_critic = critic_output["loss_critic"]
- optimizers["critic"].zero_grad()
- loss_critic.backward()
- critic_grad_norm = torch.nn.utils.clip_grad_norm_(
- parameters=policy.critic_ensemble.parameters(), max_norm=clip_grad_norm_value
- ).item()
- optimizers["critic"].step()
-
- # Initialize training info dictionary
- training_infos = {
- "loss_critic": loss_critic.item(),
- "critic_grad_norm": critic_grad_norm,
- }
-
- # Discrete critic optimization (if available)
- if policy.config.num_discrete_actions is not None:
- discrete_critic_output = policy.forward(forward_batch, model="discrete_critic")
- loss_discrete_critic = discrete_critic_output["loss_discrete_critic"]
- optimizers["discrete_critic"].zero_grad()
- loss_discrete_critic.backward()
- discrete_critic_grad_norm = torch.nn.utils.clip_grad_norm_(
- parameters=policy.discrete_critic.parameters(), max_norm=clip_grad_norm_value
- ).item()
- optimizers["discrete_critic"].step()
-
- # Add discrete critic info to training info
- training_infos["loss_discrete_critic"] = loss_discrete_critic.item()
- training_infos["discrete_critic_grad_norm"] = discrete_critic_grad_norm
-
- # Actor and temperature optimization (at specified frequency)
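- # e.g. with policy_update_freq == 3 (hypothetical value), the actor and
- # temperature are updated 3 times on every 3rd optimization step, i.e. once
- # per optimization step on average.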
- if optimization_step % policy_update_freq == 0:
- for _ in range(policy_update_freq):
- # Actor optimization
- actor_output = policy.forward(forward_batch, model="actor")
- loss_actor = actor_output["loss_actor"]
- optimizers["actor"].zero_grad()
- loss_actor.backward()
- actor_grad_norm = torch.nn.utils.clip_grad_norm_(
- parameters=policy.actor.parameters(), max_norm=clip_grad_norm_value
- ).item()
- optimizers["actor"].step()
-
- # Add actor info to training info
- training_infos["loss_actor"] = loss_actor.item()
- training_infos["actor_grad_norm"] = actor_grad_norm
-
- # Temperature optimization
- temperature_output = policy.forward(forward_batch, model="temperature")
- loss_temperature = temperature_output["loss_temperature"]
- optimizers["temperature"].zero_grad()
- loss_temperature.backward()
- temp_grad_norm = torch.nn.utils.clip_grad_norm_(
- parameters=[policy.log_alpha], max_norm=clip_grad_norm_value
- ).item()
- optimizers["temperature"].step()
-
- # Add temperature info to training info
- training_infos["loss_temperature"] = loss_temperature.item()
- training_infos["temperature_grad_norm"] = temp_grad_norm
- training_infos["temperature"] = policy.temperature
-
- # Update temperature
- policy.update_temperature()
-
- # Push policy to actors if needed
- if time.time() - last_time_policy_pushed > policy_parameters_push_frequency:
- push_actor_policy_to_queue(parameters_queue=parameters_queue, policy=policy)
- last_time_policy_pushed = time.time()
-
- # Update target networks (main and discrete)
- policy.update_target_networks()
-
- # Log training metrics at specified intervals
- if optimization_step % log_freq == 0:
- training_infos["replay_buffer_size"] = len(replay_buffer)
- if offline_replay_buffer is not None:
- training_infos["offline_replay_buffer_size"] = len(offline_replay_buffer)
- training_infos["Optimization step"] = optimization_step
-
- # Log training metrics
- if wandb_logger:
- wandb_logger.log_dict(d=training_infos, mode="train", custom_step_key="Optimization step")
-
- # Calculate and log optimization frequency
- time_for_one_optimization_step = time.time() - time_for_one_optimization_step
- frequency_for_one_optimization_step = 1 / (time_for_one_optimization_step + 1e-9)
-
- logging.info(f"[LEARNER] Optimization frequency loop [Hz]: {frequency_for_one_optimization_step}")
-
- # Log optimization frequency
- if wandb_logger:
- wandb_logger.log_dict(
- {
- "Optimization frequency loop [Hz]": frequency_for_one_optimization_step,
- "Optimization step": optimization_step,
- },
- mode="train",
- custom_step_key="Optimization step",
- )
-
- optimization_step += 1
- if optimization_step % log_freq == 0:
- logging.info(f"[LEARNER] Number of optimization step: {optimization_step}")
-
- # Save checkpoint at specified intervals
- if saving_checkpoint and (optimization_step % save_freq == 0 or optimization_step == online_steps):
- save_training_checkpoint(
- cfg=cfg,
- optimization_step=optimization_step,
- online_steps=online_steps,
- interaction_message=interaction_message,
- policy=policy,
- optimizers=optimizers,
- replay_buffer=replay_buffer,
- offline_replay_buffer=offline_replay_buffer,
- dataset_repo_id=dataset_repo_id,
- fps=fps,
- )
-
-
-def start_learner(
- parameters_queue: Queue,
- transition_queue: Queue,
- interaction_message_queue: Queue,
- shutdown_event: any, # Event,
- cfg: TrainRLServerPipelineConfig,
-):
- """
- Start the learner server for training.
- It will receive transitions and interaction messages from the actor server,
- and send policy parameters to the actor server.
-
- Args:
- parameters_queue: Queue for sending policy parameters to the actor
- transition_queue: Queue for receiving transitions from the actor
- interaction_message_queue: Queue for receiving interaction messages from the actor
- shutdown_event: Event to signal shutdown
- cfg: Training configuration
- """
- if not use_threads(cfg):
- # Create a process-specific log file
- log_dir = os.path.join(cfg.output_dir, "logs")
- os.makedirs(log_dir, exist_ok=True)
- log_file = os.path.join(log_dir, f"learner_process_{os.getpid()}.log")
-
- # Initialize logging with explicit log file
- init_logging(log_file=log_file, display_pid=True)
- logging.info("Learner server process logging initialized")
-
- # Set up signal handlers for the shutdown signal, but reuse the shutdown
- # event from the main process (registered again here for the multiprocessing case).
- # TODO: Check if this is useful
- _ = ProcessSignalHandler(False, display_pid=True)
-
- service = LearnerService(
- shutdown_event=shutdown_event,
- parameters_queue=parameters_queue,
- seconds_between_pushes=cfg.policy.actor_learner_config.policy_parameters_push_frequency,
- transition_queue=transition_queue,
- interaction_message_queue=interaction_message_queue,
- queue_get_timeout=cfg.policy.actor_learner_config.queue_get_timeout,
- )
-
- server = grpc.server(
- ThreadPoolExecutor(max_workers=MAX_WORKERS),
- options=[
- ("grpc.max_receive_message_length", MAX_MESSAGE_SIZE),
- ("grpc.max_send_message_length", MAX_MESSAGE_SIZE),
- ],
- )
-
- services_pb2_grpc.add_LearnerServiceServicer_to_server(
- service,
- server,
- )
-
- host = cfg.policy.actor_learner_config.learner_host
- port = cfg.policy.actor_learner_config.learner_port
-
- server.add_insecure_port(f"{host}:{port}")
- server.start()
- logging.info("[LEARNER] gRPC server started")
-
- shutdown_event.wait()
- logging.info("[LEARNER] Stopping gRPC server...")
- server.stop(SHUTDOWN_TIMEOUT)
- logging.info("[LEARNER] gRPC server stopped")
-
-
-def save_training_checkpoint(
- cfg: TrainRLServerPipelineConfig,
- optimization_step: int,
- online_steps: int,
- interaction_message: dict | None,
- policy: nn.Module,
- optimizers: dict[str, Optimizer],
- replay_buffer: ReplayBuffer,
- offline_replay_buffer: ReplayBuffer | None = None,
- dataset_repo_id: str | None = None,
- fps: int = 30,
-) -> None:
- """
- Save training checkpoint and associated data.
-
- This function performs the following steps:
- 1. Creates a checkpoint directory with the current optimization step
- 2. Saves the policy model, configuration, and optimizer states
- 3. Saves the current interaction step for resuming training
- 4. Updates the "last" checkpoint symlink to point to this checkpoint
- 5. Saves the replay buffer as a dataset for later use
- 6. If an offline replay buffer exists, saves it as a separate dataset
-
- Args:
- cfg: Training configuration
- optimization_step: Current optimization step
- online_steps: Total number of online steps
- interaction_message: Dictionary containing interaction information
- policy: Policy model to save
- optimizers: Dictionary of optimizers
- replay_buffer: Replay buffer to save as dataset
- offline_replay_buffer: Optional offline replay buffer to save
- dataset_repo_id: Repository ID for dataset
- fps: Frames per second for dataset
- """
- logging.info(f"Checkpoint policy after step {optimization_step}")
- _num_digits = max(6, len(str(online_steps)))
- interaction_step = interaction_message["Interaction step"] if interaction_message is not None else 0
-
- # Create checkpoint directory
- checkpoint_dir = get_step_checkpoint_dir(cfg.output_dir, online_steps, optimization_step)
-
- # Save checkpoint
- save_checkpoint(
- checkpoint_dir=checkpoint_dir,
- step=optimization_step,
- cfg=cfg,
- policy=policy,
- optimizer=optimizers,
- scheduler=None,
- )
-
- # Save interaction step manually
- training_state_dir = os.path.join(checkpoint_dir, TRAINING_STATE_DIR)
- os.makedirs(training_state_dir, exist_ok=True)
- training_state = {"step": optimization_step, "interaction_step": interaction_step}
- torch.save(training_state, os.path.join(training_state_dir, "training_state.pt"))
-
- # Update the "last" symlink
- update_last_checkpoint(checkpoint_dir)
-
- # TODO: temporarily save the replay buffer here; remove later when running on the robot.
- # We want to control this with keyboard inputs.
- dataset_dir = os.path.join(cfg.output_dir, "dataset")
- if os.path.exists(dataset_dir) and os.path.isdir(dataset_dir):
- shutil.rmtree(dataset_dir)
-
- # Save dataset
- # NOTE: Handle the case where the dataset repo id is not specified in the config,
- # e.g. RL training without demonstration data.
- repo_id_buffer_save = cfg.env.task if dataset_repo_id is None else dataset_repo_id
- replay_buffer.to_lerobot_dataset(repo_id=repo_id_buffer_save, fps=fps, root=dataset_dir)
-
- if offline_replay_buffer is not None:
- dataset_offline_dir = os.path.join(cfg.output_dir, "dataset_offline")
- if os.path.exists(dataset_offline_dir) and os.path.isdir(dataset_offline_dir):
- shutil.rmtree(dataset_offline_dir)
-
- offline_replay_buffer.to_lerobot_dataset(
- cfg.dataset.repo_id,
- fps=fps,
- root=dataset_offline_dir,
- )
-
- logging.info("Resume training")
-
-
-def make_optimizers_and_scheduler(cfg: TrainRLServerPipelineConfig, policy: nn.Module):
- """
- Creates and returns optimizers for the actor, critic, and temperature components of a reinforcement learning policy.
-
- This function sets up Adam optimizers for:
- - The **actor network**, ensuring that only relevant parameters are optimized.
- - The **critic ensemble**, which evaluates the value function.
- - The **temperature parameter**, which controls the entropy in soft actor-critic (SAC)-like methods.
-
- It also initializes a learning rate scheduler, though currently, it is set to `None`.
-
- NOTE:
- - If the encoder is shared, its parameters are excluded from the actor's optimization process.
- - The policy's log temperature (`log_alpha`) is wrapped in a list to ensure proper optimization as a standalone tensor.
-
- Args:
- cfg: Configuration object containing hyperparameters.
- policy (nn.Module): The policy model containing the actor, critic, and temperature components.
-
- Returns:
- Tuple[Dict[str, torch.optim.Optimizer], Optional[torch.optim.lr_scheduler._LRScheduler]]:
- A tuple containing:
- - `optimizers`: A dictionary mapping component names ("actor", "critic", "temperature") to their respective Adam optimizers.
- - `lr_scheduler`: Currently set to `None` but can be extended to support learning rate scheduling.
-
- """
- optimizer_actor = torch.optim.Adam(
- params=[
- p
- for n, p in policy.actor.named_parameters()
- if not policy.config.shared_encoder or not n.startswith("encoder")
- ],
- lr=cfg.policy.actor_lr,
- )
- optimizer_critic = torch.optim.Adam(params=policy.critic_ensemble.parameters(), lr=cfg.policy.critic_lr)
-
- if cfg.policy.num_discrete_actions is not None:
- optimizer_discrete_critic = torch.optim.Adam(
- params=policy.discrete_critic.parameters(), lr=cfg.policy.critic_lr
- )
- optimizer_temperature = torch.optim.Adam(params=[policy.log_alpha], lr=cfg.policy.critic_lr)
- lr_scheduler = None
- optimizers = {
- "actor": optimizer_actor,
- "critic": optimizer_critic,
- "temperature": optimizer_temperature,
- }
- if cfg.policy.num_discrete_actions is not None:
- optimizers["discrete_critic"] = optimizer_discrete_critic
- return optimizers, lr_scheduler
-
-
-# Training setup functions
-
-
-def handle_resume_logic(cfg: TrainRLServerPipelineConfig) -> TrainRLServerPipelineConfig:
- """
- Handle the resume logic for training.
-
- If resume is True:
- - Verifies that a checkpoint exists
- - Loads the checkpoint configuration
- - Logs resumption details
- - Returns the checkpoint configuration
-
- If resume is False:
- - Checks if an output directory exists (to prevent accidental overwriting)
- - Returns the original configuration
-
- Args:
- cfg (TrainRLServerPipelineConfig): The training configuration
-
- Returns:
- TrainRLServerPipelineConfig: The updated configuration
-
- Raises:
- RuntimeError: If resume is True but no checkpoint found, or if resume is False but directory exists
- """
- out_dir = cfg.output_dir
-
- # Case 1: Not resuming, but need to check if directory exists to prevent overwrites
- if not cfg.resume:
- checkpoint_dir = os.path.join(out_dir, CHECKPOINTS_DIR, LAST_CHECKPOINT_LINK)
- if os.path.exists(checkpoint_dir):
- raise RuntimeError(
- f"Output directory {checkpoint_dir} already exists. Use `resume=true` to resume training."
- )
- return cfg
-
- # Case 2: Resuming training
- checkpoint_dir = os.path.join(out_dir, CHECKPOINTS_DIR, LAST_CHECKPOINT_LINK)
- if not os.path.exists(checkpoint_dir):
- raise RuntimeError(f"No model checkpoint found in {checkpoint_dir} for resume=True")
-
- # Log that we found a valid checkpoint and are resuming
- logging.info(
- colored(
- "Valid checkpoint found: resume=True detected, resuming previous run",
- color="yellow",
- attrs=["bold"],
- )
- )
-
- # Load config using Draccus
- checkpoint_cfg_path = os.path.join(checkpoint_dir, PRETRAINED_MODEL_DIR, "train_config.json")
- checkpoint_cfg = TrainRLServerPipelineConfig.from_pretrained(checkpoint_cfg_path)
-
- # Ensure resume flag is set in returned config
- checkpoint_cfg.resume = True
- return checkpoint_cfg
-
-
-def load_training_state(
- cfg: TrainRLServerPipelineConfig,
- optimizers: Optimizer | dict[str, Optimizer],
-):
- """
- Loads the training state (optimizers, step count, etc.) from a checkpoint.
-
- Args:
- cfg (TrainRLServerPipelineConfig): Training configuration
- optimizers (Optimizer | dict): Optimizers to load state into
-
- Returns:
- tuple: (optimization_step, interaction_step) or (None, None) if not resuming
- """
- if not cfg.resume:
- return None, None
-
- # Construct path to the last checkpoint directory
- checkpoint_dir = os.path.join(cfg.output_dir, CHECKPOINTS_DIR, LAST_CHECKPOINT_LINK)
-
- logging.info(f"Loading training state from {checkpoint_dir}")
-
- try:
- # Use the utility function from train_utils which loads the optimizer state
- step, optimizers, _ = utils_load_training_state(Path(checkpoint_dir), optimizers, None)
-
- # Load interaction step separately from training_state.pt
- training_state_path = os.path.join(checkpoint_dir, TRAINING_STATE_DIR, "training_state.pt")
- interaction_step = 0
- if os.path.exists(training_state_path):
- training_state = torch.load(training_state_path, weights_only=False) # nosec B614: Safe usage of torch.load
- interaction_step = training_state.get("interaction_step", 0)
-
- logging.info(f"Resuming from step {step}, interaction step {interaction_step}")
- return step, interaction_step
-
- except Exception as e:
- logging.error(f"Failed to load training state: {e}")
- return None, None
-
-
-def log_training_info(cfg: TrainRLServerPipelineConfig, policy: nn.Module) -> None:
- """
- Log information about the training process.
-
- Args:
- cfg (TrainRLServerPipelineConfig): Training configuration
- policy (nn.Module): Policy model
- """
- num_learnable_params = sum(p.numel() for p in policy.parameters() if p.requires_grad)
- num_total_params = sum(p.numel() for p in policy.parameters())
-
- logging.info(colored("Output dir:", "yellow", attrs=["bold"]) + f" {cfg.output_dir}")
- logging.info(f"{cfg.env.task=}")
- logging.info(f"{cfg.policy.online_steps=}")
- logging.info(f"{num_learnable_params=} ({format_big_number(num_learnable_params)})")
- logging.info(f"{num_total_params=} ({format_big_number(num_total_params)})")
-
-
-def initialize_replay_buffer(
- cfg: TrainRLServerPipelineConfig, device: str, storage_device: str
-) -> ReplayBuffer:
- """
- Initialize a replay buffer, either empty or from a dataset if resuming.
-
- Args:
- cfg (TrainRLServerPipelineConfig): Training configuration
- device (str): Device to store tensors on
- storage_device (str): Device for storage optimization
-
- Returns:
- ReplayBuffer: Initialized replay buffer
- """
- if not cfg.resume:
- return ReplayBuffer(
- capacity=cfg.policy.online_buffer_capacity,
- device=device,
- state_keys=cfg.policy.input_features.keys(),
- storage_device=storage_device,
- optimize_memory=True,
- )
-
- logging.info("Resume training load the online dataset")
- dataset_path = os.path.join(cfg.output_dir, "dataset")
-
- # NOTE: In RL it is possible to train without a dataset.
- repo_id = None
- if cfg.dataset is not None:
- repo_id = cfg.dataset.repo_id
- dataset = LeRobotDataset(
- repo_id=repo_id,
- root=dataset_path,
- )
- return ReplayBuffer.from_lerobot_dataset(
- lerobot_dataset=dataset,
- capacity=cfg.policy.online_buffer_capacity,
- device=device,
- state_keys=cfg.policy.input_features.keys(),
- optimize_memory=True,
- )
-
-
-def initialize_offline_replay_buffer(
- cfg: TrainRLServerPipelineConfig,
- device: str,
- storage_device: str,
-) -> ReplayBuffer:
- """
- Initialize an offline replay buffer from a dataset.
-
- Args:
- cfg (TrainRLServerPipelineConfig): Training configuration
- device (str): Device to store tensors on
- storage_device (str): Device for storage optimization
-
- Returns:
- ReplayBuffer: Initialized offline replay buffer
- """
- if not cfg.resume:
- logging.info("make_dataset offline buffer")
- offline_dataset = make_dataset(cfg)
- else:
- logging.info("load offline dataset")
- dataset_offline_path = os.path.join(cfg.output_dir, "dataset_offline")
- offline_dataset = LeRobotDataset(
- repo_id=cfg.dataset.repo_id,
- root=dataset_offline_path,
- )
-
- logging.info("Convert to a offline replay buffer")
- offline_replay_buffer = ReplayBuffer.from_lerobot_dataset(
- offline_dataset,
- device=device,
- state_keys=cfg.policy.input_features.keys(),
- storage_device=storage_device,
- optimize_memory=True,
- capacity=cfg.policy.offline_buffer_capacity,
- )
- return offline_replay_buffer
-
-
-# Utilities/Helpers functions
-
-
-def get_observation_features(
- policy: SACPolicy, observations: torch.Tensor, next_observations: torch.Tensor
-) -> tuple[torch.Tensor | None, torch.Tensor | None]:
- """
- Get observation features from the policy encoder, acting as a cache for them.
- When the vision encoder is frozen its outputs never change, so caching the
- observation features saves compute.
-
- Args:
- policy: The policy model
- observations: The current observations
- next_observations: The next observations
-
- Returns:
- tuple: observation_features, next_observation_features
- """
-
- if policy.config.vision_encoder_name is None or not policy.config.freeze_vision_encoder:
- return None, None
-
- with torch.no_grad():
- observation_features = policy.actor.encoder.get_cached_image_features(observations)
- next_observation_features = policy.actor.encoder.get_cached_image_features(next_observations)
-
- return observation_features, next_observation_features
-
-
-def use_threads(cfg: TrainRLServerPipelineConfig) -> bool:
- return cfg.policy.concurrency.learner == "threads"
-
-
-def check_nan_in_transition(
- observations: torch.Tensor,
- actions: torch.Tensor,
- next_state: torch.Tensor,
- raise_error: bool = False,
-) -> bool:
- """
- Check for NaN values in transition data.
-
- Args:
- observations: Dictionary of observation tensors
- actions: Action tensor
- next_state: Dictionary of next state tensors
- raise_error: If True, raises ValueError when NaN is detected
-
- Returns:
- bool: True if NaN values were detected, False otherwise
- """
- nan_detected = False
-
- # Check observations
- for key, tensor in observations.items():
- if torch.isnan(tensor).any():
- logging.error(f"observations[{key}] contains NaN values")
- nan_detected = True
- if raise_error:
- raise ValueError(f"NaN detected in observations[{key}]")
-
- # Check next state
- for key, tensor in next_state.items():
- if torch.isnan(tensor).any():
- logging.error(f"next_state[{key}] contains NaN values")
- nan_detected = True
- if raise_error:
- raise ValueError(f"NaN detected in next_state[{key}]")
-
- # Check actions
- if torch.isnan(actions).any():
- logging.error("actions contains NaN values")
- nan_detected = True
- if raise_error:
- raise ValueError("NaN detected in actions")
-
- return nan_detected
-
-
-def push_actor_policy_to_queue(parameters_queue: Queue, policy: nn.Module):
- logging.debug("[LEARNER] Pushing actor policy to the queue")
-
- # Create a dictionary to hold all the state dicts
- state_dicts = {"policy": move_state_dict_to_device(policy.actor.state_dict(), device="cpu")}
-
- # Add discrete critic if it exists
- if hasattr(policy, "discrete_critic") and policy.discrete_critic is not None:
- state_dicts["discrete_critic"] = move_state_dict_to_device(
- policy.discrete_critic.state_dict(), device="cpu"
- )
- logging.debug("[LEARNER] Including discrete critic in state dict push")
-
- state_bytes = state_to_bytes(state_dicts)
- parameters_queue.put(state_bytes)
-
-
-def process_interaction_message(
- message, interaction_step_shift: int, wandb_logger: WandBLogger | None = None
-):
- """Process a single interaction message with consistent handling."""
- message = bytes_to_python_object(message)
- # Shift interaction step for consistency with checkpointed state
- message["Interaction step"] += interaction_step_shift
-
- # Log if logger available
- if wandb_logger:
- wandb_logger.log_dict(d=message, mode="train", custom_step_key="Interaction step")
-
- return message
-
-
-def process_transitions(
- transition_queue: Queue,
- replay_buffer: ReplayBuffer,
- offline_replay_buffer: ReplayBuffer,
- device: str,
- dataset_repo_id: str | None,
- shutdown_event: any,
-):
- """Process all available transitions from the queue.
-
- Args:
- transition_queue: Queue for receiving transitions from the actor
- replay_buffer: Replay buffer to add transitions to
- offline_replay_buffer: Offline replay buffer to add transitions to
- device: Device to move transitions to
- dataset_repo_id: Repository ID for dataset
- shutdown_event: Event to signal shutdown
- """
- while not transition_queue.empty() and not shutdown_event.is_set():
- transition_list = transition_queue.get()
- transition_list = bytes_to_transitions(buffer=transition_list)
-
- for transition in transition_list:
- transition = move_transition_to_device(transition=transition, device=device)
-
- # Skip transitions with NaN values
- if check_nan_in_transition(
- observations=transition["state"],
- actions=transition[ACTION],
- next_state=transition["next_state"],
- ):
- logging.warning("[LEARNER] NaN detected in transition, skipping")
- continue
-
- replay_buffer.add(**transition)
-
- # Add to offline buffer if it's an intervention
- if dataset_repo_id is not None and transition.get("complementary_info", {}).get(
- TeleopEvents.IS_INTERVENTION
- ):
- offline_replay_buffer.add(**transition)
-
-
-def process_interaction_messages(
- interaction_message_queue: Queue,
- interaction_step_shift: int,
- wandb_logger: WandBLogger | None,
- shutdown_event: any,
-) -> dict | None:
- """Process all available interaction messages from the queue.
-
- Args:
- interaction_message_queue: Queue for receiving interaction messages
- interaction_step_shift: Amount to shift interaction step by
- wandb_logger: Logger for tracking progress
- shutdown_event: Event to signal shutdown
-
- Returns:
- dict | None: The last interaction message processed, or None if none were processed
- """
- last_message = None
- while not interaction_message_queue.empty() and not shutdown_event.is_set():
- message = interaction_message_queue.get()
- last_message = process_interaction_message(
- message=message,
- interaction_step_shift=interaction_step_shift,
- wandb_logger=wandb_logger,
- )
-
- return last_message
-
-
-if __name__ == "__main__":
- train_cli()
- logging.info("[LEARNER] main finished")
diff --git a/lerobot/src/lerobot/rl/learner_service.py b/lerobot/src/lerobot/rl/learner_service.py
deleted file mode 100644
index 99742321b9c3bc8c98933126e7cc55c4ab8924d0..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/learner_service.py
+++ /dev/null
@@ -1,117 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team.
-# All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from multiprocessing import Event, Queue
-
-from lerobot.rl.queue import get_last_item_from_queue
-from lerobot.transport import services_pb2, services_pb2_grpc
-from lerobot.transport.utils import receive_bytes_in_chunks, send_bytes_in_chunks
-
-MAX_WORKERS = 3 # Stream parameters, send transitions and interactions
-SHUTDOWN_TIMEOUT = 10
-
-
-class LearnerService(services_pb2_grpc.LearnerServiceServicer):
- """
- Implementation of the LearnerService gRPC service
- This service is used to send parameters to the Actor and receive transitions and interactions from the Actor.
- See transport.proto for the gRPC service definition.
- """
-
- def __init__(
- self,
- shutdown_event: Event, # type: ignore
- parameters_queue: Queue,
- seconds_between_pushes: float,
- transition_queue: Queue,
- interaction_message_queue: Queue,
- queue_get_timeout: float = 0.001,
- ):
- self.shutdown_event = shutdown_event
- self.parameters_queue = parameters_queue
- self.seconds_between_pushes = seconds_between_pushes
- self.transition_queue = transition_queue
- self.interaction_message_queue = interaction_message_queue
- self.queue_get_timeout = queue_get_timeout
-
- def StreamParameters(self, request, context): # noqa: N802
- # TODO: authorize the request
- logging.info("[LEARNER] Received request to stream parameters from the Actor")
-
- last_push_time = 0
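-        # Push cadence: with seconds_between_pushes == 10 (hypothetical value),
-        # parameters are sent at most every 10 s; waiting on shutdown_event
-        # instead of sleeping lets a shutdown interrupt the pause immediately.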
-
- while not self.shutdown_event.is_set():
- time_since_last_push = time.time() - last_push_time
- if time_since_last_push < self.seconds_between_pushes:
- self.shutdown_event.wait(self.seconds_between_pushes - time_since_last_push)
- # Continue, because we could receive a shutdown event,
- # and it's checked in the while loop
- continue
-
- logging.info("[LEARNER] Push parameters to the Actor")
- buffer = get_last_item_from_queue(
- self.parameters_queue, block=True, timeout=self.queue_get_timeout
- )
-
- if buffer is None:
- continue
-
- yield from send_bytes_in_chunks(
- buffer,
- services_pb2.Parameters,
- log_prefix="[LEARNER] Sending parameters",
- silent=True,
- )
-
- last_push_time = time.time()
- logging.info("[LEARNER] Parameters sent")
-
- logging.info("[LEARNER] Stream parameters finished")
- return services_pb2.Empty()
-
- def SendTransitions(self, request_iterator, _context): # noqa: N802
- # TODO: authorize the request
- logging.info("[LEARNER] Received request to receive transitions from the Actor")
-
- receive_bytes_in_chunks(
- request_iterator,
- self.transition_queue,
- self.shutdown_event,
- log_prefix="[LEARNER] transitions",
- )
-
- logging.debug("[LEARNER] Finished receiving transitions")
- return services_pb2.Empty()
-
- def SendInteractions(self, request_iterator, _context): # noqa: N802
- # TODO: authorize the request
- logging.info("[LEARNER] Received request to receive interactions from the Actor")
-
- receive_bytes_in_chunks(
- request_iterator,
- self.interaction_message_queue,
- self.shutdown_event,
- log_prefix="[LEARNER] interactions",
- )
-
- logging.debug("[LEARNER] Finished receiving interactions")
- return services_pb2.Empty()
-
- def Ready(self, request, context): # noqa: N802
- return services_pb2.Empty()
diff --git a/lerobot/src/lerobot/rl/process.py b/lerobot/src/lerobot/rl/process.py
deleted file mode 100644
index 96d10b64463fa430fedd6885974ebb5dfc6d3bf8..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/process.py
+++ /dev/null
@@ -1,83 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team.
-# All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import os
-import signal
-import sys
-
-
-class ProcessSignalHandler:
- """Utility class to attach graceful shutdown signal handlers.
-
- The class exposes a shutdown_event attribute that is set when a shutdown
- signal is received. A counter tracks how many shutdown signals have been
- caught. On the second signal the process exits with status 1.
- """
-
- _SUPPORTED_SIGNALS = ("SIGINT", "SIGTERM", "SIGHUP", "SIGQUIT")
-
- def __init__(self, use_threads: bool, display_pid: bool = False):
- # TODO: Check if we can use Event from threading, since Event from
- # multiprocessing is a clone of threading.Event.
- # https://docs.python.org/3/library/multiprocessing.html#multiprocessing.Event
- if use_threads:
- from threading import Event
- else:
- from multiprocessing import Event
-
- self.shutdown_event = Event()
- self._counter: int = 0
- self._display_pid = display_pid
-
- self._register_handlers()
-
- @property
- def counter(self) -> int: # pragma: no cover – simple accessor
- """Number of shutdown signals that have been intercepted."""
- return self._counter
-
- def _register_handlers(self):
- """Attach the internal _signal_handler to a subset of POSIX signals."""
-
- def _signal_handler(signum, frame):
- pid_str = ""
- if self._display_pid:
- pid_str = f"[PID: {os.getpid()}]"
- logging.info(f"{pid_str} Shutdown signal {signum} received. Cleaning up…")
- self.shutdown_event.set()
- self._counter += 1
-
- # On a second Ctrl-C (or any supported signal), force the exit to
- # mimic the previous behaviour while giving the caller one chance to
- # shut down gracefully.
- # TODO: Investigate if we need it later
- if self._counter > 1:
- logging.info("Force shutdown")
- sys.exit(1)
-
- for sig_name in self._SUPPORTED_SIGNALS:
- sig = getattr(signal, sig_name, None)
- if sig is None:
- # The signal is not available on this platform (Windows for
- # instance does not provide SIGHUP, SIGQUIT…). Skip it.
- continue
- try:
- signal.signal(sig, _signal_handler)
- except (ValueError, OSError): # pragma: no cover – unlikely but safe
- # Signal not supported or we are in a non-main thread.
- continue
diff --git a/lerobot/src/lerobot/rl/queue.py b/lerobot/src/lerobot/rl/queue.py
deleted file mode 100644
index 57d0c3695b8ec04bb0b96e9e4bcba3a946a578fb..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/queue.py
+++ /dev/null
@@ -1,52 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import platform
-from contextlib import suppress
-from queue import Empty
-from typing import Any
-
-from torch.multiprocessing import Queue
-
-
-def get_last_item_from_queue(queue: Queue, block=True, timeout: float = 0.1) -> Any:
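-    """Return the freshest item from the queue, draining anything older.
-
-    With block=True, this waits up to `timeout` for a first item and returns
-    None if none arrives; it then discards all but the last queued item, which
-    is what the actor wants when several parameter snapshots have piled up.
-    """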
- if block:
- try:
- item = queue.get(timeout=timeout)
- except Empty:
- return None
- else:
- item = None
-
- # Drain the queue and keep only the most recent item
- if platform.system() == "Darwin":
- # On Mac, avoid using `qsize` due to unreliable implementation.
- # There is a comment on `qsize` code in the Python source:
- # Raises NotImplementedError on Mac OSX because of broken sem_getvalue()
- try:
- while True:
- item = queue.get_nowait()
- except Empty:
- pass
-
- return item
-
- # Details about using qsize in https://github.com/huggingface/lerobot/issues/1523
- while queue.qsize() > 0:
- with suppress(Empty):
- item = queue.get_nowait()
-
- return item
diff --git a/lerobot/src/lerobot/rl/wandb_utils.py b/lerobot/src/lerobot/rl/wandb_utils.py
deleted file mode 100644
index f4014b61c3a375474e9efbcfb62f760ea64a2061..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/rl/wandb_utils.py
+++ /dev/null
@@ -1,188 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-import os
-import re
-from glob import glob
-from pathlib import Path
-
-from huggingface_hub.constants import SAFETENSORS_SINGLE_FILE
-from termcolor import colored
-
-from lerobot.configs.train import TrainPipelineConfig
-from lerobot.utils.constants import PRETRAINED_MODEL_DIR
-
-
-def cfg_to_group(cfg: TrainPipelineConfig, return_list: bool = False) -> list[str] | str:
- """Return a group name for logging. Optionally returns group name as list."""
- lst = [
- f"policy:{cfg.policy.type}",
- f"seed:{cfg.seed}",
- ]
- if cfg.dataset is not None:
- lst.append(f"dataset:{cfg.dataset.repo_id}")
- if cfg.env is not None:
- lst.append(f"env:{cfg.env.type}")
- return lst if return_list else "-".join(lst)
-
-
-def get_wandb_run_id_from_filesystem(log_dir: Path) -> str:
- # Get the WandB run ID.
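-    # e.g. a path like "wandb/latest-run/run-1a2b3c4d.wandb" (hypothetical ID)
-    # yields "1a2b3c4d".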
- paths = glob(str(log_dir / "wandb/latest-run/run-*"))
- if len(paths) != 1:
- raise RuntimeError("Couldn't get the previous WandB run ID for run resumption.")
- match = re.search(r"run-([^\.]+).wandb", paths[0].split("/")[-1])
- if match is None:
- raise RuntimeError("Couldn't get the previous WandB run ID for run resumption.")
- wandb_run_id = match.groups(0)[0]
- return wandb_run_id
-
-
-def get_safe_wandb_artifact_name(name: str):
- """WandB artifacts don't accept ":" or "/" in their name."""
- return name.replace(":", "_").replace("/", "_")
-
-
-class WandBLogger:
- """A helper class to log object using wandb."""
-
- def __init__(self, cfg: TrainPipelineConfig):
- self.cfg = cfg.wandb
- self.log_dir = cfg.output_dir
- self.job_name = cfg.job_name
- self.env_fps = cfg.env.fps if cfg.env else None
- self._group = cfg_to_group(cfg)
-
- # Set up WandB.
- os.environ["WANDB_SILENT"] = "True"
- import wandb
-
- wandb_run_id = (
- cfg.wandb.run_id
- if cfg.wandb.run_id
- else get_wandb_run_id_from_filesystem(self.log_dir)
- if cfg.resume
- else None
- )
- wandb.init(
- id=wandb_run_id,
- project=self.cfg.project,
- entity=self.cfg.entity,
- name=self.job_name,
- notes=self.cfg.notes,
- tags=cfg_to_group(cfg, return_list=True),
- dir=self.log_dir,
- config=cfg.to_dict(),
- # TODO(rcadene): try set to True
- save_code=False,
- # TODO(rcadene): split train and eval, and run async eval with job_type="eval"
- job_type="train_eval",
- resume="must" if cfg.resume else None,
- mode=self.cfg.mode if self.cfg.mode in ["online", "offline", "disabled"] else "online",
- )
- run_id = wandb.run.id
- # NOTE: We will override the cfg.wandb.run_id with the wandb run id.
- # This is because we want to be able to resume the run from the wandb run id.
- cfg.wandb.run_id = run_id
- # Handle custom step key for rl asynchronous training.
- self._wandb_custom_step_key: set[str] | None = None
- logging.info(colored("Logs will be synced with wandb.", "blue", attrs=["bold"]))
- logging.info(f"Track this run --> {colored(wandb.run.get_url(), 'yellow', attrs=['bold'])}")
- self._wandb = wandb
-
- def log_policy(self, checkpoint_dir: Path):
- """Checkpoints the policy to wandb."""
- if self.cfg.disable_artifact:
- return
-
- step_id = checkpoint_dir.name
- artifact_name = f"{self._group}-{step_id}"
- artifact_name = get_safe_wandb_artifact_name(artifact_name)
- artifact = self._wandb.Artifact(artifact_name, type="model")
- pretrained_model_dir = checkpoint_dir / PRETRAINED_MODEL_DIR
-
- # Check if this is a PEFT model (has adapter files instead of model.safetensors)
- adapter_model_file = pretrained_model_dir / "adapter_model.safetensors"
- standard_model_file = pretrained_model_dir / SAFETENSORS_SINGLE_FILE
-
- if adapter_model_file.exists():
- # PEFT model: add adapter files and configs
- artifact.add_file(adapter_model_file)
- adapter_config_file = pretrained_model_dir / "adapter_config.json"
- if adapter_config_file.exists():
- artifact.add_file(adapter_config_file)
- # Also add the policy config which is needed for loading
- config_file = pretrained_model_dir / "config.json"
- if config_file.exists():
- artifact.add_file(config_file)
- elif standard_model_file.exists():
- # Standard model: add the single safetensors file
- artifact.add_file(standard_model_file)
- else:
- logging.warning(
- f"No {SAFETENSORS_SINGLE_FILE} or adapter_model.safetensors found in {pretrained_model_dir}. "
- "Skipping model artifact upload to WandB."
- )
- return
-
- self._wandb.log_artifact(artifact)
-
- def log_dict(
- self, d: dict, step: int | None = None, mode: str = "train", custom_step_key: str | None = None
- ):
- if mode not in {"train", "eval"}:
- raise ValueError(mode)
- if step is None and custom_step_key is None:
- raise ValueError("Either step or custom_step_key must be provided.")
-
- # NOTE: This is not simple. The wandb step must increase monotonically, and it
- # increases with each wandb.log call, but asynchronous RL involves multiple
- # notions of time: the interaction step with the environment, the training step,
- # the evaluation step, etc. So we need to define a custom step key to log the
- # correct step for each metric.
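-        # e.g. the learner calls log_dict({"Interaction step": 1234, ...},
-        # mode="train", custom_step_key="Interaction step") so those metrics are
-        # plotted against train/Interaction step rather than wandb's global step.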
- if custom_step_key is not None:
- if self._wandb_custom_step_key is None:
- self._wandb_custom_step_key = set()
- new_custom_key = f"{mode}/{custom_step_key}"
- if new_custom_key not in self._wandb_custom_step_key:
- self._wandb_custom_step_key.add(new_custom_key)
- self._wandb.define_metric(new_custom_key, hidden=True)
-
- for k, v in d.items():
- if not isinstance(v, (int | float | str)):
- logging.warning(
- f'WandB logging of key "{k}" was ignored as its type "{type(v)}" is not handled by this wrapper.'
- )
- continue
-
- # Do not log the custom step key itself.
- if self._wandb_custom_step_key is not None and k in self._wandb_custom_step_key:
- continue
-
- if custom_step_key is not None:
- value_custom_step = d[custom_step_key]
- data = {f"{mode}/{k}": v, f"{mode}/{custom_step_key}": value_custom_step}
- self._wandb.log(data)
- continue
-
- self._wandb.log(data={f"{mode}/{k}": v}, step=step)
-
- def log_video(self, video_path: str, step: int, mode: str = "train"):
- if mode not in {"train", "eval"}:
- raise ValueError(mode)
-
- wandb_video = self._wandb.Video(video_path, fps=self.env_fps, format="mp4")
- self._wandb.log({f"{mode}/video": wandb_video}, step=step)
diff --git a/lerobot/src/lerobot/robots/__init__.py b/lerobot/src/lerobot/robots/__init__.py
deleted file mode 100644
index ca7f736c858be000e9cbeb22f3bc2858ef80933f..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config import RobotConfig
-from .robot import Robot
-from .utils import make_robot_from_config
diff --git a/lerobot/src/lerobot/robots/bi_so_follower/__init__.py b/lerobot/src/lerobot/robots/bi_so_follower/__init__.py
deleted file mode 100644
index 6eaecf1c7565d2e64559a0c57dc1a090be0dfdca..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/bi_so_follower/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .bi_so_follower import BiSOFollower
-from .config_bi_so_follower import BiSOFollowerConfig
diff --git a/lerobot/src/lerobot/robots/config.py b/lerobot/src/lerobot/robots/config.py
deleted file mode 100644
index 245ee1f9631269c8ee306d399e77ddfe6b4b641c..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/config.py
+++ /dev/null
@@ -1,40 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-from dataclasses import dataclass
-from pathlib import Path
-
-import draccus
-
-
-@dataclass(kw_only=True)
-class RobotConfig(draccus.ChoiceRegistry, abc.ABC):
- # Allows to distinguish between different robots of the same type
- id: str | None = None
- # Directory to store calibration file
- calibration_dir: Path | None = None
-
- def __post_init__(self):
- if hasattr(self, "cameras") and self.cameras:
- for _, config in self.cameras.items():
- for attr in ["width", "height", "fps"]:
- if getattr(config, attr) is None:
- raise ValueError(
- f"Specifying '{attr}' is required for the camera to be used in a robot"
- )
-
- @property
- def type(self) -> str:
- return self.get_choice_name(self.__class__)
diff --git a/lerobot/src/lerobot/robots/earthrover_mini_plus/__init__.py b/lerobot/src/lerobot/robots/earthrover_mini_plus/__init__.py
deleted file mode 100644
index aba983e9a46ba929a5ea60da909c666d9bc1e3ce..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/earthrover_mini_plus/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_earthrover_mini_plus import EarthRoverMiniPlusConfig
-from .robot_earthrover_mini_plus import EarthRoverMiniPlus
-
-__all__ = ["EarthRoverMiniPlus", "EarthRoverMiniPlusConfig"]
diff --git a/lerobot/src/lerobot/robots/earthrover_mini_plus/earthrover_mini_plus.mdx b/lerobot/src/lerobot/robots/earthrover_mini_plus/earthrover_mini_plus.mdx
deleted file mode 100644
index 37509e0a908ef8b69ca33fc8718f8052b63b0d79..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/earthrover_mini_plus/earthrover_mini_plus.mdx
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/earthrover_mini_plus.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/robots/earthrover_mini_plus/robot_earthrover_mini_plus.py b/lerobot/src/lerobot/robots/earthrover_mini_plus/robot_earthrover_mini_plus.py
deleted file mode 100644
index d659a3f74a7bb980ca641f413a251691faf182c2..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/earthrover_mini_plus/robot_earthrover_mini_plus.py
+++ /dev/null
@@ -1,469 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""EarthRover Mini Plus robot using Frodobots SDK."""
-
-import base64
-import logging
-from functools import cached_property
-
-import cv2
-import numpy as np
-import requests
-
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-from lerobot.utils.errors import DeviceNotConnectedError
-
-from ..robot import Robot
-from .config_earthrover_mini_plus import EarthRoverMiniPlusConfig
-
-logger = logging.getLogger(__name__)
-
-# Action feature keys
-ACTION_LINEAR_VEL = "linear.vel"
-ACTION_ANGULAR_VEL = "angular.vel"
-
-# Observation feature keys
-OBS_FRONT = "front"
-OBS_REAR = "rear"
-OBS_LINEAR_VEL = "linear.vel"
-OBS_BATTERY_LEVEL = "battery.level"
-OBS_ORIENTATION_DEG = "orientation.deg"
-OBS_GPS_LATITUDE = "gps.latitude"
-OBS_GPS_LONGITUDE = "gps.longitude"
-OBS_GPS_SIGNAL = "gps.signal"
-OBS_SIGNAL_LEVEL = "signal.level"
-OBS_VIBRATION = "vibration"
-OBS_LAMP_STATE = "lamp.state"
-
-
-class EarthRoverMiniPlus(Robot):
- """
- EarthRover Mini Plus robot controlled via Frodobots SDK HTTP API.
-
- This robot uses cloud-based control through the Frodobots SDK instead of direct
- hardware connection. Cameras stream via WebRTC through Agora cloud, and control
- commands are sent via HTTP POST requests.
-
- The robot supports:
- - Dual cameras (front and rear) accessed via SDK HTTP endpoints
- - Linear and angular velocity control
- - Battery and orientation telemetry
-
- Attributes:
- config: Robot configuration
- sdk_base_url: URL of the Frodobots SDK server (default: http://localhost:8000)
- """
-
- config_class = EarthRoverMiniPlusConfig
- name = "earthrover_mini_plus"
-
- def __init__(self, config: EarthRoverMiniPlusConfig):
- """Initialize EarthRover Mini Plus robot.
-
- Args:
- config: Robot configuration
- """
- super().__init__(config)
- self.config = config
- self.sdk_base_url = "http://localhost:8000"
-
- # Empty cameras dict for compatibility with recording script
- # Cameras are accessed directly via SDK, not through Camera objects
- self.cameras = {}
- self._is_connected = False
-
- # Cache for camera frames (fallback when requests fail)
- self._last_front_frame = None
- self._last_rear_frame = None
-
- # Cache for robot telemetry data (fallback when requests fail)
- self._last_robot_data = None
-
- logger.info(f"Initialized {self.name} with SDK at {self.sdk_base_url}")
-
- @property
- def is_connected(self) -> bool:
- """Check if robot is connected to SDK."""
- return self._is_connected
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- """Connect to robot via Frodobots SDK.
-
- Args:
- calibrate: No-op for the SDK-based robot (kept for API compatibility)
-
- Raises:
- DeviceAlreadyConnectedError: If robot is already connected
- DeviceNotConnectedError: If cannot connect to SDK server
- """
-
- # Verify SDK is running and accessible
- try:
- response = requests.get(f"{self.sdk_base_url}/data", timeout=10.0)
- if response.status_code != 200:
- raise DeviceNotConnectedError(
- f"Cannot connect to SDK at {self.sdk_base_url}. "
- "Make sure it's running: hypercorn main:app --reload"
- )
- except requests.RequestException as e:
- raise DeviceNotConnectedError(f"Cannot connect to SDK at {self.sdk_base_url}: {e}") from e
-
- self._is_connected = True
- logger.info(f"{self.name} connected to SDK")
-
- if calibrate:
- self.calibrate()
-
- def calibrate(self) -> None:
- """Calibration not needed for SDK-based robot."""
- logger.info("Calibration not required for SDK-based robot")
-
- @property
- def is_calibrated(self) -> bool:
- """SDK robot doesn't require calibration.
-
- Returns:
- bool: Always True for SDK-based robots
- """
- return True
-
- def configure(self) -> None:
- """Configure robot (no-op for SDK-based robot)."""
- pass
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- """Define the observation space for dataset recording.
-
- Returns:
- dict: Observation features with types/shapes:
- - front: (480, 640, 3) - Front camera RGB image
- - rear: (480, 640, 3) - Rear camera RGB image
- - linear.vel: float - Current speed (0-1, SDK reports only positive speeds)
- - battery.level: float - Battery level (0-1, normalized from 0-100)
- - orientation.deg: float - Robot orientation (0-1, normalized from raw value)
- - gps.latitude: float - GPS latitude coordinate
- - gps.longitude: float - GPS longitude coordinate
- - gps.signal: float - GPS signal strength (0-1, normalized from percentage)
- - signal.level: float - Network signal level (0-1, normalized from 0-5)
- - vibration: float - Vibration sensor reading
- - lamp.state: float - Lamp state (0=off, 1=on)
- """
- return {
- # Cameras (height, width, channels)
- OBS_FRONT: (480, 640, 3),
- OBS_REAR: (480, 640, 3),
- # Motion state
- OBS_LINEAR_VEL: float,
- # Robot state
- OBS_BATTERY_LEVEL: float,
- OBS_ORIENTATION_DEG: float,
- # GPS
- OBS_GPS_LATITUDE: float,
- OBS_GPS_LONGITUDE: float,
- OBS_GPS_SIGNAL: float,
- # Sensors
- OBS_SIGNAL_LEVEL: float,
- OBS_VIBRATION: float,
- OBS_LAMP_STATE: float,
- }
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- """Define the action space.
-
- Returns:
- dict: Action features with types:
- - linear.vel: float - Target linear velocity
- - angular.vel: float - Target angular velocity
- """
- return {
- ACTION_LINEAR_VEL: float,
- ACTION_ANGULAR_VEL: float,
- }
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- """Get current robot observation from SDK.
-
- Returns:
- RobotObservation: Observation containing:
- - front: Front camera image (480, 640, 3) in RGB format
- - rear: Rear camera image (480, 640, 3) in RGB format
- - linear.vel: Current speed (0-1, SDK reports only positive speeds)
- - battery.level: Battery level (0-1, normalized from 0-100)
- - orientation.deg: Robot orientation (0-1, normalized from raw value)
- - gps.latitude: GPS latitude coordinate
- - gps.longitude: GPS longitude coordinate
- - gps.signal: GPS signal strength (0-1, normalized from percentage)
- - signal.level: Network signal level (0-1, normalized from 0-5)
- - vibration: Vibration sensor reading
- - lamp.state: Lamp state (0=off, 1=on)
-
- Raises:
- DeviceNotConnectedError: If robot is not connected
-
- Note:
- Camera frames are retrieved from SDK endpoints /v2/front and /v2/rear.
- Frames are decoded from base64 and converted from BGR to RGB format.
- Robot telemetry is retrieved from /data endpoint.
- All SDK values are normalized to appropriate ranges for dataset recording.
- """
-
- observation = {}
-
- # Get camera images from SDK
- frames = self._get_camera_frames()
- observation[OBS_FRONT] = frames["front"]
- observation[OBS_REAR] = frames["rear"]
-
- # Get robot state from SDK
- robot_data = self._get_robot_data()
-
- # Motion state
- observation[OBS_LINEAR_VEL] = robot_data["speed"] / 100.0 # Normalize 0-100 to 0-1
-
- # Robot state
- observation[OBS_BATTERY_LEVEL] = robot_data["battery"] / 100.0 # Normalize 0-100 to 0-1
- observation[OBS_ORIENTATION_DEG] = robot_data["orientation"] / 360.0 # Normalize to 0-1
-
- # GPS data
- observation[OBS_GPS_LATITUDE] = robot_data["latitude"]
- observation[OBS_GPS_LONGITUDE] = robot_data["longitude"]
- observation[OBS_GPS_SIGNAL] = robot_data["gps_signal"] / 100.0 # Normalize percentage to 0-1
-
- # Sensors
- observation[OBS_SIGNAL_LEVEL] = robot_data["signal_level"] / 5.0 # Normalize 0-5 to 0-1
- observation[OBS_VIBRATION] = robot_data["vibration"]
- observation[OBS_LAMP_STATE] = float(robot_data["lamp"]) # 0 or 1
-
- return observation
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- """Send action to robot via SDK.
-
- Args:
- action: Action dict with keys:
- - linear.vel: Target linear velocity (-1 to 1)
- - angular.vel: Target angular velocity (-1 to 1)
-
- Returns:
- RobotAction: The action that was sent (matches action_features keys)
-
- Raises:
- DeviceNotConnectedError: If robot is not connected
-
- Note:
- Actions are sent to SDK via POST /control endpoint.
- SDK expects commands in range [-1, 1].
- """
-
- # Extract action values and convert to float
- linear = float(action.get(ACTION_LINEAR_VEL, 0.0))
- angular = float(action.get(ACTION_ANGULAR_VEL, 0.0))
-
- # Send command to SDK
- try:
- self._send_command_to_sdk(linear, angular)
- except Exception as e:
- logger.error(f"Error sending action: {e}")
-
- # Return action in format matching action_features
- return {
- ACTION_LINEAR_VEL: linear,
- ACTION_ANGULAR_VEL: angular,
- }
-
- @check_if_not_connected
- def disconnect(self) -> None:
- """Disconnect from robot.
-
- Stops the robot and closes connection to SDK.
-
- Raises:
- DeviceNotConnectedError: If robot is not connected
- """
-
- # Stop the robot before disconnecting
- try:
- self._send_command_to_sdk(0.0, 0.0)
- except Exception as e:
- logger.warning(f"Failed to stop robot during disconnect: {e}")
-
- self._is_connected = False
- logger.info(f"{self.name} disconnected")
-
- # Private helper methods for SDK communication
-
- def _get_camera_frames(self) -> dict[str, np.ndarray]:
- """Get camera frames from SDK using v2 endpoints with caching fallback.
-
- Returns:
- dict: Dictionary with 'front' and 'rear' keys containing:
- - Current frame (if request succeeds)
- - Cached frame (if request fails but cache exists)
- - Zero array (if request fails and no cache exists yet)
-
- Note:
- Uses /v2/front and /v2/rear endpoints which are 15x faster than /screenshot.
- Images are base64 encoded, resized to 640x480, and converted from BGR to RGB.
- If request fails, returns the last successfully retrieved frame (cached).
- """
- frames = {}
-
- # Get front camera
- try:
- response = requests.get(f"{self.sdk_base_url}/v2/front", timeout=2.0)
- if response.status_code == 200:
- data = response.json()
- if "front_frame" in data and data["front_frame"]:
- front_img = self._decode_base64_image(data["front_frame"])
- if front_img is not None:
- # Resize and convert BGR to RGB
- front_img = cv2.resize(front_img, (640, 480))
- front_rgb = cv2.cvtColor(front_img, cv2.COLOR_BGR2RGB)
- frames["front"] = front_rgb
- # Cache the successful frame
- self._last_front_frame = front_rgb
- except Exception as e:
- logger.warning(f"Error fetching front camera: {e}")
-
- # Fallback: use cache or zero array
- if "front" not in frames:
- if self._last_front_frame is not None:
- frames["front"] = self._last_front_frame
- else:
- frames["front"] = np.zeros((480, 640, 3), dtype=np.uint8)
-
- # Get rear camera
- try:
- response = requests.get(f"{self.sdk_base_url}/v2/rear", timeout=2.0)
- if response.status_code == 200:
- data = response.json()
- if "rear_frame" in data and data["rear_frame"]:
- rear_img = self._decode_base64_image(data["rear_frame"])
- if rear_img is not None:
- # Resize and convert BGR to RGB
- rear_img = cv2.resize(rear_img, (640, 480))
- rear_rgb = cv2.cvtColor(rear_img, cv2.COLOR_BGR2RGB)
- frames["rear"] = rear_rgb
- # Cache the successful frame
- self._last_rear_frame = rear_rgb
- except Exception as e:
- logger.warning(f"Error fetching rear camera: {e}")
-
- # Fallback: use cache or zero array
- if "rear" not in frames:
- if self._last_rear_frame is not None:
- frames["rear"] = self._last_rear_frame
- else:
- frames["rear"] = np.zeros((480, 640, 3), dtype=np.uint8)
-
- return frames
-
- def _decode_base64_image(self, base64_string: str) -> np.ndarray | None:
- """Decode base64 string to image.
-
- Args:
- base64_string: Base64 encoded image string
-
- Returns:
- np.ndarray: Decoded image in BGR format (OpenCV default), or None if decoding fails
- """
- try:
- img_bytes = base64.b64decode(base64_string)
- nparr = np.frombuffer(img_bytes, np.uint8)
- img = cv2.imdecode(nparr, cv2.IMREAD_COLOR)
- return img # Return in BGR format (OpenCV default)
- except Exception as e:
- logger.error(f"Error decoding image: {e}")
- return None
-
- def _get_robot_data(self) -> dict:
- """Get robot telemetry data from SDK.
-
- Returns:
- dict: Robot telemetry data including battery, speed, orientation, GPS, etc:
- - Current data (if request succeeds)
- - Cached data (if request fails but cache exists)
- - Default values (if request fails and no cache exists yet)
-
- Note:
- Uses /data endpoint which provides comprehensive robot state.
- If request fails, returns the last successfully retrieved data (cached).
- """
- try:
- response = requests.get(f"{self.sdk_base_url}/data", timeout=2.0)
- if response.status_code == 200:
- data = response.json()
- # Cache the successful data
- self._last_robot_data = data
- return data
- except Exception as e:
- logger.warning(f"Error fetching robot data: {e}")
-
- # Fallback: use cache or default values
- if self._last_robot_data is not None:
- return self._last_robot_data
- else:
- # Return dict with default values (used only on first failure before any cache exists)
- return {
- "speed": 0,
- "battery": 0,
- "orientation": 0,
- "latitude": 0.0,
- "longitude": 0.0,
- "gps_signal": 0,
- "signal_level": 0,
- "vibration": 0.0,
- "lamp": 0,
- }
-
- def _send_command_to_sdk(self, linear: float, angular: float, lamp: int = 0) -> bool:
- """Send control command to SDK.
-
- Args:
- linear: Linear velocity command (-1 to 1)
- angular: Angular velocity command (-1 to 1)
- lamp: Lamp control (0=off, 1=on)
-
- Returns:
- bool: True if command sent successfully, False otherwise
-
- Note:
- Uses POST /control endpoint. Commands are sent as JSON payload.
- """
- try:
- payload = {
- "command": {
- "linear": linear,
- "angular": angular,
- "lamp": lamp,
- }
- }
-
- response = requests.post(
- f"{self.sdk_base_url}/control",
- json=payload,
- timeout=1.0,
- )
-
- return response.status_code == 200
- except Exception as e:
- logger.error(f"Error sending command: {e}")
- return False
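-
-# A minimal usage sketch (assuming the Frodobots SDK server is already running on
-# http://localhost:8000 and that EarthRoverMiniPlusConfig requires no further
-# fields; the id value is illustrative):
-#
-#     config = EarthRoverMiniPlusConfig(id="rover_01")
-#     robot = EarthRoverMiniPlus(config)
-#     robot.connect()
-#     obs = robot.get_observation()
-#     robot.send_action({ACTION_LINEAR_VEL: 0.2, ACTION_ANGULAR_VEL: 0.0})
-#     robot.disconnect()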
diff --git a/lerobot/src/lerobot/robots/hope_jr/__init__.py b/lerobot/src/lerobot/robots/hope_jr/__init__.py
deleted file mode 100644
index 57aab9e3efa9d2988d22c3112c9ce6f3e6a7de2c..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/hope_jr/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_hope_jr import HopeJrArmConfig, HopeJrHandConfig
-from .hope_jr_arm import HopeJrArm
-from .hope_jr_hand import HopeJrHand
diff --git a/lerobot/src/lerobot/robots/hope_jr/config_hope_jr.py b/lerobot/src/lerobot/robots/hope_jr/config_hope_jr.py
deleted file mode 100644
index 8274f69653fd0d9a0164f5e496355bd0446687f6..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/hope_jr/config_hope_jr.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.cameras import CameraConfig
-
-from ..config import RobotConfig
-
-
-@RobotConfig.register_subclass("hope_jr_hand")
-@dataclass
-class HopeJrHandConfig(RobotConfig):
- port: str # Port to connect to the hand
- side: str # "left" / "right"
-
- disable_torque_on_disconnect: bool = True
-
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
-
- def __post_init__(self):
- super().__post_init__()
- if self.side not in ["right", "left"]:
- raise ValueError(self.side)
-
-
-@RobotConfig.register_subclass("hope_jr_arm")
-@dataclass
-class HopeJrArmConfig(RobotConfig):
- port: str # Port to connect to the arm
- disable_torque_on_disconnect: bool = True
-
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a dictionary that maps motor
- # names to the max_relative_target value for that motor.
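- # For example (illustrative values): max_relative_target=5.0 uses the same bound
- # for every motor, while {"shoulder_pitch": 5.0, "elbow_flex": 10.0} sets
- # per-motor bounds.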
- max_relative_target: float | dict[str, float] | None = None
-
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
diff --git a/lerobot/src/lerobot/robots/hope_jr/hope_jr.mdx b/lerobot/src/lerobot/robots/hope_jr/hope_jr.mdx
deleted file mode 100644
index a076e4754acc0f88e53320da293f2fc9ec2d06cf..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/hope_jr/hope_jr.mdx
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/hope_jr.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/robots/hope_jr/hope_jr_arm.py b/lerobot/src/lerobot/robots/hope_jr/hope_jr_arm.py
deleted file mode 100644
index 937d0eba831535c4cdc303d79df9111d1214834b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/hope_jr/hope_jr_arm.py
+++ /dev/null
@@ -1,169 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from functools import cached_property
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.motors import Motor, MotorNormMode
-from lerobot.motors.calibration_gui import RangeFinderGUI
-from lerobot.motors.feetech import (
- FeetechMotorsBus,
-)
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..robot import Robot
-from ..utils import ensure_safe_goal_position
-from .config_hope_jr import HopeJrArmConfig
-
-logger = logging.getLogger(__name__)
-
-
-class HopeJrArm(Robot):
- config_class = HopeJrArmConfig
- name = "hope_jr_arm"
-
- def __init__(self, config: HopeJrArmConfig):
- super().__init__(config)
- self.config = config
- self.bus = FeetechMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pitch": Motor(1, "sm8512bl", MotorNormMode.RANGE_M100_100),
- "shoulder_yaw": Motor(2, "sts3250", MotorNormMode.RANGE_M100_100),
- "shoulder_roll": Motor(3, "sts3250", MotorNormMode.RANGE_M100_100),
- "elbow_flex": Motor(4, "sts3250", MotorNormMode.RANGE_M100_100),
- "wrist_roll": Motor(5, "sts3250", MotorNormMode.RANGE_M100_100),
- "wrist_yaw": Motor(6, "sts3250", MotorNormMode.RANGE_M100_100),
- "wrist_pitch": Motor(7, "sts3250", MotorNormMode.RANGE_M100_100),
- },
- calibration=self.calibration,
- )
- self.cameras = make_cameras_from_configs(config.cameras)
-
- # HACK: shoulder_pitch uses a different motor model (sm8512bl) than the sts3250
- # joints, so get_observation() reads it separately from the other motors.
- self.shoulder_pitch = "shoulder_pitch"
- self.other_motors = [m for m in self.bus.motors if m != "shoulder_pitch"]
-
- @property
- def _motors_ft(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._motors_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._motors_ft
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- """
- We assume that at connection time, arm is in a rest position,
- and torque can be safely disabled to run calibration.
- """
-
- self.bus.connect(handshake=False)
- if not self.is_calibrated and calibrate:
- self.calibrate()
-
- # Connect the cameras
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- groups = {
- "all": list(self.bus.motors.keys()),
- "shoulder": ["shoulder_pitch", "shoulder_yaw", "shoulder_roll"],
- "elbow": ["elbow_flex"],
- "wrist": ["wrist_roll", "wrist_yaw", "wrist_pitch"],
- }
-
- self.calibration = RangeFinderGUI(self.bus, groups).run()
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-
- def configure(self) -> None:
- with self.bus.torque_disabled():
- self.bus.configure_motors(maximum_acceleration=30, acceleration=30)
-
- def setup_motors(self) -> None:
- """Assign motor IDs one at a time: connect each motor alone to the controller board and press Enter to write its ID."""
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- # Read arm position
- start = time.perf_counter()
- obs_dict = self.bus.sync_read("Present_Position", self.other_motors)
- obs_dict[self.shoulder_pitch] = self.bus.read("Present_Position", self.shoulder_pitch)
- obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read state: {dt_ms:.1f}ms")
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- start = time.perf_counter()
- obs_dict[cam_key] = cam.async_read()
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms")
-
- return obs_dict
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
-
- # Cap goal position when too far away from present position.
- # /!\ Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.bus.sync_read("Present_Position")
- goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
- goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
-
- self.bus.sync_write("Goal_Position", goal_pos)
- return {f"{motor}.pos": val for motor, val in goal_pos.items()}
-
- @check_if_not_connected
- def disconnect(self):
- self.bus.disconnect(self.config.disable_torque_on_disconnect)
- for cam in self.cameras.values():
- cam.disconnect()
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/robots/hope_jr/hope_jr_hand.py b/lerobot/src/lerobot/robots/hope_jr/hope_jr_hand.py
deleted file mode 100644
index 82373e5738f49199ff80ab6025d942486fe9d5ca..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/hope_jr/hope_jr_hand.py
+++ /dev/null
@@ -1,192 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from functools import cached_property
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.motors import Motor, MotorNormMode
-from lerobot.motors.calibration_gui import RangeFinderGUI
-from lerobot.motors.feetech import (
- FeetechMotorsBus,
-)
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..robot import Robot
-from .config_hope_jr import HopeJrHandConfig
-
-logger = logging.getLogger(__name__)
-
-RIGHT_HAND_INVERSIONS = [
- "thumb_mcp",
- "thumb_dip",
- "index_ulnar_flexor",
- "middle_ulnar_flexor",
- "ring_ulnar_flexor",
- "ring_pip_dip",
- "pinky_ulnar_flexor",
- "pinky_pip_dip",
-]
-
-LEFT_HAND_INVERSIONS = [
- "thumb_cmc",
- "thumb_mcp",
- "thumb_dip",
- "index_radial_flexor",
- "index_pip_dip",
- "middle_radial_flexor",
- "middle_pip_dip",
- "ring_radial_flexor",
- "ring_pip_dip",
- "pinky_radial_flexor",
- # "pinky_pip_dip",
-]
-
-
-class HopeJrHand(Robot):
- config_class = HopeJrHandConfig
- name = "hope_jr_hand"
-
- def __init__(self, config: HopeJrHandConfig):
- super().__init__(config)
- self.config = config
- self.bus = FeetechMotorsBus(
- port=self.config.port,
- motors={
- # Thumb
- "thumb_cmc": Motor(1, "scs0009", MotorNormMode.RANGE_0_100),
- "thumb_mcp": Motor(2, "scs0009", MotorNormMode.RANGE_0_100),
- "thumb_pip": Motor(3, "scs0009", MotorNormMode.RANGE_0_100),
- "thumb_dip": Motor(4, "scs0009", MotorNormMode.RANGE_0_100),
- # Index
- "index_radial_flexor": Motor(5, "scs0009", MotorNormMode.RANGE_0_100),
- "index_ulnar_flexor": Motor(6, "scs0009", MotorNormMode.RANGE_0_100),
- "index_pip_dip": Motor(7, "scs0009", MotorNormMode.RANGE_0_100),
- # Middle
- "middle_radial_flexor": Motor(8, "scs0009", MotorNormMode.RANGE_0_100),
- "middle_ulnar_flexor": Motor(9, "scs0009", MotorNormMode.RANGE_0_100),
- "middle_pip_dip": Motor(10, "scs0009", MotorNormMode.RANGE_0_100),
- # Ring
- "ring_radial_flexor": Motor(11, "scs0009", MotorNormMode.RANGE_0_100),
- "ring_ulnar_flexor": Motor(12, "scs0009", MotorNormMode.RANGE_0_100),
- "ring_pip_dip": Motor(13, "scs0009", MotorNormMode.RANGE_0_100),
- # Pinky
- "pinky_radial_flexor": Motor(14, "scs0009", MotorNormMode.RANGE_0_100),
- "pinky_ulnar_flexor": Motor(15, "scs0009", MotorNormMode.RANGE_0_100),
- "pinky_pip_dip": Motor(16, "scs0009", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- protocol_version=1,
- )
- self.cameras = make_cameras_from_configs(config.cameras)
- self.inverted_motors = RIGHT_HAND_INVERSIONS if config.side == "right" else LEFT_HAND_INVERSIONS
-
- @property
- def _motors_ft(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._motors_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._motors_ft
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- self.calibrate()
-
- # Connect the cameras
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- fingers = {}
- for finger in ["thumb", "index", "middle", "ring", "pinky"]:
- fingers[finger] = [motor for motor in self.bus.motors if motor.startswith(finger)]
-
- self.calibration = RangeFinderGUI(self.bus, fingers).run()
- for motor in self.inverted_motors:
- self.calibration[motor].drive_mode = 1
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-
- def configure(self) -> None:
- with self.bus.torque_disabled():
- self.bus.configure_motors()
-
- def setup_motors(self) -> None:
- """Assign motor IDs one at a time: connect each motor alone to the controller board and press Enter to write its ID."""
- for motor in self.bus.motors:
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- obs_dict = {}
-
- # Read hand position
- start = time.perf_counter()
- for motor in self.bus.motors:
- obs_dict[f"{motor}.pos"] = self.bus.read("Present_Position", motor)
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read state: {dt_ms:.1f}ms")
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- start = time.perf_counter()
- obs_dict[cam_key] = cam.async_read()
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms")
-
- return obs_dict
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
- self.bus.sync_write("Goal_Position", goal_pos)
- return action
-
- @check_if_not_connected
- def disconnect(self):
- self.bus.disconnect(self.config.disable_torque_on_disconnect)
- for cam in self.cameras.values():
- cam.disconnect()
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/robots/koch_follower/__init__.py b/lerobot/src/lerobot/robots/koch_follower/__init__.py
deleted file mode 100644
index 6524ab97bf45ae8c7c4eb6b110a049704e5e3e0a..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/koch_follower/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_koch_follower import KochFollowerConfig
-from .koch_follower import KochFollower
diff --git a/lerobot/src/lerobot/robots/koch_follower/config_koch_follower.py b/lerobot/src/lerobot/robots/koch_follower/config_koch_follower.py
deleted file mode 100644
index cab1f6e11a65c953acdd5ad05bf316270bc08b79..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/koch_follower/config_koch_follower.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.cameras import CameraConfig
-
-from ..config import RobotConfig
-
-
-@RobotConfig.register_subclass("koch_follower")
-@dataclass
-class KochFollowerConfig(RobotConfig):
- # Port to connect to the arm
- port: str
-
- disable_torque_on_disconnect: bool = True
-
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a dictionary that maps motor
- # names to the max_relative_target value for that motor.
- max_relative_target: float | dict[str, float] | None = None
-
- # cameras
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
-
- # Set to `True` for backward compatibility with previous policies/dataset
- use_degrees: bool = False
diff --git a/lerobot/src/lerobot/robots/koch_follower/koch.mdx b/lerobot/src/lerobot/robots/koch_follower/koch.mdx
deleted file mode 100644
index ef43feb06680f79223d511915a46e352d33ac480..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/koch_follower/koch.mdx
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/koch.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/robots/koch_follower/koch_follower.py b/lerobot/src/lerobot/robots/koch_follower/koch_follower.py
deleted file mode 100644
index 70040b79ce679c6950bb56aeedb4de6594408f40..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/koch_follower/koch_follower.py
+++ /dev/null
@@ -1,236 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from functools import cached_property
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.dynamixel import (
- DynamixelMotorsBus,
- OperatingMode,
-)
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..robot import Robot
-from ..utils import ensure_safe_goal_position
-from .config_koch_follower import KochFollowerConfig
-
-logger = logging.getLogger(__name__)
-
-
-class KochFollower(Robot):
- """
- - [Koch v1.0](https://github.com/AlexanderKoch-Koch/low_cost_robot), with and without the wrist-to-elbow
- expansion, developed by Alexander Koch from [Tau Robotics](https://tau-robotics.com)
- - [Koch v1.1](https://github.com/jess-moss/koch-v1-1) developed by Jess Moss
- """
-
- config_class = KochFollowerConfig
- name = "koch_follower"
-
- def __init__(self, config: KochFollowerConfig):
- super().__init__(config)
- self.config = config
- norm_mode_body = MotorNormMode.DEGREES if config.use_degrees else MotorNormMode.RANGE_M100_100
- self.bus = DynamixelMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pan": Motor(1, "xl430-w250", norm_mode_body),
- "shoulder_lift": Motor(2, "xl430-w250", norm_mode_body),
- "elbow_flex": Motor(3, "xl330-m288", norm_mode_body),
- "wrist_flex": Motor(4, "xl330-m288", norm_mode_body),
- "wrist_roll": Motor(5, "xl330-m288", norm_mode_body),
- "gripper": Motor(6, "xl330-m288", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- )
- self.cameras = make_cameras_from_configs(config.cameras)
-
- @property
- def _motors_ft(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._motors_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._motors_ft
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- """
- We assume that at connection time, the arm is in a rest position,
- and torque can be safely disabled to run calibration.
- """
-
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between calibration values in the motor and the calibration file or no calibration file found"
- )
- self.calibrate()
-
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- self.bus.disable_torque()
- if self.calibration:
- # Calibration file exists, ask user whether to use it or run new calibration
- user_input = input(
- f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: "
- )
- if user_input.strip().lower() != "c":
- logger.info(f"Writing calibration file associated with the id {self.id} to the motors")
- self.bus.write_calibration(self.calibration)
- return
- logger.info(f"\nRunning calibration of {self}")
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- input(f"Move {self} to the middle of its range of motion and press ENTER....")
- homing_offsets = self.bus.set_half_turn_homings()
-
- full_turn_motors = ["shoulder_pan", "wrist_roll"]
- unknown_range_motors = [motor for motor in self.bus.motors if motor not in full_turn_motors]
- print(
- f"Move all joints except {full_turn_motors} sequentially through their entire "
- "ranges of motion.\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self.bus.record_ranges_of_motion(unknown_range_motors)
- for motor in full_turn_motors:
- range_mins[motor] = 0
- range_maxes[motor] = 4095
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=0,
- homing_offset=homing_offsets[motor],
- range_min=range_mins[motor],
- range_max=range_maxes[motor],
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- logger.info(f"Calibration saved to {self.calibration_fpath}")
-
- def configure(self) -> None:
- with self.bus.torque_disabled():
- self.bus.configure_motors()
- # Use 'extended position mode' for all motors except the gripper, because in joint mode
- # the servos can't rotate more than 360 degrees (from 0 to 4095). If a mistake is made
- # while assembling the arm, a servo could end up at position 0 or 4095 at a crucial point.
- for motor in self.bus.motors:
- if motor != "gripper":
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- # Use 'current-based position control' for the gripper so that it is limited by its
- # current limit. For the follower gripper, this means it can grasp an object without
- # applying excessive force, even though its goal position is a complete grasp (both
- # gripper fingers are commanded to close until they touch). For the leader gripper, it
- # means we can use it as a physical trigger: we can push it with a finger to make it
- # move, and it returns to its original target position when the force is released.
- self.bus.write("Operating_Mode", "gripper", OperatingMode.CURRENT_POSITION.value)
-
- # Set better PID values to close the gap between recorded states and actions
- # TODO(rcadene): Implement an automatic procedure to set optimal PID values for each motor
- self.bus.write("Position_P_Gain", "elbow_flex", 1500)
- self.bus.write("Position_I_Gain", "elbow_flex", 0)
- self.bus.write("Position_D_Gain", "elbow_flex", 600)
-
- def setup_motors(self) -> None:
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- # Read arm position
- start = time.perf_counter()
- obs_dict = self.bus.sync_read("Present_Position")
- obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read state: {dt_ms:.1f}ms")
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- start = time.perf_counter()
- obs_dict[cam_key] = cam.async_read()
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms")
-
- return obs_dict
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- """Command arm to move to a target joint configuration.
-
- The relative action magnitude may be clipped depending on the configuration parameter
- `max_relative_target`. In this case, the action sent differs from the original action.
- Thus, this function always returns the action actually sent.
-
- Args:
- action (RobotAction): The goal positions for the motors.
-
- Returns:
- RobotAction: The action sent to the motors, potentially clipped.
- """
-
- goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
-
- # Cap goal position when too far away from present position.
- # /!\ Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.bus.sync_read("Present_Position")
- goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
- goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
-
- # Send goal position to the arm
- self.bus.sync_write("Goal_Position", goal_pos)
- return {f"{motor}.pos": val for motor, val in goal_pos.items()}
-
- @check_if_not_connected
- def disconnect(self):
- self.bus.disconnect(self.config.disable_torque_on_disconnect)
- for cam in self.cameras.values():
- cam.disconnect()
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/robots/lekiwi/__init__.py b/lerobot/src/lerobot/robots/lekiwi/__init__.py
deleted file mode 100644
index c4f4724cfdb1a3893a227c0fe470f90beea52ed7..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/lekiwi/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_lekiwi import LeKiwiClientConfig, LeKiwiConfig
-from .lekiwi import LeKiwi
-from .lekiwi_client import LeKiwiClient
diff --git a/lerobot/src/lerobot/robots/lekiwi/config_lekiwi.py b/lerobot/src/lerobot/robots/lekiwi/config_lekiwi.py
deleted file mode 100644
index 6d252072177152f10a5de7e38781ce1d62fa22be..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/lekiwi/config_lekiwi.py
+++ /dev/null
@@ -1,96 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.cameras.configs import CameraConfig, Cv2Rotation
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-
-from ..config import RobotConfig
-
-
-def lekiwi_cameras_config() -> dict[str, CameraConfig]:
- return {
- "front": OpenCVCameraConfig(
- index_or_path="/dev/video0", fps=30, width=640, height=480, rotation=Cv2Rotation.ROTATE_180
- ),
- "wrist": OpenCVCameraConfig(
- index_or_path="/dev/video2", fps=30, width=480, height=640, rotation=Cv2Rotation.ROTATE_90
- ),
- }
-
-
-@RobotConfig.register_subclass("lekiwi")
-@dataclass
-class LeKiwiConfig(RobotConfig):
- port: str = "/dev/ttyACM0" # port to connect to the bus
-
- disable_torque_on_disconnect: bool = True
-
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a dictionary that maps motor
- # names to the max_relative_target value for that motor.
- max_relative_target: float | dict[str, float] | None = None
-
- cameras: dict[str, CameraConfig] = field(default_factory=lekiwi_cameras_config)
-
- # Set to `True` for backward compatibility with previous policies/dataset
- use_degrees: bool = False
-
-
-@dataclass
-class LeKiwiHostConfig:
- # Network Configuration
- port_zmq_cmd: int = 5555
- port_zmq_observations: int = 5556
-
- # Duration of the application
- connection_time_s: int = 30
-
- # Watchdog: stop the robot if no command is received for over 0.5 seconds.
- watchdog_timeout_ms: int = 500
-
- # If the robot jitters, decrease this frequency and monitor CPU load with `top`.
- max_loop_freq_hz: int = 30
-
-
-@RobotConfig.register_subclass("lekiwi_client")
-@dataclass
-class LeKiwiClientConfig(RobotConfig):
- # Network Configuration
- remote_ip: str
- port_zmq_cmd: int = 5555
- port_zmq_observations: int = 5556
-
- teleop_keys: dict[str, str] = field(
- default_factory=lambda: {
- # Movement
- "forward": "w",
- "backward": "s",
- "left": "a",
- "right": "d",
- "rotate_left": "z",
- "rotate_right": "x",
- # Speed control
- "speed_up": "r",
- "speed_down": "f",
- # quit teleop
- "quit": "q",
- }
- )
-
- cameras: dict[str, CameraConfig] = field(default_factory=lekiwi_cameras_config)
-
- polling_timeout_ms: int = 15
- connect_timeout_s: int = 5
diff --git a/lerobot/src/lerobot/robots/lekiwi/lekiwi.mdx b/lerobot/src/lerobot/robots/lekiwi/lekiwi.mdx
deleted file mode 100644
index f651589989a9ba21671a17f076a54d00f0256a5e..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/lekiwi/lekiwi.mdx
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/lekiwi.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/robots/lekiwi/lekiwi.py b/lerobot/src/lerobot/robots/lekiwi/lekiwi.py
deleted file mode 100644
index 6345d6b06d9440c546b85716a987b9f7f78b407a..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/lekiwi/lekiwi.py
+++ /dev/null
@@ -1,417 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from functools import cached_property
-from itertools import chain
-from typing import Any
-
-import numpy as np
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.feetech import (
- FeetechMotorsBus,
- OperatingMode,
-)
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..robot import Robot
-from ..utils import ensure_safe_goal_position
-from .config_lekiwi import LeKiwiConfig
-
-logger = logging.getLogger(__name__)
-
-
-class LeKiwi(Robot):
- """
- The robot consists of a three-omniwheel mobile base and a remote follower arm.
- The leader arm is connected locally (on the laptop) and its joint positions are recorded and then
- forwarded to the remote follower arm (after applying a safety clamp).
- In parallel, keyboard teleoperation is used to generate raw velocity commands for the wheels.
- """
-
- config_class = LeKiwiConfig
- name = "lekiwi"
-
- def __init__(self, config: LeKiwiConfig):
- super().__init__(config)
- self.config = config
- norm_mode_body = MotorNormMode.DEGREES if config.use_degrees else MotorNormMode.RANGE_M100_100
- self.bus = FeetechMotorsBus(
- port=self.config.port,
- motors={
- # arm
- "arm_shoulder_pan": Motor(1, "sts3215", norm_mode_body),
- "arm_shoulder_lift": Motor(2, "sts3215", norm_mode_body),
- "arm_elbow_flex": Motor(3, "sts3215", norm_mode_body),
- "arm_wrist_flex": Motor(4, "sts3215", norm_mode_body),
- "arm_wrist_roll": Motor(5, "sts3215", norm_mode_body),
- "arm_gripper": Motor(6, "sts3215", MotorNormMode.RANGE_0_100),
- # base
- "base_left_wheel": Motor(7, "sts3215", MotorNormMode.RANGE_M100_100),
- "base_back_wheel": Motor(8, "sts3215", MotorNormMode.RANGE_M100_100),
- "base_right_wheel": Motor(9, "sts3215", MotorNormMode.RANGE_M100_100),
- },
- calibration=self.calibration,
- )
- self.arm_motors = [motor for motor in self.bus.motors if motor.startswith("arm")]
- self.base_motors = [motor for motor in self.bus.motors if motor.startswith("base")]
- self.cameras = make_cameras_from_configs(config.cameras)
-
- @property
- def _state_ft(self) -> dict[str, type]:
- return dict.fromkeys(
- (
- "arm_shoulder_pan.pos",
- "arm_shoulder_lift.pos",
- "arm_elbow_flex.pos",
- "arm_wrist_flex.pos",
- "arm_wrist_roll.pos",
- "arm_gripper.pos",
- "x.vel",
- "y.vel",
- "theta.vel",
- ),
- float,
- )
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._state_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._state_ft
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between calibration values in the motor and the calibration file or no calibration file found"
- )
- self.calibrate()
-
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- if self.calibration:
- # Calibration file exists, ask user whether to use it or run new calibration
- user_input = input(
- f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: "
- )
- if user_input.strip().lower() != "c":
- logger.info(f"Writing calibration file associated with the id {self.id} to the motors")
- self.bus.write_calibration(self.calibration)
- return
- logger.info(f"\nRunning calibration of {self}")
-
- motors = self.arm_motors + self.base_motors
-
- self.bus.disable_torque(self.arm_motors)
- for name in self.arm_motors:
- self.bus.write("Operating_Mode", name, OperatingMode.POSITION.value)
-
- input("Move robot to the middle of its range of motion and press ENTER....")
- homing_offsets = self.bus.set_half_turn_homings(self.arm_motors)
-
- homing_offsets.update(dict.fromkeys(self.base_motors, 0))
-
- full_turn_motors = [
- motor for motor in motors if any(keyword in motor for keyword in ["wheel", "wrist_roll"])
- ]
- unknown_range_motors = [motor for motor in motors if motor not in full_turn_motors]
-
- print(
- f"Move all arm joints except {full_turn_motors} sequentially through their "
- "entire ranges of motion.\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self.bus.record_ranges_of_motion(unknown_range_motors)
- for name in full_turn_motors:
- range_mins[name] = 0
- range_maxes[name] = 4095
-
- self.calibration = {}
- for name, motor in self.bus.motors.items():
- self.calibration[name] = MotorCalibration(
- id=motor.id,
- drive_mode=0,
- homing_offset=homing_offsets[name],
- range_min=range_mins[name],
- range_max=range_maxes[name],
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-
- def configure(self):
- # Set-up arm actuators (position mode)
- # We assume that at connection time, the arm is in a rest position,
- # and torque can be safely disabled to run calibration.
- self.bus.disable_torque()
- self.bus.configure_motors()
- for name in self.arm_motors:
- self.bus.write("Operating_Mode", name, OperatingMode.POSITION.value)
- # Set P_Coefficient to lower value to avoid shakiness (Default is 32)
- self.bus.write("P_Coefficient", name, 16)
- # Set I_Coefficient and D_Coefficient to default value 0 and 32
- self.bus.write("I_Coefficient", name, 0)
- self.bus.write("D_Coefficient", name, 32)
-
- for name in self.base_motors:
- self.bus.write("Operating_Mode", name, OperatingMode.VELOCITY.value)
-
- self.bus.enable_torque()
-
- def setup_motors(self) -> None:
- for motor in chain(reversed(self.arm_motors), reversed(self.base_motors)):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @staticmethod
- def _degps_to_raw(degps: float) -> int:
- steps_per_deg = 4096.0 / 360.0
- speed_in_steps = degps * steps_per_deg
- speed_int = int(round(speed_in_steps))
- # Cap the value to fit within signed 16-bit range (-32768 to 32767)
- if speed_int > 0x7FFF:
- speed_int = 0x7FFF # 32767 -> maximum positive value
- elif speed_int < -0x8000:
- speed_int = -0x8000 # -32768 -> minimum negative value
- return speed_int
-
- @staticmethod
- def _raw_to_degps(raw_speed: int) -> float:
- steps_per_deg = 4096.0 / 360.0
- degps = raw_speed / steps_per_deg
- return degps
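-
- # Worked example for the two conversions above (exact values): with 4096 ticks per
- # revolution, steps_per_deg = 4096 / 360, so _degps_to_raw(90.0) == 1024 and
- # _raw_to_degps(1024) == 90.0.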
-
- def _body_to_wheel_raw(
- self,
- x: float,
- y: float,
- theta: float,
- wheel_radius: float = 0.05,
- base_radius: float = 0.125,
- max_raw: int = 3000,
- ) -> dict:
- """
- Convert desired body-frame velocities into wheel raw commands.
-
- Parameters:
- x : Linear velocity in x (m/s).
- y : Linear velocity in y (m/s).
- theta : Rotational velocity (deg/s).
- wheel_radius: Radius of each wheel (meters).
- base_radius : Distance from the center of rotation to each wheel (meters).
- max_raw : Maximum allowed raw command (ticks) per wheel.
-
- Returns:
- A dictionary with wheel raw commands:
- {"base_left_wheel": value, "base_back_wheel": value, "base_right_wheel": value}.
-
- Notes:
- - Internally, the method converts theta to rad/s for the kinematics.
- - The raw command is computed from the wheels angular speed in deg/s
- using _degps_to_raw(). If any command exceeds max_raw, all commands
- are scaled down proportionally.
- """
- # Convert rotational velocity from deg/s to rad/s.
- theta_rad = theta * (np.pi / 180.0)
- # Create the body velocity vector [x, y, theta_rad].
- velocity_vector = np.array([x, y, theta_rad])
-
- # Define the wheel mounting angles with a -90° offset.
- angles = np.radians(np.array([240, 0, 120]) - 90)
- # Build the kinematic matrix: each row maps body velocities to a wheel’s linear speed.
- # The third column (base_radius) accounts for the effect of rotation.
- m = np.array([[np.cos(a), np.sin(a), base_radius] for a in angles])
-
- # Compute each wheel’s linear speed (m/s) and then its angular speed (rad/s).
- wheel_linear_speeds = m.dot(velocity_vector)
- wheel_angular_speeds = wheel_linear_speeds / wheel_radius
-
- # Convert wheel angular speeds from rad/s to deg/s.
- wheel_degps = wheel_angular_speeds * (180.0 / np.pi)
-
- # Scaling
- steps_per_deg = 4096.0 / 360.0
- raw_floats = [abs(degps) * steps_per_deg for degps in wheel_degps]
- max_raw_computed = max(raw_floats)
- if max_raw_computed > max_raw:
- scale = max_raw / max_raw_computed
- wheel_degps = wheel_degps * scale
-
- # Convert each wheel’s angular speed (deg/s) to a raw integer.
- wheel_raw = [self._degps_to_raw(deg) for deg in wheel_degps]
-
- return {
- "base_left_wheel": wheel_raw[0],
- "base_back_wheel": wheel_raw[1],
- "base_right_wheel": wheel_raw[2],
- }
-
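-    # Worked example of the kinematics above (using the default wheel_radius=0.05 m and
-    # base_radius=0.125 m): a pure rotation x=0, y=0, theta=90 deg/s gives
-    # theta_rad ≈ 1.571 rad/s, so every wheel's linear speed is base_radius * theta_rad
-    # ≈ 0.196 m/s, i.e. ≈ 3.927 rad/s ≈ 225 deg/s per wheel, which _degps_to_raw maps to
-    # round(225 * 4096 / 360) = 2560 raw ticks on all three wheels (below max_raw, so no
-    # proportional scaling is applied).
-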
- def _wheel_raw_to_body(
- self,
- left_wheel_speed,
- back_wheel_speed,
- right_wheel_speed,
- wheel_radius: float = 0.05,
- base_radius: float = 0.125,
- ) -> dict[str, Any]:
- """
- Convert wheel raw command feedback back into body-frame velocities.
-
-        Parameters:
-            left_wheel_speed : Raw speed feedback for the left wheel.
-            back_wheel_speed : Raw speed feedback for the back wheel.
-            right_wheel_speed: Raw speed feedback for the right wheel.
-            wheel_radius: Radius of each wheel (meters).
-            base_radius : Distance from the robot center to each wheel (meters).
-
-        Returns:
-            A dict (x.vel, y.vel, theta.vel), with x.vel and y.vel in m/s and theta.vel in deg/s.
- """
-
- # Convert each raw command back to an angular speed in deg/s.
- wheel_degps = np.array(
- [
- self._raw_to_degps(left_wheel_speed),
- self._raw_to_degps(back_wheel_speed),
- self._raw_to_degps(right_wheel_speed),
- ]
- )
-
- # Convert from deg/s to rad/s.
- wheel_radps = wheel_degps * (np.pi / 180.0)
- # Compute each wheel’s linear speed (m/s) from its angular speed.
- wheel_linear_speeds = wheel_radps * wheel_radius
-
- # Define the wheel mounting angles with a -90° offset.
- angles = np.radians(np.array([240, 0, 120]) - 90)
- m = np.array([[np.cos(a), np.sin(a), base_radius] for a in angles])
-
- # Solve the inverse kinematics: body_velocity = M⁻¹ · wheel_linear_speeds.
- m_inv = np.linalg.inv(m)
- velocity_vector = m_inv.dot(wheel_linear_speeds)
- x, y, theta_rad = velocity_vector
- theta = theta_rad * (180.0 / np.pi)
- return {
- "x.vel": x,
- "y.vel": y,
- "theta.vel": theta,
- } # m/s and deg/s
-
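-    # Since the same kinematic matrix is used in both directions, this method inverts
-    # _body_to_wheel_raw: feeding the raw commands from the example above (2560 on each
-    # wheel) back through it recovers x ≈ 0, y ≈ 0, theta ≈ 90 deg/s, up to the integer
-    # rounding of the raw ticks.
-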
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- # Read actuators position for arm and vel for base
- start = time.perf_counter()
- arm_pos = self.bus.sync_read("Present_Position", self.arm_motors)
- base_wheel_vel = self.bus.sync_read("Present_Velocity", self.base_motors)
-
- base_vel = self._wheel_raw_to_body(
- base_wheel_vel["base_left_wheel"],
- base_wheel_vel["base_back_wheel"],
- base_wheel_vel["base_right_wheel"],
- )
-
- arm_state = {f"{k}.pos": v for k, v in arm_pos.items()}
-
- obs_dict = {**arm_state, **base_vel}
-
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read state: {dt_ms:.1f}ms")
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- start = time.perf_counter()
- obs_dict[cam_key] = cam.async_read()
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms")
-
- return obs_dict
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- """Command lekiwi to move to a target joint configuration.
-
- The relative action magnitude may be clipped depending on the configuration parameter
-        `max_relative_target`. In this case, the action sent differs from the original action.
- Thus, this function always returns the action actually sent.
-
- Raises:
- RobotDeviceNotConnectedError: if robot is not connected.
-
- Returns:
- RobotAction: the action sent to the motors, potentially clipped.
- """
-
- arm_goal_pos = {k: v for k, v in action.items() if k.endswith(".pos")}
- base_goal_vel = {k: v for k, v in action.items() if k.endswith(".vel")}
-
- base_wheel_goal_vel = self._body_to_wheel_raw(
- base_goal_vel["x.vel"], base_goal_vel["y.vel"], base_goal_vel["theta.vel"]
- )
-
- # Cap goal position when too far away from present position.
- # /!\ Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.bus.sync_read("Present_Position", self.arm_motors)
- goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in arm_goal_pos.items()}
- arm_safe_goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
- arm_goal_pos = arm_safe_goal_pos
-
- # Send goal position to the actuators
- arm_goal_pos_raw = {k.replace(".pos", ""): v for k, v in arm_goal_pos.items()}
- self.bus.sync_write("Goal_Position", arm_goal_pos_raw)
- self.bus.sync_write("Goal_Velocity", base_wheel_goal_vel)
-
- return {**arm_goal_pos, **base_goal_vel}
-
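-    # Sketch of the clipping above with hypothetical numbers: assuming
-    # ensure_safe_goal_position caps each goal at present ± max_relative_target, then with
-    # max_relative_target=5.0, a goal of 42.0 for a joint currently at 30.0 would be reduced
-    # to 35.0 before being written, and the returned action reports 35.0 so that recorded
-    # data matches what was actually commanded.
-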
- def stop_base(self):
- self.bus.sync_write("Goal_Velocity", dict.fromkeys(self.base_motors, 0), num_retry=5)
- logger.info("Base motors stopped")
-
- @check_if_not_connected
- def disconnect(self):
- self.stop_base()
- self.bus.disconnect(self.config.disable_torque_on_disconnect)
- for cam in self.cameras.values():
- cam.disconnect()
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/robots/lekiwi/lekiwi_client.py b/lerobot/src/lerobot/robots/lekiwi/lekiwi_client.py
deleted file mode 100644
index 4305602de7f8575d41331c40a4e77cb1c608f046..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/lekiwi/lekiwi_client.py
+++ /dev/null
@@ -1,335 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# TODO(aliberts, Steven, Pepijn): use gRPC calls instead of zmq?
-
-import base64
-import json
-import logging
-from functools import cached_property
-
-import cv2
-import numpy as np
-
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.constants import ACTION, OBS_STATE
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-from lerobot.utils.errors import DeviceNotConnectedError
-
-from ..robot import Robot
-from .config_lekiwi import LeKiwiClientConfig
-
-
-class LeKiwiClient(Robot):
- config_class = LeKiwiClientConfig
- name = "lekiwi_client"
-
- def __init__(self, config: LeKiwiClientConfig):
- import zmq
-
- self._zmq = zmq
- super().__init__(config)
- self.config = config
- self.id = config.id
- self.robot_type = config.type
-
- self.remote_ip = config.remote_ip
- self.port_zmq_cmd = config.port_zmq_cmd
- self.port_zmq_observations = config.port_zmq_observations
-
- self.teleop_keys = config.teleop_keys
-
- self.polling_timeout_ms = config.polling_timeout_ms
- self.connect_timeout_s = config.connect_timeout_s
-
- self.zmq_context = None
- self.zmq_cmd_socket = None
- self.zmq_observation_socket = None
-
- self.last_frames = {}
-
- self.last_remote_state = {}
-
- # Define three speed levels and a current index
- self.speed_levels = [
- {"xy": 0.1, "theta": 30}, # slow
- {"xy": 0.2, "theta": 60}, # medium
- {"xy": 0.3, "theta": 90}, # fast
- ]
- self.speed_index = 0 # Start at slow
-
- self._is_connected = False
- self.logs = {}
-
- @cached_property
- def _state_ft(self) -> dict[str, type]:
- return dict.fromkeys(
- (
- "arm_shoulder_pan.pos",
- "arm_shoulder_lift.pos",
- "arm_elbow_flex.pos",
- "arm_wrist_flex.pos",
- "arm_wrist_roll.pos",
- "arm_gripper.pos",
- "x.vel",
- "y.vel",
- "theta.vel",
- ),
- float,
- )
-
- @cached_property
- def _state_order(self) -> tuple[str, ...]:
- return tuple(self._state_ft.keys())
-
- @cached_property
- def _cameras_ft(self) -> dict[str, tuple[int, int, int]]:
- return {name: (cfg.height, cfg.width, 3) for name, cfg in self.config.cameras.items()}
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._state_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._state_ft
-
- @property
- def is_connected(self) -> bool:
- return self._is_connected
-
- @property
- def is_calibrated(self) -> bool:
- pass
-
- @check_if_already_connected
- def connect(self) -> None:
- """Establishes ZMQ sockets with the remote mobile robot"""
-
- zmq = self._zmq
- self.zmq_context = zmq.Context()
- self.zmq_cmd_socket = self.zmq_context.socket(zmq.PUSH)
- zmq_cmd_locator = f"tcp://{self.remote_ip}:{self.port_zmq_cmd}"
- self.zmq_cmd_socket.connect(zmq_cmd_locator)
- self.zmq_cmd_socket.setsockopt(zmq.CONFLATE, 1)
-
- self.zmq_observation_socket = self.zmq_context.socket(zmq.PULL)
- zmq_observations_locator = f"tcp://{self.remote_ip}:{self.port_zmq_observations}"
- self.zmq_observation_socket.connect(zmq_observations_locator)
- self.zmq_observation_socket.setsockopt(zmq.CONFLATE, 1)
-
- poller = zmq.Poller()
- poller.register(self.zmq_observation_socket, zmq.POLLIN)
- socks = dict(poller.poll(self.connect_timeout_s * 1000))
- if self.zmq_observation_socket not in socks or socks[self.zmq_observation_socket] != zmq.POLLIN:
- raise DeviceNotConnectedError("Timeout waiting for LeKiwi Host to connect expired.")
-
- self._is_connected = True
-
- def calibrate(self) -> None:
- pass
-
- def _poll_and_get_latest_message(self) -> str | None:
- """Polls the ZMQ socket for a limited time and returns the latest message string."""
- zmq = self._zmq
- poller = zmq.Poller()
- poller.register(self.zmq_observation_socket, zmq.POLLIN)
-
- try:
- socks = dict(poller.poll(self.polling_timeout_ms))
- except zmq.ZMQError as e:
- logging.error(f"ZMQ polling error: {e}")
- return None
-
- if self.zmq_observation_socket not in socks:
- logging.info("No new data available within timeout.")
- return None
-
- last_msg = None
- while True:
- try:
- msg = self.zmq_observation_socket.recv_string(zmq.NOBLOCK)
- last_msg = msg
- except zmq.Again:
- break
-
- if last_msg is None:
- logging.warning("Poller indicated data, but failed to retrieve message.")
-
- return last_msg
-
- def _parse_observation_json(self, obs_string: str) -> RobotObservation | None:
- """Parses the JSON observation string."""
- try:
- return json.loads(obs_string)
- except json.JSONDecodeError as e:
- logging.error(f"Error decoding JSON observation: {e}")
- return None
-
- def _decode_image_from_b64(self, image_b64: str) -> np.ndarray | None:
- """Decodes a base64 encoded image string to an OpenCV image."""
- if not image_b64:
- return None
- try:
- jpg_data = base64.b64decode(image_b64)
- np_arr = np.frombuffer(jpg_data, dtype=np.uint8)
- frame = cv2.imdecode(np_arr, cv2.IMREAD_COLOR)
- if frame is None:
- logging.warning("cv2.imdecode returned None for an image.")
- return frame
- except (TypeError, ValueError) as e:
- logging.error(f"Error decoding base64 image data: {e}")
- return None
-
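-    # The host side produces the matching payload; the encoding direction (as written in
-    # lekiwi_host.py) is roughly:
-    #     ret, buffer = cv2.imencode(".jpg", frame, [int(cv2.IMWRITE_JPEG_QUALITY), 90])
-    #     image_b64 = base64.b64encode(buffer).decode("utf-8") if ret else ""
-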
- def _remote_state_from_obs(
- self, observation: RobotObservation
- ) -> tuple[dict[str, np.ndarray], RobotObservation]:
- """Extracts frames, and state from the parsed observation."""
-
- flat_state = {key: observation.get(key, 0.0) for key in self._state_order}
-
- state_vec = np.array([flat_state[key] for key in self._state_order], dtype=np.float32)
-
- obs_dict: RobotObservation = {**flat_state, OBS_STATE: state_vec}
-
- # Decode images
- current_frames: dict[str, np.ndarray] = {}
- for cam_name, image_b64 in observation.items():
- if cam_name not in self._cameras_ft:
- continue
- frame = self._decode_image_from_b64(image_b64)
- if frame is not None:
- current_frames[cam_name] = frame
-
- return current_frames, obs_dict
-
- def _get_data(self) -> tuple[dict[str, np.ndarray], RobotObservation]:
- """
- Polls the video socket for the latest observation data.
-
- Attempts to retrieve and decode the latest message within a short timeout.
- If successful, updates and returns the new frames, speed, and arm state.
- If no new data arrives or decoding fails, returns the last known values.
- """
-
- # 1. Get the latest message string from the socket
- latest_message_str = self._poll_and_get_latest_message()
-
- # 2. If no message, return cached data
- if latest_message_str is None:
- return self.last_frames, self.last_remote_state
-
- # 3. Parse the JSON message
- observation = self._parse_observation_json(latest_message_str)
-
- # 4. If JSON parsing failed, return cached data
- if observation is None:
- return self.last_frames, self.last_remote_state
-
- # 5. Process the valid observation data
- try:
- new_frames, new_state = self._remote_state_from_obs(observation)
- except Exception as e:
- logging.error(f"Error processing observation data, serving last observation: {e}")
- return self.last_frames, self.last_remote_state
-
- self.last_frames = new_frames
- self.last_remote_state = new_state
-
- return new_frames, new_state
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- """
-        Capture observations from the remote robot: current follower arm positions,
-        present wheel speeds (converted to body-frame velocities: x, y, theta),
-        and camera frames, all received over ZMQ.
- """
-
- frames, obs_dict = self._get_data()
-
- # Loop over each configured camera
- for cam_name, frame in frames.items():
- if frame is None:
- logging.warning("Frame is None")
- frame = np.zeros((640, 480, 3), dtype=np.uint8)
- obs_dict[cam_name] = frame
-
- return obs_dict
-
- def _from_keyboard_to_base_action(self, pressed_keys: np.ndarray):
- # Speed control
- if self.teleop_keys["speed_up"] in pressed_keys:
- self.speed_index = min(self.speed_index + 1, 2)
- if self.teleop_keys["speed_down"] in pressed_keys:
- self.speed_index = max(self.speed_index - 1, 0)
- speed_setting = self.speed_levels[self.speed_index]
- xy_speed = speed_setting["xy"] # e.g. 0.1, 0.25, or 0.4
- theta_speed = speed_setting["theta"] # e.g. 30, 60, or 90
-
- x_cmd = 0.0 # m/s forward/backward
- y_cmd = 0.0 # m/s lateral
- theta_cmd = 0.0 # deg/s rotation
-
- if self.teleop_keys["forward"] in pressed_keys:
- x_cmd += xy_speed
- if self.teleop_keys["backward"] in pressed_keys:
- x_cmd -= xy_speed
- if self.teleop_keys["left"] in pressed_keys:
- y_cmd += xy_speed
- if self.teleop_keys["right"] in pressed_keys:
- y_cmd -= xy_speed
- if self.teleop_keys["rotate_left"] in pressed_keys:
- theta_cmd += theta_speed
- if self.teleop_keys["rotate_right"] in pressed_keys:
- theta_cmd -= theta_speed
- return {
- "x.vel": x_cmd,
- "y.vel": y_cmd,
- "theta.vel": theta_cmd,
- }
-
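-    # Example of the mapping above (assuming the configured teleop_keys bindings): with the
-    # "forward" and "rotate_left" keys pressed at the slow speed level, this returns
-    # {"x.vel": 0.1, "y.vel": 0.0, "theta.vel": 30} (m/s, m/s, deg/s).
-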
- def configure(self):
- pass
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- """Command lekiwi to move to a target joint configuration. Translates to motor space + sends over ZMQ
-
- Args:
- action (RobotAction): array containing the goal positions for the motors.
- Raises:
- RobotDeviceNotConnectedError: if robot is not connected.
-
- Returns:
- np.ndarray: the action sent to the motors, potentially clipped.
- """
-
- self.zmq_cmd_socket.send_string(json.dumps(action)) # action is in motor space
-
- # TODO(Steven): Remove the np conversion when it is possible to record a non-numpy array value
- actions = np.array([action.get(k, 0.0) for k in self._state_order], dtype=np.float32)
-
- action_sent = {key: actions[i] for i, key in enumerate(self._state_order)}
- action_sent[ACTION] = actions
- return action_sent
-
- @check_if_not_connected
- def disconnect(self):
- """Cleans ZMQ comms"""
-
- self.zmq_observation_socket.close()
- self.zmq_cmd_socket.close()
- self.zmq_context.term()
- self._is_connected = False
diff --git a/lerobot/src/lerobot/robots/lekiwi/lekiwi_host.py b/lerobot/src/lerobot/robots/lekiwi/lekiwi_host.py
deleted file mode 100644
index 741005a8867c47fc55563dffe5edba86654cc8fb..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/lekiwi/lekiwi_host.py
+++ /dev/null
@@ -1,136 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import base64
-import json
-import logging
-import time
-from dataclasses import dataclass, field
-
-import cv2
-import draccus
-import zmq
-
-from .config_lekiwi import LeKiwiConfig, LeKiwiHostConfig
-from .lekiwi import LeKiwi
-
-
-@dataclass
-class LeKiwiServerConfig:
- """Configuration for the LeKiwi host script."""
-
- robot: LeKiwiConfig = field(default_factory=LeKiwiConfig)
- host: LeKiwiHostConfig = field(default_factory=LeKiwiHostConfig)
-
-
-class LeKiwiHost:
- def __init__(self, config: LeKiwiHostConfig):
- self.zmq_context = zmq.Context()
- self.zmq_cmd_socket = self.zmq_context.socket(zmq.PULL)
- self.zmq_cmd_socket.setsockopt(zmq.CONFLATE, 1)
- self.zmq_cmd_socket.bind(f"tcp://*:{config.port_zmq_cmd}")
-
- self.zmq_observation_socket = self.zmq_context.socket(zmq.PUSH)
- self.zmq_observation_socket.setsockopt(zmq.CONFLATE, 1)
- self.zmq_observation_socket.bind(f"tcp://*:{config.port_zmq_observations}")
-
- self.connection_time_s = config.connection_time_s
- self.watchdog_timeout_ms = config.watchdog_timeout_ms
- self.max_loop_freq_hz = config.max_loop_freq_hz
-
- def disconnect(self):
- self.zmq_observation_socket.close()
- self.zmq_cmd_socket.close()
- self.zmq_context.term()
-
-
-@draccus.wrap()
-def main(cfg: LeKiwiServerConfig):
- logging.info("Configuring LeKiwi")
- robot = LeKiwi(cfg.robot)
-
- logging.info("Connecting LeKiwi")
- robot.connect()
-
- logging.info("Starting HostAgent")
- host = LeKiwiHost(cfg.host)
-
- last_cmd_time = time.time()
- watchdog_active = False
- logging.info("Waiting for commands...")
- try:
- # Business logic
- start = time.perf_counter()
- duration = 0
- while duration < host.connection_time_s:
- loop_start_time = time.time()
- try:
- msg = host.zmq_cmd_socket.recv_string(zmq.NOBLOCK)
- data = dict(json.loads(msg))
- _action_sent = robot.send_action(data)
- last_cmd_time = time.time()
- watchdog_active = False
- except zmq.Again:
- if not watchdog_active:
- logging.warning("No command available")
- except Exception as e:
- logging.error("Message fetching failed: %s", e)
-
- now = time.time()
- if (now - last_cmd_time > host.watchdog_timeout_ms / 1000) and not watchdog_active:
- logging.warning(
- f"Command not received for more than {host.watchdog_timeout_ms} milliseconds. Stopping the base."
- )
- watchdog_active = True
- robot.stop_base()
-
- last_observation = robot.get_observation()
-
- # Encode ndarrays to base64 strings
- for cam_key, _ in robot.cameras.items():
- ret, buffer = cv2.imencode(
- ".jpg", last_observation[cam_key], [int(cv2.IMWRITE_JPEG_QUALITY), 90]
- )
- if ret:
- last_observation[cam_key] = base64.b64encode(buffer).decode("utf-8")
- else:
- last_observation[cam_key] = ""
-
- # Send the observation to the remote agent
- try:
- host.zmq_observation_socket.send_string(json.dumps(last_observation), flags=zmq.NOBLOCK)
- except zmq.Again:
- logging.info("Dropping observation, no client connected")
-
- # Ensure a short sleep to avoid overloading the CPU.
- elapsed = time.time() - loop_start_time
-
- time.sleep(max(1 / host.max_loop_freq_hz - elapsed, 0))
- duration = time.perf_counter() - start
- print("Cycle time reached.")
-
- except KeyboardInterrupt:
- print("Keyboard interrupt received. Exiting...")
- finally:
- print("Shutting down Lekiwi Host.")
- robot.disconnect()
- host.disconnect()
-
- logging.info("Finished LeKiwi cleanly")
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/robots/omx_follower/__init__.py b/lerobot/src/lerobot/robots/omx_follower/__init__.py
deleted file mode 100644
index 86a431cfcbeb91b5b255d952cc6bffcf64c3f753..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/omx_follower/__init__.py
+++ /dev/null
@@ -1,21 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# OMX is a fully open-source robot from ROBOTIS.
-# More information at: https://ai.robotis.com/omx/introduction_omx.html
-
-from .config_omx_follower import OmxFollowerConfig
-from .omx_follower import OmxFollower
diff --git a/lerobot/src/lerobot/robots/omx_follower/config_omx_follower.py b/lerobot/src/lerobot/robots/omx_follower/config_omx_follower.py
deleted file mode 100644
index f64788fbdb0f4fdfb6001dd6b106d14c990349d0..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/omx_follower/config_omx_follower.py
+++ /dev/null
@@ -1,39 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.cameras import CameraConfig
-
-from ..config import RobotConfig
-
-
-@RobotConfig.register_subclass("omx_follower")
-@dataclass
-class OmxFollowerConfig(RobotConfig):
- # Port to connect to the arm
- port: str
-
- disable_torque_on_disconnect: bool = True
-
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a dictionary that maps motor
- # names to the max_relative_target value for that motor.
- max_relative_target: float | dict[str, float] | None = None
-
- # cameras
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
-
- # Set to `True` for backward compatibility with previous policies/dataset
- use_degrees: bool = False
diff --git a/lerobot/src/lerobot/robots/omx_follower/omx_follower.py b/lerobot/src/lerobot/robots/omx_follower/omx_follower.py
deleted file mode 100644
index ca34997b7731f458632971e71a8ef6e6aab87ee2..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/omx_follower/omx_follower.py
+++ /dev/null
@@ -1,219 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from functools import cached_property
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.dynamixel import (
- DriveMode,
- DynamixelMotorsBus,
- OperatingMode,
-)
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..robot import Robot
-from ..utils import ensure_safe_goal_position
-from .config_omx_follower import OmxFollowerConfig
-
-logger = logging.getLogger(__name__)
-
-
-class OmxFollower(Robot):
- """
-    [OMX](https://github.com/ROBOTIS-GIT/open_manipulator), an open manipulator
-    developed by Woojin Wie and Junha Cha from [ROBOTIS](https://ai.robotis.com/).
- """
-
- config_class = OmxFollowerConfig
- name = "omx_follower"
-
- def __init__(self, config: OmxFollowerConfig):
- super().__init__(config)
- self.config = config
- norm_mode_body = MotorNormMode.DEGREES if config.use_degrees else MotorNormMode.RANGE_M100_100
- self.bus = DynamixelMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pan": Motor(11, "xl430-w250", norm_mode_body),
- "shoulder_lift": Motor(12, "xl430-w250", norm_mode_body),
- "elbow_flex": Motor(13, "xl430-w250", norm_mode_body),
- "wrist_flex": Motor(14, "xl330-m288", norm_mode_body),
- "wrist_roll": Motor(15, "xl330-m288", norm_mode_body),
- "gripper": Motor(16, "xl330-m288", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- )
- self.cameras = make_cameras_from_configs(config.cameras)
-
- @property
- def _motors_ft(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._motors_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._motors_ft
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- """
- For OMX robots that come pre-calibrated:
- - If default calibration from package doesn't match motors, read from motors and save
- - This allows using pre-calibrated robots without manual calibration
- - If no calibration file exists, use factory default values (homing_offset=0, range_min=0, range_max=4095)
- """
-
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between calibration values in the motor and the calibration file or no calibration file found"
- )
- self.calibrate()
-
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- self.bus.disable_torque()
- logger.info(f"\nUsing factory default calibration values for {self}")
- logger.info(f"\nWriting default configuration of {self} to the motors")
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- for motor in self.bus.motors:
- self.bus.write("Drive_Mode", motor, DriveMode.NON_INVERTED.value)
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=0,
- homing_offset=0,
- range_min=0,
- range_max=4095,
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- logger.info(f"Calibration saved to {self.calibration_fpath}")
-
- def configure(self) -> None:
- with self.bus.torque_disabled():
- self.bus.configure_motors()
-            # Use 'extended position mode' for all motors except the gripper, because in joint mode
-            # the servos can't rotate more than 360 degrees (from 0 to 4095). Mistakes can also
-            # happen while assembling the arm: you could end up with a servo at position 0 or 4095
-            # at a crucial point.
- for motor in self.bus.motors:
- if motor != "gripper":
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
-            # Use 'current-based position control' for the gripper so that it is limited by its
-            # current. For the follower gripper, this means it can grasp an object without forcing
-            # too much even though its goal position is a complete grasp (both gripper fingers are
-            # ordered to join and reach a touch). For the leader gripper, it means we can use it as
-            # a physical trigger, since we can push it with a finger to make it move, and it will
-            # return to its original target position when we release the force.
- self.bus.write("Operating_Mode", "gripper", OperatingMode.CURRENT_POSITION.value)
-
- # Set better PID values to close the gap between recorded states and actions
- # TODO(rcadene): Implement an automatic procedure to set optimal PID values for each motor
- self.bus.write("Position_P_Gain", "elbow_flex", 1500)
- self.bus.write("Position_I_Gain", "elbow_flex", 0)
- self.bus.write("Position_D_Gain", "elbow_flex", 600)
-
- def setup_motors(self) -> None:
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- # Read arm position
- start = time.perf_counter()
- obs_dict = self.bus.sync_read("Present_Position")
- obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read state: {dt_ms:.1f}ms")
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- start = time.perf_counter()
- obs_dict[cam_key] = cam.async_read()
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms")
-
- return obs_dict
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- """Command arm to move to a target joint configuration.
-
- The relative action magnitude may be clipped depending on the configuration parameter
-        `max_relative_target`. In this case, the action sent differs from the original action.
- Thus, this function always returns the action actually sent.
-
- Args:
- action (RobotAction): The goal positions for the motors.
-
- Returns:
- RobotAction: The action sent to the motors, potentially clipped.
- """
-
- goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
-
- # Cap goal position when too far away from present position.
- # /!\ Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.bus.sync_read("Present_Position")
- goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
- goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
-
- # Send goal position to the arm
- self.bus.sync_write("Goal_Position", goal_pos)
- return {f"{motor}.pos": val for motor, val in goal_pos.items()}
-
- @check_if_not_connected
- def disconnect(self):
- self.bus.disconnect(self.config.disable_torque_on_disconnect)
- for cam in self.cameras.values():
- cam.disconnect()
-
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/robots/reachy2/__init__.py b/lerobot/src/lerobot/robots/reachy2/__init__.py
deleted file mode 100644
index ef9d3c4764d32fe38461980a832e838d27bdc4b8..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/reachy2/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .configuration_reachy2 import Reachy2RobotConfig
-from .robot_reachy2 import (
- REACHY2_ANTENNAS_JOINTS,
- REACHY2_L_ARM_JOINTS,
- REACHY2_NECK_JOINTS,
- REACHY2_R_ARM_JOINTS,
- REACHY2_VEL,
- Reachy2Robot,
-)
diff --git a/lerobot/src/lerobot/robots/reachy2/configuration_reachy2.py b/lerobot/src/lerobot/robots/reachy2/configuration_reachy2.py
deleted file mode 100644
index e67145ca5058856b0e2d4474b812eebc64e45e84..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/reachy2/configuration_reachy2.py
+++ /dev/null
@@ -1,117 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.cameras import CameraConfig
-from lerobot.cameras.configs import ColorMode
-from lerobot.cameras.reachy2_camera import Reachy2CameraConfig
-
-from ..config import RobotConfig
-
-
-@RobotConfig.register_subclass("reachy2")
-@dataclass
-class Reachy2RobotConfig(RobotConfig):
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors.
- max_relative_target: float | None = None
-
- # IP address of the Reachy 2 robot
- ip_address: str | None = "localhost"
- # Port of the Reachy 2 robot
- port: int = 50065
-
- # If True, turn_off_smoothly() will be sent to the robot before disconnecting.
- disable_torque_on_disconnect: bool = False
-
- # Tag for external commands control
- # Set to True if you use an external commands system to control the robot,
- # such as the official teleoperation application: https://github.com/pollen-robotics/Reachy2Teleoperation
- # If True, robot.send_action() will not send commands to the robot.
- use_external_commands: bool = False
-
- # Robot parts
- # Set to False to not add the corresponding joints part to the robot list of joints.
- # By default, all parts are set to True.
- with_mobile_base: bool = True
- with_l_arm: bool = True
- with_r_arm: bool = True
- with_neck: bool = True
- with_antennas: bool = True
-
- # Robot cameras
- # Set to True if you want to use the corresponding cameras in the observations.
- # By default, no camera is used.
- with_left_teleop_camera: bool = False
- with_right_teleop_camera: bool = False
- with_torso_camera: bool = False
-
- # Camera parameters
- camera_width: int = 640
- camera_height: int = 480
-
- # For cameras other than the 3 default Reachy 2 cameras.
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
-
- def __post_init__(self) -> None:
- # Add cameras with same ip_address as the robot
- if self.with_left_teleop_camera:
- self.cameras["teleop_left"] = Reachy2CameraConfig(
- name="teleop",
- image_type="left",
- ip_address=self.ip_address,
- port=self.port,
- width=self.camera_width,
- height=self.camera_height,
- fps=30, # Not configurable for Reachy 2 cameras
- color_mode=ColorMode.RGB,
- )
- if self.with_right_teleop_camera:
- self.cameras["teleop_right"] = Reachy2CameraConfig(
- name="teleop",
- image_type="right",
- ip_address=self.ip_address,
- port=self.port,
- width=self.camera_width,
- height=self.camera_height,
- fps=30, # Not configurable for Reachy 2 cameras
- color_mode=ColorMode.RGB,
- )
- if self.with_torso_camera:
- self.cameras["torso_rgb"] = Reachy2CameraConfig(
- name="depth",
- image_type="rgb",
- ip_address=self.ip_address,
- port=self.port,
- width=self.camera_width,
- height=self.camera_height,
- fps=30, # Not configurable for Reachy 2 cameras
- color_mode=ColorMode.RGB,
- )
-
- super().__post_init__()
-
- if not (
- self.with_mobile_base
- or self.with_l_arm
- or self.with_r_arm
- or self.with_neck
- or self.with_antennas
- ):
- raise ValueError(
- "No Reachy2Robot part used.\n"
- "At least one part of the robot must be set to True "
- "(with_mobile_base, with_l_arm, with_r_arm, with_neck, with_antennas)"
- )
diff --git a/lerobot/src/lerobot/robots/reachy2/robot_reachy2.py b/lerobot/src/lerobot/robots/reachy2/robot_reachy2.py
deleted file mode 100644
index a59fd13190ba56ddd6108593fe74701a169a007b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/reachy2/robot_reachy2.py
+++ /dev/null
@@ -1,235 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from __future__ import annotations
-
-import time
-from typing import TYPE_CHECKING, Any
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.import_utils import _reachy2_sdk_available
-
-from ..robot import Robot
-from ..utils import ensure_safe_goal_position
-from .configuration_reachy2 import Reachy2RobotConfig
-
-if TYPE_CHECKING or _reachy2_sdk_available:
- from reachy2_sdk import ReachySDK
-else:
- ReachySDK = None
-
-# {lerobot_keys: reachy2_sdk_keys}
-REACHY2_NECK_JOINTS = {
- "neck_yaw.pos": "head.neck.yaw",
- "neck_pitch.pos": "head.neck.pitch",
- "neck_roll.pos": "head.neck.roll",
-}
-
-REACHY2_ANTENNAS_JOINTS = {
- "l_antenna.pos": "head.l_antenna",
- "r_antenna.pos": "head.r_antenna",
-}
-
-REACHY2_R_ARM_JOINTS = {
- "r_shoulder_pitch.pos": "r_arm.shoulder.pitch",
- "r_shoulder_roll.pos": "r_arm.shoulder.roll",
- "r_elbow_yaw.pos": "r_arm.elbow.yaw",
- "r_elbow_pitch.pos": "r_arm.elbow.pitch",
- "r_wrist_roll.pos": "r_arm.wrist.roll",
- "r_wrist_pitch.pos": "r_arm.wrist.pitch",
- "r_wrist_yaw.pos": "r_arm.wrist.yaw",
- "r_gripper.pos": "r_arm.gripper",
-}
-
-REACHY2_L_ARM_JOINTS = {
- "l_shoulder_pitch.pos": "l_arm.shoulder.pitch",
- "l_shoulder_roll.pos": "l_arm.shoulder.roll",
- "l_elbow_yaw.pos": "l_arm.elbow.yaw",
- "l_elbow_pitch.pos": "l_arm.elbow.pitch",
- "l_wrist_roll.pos": "l_arm.wrist.roll",
- "l_wrist_pitch.pos": "l_arm.wrist.pitch",
- "l_wrist_yaw.pos": "l_arm.wrist.yaw",
- "l_gripper.pos": "l_arm.gripper",
-}
-
-REACHY2_VEL = {
- "mobile_base.vx": "vx",
- "mobile_base.vy": "vy",
- "mobile_base.vtheta": "vtheta",
-}
-
-
-class Reachy2Robot(Robot):
- """
- [Reachy 2](https://www.pollen-robotics.com/reachy/), by Pollen Robotics.
- """
-
- config_class = Reachy2RobotConfig
- name = "reachy2"
-
- def __init__(self, config: Reachy2RobotConfig):
- super().__init__(config)
-
- self.config = config
- self.robot_type = self.config.type
- self.use_external_commands = self.config.use_external_commands
-
- self.reachy: None | ReachySDK = None
- self.cameras = make_cameras_from_configs(config.cameras)
-
- self.logs: dict[str, float] = {}
-
- self.joints_dict: dict[str, str] = self._generate_joints_dict()
-
- @property
- def observation_features(self) -> dict[str, Any]:
- return {**self.motors_features, **self.camera_features}
-
- @property
- def action_features(self) -> dict[str, type]:
- return self.motors_features
-
- @property
- def camera_features(self) -> dict[str, tuple[int | None, int | None, int]]:
- return {cam: (self.cameras[cam].height, self.cameras[cam].width, 3) for cam in self.cameras}
-
- @property
- def motors_features(self) -> dict[str, type]:
- if self.config.with_mobile_base:
- return {
- **dict.fromkeys(
- self.joints_dict.keys(),
- float,
- ),
- **dict.fromkeys(
- REACHY2_VEL.keys(),
- float,
- ),
- }
- else:
- return dict.fromkeys(self.joints_dict.keys(), float)
-
- @property
- def is_connected(self) -> bool:
- return self.reachy.is_connected() if self.reachy is not None else False
-
- def connect(self, calibrate: bool = False) -> None:
- self.reachy = ReachySDK(self.config.ip_address)
- if not self.is_connected:
- raise ConnectionError()
-
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
-
- def configure(self) -> None:
- if self.reachy is not None:
- self.reachy.turn_on()
- self.reachy.reset_default_limits()
-
- @property
- def is_calibrated(self) -> bool:
- return True
-
- def calibrate(self) -> None:
- pass
-
- def _generate_joints_dict(self) -> dict[str, str]:
- joints = {}
- if self.config.with_neck:
- joints.update(REACHY2_NECK_JOINTS)
- if self.config.with_l_arm:
- joints.update(REACHY2_L_ARM_JOINTS)
- if self.config.with_r_arm:
- joints.update(REACHY2_R_ARM_JOINTS)
- if self.config.with_antennas:
- joints.update(REACHY2_ANTENNAS_JOINTS)
- return joints
-
- def _get_state(self) -> dict[str, float]:
- if self.reachy is not None:
- pos_dict = {k: self.reachy.joints[v].present_position for k, v in self.joints_dict.items()}
- if not self.config.with_mobile_base:
- return pos_dict
- vel_dict = {k: self.reachy.mobile_base.odometry[v] for k, v in REACHY2_VEL.items()}
- return {**pos_dict, **vel_dict}
- else:
- return {}
-
- def get_observation(self) -> RobotObservation:
- obs_dict: RobotObservation = {}
-
- # Read Reachy 2 state
- before_read_t = time.perf_counter()
- obs_dict.update(self._get_state())
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- obs_dict[cam_key] = cam.async_read()
-
- return obs_dict
-
- def send_action(self, action: RobotAction) -> RobotAction:
- if self.reachy is not None:
- if not self.is_connected:
- raise ConnectionError()
-
- before_write_t = time.perf_counter()
-
- vel = {}
- goal_pos = {}
- for key, val in action.items():
- if key not in self.joints_dict:
- if key not in REACHY2_VEL:
- raise KeyError(f"Key '{key}' is not a valid motor key in Reachy 2.")
- else:
- vel[REACHY2_VEL[key]] = float(val)
- else:
- if not self.use_external_commands and self.config.max_relative_target is not None:
- goal_pos[key] = float(val)
- goal_present_pos = {
- key: (
- goal_pos[key],
- self.reachy.joints[self.joints_dict[key]].present_position,
- )
- }
- safe_goal_pos = ensure_safe_goal_position(
- goal_present_pos, float(self.config.max_relative_target)
- )
- val = safe_goal_pos[key]
- self.reachy.joints[self.joints_dict[key]].goal_position = float(val)
-
- if self.config.with_mobile_base:
- self.reachy.mobile_base.set_goal_speed(vel["vx"], vel["vy"], vel["vtheta"])
-
- # We don't send the goal positions if we control Reachy 2 externally
- if not self.use_external_commands:
- self.reachy.send_goal_positions()
- if self.config.with_mobile_base:
- self.reachy.mobile_base.send_speed_command()
-
- self.logs["write_pos_dt_s"] = time.perf_counter() - before_write_t
- return action
-
- def disconnect(self) -> None:
- if self.reachy is not None:
- for cam in self.cameras.values():
- cam.disconnect()
- if self.config.disable_torque_on_disconnect:
- self.reachy.turn_off_smoothly()
- self.reachy.disconnect()
diff --git a/lerobot/src/lerobot/robots/robot.py b/lerobot/src/lerobot/robots/robot.py
deleted file mode 100644
index 75952188d3053288ee15f6705a52b3fad07db779..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/robot.py
+++ /dev/null
@@ -1,185 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-import builtins
-from pathlib import Path
-
-import draccus
-
-from lerobot.motors import MotorCalibration
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.constants import HF_LEROBOT_CALIBRATION, ROBOTS
-
-from .config import RobotConfig
-
-
-# TODO(aliberts): action/obs typing such as Generic[ObsType, ActType] similar to gym.Env ?
-# https://github.com/Farama-Foundation/Gymnasium/blob/3287c869f9a48d99454306b0d4b4ec537f0f35e3/gymnasium/core.py#L23
-class Robot(abc.ABC):
- """
- The base abstract class for all LeRobot-compatible robots.
-
- This class provides a standardized interface for interacting with physical robots.
- Subclasses must implement all abstract methods and properties to be usable.
-
- Attributes:
- config_class (RobotConfig): The expected configuration class for this robot.
- name (str): The unique robot name used to identify this robot type.
- """
-
- # Set these in ALL subclasses
- config_class: builtins.type[RobotConfig]
- name: str
-
- def __init__(self, config: RobotConfig):
- self.robot_type = self.name
- self.id = config.id
- self.calibration_dir = (
- config.calibration_dir if config.calibration_dir else HF_LEROBOT_CALIBRATION / ROBOTS / self.name
- )
- self.calibration_dir.mkdir(parents=True, exist_ok=True)
- self.calibration_fpath = self.calibration_dir / f"{self.id}.json"
- self.calibration: dict[str, MotorCalibration] = {}
- if self.calibration_fpath.is_file():
- self._load_calibration()
-
- def __str__(self) -> str:
- return f"{self.id} {self.__class__.__name__}"
-
- # TODO(aliberts): create a proper Feature class for this that links with datasets
- @property
- @abc.abstractmethod
- def observation_features(self) -> dict:
- """
- A dictionary describing the structure and types of the observations produced by the robot.
- Its structure (keys) should match the structure of what is returned by :pymeth:`get_observation`.
- Values for the dict should either be:
- - The type of the value if it's a simple value, e.g. `float` for single proprioceptive value (a joint's position/velocity)
- - A tuple representing the shape if it's an array-type value, e.g. `(height, width, channel)` for images
-
- Note: this property should be able to be called regardless of whether the robot is connected or not.
- """
- pass
-
- @property
- @abc.abstractmethod
- def action_features(self) -> dict:
- """
- A dictionary describing the structure and types of the actions expected by the robot. Its structure
- (keys) should match the structure of what is passed to :pymeth:`send_action`. Values for the dict
- should be the type of the value if it's a simple value, e.g. `float` for single proprioceptive value
- (a joint's goal position/velocity)
-
- Note: this property should be able to be called regardless of whether the robot is connected or not.
- """
- pass
-
- @property
- @abc.abstractmethod
- def is_connected(self) -> bool:
- """
- Whether the robot is currently connected or not. If `False`, calling :pymeth:`get_observation` or
- :pymeth:`send_action` should raise an error.
- """
- pass
-
- @abc.abstractmethod
- def connect(self, calibrate: bool = True) -> None:
- """
- Establish communication with the robot.
-
- Args:
- calibrate (bool): If True, automatically calibrate the robot after connecting if it's not
-                calibrated or needs calibration (this is hardware-dependent).
- """
- pass
-
- @property
- @abc.abstractmethod
- def is_calibrated(self) -> bool:
- """Whether the robot is currently calibrated or not. Should be always `True` if not applicable"""
- pass
-
- @abc.abstractmethod
- def calibrate(self) -> None:
- """
- Calibrate the robot if applicable. If not, this should be a no-op.
-
- This method should collect any necessary data (e.g., motor offsets) and update the
- :pyattr:`calibration` dictionary accordingly.
- """
- pass
-
- def _load_calibration(self, fpath: Path | None = None) -> None:
- """
- Helper to load calibration data from the specified file.
-
- Args:
- fpath (Path | None): Optional path to the calibration file. Defaults to `self.calibration_fpath`.
- """
- fpath = self.calibration_fpath if fpath is None else fpath
- with open(fpath) as f, draccus.config_type("json"):
- self.calibration = draccus.load(dict[str, MotorCalibration], f)
-
- def _save_calibration(self, fpath: Path | None = None) -> None:
- """
- Helper to save calibration data to the specified file.
-
- Args:
- fpath (Path | None): Optional path to save the calibration file. Defaults to `self.calibration_fpath`.
- """
- fpath = self.calibration_fpath if fpath is None else fpath
- with open(fpath, "w") as f, draccus.config_type("json"):
- draccus.dump(self.calibration, f, indent=4)
-
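-    # With the MotorCalibration fields seen in the robot implementations above, a saved
-    # calibration file would look roughly like this (motor name and values are illustrative):
-    #     {
-    #         "shoulder_pan": {
-    #             "id": 1,
-    #             "drive_mode": 0,
-    #             "homing_offset": -1024,
-    #             "range_min": 0,
-    #             "range_max": 4095
-    #         }
-    #     }
-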
- @abc.abstractmethod
- def configure(self) -> None:
- """
- Apply any one-time or runtime configuration to the robot.
- This may include setting motor parameters, control modes, or initial state.
- """
- pass
-
- @abc.abstractmethod
- def get_observation(self) -> RobotObservation:
- """
- Retrieve the current observation from the robot.
-
- Returns:
- RobotObservation: A flat dictionary representing the robot's current sensory state. Its structure
- should match :pymeth:`observation_features`.
- """
-
- pass
-
- @abc.abstractmethod
- def send_action(self, action: RobotAction) -> RobotAction:
- """
- Send an action command to the robot.
-
- Args:
- action (RobotAction): Dictionary representing the desired action. Its structure should match
- :pymeth:`action_features`.
-
- Returns:
- RobotAction: The action actually sent to the motors potentially clipped or modified, e.g. by
- safety limits on velocity.
- """
- pass
-
- @abc.abstractmethod
- def disconnect(self) -> None:
- """Disconnect from the robot and perform any necessary cleanup."""
- pass
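-
-
-# A minimal sketch of a subclass satisfying this interface (a hypothetical robot, shown as
-# a comment; names and values are illustrative only):
-#
-#     class DummyRobot(Robot):
-#         config_class = RobotConfig
-#         name = "dummy"
-#
-#         def __init__(self, config: RobotConfig):
-#             super().__init__(config)
-#             self._connected = False
-#             self._pos = 0.0
-#
-#         @property
-#         def observation_features(self) -> dict:
-#             return {"joint_0.pos": float}
-#
-#         @property
-#         def action_features(self) -> dict:
-#             return {"joint_0.pos": float}
-#
-#         @property
-#         def is_connected(self) -> bool:
-#             return self._connected
-#
-#         @property
-#         def is_calibrated(self) -> bool:
-#             return True  # no calibration applicable
-#
-#         def connect(self, calibrate: bool = True) -> None:
-#             self._connected = True
-#
-#         def calibrate(self) -> None:
-#             pass
-#
-#         def configure(self) -> None:
-#             pass
-#
-#         def get_observation(self) -> RobotObservation:
-#             return {"joint_0.pos": self._pos}
-#
-#         def send_action(self, action: RobotAction) -> RobotAction:
-#             self._pos = float(action["joint_0.pos"])
-#             return action
-#
-#         def disconnect(self) -> None:
-#             self._connected = False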
diff --git a/lerobot/src/lerobot/robots/so_follower/__init__.py b/lerobot/src/lerobot/robots/so_follower/__init__.py
deleted file mode 100644
index dfe0e340d78350d4de2c7f9bd0cc35a7684aaca6..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/so_follower/__init__.py
+++ /dev/null
@@ -1,23 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_so_follower import (
- SO100FollowerConfig,
- SO101FollowerConfig,
- SOFollowerConfig,
- SOFollowerRobotConfig,
-)
-from .so_follower import SO100Follower, SO101Follower, SOFollower
diff --git a/lerobot/src/lerobot/robots/so_follower/config_so_follower.py b/lerobot/src/lerobot/robots/so_follower/config_so_follower.py
deleted file mode 100644
index 4a9f7bf3ec0e6a6be700a73968d12d1cf4e2fc89..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/so_follower/config_so_follower.py
+++ /dev/null
@@ -1,54 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-from typing import TypeAlias
-
-from lerobot.cameras import CameraConfig
-
-from ..config import RobotConfig
-
-
-@dataclass
-class SOFollowerConfig:
- """Base configuration class for SO Follower robots."""
-
- # Port to connect to the arm
- port: str
-
- disable_torque_on_disconnect: bool = True
-
- # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
- # Set this to a positive scalar to have the same value for all motors, or a dictionary that maps motor
- # names to the max_relative_target value for that motor.
- max_relative_target: float | dict[str, float] | None = None
-
- # cameras
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
-
- # Set to `True` for backward compatibility with previous policies/dataset
- use_degrees: bool = False
-
-
-@RobotConfig.register_subclass("so101_follower")
-@RobotConfig.register_subclass("so100_follower")
-@dataclass
-class SOFollowerRobotConfig(RobotConfig, SOFollowerConfig):
- pass
-
-
-SO100FollowerConfig: TypeAlias = SOFollowerRobotConfig
-SO101FollowerConfig: TypeAlias = SOFollowerRobotConfig
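-
-
-# Both registered names resolve to the same dataclass, so the aliases are interchangeable.
-# A hypothetical instantiation (the port value is illustrative):
-#
-#     config = SO101FollowerConfig(port="/dev/ttyACM0")
-#     assert isinstance(config, SOFollowerRobotConfig)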
diff --git a/lerobot/src/lerobot/robots/so_follower/robot_kinematic_processor.py b/lerobot/src/lerobot/robots/so_follower/robot_kinematic_processor.py
deleted file mode 100644
index 2d7b21ec4a4c75cf82dfe3abd6697c66bd7bed8f..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/so_follower/robot_kinematic_processor.py
+++ /dev/null
@@ -1,611 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-from typing import Any
-
-import numpy as np
-
-from lerobot.configs.types import FeatureType, PipelineFeatureType, PolicyFeature
-from lerobot.model.kinematics import RobotKinematics
-from lerobot.processor import (
- EnvTransition,
- ObservationProcessorStep,
- ProcessorStep,
- ProcessorStepRegistry,
- RobotAction,
- RobotActionProcessorStep,
- RobotObservation,
- TransitionKey,
-)
-from lerobot.utils.rotation import Rotation
-
-
-@ProcessorStepRegistry.register("ee_reference_and_delta")
-@dataclass
-class EEReferenceAndDelta(RobotActionProcessorStep):
- """
- Computes a target end-effector pose from a relative delta command.
-
- This step takes a desired change in position and orientation (`target_*`) and applies it to a
- reference end-effector pose to calculate an absolute target pose. The reference pose is derived
- from the current robot joint positions using forward kinematics.
-
- The processor can operate in two modes:
- 1. `use_latched_reference=True`: The reference pose is "latched" or saved at the moment the action
- is first enabled. Subsequent commands are relative to this fixed reference.
- 2. `use_latched_reference=False`: The reference pose is updated to the robot's current pose at
- every step.
-
- Attributes:
- kinematics: The robot's kinematic model for forward kinematics.
- end_effector_step_sizes: A dictionary scaling the input delta commands.
- motor_names: A list of motor names required for forward kinematics.
- use_latched_reference: If True, latch the reference pose on enable; otherwise, always use the
- current pose as the reference.
- reference_ee_pose: Internal state storing the latched reference pose.
- _prev_enabled: Internal state to detect the rising edge of the enable signal.
- _command_when_disabled: Internal state to hold the last command while disabled.
- """
-
- kinematics: RobotKinematics
- end_effector_step_sizes: dict
- motor_names: list[str]
- use_latched_reference: bool = (
- True # If True, latch reference on enable; if False, always use current pose
- )
- use_ik_solution: bool = False
-
- reference_ee_pose: np.ndarray | None = field(default=None, init=False, repr=False)
- _prev_enabled: bool = field(default=False, init=False, repr=False)
- _command_when_disabled: np.ndarray | None = field(default=None, init=False, repr=False)
-
- def action(self, action: RobotAction) -> RobotAction:
-        observation = self.transition.get(TransitionKey.OBSERVATION)
-
-        if observation is None:
-            raise ValueError("Joints observation is required for computing robot kinematics")
-        observation = observation.copy()
-
-        complementary_data = self.transition.get(TransitionKey.COMPLEMENTARY_DATA) or {}
-        if self.use_ik_solution and "IK_solution" in complementary_data:
-            q_raw = complementary_data["IK_solution"]
-        else:
-            q_raw = np.array(
-                [
-                    float(v)
-                    for k, v in observation.items()
-                    if isinstance(k, str)
-                    and k.endswith(".pos")
-                    and k.removesuffix(".pos") in self.motor_names
-                ],
-                dtype=float,
-            )
-
-        if len(q_raw) == 0:
-            raise ValueError("Joints observation is required for computing robot kinematics")
-
- # Current pose from FK on measured joints
- t_curr = self.kinematics.forward_kinematics(q_raw)
-
- enabled = bool(action.pop("enabled"))
- tx = float(action.pop("target_x"))
- ty = float(action.pop("target_y"))
- tz = float(action.pop("target_z"))
- wx = float(action.pop("target_wx"))
- wy = float(action.pop("target_wy"))
- wz = float(action.pop("target_wz"))
- gripper_vel = float(action.pop("gripper_vel"))
-
- desired = None
-
- if enabled:
- ref = t_curr
- if self.use_latched_reference:
- # Latched reference mode: latch reference at the rising edge
- if not self._prev_enabled or self.reference_ee_pose is None:
- self.reference_ee_pose = t_curr.copy()
- ref = self.reference_ee_pose if self.reference_ee_pose is not None else t_curr
-
- delta_p = np.array(
- [
- tx * self.end_effector_step_sizes["x"],
- ty * self.end_effector_step_sizes["y"],
- tz * self.end_effector_step_sizes["z"],
- ],
- dtype=float,
- )
-            r_delta = Rotation.from_rotvec([wx, wy, wz]).as_matrix()
-            desired = np.eye(4, dtype=float)
-            desired[:3, :3] = ref[:3, :3] @ r_delta
- desired[:3, 3] = ref[:3, 3] + delta_p
-
- self._command_when_disabled = desired.copy()
- else:
- # While disabled, keep sending the same command to avoid drift.
- if self._command_when_disabled is None:
- # If we've never had an enabled command yet, freeze current FK pose once.
- self._command_when_disabled = t_curr.copy()
- desired = self._command_when_disabled.copy()
-
- # Write action fields
- pos = desired[:3, 3]
- tw = Rotation.from_matrix(desired[:3, :3]).as_rotvec()
- action["ee.x"] = float(pos[0])
- action["ee.y"] = float(pos[1])
- action["ee.z"] = float(pos[2])
- action["ee.wx"] = float(tw[0])
- action["ee.wy"] = float(tw[1])
- action["ee.wz"] = float(tw[2])
- action["ee.gripper_vel"] = gripper_vel
-
- self._prev_enabled = enabled
- return action
-
- def reset(self):
- """Resets the internal state of the processor."""
- self._prev_enabled = False
- self.reference_ee_pose = None
- self._command_when_disabled = None
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- for feat in [
- "enabled",
- "target_x",
- "target_y",
- "target_z",
- "target_wx",
- "target_wy",
- "target_wz",
- "gripper_vel",
- ]:
- features[PipelineFeatureType.ACTION].pop(f"{feat}", None)
-
- for feat in ["x", "y", "z", "wx", "wy", "wz", "gripper_vel"]:
- features[PipelineFeatureType.ACTION][f"ee.{feat}"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- return features
-
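-# A minimal sketch of the delta-to-absolute pose math applied above (values are
-# illustrative; `ref` stands in for the latched or current reference pose):
-#
-#     ref = np.eye(4)
-#     steps = {"x": 0.002, "y": 0.002, "z": 0.002}
-#     tx, ty, tz, wx, wy, wz = 1.0, 0.0, -1.0, 0.0, 0.0, 0.1
-#     desired = np.eye(4)
-#     desired[:3, :3] = ref[:3, :3] @ Rotation.from_rotvec([wx, wy, wz]).as_matrix()
-#     desired[:3, 3] = ref[:3, 3] + np.array([tx * steps["x"], ty * steps["y"], tz * steps["z"]])
-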
-
-@ProcessorStepRegistry.register("ee_bounds_and_safety")
-@dataclass
-class EEBoundsAndSafety(RobotActionProcessorStep):
- """
- Clips the end-effector pose to predefined bounds and checks for unsafe jumps.
-
- This step ensures that the target end-effector pose remains within a safe operational workspace.
-    It also rejects commands whose position would jump more than `max_ee_step_m` from the previous step.
-
- Attributes:
- end_effector_bounds: A dictionary with "min" and "max" keys for position clipping.
- max_ee_step_m: The maximum allowed change in position (in meters) between steps.
- _last_pos: Internal state storing the last commanded position.
- """
-
- end_effector_bounds: dict
- max_ee_step_m: float = 0.05
- _last_pos: np.ndarray | None = field(default=None, init=False, repr=False)
-
- def action(self, action: RobotAction) -> RobotAction:
- x = action["ee.x"]
- y = action["ee.y"]
- z = action["ee.z"]
- wx = action["ee.wx"]
- wy = action["ee.wy"]
- wz = action["ee.wz"]
- # TODO(Steven): ee.gripper_vel does not need to be bounded
-
- if None in (x, y, z, wx, wy, wz):
- raise ValueError(
- "Missing required end-effector pose components: x, y, z, wx, wy, wz must all be present in action"
- )
-
- pos = np.array([x, y, z], dtype=float)
- twist = np.array([wx, wy, wz], dtype=float)
-
- # Clip position
- pos = np.clip(pos, self.end_effector_bounds["min"], self.end_effector_bounds["max"])
-
- # Check for jumps in position
-        if self._last_pos is not None:
-            dpos = pos - self._last_pos
-            n = float(np.linalg.norm(dpos))
-            if n > self.max_ee_step_m:
-                raise ValueError(f"EE jump {n:.3f}m > {self.max_ee_step_m}m")
-
- self._last_pos = pos
-
- action["ee.x"] = float(pos[0])
- action["ee.y"] = float(pos[1])
- action["ee.z"] = float(pos[2])
- action["ee.wx"] = float(twist[0])
- action["ee.wy"] = float(twist[1])
- action["ee.wz"] = float(twist[2])
- return action
-
- def reset(self):
- """Resets the last known position and orientation."""
- self._last_pos = None
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- return features
-
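-# A minimal sketch of the clipping and jump check above (bounds are illustrative):
-#
-#     bounds = {"min": np.array([-0.3, -0.3, 0.0]), "max": np.array([0.3, 0.3, 0.4])}
-#     pos = np.clip(np.array([0.5, 0.0, 0.2]), bounds["min"], bounds["max"])  # -> [0.3, 0.0, 0.2]
-#     last_pos = np.array([0.1, 0.0, 0.2])
-#     if float(np.linalg.norm(pos - last_pos)) > 0.05:  # max_ee_step_m
-#         raise ValueError("EE jump too large")
-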
-
-@ProcessorStepRegistry.register("inverse_kinematics_ee_to_joints")
-@dataclass
-class InverseKinematicsEEToJoints(RobotActionProcessorStep):
- """
- Computes desired joint positions from a target end-effector pose using inverse kinematics (IK).
-
- This step translates a Cartesian command (position and orientation of the end-effector) into
- the corresponding joint-space commands for each motor.
-
- Attributes:
- kinematics: The robot's kinematic model for inverse kinematics.
- motor_names: A list of motor names for which to compute joint positions.
- q_curr: Internal state storing the last joint positions, used as an initial guess for the IK solver.
- initial_guess_current_joints: If True, use the robot's current joint state as the IK guess.
- If False, use the solution from the previous step.
- """
-
- kinematics: RobotKinematics
- motor_names: list[str]
- q_curr: np.ndarray | None = field(default=None, init=False, repr=False)
- initial_guess_current_joints: bool = True
-
- def action(self, action: RobotAction) -> RobotAction:
- x = action.pop("ee.x")
- y = action.pop("ee.y")
- z = action.pop("ee.z")
- wx = action.pop("ee.wx")
- wy = action.pop("ee.wy")
- wz = action.pop("ee.wz")
- gripper_pos = action.pop("ee.gripper_pos")
-
- if None in (x, y, z, wx, wy, wz, gripper_pos):
- raise ValueError(
- "Missing required end-effector pose components: ee.x, ee.y, ee.z, ee.wx, ee.wy, ee.wz, ee.gripper_pos must all be present in action"
- )
-
-        observation = self.transition.get(TransitionKey.OBSERVATION)
-        if observation is None:
-            raise ValueError("Joints observation is required for computing robot kinematics")
-        observation = observation.copy()
-
-        q_raw = np.array(
-            [float(v) for k, v in observation.items() if isinstance(k, str) and k.endswith(".pos")],
-            dtype=float,
-        )
-        if len(q_raw) == 0:
-            raise ValueError("Joints observation is required for computing robot kinematics")
-
- if self.initial_guess_current_joints: # Use current joints as initial guess
- self.q_curr = q_raw
- else: # Use previous ik solution as initial guess
- if self.q_curr is None:
- self.q_curr = q_raw
-
- # Build desired 4x4 transform from pos + rotvec (twist)
- t_des = np.eye(4, dtype=float)
- t_des[:3, :3] = Rotation.from_rotvec([wx, wy, wz]).as_matrix()
- t_des[:3, 3] = [x, y, z]
-
- # Compute inverse kinematics
- q_target = self.kinematics.inverse_kinematics(self.q_curr, t_des)
- self.q_curr = q_target
-
-        # TODO: This is sensitive to the order of the motor_names -> q_target mapping
- for i, name in enumerate(self.motor_names):
- if name != "gripper":
- action[f"{name}.pos"] = float(q_target[i])
- else:
- action["gripper.pos"] = float(gripper_pos)
-
- return action
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- for feat in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
- features[PipelineFeatureType.ACTION].pop(f"ee.{feat}", None)
-
- for name in self.motor_names:
- features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- return features
-
- def reset(self):
- """Resets the initial guess for the IK solver."""
- self.q_curr = None
-
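-# A minimal sketch of the Cartesian-to-joint conversion above (`kin`, `q_guess`
-# and the pose values are illustrative placeholders):
-#
-#     t_des = np.eye(4, dtype=float)
-#     t_des[:3, :3] = Rotation.from_rotvec([0.0, 0.0, 0.1]).as_matrix()
-#     t_des[:3, 3] = [0.2, 0.0, 0.1]
-#     q_target = kin.inverse_kinematics(q_guess, t_des)  # one value per motor
-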
-
-@ProcessorStepRegistry.register("gripper_velocity_to_joint")
-@dataclass
-class GripperVelocityToJoint(RobotActionProcessorStep):
- """
- Converts a gripper velocity command into a target gripper joint position.
-
- This step integrates a normalized velocity command over time to produce a position command,
- taking the current gripper position as a starting point. It also supports a discrete mode
- where integer actions map to open, close, or no-op.
-
-    Attributes:
-        speed_factor: A scaling factor to convert the normalized velocity command to a position change.
-        clip_min: The minimum allowed gripper joint position.
-        clip_max: The maximum allowed gripper joint position.
-        discrete_gripper: If True, treat the input action as discrete; values [0, 1, 2] are shifted
-            to [-1, 0, +1] (so 1 is a no-op) and scaled by `clip_max`.
- """
-
- speed_factor: float = 20.0
- clip_min: float = 0.0
- clip_max: float = 100.0
- discrete_gripper: bool = False
-
- def action(self, action: RobotAction) -> RobotAction:
-        observation = self.transition.get(TransitionKey.OBSERVATION)
-        if observation is None:
-            raise ValueError("Joints observation is required for computing the gripper position")
-        observation = observation.copy()
-
-        gripper_vel = action.pop("ee.gripper_vel")
-
-        q_raw = np.array(
-            [float(v) for k, v in observation.items() if isinstance(k, str) and k.endswith(".pos")],
-            dtype=float,
-        )
-        if len(q_raw) == 0:
-            raise ValueError("Joints observation is required for computing the gripper position")
-
-        if self.discrete_gripper:
-            # Discrete gripper actions are in [0, 1, 2].
-            # Shift them to [-1, 0, +1] (so 1 is a no-op) and scale by clip_max.
-            gripper_vel = (gripper_vel - 1) * self.clip_max
-
- # Compute desired gripper position
- delta = gripper_vel * float(self.speed_factor)
- # TODO: This assumes gripper is the last specified joint in the robot
- gripper_pos = float(np.clip(q_raw[-1] + delta, self.clip_min, self.clip_max))
- action["ee.gripper_pos"] = gripper_pos
-
- return action
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- features[PipelineFeatureType.ACTION].pop("ee.gripper_vel", None)
- features[PipelineFeatureType.ACTION]["ee.gripper_pos"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- return features
-
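-# A minimal sketch of the velocity-to-position integration above (numbers are
-# illustrative; defaults are speed_factor=20.0 and the clip range [0, 100]):
-#
-#     gripper_vel = 0.5                # normalized velocity command
-#     current_pos = 40.0               # last observed gripper joint position
-#     target = float(np.clip(current_pos + gripper_vel * 20.0, 0.0, 100.0))  # -> 50.0
-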
-
-def compute_forward_kinematics_joints_to_ee(
- joints: dict[str, Any], kinematics: RobotKinematics, motor_names: list[str]
-) -> dict[str, Any]:
- motor_joint_values = [joints[f"{n}.pos"] for n in motor_names]
-
- q = np.array(motor_joint_values, dtype=float)
- t = kinematics.forward_kinematics(q)
- pos = t[:3, 3]
- tw = Rotation.from_matrix(t[:3, :3]).as_rotvec()
- gripper_pos = joints["gripper.pos"]
- for n in motor_names:
- joints.pop(f"{n}.pos")
- joints["ee.x"] = float(pos[0])
- joints["ee.y"] = float(pos[1])
- joints["ee.z"] = float(pos[2])
- joints["ee.wx"] = float(tw[0])
- joints["ee.wy"] = float(tw[1])
- joints["ee.wz"] = float(tw[2])
- joints["ee.gripper_pos"] = float(gripper_pos)
- return joints
-
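-# For example, a dict {"shoulder_pan.pos": ..., "gripper.pos": 30.0, ...} comes back with the
-# joint keys replaced by "ee.x" through "ee.wz" plus "ee.gripper_pos" (here 30.0).
-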
-
-@ProcessorStepRegistry.register("forward_kinematics_joints_to_ee_observation")
-@dataclass
-class ForwardKinematicsJointsToEEObservation(ObservationProcessorStep):
- """
- Computes the end-effector pose from joint positions using forward kinematics (FK).
-
- This step is typically used to add the robot's Cartesian pose to the observation space,
- which can be useful for visualization or as an input to a policy.
-
- Attributes:
- kinematics: The robot's kinematic model.
- """
-
- kinematics: RobotKinematics
- motor_names: list[str]
-
- def observation(self, observation: RobotObservation) -> RobotObservation:
- return compute_forward_kinematics_joints_to_ee(observation, self.kinematics, self.motor_names)
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- # We only use the ee pose in the dataset, so we don't need the joint positions
- for n in self.motor_names:
- features[PipelineFeatureType.OBSERVATION].pop(f"{n}.pos", None)
- # We specify the dataset features of this step that we want to be stored in the dataset
- for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
- features[PipelineFeatureType.OBSERVATION][f"ee.{k}"] = PolicyFeature(
- type=FeatureType.STATE, shape=(1,)
- )
- return features
-
-
-@ProcessorStepRegistry.register("forward_kinematics_joints_to_ee_action")
-@dataclass
-class ForwardKinematicsJointsToEEAction(RobotActionProcessorStep):
- """
- Computes the end-effector pose from joint positions using forward kinematics (FK).
-
-    This step is typically used to convert a joint-space action into its Cartesian end-effector
-    equivalent, e.g. when recording a dataset in end-effector space.
-
- Attributes:
- kinematics: The robot's kinematic model.
- """
-
- kinematics: RobotKinematics
- motor_names: list[str]
-
- def action(self, action: RobotAction) -> RobotAction:
- return compute_forward_kinematics_joints_to_ee(action, self.kinematics, self.motor_names)
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- # We only use the ee pose in the dataset, so we don't need the joint positions
- for n in self.motor_names:
- features[PipelineFeatureType.ACTION].pop(f"{n}.pos", None)
- # We specify the dataset features of this step that we want to be stored in the dataset
- for k in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
- features[PipelineFeatureType.ACTION][f"ee.{k}"] = PolicyFeature(
- type=FeatureType.STATE, shape=(1,)
- )
- return features
-
-
-@ProcessorStepRegistry.register(name="forward_kinematics_joints_to_ee")
-@dataclass
-class ForwardKinematicsJointsToEE(ProcessorStep):
- kinematics: RobotKinematics
- motor_names: list[str]
-
- def __post_init__(self):
- self.joints_to_ee_action_processor = ForwardKinematicsJointsToEEAction(
- kinematics=self.kinematics, motor_names=self.motor_names
- )
- self.joints_to_ee_observation_processor = ForwardKinematicsJointsToEEObservation(
- kinematics=self.kinematics, motor_names=self.motor_names
- )
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- if transition.get(TransitionKey.ACTION) is not None:
- transition = self.joints_to_ee_action_processor(transition)
- if transition.get(TransitionKey.OBSERVATION) is not None:
- transition = self.joints_to_ee_observation_processor(transition)
- return transition
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- if features[PipelineFeatureType.ACTION] is not None:
- features = self.joints_to_ee_action_processor.transform_features(features)
- if features[PipelineFeatureType.OBSERVATION] is not None:
- features = self.joints_to_ee_observation_processor.transform_features(features)
- return features
-
-
-@ProcessorStepRegistry.register("inverse_kinematics_rl_step")
-@dataclass
-class InverseKinematicsRLStep(ProcessorStep):
- """
- Computes desired joint positions from a target end-effector pose using inverse kinematics (IK).
-
- This is modified from the InverseKinematicsEEToJoints step to be used in the RL pipeline.
- """
-
- kinematics: RobotKinematics
- motor_names: list[str]
- q_curr: np.ndarray | None = field(default=None, init=False, repr=False)
- initial_guess_current_joints: bool = True
-
- def __call__(self, transition: EnvTransition) -> EnvTransition:
- new_transition = dict(transition)
- action = new_transition.get(TransitionKey.ACTION)
- if action is None:
- raise ValueError("Action is required for InverseKinematicsEEToJoints")
- action = dict(action)
-
- x = action.pop("ee.x")
- y = action.pop("ee.y")
- z = action.pop("ee.z")
- wx = action.pop("ee.wx")
- wy = action.pop("ee.wy")
- wz = action.pop("ee.wz")
- gripper_pos = action.pop("ee.gripper_pos")
-
- if None in (x, y, z, wx, wy, wz, gripper_pos):
- raise ValueError(
- "Missing required end-effector pose components: ee.x, ee.y, ee.z, ee.wx, ee.wy, ee.wz, ee.gripper_pos must all be present in action"
- )
-
-        observation = new_transition.get(TransitionKey.OBSERVATION)
-        if observation is None:
-            raise ValueError("Joints observation is required for computing robot kinematics")
-        observation = observation.copy()
-
-        q_raw = np.array(
-            [float(v) for k, v in observation.items() if isinstance(k, str) and k.endswith(".pos")],
-            dtype=float,
-        )
-        if len(q_raw) == 0:
-            raise ValueError("Joints observation is required for computing robot kinematics")
-
- if self.initial_guess_current_joints: # Use current joints as initial guess
- self.q_curr = q_raw
- else: # Use previous ik solution as initial guess
- if self.q_curr is None:
- self.q_curr = q_raw
-
- # Build desired 4x4 transform from pos + rotvec (twist)
- t_des = np.eye(4, dtype=float)
- t_des[:3, :3] = Rotation.from_rotvec([wx, wy, wz]).as_matrix()
- t_des[:3, 3] = [x, y, z]
-
- # Compute inverse kinematics
- q_target = self.kinematics.inverse_kinematics(self.q_curr, t_des)
- self.q_curr = q_target
-
-        # TODO: This is sensitive to the order of the motor_names -> q_target mapping
- for i, name in enumerate(self.motor_names):
- if name != "gripper":
- action[f"{name}.pos"] = float(q_target[i])
- else:
- action["gripper.pos"] = float(gripper_pos)
-
- new_transition[TransitionKey.ACTION] = action
- complementary_data = new_transition.get(TransitionKey.COMPLEMENTARY_DATA, {})
- complementary_data["IK_solution"] = q_target
- new_transition[TransitionKey.COMPLEMENTARY_DATA] = complementary_data
- return new_transition
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- for feat in ["x", "y", "z", "wx", "wy", "wz", "gripper_pos"]:
- features[PipelineFeatureType.ACTION].pop(f"ee.{feat}", None)
-
- for name in self.motor_names:
- features[PipelineFeatureType.ACTION][f"{name}.pos"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- return features
-
- def reset(self):
- """Resets the initial guess for the IK solver."""
- self.q_curr = None
diff --git a/lerobot/src/lerobot/robots/so_follower/so100.md b/lerobot/src/lerobot/robots/so_follower/so100.md
deleted file mode 100644
index ad1154e75a74a496aa74cb1ac1b545238d5174e4..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/so_follower/so100.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/so100.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/robots/so_follower/so101.md b/lerobot/src/lerobot/robots/so_follower/so101.md
deleted file mode 100644
index 27b89266029afbf0aa59be195cc0b4b6ee93ac26..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/so_follower/so101.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/so101.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/robots/so_follower/so_follower.py b/lerobot/src/lerobot/robots/so_follower/so_follower.py
deleted file mode 100644
index 1060b1ef892431666ca2ff6cd697fc1ca1660c32..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/so_follower/so_follower.py
+++ /dev/null
@@ -1,234 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from functools import cached_property
-from typing import TypeAlias
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.feetech import (
- FeetechMotorsBus,
- OperatingMode,
-)
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..robot import Robot
-from ..utils import ensure_safe_goal_position
-from .config_so_follower import SOFollowerRobotConfig
-
-logger = logging.getLogger(__name__)
-
-
-class SOFollower(Robot):
- """
- Generic SO follower base implementing common functionality for SO-100/101/10X.
- Designed to be subclassed with a per-hardware-model `config_class` and `name`.
- """
-
- config_class = SOFollowerRobotConfig
- name = "so_follower"
-
- def __init__(self, config: SOFollowerRobotConfig):
- super().__init__(config)
- self.config = config
- # choose normalization mode depending on config if available
- norm_mode_body = MotorNormMode.DEGREES if config.use_degrees else MotorNormMode.RANGE_M100_100
- self.bus = FeetechMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pan": Motor(1, "sts3215", norm_mode_body),
- "shoulder_lift": Motor(2, "sts3215", norm_mode_body),
- "elbow_flex": Motor(3, "sts3215", norm_mode_body),
- "wrist_flex": Motor(4, "sts3215", norm_mode_body),
- "wrist_roll": Motor(5, "sts3215", norm_mode_body),
- "gripper": Motor(6, "sts3215", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- )
- self.cameras = make_cameras_from_configs(config.cameras)
-
- @property
- def _motors_ft(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._motors_ft, **self._cameras_ft}
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return self._motors_ft
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- """
- We assume that at connection time, arm is in a rest position,
- and torque can be safely disabled to run calibration.
- """
-
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between calibration values in the motor and the calibration file or no calibration file found"
- )
- self.calibrate()
-
- for cam in self.cameras.values():
- cam.connect()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- if self.calibration:
- # Calibration file exists, ask user whether to use it or run new calibration
- user_input = input(
- f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: "
- )
- if user_input.strip().lower() != "c":
- logger.info(f"Writing calibration file associated with the id {self.id} to the motors")
- self.bus.write_calibration(self.calibration)
- return
-
- logger.info(f"\nRunning calibration of {self}")
- self.bus.disable_torque()
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
-
- input(f"Move {self} to the middle of its range of motion and press ENTER....")
- homing_offsets = self.bus.set_half_turn_homings()
-
-        # Record ranges of motion for every joint except the full-turn motor,
-        # whose range is the full encoder span (0-4095).
- full_turn_motor = "wrist_roll"
- unknown_range_motors = [motor for motor in self.bus.motors if motor != full_turn_motor]
- print(
- f"Move all joints except '{full_turn_motor}' sequentially through their "
- "entire ranges of motion.\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self.bus.record_ranges_of_motion(unknown_range_motors)
- range_mins[full_turn_motor] = 0
- range_maxes[full_turn_motor] = 4095
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=0,
- homing_offset=homing_offsets[motor],
- range_min=range_mins[motor],
- range_max=range_maxes[motor],
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-
- def configure(self) -> None:
- with self.bus.torque_disabled():
- self.bus.configure_motors()
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
- # Set P_Coefficient to lower value to avoid shakiness (Default is 32)
- self.bus.write("P_Coefficient", motor, 16)
- # Set I_Coefficient and D_Coefficient to default value 0 and 32
- self.bus.write("I_Coefficient", motor, 0)
- self.bus.write("D_Coefficient", motor, 32)
-
- if motor == "gripper":
- self.bus.write("Max_Torque_Limit", motor, 500) # 50% of max torque to avoid burnout
- self.bus.write("Protection_Current", motor, 250) # 50% of max current to avoid burnout
- self.bus.write("Overload_Torque", motor, 25) # 25% torque when overloaded
-
- def setup_motors(self) -> None:
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_observation(self) -> RobotObservation:
- # Read arm position
- start = time.perf_counter()
- obs_dict = self.bus.sync_read("Present_Position")
- obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read state: {dt_ms:.1f}ms")
-
- # Capture images from cameras
- for cam_key, cam in self.cameras.items():
- start = time.perf_counter()
- obs_dict[cam_key] = cam.async_read()
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read {cam_key}: {dt_ms:.1f}ms")
-
- return obs_dict
-
- @check_if_not_connected
- def send_action(self, action: RobotAction) -> RobotAction:
- """Command arm to move to a target joint configuration.
-
- The relative action magnitude may be clipped depending on the configuration parameter
-        `max_relative_target`. In this case, the action sent differs from the original action.
- Thus, this function always returns the action actually sent.
-
- Raises:
- RobotDeviceNotConnectedError: if robot is not connected.
-
- Returns:
- RobotAction: the action sent to the motors, potentially clipped.
- """
-
- goal_pos = {key.removesuffix(".pos"): val for key, val in action.items() if key.endswith(".pos")}
-
- # Cap goal position when too far away from present position.
- # /!\ Slower fps expected due to reading from the follower.
- if self.config.max_relative_target is not None:
- present_pos = self.bus.sync_read("Present_Position")
- goal_present_pos = {key: (g_pos, present_pos[key]) for key, g_pos in goal_pos.items()}
- goal_pos = ensure_safe_goal_position(goal_present_pos, self.config.max_relative_target)
-
- # Send goal position to the arm
- self.bus.sync_write("Goal_Position", goal_pos)
- return {f"{motor}.pos": val for motor, val in goal_pos.items()}
-
- @check_if_not_connected
- def disconnect(self):
- self.bus.disconnect(self.config.disable_torque_on_disconnect)
- for cam in self.cameras.values():
- cam.disconnect()
-
- logger.info(f"{self} disconnected.")
-
-
-SO100Follower: TypeAlias = SOFollower
-SO101Follower: TypeAlias = SOFollower
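-
-# A minimal control-loop sketch (the port and the +1.0 offset are illustrative):
-#
-#     robot = SOFollower(SOFollowerRobotConfig(port="/dev/ttyACM0"))
-#     robot.connect()
-#     obs = robot.get_observation()    # {"shoulder_pan.pos": ..., ..., "gripper.pos": ...}
-#     robot.send_action({"shoulder_pan.pos": obs["shoulder_pan.pos"] + 1.0})
-#     robot.disconnect()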
diff --git a/lerobot/src/lerobot/robots/unitree_g1/__init__.py b/lerobot/src/lerobot/robots/unitree_g1/__init__.py
deleted file mode 100644
index 7feb128bed5c2a8b57e6c33798dcbdb8b53c1229..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/unitree_g1/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_unitree_g1 import UnitreeG1Config
-from .unitree_g1 import UnitreeG1
diff --git a/lerobot/src/lerobot/robots/unitree_g1/config_unitree_g1.py b/lerobot/src/lerobot/robots/unitree_g1/config_unitree_g1.py
deleted file mode 100644
index bdd2bfa9be06581e77d6dc7f2846326d0504e3e9..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/unitree_g1/config_unitree_g1.py
+++ /dev/null
@@ -1,67 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.cameras import CameraConfig
-
-from ..config import RobotConfig
-
-_GAINS: dict[str, dict[str, list[float]]] = {
- "left_leg": {
- "kp": [150, 150, 150, 300, 40, 40],
- "kd": [2, 2, 2, 4, 2, 2],
- }, # pitch, roll, yaw, knee, ankle_pitch, ankle_roll
- "right_leg": {"kp": [150, 150, 150, 300, 40, 40], "kd": [2, 2, 2, 4, 2, 2]},
- "waist": {"kp": [250, 250, 250], "kd": [5, 5, 5]}, # yaw, roll, pitch
- "left_arm": {"kp": [80, 80, 80, 80], "kd": [3, 3, 3, 3]}, # shoulder_pitch/roll/yaw, elbow
- "left_wrist": {"kp": [40, 40, 40], "kd": [1.5, 1.5, 1.5]}, # roll, pitch, yaw
- "right_arm": {"kp": [80, 80, 80, 80], "kd": [3, 3, 3, 3]},
- "right_wrist": {"kp": [40, 40, 40], "kd": [1.5, 1.5, 1.5]},
- "other": {"kp": [80, 80, 80, 80, 80, 80], "kd": [3, 3, 3, 3, 3, 3]},
-}
-
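-# The flattened kp/kd lists follow the insertion order of _GAINS; the per-group
-# counts (6 + 6 + 3 + 4 + 3 + 4 + 3 + 6) sum to 35 entries, matching the
-# NUM_MOTORS slots of the low-level messages.
-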
-
-def _build_gains() -> tuple[list[float], list[float]]:
- """Build kp and kd lists from body-part groupings."""
- kp = [v for g in _GAINS.values() for v in g["kp"]]
- kd = [v for g in _GAINS.values() for v in g["kd"]]
- return kp, kd
-
-
-_DEFAULT_KP, _DEFAULT_KD = _build_gains()
-
-
-@RobotConfig.register_subclass("unitree_g1")
-@dataclass
-class UnitreeG1Config(RobotConfig):
- kp: list[float] = field(default_factory=lambda: _DEFAULT_KP.copy())
- kd: list[float] = field(default_factory=lambda: _DEFAULT_KD.copy())
-
- # Default joint positions
- default_positions: list[float] = field(default_factory=lambda: [0.0] * 29)
-
- # Control loop timestep
- control_dt: float = 1.0 / 250.0 # 250Hz
-
- # Launch mujoco simulation
- is_simulation: bool = True
-
- # Socket config for ZMQ bridge
- robot_ip: str = "192.168.123.164" # default G1 IP
-
- # Cameras (ZMQ-based remote cameras)
- cameras: dict[str, CameraConfig] = field(default_factory=dict)
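-
-# A minimal configuration sketch (the robot_ip shown is the documented default;
-# set is_simulation=False to drive the real robot over the ZMQ bridge):
-#
-#     config = UnitreeG1Config(is_simulation=False, robot_ip="192.168.123.164")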
diff --git a/lerobot/src/lerobot/robots/unitree_g1/g1_utils.py b/lerobot/src/lerobot/robots/unitree_g1/g1_utils.py
deleted file mode 100644
index fa0e637b18e372ddd58aff11960b84caa0c2bc8d..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/unitree_g1/g1_utils.py
+++ /dev/null
@@ -1,81 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from enum import IntEnum
-
-# ruff: noqa: N801, N815
-
-NUM_MOTORS = 35
-
-
-class G1_29_JointArmIndex(IntEnum):
- # Left arm
- kLeftShoulderPitch = 15
- kLeftShoulderRoll = 16
- kLeftShoulderYaw = 17
- kLeftElbow = 18
- kLeftWristRoll = 19
- kLeftWristPitch = 20
- kLeftWristyaw = 21
-
- # Right arm
- kRightShoulderPitch = 22
- kRightShoulderRoll = 23
- kRightShoulderYaw = 24
- kRightElbow = 25
- kRightWristRoll = 26
- kRightWristPitch = 27
- kRightWristYaw = 28
-
-
-class G1_29_JointIndex(IntEnum):
- # Left leg
- kLeftHipPitch = 0
- kLeftHipRoll = 1
- kLeftHipYaw = 2
- kLeftKnee = 3
- kLeftAnklePitch = 4
- kLeftAnkleRoll = 5
-
- # Right leg
- kRightHipPitch = 6
- kRightHipRoll = 7
- kRightHipYaw = 8
- kRightKnee = 9
- kRightAnklePitch = 10
- kRightAnkleRoll = 11
-
- kWaistYaw = 12
- kWaistRoll = 13
- kWaistPitch = 14
-
- # Left arm
- kLeftShoulderPitch = 15
- kLeftShoulderRoll = 16
- kLeftShoulderYaw = 17
- kLeftElbow = 18
- kLeftWristRoll = 19
- kLeftWristPitch = 20
- kLeftWristyaw = 21
-
- # Right arm
- kRightShoulderPitch = 22
- kRightShoulderRoll = 23
- kRightShoulderYaw = 24
- kRightElbow = 25
- kRightWristRoll = 26
- kRightWristPitch = 27
- kRightWristYaw = 28
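-
-
-# Note: G1_29_JointIndex enumerates the 29 actuated body joints (indices 0-28),
-# while NUM_MOTORS is 35 because the low-level messages reserve additional motor slots.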
diff --git a/lerobot/src/lerobot/robots/unitree_g1/run_g1_server.py b/lerobot/src/lerobot/robots/unitree_g1/run_g1_server.py
deleted file mode 100644
index 8406d0cd60e30cd227a1f517d5587e0fbce6c755..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/unitree_g1/run_g1_server.py
+++ /dev/null
@@ -1,212 +0,0 @@
-#!/usr/bin/env python3
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-DDS-to-ZMQ bridge server for Unitree G1 robot.
-
-This server runs on the robot and forwards:
-- Robot state (LowState) from DDS to ZMQ (for remote clients)
-- Robot commands (LowCmd) from ZMQ to DDS (from remote clients)
-
-Uses JSON for secure serialization instead of pickle.
-"""
-
-import base64
-import contextlib
-import json
-import threading
-import time
-from typing import Any
-
-import zmq
-from unitree_sdk2py.comm.motion_switcher.motion_switcher_client import MotionSwitcherClient
-from unitree_sdk2py.core.channel import ChannelFactoryInitialize, ChannelPublisher, ChannelSubscriber
-from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_
-from unitree_sdk2py.idl.unitree_hg.msg.dds_ import LowCmd_ as hg_LowCmd, LowState_ as hg_LowState
-from unitree_sdk2py.utils.crc import CRC
-
-# DDS topic names follow Unitree SDK naming conventions
-# ruff: noqa: N816
-kTopicLowCommand_Debug = "rt/lowcmd" # action to robot
-kTopicLowState = "rt/lowstate" # observation from robot
-
-LOWCMD_PORT = 6000
-LOWSTATE_PORT = 6001
-NUM_MOTORS = 35
-
-
-def lowstate_to_dict(msg: hg_LowState) -> dict[str, Any]:
- """Convert LowState SDK message to a JSON-serializable dictionary."""
- motor_states = []
- for i in range(NUM_MOTORS):
- temp = msg.motor_state[i].temperature
- avg_temp = float(sum(temp) / len(temp)) if isinstance(temp, list) else float(temp)
- motor_states.append(
- {
- "q": float(msg.motor_state[i].q),
- "dq": float(msg.motor_state[i].dq),
- "tau_est": float(msg.motor_state[i].tau_est),
- "temperature": avg_temp,
- }
- )
-
- return {
- "motor_state": motor_states,
- "imu_state": {
- "quaternion": [float(x) for x in msg.imu_state.quaternion],
- "gyroscope": [float(x) for x in msg.imu_state.gyroscope],
- "accelerometer": [float(x) for x in msg.imu_state.accelerometer],
- "rpy": [float(x) for x in msg.imu_state.rpy],
- "temperature": float(msg.imu_state.temperature),
- },
- # Encode bytes as base64 for JSON compatibility
- "wireless_remote": base64.b64encode(bytes(msg.wireless_remote)).decode("ascii"),
- "mode_machine": int(msg.mode_machine),
- }
-
-
-def dict_to_lowcmd(data: dict[str, Any]) -> hg_LowCmd:
- """Convert dictionary back to LowCmd SDK message."""
- cmd = unitree_hg_msg_dds__LowCmd_()
- cmd.mode_pr = data.get("mode_pr", 0)
- cmd.mode_machine = data.get("mode_machine", 0)
-
- for i, motor_data in enumerate(data.get("motor_cmd", [])):
- cmd.motor_cmd[i].mode = motor_data.get("mode", 0)
- cmd.motor_cmd[i].q = motor_data.get("q", 0.0)
- cmd.motor_cmd[i].dq = motor_data.get("dq", 0.0)
- cmd.motor_cmd[i].kp = motor_data.get("kp", 0.0)
- cmd.motor_cmd[i].kd = motor_data.get("kd", 0.0)
- cmd.motor_cmd[i].tau = motor_data.get("tau", 0.0)
-
- return cmd
-
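-# Wire format: one JSON object per ZMQ message, e.g. (abridged):
-#
-#     {"topic": "rt/lowstate",
-#      "data": {"motor_state": [{"q": 0.0, "dq": 0.0, "tau_est": 0.0, "temperature": 0.0}, ...],
-#               "imu_state": {...}, "wireless_remote": "<base64>", "mode_machine": 0}}
-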
-
-def state_forward_loop(
- lowstate_sub: ChannelSubscriber,
- lowstate_sock: zmq.Socket,
- state_period: float,
- shutdown_event: threading.Event,
-) -> None:
- """Read observation from DDS and forward to ZMQ clients."""
- last_state_time = 0.0
-
- while not shutdown_event.is_set():
- # read from DDS
- msg = lowstate_sub.Read()
- if msg is None:
- continue
-
- now = time.time()
- # optional downsampling (if robot dds rate > state_period)
- if now - last_state_time >= state_period:
- # Convert to dict and serialize with JSON
- state_dict = lowstate_to_dict(msg)
- payload = json.dumps({"topic": kTopicLowState, "data": state_dict}).encode("utf-8")
- # if no subscribers / tx buffer full, just drop
- with contextlib.suppress(zmq.Again):
- lowstate_sock.send(payload, zmq.NOBLOCK)
- last_state_time = now
-
-
-def cmd_forward_loop(
- lowcmd_sock: zmq.Socket,
- lowcmd_pub_debug: ChannelPublisher,
- crc: CRC,
-) -> None:
- """Receive commands from ZMQ and forward to DDS."""
- while True:
- try:
- payload = lowcmd_sock.recv()
- except zmq.ContextTerminated:
- break
- msg_dict = json.loads(payload.decode("utf-8"))
-
- topic = msg_dict.get("topic", "")
- cmd_data = msg_dict.get("data", {})
-
- # Reconstruct LowCmd object from dict
- cmd = dict_to_lowcmd(cmd_data)
-
- # recompute crc
- cmd.crc = crc.Crc(cmd)
-
- if topic == kTopicLowCommand_Debug:
- lowcmd_pub_debug.Write(cmd)
-
-
-def main() -> None:
- """Main entry point for the robot server bridge."""
- # initialize DDS
- ChannelFactoryInitialize(0)
-
- # stop all active publishers on the robot
- msc = MotionSwitcherClient()
- msc.SetTimeout(5.0)
- msc.Init()
-
- status, result = msc.CheckMode()
- while result is not None and "name" in result and result["name"]:
- msc.ReleaseMode()
- status, result = msc.CheckMode()
- time.sleep(1.0)
-
- crc = CRC()
-
- # initialize DDS publisher
- lowcmd_pub_debug = ChannelPublisher(kTopicLowCommand_Debug, hg_LowCmd)
- lowcmd_pub_debug.Init()
-
- # initialize DDS subscriber
- lowstate_sub = ChannelSubscriber(kTopicLowState, hg_LowState)
- lowstate_sub.Init()
-
- # initialize ZMQ
- ctx = zmq.Context.instance()
-
- # receive commands from remote client
- lowcmd_sock = ctx.socket(zmq.PULL)
- lowcmd_sock.bind(f"tcp://0.0.0.0:{LOWCMD_PORT}")
-
- # publish state to remote clients
- lowstate_sock = ctx.socket(zmq.PUB)
- lowstate_sock.bind(f"tcp://0.0.0.0:{LOWSTATE_PORT}")
-
- state_period = 0.002 # ~500 hz
- shutdown_event = threading.Event()
-
- # start observation forwarding in background thread
- t_state = threading.Thread(
- target=state_forward_loop,
- args=(lowstate_sub, lowstate_sock, state_period, shutdown_event),
- )
- t_state.start()
-
- print("bridge running (lowstate -> zmq, lowcmd -> dds)")
-
- # run command forwarding in main thread
- try:
- cmd_forward_loop(lowcmd_sock, lowcmd_pub_debug, crc)
- except KeyboardInterrupt:
- print("shutting down bridge...")
- finally:
- shutdown_event.set()
- ctx.term() # terminates blocking zmq.recv() calls
- t_state.join(timeout=2.0)
-
-
-if __name__ == "__main__":
- main()
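-
-
-# Run this bridge on the robot's onboard computer before starting a remote client, e.g.:
-#
-#     python -m lerobot.robots.unitree_g1.run_g1_server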
diff --git a/lerobot/src/lerobot/robots/unitree_g1/unitree_g1.py b/lerobot/src/lerobot/robots/unitree_g1/unitree_g1.py
deleted file mode 100644
index 8cc9e8b560afad0de08869607d837888942d980f..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/unitree_g1/unitree_g1.py
+++ /dev/null
@@ -1,432 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import struct
-import threading
-import time
-from dataclasses import dataclass, field
-from functools import cached_property
-from typing import Any
-
-import numpy as np
-
-from lerobot.cameras.utils import make_cameras_from_configs
-from lerobot.envs.factory import make_env
-from lerobot.processor import RobotAction, RobotObservation
-from lerobot.robots.unitree_g1.g1_utils import G1_29_JointIndex
-
-from ..robot import Robot
-from .config_unitree_g1 import UnitreeG1Config
-
-logger = logging.getLogger(__name__)
-
-# DDS topic names follow Unitree SDK naming conventions
-# ruff: noqa: N816
-kTopicLowCommand_Debug = "rt/lowcmd"
-kTopicLowState = "rt/lowstate"
-
-
-@dataclass
-class MotorState:
- q: float | None = None # position
- dq: float | None = None # velocity
- tau_est: float | None = None # estimated torque
- temperature: float | None = None # motor temperature
-
-
-@dataclass
-class IMUState:
- quaternion: np.ndarray | None = None # [w, x, y, z]
- gyroscope: np.ndarray | None = None # [x, y, z] angular velocity (rad/s)
- accelerometer: np.ndarray | None = None # [x, y, z] linear acceleration (m/s²)
- rpy: np.ndarray | None = None # [roll, pitch, yaw] (rad)
- temperature: float | None = None # IMU temperature
-
-
-# g1 observation class
-@dataclass
-class G1_29_LowState: # noqa: N801
- motor_state: list[MotorState] = field(default_factory=lambda: [MotorState() for _ in G1_29_JointIndex])
- imu_state: IMUState = field(default_factory=IMUState)
- wireless_remote: Any = None # Raw wireless remote data
- mode_machine: int = 0 # Robot mode
-
-
-class UnitreeG1(Robot):
- config_class = UnitreeG1Config
- name = "unitree_g1"
-
- # unitree remote controller
- class RemoteController:
- def __init__(self):
- self.lx = 0
- self.ly = 0
- self.rx = 0
- self.ry = 0
- self.button = [0] * 16
-
- def set(self, data):
- # wireless_remote
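-            # Layout implied by the unpack offsets: 16 button bits at bytes 2:4, then
-            # floats lx (4:8), rx (8:12), ry (12:16) and ly (20:24).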
- keys = struct.unpack("H", data[2:4])[0]
- for i in range(16):
- self.button[i] = (keys & (1 << i)) >> i
- self.lx = struct.unpack("f", data[4:8])[0]
- self.rx = struct.unpack("f", data[8:12])[0]
- self.ry = struct.unpack("f", data[12:16])[0]
- self.ly = struct.unpack("f", data[20:24])[0]
-
- def __init__(self, config: UnitreeG1Config):
- super().__init__(config)
-
- logger.info("Initialize UnitreeG1...")
-
- self.config = config
- self.control_dt = config.control_dt
-
- # Initialize cameras config (ZMQ-based) - actual connection in connect()
- self._cameras = make_cameras_from_configs(config.cameras)
-
- # Import channel classes based on mode
- if config.is_simulation:
- from unitree_sdk2py.core.channel import (
- ChannelFactoryInitialize,
- ChannelPublisher,
- ChannelSubscriber,
- )
- else:
- from lerobot.robots.unitree_g1.unitree_sdk2_socket import (
- ChannelFactoryInitialize,
- ChannelPublisher,
- ChannelSubscriber,
- )
-
- # Store for use in connect()
- self._ChannelFactoryInitialize = ChannelFactoryInitialize
- self._ChannelPublisher = ChannelPublisher
- self._ChannelSubscriber = ChannelSubscriber
-
- # Initialize state variables
- self.sim_env = None
- self._env_wrapper = None
- self._lowstate = None
- self._shutdown_event = threading.Event()
- self.subscribe_thread = None
- self.remote_controller = self.RemoteController()
-
- def _subscribe_motor_state(self): # polls robot state @ 250Hz
- while not self._shutdown_event.is_set():
- start_time = time.time()
-
- # Step simulation if in simulation mode
- if self.config.is_simulation and self.sim_env is not None:
- self.sim_env.step()
-
- msg = self.lowstate_subscriber.Read()
- if msg is not None:
- lowstate = G1_29_LowState()
-
- # Capture motor states using jointindex
- for id in G1_29_JointIndex:
- lowstate.motor_state[id].q = msg.motor_state[id].q
- lowstate.motor_state[id].dq = msg.motor_state[id].dq
- lowstate.motor_state[id].tau_est = msg.motor_state[id].tau_est
- lowstate.motor_state[id].temperature = msg.motor_state[id].temperature
-
- # Capture IMU state
- lowstate.imu_state.quaternion = list(msg.imu_state.quaternion)
- lowstate.imu_state.gyroscope = list(msg.imu_state.gyroscope)
- lowstate.imu_state.accelerometer = list(msg.imu_state.accelerometer)
- lowstate.imu_state.rpy = list(msg.imu_state.rpy)
- lowstate.imu_state.temperature = msg.imu_state.temperature
-
- # Capture wireless remote data
- lowstate.wireless_remote = msg.wireless_remote
-
- # Capture mode_machine
- lowstate.mode_machine = msg.mode_machine
-
- self._lowstate = lowstate
-
- current_time = time.time()
- all_t_elapsed = current_time - start_time
- sleep_time = max(0, (self.control_dt - all_t_elapsed)) # maintain constant control dt
- time.sleep(sleep_time)
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- return {f"{G1_29_JointIndex(motor).name}.q": float for motor in G1_29_JointIndex}
-
- def calibrate(self) -> None: # robot is already calibrated
- pass
-
- def configure(self) -> None:
- pass
-
- def connect(self, calibrate: bool = True) -> None: # connect to DDS
- from unitree_sdk2py.idl.default import unitree_hg_msg_dds__LowCmd_
- from unitree_sdk2py.idl.unitree_hg.msg.dds_ import (
- LowCmd_ as hg_LowCmd,
- LowState_ as hg_LowState,
- )
- from unitree_sdk2py.utils.crc import CRC
-
- # Initialize DDS channel and simulation environment
- if self.config.is_simulation:
- self._ChannelFactoryInitialize(0, "lo")
- self._env_wrapper = make_env("lerobot/unitree-g1-mujoco", trust_remote_code=True)
- # Extract the actual gym env from the dict structure
- self.sim_env = self._env_wrapper["hub_env"][0].envs[0]
- else:
- self._ChannelFactoryInitialize(0)
-
- # Initialize direct motor control interface
- self.lowcmd_publisher = self._ChannelPublisher(kTopicLowCommand_Debug, hg_LowCmd)
- self.lowcmd_publisher.Init()
- self.lowstate_subscriber = self._ChannelSubscriber(kTopicLowState, hg_LowState)
- self.lowstate_subscriber.Init()
-
- # Start subscribe thread to read robot state
- self.subscribe_thread = threading.Thread(target=self._subscribe_motor_state)
- self.subscribe_thread.start()
-
- # Connect cameras
- for cam in self._cameras.values():
- if not cam.is_connected:
- cam.connect()
-
- logger.info(f"Connected {len(self._cameras)} camera(s).")
-
- # Initialize lowcmd message
- self.crc = CRC()
- self.msg = unitree_hg_msg_dds__LowCmd_()
- self.msg.mode_pr = 0
-
- # Wait for first state message to arrive
- lowstate = None
- while lowstate is None:
- lowstate = self._lowstate
- if lowstate is None:
- time.sleep(0.01)
- logger.warning("[UnitreeG1] Waiting for robot state...")
- logger.warning("[UnitreeG1] Connected to robot.")
- self.msg.mode_machine = lowstate.mode_machine
-
- # Initialize all motors with unified kp/kd from config
- self.kp = np.array(self.config.kp, dtype=np.float32)
- self.kd = np.array(self.config.kd, dtype=np.float32)
-
- for id in G1_29_JointIndex:
- self.msg.motor_cmd[id].mode = 1
- self.msg.motor_cmd[id].kp = self.kp[id.value]
- self.msg.motor_cmd[id].kd = self.kd[id.value]
- self.msg.motor_cmd[id].q = lowstate.motor_state[id.value].q
-
- def disconnect(self):
- # Signal thread to stop and unblock any waits
- self._shutdown_event.set()
-
- # Wait for subscribe thread to finish
- if self.subscribe_thread is not None:
- self.subscribe_thread.join(timeout=2.0)
- if self.subscribe_thread.is_alive():
- logger.warning("Subscribe thread did not stop cleanly")
-
- # Close simulation environment
- if self.config.is_simulation and self.sim_env is not None:
- try:
- # Force-kill the image publish subprocess first to avoid long waits
- if hasattr(self.sim_env, "simulator") and hasattr(self.sim_env.simulator, "sim_env"):
- sim_env_inner = self.sim_env.simulator.sim_env
- if hasattr(sim_env_inner, "image_publish_process"):
- proc = sim_env_inner.image_publish_process
- if proc.process and proc.process.is_alive():
- logger.info("Force-terminating image publish subprocess...")
- proc.stop_event.set()
- proc.process.terminate()
- proc.process.join(timeout=1)
- if proc.process.is_alive():
- proc.process.kill()
- self.sim_env.close()
- except Exception as e:
- logger.warning(f"Error closing sim_env: {e}")
- self.sim_env = None
- self._env_wrapper = None
-
- # Disconnect cameras
- for cam in self._cameras.values():
- cam.disconnect()
-
- def get_observation(self) -> RobotObservation:
- lowstate = self._lowstate
- if lowstate is None:
- return {}
-
- obs = {}
-
- # Motors - q, dq, tau for all joints
- for motor in G1_29_JointIndex:
- name = motor.name
- idx = motor.value
- obs[f"{name}.q"] = lowstate.motor_state[idx].q
- obs[f"{name}.dq"] = lowstate.motor_state[idx].dq
- obs[f"{name}.tau"] = lowstate.motor_state[idx].tau_est
-
- # IMU - gyroscope
- if lowstate.imu_state.gyroscope:
- obs["imu.gyro.x"] = lowstate.imu_state.gyroscope[0]
- obs["imu.gyro.y"] = lowstate.imu_state.gyroscope[1]
- obs["imu.gyro.z"] = lowstate.imu_state.gyroscope[2]
-
- # IMU - accelerometer
- if lowstate.imu_state.accelerometer:
- obs["imu.accel.x"] = lowstate.imu_state.accelerometer[0]
- obs["imu.accel.y"] = lowstate.imu_state.accelerometer[1]
- obs["imu.accel.z"] = lowstate.imu_state.accelerometer[2]
-
- # IMU - quaternion
- if lowstate.imu_state.quaternion:
- obs["imu.quat.w"] = lowstate.imu_state.quaternion[0]
- obs["imu.quat.x"] = lowstate.imu_state.quaternion[1]
- obs["imu.quat.y"] = lowstate.imu_state.quaternion[2]
- obs["imu.quat.z"] = lowstate.imu_state.quaternion[3]
-
- # IMU - rpy
- if lowstate.imu_state.rpy:
- obs["imu.rpy.roll"] = lowstate.imu_state.rpy[0]
- obs["imu.rpy.pitch"] = lowstate.imu_state.rpy[1]
- obs["imu.rpy.yaw"] = lowstate.imu_state.rpy[2]
-
- # Controller - parse wireless_remote and add to obs
- if lowstate.wireless_remote and len(lowstate.wireless_remote) >= 24:
- self.remote_controller.set(lowstate.wireless_remote)
- obs["remote.buttons"] = self.remote_controller.button.copy()
- obs["remote.lx"] = self.remote_controller.lx
- obs["remote.ly"] = self.remote_controller.ly
- obs["remote.rx"] = self.remote_controller.rx
- obs["remote.ry"] = self.remote_controller.ry
-
- # Cameras - read images from ZMQ cameras
- for cam_name, cam in self._cameras.items():
- obs[cam_name] = cam.async_read()
-
- return obs
-
- @property
- def is_calibrated(self) -> bool:
- return True
-
- @property
- def is_connected(self) -> bool:
- return self._lowstate is not None
-
- @property
- def _motors_ft(self) -> dict[str, type]:
- return {f"{G1_29_JointIndex(motor).name}.q": float for motor in G1_29_JointIndex}
-
- @property
- def cameras(self) -> dict:
- return self._cameras
-
- @property
- def _cameras_ft(self) -> dict[str, tuple]:
- return {
- cam: (self.config.cameras[cam].height, self.config.cameras[cam].width, 3) for cam in self.cameras
- }
-
- @cached_property
- def observation_features(self) -> dict[str, type | tuple]:
- return {**self._motors_ft, **self._cameras_ft}
-
- def send_action(self, action: RobotAction) -> RobotAction:
- for motor in G1_29_JointIndex:
- key = f"{motor.name}.q"
- if key in action:
- self.msg.motor_cmd[motor.value].q = action[key]
-                self.msg.motor_cmd[motor.value].dq = 0
- self.msg.motor_cmd[motor.value].kp = self.kp[motor.value]
- self.msg.motor_cmd[motor.value].kd = self.kd[motor.value]
- self.msg.motor_cmd[motor.value].tau = 0
-
- self.msg.crc = self.crc.Crc(self.msg)
- self.lowcmd_publisher.Write(self.msg)
- return action
-
- def get_gravity_orientation(self, quaternion): # get gravity orientation from quaternion
- """Get gravity orientation from quaternion."""
- qw = quaternion[0]
- qx = quaternion[1]
- qy = quaternion[2]
- qz = quaternion[3]
-
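-        # The result is the world -z (gravity) axis expressed in the body frame,
-        # i.e. the negated third row of the body-to-world rotation matrix.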
- gravity_orientation = np.zeros(3)
- gravity_orientation[0] = 2 * (-qz * qx + qw * qy)
- gravity_orientation[1] = -2 * (qz * qy + qw * qx)
- gravity_orientation[2] = 1 - 2 * (qw * qw + qz * qz)
- return gravity_orientation
-
- def reset(
- self,
- control_dt: float | None = None,
- default_positions: list[float] | None = None,
- ) -> None: # move robot to default position
- if control_dt is None:
- control_dt = self.config.control_dt
- if default_positions is None:
- default_positions = np.array(self.config.default_positions, dtype=np.float32)
-
- if self.config.is_simulation and self.sim_env is not None:
- self.sim_env.reset()
-
- for motor in G1_29_JointIndex:
- self.msg.motor_cmd[motor.value].q = default_positions[motor.value]
-                self.msg.motor_cmd[motor.value].dq = 0
- self.msg.motor_cmd[motor.value].kp = self.kp[motor.value]
- self.msg.motor_cmd[motor.value].kd = self.kd[motor.value]
- self.msg.motor_cmd[motor.value].tau = 0
- self.msg.crc = self.crc.Crc(self.msg)
- self.lowcmd_publisher.Write(self.msg)
- else:
- total_time = 3.0
- num_steps = int(total_time / control_dt)
-
- # get current state
- obs = self.get_observation()
-
- # record current positions
- init_dof_pos = np.zeros(29, dtype=np.float32)
- for motor in G1_29_JointIndex:
- init_dof_pos[motor.value] = obs[f"{motor.name}.q"]
-
- # Interpolate to default position
- for step in range(num_steps):
- start_time = time.time()
-
-                alpha = (step + 1) / num_steps  # reaches exactly 1.0 on the final step
- action_dict = {}
- for motor in G1_29_JointIndex:
- target_pos = default_positions[motor.value]
- interp_pos = init_dof_pos[motor.value] * (1 - alpha) + target_pos * alpha
- action_dict[f"{motor.name}.q"] = float(interp_pos)
-
- self.send_action(action_dict)
-
- # Maintain constant control rate
- elapsed = time.time() - start_time
- sleep_time = max(0, control_dt - elapsed)
- time.sleep(sleep_time)
-
- logger.info("Reached default position")
diff --git a/lerobot/src/lerobot/robots/unitree_g1/unitree_sdk2_socket.py b/lerobot/src/lerobot/robots/unitree_g1/unitree_sdk2_socket.py
deleted file mode 100644
index ede193dfb0a696830bc875ee0824c818700ca36a..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/unitree_g1/unitree_sdk2_socket.py
+++ /dev/null
@@ -1,168 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import base64
-import json
-from typing import Any
-
-import zmq
-
-from lerobot.robots.unitree_g1.config_unitree_g1 import UnitreeG1Config
-
-_ctx: zmq.Context | None = None
-_lowcmd_sock: zmq.Socket | None = None
-_lowstate_sock: zmq.Socket | None = None
-
-LOWCMD_PORT = 6000
-LOWSTATE_PORT = 6001
-
-# DDS topic names follow Unitree SDK naming conventions
-# ruff: noqa: N816
-kTopicLowCommand_Debug = "rt/lowcmd"
-
-
-class LowStateMsg:
- """
- Wrapper class that mimics the Unitree SDK LowState_ message structure.
-
- Reconstructs the message from deserialized JSON data to maintain
- compatibility with existing code that expects SDK message objects.
- """
-
- class MotorState:
- """Motor state data for a single joint."""
-
- def __init__(self, data: dict[str, Any]) -> None:
- self.q: float = data.get("q", 0.0)
- self.dq: float = data.get("dq", 0.0)
- self.tau_est: float = data.get("tau_est", 0.0)
- self.temperature: float = data.get("temperature", 0.0)
-
- class IMUState:
- """IMU sensor data."""
-
- def __init__(self, data: dict[str, Any]) -> None:
- self.quaternion: list[float] = data.get("quaternion", [1.0, 0.0, 0.0, 0.0])
- self.gyroscope: list[float] = data.get("gyroscope", [0.0, 0.0, 0.0])
- self.accelerometer: list[float] = data.get("accelerometer", [0.0, 0.0, 0.0])
- self.rpy: list[float] = data.get("rpy", [0.0, 0.0, 0.0])
- self.temperature: float = data.get("temperature", 0.0)
-
- def __init__(self, data: dict[str, Any]) -> None:
- """Initialize from deserialized JSON data."""
- self.motor_state = [self.MotorState(m) for m in data.get("motor_state", [])]
- self.imu_state = self.IMUState(data.get("imu_state", {}))
- # Decode base64-encoded wireless_remote bytes
- wireless_b64 = data.get("wireless_remote", "")
- self.wireless_remote: bytes = base64.b64decode(wireless_b64) if wireless_b64 else b""
- self.mode_machine: int = data.get("mode_machine", 0)
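-
-# For reference, the JSON payload this wrapper expects from the bridge looks
-# roughly like (values illustrative):
-#   {"motor_state": [{"q": 0.0, "dq": 0.0, "tau_est": 0.0, "temperature": 0.0}, ...],
-#    "imu_state": {"quaternion": [1.0, 0.0, 0.0, 0.0], "gyroscope": [0.0, 0.0, 0.0], ...},
-#    "wireless_remote": "<base64>", "mode_machine": 0}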
-
-
-def lowcmd_to_dict(topic: str, msg: Any) -> dict[str, Any]:
- """Convert LowCmd message to a JSON-serializable dictionary."""
- motor_cmds = []
- # Iterate over all motor commands in the message
- for i in range(len(msg.motor_cmd)):
- motor_cmds.append(
- {
- "mode": int(msg.motor_cmd[i].mode),
- "q": float(msg.motor_cmd[i].q),
- "dq": float(msg.motor_cmd[i].dq),
- "kp": float(msg.motor_cmd[i].kp),
- "kd": float(msg.motor_cmd[i].kd),
- "tau": float(msg.motor_cmd[i].tau),
- }
- )
-
- return {
- "topic": topic,
- "data": {
- "mode_pr": int(msg.mode_pr),
- "mode_machine": int(msg.mode_machine),
- "motor_cmd": motor_cmds,
- },
- }
-
-
-def ChannelFactoryInitialize(*args: Any, **kwargs: Any) -> None: # noqa: N802
- """
- Initialize ZMQ sockets for robot communication.
-
- This function mimics the Unitree SDK's ChannelFactoryInitialize but uses
- ZMQ sockets to connect to the robot server bridge instead of DDS.
- """
- global _ctx, _lowcmd_sock, _lowstate_sock
-
- # read socket config
- config = UnitreeG1Config()
- robot_ip = config.robot_ip
-
- ctx = zmq.Context.instance()
- _ctx = ctx
-
- # lowcmd: send robot commands
- lowcmd_sock = ctx.socket(zmq.PUSH)
- lowcmd_sock.setsockopt(zmq.CONFLATE, 1) # keep only last message
- lowcmd_sock.connect(f"tcp://{robot_ip}:{LOWCMD_PORT}")
- _lowcmd_sock = lowcmd_sock
-
- # lowstate: receive robot observations
- lowstate_sock = ctx.socket(zmq.SUB)
- lowstate_sock.setsockopt(zmq.CONFLATE, 1) # keep only last message
- lowstate_sock.connect(f"tcp://{robot_ip}:{LOWSTATE_PORT}")
- lowstate_sock.setsockopt_string(zmq.SUBSCRIBE, "")
- _lowstate_sock = lowstate_sock
-
-
-class ChannelPublisher:
- """ZMQ-based publisher that sends commands to the robot server."""
-
- def __init__(self, topic: str, msg_type: type) -> None:
- self.topic = topic
- self.msg_type = msg_type
-
- def Init(self) -> None: # noqa: N802
- """Initialize the publisher (no-op for ZMQ)."""
- pass
-
- def Write(self, msg: Any) -> None: # noqa: N802
- """Serialize and send a command message to the robot."""
- if _lowcmd_sock is None:
- raise RuntimeError("ChannelFactoryInitialize must be called first")
-
- payload = json.dumps(lowcmd_to_dict(self.topic, msg)).encode("utf-8")
- _lowcmd_sock.send(payload)
-
-
-class ChannelSubscriber:
- """ZMQ-based subscriber that receives state from the robot server."""
-
- def __init__(self, topic: str, msg_type: type) -> None:
- self.topic = topic
- self.msg_type = msg_type
-
- def Init(self) -> None: # noqa: N802
- """Initialize the subscriber (no-op for ZMQ)."""
- pass
-
- def Read(self) -> LowStateMsg: # noqa: N802
- """Receive and deserialize a state message from the robot."""
- if _lowstate_sock is None:
- raise RuntimeError("ChannelFactoryInitialize must be called first")
-
- payload = _lowstate_sock.recv()
- msg_dict = json.loads(payload.decode("utf-8"))
- return LowStateMsg(msg_dict.get("data", {}))
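-
-
-# A minimal usage sketch (assuming a bridge process is already serving the two
-# ZMQ ports on the robot; the topic string is illustrative, since the ZMQ
-# subscriber ignores it):
-#
-#   ChannelFactoryInitialize()
-#   sub = ChannelSubscriber("rt/lowstate", LowStateMsg)
-#   sub.Init()
-#   state = sub.Read()  # latest LowStateMsg (CONFLATE keeps only the newest)
-#   print(state.imu_state.rpy)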
diff --git a/lerobot/src/lerobot/robots/utils.py b/lerobot/src/lerobot/robots/utils.py
deleted file mode 100644
index e4bff61d48cd599571e292d47daf151e83c759ea..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/robots/utils.py
+++ /dev/null
@@ -1,110 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from pprint import pformat
-from typing import cast
-
-from lerobot.utils.import_utils import make_device_from_device_class
-
-from .config import RobotConfig
-from .robot import Robot
-
-
-def make_robot_from_config(config: RobotConfig) -> Robot:
- # TODO(Steven): Consider just using the make_device_from_device_class for all types
- if config.type == "koch_follower":
- from .koch_follower import KochFollower
-
- return KochFollower(config)
- elif config.type == "omx_follower":
- from .omx_follower import OmxFollower
-
- return OmxFollower(config)
- elif config.type == "so100_follower":
- from .so_follower import SO100Follower
-
- return SO100Follower(config)
- elif config.type == "so101_follower":
- from .so_follower import SO101Follower
-
- return SO101Follower(config)
- elif config.type == "lekiwi":
- from .lekiwi import LeKiwi
-
- return LeKiwi(config)
- elif config.type == "hope_jr_hand":
- from .hope_jr import HopeJrHand
-
- return HopeJrHand(config)
- elif config.type == "hope_jr_arm":
- from .hope_jr import HopeJrArm
-
- return HopeJrArm(config)
- elif config.type == "bi_so_follower":
- from .bi_so_follower import BiSOFollower
-
- return BiSOFollower(config)
- elif config.type == "reachy2":
- from .reachy2 import Reachy2Robot
-
- return Reachy2Robot(config)
- elif config.type == "mock_robot":
- from tests.mocks.mock_robot import MockRobot
-
- return MockRobot(config)
- else:
- try:
- return cast(Robot, make_device_from_device_class(config))
- except Exception as e:
- raise ValueError(f"Error creating robot with config {config}: {e}") from e
-
-
-# TODO(pepijn): Move this to a pipeline step, so we don't have to do it in the robot code and the action sent to the robot stays clean for use in the dataset
-def ensure_safe_goal_position(
- goal_present_pos: dict[str, tuple[float, float]], max_relative_target: float | dict[str, float]
-) -> dict[str, float]:
- """Caps relative action target magnitude for safety."""
-
-    if isinstance(max_relative_target, float):
-        diff_cap = dict.fromkeys(goal_present_pos, max_relative_target)
-    elif isinstance(max_relative_target, dict):
-        if set(goal_present_pos) != set(max_relative_target):
-            raise ValueError("max_relative_target keys must match those of goal_present_pos.")
-        diff_cap = max_relative_target
-    else:
-        raise TypeError(f"Unsupported type for max_relative_target: {type(max_relative_target)}")
-
- warnings_dict = {}
- safe_goal_positions = {}
- for key, (goal_pos, present_pos) in goal_present_pos.items():
- diff = goal_pos - present_pos
- max_diff = diff_cap[key]
- safe_diff = min(diff, max_diff)
- safe_diff = max(safe_diff, -max_diff)
- safe_goal_pos = present_pos + safe_diff
- safe_goal_positions[key] = safe_goal_pos
- if abs(safe_goal_pos - goal_pos) > 1e-4:
- warnings_dict[key] = {
- "original goal_pos": goal_pos,
- "safe goal_pos": safe_goal_pos,
- }
-
- if warnings_dict:
- logging.warning(
- "Relative goal position magnitude had to be clamped to be safe.\n"
- f"{pformat(warnings_dict, indent=4)}"
- )
-
- return safe_goal_positions
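-
-
-# Worked example (hypothetical values): with goal_present_pos = {"shoulder": (25.0, 10.0)}
-# and max_relative_target = 5.0, the requested step of +15.0 is clamped to +5.0, so the
-# returned safe goal is {"shoulder": 15.0} and a warning is logged.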
diff --git a/lerobot/src/lerobot/scripts/lerobot_calibrate.py b/lerobot/src/lerobot/scripts/lerobot_calibrate.py
deleted file mode 100644
index efdfbb414e9c0c3c68e4300d7c1694172e768b15..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_calibrate.py
+++ /dev/null
@@ -1,94 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Helper to recalibrate your device (robot or teleoperator).
-
-Example:
-
-```shell
-lerobot-calibrate \
- --teleop.type=so100_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=blue
-```
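-
-Or, equivalently, to recalibrate a robot (port and id values are illustrative):
-
-```shell
-lerobot-calibrate \
-    --robot.type=so100_follower \
-    --robot.port=/dev/ttyACM0 \
-    --robot.id=my_follower
-```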
-"""
-
-import logging
-from dataclasses import asdict, dataclass
-from pprint import pformat
-
-import draccus
-
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig # noqa: F401
-from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig # noqa: F401
-from lerobot.robots import ( # noqa: F401
- Robot,
- RobotConfig,
- bi_so_follower,
- hope_jr,
- koch_follower,
- lekiwi,
- make_robot_from_config,
- omx_follower,
- so_follower,
-)
-from lerobot.teleoperators import ( # noqa: F401
- Teleoperator,
- TeleoperatorConfig,
- bi_so_leader,
- homunculus,
- koch_leader,
- make_teleoperator_from_config,
- omx_leader,
- so_leader,
-)
-from lerobot.utils.import_utils import register_third_party_plugins
-from lerobot.utils.utils import init_logging
-
-
-@dataclass
-class CalibrateConfig:
- teleop: TeleoperatorConfig | None = None
- robot: RobotConfig | None = None
-
- def __post_init__(self):
- if bool(self.teleop) == bool(self.robot):
- raise ValueError("Choose either a teleop or a robot.")
-
- self.device = self.robot if self.robot else self.teleop
-
-
-@draccus.wrap()
-def calibrate(cfg: CalibrateConfig):
- init_logging()
- logging.info(pformat(asdict(cfg)))
-
- if isinstance(cfg.device, RobotConfig):
- device = make_robot_from_config(cfg.device)
- elif isinstance(cfg.device, TeleoperatorConfig):
- device = make_teleoperator_from_config(cfg.device)
-
- device.connect(calibrate=False)
- device.calibrate()
- device.disconnect()
-
-
-def main():
- register_third_party_plugins()
- calibrate()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_dataset_viz.py b/lerobot/src/lerobot/scripts/lerobot_dataset_viz.py
deleted file mode 100644
index 14eeabe3dec0b247097728ac1aabed946edefe8f..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_dataset_viz.py
+++ /dev/null
@@ -1,287 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Visualize data of **all** frames of any episode of a dataset of type LeRobotDataset.
-
-Note: The last frame of the episode doesn't always correspond to a final state.
-That's because our datasets are composed of transitions from state to state, up to
-the antepenultimate state, which is associated with the ultimate action that leads
-to the final state. However, there might not be a transition from a final state to
-another state.
-
-Note: This script aims to visualize the data used to train the neural networks.
-What you see is what you get. When visualizing image modalities, expect to observe
-lossy compression artifacts, since these images have been decoded from compressed mp4 videos to
-save disk space. The compression factor applied has been tuned to not affect success rate.
-
-Examples:
-
-- Visualize data stored on a local machine:
-```
-local$ lerobot-dataset-viz \
- --repo-id lerobot/pusht \
- --episode-index 0
-```
-
-- Visualize data stored on a distant machine with a local viewer:
-```
-distant$ lerobot-dataset-viz \
- --repo-id lerobot/pusht \
- --episode-index 0 \
- --save 1 \
- --output-dir path/to/directory
-
-local$ scp distant:path/to/directory/lerobot_pusht_episode_0.rrd .
-local$ rerun lerobot_pusht_episode_0.rrd
-```
-
-- Visualize data stored on a distant machine through streaming:
-(You need to forward the websocket port to the distant machine, with
-`ssh -L 9087:localhost:9087 username@remote-host`)
-```
-distant$ lerobot-dataset-viz \
- --repo-id lerobot/pusht \
- --episode-index 0 \
- --mode distant \
- --ws-port 9087
-
-local$ rerun ws://localhost:9087
-```
-
-"""
-
-import argparse
-import gc
-import logging
-import time
-from pathlib import Path
-
-import numpy as np
-import rerun as rr
-import torch
-import torch.utils.data
-import tqdm
-
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.utils.constants import ACTION, DONE, OBS_STATE, REWARD
-
-
-def to_hwc_uint8_numpy(chw_float32_torch: torch.Tensor) -> np.ndarray:
- assert chw_float32_torch.dtype == torch.float32
- assert chw_float32_torch.ndim == 3
- c, h, w = chw_float32_torch.shape
-    assert c < h and c < w, f"expect channel-first images, but instead got {chw_float32_torch.shape}"
- hwc_uint8_numpy = (chw_float32_torch * 255).type(torch.uint8).permute(1, 2, 0).numpy()
- return hwc_uint8_numpy
-
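-# Example: a (3, 480, 640) float32 tensor with values in [0, 1] becomes a
-# (480, 640, 3) uint8 array, which is the layout rr.Image expects.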
-
-def visualize_dataset(
- dataset: LeRobotDataset,
- episode_index: int,
- batch_size: int = 32,
- num_workers: int = 0,
- mode: str = "local",
- web_port: int = 9090,
- ws_port: int = 9087,
- save: bool = False,
- output_dir: Path | None = None,
- display_compressed_images: bool = False,
-) -> Path | None:
- if save:
- assert output_dir is not None, (
- "Set an output directory where to write .rrd files with `--output-dir path/to/directory`."
- )
-
- repo_id = dataset.repo_id
-
- logging.info("Loading dataloader")
- dataloader = torch.utils.data.DataLoader(
- dataset,
- num_workers=num_workers,
- batch_size=batch_size,
- )
-
- logging.info("Starting Rerun")
-
- if mode not in ["local", "distant"]:
- raise ValueError(mode)
-
- spawn_local_viewer = mode == "local" and not save
- rr.init(f"{repo_id}/episode_{episode_index}", spawn=spawn_local_viewer)
-
- # Manually call python garbage collector after `rr.init` to avoid hanging in a blocking flush
- # when iterating on a dataloader with `num_workers` > 0
- # TODO(rcadene): remove `gc.collect` when rerun version 0.16 is out, which includes a fix
- gc.collect()
-
- if mode == "distant":
- rr.serve_web_viewer(open_browser=False, web_port=web_port)
-
- logging.info("Logging to Rerun")
-
- for batch in tqdm.tqdm(dataloader, total=len(dataloader)):
- # iterate over the batch
- for i in range(len(batch["index"])):
- rr.set_time("frame_index", sequence=batch["frame_index"][i].item())
- rr.set_time("timestamp", timestamp=batch["timestamp"][i].item())
-
- # display each camera image
- for key in dataset.meta.camera_keys:
- img = to_hwc_uint8_numpy(batch[key][i])
-                img_entity = rr.Image(img).compress() if display_compressed_images else rr.Image(img)
-                rr.log(key, img_entity)
-
- # display each dimension of action space (e.g. actuators command)
- if ACTION in batch:
- for dim_idx, val in enumerate(batch[ACTION][i]):
- rr.log(f"{ACTION}/{dim_idx}", rr.Scalars(val.item()))
-
- # display each dimension of observed state space (e.g. agent position in joint space)
- if OBS_STATE in batch:
- for dim_idx, val in enumerate(batch[OBS_STATE][i]):
- rr.log(f"state/{dim_idx}", rr.Scalars(val.item()))
-
- if DONE in batch:
- rr.log(DONE, rr.Scalars(batch[DONE][i].item()))
-
- if REWARD in batch:
- rr.log(REWARD, rr.Scalars(batch[REWARD][i].item()))
-
- if "next.success" in batch:
- rr.log("next.success", rr.Scalars(batch["next.success"][i].item()))
-
- if mode == "local" and save:
- # save .rrd locally
- output_dir = Path(output_dir)
- output_dir.mkdir(parents=True, exist_ok=True)
- repo_id_str = repo_id.replace("/", "_")
- rrd_path = output_dir / f"{repo_id_str}_episode_{episode_index}.rrd"
- rr.save(rrd_path)
- return rrd_path
-
- elif mode == "distant":
- # stop the process from exiting since it is serving the websocket connection
- try:
- while True:
- time.sleep(1)
- except KeyboardInterrupt:
- print("Ctrl-C received. Exiting.")
-
-
-def main():
- parser = argparse.ArgumentParser()
-
- parser.add_argument(
- "--repo-id",
- type=str,
- required=True,
-        help="Name of the Hugging Face repository containing a LeRobotDataset (e.g. `lerobot/pusht`).",
- )
- parser.add_argument(
- "--episode-index",
- type=int,
- required=True,
- help="Episode to visualize.",
- )
- parser.add_argument(
- "--root",
- type=Path,
- default=None,
-        help="Root directory for a dataset stored locally (e.g. `--root data`). By default, the dataset is loaded from the Hugging Face cache folder, or downloaded from the hub if available.",
- )
- parser.add_argument(
- "--output-dir",
- type=Path,
- default=None,
- help="Directory path to write a .rrd file when `--save 1` is set.",
- )
- parser.add_argument(
- "--batch-size",
- type=int,
- default=32,
- help="Batch size loaded by DataLoader.",
- )
- parser.add_argument(
- "--num-workers",
- type=int,
- default=4,
- help="Number of processes of Dataloader for loading the data.",
- )
- parser.add_argument(
- "--mode",
- type=str,
- default="local",
- help=(
- "Mode of viewing between 'local' or 'distant'. "
- "'local' requires data to be on a local machine. It spawns a viewer to visualize the data locally. "
- "'distant' creates a server on the distant machine where the data is stored. "
- "Visualize the data by connecting to the server with `rerun ws://localhost:PORT` on the local machine."
- ),
- )
- parser.add_argument(
- "--web-port",
- type=int,
- default=9090,
- help="Web port for rerun.io when `--mode distant` is set.",
- )
- parser.add_argument(
- "--ws-port",
- type=int,
- default=9087,
- help="Web socket port for rerun.io when `--mode distant` is set.",
- )
- parser.add_argument(
- "--save",
- type=int,
- default=0,
- help=(
- "Save a .rrd file in the directory provided by `--output-dir`. "
- "It also deactivates the spawning of a viewer. "
- "Visualize the data by running `rerun path/to/file.rrd` on your local machine."
- ),
- )
-
- parser.add_argument(
- "--tolerance-s",
- type=float,
- default=1e-4,
- help=(
-            "Tolerance in seconds used to ensure data timestamps respect the dataset fps value. "
-            "This argument is passed to the constructor of LeRobotDataset and maps to its tolerance_s constructor argument. "
-            "If not given, defaults to 1e-4."
- ),
- )
-
- parser.add_argument(
- "--display-compressed-images",
-        # NOTE: argparse's `type=bool` treats any non-empty string as True, and
-        # `required=True` contradicted the False default, so use a 0/1 int flag
-        # like `--save` above.
-        type=int,
-        default=0,
-        help="Set to 1 to display compressed images in Rerun instead of uncompressed ones.",
- )
-
- args = parser.parse_args()
- kwargs = vars(args)
- repo_id = kwargs.pop("repo_id")
- root = kwargs.pop("root")
- tolerance_s = kwargs.pop("tolerance_s")
-
- logging.info("Loading dataset")
- dataset = LeRobotDataset(repo_id, episodes=[args.episode_index], root=root, tolerance_s=tolerance_s)
-
-    visualize_dataset(dataset, **kwargs)  # kwargs is vars(args) with the dataset-constructor args already popped
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_edit_dataset.py b/lerobot/src/lerobot/scripts/lerobot_edit_dataset.py
deleted file mode 100644
index fe3d10d728d1fc6af58a106aba85cfb74eff07d1..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_edit_dataset.py
+++ /dev/null
@@ -1,736 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Edit LeRobot datasets using various transformation tools.
-
-This script allows you to delete episodes, split datasets, merge datasets,
-remove features, and convert image datasets to video format.
-When new_repo_id is specified, a new dataset is created; otherwise the dataset is edited in place.
-
-Usage Examples:
-
-Delete episodes 0, 2, and 5 from a dataset:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht \
- --operation.type delete_episodes \
- --operation.episode_indices "[0, 2, 5]"
-
-Delete episodes and save to a new dataset:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht \
- --new_repo_id lerobot/pusht_filtered \
- --operation.type delete_episodes \
- --operation.episode_indices "[0, 2, 5]"
-
-Split dataset by fractions:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht \
- --operation.type split \
- --operation.splits '{"train": 0.8, "val": 0.2}'
-
-Split dataset by episode indices:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht \
- --operation.type split \
- --operation.splits '{"train": [0, 1, 2, 3], "val": [4, 5]}'
-
-Split into more than two splits:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht \
- --operation.type split \
- --operation.splits '{"train": 0.6, "val": 0.2, "test": 0.2}'
-
-Merge multiple datasets:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht_merged \
- --operation.type merge \
- --operation.repo_ids "['lerobot/pusht_train', 'lerobot/pusht_val']"
-
-Remove camera feature:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht \
- --operation.type remove_feature \
- --operation.feature_names "['observation.images.top']"
-
-Convert image dataset to video format (saves locally):
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht_image \
- --operation.type convert_to_video \
- --operation.output_dir /path/to/output/pusht_video
-
-Convert image dataset and save with new repo_id:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht_image \
- --new_repo_id lerobot/pusht_video \
- --operation.type convert_to_video
-
-Convert and push to hub:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --repo_id lerobot/pusht_image \
- --new_repo_id lerobot/pusht_video \
- --operation.type convert_to_video \
- --push_to_hub true
-
-Using JSON config file:
- python -m lerobot.scripts.lerobot_edit_dataset \
- --config_path path/to/edit_config.json
-"""
-
-import logging
-import shutil
-from concurrent.futures import ThreadPoolExecutor, as_completed
-from dataclasses import dataclass
-from pathlib import Path
-
-import pandas as pd
-from tqdm import tqdm
-
-from lerobot.configs import parser
-from lerobot.datasets.dataset_tools import (
- delete_episodes,
- merge_datasets,
- remove_feature,
- split_dataset,
-)
-from lerobot.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
-from lerobot.datasets.utils import write_stats, write_tasks
-from lerobot.datasets.video_utils import encode_video_frames, get_video_info
-from lerobot.utils.constants import HF_LEROBOT_HOME, OBS_IMAGE
-from lerobot.utils.utils import init_logging
-
-
-@dataclass
-class DeleteEpisodesConfig:
- type: str = "delete_episodes"
- episode_indices: list[int] | None = None
-
-
-@dataclass
-class SplitConfig:
- type: str = "split"
- splits: dict[str, float | list[int]] | None = None
-
-
-@dataclass
-class MergeConfig:
- type: str = "merge"
- repo_ids: list[str] | None = None
-
-
-@dataclass
-class RemoveFeatureConfig:
- type: str = "remove_feature"
- feature_names: list[str] | None = None
-
-
-@dataclass
-class ConvertToVideoConfig:
- type: str = "convert_to_video"
- output_dir: str | None = None
- vcodec: str = "libsvtav1"
- pix_fmt: str = "yuv420p"
- g: int = 2
- crf: int = 30
- fast_decode: int = 0
- episode_indices: list[int] | None = None
- num_workers: int = 4
-
-
-@dataclass
-class EditDatasetConfig:
- repo_id: str
- operation: DeleteEpisodesConfig | SplitConfig | MergeConfig | RemoveFeatureConfig | ConvertToVideoConfig
- root: str | None = None
- new_repo_id: str | None = None
- push_to_hub: bool = False
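-
-# A hypothetical `edit_config.json` for the `--config_path` usage shown in the
-# module docstring (field names mirror EditDatasetConfig; values are illustrative):
-#
-#   {
-#       "repo_id": "lerobot/pusht",
-#       "operation": {"type": "delete_episodes", "episode_indices": [0, 2, 5]},
-#       "push_to_hub": false
-#   }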
-
-
-def get_output_path(repo_id: str, new_repo_id: str | None, root: Path | None) -> tuple[str, Path]:
- if new_repo_id:
- output_repo_id = new_repo_id
- output_dir = root / new_repo_id if root else HF_LEROBOT_HOME / new_repo_id
- else:
- output_repo_id = repo_id
- dataset_path = root / repo_id if root else HF_LEROBOT_HOME / repo_id
- old_path = Path(str(dataset_path) + "_old")
-
- if dataset_path.exists():
- if old_path.exists():
- shutil.rmtree(old_path)
- shutil.move(str(dataset_path), str(old_path))
-
- output_dir = dataset_path
-
- return output_repo_id, output_dir
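-
-
-# For example (paths assume the default HF_LEROBOT_HOME cache): editing
-# `lerobot/pusht` in place (no new_repo_id) first moves the existing dataset
-# directory to `.../lerobot/pusht_old`, then writes the edited copy back to
-# `.../lerobot/pusht`.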
-
-
-def handle_delete_episodes(cfg: EditDatasetConfig) -> None:
- if not isinstance(cfg.operation, DeleteEpisodesConfig):
- raise ValueError("Operation config must be DeleteEpisodesConfig")
-
- if not cfg.operation.episode_indices:
- raise ValueError("episode_indices must be specified for delete_episodes operation")
-
- dataset = LeRobotDataset(cfg.repo_id, root=cfg.root)
- output_repo_id, output_dir = get_output_path(
- cfg.repo_id, cfg.new_repo_id, Path(cfg.root) if cfg.root else None
- )
-
- if cfg.new_repo_id is None:
- dataset.root = Path(str(dataset.root) + "_old")
-
- logging.info(f"Deleting episodes {cfg.operation.episode_indices} from {cfg.repo_id}")
- new_dataset = delete_episodes(
- dataset,
- episode_indices=cfg.operation.episode_indices,
- output_dir=output_dir,
- repo_id=output_repo_id,
- )
-
- logging.info(f"Dataset saved to {output_dir}")
- logging.info(f"Episodes: {new_dataset.meta.total_episodes}, Frames: {new_dataset.meta.total_frames}")
-
- if cfg.push_to_hub:
- logging.info(f"Pushing to hub as {output_repo_id}")
- LeRobotDataset(output_repo_id, root=output_dir).push_to_hub()
-
-
-def handle_split(cfg: EditDatasetConfig) -> None:
- if not isinstance(cfg.operation, SplitConfig):
- raise ValueError("Operation config must be SplitConfig")
-
- if not cfg.operation.splits:
- raise ValueError(
- "splits dict must be specified with split names as keys and fractions/episode lists as values"
- )
-
- dataset = LeRobotDataset(cfg.repo_id, root=cfg.root)
-
- logging.info(f"Splitting dataset {cfg.repo_id} with splits: {cfg.operation.splits}")
- split_datasets = split_dataset(dataset, splits=cfg.operation.splits)
-
- for split_name, split_ds in split_datasets.items():
- split_repo_id = f"{cfg.repo_id}_{split_name}"
- logging.info(
- f"{split_name}: {split_ds.meta.total_episodes} episodes, {split_ds.meta.total_frames} frames"
- )
-
- if cfg.push_to_hub:
- logging.info(f"Pushing {split_name} split to hub as {split_repo_id}")
- LeRobotDataset(split_ds.repo_id, root=split_ds.root).push_to_hub()
-
-
-def handle_merge(cfg: EditDatasetConfig) -> None:
- if not isinstance(cfg.operation, MergeConfig):
- raise ValueError("Operation config must be MergeConfig")
-
- if not cfg.operation.repo_ids:
- raise ValueError("repo_ids must be specified for merge operation")
-
- if not cfg.repo_id:
- raise ValueError("repo_id must be specified as the output repository for merged dataset")
-
- logging.info(f"Loading {len(cfg.operation.repo_ids)} datasets to merge")
- datasets = [LeRobotDataset(repo_id, root=cfg.root) for repo_id in cfg.operation.repo_ids]
-
- output_dir = Path(cfg.root) / cfg.repo_id if cfg.root else HF_LEROBOT_HOME / cfg.repo_id
-
- logging.info(f"Merging datasets into {cfg.repo_id}")
- merged_dataset = merge_datasets(
- datasets,
- output_repo_id=cfg.repo_id,
- output_dir=output_dir,
- )
-
- logging.info(f"Merged dataset saved to {output_dir}")
- logging.info(
- f"Episodes: {merged_dataset.meta.total_episodes}, Frames: {merged_dataset.meta.total_frames}"
- )
-
- if cfg.push_to_hub:
- logging.info(f"Pushing to hub as {cfg.repo_id}")
- LeRobotDataset(merged_dataset.repo_id, root=output_dir).push_to_hub()
-
-
-def handle_remove_feature(cfg: EditDatasetConfig) -> None:
- if not isinstance(cfg.operation, RemoveFeatureConfig):
- raise ValueError("Operation config must be RemoveFeatureConfig")
-
- if not cfg.operation.feature_names:
- raise ValueError("feature_names must be specified for remove_feature operation")
-
- dataset = LeRobotDataset(cfg.repo_id, root=cfg.root)
- output_repo_id, output_dir = get_output_path(
- cfg.repo_id, cfg.new_repo_id, Path(cfg.root) if cfg.root else None
- )
-
- if cfg.new_repo_id is None:
- dataset.root = Path(str(dataset.root) + "_old")
-
- logging.info(f"Removing features {cfg.operation.feature_names} from {cfg.repo_id}")
- new_dataset = remove_feature(
- dataset,
- feature_names=cfg.operation.feature_names,
- output_dir=output_dir,
- repo_id=output_repo_id,
- )
-
- logging.info(f"Dataset saved to {output_dir}")
- logging.info(f"Remaining features: {list(new_dataset.meta.features.keys())}")
-
- if cfg.push_to_hub:
- logging.info(f"Pushing to hub as {output_repo_id}")
- LeRobotDataset(output_repo_id, root=output_dir).push_to_hub()
-
-
-def save_episode_images_for_video(
- dataset: LeRobotDataset,
- imgs_dir: Path,
- img_key: str,
- episode_index: int,
- num_workers: int = 4,
-) -> None:
- """Save images from a specific episode and camera to disk for video encoding.
-
- Args:
- dataset: The LeRobot dataset to extract images from
- imgs_dir: Directory to save images to
- img_key: The image key (camera) to extract
- episode_index: Index of the episode to save
- num_workers: Number of threads for parallel image saving
- """
- # Create directory
- imgs_dir.mkdir(parents=True, exist_ok=True)
-
- # Get dataset without torch format for PIL image access
- hf_dataset = dataset.hf_dataset.with_format(None)
-
- # Select only this camera's images
- imgs_dataset = hf_dataset.select_columns(img_key)
-
- # Get episode start and end indices
- from_idx = dataset.meta.episodes["dataset_from_index"][episode_index]
- to_idx = dataset.meta.episodes["dataset_to_index"][episode_index]
-
- # Get all items for this episode
- episode_dataset = imgs_dataset.select(range(from_idx, to_idx))
-
- # Define function to save a single image
- def save_single_image(i_item_tuple):
- i, item = i_item_tuple
- img = item[img_key]
- # Use frame-XXXXXX.png format to match encode_video_frames expectations
-        img.save(str(imgs_dir / f"frame-{i:06d}.png"))  # PNG is lossless, so no quality flag is needed
- return i
-
- # Save images with proper naming convention for encode_video_frames (frame-XXXXXX.png)
- items = list(enumerate(episode_dataset))
-
- with ThreadPoolExecutor(max_workers=num_workers) as executor:
- futures = [executor.submit(save_single_image, item) for item in items]
- for future in as_completed(futures):
- future.result() # This will raise any exceptions that occurred
-
-
-def encode_episode_videos(
- dataset: LeRobotDataset,
- new_meta: LeRobotDatasetMetadata,
- episode_index: int,
- vcodec: str,
- pix_fmt: str,
- g: int,
- crf: int,
- fast_decode: int,
- temp_dir: Path,
- num_image_workers: int = 4,
-) -> dict[str, dict]:
- """Encode videos for a single episode and return video metadata.
-
- Args:
- dataset: Source dataset with images
- new_meta: Metadata object for the new video dataset
- episode_index: Episode index to process
- vcodec: Video codec
- pix_fmt: Pixel format
- g: Group of pictures size
- crf: Constant rate factor
- fast_decode: Fast decode tuning
- temp_dir: Temporary directory for images
- num_image_workers: Number of workers for saving images
-
- Returns:
- Dictionary mapping video keys to their metadata (chunk_index, file_index, timestamps)
- """
- hf_dataset = dataset.hf_dataset.with_format(None)
- img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
-
- video_metadata = {}
- fps = int(dataset.fps) # Convert to int for PyAV compatibility
- episode_length = dataset.meta.episodes["length"][episode_index]
- episode_duration = episode_length / dataset.fps # Use original fps for duration calculation
-
- for img_key in img_keys:
- # Save images temporarily
- imgs_dir = temp_dir / f"episode_{episode_index:06d}" / img_key
- save_episode_images_for_video(dataset, imgs_dir, img_key, episode_index, num_image_workers)
-
- # Determine chunk and file indices
- # For simplicity, we'll put each episode in its own file
- chunk_idx = episode_index // new_meta.chunks_size
- file_idx = episode_index % new_meta.chunks_size
-
- # Create video path in the new dataset structure
- video_path = new_meta.root / new_meta.video_path.format(
- video_key=img_key, chunk_index=chunk_idx, file_index=file_idx
- )
- video_path.parent.mkdir(parents=True, exist_ok=True)
-
- # Encode video
- encode_video_frames(
- imgs_dir=imgs_dir,
- video_path=video_path,
- fps=fps,
- vcodec=vcodec,
- pix_fmt=pix_fmt,
- g=g,
- crf=crf,
- fast_decode=fast_decode,
- overwrite=True,
- )
-
- # Clean up temporary images
- shutil.rmtree(imgs_dir)
-
- # Store video metadata
- video_metadata[img_key] = {
- f"videos/{img_key}/chunk_index": chunk_idx,
- f"videos/{img_key}/file_index": file_idx,
- f"videos/{img_key}/from_timestamp": 0.0,
- f"videos/{img_key}/to_timestamp": episode_duration,
- }
-
- return video_metadata
-
-
-def convert_dataset_to_videos(
- dataset: LeRobotDataset,
- output_dir: Path,
- repo_id: str | None = None,
- vcodec: str = "libsvtav1",
- pix_fmt: str = "yuv420p",
- g: int = 2,
- crf: int = 30,
- fast_decode: int = 0,
- episode_indices: list[int] | None = None,
- num_workers: int = 4,
-) -> LeRobotDataset:
- """Convert image-based dataset to video-based dataset.
-
- Creates a new LeRobotDataset with videos instead of images, following the proper
- LeRobot dataset structure with videos stored in chunked MP4 files.
-
- Args:
- dataset: The source LeRobot dataset with images
- output_dir: Directory to save the new video dataset
- repo_id: Repository ID for the new dataset (default: original_id + "_video")
- vcodec: Video codec (default: libsvtav1)
- pix_fmt: Pixel format (default: yuv420p)
- g: Group of pictures size (default: 2)
- crf: Constant rate factor (default: 30)
- fast_decode: Fast decode tuning (default: 0)
- episode_indices: List of episode indices to convert (None = all episodes)
- num_workers: Number of threads for parallel processing (default: 4)
-
- Returns:
- New LeRobotDataset with videos
- """
- # Check that it's an image dataset
- if len(dataset.meta.video_keys) > 0:
- raise ValueError(
- f"This operation is for image datasets only. Video dataset provided: {dataset.repo_id}"
- )
-
- # Get all image keys
- hf_dataset = dataset.hf_dataset.with_format(None)
- img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
-
- if len(img_keys) == 0:
- raise ValueError(f"No image keys found in dataset {dataset.repo_id}")
-
- # Determine which episodes to process
- if episode_indices is None:
- episode_indices = list(range(dataset.meta.total_episodes))
-
- if repo_id is None:
- repo_id = f"{dataset.repo_id}_video"
-
- logging.info(
- f"Converting {len(episode_indices)} episodes with {len(img_keys)} cameras from {dataset.repo_id}"
- )
- logging.info(f"Video codec: {vcodec}, pixel format: {pix_fmt}, GOP: {g}, CRF: {crf}")
-
- # Create new features dict, converting image features to video features
- new_features = {}
- for key, value in dataset.meta.features.items():
- if key not in img_keys:
- new_features[key] = value
- else:
- # Convert image key to video format
- new_features[key] = value.copy()
- new_features[key]["dtype"] = "video" # Change dtype from "image" to "video"
- # Video info will be updated after episodes are encoded
-
- # Create new metadata for video dataset
- new_meta = LeRobotDatasetMetadata.create(
- repo_id=repo_id,
- fps=dataset.meta.fps,
- features=new_features,
- robot_type=dataset.meta.robot_type,
- root=output_dir,
- use_videos=True,
- chunks_size=dataset.meta.chunks_size,
- data_files_size_in_mb=dataset.meta.data_files_size_in_mb,
- video_files_size_in_mb=dataset.meta.video_files_size_in_mb,
- )
-
- # Create temporary directory for image extraction
- temp_dir = output_dir / "temp_images"
- temp_dir.mkdir(parents=True, exist_ok=True)
-
- # Process each episode
-    all_episode_metadata = []
-    cumulative_frames = 0  # running frame offset into the new dataset; episode lengths may vary
-
- try:
- for ep_idx in tqdm(episode_indices, desc="Converting episodes to videos"):
- # Get episode metadata from source
- src_episode = dataset.meta.episodes[ep_idx]
-
- # Encode videos for this episode
- video_metadata = encode_episode_videos(
- dataset=dataset,
- new_meta=new_meta,
- episode_index=ep_idx,
- vcodec=vcodec,
- pix_fmt=pix_fmt,
- g=g,
- crf=crf,
- fast_decode=fast_decode,
- temp_dir=temp_dir,
- num_image_workers=num_workers,
- )
-
-            # Build episode metadata. Use a running frame offset rather than
-            # ep_idx * length, which is only correct when every episode has the
-            # same length and all episodes are selected.
-            ep_length = src_episode["length"]
-            episode_meta = {
-                "episode_index": ep_idx,
-                "length": ep_length,
-                "dataset_from_index": cumulative_frames,
-                "dataset_to_index": cumulative_frames + ep_length,
-            }
-            cumulative_frames += ep_length
-
- # Add video metadata
- for img_key in img_keys:
- episode_meta.update(video_metadata[img_key])
-
- # Add data chunk/file info (using same structure as source)
- if "data/chunk_index" in src_episode:
- episode_meta["data/chunk_index"] = src_episode["data/chunk_index"]
- episode_meta["data/file_index"] = src_episode["data/file_index"]
-
- all_episode_metadata.append(episode_meta)
-
- # Copy and transform data files (removing image columns)
- _copy_data_without_images(dataset, new_meta, episode_indices, img_keys)
-
- # Save episode metadata
- episodes_df = pd.DataFrame(all_episode_metadata)
- episodes_path = new_meta.root / "meta" / "episodes" / "chunk-000" / "file-000.parquet"
- episodes_path.parent.mkdir(parents=True, exist_ok=True)
- episodes_df.to_parquet(episodes_path, index=False)
-
- # Update metadata info
- new_meta.info["total_episodes"] = len(episode_indices)
- new_meta.info["total_frames"] = sum(ep["length"] for ep in all_episode_metadata)
- new_meta.info["total_tasks"] = dataset.meta.total_tasks
- new_meta.info["splits"] = {"train": f"0:{len(episode_indices)}"}
-
- # Update video info for all image keys (now videos)
- # We need to manually set video info since update_video_info() checks video_keys first
- for img_key in img_keys:
- if not new_meta.features[img_key].get("info", None):
- video_path = new_meta.root / new_meta.video_path.format(
- video_key=img_key, chunk_index=0, file_index=0
- )
- new_meta.info["features"][img_key]["info"] = get_video_info(video_path)
-
- from lerobot.datasets.utils import write_info
-
- write_info(new_meta.info, new_meta.root)
-
- # Copy stats and tasks
- if dataset.meta.stats is not None:
- # Remove image stats
- new_stats = {k: v for k, v in dataset.meta.stats.items() if k not in img_keys}
- write_stats(new_stats, new_meta.root)
-
- if dataset.meta.tasks is not None:
- write_tasks(dataset.meta.tasks, new_meta.root)
-
- finally:
- # Clean up temporary directory
- if temp_dir.exists():
- shutil.rmtree(temp_dir)
-
- logging.info(f"✓ Completed converting {dataset.repo_id} to video format")
- logging.info(f"New dataset saved to: {output_dir}")
-
- # Return new dataset
- return LeRobotDataset(repo_id=repo_id, root=output_dir)
-
-
-def _copy_data_without_images(
- src_dataset: LeRobotDataset,
- dst_meta: LeRobotDatasetMetadata,
- episode_indices: list[int],
- img_keys: list[str],
-) -> None:
- """Copy data files without image columns.
-
- Args:
- src_dataset: Source dataset
- dst_meta: Destination metadata
- episode_indices: Episodes to include
- img_keys: Image keys to remove
- """
- from lerobot.datasets.utils import DATA_DIR
-
- data_dir = src_dataset.root / DATA_DIR
- parquet_files = sorted(data_dir.glob("*/*.parquet"))
-
- if not parquet_files:
- raise ValueError(f"No parquet files found in {data_dir}")
-
- episode_set = set(episode_indices)
-
- for src_path in tqdm(parquet_files, desc="Processing data files"):
- df = pd.read_parquet(src_path).reset_index(drop=True)
-
- # Filter to only include selected episodes
- df = df[df["episode_index"].isin(episode_set)].copy()
-
- if len(df) == 0:
- continue
-
- # Remove image columns
- columns_to_drop = [col for col in img_keys if col in df.columns]
- if columns_to_drop:
- df = df.drop(columns=columns_to_drop)
-
- # Get chunk and file indices from path
- relative_path = src_path.relative_to(src_dataset.root)
- chunk_dir = relative_path.parts[1]
- file_name = relative_path.parts[2]
- chunk_idx = int(chunk_dir.split("-")[1])
- file_idx = int(file_name.split("-")[1].split(".")[0])
-
- # Write to destination without pandas index
- dst_path = dst_meta.root / f"data/chunk-{chunk_idx:03d}/file-{file_idx:03d}.parquet"
- dst_path.parent.mkdir(parents=True, exist_ok=True)
- df.to_parquet(dst_path, index=False)
-
-
-def handle_convert_to_video(cfg: EditDatasetConfig) -> None:
- # Note: Parser may create any config type with the right fields, so we access fields directly
- # instead of checking isinstance()
- dataset = LeRobotDataset(cfg.repo_id, root=cfg.root)
-
- # Determine output directory and repo_id
- # Priority: 1) new_repo_id, 2) operation.output_dir, 3) auto-generated name
- output_dir_config = getattr(cfg.operation, "output_dir", None)
-
- if cfg.new_repo_id:
- # Use new_repo_id for both local storage and hub push
- output_repo_id = cfg.new_repo_id
- output_dir = Path(cfg.root) / cfg.new_repo_id if cfg.root else HF_LEROBOT_HOME / cfg.new_repo_id
- logging.info(f"Saving to new dataset: {cfg.new_repo_id}")
- elif output_dir_config:
- # Use custom output directory for local-only storage
- output_dir = Path(output_dir_config)
- # Extract repo name from output_dir for the dataset
- output_repo_id = output_dir.name
- logging.info(f"Saving to local directory: {output_dir}")
- else:
- # Auto-generate name: append "_video" to original repo_id
- output_repo_id = f"{cfg.repo_id}_video"
- output_dir = Path(cfg.root) / output_repo_id if cfg.root else HF_LEROBOT_HOME / output_repo_id
- logging.info(f"Saving to auto-generated location: {output_dir}")
-
- logging.info(f"Converting dataset {cfg.repo_id} to video format")
-
- new_dataset = convert_dataset_to_videos(
- dataset=dataset,
- output_dir=output_dir,
- repo_id=output_repo_id,
- vcodec=getattr(cfg.operation, "vcodec", "libsvtav1"),
- pix_fmt=getattr(cfg.operation, "pix_fmt", "yuv420p"),
- g=getattr(cfg.operation, "g", 2),
- crf=getattr(cfg.operation, "crf", 30),
- fast_decode=getattr(cfg.operation, "fast_decode", 0),
- episode_indices=getattr(cfg.operation, "episode_indices", None),
- num_workers=getattr(cfg.operation, "num_workers", 4),
- )
-
- logging.info("Video dataset created successfully!")
- logging.info(f"Location: {output_dir}")
- logging.info(f"Episodes: {new_dataset.meta.total_episodes}")
- logging.info(f"Frames: {new_dataset.meta.total_frames}")
-
- if cfg.push_to_hub:
- logging.info(f"Pushing to hub as {output_repo_id}...")
- new_dataset.push_to_hub()
- logging.info("✓ Successfully pushed to hub!")
- else:
- logging.info("Dataset saved locally (not pushed to hub)")
-
-
-@parser.wrap()
-def edit_dataset(cfg: EditDatasetConfig) -> None:
- operation_type = cfg.operation.type
-
- if operation_type == "delete_episodes":
- handle_delete_episodes(cfg)
- elif operation_type == "split":
- handle_split(cfg)
- elif operation_type == "merge":
- handle_merge(cfg)
- elif operation_type == "remove_feature":
- handle_remove_feature(cfg)
- elif operation_type == "convert_to_video":
- handle_convert_to_video(cfg)
- else:
- raise ValueError(
- f"Unknown operation type: {operation_type}\n"
- f"Available operations: delete_episodes, split, merge, remove_feature, convert_to_video"
- )
-
-
-def main() -> None:
- init_logging()
- edit_dataset()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_eval.py b/lerobot/src/lerobot/scripts/lerobot_eval.py
deleted file mode 100644
index a999fecff07626ac0cc7e045234e3a08c3989256..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_eval.py
+++ /dev/null
@@ -1,813 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Evaluate a policy on an environment by running rollouts and computing metrics.
-
-Usage examples:
-
-You want to evaluate a model from the hub (e.g. https://huggingface.co/lerobot/diffusion_pusht)
-for 10 episodes.
-
-```
-lerobot-eval \
- --policy.path=lerobot/diffusion_pusht \
- --env.type=pusht \
- --eval.batch_size=10 \
- --eval.n_episodes=10 \
- --policy.use_amp=false \
- --policy.device=cuda
-```
-
-OR, you want to evaluate a model checkpoint from the LeRobot training script for 10 episodes.
-```
-lerobot-eval \
- --policy.path=outputs/train/diffusion_pusht/checkpoints/005000/pretrained_model \
- --env.type=pusht \
- --eval.batch_size=10 \
- --eval.n_episodes=10 \
- --policy.use_amp=false \
- --policy.device=cuda
-```
-
-Note that in both examples, the repo/folder should contain at least `config.json` and `model.safetensors` files.
-
-You can learn about the CLI options for this script in the `EvalPipelineConfig` in lerobot/configs/eval.py
-"""
-
-import concurrent.futures as cf
-import json
-import logging
-import threading
-import time
-from collections import defaultdict
-from collections.abc import Callable
-from contextlib import nullcontext
-from copy import deepcopy
-from dataclasses import asdict
-from functools import partial
-from pathlib import Path
-from pprint import pformat
-from typing import Any, TypedDict
-
-import einops
-import gymnasium as gym
-import numpy as np
-import torch
-from termcolor import colored
-from torch import Tensor, nn
-from tqdm import trange
-
-from lerobot.configs import parser
-from lerobot.configs.eval import EvalPipelineConfig
-from lerobot.envs.factory import make_env, make_env_pre_post_processors
-from lerobot.envs.utils import (
- add_envs_task,
- check_env_attributes_and_types,
- close_envs,
- preprocess_observation,
-)
-from lerobot.policies.factory import make_policy, make_pre_post_processors
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.processor import PolicyAction, PolicyProcessorPipeline
-from lerobot.utils.constants import ACTION, DONE, OBS_STR, REWARD
-from lerobot.utils.import_utils import register_third_party_plugins
-from lerobot.utils.io_utils import write_video
-from lerobot.utils.random_utils import set_seed
-from lerobot.utils.utils import (
- get_safe_torch_device,
- init_logging,
- inside_slurm,
-)
-
-
-def rollout(
- env: gym.vector.VectorEnv,
- policy: PreTrainedPolicy,
- env_preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- env_postprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction],
- seeds: list[int] | None = None,
- return_observations: bool = False,
- render_callback: Callable[[gym.vector.VectorEnv], None] | None = None,
-) -> dict:
- """Run a batched policy rollout once through a batch of environments.
-
- Note that all environments in the batch are run until the last environment is done. This means some
- data will probably need to be discarded (for environments that aren't the first one to be done).
-
- The return dictionary contains:
- (optional) "observation": A dictionary of (batch, sequence + 1, *) tensors mapped to observation
- keys. NOTE that this has an extra sequence element relative to the other keys in the
- dictionary. This is because an extra observation is included for after the environment is
- terminated or truncated.
- "action": A (batch, sequence, action_dim) tensor of actions applied based on the observations (not
- including the last observations).
- "reward": A (batch, sequence) tensor of rewards received for applying the actions.
- "success": A (batch, sequence) tensor of success conditions (the only time this can be True is upon
- environment termination/truncation).
- "done": A (batch, sequence) tensor of **cumulative** done conditions. For any given batch element,
- the first True is followed by True's all the way till the end. This can be used for masking
- extraneous elements from the sequences above.
-
- Args:
- env: The batch of environments.
-        policy: The policy. Must be a PyTorch nn module.
-        env_preprocessor: Environment-specific observation preprocessing pipeline.
-        env_postprocessor: Environment-specific action postprocessing pipeline.
-        preprocessor: Policy input preprocessing pipeline.
-        postprocessor: Policy action postprocessing pipeline.
- seeds: The environments are seeded once at the start of the rollout. If provided, this argument
- specifies the seeds for each of the environments.
- return_observations: Whether to include all observations in the returned rollout data. Observations
- are returned optionally because they typically take more memory to cache. Defaults to False.
- render_callback: Optional rendering callback to be used after the environments are reset, and after
- every step.
- Returns:
- The dictionary described above.
- """
- assert isinstance(policy, nn.Module), "Policy must be a PyTorch nn module."
-
- # Reset the policy and environments.
- policy.reset()
- observation, info = env.reset(seed=seeds)
- if render_callback is not None:
- render_callback(env)
-
- all_observations = []
- all_actions = []
- all_rewards = []
- all_successes = []
- all_dones = []
-
- step = 0
- # Keep track of which environments are done.
- done = np.array([False] * env.num_envs)
- max_steps = env.call("_max_episode_steps")[0]
- progbar = trange(
- max_steps,
- desc=f"Running rollout with at most {max_steps} steps",
-        disable=inside_slurm(),  # we don't want a progress bar when using SLURM, since it clutters the logs
- leave=False,
- )
- check_env_attributes_and_types(env)
- while not np.all(done) and step < max_steps:
- # Numpy array to tensor and changing dictionary keys to LeRobot policy format.
- observation = preprocess_observation(observation)
- if return_observations:
- all_observations.append(deepcopy(observation))
-
- # Infer "task" from attributes of environments.
- # TODO: works with SyncVectorEnv but not AsyncVectorEnv
- observation = add_envs_task(env, observation)
-
- # Apply environment-specific preprocessing (e.g., LiberoProcessorStep for LIBERO)
- observation = env_preprocessor(observation)
-
- observation = preprocessor(observation)
- with torch.inference_mode():
- action = policy.select_action(observation)
- action = postprocessor(action)
-
- action_transition = {ACTION: action}
- action_transition = env_postprocessor(action_transition)
- action = action_transition[ACTION]
-
- # Convert to CPU / numpy.
- action_numpy: np.ndarray = action.to("cpu").numpy()
- assert action_numpy.ndim == 2, "Action dimensions should be (batch, action_dim)"
-
- # Apply the next action.
- observation, reward, terminated, truncated, info = env.step(action_numpy)
- if render_callback is not None:
- render_callback(env)
-
- # VectorEnv stores is_success in `info["final_info"][env_index]["is_success"]`. "final_info" isn't
- # available if none of the envs finished.
- if "final_info" in info:
- final_info = info["final_info"]
- if not isinstance(final_info, dict):
- raise RuntimeError(
- "Unsupported `final_info` format: expected dict (Gymnasium >= 1.0). "
- "You're likely using an older version of gymnasium (< 1.0). Please upgrade."
- )
- successes = final_info["is_success"].tolist()
- else:
- successes = [False] * env.num_envs
-
- # Keep track of which environments are done so far.
- # Mark the episode as done if we reach the maximum step limit.
- # This ensures that the rollout always terminates cleanly at `max_steps`,
- # and allows logging/saving (e.g., videos) to be triggered consistently.
- done = terminated | truncated | done
- if step + 1 == max_steps:
- done = np.ones_like(done, dtype=bool)
-
- all_actions.append(torch.from_numpy(action_numpy))
- all_rewards.append(torch.from_numpy(reward))
- all_dones.append(torch.from_numpy(done))
- all_successes.append(torch.tensor(successes))
-
- step += 1
- running_success_rate = (
- einops.reduce(torch.stack(all_successes, dim=1), "b n -> b", "any").numpy().mean()
- )
- progbar.set_postfix({"running_success_rate": f"{running_success_rate.item() * 100:.1f}%"})
- progbar.update()
-
- # Track the final observation.
- if return_observations:
- observation = preprocess_observation(observation)
- all_observations.append(deepcopy(observation))
-
- # Stack the sequence along the first dimension so that we have (batch, sequence, *) tensors.
- ret = {
- ACTION: torch.stack(all_actions, dim=1),
- "reward": torch.stack(all_rewards, dim=1),
- "success": torch.stack(all_successes, dim=1),
- "done": torch.stack(all_dones, dim=1),
- }
- if return_observations:
- stacked_observations = {}
- for key in all_observations[0]:
- stacked_observations[key] = torch.stack([obs[key] for obs in all_observations], dim=1)
- ret[OBS_STR] = stacked_observations
-
- if hasattr(policy, "use_original_modules"):
- policy.use_original_modules()
-
- return ret
-
-
-def eval_policy(
- env: gym.vector.VectorEnv,
- policy: PreTrainedPolicy,
- env_preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- env_postprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction],
- n_episodes: int,
- max_episodes_rendered: int = 0,
- videos_dir: Path | None = None,
- return_episode_data: bool = False,
- start_seed: int | None = None,
-) -> dict:
- """
- Args:
- env: The batch of environments.
-        policy: The policy.
-        env_preprocessor: Environment-specific observation preprocessing pipeline.
-        env_postprocessor: Environment-specific action postprocessing pipeline.
-        preprocessor: Policy input preprocessing pipeline.
-        postprocessor: Policy action postprocessing pipeline.
- n_episodes: The number of episodes to evaluate.
- max_episodes_rendered: Maximum number of episodes to render into videos.
- videos_dir: Where to save rendered videos.
- return_episode_data: Whether to return episode data for online training. Incorporates the data into
- the "episodes" key of the returned dictionary.
- start_seed: The first seed to use for the first individual rollout. For all subsequent rollouts the
- seed is incremented by 1. If not provided, the environments are not manually seeded.
- Returns:
- Dictionary with metrics and data regarding the rollouts.
- """
- if max_episodes_rendered > 0 and not videos_dir:
- raise ValueError("If max_episodes_rendered > 0, videos_dir must be provided.")
-
- if not isinstance(policy, PreTrainedPolicy):
- exc = ValueError(
- f"Policy of type 'PreTrainedPolicy' is expected, but type '{type(policy)}' was provided."
- )
- try:
- from peft import PeftModel
-
- if not isinstance(policy, PeftModel):
- raise exc
- except ImportError:
- raise exc from None
-
- start = time.time()
- policy.eval()
-
- # Determine how many batched rollouts we need to get n_episodes. Note that if n_episodes is not evenly
- # divisible by env.num_envs we end up discarding some data in the last batch.
- n_batches = n_episodes // env.num_envs + int((n_episodes % env.num_envs) != 0)
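-    # e.g. n_episodes=10 with env.num_envs=4 -> 3 batches; the 2 surplus rollouts of the last batch are discarded.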
-
- # Keep track of some metrics.
- sum_rewards = []
- max_rewards = []
- all_successes = []
- all_seeds = []
- threads = [] # for video saving threads
- n_episodes_rendered = 0 # for saving the correct number of videos
-
- # Callback for visualization.
- def render_frame(env: gym.vector.VectorEnv):
- if n_episodes_rendered >= max_episodes_rendered:
- return
- n_to_render_now = min(max_episodes_rendered - n_episodes_rendered, env.num_envs)
- if isinstance(env, gym.vector.SyncVectorEnv):
- ep_frames.append(np.stack([env.envs[i].render() for i in range(n_to_render_now)])) # noqa: B023
- elif isinstance(env, gym.vector.AsyncVectorEnv):
- # Here we must render all frames and discard any we don't need.
- ep_frames.append(np.stack(env.call("render")[:n_to_render_now]))
-
- if max_episodes_rendered > 0:
- video_paths: list[str] = []
-
- if return_episode_data:
- episode_data: dict | None = None
-
- # We don't want a progress bar when running under SLURM, since it clutters the logs.
- progbar = trange(n_batches, desc="Stepping through eval batches", disable=inside_slurm())
- for batch_ix in progbar:
- # Cache frames for rendering videos. Each item will be (b, h, w, c), and the list indexes the rollout
- # step.
- if max_episodes_rendered > 0:
- ep_frames: list[np.ndarray] = []
-
- if start_seed is None:
- seeds = None
- else:
- seeds = range(
- start_seed + (batch_ix * env.num_envs), start_seed + ((batch_ix + 1) * env.num_envs)
- )
- rollout_data = rollout(
- env=env,
- policy=policy,
- env_preprocessor=env_preprocessor,
- env_postprocessor=env_postprocessor,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- seeds=list(seeds) if seeds else None,
- return_observations=return_episode_data,
- render_callback=render_frame if max_episodes_rendered > 0 else None,
- )
-
- # Figure out where in each rollout sequence the first done condition was encountered (results after
- # this won't be included).
- n_steps = rollout_data["done"].shape[1]
- # Note: this relies on a property of argmax: that it returns the first occurrence as a tiebreaker.
- done_indices = torch.argmax(rollout_data["done"].to(int), dim=1)
-
- # Make a mask with shape (batch, n_steps) to mask out rollout data after the first done
- # (batch-element-wise). Note the `done_indices + 1` to make sure to keep the data from the done step.
- mask = (torch.arange(n_steps) <= einops.repeat(done_indices + 1, "b -> b s", s=n_steps)).int()
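- # e.g. with n_steps=6 and done_indices=[2], the mask keeps steps 0..3 for that rollout.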
- # Extend metrics.
- batch_sum_rewards = einops.reduce((rollout_data["reward"] * mask), "b n -> b", "sum")
- sum_rewards.extend(batch_sum_rewards.tolist())
- batch_max_rewards = einops.reduce((rollout_data["reward"] * mask), "b n -> b", "max")
- max_rewards.extend(batch_max_rewards.tolist())
- batch_successes = einops.reduce((rollout_data["success"] * mask), "b n -> b", "any")
- all_successes.extend(batch_successes.tolist())
- if seeds:
- all_seeds.extend(seeds)
- else:
- # Extend by one entry per rollout so the per-episode lists stay aligned (zip below uses strict=True).
- all_seeds.extend([None] * env.num_envs)
-
- # NOTE: `episode_data` is only defined when `return_episode_data` is True (initialized to None above), so it must not be touched outside this branch.
- if return_episode_data:
- this_episode_data = _compile_episode_data(
- rollout_data,
- done_indices,
- start_episode_index=batch_ix * env.num_envs,
- start_data_index=(0 if episode_data is None else (episode_data["index"][-1].item() + 1)),
- fps=env.unwrapped.metadata["render_fps"],
- )
- if episode_data is None:
- episode_data = this_episode_data
- else:
- # Some sanity checks to make sure we are correctly compiling the data.
- assert episode_data["episode_index"][-1] + 1 == this_episode_data["episode_index"][0]
- assert episode_data["index"][-1] + 1 == this_episode_data["index"][0]
- # Concatenate the episode data.
- episode_data = {k: torch.cat([episode_data[k], this_episode_data[k]]) for k in episode_data}
-
- # Maybe render video for visualization.
- if max_episodes_rendered > 0 and len(ep_frames) > 0:
- batch_stacked_frames = np.stack(ep_frames, axis=1) # (b, t, *)
- for stacked_frames, done_index in zip(
- batch_stacked_frames, done_indices.flatten().tolist(), strict=False
- ):
- if n_episodes_rendered >= max_episodes_rendered:
- break
-
- videos_dir.mkdir(parents=True, exist_ok=True)
- video_path = videos_dir / f"eval_episode_{n_episodes_rendered}.mp4"
- video_paths.append(str(video_path))
- thread = threading.Thread(
- target=write_video,
- args=(
- str(video_path),
- stacked_frames[: done_index + 1], # + 1 to capture the last observation
- env.unwrapped.metadata["render_fps"],
- ),
- )
- thread.start()
- threads.append(thread)
- n_episodes_rendered += 1
-
- progbar.set_postfix(
- {"running_success_rate": f"{np.mean(all_successes[:n_episodes]).item() * 100:.1f}%"}
- )
-
- # Wait till all video rendering threads are done.
- for thread in threads:
- thread.join()
-
- # Compile eval info.
- info = {
- "per_episode": [
- {
- "episode_ix": i,
- "sum_reward": sum_reward,
- "max_reward": max_reward,
- "success": success,
- "seed": seed,
- }
- for i, (sum_reward, max_reward, success, seed) in enumerate(
- zip(
- sum_rewards[:n_episodes],
- max_rewards[:n_episodes],
- all_successes[:n_episodes],
- all_seeds[:n_episodes],
- strict=True,
- )
- )
- ],
- "aggregated": {
- "avg_sum_reward": float(np.nanmean(sum_rewards[:n_episodes])),
- "avg_max_reward": float(np.nanmean(max_rewards[:n_episodes])),
- "pc_success": float(np.nanmean(all_successes[:n_episodes]) * 100),
- "eval_s": time.time() - start,
- "eval_ep_s": (time.time() - start) / n_episodes,
- },
- }
-
- if return_episode_data:
- info["episodes"] = episode_data
-
- if max_episodes_rendered > 0:
- info["video_paths"] = video_paths
-
- return info
-
-
-def _compile_episode_data(
- rollout_data: dict, done_indices: Tensor, start_episode_index: int, start_data_index: int, fps: float
-) -> dict:
- """Convenience function for `eval_policy(return_episode_data=True)`
-
- Compiles all the rollout data into a Hugging Face dataset.
-
- Similar logic is implemented when datasets are pushed to hub (see: `push_to_hub`).
- """
- ep_dicts = []
- total_frames = 0
- for ep_ix in range(rollout_data[ACTION].shape[0]):
- # + 2 to include the first done frame and the last observation frame.
- num_frames = done_indices[ep_ix].item() + 2
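- # e.g. done at step 3 -> num_frames == 5: steps 0..3 plus one copy-padded row (added below),
- # so that every key lines up with the 5 observation frames.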
- total_frames += num_frames
-
- # Here we do `num_frames - 1` as we don't want to include the last observation frame just yet.
- ep_dict = {
- ACTION: rollout_data[ACTION][ep_ix, : num_frames - 1],
- "episode_index": torch.tensor([start_episode_index + ep_ix] * (num_frames - 1)),
- "frame_index": torch.arange(0, num_frames - 1, 1),
- "timestamp": torch.arange(0, num_frames - 1, 1) / fps,
- DONE: rollout_data["done"][ep_ix, : num_frames - 1],
- "next.success": rollout_data["success"][ep_ix, : num_frames - 1],
- REWARD: rollout_data["reward"][ep_ix, : num_frames - 1].type(torch.float32),
- }
-
- # For the last observation frame, all other keys will just be copy padded.
- for k in ep_dict:
- ep_dict[k] = torch.cat([ep_dict[k], ep_dict[k][-1:]])
-
- for key in rollout_data[OBS_STR]:
- ep_dict[key] = rollout_data[OBS_STR][key][ep_ix, :num_frames]
-
- ep_dicts.append(ep_dict)
-
- data_dict = {}
- for key in ep_dicts[0]:
- data_dict[key] = torch.cat([x[key] for x in ep_dicts])
-
- data_dict["index"] = torch.arange(start_data_index, start_data_index + total_frames, 1)
-
- return data_dict
-
-
-@parser.wrap()
-def eval_main(cfg: EvalPipelineConfig):
- logging.info(pformat(asdict(cfg)))
-
- # Check device is available
- device = get_safe_torch_device(cfg.policy.device, log=True)
-
- torch.backends.cudnn.benchmark = True
- torch.backends.cuda.matmul.allow_tf32 = True
- set_seed(cfg.seed)
-
- logging.info(colored("Output dir:", "yellow", attrs=["bold"]) + f" {cfg.output_dir}")
-
- logging.info("Making environment.")
- envs = make_env(
- cfg.env,
- n_envs=cfg.eval.batch_size,
- use_async_envs=cfg.eval.use_async_envs,
- trust_remote_code=cfg.trust_remote_code,
- )
-
- logging.info("Making policy.")
-
- policy = make_policy(
- cfg=cfg.policy,
- env_cfg=cfg.env,
- rename_map=cfg.rename_map,
- )
-
- policy.eval()
-
- # Override the processors' device so inference runs on the detected hardware, regardless of the device the policy was trained with.
- preprocessor_overrides = {
- "device_processor": {"device": str(policy.config.device)},
- "rename_observations_processor": {"rename_map": cfg.rename_map},
- }
-
- preprocessor, postprocessor = make_pre_post_processors(
- policy_cfg=cfg.policy,
- pretrained_path=cfg.policy.pretrained_path,
- preprocessor_overrides=preprocessor_overrides,
- )
-
- # Create environment-specific preprocessor and postprocessor (e.g., for LIBERO environments)
- env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env, policy_cfg=cfg.policy)
-
- with torch.no_grad(), torch.autocast(device_type=device.type) if cfg.policy.use_amp else nullcontext():
- info = eval_policy_all(
- envs=envs,
- policy=policy,
- env_preprocessor=env_preprocessor,
- env_postprocessor=env_postprocessor,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- n_episodes=cfg.eval.n_episodes,
- max_episodes_rendered=10,
- videos_dir=Path(cfg.output_dir) / "videos",
- start_seed=cfg.seed,
- max_parallel_tasks=cfg.env.max_parallel_tasks,
- )
- print("Overall Aggregated Metrics:")
- print(info["overall"])
-
- # Print per-group stats
- for task_group, task_group_info in info["per_group"].items():
- print(f"\nAggregated Metrics for {task_group}:")
- print(task_group_info)
- # Close all vec envs
- close_envs(envs)
-
- # Save info
- with open(Path(cfg.output_dir) / "eval_info.json", "w") as f:
- json.dump(info, f, indent=2)
-
- logging.info("End of eval")
-
-
-# ---- typed payload returned by one task eval ----
-class TaskMetrics(TypedDict):
- sum_rewards: list[float]
- max_rewards: list[float]
- successes: list[bool]
- video_paths: list[str]
-
-
-ACC_KEYS = ("sum_rewards", "max_rewards", "successes", "video_paths")
-
-
-def eval_one(
- env: gym.vector.VectorEnv,
- *,
- policy: PreTrainedPolicy,
- env_preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- env_postprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction],
- n_episodes: int,
- max_episodes_rendered: int,
- videos_dir: Path | None,
- return_episode_data: bool,
- start_seed: int | None,
-) -> TaskMetrics:
- """Evaluates one task_id of one suite using the provided vec env."""
-
- task_videos_dir = videos_dir
-
- task_result = eval_policy(
- env=env,
- policy=policy,
- env_preprocessor=env_preprocessor,
- env_postprocessor=env_postprocessor,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- n_episodes=n_episodes,
- max_episodes_rendered=max_episodes_rendered,
- videos_dir=task_videos_dir,
- return_episode_data=return_episode_data,
- start_seed=start_seed,
- )
-
- per_episode = task_result["per_episode"]
- return TaskMetrics(
- sum_rewards=[ep["sum_reward"] for ep in per_episode],
- max_rewards=[ep["max_reward"] for ep in per_episode],
- successes=[ep["success"] for ep in per_episode],
- video_paths=task_result.get("video_paths", []),
- )
-
-
-def run_one(
- task_group: str,
- task_id: int,
- env,
- *,
- policy,
- env_preprocessor,
- env_postprocessor,
- preprocessor,
- postprocessor,
- n_episodes: int,
- max_episodes_rendered: int,
- videos_dir: Path | None,
- return_episode_data: bool,
- start_seed: int | None,
-):
- """
- Run eval_one for a single (task_group, task_id, env).
- Returns (task_group, task_id, task_metrics_dict).
- This function is intentionally module-level to make it easy to test.
- """
- task_videos_dir = None
- if videos_dir is not None:
- task_videos_dir = videos_dir / f"{task_group}_{task_id}"
- task_videos_dir.mkdir(parents=True, exist_ok=True)
-
- # Call the existing eval_one (assumed to return TaskMetrics-like dict)
- metrics = eval_one(
- env,
- policy=policy,
- env_preprocessor=env_preprocessor,
- env_postprocessor=env_postprocessor,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- n_episodes=n_episodes,
- max_episodes_rendered=max_episodes_rendered,
- videos_dir=task_videos_dir,
- return_episode_data=return_episode_data,
- start_seed=start_seed,
- )
- # ensure we always provide video_paths key to simplify accumulation
- if max_episodes_rendered > 0:
- metrics.setdefault("video_paths", [])
- return task_group, task_id, metrics
-
-
-def eval_policy_all(
- envs: dict[str, dict[int, gym.vector.VectorEnv]],
- policy,
- env_preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- env_postprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction],
- n_episodes: int,
- *,
- max_episodes_rendered: int = 0,
- videos_dir: Path | None = None,
- return_episode_data: bool = False,
- start_seed: int | None = None,
- max_parallel_tasks: int = 1,
-) -> dict:
- """
- Evaluate a nested `envs` dict: {task_group: {task_id: vec_env}}.
- This implementation flattens tasks, runs them sequentially or via ThreadPoolExecutor,
- accumulates per-group and overall statistics, and returns the same aggregate metrics
- schema as the single-env evaluator (avg_sum_reward / avg_max_reward / pc_success / timings)
- plus per-task infos.
- """
- start_t = time.time()
-
- # Flatten envs into list of (task_group, task_id, env)
- tasks = [(tg, tid, vec) for tg, group in envs.items() for tid, vec in group.items()]
-
- # accumulators: track metrics at both per-group level and across all groups
- group_acc: dict[str, dict[str, list]] = defaultdict(lambda: {k: [] for k in ACC_KEYS})
- overall: dict[str, list] = {k: [] for k in ACC_KEYS}
- per_task_infos: list[dict] = []
-
- # small inline helper to accumulate one task's metrics into accumulators
- def _accumulate_to(group: str, metrics: dict):
- # metrics is expected to contain 'sum_rewards', 'max_rewards', 'successes' and optionally 'video_paths'.
- # eval_one returns per-episode lists, but accept scalars too so the accumulator stays robust.
- def _append(key, value):
- if value is None:
- return
- if isinstance(value, list):
- group_acc[group][key].extend(value)
- overall[key].extend(value)
- else:
- group_acc[group][key].append(value)
- overall[key].append(value)
-
- _append("sum_rewards", metrics.get("sum_rewards"))
- _append("max_rewards", metrics.get("max_rewards"))
- _append("successes", metrics.get("successes"))
- # video_paths is list-like
- paths = metrics.get("video_paths", [])
- if paths:
- group_acc[group]["video_paths"].extend(paths)
- overall["video_paths"].extend(paths)
-
- # Choose runner (sequential vs threaded)
- task_runner = partial(
- run_one,
- policy=policy,
- env_preprocessor=env_preprocessor,
- env_postprocessor=env_postprocessor,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- n_episodes=n_episodes,
- max_episodes_rendered=max_episodes_rendered,
- videos_dir=videos_dir,
- return_episode_data=return_episode_data,
- start_seed=start_seed,
- )
-
- if max_parallel_tasks <= 1:
- # sequential path (single accumulator path on the main thread)
- # NOTE: keeping a single-threaded accumulator avoids concurrent list appends or locks
- for task_group, task_id, env in tasks:
- tg, tid, metrics = task_runner(task_group, task_id, env)
- _accumulate_to(tg, metrics)
- per_task_infos.append({"task_group": tg, "task_id": tid, "metrics": metrics})
- else:
- # threaded path: submit all tasks, consume completions on main thread and accumulate there
- with cf.ThreadPoolExecutor(max_workers=max_parallel_tasks) as executor:
- fut2meta = {}
- for task_group, task_id, env in tasks:
- fut = executor.submit(task_runner, task_group, task_id, env)
- fut2meta[fut] = (task_group, task_id)
- for fut in cf.as_completed(fut2meta):
- tg, tid, metrics = fut.result()
- _accumulate_to(tg, metrics)
- per_task_infos.append({"task_group": tg, "task_id": tid, "metrics": metrics})
-
- # compute aggregated metrics helper (robust to lists/scalars)
- def _agg_from_list(xs):
- if not xs:
- return float("nan")
- arr = np.array(xs, dtype=float)
- return float(np.nanmean(arr))
-
- # compute per-group aggregates
- groups_aggregated = {}
- for group, acc in group_acc.items():
- groups_aggregated[group] = {
- "avg_sum_reward": _agg_from_list(acc["sum_rewards"]),
- "avg_max_reward": _agg_from_list(acc["max_rewards"]),
- "pc_success": _agg_from_list(acc["successes"]) * 100 if acc["successes"] else float("nan"),
- "n_episodes": len(acc["sum_rewards"]),
- "video_paths": list(acc["video_paths"]),
- }
-
- # overall aggregates
- overall_agg = {
- "avg_sum_reward": _agg_from_list(overall["sum_rewards"]),
- "avg_max_reward": _agg_from_list(overall["max_rewards"]),
- "pc_success": _agg_from_list(overall["successes"]) * 100 if overall["successes"] else float("nan"),
- "n_episodes": len(overall["sum_rewards"]),
- "eval_s": time.time() - start_t,
- "eval_ep_s": (time.time() - start_t) / max(1, len(overall["sum_rewards"])),
- "video_paths": list(overall["video_paths"]),
- }
-
- return {
- "per_task": per_task_infos,
- "per_group": groups_aggregated,
- "overall": overall_agg,
- }
-
-
-def main():
- init_logging()
- register_third_party_plugins()
- eval_main()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_find_cameras.py b/lerobot/src/lerobot/scripts/lerobot_find_cameras.py
deleted file mode 100644
index 32ff9ec67ef1f69da472b3233ad39d6c5ab569c6..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_find_cameras.py
+++ /dev/null
@@ -1,319 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Helper to find the camera devices available in your system.
-
-Example:
-
-```shell
-lerobot-find-cameras
-```
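-
-To capture sample images only from OpenCV cameras (arguments match this script's CLI; values are illustrative):
-
-```shell
-lerobot-find-cameras opencv --output-dir outputs/captured_images --record-time-s 4
-```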
-"""
-
-# NOTE(Steven): RealSense cameras can also be identified/opened as OpenCV cameras. If you know a camera is a RealSense, pass the `realsense` argument (`lerobot-find-cameras realsense`) to avoid confusion.
-# NOTE(Steven): macOS cameras sometimes report a different FPS at init time. This is not an issue here since we don't specify FPS when opening the cameras, but the displayed information might not be accurate.
-
-import argparse
-import concurrent.futures
-import logging
-import time
-from pathlib import Path
-from typing import Any
-
-import numpy as np
-from PIL import Image
-
-from lerobot.cameras.configs import ColorMode
-from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
-from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
-from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
-
-logger = logging.getLogger(__name__)
-
-
-def find_all_opencv_cameras() -> list[dict[str, Any]]:
- """
- Finds all available OpenCV cameras plugged into the system.
-
- Returns:
- A list of all available OpenCV cameras with their metadata.
- """
- all_opencv_cameras_info: list[dict[str, Any]] = []
- logger.info("Searching for OpenCV cameras...")
- try:
- opencv_cameras = OpenCVCamera.find_cameras()
- for cam_info in opencv_cameras:
- all_opencv_cameras_info.append(cam_info)
- logger.info(f"Found {len(opencv_cameras)} OpenCV cameras.")
- except Exception as e:
- logger.error(f"Error finding OpenCV cameras: {e}")
-
- return all_opencv_cameras_info
-
-
-def find_all_realsense_cameras() -> list[dict[str, Any]]:
- """
- Finds all available RealSense cameras plugged into the system.
-
- Returns:
- A list of all available RealSense cameras with their metadata.
- """
- all_realsense_cameras_info: list[dict[str, Any]] = []
- logger.info("Searching for RealSense cameras...")
- try:
- realsense_cameras = RealSenseCamera.find_cameras()
- for cam_info in realsense_cameras:
- all_realsense_cameras_info.append(cam_info)
- logger.info(f"Found {len(realsense_cameras)} RealSense cameras.")
- except ImportError:
- logger.warning("Skipping RealSense camera search: pyrealsense2 library not found or not importable.")
- except Exception as e:
- logger.error(f"Error finding RealSense cameras: {e}")
-
- return all_realsense_cameras_info
-
-
-def find_and_print_cameras(camera_type_filter: str | None = None) -> list[dict[str, Any]]:
- """
- Finds available cameras based on an optional filter and prints their information.
-
- Args:
- camera_type_filter: Optional string to filter cameras ("realsense" or "opencv").
- If None, lists all cameras.
-
- Returns:
- A list of all available cameras matching the filter, with their metadata.
- """
- all_cameras_info: list[dict[str, Any]] = []
-
- if camera_type_filter:
- camera_type_filter = camera_type_filter.lower()
-
- if camera_type_filter is None or camera_type_filter == "opencv":
- all_cameras_info.extend(find_all_opencv_cameras())
- if camera_type_filter is None or camera_type_filter == "realsense":
- all_cameras_info.extend(find_all_realsense_cameras())
-
- if not all_cameras_info:
- if camera_type_filter:
- logger.warning(f"No {camera_type_filter} cameras were detected.")
- else:
- logger.warning("No cameras (OpenCV or RealSense) were detected.")
- else:
- print("\n--- Detected Cameras ---")
- for i, cam_info in enumerate(all_cameras_info):
- print(f"Camera #{i}:")
- for key, value in cam_info.items():
- if key == "default_stream_profile" and isinstance(value, dict):
- print(f" {key.replace('_', ' ').capitalize()}:")
- for sub_key, sub_value in value.items():
- print(f" {sub_key.capitalize()}: {sub_value}")
- else:
- print(f" {key.replace('_', ' ').capitalize()}: {value}")
- print("-" * 20)
- return all_cameras_info
-
-
-def save_image(
- img_array: np.ndarray,
- camera_identifier: str | int,
- images_dir: Path,
- camera_type: str,
-):
- """
- Saves a single image to disk using Pillow. Handles color conversion if necessary.
- """
- try:
- img = Image.fromarray(img_array, mode="RGB")
-
- safe_identifier = str(camera_identifier).replace("/", "_").replace("\\", "_")
- filename_prefix = f"{camera_type.lower()}_{safe_identifier}"
- filename = f"{filename_prefix}.png"
-
- path = images_dir / filename
- path.parent.mkdir(parents=True, exist_ok=True)
- img.save(str(path))
- logger.info(f"Saved image: {path}")
- except Exception as e:
- logger.error(f"Failed to save image for camera {camera_identifier} (type {camera_type}): {e}")
-
-
-def create_camera_instance(cam_meta: dict[str, Any]) -> dict[str, Any] | None:
- """Create and connect to a camera instance based on metadata."""
- cam_type = cam_meta.get("type")
- cam_id = cam_meta.get("id")
- instance = None
-
- logger.info(f"Preparing {cam_type} ID {cam_id} with default profile")
-
- try:
- if cam_type == "OpenCV":
- cv_config = OpenCVCameraConfig(
- index_or_path=cam_id,
- color_mode=ColorMode.RGB,
- )
- instance = OpenCVCamera(cv_config)
- elif cam_type == "RealSense":
- rs_config = RealSenseCameraConfig(
- serial_number_or_name=cam_id,
- color_mode=ColorMode.RGB,
- )
- instance = RealSenseCamera(rs_config)
- else:
- logger.warning(f"Unknown camera type: {cam_type} for ID {cam_id}. Skipping.")
- return None
-
- if instance:
- logger.info(f"Connecting to {cam_type} camera: {cam_id}...")
- instance.connect(warmup=True)
- return {"instance": instance, "meta": cam_meta}
- except Exception as e:
- logger.error(f"Failed to connect or configure {cam_type} camera {cam_id}: {e}")
- if instance and instance.is_connected:
- instance.disconnect()
- return None
-
-
-def process_camera_image(cam_dict: dict[str, Any], output_dir: Path, current_time: float) -> None:
- """Capture an image from a single camera and save it to disk."""
- cam = cam_dict["instance"]
- meta = cam_dict["meta"]
- cam_type_str = str(meta.get("type", "unknown"))
- cam_id_str = str(meta.get("id", "unknown"))
-
- try:
- image_data = cam.read()
-
- save_image(
- image_data,
- cam_id_str,
- output_dir,
- cam_type_str,
- )
- except TimeoutError:
- logger.warning(
- f"Timeout reading from {cam_type_str} camera {cam_id_str} at time {current_time:.2f}s."
- )
- except Exception as e:
- logger.error(f"Error reading from {cam_type_str} camera {cam_id_str}: {e}")
- return None
-
-
-def cleanup_cameras(cameras_to_use: list[dict[str, Any]]):
- """Disconnect all cameras."""
- logger.info(f"Disconnecting {len(cameras_to_use)} cameras...")
- for cam_dict in cameras_to_use:
- try:
- if cam_dict["instance"] and cam_dict["instance"].is_connected:
- cam_dict["instance"].disconnect()
- except Exception as e:
- logger.error(f"Error disconnecting camera {cam_dict['meta'].get('id')}: {e}")
-
-
-def save_images_from_all_cameras(
- output_dir: Path,
- record_time_s: float = 2.0,
- camera_type: str | None = None,
-):
- """
- Connects to detected cameras (optionally filtered by type) and saves images from each.
- Uses default stream profiles for width, height, and FPS.
-
- Args:
- output_dir: Directory to save images.
- record_time_s: Duration in seconds to record images.
- camera_type: Optional string to filter cameras ("realsense" or "opencv").
- If None, uses all detected cameras.
- """
- output_dir.mkdir(parents=True, exist_ok=True)
- logger.info(f"Saving images to {output_dir}")
- all_camera_metadata = find_and_print_cameras(camera_type_filter=camera_type)
-
- if not all_camera_metadata:
- logger.warning("No cameras detected matching the criteria. Cannot save images.")
- return
-
- cameras_to_use = []
- for cam_meta in all_camera_metadata:
- camera_instance = create_camera_instance(cam_meta)
- if camera_instance:
- cameras_to_use.append(camera_instance)
-
- if not cameras_to_use:
- logger.warning("No cameras could be connected. Aborting image save.")
- return
-
- logger.info(f"Starting image capture for {record_time_s} seconds from {len(cameras_to_use)} cameras.")
- start_time = time.perf_counter()
-
- with concurrent.futures.ThreadPoolExecutor(max_workers=len(cameras_to_use) * 2) as executor:
- try:
- while time.perf_counter() - start_time < record_time_s:
- futures = []
- current_capture_time = time.perf_counter()
-
- for cam_dict in cameras_to_use:
- # Submit reads to the pool so cameras are captured concurrently.
- future = executor.submit(process_camera_image, cam_dict, output_dir, current_capture_time)
- futures.append(future)
-
- if futures:
- concurrent.futures.wait(futures)
-
- except KeyboardInterrupt:
- logger.info("Capture interrupted by user.")
- finally:
- print("\nFinalizing image saving...")
- executor.shutdown(wait=True)
- cleanup_cameras(cameras_to_use)
- print(f"Image capture finished. Images saved to {output_dir}")
-
-
-def main():
- parser = argparse.ArgumentParser(
- description="Unified camera utility script for listing cameras and capturing images."
- )
-
- parser.add_argument(
- "camera_type",
- type=str,
- nargs="?",
- default=None,
- choices=["realsense", "opencv"],
- help="Specify camera type to capture from (e.g., 'realsense', 'opencv'). Captures from all if omitted.",
- )
- parser.add_argument(
- "--output-dir",
- type=Path,
- default="outputs/captured_images",
- help="Directory to save images. Default: outputs/captured_images",
- )
- parser.add_argument(
- "--record-time-s",
- type=float,
- default=6.0,
- help="Time duration to attempt capturing frames. Default: 6 seconds.",
- )
- args = parser.parse_args()
- save_images_from_all_cameras(**vars(args))
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_find_joint_limits.py b/lerobot/src/lerobot/scripts/lerobot_find_joint_limits.py
deleted file mode 100644
index 53b4d27ef6318c291480896437630e869722b3e3..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_find_joint_limits.py
+++ /dev/null
@@ -1,217 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Script to find joint limits and end-effector bounds via teleoperation.
-
-Example:
-
-```shell
-lerobot-find-joint-limits \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem58760432981 \
- --robot.id=black \
- --teleop.type=so100_leader \
- --teleop.port=/dev/tty.usbmodem58760434471 \
- --teleop.id=blue \
- --urdf_path=/SO-ARM100-main/Simulation/SO101/so101_new_calib.urdf \
- --target_frame_name=gripper \
- --teleop_time_s=30 \
- --warmup_time_s=5 \
- --control_loop_fps=30
-```
-"""
-
-import time
-from dataclasses import dataclass
-
-import draccus
-import numpy as np
-
-from lerobot.model.kinematics import RobotKinematics
-from lerobot.robots import ( # noqa: F401
- RobotConfig,
- bi_so_follower,
- koch_follower,
- make_robot_from_config,
- omx_follower,
- so_follower,
-)
-from lerobot.teleoperators import ( # noqa: F401
- TeleoperatorConfig,
- bi_so_leader,
- gamepad,
- koch_leader,
- make_teleoperator_from_config,
- omx_leader,
- so_leader,
-)
-from lerobot.utils.robot_utils import precise_sleep
-
-
-@dataclass
-class FindJointLimitsConfig:
- teleop: TeleoperatorConfig
- robot: RobotConfig
-
- # Path to URDF file for kinematics
- # NOTE: It is highly recommended to use the urdf in the SO-ARM100 repo:
- # https://github.com/TheRobotStudio/SO-ARM100/blob/main/Simulation/SO101/so101_new_calib.urdf
- urdf_path: str
- target_frame_name: str = "gripper"
-
- # Duration of the recording phase in seconds
- teleop_time_s: float = 30
- # Duration of the warmup phase in seconds
- warmup_time_s: float = 5
- # Control loop frequency
- control_loop_fps: int = 30
-
-
-@draccus.wrap()
-def find_joint_and_ee_bounds(cfg: FindJointLimitsConfig):
- teleop = make_teleoperator_from_config(cfg.teleop)
- robot = make_robot_from_config(cfg.robot)
-
- print(f"Connecting to robot: {cfg.robot.type}...")
- teleop.connect()
- robot.connect()
- print("Devices connected.")
-
- # Initialize Kinematics
- try:
- kinematics = RobotKinematics(cfg.urdf_path, cfg.target_frame_name)
- except Exception as e:
- print(f"Error initializing kinematics: {e}")
- print("Ensure URDF path and target frame name are correct.")
- robot.disconnect()
- teleop.disconnect()
- return
-
- # Initialize variables
- max_pos = None
- min_pos = None
- max_ee = None
- min_ee = None
-
- start_t = time.perf_counter()
- warmup_done = False
-
- print("\n" + "=" * 40)
- print(f" WARMUP PHASE ({cfg.warmup_time_s}s)")
- print(" Move the robot freely to ensure control works.")
- print(" Data is NOT being recorded yet.")
- print("=" * 40 + "\n")
-
- try:
- while True:
- t0 = time.perf_counter()
-
- # 1. Teleoperation Control Loop
- action = teleop.get_action()
- robot.send_action(action)
-
- # 2. Read Observations
- observation = robot.get_observation()
- joint_positions = np.array([observation[f"{key}.pos"] for key in robot.bus.motors])
-
- # 3. Calculate Kinematics
- # Forward kinematics to get (x, y, z) translation
- ee_pos = kinematics.forward_kinematics(joint_positions)[:3, 3]
-
- current_time = time.perf_counter()
- elapsed = current_time - start_t
-
- # 4. Handle Phases
- if elapsed < cfg.warmup_time_s:
- # Still in warmup
- pass
-
- else:
- # Phase Transition: Warmup -> Recording
- if not warmup_done:
- print("\n" + "=" * 40)
- print(" RECORDING STARTED")
- print(" Move robot to ALL joint limits.")
- print(" Press Ctrl+C to stop early and save results.")
- print("=" * 40 + "\n")
-
- # Initialize limits with current position at start of recording
- max_pos = joint_positions.copy()
- min_pos = joint_positions.copy()
- max_ee = ee_pos.copy()
- min_ee = ee_pos.copy()
- warmup_done = True
-
- # Update Limits
- max_ee = np.maximum(max_ee, ee_pos)
- min_ee = np.minimum(min_ee, ee_pos)
- max_pos = np.maximum(max_pos, joint_positions)
- min_pos = np.minimum(min_pos, joint_positions)
-
- # Time check
- recording_time = elapsed - cfg.warmup_time_s
- remaining = cfg.teleop_time_s - recording_time
-
- # Simple throttle for print statements (every ~1 sec)
- if int(recording_time * 100) % 100 == 0:
- print(f"Time remaining: {remaining:.1f}s", end="\r")
-
- if recording_time > cfg.teleop_time_s:
- print("\nTime limit reached.")
- break
-
- precise_sleep(max(1.0 / cfg.control_loop_fps - (time.perf_counter() - t0), 0.0))
-
- except KeyboardInterrupt:
- print("\n\nInterrupted by user. Stopping safely...")
-
- finally:
- # Safety: Disconnect devices
- print("\nDisconnecting devices...")
- robot.disconnect()
- teleop.disconnect()
-
- # Results Output
- if max_pos is not None:
- print("\n" + "=" * 40)
- print("FINAL RESULTS")
- print("=" * 40)
-
- # Rounding for readability
- r_max_ee = np.round(max_ee, 4).tolist()
- r_min_ee = np.round(min_ee, 4).tolist()
- r_max_pos = np.round(max_pos, 4).tolist()
- r_min_pos = np.round(min_pos, 4).tolist()
-
- print("\n# End Effector Bounds (x, y, z):")
- print(f"max_ee = {r_max_ee}")
- print(f"min_ee = {r_min_ee}")
-
- print("\n# Joint Position Limits (radians):")
- print(f"max_pos = {r_max_pos}")
- print(f"min_pos = {r_min_pos}")
-
- else:
- print("No data recorded (exited during warmup).")
-
-
-def main():
- find_joint_and_ee_bounds()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_find_port.py b/lerobot/src/lerobot/scripts/lerobot_find_port.py
deleted file mode 100644
index 56bc8532db8c31bae2282f787cd448771e4e7b1e..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_find_port.py
+++ /dev/null
@@ -1,69 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Helper to find the USB port associated with your MotorsBus.
-
-Example:
-
-```shell
-lerobot-find-port
-```
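-
-Sample interaction (port names are illustrative):
-
-```
-Finding all available ports for the MotorsBus.
-Ports before disconnecting: ['/dev/ttyACM0', '/dev/ttyACM1']
-Remove the USB cable from your MotorsBus and press Enter when done.
-The port of this MotorsBus is '/dev/ttyACM1'
-Reconnect the USB cable.
-```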
-"""
-
-import platform
-import time
-from pathlib import Path
-
-
-def find_available_ports():
- from serial.tools import list_ports # Part of pyserial library
-
- if platform.system() == "Windows":
- # List COM ports using pyserial
- ports = [port.device for port in list_ports.comports()]
- else: # Linux/macOS
- # List /dev/tty* ports for Unix-based systems
- ports = [str(path) for path in Path("/dev").glob("tty*")]
- return ports
-
-
-def find_port():
- print("Finding all available ports for the MotorsBus.")
- ports_before = find_available_ports()
- print("Ports before disconnecting:", ports_before)
-
- print("Remove the USB cable from your MotorsBus and press Enter when done.")
- input() # Wait for user to disconnect the device
-
- time.sleep(0.5) # Allow some time for port to be released
- ports_after = find_available_ports()
- ports_diff = list(set(ports_before) - set(ports_after))
-
- if len(ports_diff) == 1:
- port = ports_diff[0]
- print(f"The port of this MotorsBus is '{port}'")
- print("Reconnect the USB cable.")
- elif len(ports_diff) == 0:
- raise OSError(f"Could not detect the port. No difference was found ({ports_diff}).")
- else:
- raise OSError(f"Could not detect the port. More than one port was found ({ports_diff}).")
-
-
-def main():
- find_port()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_imgtransform_viz.py b/lerobot/src/lerobot/scripts/lerobot_imgtransform_viz.py
deleted file mode 100644
index 90508578e1c9abeaede0324ff2b8015b9f138ea5..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_imgtransform_viz.py
+++ /dev/null
@@ -1,134 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-""" Visualize effects of image transforms for a given configuration.
-
-This script will generate examples of transformed images as they are output by LeRobot dataset.
-Additionally, each individual transform can be visualized separately as well as examples of combined transforms
-
-Example:
-```bash
-lerobot-imgtransform-viz \
- --repo_id=lerobot/pusht \
- --episodes='[0]' \
- --image_transforms.enable=True
-```
-"""
-
-import logging
-from copy import deepcopy
-from dataclasses import replace
-from pathlib import Path
-
-import draccus
-from torchvision.transforms import ToPILImage
-
-from lerobot.configs.default import DatasetConfig
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.datasets.transforms import (
- ImageTransforms,
- ImageTransformsConfig,
- make_transform_from_config,
-)
-
-OUTPUT_DIR = Path("outputs/image_transforms")
-to_pil = ToPILImage()
-
-
-def save_all_transforms(cfg: ImageTransformsConfig, original_frame, output_dir, n_examples):
- output_dir_all = output_dir / "all"
- output_dir_all.mkdir(parents=True, exist_ok=True)
-
- tfs = ImageTransforms(cfg)
- for i in range(1, n_examples + 1):
- transformed_frame = tfs(original_frame)
- to_pil(transformed_frame).save(output_dir_all / f"{i}.png", quality=100)
-
- print("Combined transforms examples saved to:")
- print(f" {output_dir_all}")
-
-
-def save_each_transform(cfg: ImageTransformsConfig, original_frame, output_dir, n_examples):
- if not cfg.enable:
- logging.warning(
- "No single transforms will be saved, because `image_transforms.enable=False`. To enable, set `enable` to True in `ImageTransformsConfig` or in the command line with `--image_transforms.enable=True`."
- )
- return
-
- print("Individual transforms examples saved to:")
- for tf_name, tf_cfg in cfg.tfs.items():
- # Apply a few transformations with values randomly sampled in the configured min/max range
- output_dir_single = output_dir / tf_name
- output_dir_single.mkdir(parents=True, exist_ok=True)
-
- tf = make_transform_from_config(tf_cfg)
- for i in range(1, n_examples + 1):
- transformed_frame = tf(original_frame)
- to_pil(transformed_frame).save(output_dir_single / f"{i}.png", quality=100)
-
- # Apply min, max, average transformations
- tf_cfg_kwgs_min = deepcopy(tf_cfg.kwargs)
- tf_cfg_kwgs_max = deepcopy(tf_cfg.kwargs)
- tf_cfg_kwgs_avg = deepcopy(tf_cfg.kwargs)
-
- for key, (min_, max_) in tf_cfg.kwargs.items():
- avg = (min_ + max_) / 2
- tf_cfg_kwgs_min[key] = [min_, min_]
- tf_cfg_kwgs_max[key] = [max_, max_]
- tf_cfg_kwgs_avg[key] = [avg, avg]
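- # e.g. a brightness range of (0.8, 1.2) yields min=[0.8, 0.8], max=[1.2, 1.2], avg=[1.0, 1.0].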
-
- tf_min = make_transform_from_config(replace(tf_cfg, **{"kwargs": tf_cfg_kwgs_min}))
- tf_max = make_transform_from_config(replace(tf_cfg, **{"kwargs": tf_cfg_kwgs_max}))
- tf_avg = make_transform_from_config(replace(tf_cfg, **{"kwargs": tf_cfg_kwgs_avg}))
-
- tf_frame_min = tf_min(original_frame)
- tf_frame_max = tf_max(original_frame)
- tf_frame_avg = tf_avg(original_frame)
-
- to_pil(tf_frame_min).save(output_dir_single / "min.png", quality=100)
- to_pil(tf_frame_max).save(output_dir_single / "max.png", quality=100)
- to_pil(tf_frame_avg).save(output_dir_single / "mean.png", quality=100)
-
- print(f" {output_dir_single}")
-
-
-@draccus.wrap()
-def visualize_image_transforms(cfg: DatasetConfig, output_dir: Path = OUTPUT_DIR, n_examples: int = 5):
- dataset = LeRobotDataset(
- repo_id=cfg.repo_id,
- episodes=cfg.episodes,
- revision=cfg.revision,
- video_backend=cfg.video_backend,
- )
-
- output_dir = output_dir / cfg.repo_id.split("/")[-1]
- output_dir.mkdir(parents=True, exist_ok=True)
-
- # Get 1st frame from 1st camera of 1st episode
- original_frame = dataset[0][dataset.meta.camera_keys[0]]
- to_pil(original_frame).save(output_dir / "original_frame.png", quality=100)
- print("\nOriginal frame saved to:")
- print(f" {output_dir / 'original_frame.png'}.")
-
- save_all_transforms(cfg.image_transforms, original_frame, output_dir, n_examples)
- save_each_transform(cfg.image_transforms, original_frame, output_dir, n_examples)
-
-
-def main():
- visualize_image_transforms()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_info.py b/lerobot/src/lerobot/scripts/lerobot_info.py
deleted file mode 100644
index 01228d1d512bc88cdf9ee2ee8a29de77b40bb4d2..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_info.py
+++ /dev/null
@@ -1,126 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Use this script to get a quick summary of your system config.
-It should be able to run without any of LeRobot's dependencies or LeRobot itself installed.
-
-Example:
-
-```shell
-lerobot-info
-```
-"""
-
-import importlib
-import platform
-import shutil
-import subprocess
-from importlib.metadata import PackageNotFoundError, distribution
-
-PACKAGE_NAME = "lerobot"
-
-
-def get_ffmpeg_version() -> str:
- """Get the ffmpeg version if installed, otherwise return 'N/A'."""
- command_path = shutil.which("ffmpeg")
- if command_path is None:
- return "N/A"
- try:
- result = subprocess.run([command_path, "-version"], capture_output=True, text=True, check=True)
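- # The first line looks like "ffmpeg version 6.0 ...", so the third token is the version.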
- first_line = result.stdout.splitlines()[0]
- version_info = first_line.split(" ")[2]
- return version_info
- except (subprocess.SubprocessError, IndexError):
- return "Installed (version parsing failed)"
-
-
-def get_package_version(package_name: str) -> str:
- """Get the version of a package if it exists, otherwise return 'N/A'."""
- try:
- module = importlib.import_module(package_name)
- return getattr(module, "__version__", "Installed (version not found)")
- except ImportError:
- return "N/A"
-
-
-def get_sys_info() -> dict[str, str]:
- """Run this to get basic system info to help for tracking issues & bugs."""
- # General package versions
- info = {
- "LeRobot version": get_package_version(PACKAGE_NAME),
- "Platform": platform.platform(),
- "Python version": platform.python_version(),
- "Huggingface Hub version": get_package_version("huggingface_hub"),
- "Datasets version": get_package_version("datasets"),
- "Numpy version": get_package_version("numpy"),
- "FFmpeg version": get_ffmpeg_version(),
- }
-
- # PyTorch and GPU specific information
- torch_version = "N/A"
- torch_cuda_available = "N/A"
- cuda_version = "N/A"
- gpu_model = "N/A"
- try:
- import torch
-
- torch_version = str(torch.__version__)
- torch_cuda_available = torch.cuda.is_available()
- if torch_cuda_available:
- cuda_version = str(torch.version.cuda)
- # Gets the name of the first available GPU
- gpu_model = torch.cuda.get_device_name(0)
- except ImportError:
- # If torch is not installed, the default "N/A" values will be used.
- pass
-
- info.update(
- {
- "PyTorch version": torch_version,
- "Is PyTorch built with CUDA support?": str(torch_cuda_available),
- "Cuda version": cuda_version,
- "GPU model": gpu_model,
- "Using GPU in script?": "",
- }
- )
- scripts = "N/A"
- try:
- dist = distribution(PACKAGE_NAME)
- scripts = [ep.name for ep in dist.entry_points if ep.group == "console_scripts"]
- except PackageNotFoundError:
- pass
-
- info.update({f"{PACKAGE_NAME} scripts": str(scripts)})
-
- return info
-
-
-def format_dict_for_markdown(d: dict[str, str]) -> str:
- """Formats a dictionary into a markdown-friendly bulleted list."""
- return "\n".join([f"- {prop}: {val}" for prop, val in d.items()])
-
-
-def main():
- """
- Main function to print system info in markdown format.
- """
- system_info = get_sys_info()
- print(format_dict_for_markdown(system_info))
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_record.py b/lerobot/src/lerobot/scripts/lerobot_record.py
deleted file mode 100644
index 64bd9709023a3ce91528a4b7114954ecf7b4cd17..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_record.py
+++ /dev/null
@@ -1,570 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Records a dataset. Actions for the robot can be either generated by teleoperation or by a policy.
-
-Example:
-
-```shell
-lerobot-record \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.cameras="{laptop: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
- --robot.id=black \
- --dataset.repo_id=${HF_USER}/my_dataset \
- --dataset.num_episodes=2 \
- --dataset.single_task="Grab the cube" \
- --display_data=true
- # <- Optional: specify video codec (h264, hevc, libsvtav1). Default is libsvtav1. \
- # --dataset.vcodec=h264 \
- # <- Teleop optional if you want to teleoperate to record or in between episodes with a policy \
- # --teleop.type=so100_leader \
- # --teleop.port=/dev/tty.usbmodem58760431551 \
- # --teleop.id=blue \
- # <- Policy optional if you want to record with a policy \
- # --policy.path=${HF_USER}/my_policy \
-```
-
-Example recording with bimanual so100:
-```shell
-lerobot-record \
- --robot.type=bi_so_follower \
- --robot.left_arm_config.port=/dev/tty.usbmodem5A460822851 \
- --robot.right_arm_config.port=/dev/tty.usbmodem5A460814411 \
- --robot.id=bimanual_follower \
- --robot.left_arm_config.cameras='{
- wrist: {"type": "opencv", "index_or_path": 1, "width": 640, "height": 480, "fps": 30},
- top: {"type": "opencv", "index_or_path": 3, "width": 640, "height": 480, "fps": 30},
- }' --robot.right_arm_config.cameras='{
- wrist: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
- front: {"type": "opencv", "index_or_path": 4, "width": 640, "height": 480, "fps": 30},
- }' \
- --teleop.type=bi_so_leader \
- --teleop.left_arm_config.port=/dev/tty.usbmodem5A460852721 \
- --teleop.right_arm_config.port=/dev/tty.usbmodem5A460819811 \
- --teleop.id=bimanual_leader \
- --display_data=true \
- --dataset.repo_id=${HF_USER}/bimanual-so-handover-cube \
- --dataset.num_episodes=25 \
- --dataset.single_task="Grab and handover the red cube to the other arm"
-```
-"""
-
-import logging
-import time
-from dataclasses import asdict, dataclass, field
-from pathlib import Path
-from pprint import pformat
-from typing import Any
-
-from lerobot.cameras import ( # noqa: F401
- CameraConfig, # noqa: F401
-)
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig # noqa: F401
-from lerobot.cameras.reachy2_camera.configuration_reachy2_camera import Reachy2CameraConfig # noqa: F401
-from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig # noqa: F401
-from lerobot.cameras.zmq.configuration_zmq import ZMQCameraConfig # noqa: F401
-from lerobot.configs import parser
-from lerobot.configs.policies import PreTrainedConfig
-from lerobot.datasets.image_writer import safe_stop_image_writer
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features, create_initial_features
-from lerobot.datasets.utils import build_dataset_frame, combine_feature_dicts
-from lerobot.datasets.video_utils import VideoEncodingManager
-from lerobot.policies.factory import make_policy, make_pre_post_processors
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.policies.utils import make_robot_action
-from lerobot.processor import (
- PolicyAction,
- PolicyProcessorPipeline,
- RobotAction,
- RobotObservation,
- RobotProcessorPipeline,
- make_default_processors,
-)
-from lerobot.processor.rename_processor import rename_stats
-from lerobot.robots import ( # noqa: F401
- Robot,
- RobotConfig,
- bi_so_follower,
- earthrover_mini_plus,
- hope_jr,
- koch_follower,
- make_robot_from_config,
- omx_follower,
- reachy2,
- so_follower,
- unitree_g1,
-)
-from lerobot.teleoperators import ( # noqa: F401
- Teleoperator,
- TeleoperatorConfig,
- bi_so_leader,
- homunculus,
- koch_leader,
- make_teleoperator_from_config,
- omx_leader,
- reachy2_teleoperator,
- so_leader,
-)
-from lerobot.teleoperators.keyboard.teleop_keyboard import KeyboardTeleop
-from lerobot.utils.constants import ACTION, OBS_STR
-from lerobot.utils.control_utils import (
- init_keyboard_listener,
- is_headless,
- predict_action,
- sanity_check_dataset_name,
- sanity_check_dataset_robot_compatibility,
-)
-from lerobot.utils.import_utils import register_third_party_plugins
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.utils import (
- get_safe_torch_device,
- init_logging,
- log_say,
-)
-from lerobot.utils.visualization_utils import init_rerun, log_rerun_data
-
-
-@dataclass
-class DatasetRecordConfig:
- # Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).
- repo_id: str
- # A short but accurate description of the task performed during the recording (e.g. "Pick the Lego block and drop it in the box on the right.")
- single_task: str
- # Root directory where the dataset will be stored (e.g. 'dataset/path').
- root: str | Path | None = None
- # Limit the frames per second.
- fps: int = 30
- # Number of seconds for data recording for each episode.
- episode_time_s: int | float = 60
- # Number of seconds for resetting the environment after each episode.
- reset_time_s: int | float = 60
- # Number of episodes to record.
- num_episodes: int = 50
- # Encode frames in the dataset into video
- video: bool = True
- # Upload dataset to Hugging Face hub.
- push_to_hub: bool = True
- # Upload on private repository on the Hugging Face hub.
- private: bool = False
- # Add tags to your dataset on the hub.
- tags: list[str] | None = None
- # Number of subprocesses handling the saving of frames as PNG. Set to 0 to use threads only;
- # set to ≥1 to use subprocesses, each using threads to write images. The best number of processes
- # and threads depends on your system. We recommend 4 threads per camera with 0 processes.
- # If fps is unstable, adjust the thread count. If still unstable, try using 1 or more subprocesses.
- num_image_writer_processes: int = 0
- # Number of threads writing the frames as png images on disk, per camera.
- # Too many threads might cause unstable teleoperation fps due to main thread being blocked.
- # Not enough threads might cause low camera fps.
- num_image_writer_threads_per_camera: int = 4
- # Number of episodes to record before batch encoding videos
- # Set to 1 for immediate encoding (default behavior), or higher for batched encoding
- video_encoding_batch_size: int = 1
- # Video codec for encoding videos. Options: 'h264', 'hevc', 'libsvtav1'.
- # Use 'h264' for faster encoding on systems where AV1 encoding is CPU-heavy.
- vcodec: str = "libsvtav1"
- # Rename map for the observation to override the image and state keys
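- # e.g. {"observation.images.laptop": "observation.images.top"} (illustrative mapping)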
- rename_map: dict[str, str] = field(default_factory=dict)
-
- def __post_init__(self):
- if self.single_task is None:
- raise ValueError("You need to provide a task as argument in `single_task`.")
-
-
-@dataclass
-class RecordConfig:
- robot: RobotConfig
- dataset: DatasetRecordConfig
- # Whether to control the robot with a teleoperator
- teleop: TeleoperatorConfig | None = None
- # Whether to control the robot with a policy
- policy: PreTrainedConfig | None = None
- # Display all cameras on screen
- display_data: bool = False
- # Display data on a remote Rerun server
- display_ip: str | None = None
- # Port of the remote Rerun server
- display_port: int | None = None
- # Whether to display compressed images in Rerun
- display_compressed_images: bool = False
- # Use vocal synthesis to read events.
- play_sounds: bool = True
- # Resume recording on an existing dataset.
- resume: bool = False
-
- def __post_init__(self):
- # HACK: We parse again the cli args here to get the pretrained path if there was one.
- policy_path = parser.get_path_arg("policy")
-
- if policy_path:
- cli_overrides = parser.get_cli_overrides("policy")
-
- self.policy = PreTrainedConfig.from_pretrained(policy_path, cli_overrides=cli_overrides)
- self.policy.pretrained_path = policy_path
-
- if self.teleop is None and self.policy is None:
- raise ValueError("Choose a policy, a teleoperator or both to control the robot")
-
- @classmethod
- def __get_path_fields__(cls) -> list[str]:
- """This enables the parser to load config from the policy using `--policy.path=local/dir`"""
- return ["policy"]
-
-
-""" --------------- record_loop() data flow --------------------------
- [ Robot ]
- V
- [ robot.get_observation() ] ---> raw_obs
- V
- [ robot_observation_processor ] ---> processed_obs
- V
- .-----( ACTION LOGIC )------------------.
- V V
- [ From Teleoperator ] [ From Policy ]
- | |
- | [teleop.get_action] -> raw_action | [predict_action]
- | | | |
- | V | V
- | [teleop_action_processor] | |
- | | | |
- '---> processed_teleop_action '---> processed_policy_action
- | |
- '-------------------------.-------------'
- V
- [ robot_action_processor ] --> robot_action_to_send
- V
- [ robot.send_action() ] -- (Robot Executes)
- V
- ( Save to Dataset )
- V
- ( Rerun Log / Loop Wait )
-"""
-
-
-@safe_stop_image_writer
-def record_loop(
- robot: Robot,
- events: dict,
- fps: int,
- teleop_action_processor: RobotProcessorPipeline[
- tuple[RobotAction, RobotObservation], RobotAction
- ], # runs after teleop
- robot_action_processor: RobotProcessorPipeline[
- tuple[RobotAction, RobotObservation], RobotAction
- ], # runs before robot
- robot_observation_processor: RobotProcessorPipeline[
- RobotObservation, RobotObservation
- ], # runs after robot
- dataset: LeRobotDataset | None = None,
- teleop: Teleoperator | list[Teleoperator] | None = None,
- policy: PreTrainedPolicy | None = None,
- preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]] | None = None,
- postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction] | None = None,
- control_time_s: int | None = None,
- single_task: str | None = None,
- display_data: bool = False,
- display_compressed_images: bool = False,
-):
- if dataset is not None and dataset.fps != fps:
- raise ValueError(f"The dataset fps should be equal to requested fps ({dataset.fps} != {fps}).")
-
- teleop_arm = teleop_keyboard = None
- if isinstance(teleop, list):
- teleop_keyboard = next((t for t in teleop if isinstance(t, KeyboardTeleop)), None)
- teleop_arm = next(
- (
- t
- for t in teleop
- if isinstance(
- t,
- (
- so_leader.SO100Leader
- | so_leader.SO101Leader
- | koch_leader.KochLeader
- | omx_leader.OmxLeader
- ),
- )
- ),
- None,
- )
-
- if not (teleop_arm and teleop_keyboard and len(teleop) == 2 and robot.name == "lekiwi_client"):
- raise ValueError(
- "For multi-teleop, the list must contain exactly one KeyboardTeleop and one arm teleoperator. Currently only supported for LeKiwi robot."
- )
-
- # Reset policy and processor if they are provided
- if policy is not None and preprocessor is not None and postprocessor is not None:
- policy.reset()
- preprocessor.reset()
- postprocessor.reset()
-
- if control_time_s is None:
- raise ValueError("record_loop() requires `control_time_s` to be set.")
-
- timestamp = 0
- start_episode_t = time.perf_counter()
- while timestamp < control_time_s:
- start_loop_t = time.perf_counter()
-
- if events["exit_early"]:
- events["exit_early"] = False
- break
-
- # Get robot observation
- obs = robot.get_observation()
-
- # Applies a pipeline to the raw robot observation; the default is the IdentityProcessor.
- obs_processed = robot_observation_processor(obs)
-
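- # NOTE: `dataset.features` defines the frame layout below, so when a policy is used a
- # dataset is expected to be provided as well.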
- if policy is not None or dataset is not None:
- observation_frame = build_dataset_frame(dataset.features, obs_processed, prefix=OBS_STR)
-
- # Get action from either policy or teleop
- if policy is not None and preprocessor is not None and postprocessor is not None:
- action_values = predict_action(
- observation=observation_frame,
- policy=policy,
- device=get_safe_torch_device(policy.config.device),
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- use_amp=policy.config.use_amp,
- task=single_task,
- robot_type=robot.robot_type,
- )
-
- act_processed_policy: RobotAction = make_robot_action(action_values, dataset.features)
-
- elif policy is None and isinstance(teleop, Teleoperator):
- act = teleop.get_action()
-
- # Applies a pipeline to the raw teleop action; the default is the IdentityProcessor.
- act_processed_teleop = teleop_action_processor((act, obs))
-
- elif policy is None and isinstance(teleop, list):
- arm_action = teleop_arm.get_action()
- arm_action = {f"arm_{k}": v for k, v in arm_action.items()}
- keyboard_action = teleop_keyboard.get_action()
- base_action = robot._from_keyboard_to_base_action(keyboard_action)
- act = {**arm_action, **base_action} if len(base_action) > 0 else arm_action
- act_processed_teleop = teleop_action_processor((act, obs))
- else:
- logging.info(
- "No policy or teleoperator provided, skipping action generation. "
- "This is likely to happen when resetting the environment without a teleop device. "
- "The robot won't be at its rest position at the start of the next episode."
- )
- continue
-
- # Applies a pipeline to the action; the default is the IdentityProcessor.
- if policy is not None and act_processed_policy is not None:
- action_values = act_processed_policy
- robot_action_to_send = robot_action_processor((act_processed_policy, obs))
- else:
- action_values = act_processed_teleop
- robot_action_to_send = robot_action_processor((act_processed_teleop, obs))
-
- # Send action to robot
- # The action can eventually be clipped by the robot using `max_relative_target`, so the
- # action actually sent may differ from the (pre-clipping) action saved in the dataset.
- # TODO(steven, pepijn, adil): we should use a pipeline step to clip the action, so the sent action is the action that we input to the robot.
- _sent_action = robot.send_action(robot_action_to_send)
-
- # Write to dataset
- if dataset is not None:
- action_frame = build_dataset_frame(dataset.features, action_values, prefix=ACTION)
- frame = {**observation_frame, **action_frame, "task": single_task}
- dataset.add_frame(frame)
-
- if display_data:
- log_rerun_data(
- observation=obs_processed, action=action_values, compress_images=display_compressed_images
- )
-
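- # Pace the loop at the requested fps: measure the elapsed time and sleep for the rest of
- # the budget. Example: at fps=30 the budget is 1/30 ≈ 33.3 ms, so an iteration that took
- # 12 ms sleeps for ~21.3 ms.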
- dt_s = time.perf_counter() - start_loop_t
- precise_sleep(max(1 / fps - dt_s, 0.0))
-
- timestamp = time.perf_counter() - start_episode_t
-
-
-@parser.wrap()
-def record(cfg: RecordConfig) -> LeRobotDataset:
- init_logging()
- logging.info(pformat(asdict(cfg)))
- if cfg.display_data:
- init_rerun(session_name="recording", ip=cfg.display_ip, port=cfg.display_port)
- display_compressed_images = cfg.display_compressed_images or (
- cfg.display_data and cfg.display_ip is not None and cfg.display_port is not None
- )
-
- robot = make_robot_from_config(cfg.robot)
- teleop = make_teleoperator_from_config(cfg.teleop) if cfg.teleop is not None else None
-
- teleop_action_processor, robot_action_processor, robot_observation_processor = make_default_processors()
-
- dataset_features = combine_feature_dicts(
- aggregate_pipeline_dataset_features(
- pipeline=teleop_action_processor,
- initial_features=create_initial_features(
- action=robot.action_features
- ), # TODO(steven, pepijn): in the future this should come from the teleop or the policy
- use_videos=cfg.dataset.video,
- ),
- aggregate_pipeline_dataset_features(
- pipeline=robot_observation_processor,
- initial_features=create_initial_features(observation=robot.observation_features),
- use_videos=cfg.dataset.video,
- ),
- )
-
- dataset = None
- listener = None
-
- try:
- if cfg.resume:
- dataset = LeRobotDataset(
- cfg.dataset.repo_id,
- root=cfg.dataset.root,
- batch_encoding_size=cfg.dataset.video_encoding_batch_size,
- vcodec=cfg.dataset.vcodec,
- )
-
- if hasattr(robot, "cameras") and len(robot.cameras) > 0:
- dataset.start_image_writer(
- num_processes=cfg.dataset.num_image_writer_processes,
- num_threads=cfg.dataset.num_image_writer_threads_per_camera * len(robot.cameras),
- )
- sanity_check_dataset_robot_compatibility(dataset, robot, cfg.dataset.fps, dataset_features)
- else:
- # Create a new empty dataset
- sanity_check_dataset_name(cfg.dataset.repo_id, cfg.policy)
- dataset = LeRobotDataset.create(
- cfg.dataset.repo_id,
- cfg.dataset.fps,
- root=cfg.dataset.root,
- robot_type=robot.name,
- features=dataset_features,
- use_videos=cfg.dataset.video,
- image_writer_processes=cfg.dataset.num_image_writer_processes,
- image_writer_threads=cfg.dataset.num_image_writer_threads_per_camera * len(robot.cameras),
- batch_encoding_size=cfg.dataset.video_encoding_batch_size,
- vcodec=cfg.dataset.vcodec,
- )
-
- # Load pretrained policy
- policy = None if cfg.policy is None else make_policy(cfg.policy, ds_meta=dataset.meta)
- preprocessor = None
- postprocessor = None
- if cfg.policy is not None:
- preprocessor, postprocessor = make_pre_post_processors(
- policy_cfg=cfg.policy,
- pretrained_path=cfg.policy.pretrained_path,
- dataset_stats=rename_stats(dataset.meta.stats, cfg.dataset.rename_map),
- preprocessor_overrides={
- "device_processor": {"device": cfg.policy.device},
- "rename_observations_processor": {"rename_map": cfg.dataset.rename_map},
- },
- )
-
- robot.connect()
- if teleop is not None:
- teleop.connect()
-
- listener, events = init_keyboard_listener()
-
- with VideoEncodingManager(dataset):
- recorded_episodes = 0
- while recorded_episodes < cfg.dataset.num_episodes and not events["stop_recording"]:
- log_say(f"Recording episode {dataset.num_episodes}", cfg.play_sounds)
- record_loop(
- robot=robot,
- events=events,
- fps=cfg.dataset.fps,
- teleop_action_processor=teleop_action_processor,
- robot_action_processor=robot_action_processor,
- robot_observation_processor=robot_observation_processor,
- teleop=teleop,
- policy=policy,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- dataset=dataset,
- control_time_s=cfg.dataset.episode_time_s,
- single_task=cfg.dataset.single_task,
- display_data=cfg.display_data,
- display_compressed_images=display_compressed_images,
- )
-
- # Execute a few seconds without recording to give time to manually reset the environment
- # Skip reset for the last episode to be recorded
- if not events["stop_recording"] and (
- (recorded_episodes < cfg.dataset.num_episodes - 1) or events["rerecord_episode"]
- ):
- log_say("Reset the environment", cfg.play_sounds)
-
- # reset g1 robot
- if robot.name == "unitree_g1":
- robot.reset()
-
- record_loop(
- robot=robot,
- events=events,
- fps=cfg.dataset.fps,
- teleop_action_processor=teleop_action_processor,
- robot_action_processor=robot_action_processor,
- robot_observation_processor=robot_observation_processor,
- teleop=teleop,
- control_time_s=cfg.dataset.reset_time_s,
- single_task=cfg.dataset.single_task,
- display_data=cfg.display_data,
- )
-
- if events["rerecord_episode"]:
- log_say("Re-record episode", cfg.play_sounds)
- events["rerecord_episode"] = False
- events["exit_early"] = False
- dataset.clear_episode_buffer()
- continue
-
- dataset.save_episode()
- recorded_episodes += 1
- finally:
- log_say("Stop recording", cfg.play_sounds, blocking=True)
-
- if dataset:
- dataset.finalize()
-
- if robot.is_connected:
- robot.disconnect()
- if teleop and teleop.is_connected:
- teleop.disconnect()
-
- if not is_headless() and listener:
- listener.stop()
-
- if cfg.dataset.push_to_hub:
- dataset.push_to_hub(tags=cfg.dataset.tags, private=cfg.dataset.private)
-
- log_say("Exiting", cfg.play_sounds)
- return dataset
-
-
-def main():
- register_third_party_plugins()
- record()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_replay.py b/lerobot/src/lerobot/scripts/lerobot_replay.py
deleted file mode 100644
index f31b8fa28da8eceeb225d42142309b82236cfe36..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_replay.py
+++ /dev/null
@@ -1,138 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Replays the actions of an episode from a dataset on a robot.
-
-Examples:
-
-```shell
-lerobot-replay \
- --robot.type=so100_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.id=black \
- --dataset.repo_id=aliberts/record-test \
- --dataset.episode=0
-```
-
-Example replay with bimanual so100:
-```shell
-lerobot-replay \
- --robot.type=bi_so_follower \
- --robot.left_arm_port=/dev/tty.usbmodem5A460851411 \
- --robot.right_arm_port=/dev/tty.usbmodem5A460812391 \
- --robot.id=bimanual_follower \
- --dataset.repo_id=${HF_USER}/bimanual-so100-handover-cube \
- --dataset.episode=0
-```
-
-"""
-
-import logging
-import time
-from dataclasses import asdict, dataclass
-from pathlib import Path
-from pprint import pformat
-
-from lerobot.configs import parser
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.processor import (
- make_default_robot_action_processor,
-)
-from lerobot.robots import ( # noqa: F401
- Robot,
- RobotConfig,
- bi_so_follower,
- earthrover_mini_plus,
- hope_jr,
- koch_follower,
- make_robot_from_config,
- omx_follower,
- reachy2,
- so_follower,
- unitree_g1,
-)
-from lerobot.utils.constants import ACTION
-from lerobot.utils.import_utils import register_third_party_plugins
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.utils import (
- init_logging,
- log_say,
-)
-
-
-@dataclass
-class DatasetReplayConfig:
- # Dataset identifier. By convention it should match '{hf_username}/{dataset_name}' (e.g. `lerobot/test`).
- repo_id: str
- # Episode to replay.
- episode: int
- # Root directory where the dataset will be stored (e.g. 'dataset/path').
- root: str | Path | None = None
- # Limit the frames per second. Note: the replay loop currently paces itself with the dataset fps.
- fps: int = 30
-
-
-@dataclass
-class ReplayConfig:
- robot: RobotConfig
- dataset: DatasetReplayConfig
- # Use vocal synthesis to read events.
- play_sounds: bool = True
-
-
-@parser.wrap()
-def replay(cfg: ReplayConfig):
- init_logging()
- logging.info(pformat(asdict(cfg)))
-
- robot_action_processor = make_default_robot_action_processor()
-
- robot = make_robot_from_config(cfg.robot)
- dataset = LeRobotDataset(cfg.dataset.repo_id, root=cfg.dataset.root, episodes=[cfg.dataset.episode])
-
- # Filter dataset to only include frames from the specified episode since episodes are chunked in dataset V3.0
- episode_frames = dataset.hf_dataset.filter(lambda x: x["episode_index"] == cfg.dataset.episode)
- actions = episode_frames.select_columns(ACTION)
-
- robot.connect()
-
- log_say("Replaying episode", cfg.play_sounds, blocking=True)
- for idx in range(len(episode_frames)):
- start_loop_t = time.perf_counter()
-
- action_array = actions[idx][ACTION]
- action = {}
- for i, name in enumerate(dataset.features[ACTION]["names"]):
- action[name] = action_array[i]
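- # e.g. with names=["shoulder_pan.pos", "gripper.pos"] (illustrative names) and
- # action_array=[0.12, 0.98], this builds {"shoulder_pan.pos": 0.12, "gripper.pos": 0.98}.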
-
- robot_obs = robot.get_observation()
-
- processed_action = robot_action_processor((action, robot_obs))
-
- _ = robot.send_action(processed_action)
-
- dt_s = time.perf_counter() - start_loop_t
- precise_sleep(max(1 / dataset.fps - dt_s, 0.0))
-
- robot.disconnect()
-
-
-def main():
- register_third_party_plugins()
- replay()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_setup_motors.py b/lerobot/src/lerobot/scripts/lerobot_setup_motors.py
deleted file mode 100644
index 95b9d8f6e5be07b7612ee5ec3272d1e24b656931..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_setup_motors.py
+++ /dev/null
@@ -1,92 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Helper to set motor ids and baudrate.
-
-Example:
-
-```shell
-lerobot-setup-motors \
- --teleop.type=so100_leader \
- --teleop.port=/dev/tty.usbmodem575E0031751
-```
-"""
-
-from dataclasses import dataclass
-
-import draccus
-
-from lerobot.robots import ( # noqa: F401
- RobotConfig,
- bi_so_follower,
- koch_follower,
- lekiwi,
- make_robot_from_config,
- omx_follower,
- so_follower,
-)
-from lerobot.teleoperators import ( # noqa: F401
- TeleoperatorConfig,
- bi_so_leader,
- koch_leader,
- make_teleoperator_from_config,
- omx_leader,
- so_leader,
-)
-
-COMPATIBLE_DEVICES = [
- "koch_follower",
- "koch_leader",
- "omx_follower",
- "omx_leader",
- "so100_follower",
- "so100_leader",
- "so101_follower",
- "so101_leader",
- "lekiwi",
-]
-
-
-@dataclass
-class SetupConfig:
- teleop: TeleoperatorConfig | None = None
- robot: RobotConfig | None = None
-
- def __post_init__(self):
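- # Exactly one of `teleop`/`robot` must be provided; both set or both missing is rejected.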
- if bool(self.teleop) == bool(self.robot):
- raise ValueError("Choose either a teleop or a robot.")
-
- self.device = self.robot if self.robot else self.teleop
-
-
-@draccus.wrap()
-def setup_motors(cfg: SetupConfig):
- if cfg.device.type not in COMPATIBLE_DEVICES:
- raise NotImplementedError
-
- if isinstance(cfg.device, RobotConfig):
- device = make_robot_from_config(cfg.device)
- else:
- device = make_teleoperator_from_config(cfg.device)
-
- device.setup_motors()
-
-
-def main():
- setup_motors()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_teleoperate.py b/lerobot/src/lerobot/scripts/lerobot_teleoperate.py
deleted file mode 100644
index d64832b87fdfdefcaae8d51da63c1197dc5ade44..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_teleoperate.py
+++ /dev/null
@@ -1,244 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""
-Simple script to control a robot from teleoperation.
-
-Example:
-
-```shell
-lerobot-teleoperate \
- --robot.type=so101_follower \
- --robot.port=/dev/tty.usbmodem58760431541 \
- --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
- --robot.id=black \
- --teleop.type=so101_leader \
- --teleop.port=/dev/tty.usbmodem58760431551 \
- --teleop.id=blue \
- --display_data=true
-```
-
-Example teleoperation with bimanual so100:
-
-```shell
-lerobot-teleoperate \
- --robot.type=bi_so_follower \
- --robot.left_arm_config.port=/dev/tty.usbmodem5A460822851 \
- --robot.right_arm_config.port=/dev/tty.usbmodem5A460814411 \
- --robot.id=bimanual_follower \
- --robot.left_arm_config.cameras='{
- wrist: {"type": "opencv", "index_or_path": 1, "width": 640, "height": 480, "fps": 30},
- }' --robot.right_arm_config.cameras='{
- wrist: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
- }' \
- --teleop.type=bi_so_leader \
- --teleop.left_arm_config.port=/dev/tty.usbmodem5A460852721 \
- --teleop.right_arm_config.port=/dev/tty.usbmodem5A460819811 \
- --teleop.id=bimanual_leader \
- --display_data=true
-```
-
-"""
-
-import logging
-import time
-from dataclasses import asdict, dataclass
-from pprint import pformat
-
-import rerun as rr
-
-from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig # noqa: F401
-from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig # noqa: F401
-from lerobot.configs import parser
-from lerobot.processor import (
- RobotAction,
- RobotObservation,
- RobotProcessorPipeline,
- make_default_processors,
-)
-from lerobot.robots import ( # noqa: F401
- Robot,
- RobotConfig,
- bi_so_follower,
- earthrover_mini_plus,
- hope_jr,
- koch_follower,
- make_robot_from_config,
- omx_follower,
- reachy2,
- so_follower,
-)
-from lerobot.teleoperators import ( # noqa: F401
- Teleoperator,
- TeleoperatorConfig,
- bi_so_leader,
- gamepad,
- homunculus,
- keyboard,
- koch_leader,
- make_teleoperator_from_config,
- omx_leader,
- reachy2_teleoperator,
- so_leader,
-)
-from lerobot.utils.import_utils import register_third_party_plugins
-from lerobot.utils.robot_utils import precise_sleep
-from lerobot.utils.utils import init_logging, move_cursor_up
-from lerobot.utils.visualization_utils import init_rerun, log_rerun_data
-
-
-@dataclass
-class TeleoperateConfig:
- # TODO: pepijn, steven: if more robots require multiple teleoperators (like lekiwi), it's good to make this possible in teleop.py and record.py with List[Teleoperator]
- teleop: TeleoperatorConfig
- robot: RobotConfig
- # Limit the maximum frames per second.
- fps: int = 60
- teleop_time_s: float | None = None
- # Display all cameras on screen
- display_data: bool = False
- # Display data on a remote Rerun server
- display_ip: str | None = None
- # Port of the remote Rerun server
- display_port: int | None = None
- # Whether to display compressed images in Rerun
- display_compressed_images: bool = False
-
-
-def teleop_loop(
- teleop: Teleoperator,
- robot: Robot,
- fps: int,
- teleop_action_processor: RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction],
- robot_action_processor: RobotProcessorPipeline[tuple[RobotAction, RobotObservation], RobotAction],
- robot_observation_processor: RobotProcessorPipeline[RobotObservation, RobotObservation],
- display_data: bool = False,
- duration: float | None = None,
- display_compressed_images: bool = False,
-):
- """
- This function continuously reads actions from a teleoperation device, processes them through optional
- pipelines, sends them to a robot, and optionally displays the robot's state. The loop runs at a
- specified frequency until a set duration is reached or it is manually interrupted.
-
- Args:
- teleop: The teleoperator device instance providing control actions.
- robot: The robot instance being controlled.
- fps: The target frequency for the control loop in frames per second.
- teleop_action_processor: Pipeline applied to raw actions coming from the teleoperator.
- robot_action_processor: Pipeline applied to actions before they are sent to the robot.
- robot_observation_processor: Pipeline applied to raw observations coming from the robot.
- display_data: If True, fetches robot observations and displays them in the console and Rerun.
- duration: The maximum duration of the teleoperation loop in seconds. If None, the loop runs indefinitely.
- display_compressed_images: If True, compresses images before sending them to Rerun for display.
- """
-
- display_len = max(len(key) for key in robot.action_features)
- start = time.perf_counter()
-
- while True:
- loop_start = time.perf_counter()
-
- # Get robot observation.
- # For now this is only needed for visualization: the default
- # teleop_action_processor is an identity processor, so it can also
- # accept None as the observation.
- obs = robot.get_observation()
-
- # Get teleop action
- raw_action = teleop.get_action()
-
- # Process teleop action through pipeline
- teleop_action = teleop_action_processor((raw_action, obs))
-
- # Process action for robot through pipeline
- robot_action_to_send = robot_action_processor((teleop_action, obs))
-
- # Send processed action to robot (robot_action_processor.to_output should return RobotAction)
- _ = robot.send_action(robot_action_to_send)
-
- if display_data:
- # Process robot observation through pipeline
- obs_transition = robot_observation_processor(obs)
-
- log_rerun_data(
- observation=obs_transition,
- action=teleop_action,
- compress_images=display_compressed_images,
- )
-
- print("\n" + "-" * (display_len + 10))
- print(f"{'NAME':<{display_len}} | {'NORM':>7}")
- # Display the final robot action that was sent
- for motor, value in robot_action_to_send.items():
- print(f"{motor:<{display_len}} | {value:>7.2f}")
- move_cursor_up(len(robot_action_to_send) + 3)
-
- dt_s = time.perf_counter() - loop_start
- precise_sleep(max(1 / fps - dt_s, 0.0))
- loop_s = time.perf_counter() - loop_start
- print(f"Teleop loop time: {loop_s * 1e3:.2f}ms ({1 / loop_s:.0f} Hz)")
- move_cursor_up(1)
-
- if duration is not None and time.perf_counter() - start >= duration:
- return
-
-
-@parser.wrap()
-def teleoperate(cfg: TeleoperateConfig):
- init_logging()
- logging.info(pformat(asdict(cfg)))
- if cfg.display_data:
- init_rerun(session_name="teleoperation", ip=cfg.display_ip, port=cfg.display_port)
- display_compressed_images = cfg.display_compressed_images or (
- cfg.display_data and cfg.display_ip is not None and cfg.display_port is not None
- )
-
- teleop = make_teleoperator_from_config(cfg.teleop)
- robot = make_robot_from_config(cfg.robot)
- teleop_action_processor, robot_action_processor, robot_observation_processor = make_default_processors()
-
- teleop.connect()
- robot.connect()
-
- try:
- teleop_loop(
- teleop=teleop,
- robot=robot,
- fps=cfg.fps,
- display_data=cfg.display_data,
- duration=cfg.teleop_time_s,
- teleop_action_processor=teleop_action_processor,
- robot_action_processor=robot_action_processor,
- robot_observation_processor=robot_observation_processor,
- display_compressed_images=display_compressed_images,
- )
- except KeyboardInterrupt:
- pass
- finally:
- if cfg.display_data:
- rr.rerun_shutdown()
- teleop.disconnect()
- robot.disconnect()
-
-
-def main():
- register_third_party_plugins()
- teleoperate()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_train.py b/lerobot/src/lerobot/scripts/lerobot_train.py
deleted file mode 100644
index 9fa498280cbaed181ae30485ae5e8560922d36fc..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_train.py
+++ /dev/null
@@ -1,537 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import dataclasses
-import logging
-import time
-from contextlib import nullcontext
-from pprint import pformat
-from typing import Any
-
-import torch
-from accelerate import Accelerator
-from termcolor import colored
-from torch.optim import Optimizer
-
-from lerobot.configs import parser
-from lerobot.configs.train import TrainPipelineConfig
-from lerobot.datasets.factory import make_dataset
-from lerobot.datasets.sampler import EpisodeAwareSampler
-from lerobot.datasets.utils import cycle
-from lerobot.envs.factory import make_env, make_env_pre_post_processors
-from lerobot.envs.utils import close_envs
-from lerobot.optim.factory import make_optimizer_and_scheduler
-from lerobot.policies.factory import make_policy, make_pre_post_processors
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.rl.wandb_utils import WandBLogger
-from lerobot.scripts.lerobot_eval import eval_policy_all
-from lerobot.utils.import_utils import register_third_party_plugins
-from lerobot.utils.logging_utils import AverageMeter, MetricsTracker
-from lerobot.utils.random_utils import set_seed
-from lerobot.utils.train_utils import (
- get_step_checkpoint_dir,
- get_step_identifier,
- load_training_state,
- save_checkpoint,
- update_last_checkpoint,
-)
-from lerobot.utils.utils import (
- format_big_number,
- has_method,
- init_logging,
-)
-
-
-def update_policy(
- train_metrics: MetricsTracker,
- policy: PreTrainedPolicy,
- batch: Any,
- optimizer: Optimizer,
- grad_clip_norm: float,
- accelerator: Accelerator,
- lr_scheduler=None,
- lock=None,
- rabc_weights_provider=None,
-) -> tuple[MetricsTracker, dict]:
- """
- Performs a single training step to update the policy's weights.
-
- This function executes the forward and backward passes, clips gradients, and steps the optimizer and
- learning rate scheduler. Accelerator handles mixed-precision training automatically.
-
- Args:
- train_metrics: A MetricsTracker instance to record training statistics.
- policy: The policy model to be trained.
- batch: A batch of training data.
- optimizer: The optimizer used to update the policy's parameters.
- grad_clip_norm: The maximum norm for gradient clipping.
- accelerator: The Accelerator instance for distributed training and mixed precision.
- lr_scheduler: An optional learning rate scheduler.
- lock: An optional lock for thread-safe optimizer updates.
- rabc_weights_provider: Optional RABCWeights instance for sample weighting.
-
- Returns:
- A tuple containing:
- - The updated MetricsTracker with new statistics for this step.
- - A dictionary of outputs from the policy's forward pass, for logging purposes.
- """
- start_time = time.perf_counter()
- policy.train()
-
- # Get RA-BC weights if enabled
- rabc_batch_weights = None
- rabc_batch_stats = None
- if rabc_weights_provider is not None:
- rabc_batch_weights, rabc_batch_stats = rabc_weights_provider.compute_batch_weights(batch)
-
- # Let accelerator handle mixed precision
- with accelerator.autocast():
- # Use per-sample loss when RA-BC is enabled for proper weighting
- if rabc_batch_weights is not None:
- # Get per-sample losses
- per_sample_loss, output_dict = policy.forward(batch, reduction="none")
-
- # Apply RA-BC weights: L_RA-BC = Σ(w_i * l_i) / (Σw_i + ε)
- # rabc_batch_weights is already normalized to sum to batch_size
- epsilon = 1e-6
- loss = (per_sample_loss * rabc_batch_weights).sum() / (rabc_batch_weights.sum() + epsilon)
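- # Example: normalized weights [2.0, 0.0] with per-sample losses [1.0, 5.0] give
- # (2*1 + 0*5) / (2 + eps) ≈ 1.0, so zero-weight samples drop out of the loss.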
- # Log raw mean weight (before normalization) - this is the meaningful metric
- output_dict["rabc_mean_weight"] = rabc_batch_stats["raw_mean_weight"]
- output_dict["rabc_num_zero_weight"] = rabc_batch_stats["num_zero_weight"]
- output_dict["rabc_num_full_weight"] = rabc_batch_stats["num_full_weight"]
- else:
- loss, output_dict = policy.forward(batch)
-
- # TODO(rcadene): policy.unnormalize_outputs(out_dict)
-
- # Use accelerator's backward method
- accelerator.backward(loss)
-
- # Clip gradients if specified
- if grad_clip_norm > 0:
- grad_norm = accelerator.clip_grad_norm_(policy.parameters(), grad_clip_norm)
- else:
- grad_norm = torch.nn.utils.clip_grad_norm_(
- policy.parameters(), float("inf"), error_if_nonfinite=False
- )
-
- # Optimizer step
- with lock if lock is not None else nullcontext():
- optimizer.step()
-
- optimizer.zero_grad()
-
- # Step the PyTorch scheduler every batch instead of every epoch
- if lr_scheduler is not None:
- lr_scheduler.step()
-
- # Update internal buffers if policy has update method
- if has_method(accelerator.unwrap_model(policy, keep_fp32_wrapper=True), "update"):
- accelerator.unwrap_model(policy, keep_fp32_wrapper=True).update()
-
- train_metrics.loss = loss.item()
- train_metrics.grad_norm = grad_norm.item()
- train_metrics.lr = optimizer.param_groups[0]["lr"]
- train_metrics.update_s = time.perf_counter() - start_time
- return train_metrics, output_dict
-
-
-@parser.wrap()
-def train(cfg: TrainPipelineConfig, accelerator: Accelerator | None = None):
- """
- Main function to train a policy.
-
- This function orchestrates the entire training pipeline, including:
- - Setting up logging, seeding, and device configuration.
- - Creating the dataset, evaluation environment (if applicable), policy, and optimizer.
- - Handling resumption from a checkpoint.
- - Running the main training loop, which involves fetching data batches and calling `update_policy`.
- - Periodically logging metrics, saving model checkpoints, and evaluating the policy.
- - Pushing the final trained model to the Hugging Face Hub if configured.
-
- Args:
- cfg: A `TrainPipelineConfig` object containing all training configurations.
- accelerator: Optional Accelerator instance. If None, one will be created automatically.
- """
- cfg.validate()
-
- # Create Accelerator if not provided
- # It will automatically detect if running in distributed mode or single-process mode
- # We set step_scheduler_with_optimizer=False to prevent accelerate from adjusting the lr_scheduler steps based on the num_processes
- # We set find_unused_parameters=True to handle models with conditional computation
- if accelerator is None:
- from accelerate.utils import DistributedDataParallelKwargs
-
- ddp_kwargs = DistributedDataParallelKwargs(find_unused_parameters=True)
- # Accelerate auto-detects the device based on the available hardware and ignores the policy.device setting.
- # Force the device to be CPU when policy.device is set to CPU.
- force_cpu = cfg.policy.device == "cpu"
- accelerator = Accelerator(
- step_scheduler_with_optimizer=False,
- kwargs_handlers=[ddp_kwargs],
- cpu=force_cpu,
- )
-
- init_logging(accelerator=accelerator)
-
- # Determine if this is the main process (for logging and checkpointing)
- # When using accelerate, only the main process should log to avoid duplicate outputs
- is_main_process = accelerator.is_main_process
-
- # Only log on main process
- if is_main_process:
- logging.info(pformat(cfg.to_dict()))
-
- # Initialize wandb only on main process
- if cfg.wandb.enable and cfg.wandb.project and is_main_process:
- wandb_logger = WandBLogger(cfg)
- else:
- wandb_logger = None
- if is_main_process:
- logging.info(colored("Logs will be saved locally.", "yellow", attrs=["bold"]))
-
- if cfg.seed is not None:
- set_seed(cfg.seed, accelerator=accelerator)
-
- # Use accelerator's device
- device = accelerator.device
- torch.backends.cudnn.benchmark = True
- torch.backends.cuda.matmul.allow_tf32 = True
-
- # Dataset loading synchronization: main process downloads first to avoid race conditions
- if is_main_process:
- logging.info("Creating dataset")
- dataset = make_dataset(cfg)
-
- accelerator.wait_for_everyone()
-
- # Now all other processes can safely load the dataset
- if not is_main_process:
- dataset = make_dataset(cfg)
-
- # Create environment used for evaluating checkpoints during training on simulation data.
- # On real-world data there is no need to create an environment: evaluations are done outside
- # train.py, using eval.py with the gym_dora environment and dora-rs instead.
- eval_env = None
- if cfg.eval_freq > 0 and cfg.env is not None:
- if is_main_process:
- logging.info("Creating env")
- eval_env = make_env(cfg.env, n_envs=cfg.eval.batch_size, use_async_envs=cfg.eval.use_async_envs)
-
- if is_main_process:
- logging.info("Creating policy")
- policy = make_policy(
- cfg=cfg.policy,
- ds_meta=dataset.meta,
- rename_map=cfg.rename_map,
- )
-
- if cfg.peft is not None:
- logging.info("Using PEFT! Wrapping model.")
- # Convert CLI peft config to dict for overrides
- peft_cli_overrides = dataclasses.asdict(cfg.peft)
- policy = policy.wrap_with_peft(peft_cli_overrides=peft_cli_overrides)
-
- # Wait for all processes to finish policy creation before continuing
- accelerator.wait_for_everyone()
-
- # Create processors - only provide dataset_stats if not resuming from saved processors
- processor_kwargs = {}
- postprocessor_kwargs = {}
- if not (cfg.policy.pretrained_path and cfg.resume):
- # Only provide dataset_stats when not resuming from saved processor state
- processor_kwargs["dataset_stats"] = dataset.meta.stats
-
- # For SARM, always provide dataset_meta for progress normalization
- if cfg.policy.type == "sarm":
- processor_kwargs["dataset_meta"] = dataset.meta
-
- if cfg.policy.pretrained_path is not None:
- processor_kwargs["preprocessor_overrides"] = {
- "device_processor": {"device": device.type},
- "normalizer_processor": {
- "stats": dataset.meta.stats,
- "features": {**policy.config.input_features, **policy.config.output_features},
- "norm_map": policy.config.normalization_mapping,
- },
- }
- processor_kwargs["preprocessor_overrides"]["rename_observations_processor"] = {
- "rename_map": cfg.rename_map
- }
- postprocessor_kwargs["postprocessor_overrides"] = {
- "unnormalizer_processor": {
- "stats": dataset.meta.stats,
- "features": policy.config.output_features,
- "norm_map": policy.config.normalization_mapping,
- },
- }
-
- preprocessor, postprocessor = make_pre_post_processors(
- policy_cfg=cfg.policy,
- pretrained_path=cfg.policy.pretrained_path,
- **processor_kwargs,
- **postprocessor_kwargs,
- )
-
- if is_main_process:
- logging.info("Creating optimizer and scheduler")
- optimizer, lr_scheduler = make_optimizer_and_scheduler(cfg, policy)
-
- # Load precomputed SARM progress for RA-BC if enabled
- # Generate progress using: src/lerobot/policies/sarm/compute_rabc_weights.py
- rabc_weights = None
- if cfg.use_rabc:
- from lerobot.utils.rabc import RABCWeights
-
- # Get chunk_size from policy config
- chunk_size = getattr(policy.config, "chunk_size", None)
- if chunk_size is None:
- raise ValueError("Chunk size is not found in policy config")
-
- head_mode = getattr(cfg, "rabc_head_mode", "sparse")
- logging.info(f"Loading SARM progress for RA-BC from {cfg.rabc_progress_path}")
- logging.info(f"Using chunk_size={chunk_size} from policy config, head_mode={head_mode}")
- rabc_weights = RABCWeights(
- progress_path=cfg.rabc_progress_path,
- chunk_size=chunk_size,
- head_mode=head_mode,
- kappa=getattr(cfg, "rabc_kappa", 0.01),
- epsilon=getattr(cfg, "rabc_epsilon", 1e-6),
- device=device,
- )
-
- step = 0 # number of policy updates (forward + backward + optim)
-
- if cfg.resume:
- step, optimizer, lr_scheduler = load_training_state(cfg.checkpoint_path, optimizer, lr_scheduler)
-
- num_learnable_params = sum(p.numel() for p in policy.parameters() if p.requires_grad)
- num_total_params = sum(p.numel() for p in policy.parameters())
-
- if is_main_process:
- logging.info(colored("Output dir:", "yellow", attrs=["bold"]) + f" {cfg.output_dir}")
- if cfg.env is not None:
- logging.info(f"{cfg.env.task=}")
- logging.info("Creating environment processors")
- env_preprocessor, env_postprocessor = make_env_pre_post_processors(
- env_cfg=cfg.env, policy_cfg=cfg.policy
- )
- logging.info(f"{cfg.steps=} ({format_big_number(cfg.steps)})")
- logging.info(f"{dataset.num_frames=} ({format_big_number(dataset.num_frames)})")
- logging.info(f"{dataset.num_episodes=}")
- num_processes = accelerator.num_processes
- effective_bs = cfg.batch_size * num_processes
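- # e.g. batch_size=8 on 4 processes -> each optimizer step effectively sees 8 x 4 = 32 samples.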
- logging.info(f"Effective batch size: {cfg.batch_size} x {num_processes} = {effective_bs}")
- logging.info(f"{num_learnable_params=} ({format_big_number(num_learnable_params)})")
- logging.info(f"{num_total_params=} ({format_big_number(num_total_params)})")
-
- # create dataloader for offline training
- if hasattr(cfg.policy, "drop_n_last_frames"):
- shuffle = False
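- # The EpisodeAwareSampler below handles shuffling, so the DataLoader itself must not shuffle.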
- sampler = EpisodeAwareSampler(
- dataset.meta.episodes["dataset_from_index"],
- dataset.meta.episodes["dataset_to_index"],
- episode_indices_to_use=dataset.episodes,
- drop_n_last_frames=cfg.policy.drop_n_last_frames,
- shuffle=True,
- )
- else:
- shuffle = True
- sampler = None
-
- dataloader = torch.utils.data.DataLoader(
- dataset,
- num_workers=cfg.num_workers,
- batch_size=cfg.batch_size,
- shuffle=shuffle and not cfg.dataset.streaming,
- sampler=sampler,
- pin_memory=device.type == "cuda",
- drop_last=False,
- prefetch_factor=2 if cfg.num_workers > 0 else None,
- )
-
- # Prepare everything with accelerator
- accelerator.wait_for_everyone()
- policy, optimizer, dataloader, lr_scheduler = accelerator.prepare(
- policy, optimizer, dataloader, lr_scheduler
- )
- dl_iter = cycle(dataloader)
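- # cycle() restarts the dataloader once it is exhausted, so training is driven by a fixed
- # number of optimizer steps rather than by epochs.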
-
- policy.train()
-
- train_metrics = {
- "loss": AverageMeter("loss", ":.3f"),
- "grad_norm": AverageMeter("grdn", ":.3f"),
- "lr": AverageMeter("lr", ":0.1e"),
- "update_s": AverageMeter("updt_s", ":.3f"),
- "dataloading_s": AverageMeter("data_s", ":.3f"),
- }
-
- # Use effective batch size for proper epoch calculation in distributed training
- effective_batch_size = cfg.batch_size * accelerator.num_processes
- train_tracker = MetricsTracker(
- effective_batch_size,
- dataset.num_frames,
- dataset.num_episodes,
- train_metrics,
- initial_step=step,
- accelerator=accelerator,
- )
-
- if is_main_process:
- logging.info(
- f"Start offline training on a fixed dataset, with effective batch size: {effective_batch_size}"
- )
-
- for _ in range(step, cfg.steps):
- start_time = time.perf_counter()
- batch = next(dl_iter)
- batch = preprocessor(batch)
- train_tracker.dataloading_s = time.perf_counter() - start_time
-
- train_tracker, output_dict = update_policy(
- train_tracker,
- policy,
- batch,
- optimizer,
- cfg.optimizer.grad_clip_norm,
- accelerator=accelerator,
- lr_scheduler=lr_scheduler,
- rabc_weights_provider=rabc_weights,
- )
-
- # Note: eval and checkpoint happen *after* the `step`th training update has completed, so we
- # increment `step` here.
- step += 1
- train_tracker.step()
- is_log_step = cfg.log_freq > 0 and step % cfg.log_freq == 0 and is_main_process
- is_saving_step = step % cfg.save_freq == 0 or step == cfg.steps
- is_eval_step = cfg.eval_freq > 0 and step % cfg.eval_freq == 0
-
- if is_log_step:
- logging.info(train_tracker)
- if wandb_logger:
- wandb_log_dict = train_tracker.to_dict()
- if output_dict:
- wandb_log_dict.update(output_dict)
- # Log RA-BC statistics if enabled
- if rabc_weights is not None:
- rabc_stats = rabc_weights.get_stats()
- wandb_log_dict.update(
- {
- "rabc_delta_mean": rabc_stats["delta_mean"],
- "rabc_delta_std": rabc_stats["delta_std"],
- "rabc_num_frames": rabc_stats["num_frames"],
- }
- )
- wandb_logger.log_dict(wandb_log_dict, step)
- train_tracker.reset_averages()
-
- if cfg.save_checkpoint and is_saving_step:
- if is_main_process:
- logging.info(f"Checkpoint policy after step {step}")
- checkpoint_dir = get_step_checkpoint_dir(cfg.output_dir, cfg.steps, step)
- save_checkpoint(
- checkpoint_dir=checkpoint_dir,
- step=step,
- cfg=cfg,
- policy=accelerator.unwrap_model(policy),
- optimizer=optimizer,
- scheduler=lr_scheduler,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- )
- update_last_checkpoint(checkpoint_dir)
- if wandb_logger:
- wandb_logger.log_policy(checkpoint_dir)
-
- accelerator.wait_for_everyone()
-
- if cfg.env and is_eval_step:
- if is_main_process:
- step_id = get_step_identifier(step, cfg.steps)
- logging.info(f"Eval policy at step {step}")
- with torch.no_grad(), accelerator.autocast():
- eval_info = eval_policy_all(
- envs=eval_env, # dict[suite][task_id] -> vec_env
- policy=accelerator.unwrap_model(policy),
- env_preprocessor=env_preprocessor,
- env_postprocessor=env_postprocessor,
- preprocessor=preprocessor,
- postprocessor=postprocessor,
- n_episodes=cfg.eval.n_episodes,
- videos_dir=cfg.output_dir / "eval" / f"videos_step_{step_id}",
- max_episodes_rendered=4,
- start_seed=cfg.seed,
- max_parallel_tasks=cfg.env.max_parallel_tasks,
- )
- # overall metrics (suite-agnostic)
- aggregated = eval_info["overall"]
-
- # optional: per-suite logging
- for suite, suite_info in eval_info.items():
- logging.info("Suite %s aggregated: %s", suite, suite_info)
-
- # meters/tracker
- eval_metrics = {
- "avg_sum_reward": AverageMeter("∑rwrd", ":.3f"),
- "pc_success": AverageMeter("success", ":.1f"),
- "eval_s": AverageMeter("eval_s", ":.3f"),
- }
- eval_tracker = MetricsTracker(
- cfg.batch_size,
- dataset.num_frames,
- dataset.num_episodes,
- eval_metrics,
- initial_step=step,
- accelerator=accelerator,
- )
- eval_tracker.eval_s = aggregated.pop("eval_s")
- eval_tracker.avg_sum_reward = aggregated.pop("avg_sum_reward")
- eval_tracker.pc_success = aggregated.pop("pc_success")
- if wandb_logger:
- wandb_log_dict = {**eval_tracker.to_dict(), **eval_info}
- wandb_logger.log_dict(wandb_log_dict, step, mode="eval")
- wandb_logger.log_video(eval_info["overall"]["video_paths"][0], step, mode="eval")
-
- accelerator.wait_for_everyone()
-
- if eval_env:
- close_envs(eval_env)
-
- if is_main_process:
- logging.info("End of training")
-
- if cfg.policy.push_to_hub:
- unwrapped_policy = accelerator.unwrap_model(policy)
- if cfg.policy.use_peft:
- unwrapped_policy.push_model_to_hub(cfg, peft_model=unwrapped_policy)
- else:
- unwrapped_policy.push_model_to_hub(cfg)
- preprocessor.push_to_hub(cfg.policy.repo_id)
- postprocessor.push_to_hub(cfg.policy.repo_id)
-
- # Properly clean up the distributed process group
- accelerator.wait_for_everyone()
- accelerator.end_training()
-
-
-def main():
- register_third_party_plugins()
- train()
-
-
-if __name__ == "__main__":
- main()
diff --git a/lerobot/src/lerobot/scripts/lerobot_train_tokenizer.py b/lerobot/src/lerobot/scripts/lerobot_train_tokenizer.py
deleted file mode 100644
index 238168ae14fc222f625fcb36848649dcd31505ba..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/scripts/lerobot_train_tokenizer.py
+++ /dev/null
@@ -1,604 +0,0 @@
-# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Train FAST tokenizer for action encoding.
-
-This script:
-1. Loads action chunks from LeRobotDataset (with episode sampling)
-2. Optionally applies delta transforms (relative vs absolute actions)
-3. Extracts specified action dimensions for encoding
-4. Applies normalization (MEAN_STD, MIN_MAX, QUANTILES, or other modes)
-5. Trains FAST tokenizer (BPE on DCT coefficients) on the action chunks
-6. Saves tokenizer to output directory
-7. Optionally pushes tokenizer to Hugging Face Hub
-8. Reports compression statistics
-
-Example:
-
-```shell
-lerobot-train-tokenizer \
- --repo_id=user/dataset_name \
- --action_horizon=10 \
- --max_episodes=100 \
- --sample_fraction=0.1 \
- --encoded_dims="0:6" \
- --delta_dims="0,1,2,3,4,5" \
- --use_delta_transform=true \
- --state_key="observation.state" \
- --normalization_mode="QUANTILES" \
- --vocab_size=1024 \
- --scale=10.0 \
- --output_dir="./fast_tokenizer_dataset_name" \
- --push_to_hub=true \
- --hub_repo_id="user/fast_tokenizer_dataset_name" \
- --hub_private=false
-"""
-
-import json
-from dataclasses import dataclass
-from pathlib import Path
-from typing import TYPE_CHECKING
-
-import numpy as np
-import torch
-from huggingface_hub import HfApi
-
-from lerobot.utils.import_utils import _transformers_available
-
-if TYPE_CHECKING or _transformers_available:
- from transformers import AutoProcessor
-else:
- AutoProcessor = None
-
-from lerobot.configs import parser
-from lerobot.configs.types import NormalizationMode
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.utils.constants import ACTION, OBS_STATE
-
-
-@dataclass
-class TokenizerTrainingConfig:
- """Configuration for training FAST tokenizer."""
-
- # LeRobot dataset repository ID
- repo_id: str
- # Root directory for dataset (default: ~/.cache/huggingface/lerobot)
- root: str | None = None
- # Number of future actions in each chunk
- action_horizon: int = 10
- # Max episodes to use (None = all episodes in dataset)
- max_episodes: int | None = None
- # Fraction of chunks to sample per episode
- sample_fraction: float = 0.1
- # Comma-separated dimension ranges to encode (e.g., "0:6,7:23")
- encoded_dims: str = "0:6,7:23"
- # Comma-separated dimension indices for delta transform (e.g., "0,1,2,3,4,5")
- delta_dims: str | None = None
- # Whether to apply delta transform (relative actions vs absolute actions)
- use_delta_transform: bool = False
- # Dataset key for state observations (default: "observation.state")
- state_key: str = OBS_STATE
- # Normalization mode (MEAN_STD, MIN_MAX, QUANTILES, QUANTILE10, IDENTITY)
- normalization_mode: str = "QUANTILES"
- # FAST vocabulary size (BPE vocab size)
- vocab_size: int = 1024
- # DCT scaling factor (default: 10.0)
- scale: float = 10.0
- # Directory to save tokenizer (default: ./fast_tokenizer_{repo_id})
- output_dir: str | None = None
- # Whether to push the tokenizer to Hugging Face Hub
- push_to_hub: bool = False
- # Hub repository ID (e.g., "username/tokenizer-name"). If None, uses output_dir name
- hub_repo_id: str | None = None
- # Whether to create a private repository on the Hub
- hub_private: bool = False
-
-
-def apply_delta_transform(state: np.ndarray, actions: np.ndarray, delta_dims: list[int] | None) -> np.ndarray:
- """Apply delta transform to specified dimensions.
-
- Args:
- state: Current state [D]
- actions: Future actions [D]
- delta_dims: List of dimension indices to apply delta transform to
-
- Returns:
- Transformed actions [D]
- """
- if delta_dims is None or len(delta_dims) == 0:
- return actions
-
- delta_actions = actions.copy()
- for dim in delta_dims:
- delta_actions[dim] = actions[dim] - state[dim]
-
- return delta_actions
-
-
-def apply_normalization(
- data: np.ndarray,
- stats: dict[str, np.ndarray],
- mode: NormalizationMode,
- eps: float = 1e-8,
-) -> np.ndarray:
- """Apply normalization to data based on the specified mode.
-
- Args:
- data: Data to normalize [N, H, D] or [D]
- stats: Dictionary of statistics (mean, std, min, max, q01, q99, q10, q90)
- mode: Normalization mode to apply
- eps: Small epsilon for numerical stability
-
- Returns:
- Normalized data with the same shape as input
- """
- if mode == NormalizationMode.IDENTITY:
- return data
-
- if mode == NormalizationMode.MEAN_STD:
- mean = stats.get("mean")
- std = stats.get("std")
- if mean is None or std is None:
- raise ValueError("MEAN_STD mode requires 'mean' and 'std' in stats")
- return (data - mean) / np.maximum(std, eps)
-
- if mode == NormalizationMode.MIN_MAX:
- min_val = stats.get("min")
- max_val = stats.get("max")
- if min_val is None or max_val is None:
- raise ValueError("MIN_MAX mode requires 'min' and 'max' in stats")
- denom = np.maximum(max_val - min_val, eps)
- return 2.0 * (data - min_val) / denom - 1.0
-
- if mode == NormalizationMode.QUANTILES:
- q01 = stats.get("q01")
- q99 = stats.get("q99")
- if q01 is None or q99 is None:
- raise ValueError("QUANTILES mode requires 'q01' and 'q99' in stats")
- denom = np.maximum(q99 - q01, eps)
- # Clip to quantile range then normalize to [-1, 1]
- clipped = np.clip(data, q01, q99)
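- # Example: with q01=-1.0 and q99=3.0, x=1.0 stays at clipped=1.0 and maps to
- # 2*(1.0-(-1.0))/4.0 - 1.0 = 0.0.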
- return 2.0 * (clipped - q01) / denom - 1.0
-
- if mode == NormalizationMode.QUANTILE10:
- q10 = stats.get("q10")
- q90 = stats.get("q90")
- if q10 is None or q90 is None:
- raise ValueError("QUANTILE10 mode requires 'q10' and 'q90' in stats")
- denom = np.maximum(q90 - q10, eps)
- # Clip to quantile range then normalize to [-1, 1]
- clipped = np.clip(data, q10, q90)
- return 2.0 * (clipped - q10) / denom - 1.0
-
- raise ValueError(f"Unsupported normalization mode: {mode}")
-
-
-def process_episode(args):
- """Process single episode and return action chunks."""
- dataset, ep_idx, action_horizon, delta_dims, sample_fraction, state_key, use_delta_transform = args
-
- try:
- # get episode info
- ep_info = dataset.meta.episodes[ep_idx]
- from_idx = ep_info["dataset_from_index"]
- to_idx = ep_info["dataset_to_index"]
- ep_length = to_idx - from_idx
-
- if ep_length < action_horizon:
- return None
-
- # load all frames in episode
- # if dataset has episode filtering, we need to use the mapping
- states = []
- actions = []
-
- for abs_idx in range(from_idx, to_idx):
- # map absolute index to relative index if needed
- if dataset._absolute_to_relative_idx is not None:
- if abs_idx not in dataset._absolute_to_relative_idx:
- # this episode's frames aren't in the filtered dataset
- return None
- rel_idx = dataset._absolute_to_relative_idx[abs_idx]
- else:
- rel_idx = abs_idx
-
- frame = dataset.hf_dataset[rel_idx]
-
- # get state (could be from observation.state or other state key)
- if state_key in frame:
- state = (
- frame[state_key].numpy()
- if torch.is_tensor(frame[state_key])
- else np.array(frame[state_key])
- )
- else:
- # if no state key, use zeros (no delta transform)
- state = np.zeros_like(
- frame[ACTION].numpy() if torch.is_tensor(frame[ACTION]) else np.array(frame[ACTION])
- )
-
- action = frame[ACTION].numpy() if torch.is_tensor(frame[ACTION]) else np.array(frame[ACTION])
-
- states.append(state)
- actions.append(action)
-
- states = np.array(states)
- actions = np.array(actions)
-
- # create action chunks (sliding window)
- # all actions in a chunk are relative to the FIRST state in that chunk
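- # Example: an episode of 100 frames with action_horizon=10 yields 100 - 10 + 1 = 91 chunks.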
- action_chunks = []
-
- for i in range(len(states) - action_horizon + 1):
- current_state = states[i] # First state in chunk
- future_absolute_actions = actions[i : i + action_horizon]
-
- if use_delta_transform:
- # relative actions
- delta_chunk = np.zeros_like(future_absolute_actions)
- for t in range(action_horizon):
- delta_chunk[t] = apply_delta_transform(
- current_state,
- future_absolute_actions[t],
- delta_dims,
- )
- action_chunks.append(delta_chunk)
- else:
- # absolute actions (no delta)
- action_chunks.append(future_absolute_actions)
-
- if len(action_chunks) == 0:
- return None
-
- action_chunks = np.array(action_chunks)
-
- # sample chunks
- if sample_fraction < 1.0:
- n_chunks = len(action_chunks)
- n_samples = max(1, int(n_chunks * sample_fraction))
- episode_seed = hash(ep_idx) % (2**31)
- rng = np.random.RandomState(episode_seed)
- indices = rng.choice(n_chunks, size=n_samples, replace=False)
- action_chunks = action_chunks[indices]
-
- return action_chunks
-
- except Exception as e:
- print(f"Error processing episode {ep_idx}: {e}")
- import traceback
-
- traceback.print_exc()
- return None
-
-
-def train_fast_tokenizer(
- action_chunks: np.ndarray,
- vocab_size: int = 1024,
- scale: float = 10.0,
-) -> AutoProcessor:
- """
- Train FAST tokenizer (BPE on DCT coefficients) on action chunks.
-
- Uses the .fit() method to train a new tokenizer on the provided data.
-
- Args:
- action_chunks: Array of action chunks [N, H, D] where N=num_chunks, H=horizon, D=action_dim
- vocab_size: BPE vocabulary size
- scale: DCT scaling factor for quantization
-
- Returns:
- Trained FAST tokenizer
- """
- print(f"Training FAST tokenizer on {len(action_chunks)} action chunks...")
- print(f"Action chunk shape: {action_chunks.shape}")
- print(f"Vocab size: {vocab_size}")
- print(f"DCT scale: {scale}")
-
- # download the tokenizer source code (not pretrained weights)
- # we'll train a new tokenizer on our own data
- base_tokenizer = AutoProcessor.from_pretrained("physical-intelligence/fast", trust_remote_code=True)
-
- # convert action_chunks array to list of arrays (expected by .fit())
- action_data_list = [action_chunks[i] for i in range(len(action_chunks))]
-
- # train the new tokenizer on our action data using .fit()
- # this trains the BPE tokenizer on DCT coefficients
- print("Training new tokenizer (this may take a few minutes)...")
- tokenizer = base_tokenizer.fit(
- action_data_list,
- scale=scale,
- vocab_size=vocab_size,
- time_horizon=action_chunks.shape[1], # action_horizon
- action_dim=action_chunks.shape[2], # encoded dimensions
- )
- print("✓ Tokenizer training complete!")
-
- # validate it works
- sample_chunk = action_chunks[0]
- encoded = tokenizer(sample_chunk[None])[0]
- if isinstance(encoded, list):
- encoded = np.array(encoded)
- print(f"Sample encoding: {len(encoded)} tokens for chunk shape {sample_chunk.shape}")
-
- return tokenizer
-
-
-def compute_compression_stats(tokenizer, action_chunks: np.ndarray):
- """Compute compression statistics."""
- print("\nComputing compression statistics...")
-
- # sample for stats (use max 1000 chunks for speed)
- sample_size = min(1000, len(action_chunks))
- sample_indices = np.random.RandomState(42).choice(len(action_chunks), size=sample_size, replace=False)
- sample_chunks = action_chunks[sample_indices]
-
- token_lengths = []
- for chunk in sample_chunks:
- encoded = tokenizer(chunk[None])[0]
- if isinstance(encoded, list):
- token_lengths.append(len(encoded))
- else:
- token_lengths.append(encoded.shape[0] if hasattr(encoded, "shape") else len(encoded))
-
- token_lengths = np.array(token_lengths)
-
- # compression ratio: (H * D) / avg_tokens
- input_size = action_chunks.shape[1] * action_chunks.shape[2]
- avg_tokens = np.mean(token_lengths)
- compression_ratio = input_size / avg_tokens
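- # Example: H=10 and D=6 give input_size=60; an average of 20 tokens per chunk then
- # corresponds to a compression ratio of 60 / 20 = 3.0x.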
-
- stats = {
- "compression_ratio": float(compression_ratio),
- "mean_token_length": float(np.mean(token_lengths)),
- "p99_token_length": float(np.percentile(token_lengths, 99)),
- "min_token_length": float(np.min(token_lengths)),
- "max_token_length": float(np.max(token_lengths)),
- }
-
- print("Compression Statistics:")
- print(f" Average compression ratio: {stats['compression_ratio']:.2f}x")
- print(f" Mean token length: {stats['mean_token_length']:.1f}")
- print(f" P99 token length: {stats['p99_token_length']:.0f}")
- print(f" Min token length: {stats['min_token_length']:.0f}")
- print(f" Max token length: {stats['max_token_length']:.0f}")
-
- return stats
-
-
-@parser.wrap()
-def train_tokenizer(cfg: TokenizerTrainingConfig):
- """
- Train FAST tokenizer for action encoding.
-
- Args:
- cfg: TokenizerTrainingConfig dataclass with all configuration parameters
- """
- # load dataset
- print(f"Loading dataset: {cfg.repo_id}")
- dataset = LeRobotDataset(repo_id=cfg.repo_id, root=cfg.root)
- print(f"Dataset loaded: {dataset.num_episodes} episodes, {dataset.num_frames} frames")
-
- # parse normalization mode
- try:
- norm_mode = NormalizationMode(cfg.normalization_mode)
- except ValueError as err:
- raise ValueError(
- f"Invalid normalization_mode: {cfg.normalization_mode}. "
- f"Must be one of: {', '.join([m.value for m in NormalizationMode])}"
- ) from err
- print(f"Normalization mode: {norm_mode.value}")
-
- # parse encoded dimensions
- encoded_dim_ranges = []
- for range_str in cfg.encoded_dims.split(","):
- start, end = map(int, range_str.strip().split(":"))
- encoded_dim_ranges.append((start, end))
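- # e.g. encoded_dims="0:6,7:23" -> [(0, 6), (7, 23)], i.e. (6-0) + (23-7) = 22 dimensions.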
-
- total_encoded_dims = sum(end - start for start, end in encoded_dim_ranges)
- print(f"Encoding {total_encoded_dims} dimensions: {cfg.encoded_dims}")
-
- # parse delta dimensions
- delta_dim_list = None
- if cfg.delta_dims is not None and cfg.delta_dims.strip():
- delta_dim_list = [int(d.strip()) for d in cfg.delta_dims.split(",")]
- print(f"Delta dimensions: {delta_dim_list}")
- else:
- print("No delta dimensions specified")
-
- print(f"Use delta transform: {cfg.use_delta_transform}")
- if cfg.use_delta_transform and (delta_dim_list is None or len(delta_dim_list) == 0):
- print("Warning: use_delta_transform=True but no delta_dims specified. No delta will be applied.")
-
- print(f"Action horizon: {cfg.action_horizon}")
- print(f"State key: {cfg.state_key}")
-
- # determine episodes to process
- num_episodes = dataset.num_episodes
- if cfg.max_episodes is not None:
- num_episodes = min(cfg.max_episodes, num_episodes)
-
- print(f"Processing {num_episodes} episodes...")
-
- # process episodes sequentially (to avoid pickling issues with dataset)
- all_chunks = []
- for ep_idx in range(num_episodes):
- if ep_idx % 10 == 0:
- print(f" Processing episode {ep_idx}/{num_episodes}...")
-
- chunks = process_episode(
- (
- dataset,
- ep_idx,
- cfg.action_horizon,
- delta_dim_list,
- cfg.sample_fraction,
- cfg.state_key,
- cfg.use_delta_transform,
- )
- )
- if chunks is not None:
- all_chunks.append(chunks)
-
- # concatenate all chunks
- all_chunks = np.concatenate(all_chunks, axis=0)
- print(f"Collected {len(all_chunks)} action chunks")
-
- # extract only encoded dimensions FIRST (before normalization)
- encoded_chunks = []
- for start, end in encoded_dim_ranges:
- encoded_chunks.append(all_chunks[:, :, start:end])
- encoded_chunks = np.concatenate(encoded_chunks, axis=-1) # [N, H, D_encoded]
- print(f"Extracted {encoded_chunks.shape[-1]} encoded dimensions")
-
- # apply normalization to encoded dimensions
- print("\nBefore normalization - overall stats:")
- print(f" Min: {np.min(encoded_chunks):.4f}, Max: {np.max(encoded_chunks):.4f}")
- print(f" Mean: {np.mean(encoded_chunks):.4f}, Std: {np.std(encoded_chunks):.4f}")
-
- # get normalization stats from dataset
- norm_stats = dataset.meta.stats
- if norm_stats is not None and ACTION in norm_stats:
- action_stats = norm_stats[ACTION]
-
- # build encoded dimension indices
- encoded_dim_indices = []
- for start, end in encoded_dim_ranges:
- encoded_dim_indices.extend(range(start, end))
- encoded_dim_indices = np.array(encoded_dim_indices)
-
- # extract stats for encoded dimensions only
- encoded_stats = {}
- for stat_name, stat_values in action_stats.items():
- if isinstance(stat_values, (list, np.ndarray)):
- stat_array = np.array(stat_values)
- if len(stat_array) > max(encoded_dim_indices):
- encoded_stats[stat_name] = stat_array[encoded_dim_indices]
-
- if encoded_stats:
- print(f"\nNormalization stats for encoded dimensions (mode: {norm_mode.value}):")
- for stat_name, stat_values in encoded_stats.items():
- print(
- f" {stat_name}: shape={stat_values.shape}, "
- f"range=[{np.min(stat_values):.4f}, {np.max(stat_values):.4f}]"
- )
-
- # apply normalization based on mode
- try:
- encoded_chunks = apply_normalization(encoded_chunks, encoded_stats, norm_mode, eps=1e-8)
- print(f"\nApplied {norm_mode.value} normalization")
- except ValueError as e:
- print(f"Warning: {e}. Using raw actions without normalization.")
-
- print("\nAfter normalization - overall stats:")
- print(f" Min: {np.min(encoded_chunks):.4f}, Max: {np.max(encoded_chunks):.4f}")
- print(f" Mean: {np.mean(encoded_chunks):.4f}, Std: {np.std(encoded_chunks):.4f}")
-
- print("\nPer-dimension stats (after normalization):")
- for d in range(encoded_chunks.shape[-1]):
- dim_data = encoded_chunks[:, :, d]
- print(
- f" Dim {d}: min={np.min(dim_data):7.4f}, max={np.max(dim_data):7.4f}, "
- f"mean={np.mean(dim_data):7.4f}, std={np.std(dim_data):7.4f}"
- )
- else:
- print("Warning: Could not extract stats for encoded dimensions, using raw actions")
- else:
- print("Warning: No normalization stats found in dataset, using raw actions")
-
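- # Hedged sketch of what apply_normalization is assumed to compute for each
- # NormalizationMode (illustrative formulas; the library's exact implementation
- # may differ):
- #   MEAN_STD: x_norm = (x - mean) / (std + eps)
- #   MIN_MAX:  x_norm = 2 * (x - min) / (max - min + eps) - 1
-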
- print(f"Encoded chunks shape: {encoded_chunks.shape}")
-
- # train FAST tokenizer
- tokenizer = train_fast_tokenizer(
- encoded_chunks,
- vocab_size=cfg.vocab_size,
- scale=cfg.scale,
- )
-
- # compute compression statistics
- compression_stats = compute_compression_stats(tokenizer, encoded_chunks)
-
- # save tokenizer
- output_dir = cfg.output_dir
- if output_dir is None:
- output_dir = f"fast_tokenizer_{cfg.repo_id.replace('/', '_')}"
- output_path = Path(output_dir)
- output_path.mkdir(parents=True, exist_ok=True)
-
- tokenizer.save_pretrained(output_path)
-
- # save metadata
- metadata = {
- "repo_id": cfg.repo_id,
- "vocab_size": cfg.vocab_size,
- "scale": cfg.scale,
- "encoded_dims": cfg.encoded_dims,
- "encoded_dim_ranges": encoded_dim_ranges,
- "total_encoded_dims": total_encoded_dims,
- "delta_dims": cfg.delta_dims,
- "delta_dim_list": delta_dim_list,
- "use_delta_transform": cfg.use_delta_transform,
- "state_key": cfg.state_key,
- "normalization_mode": norm_mode.value,
- "action_horizon": cfg.action_horizon,
- "num_training_chunks": len(encoded_chunks),
- "compression_stats": compression_stats,
- }
-
- with open(output_path / "metadata.json", "w") as f:
- json.dump(metadata, f, indent=2)
-
- print(f"\nSaved FAST tokenizer to {output_path}")
- print(f"Metadata: {json.dumps(metadata, indent=2)}")
-
- # push to Hugging Face Hub if requested
- if cfg.push_to_hub:
- # determine the hub repository ID
- hub_repo_id = cfg.hub_repo_id
- if hub_repo_id is None:
- hub_repo_id = output_path.name
- print(f"\nNo hub_repo_id provided, using: {hub_repo_id}")
-
- print(f"\nPushing tokenizer to Hugging Face Hub: {hub_repo_id}")
- print(f" Private: {cfg.hub_private}")
-
- try:
- # use the tokenizer's push_to_hub method
- tokenizer.push_to_hub(
- repo_id=hub_repo_id,
- private=cfg.hub_private,
- commit_message=f"Upload FAST tokenizer trained on {cfg.repo_id}",
- )
-
- # also upload the metadata.json file separately
- api = HfApi()
- api.upload_file(
- path_or_fileobj=str(output_path / "metadata.json"),
- path_in_repo="metadata.json",
- repo_id=hub_repo_id,
- repo_type="model",
- commit_message="Upload tokenizer metadata",
- )
-
- print(f"Successfully pushed tokenizer to: https://huggingface.co/{hub_repo_id}")
- except Exception as e:
- print(f"Error pushing to hub: {e}")
- print(" Make sure you're logged in with `huggingface-cli login`")
-
-
-def main():
- """CLI entry point that parses arguments and runs the tokenizer training."""
- train_tokenizer()
-
-
-if __name__ == "__main__":
- main()
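-
-
-# Usage sketch (the script name and exact flags below are hypothetical; draccus
-# is assumed to map CLI flags onto TokenizerTrainingConfig fields):
-#
-#   python train_fast_tokenizer.py \
-#       --repo_id=${HF_USER}/your_dataset \
-#       --action_horizon=50 \
-#       --encoded_dims="0:6" \
-#       --push_to_hub=false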
diff --git a/lerobot/src/lerobot/teleoperators/__init__.py b/lerobot/src/lerobot/teleoperators/__init__.py
deleted file mode 100644
index 12f533c34973614f548d432a75a600047f5b42c0..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/__init__.py
+++ /dev/null
@@ -1,19 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config import TeleoperatorConfig
-from .teleoperator import Teleoperator
-from .utils import TeleopEvents, make_teleoperator_from_config
diff --git a/lerobot/src/lerobot/teleoperators/bi_so_leader/__init__.py b/lerobot/src/lerobot/teleoperators/bi_so_leader/__init__.py
deleted file mode 100644
index 79bf42811ae859d52c620b0047771c8dfc648fa6..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/bi_so_leader/__init__.py
+++ /dev/null
@@ -1,17 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .bi_so_leader import BiSOLeader, BiSOLeaderConfig
diff --git a/lerobot/src/lerobot/teleoperators/bi_so_leader/bi_so_leader.py b/lerobot/src/lerobot/teleoperators/bi_so_leader/bi_so_leader.py
deleted file mode 100644
index 66d6c2ca536618681632ef7b56eee9b110b7cb45..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/bi_so_leader/bi_so_leader.py
+++ /dev/null
@@ -1,115 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from functools import cached_property
-
-from lerobot.teleoperators.so_leader import SOLeaderTeleopConfig
-from lerobot.utils.decorators import check_if_not_connected
-
-from ..so_leader import SOLeader
-from ..teleoperator import Teleoperator
-from .config_bi_so_leader import BiSOLeaderConfig
-
-logger = logging.getLogger(__name__)
-
-
-class BiSOLeader(Teleoperator):
- """
- [Bimanual SO Leader Arms](https://github.com/TheRobotStudio/SO-ARM100) designed by TheRobotStudio
- """
-
- config_class = BiSOLeaderConfig
- name = "bi_so_leader"
-
- def __init__(self, config: BiSOLeaderConfig):
- super().__init__(config)
- self.config = config
-
- left_arm_config = SOLeaderTeleopConfig(
- id=f"{config.id}_left" if config.id else None,
- calibration_dir=config.calibration_dir,
- port=config.left_arm_config.port,
- )
-
- right_arm_config = SOLeaderTeleopConfig(
- id=f"{config.id}_right" if config.id else None,
- calibration_dir=config.calibration_dir,
- port=config.right_arm_config.port,
- )
-
- self.left_arm = SOLeader(left_arm_config)
- self.right_arm = SOLeader(right_arm_config)
-
- @cached_property
- def action_features(self) -> dict[str, type]:
- left_arm_features = self.left_arm.action_features
- right_arm_features = self.right_arm.action_features
-
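- # The merged dict looks like {"left_shoulder_pan.pos": float, ...,
- # "right_gripper.pos": float}; the joint names here are illustrative and come
- # from SOLeader.action_features at runtime.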
- return {
- **{f"left_{k}": v for k, v in left_arm_features.items()},
- **{f"right_{k}": v for k, v in right_arm_features.items()},
- }
-
- @cached_property
- def feedback_features(self) -> dict[str, type]:
- return {}
-
- @property
- def is_connected(self) -> bool:
- return self.left_arm.is_connected and self.right_arm.is_connected
-
- def connect(self, calibrate: bool = True) -> None:
- self.left_arm.connect(calibrate)
- self.right_arm.connect(calibrate)
-
- @property
- def is_calibrated(self) -> bool:
- return self.left_arm.is_calibrated and self.right_arm.is_calibrated
-
- def calibrate(self) -> None:
- self.left_arm.calibrate()
- self.right_arm.calibrate()
-
- def configure(self) -> None:
- self.left_arm.configure()
- self.right_arm.configure()
-
- def setup_motors(self) -> None:
- self.left_arm.setup_motors()
- self.right_arm.setup_motors()
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- action_dict = {}
-
- # Add "left_" prefix
- left_action = self.left_arm.get_action()
- action_dict.update({f"left_{key}": value for key, value in left_action.items()})
-
- # Add "right_" prefix
- right_action = self.right_arm.get_action()
- action_dict.update({f"right_{key}": value for key, value in right_action.items()})
-
- return action_dict
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- # TODO: Implement force feedback
- raise NotImplementedError
-
- def disconnect(self) -> None:
- self.left_arm.disconnect()
- self.right_arm.disconnect()
diff --git a/lerobot/src/lerobot/teleoperators/bi_so_leader/config_bi_so_leader.py b/lerobot/src/lerobot/teleoperators/bi_so_leader/config_bi_so_leader.py
deleted file mode 100644
index ebc67dce720cbecdd59dedda714949b8e0f91fc5..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/bi_so_leader/config_bi_so_leader.py
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from lerobot.teleoperators.so_leader import SOLeaderConfig
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("bi_so_leader")
-@dataclass
-class BiSOLeaderConfig(TeleoperatorConfig):
- """Configuration class for Bi SO Leader teleoperators."""
-
- left_arm_config: SOLeaderConfig
- right_arm_config: SOLeaderConfig
diff --git a/lerobot/src/lerobot/teleoperators/config.py b/lerobot/src/lerobot/teleoperators/config.py
deleted file mode 100644
index 91431670ce3bad5b07ec868f76c2eb3d10943c28..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/config.py
+++ /dev/null
@@ -1,31 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-from dataclasses import dataclass
-from pathlib import Path
-
-import draccus
-
-
-@dataclass(kw_only=True)
-class TeleoperatorConfig(draccus.ChoiceRegistry, abc.ABC):
- # Allows distinguishing between different teleoperators of the same type
- id: str | None = None
- # Directory to store calibration file
- calibration_dir: Path | None = None
-
- @property
- def type(self) -> str:
- return self.get_choice_name(self.__class__)
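-
-
-# Registration sketch (hypothetical subclass; concrete configs elsewhere in the
-# package follow the same pattern):
-#
-#   @TeleoperatorConfig.register_subclass("my_teleop")
-#   @dataclass
-#   class MyTeleopConfig(TeleoperatorConfig):
-#       port: str = "/dev/ttyACM0"  # illustrative field
-#
-# With draccus, `--type=my_teleop` then selects MyTeleopConfig, and `cfg.type`
-# returns the registered name via get_choice_name.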
diff --git a/lerobot/src/lerobot/teleoperators/gamepad/__init__.py b/lerobot/src/lerobot/teleoperators/gamepad/__init__.py
deleted file mode 100644
index 1cd2ef3a167f70fac6edb6a178958480bf67d2f4..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/gamepad/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .configuration_gamepad import GamepadTeleopConfig
-from .teleop_gamepad import GamepadTeleop
diff --git a/lerobot/src/lerobot/teleoperators/gamepad/configuration_gamepad.py b/lerobot/src/lerobot/teleoperators/gamepad/configuration_gamepad.py
deleted file mode 100644
index c89b45c9755969343722fb98ad8b0113e8e73686..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/gamepad/configuration_gamepad.py
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("gamepad")
-@dataclass
-class GamepadTeleopConfig(TeleoperatorConfig):
- use_gripper: bool = True
diff --git a/lerobot/src/lerobot/teleoperators/gamepad/gamepad_utils.py b/lerobot/src/lerobot/teleoperators/gamepad/gamepad_utils.py
deleted file mode 100644
index 3bb8d03818d2cf086640e29ae9c0c8c631fc80ba..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/gamepad/gamepad_utils.py
+++ /dev/null
@@ -1,460 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-
-from ..utils import TeleopEvents
-
-
-class InputController:
- """Base class for input controllers that generate motion deltas."""
-
- def __init__(self, x_step_size=1.0, y_step_size=1.0, z_step_size=1.0):
- """
- Initialize the controller.
-
- Args:
- x_step_size: Base X movement step size in meters
- y_step_size: Base Y movement step size in meters
- z_step_size: Base Z movement step size in meters
- """
- self.x_step_size = x_step_size
- self.y_step_size = y_step_size
- self.z_step_size = z_step_size
- self.running = True
- self.episode_end_status = None # None, "success", or "failure"
- self.intervention_flag = False
- self.open_gripper_command = False
- self.close_gripper_command = False
-
- def start(self):
- """Start the controller and initialize resources."""
- pass
-
- def stop(self):
- """Stop the controller and release resources."""
- pass
-
- def get_deltas(self):
- """Get the current movement deltas (dx, dy, dz) in meters."""
- return 0.0, 0.0, 0.0
-
- def update(self):
- """Update controller state - call this once per frame."""
- pass
-
- def __enter__(self):
- """Support for use in 'with' statements."""
- self.start()
- return self
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- """Ensure resources are released when exiting 'with' block."""
- self.stop()
-
- def get_episode_end_status(self):
- """
- Get the current episode end status.
-
- Returns:
- None if the episode should continue, otherwise a TeleopEvents value (SUCCESS, FAILURE, or RERECORD_EPISODE)
- """
- status = self.episode_end_status
- self.episode_end_status = None # Reset after reading
- return status
-
- def should_intervene(self):
- """Return True if intervention flag was set."""
- return self.intervention_flag
-
- def gripper_command(self):
- """Return the current gripper command."""
- if self.open_gripper_command == self.close_gripper_command:
- return "stay"
- elif self.open_gripper_command:
- return "open"
- elif self.close_gripper_command:
- return "close"
-
-
-class KeyboardController(InputController):
- """Generate motion deltas from keyboard input."""
-
- def __init__(self, x_step_size=1.0, y_step_size=1.0, z_step_size=1.0):
- super().__init__(x_step_size, y_step_size, z_step_size)
- self.key_states = {
- "forward_x": False,
- "backward_x": False,
- "forward_y": False,
- "backward_y": False,
- "forward_z": False,
- "backward_z": False,
- "quit": False,
- "success": False,
- "failure": False,
- }
- self.listener = None
-
- def start(self):
- """Start the keyboard listener."""
- from pynput import keyboard
-
- def on_press(key):
- try:
- if key == keyboard.Key.up:
- self.key_states["forward_x"] = True
- elif key == keyboard.Key.down:
- self.key_states["backward_x"] = True
- elif key == keyboard.Key.left:
- self.key_states["forward_y"] = True
- elif key == keyboard.Key.right:
- self.key_states["backward_y"] = True
- elif key == keyboard.Key.shift:
- self.key_states["backward_z"] = True
- elif key == keyboard.Key.shift_r:
- self.key_states["forward_z"] = True
- elif key == keyboard.Key.esc:
- self.key_states["quit"] = True
- self.running = False
- return False
- elif key == keyboard.Key.enter:
- self.key_states["success"] = True
- self.episode_end_status = TeleopEvents.SUCCESS
- elif key == keyboard.Key.backspace:
- self.key_states["failure"] = True
- self.episode_end_status = TeleopEvents.FAILURE
- except AttributeError:
- pass
-
- def on_release(key):
- try:
- if key == keyboard.Key.up:
- self.key_states["forward_x"] = False
- elif key == keyboard.Key.down:
- self.key_states["backward_x"] = False
- elif key == keyboard.Key.left:
- self.key_states["forward_y"] = False
- elif key == keyboard.Key.right:
- self.key_states["backward_y"] = False
- elif key == keyboard.Key.shift:
- self.key_states["backward_z"] = False
- elif key == keyboard.Key.shift_r:
- self.key_states["forward_z"] = False
- elif key == keyboard.Key.enter:
- self.key_states["success"] = False
- elif key == keyboard.Key.backspace:
- self.key_states["failure"] = False
- except AttributeError:
- pass
-
- self.listener = keyboard.Listener(on_press=on_press, on_release=on_release)
- self.listener.start()
-
- print("Keyboard controls:")
- print(" Arrow keys: Move in X-Y plane")
- print(" Shift and Shift_R: Move in Z axis")
- print(" Enter: End episode with SUCCESS")
- print(" Backspace: End episode with FAILURE")
- print(" ESC: Exit")
-
- def stop(self):
- """Stop the keyboard listener."""
- if self.listener and self.listener.is_alive():
- self.listener.stop()
-
- def get_deltas(self):
- """Get the current movement deltas from keyboard state."""
- delta_x = delta_y = delta_z = 0.0
-
- if self.key_states["forward_x"]:
- delta_x += self.x_step_size
- if self.key_states["backward_x"]:
- delta_x -= self.x_step_size
- if self.key_states["forward_y"]:
- delta_y += self.y_step_size
- if self.key_states["backward_y"]:
- delta_y -= self.y_step_size
- if self.key_states["forward_z"]:
- delta_z += self.z_step_size
- if self.key_states["backward_z"]:
- delta_z -= self.z_step_size
-
- return delta_x, delta_y, delta_z
-
-
-class GamepadController(InputController):
- """Generate motion deltas from gamepad input."""
-
- def __init__(self, x_step_size=1.0, y_step_size=1.0, z_step_size=1.0, deadzone=0.1):
- super().__init__(x_step_size, y_step_size, z_step_size)
- self.deadzone = deadzone
- self.joystick = None
- self.intervention_flag = False
-
- def start(self):
- """Initialize pygame and the gamepad."""
- import pygame
-
- pygame.init()
- pygame.joystick.init()
-
- if pygame.joystick.get_count() == 0:
- logging.error("No gamepad detected. Please connect a gamepad and try again.")
- self.running = False
- return
-
- self.joystick = pygame.joystick.Joystick(0)
- self.joystick.init()
- logging.info(f"Initialized gamepad: {self.joystick.get_name()}")
-
- print("Gamepad controls:")
- print(" Left analog stick: Move in X-Y plane")
- print(" Right analog stick (vertical): Move in Z axis")
- print(" B/Circle button: Exit")
- print(" Y/Triangle button: End episode with SUCCESS")
- print(" A/Cross button: End episode with FAILURE")
- print(" X/Square button: Rerecord episode")
-
- def stop(self):
- """Clean up pygame resources."""
- import pygame
-
- if pygame.joystick.get_init():
- if self.joystick:
- self.joystick.quit()
- pygame.joystick.quit()
- pygame.quit()
-
- def update(self):
- """Process pygame events to get fresh gamepad readings."""
- import pygame
-
- for event in pygame.event.get():
- if event.type == pygame.JOYBUTTONDOWN:
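- # Y button (3) for success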
- if event.button == 3:
- self.episode_end_status = TeleopEvents.SUCCESS
- # A button (1) for failure
- elif event.button == 1:
- self.episode_end_status = TeleopEvents.FAILURE
- # X button (0) for rerecord
- elif event.button == 0:
- self.episode_end_status = TeleopEvents.RERECORD_EPISODE
-
- # Button 6 for closing gripper
- elif event.button == 6:
- self.close_gripper_command = True
-
- # Button 7 for opening gripper
- elif event.button == 7:
- self.open_gripper_command = True
-
- # Reset episode status on button release
- elif event.type == pygame.JOYBUTTONUP:
- if event.button in [0, 1, 3]:
- self.episode_end_status = None
-
- elif event.button == 6:
- self.close_gripper_command = False
-
- elif event.button == 7:
- self.open_gripper_command = False
-
- # Check for RB button (typically button 5) for intervention flag
- if self.joystick.get_button(5):
- self.intervention_flag = True
- else:
- self.intervention_flag = False
-
- def get_deltas(self):
- """Get the current movement deltas from gamepad state."""
- import pygame
-
- try:
- # Read joystick axes
- # Left stick X and Y (typically axes 0 and 1)
- y_input = self.joystick.get_axis(0) # Left/right
- x_input = self.joystick.get_axis(1) # Up/down (often inverted)
-
- # Right stick Y (typically axis 3 or 4)
- z_input = self.joystick.get_axis(3) # Up/Down for Z
-
- # Apply deadzone to avoid drift
- x_input = 0 if abs(x_input) < self.deadzone else x_input
- y_input = 0 if abs(y_input) < self.deadzone else y_input
- z_input = 0 if abs(z_input) < self.deadzone else z_input
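- # e.g. with deadzone=0.1, a resting-stick reading of 0.04 is zeroed to prevent drift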
-
- # Calculate deltas (note: may need to invert axes depending on controller)
- delta_x = -x_input * self.x_step_size # Forward/backward
- delta_y = -y_input * self.y_step_size # Left/right
- delta_z = -z_input * self.z_step_size # Up/down
-
- return delta_x, delta_y, delta_z
-
- except pygame.error:
- logging.error("Error reading gamepad. Is it still connected?")
- return 0.0, 0.0, 0.0
-
-
-class GamepadControllerHID(InputController):
- """Generate motion deltas from gamepad input using HIDAPI."""
-
- def __init__(
- self,
- x_step_size=1.0,
- y_step_size=1.0,
- z_step_size=1.0,
- deadzone=0.1,
- ):
- """
- Initialize the HID gamepad controller.
-
- Args:
- x_step_size: Base X movement step size in meters
- y_step_size: Base Y movement step size in meters
- z_step_size: Base Z movement step size in meters
- deadzone: Joystick deadzone to prevent drift
- """
- super().__init__(x_step_size, y_step_size, z_step_size)
- self.deadzone = deadzone
- self.device = None
- self.device_info = None
-
- # Movement values (normalized from -1.0 to 1.0)
- self.left_x = 0.0
- self.left_y = 0.0
- self.right_x = 0.0
- self.right_y = 0.0
-
- # Button states
- self.buttons = {}
-
- def find_device(self):
- """Look for the gamepad device by vendor and product ID."""
- import hid
-
- devices = hid.enumerate()
- for device in devices:
- device_name = device["product_string"]
- if any(controller in device_name for controller in ["Logitech", "Xbox", "PS4", "PS5"]):
- return device
-
- logging.error(
- "No gamepad found. Check the connection, or add your controller's product string to the list above."
- )
- return None
-
- def start(self):
- """Connect to the gamepad using HIDAPI."""
- import hid
-
- self.device_info = self.find_device()
- if not self.device_info:
- self.running = False
- return
-
- try:
- logging.info(f"Connecting to gamepad at path: {self.device_info['path']}")
- self.device = hid.device()
- self.device.open_path(self.device_info["path"])
- self.device.set_nonblocking(1)
-
- manufacturer = self.device.get_manufacturer_string()
- product = self.device.get_product_string()
- logging.info(f"Connected to {manufacturer} {product}")
-
- logging.info("Gamepad controls (HID mode):")
- logging.info(" Left analog stick: Move in X-Y plane")
- logging.info(" Right analog stick: Move in Z axis (vertical)")
- logging.info(" Button 1/B/Circle: Exit")
- logging.info(" Button 2/A/Cross: End episode with SUCCESS")
- logging.info(" Button 3/X/Square: End episode with FAILURE")
-
- except OSError as e:
- logging.error(f"Error opening gamepad: {e}")
- logging.error("You might need to run this with sudo/admin privileges on some systems")
- self.running = False
-
- def stop(self):
- """Close the HID device connection."""
- if self.device:
- self.device.close()
- self.device = None
-
- def update(self):
- """
- Read and process the latest gamepad data.
- Due to an issue with HIDAPI, the device must be read several times to get a stable reading.
- """
- for _ in range(10):
- self._update()
-
- def _update(self):
- """Read and process the latest gamepad data."""
- if not self.device or not self.running:
- return
-
- try:
- # Read data from the gamepad
- data = self.device.read(64)
- # Interpret gamepad data - this will vary by controller model
- # These offsets are for the Logitech RumblePad 2
- if data and len(data) >= 8:
- # Normalize joystick values from 0-255 to -1.0-1.0
- self.left_y = (data[1] - 128) / 128.0
- self.left_x = (data[2] - 128) / 128.0
- self.right_x = (data[3] - 128) / 128.0
- self.right_y = (data[4] - 128) / 128.0
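- # e.g. raw 0 -> -1.0, raw 128 -> 0.0, raw 255 -> ~0.99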
-
- # Apply deadzone
- self.left_y = 0 if abs(self.left_y) < self.deadzone else self.left_y
- self.left_x = 0 if abs(self.left_x) < self.deadzone else self.left_x
- self.right_x = 0 if abs(self.right_x) < self.deadzone else self.right_x
- self.right_y = 0 if abs(self.right_y) < self.deadzone else self.right_y
-
- # Parse button states (byte 5 in the Logitech RumblePad 2)
- buttons = data[5]
-
- # Set the intervention flag while RB is held
- self.intervention_flag = data[6] in [2, 6, 10, 14]
-
- # Check if RT is pressed
- self.open_gripper_command = data[6] in [8, 10, 12]
-
- # Check if LT is pressed
- self.close_gripper_command = data[6] in [4, 6, 12]
-
- # Check if Y/Triangle button (bit 7) is pressed for success
- # Check if X/Square button (bit 5) is pressed for failure
- # Check if A/Cross button (bit 4) is pressed for rerecording
- if buttons & 1 << 7:
- self.episode_end_status = TeleopEvents.SUCCESS
- elif buttons & 1 << 5:
- self.episode_end_status = TeleopEvents.FAILURE
- elif buttons & 1 << 4:
- self.episode_end_status = TeleopEvents.RERECORD_EPISODE
- else:
- self.episode_end_status = None
-
- except OSError as e:
- logging.error(f"Error reading from gamepad: {e}")
-
- def get_deltas(self):
- """Get the current movement deltas from gamepad state."""
- # Calculate deltas - invert as needed based on controller orientation
- delta_x = -self.left_x * self.x_step_size # Forward/backward
- delta_y = -self.left_y * self.y_step_size # Left/right
- delta_z = -self.right_y * self.z_step_size # Up/down
-
- return delta_x, delta_y, delta_z
diff --git a/lerobot/src/lerobot/teleoperators/gamepad/teleop_gamepad.py b/lerobot/src/lerobot/teleoperators/gamepad/teleop_gamepad.py
deleted file mode 100644
index c4d81e97e056d0856bee29fe2921fcf609b3f749..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/gamepad/teleop_gamepad.py
+++ /dev/null
@@ -1,186 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import sys
-from enum import IntEnum
-from typing import Any
-
-import numpy as np
-
-from lerobot.processor import RobotAction
-from lerobot.utils.decorators import check_if_not_connected
-
-from ..teleoperator import Teleoperator
-from ..utils import TeleopEvents
-from .configuration_gamepad import GamepadTeleopConfig
-
-
-class GripperAction(IntEnum):
- CLOSE = 0
- STAY = 1
- OPEN = 2
-
-
-gripper_action_map = {
- "close": GripperAction.CLOSE.value,
- "open": GripperAction.OPEN.value,
- "stay": GripperAction.STAY.value,
-}
-
-
-class GamepadTeleop(Teleoperator):
- """
- Teleop class to use gamepad inputs for control.
- """
-
- config_class = GamepadTeleopConfig
- name = "gamepad"
-
- def __init__(self, config: GamepadTeleopConfig):
- super().__init__(config)
- self.config = config
- self.robot_type = config.type
-
- self.gamepad = None
-
- @property
- def action_features(self) -> dict:
- if self.config.use_gripper:
- return {
- "dtype": "float32",
- "shape": (4,),
- "names": {"delta_x": 0, "delta_y": 1, "delta_z": 2, "gripper": 3},
- }
- else:
- return {
- "dtype": "float32",
- "shape": (3,),
- "names": {"delta_x": 0, "delta_y": 1, "delta_z": 2},
- }
-
- @property
- def feedback_features(self) -> dict:
- return {}
-
- def connect(self) -> None:
- # use HidApi for macos
- if sys.platform == "darwin":
- # NOTE: On macOS, pygame doesn’t reliably detect input from some controllers so we fall back to hidapi
- from .gamepad_utils import GamepadControllerHID as Gamepad
- else:
- from .gamepad_utils import GamepadController as Gamepad
-
- self.gamepad = Gamepad()
- self.gamepad.start()
-
- @check_if_not_connected
- def get_action(self) -> RobotAction:
- # Update the controller to get fresh inputs
- self.gamepad.update()
-
- # Get movement deltas from the controller
- delta_x, delta_y, delta_z = self.gamepad.get_deltas()
-
- # Create action from gamepad input
- gamepad_action = np.array([delta_x, delta_y, delta_z], dtype=np.float32)
-
- action_dict = {
- "delta_x": gamepad_action[0],
- "delta_y": gamepad_action[1],
- "delta_z": gamepad_action[2],
- }
-
- # Default gripper action is to stay
- gripper_action = GripperAction.STAY.value
- if self.config.use_gripper:
- gripper_command = self.gamepad.gripper_command()
- gripper_action = gripper_action_map[gripper_command]
- action_dict["gripper"] = gripper_action
-
- return action_dict
-
- def get_teleop_events(self) -> dict[str, Any]:
- """
- Get extra control events from the gamepad such as intervention status,
- episode termination, success indicators, etc.
-
- Returns:
- Dictionary containing:
- - is_intervention: bool - Whether human is currently intervening
- - terminate_episode: bool - Whether to terminate the current episode
- - success: bool - Whether the episode was successful
- - rerecord_episode: bool - Whether to rerecord the episode
- """
- if self.gamepad is None:
- return {
- TeleopEvents.IS_INTERVENTION: False,
- TeleopEvents.TERMINATE_EPISODE: False,
- TeleopEvents.SUCCESS: False,
- TeleopEvents.RERECORD_EPISODE: False,
- }
-
- # Update gamepad state to get fresh inputs
- self.gamepad.update()
-
- # Check if intervention is active
- is_intervention = self.gamepad.should_intervene()
-
- # Get episode end status
- episode_end_status = self.gamepad.get_episode_end_status()
- terminate_episode = episode_end_status in [
- TeleopEvents.RERECORD_EPISODE,
- TeleopEvents.FAILURE,
- ]
- success = episode_end_status == TeleopEvents.SUCCESS
- rerecord_episode = episode_end_status == TeleopEvents.RERECORD_EPISODE
-
- return {
- TeleopEvents.IS_INTERVENTION: is_intervention,
- TeleopEvents.TERMINATE_EPISODE: terminate_episode,
- TeleopEvents.SUCCESS: success,
- TeleopEvents.RERECORD_EPISODE: rerecord_episode,
- }
-
- def disconnect(self) -> None:
- """Disconnect from the gamepad."""
- if self.gamepad is not None:
- self.gamepad.stop()
- self.gamepad = None
-
- @property
- def is_connected(self) -> bool:
- """Check if gamepad is connected."""
- return self.gamepad is not None
-
- def calibrate(self) -> None:
- """Calibrate the gamepad."""
- # No calibration needed for gamepad
- pass
-
- @property
- def is_calibrated(self) -> bool:
- """Check if gamepad is calibrated."""
- # Gamepad doesn't require calibration
- return True
-
- def configure(self) -> None:
- """Configure the gamepad."""
- # No additional configuration needed
- pass
-
- def send_feedback(self, feedback: dict) -> None:
- """Send feedback to the gamepad."""
- # Gamepad doesn't support feedback
- pass
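-
-
-# Usage sketch (assumes a gamepad is plugged in; the action values shown are
-# illustrative):
-#
-#   from lerobot.teleoperators.gamepad import GamepadTeleop, GamepadTeleopConfig
-#
-#   teleop = GamepadTeleop(GamepadTeleopConfig(use_gripper=True))
-#   teleop.connect()
-#   action = teleop.get_action()  # e.g. {"delta_x": 0.0, "delta_y": 0.5, "delta_z": 0.0, "gripper": 1}
-#   teleop.disconnect()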
diff --git a/lerobot/src/lerobot/teleoperators/homunculus/__init__.py b/lerobot/src/lerobot/teleoperators/homunculus/__init__.py
deleted file mode 100644
index f0ba020db84d453629cca474038cc80dc367c9e5..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/homunculus/__init__.py
+++ /dev/null
@@ -1,20 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_homunculus import HomunculusArmConfig, HomunculusGloveConfig
-from .homunculus_arm import HomunculusArm
-from .homunculus_glove import HomunculusGlove
-from .joints_translation import homunculus_glove_to_hope_jr_hand
diff --git a/lerobot/src/lerobot/teleoperators/homunculus/config_homunculus.py b/lerobot/src/lerobot/teleoperators/homunculus/config_homunculus.py
deleted file mode 100644
index a346803a56a40e58f58349692ed6ccaa56b398f9..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/homunculus/config_homunculus.py
+++ /dev/null
@@ -1,38 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("homunculus_glove")
-@dataclass
-class HomunculusGloveConfig(TeleoperatorConfig):
- port: str # Port to connect to the glove
- side: str # "left" / "right"
- baud_rate: int = 115_200
-
- def __post_init__(self):
- if self.side not in ["right", "left"]:
- raise ValueError(f"Invalid side: {self.side!r}. Must be 'left' or 'right'.")
-
-
-@TeleoperatorConfig.register_subclass("homunculus_arm")
-@dataclass
-class HomunculusArmConfig(TeleoperatorConfig):
- port: str # Port to connect to the arm
- baud_rate: int = 115_200
diff --git a/lerobot/src/lerobot/teleoperators/homunculus/homunculus_arm.py b/lerobot/src/lerobot/teleoperators/homunculus/homunculus_arm.py
deleted file mode 100644
index 8945522ac9a682e41a6153594f1ab6557839ed06..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/homunculus/homunculus_arm.py
+++ /dev/null
@@ -1,313 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import threading
-from collections import deque
-from pprint import pformat
-
-import serial
-
-from lerobot.motors.motors_bus import MotorCalibration, MotorNormMode
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-from lerobot.utils.utils import enter_pressed, move_cursor_up
-
-from ..teleoperator import Teleoperator
-from .config_homunculus import HomunculusArmConfig
-
-logger = logging.getLogger(__name__)
-
-
-class HomunculusArm(Teleoperator):
- """
- Homunculus Arm designed by Hugging Face.
- """
-
- config_class = HomunculusArmConfig
- name = "homunculus_arm"
-
- def __init__(self, config: HomunculusArmConfig):
- super().__init__(config)
- self.config = config
- self.serial = serial.Serial(config.port, config.baud_rate, timeout=1)
- self.serial_lock = threading.Lock()
-
- self.joints = {
- "shoulder_pitch": MotorNormMode.RANGE_M100_100,
- "shoulder_yaw": MotorNormMode.RANGE_M100_100,
- "shoulder_roll": MotorNormMode.RANGE_M100_100,
- "elbow_flex": MotorNormMode.RANGE_M100_100,
- "wrist_roll": MotorNormMode.RANGE_M100_100,
- "wrist_yaw": MotorNormMode.RANGE_M100_100,
- "wrist_pitch": MotorNormMode.RANGE_M100_100,
- }
- n = 50
- # EMA parameters ---------------------------------------------------
- self.n: int = n
- self.alpha: float = 2 / (n + 1)
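- # e.g. n=50 gives alpha = 2/51 ≈ 0.039: heavy smoothing, slow to react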
- # one deque *per joint* so we can inspect raw history if needed
- self._buffers: dict[str, deque[int]] = {
- joint: deque(maxlen=n)
- for joint in (
- "shoulder_pitch",
- "shoulder_yaw",
- "shoulder_roll",
- "elbow_flex",
- "wrist_roll",
- "wrist_yaw",
- "wrist_pitch",
- )
- }
- # running EMA value per joint – lazily initialised on first read
- self._ema: dict[str, float | None] = dict.fromkeys(self._buffers)
-
- self._state: dict[str, float] | None = None
- self.new_state_event = threading.Event()
- self.stop_event = threading.Event()
- self.thread = threading.Thread(target=self._read_loop, daemon=True, name=f"{self} _read_loop")
- self.state_lock = threading.Lock()
-
- @property
- def action_features(self) -> dict:
- return {f"{joint}.pos": float for joint in self.joints}
-
- @property
- def feedback_features(self) -> dict:
- return {}
-
- @property
- def is_connected(self) -> bool:
- with self.serial_lock:
- return self.serial.is_open and self.thread.is_alive()
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- if not self.serial.is_open:
- self.serial.open()
- self.thread.start()
-
- # wait for the thread to ramp up & 1st state to be ready
- if not self.new_state_event.wait(timeout=2):
- raise TimeoutError(f"{self}: Timed out waiting for state after 2s.")
-
- if not self.is_calibrated and calibrate:
- self.calibrate()
-
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.calibration_fpath.is_file()
-
- def calibrate(self) -> None:
- print(
- "\nMove all joints through their entire range of motion."
- "\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self._record_ranges_of_motion()
-
- self.calibration = {}
- for id_, joint in enumerate(self.joints):
- self.calibration[joint] = MotorCalibration(
- id=id_,
- drive_mode=0,
- homing_offset=0,
- range_min=range_mins[joint],
- range_max=range_maxes[joint],
- )
-
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-
- # TODO(Steven): This function is copied from the `HomunculusGlove` class. Consider moving it to a utility to reduce duplicated code.
- def _record_ranges_of_motion(
- self, joints: list[str] | None = None, display_values: bool = True
- ) -> tuple[dict[str, int], dict[str, int]]:
- """Interactively record the min/max encoder values of each joint.
-
- Move the joints while the method streams live positions. Press :kbd:`Enter` to finish.
-
- Args:
- joints (list[str] | None, optional): Joints to record. Defaults to every joint (`None`).
- display_values (bool, optional): When `True` (default) a live table is printed to the console.
-
- Raises:
- TypeError: `joints` is not `None` or a list.
- ValueError: any joint's recorded min and max are the same.
-
- Returns:
- tuple[dict[str, int], dict[str, int]]: Two dictionaries *mins* and *maxes* with the extreme values
- observed for each joint.
- """
- if joints is None:
- joints = list(self.joints)
- elif not isinstance(joints, list):
- raise TypeError(joints)
-
- display_len = max(len(key) for key in joints)
-
- start_positions = self._read(joints, normalize=False)
- mins = start_positions.copy()
- maxes = start_positions.copy()
-
- user_pressed_enter = False
- while not user_pressed_enter:
- positions = self._read(joints, normalize=False)
- mins = {joint: int(min(positions[joint], min_)) for joint, min_ in mins.items()}
- maxes = {joint: int(max(positions[joint], max_)) for joint, max_ in maxes.items()}
-
- if display_values:
- print("\n-------------------------------------------")
- print(f"{'NAME':<{display_len}} | {'MIN':>6} | {'POS':>6} | {'MAX':>6}")
- for joint in joints:
- print(
- f"{joint:<{display_len}} | {mins[joint]:>6} | {positions[joint]:>6} | {maxes[joint]:>6}"
- )
-
- if enter_pressed():
- user_pressed_enter = True
-
- if display_values and not user_pressed_enter:
- # Move cursor up to overwrite the previous output
- move_cursor_up(len(joints) + 3)
-
- same_min_max = [joint for joint in joints if mins[joint] == maxes[joint]]
- if same_min_max:
- raise ValueError(f"Some joints have the same min and max values:\n{pformat(same_min_max)}")
-
- return mins, maxes
-
- def configure(self) -> None:
- pass
-
- # TODO(Steven): This function is copied from the `HomunculusGlove` class. Consider moving it to a utility to reduce duplicated code.
- def _normalize(self, values: dict[str, int]) -> dict[str, float]:
- if not self.calibration:
- raise RuntimeError(f"{self} has no calibration registered.")
-
- normalized_values = {}
- for joint, val in values.items():
- min_ = self.calibration[joint].range_min
- max_ = self.calibration[joint].range_max
- drive_mode = self.calibration[joint].drive_mode
- bounded_val = min(max_, max(min_, val))
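- # e.g. RANGE_M100_100 with min=100, max=900, val=500: ((400 / 800) * 200) - 100 = 0 (mid-range)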
-
- if self.joints[joint] is MotorNormMode.RANGE_M100_100:
- norm = (((bounded_val - min_) / (max_ - min_)) * 200) - 100
- normalized_values[joint] = -norm if drive_mode else norm
- elif self.joints[joint] is MotorNormMode.RANGE_0_100:
- norm = ((bounded_val - min_) / (max_ - min_)) * 100
- normalized_values[joint] = 100 - norm if drive_mode else norm
-
- return normalized_values
-
- def _apply_ema(self, raw: dict[str, int]) -> dict[str, float]:
- """Update buffers & running EMA values; return smoothed dict."""
- smoothed: dict[str, float] = {}
- for joint, value in raw.items():
- # maintain raw history
- self._buffers[joint].append(value)
-
- # initialise on first run
- if self._ema[joint] is None:
- self._ema[joint] = float(value)
- else:
- self._ema[joint] = self.alpha * value + (1 - self.alpha) * self._ema[joint]
-
- smoothed[joint] = self._ema[joint]
- return smoothed
-
- def _read(
- self, joints: list[str] | None = None, normalize: bool = True, timeout: float = 1
- ) -> dict[str, int | float]:
- """
- Return the most recent state read by the background thread,
- optionally applying calibration.
- """
- if not self.new_state_event.wait(timeout=timeout):
- raise TimeoutError(f"{self}: Timed out waiting for state after {timeout}s.")
-
- with self.state_lock:
- state = self._state
-
- self.new_state_event.clear()
-
- if state is None:
- raise RuntimeError(f"{self} Internal error: Event set but no state available.")
-
- if joints is not None:
- state = {k: v for k, v in state.items() if k in joints}
-
- if normalize:
- state = self._normalize(state)
-
- state = self._apply_ema(state)
-
- return state
-
- def _read_loop(self):
- """
- Continuously read from the serial buffer in a background thread and publish the
- latest parsed state to the main thread via an event and a lock-protected variable.
- """
- while not self.stop_event.is_set():
- try:
- raw_values = None
- with self.serial_lock:
- if self.serial.in_waiting > 0:
- lines = []
- while self.serial.in_waiting > 0:
- line = self.serial.read_until().decode("utf-8").strip()
- if line:
- lines.append(line.split(" "))
-
- if lines:
- raw_values = lines[-1]
-
- if raw_values is None or len(raw_values) != 21: # 16 raw + 5 angle values
- continue
-
- joint_angles = {
- "shoulder_pitch": int(raw_values[19]),
- "shoulder_yaw": int(raw_values[18]),
- "shoulder_roll": int(raw_values[20]),
- "elbow_flex": int(raw_values[17]),
- "wrist_roll": int(raw_values[16]),
- "wrist_yaw": int(raw_values[1]),
- "wrist_pitch": int(raw_values[0]),
- }
-
- with self.state_lock:
- self._state = joint_angles
- self.new_state_event.set()
-
- except Exception as e:
- logger.debug(f"Error reading frame in background thread for {self}: {e}")
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- joint_positions = self._read()
- return {f"{joint}.pos": pos for joint, pos in joint_positions.items()}
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- raise NotImplementedError
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self.stop_event.set()
- self.thread.join(timeout=1)
- self.serial.close()
- logger.info(f"{self} disconnected.")
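-
-
-# Usage sketch (the port is hypothetical; imports match this package's __init__.py):
-#
-#   from lerobot.teleoperators.homunculus import HomunculusArm, HomunculusArmConfig
-#
-#   arm = HomunculusArm(HomunculusArmConfig(port="/dev/ttyACM0"))
-#   arm.connect()              # starts the background read thread, calibrates if needed
-#   action = arm.get_action()  # {"shoulder_pitch.pos": ..., ..., "wrist_pitch.pos": ...}
-#   arm.disconnect()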
diff --git a/lerobot/src/lerobot/teleoperators/homunculus/homunculus_glove.py b/lerobot/src/lerobot/teleoperators/homunculus/homunculus_glove.py
deleted file mode 100644
index 7484f439edb1ce4f18c23e10a31df957c77609a8..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/homunculus/homunculus_glove.py
+++ /dev/null
@@ -1,341 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import threading
-from collections import deque
-from pprint import pformat
-
-import serial
-
-from lerobot.motors import MotorCalibration
-from lerobot.motors.motors_bus import MotorNormMode
-from lerobot.teleoperators.homunculus.joints_translation import homunculus_glove_to_hope_jr_hand
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-from lerobot.utils.utils import enter_pressed, move_cursor_up
-
-from ..teleoperator import Teleoperator
-from .config_homunculus import HomunculusGloveConfig
-
-logger = logging.getLogger(__name__)
-
-LEFT_HAND_INVERSIONS = [
- "thumb_cmc",
- "index_dip",
- "middle_mcp_abduction",
- "middle_dip",
- "pinky_mcp_abduction",
- "pinky_dip",
-]
-
-RIGHT_HAND_INVERSIONS = [
- "thumb_mcp",
- "thumb_cmc",
- "thumb_pip",
- "thumb_dip",
- "index_mcp_abduction",
- # "index_dip",
- "middle_mcp_abduction",
- # "middle_dip",
- "ring_mcp_abduction",
- "ring_mcp_flexion",
- # "ring_dip",
- "pinky_mcp_abduction",
-]
-
-
-class HomunculusGlove(Teleoperator):
- """
- Homunculus Glove designed by NepYope & Hugging Face.
- """
-
- config_class = HomunculusGloveConfig
- name = "homunculus_glove"
-
- def __init__(self, config: HomunculusGloveConfig):
- super().__init__(config)
- self.config = config
- self.serial = serial.Serial(config.port, config.baud_rate, timeout=1)
- self.serial_lock = threading.Lock()
-
- self.joints = {
- "thumb_cmc": MotorNormMode.RANGE_0_100,
- "thumb_mcp": MotorNormMode.RANGE_0_100,
- "thumb_pip": MotorNormMode.RANGE_0_100,
- "thumb_dip": MotorNormMode.RANGE_0_100,
- "index_mcp_abduction": MotorNormMode.RANGE_M100_100,
- "index_mcp_flexion": MotorNormMode.RANGE_0_100,
- "index_dip": MotorNormMode.RANGE_0_100,
- "middle_mcp_abduction": MotorNormMode.RANGE_M100_100,
- "middle_mcp_flexion": MotorNormMode.RANGE_0_100,
- "middle_dip": MotorNormMode.RANGE_0_100,
- "ring_mcp_abduction": MotorNormMode.RANGE_M100_100,
- "ring_mcp_flexion": MotorNormMode.RANGE_0_100,
- "ring_dip": MotorNormMode.RANGE_0_100,
- "pinky_mcp_abduction": MotorNormMode.RANGE_M100_100,
- "pinky_mcp_flexion": MotorNormMode.RANGE_0_100,
- "pinky_dip": MotorNormMode.RANGE_0_100,
- }
- self.inverted_joints = RIGHT_HAND_INVERSIONS if config.side == "right" else LEFT_HAND_INVERSIONS
-
- n = 10
- # EMA parameters ---------------------------------------------------
- self.n: int = n
- self.alpha: float = 2 / (n + 1)
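- # e.g. n=10 gives alpha = 2/11 ≈ 0.18: lighter smoothing, faster finger response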
- # one deque *per joint* so we can inspect raw history if needed
- self._buffers: dict[str, deque[int]] = {joint: deque(maxlen=n) for joint in self.joints}
- # running EMA value per joint – lazily initialised on first read
- self._ema: dict[str, float | None] = dict.fromkeys(self._buffers)
-
- self._state: dict[str, float] | None = None
- self.new_state_event = threading.Event()
- self.stop_event = threading.Event()
- self.thread = threading.Thread(target=self._read_loop, daemon=True, name=f"{self} _read_loop")
- self.state_lock = threading.Lock()
-
- @property
- def action_features(self) -> dict:
- return {f"{joint}.pos": float for joint in self.joints}
-
- @property
- def feedback_features(self) -> dict:
- return {}
-
- @property
- def is_connected(self) -> bool:
- with self.serial_lock:
- return self.serial.is_open and self.thread.is_alive()
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- if not self.serial.is_open:
- self.serial.open()
- self.thread.start()
-
- # wait for the thread to ramp up & 1st state to be ready
- if not self.new_state_event.wait(timeout=2):
- raise TimeoutError(f"{self}: Timed out waiting for state after 2s.")
-
- if not self.is_calibrated and calibrate:
- self.calibrate()
-
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.calibration_fpath.is_file()
-
- def calibrate(self) -> None:
- range_mins, range_maxes = {}, {}
- for finger in ["thumb", "index", "middle", "ring", "pinky"]:
- print(
- f"\nMove {finger} through its entire range of motion."
- "\nRecording positions. Press ENTER to stop..."
- )
- finger_joints = [joint for joint in self.joints if joint.startswith(finger)]
- finger_mins, finger_maxes = self._record_ranges_of_motion(finger_joints)
- range_mins.update(finger_mins)
- range_maxes.update(finger_maxes)
-
- self.calibration = {}
- for id_, joint in enumerate(self.joints):
- self.calibration[joint] = MotorCalibration(
- id=id_,
- drive_mode=1 if joint in self.inverted_joints else 0,
- homing_offset=0,
- range_min=range_mins[joint],
- range_max=range_maxes[joint],
- )
-
- self._save_calibration()
- print("Calibration saved to", self.calibration_fpath)
-
- # TODO(Steven): This function is copied from the `HomunculusArm` class. Consider moving it to a utility to reduce duplicated code.
- def _record_ranges_of_motion(
- self, joints: list[str] | None = None, display_values: bool = True
- ) -> tuple[dict[str, int], dict[str, int]]:
- """Interactively record the min/max encoder values of each joint.
-
- Move the joints while the method streams live positions. Press :kbd:`Enter` to finish.
-
- Args:
- joints (list[str] | None, optional): Joints to record. Defaults to every joint (`None`).
- display_values (bool, optional): When `True` (default) a live table is printed to the console.
-
- Raises:
- TypeError: `joints` is not `None` or a list.
- ValueError: any joint's recorded min and max are the same.
-
- Returns:
- tuple[dict[str, int], dict[str, int]]: Two dictionaries *mins* and *maxes* with the extreme values
- observed for each joint.
- """
- if joints is None:
- joints = list(self.joints)
- elif not isinstance(joints, list):
- raise TypeError(joints)
-
- display_len = max(len(key) for key in joints)
-
- start_positions = self._read(joints, normalize=False)
- mins = start_positions.copy()
- maxes = start_positions.copy()
-
- user_pressed_enter = False
- while not user_pressed_enter:
- positions = self._read(joints, normalize=False)
- mins = {joint: int(min(positions[joint], min_)) for joint, min_ in mins.items()}
- maxes = {joint: int(max(positions[joint], max_)) for joint, max_ in maxes.items()}
-
- if display_values:
- print("\n-------------------------------------------")
- print(f"{'NAME':<{display_len}} | {'MIN':>6} | {'POS':>6} | {'MAX':>6}")
- for joint in joints:
- print(
- f"{joint:<{display_len}} | {mins[joint]:>6} | {positions[joint]:>6} | {maxes[joint]:>6}"
- )
-
- if enter_pressed():
- user_pressed_enter = True
-
- if display_values and not user_pressed_enter:
- # Move cursor up to overwrite the previous output
- move_cursor_up(len(joints) + 3)
-
- same_min_max = [joint for joint in joints if mins[joint] == maxes[joint]]
- if same_min_max:
- raise ValueError(f"Some joints have the same min and max values:\n{pformat(same_min_max)}")
-
- return mins, maxes
-
- def configure(self) -> None:
- pass
-
- # TODO(Steven): This function is copy/paste from the `HomunculusArm` class. Consider moving it to a utility to reduce duplicated code.
- def _normalize(self, values: dict[str, int]) -> dict[str, float]:
- if not self.calibration:
- raise RuntimeError(f"{self} has no calibration registered.")
-
- normalized_values = {}
- for joint, val in values.items():
- min_ = self.calibration[joint].range_min
- max_ = self.calibration[joint].range_max
- drive_mode = self.calibration[joint].drive_mode
- bounded_val = min(max_, max(min_, val))
-
- if self.joints[joint] is MotorNormMode.RANGE_M100_100:
- norm = (((bounded_val - min_) / (max_ - min_)) * 200) - 100
- normalized_values[joint] = -norm if drive_mode else norm
- elif self.joints[joint] is MotorNormMode.RANGE_0_100:
- norm = ((bounded_val - min_) / (max_ - min_)) * 100
- normalized_values[joint] = 100 - norm if drive_mode else norm
-
- return normalized_values
-
- def _apply_ema(self, raw: dict[str, int]) -> dict[str, int]:
- """Update buffers & running EMA values; return smoothed dict as integers."""
- smoothed: dict[str, int] = {}
- for joint, value in raw.items():
- # maintain raw history
- self._buffers[joint].append(value)
-
- # initialise on first run
- if self._ema[joint] is None:
- self._ema[joint] = float(value)
- else:
- self._ema[joint] = self.alpha * value + (1 - self.alpha) * self._ema[joint]
-
- # Convert back to int for compatibility with normalization
- smoothed[joint] = int(round(self._ema[joint]))
- return smoothed
-
- def _read(
- self, joints: list[str] | None = None, normalize: bool = True, timeout: float = 1
- ) -> dict[str, int | float]:
- """
- Return the most recent state values published by the read thread (stored in `self._state`),
- applying EMA smoothing and, optionally, calibration-based normalization.
- """
- if not self.new_state_event.wait(timeout=timeout):
- raise TimeoutError(f"{self}: Timed out waiting for state after {timeout}s.")
-
- with self.state_lock:
- state = self._state
-
- self.new_state_event.clear()
-
- if state is None:
- raise RuntimeError(f"{self} Internal error: Event set but no state available.")
-
- if joints is not None:
- state = {k: v for k, v in state.items() if k in joints}
-
- # Apply EMA smoothing to raw values first
- state = self._apply_ema(state)
-
- # Then normalize if requested
- if normalize:
- state = self._normalize(state)
-
- return state
-
- def _read_loop(self):
- """
- Continuously reads from the serial buffer in its own thread and publishes the latest parsed
- joint positions to the main thread via a lock-protected attribute and an event.
- """
- while not self.stop_event.is_set():
- try:
- positions = None
- with self.serial_lock:
- if self.serial.in_waiting > 0:
- lines = []
- while self.serial.in_waiting > 0:
- line = self.serial.read_until().decode("utf-8").strip()
- if line:
- lines.append(line.split(" "))
-
- if lines:
- positions = lines[-1]
-
- if positions is None or len(positions) != len(self.joints):
- continue
-
- joint_positions = {joint: int(pos) for joint, pos in zip(self.joints, positions, strict=True)}
-
- with self.state_lock:
- self._state = joint_positions
- self.new_state_event.set()
-
- except Exception as e:
- logger.debug(f"Error reading frame in background thread for {self}: {e}")
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- joint_positions = self._read()
- return homunculus_glove_to_hope_jr_hand(
- {f"{joint}.pos": pos for joint, pos in joint_positions.items()}
- )
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- raise NotImplementedError
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self.stop_event.set()
- self.thread.join(timeout=1)
- self.serial.close()
- logger.info(f"{self} disconnected.")
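
The glove's `_apply_ema` above is a plain exponential moving average over raw encoder readings. A minimal sketch of the recurrence, assuming a single joint stream and an illustrative `alpha` of 0.2 (the real value comes from the teleoperator's config):

```python
# EMA sketch mirroring _apply_ema above; alpha and readings are illustrative.
raw_stream = [100, 120, 80, 110, 90]  # raw encoder readings for one joint
alpha = 0.2

ema = None
for value in raw_stream:
    # The first sample initializes the EMA; later samples blend in with weight alpha.
    ema = float(value) if ema is None else alpha * value + (1 - alpha) * ema
    print(int(round(ema)))  # smoothed integer value, as _apply_ema returns
```

A small `alpha` smooths aggressively at the cost of lag; a larger `alpha` tracks the raw signal more closely.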
diff --git a/lerobot/src/lerobot/teleoperators/homunculus/joints_translation.py b/lerobot/src/lerobot/teleoperators/homunculus/joints_translation.py
deleted file mode 100644
index 913943c9ae4fd24b8736fad8075e563f2e46315b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/homunculus/joints_translation.py
+++ /dev/null
@@ -1,63 +0,0 @@
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-INDEX_SPLAY = 0.3
-MIDDLE_SPLAY = 0.3
-RING_SPLAY = 0.3
-PINKY_SPLAY = 0.5
-
-
-def get_ulnar_flexion(flexion: float, abduction: float, splay: float):
- return -abduction * splay + flexion * (1 - splay)
-
-
-def get_radial_flexion(flexion: float, abduction: float, splay: float):
- return abduction * splay + flexion * (1 - splay)
-
-
-def homunculus_glove_to_hope_jr_hand(glove_action: dict[str, float]) -> dict[str, float]:
- return {
- "thumb_cmc.pos": glove_action["thumb_cmc.pos"],
- "thumb_mcp.pos": glove_action["thumb_mcp.pos"],
- "thumb_pip.pos": glove_action["thumb_pip.pos"],
- "thumb_dip.pos": glove_action["thumb_dip.pos"],
- "index_radial_flexor.pos": get_radial_flexion(
- glove_action["index_mcp_flexion.pos"], glove_action["index_mcp_abduction.pos"], INDEX_SPLAY
- ),
- "index_ulnar_flexor.pos": get_ulnar_flexion(
- glove_action["index_mcp_flexion.pos"], glove_action["index_mcp_abduction.pos"], INDEX_SPLAY
- ),
- "index_pip_dip.pos": glove_action["index_dip.pos"],
- "middle_radial_flexor.pos": get_radial_flexion(
- glove_action["middle_mcp_flexion.pos"], glove_action["middle_mcp_abduction.pos"], MIDDLE_SPLAY
- ),
- "middle_ulnar_flexor.pos": get_ulnar_flexion(
- glove_action["middle_mcp_flexion.pos"], glove_action["middle_mcp_abduction.pos"], MIDDLE_SPLAY
- ),
- "middle_pip_dip.pos": glove_action["middle_dip.pos"],
- "ring_radial_flexor.pos": get_radial_flexion(
- glove_action["ring_mcp_flexion.pos"], glove_action["ring_mcp_abduction.pos"], RING_SPLAY
- ),
- "ring_ulnar_flexor.pos": get_ulnar_flexion(
- glove_action["ring_mcp_flexion.pos"], glove_action["ring_mcp_abduction.pos"], RING_SPLAY
- ),
- "ring_pip_dip.pos": glove_action["ring_dip.pos"],
- "pinky_radial_flexor.pos": get_radial_flexion(
- glove_action["pinky_mcp_flexion.pos"], glove_action["pinky_mcp_abduction.pos"], PINKY_SPLAY
- ),
- "pinky_ulnar_flexor.pos": get_ulnar_flexion(
- glove_action["pinky_mcp_flexion.pos"], glove_action["pinky_mcp_abduction.pos"], PINKY_SPLAY
- ),
- "pinky_pip_dip.pos": glove_action["pinky_dip.pos"],
- }
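
To make the splay blending concrete, here is a small worked example with illustrative inputs (the real inputs are normalized glove readings):

```python
# Worked example of get_radial_flexion / get_ulnar_flexion with illustrative values.
flexion, abduction, splay = 60.0, 20.0, 0.3  # splay matches INDEX_SPLAY

radial = abduction * splay + flexion * (1 - splay)   # 6.0 + 42.0 = 48.0
ulnar = -abduction * splay + flexion * (1 - splay)   # -6.0 + 42.0 = 36.0
```

Pure flexion (abduction = 0) drives both flexors equally; positive abduction shifts effort toward the radial flexor and away from the ulnar one.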
diff --git a/lerobot/src/lerobot/teleoperators/keyboard/__init__.py b/lerobot/src/lerobot/teleoperators/keyboard/__init__.py
deleted file mode 100644
index 4676a13ef041bfe5d45fb960848e99927e914bf9..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/keyboard/__init__.py
+++ /dev/null
@@ -1,31 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .configuration_keyboard import (
- KeyboardEndEffectorTeleopConfig,
- KeyboardRoverTeleopConfig,
- KeyboardTeleopConfig,
-)
-from .teleop_keyboard import KeyboardEndEffectorTeleop, KeyboardRoverTeleop, KeyboardTeleop
-
-__all__ = [
- "KeyboardTeleopConfig",
- "KeyboardTeleop",
- "KeyboardEndEffectorTeleopConfig",
- "KeyboardEndEffectorTeleop",
- "KeyboardRoverTeleopConfig",
- "KeyboardRoverTeleop",
-]
diff --git a/lerobot/src/lerobot/teleoperators/keyboard/configuration_keyboard.py b/lerobot/src/lerobot/teleoperators/keyboard/configuration_keyboard.py
deleted file mode 100644
index 953e12b7ecfc5635e49dfe5911ba5ffa2c8cbca3..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/keyboard/configuration_keyboard.py
+++ /dev/null
@@ -1,68 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-"""Configuration for keyboard teleoperators."""
-
-from dataclasses import dataclass
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("keyboard")
-@dataclass
-class KeyboardTeleopConfig(TeleoperatorConfig):
- """Configuration for the base keyboard teleoperator."""
-
- # TODO(Steven): Consider setting in here the keys that we want to capture/listen
-
-
-@TeleoperatorConfig.register_subclass("keyboard_ee")
-@dataclass
-class KeyboardEndEffectorTeleopConfig(KeyboardTeleopConfig):
- """Configuration for keyboard end-effector teleoperator.
-
- Used for controlling robot end-effectors with keyboard inputs.
-
- Attributes:
- use_gripper: Whether to include gripper control in actions
- """
-
- use_gripper: bool = True
-
-
-@TeleoperatorConfig.register_subclass("keyboard_rover")
-@dataclass
-class KeyboardRoverTeleopConfig(TeleoperatorConfig):
- """Configuration for keyboard rover teleoperator.
-
- Used for controlling mobile robots like EarthRover Mini Plus with WASD controls.
-
- Attributes:
- linear_speed: Default linear velocity magnitude (-1 to 1 range for SDK robots)
- angular_speed: Default angular velocity magnitude (-1 to 1 range for SDK robots)
- speed_increment: Amount to increase/decrease speed with +/- keys
- turn_assist_ratio: Forward motion multiplier when turning with A/D keys (0.0-1.0)
- angular_speed_ratio: Ratio of angular to linear speed for synchronized adjustments
- min_linear_speed: Minimum linear speed when decreasing (prevents zero speed)
- min_angular_speed: Minimum angular speed when decreasing (prevents zero speed)
- """
-
- linear_speed: float = 1.0
- angular_speed: float = 1.0
- speed_increment: float = 0.1
- turn_assist_ratio: float = 0.3
- angular_speed_ratio: float = 0.6
- min_linear_speed: float = 0.1
- min_angular_speed: float = 0.05
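
As a quick illustration, the rover config can be instantiated directly with overrides. A sketch, assuming the inherited `TeleoperatorConfig` fields all have defaults (the values below are illustrative):

```python
from lerobot.teleoperators.keyboard import KeyboardRoverTeleopConfig

# Slower, finer-grained driving than the defaults above.
config = KeyboardRoverTeleopConfig(
    linear_speed=0.5,      # half the default forward speed
    angular_speed=0.8,
    speed_increment=0.05,  # finer +/- speed adjustments
)
```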
diff --git a/lerobot/src/lerobot/teleoperators/keyboard/teleop_keyboard.py b/lerobot/src/lerobot/teleoperators/keyboard/teleop_keyboard.py
deleted file mode 100644
index d900eaef1b0358cb105729baa8005bc0d620f8e2..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/keyboard/teleop_keyboard.py
+++ /dev/null
@@ -1,432 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import os
-import sys
-import time
-from queue import Queue
-from typing import Any
-
-from lerobot.processor import RobotAction
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..teleoperator import Teleoperator
-from ..utils import TeleopEvents
-from .configuration_keyboard import (
- KeyboardEndEffectorTeleopConfig,
- KeyboardRoverTeleopConfig,
- KeyboardTeleopConfig,
-)
-
-PYNPUT_AVAILABLE = True
-try:
- if ("DISPLAY" not in os.environ) and ("linux" in sys.platform):
- logging.info("No DISPLAY set. Skipping pynput import.")
- raise ImportError("pynput blocked intentionally due to no display.")
-
- from pynput import keyboard
-except ImportError:
- keyboard = None
- PYNPUT_AVAILABLE = False
-except Exception as e:
- keyboard = None
- PYNPUT_AVAILABLE = False
- logging.info(f"Could not import pynput: {e}")
-
-
-class KeyboardTeleop(Teleoperator):
- """
- Teleop class to use keyboard inputs for control.
- """
-
- config_class = KeyboardTeleopConfig
- name = "keyboard"
-
- def __init__(self, config: KeyboardTeleopConfig):
- super().__init__(config)
- self.config = config
- self.robot_type = config.type
-
- self.event_queue = Queue()
- self.current_pressed = {}
- self.listener = None
- self.logs = {}
-
- @property
- def action_features(self) -> dict:
- return {
- "dtype": "float32",
- "shape": (len(self.arm),),
- "names": {"motors": list(self.arm.motors)},
- }
-
- @property
- def feedback_features(self) -> dict:
- return {}
-
- @property
- def is_connected(self) -> bool:
- return PYNPUT_AVAILABLE and isinstance(self.listener, keyboard.Listener) and self.listener.is_alive()
-
- @property
- def is_calibrated(self) -> bool:
- pass
-
- @check_if_already_connected
- def connect(self) -> None:
- if PYNPUT_AVAILABLE:
- logging.info("pynput is available - enabling local keyboard listener.")
- self.listener = keyboard.Listener(
- on_press=self._on_press,
- on_release=self._on_release,
- )
- self.listener.start()
- else:
- logging.info("pynput not available - skipping local keyboard listener.")
- self.listener = None
-
- def calibrate(self) -> None:
- pass
-
- def _on_press(self, key):
- if hasattr(key, "char"):
- self.event_queue.put((key.char, True))
-
- def _on_release(self, key):
- if hasattr(key, "char"):
- self.event_queue.put((key.char, False))
- if key == keyboard.Key.esc:
- logging.info("ESC pressed, disconnecting.")
- self.disconnect()
-
- def _drain_pressed_keys(self):
- while not self.event_queue.empty():
- key_char, is_pressed = self.event_queue.get_nowait()
- self.current_pressed[key_char] = is_pressed
-
- def configure(self):
- pass
-
- @check_if_not_connected
- def get_action(self) -> RobotAction:
- before_read_t = time.perf_counter()
-
- self._drain_pressed_keys()
-
- # Generate action based on current key states
- action = {key for key, val in self.current_pressed.items() if val}
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- return dict.fromkeys(action, None)
-
- def send_feedback(self, feedback: dict[str, Any]) -> None:
- pass
-
- @check_if_not_connected
- def disconnect(self) -> None:
- if self.listener is not None:
- self.listener.stop()
-
-
-class KeyboardEndEffectorTeleop(KeyboardTeleop):
- """
- Teleop class to use keyboard inputs for end effector control.
- Designed to be used with the `So100FollowerEndEffector` robot.
- """
-
- config_class = KeyboardEndEffectorTeleopConfig
- name = "keyboard_ee"
-
- def __init__(self, config: KeyboardEndEffectorTeleopConfig):
- super().__init__(config)
- self.config = config
- self.misc_keys_queue = Queue()
-
- @property
- def action_features(self) -> dict:
- if self.config.use_gripper:
- return {
- "dtype": "float32",
- "shape": (4,),
- "names": {"delta_x": 0, "delta_y": 1, "delta_z": 2, "gripper": 3},
- }
- else:
- return {
- "dtype": "float32",
- "shape": (3,),
- "names": {"delta_x": 0, "delta_y": 1, "delta_z": 2},
- }
-
- @check_if_not_connected
- def get_action(self) -> RobotAction:
- self._drain_pressed_keys()
- delta_x = 0.0
- delta_y = 0.0
- delta_z = 0.0
- gripper_action = 1.0
-
- # Generate action based on current key states
- for key, val in self.current_pressed.items():
- if key == keyboard.Key.up:
- delta_y = -int(val)
- elif key == keyboard.Key.down:
- delta_y = int(val)
- elif key == keyboard.Key.left:
- delta_x = int(val)
- elif key == keyboard.Key.right:
- delta_x = -int(val)
- elif key == keyboard.Key.shift:
- delta_z = -int(val)
- elif key == keyboard.Key.shift_r:
- delta_z = int(val)
- elif key == keyboard.Key.ctrl_r:
- # Gripper actions are expected to be 0 (close), 1 (stay), or 2 (open)
- gripper_action = int(val) + 1
- elif key == keyboard.Key.ctrl_l:
- gripper_action = int(val) - 1
- elif val:
- # If the key is pressed, add it to the misc_keys_queue.
- # This records key presses that are not part of delta_x, delta_y, delta_z,
- # which is useful for retrieving other events such as RL interventions, episode success, etc.
- self.misc_keys_queue.put(key)
-
- self.current_pressed.clear()
-
- action_dict = {
- "delta_x": delta_x,
- "delta_y": delta_y,
- "delta_z": delta_z,
- }
-
- if self.config.use_gripper:
- action_dict["gripper"] = gripper_action
-
- return action_dict
-
- def get_teleop_events(self) -> dict[str, Any]:
- """
- Get extra control events from the keyboard such as intervention status,
- episode termination, success indicators, etc.
-
- Keyboard mappings:
- - Any movement keys pressed = intervention active
- - 's' key = success (terminate episode successfully)
- - 'r' key = rerecord episode (terminate and rerecord)
- - 'q' key = quit episode (terminate without success)
-
- Returns:
- Dictionary containing:
- - is_intervention: bool - Whether human is currently intervening
- - terminate_episode: bool - Whether to terminate the current episode
- - success: bool - Whether the episode was successful
- - rerecord_episode: bool - Whether to rerecord the episode
- """
- if not self.is_connected:
- return {
- TeleopEvents.IS_INTERVENTION: False,
- TeleopEvents.TERMINATE_EPISODE: False,
- TeleopEvents.SUCCESS: False,
- TeleopEvents.RERECORD_EPISODE: False,
- }
-
- # Check if any movement keys are currently pressed (indicates intervention)
- movement_keys = [
- keyboard.Key.up,
- keyboard.Key.down,
- keyboard.Key.left,
- keyboard.Key.right,
- keyboard.Key.shift,
- keyboard.Key.shift_r,
- keyboard.Key.ctrl_r,
- keyboard.Key.ctrl_l,
- ]
- is_intervention = any(self.current_pressed.get(key, False) for key in movement_keys)
-
- # Check for episode control commands from misc_keys_queue
- terminate_episode = False
- success = False
- rerecord_episode = False
-
- # Process any pending misc keys
- while not self.misc_keys_queue.empty():
- key = self.misc_keys_queue.get_nowait()
- if key == "s":
- success = True
- elif key == "r":
- terminate_episode = True
- rerecord_episode = True
- elif key == "q":
- terminate_episode = True
- success = False
-
- return {
- TeleopEvents.IS_INTERVENTION: is_intervention,
- TeleopEvents.TERMINATE_EPISODE: terminate_episode,
- TeleopEvents.SUCCESS: success,
- TeleopEvents.RERECORD_EPISODE: rerecord_episode,
- }
-
-
-class KeyboardRoverTeleop(KeyboardTeleop):
- """
- Keyboard teleoperator for mobile robots like EarthRover Mini Plus.
-
- Provides intuitive WASD-style controls for driving a mobile robot:
- - Linear movement (forward/backward)
- - Angular movement (turning/rotation)
- - Speed adjustment
- - Emergency stop
-
- Keyboard Controls:
- Movement:
- - W: Move forward
- - S: Move backward
- - A: Turn left (with forward motion)
- - D: Turn right (with forward motion)
- - Q: Rotate left in place
- - E: Rotate right in place
- - X: Emergency stop
-
- Speed Control:
- - +/=: Increase speed
- - -: Decrease speed
-
- System:
- - ESC: Disconnect teleoperator
-
- Attributes:
- config: Teleoperator configuration
- current_linear_speed: Current linear velocity magnitude
- current_angular_speed: Current angular velocity magnitude
-
- Example:
- ```python
- from lerobot.teleoperators.keyboard import KeyboardRoverTeleop, KeyboardRoverTeleopConfig
-
- teleop = KeyboardRoverTeleop(
- KeyboardRoverTeleopConfig(linear_speed=1.0, angular_speed=1.0, speed_increment=0.1)
- )
- teleop.connect()
-
- while teleop.is_connected:
- action = teleop.get_action()
- robot.send_action(action)
- ```
- """
-
- config_class = KeyboardRoverTeleopConfig
- name = "keyboard_rover"
-
- def __init__(self, config: KeyboardRoverTeleopConfig):
- super().__init__(config)
- # Add rover-specific speed settings
- self.current_linear_speed = config.linear_speed
- self.current_angular_speed = config.angular_speed
-
- @property
- def action_features(self) -> dict:
- """Return action format for rover (linear and angular velocities)."""
- return {
- "linear.vel": float,
- "angular.vel": float,
- }
-
- @property
- def is_calibrated(self) -> bool:
- """Rover teleop doesn't require calibration."""
- return True
-
- def _drain_pressed_keys(self):
- """Update current_pressed state from event queue without clearing held keys"""
- while not self.event_queue.empty():
- key_char, is_pressed = self.event_queue.get_nowait()
- if is_pressed:
- self.current_pressed[key_char] = True
- else:
- # Only remove key if it's being released
- self.current_pressed.pop(key_char, None)
-
- @check_if_not_connected
- def get_action(self) -> RobotAction:
- """
- Get the current action based on pressed keys.
-
- Returns:
- RobotAction with 'linear.vel' and 'angular.vel' keys
- """
- before_read_t = time.perf_counter()
-
- self._drain_pressed_keys()
-
- linear_velocity = 0.0
- angular_velocity = 0.0
-
- # Check which keys are currently pressed (not released)
- active_keys = {key for key, is_pressed in self.current_pressed.items() if is_pressed}
-
- # Linear movement (W/S) - these take priority
- if "w" in active_keys:
- linear_velocity = self.current_linear_speed
- elif "s" in active_keys:
- linear_velocity = -self.current_linear_speed
-
- # Turning (A/D/Q/E)
- if "d" in active_keys:
- angular_velocity = -self.current_angular_speed
- if linear_velocity == 0: # If not moving forward/back, add slight forward motion
- linear_velocity = self.current_linear_speed * self.config.turn_assist_ratio
- elif "a" in active_keys:
- angular_velocity = self.current_angular_speed
- if linear_velocity == 0: # If not moving forward/back, add slight forward motion
- linear_velocity = self.current_linear_speed * self.config.turn_assist_ratio
- elif "q" in active_keys:
- angular_velocity = self.current_angular_speed
- linear_velocity = 0 # Rotate in place
- elif "e" in active_keys:
- angular_velocity = -self.current_angular_speed
- linear_velocity = 0 # Rotate in place
-
- # Stop (X) - overrides everything
- if "x" in active_keys:
- linear_velocity = 0
- angular_velocity = 0
-
- # Speed adjustment
- if "+" in active_keys or "=" in active_keys:
- self.current_linear_speed += self.config.speed_increment
- self.current_angular_speed += self.config.speed_increment * self.config.angular_speed_ratio
- logging.info(
- f"Speed increased: linear={self.current_linear_speed:.2f}, angular={self.current_angular_speed:.2f}"
- )
- if "-" in active_keys:
- self.current_linear_speed = max(
- self.config.min_linear_speed, self.current_linear_speed - self.config.speed_increment
- )
- self.current_angular_speed = max(
- self.config.min_angular_speed,
- self.current_angular_speed - self.config.speed_increment * self.config.angular_speed_ratio,
- )
- logging.info(
- f"Speed decreased: linear={self.current_linear_speed:.2f}, angular={self.current_angular_speed:.2f}"
- )
-
- self.logs["read_pos_dt_s"] = time.perf_counter() - before_read_t
-
- return {
- "linear.vel": linear_velocity,
- "angular.vel": angular_velocity,
- }
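
Putting the end-effector variant together: a typical polling loop reads motion deltas with `get_action()` and control events with `get_teleop_events()`. A sketch, assuming pynput can start a listener (i.e. a display is available) and using the import paths from the file above:

```python
from lerobot.teleoperators.keyboard import (
    KeyboardEndEffectorTeleop,
    KeyboardEndEffectorTeleopConfig,
)
from lerobot.teleoperators.utils import TeleopEvents

teleop = KeyboardEndEffectorTeleop(KeyboardEndEffectorTeleopConfig(use_gripper=True))
teleop.connect()

while teleop.is_connected:
    action = teleop.get_action()  # {"delta_x": ..., "delta_y": ..., "delta_z": ..., "gripper": ...}
    events = teleop.get_teleop_events()
    if events[TeleopEvents.TERMINATE_EPISODE]:
        break
    # ... forward `action` to a robot or processor pipeline here ...

teleop.disconnect()
```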
diff --git a/lerobot/src/lerobot/teleoperators/koch_leader/__init__.py b/lerobot/src/lerobot/teleoperators/koch_leader/__init__.py
deleted file mode 100644
index 6cab71363020e1026760b075d1101fc5f7b884f7..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/koch_leader/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_koch_leader import KochLeaderConfig
-from .koch_leader import KochLeader
diff --git a/lerobot/src/lerobot/teleoperators/koch_leader/config_koch_leader.py b/lerobot/src/lerobot/teleoperators/koch_leader/config_koch_leader.py
deleted file mode 100644
index d8023c910ce8f0a1a42c1630d7b987f145239719..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/koch_leader/config_koch_leader.py
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("koch_leader")
-@dataclass
-class KochLeaderConfig(TeleoperatorConfig):
- # Port to connect to the arm
- port: str
-
- # Sets the arm in torque mode with the gripper motor set to this value. This makes it possible to squeeze
- # the gripper and have it spring back to an open position on its own.
- gripper_open_pos: float = 50.0
diff --git a/lerobot/src/lerobot/teleoperators/koch_leader/koch_leader.py b/lerobot/src/lerobot/teleoperators/koch_leader/koch_leader.py
deleted file mode 100644
index e796c20d04f00d133e69591ac6c7c06e480772da..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/koch_leader/koch_leader.py
+++ /dev/null
@@ -1,178 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.dynamixel import (
- DriveMode,
- DynamixelMotorsBus,
- OperatingMode,
-)
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..teleoperator import Teleoperator
-from .config_koch_leader import KochLeaderConfig
-
-logger = logging.getLogger(__name__)
-
-
-class KochLeader(Teleoperator):
- """
- - [Koch v1.0](https://github.com/AlexanderKoch-Koch/low_cost_robot), with and without the wrist-to-elbow
- expansion, developed by Alexander Koch from [Tau Robotics](https://tau-robotics.com)
- - [Koch v1.1](https://github.com/jess-moss/koch-v1-1) developed by Jess Moss
- """
-
- config_class = KochLeaderConfig
- name = "koch_leader"
-
- def __init__(self, config: KochLeaderConfig):
- super().__init__(config)
- self.config = config
- self.bus = DynamixelMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pan": Motor(1, "xl330-m077", MotorNormMode.RANGE_M100_100),
- "shoulder_lift": Motor(2, "xl330-m077", MotorNormMode.RANGE_M100_100),
- "elbow_flex": Motor(3, "xl330-m077", MotorNormMode.RANGE_M100_100),
- "wrist_flex": Motor(4, "xl330-m077", MotorNormMode.RANGE_M100_100),
- "wrist_roll": Motor(5, "xl330-m077", MotorNormMode.RANGE_M100_100),
- "gripper": Motor(6, "xl330-m077", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- )
-
- @property
- def action_features(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def feedback_features(self) -> dict[str, type]:
- return {}
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between the calibration values stored in the motors and the calibration file, or no calibration file found"
- )
- self.calibrate()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- self.bus.disable_torque()
- if self.calibration:
- # Calibration file exists, ask user whether to use it or run new calibration
- user_input = input(
- f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: "
- )
- if user_input.strip().lower() != "c":
- logger.info(f"Writing calibration file associated with the id {self.id} to the motors")
- self.bus.write_calibration(self.calibration)
- return
- logger.info(f"\nRunning calibration of {self}")
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- self.bus.write("Drive_Mode", "elbow_flex", DriveMode.INVERTED.value)
- drive_modes = {motor: 1 if motor == "elbow_flex" else 0 for motor in self.bus.motors}
-
- input(f"Move {self} to the middle of its range of motion and press ENTER....")
- homing_offsets = self.bus.set_half_turn_homings()
-
- full_turn_motors = ["shoulder_pan", "wrist_roll"]
- unknown_range_motors = [motor for motor in self.bus.motors if motor not in full_turn_motors]
- print(
- f"Move all joints except {full_turn_motors} sequentially through their "
- "entire ranges of motion.\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self.bus.record_ranges_of_motion(unknown_range_motors)
- for motor in full_turn_motors:
- range_mins[motor] = 0
- range_maxes[motor] = 4095
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=drive_modes[motor],
- homing_offset=homing_offsets[motor],
- range_min=range_mins[motor],
- range_max=range_maxes[motor],
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- logger.info(f"Calibration saved to {self.calibration_fpath}")
-
- def configure(self) -> None:
- self.bus.disable_torque()
- self.bus.configure_motors()
- for motor in self.bus.motors:
- if motor != "gripper":
- # Use 'extended position mode' for all motors except the gripper, because in joint mode the
- # servos can't rotate more than 360 degrees (from 0 to 4095). Since mistakes can happen while
- # assembling the arm, you could otherwise end up with a servo at position 0 or 4095 at a
- # crucial point.
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- # Use 'current-based position control' for the gripper so that it is limited by its current limit.
- # For the follower gripper, this means it can grasp an object without forcing too much even though
- # its goal position is a complete grasp (both gripper fingers are commanded to close until they touch).
- # For the leader gripper, this means we can use it as a physical trigger: we can push it with a finger
- # to make it move, and it will spring back to its target position when we release the force.
- self.bus.write("Operating_Mode", "gripper", OperatingMode.CURRENT_POSITION.value)
- # Set gripper's goal pos in current position mode so that we can use it as a trigger.
- self.bus.enable_torque("gripper")
- if self.is_calibrated:
- self.bus.write("Goal_Position", "gripper", self.config.gripper_open_pos)
-
- def setup_motors(self) -> None:
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- start = time.perf_counter()
- action = self.bus.sync_read("Present_Position")
- action = {f"{motor}.pos": val for motor, val in action.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read action: {dt_ms:.1f}ms")
- return action
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- # TODO(rcadene, aliberts): Implement force feedback
- raise NotImplementedError
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self.bus.disconnect()
- logger.info(f"{self} disconnected.")
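
A minimal leader-side usage sketch; the port is illustrative, and in practice the action would be forwarded to a follower robot rather than printed:

```python
from lerobot.teleoperators.koch_leader import KochLeader, KochLeaderConfig

leader = KochLeader(KochLeaderConfig(port="/dev/ttyACM0"))  # port is illustrative
leader.connect()  # runs calibration on first use if no calibration file exists

try:
    while True:
        action = leader.get_action()  # e.g. {"shoulder_pan.pos": ..., "gripper.pos": ...}
        print(action)  # a real setup would call follower.send_action(action) instead
finally:
    leader.disconnect()
```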
diff --git a/lerobot/src/lerobot/teleoperators/omx_leader/__init__.py b/lerobot/src/lerobot/teleoperators/omx_leader/__init__.py
deleted file mode 100644
index efbe335ee16646ce6d044f0a1ef3712f5674ba4f..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/omx_leader/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_omx_leader import OmxLeaderConfig
-from .omx_leader import OmxLeader
diff --git a/lerobot/src/lerobot/teleoperators/omx_leader/config_omx_leader.py b/lerobot/src/lerobot/teleoperators/omx_leader/config_omx_leader.py
deleted file mode 100644
index cbb83e66a7f81f6d22a8aaef744422da7775183f..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/omx_leader/config_omx_leader.py
+++ /dev/null
@@ -1,30 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("omx_leader")
-@dataclass
-class OmxLeaderConfig(TeleoperatorConfig):
- # Port to connect to the arm
- port: str
-
- # Sets the arm in torque mode with the gripper motor set to this value. This makes it possible to squeeze
- # the gripper and have it spring back to an open position on its own.
- gripper_open_pos: float = 37.0
diff --git a/lerobot/src/lerobot/teleoperators/omx_leader/omx_leader.py b/lerobot/src/lerobot/teleoperators/omx_leader/omx_leader.py
deleted file mode 100644
index 63d0408ee894d74da665065cc7127849ff817a8d..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/omx_leader/omx_leader.py
+++ /dev/null
@@ -1,159 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.dynamixel import (
- DriveMode,
- DynamixelMotorsBus,
- OperatingMode,
-)
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..teleoperator import Teleoperator
-from .config_omx_leader import OmxLeaderConfig
-
-logger = logging.getLogger(__name__)
-
-
-class OmxLeader(Teleoperator):
- """
- - [OMX](https://github.com/ROBOTIS-GIT/open_manipulator), developed by Woojin Wie and Junha Cha from [ROBOTIS](https://ai.robotis.com/)
- """
-
- config_class = OmxLeaderConfig
- name = "omx_leader"
-
- def __init__(self, config: OmxLeaderConfig):
- super().__init__(config)
- self.config = config
- self.bus = DynamixelMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pan": Motor(1, "xl330-m288", MotorNormMode.RANGE_M100_100),
- "shoulder_lift": Motor(2, "xl330-m288", MotorNormMode.RANGE_M100_100),
- "elbow_flex": Motor(3, "xl330-m288", MotorNormMode.RANGE_M100_100),
- "wrist_flex": Motor(4, "xl330-m288", MotorNormMode.RANGE_M100_100),
- "wrist_roll": Motor(5, "xl330-m288", MotorNormMode.RANGE_M100_100),
- "gripper": Motor(6, "xl330-m077", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- )
-
- @property
- def action_features(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def feedback_features(self) -> dict[str, type]:
- return {}
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between the calibration values stored in the motors and the calibration file, or no calibration file found"
- )
- self.calibrate()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- self.bus.disable_torque()
- logger.info(f"\nUsing factory default calibration values for {self}")
- logger.info(f"\nWriting default configuration of {self} to the motors")
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- for motor in self.bus.motors:
- if motor == "gripper":
- self.bus.write("Drive_Mode", motor, DriveMode.INVERTED.value)
- else:
- self.bus.write("Drive_Mode", motor, DriveMode.NON_INVERTED.value)
- drive_modes = {motor: 1 if motor == "gripper" else 0 for motor in self.bus.motors}
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=drive_modes[motor],
- homing_offset=0,
- range_min=0,
- range_max=4095,
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- logger.info(f"Calibration saved to {self.calibration_fpath}")
-
- def configure(self) -> None:
- self.bus.disable_torque()
- self.bus.configure_motors()
- for motor in self.bus.motors:
- if motor != "gripper":
- # Use 'extended position mode' for all motors except the gripper, because in joint mode the
- # servos can't rotate more than 360 degrees (from 0 to 4095). Since mistakes can happen while
- # assembling the arm, you could otherwise end up with a servo at position 0 or 4095 at a
- # crucial point.
- self.bus.write("Operating_Mode", motor, OperatingMode.EXTENDED_POSITION.value)
-
- # Use 'current-based position control' for the gripper so that it is limited by its current limit.
- # For the follower gripper, this means it can grasp an object without forcing too much even though
- # its goal position is a complete grasp (both gripper fingers are commanded to close until they touch).
- # For the leader gripper, this means we can use it as a physical trigger: we can push it with a finger
- # to make it move, and it will spring back to its target position when we release the force.
- self.bus.write("Operating_Mode", "gripper", OperatingMode.CURRENT_POSITION.value)
- # Set gripper's goal pos in current position mode so that we can use it as a trigger.
- self.bus.enable_torque("gripper")
- if self.is_calibrated:
- self.bus.write("Goal_Position", "gripper", self.config.gripper_open_pos)
-
- def setup_motors(self) -> None:
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- start = time.perf_counter()
- action = self.bus.sync_read("Present_Position")
- action = {f"{motor}.pos": val for motor, val in action.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read action: {dt_ms:.1f}ms")
- return action
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- # TODO(rcadene, aliberts): Implement force feedback
- raise NotImplementedError
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self.bus.disconnect()
- logger.info(f"{self} disconnected.")
diff --git a/lerobot/src/lerobot/teleoperators/phone/__init__.py b/lerobot/src/lerobot/teleoperators/phone/__init__.py
deleted file mode 100644
index 967a7b3b4ceaac6bb975b06b2fe7c0bc744b3779..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/phone/__init__.py
+++ /dev/null
@@ -1,18 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_phone import PhoneConfig
-from .teleop_phone import Phone
diff --git a/lerobot/src/lerobot/teleoperators/phone/config_phone.py b/lerobot/src/lerobot/teleoperators/phone/config_phone.py
deleted file mode 100644
index 121042b1ba4457e1cdee1f4525d206e31e18f0e7..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/phone/config_phone.py
+++ /dev/null
@@ -1,36 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from enum import Enum
-
-import numpy as np
-
-from ..config import TeleoperatorConfig
-
-
-class PhoneOS(Enum):
- ANDROID = "android"
- IOS = "ios"
-
-
-@TeleoperatorConfig.register_subclass("phone")
-@dataclass
-class PhoneConfig(TeleoperatorConfig):
- phone_os: PhoneOS = PhoneOS.IOS
- camera_offset = np.array(
- [0.0, -0.02, 0.04]
- ) # iPhone 14 Pro camera is 2cm off center and 4cm above center
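
One subtlety in the config above: `camera_offset` is assigned without a type annotation, so `dataclass` treats it as a plain class attribute shared by all instances rather than a per-instance field (it also won't appear in `__init__`). A per-instance variant would use a `default_factory`; this is a sketch with a hypothetical class name, not the shipped config:

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class PhoneConfigSketch:  # hypothetical, for illustration only
    # default_factory gives each instance its own array instead of one shared object
    camera_offset: np.ndarray = field(default_factory=lambda: np.array([0.0, -0.02, 0.04]))
```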
diff --git a/lerobot/src/lerobot/teleoperators/phone/phone_processor.py b/lerobot/src/lerobot/teleoperators/phone/phone_processor.py
deleted file mode 100644
index 9f3fa76f0f3989f63e1f4a4344761f05c34c5776..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/phone/phone_processor.py
+++ /dev/null
@@ -1,110 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass, field
-
-from lerobot.configs.types import FeatureType, PipelineFeatureType, PolicyFeature
-from lerobot.processor import ProcessorStepRegistry, RobotAction, RobotActionProcessorStep
-from lerobot.teleoperators.phone.config_phone import PhoneOS
-
-
-@ProcessorStepRegistry.register("map_phone_action_to_robot_action")
-@dataclass
-class MapPhoneActionToRobotAction(RobotActionProcessorStep):
- """
- Maps calibrated phone pose actions to standardized robot action inputs.
-
- This processor step acts as a bridge between the phone teleoperator's output
- and the robot's expected action format. It remaps the phone's 6-DoF pose
- (position and rotation) to the robot's target end-effector pose, applying
- necessary axis inversions and swaps. It also interprets platform-specific
- button presses to generate a gripper command.
-
- Attributes:
- platform: The operating system of the phone (iOS or Android), used
- to determine the correct button mappings for the gripper.
- """
-
- # TODO(Steven): Gripper vel could be output of phone_teleop directly
- platform: PhoneOS
- _enabled_prev: bool = field(default=False, init=False, repr=False)
-
- def action(self, action: RobotAction) -> RobotAction:
- """
- Processes the phone action dictionary to create a robot action dictionary.
-
- Args:
- action: The input action dictionary from the phone teleoperator.
-
- Returns:
- A new action dictionary formatted for the robot controller.
-
- Raises:
- ValueError: If 'pos' or 'rot' keys are missing from the input action.
- """
- # Pop the phone-specific entries from the action
- enabled = bool(action.pop("phone.enabled"))
- pos = action.pop("phone.pos")
- rot = action.pop("phone.rot")
- inputs = action.pop("phone.raw_inputs")
-
- if pos is None or rot is None:
- raise ValueError("pos and rot must be present in action")
-
- rotvec = rot.as_rotvec() # Absolute orientation as rotvec
-
- # Map certain inputs to certain actions
- if self.platform == PhoneOS.IOS:
- gripper_vel = float(inputs.get("a3", 0.0))
- else:
- a = float(inputs.get("reservedButtonA", 0.0))
- b = float(inputs.get("reservedButtonB", 0.0))
- gripper_vel = (
- a - b
- ) # Positive if a is pressed, negative if b is pressed, 0 if both or neither are pressed
-
- # For some actions we need to invert the axis
- action["enabled"] = enabled
- action["target_x"] = -pos[1] if enabled else 0.0
- action["target_y"] = pos[0] if enabled else 0.0
- action["target_z"] = pos[2] if enabled else 0.0
- action["target_wx"] = rotvec[1] if enabled else 0.0
- action["target_wy"] = rotvec[0] if enabled else 0.0
- action["target_wz"] = -rotvec[2] if enabled else 0.0
- action["gripper_vel"] = gripper_vel # Still send gripper action when disabled
- return action
-
- def transform_features(
- self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]
- ) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
- for feat in ["enabled", "pos", "rot", "raw_inputs"]:
- features[PipelineFeatureType.ACTION].pop(f"phone.{feat}", None)
-
- for feat in [
- "enabled",
- "target_x",
- "target_y",
- "target_z",
- "target_wx",
- "target_wy",
- "target_wz",
- "gripper_vel",
- ]:
- features[PipelineFeatureType.ACTION][f"{feat}"] = PolicyFeature(
- type=FeatureType.ACTION, shape=(1,)
- )
-
- return features
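
To see the remapping concretely, here is a worked example for an enabled phone with illustrative numbers (matching the axis inversions and swaps in `action()` above):

```python
# Worked example of the axis remapping in MapPhoneActionToRobotAction.action().
pos = [0.10, 0.05, 0.20]     # calibrated phone position (x, y, z), illustrative
rotvec = [0.10, 0.20, 0.30]  # absolute orientation as a rotation vector, illustrative

target = {
    "target_x": -pos[1],   # -0.05: phone +y maps to robot -x
    "target_y": pos[0],    #  0.10: phone +x maps to robot +y
    "target_z": pos[2],    #  0.20: z passes straight through
    "target_wx": rotvec[1],
    "target_wy": rotvec[0],
    "target_wz": -rotvec[2],
}
```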
diff --git a/lerobot/src/lerobot/teleoperators/phone/teleop_phone.py b/lerobot/src/lerobot/teleoperators/phone/teleop_phone.py
deleted file mode 100644
index a13d78a7576ecf388b9f57ba2a3328710bc5d61e..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/phone/teleop_phone.py
+++ /dev/null
@@ -1,415 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-# Docs:
-# hebi: https://docs.hebi.us/tools.html#mobile-io
-# teleop: https://github.com/SpesRobotics/teleop
-
-import logging
-import threading
-import time
-
-import hebi
-import numpy as np
-from teleop import Teleop
-
-from lerobot.teleoperators.phone.config_phone import PhoneConfig, PhoneOS
-from lerobot.teleoperators.teleoperator import Teleoperator
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-from lerobot.utils.rotation import Rotation
-
-logger = logging.getLogger(__name__)
-
-
-class BasePhone:
- _enabled: bool = False
- _calib_pos: np.ndarray | None = None
- _calib_rot_inv: Rotation | None = None
-
- def _reapply_position_calibration(self, pos: np.ndarray) -> None:
- self._calib_pos = pos.copy()
-
- @property
- def is_calibrated(self) -> bool:
- return (self._calib_pos is not None) and (self._calib_rot_inv is not None)
-
- @property
- def action_features(self) -> dict[str, type]:
- return {
- "phone.pos": np.ndarray, # shape (3,)
- "phone.rot": Rotation, # scipy.spatial.transform.Rotation
- "phone.raw_inputs": dict, # analogs/buttons or webXR meta
- "phone.enabled": bool,
- }
-
- @property
- def feedback_features(self) -> dict[str, type]:
- # No haptic or other feedback implemented yet
- pass
-
- def configure(self) -> None:
- # No additional configuration required for phone teleop
- pass
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- # We could add haptic feedback (vibrations) here, but it's not implemented yet
- raise NotImplementedError
-
-
-class IOSPhone(BasePhone, Teleoperator):
- name = "ios_phone"
-
- def __init__(self, config: PhoneConfig):
- super().__init__(config)
- self.config = config
- self._group = None
-
- @property
- def is_connected(self) -> bool:
- return self._group is not None
-
- @check_if_already_connected
- def connect(self) -> None:
- logger.info("Connecting to iPhone; make sure the HEBI Mobile I/O app is open.")
- lookup = hebi.Lookup()
- time.sleep(2.0)
- group = lookup.get_group_from_names(["HEBI"], ["mobileIO"])
- if group is None:
- raise RuntimeError("Mobile I/O not found — check name/family settings in the app.")
- self._group = group
- logger.info(f"{self} connected to HEBI group with {group.size} module(s).")
-
- self.calibrate()
-
- def calibrate(self) -> None:
- print(
- "Hold the phone so that its top edge points forward in the same direction as the robot (robot +x) and its screen points up (robot +z)"
- )
- print("Press and hold B1 in the HEBI Mobile I/O app to capture this pose...\n")
- position, rotation = self._wait_for_capture_trigger()
- self._calib_pos = position.copy()
- self._calib_rot_inv = rotation.inv()
- self._enabled = False
- print("Calibration done\n")
-
- def _wait_for_capture_trigger(self) -> tuple[np.ndarray, Rotation]:
- """
- Blocks execution until the calibration trigger is detected from the iOS device.
-
- This method enters a loop, continuously reading the phone's state. It waits for the user to press
- and hold the 'B1' button in the HEBI Mobile I/O app. Once B1 is pressed, the loop breaks and
- returns the phone's pose at that exact moment.
-
- Returns:
- A tuple containing the position (np.ndarray) and rotation (Rotation) of the phone at the
- moment the trigger was activated.
- """
- while True:
- has_pose, position, rotation, fb_pose = self._read_current_pose()
- if not has_pose:
- time.sleep(0.01)
- continue
-
- io = getattr(fb_pose, "io", None)
- button_b = getattr(io, "b", None) if io is not None else None
- button_b1_pressed = False
- if button_b is not None:
- button_b1_pressed = bool(button_b.get_int(1))
- if button_b1_pressed:
- return position, rotation
-
- time.sleep(0.01)
-
- def _read_current_pose(self) -> tuple[bool, np.ndarray | None, Rotation | None, object | None]:
- """
- Reads the instantaneous 6-DoF pose from the connected iOS device via the HEBI SDK.
-
- This method fetches the latest feedback packet from the HEBI group, extracts the ARKit
- position and orientation, and converts them into a standard format. It also applies a
- configured camera offset to adjust the pose from the camera's frame to the phone's
- physical frame.
-
- Returns:
- A tuple containing:
- - A boolean indicating if a valid pose was successfully read.
- - The 3D position as a NumPy array, or None if not available.
- - The orientation as a `Rotation` object, or None if not available.
- - The raw HEBI feedback object for accessing other data like button presses.
- """
- fbk = self._group.get_next_feedback()
- pose = fbk[0]
- ar_pos = getattr(pose, "ar_position", None)
- ar_quat = getattr(pose, "ar_orientation", None)
- if ar_pos is None or ar_quat is None:
- return False, None, None, None
- # HEBI provides orientation in w, x, y, z format.
- # Scipy's Rotation expects x, y, z, w.
- quat_xyzw = np.concatenate((ar_quat[1:], [ar_quat[0]])) # wxyz to xyzw
- rot = Rotation.from_quat(quat_xyzw)
- pos = ar_pos - rot.apply(self.config.camera_offset)
- return True, pos, rot, pose
-
- @check_if_not_connected
- def get_action(self) -> dict:
- has_pose, raw_position, raw_rotation, fb_pose = self._read_current_pose()
- if not has_pose or not self.is_calibrated:
- return {}
-
- # Collect raw inputs (B1 / analogs on iOS, move/scale on Android)
- raw_inputs: dict[str, float | int | bool] = {}
- io = getattr(fb_pose, "io", None)
- if io is not None:
- bank_a, bank_b = io.a, io.b
- if bank_a:
- for ch in range(1, 9):
- if bank_a.has_float(ch):
- raw_inputs[f"a{ch}"] = float(bank_a.get_float(ch))
- if bank_b:
- for ch in range(1, 9):
- if bank_b.has_int(ch):
- raw_inputs[f"b{ch}"] = int(bank_b.get_int(ch))
- elif hasattr(bank_b, "has_bool") and bank_b.has_bool(ch):
- raw_inputs[f"b{ch}"] = int(bank_b.get_bool(ch))
-
- enable = bool(raw_inputs.get("b1", 0))
-
- # On a rising edge of the enable button, re-capture the position calibration from the current raw pose
- if enable and not self._enabled:
- self._reapply_position_calibration(raw_position)
-
- # Apply calibration
- pos_cal = self._calib_rot_inv.apply(raw_position - self._calib_pos)
- rot_cal = self._calib_rot_inv * raw_rotation
-
- self._enabled = enable
-
- return {
- "phone.pos": pos_cal,
- "phone.rot": rot_cal,
- "phone.raw_inputs": raw_inputs,
- "phone.enabled": self._enabled,
- }
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self._group = None
-
-
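
The quaternion reordering in `_read_current_pose` is easy to get wrong, so here it is in isolation (values illustrative):

```python
import numpy as np

# HEBI reports orientation as (w, x, y, z); scipy's Rotation expects (x, y, z, w).
ar_quat = np.array([0.7071, 0.0, 0.7071, 0.0])  # (w, x, y, z), illustrative
quat_xyzw = np.concatenate((ar_quat[1:], [ar_quat[0]]))  # -> (x, y, z, w)
```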
-class AndroidPhone(BasePhone, Teleoperator):
- name = "android_phone"
-
- def __init__(self, config: PhoneConfig):
- super().__init__(config)
- self.config = config
- self._teleop = None
- self._teleop_thread = None
- self._latest_pose = None
- self._latest_message = None
- self._android_lock = threading.Lock()
-
- @property
- def is_connected(self) -> bool:
- return self._teleop is not None
-
- @check_if_already_connected
- def connect(self) -> None:
- logger.info("Starting teleop stream for Android...")
- self._teleop = Teleop()
- self._teleop.subscribe(self._android_callback)
- self._teleop_thread = threading.Thread(target=self._teleop.run, daemon=True)
- self._teleop_thread.start()
- logger.info(f"{self} connected, teleop stream started.")
-
- self.calibrate()
-
- def calibrate(self) -> None:
-        print(
-            "Hold the phone so that its top edge points forward in the same direction as the robot (robot +x) and its screen points up (robot +z)"
-        )
- print("Touch and move on the WebXR page to capture this pose...\n")
-
- pos, rot = self._wait_for_capture_trigger()
- self._calib_pos = pos.copy()
- self._calib_rot_inv = rot.inv()
- self._enabled = False
- print("Calibration done\n")
-
- def _wait_for_capture_trigger(self) -> tuple[np.ndarray, Rotation]:
- """
- Blocks execution until the calibration trigger is detected from the Android device.
-
- This method enters a loop, continuously checking the latest message received from the WebXR
- session. It waits for the user to touch and move their finger on the screen, which generates
- a `move` event. Once this event is detected, the loop breaks and returns the phone's current
- pose.
-
- Returns:
- A tuple containing the position (np.ndarray) and rotation (Rotation) of the phone at the
- moment the trigger was activated.
- """
- while True:
- with self._android_lock:
- msg = self._latest_message or {}
-
- if bool(msg.get("move", False)):
- ok, pos, rot, _pose = self._read_current_pose()
- if ok:
- return pos, rot
-
- time.sleep(0.01)
-
- def _read_current_pose(self) -> tuple[bool, np.ndarray | None, Rotation | None, object | None]:
- """
- Reads the latest 6-DoF pose received from the Android device's WebXR session.
-
- This method accesses the most recent pose data stored by the `_android_callback`. It uses a
- thread lock to safely read the shared `_latest_pose` variable. The pose, a 4x4 matrix, is
- then decomposed into position and rotation, and the configured camera offset is applied.
-
- Returns:
- A tuple containing:
- - A boolean indicating if a valid pose was available.
- - The 3D position as a NumPy array, or None if no pose has been received yet.
- - The orientation as a `Rotation` object, or None if no pose has been received.
- - The raw 4x4 pose matrix as received from the teleop stream.
- """
- with self._android_lock:
- if self._latest_pose is None:
- return False, None, None, None
-            pose = self._latest_pose.copy()
-        rot = Rotation.from_matrix(pose[:3, :3])
-        pos = pose[:3, 3] - rot.apply(self.config.camera_offset)
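-        # Note (illustrative): a 4x4 homogeneous transform decomposes as T = [[R, t], [0, 1]],
-        # so R = pose[:3, :3] is the rotation and t = pose[:3, 3] the translation; the camera
-        # offset is removed in the rotated frame before returning.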
- return True, pos, rot, pose
-
- def _android_callback(self, pose: np.ndarray, message: dict) -> None:
- """
- Callback function to handle incoming data from the Android teleop stream.
-
- This method is executed by the `teleop` package's subscriber thread whenever a new
- pose and message are received from the WebXR session on the Android phone. It updates
- the internal state (`_latest_pose` and `_latest_message`) with the new data.
- A thread lock is used to ensure that these shared variables are updated atomically,
- preventing race conditions with the main thread that reads them.
-
- Args:
- pose: A 4x4 NumPy array representing the phone's transformation matrix.
- message: A dictionary containing additional data, such as button presses or touch events.
- """
- with self._android_lock:
- self._latest_pose = pose
- self._latest_message = message
-
- @check_if_not_connected
- def get_action(self) -> dict:
- ok, raw_pos, raw_rot, pose = self._read_current_pose()
- if not ok or not self.is_calibrated:
- return {}
-
-        # Collect raw inputs from the WebXR message (move/scale and reserved buttons on Android)
-        raw_inputs: dict[str, float | int | bool] = {}
-        with self._android_lock:
-            msg = self._latest_message or {}
- raw_inputs["move"] = bool(msg.get("move", False))
- raw_inputs["scale"] = float(msg.get("scale", 1.0))
- raw_inputs["reservedButtonA"] = bool(msg.get("reservedButtonA", False))
- raw_inputs["reservedButtonB"] = bool(msg.get("reservedButtonB", False))
-
- enable = bool(raw_inputs.get("move", False))
-
-        # On a rising edge of the enable button, re-capture the position calibration from the current raw pose
- if enable and not self._enabled:
- self._reapply_position_calibration(raw_pos)
-
- # Apply calibration
- pos_cal = self._calib_rot_inv.apply(raw_pos - self._calib_pos)
- rot_cal = self._calib_rot_inv * raw_rot
-
- self._enabled = enable
-
- return {
- "phone.pos": pos_cal,
- "phone.rot": rot_cal,
- "phone.raw_inputs": raw_inputs,
- "phone.enabled": self._enabled,
- }
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self._teleop = None
- if self._teleop_thread and self._teleop_thread.is_alive():
- self._teleop_thread.join(timeout=1.0)
- self._teleop_thread = None
- self._latest_pose = None
-
-
-class Phone(Teleoperator):
- """
-    Phone-based teleoperator using ARKit (iOS, via the HEBI Mobile I/O app) or the teleop Python package (Android, via the WebXR API).
-    For HEBI Mobile I/O we also expose 8 analog (a1-a8) and 8 digital (b1-b8) inputs.
-
-    Press and hold **B1** to enable teleoperation. The first B1 press captures a reference
-    pose and rotation; after disabling, pressing B1 again re-captures the position reference.
- """
-
- config_class = PhoneConfig
- name = "phone"
-
- def __init__(self, config: PhoneConfig):
- super().__init__(config)
- self.config = config
-
- self._phone_impl: Teleoperator
-
- if self.config.phone_os == PhoneOS.IOS:
- self._phone_impl = IOSPhone(config)
- elif self.config.phone_os == PhoneOS.ANDROID:
- self._phone_impl = AndroidPhone(config)
- else:
- raise ValueError(f"Invalid config phone_os: {self.config.phone_os}")
-
- @property
- def is_connected(self) -> bool:
- return self._phone_impl.is_connected
-
- def connect(self) -> None:
- return self._phone_impl.connect()
-
- def calibrate(self) -> None:
- return self._phone_impl.calibrate()
-
- @property
- def is_calibrated(self) -> bool:
- return self._phone_impl.is_calibrated
-
- @property
- def action_features(self) -> dict[str, type]:
- return self._phone_impl.action_features
-
- @property
- def feedback_features(self) -> dict[str, type]:
- return self._phone_impl.feedback_features
-
- def configure(self) -> None:
- return self._phone_impl.configure()
-
- def get_action(self) -> dict:
- return self._phone_impl.get_action()
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- return self._phone_impl.send_feedback(feedback)
-
- def disconnect(self) -> None:
- return self._phone_impl.disconnect()
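-
-
-# Minimal usage sketch (illustrative; the id value is a placeholder):
-#   phone = Phone(PhoneConfig(id="my_phone", phone_os=PhoneOS.IOS))
-#   phone.connect()               # on Android this also triggers calibrate()
-#   action = phone.get_action()   # {} until a pose is available and calibration is done
-#   phone.disconnect()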
diff --git a/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/__init__.py b/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/__init__.py
deleted file mode 100644
index 845c552607529befd0136941bae4b6b645e8fd66..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/__init__.py
+++ /dev/null
@@ -1,25 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from .config_reachy2_teleoperator import Reachy2TeleoperatorConfig
-from .reachy2_teleoperator import (
- REACHY2_ANTENNAS_JOINTS,
- REACHY2_L_ARM_JOINTS,
- REACHY2_NECK_JOINTS,
- REACHY2_R_ARM_JOINTS,
- REACHY2_VEL,
- Reachy2Teleoperator,
-)
diff --git a/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/config_reachy2_teleoperator.py b/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/config_reachy2_teleoperator.py
deleted file mode 100644
index eb301450337d4a5d8cf77b4c325c5556ed9ccff9..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/config_reachy2_teleoperator.py
+++ /dev/null
@@ -1,51 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-
-from ..config import TeleoperatorConfig
-
-
-@TeleoperatorConfig.register_subclass("reachy2_teleoperator")
-@dataclass
-class Reachy2TeleoperatorConfig(TeleoperatorConfig):
- # IP address of the Reachy 2 robot used as teleoperator
- ip_address: str | None = "localhost"
-
- # Whether to use the present position of the joints as actions
- # if False, the goal position of the joints will be used
- use_present_position: bool = False
-
- # Which parts of the robot to use
- with_mobile_base: bool = True
- with_l_arm: bool = True
- with_r_arm: bool = True
- with_neck: bool = True
- with_antennas: bool = True
-
- def __post_init__(self):
- if not (
- self.with_mobile_base
- or self.with_l_arm
- or self.with_r_arm
- or self.with_neck
- or self.with_antennas
- ):
- raise ValueError(
- "No Reachy2Teleoperator part used.\n"
- "At least one part of the robot must be set to True "
- "(with_mobile_base, with_l_arm, with_r_arm, with_neck, with_antennas)"
- )
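-
-
-# Illustrative sketch (uses only fields defined above; the address is a placeholder):
-#   config = Reachy2TeleoperatorConfig(
-#       ip_address="192.168.0.42",
-#       with_mobile_base=False,
-#       with_neck=False,
-#       with_antennas=False,
-#   )
-# keeps only the two arms; setting every with_* flag to False raises the ValueError above.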
diff --git a/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/reachy2_teleoperator.py b/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/reachy2_teleoperator.py
deleted file mode 100644
index 0215d03ad64cdba86bcb9c0b965882122fe8dda2..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/reachy2_teleoperator/reachy2_teleoperator.py
+++ /dev/null
@@ -1,176 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from __future__ import annotations
-
-import logging
-import time
-from typing import TYPE_CHECKING
-
-from lerobot.utils.import_utils import _reachy2_sdk_available
-
-if TYPE_CHECKING or _reachy2_sdk_available:
- from reachy2_sdk import ReachySDK
-else:
- ReachySDK = None
-
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-from lerobot.utils.errors import DeviceNotConnectedError
-
-from ..teleoperator import Teleoperator
-from .config_reachy2_teleoperator import Reachy2TeleoperatorConfig
-
-logger = logging.getLogger(__name__)
-
-# {lerobot_keys: reachy2_sdk_keys}
-REACHY2_NECK_JOINTS = {
- "neck_yaw.pos": "head.neck.yaw",
- "neck_pitch.pos": "head.neck.pitch",
- "neck_roll.pos": "head.neck.roll",
-}
-
-REACHY2_ANTENNAS_JOINTS = {
- "l_antenna.pos": "head.l_antenna",
- "r_antenna.pos": "head.r_antenna",
-}
-
-REACHY2_R_ARM_JOINTS = {
- "r_shoulder_pitch.pos": "r_arm.shoulder.pitch",
- "r_shoulder_roll.pos": "r_arm.shoulder.roll",
- "r_elbow_yaw.pos": "r_arm.elbow.yaw",
- "r_elbow_pitch.pos": "r_arm.elbow.pitch",
- "r_wrist_roll.pos": "r_arm.wrist.roll",
- "r_wrist_pitch.pos": "r_arm.wrist.pitch",
- "r_wrist_yaw.pos": "r_arm.wrist.yaw",
- "r_gripper.pos": "r_arm.gripper",
-}
-
-REACHY2_L_ARM_JOINTS = {
- "l_shoulder_pitch.pos": "l_arm.shoulder.pitch",
- "l_shoulder_roll.pos": "l_arm.shoulder.roll",
- "l_elbow_yaw.pos": "l_arm.elbow.yaw",
- "l_elbow_pitch.pos": "l_arm.elbow.pitch",
- "l_wrist_roll.pos": "l_arm.wrist.roll",
- "l_wrist_pitch.pos": "l_arm.wrist.pitch",
- "l_wrist_yaw.pos": "l_arm.wrist.yaw",
- "l_gripper.pos": "l_arm.gripper",
-}
-
-REACHY2_VEL = {
- "mobile_base.vx": "vx",
- "mobile_base.vy": "vy",
- "mobile_base.vtheta": "vtheta",
-}
-
-
-class Reachy2Teleoperator(Teleoperator):
- """
- [Reachy 2](https://www.pollen-robotics.com/reachy/), by Pollen Robotics.
- """
-
- config_class = Reachy2TeleoperatorConfig
- name = "reachy2_specific"
-
- def __init__(self, config: Reachy2TeleoperatorConfig):
- super().__init__(config)
-
- self.config = config
- self.reachy: None | ReachySDK = None
-
- self.joints_dict: dict[str, str] = self._generate_joints_dict()
-
- def _generate_joints_dict(self) -> dict[str, str]:
- joints = {}
- if self.config.with_neck:
- joints.update(REACHY2_NECK_JOINTS)
- if self.config.with_l_arm:
- joints.update(REACHY2_L_ARM_JOINTS)
- if self.config.with_r_arm:
- joints.update(REACHY2_R_ARM_JOINTS)
- if self.config.with_antennas:
- joints.update(REACHY2_ANTENNAS_JOINTS)
- return joints
-
- @property
- def action_features(self) -> dict[str, type]:
- if self.config.with_mobile_base:
- return {
- **dict.fromkeys(
- self.joints_dict.keys(),
- float,
- ),
- **dict.fromkeys(
- REACHY2_VEL.keys(),
- float,
- ),
- }
- else:
- return dict.fromkeys(self.joints_dict.keys(), float)
-
- @property
- def feedback_features(self) -> dict[str, type]:
- return {}
-
- @property
- def is_connected(self) -> bool:
- return self.reachy.is_connected() if self.reachy is not None else False
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- self.reachy = ReachySDK(self.config.ip_address)
-
- if not self.is_connected:
-            raise DeviceNotConnectedError(f"Could not connect to Reachy 2 at {self.config.ip_address}")
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return True
-
- def calibrate(self) -> None:
- pass
-
- def configure(self) -> None:
- pass
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- start = time.perf_counter()
-
- joint_action: dict[str, float] = {}
- vel_action: dict[str, float] = {}
-
- if self.config.use_present_position:
- joint_action = {k: self.reachy.joints[v].present_position for k, v in self.joints_dict.items()}
- else:
- joint_action = {k: self.reachy.joints[v].goal_position for k, v in self.joints_dict.items()}
- if not self.config.with_mobile_base:
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read action: {dt_ms:.1f}ms")
- return joint_action
- if self.config.use_present_position:
- vel_action = {k: self.reachy.mobile_base.odometry[v] for k, v in REACHY2_VEL.items()}
- else:
- vel_action = {k: self.reachy.mobile_base.last_cmd_vel[v] for k, v in REACHY2_VEL.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read action: {dt_ms:.1f}ms")
- return {**joint_action, **vel_action}
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- raise NotImplementedError
-
- def disconnect(self) -> None:
- if self.is_connected:
- self.reachy.disconnect()
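-
-
-# Illustrative note: with with_mobile_base=True, get_action() returns one flat dict mixing
-# joint positions and base velocities, e.g.
-#   {"neck_yaw.pos": ..., "r_gripper.pos": ..., "mobile_base.vx": ..., "mobile_base.vtheta": ...}
-# and its keys match action_features exactly.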
diff --git a/lerobot/src/lerobot/teleoperators/so_leader/config_so_leader.py b/lerobot/src/lerobot/teleoperators/so_leader/config_so_leader.py
deleted file mode 100644
index ea57fa1060f93484f222b55c56c6d5010293529c..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/so_leader/config_so_leader.py
+++ /dev/null
@@ -1,42 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from dataclasses import dataclass
-from typing import TypeAlias
-
-from ..config import TeleoperatorConfig
-
-
-@dataclass
-class SOLeaderConfig:
- """Base configuration class for SO Leader teleoperators."""
-
- # Port to connect to the arm
- port: str
-
- # Whether to use degrees for angles
- use_degrees: bool = False
-
-
-@TeleoperatorConfig.register_subclass("so101_leader")
-@TeleoperatorConfig.register_subclass("so100_leader")
-@dataclass
-class SOLeaderTeleopConfig(TeleoperatorConfig, SOLeaderConfig):
- pass
-
-
-SO100LeaderConfig: TypeAlias = SOLeaderTeleopConfig
-SO101LeaderConfig: TypeAlias = SOLeaderTeleopConfig
diff --git a/lerobot/src/lerobot/teleoperators/so_leader/so100.md b/lerobot/src/lerobot/teleoperators/so_leader/so100.md
deleted file mode 100644
index ad1154e75a74a496aa74cb1ac1b545238d5174e4..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/so_leader/so100.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/so100.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/teleoperators/so_leader/so101.md b/lerobot/src/lerobot/teleoperators/so_leader/so101.md
deleted file mode 100644
index 27b89266029afbf0aa59be195cc0b4b6ee93ac26..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/so_leader/so101.md
+++ /dev/null
@@ -1 +0,0 @@
-../../../../docs/source/so101.mdx
\ No newline at end of file
diff --git a/lerobot/src/lerobot/teleoperators/so_leader/so_leader.py b/lerobot/src/lerobot/teleoperators/so_leader/so_leader.py
deleted file mode 100644
index 6b441d0f1b91b265ef9c6e085056027e2dcb3128..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/so_leader/so_leader.py
+++ /dev/null
@@ -1,160 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-import time
-from typing import TypeAlias
-
-from lerobot.motors import Motor, MotorCalibration, MotorNormMode
-from lerobot.motors.feetech import (
- FeetechMotorsBus,
- OperatingMode,
-)
-from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected
-
-from ..teleoperator import Teleoperator
-from .config_so_leader import SOLeaderTeleopConfig
-
-logger = logging.getLogger(__name__)
-
-
-class SOLeader(Teleoperator):
- """Generic SO leader base for SO-100/101/10X teleoperators."""
-
- config_class = SOLeaderTeleopConfig
- name = "so_leader"
-
- def __init__(self, config: SOLeaderTeleopConfig):
- super().__init__(config)
- self.config = config
- norm_mode_body = MotorNormMode.DEGREES if config.use_degrees else MotorNormMode.RANGE_M100_100
- self.bus = FeetechMotorsBus(
- port=self.config.port,
- motors={
- "shoulder_pan": Motor(1, "sts3215", norm_mode_body),
- "shoulder_lift": Motor(2, "sts3215", norm_mode_body),
- "elbow_flex": Motor(3, "sts3215", norm_mode_body),
- "wrist_flex": Motor(4, "sts3215", norm_mode_body),
- "wrist_roll": Motor(5, "sts3215", norm_mode_body),
- "gripper": Motor(6, "sts3215", MotorNormMode.RANGE_0_100),
- },
- calibration=self.calibration,
- )
-
- @property
- def action_features(self) -> dict[str, type]:
- return {f"{motor}.pos": float for motor in self.bus.motors}
-
- @property
- def feedback_features(self) -> dict[str, type]:
- return {}
-
- @property
- def is_connected(self) -> bool:
- return self.bus.is_connected
-
- @check_if_already_connected
- def connect(self, calibrate: bool = True) -> None:
- self.bus.connect()
- if not self.is_calibrated and calibrate:
- logger.info(
- "Mismatch between calibration values in the motor and the calibration file or no calibration file found"
- )
- self.calibrate()
-
- self.configure()
- logger.info(f"{self} connected.")
-
- @property
- def is_calibrated(self) -> bool:
- return self.bus.is_calibrated
-
- def calibrate(self) -> None:
- if self.calibration:
- # Calibration file exists, ask user whether to use it or run new calibration
- user_input = input(
- f"Press ENTER to use provided calibration file associated with the id {self.id}, or type 'c' and press ENTER to run calibration: "
- )
- if user_input.strip().lower() != "c":
- logger.info(f"Writing calibration file associated with the id {self.id} to the motors")
- self.bus.write_calibration(self.calibration)
- return
-
- logger.info(f"\nRunning calibration of {self}")
- self.bus.disable_torque()
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
-
- input(f"Move {self} to the middle of its range of motion and press ENTER....")
- homing_offsets = self.bus.set_half_turn_homings()
-
- full_turn_motor = "wrist_roll"
- unknown_range_motors = [motor for motor in self.bus.motors if motor != full_turn_motor]
- print(
- f"Move all joints except '{full_turn_motor}' sequentially through their "
- "entire ranges of motion.\nRecording positions. Press ENTER to stop..."
- )
- range_mins, range_maxes = self.bus.record_ranges_of_motion(unknown_range_motors)
- range_mins[full_turn_motor] = 0
- range_maxes[full_turn_motor] = 4095
-
- self.calibration = {}
- for motor, m in self.bus.motors.items():
- self.calibration[motor] = MotorCalibration(
- id=m.id,
- drive_mode=0,
- homing_offset=homing_offsets[motor],
- range_min=range_mins[motor],
- range_max=range_maxes[motor],
- )
-
- self.bus.write_calibration(self.calibration)
- self._save_calibration()
- print(f"Calibration saved to {self.calibration_fpath}")
-
- def configure(self) -> None:
- self.bus.disable_torque()
- self.bus.configure_motors()
- for motor in self.bus.motors:
- self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
-
- def setup_motors(self) -> None:
- for motor in reversed(self.bus.motors):
- input(f"Connect the controller board to the '{motor}' motor only and press enter.")
- self.bus.setup_motor(motor)
- print(f"'{motor}' motor id set to {self.bus.motors[motor].id}")
-
- @check_if_not_connected
- def get_action(self) -> dict[str, float]:
- start = time.perf_counter()
- action = self.bus.sync_read("Present_Position")
- action = {f"{motor}.pos": val for motor, val in action.items()}
- dt_ms = (time.perf_counter() - start) * 1e3
- logger.debug(f"{self} read action: {dt_ms:.1f}ms")
- return action
-
- def send_feedback(self, feedback: dict[str, float]) -> None:
- # TODO: Implement force feedback
- raise NotImplementedError
-
- @check_if_not_connected
- def disconnect(self) -> None:
- self.bus.disconnect()
- logger.info(f"{self} disconnected.")
-
-
-SO100Leader: TypeAlias = SOLeader
-SO101Leader: TypeAlias = SOLeader
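-
-
-# Minimal usage sketch (illustrative; the port value is a placeholder):
-#   leader = SO101Leader(SO101LeaderConfig(port="/dev/ttyACM0", id="my_leader"))
-#   leader.connect()              # runs calibrate() first if no valid calibration is found
-#   action = leader.get_action()  # {"shoulder_pan.pos": ..., ..., "gripper.pos": ...}
-#   leader.disconnect()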
diff --git a/lerobot/src/lerobot/teleoperators/teleoperator.py b/lerobot/src/lerobot/teleoperators/teleoperator.py
deleted file mode 100644
index 993d9916ca6abd54833bbf17995c8aea52207c30..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/teleoperator.py
+++ /dev/null
@@ -1,182 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import abc
-import builtins
-from pathlib import Path
-from typing import Any
-
-import draccus
-
-from lerobot.motors.motors_bus import MotorCalibration
-from lerobot.processor import RobotAction
-from lerobot.utils.constants import HF_LEROBOT_CALIBRATION, TELEOPERATORS
-
-from .config import TeleoperatorConfig
-
-
-class Teleoperator(abc.ABC):
- """
- The base abstract class for all LeRobot-compatible teleoperation devices.
-
- This class provides a standardized interface for interacting with physical teleoperators.
- Subclasses must implement all abstract methods and properties to be usable.
-
- Attributes:
-        config_class (type[TeleoperatorConfig]): The expected configuration class for this teleoperator.
- name (str): The unique name used to identify this teleoperator type.
- """
-
- # Set these in ALL subclasses
- config_class: builtins.type[TeleoperatorConfig]
- name: str
-
- def __init__(self, config: TeleoperatorConfig):
- self.id = config.id
- self.calibration_dir = (
- config.calibration_dir
- if config.calibration_dir
- else HF_LEROBOT_CALIBRATION / TELEOPERATORS / self.name
- )
- self.calibration_dir.mkdir(parents=True, exist_ok=True)
- self.calibration_fpath = self.calibration_dir / f"{self.id}.json"
- self.calibration: dict[str, MotorCalibration] = {}
- if self.calibration_fpath.is_file():
- self._load_calibration()
-
- def __str__(self) -> str:
- return f"{self.id} {self.__class__.__name__}"
-
- @property
- @abc.abstractmethod
- def action_features(self) -> dict:
- """
-        A dictionary describing the structure and types of the actions produced by the teleoperator. Its
-        structure (keys) should match the structure of what is returned by :pymeth:`get_action`. Each
-        value should be the type of the corresponding entry if it is a simple value, e.g. `float` for a
-        single proprioceptive value (a joint's goal position/velocity).
-
- Note: this property should be able to be called regardless of whether the robot is connected or not.
- """
- pass
-
- @property
- @abc.abstractmethod
- def feedback_features(self) -> dict:
- """
-        A dictionary describing the structure and types of the feedback actions expected by the teleoperator. Its
-        structure (keys) should match the structure of what is passed to :pymeth:`send_feedback`. Each
-        value should be the type of the corresponding entry if it is a simple value, e.g. `float` for a
-        single proprioceptive value (a joint's goal position/velocity).
-
- Note: this property should be able to be called regardless of whether the robot is connected or not.
- """
- pass
-
- @property
- @abc.abstractmethod
- def is_connected(self) -> bool:
- """
- Whether the teleoperator is currently connected or not. If `False`, calling :pymeth:`get_action`
- or :pymeth:`send_feedback` should raise an error.
- """
- pass
-
- @abc.abstractmethod
- def connect(self, calibrate: bool = True) -> None:
- """
- Establish communication with the teleoperator.
-
- Args:
- calibrate (bool): If True, automatically calibrate the teleoperator after connecting if it's not
-                calibrated or needs calibration (this is hardware-dependent).
- """
- pass
-
- @property
- @abc.abstractmethod
- def is_calibrated(self) -> bool:
- """Whether the teleoperator is currently calibrated or not. Should be always `True` if not applicable"""
- pass
-
- @abc.abstractmethod
- def calibrate(self) -> None:
- """
- Calibrate the teleoperator if applicable. If not, this should be a no-op.
-
- This method should collect any necessary data (e.g., motor offsets) and update the
- :pyattr:`calibration` dictionary accordingly.
- """
- pass
-
- def _load_calibration(self, fpath: Path | None = None) -> None:
- """
- Helper to load calibration data from the specified file.
-
- Args:
- fpath (Path | None): Optional path to the calibration file. Defaults to `self.calibration_fpath`.
- """
- fpath = self.calibration_fpath if fpath is None else fpath
- with open(fpath) as f, draccus.config_type("json"):
- self.calibration = draccus.load(dict[str, MotorCalibration], f)
-
- def _save_calibration(self, fpath: Path | None = None) -> None:
- """
- Helper to save calibration data to the specified file.
-
- Args:
- fpath (Path | None): Optional path to save the calibration file. Defaults to `self.calibration_fpath`.
- """
- fpath = self.calibration_fpath if fpath is None else fpath
- with open(fpath, "w") as f, draccus.config_type("json"):
- draccus.dump(self.calibration, f, indent=4)
-
- @abc.abstractmethod
- def configure(self) -> None:
- """
- Apply any one-time or runtime configuration to the teleoperator.
- This may include setting motor parameters, control modes, or initial state.
- """
- pass
-
- @abc.abstractmethod
- def get_action(self) -> RobotAction:
- """
- Retrieve the current action from the teleoperator.
-
- Returns:
- RobotAction: A flat dictionary representing the teleoperator's current actions. Its
-            structure should match :pymeth:`action_features`.
- """
- pass
-
- @abc.abstractmethod
- def send_feedback(self, feedback: dict[str, Any]) -> None:
- """
- Send a feedback action command to the teleoperator.
-
- Args:
- feedback (dict[str, Any]): Dictionary representing the desired feedback. Its structure should match
- :pymeth:`feedback_features`.
- """
- pass
-
- @abc.abstractmethod
- def disconnect(self) -> None:
- """Disconnect from the teleoperator and perform any necessary cleanup."""
- pass
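-
-
-# Minimal subclass sketch (illustrative; `NoopTeleop` and its values are made up, but it
-# satisfies every abstract member defined above):
-#
-#   class NoopTeleop(Teleoperator):
-#       config_class = TeleoperatorConfig
-#       name = "noop"
-#
-#       @property
-#       def action_features(self) -> dict: return {}
-#       @property
-#       def feedback_features(self) -> dict: return {}
-#       @property
-#       def is_connected(self) -> bool: return True
-#       def connect(self, calibrate: bool = True) -> None: pass
-#       @property
-#       def is_calibrated(self) -> bool: return True
-#       def calibrate(self) -> None: pass
-#       def configure(self) -> None: pass
-#       def get_action(self) -> RobotAction: return {}
-#       def send_feedback(self, feedback: dict[str, Any]) -> None: pass
-#       def disconnect(self) -> None: pass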
diff --git a/lerobot/src/lerobot/teleoperators/utils.py b/lerobot/src/lerobot/teleoperators/utils.py
deleted file mode 100644
index f5d10e265c1cdef6e66b6013ed727f21b515119b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/teleoperators/utils.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from enum import Enum
-from typing import cast
-
-from lerobot.utils.import_utils import make_device_from_device_class
-
-from .config import TeleoperatorConfig
-from .teleoperator import Teleoperator
-
-
-class TeleopEvents(Enum):
- """Shared constants for teleoperator events across teleoperators."""
-
- SUCCESS = "success"
- FAILURE = "failure"
- RERECORD_EPISODE = "rerecord_episode"
- IS_INTERVENTION = "is_intervention"
- TERMINATE_EPISODE = "terminate_episode"
-
-
-def make_teleoperator_from_config(config: TeleoperatorConfig) -> Teleoperator:
- # TODO(Steven): Consider just using the make_device_from_device_class for all types
- if config.type == "keyboard":
- from .keyboard import KeyboardTeleop
-
- return KeyboardTeleop(config)
- elif config.type == "koch_leader":
- from .koch_leader import KochLeader
-
- return KochLeader(config)
- elif config.type == "omx_leader":
- from .omx_leader import OmxLeader
-
- return OmxLeader(config)
- elif config.type == "so100_leader":
- from .so_leader import SO100Leader
-
- return SO100Leader(config)
- elif config.type == "so101_leader":
- from .so_leader import SO101Leader
-
- return SO101Leader(config)
- elif config.type == "mock_teleop":
- from tests.mocks.mock_teleop import MockTeleop
-
- return MockTeleop(config)
- elif config.type == "gamepad":
- from .gamepad.teleop_gamepad import GamepadTeleop
-
- return GamepadTeleop(config)
- elif config.type == "keyboard_ee":
- from .keyboard.teleop_keyboard import KeyboardEndEffectorTeleop
-
- return KeyboardEndEffectorTeleop(config)
- elif config.type == "homunculus_glove":
- from .homunculus import HomunculusGlove
-
- return HomunculusGlove(config)
- elif config.type == "homunculus_arm":
- from .homunculus import HomunculusArm
-
- return HomunculusArm(config)
- elif config.type == "bi_so_leader":
- from .bi_so_leader import BiSOLeader
-
- return BiSOLeader(config)
- elif config.type == "reachy2_teleoperator":
- from .reachy2_teleoperator import Reachy2Teleoperator
-
- return Reachy2Teleoperator(config)
- else:
- try:
- return cast(Teleoperator, make_device_from_device_class(config))
- except Exception as e:
-            raise ValueError(f"Error creating teleoperator with config {config}: {e}") from e
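-
-
-# Illustrative usage (sketch; assumes draccus resolves `config.type` to the registered
-# name, and the port value is a placeholder):
-#   from lerobot.teleoperators.so_leader.config_so_leader import SO100LeaderConfig
-#   teleop = make_teleoperator_from_config(SO100LeaderConfig(port="/dev/ttyACM0"))
-# Unregistered types fall back to make_device_from_device_class.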
diff --git a/lerobot/src/lerobot/templates/lerobot_modelcard_template.md b/lerobot/src/lerobot/templates/lerobot_modelcard_template.md
deleted file mode 100644
index 48b9b96a02a3acccba9b7e229aec947947958e69..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/templates/lerobot_modelcard_template.md
+++ /dev/null
@@ -1,91 +0,0 @@
----
-# For reference on model card metadata, see the spec: https://github.com/huggingface/hub-docs/blob/main/modelcard.md?plain=1
-# Doc / guide: https://huggingface.co/docs/hub/model-cards
-# prettier-ignore
-{{card_data}}
----
-
-# Model Card for {{ model_name | default("Model ID", true) }}
-
-
-
-{% if model_name == "smolvla" %}
-[SmolVLA](https://huggingface.co/papers/2506.01844) is a compact, efficient vision-language-action model that achieves competitive performance at reduced computational costs and can be deployed on consumer-grade hardware.
-{% elif model_name == "act" %}
-[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.
-{% elif model_name == "tdmpc" %}
-[TD-MPC](https://huggingface.co/papers/2203.04955) combines model-free and model-based approaches to improve sample efficiency and performance in continuous control tasks by using a learned latent dynamics model and terminal value function.
-{% elif model_name == "diffusion" %}
-[Diffusion Policy](https://huggingface.co/papers/2303.04137) treats visuomotor control as a generative diffusion process, producing smooth, multi-step action trajectories that excel at contact-rich manipulation.
-{% elif model_name == "vqbet" %}
-[VQ-BET](https://huggingface.co/papers/2403.03181) combines vector-quantised action tokens with Behaviour Transformers to discretise control and achieve data-efficient imitation across diverse skills.
-{% elif model_name == "pi0" %}
-**π₀ (Pi0)**
-
-π₀ is a Vision-Language-Action model for general robot control, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
-
-**Model Overview**
-
-π₀ represents a breakthrough in robotics as the first general-purpose robot foundation model developed by Physical Intelligence. Unlike traditional robots that are narrow specialists programmed for repetitive motions, π₀ is designed to be a generalist policy that can understand visual inputs, interpret natural language instructions, and control a variety of different robots across diverse tasks.
-
-For more details, see the [Physical Intelligence π₀ blog post](https://www.physicalintelligence.company/blog/pi0).
-{% elif model_name == "pi05" %}
-**π₀.₅ (Pi05) Policy**
-
-π₀.₅ is a Vision-Language-Action model with open-world generalization, from Physical Intelligence. The LeRobot implementation is adapted from their open source OpenPI repository.
-
-**Model Overview**
-
-π₀.₅ represents a significant evolution from π₀, developed by Physical Intelligence to address a central challenge in robotics: open-world generalization. While robots can perform impressive tasks in controlled environments, π₀.₅ is designed to generalize to entirely new environments and situations that were never seen during training.
-
-For more details, see the [Physical Intelligence π₀.₅ blog post](https://www.physicalintelligence.company/blog/pi05).
-{% elif model_name == "sac" %}
-[Soft Actor-Critic (SAC)](https://huggingface.co/papers/1801.01290) is an entropy-regularised actor-critic algorithm offering stable, sample-efficient learning in continuous-control environments.
-{% elif model_name == "reward_classifier" %}
-A reward classifier is a lightweight neural network that scores observations or trajectories for task success, providing a learned reward signal or offline evaluation when explicit rewards are unavailable.
-{% else %}
-_Model type not recognized — please update this template._
-{% endif %}
-
-This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
-See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
-
----
-
-## How to Get Started with the Model
-
-For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
-Below is the short version of how to train and run inference/eval:
-
-### Train from scratch
-
-```bash
-lerobot-train \
- --dataset.repo_id=${HF_USER}/<dataset> \
- --policy.type=act \
- --output_dir=outputs/train/<policy_name> \
- --job_name=lerobot_training \
- --policy.device=cuda \
- --policy.repo_id=${HF_USER}/<policy_name> \
- --wandb.enable=true
-```
-
-_Writes checkpoints to `outputs/train/<policy_name>/checkpoints/`._
-
-### Evaluate the policy/run inference
-
-```bash
-lerobot-record \
- --robot.type=so100_follower \
- --dataset.repo_id=<hf_user>/eval_<dataset> \
- --policy.path=<hf_user>/<policy_name> \
- --dataset.num_episodes=10
-```
-
-Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a local or hub checkpoint.
-
----
-
-## Model Details
-
-- **License:** {{ license | default("\[More Information Needed]", true) }}
diff --git a/lerobot/src/lerobot/transport/services.proto b/lerobot/src/lerobot/transport/services.proto
deleted file mode 100644
index 1917bd7d7791d80f124c90769ad527400c74f3c1..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/transport/services.proto
+++ /dev/null
@@ -1,87 +0,0 @@
-// Copyright 2024 The HuggingFace Inc. team.
-// All rights reserved.
-
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-
-// http://www.apache.org/licenses/LICENSE-2.0
-
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.python -m grpc_tools.protoc -I src --python_out=src --grpc_python_out=src src/lerobot/transport/services.proto
-
-// To generate the classes for the transport layer (services_pb2.py and services_pb2_grpc.py), use the following command:
-//
-// python -m grpc_tools.protoc -I src --python_out=src --grpc_python_out=src src/lerobot/transport/services.proto
-//
-// The command should be launched from the root of the project.
-
-syntax = "proto3";
-
-package transport;
-
-// LearnerService: the Actor calls this to push transitions.
-// The Learner implements this service.
-service LearnerService {
- // Actor -> Learner to store transitions
- rpc StreamParameters(Empty) returns (stream Parameters);
- rpc SendTransitions(stream Transition) returns (Empty);
- rpc SendInteractions(stream InteractionMessage) returns (Empty);
- rpc Ready(Empty) returns (Empty);
-}
-
-// AsyncInference: from Robot perspective
-// Robot send observations to & executes action received from a remote Policy server
-service AsyncInference {
- // Robot -> Policy to share observations with a remote inference server
- // Policy -> Robot to share actions predicted for given observations
- rpc SendObservations(stream Observation) returns (Empty);
- rpc GetActions(Empty) returns (Actions);
- rpc SendPolicyInstructions(PolicySetup) returns (Empty);
- rpc Ready(Empty) returns (Empty);
-}
-
-enum TransferState {
- TRANSFER_UNKNOWN = 0;
- TRANSFER_BEGIN = 1;
- TRANSFER_MIDDLE = 2;
- TRANSFER_END = 3;
-}
-
-// Messages
-message Transition {
- TransferState transfer_state = 1;
- bytes data = 2;
-}
-
-message Parameters {
- TransferState transfer_state = 1;
- bytes data = 2;
-}
-
-message InteractionMessage {
- TransferState transfer_state = 1;
- bytes data = 2;
-}
-
-// Messages
-message Observation {
- // sent by Robot, to remote Policy
-  TransferState transfer_state = 1; // Observations are streamed in chunks since they can exceed gRPC's default 4 MB message size
- bytes data = 2;
-}
-
-message Actions {
- // sent by remote Policy, to Robot
- bytes data = 1;
-}
-
-message PolicySetup {
- // sent by Robot to remote server, to init Policy
- bytes data = 1;
-}
-
-message Empty {}
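-
-// Illustrative Python client sketch (the address is a placeholder; stub and method names
-// follow the generated services_pb2_grpc module):
-//   channel = grpc.insecure_channel("localhost:8080")
-//   stub = services_pb2_grpc.AsyncInferenceStub(channel)
-//   stub.Ready(services_pb2.Empty())
-//   actions = stub.GetActions(services_pb2.Empty())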
diff --git a/lerobot/src/lerobot/transport/services_pb2.py b/lerobot/src/lerobot/transport/services_pb2.py
deleted file mode 100644
index 5bab324f2b68d93cdb6de229070c01c546c976fb..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/transport/services_pb2.py
+++ /dev/null
@@ -1,53 +0,0 @@
-# Generated by the protocol buffer compiler. DO NOT EDIT!
-# NO CHECKED-IN PROTOBUF GENCODE
-# source: lerobot/transport/services.proto
-# Protobuf Python Version: 6.31.0
-"""Generated protocol buffer code."""
-from google.protobuf import descriptor as _descriptor
-from google.protobuf import descriptor_pool as _descriptor_pool
-from google.protobuf import runtime_version as _runtime_version
-from google.protobuf import symbol_database as _symbol_database
-from google.protobuf.internal import builder as _builder
-_runtime_version.ValidateProtobufRuntimeVersion(
- _runtime_version.Domain.PUBLIC,
- 6,
- 31,
- 0,
- '',
- 'lerobot/transport/services.proto'
-)
-# @@protoc_insertion_point(imports)
-
-_sym_db = _symbol_database.Default()
-
-
-
-
-DESCRIPTOR = _descriptor_pool.Default().AddSerializedFile(b'\n lerobot/transport/services.proto\x12\ttransport\"L\n\nTransition\x12\x30\n\x0etransfer_state\x18\x01 \x01(\x0e\x32\x18.transport.TransferState\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\"L\n\nParameters\x12\x30\n\x0etransfer_state\x18\x01 \x01(\x0e\x32\x18.transport.TransferState\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\"T\n\x12InteractionMessage\x12\x30\n\x0etransfer_state\x18\x01 \x01(\x0e\x32\x18.transport.TransferState\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\"M\n\x0bObservation\x12\x30\n\x0etransfer_state\x18\x01 \x01(\x0e\x32\x18.transport.TransferState\x12\x0c\n\x04\x64\x61ta\x18\x02 \x01(\x0c\"\x17\n\x07\x41\x63tions\x12\x0c\n\x04\x64\x61ta\x18\x01 \x01(\x0c\"\x1b\n\x0bPolicySetup\x12\x0c\n\x04\x64\x61ta\x18\x01 \x01(\x0c\"\x07\n\x05\x45mpty*`\n\rTransferState\x12\x14\n\x10TRANSFER_UNKNOWN\x10\x00\x12\x12\n\x0eTRANSFER_BEGIN\x10\x01\x12\x13\n\x0fTRANSFER_MIDDLE\x10\x02\x12\x10\n\x0cTRANSFER_END\x10\x03\x32\x81\x02\n\x0eLearnerService\x12=\n\x10StreamParameters\x12\x10.transport.Empty\x1a\x15.transport.Parameters0\x01\x12<\n\x0fSendTransitions\x12\x15.transport.Transition\x1a\x10.transport.Empty(\x01\x12\x45\n\x10SendInteractions\x12\x1d.transport.InteractionMessage\x1a\x10.transport.Empty(\x01\x12+\n\x05Ready\x12\x10.transport.Empty\x1a\x10.transport.Empty2\xf5\x01\n\x0e\x41syncInference\x12>\n\x10SendObservations\x12\x16.transport.Observation\x1a\x10.transport.Empty(\x01\x12\x32\n\nGetActions\x12\x10.transport.Empty\x1a\x12.transport.Actions\x12\x42\n\x16SendPolicyInstructions\x12\x16.transport.PolicySetup\x1a\x10.transport.Empty\x12+\n\x05Ready\x12\x10.transport.Empty\x1a\x10.transport.Emptyb\x06proto3')
-
-_globals = globals()
-_builder.BuildMessageAndEnumDescriptors(DESCRIPTOR, _globals)
-_builder.BuildTopDescriptorsAndMessages(DESCRIPTOR, 'lerobot.transport.services_pb2', _globals)
-if not _descriptor._USE_C_DESCRIPTORS:
- DESCRIPTOR._loaded_options = None
- _globals['_TRANSFERSTATE']._serialized_start=431
- _globals['_TRANSFERSTATE']._serialized_end=527
- _globals['_TRANSITION']._serialized_start=47
- _globals['_TRANSITION']._serialized_end=123
- _globals['_PARAMETERS']._serialized_start=125
- _globals['_PARAMETERS']._serialized_end=201
- _globals['_INTERACTIONMESSAGE']._serialized_start=203
- _globals['_INTERACTIONMESSAGE']._serialized_end=287
- _globals['_OBSERVATION']._serialized_start=289
- _globals['_OBSERVATION']._serialized_end=366
- _globals['_ACTIONS']._serialized_start=368
- _globals['_ACTIONS']._serialized_end=391
- _globals['_POLICYSETUP']._serialized_start=393
- _globals['_POLICYSETUP']._serialized_end=420
- _globals['_EMPTY']._serialized_start=422
- _globals['_EMPTY']._serialized_end=429
- _globals['_LEARNERSERVICE']._serialized_start=530
- _globals['_LEARNERSERVICE']._serialized_end=787
- _globals['_ASYNCINFERENCE']._serialized_start=790
- _globals['_ASYNCINFERENCE']._serialized_end=1035
-# @@protoc_insertion_point(module_scope)
diff --git a/lerobot/src/lerobot/transport/services_pb2_grpc.py b/lerobot/src/lerobot/transport/services_pb2_grpc.py
deleted file mode 100644
index 26e5c68186dac26e56570f2d98a752aaf9f1cd39..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/transport/services_pb2_grpc.py
+++ /dev/null
@@ -1,442 +0,0 @@
-# Generated by the gRPC Python protocol compiler plugin. DO NOT EDIT!
-"""Client and server classes corresponding to protobuf-defined services."""
-import grpc
-import warnings
-
-from lerobot.transport import services_pb2 as lerobot_dot_transport_dot_services__pb2
-
-GRPC_GENERATED_VERSION = '1.73.1'
-GRPC_VERSION = grpc.__version__
-_version_not_supported = False
-
-try:
- from grpc._utilities import first_version_is_lower
- _version_not_supported = first_version_is_lower(GRPC_VERSION, GRPC_GENERATED_VERSION)
-except ImportError:
- _version_not_supported = True
-
-if _version_not_supported:
- raise RuntimeError(
- f'The grpc package installed is at version {GRPC_VERSION},'
- + f' but the generated code in lerobot/transport/services_pb2_grpc.py depends on'
- + f' grpcio>={GRPC_GENERATED_VERSION}.'
- + f' Please upgrade your grpc module to grpcio>={GRPC_GENERATED_VERSION}'
- + f' or downgrade your generated code using grpcio-tools<={GRPC_VERSION}.'
- )
-
-
-class LearnerServiceStub:
- """LearnerService: the Actor calls this to push transitions.
- The Learner implements this service.
- """
-
- def __init__(self, channel):
- """Constructor.
-
- Args:
- channel: A grpc.Channel.
- """
- self.StreamParameters = channel.unary_stream(
- '/transport.LearnerService/StreamParameters',
- request_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Parameters.FromString,
- _registered_method=True)
- self.SendTransitions = channel.stream_unary(
- '/transport.LearnerService/SendTransitions',
- request_serializer=lerobot_dot_transport_dot_services__pb2.Transition.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- _registered_method=True)
- self.SendInteractions = channel.stream_unary(
- '/transport.LearnerService/SendInteractions',
- request_serializer=lerobot_dot_transport_dot_services__pb2.InteractionMessage.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- _registered_method=True)
- self.Ready = channel.unary_unary(
- '/transport.LearnerService/Ready',
- request_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- _registered_method=True)
-
-
-class LearnerServiceServicer:
- """LearnerService: the Actor calls this to push transitions.
- The Learner implements this service.
- """
-
- def StreamParameters(self, request, context):
- """Actor -> Learner to store transitions
- """
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
- def SendTransitions(self, request_iterator, context):
- """Missing associated documentation comment in .proto file."""
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
- def SendInteractions(self, request_iterator, context):
- """Missing associated documentation comment in .proto file."""
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
- def Ready(self, request, context):
- """Missing associated documentation comment in .proto file."""
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
-
-def add_LearnerServiceServicer_to_server(servicer, server):
- rpc_method_handlers = {
- 'StreamParameters': grpc.unary_stream_rpc_method_handler(
- servicer.StreamParameters,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Parameters.SerializeToString,
- ),
- 'SendTransitions': grpc.stream_unary_rpc_method_handler(
- servicer.SendTransitions,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.Transition.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- ),
- 'SendInteractions': grpc.stream_unary_rpc_method_handler(
- servicer.SendInteractions,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.InteractionMessage.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- ),
- 'Ready': grpc.unary_unary_rpc_method_handler(
- servicer.Ready,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- ),
- }
- generic_handler = grpc.method_handlers_generic_handler(
- 'transport.LearnerService', rpc_method_handlers)
- server.add_generic_rpc_handlers((generic_handler,))
- server.add_registered_method_handlers('transport.LearnerService', rpc_method_handlers)
-
-
- # This class is part of an EXPERIMENTAL API.
-class LearnerService:
- """LearnerService: the Actor calls this to push transitions.
- The Learner implements this service.
- """
-
- @staticmethod
- def StreamParameters(request,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.unary_stream(
- request,
- target,
- '/transport.LearnerService/StreamParameters',
- lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Parameters.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
- @staticmethod
- def SendTransitions(request_iterator,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.stream_unary(
- request_iterator,
- target,
- '/transport.LearnerService/SendTransitions',
- lerobot_dot_transport_dot_services__pb2.Transition.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
- @staticmethod
- def SendInteractions(request_iterator,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.stream_unary(
- request_iterator,
- target,
- '/transport.LearnerService/SendInteractions',
- lerobot_dot_transport_dot_services__pb2.InteractionMessage.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
- @staticmethod
- def Ready(request,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.unary_unary(
- request,
- target,
- '/transport.LearnerService/Ready',
- lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
-
-class AsyncInferenceStub:
- """AsyncInference: from Robot perspective
- Robot send observations to & executes action received from a remote Policy server
- """
-
- def __init__(self, channel):
- """Constructor.
-
- Args:
- channel: A grpc.Channel.
- """
- self.SendObservations = channel.stream_unary(
- '/transport.AsyncInference/SendObservations',
- request_serializer=lerobot_dot_transport_dot_services__pb2.Observation.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- _registered_method=True)
- self.GetActions = channel.unary_unary(
- '/transport.AsyncInference/GetActions',
- request_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Actions.FromString,
- _registered_method=True)
- self.SendPolicyInstructions = channel.unary_unary(
- '/transport.AsyncInference/SendPolicyInstructions',
- request_serializer=lerobot_dot_transport_dot_services__pb2.PolicySetup.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- _registered_method=True)
- self.Ready = channel.unary_unary(
- '/transport.AsyncInference/Ready',
- request_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- response_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- _registered_method=True)
-
-
-class AsyncInferenceServicer:
- """AsyncInference: from Robot perspective
- Robot send observations to & executes action received from a remote Policy server
- """
-
- def SendObservations(self, request_iterator, context):
- """Robot -> Policy to share observations with a remote inference server
- Policy -> Robot to share actions predicted for given observations
- """
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
- def GetActions(self, request, context):
- """Missing associated documentation comment in .proto file."""
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
- def SendPolicyInstructions(self, request, context):
- """Missing associated documentation comment in .proto file."""
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
- def Ready(self, request, context):
- """Missing associated documentation comment in .proto file."""
- context.set_code(grpc.StatusCode.UNIMPLEMENTED)
- context.set_details('Method not implemented!')
- raise NotImplementedError('Method not implemented!')
-
-
-def add_AsyncInferenceServicer_to_server(servicer, server):
- rpc_method_handlers = {
- 'SendObservations': grpc.stream_unary_rpc_method_handler(
- servicer.SendObservations,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.Observation.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- ),
- 'GetActions': grpc.unary_unary_rpc_method_handler(
- servicer.GetActions,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Actions.SerializeToString,
- ),
- 'SendPolicyInstructions': grpc.unary_unary_rpc_method_handler(
- servicer.SendPolicyInstructions,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.PolicySetup.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- ),
- 'Ready': grpc.unary_unary_rpc_method_handler(
- servicer.Ready,
- request_deserializer=lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- response_serializer=lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- ),
- }
- generic_handler = grpc.method_handlers_generic_handler(
- 'transport.AsyncInference', rpc_method_handlers)
- server.add_generic_rpc_handlers((generic_handler,))
- server.add_registered_method_handlers('transport.AsyncInference', rpc_method_handlers)
-
-
-# This class is part of an EXPERIMENTAL API.
-class AsyncInference:
- """AsyncInference: from Robot perspective
-    The robot sends observations to, and executes actions received from, a remote Policy server
- """
-
- @staticmethod
- def SendObservations(request_iterator,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.stream_unary(
- request_iterator,
- target,
- '/transport.AsyncInference/SendObservations',
- lerobot_dot_transport_dot_services__pb2.Observation.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
- @staticmethod
- def GetActions(request,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.unary_unary(
- request,
- target,
- '/transport.AsyncInference/GetActions',
- lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Actions.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
- @staticmethod
- def SendPolicyInstructions(request,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.unary_unary(
- request,
- target,
- '/transport.AsyncInference/SendPolicyInstructions',
- lerobot_dot_transport_dot_services__pb2.PolicySetup.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
-
- @staticmethod
- def Ready(request,
- target,
- options=(),
- channel_credentials=None,
- call_credentials=None,
- insecure=False,
- compression=None,
- wait_for_ready=None,
- timeout=None,
- metadata=None):
- return grpc.experimental.unary_unary(
- request,
- target,
- '/transport.AsyncInference/Ready',
- lerobot_dot_transport_dot_services__pb2.Empty.SerializeToString,
- lerobot_dot_transport_dot_services__pb2.Empty.FromString,
- options,
- channel_credentials,
- insecure,
- call_credentials,
- compression,
- wait_for_ready,
- timeout,
- metadata,
- _registered_method=True)
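
A hedged, robot-side sketch (not part of the deleted file) of driving the generated service above. The `AsyncInferenceStub` name follows protoc's usual `<Service>Stub` convention for `transport.AsyncInference`; the server address is assumed:

```python
# Minimal client sketch: check readiness, then poll for an action chunk.
import grpc

from lerobot.transport import services_pb2, services_pb2_grpc

channel = grpc.insecure_channel("localhost:8080")  # assumed server address
stub = services_pb2_grpc.AsyncInferenceStub(channel)

stub.Ready(services_pb2.Empty())                 # block until the server is up
actions = stub.GetActions(services_pb2.Empty())  # transport.Actions message
```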
diff --git a/lerobot/src/lerobot/transport/utils.py b/lerobot/src/lerobot/transport/utils.py
deleted file mode 100644
index 23e21b7b3fab6e085f5ec3b707a0e62ad16b74b3..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/transport/utils.py
+++ /dev/null
@@ -1,189 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team.
-# All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import io
-import json
-import logging
-import pickle # nosec B403: Safe usage for internal serialization only
-from multiprocessing.synchronize import Event as MpEvent
-from queue import Queue
-from typing import Any
-
-import torch
-
-from lerobot.transport import services_pb2
-from lerobot.utils.transition import Transition
-
-# FIX for protobuf: Assign the enum to a variable and ignore the type error once
-TransferState = services_pb2.TransferState # type: ignore[attr-defined]
-
-CHUNK_SIZE = 2 * 1024 * 1024 # 2 MB
-MAX_MESSAGE_SIZE = 4 * 1024 * 1024 # 4 MB
-
-
-def bytes_buffer_size(buffer: io.BytesIO) -> int:
- buffer.seek(0, io.SEEK_END)
- result = buffer.tell()
- buffer.seek(0)
- return result
-
-
-def send_bytes_in_chunks(buffer: bytes, message_class: Any, log_prefix: str = "", silent: bool = True):
- bytes_buffer: io.BytesIO = io.BytesIO(buffer)
- size_in_bytes = bytes_buffer_size(bytes_buffer)
-
- sent_bytes = 0
-
- logging_method = logging.info if not silent else logging.debug
-
-    logging_method(f"{log_prefix} Buffer size {size_in_bytes / 1024 / 1024:.2f} MB")
-
- while sent_bytes < size_in_bytes:
- transfer_state = TransferState.TRANSFER_MIDDLE
-
- if sent_bytes + CHUNK_SIZE >= size_in_bytes:
- transfer_state = TransferState.TRANSFER_END
- elif sent_bytes == 0:
- transfer_state = TransferState.TRANSFER_BEGIN
-
- size_to_read = min(CHUNK_SIZE, size_in_bytes - sent_bytes)
- chunk = bytes_buffer.read(size_to_read)
-
- yield message_class(transfer_state=transfer_state, data=chunk)
- sent_bytes += size_to_read
- logging_method(f"{log_prefix} Sent {sent_bytes}/{size_in_bytes} bytes with state {transfer_state}")
-
-    logging_method(f"{log_prefix} Published {sent_bytes / 1024 / 1024:.2f} MB")
-
-
-def receive_bytes_in_chunks(iterator, queue: Queue | None, shutdown_event: MpEvent, log_prefix: str = ""):
- bytes_buffer = io.BytesIO()
- step = 0
-
- logging.info(f"{log_prefix} Starting receiver")
- for item in iterator:
- logging.debug(f"{log_prefix} Received item")
- if shutdown_event.is_set():
- logging.info(f"{log_prefix} Shutting down receiver")
- return
-
- if item.transfer_state == TransferState.TRANSFER_BEGIN:
- bytes_buffer.seek(0)
- bytes_buffer.truncate(0)
- bytes_buffer.write(item.data)
- logging.debug(f"{log_prefix} Received data at step 0")
- step = 0
- elif item.transfer_state == TransferState.TRANSFER_MIDDLE:
- bytes_buffer.write(item.data)
- step += 1
- logging.debug(f"{log_prefix} Received data at step {step}")
- elif item.transfer_state == TransferState.TRANSFER_END:
- bytes_buffer.write(item.data)
- logging.debug(f"{log_prefix} Received data at step end size {bytes_buffer_size(bytes_buffer)}")
-
- if queue is not None:
- queue.put(bytes_buffer.getvalue())
- else:
- return bytes_buffer.getvalue()
-
- bytes_buffer.seek(0)
- bytes_buffer.truncate(0)
- step = 0
-
- logging.debug(f"{log_prefix} Queue updated")
- else:
- logging.warning(f"{log_prefix} Received unknown transfer state {item.transfer_state}")
- raise ValueError(f"Received unknown transfer state {item.transfer_state}")
-
-
-def state_to_bytes(state_dict: dict[str, torch.Tensor]) -> bytes:
-    """Serialize a model state dict to bytes for transmission."""
- bytes_buffer = io.BytesIO()
-
- torch.save(state_dict, bytes_buffer)
-
- return bytes_buffer.getvalue()
-
-
-def bytes_to_state_dict(buffer: bytes) -> dict[str, torch.Tensor]:
- bytes_buffer = io.BytesIO(buffer)
- bytes_buffer.seek(0)
- return torch.load(bytes_buffer, weights_only=True)
-
-
-def python_object_to_bytes(python_object: Any) -> bytes:
- return pickle.dumps(python_object)
-
-
-def bytes_to_python_object(buffer: bytes) -> Any:
- bytes_buffer = io.BytesIO(buffer)
- bytes_buffer.seek(0)
- obj = pickle.load(bytes_buffer) # nosec B301: Safe usage of pickle.load
-    # NOTE: no validation is performed on the unpickled object; only use with trusted peers.
- return obj
-
-
-def bytes_to_transitions(buffer: bytes) -> list[Transition]:
- bytes_buffer = io.BytesIO(buffer)
- bytes_buffer.seek(0)
- transitions = torch.load(bytes_buffer, weights_only=True)
- return transitions
-
-
-def transitions_to_bytes(transitions: list[Transition]) -> bytes:
- bytes_buffer = io.BytesIO()
- torch.save(transitions, bytes_buffer)
- return bytes_buffer.getvalue()
-
-
-def grpc_channel_options(
- max_receive_message_length: int = MAX_MESSAGE_SIZE,
- max_send_message_length: int = MAX_MESSAGE_SIZE,
- enable_retries: bool = True,
- initial_backoff: str = "0.1s",
- max_attempts: int = 5,
- backoff_multiplier: float = 2,
- max_backoff: str = "2s",
-):
- service_config = {
- "methodConfig": [
- {
- "name": [{}], # Applies to ALL methods in ALL services
- "retryPolicy": {
-                    "maxAttempts": max_attempts,  # Total attempts, including the first call
-                    "initialBackoff": initial_backoff,  # Delay before the first retry
- "maxBackoff": max_backoff, # Max wait time between retries
- "backoffMultiplier": backoff_multiplier, # Exponential backoff factor
- "retryableStatusCodes": [
- "UNAVAILABLE",
- "DEADLINE_EXCEEDED",
- ], # Retries on network failures
- },
- }
- ]
- }
-
- service_config_json = json.dumps(service_config)
-
- retries_option = 1 if enable_retries else 0
-
- return [
- ("grpc.max_receive_message_length", max_receive_message_length),
- ("grpc.max_send_message_length", max_send_message_length),
- ("grpc.enable_retries", retries_option),
- ("grpc.service_config", service_config_json),
- ]
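
Taken together, these helpers implement a small chunked-transfer protocol: payloads are split into 2 MB pieces tagged `TRANSFER_BEGIN`/`MIDDLE`/`END`, staying under the 4 MB gRPC message cap. A hedged send-side sketch, assuming the generated `Observation` message exposes `transfer_state` and `data` fields:

```python
# Serialize a (hypothetical) state dict and stream it in CHUNK_SIZE pieces.
import grpc
import torch

from lerobot.transport import services_pb2
from lerobot.transport.utils import grpc_channel_options, send_bytes_in_chunks, state_to_bytes

# Channel configured with the retry policy and message-size limits above.
channel = grpc.insecure_channel("localhost:8080", options=grpc_channel_options())

state = {"layer.weight": torch.zeros(4, 4)}  # hypothetical payload
for message in send_bytes_in_chunks(
    state_to_bytes(state),
    services_pb2.Observation,  # assumed: message with transfer_state/data fields
    log_prefix="[demo]",
):
    pass  # each yielded message carries one chunk plus its transfer state
```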
diff --git a/lerobot/src/lerobot/utils/constants.py b/lerobot/src/lerobot/utils/constants.py
deleted file mode 100644
index 55f003aa1d63b6795ee9dd6c741d3fa8f3c412cc..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/constants.py
+++ /dev/null
@@ -1,88 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-# keys
-import os
-from pathlib import Path
-
-from huggingface_hub.constants import HF_HOME
-
-OBS_STR = "observation"
-OBS_PREFIX = OBS_STR + "."
-OBS_ENV_STATE = OBS_STR + ".environment_state"
-OBS_STATE = OBS_STR + ".state"
-OBS_IMAGE = OBS_STR + ".image"
-OBS_IMAGES = OBS_IMAGE + "s"
-OBS_LANGUAGE = OBS_STR + ".language"
-OBS_LANGUAGE_TOKENS = OBS_LANGUAGE + ".tokens"
-OBS_LANGUAGE_ATTENTION_MASK = OBS_LANGUAGE + ".attention_mask"
-
-ACTION = "action"
-ACTION_PREFIX = ACTION + "."
-ACTION_TOKENS = ACTION + ".tokens"
-ACTION_TOKEN_MASK = ACTION + ".token_mask"
-REWARD = "next.reward"
-TRUNCATED = "next.truncated"
-DONE = "next.done"
-INFO = "info"
-
-ROBOTS = "robots"
-TELEOPERATORS = "teleoperators"
-
-# files & directories
-CHECKPOINTS_DIR = "checkpoints"
-LAST_CHECKPOINT_LINK = "last"
-PRETRAINED_MODEL_DIR = "pretrained_model"
-TRAINING_STATE_DIR = "training_state"
-RNG_STATE = "rng_state.safetensors"
-TRAINING_STEP = "training_step.json"
-OPTIMIZER_STATE = "optimizer_state.safetensors"
-OPTIMIZER_PARAM_GROUPS = "optimizer_param_groups.json"
-SCHEDULER_STATE = "scheduler_state.json"
-
-POLICY_PREPROCESSOR_DEFAULT_NAME = "policy_preprocessor"
-POLICY_POSTPROCESSOR_DEFAULT_NAME = "policy_postprocessor"
-
-if "LEROBOT_HOME" in os.environ:
- raise ValueError(
- f"You have a 'LEROBOT_HOME' environment variable set to '{os.getenv('LEROBOT_HOME')}'.\n"
- "'LEROBOT_HOME' is deprecated, please use 'HF_LEROBOT_HOME' instead."
- )
-
-# cache dir
-default_cache_path = Path(HF_HOME) / "lerobot"
-HF_LEROBOT_HOME = Path(os.getenv("HF_LEROBOT_HOME", default_cache_path)).expanduser()
-
-# calibration dir
-default_calibration_path = HF_LEROBOT_HOME / "calibration"
-HF_LEROBOT_CALIBRATION = Path(os.getenv("HF_LEROBOT_CALIBRATION", default_calibration_path)).expanduser()
-
-
-# streaming datasets
-LOOKBACK_BACKTRACKTABLE = 100
-LOOKAHEAD_BACKTRACKTABLE = 100
-
-# openpi
-OPENPI_ATTENTION_MASK_VALUE = -2.3819763e38 # TODO(pepijn): Modify this when extending support to fp8 models
-
-# Constants for LIBERO observation keys
-LIBERO_KEY_EEF_POS = "robot_state/eef/pos"
-LIBERO_KEY_EEF_QUAT = "robot_state/eef/quat"
-LIBERO_KEY_EEF_MAT = "robot_state/eef/mat"
-LIBERO_KEY_EEF_AXISANGLE = "robot_state/eef/axisangle"
-LIBERO_KEY_GRIPPER_QPOS = "robot_state/gripper/qpos"
-LIBERO_KEY_GRIPPER_QVEL = "robot_state/gripper/qvel"
-LIBERO_KEY_JOINTS_POS = "robot_state/joints/pos"
-LIBERO_KEY_JOINTS_VEL = "robot_state/joints/vel"
-LIBERO_KEY_PIXELS_AGENTVIEW = "pixels/agentview_image"
-LIBERO_KEY_PIXELS_EYE_IN_HAND = "pixels/robot0_eye_in_hand_image"
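
For illustration (not from the deleted file), the prefix constants compose into the flat feature keys used across LeRobot datasets and policies:

```python
from lerobot.utils.constants import ACTION, OBS_IMAGES, OBS_STATE

camera_key = f"{OBS_IMAGES}.front"  # -> "observation.images.front"
assert OBS_STATE == "observation.state"
assert ACTION == "action"
```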
diff --git a/lerobot/src/lerobot/utils/control_utils.py b/lerobot/src/lerobot/utils/control_utils.py
deleted file mode 100644
index ffb75663ef3dc26ad176b31d8f16010f08da7497..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/control_utils.py
+++ /dev/null
@@ -1,235 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-########################################################################################
-# Utilities
-########################################################################################
-
-
-import logging
-import traceback
-from contextlib import nullcontext
-from copy import copy
-from functools import cache
-from typing import Any
-
-import numpy as np
-import torch
-from deepdiff import DeepDiff
-
-from lerobot.datasets.lerobot_dataset import LeRobotDataset
-from lerobot.datasets.utils import DEFAULT_FEATURES
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.policies.utils import prepare_observation_for_inference
-from lerobot.processor import PolicyAction, PolicyProcessorPipeline
-from lerobot.robots import Robot
-
-
-@cache
-def is_headless():
- """
- Detects if the Python script is running in a headless environment (e.g., without a display).
-
- This function attempts to import `pynput`, a library that requires a graphical environment.
- If the import fails, it assumes the environment is headless. The result is cached to avoid
- re-running the check.
-
- Returns:
- True if the environment is determined to be headless, False otherwise.
- """
- try:
- import pynput # noqa
-
- return False
- except Exception:
- print(
- "Error trying to import pynput. Switching to headless mode. "
- "As a result, the video stream from the cameras won't be shown, "
- "and you won't be able to change the control flow with keyboards. "
- "For more info, see traceback below.\n"
- )
- traceback.print_exc()
- print()
- return True
-
-
-def predict_action(
- observation: dict[str, np.ndarray],
- policy: PreTrainedPolicy,
- device: torch.device,
- preprocessor: PolicyProcessorPipeline[dict[str, Any], dict[str, Any]],
- postprocessor: PolicyProcessorPipeline[PolicyAction, PolicyAction],
- use_amp: bool,
- task: str | None = None,
- robot_type: str | None = None,
-):
- """
- Performs a single-step inference to predict a robot action from an observation.
-
- This function encapsulates the full inference pipeline:
- 1. Prepares the observation by converting it to PyTorch tensors and adding a batch dimension.
- 2. Runs the preprocessor pipeline on the observation.
- 3. Feeds the processed observation to the policy to get a raw action.
- 4. Runs the postprocessor pipeline on the raw action.
- 5. Formats the final action by removing the batch dimension and moving it to the CPU.
-
- Args:
- observation: A dictionary of NumPy arrays representing the robot's current observation.
- policy: The `PreTrainedPolicy` model to use for action prediction.
- device: The `torch.device` (e.g., 'cuda' or 'cpu') to run inference on.
- preprocessor: The `PolicyProcessorPipeline` for preprocessing observations.
- postprocessor: The `PolicyProcessorPipeline` for postprocessing actions.
- use_amp: A boolean to enable/disable Automatic Mixed Precision for CUDA inference.
- task: An optional string identifier for the task.
- robot_type: An optional string identifier for the robot type.
-
- Returns:
- A `torch.Tensor` containing the predicted action, ready for the robot.
- """
- observation = copy(observation)
- with (
- torch.inference_mode(),
- torch.autocast(device_type=device.type) if device.type == "cuda" and use_amp else nullcontext(),
- ):
- # Convert to pytorch format: channel first and float32 in [0,1] with batch dimension
- observation = prepare_observation_for_inference(observation, device, task, robot_type)
- observation = preprocessor(observation)
-
- # Compute the next action with the policy
- # based on the current observation
- action = policy.select_action(observation)
-
- action = postprocessor(action)
-
- return action
-
-
-def init_keyboard_listener():
- """
- Initializes a non-blocking keyboard listener for real-time user interaction.
-
- This function sets up a listener for specific keys (right arrow, left arrow, escape) to control
- the program flow during execution, such as stopping recording or exiting loops. It gracefully
- handles headless environments where keyboard listening is not possible.
-
- Returns:
- A tuple containing:
- - The `pynput.keyboard.Listener` instance, or `None` if in a headless environment.
- - A dictionary of event flags (e.g., `exit_early`) that are set by key presses.
- """
-    # Allows exiting early while recording an episode or resetting the environment
-    # by tapping the right arrow key '->'. This might require sudo permissions
-    # to allow your terminal to monitor keyboard events.
- events = {}
- events["exit_early"] = False
- events["rerecord_episode"] = False
- events["stop_recording"] = False
-
- if is_headless():
- logging.warning(
- "Headless environment detected. On-screen cameras display and keyboard inputs will not be available."
- )
- listener = None
- return listener, events
-
- # Only import pynput if not in a headless environment
- from pynput import keyboard
-
- def on_press(key):
- try:
- if key == keyboard.Key.right:
- print("Right arrow key pressed. Exiting loop...")
- events["exit_early"] = True
- elif key == keyboard.Key.left:
-                print("Left arrow key pressed. Exiting loop and re-recording the last episode...")
- events["rerecord_episode"] = True
- events["exit_early"] = True
- elif key == keyboard.Key.esc:
- print("Escape key pressed. Stopping data recording...")
- events["stop_recording"] = True
- events["exit_early"] = True
- except Exception as e:
- print(f"Error handling key press: {e}")
-
- listener = keyboard.Listener(on_press=on_press)
- listener.start()
-
- return listener, events
-
-
-def sanity_check_dataset_name(repo_id, policy_cfg):
- """
- Validates the dataset repository name against the presence of a policy configuration.
-
- This function enforces a naming convention: a dataset repository ID should start with "eval_"
- if and only if a policy configuration is provided for evaluation purposes.
-
- Args:
- repo_id: The Hugging Face Hub repository ID of the dataset.
- policy_cfg: The configuration object for the policy, or `None`.
-
- Raises:
- ValueError: If the naming convention is violated.
- """
- _, dataset_name = repo_id.split("/")
-    # Naming convention: the dataset name starts with "eval_" if and only if
-    # a policy is provided (i.e., the recording is an evaluation run).
-
- # Check if dataset_name starts with "eval_" but policy is missing
- if dataset_name.startswith("eval_") and policy_cfg is None:
- raise ValueError(
-            f"Your dataset name begins with 'eval_' ({dataset_name}), but no policy is provided."
- )
-
- # Check if dataset_name does not start with "eval_" but policy is provided
- if not dataset_name.startswith("eval_") and policy_cfg is not None:
- raise ValueError(
- f"Your dataset name does not begin with 'eval_' ({dataset_name}), but a policy is provided ({policy_cfg.type})."
- )
-
-
-def sanity_check_dataset_robot_compatibility(
- dataset: LeRobotDataset, robot: Robot, fps: int, features: dict
-) -> None:
- """
- Checks if a dataset's metadata is compatible with the current robot and recording setup.
-
- This function compares key metadata fields (`robot_type`, `fps`, and `features`) from the
- dataset against the current configuration to ensure that appended data will be consistent.
-
- Args:
- dataset: The `LeRobotDataset` instance to check.
- robot: The `Robot` instance representing the current hardware setup.
- fps: The current recording frequency (frames per second).
- features: The dictionary of features for the current recording session.
-
- Raises:
- ValueError: If any of the checked metadata fields do not match.
- """
- fields = [
- ("robot_type", dataset.meta.robot_type, robot.robot_type),
- ("fps", dataset.fps, fps),
- ("features", dataset.features, {**features, **DEFAULT_FEATURES}),
- ]
-
- mismatches = []
- for field, dataset_value, present_value in fields:
- diff = DeepDiff(dataset_value, present_value, exclude_regex_paths=[r".*\['info'\]$"])
- if diff:
- mismatches.append(f"{field}: expected {present_value}, got {dataset_value}")
-
- if mismatches:
- raise ValueError(
- "Dataset metadata compatibility check failed with mismatches:\n" + "\n".join(mismatches)
- )
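
A hedged sketch of the control loop these utilities support. The `robot`, `policy`, `preprocessor`, `postprocessor`, and `fps` names are assumed to exist in scope, and the `get_observation`/`send_action` calls are illustrative:

```python
import time

import torch

from lerobot.utils.control_utils import init_keyboard_listener, predict_action

listener, events = init_keyboard_listener()
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

while not events["stop_recording"]:
    obs = robot.get_observation()  # assumed Robot API
    action = predict_action(
        obs, policy, device, preprocessor, postprocessor,
        use_amp=False, task="pick up the cube",
    )
    robot.send_action(action)
    time.sleep(1 / fps)
    if events["exit_early"]:
        events["exit_early"] = False  # consume the flag set by the right arrow key
        break

if listener is not None:
    listener.stop()
```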
diff --git a/lerobot/src/lerobot/utils/decorators.py b/lerobot/src/lerobot/utils/decorators.py
deleted file mode 100644
index 149b6ed98b957ab8d0f9fd9cc15b4a3fce499ac4..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/decorators.py
+++ /dev/null
@@ -1,41 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2026 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from functools import wraps
-
-from lerobot.utils.errors import DeviceAlreadyConnectedError, DeviceNotConnectedError
-
-
-def check_if_not_connected(func):
- @wraps(func)
- def wrapper(self, *args, **kwargs):
- if not self.is_connected:
- raise DeviceNotConnectedError(
- f"{self.__class__.__name__} is not connected. Run `.connect()` first."
- )
- return func(self, *args, **kwargs)
-
- return wrapper
-
-
-def check_if_already_connected(func):
- @wraps(func)
- def wrapper(self, *args, **kwargs):
- if self.is_connected:
- raise DeviceAlreadyConnectedError(f"{self.__class__.__name__} is already connected.")
- return func(self, *args, **kwargs)
-
- return wrapper
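
An illustration of the intended usage on a hypothetical device class:

```python
from lerobot.utils.decorators import check_if_already_connected, check_if_not_connected


class Gripper:
    """Toy device exposing the `is_connected` attribute the decorators expect."""

    def __init__(self):
        self.is_connected = False

    @check_if_already_connected
    def connect(self):
        self.is_connected = True

    @check_if_not_connected
    def read_position(self) -> float:
        return 0.0


g = Gripper()
g.connect()
g.read_position()            # OK: connected
# g.connect()                # would raise DeviceAlreadyConnectedError
# Gripper().read_position()  # would raise DeviceNotConnectedError
```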
diff --git a/lerobot/src/lerobot/utils/errors.py b/lerobot/src/lerobot/utils/errors.py
deleted file mode 100644
index b791eb0b9ae593dc2297a07a0e32fff844b02060..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/errors.py
+++ /dev/null
@@ -1,32 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-
-class DeviceNotConnectedError(ConnectionError):
- """Exception raised when the device is not connected."""
-
- def __init__(self, message="This device is not connected. Try calling `connect()` first."):
- self.message = message
- super().__init__(self.message)
-
-
-class DeviceAlreadyConnectedError(ConnectionError):
- """Exception raised when the device is already connected."""
-
- def __init__(
- self,
- message="This device is already connected. Try not calling `connect()` twice.",
- ):
- self.message = message
- super().__init__(self.message)
diff --git a/lerobot/src/lerobot/utils/hub.py b/lerobot/src/lerobot/utils/hub.py
deleted file mode 100644
index a810068ffc886c918944861b30a4349a298da66b..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/hub.py
+++ /dev/null
@@ -1,203 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import builtins
-from pathlib import Path
-from tempfile import TemporaryDirectory
-from typing import Any, TypeVar
-
-from huggingface_hub import HfApi
-from huggingface_hub.utils import validate_hf_hub_args
-
-T = TypeVar("T", bound="HubMixin")
-
-
-class HubMixin:
- """
- A Mixin containing the functionality to push an object to the hub.
-
- This is similar to huggingface_hub.ModelHubMixin but is lighter and makes less assumptions about its
- subclasses (in particular, the fact that it's not necessarily a model).
-
- The inheriting classes must implement '_save_pretrained' and 'from_pretrained'.
- """
-
- def save_pretrained(
- self,
- save_directory: str | Path,
- *,
- repo_id: str | None = None,
- push_to_hub: bool = False,
- card_kwargs: dict[str, Any] | None = None,
- **push_to_hub_kwargs,
- ) -> str | None:
- """
- Save object in local directory.
-
- Args:
- save_directory (`str` or `Path`):
- Path to directory in which the object will be saved.
- push_to_hub (`bool`, *optional*, defaults to `False`):
- Whether or not to push your object to the Huggingface Hub after saving it.
- repo_id (`str`, *optional*):
- ID of your repository on the Hub. Used only if `push_to_hub=True`. Will default to the folder name if
- not provided.
- card_kwargs (`Dict[str, Any]`, *optional*):
- Additional arguments passed to the card template to customize the card.
- push_to_hub_kwargs:
- Additional key word arguments passed along to the [`~HubMixin.push_to_hub`] method.
- Returns:
- `str` or `None`: url of the commit on the Hub if `push_to_hub=True`, `None` otherwise.
- """
- save_directory = Path(save_directory)
- save_directory.mkdir(parents=True, exist_ok=True)
-
- # save object (weights, files, etc.)
- self._save_pretrained(save_directory)
-
- # push to the Hub if required
- if push_to_hub:
- if repo_id is None:
- repo_id = save_directory.name # Defaults to `save_directory` name
- return self.push_to_hub(repo_id=repo_id, card_kwargs=card_kwargs, **push_to_hub_kwargs)
- return None
-
- def _save_pretrained(self, save_directory: Path) -> None:
- """
- Overwrite this method in subclass to define how to save your object.
-
- Args:
- save_directory (`str` or `Path`):
- Path to directory in which the object files will be saved.
- """
- raise NotImplementedError
-
- @classmethod
- @validate_hf_hub_args
- def from_pretrained(
- cls: builtins.type[T],
- pretrained_name_or_path: str | Path,
- *,
- force_download: bool = False,
- resume_download: bool | None = None,
- proxies: dict | None = None,
- token: str | bool | None = None,
- cache_dir: str | Path | None = None,
- local_files_only: bool = False,
- revision: str | None = None,
- **kwargs,
- ) -> T:
- """
- Download the object from the Huggingface Hub and instantiate it.
-
- Args:
- pretrained_name_or_path (`str`, `Path`):
- - Either the `repo_id` (string) of the object hosted on the Hub, e.g. `lerobot/diffusion_pusht`.
- - Or a path to a `directory` containing the object files saved using `.save_pretrained`,
- e.g., `../path/to/my_model_directory/`.
- revision (`str`, *optional*):
- Revision on the Hub. Can be a branch name, a git tag or any commit id.
- Defaults to the latest commit on `main` branch.
- force_download (`bool`, *optional*, defaults to `False`):
- Whether to force (re-)downloading the files from the Hub, overriding the existing cache.
- proxies (`Dict[str, str]`, *optional*):
- A dictionary of proxy servers to use by protocol or endpoint, e.g., `{'http': 'foo.bar:3128',
- 'http://hostname': 'foo.bar:4012'}`. The proxies are used on every request.
- token (`str` or `bool`, *optional*):
- The token to use as HTTP bearer authorization for remote files. By default, it will use the token
- cached when running `huggingface-cli login`.
- cache_dir (`str`, `Path`, *optional*):
- Path to the folder where cached files are stored.
- local_files_only (`bool`, *optional*, defaults to `False`):
- If `True`, avoid downloading the file and return the path to the local cached file if it exists.
- kwargs (`Dict`, *optional*):
- Additional kwargs to pass to the object during initialization.
- """
- raise NotImplementedError
-
- @validate_hf_hub_args
- def push_to_hub(
- self,
- repo_id: str,
- *,
- commit_message: str | None = None,
- private: bool | None = None,
- token: str | None = None,
- branch: str | None = None,
- create_pr: bool | None = None,
- allow_patterns: list[str] | str | None = None,
- ignore_patterns: list[str] | str | None = None,
- delete_patterns: list[str] | str | None = None,
- card_kwargs: dict[str, Any] | None = None,
- ) -> str:
- """
- Upload model checkpoint to the Hub.
-
- Use `allow_patterns` and `ignore_patterns` to precisely filter which files should be pushed to the hub. Use
- `delete_patterns` to delete existing remote files in the same commit. See [`upload_folder`] reference for more
- details.
-
- Args:
- repo_id (`str`):
- ID of the repository to push to (example: `"username/my-model"`).
- commit_message (`str`, *optional*):
- Message to commit while pushing.
- private (`bool`, *optional*):
- Whether the repository created should be private.
- If `None` (default), the repo will be public unless the organization's default is private.
- token (`str`, *optional*):
- The token to use as HTTP bearer authorization for remote files. By default, it will use the token
- cached when running `huggingface-cli login`.
- branch (`str`, *optional*):
- The git branch on which to push the model. This defaults to `"main"`.
- create_pr (`boolean`, *optional*):
- Whether or not to create a Pull Request from `branch` with that commit. Defaults to `False`.
- allow_patterns (`List[str]` or `str`, *optional*):
- If provided, only files matching at least one pattern are pushed.
- ignore_patterns (`List[str]` or `str`, *optional*):
- If provided, files matching any of the patterns are not pushed.
- delete_patterns (`List[str]` or `str`, *optional*):
- If provided, remote files matching any of the patterns will be deleted from the repo.
- card_kwargs (`Dict[str, Any]`, *optional*):
- Additional arguments passed to the card template to customize the card.
-
- Returns:
- The url of the commit of your object in the given repository.
- """
- api = HfApi(token=token)
- repo_id = api.create_repo(repo_id=repo_id, private=private, exist_ok=True).repo_id
-
- if commit_message is None:
- if "Policy" in self.__class__.__name__:
- commit_message = "Upload policy"
- elif "Config" in self.__class__.__name__:
- commit_message = "Upload config"
- else:
- commit_message = f"Upload {self.__class__.__name__}"
-
- # Push the files to the repo in a single commit
- with TemporaryDirectory(ignore_cleanup_errors=True) as tmp:
- saved_path = Path(tmp) / repo_id
- self.save_pretrained(saved_path, card_kwargs=card_kwargs)
- return api.upload_folder(
- repo_id=repo_id,
- repo_type="model",
- folder_path=saved_path,
- commit_message=commit_message,
- revision=branch,
- create_pr=create_pr,
- allow_patterns=allow_patterns,
- ignore_patterns=ignore_patterns,
- delete_patterns=delete_patterns,
- )
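
A minimal `HubMixin` subclass sketch (class and file names are hypothetical): only the two abstract hooks need implementing, while `save_pretrained` and `push_to_hub` come from the mixin. The `from_pretrained` below handles local paths only:

```python
import json
from pathlib import Path

from lerobot.utils.hub import HubMixin


class Recipe(HubMixin):
    """Toy object persisted as a single JSON file."""

    def __init__(self, params: dict):
        self.params = params

    def _save_pretrained(self, save_directory: Path) -> None:
        (save_directory / "recipe.json").write_text(json.dumps(self.params))

    @classmethod
    def from_pretrained(cls, pretrained_name_or_path, **kwargs):
        path = Path(pretrained_name_or_path) / "recipe.json"
        return cls(json.loads(path.read_text()))


Recipe({"lr": 1e-4}).save_pretrained("outputs/recipe")
```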
diff --git a/lerobot/src/lerobot/utils/import_utils.py b/lerobot/src/lerobot/utils/import_utils.py
deleted file mode 100644
index 166cf6426d2284caddde96e33f26303e00c03643..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/import_utils.py
+++ /dev/null
@@ -1,163 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import importlib
-import importlib.metadata
-import importlib.util
-import logging
-from typing import Any
-
-from draccus.choice_types import ChoiceRegistry
-
-
-def is_package_available(pkg_name: str, return_version: bool = False) -> tuple[bool, str] | bool:
- """Copied from https://github.com/huggingface/transformers/blob/main/src/transformers/utils/import_utils.py
- Check if the package spec exists and grab its version to avoid importing a local directory.
- **Note:** this doesn't work for all packages.
- """
- package_exists = importlib.util.find_spec(pkg_name) is not None
- package_version = "N/A"
- if package_exists:
- try:
- # Primary method to get the package version
- package_version = importlib.metadata.version(pkg_name)
-
- except importlib.metadata.PackageNotFoundError:
- # Fallback method: Only for "torch" and versions containing "dev"
- if pkg_name == "torch":
- try:
- package = importlib.import_module(pkg_name)
- temp_version = getattr(package, "__version__", "N/A")
- # Check if the version contains "dev"
- if "dev" in temp_version:
- package_version = temp_version
- package_exists = True
- else:
- package_exists = False
- except ImportError:
- # If the package can't be imported, it's not available
- package_exists = False
- elif pkg_name == "grpc":
- package = importlib.import_module(pkg_name)
- package_version = getattr(package, "__version__", "N/A")
- else:
- # For packages other than "torch", don't attempt the fallback and set as not available
- package_exists = False
- logging.debug(f"Detected {pkg_name} version: {package_version}")
- if return_version:
- return package_exists, package_version
- else:
- return package_exists
-
-
-_transformers_available = is_package_available("transformers")
-_peft_available = is_package_available("peft")
-_scipy_available = is_package_available("scipy")
-_reachy2_sdk_available = is_package_available("reachy2_sdk")
-
-
-def make_device_from_device_class(config: ChoiceRegistry) -> Any:
- """
- Dynamically instantiates an object from its `ChoiceRegistry` configuration.
-
- This factory uses the module path and class name from the `config` object's
- type to locate and instantiate the corresponding device class (not the config).
- It derives the device class name by removing a trailing 'Config' from the config
- class name and tries a few candidate modules where the device implementation is
- commonly located.
- """
- if not isinstance(config, ChoiceRegistry):
- raise ValueError(f"Config should be an instance of `ChoiceRegistry`, got {type(config)}")
-
- config_cls = config.__class__
- module_path = config_cls.__module__ # typical: lerobot_teleop_mydevice.config_mydevice
- config_name = config_cls.__name__ # typical: MyDeviceConfig
-
- # Derive device class name (strip "Config")
- if not config_name.endswith("Config"):
- raise ValueError(f"Config class name '{config_name}' does not end with 'Config'")
-
- device_class_name = config_name[:-6] # typical: MyDeviceConfig -> MyDevice
-
- # Build candidate modules to search for the device class
- parts = module_path.split(".")
- parent_module = ".".join(parts[:-1]) if len(parts) > 1 else module_path
- candidates = [
- parent_module, # typical: lerobot_teleop_mydevice
- parent_module + "." + device_class_name.lower(), # typical: lerobot_teleop_mydevice.mydevice
- ]
-
- # handle modules named like "config_xxx" -> try replacing that piece with "xxx"
- last = parts[-1] if parts else ""
- if last.startswith("config_"):
- candidates.append(".".join(parts[:-1] + [last.replace("config_", "")]))
-
- # de-duplicate while preserving order
- seen: set[str] = set()
- candidates = [c for c in candidates if not (c in seen or seen.add(c))]
-
- tried: list[str] = []
- for candidate in candidates:
- tried.append(candidate)
- try:
- module = importlib.import_module(candidate)
- except ImportError:
- continue
-
- if hasattr(module, device_class_name):
- cls = getattr(module, device_class_name)
- if callable(cls):
- try:
- return cls(config)
- except TypeError as e:
- raise TypeError(
- f"Failed to instantiate '{device_class_name}' from module '{candidate}': {e}"
- ) from e
-
- raise ImportError(
- f"Could not locate device class '{device_class_name}' for config '{config_name}'. "
- f"Tried modules: {tried}. Ensure your device class name is the config class name without "
- f"'Config' and that it's importable from one of those modules."
- )
-
-
-def register_third_party_plugins() -> None:
- """
- Discover and import third-party LeRobot plugins so they can register themselves.
-
- This function uses `importlib.metadata` to find packages installed in the environment
- (including editable installs) starting with 'lerobot_robot_', 'lerobot_camera_',
- 'lerobot_teleoperator_', or 'lerobot_policy_' and imports them.
- """
- prefixes = ("lerobot_robot_", "lerobot_camera_", "lerobot_teleoperator_", "lerobot_policy_")
- imported: list[str] = []
- failed: list[str] = []
-
- def attempt_import(module_name: str):
- try:
- importlib.import_module(module_name)
- imported.append(module_name)
- logging.info("Imported third-party plugin: %s", module_name)
- except Exception:
- logging.exception("Could not import third-party plugin: %s", module_name)
- failed.append(module_name)
-
- for dist in importlib.metadata.distributions():
- dist_name = dist.metadata.get("Name")
- if not dist_name:
- continue
- if dist_name.startswith(prefixes):
- attempt_import(dist_name)
-
- logging.debug("Third-party plugin import summary: imported=%s failed=%s", imported, failed)
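
A quick, runnable illustration of the availability check, plus the plugin discovery hook. The prefixes above are the contract: a hypothetical `lerobot_robot_myarm` distribution would be imported automatically so its classes can self-register:

```python
from lerobot.utils.import_utils import is_package_available, register_third_party_plugins

ok, version = is_package_available("torch", return_version=True)
print(f"torch available={ok}, version={version}")

# Imports every installed lerobot_robot_*/lerobot_camera_*/
# lerobot_teleoperator_*/lerobot_policy_* distribution.
register_third_party_plugins()
```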
diff --git a/lerobot/src/lerobot/utils/io_utils.py b/lerobot/src/lerobot/utils/io_utils.py
deleted file mode 100644
index 1226772c97f4c9f62047d80a500704e6c17673f0..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/io_utils.py
+++ /dev/null
@@ -1,111 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import json
-import warnings
-from pathlib import Path
-from typing import TypeVar
-
-import imageio
-
-JsonLike = str | int | float | bool | None | list["JsonLike"] | dict[str, "JsonLike"] | tuple["JsonLike", ...]
-T = TypeVar("T", bound=JsonLike)
-
-
-def write_video(video_path, stacked_frames, fps):
- # Filter out DeprecationWarnings raised from pkg_resources
- with warnings.catch_warnings():
- warnings.filterwarnings(
- "ignore", "pkg_resources is deprecated as an API", category=DeprecationWarning
- )
- imageio.mimsave(video_path, stacked_frames, fps=fps)
-
-
-def deserialize_json_into_object(fpath: Path, obj: T) -> T:
- """
- Loads the JSON data from `fpath` and recursively fills `obj` with the
- corresponding values (strictly matching structure and types).
- Tuples in `obj` are expected to be lists in the JSON data, which will be
- converted back into tuples.
- """
- with open(fpath, encoding="utf-8") as f:
- data = json.load(f)
-
- def _deserialize(target, source):
- """
- Recursively overwrite the structure in `target` with data from `source`,
- performing strict checks on structure and type.
- Returns the updated version of `target` (especially important for tuples).
- """
-
- # If the target is a dictionary, source must be a dictionary as well.
- if isinstance(target, dict):
- if not isinstance(source, dict):
- raise TypeError(f"Type mismatch: expected dict, got {type(source)}")
-
- # Check that they have exactly the same set of keys.
- if target.keys() != source.keys():
- raise ValueError(
- f"Dictionary keys do not match.\nExpected: {target.keys()}, got: {source.keys()}"
- )
-
- # Recursively update each key.
- for k in target:
- target[k] = _deserialize(target[k], source[k])
-
- return target
-
- # If the target is a list, source must be a list as well.
- elif isinstance(target, list):
- if not isinstance(source, list):
- raise TypeError(f"Type mismatch: expected list, got {type(source)}")
-
- # Check length
- if len(target) != len(source):
- raise ValueError(f"List length mismatch: expected {len(target)}, got {len(source)}")
-
- # Recursively update each element.
- for i in range(len(target)):
- target[i] = _deserialize(target[i], source[i])
-
- return target
-
- # If the target is a tuple, the source must be a list in JSON,
- # which we'll convert back to a tuple.
- elif isinstance(target, tuple):
- if not isinstance(source, list):
- raise TypeError(f"Type mismatch: expected list (for tuple), got {type(source)}")
-
- if len(target) != len(source):
- raise ValueError(f"Tuple length mismatch: expected {len(target)}, got {len(source)}")
-
- # Convert each element, forming a new tuple.
- converted_items = []
- for t_item, s_item in zip(target, source, strict=False):
- converted_items.append(_deserialize(t_item, s_item))
-
- # Return a brand new tuple (tuples are immutable in Python).
- return tuple(converted_items)
-
- # Otherwise, we're dealing with a "primitive" (int, float, str, bool, None).
- else:
- # Check the exact type. If these must match 1:1, do:
- if type(target) is not type(source):
- raise TypeError(f"Type mismatch: expected {type(target)}, got {type(source)}")
- return source
-
- # Perform the in-place/recursive deserialization
- updated_obj = _deserialize(obj, data)
- return updated_obj
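
A round-trip sketch of `deserialize_json_into_object` (file name hypothetical): the template object fixes the expected structure and types, and tuples survive the JSON detour as tuples:

```python
import json
from pathlib import Path

from lerobot.utils.io_utils import deserialize_json_into_object

template = {"fps": 0, "cameras": ("front", "wrist"), "enabled": False}
Path("config.json").write_text(
    json.dumps({"fps": 30, "cameras": ["front", "wrist"], "enabled": True})
)

loaded = deserialize_json_into_object(Path("config.json"), template)
assert loaded == {"fps": 30, "cameras": ("front", "wrist"), "enabled": True}
```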
diff --git a/lerobot/src/lerobot/utils/logging_utils.py b/lerobot/src/lerobot/utils/logging_utils.py
deleted file mode 100644
index 5b61394ac8239fc11ebe9edbe4c51672848a9f33..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/logging_utils.py
+++ /dev/null
@@ -1,167 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from collections.abc import Callable
-from typing import Any
-
-from lerobot.utils.utils import format_big_number
-
-
-class AverageMeter:
- """
- Computes and stores the average and current value
- Adapted from https://github.com/pytorch/examples/blob/main/imagenet/main.py
- """
-
- def __init__(self, name: str, fmt: str = ":f"):
- self.name = name
- self.fmt = fmt
- self.reset()
-
- def reset(self) -> None:
- self.val = 0.0
- self.avg = 0.0
- self.sum = 0.0
- self.count = 0.0
-
- def update(self, val: float, n: int = 1) -> None:
- self.val = val
- self.sum += val * n
- self.count += n
- self.avg = self.sum / self.count
-
- def __str__(self):
- fmtstr = "{name}:{avg" + self.fmt + "}"
- return fmtstr.format(**self.__dict__)
-
-
-class MetricsTracker:
- """
- A helper class to track and log metrics over time.
-
- Usage pattern:
-
- ```python
- # initialize, potentially with non-zero initial step (e.g. if resuming run)
- metrics = {"loss": AverageMeter("loss", ":.3f")}
- train_metrics = MetricsTracker(cfg, dataset, metrics, initial_step=step)
-
- # update metrics derived from step (samples, episodes, epochs) at each training step
- train_metrics.step()
-
- # update various metrics
- loss = policy.forward(batch)
- train_metrics.loss = loss
-
- # display current metrics
- logging.info(train_metrics)
-
- # export for wandb
- wandb.log(train_metrics.to_dict())
-
- # reset averages after logging
- train_metrics.reset_averages()
- ```
- """
-
- __keys__ = [
- "_batch_size",
- "_num_frames",
- "_avg_samples_per_ep",
- "metrics",
- "steps",
- "samples",
- "episodes",
- "epochs",
- "accelerator",
- ]
-
- def __init__(
- self,
- batch_size: int,
- num_frames: int,
- num_episodes: int,
- metrics: dict[str, AverageMeter],
- initial_step: int = 0,
- accelerator: Callable | None = None,
- ):
- self.__dict__.update(dict.fromkeys(self.__keys__))
- self._batch_size = batch_size
- self._num_frames = num_frames
- self._avg_samples_per_ep = num_frames / num_episodes
- self.metrics = metrics
-
- self.steps = initial_step
- # A sample is an (observation,action) pair, where observation and action
- # can be on multiple timestamps. In a batch, we have `batch_size` number of samples.
- self.samples = self.steps * self._batch_size
- self.episodes = self.samples / self._avg_samples_per_ep
- self.epochs = self.samples / self._num_frames
- self.accelerator = accelerator
-
- def __getattr__(self, name: str) -> int | dict[str, AverageMeter] | AverageMeter | Any:
- if name in self.__dict__:
- return self.__dict__[name]
- elif name in self.metrics:
- return self.metrics[name]
- else:
- raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{name}'")
-
- def __setattr__(self, name: str, value: Any) -> None:
- if name in self.__dict__:
- super().__setattr__(name, value)
- elif name in self.metrics:
- self.metrics[name].update(value)
- else:
- raise AttributeError(f"'{self.__class__.__name__}' object has no attribute '{name}'")
-
- def step(self) -> None:
- """
- Updates metrics that depend on 'step' for one step.
- """
- self.steps += 1
- self.samples += self._batch_size * (self.accelerator.num_processes if self.accelerator else 1)
- self.episodes = self.samples / self._avg_samples_per_ep
- self.epochs = self.samples / self._num_frames
-
- def __str__(self) -> str:
- display_list = [
- f"step:{format_big_number(self.steps)}",
- # number of samples seen during training
- f"smpl:{format_big_number(self.samples)}",
- # number of episodes seen during training
- f"ep:{format_big_number(self.episodes)}",
- # number of time all unique samples are seen
- f"epch:{self.epochs:.2f}",
- *[str(m) for m in self.metrics.values()],
- ]
- return " ".join(display_list)
-
- def to_dict(self, use_avg: bool = True) -> dict[str, int | float]:
- """
- Returns the current metric values (or averages if `use_avg=True`) as a dict.
- """
- return {
- "steps": self.steps,
- "samples": self.samples,
- "episodes": self.episodes,
- "epochs": self.epochs,
- **{k: m.avg if use_avg else m.val for k, m in self.metrics.items()},
- }
-
- def reset_averages(self) -> None:
- """Resets average meters."""
- for m in self.metrics.values():
- m.reset()
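
The meter's formatting in practice (a tiny, self-contained example):

```python
from lerobot.utils.logging_utils import AverageMeter

loss = AverageMeter("loss", ":.3f")
loss.update(0.5)
loss.update(0.3)
print(loss)  # loss:0.400 (running average of the two updates)
```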
diff --git a/lerobot/src/lerobot/utils/rabc.py b/lerobot/src/lerobot/utils/rabc.py
deleted file mode 100644
index aa3d74045b475f18c76cb29949cae02e93c2d9ec..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/rabc.py
+++ /dev/null
@@ -1,288 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import logging
-from pathlib import Path
-
-import numpy as np
-import pandas as pd
-import torch
-from huggingface_hub import hf_hub_download
-
-
-def resolve_hf_path(path: str | Path) -> Path:
- """Resolve a path that may be a HuggingFace URL (hf://datasets/...) to a local path."""
- path_str = str(path)
- if path_str.startswith("hf://datasets/"):
- parts = path_str.replace("hf://datasets/", "").split("/")
- repo_id = "/".join(parts[:2])
- filename = "/".join(parts[2:])
- return Path(hf_hub_download(repo_id=repo_id, filename=filename, repo_type="dataset"))
- return Path(path)
-
-
-class RABCWeights:
- """
- Load precomputed SARM progress values and compute RA-BC weights during training.
-
- Progress values are loaded from a parquet file (generated by compute_rabc_weights.py).
- During training, computes:
- - progress_delta = progress[t + chunk_size] - progress[t]
- - rabc_weight based on the delta (paper Eq. 8-9)
-
- Args:
- progress_path: Path to parquet file with precomputed progress values
- chunk_size: Number of frames ahead for computing progress delta
- head_mode: Which SARM head to use ("sparse" or "dense")
- kappa: Hard threshold for high-quality samples (default: 0.01)
- epsilon: Small constant for numerical stability (default: 1e-6)
- fallback_weight: Weight to use for frames without valid delta (default: 1.0)
- device: Device to return tensors on
- """
-
- def __init__(
- self,
- progress_path: str | Path,
- chunk_size: int = 50,
- head_mode: str = "sparse",
- kappa: float = 0.01,
- epsilon: float = 1e-6,
- fallback_weight: float = 1.0,
-        device: torch.device | None = None,
- ):
- self.progress_path = resolve_hf_path(progress_path)
- self.chunk_size = chunk_size
- self.head_mode = head_mode
- self.kappa = kappa
- self.epsilon = epsilon
- self.fallback_weight = fallback_weight
- self.device = device or torch.device("cuda" if torch.cuda.is_available() else "cpu")
-
- # Determine progress column name
- self.progress_column = f"progress_{head_mode}"
-
- # Load progress values
- logging.info(f"Loading SARM progress values from {self.progress_path}")
- self.df = pd.read_parquet(self.progress_path)
-
- # Check if the requested head mode column exists
- if self.progress_column not in self.df.columns:
- available = [c for c in self.df.columns if c.startswith("progress")]
- raise ValueError(
- f"Column '{self.progress_column}' not found. Available progress columns: {available}"
- )
-
- logging.info(f"Using progress column: {self.progress_column}")
-
- self.progress_lookup = {}
- self.episode_lookup = {}
-
- for _, row in self.df.iterrows():
- global_idx = int(row["index"])
- progress = row[self.progress_column]
- episode_idx = int(row["episode_index"])
-
- if not np.isnan(progress):
- self.progress_lookup[global_idx] = float(progress)
- self.episode_lookup[global_idx] = episode_idx
-
- # Build episode boundaries for delta computation
- self.episode_boundaries = {}
- for episode_idx in self.df["episode_index"].unique():
- ep_df = self.df[self.df["episode_index"] == episode_idx]
- self.episode_boundaries[int(episode_idx)] = {
- "start": int(ep_df["index"].min()),
- "end": int(ep_df["index"].max()) + 1,
- }
-
- logging.info(f"Loaded {len(self.progress_lookup)} frame progress values")
- logging.info(f"Chunk size for delta computation: {chunk_size}")
-
- # Compute global statistics for weight computation
- self._compute_global_stats()
-
- def _compute_global_stats(self):
- """Compute global mean and std of progress deltas for weight calculation."""
- all_deltas = []
-
- for global_idx, progress in self.progress_lookup.items():
- episode_idx = self.episode_lookup.get(global_idx)
- if episode_idx is None:
- continue
-
- bounds = self.episode_boundaries.get(episode_idx)
- if bounds is None:
- continue
-
- future_idx = global_idx + self.chunk_size
- if future_idx >= bounds["end"]:
- # Near end of episode: use last frame's progress
- future_idx = bounds["end"] - 1
-
- future_progress = self.progress_lookup.get(future_idx)
- if future_progress is not None:
- delta = future_progress - progress
- all_deltas.append(delta)
-
- if all_deltas:
- self.delta_mean = max(np.mean(all_deltas), 0.0)
- self.delta_std = max(np.std(all_deltas), self.epsilon)
- logging.info(f"Progress delta stats: mean={self.delta_mean:.4f}, std={self.delta_std:.4f}")
- else:
- self.delta_mean = 0.0
- self.delta_std = self.epsilon
- logging.warning("No valid progress deltas found, using default stats")
-
- def compute_batch_weights(self, batch: dict) -> tuple[torch.Tensor, dict]:
- """
- Compute RA-BC weights for a batch.
-
- For each sample:
- 1. Get progress at current frame
- 2. Get progress at frame + chunk_size (within same episode)
- 3. Compute delta = future_progress - current_progress
- 4. Compute weight using paper Eq. 8-9
-
- Args:
- batch: Training batch containing "index" key with global frame indices
-
- Returns:
- Tuple of:
- - Weights tensor (batch_size,) normalized to sum to batch_size
- - Stats dict with raw_mean_weight, num_zero_weight, num_full_weight
- """
- indices = batch.get("index")
- if indices is None:
- logging.warning("RA-BC: Batch missing 'index' key, using uniform weights")
- batch_size = self._get_batch_size(batch)
- return torch.ones(batch_size, device=self.device), {"raw_mean_weight": 1.0}
-
- # Convert to list of ints
- if isinstance(indices, torch.Tensor):
- indices = indices.cpu().numpy().tolist()
- elif isinstance(indices, np.ndarray):
- indices = indices.tolist()
-
- # Compute deltas and weights for each sample
- deltas = []
- for idx in indices:
- idx = int(idx)
- delta = self._compute_delta(idx)
- deltas.append(delta)
-
- deltas = np.array(deltas, dtype=np.float32)
-
- # Compute weights from deltas
- weights = self._compute_weights(deltas)
-
- # Compute stats before normalization for logging
- raw_mean_weight = float(np.nanmean(weights))
- num_zero_weight = int(np.sum(weights == 0))
- num_full_weight = int(np.sum(weights == 1.0))
- batch_stats = {
- "raw_mean_weight": raw_mean_weight,
- "num_zero_weight": num_zero_weight,
- "num_full_weight": num_full_weight,
- }
-
- weights = torch.tensor(weights, device=self.device, dtype=torch.float32)
-
- # Normalize to sum to batch_size
- batch_size = len(weights)
- weight_sum = weights.sum() + self.epsilon
- weights = weights * batch_size / weight_sum
-
- return weights, batch_stats
-
- def _compute_delta(self, global_idx: int) -> float:
- """Compute progress delta for a single frame."""
- current_progress = self.progress_lookup.get(global_idx)
- if current_progress is None:
- return np.nan
-
- episode_idx = self.episode_lookup.get(global_idx)
- if episode_idx is None:
- return np.nan
-
- bounds = self.episode_boundaries.get(episode_idx)
- if bounds is None:
- return np.nan
-
- future_idx = global_idx + self.chunk_size # Δ = chunk_size
- if future_idx >= bounds["end"]:
- # Near end of episode: use last frame's progress instead
- future_idx = bounds["end"] - 1
-
- future_progress = self.progress_lookup.get(future_idx)
- if future_progress is None:
- return np.nan
-
- return future_progress - current_progress
-
- def _compute_weights(self, deltas: np.ndarray) -> np.ndarray:
- """
- Compute RA-BC weights from progress deltas.
-
- Following paper Eq. 8-9:
-        - Soft weight:  w̃_i = clip((r_i − (µ − 2σ)) / (4σ + ε), 0, 1)
-        - Final weight: w_i = 1{r_i > κ} + 1{0 ≤ r_i ≤ κ} · w̃_i
-
- Returns:
- Array of weights
- """
- valid_mask = ~np.isnan(deltas)
-
- # Compute soft weights using global statistics
- lower_bound = self.delta_mean - 2 * self.delta_std
- soft_weights = (deltas - lower_bound) / (4 * self.delta_std + self.epsilon)
- soft_weights = np.clip(soft_weights, 0.0, 1.0)
-
- # Apply paper's Eq. 9
- weights = np.zeros_like(deltas, dtype=np.float32)
-
- # High quality: ri > kappa → weight = 1
- high_quality_mask = deltas > self.kappa
- weights[high_quality_mask] = 1.0
-
- # Moderate quality: 0 <= ri <= kappa → weight = soft_weight
- moderate_mask = (deltas >= 0) & (deltas <= self.kappa)
- weights[moderate_mask] = soft_weights[moderate_mask]
-
- # Negative progress: ri < 0 → weight = 0 (already 0)
- # Invalid (NaN): use fallback weight
- weights[~valid_mask] = self.fallback_weight
-
- return weights
-
- def _get_batch_size(self, batch: dict) -> int:
- """Determine batch size from batch."""
- for key in ["action", "index"]:
- if key in batch:
- val = batch[key]
- if isinstance(val, (torch.Tensor, np.ndarray)):
- return val.shape[0]
- return 1
-
- def get_stats(self) -> dict:
- """Get statistics."""
- return {
- "num_frames": len(self.progress_lookup),
- "chunk_size": self.chunk_size,
- "head_mode": self.head_mode,
- "delta_mean": self.delta_mean,
- "delta_std": self.delta_std,
- "kappa": self.kappa,
- }
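
For reference, here is a minimal standalone sketch of the Eq. 8-9 weighting rule that `_compute_weights` above implements. The statistics and threshold values are made up for illustration; in the real class they come from the per-dataset delta statistics computed during initialization.

```python
import numpy as np

# Hypothetical values; the real ones are the dataset-wide delta stats and kappa.
mean, std, kappa, eps, fallback = 0.02, 0.01, 0.05, 1e-8, 1.0

deltas = np.array([0.10, 0.03, -0.02, np.nan], dtype=np.float32)

# Eq. 8: soft weight from the globally standardized progress delta.
soft = np.clip((deltas - (mean - 2 * std)) / (4 * std + eps), 0.0, 1.0)

# Eq. 9: full weight above kappa, soft weight on [0, kappa], zero for regressions.
weights = np.zeros_like(deltas)
weights[deltas > kappa] = 1.0
moderate = (deltas >= 0) & (deltas <= kappa)
weights[moderate] = soft[moderate]
weights[np.isnan(deltas)] = fallback  # frames with no valid delta keep a fallback weight

print(weights)  # [1.0, 0.75, 0.0, 1.0]
```
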
diff --git a/lerobot/src/lerobot/utils/random_utils.py b/lerobot/src/lerobot/utils/random_utils.py
deleted file mode 100644
index 5d9558b173c2b30d6a6bd653cb52518c2770cf05..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/random_utils.py
+++ /dev/null
@@ -1,198 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import random
-from collections.abc import Callable, Generator
-from contextlib import contextmanager
-from pathlib import Path
-from typing import Any
-
-import numpy as np
-import torch
-from safetensors.torch import load_file, save_file
-
-from lerobot.datasets.utils import flatten_dict, unflatten_dict
-from lerobot.utils.constants import RNG_STATE
-
-
-def serialize_python_rng_state() -> dict[str, torch.Tensor]:
- """
- Returns the rng state for `random` in the form of a flat dict[str, torch.Tensor] to be saved using
- `safetensors.save_file()` or `torch.save()`.
- """
- py_state = random.getstate()
- return {
- "py_rng_version": torch.tensor([py_state[0]], dtype=torch.int64),
- "py_rng_state": torch.tensor(py_state[1], dtype=torch.int64),
- }
-
-
-def deserialize_python_rng_state(rng_state_dict: dict[str, torch.Tensor]) -> None:
- """
- Restores the rng state for `random` from a dictionary produced by `serialize_python_rng_state()`.
- """
- py_state = (rng_state_dict["py_rng_version"].item(), tuple(rng_state_dict["py_rng_state"].tolist()), None)
- random.setstate(py_state)
-
-
-def serialize_numpy_rng_state() -> dict[str, torch.Tensor]:
- """
- Returns the rng state for `numpy` in the form of a flat dict[str, torch.Tensor] to be saved using
- `safetensors.save_file()` or `torch.save()`.
- """
- np_state = np.random.get_state()
- # Ensure no breaking changes from numpy
- assert np_state[0] == "MT19937"
- return {
- "np_rng_state_values": torch.tensor(np_state[1], dtype=torch.int64),
- "np_rng_state_index": torch.tensor([np_state[2]], dtype=torch.int64),
- "np_rng_has_gauss": torch.tensor([np_state[3]], dtype=torch.int64),
- "np_rng_cached_gaussian": torch.tensor([np_state[4]], dtype=torch.float32),
- }
-
-
-def deserialize_numpy_rng_state(rng_state_dict: dict[str, torch.Tensor]) -> None:
- """
- Restores the rng state for `numpy` from a dictionary produced by `serialize_numpy_rng_state()`.
- """
- np_state = (
- "MT19937",
- rng_state_dict["np_rng_state_values"].numpy(),
- rng_state_dict["np_rng_state_index"].item(),
- rng_state_dict["np_rng_has_gauss"].item(),
- rng_state_dict["np_rng_cached_gaussian"].item(),
- )
- np.random.set_state(np_state)
-
-
-def serialize_torch_rng_state() -> dict[str, torch.Tensor]:
- """
- Returns the rng state for `torch` in the form of a flat dict[str, torch.Tensor] to be saved using
- `safetensors.save_file()` or `torch.save()`.
- """
- torch_rng_state_dict = {"torch_rng_state": torch.get_rng_state()}
- if torch.cuda.is_available():
- torch_rng_state_dict["torch_cuda_rng_state"] = torch.cuda.get_rng_state()
- return torch_rng_state_dict
-
-
-def deserialize_torch_rng_state(rng_state_dict: dict[str, torch.Tensor]) -> None:
- """
- Restores the rng state for `torch` from a dictionary produced by `serialize_torch_rng_state()`.
- """
- torch.set_rng_state(rng_state_dict["torch_rng_state"])
- if torch.cuda.is_available() and "torch_cuda_rng_state" in rng_state_dict:
- torch.cuda.set_rng_state(rng_state_dict["torch_cuda_rng_state"])
-
-
-def serialize_rng_state() -> dict[str, torch.Tensor]:
- """
- Returns the rng state for `random`, `numpy`, and `torch`, in the form of a flat
-    dict[str, torch.Tensor] to be saved using `safetensors.save_file()` or `torch.save()`.
- """
- py_rng_state_dict = serialize_python_rng_state()
- np_rng_state_dict = serialize_numpy_rng_state()
- torch_rng_state_dict = serialize_torch_rng_state()
-
- return {
- **py_rng_state_dict,
- **np_rng_state_dict,
- **torch_rng_state_dict,
- }
-
-
-def deserialize_rng_state(rng_state_dict: dict[str, torch.Tensor]) -> None:
- """
- Restores the rng state for `random`, `numpy`, and `torch` from a dictionary produced by
- `serialize_rng_state()`.
- """
- py_rng_state_dict = {k: v for k, v in rng_state_dict.items() if k.startswith("py")}
- np_rng_state_dict = {k: v for k, v in rng_state_dict.items() if k.startswith("np")}
- torch_rng_state_dict = {k: v for k, v in rng_state_dict.items() if k.startswith("torch")}
-
- deserialize_python_rng_state(py_rng_state_dict)
- deserialize_numpy_rng_state(np_rng_state_dict)
- deserialize_torch_rng_state(torch_rng_state_dict)
-
-
-def save_rng_state(save_dir: Path) -> None:
- rng_state_dict = serialize_rng_state()
- flat_rng_state_dict = flatten_dict(rng_state_dict)
- save_file(flat_rng_state_dict, save_dir / RNG_STATE)
-
-
-def load_rng_state(save_dir: Path) -> None:
- flat_rng_state_dict = load_file(save_dir / RNG_STATE)
- rng_state_dict = unflatten_dict(flat_rng_state_dict)
- deserialize_rng_state(rng_state_dict)
-
-
-def get_rng_state() -> dict[str, Any]:
- """Get the random state for `random`, `numpy`, and `torch`."""
- random_state_dict = {
- "random_state": random.getstate(),
- "numpy_random_state": np.random.get_state(),
- "torch_random_state": torch.random.get_rng_state(),
- }
- if torch.cuda.is_available():
- random_state_dict["torch_cuda_random_state"] = torch.cuda.random.get_rng_state()
- return random_state_dict
-
-
-def set_rng_state(random_state_dict: dict[str, Any]):
- """Set the random state for `random`, `numpy`, and `torch`.
-
- Args:
- random_state_dict: A dictionary of the form returned by `get_rng_state`.
- """
- random.setstate(random_state_dict["random_state"])
- np.random.set_state(random_state_dict["numpy_random_state"])
- torch.random.set_rng_state(random_state_dict["torch_random_state"])
- if torch.cuda.is_available():
- torch.cuda.random.set_rng_state(random_state_dict["torch_cuda_random_state"])
-
-
-def set_seed(seed, accelerator: Callable | None = None) -> None:
- """Set seed for reproducibility."""
- random.seed(seed)
- np.random.seed(seed)
- torch.manual_seed(seed)
-
- if torch.cuda.is_available():
- torch.cuda.manual_seed_all(seed)
-
- if accelerator:
- from accelerate.utils import set_seed as _accelerate_set_seed
-
- _accelerate_set_seed(seed)
-
-
-@contextmanager
-def seeded_context(seed: int) -> Generator[None, None, None]:
- """Set the seed when entering a context, and restore the prior random state at exit.
-
- Example usage:
-
- ```
- a = random.random() # produces some random number
- with seeded_context(1337):
- b = random.random() # produces some other random number
-    c = random.random()  # produces yet another random number, the same as if `b` had never been drawn
- ```
- """
- random_state_dict = get_rng_state()
- set_seed(seed)
- yield None
- set_rng_state(random_state_dict)
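
A small usage sketch of the removed seeding helpers (import path as in the deleted module):

```python
import random

from lerobot.utils.random_utils import seeded_context, set_seed

set_seed(42)
a = random.random()
with seeded_context(1337):
    b = random.random()  # deterministic given seed 1337
c = random.random()  # unaffected by the draw of `b`: the prior RNG state is restored
```
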
diff --git a/lerobot/src/lerobot/utils/robot_utils.py b/lerobot/src/lerobot/utils/robot_utils.py
deleted file mode 100644
index 3fc18ac9c754364ff8cfb52116bd5ca4176909b6..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/robot_utils.py
+++ /dev/null
@@ -1,55 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import platform
-import time
-
-
-def precise_sleep(seconds: float, spin_threshold: float = 0.010, sleep_margin: float = 0.003):
- """
- Wait for `seconds` with better precision than time.sleep alone at the expense of more CPU usage.
-
- Parameters:
- - seconds: duration to wait
-    - spin_threshold: once the remaining time drops to this value or below, busy-spin instead of sleeping (seconds). Default 10 ms
-    - sleep_margin: when sleeping, stop this long before the deadline to avoid oversleeping (seconds). Default 3 ms
-
- Note:
- The default parameters are chosen to prioritize timing accuracy over CPU usage for the common 30 FPS use case.
- """
- if seconds <= 0:
- return
-
- system = platform.system()
- # On macOS and Windows the scheduler / sleep granularity can make
- # short sleeps inaccurate. Instead of burning CPU for the whole
- # duration, sleep for most of the time and spin for the final few
- # milliseconds to achieve good accuracy with much lower CPU usage.
- if system in ("Darwin", "Windows"):
- end_time = time.perf_counter() + seconds
- while True:
- remaining = end_time - time.perf_counter()
- if remaining <= 0:
- break
- # If there's more than a couple milliseconds left, sleep most
- # of the remaining time and leave a small margin for the final spin.
- if remaining > spin_threshold:
- # Sleep but avoid sleeping past the end by leaving a small margin.
- time.sleep(max(remaining - sleep_margin, 0))
- else:
- # Final short spin to hit precise timing without long sleeps.
- pass
- else:
- # On Linux time.sleep is accurate enough for most uses
- time.sleep(seconds)
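
A sketch of how `precise_sleep` would pace a fixed-rate loop; the loop body and rate are hypothetical:

```python
import time

from lerobot.utils.robot_utils import precise_sleep

FPS = 30
period = 1.0 / FPS

for _ in range(FPS):  # one second of a hypothetical control loop
    t_start = time.perf_counter()
    # ... read observation, run policy, send action ...
    precise_sleep(period - (time.perf_counter() - t_start))
```
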
diff --git a/lerobot/src/lerobot/utils/rotation.py b/lerobot/src/lerobot/utils/rotation.py
deleted file mode 100644
index 48d1ba5fa16307a0331463084e66f65c94151f78..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/rotation.py
+++ /dev/null
@@ -1,270 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-"""Custom rotation utilities to replace scipy.spatial.transform.Rotation."""
-
-import numpy as np
-
-
-class Rotation:
- """
- Custom rotation class that provides a subset of scipy.spatial.transform.Rotation functionality.
-
- Supports conversions between rotation vectors, rotation matrices, and quaternions.
- """
-
- def __init__(self, quat: np.ndarray) -> None:
- """Initialize rotation from quaternion [x, y, z, w]."""
- self._quat = np.asarray(quat, dtype=float)
- # Normalize quaternion
- norm = np.linalg.norm(self._quat)
- if norm > 0:
- self._quat = self._quat / norm
-
- @classmethod
- def from_rotvec(cls, rotvec: np.ndarray) -> "Rotation":
- """
- Create rotation from rotation vector using Rodrigues' formula.
-
- Args:
- rotvec: Rotation vector [x, y, z] where magnitude is angle in radians
-
- Returns:
- Rotation instance
- """
- rotvec = np.asarray(rotvec, dtype=float)
- angle = np.linalg.norm(rotvec)
-
- if angle < 1e-8:
- # For very small angles, use identity quaternion
- quat = np.array([0.0, 0.0, 0.0, 1.0])
- else:
- axis = rotvec / angle
- half_angle = angle / 2.0
- sin_half = np.sin(half_angle)
- cos_half = np.cos(half_angle)
-
- # Quaternion [x, y, z, w]
- quat = np.array([axis[0] * sin_half, axis[1] * sin_half, axis[2] * sin_half, cos_half])
-
- return cls(quat)
-
- @classmethod
- def from_matrix(cls, matrix: np.ndarray) -> "Rotation":
- """
- Create rotation from 3x3 rotation matrix.
-
- Args:
- matrix: 3x3 rotation matrix
-
- Returns:
- Rotation instance
- """
- matrix = np.asarray(matrix, dtype=float)
-
-        # Shepperd's method for converting a rotation matrix to a quaternion
- trace = np.trace(matrix)
-
- if trace > 0:
- s = np.sqrt(trace + 1.0) * 2 # s = 4 * qw
- qw = 0.25 * s
- qx = (matrix[2, 1] - matrix[1, 2]) / s
- qy = (matrix[0, 2] - matrix[2, 0]) / s
- qz = (matrix[1, 0] - matrix[0, 1]) / s
- elif matrix[0, 0] > matrix[1, 1] and matrix[0, 0] > matrix[2, 2]:
- s = np.sqrt(1.0 + matrix[0, 0] - matrix[1, 1] - matrix[2, 2]) * 2 # s = 4 * qx
- qw = (matrix[2, 1] - matrix[1, 2]) / s
- qx = 0.25 * s
- qy = (matrix[0, 1] + matrix[1, 0]) / s
- qz = (matrix[0, 2] + matrix[2, 0]) / s
- elif matrix[1, 1] > matrix[2, 2]:
- s = np.sqrt(1.0 + matrix[1, 1] - matrix[0, 0] - matrix[2, 2]) * 2 # s = 4 * qy
- qw = (matrix[0, 2] - matrix[2, 0]) / s
- qx = (matrix[0, 1] + matrix[1, 0]) / s
- qy = 0.25 * s
- qz = (matrix[1, 2] + matrix[2, 1]) / s
- else:
- s = np.sqrt(1.0 + matrix[2, 2] - matrix[0, 0] - matrix[1, 1]) * 2 # s = 4 * qz
- qw = (matrix[1, 0] - matrix[0, 1]) / s
- qx = (matrix[0, 2] + matrix[2, 0]) / s
- qy = (matrix[1, 2] + matrix[2, 1]) / s
- qz = 0.25 * s
-
- quat = np.array([qx, qy, qz, qw])
- return cls(quat)
-
- @classmethod
- def from_quat(cls, quat: np.ndarray) -> "Rotation":
- """
- Create rotation from quaternion.
-
- Args:
-            quat: Quaternion in [x, y, z, w] (scalar-last) format
-
- Returns:
- Rotation instance
- """
- return cls(quat)
-
- def as_matrix(self) -> np.ndarray:
- """
- Convert rotation to 3x3 rotation matrix.
-
- Returns:
- 3x3 rotation matrix
- """
- qx, qy, qz, qw = self._quat
-
- # Compute rotation matrix from quaternion
- return np.array(
- [
- [1 - 2 * (qy * qy + qz * qz), 2 * (qx * qy - qz * qw), 2 * (qx * qz + qy * qw)],
- [2 * (qx * qy + qz * qw), 1 - 2 * (qx * qx + qz * qz), 2 * (qy * qz - qx * qw)],
- [2 * (qx * qz - qy * qw), 2 * (qy * qz + qx * qw), 1 - 2 * (qx * qx + qy * qy)],
- ],
- dtype=float,
- )
-
- def as_rotvec(self) -> np.ndarray:
- """
- Convert rotation to rotation vector.
-
- Returns:
- Rotation vector [x, y, z] where magnitude is angle in radians
- """
- qx, qy, qz, qw = self._quat
-
- # Ensure qw is positive for unique representation
- if qw < 0:
- qx, qy, qz, qw = -qx, -qy, -qz, -qw
-
- # Compute angle and axis
- angle = 2.0 * np.arccos(np.clip(abs(qw), 0.0, 1.0))
- sin_half_angle = np.sqrt(1.0 - qw * qw)
-
- if sin_half_angle < 1e-8:
- # For very small angles, use linearization: rotvec ≈ 2 * [qx, qy, qz]
- return 2.0 * np.array([qx, qy, qz])
-
- # Extract axis and scale by angle
- axis = np.array([qx, qy, qz]) / sin_half_angle
- return angle * axis
-
- def as_quat(self) -> np.ndarray:
- """
- Get quaternion representation.
-
- Returns:
- Quaternion [x, y, z, w]
- """
- return self._quat.copy()
-
- def apply(self, vectors: np.ndarray, inverse: bool = False) -> np.ndarray:
- """
- Apply this rotation to a set of vectors.
-
- This is equivalent to applying the rotation matrix to the vectors:
- self.as_matrix() @ vectors (or self.as_matrix().T @ vectors if inverse=True).
-
- Args:
- vectors: Array of shape (3,) or (N, 3) representing vectors in 3D space
- inverse: If True, apply the inverse of the rotation. Default is False.
-
- Returns:
- Rotated vectors with shape:
- - (3,) if input was single vector with shape (3,)
- - (N, 3) in all other cases
- """
- vectors = np.asarray(vectors, dtype=float)
- original_shape = vectors.shape
-
- # Handle single vector case - ensure it's 2D for matrix multiplication
- if vectors.ndim == 1:
- if len(vectors) != 3:
- raise ValueError("Single vector must have length 3")
- vectors = vectors.reshape(1, 3)
- single_vector = True
- elif vectors.ndim == 2:
- if vectors.shape[1] != 3:
- raise ValueError("Vectors must have shape (N, 3)")
- single_vector = False
- else:
- raise ValueError("Vectors must be 1D or 2D array")
-
- # Get rotation matrix
- rotation_matrix = self.as_matrix()
-
- # Apply inverse if requested (transpose for orthogonal rotation matrices)
- if inverse:
- rotation_matrix = rotation_matrix.T
-
- # Apply rotation: (N, 3) @ (3, 3).T -> (N, 3)
- rotated_vectors = vectors @ rotation_matrix.T
-
- # Return original shape for single vector case
- if single_vector and original_shape == (3,):
- return rotated_vectors.flatten()
-
- return rotated_vectors
-
- def inv(self) -> "Rotation":
- """
- Invert this rotation.
-
- Composition of a rotation with its inverse results in an identity transformation.
-
- Returns:
- Rotation instance containing the inverse of this rotation
- """
- qx, qy, qz, qw = self._quat
-
- # For a unit quaternion, the inverse is the conjugate: [-x, -y, -z, w]
- inverse_quat = np.array([-qx, -qy, -qz, qw])
-
- return Rotation(inverse_quat)
-
- def __mul__(self, other: "Rotation") -> "Rotation":
- """
- Compose this rotation with another rotation using the * operator.
-
- The composition `r2 * r1` means "apply r1 first, then r2".
- This is equivalent to applying rotation matrices: r2.as_matrix() @ r1.as_matrix()
-
- Args:
- other: Another Rotation instance to compose with
-
- Returns:
- Rotation instance representing the composition of rotations
- """
- if not isinstance(other, Rotation):
- return NotImplemented
-
- # Get quaternions [x, y, z, w]
- x1, y1, z1, w1 = other._quat # Apply first
- x2, y2, z2, w2 = self._quat # Apply second
-
- # Quaternion multiplication: q2 * q1 (apply q1 first, then q2)
- composed_quat = np.array(
- [
- w2 * x1 + x2 * w1 + y2 * z1 - z2 * y1, # x component
- w2 * y1 - x2 * z1 + y2 * w1 + z2 * x1, # y component
- w2 * z1 + x2 * y1 - y2 * x1 + z2 * w1, # z component
- w2 * w1 - x2 * x1 - y2 * y1 - z2 * z1, # w component
- ]
- )
-
- return Rotation(composed_quat)
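
A quick sanity-check sketch of the removed `Rotation` class, exercising the conversions and composition above:

```python
import numpy as np

from lerobot.utils.rotation import Rotation

# A 90-degree rotation about the z-axis, built from a rotation vector.
r = Rotation.from_rotvec([0.0, 0.0, np.pi / 2])
print(r.apply([1.0, 0.0, 0.0]))  # -> approximately [0, 1, 0]

# Composing a rotation with its inverse yields (numerically) the identity.
assert np.allclose((r * r.inv()).as_matrix(), np.eye(3))

# Round-tripping through the matrix representation preserves the quaternion.
assert np.allclose(r.as_quat(), Rotation.from_matrix(r.as_matrix()).as_quat())
```
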
diff --git a/lerobot/src/lerobot/utils/train_utils.py b/lerobot/src/lerobot/utils/train_utils.py
deleted file mode 100644
index faf4af837eb0c2b75eaef041379c3c6b1d6ba839..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/train_utils.py
+++ /dev/null
@@ -1,169 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-from pathlib import Path
-
-from torch.optim import Optimizer
-from torch.optim.lr_scheduler import LRScheduler
-
-from lerobot.configs.train import TrainPipelineConfig
-from lerobot.datasets.utils import load_json, write_json
-from lerobot.optim.optimizers import load_optimizer_state, save_optimizer_state
-from lerobot.optim.schedulers import load_scheduler_state, save_scheduler_state
-from lerobot.policies.pretrained import PreTrainedPolicy
-from lerobot.processor import PolicyProcessorPipeline
-from lerobot.utils.constants import (
- CHECKPOINTS_DIR,
- LAST_CHECKPOINT_LINK,
- PRETRAINED_MODEL_DIR,
- TRAINING_STATE_DIR,
- TRAINING_STEP,
-)
-from lerobot.utils.random_utils import load_rng_state, save_rng_state
-
-
-def get_step_identifier(step: int, total_steps: int) -> str:
- num_digits = max(6, len(str(total_steps)))
- return f"{step:0{num_digits}d}"
-
-
-def get_step_checkpoint_dir(output_dir: Path, total_steps: int, step: int) -> Path:
- """Returns the checkpoint sub-directory corresponding to the step number."""
- step_identifier = get_step_identifier(step, total_steps)
- return output_dir / CHECKPOINTS_DIR / step_identifier
-
-
-def save_training_step(step: int, save_dir: Path) -> None:
- write_json({"step": step}, save_dir / TRAINING_STEP)
-
-
-def load_training_step(save_dir: Path) -> int:
- training_step = load_json(save_dir / TRAINING_STEP)
- return training_step["step"]
-
-
-def update_last_checkpoint(checkpoint_dir: Path) -> None:
- last_checkpoint_dir = checkpoint_dir.parent / LAST_CHECKPOINT_LINK
- if last_checkpoint_dir.is_symlink():
- last_checkpoint_dir.unlink()
- relative_target = checkpoint_dir.relative_to(checkpoint_dir.parent)
- last_checkpoint_dir.symlink_to(relative_target)
-
-
-def save_checkpoint(
- checkpoint_dir: Path,
- step: int,
- cfg: TrainPipelineConfig,
- policy: PreTrainedPolicy,
- optimizer: Optimizer,
- scheduler: LRScheduler | None = None,
- preprocessor: PolicyProcessorPipeline | None = None,
- postprocessor: PolicyProcessorPipeline | None = None,
-) -> None:
- """This function creates the following directory structure:
-
- 005000/ # training step at checkpoint
- ├── pretrained_model/
- │ ├── config.json # policy config
- │ ├── model.safetensors # policy weights
- │ ├── train_config.json # train config
- │ ├── processor.json # processor config (if preprocessor provided)
- │ └── step_*.safetensors # processor state files (if any)
- └── training_state/
- ├── optimizer_param_groups.json # optimizer param groups
- ├── optimizer_state.safetensors # optimizer state
- ├── rng_state.safetensors # rng states
- ├── scheduler_state.json # scheduler state
- └── training_step.json # training step
-
-    Args:
-        checkpoint_dir (Path): The checkpoint directory to write to.
-        step (int): The training step at that checkpoint.
-        cfg (TrainPipelineConfig): The training config used for this run.
-        policy (PreTrainedPolicy): The policy to save.
-        optimizer (Optimizer): The optimizer to save the state from.
-        scheduler (LRScheduler | None, optional): The scheduler to save the state from. Defaults to None.
-        preprocessor (PolicyProcessorPipeline | None, optional): The preprocessing pipeline to save. Defaults to None.
-        postprocessor (PolicyProcessorPipeline | None, optional): The postprocessing pipeline to save. Defaults to None.
- """
- pretrained_dir = checkpoint_dir / PRETRAINED_MODEL_DIR
- policy.save_pretrained(pretrained_dir)
- cfg.save_pretrained(pretrained_dir)
- if cfg.peft is not None:
- # When using PEFT, policy.save_pretrained will only write the adapter weights + config, not the
- # policy config which we need for loading the model. In this case we'll write it ourselves.
- policy.config.save_pretrained(pretrained_dir)
- if preprocessor is not None:
- preprocessor.save_pretrained(pretrained_dir)
- if postprocessor is not None:
- postprocessor.save_pretrained(pretrained_dir)
- save_training_state(checkpoint_dir, step, optimizer, scheduler)
-
-
-def save_training_state(
- checkpoint_dir: Path,
- train_step: int,
- optimizer: Optimizer | None = None,
- scheduler: LRScheduler | None = None,
-) -> None:
- """
- Saves the training step, optimizer state, scheduler state, and rng state.
-
- Args:
-        checkpoint_dir (Path): The checkpoint directory; the training state is saved in its 'training_state' subdirectory.
- train_step (int): Current training step.
- optimizer (Optimizer | None, optional): The optimizer from which to save the state_dict.
- Defaults to None.
- scheduler (LRScheduler | None, optional): The scheduler from which to save the state_dict.
- Defaults to None.
- """
- save_dir = checkpoint_dir / TRAINING_STATE_DIR
- save_dir.mkdir(parents=True, exist_ok=True)
- save_training_step(train_step, save_dir)
- save_rng_state(save_dir)
- if optimizer is not None:
- save_optimizer_state(optimizer, save_dir)
- if scheduler is not None:
- save_scheduler_state(scheduler, save_dir)
-
-
-def load_training_state(
- checkpoint_dir: Path, optimizer: Optimizer, scheduler: LRScheduler | None
-) -> tuple[int, Optimizer, LRScheduler | None]:
- """
- Loads the training step, optimizer state, scheduler state, and rng state.
- This is used to resume a training run.
-
- Args:
- checkpoint_dir (Path): The checkpoint directory. Should contain a 'training_state' dir.
- optimizer (Optimizer): The optimizer to load the state_dict to.
- scheduler (LRScheduler | None): The scheduler to load the state_dict to (can be None).
-
- Raises:
- NotADirectoryError: If 'checkpoint_dir' doesn't contain a 'training_state' dir
-
- Returns:
- tuple[int, Optimizer, LRScheduler | None]: training step, optimizer and scheduler with their
- state_dict loaded.
- """
- training_state_dir = checkpoint_dir / TRAINING_STATE_DIR
- if not training_state_dir.is_dir():
- raise NotADirectoryError(training_state_dir)
-
- load_rng_state(training_state_dir)
- step = load_training_step(training_state_dir)
- optimizer = load_optimizer_state(optimizer, training_state_dir)
- if scheduler is not None:
- scheduler = load_scheduler_state(scheduler, training_state_dir)
-
- return step, optimizer, scheduler
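
A hypothetical resume flow using the removed helpers; the output directory, model, and optimizer are placeholders, and a checkpoint is assumed to exist at the computed path:

```python
from pathlib import Path

import torch

from lerobot.utils.train_utils import get_step_checkpoint_dir, load_training_state

output_dir = Path("outputs/train/my_run")  # placeholder run directory
checkpoint_dir = get_step_checkpoint_dir(output_dir, total_steps=100_000, step=5_000)

model = torch.nn.Linear(8, 8)  # stands in for a real policy's parameters
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
step, optimizer, scheduler = load_training_state(checkpoint_dir, optimizer, scheduler=None)
```
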
diff --git a/lerobot/src/lerobot/utils/transition.py b/lerobot/src/lerobot/utils/transition.py
deleted file mode 100644
index f09030003ba50b639cadc99ca44bbd6c4df1f3b3..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/transition.py
+++ /dev/null
@@ -1,87 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2025 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-from typing import TypedDict
-
-import torch
-
-from lerobot.utils.constants import ACTION
-
-
-class Transition(TypedDict):
- state: dict[str, torch.Tensor]
- action: torch.Tensor
- reward: float
- next_state: dict[str, torch.Tensor]
- done: bool
- truncated: bool
-    complementary_info: dict[str, torch.Tensor | float | int] | None  # optional; TypedDict fields cannot declare defaults
-
-
-def move_transition_to_device(transition: Transition, device: str = "cpu") -> Transition:
- device = torch.device(device)
- non_blocking = device.type == "cuda"
-
- # Move state tensors to device
- transition["state"] = {
- key: val.to(device, non_blocking=non_blocking) for key, val in transition["state"].items()
- }
-
- # Move action to device
- transition[ACTION] = transition[ACTION].to(device, non_blocking=non_blocking)
-
- # Move reward and done if they are tensors
- if isinstance(transition["reward"], torch.Tensor):
- transition["reward"] = transition["reward"].to(device, non_blocking=non_blocking)
-
- if isinstance(transition["done"], torch.Tensor):
- transition["done"] = transition["done"].to(device, non_blocking=non_blocking)
-
- if isinstance(transition["truncated"], torch.Tensor):
- transition["truncated"] = transition["truncated"].to(device, non_blocking=non_blocking)
-
- # Move next_state tensors to device
- transition["next_state"] = {
- key: val.to(device, non_blocking=non_blocking) for key, val in transition["next_state"].items()
- }
-
- # Move complementary_info tensors if present
- if transition.get("complementary_info") is not None:
- for key, val in transition["complementary_info"].items():
- if isinstance(val, torch.Tensor):
- transition["complementary_info"][key] = val.to(device, non_blocking=non_blocking)
-            elif isinstance(val, (int, float, bool)):
- transition["complementary_info"][key] = torch.tensor(val, device=device)
- else:
- raise ValueError(f"Unsupported type {type(val)} for complementary_info[{key}]")
- return transition
-
-
-def move_state_dict_to_device(state_dict, device="cpu"):
- """
- Recursively move all tensors in a (potentially) nested
- dict/list/tuple structure to the CPU.
- """
- if isinstance(state_dict, torch.Tensor):
- return state_dict.to(device)
- elif isinstance(state_dict, dict):
- return {k: move_state_dict_to_device(v, device=device) for k, v in state_dict.items()}
- elif isinstance(state_dict, list):
- return [move_state_dict_to_device(v, device=device) for v in state_dict]
- elif isinstance(state_dict, tuple):
- return tuple(move_state_dict_to_device(v, device=device) for v in state_dict)
- else:
- return state_dict
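
A sketch of building a `Transition` and moving it to the available device; the observation key is hypothetical, and the `"action"` key is assumed to match the `ACTION` constant used above:

```python
import torch

from lerobot.utils.transition import Transition, move_transition_to_device

transition: Transition = {
    "state": {"observation.state": torch.zeros(6)},
    "action": torch.zeros(6),  # assumed to be the value of the ACTION constant
    "reward": 0.0,
    "next_state": {"observation.state": torch.zeros(6)},
    "done": False,
    "truncated": False,
    "complementary_info": None,
}
device = "cuda" if torch.cuda.is_available() else "cpu"
transition = move_transition_to_device(transition, device=device)
```
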
diff --git a/lerobot/src/lerobot/utils/utils.py b/lerobot/src/lerobot/utils/utils.py
deleted file mode 100644
index 496a0ee337e80831685076ef847df26b6182b247..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/utils.py
+++ /dev/null
@@ -1,410 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-import logging
-import os
-import platform
-import select
-import subprocess
-import sys
-import time
-from copy import copy, deepcopy
-from datetime import datetime
-from pathlib import Path
-from statistics import mean
-
-import numpy as np
-import torch
-from accelerate import Accelerator
-from datasets.utils.logging import disable_progress_bar, enable_progress_bar
-
-
-def inside_slurm():
- """Check whether the python process was launched through slurm"""
- # TODO(rcadene): return False for interactive mode `--pty bash`
- return "SLURM_JOB_ID" in os.environ
-
-
-def auto_select_torch_device() -> torch.device:
- """Tries to select automatically a torch device."""
- if torch.cuda.is_available():
- logging.info("Cuda backend detected, using cuda.")
- return torch.device("cuda")
- elif torch.backends.mps.is_available():
- logging.info("Metal backend detected, using mps.")
- return torch.device("mps")
- elif torch.xpu.is_available():
- logging.info("Intel XPU backend detected, using xpu.")
- return torch.device("xpu")
- else:
- logging.warning("No accelerated backend detected. Using default cpu, this will be slow.")
- return torch.device("cpu")
-
-
-# TODO(Steven): Remove log. log shouldn't be an argument, this should be handled by the logger level
-def get_safe_torch_device(try_device: str, log: bool = False) -> torch.device:
- """Given a string, return a torch.device with checks on whether the device is available."""
- try_device = str(try_device)
- if try_device.startswith("cuda"):
- assert torch.cuda.is_available()
- device = torch.device(try_device)
- elif try_device == "mps":
- assert torch.backends.mps.is_available()
- device = torch.device("mps")
- elif try_device == "xpu":
- assert torch.xpu.is_available()
- device = torch.device("xpu")
- elif try_device == "cpu":
- device = torch.device("cpu")
- if log:
- logging.warning("Using CPU, this will be slow.")
- else:
- device = torch.device(try_device)
- if log:
- logging.warning(f"Using custom {try_device} device.")
- return device
-
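
A short sketch of the device-selection helpers above (import path as in the deleted module):

```python
import torch

from lerobot.utils.utils import auto_select_torch_device, get_safe_torch_device

device = auto_select_torch_device()  # prefers cuda, then mps, then xpu, else cpu
device = get_safe_torch_device("cuda" if torch.cuda.is_available() else "cpu", log=True)
```
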
-
-def get_safe_dtype(dtype: torch.dtype, device: str | torch.device):
- """
- mps is currently not compatible with float64
- """
- if isinstance(device, torch.device):
- device = device.type
- if device == "mps" and dtype == torch.float64:
- return torch.float32
- if device == "xpu" and dtype == torch.float64:
- if hasattr(torch.xpu, "get_device_capability"):
- device_capability = torch.xpu.get_device_capability()
- # NOTE: Some Intel XPU devices do not support double precision (FP64).
- # The `has_fp64` flag is returned by `torch.xpu.get_device_capability()`
- # when available; if False, we fall back to float32 for compatibility.
- if not device_capability.get("has_fp64", False):
- logging.warning(f"Device {device} does not support float64, using float32 instead.")
- return torch.float32
- else:
- logging.warning(
- f"Device {device} capability check failed. Assuming no support for float64, using float32 instead."
- )
- return torch.float32
- return dtype
- else:
- return dtype
-
-
-def is_torch_device_available(try_device: str) -> bool:
- try_device = str(try_device) # Ensure try_device is a string
- if try_device.startswith("cuda"):
- return torch.cuda.is_available()
- elif try_device == "mps":
- return torch.backends.mps.is_available()
- elif try_device == "xpu":
- return torch.xpu.is_available()
- elif try_device == "cpu":
- return True
- else:
- raise ValueError(f"Unknown device {try_device}. Supported devices are: cuda, mps, xpu or cpu.")
-
-
-def is_amp_available(device: str):
- if device in ["cuda", "xpu", "cpu"]:
- return True
- elif device == "mps":
- return False
- else:
-        raise ValueError(f"Unknown device '{device}'.")
-
-
-def init_logging(
- log_file: Path | None = None,
- display_pid: bool = False,
- console_level: str = "INFO",
- file_level: str = "DEBUG",
- accelerator: Accelerator | None = None,
-):
- """Initialize logging configuration for LeRobot.
-
- In multi-GPU training, only the main process logs to console to avoid duplicate output.
- Non-main processes have console logging suppressed but can still log to file.
-
- Args:
- log_file: Optional file path to write logs to
- display_pid: Include process ID in log messages (useful for debugging multi-process)
- console_level: Logging level for console output
- file_level: Logging level for file output
- accelerator: Optional Accelerator instance (for multi-GPU detection)
- """
-
- def custom_format(record: logging.LogRecord) -> str:
- dt = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
- fnameline = f"{record.pathname}:{record.lineno}"
- pid_str = f"[PID: {os.getpid()}] " if display_pid else ""
- return f"{record.levelname} {pid_str}{dt} {fnameline[-15:]:>15} {record.getMessage()}"
-
- formatter = logging.Formatter()
- formatter.format = custom_format
-
- logger = logging.getLogger()
- logger.setLevel(logging.NOTSET)
-
- # Clear any existing handlers
- logger.handlers.clear()
-
- # Determine if this is a non-main process in distributed training
- is_main_process = accelerator.is_main_process if accelerator is not None else True
-
- # Console logging (main process only)
- if is_main_process:
- console_handler = logging.StreamHandler()
- console_handler.setFormatter(formatter)
- console_handler.setLevel(console_level.upper())
- logger.addHandler(console_handler)
- else:
- # Suppress console output for non-main processes
- logger.addHandler(logging.NullHandler())
- logger.setLevel(logging.ERROR)
-
- if log_file is not None:
- file_handler = logging.FileHandler(log_file)
- file_handler.setFormatter(formatter)
- file_handler.setLevel(file_level.upper())
- logger.addHandler(file_handler)
-
-
-def format_big_number(num, precision=0):
- suffixes = ["", "K", "M", "B", "T", "Q"]
- divisor = 1000.0
-
- for suffix in suffixes:
- if abs(num) < divisor:
- return f"{num:.{precision}f}{suffix}"
- num /= divisor
-
-    # Beyond the largest suffix: undo the final division and fall back to "Q".
-    return f"{num * divisor:.{precision}f}{suffixes[-1]}"
-
-
-def say(text: str, blocking: bool = False):
- system = platform.system()
-
- if system == "Darwin":
- cmd = ["say", text]
-
- elif system == "Linux":
- cmd = ["spd-say", text]
- if blocking:
- cmd.append("--wait")
-
- elif system == "Windows":
- cmd = [
- "PowerShell",
- "-Command",
- "Add-Type -AssemblyName System.Speech; "
- f"(New-Object System.Speech.Synthesis.SpeechSynthesizer).Speak('{text}')",
- ]
-
- else:
- raise RuntimeError("Unsupported operating system for text-to-speech.")
-
- if blocking:
- subprocess.run(cmd, check=True)
- else:
- subprocess.Popen(cmd, creationflags=subprocess.CREATE_NO_WINDOW if system == "Windows" else 0)
-
-
-def log_say(text: str, play_sounds: bool = True, blocking: bool = False):
- logging.info(text)
-
- if play_sounds:
- say(text, blocking)
-
-
-def get_channel_first_image_shape(image_shape: tuple) -> tuple:
- shape = copy(image_shape)
- if shape[2] < shape[0] and shape[2] < shape[1]: # (h, w, c) -> (c, h, w)
- shape = (shape[2], shape[0], shape[1])
- elif not (shape[0] < shape[1] and shape[0] < shape[2]):
- raise ValueError(image_shape)
-
- return shape
-
-
-def has_method(cls: object, method_name: str) -> bool:
- return hasattr(cls, method_name) and callable(getattr(cls, method_name))
-
-
-def is_valid_numpy_dtype_string(dtype_str: str) -> bool:
- """
- Return True if a given string can be converted to a numpy dtype.
- """
- try:
- # Attempt to convert the string to a numpy dtype
- np.dtype(dtype_str)
- return True
- except TypeError:
- # If a TypeError is raised, the string is not a valid dtype
- return False
-
-
-def enter_pressed() -> bool:
- if platform.system() == "Windows":
- import msvcrt
-
- if msvcrt.kbhit():
- key = msvcrt.getch()
- return key in (b"\r", b"\n") # enter key
- return False
- else:
-        return bool(select.select([sys.stdin], [], [], 0)[0]) and sys.stdin.readline().strip() == ""
-
-
-def move_cursor_up(lines):
- """Move the cursor up by a specified number of lines."""
- print(f"\033[{lines}A", end="")
-
-
-def get_elapsed_time_in_days_hours_minutes_seconds(elapsed_time_s: float):
- days = int(elapsed_time_s // (24 * 3600))
- elapsed_time_s %= 24 * 3600
- hours = int(elapsed_time_s // 3600)
- elapsed_time_s %= 3600
- minutes = int(elapsed_time_s // 60)
- seconds = elapsed_time_s % 60
- return days, hours, minutes, seconds
-
-
-class SuppressProgressBars:
- """
- Context manager to suppress progress bars.
-
- Example
- --------
- ```python
- with SuppressProgressBars():
- # Code that would normally show progress bars
- ```
- """
-
- def __enter__(self):
- disable_progress_bar()
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- enable_progress_bar()
-
-
-class TimerManager:
- """
- Lightweight utility to measure elapsed time.
-
- Examples
- --------
- ```python
- # Example 1: Using context manager
- timer = TimerManager("Policy", log=False)
- for _ in range(3):
- with timer:
- time.sleep(0.01)
-    print(timer.last, timer.fps_avg, timer.percentile(90))  # approximately: 0.01 100.0 0.01
- ```
-
- ```python
- # Example 2: Using start/stop methods
- timer = TimerManager("Policy", log=False)
- timer.start()
- time.sleep(0.01)
- timer.stop()
-    print(timer.last, timer.fps_avg, timer.percentile(90))  # approximately: 0.01 100.0 0.01
- ```
- """
-
- def __init__(
- self,
- label: str = "Elapsed-time",
- log: bool = True,
- logger: logging.Logger | None = None,
- ):
- self.label = label
- self.log = log
- self.logger = logger
- self._start: float | None = None
- self._history: list[float] = []
-
- def __enter__(self):
- return self.start()
-
- def __exit__(self, exc_type, exc_val, exc_tb):
- self.stop()
-
- def start(self):
- self._start = time.perf_counter()
- return self
-
- def stop(self) -> float:
- if self._start is None:
- raise RuntimeError("Timer was never started.")
- elapsed = time.perf_counter() - self._start
- self._history.append(elapsed)
- self._start = None
- if self.log:
- if self.logger is not None:
- self.logger.info(f"{self.label}: {elapsed:.6f} s")
- else:
- logging.info(f"{self.label}: {elapsed:.6f} s")
- return elapsed
-
- def reset(self):
- self._history.clear()
-
- @property
- def last(self) -> float:
- return self._history[-1] if self._history else 0.0
-
- @property
- def avg(self) -> float:
- return mean(self._history) if self._history else 0.0
-
- @property
- def total(self) -> float:
- return sum(self._history)
-
- @property
- def count(self) -> int:
- return len(self._history)
-
- @property
- def history(self) -> list[float]:
- return deepcopy(self._history)
-
- @property
- def fps_last(self) -> float:
- return 0.0 if self.last == 0 else 1.0 / self.last
-
- @property
- def fps_avg(self) -> float:
- return 0.0 if self.avg == 0 else 1.0 / self.avg
-
- def percentile(self, p: float) -> float:
- """
- Return the p-th percentile of recorded times.
- """
- if not self._history:
- return 0.0
- return float(np.percentile(self._history, p))
-
- def fps_percentile(self, p: float) -> float:
- """
- FPS corresponding to the p-th percentile time.
- """
- val = self.percentile(p)
- return 0.0 if val == 0 else 1.0 / val
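
A usage sketch of the removed `TimerManager`, timing a stand-in workload:

```python
import time

from lerobot.utils.utils import TimerManager

timer = TimerManager(label="policy_inference", log=False)
for _ in range(5):
    with timer:
        time.sleep(0.01)  # stand-in for a policy forward pass

print(f"avg={timer.avg:.4f}s  fps={timer.fps_avg:.1f}  p90={timer.percentile(90):.4f}s")
```
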
diff --git a/lerobot/src/lerobot/utils/visualization_utils.py b/lerobot/src/lerobot/utils/visualization_utils.py
deleted file mode 100644
index 182623fe4eaaaf164e7389b2e3bfa0ea25c7feab..0000000000000000000000000000000000000000
--- a/lerobot/src/lerobot/utils/visualization_utils.py
+++ /dev/null
@@ -1,112 +0,0 @@
-# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
-#
-# Licensed under the Apache License, Version 2.0 (the "License");
-# you may not use this file except in compliance with the License.
-# You may obtain a copy of the License at
-#
-# http://www.apache.org/licenses/LICENSE-2.0
-#
-# Unless required by applicable law or agreed to in writing, software
-# distributed under the License is distributed on an "AS IS" BASIS,
-# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-# See the License for the specific language governing permissions and
-# limitations under the License.
-
-import numbers
-import os
-
-import numpy as np
-import rerun as rr
-
-from lerobot.processor import RobotAction, RobotObservation
-
-from .constants import ACTION, ACTION_PREFIX, OBS_PREFIX, OBS_STR
-
-
-def init_rerun(
- session_name: str = "lerobot_control_loop", ip: str | None = None, port: int | None = None
-) -> None:
- """
- Initializes the Rerun SDK for visualizing the control loop.
-
- Args:
- session_name: Name of the Rerun session.
- ip: Optional IP for connecting to a Rerun server.
- port: Optional port for connecting to a Rerun server.
- """
-    # Respect an externally set flush size; otherwise default to 8 kB.
-    flush_num_bytes = os.getenv("RERUN_FLUSH_NUM_BYTES", "8000")
-    os.environ["RERUN_FLUSH_NUM_BYTES"] = flush_num_bytes
- rr.init(session_name)
- memory_limit = os.getenv("LEROBOT_RERUN_MEMORY_LIMIT", "10%")
- if ip and port:
- rr.connect_grpc(url=f"rerun+http://{ip}:{port}/proxy")
- else:
- rr.spawn(memory_limit=memory_limit)
-
-
-def _is_scalar(x):
- return isinstance(x, (float | numbers.Real | np.integer | np.floating)) or (
- isinstance(x, np.ndarray) and x.ndim == 0
- )
-
-
-def log_rerun_data(
- observation: RobotObservation | None = None,
- action: RobotAction | None = None,
- compress_images: bool = False,
-) -> None:
- """
- Logs observation and action data to Rerun for real-time visualization.
-
- This function iterates through the provided observation and action dictionaries and sends their contents
- to the Rerun viewer. It handles different data types appropriately:
-    - Scalar values (floats, ints) are logged as `rr.Scalars`.
- - 3D NumPy arrays that resemble images (e.g., with 1, 3, or 4 channels first) are transposed
- from CHW to HWC format, (optionally) compressed to JPEG and logged as `rr.Image` or `rr.EncodedImage`.
- - 1D NumPy arrays are logged as a series of individual scalars, with each element indexed.
- - Other multi-dimensional arrays are flattened and logged as individual scalars.
-
- Keys are automatically namespaced with "observation." or "action." if not already present.
-
- Args:
- observation: An optional dictionary containing observation data to log.
- action: An optional dictionary containing action data to log.
-        compress_images: Whether to JPEG-compress images before logging, trading CPU time and image quality for lower bandwidth and memory use.
- """
- if observation:
- for k, v in observation.items():
- if v is None:
- continue
- key = k if str(k).startswith(OBS_PREFIX) else f"{OBS_STR}.{k}"
-
- if _is_scalar(v):
- rr.log(key, rr.Scalars(float(v)))
- elif isinstance(v, np.ndarray):
- arr = v
- # Convert CHW -> HWC when needed
- if arr.ndim == 3 and arr.shape[0] in (1, 3, 4) and arr.shape[-1] not in (1, 3, 4):
- arr = np.transpose(arr, (1, 2, 0))
- if arr.ndim == 1:
- for i, vi in enumerate(arr):
- rr.log(f"{key}_{i}", rr.Scalars(float(vi)))
- else:
- img_entity = rr.Image(arr).compress() if compress_images else rr.Image(arr)
- rr.log(key, entity=img_entity, static=True)
-
- if action:
- for k, v in action.items():
- if v is None:
- continue
- key = k if str(k).startswith(ACTION_PREFIX) else f"{ACTION}.{k}"
-
- if _is_scalar(v):
- rr.log(key, rr.Scalars(float(v)))
- elif isinstance(v, np.ndarray):
- if v.ndim == 1:
- for i, vi in enumerate(v):
- rr.log(f"{key}_{i}", rr.Scalars(float(vi)))
- else:
- # Fall back to flattening higher-dimensional arrays
- flat = v.flatten()
- for i, vi in enumerate(flat):
- rr.log(f"{key}_{i}", rr.Scalars(float(vi)))
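
Finally, a sketch of the removed visualization entry points; the session name and data keys are hypothetical, and `init_rerun` spawns a local viewer when no server address is given:

```python
import numpy as np

from lerobot.utils.visualization_utils import init_rerun, log_rerun_data

init_rerun(session_name="demo_control_loop")

observation = {
    "front": np.zeros((3, 480, 640), dtype=np.uint8),  # CHW image, logged as an image
    "joint_pos": np.zeros(6, dtype=np.float32),  # 1D array, logged per-element
}
action = {"joint_target": np.zeros(6, dtype=np.float32)}
log_rerun_data(observation=observation, action=action)
```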