Add model card for migrated model

README.md CHANGED

@@ -1,195 +1,62 @@
---
library_name: lerobot
tags:
- lerobot
- safetensors
pipeline_tag: robotics
---
# RobotProcessor

## Overview

RobotProcessor is a composable, debuggable post-processing pipeline for robot transitions in the LeRobot framework. It orchestrates an ordered collection of small, functional transforms (steps) that are executed left-to-right on each incoming `EnvTransition`.
## Architecture

The RobotProcessor provides a modular architecture for processing robot environment transitions through a sequence of composable steps. Each step is a callable that accepts a full `EnvTransition` tuple and returns a potentially modified tuple of the same structure.

### EnvTransition Structure

An `EnvTransition` is a 7-tuple containing:

1. **observation**: Current state observation
2. **action**: Action taken (can be `None`)
3. **reward**: Reward received (`float` or `None`)
4. **done**: Episode termination flag (`bool` or `None`)
5. **truncated**: Episode truncation flag (`bool` or `None`)
6. **info**: Additional information dictionary
7. **complementary_data**: Extra data dictionary
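To make the field ordering concrete, here is a plain-Python sketch of packing and unpacking such a tuple; `make_transition` is a hypothetical helper for illustration, not part of the lerobot API:

```python
from typing import Any, Optional

# Hypothetical helper (not part of lerobot): packs the seven
# EnvTransition fields in the canonical order listed above.
def make_transition(
    observation: Any,
    action: Any = None,
    reward: Optional[float] = None,
    done: Optional[bool] = None,
    truncated: Optional[bool] = None,
    info: Optional[dict] = None,
    complementary_data: Optional[dict] = None,
) -> tuple:
    return (
        observation,
        action,
        reward,
        done,
        truncated,
        info if info is not None else {},
        complementary_data if complementary_data is not None else {},
    )

transition = make_transition({"x": 1.0}, reward=0.5)
obs, action, reward, done, truncated, info, comp_data = transition
```

Unpacking by position, as in the last line, is how custom steps typically access the fields.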
## Key Features

- **Composable Pipeline**: Chain multiple processing steps in a specific order
- **State Persistence**: Save and load processor state using the SafeTensors format
- **Hugging Face Hub Integration**: Easy sharing and loading via `save_pretrained()` and `from_pretrained()`
- **Debugging Support**: Step-through functionality to inspect intermediate transformations
- **Hook System**: Before/after step hooks for additional processing or monitoring
- **Device Support**: Move tensor states between devices (CPU/GPU)
- **Performance Profiling**: Built-in profiling to identify bottlenecks

## Installation
## Usage

```python
from lerobot.processor.pipeline import RobotProcessor
from your_steps import ObservationNormalizer, VelocityCalculator

# Create a processor with an ordered list of steps
processor = RobotProcessor(
    steps=[
        ObservationNormalizer(mean=0, std=1),
        VelocityCalculator(window_size=5),
    ],
    name="my_robot_processor",
    seed=42,
)

# Process a transition
obs, info = env.reset()
transition = (obs, None, 0.0, False, False, info, {})
processed_transition = processor(transition)

# Extract the processed observation
processed_obs = processed_transition[0]
```
### Saving and Loading

```python
# Save locally
processor.save_pretrained("./my_processor")

# Push to the Hugging Face Hub
processor.push_to_hub("username/my-robot-processor")

# Load from the Hub
loaded_processor = RobotProcessor.from_pretrained("username/my-robot-processor")
```
### Debugging with Step-Through

```python
# Inspect intermediate results after each step
for idx, intermediate_transition in enumerate(processor.step_through(transition)):
    print(f"After step {idx}: {intermediate_transition[0]}")  # print the observation
```
### Hooks

```python
# Add a monitoring hook that runs before each step
def log_observation(step_idx, transition):
    print(f"Step {step_idx}: obs shape = {transition[0].shape}")
    return None  # don't modify the transition

processor.register_before_step_hook(log_observation)
```
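The hook contract sketched below is an assumption inferred from the example above (a hook returning `None` leaves the transition untouched, while a returned tuple would replace it); consult the lerobot source for the authoritative behavior:

```python
# Assumed hook semantics (inferred from the docs, not the lerobot source):
# each hook is called with the step index and current transition, and the
# result is swapped in only when it is not None.
def apply_before_step_hooks(hooks, step_idx, transition):
    for hook in hooks:
        result = hook(step_idx, transition)
        if result is not None:
            transition = result
    return transition

seen = []

def log_hook(step_idx, transition):
    seen.append(step_idx)
    return None  # observe only, keep the transition unchanged

t = ({"x": 1}, None, 0.0, False, False, {}, {})
out = apply_before_step_hooks([log_hook], 0, t)
# out is the original transition; the hook only recorded the step index
```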
## Creating Custom Steps

To create a custom processor step, implement the `ProcessorStep` protocol:

```python
from lerobot.processor.pipeline import ProcessorStepRegistry, EnvTransition

@ProcessorStepRegistry.register("my_custom_step")
class MyCustomStep:
    def __init__(self, param1=1.0):
        self.param1 = param1
        self.buffer = []

    def __call__(self, transition: EnvTransition) -> EnvTransition:
        obs, action, reward, done, truncated, info, comp_data = transition
        # Process the observation
        processed_obs = obs * self.param1
        return (processed_obs, action, reward, done, truncated, info, comp_data)

    def get_config(self) -> dict:
        return {"param1": self.param1}

    def state_dict(self) -> dict:
        # Return only torch.Tensor state
        return {}

    def load_state_dict(self, state: dict) -> None:
        # Load tensor state
        pass

    def reset(self) -> None:
        # Clear buffers at episode boundaries
        self.buffer.clear()
```
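Conceptually, the pipeline just folds the transition through its steps from left to right. A minimal stand-alone sketch of that composition (plain Python, not the real `RobotProcessor`):

```python
# Two toy steps that only touch the observation slot of the 7-tuple.
def scale_obs(transition):
    obs, *rest = transition
    return (obs * 2.0, *rest)

def shift_obs(transition):
    obs, *rest = transition
    return (obs + 1.0, *rest)

def run_steps(steps, transition):
    # Left-to-right execution, mirroring how the pipeline chains steps.
    for step in steps:
        transition = step(transition)
    return transition

out = run_steps([scale_obs, shift_obs], (3.0, None, 0.0, False, False, {}, {}))
# observation becomes 3.0 * 2 + 1 = 7.0; the other six fields pass through
```

Because every step returns the same 7-tuple shape, steps can be reordered or inserted without changing the surrounding code.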
## Advanced Features

### Device Management

```python
# Move all tensor states to the GPU
processor = processor.to("cuda")
```
### Step Access

```python
# Get a single step by index
first_step = processor[0]
```
- **Library**: lerobot
- **Format**: safetensors
- **License**: Apache 2.0

## Limitations

- Steps must maintain the 7-tuple structure of `EnvTransition`
- All tensor state must be separated from configuration for proper serialization
- Steps are executed sequentially (no parallel processing within a single transition)
## Citation

```bibtex
@misc{cadene2024lerobot,
    author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascale, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
    title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
    howpublished = "\url{https://github.com/huggingface/lerobot}",
    year = {2024}
}
```
---
datasets: unknown
library_name: lerobot
license: apache-2.0
model_name: act
pipeline_tag: robotics
tags:
- lerobot
- robotics
- act
---

# Model Card for act
<!-- Provide a quick summary of what the model is/does. -->

[Action Chunking with Transformers (ACT)](https://huggingface.co/papers/2304.13705) is an imitation-learning method that predicts short action chunks instead of single steps. It learns from teleoperated data and often achieves high success rates.

This policy has been trained and pushed to the Hub using [LeRobot](https://github.com/huggingface/lerobot).
See the full documentation at [LeRobot Docs](https://huggingface.co/docs/lerobot/index).
---
## How to Get Started with the Model

For a complete walkthrough, see the [training guide](https://huggingface.co/docs/lerobot/il_robots#train-a-policy).
Below is a short version of how to train the policy and run inference/evaluation:
### Train from scratch

```bash
python -m lerobot.scripts.train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --policy.type=act \
  --output_dir=outputs/train/<desired_policy_repo_id> \
  --job_name=lerobot_training \
  --policy.device=cuda \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --wandb.enable=true
```
_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._

### Evaluate the policy / run inference

```bash
python -m lerobot.record \
  --robot.type=so100_follower \
  --dataset.repo_id=<hf_user>/eval_<dataset> \
  --policy.path=<hf_user>/<desired_policy_repo_id> \
  --episodes=10
```

Prefix the dataset repo id with **eval\_** and point `--policy.path` at a local or Hub checkpoint.

---
## Model Details

- **License:** apache-2.0