# Backward compatibility

## Policy Normalization Migration (PR #1452)

**Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components.

### What changed?

|                            | Before PR #1452                                  | After PR #1452                                               |
| -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
| **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components                |
| **Model State Dict**       | Contains normalization statistics                | **Clean weights only** - no normalization parameters         |
| **Usage**                  | `policy(batch)` handles everything               | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` |

### Impact on existing models

- Models trained **before** PR #1452 have normalization embedded in their weights
- These models need migration to work with the new `PolicyProcessorPipeline` system
- The migration extracts normalization statistics and creates separate processor pipelines
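A quick way to tell whether a checkpoint predates the migration is to look for the old normalization key prefixes in its state dict. A minimal sketch, assuming plain dict access (the table above only names `normalize_inputs.*`; the other two prefixes are an assumption based on the pre-#1452 module layout):

```python
# Sketch: detect embedded normalization in a legacy checkpoint.
# Prefixes other than "normalize_inputs." are assumed, not guaranteed.
def has_embedded_normalization(state_dict: dict) -> bool:
    prefixes = ("normalize_inputs.", "normalize_targets.", "unnormalize_outputs.")
    return any(key.startswith(prefixes) for key in state_dict)
```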

### Migrating old models

Use the migration script to convert models with embedded normalization:

```shell
python src/lerobot/processor/migrate_policy_normalization.py \
    --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
    --push-to-hub \
    --branch migrated
```

The script:

1. **Extracts** normalization statistics from model weights
2. **Creates** external preprocessor and postprocessor pipelines
3. **Removes** normalization layers from model weights
4. **Saves** clean model + processor pipelines
5. **Pushes** to Hub with automatic PR creation
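Conceptually, steps 1 and 3 boil down to partitioning the state dict by key prefix. A rough sketch of that split (the prefix names are an assumption based on the pre-#1452 module layout; the real script additionally builds and saves the processor pipelines):

```python
# Sketch: split a legacy state dict into clean weights and normalization stats.
def split_normalization_stats(state_dict: dict) -> tuple[dict, dict]:
    prefixes = ("normalize_inputs.", "normalize_targets.", "unnormalize_outputs.")
    stats = {k: v for k, v in state_dict.items() if k.startswith(prefixes)}
    clean = {k: v for k, v in state_dict.items() if not k.startswith(prefixes)}
    return clean, stats
```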

### Using migrated models

```python
# New usage pattern (after migration)
from lerobot.policies.factory import make_policy, make_pre_post_processors

# Load model and processors separately
policy = make_policy(config, ds_meta=dataset.meta)
preprocessor, postprocessor = make_pre_post_processors(
    policy_cfg=config,
    dataset_stats=dataset.meta.stats,
)

# Process data through pipeline
processed_batch = preprocessor(raw_batch)
action = policy.select_action(processed_batch)
final_action = postprocessor(action)
```

## Hardware API redesign

PR [#777](https://github.com/huggingface/lerobot/pull/777) improves LeRobot's calibration but is **not backward-compatible**. Below is an overview of what changed and how you can continue to work with datasets created before this pull request.

### What changed?

|                                   | Before PR #777                                    | After PR #777                                                |
| --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
| **Joint range**                   | Degrees `-180...180°`                             | **Normalized range** Joints: `-100...100` Gripper: `0...100` |
| **Zero position (SO100 / SO101)** | Arm fully extended horizontally                   | **In middle of the range for each joint**                    |
| **Boundary handling**             | Software safeguards to detect ±180° wrap-arounds  | No wrap-around logic needed due to mid-range zero            |

---

### Impact on existing datasets

- Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
  - Joint angles are offset and incorrectly normalized.
- Any models directly finetuned or trained on the old data will need their inputs and outputs converted.

### Using datasets made with the previous calibration system

We provide an example migration script for replaying an episode recorded with the previous calibration: `examples/backward_compatibility/replay.py`.
Below, we walk through the modifications made in that script to make datasets from the previous calibration work.

```diff
+   key = f"{name.removeprefix('main_')}.pos"
    action[key] = action_array[i].item()

+   action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
+   action["elbow_flex.pos"] -= 90
```

Let's break this down.
The new codebase uses the `.pos` suffix for position observations and drops the `main_` prefix:

<!-- prettier-ignore-start -->
```python
key = f"{name.removeprefix('main_')}.pos"
```
<!-- prettier-ignore-end -->

For `"shoulder_lift"` (id = 2), the zero position is shifted by -90 degrees and the direction is reversed compared to the old calibration/code.

<!-- prettier-ignore-start -->
```python
action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
```
<!-- prettier-ignore-end -->

For `"elbow_flex"` (id = 3), the zero position is shifted by -90 degrees compared to the old calibration/code.

<!-- prettier-ignore-start -->
```python
action["elbow_flex.pos"] -= 90
```
<!-- prettier-ignore-end -->
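Taken together, the three modifications can be collected into a single illustrative helper that converts one legacy action frame (a list of motor names plus a value array) to the new format. The function name is ours; `float(...)` stands in for the `.item()` call used on tensor elements in the example script:

```python
def convert_legacy_action(names: list[str], action_array) -> dict[str, float]:
    """Map one pre-#777 action frame to the new key and value conventions."""
    action = {}
    for i, name in enumerate(names):
        # Drop the "main_" prefix and add the ".pos" suffix.
        key = f"{name.removeprefix('main_')}.pos"
        action[key] = float(action_array[i])
    # shoulder_lift: zero shifted by -90 degrees, direction reversed.
    action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
    # elbow_flex: zero shifted by -90 degrees.
    action["elbow_flex.pos"] -= 90
    return action
```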

To use degrees normalization, we then set the `--robot.use_degrees` option to `true`:

```diff
python examples/backward_compatibility/replay.py \
    --robot.type=so101_follower \
    --robot.port=/dev/tty.usbmodem5A460814411 \
    --robot.id=blue \
+   --robot.use_degrees=true \
    --dataset.repo_id=my_dataset_id \
    --dataset.episode=0
```

### Using policies trained with the previous calibration system

Policies output actions in the same format as the datasets (`torch.Tensors`). Therefore, the same transformations should be applied.

To find these transformations, we recommend first replaying an episode of the dataset your policy was trained on, following the section above.
Then, apply the same transformations in your inference script (shown here in the `record.py` script):

```diff
action_values = predict_action(
    observation_frame,
    policy,
    get_safe_torch_device(policy.config.device),
    policy.config.use_amp,
    task=single_task,
    robot_type=robot.robot_type,
    )
    action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)}

+   action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
+   action["elbow_flex.pos"] -= 90
    robot.send_action(action)
```
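If you prefer to keep the surrounding script untouched, the two offsets can be factored into a small helper applied between `predict_action` and `robot.send_action`. The function name is hypothetical; the offsets are exactly the ones from the diff above:

```python
def fix_legacy_policy_action(action: dict) -> dict:
    """Convert an action from a policy trained on pre-#777 data to the new calibration."""
    fixed = dict(action)  # copy so the caller's dict is not mutated
    fixed["shoulder_lift.pos"] = -(fixed["shoulder_lift.pos"] - 90)
    fixed["elbow_flex.pos"] -= 90
    return fixed
```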

If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb).