
# Migrate policy to PolicyProcessorPipeline system

🤖 **Automated Policy Migration to PolicyProcessorPipeline**

This PR migrates your model to the new LeRobot policy format using the modern PolicyProcessorPipeline architecture.

## What Changed

### ✨ **New Architecture - PolicyProcessorPipeline**

Your model now uses external PolicyProcessorPipeline components for data processing instead of built-in normalization layers. This provides:

- **Modularity**: Separate preprocessing and postprocessing pipelines
- **Flexibility**: Easy to swap, configure, and debug processing steps
- **Compatibility**: Works with the latest LeRobot ecosystem

### 🔧 **Normalization Extraction**

We've extracted the normalization statistics from your model's state_dict and removed the built-in normalization layers:

- **Extracted patterns**: `normalize_inputs.*`, `unnormalize_outputs.*`, `normalize.*`, `unnormalize.*`, `input_normalizer.*`, `output_normalizer.*`
- **Statistics preserved**: Mean, std, min, and max values for all features
- **Clean model**: The state dict now contains only core model weights

### 📦 **Files Added**

- **preprocessor_config.json**: Configuration for the input preprocessing pipeline
- **postprocessor_config.json**: Configuration for the output postprocessing pipeline
- **model.safetensors**: Clean model weights without normalization layers
- **config.json**: Updated model configuration
- **train_config.json**: Training configuration
- **README.md**: Updated model card with migration information

### 🚀 **Benefits**

- **Backward Compatible**: Your model's behavior remains identical
- **Future Ready**: Compatible with the latest LeRobot features and updates
- **Debuggable**: Easy to inspect and modify processing steps
- **Portable**: Processors can be shared and reused across models

### 💻 **Usage**

```python
# Load your migrated model
from lerobot.policies import get_policy_class
from lerobot.processor import PolicyProcessorPipeline

# The preprocessor and postprocessor are now external
preprocessor = PolicyProcessorPipeline.from_pretrained(
    "your-model-repo", config_filename="preprocessor_config.json"
)
postprocessor = PolicyProcessorPipeline.from_pretrained(
    "your-model-repo", config_filename="postprocessor_config.json"
)
policy = get_policy_class("your-policy-type").from_pretrained("your-model-repo")

# Process data through the pipeline
processed_batch = preprocessor(raw_batch)
action = policy(processed_batch)
final_action = postprocessor(action)
```

*Generated automatically by the LeRobot policy migration script*
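The normalization extraction described above can be sketched as a simple key filter over the checkpoint's state_dict. This is a minimal illustrative sketch, not the migration script's actual code: the key patterns come from this PR's description, while the function name and return structure are assumptions.

```python
# Hypothetical sketch of splitting normalization statistics out of a
# state_dict, following the patterns listed under "Normalization Extraction".
import re

# Key prefixes the migration reportedly extracts (from the PR description)
NORM_PATTERNS = [
    r"^normalize_inputs\.",
    r"^unnormalize_outputs\.",
    r"^normalize\.",
    r"^unnormalize\.",
    r"^input_normalizer\.",
    r"^output_normalizer\.",
]

def split_normalization(state_dict):
    """Split a state_dict into core weights and normalization statistics.

    Values are typically torch tensors; any mapping works for the split itself.
    """
    stats, core = {}, {}
    for key, value in state_dict.items():
        if any(re.match(pattern, key) for pattern in NORM_PATTERNS):
            stats[key] = value  # mean/std/min/max buffers, stored externally
        else:
            core[key] = value   # kept as the clean model weights
    return core, stats
```

Keeping the match anchored at the start of the key (`re.match` with `^`) ensures nested module names such as `backbone.normalize.weight` are not mistaken for top-level normalization layers.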

#9
by AdilZtn - opened
Files changed (1)
  1. README.md +6 -6
README.md CHANGED
````diff
@@ -1,13 +1,13 @@
 ---
-datasets: AdilZtn/grab_red_cube_test_25
+datasets: unknown
 library_name: lerobot
 license: apache-2.0
 model_name: act
 pipeline_tag: robotics
 tags:
-- act
 - lerobot
 - robotics
+- act
 ---
 
 # Model Card for act
@@ -31,7 +31,7 @@ Below is the short version on how to train and run inference/eval:
 ### Train from scratch
 
 ```bash
-python -m lerobot.scripts.train \
+lerobot-train \
   --dataset.repo_id=${HF_USER}/<dataset> \
   --policy.type=act \
   --output_dir=outputs/train/<desired_policy_repo_id> \
@@ -41,12 +41,12 @@ python -m lerobot.scripts.train \
   --wandb.enable=true
 ```
 
-*Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`.*
+_Writes checkpoints to `outputs/train/<desired_policy_repo_id>/checkpoints/`._
 
 ### Evaluate the policy/run inference
 
 ```bash
-python -m lerobot.record \
+lerobot-record \
   --robot.type=so100_follower \
   --dataset.repo_id=<hf_user>/eval_<dataset> \
   --policy.path=<hf_user>/<desired_policy_repo_id> \
@@ -59,4 +59,4 @@ Prefix the dataset repo with **eval\_** and supply `--policy.path` pointing to a
 
 ## Model Details
 
-* **License:** apache-2.0
+- **License:** apache-2.0
````