aleenatron committed
Commit f4a62da · verified · 1 Parent(s): 4926f37

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See the raw diff for the full changeset.
Files changed (50):
  1. .dockerignore +160 -0
  2. .gitattributes +32 -59
  3. .github/ISSUE_TEMPLATE/bug-report.yml +68 -0
  4. .github/PULL_REQUEST_TEMPLATE.md +41 -0
  5. .github/workflows/documentation-upload-pr.yml +40 -0
  6. .github/workflows/documentation.yml +70 -0
  7. .github/workflows/fast_tests.yml +87 -0
  8. .github/workflows/full_tests.yml +210 -0
  9. .github/workflows/nightly.yml +194 -0
  10. .github/workflows/quality.yml +58 -0
  11. .github/workflows/release.yml +179 -0
  12. .github/workflows/security.yml +54 -0
  13. .github/workflows/stale.yml +70 -0
  14. .github/workflows/unbound_deps_tests.yml +183 -0
  15. .gitignore +179 -0
  16. .pre-commit-config.yaml +108 -0
  17. CODE_OF_CONDUCT.md +132 -0
  18. CONTRIBUTING.md +323 -0
  19. LICENSE +507 -0
  20. MANIFEST.in +2 -0
  21. Makefile +180 -0
  22. README.md +343 -0
  23. benchmarks/video/README.md +288 -0
  24. benchmarks/video/benchmark.py +94 -0
  25. benchmarks/video/capture_camera_feed.py +102 -0
  26. benchmarks/video/run_video_benchmark.py +493 -0
  27. dataset_path.py +4 -0
  28. docker/Dockerfile.internal +93 -0
  29. docker/Dockerfile.user +79 -0
  30. docs-requirements.txt +3 -0
  31. docs/README.md +139 -0
  32. docs/source/_toctree.yml +90 -0
  33. docs/source/act.mdx +92 -0
  34. docs/source/async.mdx +312 -0
  35. docs/source/backwardcomp.mdx +151 -0
  36. docs/source/cameras.mdx +206 -0
  37. docs/source/contributing.md +1 -0
  38. docs/source/debug_processor_pipeline.mdx +299 -0
  39. docs/source/feetech.mdx +71 -0
  40. docs/source/groot.mdx +122 -0
  41. docs/source/hilserl.mdx +923 -0
  42. docs/source/hilserl_sim.mdx +154 -0
  43. docs/source/hope_jr.mdx +277 -0
  44. docs/source/il_robots.mdx +603 -0
  45. docs/source/il_sim.mdx +220 -0
  46. docs/source/implement_your_own_processor.mdx +273 -0
  47. docs/source/index.mdx +23 -0
  48. docs/source/installation.mdx +127 -0
  49. docs/source/integrate_hardware.mdx +476 -0
  50. docs/source/introduction_processors.mdx +314 -0
.dockerignore ADDED
@@ -0,0 +1,160 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # Misc
+ .git
+ tmp
+ wandb
+ data
+ outputs
+ .vscode
+ rl
+ media
+
+
+ # Logging
+ logs
+
+ # HPC
+ nautilus/*.yaml
+ *.key
+
+ # Slurm
+ sbatch*.sh
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ !tests/artifacts
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Ignore .cache except calibration
+ .cache/*
+ !.cache/calibration/
+ !.cache/calibration/**
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
.gitattributes CHANGED
@@ -1,59 +1,32 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.lz4 filter=lfs diff=lfs merge=lfs -text
- *.mds filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
- # Audio files - uncompressed
- *.pcm filter=lfs diff=lfs merge=lfs -text
- *.sam filter=lfs diff=lfs merge=lfs -text
- *.raw filter=lfs diff=lfs merge=lfs -text
- # Audio files - compressed
- *.aac filter=lfs diff=lfs merge=lfs -text
- *.flac filter=lfs diff=lfs merge=lfs -text
- *.mp3 filter=lfs diff=lfs merge=lfs -text
- *.ogg filter=lfs diff=lfs merge=lfs -text
- *.wav filter=lfs diff=lfs merge=lfs -text
- # Image files - uncompressed
- *.bmp filter=lfs diff=lfs merge=lfs -text
- *.gif filter=lfs diff=lfs merge=lfs -text
- *.png filter=lfs diff=lfs merge=lfs -text
- *.tiff filter=lfs diff=lfs merge=lfs -text
- # Image files - compressed
- *.jpg filter=lfs diff=lfs merge=lfs -text
- *.jpeg filter=lfs diff=lfs merge=lfs -text
- *.webp filter=lfs diff=lfs merge=lfs -text
- # Video files - compressed
- *.mp4 filter=lfs diff=lfs merge=lfs -text
- *.webm filter=lfs diff=lfs merge=lfs -text
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ *.memmap filter=lfs diff=lfs merge=lfs -text
+ *.stl filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ *.mp4 filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.json !text !filter !merge !diff
+ tests/artifacts/cameras/*.png filter=lfs diff=lfs merge=lfs -text
+ *.bag filter=lfs diff=lfs merge=lfs -text
+ media/gym/aloha_act.gif filter=lfs diff=lfs merge=lfs -text
+ media/gym/pusht_diffusion.gif filter=lfs diff=lfs merge=lfs -text
+ media/gym/simxarm_tdmpc.gif filter=lfs diff=lfs merge=lfs -text
+ media/hope_jr/hopejr.png filter=lfs diff=lfs merge=lfs -text
+ media/lekiwi/kiwi.webp filter=lfs diff=lfs merge=lfs -text
+ media/lerobot-logo-light.png filter=lfs diff=lfs merge=lfs -text
+ media/lerobot-logo-thumbnail.png filter=lfs diff=lfs merge=lfs -text
+ media/so100/leader_follower.webp filter=lfs diff=lfs merge=lfs -text
+ media/so101/so101-leader.webp filter=lfs diff=lfs merge=lfs -text
+ media/so101/so101.webp filter=lfs diff=lfs merge=lfs -text
+ media/wandb.png filter=lfs diff=lfs merge=lfs -text
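To sanity-check how Git resolves these new rules, `git check-attr` can be run from a checkout of this commit; a minimal sketch (the `model.safetensors` and `config.json` paths below are placeholders, not files in the repo):

```bash
# Ask Git which attributes apply to a few representative paths.
# Paths under media/ and *.safetensors should resolve to filter: lfs;
# config.json should report filter: unset because of the `!filter` rule.
git check-attr filter diff merge -- media/wandb.png model.safetensors config.json
```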
.github/ISSUE_TEMPLATE/bug-report.yml ADDED
@@ -0,0 +1,68 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ name: "\U0001F41B Bug Report"
+ description: Submit a bug report to help us improve LeRobot
+ body:
+   - type: markdown
+     attributes:
+       value: |
+         Thanks for taking the time to submit a bug report! 🐛
+         If this is not a bug related to the LeRobot library directly, but instead a general question about your code or the library specifically, please use our [discord](https://discord.gg/s3KuuzsPFb).
+
+   - type: textarea
+     id: system-info
+     attributes:
+       label: System Info
+       description: Please share your LeRobot configuration by running `lerobot-info` (if installed) or `python -m lerobot.scripts.display_sys_info` (if not installed) and pasting the output below.
+       render: Shell
+       placeholder: lerobot version, OS, python version, numpy version, torch version, and lerobot's configuration
+     validations:
+       required: true
+
+   - type: checkboxes
+     id: information-scripts-examples
+     attributes:
+       label: Information
+       description: 'The problem arises when using:'
+       options:
+         - label: "One of the scripts in the examples/ folder of LeRobot"
+         - label: "My own task or dataset (give details below)"
+
+   - type: textarea
+     id: reproduction
+     validations:
+       required: true
+     attributes:
+       label: Reproduction
+       description: |
+         If needed, provide a simple code sample that reproduces the problem you ran into. It can be a Colab link or just a code snippet.
+         Sharing error messages or stack traces could be useful as well!
+         Important! Use code tags to correctly format your code. See https://help.github.com/en/github/writing-on-github/creating-and-highlighting-code-blocks#syntax-highlighting
+         Try to avoid screenshots, as they are hard to read and don't allow copy-and-pasting.
+
+       placeholder: |
+         Steps to reproduce the behavior:
+
+         1.
+         2.
+         3.
+
+   - type: textarea
+     id: expected-behavior
+     validations:
+       required: true
+     attributes:
+       label: Expected behavior
+       description: "A clear and concise description of what you would expect to happen."
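For reference, the System Info field above can be filled from either command the template itself names:

```bash
# Either entry point prints the environment summary the template asks for
lerobot-info                                  # if lerobot is installed
python -m lerobot.scripts.display_sys_info    # if lerobot is not installed
```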
.github/PULL_REQUEST_TEMPLATE.md ADDED
@@ -0,0 +1,41 @@
+ ## What this does
+
+ Explain what this PR does. Feel free to tag your PR with the appropriate label(s).
+
+ Examples:
+ | Title                | Label           |
+ |----------------------|-----------------|
+ | Fixes #[issue]       | (🐛 Bug)        |
+ | Adds new dataset     | (🗃️ Dataset)    |
+ | Optimizes something  | (⚡️ Performance) |
+
+ ## How it was tested
+
+ Explain/show how you tested your changes.
+
+ Examples:
+
+ - Added `test_something` in `tests/test_stuff.py`.
+ - Added `new_feature` and checked that training converges with policy X on dataset/environment Y.
+ - Optimized `some_function`, it now runs X times faster than previously.
+
+ ## How to checkout & try? (for the reviewer)
+
+ Provide a simple way for the reviewer to try out your changes.
+
+ Examples:
+
+ ```bash
+ pytest -sx tests/test_stuff.py::test_something
+ ```
+
+ ```bash
+ lerobot-train --some.option=true
+ ```
+
+ ## SECTION TO REMOVE BEFORE SUBMITTING YOUR PR
+
+ **Note**: Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
+ members/contributors who may be interested in your PR. Try to avoid tagging more than 3 people.
+
+ **Note**: Before submitting this PR, please read the [contributor guideline](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md#submitting-a-pull-request-pr).
.github/workflows/documentation-upload-pr.yml ADDED
@@ -0,0 +1,40 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow uploads the documentation preview built for a PR and comments the link on the PR.
+ name: Documentation PR Upload
+ permissions:
+   contents: read
+   pull-requests: write
+
+ on:
+   # Triggered by the completion of the main 'Documentation' workflow.
+   workflow_run: # zizmor: ignore[dangerous-triggers] We follow the same pattern as in Transformers
+     workflows: ["Documentation"]
+     types:
+       - completed
+
+ jobs:
+   # This job uploads a preview of the documentation for a pull request.
+   upload_and_comment:
+     name: Upload Preview and Comment
+     if: >
+       github.event.workflow_run.event == 'pull_request' &&
+       github.event.workflow_run.conclusion == 'success'
+     uses: huggingface/doc-builder/.github/workflows/upload_pr_documentation.yml@main
+     with:
+       package_name: lerobot
+     secrets:
+       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+       comment_bot_token: ${{ secrets.COMMENT_BOT_TOKEN }}
.github/workflows/documentation.yml ADDED
@@ -0,0 +1,70 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles building documentation for both main branches and PRs.
+ name: Documentation
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Triggers the workflow on push events to main for the docs folder
+   push:
+     branches:
+       - main
+     paths:
+       - "docs/**"
+
+   # Triggers the workflow on pull request events targeting main for the docs folder
+   pull_request:
+     branches:
+       - main
+     paths:
+       - "docs/**"
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job builds and deploys the official documentation.
+   build_main_docs:
+     name: Build Main Docs
+     if: github.event_name == 'push' || github.event_name == 'workflow_dispatch'
+     permissions:
+       contents: read
+     uses: huggingface/doc-builder/.github/workflows/build_main_documentation.yml@main
+     with:
+       commit_sha: ${{ github.sha }}
+       package: lerobot
+       additional_args: --not_python_module
+     secrets:
+       token: ${{ secrets.HUGGINGFACE_PUSH }}
+       hf_token: ${{ secrets.HF_DOC_BUILD_PUSH }}
+
+   # This job builds a preview of the documentation for a pull request.
+   # The result of this job triggers the 'Upload PR Documentation' workflow.
+   build_pr_docs:
+     name: Build PR Docs
+     if: github.event_name == 'pull_request'
+     permissions:
+       contents: read
+       pull-requests: write
+     uses: huggingface/doc-builder/.github/workflows/build_pr_documentation.yml@main
+     with:
+       commit_sha: ${{ github.event.pull_request.head.sha }}
+       pr_number: ${{ github.event.number }}
+       package: lerobot
+       additional_args: --not_python_module
.github/workflows/fast_tests.yml ADDED
@@ -0,0 +1,87 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles fast testing.
+ name: Fast Tests
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   pull_request:
+     branches:
+       - main
+     paths:
+       - "src/**"
+       - "tests/**"
+       - ".github/workflows/**"
+       - "pyproject.toml"
+       - "Makefile"
+   push:
+     branches:
+       - main
+     paths:
+       - "src/**"
+       - "tests/**"
+       - ".github/workflows/**"
+       - "pyproject.toml"
+       - "Makefile"
+
+ permissions:
+   contents: read
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+   DOCKER_IMAGE_NAME: huggingface/lerobot-gpu
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job runs pytest with the default dependencies.
+   # It runs every time we commit to a PR or push to main
+   fast-pytest-tests:
+     name: Fast Pytest Tests
+     runs-on: ubuntu-latest
+     env:
+       MUJOCO_GL: egl
+     steps:
+       - uses: actions/checkout@v4
+         with:
+           persist-credentials: false
+           lfs: true
+
+       # TODO(Steven): Evaluate the need for these dependencies
+       - name: Install apt dependencies
+         run: |
+           sudo apt-get update && sudo apt-get install -y build-essential git \
+           curl libglib2.0-0 libegl1-mesa-dev ffmpeg \
+           libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev
+
+       - name: Setup uv and Python
+         uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           enable-cache: true
+           version: ${{ env.UV_VERSION }}
+           python-version: ${{ env.PYTHON_VERSION }}
+
+       - name: Install lerobot with test extras
+         run: uv sync --extra "test"
+
+       - name: Run pytest
+         run: uv run pytest tests -vv --maxfail=10
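The same fast suite can be reproduced locally with the workflow's own commands; a minimal sketch, assuming `uv` is installed and the apt packages listed above are present:

```bash
# Mirror the Fast Tests job on a local Linux machine
uv sync --extra "test"                                # install lerobot with the test extra
MUJOCO_GL=egl uv run pytest tests -vv --maxfail=10    # same invocation as the CI step
```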
.github/workflows/full_tests.yml ADDED
@@ -0,0 +1,210 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles full testing.
+ name: Full Tests
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   pull_request_review:
+     types: [submitted]
+   push:
+     branches:
+       - main
+     paths:
+       - "src/**"
+       - "tests/**"
+       - ".github/workflows/**"
+       - "pyproject.toml"
+       - "Makefile"
+
+ permissions:
+   contents: read
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+   DOCKER_IMAGE_NAME: huggingface/lerobot-gpu
+
+ # Ensures that only the latest action is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+
+   # This job runs the E2E tests + pytest with all extras
+   # It runs every time a PR is approved or on a push to main
+   full-tests:
+     name: Full Tests
+     runs-on: ubuntu-latest
+     if: |
+       (github.event_name == 'pull_request_review' && github.event.review.state == 'approved') ||
+       github.event_name == 'push' ||
+       github.event_name == 'workflow_dispatch'
+     env:
+       MUJOCO_GL: egl
+     steps:
+       - uses: actions/checkout@v4
+         with:
+           lfs: true
+           persist-credentials: false
+
+       - name: Install apt dependencies
+         run: |
+           sudo apt-get update && sudo apt-get install -y build-essential \
+           git curl libglib2.0-0 libegl1-mesa-dev ffmpeg libusb-1.0-0-dev \
+           speech-dispatcher libgeos-dev portaudio19-dev
+
+       - name: Setup uv and Python
+         uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           enable-cache: true
+           version: ${{ env.UV_VERSION }}
+           python-version: ${{ env.PYTHON_VERSION }}
+
+       - name: Install lerobot with all extras
+         run: uv sync --all-extras --no-extra groot # TODO(Steven): Make flash-attn optional
+
+       - name: Run pytest (all extras)
+         run: uv run pytest tests -vv --maxfail=10
+
+       - name: Run end-to-end tests
+         run: uv run make test-end-to-end
+
+   # This job builds a GPU enabled image for testing
+   # It runs every time a PR is approved or on a push to main
+   # TODO(Steven): For now we skip this job for community PRs
+   build-and-push-docker:
+     name: Build and Push Docker
+     runs-on:
+       group: aws-general-8-plus
+     if: |
+       (github.event_name == 'pull_request_review' && github.event.review.state == 'approved' && github.event.pull_request.head.repo.fork == false) ||
+       github.event_name == 'push' ||
+       github.event_name == 'workflow_dispatch'
+     outputs:
+       image_tag: ${{ steps.set_tag.outputs.image_tag }}
+     env:
+       GITHUB_EVENT_NAME: ${{ github.event_name }}
+       GITHUB_REF: ${{ github.ref }}
+       GITHUB_PR_NUMBER: ${{ github.event.pull_request.number }}
+     steps:
+       - name: Set Docker image tag
+         id: set_tag
+         run: |
+           if [[ "${GITHUB_EVENT_NAME}" == "push" ]]; then
+             TAG="${DOCKER_IMAGE_NAME}:latest"
+           elif [[ -n "${GITHUB_PR_NUMBER}" ]]; then
+             TAG="${DOCKER_IMAGE_NAME}:pr-${GITHUB_PR_NUMBER}"
+           else
+             TAG="${DOCKER_IMAGE_NAME}:pr-${GITHUB_REF##*/}"
+           fi
+           echo "image_tag=$TAG" >> $GITHUB_OUTPUT
+       - name: Install Git LFS
+         run: |
+           sudo apt-get update
+           sudo apt-get install git-lfs
+           git lfs install
+       - uses: actions/checkout@v4
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           cache-binary: false
+       - name: Login to Docker Hub
+         uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+       - name: Build and push Docker image
+         uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           context: .
+           file: ./docker/Dockerfile.internal
+           push: true
+           tags: ${{ steps.set_tag.outputs.image_tag }}
+
+   # This job runs pytest with all extras in a GPU enabled host
+   # It runs every time a test image is created
+   gpu-tests:
+     name: GPU Tests
+     needs: [build-and-push-docker]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-and-push-docker.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on GPU
+         run: pytest tests -vv --maxfail=10
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job deletes the test image recently created
+   # It runs every time after the gpu-tests have finished
+   delete-pr-image:
+     name: Delete PR Image
+     needs: [gpu-tests, build-and-push-docker]
+     if: always() && ((github.event.review.state == 'approved') || (github.event_name == 'workflow_dispatch')) && needs.build-and-push-docker.result == 'success'
+     runs-on: ubuntu-latest
+     steps:
+       - name: Get Docker Hub Token and Delete Image
+         # zizmor: ignore[template-injection]
+         run: |
+           IMAGE_NAME=$(echo "${{ needs.build-and-push-docker.outputs.image_tag }}" | cut -d':' -f1)
+           IMAGE_TAG=$(echo "${{ needs.build-and-push-docker.outputs.image_tag }}" | cut -d':' -f2)
+
+           echo "Attempting to delete image: $IMAGE_NAME:$IMAGE_TAG"
+
+           TOKEN=$(curl -s -H "Content-Type: application/json" \
+             -X POST \
+             -d '{"username": "${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}", "password": "${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}"}' \
+             https://hub.docker.com/v2/users/login/ | jq -r .token)
+
+           if [ "$TOKEN" == "null" ] || [ -z "$TOKEN" ]; then
+             echo "::error::Failed to get Docker Hub token."
+             exit 1
+           fi
+
+           HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
+             -H "Authorization: JWT ${TOKEN}" \
+             -X DELETE \
+             https://hub.docker.com/v2/repositories/${IMAGE_NAME}/tags/${IMAGE_TAG}/)
+
+           if [ "$HTTP_RESPONSE" -eq 204 ]; then
+             echo "Successfully deleted Docker image tag: $IMAGE_NAME:$IMAGE_TAG"
+           else
+             echo "::error::Failed to delete Docker image. HTTP status: $HTTP_RESPONSE"
+             exit 1
+           fi
+
+ # TODO(Steven): Check docker images pull in ubuntu
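For reference, the tag-selection logic in the `Set Docker image tag` step can be dry-run outside Actions; the values below are hypothetical:

```bash
# Dry-run of the image-tag selection used above (values are illustrative)
DOCKER_IMAGE_NAME=huggingface/lerobot-gpu
GITHUB_EVENT_NAME=pull_request_review
GITHUB_PR_NUMBER=1234
GITHUB_REF=refs/pull/1234/merge
if [[ "${GITHUB_EVENT_NAME}" == "push" ]]; then
  TAG="${DOCKER_IMAGE_NAME}:latest"       # pushes to main reuse the :latest tag
elif [[ -n "${GITHUB_PR_NUMBER}" ]]; then
  TAG="${DOCKER_IMAGE_NAME}:pr-${GITHUB_PR_NUMBER}"
else
  TAG="${DOCKER_IMAGE_NAME}:pr-${GITHUB_REF##*/}"  # fallback: last ref segment
fi
echo "$TAG"   # -> huggingface/lerobot-gpu:pr-1234
```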
.github/workflows/nightly.yml ADDED
@@ -0,0 +1,194 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles nightly testing & Docker image publishing.
+ name: Nightly
+ permissions:
+   contents: read
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Runs at 02:00
+   schedule:
+     - cron: "0 2 * * *"
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+   DOCKER_IMAGE_NAME_CPU: huggingface/lerobot-cpu:latest
+   DOCKER_IMAGE_NAME_GPU: huggingface/lerobot-gpu:latest
+
+ # Ensures that only the latest commit is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job builds a CPU image for testing & distribution
+   build-docker-cpu-nightly:
+     name: Build CPU Docker for Nightly
+     runs-on:
+       group: aws-general-8-plus
+     outputs:
+       image_tag: ${{ env.DOCKER_IMAGE_NAME_CPU }}
+     steps:
+       - name: Install Git LFS
+         run: |
+           sudo apt-get update
+           sudo apt-get install git-lfs
+           git lfs install
+       - uses: actions/checkout@v4
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           cache-binary: false
+       - name: Login to Docker Hub
+         uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+       - name: Build and push Docker image CPU
+         uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           context: .
+           file: ./docker/Dockerfile.user
+           push: true
+           tags: ${{ env.DOCKER_IMAGE_NAME_CPU }}
+
+   # This job builds a GPU image for testing & distribution
+   build-docker-gpu-nightly:
+     name: Build GPU Docker for Nightly
+     runs-on:
+       group: aws-general-8-plus
+     outputs:
+       image_tag: ${{ env.DOCKER_IMAGE_NAME_GPU }}
+     steps:
+       - name: Install Git LFS
+         run: |
+           sudo apt-get update
+           sudo apt-get install git-lfs
+           git lfs install
+       - uses: actions/checkout@v4
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Set up Docker Buildx
+         uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           cache-binary: false
+       - name: Login to Docker Hub
+         uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
+         with:
+           username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+           password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+       - name: Build and push Docker image GPU
+         uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           context: .
+           file: ./docker/Dockerfile.internal
+           push: true
+           tags: ${{ env.DOCKER_IMAGE_NAME_GPU }}
+
+   # This job runs the E2E tests + pytest with all extras in the CPU image
+   nightly-cpu-tests:
+     name: Nightly CPU Tests
+     needs: [build-docker-cpu-nightly]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-docker-cpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on CPU
+         run: pytest tests -vv --maxfail=10
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job runs the E2E tests + pytest with all extras in the GPU image
+   nightly-gpu-tests:
+     name: Nightly GPU Tests
+     needs: [build-docker-gpu-nightly]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-docker-gpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on GPU
+         run: pytest tests -vv --maxfail=10
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job runs multi-GPU training tests with 4 GPUs
+   nightly-multi-gpu-tests:
+     name: Nightly Multi-GPU Tests
+     needs: [build-docker-gpu-nightly]
+     runs-on:
+       group: aws-g4dn-12xlarge # Instance with 4 GPUs
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+       CUDA_VISIBLE_DEVICES: "0,1,2,3"
+     container:
+       image: ${{ needs.build-docker-gpu-nightly.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Verify GPU availability
+         run: |
+           nvidia-smi
+           python -c "import torch; print(f'PyTorch CUDA available: {torch.cuda.is_available()}'); print(f'Number of GPUs: {torch.cuda.device_count()}')"
+
+       - name: Run multi-GPU training tests
+         # TODO(Steven): Investigate why motors tests are failing in multi-GPU setup
+         run: pytest tests -vv --maxfail=10 --ignore=tests/motors/
+         timeout-minutes: 10
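Once a nightly run has completed, the tags it publishes can be pulled directly; a sketch, assuming Docker (and the NVIDIA container toolkit for the GPU image) is installed:

```bash
# CPU image, built from docker/Dockerfile.user
docker pull huggingface/lerobot-cpu:latest

# GPU image, built from docker/Dockerfile.internal; --gpus all and --shm-size
# mirror the container options used by the nightly GPU jobs above
docker pull huggingface/lerobot-gpu:latest
docker run --rm --gpus all --shm-size 16gb -it huggingface/lerobot-gpu:latest bash
```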
.github/workflows/quality.yml ADDED
@@ -0,0 +1,58 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles linting, formatting, and static analysis checks for the codebase.
+ name: Quality
+ permissions:
+   contents: read
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Triggers the workflow on push events to main
+   push:
+     branches:
+       - main
+
+   # Triggers the workflow on pull request events targeting main
+   pull_request:
+     branches:
+       - main
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job runs pre-commit hooks to check code style and formatting.
+   pre-commit-checks:
+     name: Run Pre-commit Hooks (Lint, Format & Static Analysis)
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout code
+         uses: actions/checkout@v4
+         with:
+           persist-credentials: false
+
+       - name: Set up Python
+         uses: actions/setup-python@v5
+         with:
+           python-version: '3.10'
+
+       - name: Run pre-commit hooks
+         uses: pre-commit/action@v3.0.1 # zizmor: ignore[unpinned-uses]
+         with:
+           extra_args: --all-files --show-diff-on-failure --color=always
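Contributors can run the identical checks locally, since the job defers entirely to the repository's `.pre-commit-config.yaml` (file 16 in this commit); a minimal sketch, assuming `pre-commit` is installed:

```bash
# One-time setup: install the hooks from .pre-commit-config.yaml
pre-commit install

# Same invocation as the CI step's extra_args, run against every file
pre-commit run --all-files --show-diff-on-failure --color=always
```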
.github/workflows/release.yml ADDED
@@ -0,0 +1,179 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ name: Create Release and Publish to PyPI
+
+ on:
+   push:
+     tags:
+       - 'v*.*.*' # Trigger on tags like v0.1.0, v1.0.0
+
+ # Sets up the environment variables
+ env:
+   UV_VERSION: "0.8.0"
+   PYTHON_VERSION: "3.10"
+
+ jobs:
+   # This job builds the Python package and publishes it to PyPI
+   build-and-publish:
+     name: Build and publish Python distributions
+     runs-on: ubuntu-latest
+     outputs:
+       version: ${{ steps.extract_info.outputs.tag_version }}
+     permissions:
+       contents: write
+       id-token: write
+
+     steps:
+       - name: Checkout code
+         uses: actions/checkout@v4
+         with:
+           persist-credentials: false
+
+       - name: Set up Python
+         uses: actions/setup-python@v5
+         with:
+           python-version: '3.10'
+
+       - name: Extract Version
+         id: extract_info
+         # Extract version from tag (e.g., v0.1.0 -> 0.1.0)
+         # zizmor: ignore[template-injection]
+         run: |
+           VERSION=${{ github.ref_name }}
+           VERSION_NUMBER=${VERSION#v}
+           echo "tag_version=$VERSION_NUMBER" >> $GITHUB_OUTPUT
+       - name: Check if version matches pyproject.toml
+         if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref, '-')
+         # zizmor: ignore[template-injection]
+         run: |
+           TAG_VERSION=${{ steps.extract_info.outputs.tag_version }}
+
+           PYPROJECT_VERSION=$(grep '^version = ' pyproject.toml | awk -F' = ' '{print $2}' | tr -d '"')
+
+           if [[ "$TAG_VERSION" != "$PYPROJECT_VERSION" ]]; then
+             echo "Error: Tag version ($TAG_VERSION) does not match pyproject.toml version ($PYPROJECT_VERSION)." >&2
+             exit 1
+           else
+             echo "Tag version matches pyproject.toml version: $TAG_VERSION. Proceeding with release."
+           fi
+
+       - name: Check if version exists on PyPI
+         # zizmor: ignore[template-injection]
+         run: |
+           NEW_VERSION=${{ steps.extract_info.outputs.tag_version }}
+
+           response=$(curl -s "https://pypi.org/pypi/lerobot/$NEW_VERSION/json")
+           if echo "$response" | grep -q "message"; then
+             echo "Version $NEW_VERSION is not yet on PyPI. Proceeding with release."
+           else
+             echo "Error: Version $NEW_VERSION already exists on PyPI. Aborting."
+             exit 1
+           fi
+
+       - name: Remove Tags with Git dependencies
+         # TODO(Steven): Temporary patch to remove libero and pi from the PyPI 0.4.0 release due to their reliance on git dependencies.
+         run: |
+           echo "::info:: Checking for Git dependencies to remove from pyproject.toml..."
+           grep -E '@ git\+https|lerobot\[pi\]|lerobot\[libero\]' pyproject.toml | sed 's/^/::warning:: Removing line: /' || true
+           sed -E -i '/@ git\+https|lerobot\[pi\]|lerobot\[libero\]/d' pyproject.toml
+           echo "::info:: Git dependencies removed. Proceeding with build."
+
+       - name: Install build dependencies
+         run: python -m pip install build
+
+       - name: Build package
+         run: python -m build
+
+       - name: Create GitHub Release
+         env:
+           GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+         # zizmor: ignore[template-injection]
+         run: |
+           gh release create ${{ github.ref_name }} \
+             --title "Release ${{ github.ref_name }}" \
+             --generate-notes \
+             --draft=$([[ "${{ github.ref_name }}" == *-* ]] && echo true || echo false) \
+             --prerelease=$([[ "${{ github.ref_name }}" == *-* ]] && echo true || echo false) \
+             ./dist/*
+
+       - name: Publish to TestPyPI for pre-releases
+         # True for tags like 'v0.2.0-rc1'
+         if: startsWith(github.ref, 'refs/tags/v') && contains(github.ref, '-')
+         uses: pypa/gh-action-pypi-publish@v1.13.0 # zizmor: ignore[unpinned-uses, use-trusted-publishing]
+         with:
+           repository-url: https://test.pypi.org/legacy/
+           verbose: true
+           print-hash: true
+
+       - name: Publish to PyPI
+         if: startsWith(github.ref, 'refs/tags/v') && !contains(github.ref, '-')
+         uses: pypa/gh-action-pypi-publish@v1.13.0 # zizmor: ignore[unpinned-uses, use-trusted-publishing]
+         with:
+           verbose: true
+           print-hash: true
+
+   # This job runs end-to-end tests on the release
+   test-release:
+     name: Test Release
+     needs: [build-and-publish]
+     runs-on: ubuntu-latest
+     permissions:
+       contents: read
+     env:
+       MUJOCO_GL: egl
+     steps:
+       - uses: actions/checkout@v4
+         with:
+           lfs: true
+           persist-credentials: false
+       - name: Install apt dependencies
+         run: |
+           sudo apt-get update && sudo apt-get install -y build-essential \
+           git curl libglib2.0-0 libegl1-mesa-dev ffmpeg libusb-1.0-0-dev \
+           speech-dispatcher libgeos-dev portaudio19-dev
+       - name: Setup uv and Python
+         uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
+         with:
+           enable-cache: true # zizmor: ignore[cache-poisoning]
+           version: ${{ env.UV_VERSION }}
+           python-version: ${{ env.PYTHON_VERSION }}
+       - name: Create uv virtual environment
+         run: uv venv
+       - name: Install lerobot release
+         # zizmor: ignore[template-injection]
+         run: |
+           VERSION="${{ needs.build-and-publish.outputs.version }}"
+           if [[ "$VERSION" == *-* ]]; then
+             BASE_VERSION="${VERSION%%-*}"
+             echo "Installing pre-release version $BASE_VERSION from TestPyPI..."
+             uv pip install \
+               --index-url https://test.pypi.org/simple/ \
+               --extra-index-url https://pypi.org/simple \
+               --index-strategy unsafe-best-match \
+               "lerobot[all]==$BASE_VERSION"
+           else
+             echo "Installing release version $VERSION from PyPI..."
+             uv pip install "lerobot[all]==$VERSION"
+           fi
+       - name: Check lerobot version
+         run: uv run python -c "import lerobot; print(lerobot.__version__)"
+
+       - name: Run end-to-end tests
+         run: uv run make test-end-to-end
+
+
+ # TODO(Steven): Publish draft/pre-release and to test pypi weekly
+ # TODO(Steven): Separate build and publish job
+ # TODO(Steven): Tag documentation with the same version as the package
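Reading the trigger and the guard steps together, cutting a release reduces to tagging a commit whose pyproject.toml version matches the tag; a sketch with a hypothetical version, not an official release procedure:

```bash
# pyproject.toml must already contain: version = "0.4.0", or the
# "Check if version matches pyproject.toml" step fails the run
git tag v0.4.0            # a plain tag publishes to PyPI
git push origin v0.4.0
# a tag containing '-' (e.g. v0.4.0-rc1) instead creates a draft
# pre-release on GitHub and publishes to TestPyPI
```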
.github/workflows/security.yml ADDED
@@ -0,0 +1,54 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles secret scanning using TruffleHog to detect sensitive information in the codebase.
+ name: Security
+ permissions:
+   contents: read
+
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Triggers the workflow on push events to main
+   push:
+     branches:
+       - main
+
+   # Triggers the workflow on pull request events targeting main
+   pull_request:
+     branches:
+       - main
+
+ # Ensures that only the latest commit for a PR or branch is built, canceling older runs.
+ concurrency:
+   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
+   cancel-in-progress: true
+
+ jobs:
+   # This job runs TruffleHog to scan the full history of the repository for secrets.
+   trufflehog:
+     name: Secret Leaks Scan
+     runs-on: ubuntu-latest
+     steps:
+       - name: Checkout code
+         uses: actions/checkout@v4 # zizmor: ignore[unpinned-uses]
+         with:
+           fetch-depth: 0
+           persist-credentials: false
+
+       - name: Secret Scanning
+         uses: trufflesecurity/trufflehog@v3.90.0 # zizmor: ignore[unpinned-uses]
+         with:
+           extra_args: --only-verified
.github/workflows/stale.yml ADDED
@@ -0,0 +1,70 @@
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ # http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ # This workflow handles closing stale issues and PRs.
+ name: Stale
+ on:
+   # Allows running this workflow manually from the Actions tab
+   workflow_dispatch:
+
+   # Runs at 02:00
+   schedule:
+     - cron: "0 2 * * *"
+
+ env:
+   CLOSE_ISSUE_MESSAGE: >
+     This issue was closed because it has been stalled for 14 days with no activity.
+     Feel free to reopen if it is still relevant, or to ping a collaborator if you have any questions.
+   CLOSE_PR_MESSAGE: >
+     This PR was closed because it has been stalled for 21 days with no activity.
+     Feel free to reopen if it is still relevant, or to ping a collaborator if you have any questions.
+   WARN_ISSUE_MESSAGE: >
+     This issue has been automatically marked as stale because it has not had
+     recent activity (6 months). It will be closed if no further activity occurs.
+     Any change, comment or update to this issue will reset this count.
+     Thank you for your contributions.
+   WARN_PR_MESSAGE: >
+     This PR has been automatically marked as stale because it has not had
+     recent activity (1 year). It will be closed if no further activity occurs.
+     Any change, comment or update to this PR will reset this count.
+     Thank you for your contributions.
+
+ jobs:
+   # This job runs the actions/stale action to close stale issues and PRs.
+   stale:
+     name: Close Stale Issues and PRs
+     runs-on: ubuntu-latest
+     permissions:
+       actions: write
+       contents: write # only for delete-branch option
+       issues: write
+       pull-requests: write
+     steps:
+       - uses: actions/stale@v10
+         with:
+           repo-token: ${{ secrets.GITHUB_TOKEN }}
+           stale-issue-label: stale
+           stale-pr-label: stale
+           exempt-issue-labels: never-stale
+           exempt-pr-labels: never-stale
+           days-before-issue-stale: 180
+           days-before-issue-close: 14
+           days-before-pr-stale: 365
+           days-before-pr-close: 21
+           delete-branch: true
+           close-issue-message: ${{ env.CLOSE_ISSUE_MESSAGE }}
+           close-pr-message: ${{ env.CLOSE_PR_MESSAGE }}
+           stale-issue-message: ${{ env.WARN_ISSUE_MESSAGE }}
+           stale-pr-message: ${{ env.WARN_PR_MESSAGE }}
+           operations-per-run: 500
.github/workflows/unbound_deps_tests.yml ADDED
@@ -0,0 +1,183 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This workflow handles full testing with unboud dependencies versions.
16
+ name: Unbound Dependency Tests
17
+
18
+ on:
19
+ # Allows running this workflow manually from the Actions tab
20
+ workflow_dispatch:
21
+
22
+ # Run on the 1st and 15th of every month at 09:00 UTC
23
+ schedule:
24
+ - cron: '0 2 1,15 * *'
25
+
26
+ permissions:
27
+ contents: read
28
+
29
+ # Sets up the environment variables
30
+ env:
31
+ UV_VERSION: "0.8.0"
32
+ PYTHON_VERSION: "3.10"
33
+ DOCKER_IMAGE_NAME: huggingface/lerobot-gpu:unbound
34
+
35
+ # Ensures that only the latest action is built, canceling older runs.
36
+ concurrency:
37
+ group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
38
+ cancel-in-progress: true
39
+
40
+ jobs:
41
+
42
+ # This job runs the E2E tests + pytest with all unbound extras
43
+ full-tests:
44
+ name: Full Unbound Tests
45
+ runs-on: ubuntu-latest
46
+ env:
47
+ MUJOCO_GL: egl
48
+ steps:
49
+ - uses: actions/checkout@v4
50
+ with:
51
+ lfs: true
52
+ persist-credentials: false
53
+
54
+ - name: Install apt dependencies
55
+ run: |
56
+ sudo apt-get update && sudo apt-get install -y build-essential \
57
+ git curl libglib2.0-0 libegl1-mesa-dev ffmpeg libusb-1.0-0-dev \
58
+ speech-dispatcher libgeos-dev portaudio19-dev
59
+
60
+ - name: Setup uv and Python
61
+ uses: astral-sh/setup-uv@v6 # zizmor: ignore[unpinned-uses]
62
+ with:
63
+ enable-cache: true
64
+ version: ${{ env.UV_VERSION }}
65
+ python-version: ${{ env.PYTHON_VERSION }}
66
+
67
+ - name: Unbound dependencies
68
+ run: |
69
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml
70
+ echo "Dependencies unbound:" && cat pyproject.toml
71
+
72
+ - name: Install lerobot with all extras
73
+ run: uv sync --all-extras
74
+
75
+ - name: Run pytest (all extras)
76
+ run: uv run pytest tests -vv
77
+
78
+ - name: Run end-to-end tests
79
+ run: uv run make test-end-to-end
80
+
81
+ # This job builds a GPU enabled image for testing
82
+ build-and-push-docker:
83
+ name: Build and Push Docker
84
+ runs-on:
85
+ group: aws-general-8-plus
86
+ outputs:
87
+ image_tag: ${{ env.DOCKER_IMAGE_NAME }}
88
+ env:
89
+ GITHUB_REF: ${{ github.ref }}
90
+ steps:
91
+ - name: Install Git LFS
92
+ run: |
93
+ sudo apt-get update
94
+ sudo apt-get install git-lfs
95
+ git lfs install
96
+ - uses: actions/checkout@v4
97
+ with:
98
+ lfs: true
99
+ persist-credentials: false
100
+ - name: Set up Docker Buildx
101
+ uses: docker/setup-buildx-action@v3 # zizmor: ignore[unpinned-uses]
102
+ with:
103
+ cache-binary: false
104
+ - name: Login to Docker Hub
105
+ uses: docker/login-action@v3 # zizmor: ignore[unpinned-uses]
106
+ with:
107
+ username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
108
+ password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
109
+ - name: Build and push Docker image
110
+ uses: docker/build-push-action@v6 # zizmor: ignore[unpinned-uses]
111
+ with:
112
+ context: .
113
+ file: ./docker/Dockerfile.internal
114
+ push: true
115
+ tags: ${{ env.DOCKER_IMAGE_NAME }}
116
+ build-args: |
117
+ UNBOUND_DEPS=true
118
+
+   # This job runs pytest with all unbound extras on a GPU-enabled host
+   # It runs every time a test image is created
+   gpu-tests:
+     name: GPU Unbound Tests
+     needs: [build-and-push-docker]
+     runs-on:
+       group: aws-g6-4xlarge-plus
+     env:
+       HF_HOME: /home/user_lerobot/.cache/huggingface
+       HF_LEROBOT_HOME: /home/user_lerobot/.cache/huggingface/lerobot
+       TORCH_HOME: /home/user_lerobot/.cache/torch
+       TRITON_CACHE_DIR: /home/user_lerobot/.cache/triton
+     container:
+       image: ${{ needs.build-and-push-docker.outputs.image_tag }} # zizmor: ignore[unpinned-images]
+       options: --gpus all --shm-size "16gb"
+       credentials:
+         username: ${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}
+         password: ${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}
+     defaults:
+       run:
+         shell: bash
+         working-directory: /lerobot
+     steps:
+       - name: Run pytest on GPU
+         run: pytest tests -vv
+       - name: Run end-to-end tests
+         run: make test-end-to-end
+
+   # This job deletes the test image created above
+   # It runs after the gpu-tests job has finished
+   delete-unbound-image:
+     name: Delete Unbound Image
+     needs: [gpu-tests, build-and-push-docker]
+     if: always() && needs.build-and-push-docker.result == 'success'
+     runs-on: ubuntu-latest
+     steps:
+       - name: Get Docker Hub Token and Delete Image
+         # zizmor: ignore[template-injection]
+         run: |
+           IMAGE_NAME=$(echo "${{ needs.build-and-push-docker.outputs.image_tag }}" | cut -d':' -f1)
+           IMAGE_TAG=$(echo "${{ needs.build-and-push-docker.outputs.image_tag }}" | cut -d':' -f2)
+
+           echo "Attempting to delete image: $IMAGE_NAME:$IMAGE_TAG"
+
+           TOKEN=$(curl -s -H "Content-Type: application/json" \
+             -X POST \
+             -d '{"username": "${{ secrets.DOCKERHUB_LEROBOT_USERNAME }}", "password": "${{ secrets.DOCKERHUB_LEROBOT_PASSWORD }}"}' \
+             https://hub.docker.com/v2/users/login/ | jq -r .token)
+
+           if [ "$TOKEN" == "null" ] || [ -z "$TOKEN" ]; then
+             echo "::error::Failed to get Docker Hub token."
+             exit 1
+           fi
+
+           HTTP_RESPONSE=$(curl -s -o /dev/null -w "%{http_code}" \
+             -H "Authorization: JWT ${TOKEN}" \
+             -X DELETE \
+             https://hub.docker.com/v2/repositories/${IMAGE_NAME}/tags/${IMAGE_TAG}/)
+
+           if [ "$HTTP_RESPONSE" -eq 204 ]; then
+             echo "Successfully deleted Docker image tag: $IMAGE_NAME:$IMAGE_TAG"
+           else
+             echo "::error::Failed to delete Docker image. HTTP status: $HTTP_RESPONSE"
+             exit 1
+           fi
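After this job runs, a quick way to confirm that the tag is gone is Docker Hub's public tags endpoint (a sketch; no authentication is needed for public repositories):

```bash
# Expect HTTP 404 once the "unbound" tag has been deleted, 200 while it exists.
curl -s -o /dev/null -w "%{http_code}\n" \
  https://hub.docker.com/v2/repositories/huggingface/lerobot-gpu/tags/unbound/
```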
.gitignore ADDED
@@ -0,0 +1,179 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ ### Environments & Dependencies ###
+ .env
+ .venv
+ env/
+ venv/
+ env.bak/
+ venv.bak/
+ .python-version
+ __pypackages__/
+ node_modules/
+
+ # Lock files
+ poetry.lock
+ uv.lock
+ Pipfile.lock
+
+ ### Build & Distribution ###
+ build/
+ dist/
+ sdist/
+ wheels/
+ downloads/
+ eggs/
+ .eggs/
+ parts/
+ var/
+ pip-wheel-metadata/
+ share/python-wheels/
+ develop-eggs/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+ lib/
+ lib64/
+
+ # PyInstaller
+ *.manifest
+ *.spec
+
+ ### Compiled & Cached Files ###
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+ *.sage.py
+ .cache/
+ .ruff_cache/
+ .mypy_cache/
+ .pyre/
+ .pytype/
+ cython_debug/
+
+ ### Testing & Coverage ###
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .pytest_cache/
+ .hypothesis/
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ !tests/artifacts
+
+ ### Logs & Temporary Files ###
+ logs/
+ tmp/
+ *.log
+ pip-log.txt
+ pip-delete-this-directory.txt
+ celerybeat-schedule
+ celerybeat.pid
+
+ ### IDE & Editor Config ###
+ # VS Code
+ .vscode/
+ .devcontainer/
+
+ # JetBrains / PyCharm
+ .idea/
+
+ # Spyder
+ .spyderproject
+ .spyproject
+
+ # Rope
+ .ropeproject
+
+ # Vim
+ *.swp
+
+ # Other
+ *~
+
+ ### OS Specific ###
+ # macOS
+ .DS_Store
+
+ # Windows
+ Thumbs.db
+
+ ### Framework & Tool Specific ###
+
+ .Python
+
+ # Django
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask
+ instance/
+ .webassets-cache
+
+ # Scrapy
+ .scrapy
+
+ # Jupyter
+ .ipynb_checkpoints/
+ profile_default/
+ ipython_config.py
+
+ # Sphinx
+ docs/_build/
+
+ # MkDocs
+ /site
+
+ # PyBuilder
+ .pybuilder/
+ target/
+
+ # mypy
+ .dmypy.json
+ dmypy.json
+
+ ### HPC & Slurm ###
+ nautilus/*.yaml
+ *.key
+ sbatch*.sh
+
+ ### Miscellaneous ###
+ # W&B
+ wandb/
+
+ # Dev scripts
+ .dev/
+
+ # Data folders
+ data/
+ outputs/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Dev folders
+ .cache/*
+ *.stl
+ *.urdf
+ *.xml
+ *.part
.pre-commit-config.yaml ADDED
@@ -0,0 +1,108 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ default_language_version:
+   python: python3.10
+
+ exclude: "tests/artifacts/.*\\.safetensors$"
+
+ repos:
+   ##### Meta #####
+   - repo: meta
+     hooks:
+       - id: check-useless-excludes
+       - id: check-hooks-apply
+
+   ##### General Code Quality & Formatting #####
+   - repo: https://github.com/pre-commit/pre-commit-hooks
+     rev: v6.0.0
+     hooks:
+       - id: check-added-large-files
+         args: ['--maxkb=1024']
+       - id: debug-statements
+       - id: check-merge-conflict
+       - id: check-case-conflict
+       - id: check-yaml
+       - id: check-toml
+       - id: end-of-file-fixer
+       - id: trailing-whitespace
+
+   - repo: https://github.com/astral-sh/ruff-pre-commit
+     rev: v0.14.1
+     hooks:
+       - id: ruff-format
+       - id: ruff
+         args: [--fix, --exit-non-zero-on-fix]
+
+   - repo: https://github.com/adhtruong/mirrors-typos
+     rev: v1.38.1
+     hooks:
+       - id: typos
+         args: [--force-exclude]
+
+   - repo: https://github.com/asottile/pyupgrade
+     rev: v3.21.0
+     hooks:
+       - id: pyupgrade
+         args: [--py310-plus]
+
+   ##### Markdown Quality #####
+   - repo: https://github.com/rbubley/mirrors-prettier
+     rev: v3.6.2
+     hooks:
+       - id: prettier
+         name: Format Markdown with Prettier
+         types_or: [markdown, mdx]
+         args: [--prose-wrap=preserve]
+
+   ##### Security #####
+   - repo: https://github.com/gitleaks/gitleaks
+     rev: v8.28.0
+     hooks:
+       - id: gitleaks
+
+   - repo: https://github.com/woodruffw/zizmor-pre-commit
+     rev: v1.15.2
+     hooks:
+       - id: zizmor
+
+   - repo: https://github.com/PyCQA/bandit
+     rev: 1.8.6
+     hooks:
+       - id: bandit
+         args: ["-c", "pyproject.toml"]
+         additional_dependencies: ["bandit[toml]"]
+
+   ##### Static Analysis & Typing #####
+   - repo: https://github.com/pre-commit/mirrors-mypy
+     rev: v1.18.2
+     hooks:
+       - id: mypy
+         args: [--config-file=pyproject.toml]
+         exclude: ^(examples|benchmarks|tests)/
+
+   # TODO(Steven): Uncomment when ready to use
+   ##### Docstring Checks #####
+   # - repo: https://github.com/akaihola/darglint2
+   #   rev: v1.8.2
+   #   hooks:
+   #     - id: darglint2
+   #       args: ["--docstring-style", "google", "-v", "2"]
+   #       exclude: ^tests/.*$
+
+   # - repo: https://github.com/econchick/interrogate
+   #   rev: 1.7.0
+   #   hooks:
+   #     - id: interrogate
+   #       args: ["-vv", "--config=pyproject.toml"]
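Any single hook from this config can also be run on demand with the standard `pre-commit` CLI; the hook id (`ruff`, `typos`, `bandit`, ...) is the `id:` field above. For example:

```bash
# Run only the ruff lint hook across the whole repository.
pre-commit run ruff --all-files
```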
CODE_OF_CONDUCT.md ADDED
@@ -0,0 +1,132 @@
+ # Contributor Covenant Code of Conduct
+
+ ## Our Pledge
+
+ We as members, contributors, and leaders pledge to make participation in our
+ community a harassment-free experience for everyone, regardless of age, body
+ size, visible or invisible disability, ethnicity, sex characteristics, gender
+ identity and expression, level of experience, education, socio-economic status,
+ nationality, personal appearance, race, caste, color, religion, or sexual
+ identity and orientation.
+
+ We pledge to act and interact in ways that contribute to an open, welcoming,
+ diverse, inclusive, and healthy community.
+
+ ## Our Standards
+
+ Examples of behavior that contributes to a positive environment for our
+ community include:
+
+ - Demonstrating empathy and kindness toward other people
+ - Being respectful of differing opinions, viewpoints, and experiences
+ - Giving and gracefully accepting constructive feedback
+ - Accepting responsibility and apologizing to those affected by our mistakes,
+   and learning from the experience
+ - Focusing on what is best not just for us as individuals, but for the overall
+   community
+
+ Examples of unacceptable behavior include:
+
+ - The use of sexualized language or imagery, and sexual attention or advances of
+   any kind
+ - Trolling, insulting or derogatory comments, and personal or political attacks
+ - Public or private harassment
+ - Publishing others' private information, such as a physical or email address,
+   without their explicit permission
+ - Other conduct which could reasonably be considered inappropriate in a
+   professional setting
+
+ ## Enforcement Responsibilities
+
+ Community leaders are responsible for clarifying and enforcing our standards of
+ acceptable behavior and will take appropriate and fair corrective action in
+ response to any behavior that they deem inappropriate, threatening, offensive,
+ or harmful.
+
+ Community leaders have the right and responsibility to remove, edit, or reject
+ comments, commits, code, wiki edits, issues, and other contributions that are
+ not aligned to this Code of Conduct, and will communicate reasons for moderation
+ decisions when appropriate.
+
+ ## Scope
+
+ This Code of Conduct applies within all community spaces, and also applies when
+ an individual is officially representing the community in public spaces.
+ Examples of representing our community include using an official email address,
+ posting via an official social media account, or acting as an appointed
+ representative at an online or offline event.
+
+ ## Enforcement
+
+ Instances of abusive, harassing, or otherwise unacceptable behavior may be
+ reported to the community leaders responsible for enforcement at
+ [feedback@huggingface.co](mailto:feedback@huggingface.co).
+ All complaints will be reviewed and investigated promptly and fairly.
+
+ All community leaders are obligated to respect the privacy and security of the
+ reporter of any incident.
+
+ ## Enforcement Guidelines
+
+ Community leaders will follow these Community Impact Guidelines in determining
+ the consequences for any action they deem in violation of this Code of Conduct:
+
+ ### 1. Correction
+
+ **Community Impact**: Use of inappropriate language or other behavior deemed
+ unprofessional or unwelcome in the community.
+
+ **Consequence**: A private, written warning from community leaders, providing
+ clarity around the nature of the violation and an explanation of why the
+ behavior was inappropriate. A public apology may be requested.
+
+ ### 2. Warning
+
+ **Community Impact**: A violation through a single incident or series of
+ actions.
+
+ **Consequence**: A warning with consequences for continued behavior. No
+ interaction with the people involved, including unsolicited interaction with
+ those enforcing the Code of Conduct, for a specified period of time. This
+ includes avoiding interactions in community spaces as well as external channels
+ like social media. Violating these terms may lead to a temporary or permanent
+ ban.
+
+ ### 3. Temporary Ban
+
+ **Community Impact**: A serious violation of community standards, including
+ sustained inappropriate behavior.
+
+ **Consequence**: A temporary ban from any sort of interaction or public
+ communication with the community for a specified period of time. No public or
+ private interaction with the people involved, including unsolicited interaction
+ with those enforcing the Code of Conduct, is allowed during this period.
+ Violating these terms may lead to a permanent ban.
+
+ ### 4. Permanent Ban
+
+ **Community Impact**: Demonstrating a pattern of violation of community
+ standards, including sustained inappropriate behavior, harassment of an
+ individual, or aggression toward or disparagement of classes of individuals.
+
+ **Consequence**: A permanent ban from any sort of public interaction within the
+ community.
+
+ ## Attribution
+
+ This Code of Conduct is adapted from the [Contributor Covenant][homepage],
+ version 2.1, available at
+ [https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
+
+ Community Impact Guidelines were inspired by
+ [Mozilla's code of conduct enforcement ladder][Mozilla CoC].
+
+ For answers to common questions about this code of conduct, see the FAQ at
+ [https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
+ [https://www.contributor-covenant.org/translations][translations].
+
+ [homepage]: https://www.contributor-covenant.org
+ [v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
+ [Mozilla CoC]: https://github.com/mozilla/diversity
+ [FAQ]: https://www.contributor-covenant.org/faq
+ [translations]: https://www.contributor-covenant.org/translations
CONTRIBUTING.md ADDED
@@ -0,0 +1,323 @@
+ # How to contribute to 🤗 LeRobot?
+
+ Everyone is welcome to contribute, and we value everybody's contribution. Code
+ is thus not the only way to help the community. Answering questions, helping
+ others, reaching out, and improving the documentation are immensely valuable to
+ the community.
+
+ It also helps us if you spread the word: reference the library from blog posts
+ on the awesome projects it made possible, shout out on Twitter when it has
+ helped you, or simply ⭐️ the repo to say "thank you".
+
+ Whichever way you choose to contribute, please be mindful to respect our
+ [code of conduct](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md).
+
+ ## You can contribute in so many ways!
+
+ Some of the ways you can contribute to 🤗 LeRobot:
+
+ - Fixing outstanding issues with the existing code.
+ - Implementing new models, datasets or simulation environments.
+ - Contributing to the examples or to the documentation.
+ - Submitting issues related to bugs or desired new features.
+
+ Following the guides below, feel free to open issues and PRs and to coordinate your efforts with the community on our [Discord Channel](https://discord.gg/VjFz58wn3R). For specific inquiries, reach out to [Remi Cadene](mailto:remi.cadene@huggingface.co).
+
+ If you are not sure how to contribute or want to know which features we are working on next, have a look at this project page: [LeRobot TODO](https://github.com/orgs/huggingface/projects/46)
+
+ ## Submitting a new issue or feature request
+
+ Do your best to follow these guidelines when submitting an issue or a feature
+ request. It will make it easier for us to come back to you quickly and with good
+ feedback.
+
+ ### Did you find a bug?
+
+ The 🤗 LeRobot library is robust and reliable thanks to the users who notify us of
+ the problems they encounter. So thank you for reporting an issue.
+
+ First, we would really appreciate it if you could **make sure the bug was not
+ already reported** (use the search bar on GitHub under Issues).
+
+ Did not find it? :( So we can act quickly on it, please follow these steps:
+
+ - Include your **OS type and version**, and the versions of **Python** and **PyTorch**.
+ - Include a short, self-contained code snippet that allows us to reproduce the bug in
+   less than 30s.
+ - Include the full traceback if an exception is raised.
+ - Attach any other additional information, like screenshots, you think may help.
+
+ ### Do you want a new feature?
+
+ A good feature request addresses the following points:
+
+ 1. Motivation first:
+
+    - Is it related to a problem/frustration with the library? If so, please explain
+      why. Providing a code snippet that demonstrates the problem is best.
+    - Is it related to something you would need for a project? We'd love to hear
+      about it!
+    - Is it something you worked on and think could benefit the community?
+      Awesome! Tell us what problem it solved for you.
+
+ 2. Write a _paragraph_ describing the feature.
+ 3. Provide a **code snippet** that demonstrates its future use.
+ 4. In case this is related to a paper, please attach a link.
+ 5. Attach any additional information (drawings, screenshots, etc.) you think may help.
+
+ If your issue is well written we're already 80% of the way there by the time you
+ post it.
+
+ ## Adding new policies, datasets or environments
+
+ Look at our implementations for [datasets](./src/lerobot/datasets/), [policies](./src/lerobot/policies/),
+ environments ([aloha](https://github.com/huggingface/gym-aloha),
+ [pusht](https://github.com/huggingface/gym-pusht))
+ and follow the same API design.
+
+ When implementing a new dataset loadable with LeRobotDataset, follow these steps:
+
+ - Update `available_datasets_per_env` in `lerobot/__init__.py`
+
+ When implementing a new environment (e.g. `gym_aloha`), follow these steps:
+
+ - Update `available_tasks_per_env` and `available_datasets_per_env` in `lerobot/__init__.py`
+
+ When implementing a new policy class (e.g. `DiffusionPolicy`), follow these steps:
+
+ - Update `available_policies` and `available_policies_per_env` in `lerobot/__init__.py`
+ - Set the required `name` class attribute.
+ - Update the variables in `tests/test_available.py` by importing your new Policy class
+
+ ## Submitting a pull request (PR)
+
+ Before writing code, we strongly advise you to search through the existing PRs or
+ issues to make sure that nobody is already working on the same thing. If you are
+ unsure, it is always a good idea to open an issue to get some feedback.
+
+ You will need basic `git` proficiency to be able to contribute to
+ 🤗 LeRobot. `git` is not the easiest tool to use but it has the greatest
+ manual. Type `git --help` in a shell and enjoy. If you prefer books, [Pro
+ Git](https://git-scm.com/book/en/v2) is a very good reference.
+
+ Follow these steps to start contributing:
+
+ 1. Fork the [repository](https://github.com/huggingface/lerobot) by
+    clicking on the 'Fork' button on the repository's page. This creates a copy of the code
+    under your GitHub user account.
+
+ 2. Clone your fork to your local disk, and add the base repository as a remote. The following command
+    assumes you have your public SSH key uploaded to GitHub. See the following guide for more
+    [information](https://docs.github.com/en/repositories/creating-and-managing-repositories/cloning-a-repository).
+
+    ```bash
+    git clone git@github.com:<your Github handle>/lerobot.git
+    cd lerobot
+    git remote add upstream https://github.com/huggingface/lerobot.git
+    ```
+
+ 3. Create a new branch to hold your development changes, and do this for every new PR you work on.
+
+    Start by synchronizing your `main` branch with the `upstream/main` branch (more details in the [GitHub Docs](https://docs.github.com/en/github/collaborating-with-issues-and-pull-requests/syncing-a-fork)):
+
+    ```bash
+    git checkout main
+    git fetch upstream
+    git rebase upstream/main
+    ```
+
+    Once your `main` branch is synchronized, create a new branch from it:
+
+    ```bash
+    git checkout -b a-descriptive-name-for-my-changes
+    ```
+
+    🚨 **Do not** work on the `main` branch.
+
+ 4. For development, we advise using a tool like `poetry` or `uv` instead of just `pip` to easily track dependencies.
+    Follow the instructions to [install poetry](https://python-poetry.org/docs/#installation) (use a version >=2.1.0) or to [install uv](https://docs.astral.sh/uv/getting-started/installation/#installation-methods) if you don't have one of them already.
+
+    Set up a development environment with conda:
+
+    ```bash
+    conda create -y -n lerobot-dev python=3.10 && conda activate lerobot-dev
+    ```
+
+    If you're using `uv`, it can manage python versions so you can instead do:
+
+    ```bash
+    uv venv --python 3.10 && source .venv/bin/activate
+    ```
+
+    To develop on 🤗 LeRobot, you will at least need to install the `dev` and `test` extra dependencies along with the core library:
+
+    using `poetry`
+
+    ```bash
+    poetry sync --extras "dev test"
+    ```
+
+    using `uv`
+
+    ```bash
+    uv sync --extra dev --extra test
+    ```
+
+    You can also install the project with all its dependencies (including environments):
+
+    using `poetry`
+
+    ```bash
+    poetry sync --all-extras
+    ```
+
+    using `uv`
+
+    ```bash
+    uv sync --all-extras
+    ```
+
+    > **Note:** If you don't install simulation environments with `--all-extras`, the tests that require them will be skipped when running the pytest suite locally. However, they _will_ be tested in the CI. In general, we advise you to install everything and test locally before pushing.
+
+    Whichever command you chose to install the project (e.g. `poetry sync --all-extras`), you should run it again when pulling code with an updated version of `pyproject.toml` and `poetry.lock` in order to synchronize your virtual environment with the new dependencies.
+
+    The equivalent of `pip install some-package` would just be:
+
+    using `poetry`
+
+    ```bash
+    poetry add some-package
+    ```
+
+    using `uv`
+
+    ```bash
+    uv add some-package
+    ```
+
+    When making changes to the poetry sections of the `pyproject.toml`, you should run the following command to lock dependencies.
+
+    using `poetry`
+
+    ```bash
+    poetry lock
+    ```
+
+    using `uv`
+
+    ```bash
+    uv lock
+    ```
+
+ 5. Develop the features on your branch.
+
+    As you work on the features, you should make sure that the test suite
+    passes. You should run the tests impacted by your changes like this:
+
+    ```bash
+    pytest tests/<TEST_TO_RUN>.py
+    ```
+
+ 6. Follow our style.
+
+    `lerobot` relies on `ruff` to format its source code
+    consistently. Set up [`pre-commit`](https://pre-commit.com/) to run these checks
+    automatically as Git commit hooks.
+
+    Install `pre-commit` hooks:
+
+    ```bash
+    pre-commit install
+    ```
+
+    You can run these hooks on staged files whenever you need with:
+
+    ```bash
+    pre-commit
+    ```
+
+    Once you're happy with your changes, add changed files using `git add` and
+    make a commit with `git commit` to record your changes locally:
+
+    ```bash
+    git add modified_file.py
+    git commit
+    ```
+
+    Note: if you have already committed some changes with incorrect formatting, you can reformat them with:
+
+    ```bash
+    pre-commit run --all-files
+    ```
+
+    Please write [good commit messages](https://chris.beams.io/posts/git-commit/).
+
+    It is a good idea to sync your copy of the code with the original
+    repository regularly. This way you can quickly account for changes:
+
+    ```bash
+    git fetch upstream
+    git rebase upstream/main
+    ```
+
+    Push the changes to your account using:
+
+    ```bash
+    git push -u origin a-descriptive-name-for-my-changes
+    ```
+
+ 7. Once you are satisfied (**and the checklist below is happy too**), go to the
+    webpage of your fork on GitHub. Click on 'Pull request' to send your changes
+    to the project maintainers for review.
+
+ 8. It's ok if maintainers ask you for changes. It happens to core contributors
+    too! To make the changes visible in the pull request, work in your local
+    branch and push them to your fork. They will automatically appear in
+    the pull request.
+
+ ### Checklist
+
+ 1. The title of your pull request should be a summary of its contribution;
+ 2. If your pull request addresses an issue, please mention the issue number in
+    the pull request description to make sure they are linked (and people
+    consulting the issue know you are working on it);
+ 3. To indicate a work in progress please prefix the title with `[WIP]`, or preferably mark
+    the PR as a draft PR. These are useful to avoid duplicated work, and to differentiate
+    it from PRs ready to be merged;
+ 4. Make sure existing tests pass.
+
+ ### Tests
+
+ An extensive test suite is included to test the library behavior and several examples. Library tests can be found in the [tests folder](https://github.com/huggingface/lerobot/tree/main/tests).
+
+ Install [git lfs](https://git-lfs.com/) to retrieve test artifacts (if you don't have it already).
+
+ On Mac:
+
+ ```bash
+ brew install git-lfs
+ git lfs install
+ ```
+
+ On Ubuntu:
+
+ ```bash
+ sudo apt-get install git-lfs
+ git lfs install
+ ```
+
+ Pull artifacts if they're not in [tests/artifacts](tests/artifacts):
+
+ ```bash
+ git lfs pull
+ ```
+
+ We use `pytest` in order to run the tests. From the root of the
+ repository, here's how to run tests with `pytest` for the library:
+
+ ```bash
+ python -m pytest -sv ./tests
+ ```
+
+ You can specify a smaller set of tests in order to test only the feature
+ you're working on.
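For instance, you can point `pytest` at a single module (such as `tests/test_available.py`, mentioned above) or filter tests by keyword; the `-k` expression below is illustrative:

```bash
# Run only one test module.
python -m pytest -sv tests/test_available.py

# Run only the tests whose names match a keyword expression.
python -m pytest -sv tests -k "dataset"
```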
LICENSE ADDED
@@ -0,0 +1,507 @@
+ Copyright 2024 The Hugging Face team. All rights reserved.
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright [yyyy] [name of copyright owner]
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
+
+
+ ## Some of lerobot's code is derived from Diffusion Policy, which is subject to the following copyright notice:
+
+ MIT License
+
+ Copyright (c) 2023 Columbia Artificial Intelligence and Robotics Lab
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+
+ ## Some of lerobot's code is derived from FOWM, which is subject to the following copyright notice:
+
+ MIT License
+
+ Copyright (c) 2023 Yunhai Feng
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+
+ ## Some of lerobot's code is derived from simxarm, which is subject to the following copyright notice:
+
+ MIT License
+
+ Copyright (c) 2023 Nicklas Hansen & Yanjie Ze
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+
+ ## Some of lerobot's code is derived from ALOHA, which is subject to the following copyright notice:
+
+ MIT License
+
+ Copyright (c) 2023 Tony Z. Zhao
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy
+ of this software and associated documentation files (the "Software"), to deal
+ in the Software without restriction, including without limitation the rights
+ to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+ copies of the Software, and to permit persons to whom the Software is
+ furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all
+ copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+ OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+ SOFTWARE.
+
+ ## Some of lerobot's code is derived from DETR, which is subject to the following copyright notice:
+
+ Apache License
+ Version 2.0, January 2004
+ http://www.apache.org/licenses/
+
+ TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+ 1. Definitions.
+
+ "License" shall mean the terms and conditions for use, reproduction,
+ and distribution as defined by Sections 1 through 9 of this document.
+
+ "Licensor" shall mean the copyright owner or entity authorized by
+ the copyright owner that is granting the License.
+
+ "Legal Entity" shall mean the union of the acting entity and all
+ other entities that control, are controlled by, or are under common
+ control with that entity. For the purposes of this definition,
+ "control" means (i) the power, direct or indirect, to cause the
+ direction or management of such entity, whether by contract or
+ otherwise, or (ii) ownership of fifty percent (50%) or more of the
+ outstanding shares, or (iii) beneficial ownership of such entity.
+
+ "You" (or "Your") shall mean an individual or Legal Entity
+ exercising permissions granted by this License.
+
+ "Source" form shall mean the preferred form for making modifications,
+ including but not limited to software source code, documentation
+ source, and configuration files.
+
+ "Object" form shall mean any form resulting from mechanical
+ transformation or translation of a Source form, including but
+ not limited to compiled object code, generated documentation,
+ and conversions to other media types.
+
+ "Work" shall mean the work of authorship, whether in Source or
+ Object form, made available under the License, as indicated by a
+ copyright notice that is included in or attached to the work
+ (an example is provided in the Appendix below).
+
+ "Derivative Works" shall mean any work, whether in Source or Object
+ form, that is based on (or derived from) the Work and for which the
+ editorial revisions, annotations, elaborations, or other modifications
+ represent, as a whole, an original work of authorship. For the purposes
+ of this License, Derivative Works shall not include works that remain
+ separable from, or merely link (or bind by name) to the interfaces of,
+ the Work and Derivative Works thereof.
+
+ "Contribution" shall mean any work of authorship, including
+ the original version of the Work and any modifications or additions
+ to that Work or Derivative Works thereof, that is intentionally
+ submitted to Licensor for inclusion in the Work by the copyright owner
+ or by an individual or Legal Entity authorized to submit on behalf of
+ the copyright owner. For the purposes of this definition, "submitted"
+ means any form of electronic, verbal, or written communication sent
+ to the Licensor or its representatives, including but not limited to
+ communication on electronic mailing lists, source code control systems,
+ and issue tracking systems that are managed by, or on behalf of, the
+ Licensor for the purpose of discussing and improving the Work, but
+ excluding communication that is conspicuously marked or otherwise
+ designated in writing by the copyright owner as "Not a Contribution."
+
+ "Contributor" shall mean Licensor and any individual or Legal Entity
+ on behalf of whom a Contribution has been received by Licensor and
+ subsequently incorporated within the Work.
+
+ 2. Grant of Copyright License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ copyright license to reproduce, prepare Derivative Works of,
+ publicly display, publicly perform, sublicense, and distribute the
+ Work and such Derivative Works in Source or Object form.
+
+ 3. Grant of Patent License. Subject to the terms and conditions of
+ this License, each Contributor hereby grants to You a perpetual,
+ worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+ (except as stated in this section) patent license to make, have made,
+ use, offer to sell, sell, import, and otherwise transfer the Work,
+ where such license applies only to those patent claims licensable
+ by such Contributor that are necessarily infringed by their
+ Contribution(s) alone or by combination of their Contribution(s)
+ with the Work to which such Contribution(s) was submitted. If You
+ institute patent litigation against any entity (including a
+ cross-claim or counterclaim in a lawsuit) alleging that the Work
+ or a Contribution incorporated within the Work constitutes direct
+ or contributory patent infringement, then any patent licenses
+ granted to You under this License for that Work shall terminate
+ as of the date such litigation is filed.
+
+ 4. Redistribution. You may reproduce and distribute copies of the
+ Work or Derivative Works thereof in any medium, with or without
+ modifications, and in Source or Object form, provided that You
+ meet the following conditions:
+
+ (a) You must give any other recipients of the Work or
+ Derivative Works a copy of this License; and
+
+ (b) You must cause any modified files to carry prominent notices
+ stating that You changed the files; and
+
+ (c) You must retain, in the Source form of any Derivative Works
+ that You distribute, all copyright, patent, trademark, and
+ attribution notices from the Source form of the Work,
+ excluding those notices that do not pertain to any part of
+ the Derivative Works; and
+
+ (d) If the Work includes a "NOTICE" text file as part of its
+ distribution, then any Derivative Works that You distribute must
+ include a readable copy of the attribution notices contained
+ within such NOTICE file, excluding those notices that do not
+ pertain to any part of the Derivative Works, in at least one
+ of the following places: within a NOTICE text file distributed
+ as part of the Derivative Works; within the Source form or
+ documentation, if provided along with the Derivative Works; or,
+ within a display generated by the Derivative Works, if and
+ wherever such third-party notices normally appear. The contents
+ of the NOTICE file are for informational purposes only and
+ do not modify the License. You may add Your own attribution
+ notices within Derivative Works that You distribute, alongside
+ or as an addendum to the NOTICE text from the Work, provided
+ that such additional attribution notices cannot be construed
+ as modifying the License.
+
+ You may add Your own copyright statement to Your modifications and
+ may provide additional or different license terms and conditions
+ for use, reproduction, or distribution of Your modifications, or
+ for any such Derivative Works as a whole, provided Your use,
+ reproduction, and distribution of the Work otherwise complies with
+ the conditions stated in this License.
+
+ 5. Submission of Contributions. Unless You explicitly state otherwise,
+ any Contribution intentionally submitted for inclusion in the Work
+ by You to the Licensor shall be under the terms and conditions of
+ this License, without any additional terms or conditions.
+ Notwithstanding the above, nothing herein shall supersede or modify
+ the terms of any separate license agreement you may have executed
+ with Licensor regarding such Contributions.
+
+ 6. Trademarks. This License does not grant permission to use the trade
+ names, trademarks, service marks, or product names of the Licensor,
+ except as required for reasonable and customary use in describing the
+ origin of the Work and reproducing the content of the NOTICE file.
+
+ 7. Disclaimer of Warranty. Unless required by applicable law or
+ agreed to in writing, Licensor provides the Work (and each
+ Contributor provides its Contributions) on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+ implied, including, without limitation, any warranties or conditions
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+ PARTICULAR PURPOSE. You are solely responsible for determining the
+ appropriateness of using or redistributing the Work and assume any
+ risks associated with Your exercise of permissions under this License.
+
+ 8. Limitation of Liability. In no event and under no legal theory,
+ whether in tort (including negligence), contract, or otherwise,
+ unless required by applicable law (such as deliberate and grossly
+ negligent acts) or agreed to in writing, shall any Contributor be
+ liable to You for damages, including any direct, indirect, special,
+ incidental, or consequential damages of any character arising as a
+ result of this License or out of the use or inability to use the
+ Work (including but not limited to damages for loss of goodwill,
+ work stoppage, computer failure or malfunction, or any and all
+ other commercial damages or losses), even if such Contributor
+ has been advised of the possibility of such damages.
+
+ 9. Accepting Warranty or Additional Liability. While redistributing
+ the Work or Derivative Works thereof, You may choose to offer,
+ and charge a fee for, acceptance of support, warranty, indemnity,
+ or other liability obligations and/or rights consistent with this
+ License. However, in accepting such obligations, You may act only
+ on Your own behalf and on Your sole responsibility, not on behalf
+ of any other Contributor, and only if You agree to indemnify,
+ defend, and hold each Contributor harmless for any liability
+ incurred by, or claims asserted against, such Contributor by reason
+ of your accepting any such warranty or additional liability.
+
+ END OF TERMS AND CONDITIONS
+
+ APPENDIX: How to apply the Apache License to your work.
+
+ To apply the Apache License to your work, attach the following
+ boilerplate notice, with the fields enclosed by brackets "[]"
+ replaced with your own identifying information. (Don't include
+ the brackets!) The text should be enclosed in the appropriate
+ comment syntax for the file format. We also recommend that a
+ file or class name and description of purpose be included on the
+ same "printed page" as the copyright notice for easier
+ identification within third-party archives.
+
+ Copyright 2020 - present, Facebook, Inc
+
+ Licensed under the Apache License, Version 2.0 (the "License");
+ you may not use this file except in compliance with the License.
+ You may obtain a copy of the License at
+
+ http://www.apache.org/licenses/LICENSE-2.0
+
+ Unless required by applicable law or agreed to in writing, software
+ distributed under the License is distributed on an "AS IS" BASIS,
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ See the License for the specific language governing permissions and
+ limitations under the License.
MANIFEST.in ADDED
@@ -0,0 +1,2 @@
+ include src/lerobot/templates/lerobot_modelcard_template.md
+ include src/lerobot/datasets/card_template.md
Makefile ADDED
@@ -0,0 +1,180 @@
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+
+ .PHONY: tests
+
+ PYTHON_PATH := $(shell which python)
+
+ # If uv is installed and a virtual environment exists, use it
+ UV_CHECK := $(shell command -v uv)
+ ifneq ($(UV_CHECK),)
+ PYTHON_PATH := .venv/bin/python
+ endif
+
+ export PATH := $(dir $(PYTHON_PATH)):$(PATH)
+
+ DEVICE ?= cpu
+
+ build-user:
+ 	docker build -f docker/Dockerfile.user -t lerobot-user .
+
+ build-internal:
+ 	docker build -f docker/Dockerfile.internal -t lerobot-internal .
+
+ test-end-to-end:
+ 	${MAKE} DEVICE=$(DEVICE) test-act-ete-train
+ 	${MAKE} DEVICE=$(DEVICE) test-act-ete-train-resume
+ 	${MAKE} DEVICE=$(DEVICE) test-act-ete-eval
+ 	${MAKE} DEVICE=$(DEVICE) test-diffusion-ete-train
+ 	${MAKE} DEVICE=$(DEVICE) test-diffusion-ete-eval
+ 	${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-train
+ 	${MAKE} DEVICE=$(DEVICE) test-tdmpc-ete-eval
+ 	${MAKE} DEVICE=$(DEVICE) test-smolvla-ete-train
+ 	${MAKE} DEVICE=$(DEVICE) test-smolvla-ete-eval
+
+ test-act-ete-train:
+ 	lerobot-train \
+ 		--policy.type=act \
+ 		--policy.dim_model=64 \
+ 		--policy.n_action_steps=20 \
+ 		--policy.chunk_size=20 \
+ 		--policy.device=$(DEVICE) \
+ 		--policy.push_to_hub=false \
+ 		--env.type=aloha \
+ 		--env.episode_length=5 \
+ 		--dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
+ 		--dataset.image_transforms.enable=true \
+ 		--dataset.episodes="[0]" \
+ 		--batch_size=2 \
+ 		--steps=4 \
+ 		--eval_freq=2 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1 \
+ 		--save_freq=2 \
+ 		--save_checkpoint=true \
+ 		--log_freq=1 \
+ 		--wandb.enable=false \
+ 		--output_dir=tests/outputs/act/
+
+ test-act-ete-train-resume:
+ 	lerobot-train \
+ 		--config_path=tests/outputs/act/checkpoints/000002/pretrained_model/train_config.json \
+ 		--resume=true
+
+ test-act-ete-eval:
+ 	lerobot-eval \
+ 		--policy.path=tests/outputs/act/checkpoints/000004/pretrained_model \
+ 		--policy.device=$(DEVICE) \
+ 		--env.type=aloha \
+ 		--env.episode_length=5 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1
+
+ test-diffusion-ete-train:
+ 	lerobot-train \
+ 		--policy.type=diffusion \
+ 		--policy.down_dims='[64,128,256]' \
+ 		--policy.diffusion_step_embed_dim=32 \
+ 		--policy.num_inference_steps=10 \
+ 		--policy.device=$(DEVICE) \
+ 		--policy.push_to_hub=false \
+ 		--env.type=pusht \
+ 		--env.episode_length=5 \
+ 		--dataset.repo_id=lerobot/pusht \
+ 		--dataset.image_transforms.enable=true \
+ 		--dataset.episodes="[0]" \
+ 		--batch_size=2 \
+ 		--steps=2 \
+ 		--eval_freq=2 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1 \
+ 		--save_checkpoint=true \
+ 		--save_freq=2 \
+ 		--log_freq=1 \
+ 		--wandb.enable=false \
+ 		--output_dir=tests/outputs/diffusion/
+
+ test-diffusion-ete-eval:
+ 	lerobot-eval \
+ 		--policy.path=tests/outputs/diffusion/checkpoints/000002/pretrained_model \
+ 		--policy.device=$(DEVICE) \
+ 		--env.type=pusht \
+ 		--env.episode_length=5 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1
+
+ test-tdmpc-ete-train:
+ 	lerobot-train \
+ 		--policy.type=tdmpc \
+ 		--policy.device=$(DEVICE) \
+ 		--policy.push_to_hub=false \
+ 		--env.type=pusht \
+ 		--env.episode_length=5 \
+ 		--dataset.repo_id=lerobot/pusht_image \
+ 		--dataset.image_transforms.enable=true \
+ 		--dataset.episodes="[0]" \
+ 		--batch_size=2 \
+ 		--steps=2 \
+ 		--eval_freq=2 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1 \
+ 		--save_checkpoint=true \
+ 		--save_freq=2 \
+ 		--log_freq=1 \
+ 		--wandb.enable=false \
+ 		--output_dir=tests/outputs/tdmpc/
+
+ test-tdmpc-ete-eval:
+ 	lerobot-eval \
+ 		--policy.path=tests/outputs/tdmpc/checkpoints/000002/pretrained_model \
+ 		--policy.device=$(DEVICE) \
+ 		--env.type=pusht \
+ 		--env.episode_length=5 \
+ 		--env.observation_height=96 \
+ 		--env.observation_width=96 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1
+
+
+ test-smolvla-ete-train:
+ 	lerobot-train \
+ 		--policy.type=smolvla \
+ 		--policy.n_action_steps=20 \
+ 		--policy.chunk_size=20 \
+ 		--policy.device=$(DEVICE) \
+ 		--policy.push_to_hub=false \
+ 		--env.type=aloha \
+ 		--env.episode_length=5 \
+ 		--dataset.repo_id=lerobot/aloha_sim_transfer_cube_human \
+ 		--dataset.image_transforms.enable=true \
+ 		--dataset.episodes="[0]" \
+ 		--batch_size=2 \
+ 		--steps=4 \
+ 		--eval_freq=2 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1 \
+ 		--save_freq=2 \
+ 		--save_checkpoint=true \
+ 		--log_freq=1 \
+ 		--wandb.enable=false \
+ 		--output_dir=tests/outputs/smolvla/
+
+ test-smolvla-ete-eval:
+ 	lerobot-eval \
+ 		--policy.path=tests/outputs/smolvla/checkpoints/000004/pretrained_model \
+ 		--policy.device=$(DEVICE) \
+ 		--env.type=aloha \
+ 		--env.episode_length=5 \
+ 		--eval.n_episodes=1 \
+ 		--eval.batch_size=1
README.md ADDED
@@ -0,0 +1,343 @@
+ <p align="center">
+   <img alt="LeRobot, Hugging Face Robotics Library" src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/lerobot-logo-thumbnail.png" width="100%">
+   <br/>
+   <br/>
+ </p>
+
+ <div align="center">
+
+ [![Tests](https://github.com/huggingface/lerobot/actions/workflows/nightly.yml/badge.svg?branch=main)](https://github.com/huggingface/lerobot/actions/workflows/nightly.yml?query=branch%3Amain)
+ [![Python versions](https://img.shields.io/pypi/pyversions/lerobot)](https://www.python.org/downloads/)
+ [![License](https://img.shields.io/badge/License-Apache%202.0-blue.svg)](https://github.com/huggingface/lerobot/blob/main/LICENSE)
+ [![Status](https://img.shields.io/pypi/status/lerobot)](https://pypi.org/project/lerobot/)
+ [![Version](https://img.shields.io/pypi/v/lerobot)](https://pypi.org/project/lerobot/)
+ [![Contributor Covenant](https://img.shields.io/badge/Contributor%20Covenant-v2.1-ff69b4.svg)](https://github.com/huggingface/lerobot/blob/main/CODE_OF_CONDUCT.md)
+ [![Discord](https://dcbadge.vercel.app/api/server/C5P34WJ68S?style=flat)](https://discord.gg/s3KuuzsPFb)
+
+ <!-- [![Coverage](https://codecov.io/gh/huggingface/lerobot/branch/main/graph/badge.svg?token=TODO)](https://codecov.io/gh/huggingface/lerobot) -->
+
+ </div>
+
+ <h2 align="center">
+   <p><a href="https://huggingface.co/docs/lerobot/hope_jr">
+   Build Your Own HopeJR Robot!</a></p>
+ </h2>
+
+ <div align="center">
+   <img
+     src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/hope_jr/hopejr.png"
+     alt="HopeJR robot"
+     title="HopeJR robot"
+     width="60%"
+   />
+
+   <p><strong>Meet HopeJR – a humanoid robot arm and hand for dexterous manipulation!</strong></p>
+   <p>Control it with exoskeletons and gloves for precise hand movements.</p>
+   <p>Perfect for advanced manipulation tasks! 🤖</p>
+
+   <p><a href="https://huggingface.co/docs/lerobot/hope_jr">
+   See the full HopeJR tutorial here.</a></p>
+ </div>
+
+ <br/>
+
+ <h2 align="center">
+   <p><a href="https://huggingface.co/docs/lerobot/so101">
+   Build Your Own SO-101 Robot!</a></p>
+ </h2>
+
+ <div align="center">
+   <table>
+     <tr>
+       <td align="center"><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/so101/so101.webp" alt="SO-101 follower arm" title="SO-101 follower arm" width="90%"/></td>
+       <td align="center"><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/so101/so101-leader.webp" alt="SO-101 leader arm" title="SO-101 leader arm" width="90%"/></td>
+     </tr>
+   </table>
+
+   <p><strong>Meet the SO-101, the updated SO-100 – just €114 per arm!</strong></p>
+   <p>Train it in minutes with a few simple moves on your laptop.</p>
+   <p>Then sit back and watch your creation act autonomously! 🤯</p>
+
+   <p><a href="https://huggingface.co/docs/lerobot/so101">
+   See the full SO-101 tutorial here.</a></p>
+
+   <p>Want to take it to the next level? Make your SO-101 mobile by building LeKiwi!</p>
+   <p>Check out the <a href="https://huggingface.co/docs/lerobot/lekiwi">LeKiwi tutorial</a> and bring your robot to life on wheels.</p>
+
+   <img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/lekiwi/kiwi.webp" alt="LeKiwi mobile robot" title="LeKiwi mobile robot" width="50%">
+ </div>
+
+ <br/>
+
+ <h3 align="center">
+   <p>LeRobot: State-of-the-art AI for real-world robotics</p>
+ </h3>
+
+ ---
+
+ 🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier to entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
+
+ 🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real world, with a focus on imitation learning and reinforcement learning.
+
+ 🤗 LeRobot already provides a set of pretrained models, datasets with human-collected demonstrations, and simulation environments to get started without assembling a robot. In the coming weeks, the plan is to add more and more support for real-world robotics on the most affordable and capable robots out there.
+
+ 🤗 LeRobot hosts pretrained models and datasets on this Hugging Face community page: [huggingface.co/lerobot](https://huggingface.co/lerobot)
+
+ #### Examples of pretrained models on simulation environments
+
+ <table>
+   <tr>
+     <td><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/gym/aloha_act.gif" width="100%" alt="ACT policy on ALOHA env"/></td>
+     <td><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/gym/simxarm_tdmpc.gif" width="100%" alt="TDMPC policy on SimXArm env"/></td>
+     <td><img src="https://raw.githubusercontent.com/huggingface/lerobot/main/media/gym/pusht_diffusion.gif" width="100%" alt="Diffusion policy on PushT env"/></td>
+   </tr>
+   <tr>
+     <td align="center">ACT policy on ALOHA env</td>
+     <td align="center">TDMPC policy on SimXArm env</td>
+     <td align="center">Diffusion policy on PushT env</td>
+   </tr>
+ </table>
+
+ ## Installation
+
+ LeRobot works with Python 3.10+ and PyTorch 2.2+.
+
+ ### Environment Setup
+
+ Create a virtual environment with Python 3.10 and activate it, e.g. with [`miniforge`](https://conda-forge.org/download/):
+
+ ```bash
+ conda create -y -n lerobot python=3.10
+ conda activate lerobot
+ ```
+
+ When using `conda`, install `ffmpeg` in your environment:
+
+ ```bash
+ conda install ffmpeg -c conda-forge
+ ```
+
+ > **NOTE:** This usually installs `ffmpeg 7.X` for your platform, compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
+ >
+ > - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
+ >
+ >   ```bash
+ >   conda install ffmpeg=7.1.1 -c conda-forge
+ >   ```
+ >
+ > - _[On Linux only]_ Install the [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure your install uses the ffmpeg binary you built by checking `which ffmpeg`.
+
+ ### Install LeRobot 🤗
+
+ #### From Source
+
+ First, clone the repository and navigate into the directory:
+
+ ```bash
+ git clone https://github.com/huggingface/lerobot.git
+ cd lerobot
+ ```
+
+ Then, install the library in editable mode. This is useful if you plan to contribute to the code.
+
+ ```bash
+ pip install -e .
+ ```
+
+ > **NOTE:** If you encounter build errors, you may need to install additional dependencies (`cmake`, `build-essential`, and `ffmpeg libs`). On Linux, run:
+ > `sudo apt-get install cmake build-essential python3-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev`. For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
+
+ For simulations, 🤗 LeRobot comes with gymnasium environments that can be installed as extras:
+
+ - [aloha](https://github.com/huggingface/gym-aloha)
+ - [xarm](https://github.com/huggingface/gym-xarm)
+ - [pusht](https://github.com/huggingface/gym-pusht)
+
+ For instance, to install 🤗 LeRobot with aloha and pusht, use:
+
+ ```bash
+ pip install -e ".[aloha, pusht]"
+ ```
+
+ ### Installation from PyPI
+
+ **Core Library:**
+ Install the base package with:
+
+ ```bash
+ pip install lerobot
+ ```
+
+ _This installs only the default dependencies._
+
+ **Extra Features:**
+ To install additional functionality, use one of the following:
+
+ ```bash
+ pip install 'lerobot[all]'          # All available features
+ pip install 'lerobot[aloha,pusht]'  # Specific features (Aloha & Pusht)
+ pip install 'lerobot[feetech]'      # Feetech motor support
+ ```
+
+ _Replace `[...]` with your desired features._
+
+ **Available Tags:**
+ For a full list of optional dependencies, see:
+ https://pypi.org/project/lerobot/
+
+ ### Weights & Biases
+
+ To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
+
+ ```bash
+ wandb login
+ ```
+
+ (note: you will also need to enable WandB in the training configuration, e.g. with `--wandb.enable=true`.)
+
+ ### Visualize datasets
+
+ Check out [example 1](https://github.com/huggingface/lerobot/blob/main/examples/dataset/load_lerobot_dataset.py), which illustrates how to use our dataset class, which automatically downloads data from the Hugging Face hub.
+
+ You can also locally visualize episodes from a dataset on the hub by executing our script from the command line:
+
+ ```bash
+ lerobot-dataset-viz \
+     --repo-id lerobot/pusht \
+     --episode-index 0
+ ```
+
+ or from a dataset in a local folder with the `--root` option and `--mode local` (in the following case, the dataset will be searched for in `./my_local_data_dir/lerobot/pusht`):
+
+ ```bash
+ lerobot-dataset-viz \
+     --repo-id lerobot/pusht \
+     --root ./my_local_data_dir \
+     --mode local \
+     --episode-index 0
+ ```
+
+ It will open `rerun.io` and display the camera streams, robot states, and actions, like this:
+
+ https://github-production-user-asset-6210df.s3.amazonaws.com/4681518/328035972-fd46b787-b532-47e2-bb6f-fd536a55a7ed.mov?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAVCODYLSA53PQK4ZA%2F20240505%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20240505T172924Z&X-Amz-Expires=300&X-Amz-Signature=d680b26c532eeaf80740f08af3320d22ad0b8a4e4da1bcc4f33142c15b509eda&X-Amz-SignedHeaders=host&actor_id=24889239&key_id=0&repo_id=748713144
+
+ Our script can also visualize datasets stored on a distant server. See `lerobot-dataset-viz --help` for more instructions.
+
+ ### The `LeRobotDataset` format
+
+ A dataset in `LeRobotDataset` format is very simple to use. It can be loaded from a repository on the Hugging Face hub or from a local folder with e.g. `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`, and it can be indexed like any Hugging Face or PyTorch dataset. For instance, `dataset[0]` will retrieve a single temporal frame from the dataset containing observation(s) and an action as PyTorch tensors, ready to be fed to a model.
+
+ A specificity of `LeRobotDataset` is that, rather than retrieving a single frame by its index, we can retrieve several frames based on their temporal relationship with the indexed frame, by setting `delta_timestamps` to a list of relative times with respect to the indexed frame. For example, with `delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}` one can retrieve, for a given index, 4 frames: 3 "previous" frames 1 second, 0.5 seconds, and 0.2 seconds before the indexed frame, and the indexed frame itself (corresponding to the 0 entry). See example [1_load_lerobot_dataset.py](https://github.com/huggingface/lerobot/blob/main/examples/dataset/load_lerobot_dataset.py) for more details on `delta_timestamps`, and the sketch below for both access patterns.
+
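+ As a minimal sketch (the camera key `observation.image` follows the example above; real datasets use keys such as `observation.images.cam_high`):
+
+ ```python
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ # Plain indexed access: one temporal frame as PyTorch tensors.
+ dataset = LeRobotDataset("lerobot/aloha_static_coffee")
+ frame = dataset[0]  # dict with observation(s) and an action
+
+ # Temporal access: for each index, also fetch frames 1s, 0.5s, and 0.2s earlier.
+ delta_timestamps = {"observation.image": [-1, -0.5, -0.2, 0]}
+ dataset = LeRobotDataset("lerobot/aloha_static_coffee", delta_timestamps=delta_timestamps)
+ stacked = dataset[0]["observation.image"]  # 4 stacked frames: (4, c, h, w)
+ ```
+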
+ Under the hood, the `LeRobotDataset` format makes use of several ways to serialize data, which can be useful to understand if you plan to work more closely with this format. We tried to make a flexible yet simple dataset format that covers most types of features and specificities present in reinforcement learning and robotics, in simulation and in the real world, with a focus on cameras and robot states, but easily extended to other types of sensory inputs as long as they can be represented by a tensor.
+
+ Here are the important details and internal structure organization of a typical `LeRobotDataset` instantiated with `dataset = LeRobotDataset("lerobot/aloha_static_coffee")`. The exact features will change from dataset to dataset, but not the main aspects:
+
+ ```
+ dataset attributes:
+   ├ hf_dataset: a Hugging Face dataset (backed by Arrow/parquet). Typical features example:
+   │  ├ observation.images.cam_high (VideoFrame):
+   │  │   VideoFrame = {'path': path to a mp4 video, 'timestamp' (float32): timestamp in the video}
+   │  ├ observation.state (list of float32): positions of the arm joints (for instance)
+   │  ... (more observations)
+   │  ├ action (list of float32): goal positions of the arm joints (for instance)
+   │  ├ episode_index (int64): index of the episode for this sample
+   │  ├ frame_index (int64): index of the frame for this sample in the episode; starts at 0 for each episode
+   │  ├ timestamp (float32): timestamp in the episode
+   │  ├ next.done (bool): indicates the end of an episode; True for the last frame in each episode
+   │  └ index (int64): general index in the whole dataset
+   ├ meta: a LeRobotDatasetMetadata object containing:
+   │  ├ info: a dictionary of metadata on the dataset
+   │  │  ├ codebase_version (str): this is to keep track of the codebase version the dataset was created with
+   │  │  ├ fps (int): frames per second the dataset is recorded/synchronized to
+   │  │  ├ features (dict): all features contained in the dataset with their shapes and types
+   │  │  ├ total_episodes (int): total number of episodes in the dataset
+   │  │  ├ total_frames (int): total number of frames in the dataset
+   │  │  ├ robot_type (str): robot type used for recording
+   │  │  ├ data_path (str): formattable string for the parquet files
+   │  │  └ video_path (str): formattable string for the video files (if using videos)
+   │  ├ episodes: a DataFrame containing episode metadata with columns:
+   │  │  ├ episode_index (int): index of the episode
+   │  │  ├ tasks (list): list of tasks for this episode
+   │  │  ├ length (int): number of frames in this episode
+   │  │  ├ dataset_from_index (int): start index of this episode in the dataset
+   │  │  └ dataset_to_index (int): end index of this episode in the dataset
+   │  ├ stats: a dictionary of statistics (max, mean, min, std) for each feature in the dataset, for instance
+   │  │  ├ observation.images.front_cam: {'max': tensor with same number of dimensions (e.g. `(c, 1, 1)` for images, `(c,)` for states), etc.}
+   │  │  └ ...
+   │  └ tasks: a DataFrame containing task information with task names as index and task_index as values
+   ├ root (Path): local directory where the dataset is stored
+   ├ image_transforms (Callable): optional image transformations to apply to visual modalities
+   └ delta_timestamps (dict): optional delta timestamps for temporal queries
+ ```
+
+ A `LeRobotDataset` is serialised using several widespread file formats for each of its parts, namely:
+
+ - `hf_dataset` is stored using the Hugging Face datasets library's serialization to parquet
+ - videos are stored in mp4 format to save space
+ - metadata are stored in plain json/jsonl files
+
+ Datasets can be uploaded/downloaded to and from the Hugging Face hub seamlessly. To work on a local dataset, you can specify its location with the `root` argument if it's not in the default `~/.cache/huggingface/lerobot` location.
+
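+ A hedged sketch of pointing the loader at such a local copy (the exact directory layout expected by `root` is an assumption here and may differ across library versions):
+
+ ```python
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ # Load from a local folder instead of the default cache location.
+ dataset = LeRobotDataset("lerobot/pusht", root="./my_local_data_dir/lerobot/pusht")
+ ```
+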
+ #### Reproduce state-of-the-art (SOTA)
+
+ We provide some pretrained policies on our [hub page](https://huggingface.co/lerobot) that can achieve state-of-the-art performance.
+ You can reproduce their training by loading the config from their run. Simply running:
+
+ ```bash
+ lerobot-train --config_path=lerobot/diffusion_pusht
+ ```
+
+ reproduces SOTA results for Diffusion Policy on the PushT task.
+
+ ## Contribute
+
+ If you would like to contribute to 🤗 LeRobot, please check out our [contribution guide](https://github.com/huggingface/lerobot/blob/main/CONTRIBUTING.md).
+
+ ### Add a pretrained policy
+
+ Once you have trained a policy, you may upload it to the Hugging Face hub using a hub id that looks like `${hf_user}/${repo_name}` (e.g. [lerobot/diffusion_pusht](https://huggingface.co/lerobot/diffusion_pusht)).
+
+ You first need to find the checkpoint folder located inside your experiment directory (e.g. `outputs/train/2024-05-05/20-21-12_aloha_act_default/checkpoints/002500`). Within that there is a `pretrained_model` directory, which should contain:
+
+ - `config.json`: A serialized version of the policy configuration (following the policy's dataclass config).
+ - `model.safetensors`: A set of `torch.nn.Module` parameters, saved in [Hugging Face Safetensors](https://huggingface.co/docs/safetensors/index) format.
+ - `train_config.json`: A consolidated configuration containing all parameters used for training. The policy configuration should match `config.json` exactly. This is useful for anyone who wants to evaluate your policy or for reproducibility.
+
+ To upload these to the hub, run the following:
+
+ ```bash
+ huggingface-cli upload ${hf_user}/${repo_name} path/to/pretrained_model
+ ```
+
+ See [lerobot_eval.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/lerobot_eval.py) for an example of how other people may use your policy.
+
+ ### Acknowledgment
+
+ - The LeRobot team 🤗 for building SmolVLA: [Paper](https://arxiv.org/abs/2506.01844), [Blog](https://huggingface.co/blog/smolvla).
+ - Thanks to Tony Zhao, Zipeng Fu and colleagues for open sourcing the ACT policy, ALOHA environments and datasets. Ours are adapted from [ALOHA](https://tonyzhaozh.github.io/aloha) and [Mobile ALOHA](https://mobile-aloha.github.io).
+ - Thanks to Cheng Chi, Zhenjia Xu and colleagues for open sourcing the Diffusion policy, PushT environment and datasets, as well as UMI datasets. Ours are adapted from [Diffusion Policy](https://diffusion-policy.cs.columbia.edu) and [UMI Gripper](https://umi-gripper.github.io).
+ - Thanks to Nicklas Hansen, Yunhai Feng and colleagues for open sourcing the TDMPC policy, SimXArm environments and datasets. Ours are adapted from [TDMPC](https://github.com/nicklashansen/tdmpc) and [FOWM](https://www.yunhaifeng.com/FOWM).
+ - Thanks to Antonio Loquercio and Ashish Kumar for their early support.
+ - Thanks to [Seungjae (Jay) Lee](https://sjlee.cc/), [Mahi Shafiullah](https://mahis.life/) and colleagues for open sourcing the [VQ-BeT](https://sjlee.cc/vq-bet/) policy and helping us adapt the codebase to our repository. The policy is adapted from the [VQ-BeT repo](https://github.com/jayLEE0301/vq_bet_official).
+
+ ## Citation
+
+ If you want, you can cite this work with:
+
+ ```bibtex
+ @misc{cadene2024lerobot,
+     author = {Cadene, Remi and Alibert, Simon and Soare, Alexander and Gallouedec, Quentin and Zouitine, Adil and Palma, Steven and Kooijmans, Pepijn and Aractingi, Michel and Shukor, Mustafa and Aubakirova, Dana and Russi, Martino and Capuano, Francesco and Pascal, Caroline and Choghari, Jade and Moss, Jess and Wolf, Thomas},
+     title = {LeRobot: State-of-the-art Machine Learning for Real-World Robotics in Pytorch},
+     howpublished = "\url{https://github.com/huggingface/lerobot}",
+     year = {2024}
+ }
+ ```
+
+ ## Star History
+
+ [![Star History Chart](https://api.star-history.com/svg?repos=huggingface/lerobot&type=Timeline)](https://star-history.com/#huggingface/lerobot&Timeline)
benchmarks/video/README.md ADDED
@@ -0,0 +1,288 @@
+ # Video benchmark
+
+ ## Questions
+
+ What is the optimal trade-off between:
+
+ - minimizing loading time with random access,
+ - minimizing memory space on disk,
+ - maximizing success rate of policies,
+ - compatibility across devices/platforms for decoding videos (e.g. video players, web browsers).
+
+ How to encode videos?
+
+ - Which video codec (`-vcodec`) to use? h264, h265, or AV1?
+ - What pixel format to use (`-pix_fmt`)? `yuv444p` or `yuv420p`?
+ - How much compression (`-crf`)? No compression with `0`, intermediate compression with `25`, or extreme with `50+`?
+ - Which frequency to choose for key frames (`-g`)? A key frame every `10` frames?
+
+ How to decode videos?
+
+ - Which `decoder`? `torchvision`, `torchaudio`, `ffmpegio`, `decord`, or `nvc`?
+ - Which scenarios to use for requesting timestamps during the benchmark (`timestamps_mode`)?
+
+ ## Variables
+
+ **Image content & size**
+ We don't expect the same optimal settings for a dataset of images from a simulation, or from the real world in an apartment, in a factory, outdoors, or with lots of moving objects in the scene, etc. Similarly, loading times might not vary linearly with the image size (resolution).
+ For these reasons, we run this benchmark on four representative datasets:
+
+ - `lerobot/pusht_image`: (96 x 96 pixels) simulation with simple geometric shapes, fixed camera.
+ - `aliberts/aloha_mobile_shrimp_image`: (480 x 640 pixels) real-world indoor, moving camera.
+ - `aliberts/paris_street`: (720 x 1280 pixels) real-world outdoor, moving camera.
+ - `aliberts/kitchen`: (1080 x 1920 pixels) real-world indoor, fixed camera.
+
+ Note: The datasets used for this benchmark need to be image datasets, not video datasets.
+
+ **Data augmentations**
+ We might revisit this benchmark and find better settings if we train our policies with various data augmentations to make them more robust (e.g. robust to color changes, compression, etc.).
+
+ ### Encoding parameters
+
+ | parameter   | values                                                       |
+ | ----------- | ------------------------------------------------------------ |
+ | **vcodec**  | `libx264`, `libx265`, `libsvtav1`                            |
+ | **pix_fmt** | `yuv444p`, `yuv420p`                                         |
+ | **g**       | `1`, `2`, `3`, `4`, `5`, `6`, `10`, `15`, `20`, `40`, `None` |
+ | **crf**     | `0`, `5`, `10`, `15`, `20`, `25`, `30`, `40`, `50`, `None`   |
+
+ Note that the `crf` value might be interpreted differently by various video codecs. In other words, the same value used with one codec doesn't necessarily translate into the same compression level with another codec. In fact, the default value (`None`) isn't the same amongst the different video codecs. Importantly, this is also the case for many other ffmpeg arguments, like `g`, which specifies the frequency of the key frames.
+
+ For a comprehensive list and documentation of these parameters, see the ffmpeg documentation depending on the video codec used:
+
+ - h264: https://trac.ffmpeg.org/wiki/Encode/H.264
+ - h265: https://trac.ffmpeg.org/wiki/Encode/H.265
+ - AV1: https://trac.ffmpeg.org/wiki/Encode/AV1
+
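+ As a concrete illustration, the sketch below encodes a hypothetical directory of frames with one such parameter set, calling ffmpeg via Python's `subprocess` (the input pattern and output name are made up for the example; the benchmark itself uses `encode_video_frames` from `lerobot.datasets.video_utils`):
+
+ ```python
+ import subprocess
+
+ # Hypothetical example: encode frame_000000.png, frame_000001.png, ... into AV1.
+ subprocess.run(
+     [
+         "ffmpeg",
+         "-framerate", "30",       # input frame rate
+         "-i", "frame_%06d.png",   # input image sequence
+         "-vcodec", "libsvtav1",   # codec under test
+         "-pix_fmt", "yuv420p",    # chroma subsampling
+         "-g", "2",                # a key frame every 2 frames
+         "-crf", "30",             # compression level
+         "episode_0.mp4",
+     ],
+     check=True,
+ )
+ ```
+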
+ ### Decoding parameters
+
+ **Decoder**
+ We tested two video decoding backends from torchvision:
+
+ - `pyav`
+ - `video_reader` (requires building torchvision from source)
+
+ **Requested timestamps**
+ Given the way video decoding works, once a keyframe has been loaded, decoding the subsequent frames is fast.
+ This is of course affected by the `-g` parameter during encoding, which specifies the frequency of the keyframes. Given our typical use cases in robotics policies, which might request a few timestamps at different random places, we want to replicate these use cases with the following scenarios:
+
+ - `1_frame`: 1 frame,
+ - `2_frames`: 2 consecutive frames (e.g. `[t, t + 1 / fps]`),
+ - `6_frames`: 6 consecutive frames (e.g. `[t + i / fps for i in range(6)]`)
+
+ Note that this differs significantly from a typical use case like watching a movie, in which every frame is loaded sequentially from the beginning to the end and it's acceptable to have big values for `-g`.
+
+ Additionally, because some policies might request single timestamps that are a few frames apart, we also have the following scenario:
+
+ - `2_frames_4_space`: 2 frames with 4 consecutive frames of spacing in between (e.g. `[t, t + 5 / fps]`),
+
+ However, due to how video decoding is implemented with `pyav`, we don't have access to an accurate seek, so in practice this scenario is essentially the same as `6_frames`, since all 6 frames between `t` and `t + 5 / fps` will be decoded.
+
+ ## Metrics
+
+ **Data compression ratio (lower is better)**
+ `video_images_size_ratio` is the ratio of the memory space on disk taken by the encoded video over the memory space taken by the original images. For instance, `video_images_size_ratio=25%` means that the video takes 4 times less memory space on disk compared to the original images.
+
+ **Loading time ratio (lower is better)**
+ `video_images_load_time_ratio` is the ratio of the time it takes to decode frames from the video at given timestamps over the time it takes to load the exact same original images. Lower is better. For instance, `video_images_load_time_ratio=200%` means that decoding from video is 2 times slower than loading the original images.
+
+ **Average Mean Square Error (lower is better)**
+ `avg_mse` is the mean square error between each decoded frame and its corresponding original image, averaged over all requested timestamps and divided by the number of pixels in the image so that it stays comparable across image sizes.
+
+ **Average Peak Signal to Noise Ratio (higher is better)**
+ `avg_psnr` measures the ratio between the maximum possible power of a signal and the power of corrupting noise that affects the fidelity of its representation. Higher PSNR indicates better quality.
+
+ **Average Structural Similarity Index Measure (higher is better)**
+ `avg_ssim` evaluates the perceived quality of images by comparing luminance, contrast, and structure. SSIM values range from -1 to 1, where 1 indicates perfect similarity.
+
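+ As a minimal sketch, a decoded frame can be compared against its original image with `scikit-image` (the same functions the benchmark script imports); the random arrays stand in for float images in `[0, 1]` with shape `(c, h, w)`:
+
+ ```python
+ import numpy as np
+ from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity
+
+ rng = np.random.default_rng(0)
+ original = rng.random((3, 96, 96)).astype(np.float32)  # stand-in for an original frame
+ decoded = np.clip(original + rng.normal(0, 0.01, original.shape), 0, 1).astype(np.float32)
+
+ mse = mean_squared_error(original, decoded)
+ psnr = peak_signal_noise_ratio(original, decoded, data_range=1.0)
+ ssim = structural_similarity(original, decoded, data_range=1.0, channel_axis=0)
+ print(f"mse={mse:.2e} psnr={psnr:.2f} ssim={ssim:.2%}")
+ ```
+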
+ One aspect that can't be measured here with those metrics is the compatibility of the encoding across platforms, in particular on web browsers, for visualization purposes.
+ h264, h265 and AV1 are all commonly used codecs and should not pose an issue. However, the chroma subsampling (`pix_fmt`) format might affect compatibility:
+
+ - `yuv420p` is more widely supported across various platforms, including web browsers.
+ - `yuv444p` offers higher color fidelity but might not be supported as broadly.
+
+ <!-- **Loss of a pretrained policy (higher is better)** (not available)
+ `loss_pretrained` is the result of evaluating with the selected encoding/decoding settings a policy pretrained on original images. It is easier to understand than `avg_l2_error`.
+
+ **Success rate after retraining (higher is better)** (not available)
+ `success_rate` is the result of training and evaluating a policy with the selected encoding/decoding settings. It is the most difficult metric to get but also the very best. -->
+
+ ## How the benchmark works
+
+ The benchmark evaluates both encoding and decoding of video frames on the first episode of each dataset.
+
+ **Encoding:** for each `vcodec` and `pix_fmt` pair, we use a default value for `g` and `crf` upon which we change a single value (either `g` or `crf`) to one of the specified values (we don't test every combination of those, as this would be computationally too heavy); see the sketch below.
+ This gives a unique set of encoding parameters which is used to encode the episode.
+
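+ Illustratively, this one-at-a-time sweep can be pictured as follows (a sketch, not the benchmark's actual code; the base values mirror `BASE_ENCODING` in `run_video_benchmark.py`):
+
+ ```python
+ base = {"vcodec": "libx264", "pix_fmt": "yuv444p", "g": 2, "crf": None}
+ g_values = [1, 2, 3, 4, 5, 6, 10, 15, 20, 40, None]
+ crf_values = [0, 5, 10, 15, 20, 25, 30, 40, 50, None]
+
+ # Vary a single parameter at a time instead of the full cartesian product.
+ encodings = [{**base, "g": g} for g in g_values]
+ encodings += [{**base, "crf": crf} for crf in crf_values]
+ print(len(encodings))  # 21 encodings rather than 11 * 10 = 110 combinations
+ ```
+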
+ **Decoding:** Then, for each of those unique encodings, we iterate through every combination of the decoding parameters `backend` and `timestamps_mode`. For each of them, we record the metrics of a number of samples (given by `--num-samples`). This is parallelized for efficiency, and the number of processes can be controlled with `--num-workers`. Ideally, it's best to have a `--num-samples` that is divisible by `--num-workers`.
+
+ Intermediate results are saved for each `vcodec` and `pix_fmt` combination in CSV tables.
+ These are then all concatenated into a single table ready for analysis.
+
+ ## Caveats
+
+ We tried to measure the most impactful parameters for both encoding and decoding. However, for computational reasons we can't test out every combination.
+
+ Additional encoding parameters exist that are not included in this benchmark. In particular:
+
+ - `-preset`, which selects an encoding preset, i.e. a collection of options trading off encoding speed against compression ratio. If left unspecified, it defaults to `medium` for libx264 and libx265 and to `8` for libsvtav1.
+ - `-tune`, which optimizes the encoding for certain aspects (e.g. film quality, fast decoding, etc.).
+
+ See the documentation mentioned above for more detailed info on these settings and for a more comprehensive list of other parameters.
+
+ Similarly on the decoding side, other decoders exist but are not implemented in our current benchmark. To name a few:
+
+ - `torchaudio`
+ - `ffmpegio`
+ - `decord`
+ - `nvc`
+
+ Note as well that since we are mostly interested in the performance at decoding time (also because encoding is done only once before uploading a dataset), we did not measure encoding times nor collect any metrics regarding encoding.
+ However, besides the need to build ffmpeg from source, encoding did not pose any issue and didn't take a significant amount of time during this benchmark.
+
+ ## Install
+
+ Building ffmpeg from source is required to include the libx265 and libaom/libsvtav1 (AV1) video codecs ([compilation guide](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu)).
+
+ **Note:** While you still need to build torchvision with a conda-installed `ffmpeg<4.3` to use the `video_reader` decoder (as described in [#220](https://github.com/huggingface/lerobot/pull/220)), you also need another ffmpeg version custom-built with all the video codecs for encoding. For the benchmark script to use that version, you can prepend the command with `PATH="$HOME/bin:$PATH"`, which is where ffmpeg should be built.
+
+ ## Adding a video decoder
+
+ Right now, we're only benchmarking the two video decoders available with torchvision: `pyav` and `video_reader`.
+ You can easily add a new decoder to benchmark by adding it to this function in the script:
+
+ ```diff
+ def decode_video_frames(
+     video_path: str,
+     timestamps: list[float],
+     tolerance_s: float,
+     backend: str,
+ ) -> torch.Tensor:
+     if backend in ["pyav", "video_reader"]:
+         return decode_video_frames_torchvision(
+             video_path, timestamps, tolerance_s, backend
+         )
+ +    elif backend == "your_decoder":
+ +        return your_decoder_function(
+ +            video_path, timestamps, tolerance_s, backend
+ +        )
+     else:
+         raise NotImplementedError(backend)
+ ```
+
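+ Once wired in, a new backend is exercised through the same call as the built-in ones, e.g. (the file name here is hypothetical):
+
+ ```python
+ frames = decode_video_frames(
+     "episode_0.mp4",           # hypothetical encoded episode
+     timestamps=[0.0, 1 / 30],  # two consecutive frames at 30 fps
+     tolerance_s=0.5,
+     backend="pyav",
+ )
+ print(frames.shape)  # expected: one decoded frame per requested timestamp
+ ```
+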
+ ## Example
+
+ For a quick run, you can try these parameters:
+
+ ```bash
+ python benchmarks/video/run_video_benchmark.py \
+     --output-dir outputs/video_benchmark \
+     --repo-ids \
+         lerobot/pusht_image \
+         aliberts/aloha_mobile_shrimp_image \
+     --vcodec libx264 libx265 \
+     --pix-fmt yuv444p yuv420p \
+     --g 2 20 None \
+     --crf 10 40 None \
+     --timestamps-modes 1_frame 2_frames \
+     --backends pyav video_reader \
+     --num-samples 5 \
+     --num-workers 5 \
+     --save-frames 0
+ ```
+
+ ## Results
+
+ ### Reproduce
+
+ We ran the benchmark with the following parameters:
+
+ ```bash
+ # h264 and h265 encodings
+ python benchmarks/video/run_video_benchmark.py \
+     --output-dir outputs/video_benchmark \
+     --repo-ids \
+         lerobot/pusht_image \
+         aliberts/aloha_mobile_shrimp_image \
+         aliberts/paris_street \
+         aliberts/kitchen \
+     --vcodec libx264 libx265 \
+     --pix-fmt yuv444p yuv420p \
+     --g 1 2 3 4 5 6 10 15 20 40 None \
+     --crf 0 5 10 15 20 25 30 40 50 None \
+     --timestamps-modes 1_frame 2_frames 6_frames \
+     --backends pyav video_reader \
+     --num-samples 50 \
+     --num-workers 5 \
+     --save-frames 1
+
+ # av1 encoding (only compatible with yuv420p and the pyav decoder)
+ python benchmarks/video/run_video_benchmark.py \
+     --output-dir outputs/video_benchmark \
+     --repo-ids \
+         lerobot/pusht_image \
+         aliberts/aloha_mobile_shrimp_image \
+         aliberts/paris_street \
+         aliberts/kitchen \
+     --vcodec libsvtav1 \
+     --pix-fmt yuv420p \
+     --g 1 2 3 4 5 6 10 15 20 40 None \
+     --crf 0 5 10 15 20 25 30 40 50 None \
+     --timestamps-modes 1_frame 2_frames 6_frames \
+     --backends pyav \
+     --num-samples 50 \
+     --num-workers 5 \
+     --save-frames 1
+ ```
+
+ The full results are available [here](https://docs.google.com/spreadsheets/d/1OYJB43Qu8fC26k_OyoMFgGBBKfQRCi4BIuYitQnq3sw/edit?usp=sharing).
+
+ ### Parameters selected for LeRobotDataset
+
+ Considering these results, we chose what we think is the best set of encoding parameters:
+
+ - vcodec: `libsvtav1`
+ - pix-fmt: `yuv420p`
+ - g: `2`
+ - crf: `30`
+
+ Since we're using AV1 encoding, we're choosing the `pyav` decoder, as `video_reader` does not support it (and `pyav` doesn't require a custom build of `torchvision`).
+
+ ### Summary
+
+ These tables show the results for `g=2` and `crf=30`, using `timestamps-modes=6_frames` and `backend=pyav`.
+
+ | video_images_size_ratio            | vcodec     | pix_fmt |           |           |           |
+ | ---------------------------------- | ---------- | ------- | --------- | --------- | --------- |
+ |                                    | libx264    |         | libx265   |           | libsvtav1 |
+ | repo_id                            | yuv420p    | yuv444p | yuv420p   | yuv444p   | yuv420p   |
+ | lerobot/pusht_image                | **16.97%** | 17.58%  | 18.57%    | 18.86%    | 22.06%    |
+ | aliberts/aloha_mobile_shrimp_image | 2.14%      | 2.11%   | 1.38%     | **1.37%** | 5.59%     |
+ | aliberts/paris_street              | 2.12%      | 2.13%   | **1.54%** | **1.54%** | 4.43%     |
+ | aliberts/kitchen                   | 1.40%      | 1.39%   | **1.00%** | **1.00%** | 2.52%     |
+
+ | video_images_load_time_ratio       | vcodec  | pix_fmt |          |         |           |
+ | ---------------------------------- | ------- | ------- | -------- | ------- | --------- |
+ |                                    | libx264 |         | libx265  |         | libsvtav1 |
+ | repo_id                            | yuv420p | yuv444p | yuv420p  | yuv444p | yuv420p   |
+ | lerobot/pusht_image                | 6.45    | 5.19    | **1.90** | 2.12    | 2.47      |
+ | aliberts/aloha_mobile_shrimp_image | 11.80   | 7.92    | 0.71     | 0.85    | **0.48**  |
+ | aliberts/paris_street              | 2.21    | 2.05    | 0.36     | 0.49    | **0.30**  |
+ | aliberts/kitchen                   | 1.46    | 1.46    | 0.28     | 0.51    | **0.26**  |
+
+ |                                    |          | vcodec   | pix_fmt      |          |           |              |
+ | ---------------------------------- | -------- | -------- | ------------ | -------- | --------- | ------------ |
+ |                                    |          | libx264  |              | libx265  |           | libsvtav1    |
+ | repo_id                            | metric   | yuv420p  | yuv444p      | yuv420p  | yuv444p   | yuv420p      |
+ | lerobot/pusht_image                | avg_mse  | 2.90E-04 | **2.03E-04** | 3.13E-04 | 2.29E-04  | 2.19E-04     |
+ |                                    | avg_psnr | 35.44    | 37.07        | 35.49    | **37.30** | 37.20        |
+ |                                    | avg_ssim | 98.28%   | **98.85%**   | 98.31%   | 98.84%    | 98.72%       |
+ | aliberts/aloha_mobile_shrimp_image | avg_mse  | 2.76E-04 | 2.59E-04     | 3.17E-04 | 3.06E-04  | **1.30E-04** |
+ |                                    | avg_psnr | 35.91    | 36.21        | 35.88    | 36.09     | **40.17**    |
+ |                                    | avg_ssim | 95.19%   | 95.18%       | 95.00%   | 95.05%    | **97.73%**   |
+ | aliberts/paris_street              | avg_mse  | 6.89E-04 | 6.70E-04     | 4.03E-03 | 4.02E-03  | **3.09E-04** |
+ |                                    | avg_psnr | 33.48    | 33.68        | 32.05    | 32.15     | **35.40**    |
+ |                                    | avg_ssim | 93.76%   | 93.75%       | 89.46%   | 89.46%    | **95.46%**   |
+ | aliberts/kitchen                   | avg_mse  | 2.50E-04 | 2.24E-04     | 4.28E-04 | 4.18E-04  | **1.53E-04** |
+ |                                    | avg_psnr | 36.73    | 37.33        | 36.56    | 36.75     | **39.12**    |
+ |                                    | avg_ssim | 95.47%   | 95.58%       | 95.52%   | 95.53%    | **96.82%**   |
benchmarks/video/benchmark.py ADDED
@@ -0,0 +1,94 @@
+ #!/usr/bin/env python
+
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ import threading
+ import time
+ from contextlib import ContextDecorator
+
+
+ class TimeBenchmark(ContextDecorator):
+     """
+     Measures execution time using a context manager or decorator.
+
+     This class supports both context manager and decorator usage, and is thread-safe for multithreaded
+     environments.
+
+     Args:
+         print: If True, prints the elapsed time upon exiting the context or completing the function.
+             Defaults to False.
+
+     Examples:
+
+         Using as a context manager:
+
+         >>> benchmark = TimeBenchmark()
+         >>> with benchmark:
+         ...     time.sleep(1)
+         >>> print(f"Block took {benchmark.result:.4f} seconds")
+         Block took 1.0000 seconds
+
+         Using with multithreading:
+
+         ```python
+         import threading
+
+         benchmark = TimeBenchmark()
+
+
+         def context_manager_example():
+             with benchmark:
+                 time.sleep(0.01)
+             print(f"Block took {benchmark.result_ms:.2f} milliseconds")
+
+
+         threads = []
+         for _ in range(3):
+             t1 = threading.Thread(target=context_manager_example)
+             threads.append(t1)
+
+         for t in threads:
+             t.start()
+
+         for t in threads:
+             t.join()
+         ```
+         Expected output (timings are approximate):
+         Block took 10.00 milliseconds
+         Block took 10.00 milliseconds
+         Block took 10.00 milliseconds
+     """
+
+     def __init__(self, print=False):
+         # Per-thread storage so concurrent measurements don't overwrite each other.
+         self.local = threading.local()
+         self.print_time = print
+
+     def __enter__(self):
+         self.local.start_time = time.perf_counter()
+         return self
+
+     def __exit__(self, *exc):
+         self.local.end_time = time.perf_counter()
+         self.local.elapsed_time = self.local.end_time - self.local.start_time
+         if self.print_time:
+             print(f"Elapsed time: {self.local.elapsed_time:.4f} seconds")
+         return False
+
+     @property
+     def result(self):
+         return getattr(self.local, "elapsed_time", None)
+
+     @property
+     def result_ms(self):
+         # Guard: result is None until the context has been entered and exited once.
+         return None if self.result is None else self.result * 1e3
benchmarks/video/capture_camera_feed.py ADDED
@@ -0,0 +1,102 @@
+ #!/usr/bin/env python
+
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """Capture video feed from a camera as raw images."""
+
+ import argparse
+ import datetime as dt
+ import os
+ import time
+ from pathlib import Path
+
+ import cv2
+ import rerun as rr
+
+ # see https://rerun.io/docs/howto/visualization/limit-ram
+ RERUN_MEMORY_LIMIT = os.getenv("LEROBOT_RERUN_MEMORY_LIMIT", "5%")
+
+
+ def display_and_save_video_stream(output_dir: Path, fps: int, width: int, height: int, duration: int):
+     rr.init("lerobot_capture_camera_feed")
+     rr.spawn(memory_limit=RERUN_MEMORY_LIMIT)
+
+     now = dt.datetime.now()
+     capture_dir = output_dir / f"{now:%Y-%m-%d}" / f"{now:%H-%M-%S}"
+     if not capture_dir.exists():
+         capture_dir.mkdir(parents=True, exist_ok=True)
+
+     # Opens the default webcam
+     cap = cv2.VideoCapture(0)
+     if not cap.isOpened():
+         print("Error: Could not open video stream.")
+         return
+
+     cap.set(cv2.CAP_PROP_FPS, fps)
+     cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
+     cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
+
+     frame_index = 0
+     start_time = time.time()
+     while time.time() - start_time < duration:
+         ret, frame = cap.read()
+
+         if not ret:
+             print("Error: Could not read frame.")
+             break
+         # OpenCV returns frames in BGR order; convert to RGB before logging to Rerun.
+         rr.log("video/stream", rr.Image(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)), static=True)
+         # cv2.imwrite expects BGR, so the raw frame is written as-is.
+         cv2.imwrite(str(capture_dir / f"frame_{frame_index:06d}.png"), frame)
+         frame_index += 1
+
+     # Release the capture
+     cap.release()
+
+     # TODO(Steven): Add a graceful shutdown via a close() method for the Viewer context, though not currently supported in the Rerun API.
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+
+     parser.add_argument(
+         "--output-dir",
+         type=Path,
+         default=Path("outputs/cam_capture/"),
+         help="Directory where the capture images are written. A subfolder named with the current date & time will be created inside it for each capture.",
+     )
+     parser.add_argument(
+         "--fps",
+         type=int,
+         default=30,
+         help="Frames Per Second of the capture.",
+     )
+     parser.add_argument(
+         "--width",
+         type=int,
+         default=1280,
+         help="Width of the captured images.",
+     )
+     parser.add_argument(
+         "--height",
+         type=int,
+         default=720,
+         help="Height of the captured images.",
+     )
+     parser.add_argument(
+         "--duration",
+         type=int,
+         default=20,
+         help="Duration in seconds for which the video stream should be captured.",
+     )
+     args = parser.parse_args()
+     display_and_save_video_stream(**vars(args))
benchmarks/video/run_video_benchmark.py ADDED
@@ -0,0 +1,493 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ #!/usr/bin/env python
2
+
3
+ # Copyright 2024 The HuggingFace Inc. team. All rights reserved.
4
+ #
5
+ # Licensed under the Apache License, Version 2.0 (the "License");
6
+ # you may not use this file except in compliance with the License.
7
+ # You may obtain a copy of the License at
8
+ #
9
+ # http://www.apache.org/licenses/LICENSE-2.0
10
+ #
11
+ # Unless required by applicable law or agreed to in writing, software
12
+ # distributed under the License is distributed on an "AS IS" BASIS,
13
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
14
+ # See the License for the specific language governing permissions and
15
+ # limitations under the License.
16
+ """Assess the performance of video decoding in various configurations.
17
+
18
+ This script will benchmark different video encoding and decoding parameters.
19
+ See the provided README.md or run `python benchmark/video/run_video_benchmark.py --help` for usage info.
20
+ """
21
+
22
+ import argparse
23
+ import datetime as dt
24
+ import random
25
+ import shutil
26
+ from collections import OrderedDict
27
+ from concurrent.futures import ThreadPoolExecutor, as_completed
28
+ from pathlib import Path
29
+
30
+ import einops
31
+ import numpy as np
32
+ import pandas as pd
33
+ import PIL
34
+ import torch
35
+ from skimage.metrics import mean_squared_error, peak_signal_noise_ratio, structural_similarity
36
+ from tqdm import tqdm
37
+
38
+ from benchmarks.video.benchmark import TimeBenchmark
39
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
40
+ from lerobot.datasets.video_utils import (
41
+ decode_video_frames_torchvision,
42
+ encode_video_frames,
43
+ )
44
+ from lerobot.utils.constants import OBS_IMAGE
45
+
46
+ BASE_ENCODING = OrderedDict(
47
+ [
48
+ ("vcodec", "libx264"),
49
+ ("pix_fmt", "yuv444p"),
50
+ ("g", 2),
51
+ ("crf", None),
52
+ # TODO(aliberts): Add fastdecode
53
+ # ("fastdecode", 0),
54
+ ]
55
+ )
56
+
57
+
58
+ # TODO(rcadene, aliberts): move to `utils.py` folder when we want to refactor
59
+ def parse_int_or_none(value) -> int | None:
60
+ if value.lower() == "none":
61
+ return None
62
+ try:
63
+ return int(value)
64
+ except ValueError as e:
65
+ raise argparse.ArgumentTypeError(f"Invalid int or None: {value}") from e
66
+
67
+
68
+ def check_datasets_formats(repo_ids: list) -> None:
69
+ for repo_id in repo_ids:
70
+ dataset = LeRobotDataset(repo_id)
71
+ if len(dataset.meta.video_keys) > 0:
72
+ raise ValueError(
73
+ f"Use only image dataset for running this benchmark. Video dataset provided: {repo_id}"
74
+ )
75
+
76
+
77
+ def get_directory_size(directory: Path) -> int:
78
+ total_size = 0
79
+ for item in directory.rglob("*"):
80
+ if item.is_file():
81
+ total_size += item.stat().st_size
82
+ return total_size
83
+
84
+
85
+ def load_original_frames(imgs_dir: Path, timestamps: list[float], fps: int) -> torch.Tensor:
86
+ frames = []
87
+ for ts in timestamps:
88
+ idx = int(ts * fps)
89
+ frame = PIL.Image.open(imgs_dir / f"frame_{idx:06d}.png")
90
+ frame = torch.from_numpy(np.array(frame))
91
+ frame = frame.type(torch.float32) / 255
92
+ frame = einops.rearrange(frame, "h w c -> c h w")
93
+ frames.append(frame)
94
+ return torch.stack(frames)
95
+
96
+
97
+ def save_decoded_frames(
98
+ imgs_dir: Path, save_dir: Path, frames: torch.Tensor, timestamps: list[float], fps: int
99
+ ) -> None:
100
+ if save_dir.exists() and len(list(save_dir.glob("frame_*.png"))) == len(timestamps):
101
+ return
102
+
103
+ save_dir.mkdir(parents=True, exist_ok=True)
104
+ for i, ts in enumerate(timestamps):
105
+ idx = int(ts * fps)
106
+ frame_hwc = (frames[i].permute((1, 2, 0)) * 255).type(torch.uint8).cpu().numpy()
107
+ PIL.Image.fromarray(frame_hwc).save(save_dir / f"frame_{idx:06d}_decoded.png")
108
+ shutil.copyfile(imgs_dir / f"frame_{idx:06d}.png", save_dir / f"frame_{idx:06d}_original.png")
109
+
110
+
111
+ def save_first_episode(imgs_dir: Path, dataset: LeRobotDataset) -> None:
112
+ episode_index = 0
113
+ ep_num_images = dataset.meta.episodes["length"][episode_index]
114
+ if imgs_dir.exists() and len(list(imgs_dir.glob("frame_*.png"))) == ep_num_images:
115
+ return
116
+
117
+ imgs_dir.mkdir(parents=True, exist_ok=True)
118
+ hf_dataset = dataset.hf_dataset.with_format(None)
119
+
120
+ # We only save images from the first camera
121
+ img_keys = [key for key in hf_dataset.features if key.startswith(OBS_IMAGE)]
122
+ imgs_dataset = hf_dataset.select_columns(img_keys[0])
123
+
124
+ for i, item in enumerate(
125
+ tqdm(imgs_dataset, desc=f"saving {dataset.repo_id} first episode images", leave=False)
126
+ ):
127
+ img = item[img_keys[0]]
128
+ img.save(str(imgs_dir / f"frame_{i:06d}.png"), quality=100)
129
+
130
+ if i >= ep_num_images - 1:
131
+ break
132
+
133
+
134
+ def sample_timestamps(timestamps_mode: str, ep_num_images: int, fps: int) -> list[float]:
135
+ # Start at index 5 so that 2_frames_4_space (idx - 5) and 6_frames (idx - 5 ... idx) stay within the episode
136
+ idx = random.randint(5, ep_num_images - 1)
137
+ match timestamps_mode:
138
+ case "1_frame":
139
+ frame_indexes = [idx]
140
+ case "2_frames":
141
+ frame_indexes = [idx - 1, idx]
142
+ case "2_frames_4_space":
143
+ frame_indexes = [idx - 5, idx]
144
+ case "6_frames":
145
+ frame_indexes = [idx - i for i in range(6)][::-1]
146
+ case _:
147
+ raise ValueError(timestamps_mode)
148
+
149
+ return [idx / fps for idx in frame_indexes]
150
+
151
+
152
+ def decode_video_frames(
153
+ video_path: str,
154
+ timestamps: list[float],
155
+ tolerance_s: float,
156
+ backend: str,
157
+ ) -> torch.Tensor:
158
+ if backend in ["pyav", "video_reader"]:
159
+ return decode_video_frames_torchvision(video_path, timestamps, tolerance_s, backend)
160
+ else:
161
+ raise NotImplementedError(backend)
162
+
163
+
164
+ def benchmark_decoding(
165
+ imgs_dir: Path,
166
+ video_path: Path,
167
+ timestamps_mode: str,
168
+ backend: str,
169
+ ep_num_images: int,
170
+ fps: int,
171
+ num_samples: int = 50,
172
+ num_workers: int = 4,
173
+ save_frames: bool = False,
174
+ ) -> dict:
175
+ def process_sample(sample: int):
176
+ time_benchmark = TimeBenchmark()
177
+ timestamps = sample_timestamps(timestamps_mode, ep_num_images, fps)
178
+ num_frames = len(timestamps)
179
+ result = {
180
+ "psnr_values": [],
181
+ "ssim_values": [],
182
+ "mse_values": [],
183
+ }
184
+
185
+ with time_benchmark:
186
+ frames = decode_video_frames(video_path, timestamps=timestamps, tolerance_s=5e-1, backend=backend)
187
+ result["load_time_video_ms"] = time_benchmark.result_ms / num_frames
188
+
189
+ with time_benchmark:
190
+ original_frames = load_original_frames(imgs_dir, timestamps, fps)
191
+ result["load_time_images_ms"] = time_benchmark.result_ms / num_frames
192
+
193
+ frames_np, original_frames_np = frames.numpy(), original_frames.numpy()
194
+ for i in range(num_frames):
195
+ result["mse_values"].append(mean_squared_error(original_frames_np[i], frames_np[i]))
196
+ result["psnr_values"].append(
197
+ peak_signal_noise_ratio(original_frames_np[i], frames_np[i], data_range=1.0)
198
+ )
199
+ result["ssim_values"].append(
200
+ structural_similarity(original_frames_np[i], frames_np[i], data_range=1.0, channel_axis=0)
201
+ )
202
+
203
+ if save_frames and sample == 0:
204
+ save_dir = video_path.with_suffix("") / f"{timestamps_mode}_{backend}"
205
+ save_decoded_frames(imgs_dir, save_dir, frames, timestamps, fps)
206
+
207
+ return result
208
+
209
+ load_times_video_ms = []
210
+ load_times_images_ms = []
211
+ mse_values = []
212
+ psnr_values = []
213
+ ssim_values = []
214
+
215
+ # A sample is a single set of decoded frames specified by timestamps_mode (e.g. a single frame, 2 frames, etc.).
216
+ # For each sample, we record metrics (loading time and quality metrics) which are then averaged over all samples.
217
+ # As these samples are independent, we run them in parallel threads to speed up the benchmark.
218
+ with ThreadPoolExecutor(max_workers=num_workers) as executor:
219
+ futures = [executor.submit(process_sample, i) for i in range(num_samples)]
220
+ for future in tqdm(as_completed(futures), total=num_samples, desc="samples", leave=False):
221
+ result = future.result()
222
+ load_times_video_ms.append(result["load_time_video_ms"])
223
+ load_times_images_ms.append(result["load_time_images_ms"])
224
+ psnr_values.extend(result["psnr_values"])
225
+ ssim_values.extend(result["ssim_values"])
226
+ mse_values.extend(result["mse_values"])
227
+
228
+ avg_load_time_video_ms = float(np.array(load_times_video_ms).mean())
229
+ avg_load_time_images_ms = float(np.array(load_times_images_ms).mean())
230
+ video_images_load_time_ratio = avg_load_time_video_ms / avg_load_time_images_ms
231
+
232
+ return {
233
+ "avg_load_time_video_ms": avg_load_time_video_ms,
234
+ "avg_load_time_images_ms": avg_load_time_images_ms,
235
+ "video_images_load_time_ratio": video_images_load_time_ratio,
236
+ "avg_mse": float(np.mean(mse_values)),
237
+ "avg_psnr": float(np.mean(psnr_values)),
238
+ "avg_ssim": float(np.mean(ssim_values)),
239
+ }
240
+
241
+
242
+ def benchmark_encoding_decoding(
243
+ dataset: LeRobotDataset,
244
+ video_path: Path,
245
+ imgs_dir: Path,
246
+ encoding_cfg: dict,
247
+ decoding_cfg: dict,
248
+ num_samples: int,
249
+ num_workers: int,
250
+ save_frames: bool,
251
+ overwrite: bool = False,
252
+ seed: int = 1337,
253
+ ) -> list[dict]:
254
+ fps = dataset.fps
255
+
256
+ if overwrite or not video_path.is_file():
257
+ tqdm.write(f"encoding {video_path}")
258
+ encode_video_frames(
259
+ imgs_dir=imgs_dir,
260
+ video_path=video_path,
261
+ fps=fps,
262
+ vcodec=encoding_cfg["vcodec"],
263
+ pix_fmt=encoding_cfg["pix_fmt"],
264
+ g=encoding_cfg.get("g"),
265
+ crf=encoding_cfg.get("crf"),
266
+ # fast_decode=encoding_cfg.get("fastdecode"),
267
+ overwrite=True,
268
+ )
269
+
270
+ episode_index = 0
271
+ ep_num_images = dataset.meta.episodes["length"][episode_index]
272
+ height, width = tuple(dataset[0][dataset.meta.camera_keys[0]].shape[-2:])
273
+ num_pixels = width * height
274
+ video_size_bytes = video_path.stat().st_size
275
+ images_size_bytes = get_directory_size(imgs_dir)
276
+ video_images_size_ratio = video_size_bytes / images_size_bytes
277
+
278
+ random.seed(seed)
279
+ benchmark_table = []
280
+ for timestamps_mode in tqdm(
281
+ decoding_cfg["timestamps_modes"], desc="decodings (timestamps_modes)", leave=False
282
+ ):
283
+ for backend in tqdm(decoding_cfg["backends"], desc="decodings (backends)", leave=False):
284
+ benchmark_row = benchmark_decoding(
285
+ imgs_dir,
286
+ video_path,
287
+ timestamps_mode,
288
+ backend,
289
+ ep_num_images,
290
+ fps,
291
+ num_samples,
292
+ num_workers,
293
+ save_frames,
294
+ )
295
+ benchmark_row.update(
296
+ **{
297
+ "repo_id": dataset.repo_id,
298
+ "resolution": f"{width} x {height}",
299
+ "num_pixels": num_pixels,
300
+ "video_size_bytes": video_size_bytes,
301
+ "images_size_bytes": images_size_bytes,
302
+ "video_images_size_ratio": video_images_size_ratio,
303
+ "timestamps_mode": timestamps_mode,
304
+ "backend": backend,
305
+ },
306
+ **encoding_cfg,
307
+ )
308
+ benchmark_table.append(benchmark_row)
309
+
310
+ return benchmark_table
311
+
312
+
313
+ def main(
314
+ output_dir: Path,
315
+ repo_ids: list[str],
316
+ vcodec: list[str],
317
+ pix_fmt: list[str],
318
+ g: list[int],
319
+ crf: list[int],
320
+ # fastdecode: list[int],
321
+ timestamps_modes: list[str],
322
+ backends: list[str],
323
+ num_samples: int,
324
+ num_workers: int,
325
+ save_frames: bool,
326
+ ):
327
+ check_datasets_formats(repo_ids)
328
+ encoding_benchmarks = {
329
+ "g": g,
330
+ "crf": crf,
331
+ # "fastdecode": fastdecode,
332
+ }
333
+ decoding_benchmarks = {
334
+ "timestamps_modes": timestamps_modes,
335
+ "backends": backends,
336
+ }
337
+ headers = ["repo_id", "resolution", "num_pixels"]
338
+ headers += list(BASE_ENCODING.keys())
339
+ headers += [
340
+ "timestamps_mode",
341
+ "backend",
342
+ "video_size_bytes",
343
+ "images_size_bytes",
344
+ "video_images_size_ratio",
345
+ "avg_load_time_video_ms",
346
+ "avg_load_time_images_ms",
347
+ "video_images_load_time_ratio",
348
+ "avg_mse",
349
+ "avg_psnr",
350
+ "avg_ssim",
351
+ ]
352
+ file_paths = []
353
+ for video_codec in tqdm(vcodec, desc="encodings (vcodec)"):
354
+ for pixel_format in tqdm(pix_fmt, desc="encodings (pix_fmt)", leave=False):
355
+ benchmark_table = []
356
+ for repo_id in tqdm(repo_ids, desc="encodings (datasets)", leave=False):
357
+ dataset = LeRobotDataset(repo_id)
358
+ imgs_dir = output_dir / "images" / dataset.repo_id.replace("/", "_")
359
+ # We only use the first episode
360
+ save_first_episode(imgs_dir, dataset)
361
+ for key, values in tqdm(encoding_benchmarks.items(), desc="encodings (g, crf)", leave=False):
362
+ for value in tqdm(values, desc=f"encodings ({key})", leave=False):
363
+ encoding_cfg = BASE_ENCODING.copy()
364
+ encoding_cfg["vcodec"] = video_codec
365
+ encoding_cfg["pix_fmt"] = pixel_format
366
+ encoding_cfg[key] = value
367
+ args_path = Path("_".join(str(value) for value in encoding_cfg.values()))
368
+ video_path = output_dir / "videos" / args_path / f"{repo_id.replace('/', '_')}.mp4"
369
+ benchmark_table += benchmark_encoding_decoding(
370
+ dataset,
371
+ video_path,
372
+ imgs_dir,
373
+ encoding_cfg,
374
+ decoding_benchmarks,
375
+ num_samples,
376
+ num_workers,
377
+ save_frames,
378
+ )
379
+
380
+ # Save intermediate results
381
+ benchmark_df = pd.DataFrame(benchmark_table, columns=headers)
382
+ now = dt.datetime.now()
383
+ csv_path = (
384
+ output_dir
385
+ / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_{video_codec}_{pixel_format}_{num_samples}-samples.csv"
386
+ )
387
+ benchmark_df.to_csv(csv_path, header=True, index=False)
388
+ file_paths.append(csv_path)
389
+ del benchmark_df
390
+
391
+ # Concatenate all results
392
+ df_list = [pd.read_csv(csv_path) for csv_path in file_paths]
393
+ concatenated_df = pd.concat(df_list, ignore_index=True)
394
+ concatenated_path = output_dir / f"{now:%Y-%m-%d}_{now:%H-%M-%S}_all_{num_samples}-samples.csv"
395
+ concatenated_df.to_csv(concatenated_path, header=True, index=False)
396
+
397
+
398
+ if __name__ == "__main__":
399
+ parser = argparse.ArgumentParser()
400
+ parser.add_argument(
401
+ "--output-dir",
402
+ type=Path,
403
+ default=Path("outputs/video_benchmark"),
404
+ help="Directory where the video benchmark outputs are written.",
405
+ )
406
+ parser.add_argument(
407
+ "--repo-ids",
408
+ type=str,
409
+ nargs="*",
410
+ default=[
411
+ "lerobot/pusht_image",
412
+ "aliberts/aloha_mobile_shrimp_image",
413
+ "aliberts/paris_street",
414
+ "aliberts/kitchen",
415
+ ],
416
+ help="Datasets repo-ids to test against. First episodes only are used. Must be images.",
417
+ )
418
+ parser.add_argument(
419
+ "--vcodec",
420
+ type=str,
421
+ nargs="*",
422
+ default=["libx264", "hevc", "libsvtav1"],
423
+ help="Video codecs to be tested",
424
+ )
425
+ parser.add_argument(
426
+ "--pix-fmt",
427
+ type=str,
428
+ nargs="*",
429
+ default=["yuv444p", "yuv420p"],
430
+ help="Pixel formats (chroma subsampling) to be tested",
431
+ )
432
+ parser.add_argument(
433
+ "--g",
434
+ type=parse_int_or_none,
435
+ nargs="*",
436
+ default=[1, 2, 3, 4, 5, 6, 10, 15, 20, 40, 100, None],
437
+ help="Group of pictures sizes to be tested.",
438
+ )
439
+ parser.add_argument(
440
+ "--crf",
441
+ type=parse_int_or_none,
442
+ nargs="*",
443
+ default=[0, 5, 10, 15, 20, 25, 30, 40, 50, None],
444
+ help="Constant rate factors to be tested.",
445
+ )
446
+ # parser.add_argument(
447
+ # "--fastdecode",
448
+ # type=int,
449
+ # nargs="*",
450
+ # default=[0, 1],
451
+ # help="Use the fastdecode tuning option. 0 disables it. "
452
+ # "For libx264 and libx265/hevc, only 1 is possible. "
453
+ # "For libsvtav1, 1, 2 or 3 are possible values with a higher number meaning a faster decoding optimization",
454
+ # )
455
+ parser.add_argument(
456
+ "--timestamps-modes",
457
+ type=str,
458
+ nargs="*",
459
+ default=[
460
+ "1_frame",
461
+ "2_frames",
462
+ "2_frames_4_space",
463
+ "6_frames",
464
+ ],
465
+ help="Timestamps scenarios to be tested.",
466
+ )
467
+ parser.add_argument(
468
+ "--backends",
469
+ type=str,
470
+ nargs="*",
471
+ default=["pyav", "video_reader"],
472
+ help="Torchvision decoding backend to be tested.",
473
+ )
474
+ parser.add_argument(
475
+ "--num-samples",
476
+ type=int,
477
+ default=50,
478
+ help="Number of samples for each encoding x decoding config.",
479
+ )
480
+ parser.add_argument(
481
+ "--num-workers",
482
+ type=int,
483
+ default=10,
484
+ help="Number of processes for parallelized sample processing.",
485
+ )
486
+ parser.add_argument(
487
+ "--save-frames",
488
+ type=int,
489
+ default=0,
490
+ help="Whether to save decoded frames or not. Enter a non-zero number for true.",
491
+ )
492
+ args = parser.parse_args()
493
+ main(**vars(args))
dataset_path.py ADDED
@@ -0,0 +1,4 @@
1
+ from huggingface_hub import HfApi
2
+
3
+ hub_api = HfApi()
4
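+ # Tag a revision of the dataset repo on the Hub ("_version_" is a placeholder tag name).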
+ hub_api.create_tag("aleenatron/sample_test_aleena", tag="_version_", repo_type="dataset")
docker/Dockerfile.internal ADDED
@@ -0,0 +1,93 @@
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This Dockerfile is designed for HuggingFace internal CI environments
16
+ # that require GPU access. It starts from an NVIDIA CUDA base image.
17
+
18
+ # docker build -f docker/Dockerfile.internal -t lerobot-internal .
19
+
20
+ # Configure the base image for CI with GPU access
21
+ # TODO(Steven): Bump these versions
22
+ ARG CUDA_VERSION=12.4.1
23
+ ARG OS_VERSION=22.04
24
+ FROM nvidia/cuda:${CUDA_VERSION}-base-ubuntu${OS_VERSION}
25
+
26
+ # Define Python version argument
27
+ ARG PYTHON_VERSION=3.10
28
+
29
+ # Configure environment variables
30
+ ENV DEBIAN_FRONTEND=noninteractive \
31
+ MUJOCO_GL=egl \
32
+ PATH=/lerobot/.venv/bin:$PATH \
33
+ CUDA_VISIBLE_DEVICES=0 \
34
+ TEST_TYPE=single_gpu \
35
+ DEVICE=cuda
36
+
37
+ # Install Python, system dependencies, and uv (as root)
38
+ RUN apt-get update && apt-get install -y --no-install-recommends \
39
+ software-properties-common build-essential git curl \
40
+ libglib2.0-0 libgl1-mesa-glx libegl1-mesa ffmpeg \
41
+ libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
42
+ cmake pkg-config ninja-build \
43
+ && add-apt-repository -y ppa:deadsnakes/ppa \
44
+ && apt-get update \
45
+ && apt-get install -y --no-install-recommends \
46
+ python${PYTHON_VERSION} \
47
+ python${PYTHON_VERSION}-venv \
48
+ python${PYTHON_VERSION}-dev \
49
+ && curl -LsSf https://astral.sh/uv/install.sh | sh \
50
+ && mv /root/.local/bin/uv /usr/local/bin/uv \
51
+ && useradd --create-home --shell /bin/bash user_lerobot \
52
+ && usermod -aG sudo user_lerobot \
53
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
54
+
55
+ # Create application directory and set permissions
56
+ WORKDIR /lerobot
57
+ RUN chown -R user_lerobot:user_lerobot /lerobot
58
+
59
+ # Switch to the non-root user
60
+ USER user_lerobot
61
+
62
+ # Environment variables for testing
63
+ ENV HOME=/home/user_lerobot \
64
+ HF_HOME=/home/user_lerobot/.cache/huggingface \
65
+ HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
66
+ TORCH_HOME=/home/user_lerobot/.cache/torch \
67
+ TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
68
+
69
+ # Create the virtual environment
70
+ # We use a virtual environment inside the container (even though the container
71
+ # itself provides isolation) to ensure compatibility with the cluster and to
72
+ # prevent issues with MuJoCo and OpenGL drivers.
73
+ RUN uv venv --python python${PYTHON_VERSION}
74
+
75
+ # Install Python dependencies for caching
76
+ COPY --chown=user_lerobot:user_lerobot pyproject.toml README.md MANIFEST.in ./
77
+ COPY --chown=user_lerobot:user_lerobot src/ src/
78
+
79
+ ARG UNBOUND_DEPS=false
80
+
81
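+ # When UNBOUND_DEPS=true, strip upper version bounds (e.g. ", <2.0") from
+ # pyproject.toml so the image installs the latest dependency releases.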
+ RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
82
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
83
+ echo "Dependencies unbound:" && cat pyproject.toml; \
84
+ fi
85
+
86
+ RUN uv pip install --no-cache ".[all]"
87
+
88
+ # Copy the rest of the application source code
89
+ # Make sure to have the git-LFS files for testing
90
+ COPY --chown=user_lerobot:user_lerobot . .
91
+
92
+ # Set the default command
93
+ CMD ["/bin/bash"]
docker/Dockerfile.user ADDED
@@ -0,0 +1,79 @@
1
+ # Copyright 2025 The HuggingFace Inc. team. All rights reserved.
2
+ #
3
+ # Licensed under the Apache License, Version 2.0 (the "License");
4
+ # you may not use this file except in compliance with the License.
5
+ # You may obtain a copy of the License at
6
+ #
7
+ # http://www.apache.org/licenses/LICENSE-2.0
8
+ #
9
+ # Unless required by applicable law or agreed to in writing, software
10
+ # distributed under the License is distributed on an "AS IS" BASIS,
11
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
12
+ # See the License for the specific language governing permissions and
13
+ # limitations under the License.
14
+
15
+ # This Dockerfile is designed for a lerobot user who wants to
16
+ # experiment with the project. It starts from a Python slim base image.
17
+
18
+ # docker build -f docker/Dockerfile.user -t lerobot-user .
19
+ # docker run -it --rm lerobot-user
20
+
21
+ # Configure the base image
22
+ ARG PYTHON_VERSION=3.10
23
+ FROM python:${PYTHON_VERSION}-slim
24
+
25
+ # Configure environment variables
26
+ ENV DEBIAN_FRONTEND=noninteractive \
27
+ MUJOCO_GL=egl \
28
+ PATH=/lerobot/.venv/bin:$PATH
29
+
30
+ # Install system dependencies and uv (as root)
31
+ RUN apt-get update && apt-get install -y --no-install-recommends \
32
+ build-essential git curl libglib2.0-0 libegl1-mesa-dev ffmpeg \
33
+ libusb-1.0-0-dev speech-dispatcher libgeos-dev portaudio19-dev \
34
+ cmake pkg-config ninja-build \
35
+ && curl -LsSf https://astral.sh/uv/install.sh | sh \
36
+ && mv /root/.local/bin/uv /usr/local/bin/uv \
37
+ && useradd --create-home --shell /bin/bash user_lerobot \
38
+ && usermod -aG sudo user_lerobot \
39
+ && apt-get clean && rm -rf /var/lib/apt/lists/*
40
+
41
+ # Create application directory and set permissions
42
+ WORKDIR /lerobot
43
+ RUN chown -R user_lerobot:user_lerobot /lerobot
44
+
45
+ # Switch to the non-root user
46
+ USER user_lerobot
47
+
48
+ # Environment variables for testing
49
+ ENV HOME=/home/user_lerobot \
50
+ HF_HOME=/home/user_lerobot/.cache/huggingface \
51
+ HF_LEROBOT_HOME=/home/user_lerobot/.cache/huggingface/lerobot \
52
+ TORCH_HOME=/home/user_lerobot/.cache/torch \
53
+ TRITON_CACHE_DIR=/home/user_lerobot/.cache/triton
54
+
55
+ # Create the virtual environment
56
+ # We use a virtual environment inside the container (even though the container
57
+ # itself provides isolation) to closely resemble local development and to let users
58
+ # run other Python projects in the same container without dependency conflicts.
59
+ RUN uv venv
60
+
61
+ # Install Python dependencies for caching
62
+ COPY --chown=user_lerobot:user_lerobot pyproject.toml README.md MANIFEST.in ./
63
+ COPY --chown=user_lerobot:user_lerobot src/ src/
64
+
65
+ ARG UNBOUND_DEPS=false
66
+
67
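+ # When UNBOUND_DEPS=true, strip upper version bounds (e.g. ", <2.0") from
+ # pyproject.toml so the image installs the latest dependency releases.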
+ RUN if [ "$UNBOUND_DEPS" = "true" ]; then \
68
+ sed -i 's/,[[:space:]]*<[0-9\.]*//g' pyproject.toml; \
69
+ echo "Dependencies unbound:" && cat pyproject.toml; \
70
+ fi
71
+
72
+ RUN uv pip install --no-cache ".[all]"
73
+
74
+ # Copy the rest of the application code
75
+ # Make sure to have the git-LFS files for testing
76
+ COPY --chown=user_lerobot:user_lerobot . .
77
+
78
+ # Set the default command
79
+ CMD ["/bin/bash"]
docs-requirements.txt ADDED
@@ -0,0 +1,3 @@
1
+ # docs-requirements.txt
2
+ hf-doc-builder @ git+https://github.com/huggingface/doc-builder.git@main
3
+ watchdog>=6.0.0
docs/README.md ADDED
@@ -0,0 +1,139 @@
1
+ <!---
2
+ Copyright 2020 The HuggingFace Team. All rights reserved.
3
+
4
+ Licensed under the Apache License, Version 2.0 (the "License");
5
+ you may not use this file except in compliance with the License.
6
+ You may obtain a copy of the License at
7
+
8
+ http://www.apache.org/licenses/LICENSE-2.0
9
+
10
+ Unless required by applicable law or agreed to in writing, software
11
+ distributed under the License is distributed on an "AS IS" BASIS,
12
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
13
+ See the License for the specific language governing permissions and
14
+ limitations under the License.
15
+ -->
16
+
17
+ # Generating the documentation
18
+
19
+ To generate the documentation, you first have to build it. Several packages are necessary to build the docs;
20
+ you can install them with the following command, run at the root of the code repository:
21
+
22
+ ```bash
23
+ pip install -e . -r docs-requirements.txt
24
+ ```
25
+
26
+ You will also need `nodejs`. Please refer to the Node.js [installation page](https://nodejs.org/en/download).
27
+
28
+ ---
29
+
30
+ **NOTE**
31
+
32
+ You only need to generate the documentation to inspect it locally (if you're planning changes and want to
33
+ check how they look before committing for instance). You don't have to `git commit` the built documentation.
34
+
35
+ ---
36
+
37
+ ## Building the documentation
38
+
39
+ Once you have set up the `doc-builder` and additional packages, you can generate the documentation by
40
+ typing the following command:
41
+
42
+ ```bash
43
+ doc-builder build lerobot docs/source/ --build_dir ~/tmp/test-build
44
+ ```
45
+
46
+ You can adapt the `--build_dir` to set any temporary folder that you prefer. This command will create it and generate
47
+ the MDX files that will be rendered as the documentation on the main website. You can inspect them in your favorite
48
+ Markdown editor.
49
+
50
+ ## Previewing the documentation
51
+
52
+ To preview the docs, first install the `watchdog` module with:
53
+
54
+ ```bash
55
+ pip install watchdog
56
+ ```
57
+
58
+ Then run the following command:
59
+
60
+ ```bash
61
+ doc-builder preview lerobot docs/source/
62
+ ```
63
+
64
+ The docs will be viewable at [http://localhost:3000](http://localhost:3000). You can also preview the docs once you have opened a PR: a bot will add a comment with a link to where the documentation with your changes lives.
65
+
66
+ ---
67
+
68
+ **NOTE**
69
+
70
+ The `preview` command only works with existing doc files. When you add a completely new file, you need to update `_toctree.yml` & restart `preview` command (`ctrl-c` to stop it & call `doc-builder preview ...` again).
71
+
72
+ ---
73
+
74
+ ## Adding a new element to the navigation bar
75
+
76
+ Accepted files are Markdown (.md).
77
+
78
+ Create a file with its extension and put it in the source directory. You can then link it to the toc-tree by putting
79
+ the filename without the extension in the [`_toctree.yml`](https://github.com/huggingface/lerobot/blob/main/docs/source/_toctree.yml) file.
80
+
81
+ ## Renaming section headers and moving sections
82
+
83
+ It helps to keep old links working when renaming a section header and/or moving sections from one document to another. This is because old links are likely to be used in issues, forums, and social media, and it makes for a far better user experience if users reading those months later can still easily navigate to the originally intended information.
84
+
85
+ Therefore, we simply keep a little map of moved sections at the end of the document where the original section was. The key is to preserve the original anchor.
86
+
87
+ So if you renamed a section from: "Section A" to "Section B", then you can add at the end of the file:
88
+
89
+ ```
90
+ Sections that were moved:
91
+
92
+ [ <a href="#section-b">Section A</a><a id="section-a"></a> ]
93
+ ```
94
+
95
+ and of course, if you moved it to another file, then:
96
+
97
+ ```
98
+ Sections that were moved:
99
+
100
+ [ <a href="../new-file#section-b">Section A</a><a id="section-a"></a> ]
101
+ ```
102
+
103
+ Use the relative style to link to the new file so that the versioned docs continue to work.
104
+
105
+ For an example of a rich moved sections set please see the very end of [the transformers Trainer doc](https://github.com/huggingface/transformers/blob/main/docs/source/en/main_classes/trainer.md).
106
+
107
+ ### Adding a new tutorial
108
+
109
+ Adding a new tutorial or section is done in two steps:
110
+
111
+ - Add a new file under `./source`. This file should be in Markdown (.md).
112
+ - Link that file in `./source/_toctree.yml` on the correct toc-tree.
113
+
114
+ Make sure to put your new file under the proper section. If you have a doubt, feel free to ask in a GitHub issue or PR.
115
+
116
+ ### Writing source documentation
117
+
118
+ Values that should be put in `code` should be surrounded by backticks: \`like so\`. Note that argument names
119
+ and objects like True, None or any strings should usually be put in `code`.
120
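+ For example, write "returns `None` if `overwrite=False`" rather than "returns None if overwrite=False" (`overwrite` here is just an illustrative argument name).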
+
121
+ #### Writing a multi-line code block
122
+
123
+ Multi-line code blocks can be useful for displaying examples. They are done between two lines of three backticks as usual in Markdown:
124
+
125
+ ````
126
+ ```
127
+ # first line of code
128
+ # second line
129
+ # etc
130
+ ```
131
+ ````
132
+
133
+ #### Adding an image
134
+
135
+ Due to the rapidly growing repository, it is important to make sure that no files that would significantly weigh down the repository are added. This includes images, videos, and other non-text files. We prefer to leverage a hf.co hosted `dataset` like
136
+ the ones hosted on [`hf-internal-testing`](https://huggingface.co/hf-internal-testing) in which to place these files and reference
137
+ them by URL. We recommend putting them in the following dataset: [huggingface/documentation-images](https://huggingface.co/datasets/huggingface/documentation-images).
138
+ If you are an external contributor, feel free to add the images to your PR and ask a Hugging Face member to migrate your images
139
+ to this dataset.
docs/source/_toctree.yml ADDED
@@ -0,0 +1,90 @@
1
+ - sections:
2
+ - local: index
3
+ title: LeRobot
4
+ - local: installation
5
+ title: Installation
6
+ title: Get started
7
+ - sections:
8
+ - local: il_robots
9
+ title: Imitation Learning for Robots
10
+ - local: cameras
11
+ title: Cameras
12
+ - local: integrate_hardware
13
+ title: Bring Your Own Hardware
14
+ - local: hilserl
15
+ title: Train a Robot with RL
16
+ - local: hilserl_sim
17
+ title: Train RL in Simulation
18
+ - local: async
19
+ title: Use Async Inference
20
+ - local: multi_gpu_training
21
+ title: Multi GPU training
22
+ title: "Tutorials"
23
+ - sections:
24
+ - local: lerobot-dataset-v3
25
+ title: Using LeRobotDataset
26
+ - local: porting_datasets_v3
27
+ title: Porting Large Datasets
28
+ - local: using_dataset_tools
29
+ title: Using the Dataset Tools
30
+ title: "Datasets"
31
+ - sections:
32
+ - local: act
33
+ title: ACT
34
+ - local: smolvla
35
+ title: SmolVLA
36
+ - local: pi0
37
+ title: π₀ (Pi0)
38
+ - local: pi05
39
+ title: π₀.₅ (Pi05)
40
+ - local: groot
41
+ title: NVIDIA GR00T N1.5
42
+ title: "Policies"
43
+ - sections:
44
+ - local: il_sim
45
+ title: Imitation Learning in Sim
46
+ - local: libero
47
+ title: Using Libero
48
+ - local: metaworld
49
+ title: Using MetaWorld
50
+ title: "Simulation"
51
+ - sections:
52
+ - local: introduction_processors
53
+ title: Introduction to Robot Processors
54
+ - local: debug_processor_pipeline
55
+ title: Debug your processor pipeline
56
+ - local: implement_your_own_processor
57
+ title: Implement your own processor
58
+ - local: processors_robots_teleop
59
+ title: Processors for Robots and Teleoperators
60
+ title: "Robot Processors"
61
+ - sections:
62
+ - local: so101
63
+ title: SO-101
64
+ - local: so100
65
+ title: SO-100
66
+ - local: koch
67
+ title: Koch v1.1
68
+ - local: lekiwi
69
+ title: LeKiwi
70
+ - local: hope_jr
71
+ title: Hope Jr
72
+ - local: reachy2
73
+ title: Reachy 2
74
+ title: "Robots"
75
+ - sections:
76
+ - local: phone_teleop
77
+ title: Phone
78
+ title: "Teleoperators"
79
+ - sections:
80
+ - local: notebooks
81
+ title: Notebooks
82
+ - local: feetech
83
+ title: Updating Feetech Firmware
84
+ title: "Resources"
85
+ - sections:
86
+ - local: contributing
87
+ title: Contribute to LeRobot
88
+ - local: backwardcomp
89
+ title: Backward compatibility
90
+ title: "About"
docs/source/act.mdx ADDED
@@ -0,0 +1,92 @@
1
+ # ACT (Action Chunking with Transformers)
2
+
3
+ ACT is a **lightweight and efficient policy for imitation learning**, especially well-suited for fine-grained manipulation tasks. It's the **first model we recommend when you're starting out** with LeRobot due to its fast training time, low computational requirements, and strong performance.
4
+
5
+ <div class="video-container">
6
+ <iframe
7
+ width="100%"
8
+ height="415"
9
+ src="https://www.youtube.com/embed/ft73x0LfGpM"
10
+ title="LeRobot ACT Tutorial"
11
+ frameborder="0"
12
+ allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture"
13
+ allowfullscreen
14
+ ></iframe>
15
+ </div>
16
+
17
+ _Watch this tutorial from the LeRobot team to learn how ACT works: [LeRobot ACT Tutorial](https://www.youtube.com/watch?v=ft73x0LfGpM)_
18
+
19
+ ## Model Overview
20
+
21
+ Action Chunking with Transformers (ACT) was introduced in the paper [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://arxiv.org/abs/2304.13705) by Zhao et al. The policy was designed to enable precise, contact-rich manipulation tasks using affordable hardware and minimal demonstration data.
22
+
23
+ ### Why ACT is Great for Beginners
24
+
25
+ ACT stands out as an excellent starting point for several reasons:
26
+
27
+ - **Fast Training**: Trains in a few hours on a single GPU
28
+ - **Lightweight**: Only ~80M parameters, making it efficient and easy to work with
29
+ - **Data Efficient**: Often achieves high success rates with just 50 demonstrations
30
+
31
+ ### Architecture
32
+
33
+ ACT uses a transformer-based architecture with three main components:
34
+
35
+ 1. **Vision Backbone**: ResNet-18 processes images from multiple camera viewpoints
36
+ 2. **Transformer Encoder**: Synthesizes information from camera features, joint positions, and a learned latent variable
37
+ 3. **Transformer Decoder**: Generates coherent action sequences using cross-attention
38
+
39
+ The policy takes as input:
40
+
41
+ - Multiple RGB images (e.g., from wrist cameras, front/top cameras)
42
+ - Current robot joint positions
43
+ - A latent style variable `z` (learned during training, set to zero during inference)
44
+
45
+ And outputs a chunk of `k` future action sequences.
46
+
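+ To make this interface concrete, below is a minimal inference sketch. Treat it as illustrative: it assumes the `ACTPolicy` class at `lerobot.policies.act.modeling_act`, a placeholder checkpoint id (`<HF_USER>/act_policy`), and observation keys/shapes matching the features your policy was trained on (check the `config.json` on the Hub). Depending on your LeRobot version, you may also need the external pre/post-processors described in the [Backward compatibility](./backwardcomp) guide.
+ 
+ ```python
+ import torch
+ 
+ from lerobot.policies.act.modeling_act import ACTPolicy
+ 
+ # Load a trained checkpoint from the Hub (placeholder repo id).
+ policy = ACTPolicy.from_pretrained("<HF_USER>/act_policy")
+ policy.eval()
+ policy.reset()  # clear the internal action queue (e.g. between episodes)
+ 
+ # Dummy observation batch; real keys and shapes must match the policy's input features.
+ batch = {
+     "observation.state": torch.zeros(1, 6),                   # joint positions
+     "observation.images.front": torch.zeros(1, 3, 480, 640),  # RGB image in [0, 1]
+ }
+ 
+ with torch.no_grad():
+     action = policy.select_action(batch)  # pops one action from the predicted chunk
+ print(action.shape)  # (1, action_dim)
+ ```
+ 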
47
+ ## Installation Requirements
48
+
49
+ 1. Install LeRobot by following our [Installation Guide](./installation).
50
+ 2. ACT is included in the base LeRobot installation, so no additional dependencies are needed!
51
+
52
+ ## Training ACT
53
+
54
+ ACT works seamlessly with the standard LeRobot training pipeline. Here's a complete example for training ACT on your dataset:
55
+
56
+ ```bash
57
+ lerobot-train \
58
+ --dataset.repo_id=${HF_USER}/your_dataset \
59
+ --policy.type=act \
60
+ --output_dir=outputs/train/act_your_dataset \
61
+ --job_name=act_your_dataset \
62
+ --policy.device=cuda \
63
+ --wandb.enable=true \
64
+ --policy.repo_id=${HF_USER}/act_policy
65
+ ```
66
+
67
+ ### Training Tips
68
+
69
+ 1. **Start with defaults**: ACT's default hyperparameters work well for most tasks
70
+ 2. **Training duration**: Expect a few hours for 100k training steps on a single GPU
71
+ 3. **Batch size**: Start with batch size 8 and adjust based on your GPU memory
72
+
73
+ ### Train using Google Colab
74
+
75
+ If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
76
+
77
+ ## Evaluating ACT
78
+
79
+ Once training is complete, you can evaluate your ACT policy using the `lerobot-record` command with your trained policy. This will run inference and record evaluation episodes:
80
+
81
+ ```bash
82
+ lerobot-record \
83
+ --robot.type=so100_follower \
84
+ --robot.port=/dev/ttyACM0 \
85
+ --robot.id=my_robot \
86
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \
87
+ --display_data=true \
88
+ --dataset.repo_id=${HF_USER}/eval_act_your_dataset \
89
+ --dataset.num_episodes=10 \
90
+ --dataset.single_task="Your task description" \
91
+ --policy.path=${HF_USER}/act_policy
92
+ ```
docs/source/async.mdx ADDED
@@ -0,0 +1,312 @@
1
+ # Asynchronous Inference
2
+
3
+ With our [SmolVLA](https://huggingface.co/papers/2506.01844) we introduced a new way to run inference on real-world robots, **decoupling action prediction from action execution**.
4
+ In this tutorial, we'll show how to use asynchronous inference (_async inference_) with a finetuned version of SmolVLA; the same workflow applies to every policy supported by LeRobot.
5
+ **Try async inference with all the policies** supported by LeRobot!
6
+
7
+ **What you'll learn:**
8
+
9
+ 1. Why asynchronous inference matters and how it compares to more traditional, sequential inference.
10
+ 2. How to spin up a `PolicyServer` and connect a `RobotClient`, from the same machine or even over the network.
11
+ 3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy.
12
+
13
+ If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)!
14
+
15
+ In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions, eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours.
16
+ This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions.
17
+
18
+ ---
19
+
20
+ ## Getting started with async inference
21
+
22
+ You can read more information on asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment.
23
+
24
+ First, install `lerobot` with the `async` tag, to install the extra dependencies required to run async inference.
25
+
26
+ ```shell
27
+ pip install -e ".[async]"
28
+ ```
29
+
30
+ Then, spin up a policy server (in one terminal, or on a separate machine), specifying the host address and port for the client to connect to.
31
+ You can do so by running:
32
+
33
+ ```shell
34
+ python -m lerobot.async_inference.policy_server \
35
+ --host=127.0.0.1 \
36
+ --port=8080
37
+ ```
38
+
39
+ This will start a policy server listening on `127.0.0.1:8080` (`localhost`, port 8080). At this stage, the policy server is empty, as all information related to which policy to run and with which parameters is specified during the first handshake with the client. Spin up a client with:
40
+
41
+ ```shell
42
+ python -m lerobot.async_inference.robot_client \
43
+ --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
44
+ --robot.type=so100_follower \ # ROBOT: your robot type
45
+ --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
46
+ --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
47
+ --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
48
+ --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
49
+ --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
50
+ --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
51
+ --policy_device=mps \ # POLICY: the device to run the policy on, on the server
52
+ --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
53
+ --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
54
+ --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
55
+ --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
56
+ ```
57
+
58
+ In summary, you need to specify instructions for:
59
+
60
+ - `SERVER`: the address and port of the policy server
61
+ - `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot
62
+ - `POLICY`: the type of policy to run, and the model name/path on the server to the checkpoint to run. You also need to specify which device the server should use, and how many actions to output at once (capped at the policy's max actions value).
63
+ - `CLIENT`: the threshold for the chunk size before sending a new observation to the server, and the function to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters.
64
+
65
+ Importantly,
66
+
67
+ - `actions_per_chunk` and `chunk_size_threshold` are key parameters to tune for your setup.
68
+ - `aggregate_fn_name` is the function used to aggregate actions on overlapping portions of consecutive chunks. You can either add a new one to a registry of functions, or add your own in `robot_client.py` (see [here](NOTE:addlinktoLOC)); a minimal sketch is shown after this list.
69
+ - `debug_visualize_queue_size` is a useful tool to tune the `CLIENT` parameters.
70
+
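+ For reference, here is a minimal sketch of what such an aggregation function can look like. The name, signature, and registration mechanism below are illustrative assumptions, not the exact API of `robot_client.py`:
+ 
+ ```python
+ import numpy as np
+ 
+ def my_weighted_average(old_chunk: np.ndarray, new_chunk: np.ndarray, alpha: float = 0.7) -> np.ndarray:
+     """Blend the overlapping portion of two action chunks.
+ 
+     Both arrays cover the same timesteps, with shape (n_overlap, action_dim);
+     `alpha` weighs the fresher prediction more heavily.
+     """
+     return alpha * new_chunk + (1.0 - alpha) * old_chunk
+ ```
+ 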
71
+ ## Done! You should see your robot moving around by now 😉
72
+
73
+ ## Async vs. synchronous inference
74
+
75
+ Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_: frames where the robot sits idle, waiting for the policy's output, a new action chunk.
76
+ In turn, inference suffers from noticeable real-time lags, where the robot simply stops acting due to the lack of available actions.
77
+ With robotics models increasing in size, this problem only risks becoming more severe.
78
+
79
+ <p align="center">
80
+ <img
81
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/sync.png"
82
+ width="80%"
83
+ ></img>
84
+ </p>
85
+ <p align="center">
86
+ <i>Synchronous inference</i> makes the robot idle while the policy is
87
+ computing the next chunk of actions.
88
+ </p>
89
+
90
+ To overcome this, we designed async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames.
91
+ Crucially, with async inference, the next action chunk is computed _before_ the current one is exhausted, resulting in no idleness.
92
+ Higher adaptability is ensured by aggregating the different action chunks on overlapping portions, obtaining an up-to-date plan and a tighter control loop.
93
+
94
+ <p align="center">
95
+ <img
96
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/async.png"
97
+ width="80%"
98
+ ></img>
99
+ </p>
100
+ <p align="center">
101
+ <i>Asynchronous inference</i> results in no idleness because the next chunk is
102
+ computed before the current chunk is exhausted.
103
+ </p>
104
+
105
+ ---
106
+
107
+ ## Start the Policy Server
108
+
109
+ Policy servers are wrappers around a `PreTrainedPolicy` interfacing them with observations coming from a robot client.
110
+ Policy servers are initialized as empty containers which are populated with the requested policy specified in the initial handshake between the robot client and the policy server.
111
+ As such, spinning up a policy server is as easy as specifying the host address and port. If you're running the policy server on the same machine as the robot client, you can use `localhost` as the host address.
112
+
113
+ <hfoptions id="start_policy_server">
114
+ <hfoption id="Command">
115
+ ```bash
116
+ python -m lerobot.async_inference.policy_server \
117
+ --host=127.0.0.1 \
118
+ --port=8080
119
+ ```
120
+ </hfoption>
121
+ <hfoption id="API example">
122
+
123
+ <!-- prettier-ignore-start -->
124
+ ```python
125
+ from lerobot.async_inference.configs import PolicyServerConfig
126
+ from lerobot.async_inference.policy_server import serve
127
+
128
+ config = PolicyServerConfig(
129
+ host="localhost",
130
+ port=8080,
131
+ )
132
+ serve(config)
133
+ ```
134
+ <!-- prettier-ignore-end -->
135
+
136
+ </hfoption>
137
+ </hfoptions>
138
+
139
+ This listens on `localhost:8080` for an incoming connection from the associated `RobotClient`, which will communicate which policy to run during the first client-server handshake.
140
+
141
+ ---
142
+
143
+ ## Launch the Robot Client
144
+
145
+ `RobotClient` is a wrapper around a `Robot` instance that connects to the (possibly remote) `PolicyServer`.
146
+ The `RobotClient` streams observations to the `PolicyServer`, and receives action chunks obtained by running inference on the server (which we assume to have better computational resources than the robot controller).
147
+
148
+ <hfoptions id="start_robot_client">
149
+ <hfoption id="Command">
150
+ ```bash
151
+ python -m lerobot.async_inference.robot_client \
152
+ --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server
153
+ --robot.type=so100_follower \ # ROBOT: your robot type
154
+ --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port
155
+ --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file
156
+ --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy
157
+ --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act`
158
+ --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc)
159
+ --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base)
160
+ --policy_device=mps \ # POLICY: the device to run the policy on, on the server
161
+ --actions_per_chunk=50 \ # POLICY: the number of actions to output at once
162
+ --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server
163
+ --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions
164
+ --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime
165
+ ```
166
+ </hfoption>
167
+ <hfoption id="API example">
168
+
169
+ <!-- prettier-ignore-start -->
170
+ ```python
171
+ import threading
172
+ from lerobot.robots.so100_follower import SO100FollowerConfig
173
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
174
+ from lerobot.async_inference.configs import RobotClientConfig
175
+ from lerobot.async_inference.robot_client import RobotClient
176
+ from lerobot.async_inference.helpers import visualize_action_queue_size
177
+
178
+ # 1. Create the robot instance
179
+ """Check out the cameras available in your setup by running `python lerobot/find_cameras.py`"""
180
+ # these cameras must match the ones expected by the policy
181
+ # check the config.json on the Hub for the policy you are using
182
+ camera_cfg = {
183
+ "top": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30),
184
+ "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30)
185
+ }
186
+
187
+ robot_cfg = SO100FollowerConfig(
188
+ port="/dev/tty.usbmodem585A0076841",
189
+ id="follower_so100",
190
+ cameras=camera_cfg
191
+ )
192
+
193
+ # 2. Create client configuration
194
+ client_cfg = RobotClientConfig(
195
+ robot=robot_cfg,
196
+ server_address="localhost:8080",
197
+ policy_device="mps",
198
+ policy_type="smolvla",
199
+ pretrained_name_or_path="fracapuano/smolvla_async",
200
+ chunk_size_threshold=0.5,
201
+ actions_per_chunk=50, # make sure this is less than the max actions of the policy
202
+ )
203
+
204
+ # 3. Create and start client
205
+ client = RobotClient(client_cfg)
206
+
207
+ # 4. Specify the task
208
+ task = "Don't do anything, stay still"
209
+
210
+ if client.start():
211
+ # Start action receiver thread
212
+ action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True)
213
+ action_receiver_thread.start()
214
+
215
+ try:
216
+ # Run the control loop
217
+ client.control_loop(task)
218
+ except KeyboardInterrupt:
219
+ client.stop()
220
+ action_receiver_thread.join()
221
+ # (Optionally) plot the action queue size
222
+ visualize_action_queue_size(client.action_queue_size)
223
+ ```
224
+ <!-- prettier-ignore-end -->
225
+
226
+ </hfoption>
227
+ </hfoptions>
228
+
229
+ The following two parameters are key in every setup:
230
+
231
+ <table>
232
+ <thead>
233
+ <tr>
234
+ <th>Hyperparameter</th>
235
+ <th>Default</th>
236
+ <th>What it does</th>
237
+ </tr>
238
+ </thead>
239
+ <tbody>
240
+ <tr>
241
+ <td>
242
+ <code>actions_per_chunk</code>
243
+ </td>
244
+ <td>50</td>
245
+ <td>
246
+ How many actions the policy outputs at once. Typical values: 10-50.
247
+ </td>
248
+ </tr>
249
+ <tr>
250
+ <td>
251
+ <code>chunk_size_threshold</code>
252
+ </td>
253
+ <td>0.7</td>
254
+ <td>
255
+ When the queue level drops to this fraction of `actions_per_chunk` (or below), the client sends a fresh observation.
256
+ Value in [0, 1].
257
+ </td>
258
+ </tr>
259
+ </tbody>
260
+ </table>
261
+
262
+ <Tip>
263
+ Different values of `actions_per_chunk` and `chunk_size_threshold` do result
264
+ in different behaviours.
265
+ </Tip>
266
+
267
+ On the one hand, increasing the value of `actions_per_chunk` reduces the likelihood of ending up with no actions to execute, as more actions will be available when the new chunk is computed.
268
+ However, larger values of `actions_per_chunk` might also result in less precise actions, due to the compounding errors that come with predicting actions over longer timespans.
269
+
270
+ On the other hand, increasing the value of `chunk_size_threshold` results in sending observations to the `PolicyServer` for inference more often, producing a larger number of updated action chunks that overlap on significant portions. This yields high adaptability: in the limit, one action chunk is predicted for each observation, and each chunk is only marginally consumed before a new one is produced.
271
+ This also puts more pressure on the inference pipeline, as a consequence of the many requests. Conversely, values of `chunk_size_threshold` close to 0.0 collapse to the synchronous edge case, whereby new observations are only sent out whenever the current chunk is exhausted.
272
+
273
+ We found the default values of `actions_per_chunk` and `chunk_size_threshold` to work well in the experiments we developed for the [SmolVLA paper](https://huggingface.co/papers/2506.01844), but recommend experimenting with different values to find the best fit for your setup.
274
+
275
+ ### Tuning async inference for your setup
276
+
277
+ 1. **Choose your computational resources carefully.** [PI0](https://huggingface.co/lerobot/pi0) occupies 14GB of memory at inference time, while [SmolVLA](https://huggingface.co/lerobot/smolvla_base) requires only ~2GB. You should identify the best computational resource for your use case, keeping in mind that smaller policies require fewer computational resources. The combination of policy and device used (CPU-intensive, using MPS, or the number of CUDA cores on a given NVIDIA GPU) directly impacts the average inference latency you should expect.
278
+ 2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle: it keeps stepping through its current action queue. If the two processes run at fundamentally different speeds, the client might end up with an empty queue. As such, you should reduce your `fps` if you consistently run out of actions in the queue (see the worked example after this list).
279
+ 3. **Adjust `chunk_size_threshold`**.
280
+ - Values closer to `0.0` result in almost sequential behavior. Values closer to `1.0` → send observation every step (more bandwidth, relies on good world-model).
281
+ - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` with `--debug_visualize_queue_size=True`. This will plot the action queue size evolution at runtime, and you can use it to find the value of `chunk_size_threshold` that works best for your setup.
282
+
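+ As a purely illustrative back-of-the-envelope check: at `fps=30`, one action is consumed every ~33 ms, so an average inference latency of 300 ms drains roughly 9 actions per server call. With `actions_per_chunk=50` and `chunk_size_threshold=0.5`, a new observation is sent once ~25 actions remain, leaving a comfortable margin. If your latency were closer to 1 s (~30 actions drained per call), you would need to lower the `fps`, raise the threshold, or enlarge the chunk to avoid an empty queue.
+ 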
283
+ <p align="center">
284
+ <img
285
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/async-inference/queues.png"
286
+ width="80%"
287
+ ></img>
288
+ </p>
289
+ <p align="center">
290
+ <i>
291
+ The action queue size is plotted at runtime when the
292
+ `--debug_visualize_queue_size` flag is passed, for various levels of
293
+ `chunk_size_threshold` (`g` in the SmolVLA paper).
294
+ </i>
295
+ </p>
296
+
297
+ ---
298
+
299
+ ## Conclusion
300
+
301
+ Asynchronous inference represents a significant advancement in real-time robotics control, addressing the fundamental challenge of inference latency that has long plagued robotics applications. Through this tutorial, you've learned how to implement a complete async inference pipeline that eliminates idle frames and enables smoother, more reactive robot behaviors.
302
+
303
+ **Key Takeaways:**
304
+
305
+ - **Paradigm Shift**: Async inference decouples action prediction from execution, allowing robots to continue acting while new action chunks are computed in parallel
306
+ - **Performance Benefits**: Eliminates "wait-for-inference" lags that are inherent in synchronous approaches, becoming increasingly important as policy models grow larger
307
+ - **Flexible Architecture**: The server-client design enables distributed computing, where inference can run on powerful remote hardware while maintaining real-time robot control
308
+ - **Tunable Parameters**: Success depends on properly configuring `actions_per_chunk` and `chunk_size_threshold` for your specific hardware, policy, and task requirements
309
+ - **Universal Compatibility**: Works with all LeRobot-supported policies, from lightweight ACT models to vision-language models like SmolVLA
310
+
311
+ Start experimenting with the default parameters, monitor your action queue sizes, and iteratively refine your setup to achieve optimal performance for your specific use case.
312
+ If you want to discuss this further, hop into our [Discord community](https://discord.gg/s3KuuzsPFb), or open an issue on our [GitHub repository](https://github.com/lerobot/lerobot/issues).
docs/source/backwardcomp.mdx ADDED
@@ -0,0 +1,151 @@
1
+ # Backward compatibility
2
+
3
+ ## Policy Normalization Migration (PR #1452)
4
+
5
+ **Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components.
6
+
7
+ ### What changed?
8
+
9
+ | | Before PR #1452 | After PR #1452 |
10
+ | -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ |
11
+ | **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components |
12
+ | **Model State Dict** | Contains normalization statistics | **Clean weights only** - no normalization parameters |
13
+ | **Usage** | `policy(batch)` handles everything | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` |
14
+
15
+ ### Impact on existing models
16
+
17
+ - Models trained **before** PR #1452 have normalization embedded in their weights
18
+ - These models need migration to work with the new `PolicyProcessorPipeline` system
19
+ - The migration extracts normalization statistics and creates separate processor pipelines
20
+
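+ To check whether a given checkpoint still embeds normalization, you can inspect its state dict for `normalize_inputs.*`-style keys. Below is a minimal sketch, assuming the checkpoint stores its weights in a `model.safetensors` file and that target/output normalization modules follow the same naming scheme:
+ 
+ ```python
+ from huggingface_hub import hf_hub_download
+ from safetensors.torch import load_file
+ 
+ # Download the weights and look for embedded normalization buffers.
+ path = hf_hub_download("lerobot/act_aloha_sim_transfer_cube_human", "model.safetensors")
+ state_dict = load_file(path)
+ 
+ needs_migration = any(
+     key.startswith(("normalize_inputs", "normalize_targets", "unnormalize_outputs"))
+     for key in state_dict
+ )
+ print("Needs migration:", needs_migration)
+ ```
+ 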
21
+ ### Migrating old models
22
+
23
+ Use the migration script to convert models with embedded normalization:
24
+
25
+ ```shell
26
+ python src/lerobot/processor/migrate_policy_normalization.py \
27
+ --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \
28
+ --push-to-hub \
29
+ --branch migrated
30
+ ```
31
+
32
+ The script:
33
+
34
+ 1. **Extracts** normalization statistics from model weights
35
+ 2. **Creates** external preprocessor and postprocessor pipelines
36
+ 3. **Removes** normalization layers from model weights
37
+ 4. **Saves** clean model + processor pipelines
38
+ 5. **Pushes** to Hub with automatic PR creation
39
+
40
+ ### Using migrated models
41
+
42
+ ```python
43
+ # New usage pattern (after migration)
44
+ from lerobot.policies.factory import make_policy, make_pre_post_processors
45
+
46
+ # Load model and processors separately
47
+ policy = make_policy(config, ds_meta=dataset.meta)
48
+ preprocessor, postprocessor = make_pre_post_processors(
49
+ policy_cfg=config,
50
+ dataset_stats=dataset.meta.stats
51
+ )
52
+
53
+ # Process data through pipeline
54
+ processed_batch = preprocessor(raw_batch)
55
+ action = policy.select_action(processed_batch)
56
+ final_action = postprocessor(action)
57
+ ```
58
+
59
+ ## Hardware API redesign
60
+
61
+ PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot calibration but is **not backward-compatible**. Below is an overview of what changed and how you can continue to work with datasets created before this pull request.
62
+
63
+ ### What changed?
64
+
65
+ | | Before PR #777 | After PR #777 |
66
+ | --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ |
67
+ | **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `-100...100` Gripper: `0...100` |
68
+ | **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In middle of the range for each joint** |
69
+ | **Boundary handling** | Software safeguards to detect ±180° wrap-arounds | No wrap-around logic needed due to mid-range zero |
70
+
71
+ ---
72
+
73
+ ### Impact on existing datasets
74
+
75
+ - Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly:
76
+ - Joint angles are offset and incorrectly normalized.
77
+ - Any models directly finetuned or trained on the old data will need their inputs and outputs converted.
78
+
79
+ ### Using datasets made with the previous calibration system
80
+
81
+ We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`.
82
+ Below, we walk through the modifications made in the example script so that datasets recorded with the previous calibration work.
83
+
84
+ ```diff
85
+ + key = f"{name.removeprefix('main_')}.pos"
86
+ action[key] = action_array[i].item()
87
+ + action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
88
+ + action["elbow_flex.pos"] -= 90
89
+ ```
90
+
91
+ Let's break this down.
92
+ The new codebase uses the `.pos` suffix for position observations, and the `main_` prefix has been removed:
93
+
94
+ <!-- prettier-ignore-start -->
95
+ ```python
96
+ key = f"{name.removeprefix('main_')}.pos"
97
+ ```
98
+ <!-- prettier-ignore-end -->
99
+
100
+ For `"shoulder_lift"` (id = 2), the 0 position is changed by -90 degrees and the direction is reversed compared to old calibration/code.
101
+
102
+ <!-- prettier-ignore-start -->
103
+ ```python
104
+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
105
+ ```
106
+ <!-- prettier-ignore-end -->
107
+
108
+ For `"elbow_flex"` (id = 3), the 0 position is changed by -90 degrees compared to old calibration/code.
109
+
110
+ <!-- prettier-ignore-start -->
111
+ ```python
112
+ action["elbow_flex.pos"] -= 90
113
+ ```
114
+ <!-- prettier-ignore-end -->
115
+
116
+ To use degrees normalization, we then set the `--robot.use_degrees` option to `true`.
117
+
118
+ ```diff
119
+ python examples/backward_compatibility/replay.py \
120
+ --robot.type=so101_follower \
121
+ --robot.port=/dev/tty.usbmodem5A460814411 \
122
+ --robot.id=blue \
123
+ + --robot.use_degrees=true \
124
+ --dataset.repo_id=my_dataset_id \
125
+ --dataset.episode=0
126
+ ```
127
+
128
+ ### Using policies trained with the previous calibration system
129
+
130
+ Policies output actions in the same format as the datasets (`torch.Tensors`). Therefore, the same transformations should be applied.
131
+
132
+ To find these transformations, we recommend first replaying an episode of the dataset your policy was trained on, following the section above.
133
+ Then, add these same transformations in your inference script (shown here in the `record.py` script):
134
+
135
+ ```diff
136
+ action_values = predict_action(
137
+ observation_frame,
138
+ policy,
139
+ get_safe_torch_device(policy.config.device),
140
+ policy.config.use_amp,
141
+ task=single_task,
142
+ robot_type=robot.robot_type,
143
+ )
144
+ action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)}
145
+
146
+ + action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
147
+ + action["elbow_flex.pos"] -= 90
148
+ robot.send_action(action)
149
+ ```
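+
+ If you prefer to keep these corrections in one place, you can wrap them in a small helper (a sketch based only on the two transformations shown above; adapt the per-joint offsets to whatever you found when replaying your own dataset):
+
+ ```python
+ def convert_old_calibration_action(action: dict[str, float]) -> dict[str, float]:
+     """Apply the post-PR-#777 corrections for actions produced by an old-calibration policy."""
+     action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90)
+     action["elbow_flex.pos"] -= 90
+     return action
+
+ robot.send_action(convert_old_calibration_action(action))
+ ```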
150
+
151
+ If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb).
docs/source/cameras.mdx ADDED
@@ -0,0 +1,206 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Cameras
2
+
3
+ LeRobot offers multiple options for video capture, including phone cameras, built-in laptop cameras, external webcams, and Intel RealSense cameras. To efficiently record frames from most cameras, you can use either the `OpenCVCamera` or `RealSenseCamera` class. For additional compatibility details on the `OpenCVCamera` class, refer to the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
4
+
5
+ ### Finding your camera
6
+
7
+ To instantiate a camera, you need a camera identifier. This identifier might change if you reboot your computer or re-plug your camera, a behavior mostly dependent on your operating system.
8
+
9
+ To find the camera indices of the cameras plugged into your system, run the following script:
10
+
11
+ ```bash
12
+ lerobot-find-cameras opencv # or realsense for Intel Realsense cameras
13
+ ```
14
+
15
+ The output will look something like this if you have two cameras connected:
16
+
17
+ ```
18
+ --- Detected Cameras ---
19
+ Camera #0:
20
+ Name: OpenCV Camera @ 0
21
+ Type: OpenCV
22
+ Id: 0
23
+ Backend api: AVFOUNDATION
24
+ Default stream profile:
25
+ Format: 16.0
26
+ Width: 1920
27
+ Height: 1080
28
+ Fps: 15.0
29
+ --------------------
30
+ (more cameras ...)
31
+ ```
32
+
33
+ > [!WARNING]
34
+ > When using Intel RealSense cameras on `macOS`, you may get this [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`. This can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras on `macOS` is unstable.
35
+
36
+ ## Use Cameras
37
+
38
+ Below are two examples demonstrating how to work with the API:
39
+
40
+ - **Asynchronous frame capture** using an OpenCV-based camera
41
+ - **Color and depth capture** using an Intel RealSense camera
42
+
43
+ <hfoptions id="shell_restart">
44
+ <hfoption id="Open CV Camera">
45
+
46
+ <!-- prettier-ignore-start -->
47
+ ```python
48
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
49
+ from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
50
+ from lerobot.cameras.configs import ColorMode, Cv2Rotation
51
+
52
+ # Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation.
53
+ config = OpenCVCameraConfig(
54
+ index_or_path=0,
55
+ fps=15,
56
+ width=1920,
57
+ height=1080,
58
+ color_mode=ColorMode.RGB,
59
+ rotation=Cv2Rotation.NO_ROTATION
60
+ )
61
+
62
+ # Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default).
63
+ camera = OpenCVCamera(config)
64
+ camera.connect()
65
+
66
+ # Read frames asynchronously in a loop via `async_read(timeout_ms)`
67
+ try:
68
+ for i in range(10):
69
+ frame = camera.async_read(timeout_ms=200)
70
+ print(f"Async frame {i} shape:", frame.shape)
71
+ finally:
72
+ camera.disconnect()
73
+ ```
74
+ <!-- prettier-ignore-end -->
75
+
76
+ </hfoption>
77
+ <hfoption id="Intel Realsense Camera">
78
+
79
+ <!-- prettier-ignore-start -->
80
+ ```python
81
+ from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig
82
+ from lerobot.cameras.realsense.camera_realsense import RealSenseCamera
83
+ from lerobot.cameras.configs import ColorMode, Cv2Rotation
84
+
85
+ # Create a `RealSenseCameraConfig` specifying your camera’s serial number and enabling depth.
86
+ config = RealSenseCameraConfig(
87
+ serial_number_or_name="233522074606",
88
+ fps=15,
89
+ width=640,
90
+ height=480,
91
+ color_mode=ColorMode.RGB,
92
+ use_depth=True,
93
+ rotation=Cv2Rotation.NO_ROTATION
94
+ )
95
+
96
+ # Instantiate and connect a `RealSenseCamera` with warm-up read (default).
97
+ camera = RealSenseCamera(config)
98
+ camera.connect()
99
+
100
+ # Capture a color frame via `read()` and a depth map via `read_depth()`.
101
+ try:
102
+ color_frame = camera.read()
103
+ depth_map = camera.read_depth()
104
+ print("Color frame shape:", color_frame.shape)
105
+ print("Depth map shape:", depth_map.shape)
106
+ finally:
107
+ camera.disconnect()
108
+ ```
109
+ <!-- prettier-ignore-end -->
110
+
111
+ </hfoption>
112
+ </hfoptions>
113
+
114
+ ## Use your phone
115
+
116
+ <hfoptions id="use phone">
117
+ <hfoption id="Mac">
118
+
119
+ To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
120
+
121
+ - Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
122
+ - Sign in both devices with the same Apple ID.
123
+ - Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
124
+
125
+ For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
126
+
127
+ Your iPhone should be detected automatically when running the camera setup script in the next section.
128
+
129
+ </hfoption>
130
+ <hfoption id="Linux">
131
+
132
+ If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera:
133
+
134
+ 1. _Install `v4l2loopback-dkms` and `v4l-utils`_. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
135
+
136
+ <!-- prettier-ignore-start -->
137
+ ```bash
138
+ sudo apt install v4l2loopback-dkms v4l-utils
139
+ ```
140
+ <!-- prettier-ignore-end -->
141
+
142
+ 2. _Install [DroidCam](https://droidcam.app) on your phone_. This app is available for both iOS and Android.
143
+ 3. _Install [OBS Studio](https://obsproject.com)_. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
144
+
145
+ <!-- prettier-ignore-start -->
146
+ ```bash
147
+ flatpak install flathub com.obsproject.Studio
148
+ ```
149
+ <!-- prettier-ignore-end -->
150
+
151
+ 4. _Install the DroidCam OBS plugin_. This plugin integrates DroidCam with OBS Studio. Install it with:
152
+
153
+ <!-- prettier-ignore-start -->
154
+ ```bash
155
+ flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
156
+ ```
157
+ <!-- prettier-ignore-end -->
158
+
159
+ 5. _Start OBS Studio_. Launch with:
160
+
161
+ <!-- prettier-ignore-start -->
162
+ ```bash
163
+ flatpak run com.obsproject.Studio
164
+ ```
165
+ <!-- prettier-ignore-end -->
166
+
167
+ 6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
168
+ 7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
169
+ 8. _Start virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
170
+ 9. _Verify the virtual camera setup_. Use `v4l2-ctl` to list the devices:
171
+
172
+ <!-- prettier-ignore-start -->
173
+ ```bash
174
+ v4l2-ctl --list-devices
175
+ ```
176
+ <!-- prettier-ignore-end -->
177
+
178
+ You should see an entry like:
179
+
180
+ ```
181
+ VirtualCam (platform:v4l2loopback-000):
182
+ /dev/video1
183
+ ```
184
+
185
+ 10. _Check the camera resolution_. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
186
+
187
+ <!-- prettier-ignore-start -->
188
+ ```bash
189
+ v4l2-ctl -d /dev/video1 --get-fmt-video
190
+ ```
191
+ <!-- prettier-ignore-end -->
192
+
193
+ You should see an entry like:
194
+
195
+ ```
196
+ >>> Format Video Capture:
197
+ >>> Width/Height : 640/480
198
+ >>> Pixel Format : 'YUYV' (YUYV 4:2:2)
199
+ ```
200
+
201
+ Troubleshooting: if the resolution is not correct, you will have to delete the virtual camera port and try again, as it cannot be changed.
202
+
203
+ If everything is set up correctly, you can proceed with the rest of the tutorial.
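+
+ As a final check, you can read a frame from the virtual camera with the `OpenCVCamera` class introduced above (a sketch; replace `/dev/video1` with the device reported by `v4l2-ctl --list-devices`):
+
+ ```python
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
+ from lerobot.cameras.opencv.camera_opencv import OpenCVCamera
+
+ # The virtual camera created by OBS, found via `v4l2-ctl --list-devices`
+ config = OpenCVCameraConfig(index_or_path="/dev/video1", width=640, height=480)
+ camera = OpenCVCamera(config)
+ camera.connect()
+ frame = camera.read()
+ print("Frame shape:", frame.shape)  # expect (480, 640, 3)
+ camera.disconnect()
+ ```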
204
+
205
+ </hfoption>
206
+ </hfoptions>
docs/source/contributing.md ADDED
@@ -0,0 +1 @@
 
 
1
+ ../../CONTRIBUTING.md
docs/source/debug_processor_pipeline.mdx ADDED
@@ -0,0 +1,299 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Debug Your Processor Pipeline
2
+
3
+ Processor pipelines can be complex, especially when chaining multiple transformation steps.
4
+ Unlike simple function calls, pipelines lack natural observability: you can't easily see what happens
5
+ between each step or where things go wrong.
6
+ This guide provides debugging tools and techniques specifically designed to address these challenges
7
+ and help you understand data flow through your pipelines.
8
+
9
+ We'll explore three complementary debugging approaches: **hooks** for runtime monitoring, **step-through debugging** for detailed inspection, and **feature validation** for catching structural mismatches. Each serves a different purpose and together they provide complete visibility into your pipeline's behavior.
10
+
11
+ ## Understanding Hooks
12
+
13
+ Hooks are functions that get called at specific points during pipeline execution.
14
+ They provide a way to inspect, monitor, or modify data without changing your pipeline code.
15
+ Think of them as "event listeners" for your pipeline.
16
+
17
+ ### What is a Hook?
18
+
19
+ A hook is a callback function that gets automatically invoked at specific moments during pipeline execution.
20
+ The concept comes from event-driven programming: imagine being able to "hook into" the pipeline's execution flow to observe or react to what's happening.
21
+
22
+ Think of hooks like inserting checkpoints into your pipeline. Every time the pipeline reaches one of these checkpoints, it pauses briefly to call your hook function, giving you a chance to inspect the current state, log information, and validate data.
23
+
24
+ A hook is simply a function that accepts two parameters:
25
+
26
+ - `step_idx: int` - The index of the current processing step (0, 1, 2, etc.)
27
+ - `transition: EnvTransition` - The data transition at that point in the pipeline
28
+
29
+ The beauty of hooks is their non-invasive nature: you can add monitoring, validation, or debugging logic without changing a single line of your pipeline code. The pipeline remains clean and focused on its core logic, while hooks handle the cross-cutting concerns like logging, monitoring, and debugging.
30
+
31
+ ### Before vs After Hooks
32
+
33
+ The pipeline supports two types of hooks:
34
+
35
+ - **Before hooks** (`register_before_step_hook`) - Called before each step executes
36
+ - **After hooks** (`register_after_step_hook`) - Called after each step completes
37
+
38
+ ```python
39
+ def before_hook(step_idx: int, transition: EnvTransition):
40
+ """Called before step processes the transition."""
41
+ print(f"About to execute step {step_idx}")
42
+ # Useful for: logging, validation, setup
43
+
44
+ def after_hook(step_idx: int, transition: EnvTransition):
45
+ """Called after step has processed the transition."""
46
+ print(f"Completed step {step_idx}")
47
+ # Useful for: monitoring results, cleanup, debugging
48
+
49
+ processor.register_before_step_hook(before_hook)
50
+ processor.register_after_step_hook(after_hook)
51
+ ```
52
+
53
+ ### Implementing a NaN Detection Hook
54
+
55
+ Here's a practical example of a hook that detects NaN values:
56
+
57
+ ```python
58
+ def check_nans(step_idx: int, transition: EnvTransition):
59
+ """Check for NaN values in observations."""
60
+ obs = transition.get(TransitionKey.OBSERVATION)
61
+ if obs:
62
+ for key, value in obs.items():
63
+ if isinstance(value, torch.Tensor) and torch.isnan(value).any():
64
+ print(f"NaN detected in {key} at step {step_idx}")
65
+
66
+ # Register the hook to run after each step
67
+ processor.register_after_step_hook(check_nans)
68
+
69
+ # Process your data - the hook will be called automatically
70
+ output = processor(input_data)
71
+
72
+ # Remove the hook when done debugging
73
+ processor.unregister_after_step_hook(check_nans)
74
+ ```
75
+
76
+ ### How Hooks Work Internally
77
+
78
+ Understanding the internal mechanism helps you use hooks more effectively. The pipeline maintains two separate lists: one for before-step hooks and another for after-step hooks. When you register a hook, it's simply appended to the appropriate list.
79
+
80
+ During execution, the pipeline follows a strict sequence: for each processing step, it first calls all before-hooks in registration order, then executes the actual step transformation, and finally calls all after-hooks in registration order. This creates a predictable, sandwich-like structure around each step.
81
+
82
+ The key insight is that hooks don't change the core pipeline logic—they're purely additive. The pipeline's `_forward` method orchestrates this dance between hooks and processing steps, ensuring that your debugging or monitoring code runs at exactly the right moments without interfering with the main data flow.
83
+
84
+ Here's a simplified view of how the pipeline executes hooks:
85
+
86
+ ```python
87
+ class DataProcessorPipeline:
88
+ def __init__(self):
89
+ self.steps = [...]
90
+ self.before_step_hooks = [] # List of before hooks
91
+ self.after_step_hooks = [] # List of after hooks
92
+
93
+ def _forward(self, transition):
94
+ """Internal method that processes the transition through all steps."""
95
+ for step_idx, processor_step in enumerate(self.steps):
96
+ # 1. Call all BEFORE hooks
97
+ for hook in self.before_step_hooks:
98
+ hook(step_idx, transition)
99
+
100
+ # 2. Execute the actual processing step
101
+ transition = processor_step(transition)
102
+
103
+ # 3. Call all AFTER hooks
104
+ for hook in self.after_step_hooks:
105
+ hook(step_idx, transition)
106
+
107
+ return transition
108
+
109
+ def register_before_step_hook(self, hook_fn):
110
+ self.before_step_hooks.append(hook_fn)
111
+
112
+ def register_after_step_hook(self, hook_fn):
113
+ self.after_step_hooks.append(hook_fn)
114
+ ```
115
+
116
+ ### Execution Flow
117
+
118
+ The execution flow looks like this:
119
+
120
+ ```
121
+ Input → Before Hook → Step 0 → After Hook → Before Hook → Step 1 → After Hook → ... → Output
122
+ ```
123
+
124
+ For example, with 3 steps and both hook types:
125
+
126
+ ```python
127
+ def timing_before(step_idx, transition):
128
+ print(f"⏱️ Starting step {step_idx}")
129
+
130
+ def validation_after(step_idx, transition):
131
+ print(f"✅ Completed step {step_idx}")
132
+
133
+ processor.register_before_step_hook(timing_before)
134
+ processor.register_after_step_hook(validation_after)
135
+
136
+ # This will output:
137
+ # ⏱️ Starting step 0
138
+ # ✅ Completed step 0
139
+ # ⏱️ Starting step 1
140
+ # ✅ Completed step 1
141
+ # ⏱️ Starting step 2
142
+ # ✅ Completed step 2
143
+ ```
144
+
145
+ ### Multiple Hooks
146
+
147
+ You can register multiple hooks of the same type - they execute in the order registered:
148
+
149
+ ```python
150
+ def log_shapes(step_idx: int, transition: EnvTransition):
151
+ obs = transition.get(TransitionKey.OBSERVATION)
152
+ if obs:
153
+ print(f"Step {step_idx} observation shapes:")
154
+ for key, value in obs.items():
155
+ if isinstance(value, torch.Tensor):
156
+ print(f" {key}: {value.shape}")
157
+
158
+ processor.register_after_step_hook(check_nans) # Executes first
159
+ processor.register_after_step_hook(log_shapes) # Executes second
160
+
161
+ # Both hooks will be called after each step in registration order
162
+ output = processor(input_data)
163
+ ```
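+
+ As another example, a before/after hook pair can measure per-step latency (a sketch; the shared timing dict is our own addition, not part of the pipeline API):
+
+ ```python
+ import time
+
+ step_start_times: dict[int, float] = {}
+
+ def start_timer(step_idx: int, transition: EnvTransition):
+     step_start_times[step_idx] = time.perf_counter()
+
+ def stop_timer(step_idx: int, transition: EnvTransition):
+     elapsed_ms = (time.perf_counter() - step_start_times.pop(step_idx)) * 1000
+     print(f"Step {step_idx} took {elapsed_ms:.2f} ms")
+
+ processor.register_before_step_hook(start_timer)
+ processor.register_after_step_hook(stop_timer)
+ ```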
164
+
165
+ While hooks are excellent for monitoring specific issues (like NaN detection) or gathering metrics during normal pipeline execution, sometimes you need to dive deeper. When you want to understand exactly what happens at each step or debug complex transformation logic, step-through debugging provides the detailed inspection you need.
166
+
167
+ ## Step-Through Debugging
168
+
169
+ Step-through debugging is like having a slow-motion replay for your pipeline. Instead of watching your data get transformed in one quick blur from input to output, you can pause and examine what happens after each individual step.
170
+
171
+ This approach is particularly valuable when you're trying to understand a complex pipeline, debug unexpected behavior, or verify that each transformation is working as expected. Unlike hooks, which are great for automated monitoring, step-through debugging gives you manual, interactive control over the inspection process.
172
+
173
+ The `step_through()` method is a generator that yields the transition state after each processing step, allowing you to inspect intermediate results. Think of it as creating a series of snapshots of your data as it flows through the pipeline—each snapshot shows you exactly what your data looks like after one more transformation has been applied.
174
+
175
+ ### How Step-Through Works
176
+
177
+ The `step_through()` method fundamentally changes how the pipeline executes. Instead of running all steps in sequence and only returning the final result, it transforms the pipeline into an iterator that yields intermediate results.
178
+
179
+ Here's what happens internally: the method starts by converting your input data into the pipeline's internal transition format, then yields this initial state. Next, it applies the first processing step and yields the result. Then it applies the second step to that result and yields again, and so on. Each `yield` gives you a complete snapshot of the transition at that point.
180
+
181
+ This generator pattern is powerful because it's lazy—the pipeline only computes the next step when you ask for it. This means you can stop at any point, inspect the current state thoroughly, and decide whether to continue. You're not forced to run the entire pipeline just to debug one problematic step.
182
+
183
+ Instead of running the entire pipeline and only seeing the final result, `step_through()` pauses after each step and gives you the intermediate transition:
184
+
185
+ ```python
186
+ # This creates a generator that yields intermediate states
187
+ for i, intermediate_result in enumerate(processor.step_through(input_data)):
188
+ print(f"=== After step {i} ===")
189
+
190
+ # Inspect the observation at this stage
191
+ obs = intermediate_result.get(TransitionKey.OBSERVATION)
192
+ if obs:
193
+ for key, value in obs.items():
194
+ if isinstance(value, torch.Tensor):
195
+ print(f"{key}: shape={value.shape}, dtype={value.dtype}")
196
+ ```
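+
+ Because `step_through()` is lazy, you can also stop as soon as you have seen the step you care about; later steps are then never computed (a sketch):
+
+ ```python
+ for i, intermediate in enumerate(processor.step_through(input_data)):
+     if i == 2:  # only interested in the state after step 2
+         obs = intermediate.get(TransitionKey.OBSERVATION)
+         if obs:
+             print({k: v.shape for k, v in obs.items() if isinstance(v, torch.Tensor)})
+         break  # remaining steps are skipped entirely
+ ```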
197
+
198
+ ### Interactive Debugging with Breakpoints
199
+
200
+ You can add breakpoints in the step-through loop to interactively debug:
201
+
202
+ ```python
203
+ # Step through the pipeline with debugging
204
+ for i, intermediate in enumerate(processor.step_through(data)):
205
+ print(f"Step {i}: {processor.steps[i].__class__.__name__}")
206
+
207
+ # Set a breakpoint to inspect the current state
208
+ breakpoint() # Debugger will pause here
209
+
210
+ # You can now inspect 'intermediate' in the debugger:
211
+ # - Check tensor shapes and values
212
+ # - Verify expected transformations
213
+ # - Look for unexpected changes
214
+ ```
215
+
216
+ During the debugger session, you can:
217
+
218
+ - Examine `intermediate[TransitionKey.OBSERVATION]` to see observation data
219
+ - Check `intermediate[TransitionKey.ACTION]` for action transformations
220
+ - Inspect any part of the transition to understand what each step does
221
+
222
+ Step-through debugging is perfect for understanding the _data_ transformations, but what about the _structure_ of that data? While hooks and step-through help you debug runtime behavior, you also need to ensure your pipeline produces data in the format expected by downstream components. This is where feature contract validation comes in.
223
+
224
+ ## Validating Feature Contracts
225
+
226
+ Feature contracts define what data structure your pipeline expects as input and produces as output.
227
+ Validating these contracts helps catch mismatches early.
228
+
229
+ ### Understanding Feature Contracts
230
+
231
+ Each processor step has a `transform_features()` method that describes how it changes the data structure:
232
+
233
+ ```python
234
+ # Get the expected output features from your pipeline
235
+ initial_features = {
236
+ PipelineFeatureType.OBSERVATION: {
237
+ "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(7,)),
238
+ "observation.image": PolicyFeature(type=FeatureType.IMAGE, shape=(3, 224, 224))
239
+ },
240
+ PipelineFeatureType.ACTION: {
241
+ "action": PolicyFeature(type=FeatureType.ACTION, shape=(4,))
242
+ }
243
+ }
244
+
245
+ # Check what your pipeline will output
246
+ output_features = processor.transform_features(initial_features)
247
+
248
+ print("Input features:")
249
+ for feature_type, features in initial_features.items():
250
+ print(f" {feature_type}:")
251
+ for key, feature in features.items():
252
+ print(f" {key}: {feature.type.value}, shape={feature.shape}")
253
+
254
+ print("\nOutput features:")
255
+ for feature_type, features in output_features.items():
256
+ print(f" {feature_type}:")
257
+ for key, feature in features.items():
258
+ print(f" {key}: {feature.type.value}, shape={feature.shape}")
259
+ ```
260
+
261
+ ### Verifying Expected Features
262
+
263
+ Check that your pipeline produces the features you expect:
264
+
265
+ ```python
266
+ # Define what features you expect the pipeline to produce
267
+ expected_keys = ["observation.state", "observation.image", "action"]
268
+
269
+ print("Validating feature contract...")
270
+ for expected_key in expected_keys:
271
+ found = False
272
+ for feature_type, features in output_features.items():
273
+ if expected_key in features:
274
+ feature = features[expected_key]
275
+ print(f"✅ {expected_key}: {feature.type.value}, shape={feature.shape}")
276
+ found = True
277
+ break
278
+
279
+ if not found:
280
+ print(f"❌ Missing expected feature: {expected_key}")
281
+ ```
282
+
283
+ This validation helps ensure your pipeline will work correctly with downstream components that expect specific data structures.
284
+
285
+ ## Summary
286
+
287
+ Now that you understand the three debugging approaches, you can tackle any pipeline issue systematically:
288
+
289
+ 1. **Hooks** - For runtime monitoring and validation without modifying pipeline code
290
+ 2. **Step-through** - For inspecting intermediate states and understanding transformations
291
+ 3. **Feature validation** - For ensuring data structure contracts are met
292
+
293
+ **When to use each approach:**
294
+
295
+ - Start with **step-through debugging** when you need to understand what your pipeline does or when something unexpected happens
296
+ - Add **hooks** for continuous monitoring during development and production to catch issues automatically
297
+ - Use **feature validation** before deployment to ensure your pipeline works with downstream components
298
+
299
+ These three tools work together to give you the complete observability that complex pipelines naturally lack. With hooks watching for issues, step-through helping you understand behavior, and feature validation ensuring compatibility, you'll be able to debug any pipeline confidently and efficiently.
docs/source/feetech.mdx ADDED
@@ -0,0 +1,71 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # Feetech Motor Firmware Update
2
+
3
+ This tutorial guides you through updating the firmware of Feetech motors using the official Feetech software.
4
+
5
+ ## Prerequisites
6
+
7
+ - Windows computer (Feetech software is only available for Windows)
8
+ - Feetech motor control board
9
+ - USB cable to connect the control board to your computer
10
+ - Feetech motors connected to the control board
11
+
12
+ ## Step 1: Download Feetech Software
13
+
14
+ 1. Visit the official Feetech software download page: [https://www.feetechrc.com/software.html](https://www.feetechrc.com/software.html)
15
+ 2. Download the latest version of the Feetech debugging software (FD)
16
+ 3. Install the software on your Windows computer
17
+
18
+ ## Step 2: Hardware Setup
19
+
20
+ 1. Connect your Feetech motors to the motor control board
21
+ 2. Connect the motor control board to your Windows computer via USB cable
22
+ 3. Ensure power is supplied to the motors
23
+
24
+ ## Step 3: Configure Connection
25
+
26
+ 1. Launch the Feetech debugging software
27
+ 2. Select the correct COM port from the port dropdown menu
28
+ - If unsure which port to use, check Windows Device Manager under "Ports (COM & LPT)"
29
+ 3. Set the appropriate baud rate (typically 1000000 for most Feetech motors)
30
+ 4. Click "Open" to establish communication with the control board
31
+
32
+ ## Step 4: Scan for Motors
33
+
34
+ 1. Once connected, click the "Search" button to detect all connected motors
35
+ 2. The software will automatically discover and list all motors on the bus
36
+ 3. Each motor will appear with its ID number
37
+
38
+ ## Step 5: Update Firmware
39
+
40
+ For each motor you want to update:
41
+
42
+ 1. **Select the motor** from the list by clicking on it
43
+ 2. **Click on the "Upgrade" tab**
44
+ 3. **Click on the "Online" button**:
45
+ - If a potential firmware update is found, it will be displayed in the box
46
+ 4. **Click on the "Upgrade" button**:
47
+ - The update progress will be displayed
48
+
49
+ ## Step 6: Verify Update
50
+
51
+ 1. After the update completes, the software should automatically refresh the motor information
52
+ 2. Verify that the firmware version has been updated to the expected version
53
+
54
+ ## Important Notes
55
+
56
+ ⚠️ **Warning**: Do not disconnect power or USB during a firmware update; doing so can brick the motor.
57
+
58
+ ## Bonus: Motor Debugging on Linux/macOS
59
+
60
+ For debugging purposes only, you can use the open-source Feetech Debug Tool:
61
+
62
+ - **Repository**: [FT_SCServo_Debug_Qt](https://github.com/CarolinePascal/FT_SCServo_Debug_Qt/tree/fix/port-search-timer)
63
+
64
+ ### Installation Instructions
65
+
66
+ Follow the instructions in the repository to install the tool. On Ubuntu you can install it directly; on macOS you need to build it from source.
67
+
68
+ **Limitations:**
69
+
70
+ - This tool is for debugging and parameter adjustment only
71
+ - Firmware updates must still be done on Windows with official Feetech software
docs/source/groot.mdx ADDED
@@ -0,0 +1,122 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # GR00T N1.5 Policy
2
+
3
+ GR00T N1.5 is an open foundation model from NVIDIA designed for generalized humanoid robot reasoning and skills. It is a cross-embodiment model that accepts multimodal input, including language and images, to perform manipulation tasks in diverse environments.
4
+
5
+ This document outlines the specifics of its integration and usage within the LeRobot framework.
6
+
7
+ ## Model Overview
8
+
9
+ NVIDIA Isaac GR00T N1.5 is an upgraded version of the GR00T N1 foundation model. It is built to improve generalization and language-following abilities for humanoid robots.
10
+
11
+ Developers and researchers can post-train GR00T N1.5 with their own real or synthetic data to adapt it for specific humanoid robots or tasks.
12
+
13
+ GR00T N1.5 (specifically the GR00T-N1.5-3B model) is built using pre-trained vision and language encoders. It utilizes a flow matching action transformer to model a chunk of actions, conditioned on vision, language, and proprioception.
14
+
15
+ Its strong performance comes from being trained on an expansive and diverse humanoid dataset, which includes:
16
+
17
+ - Real captured data from robots.
18
+ - Synthetic data generated using NVIDIA Isaac GR00T Blueprint.
19
+ - Internet-scale video data.
20
+
21
+ This approach allows the model to be highly adaptable through post-training for specific embodiments, tasks, and environments.
22
+
23
+ ## Installation Requirements
24
+
25
+ As of today, GR00T N1.5 requires Flash Attention for its internal operation.
26
+
27
+ We are working on making this optional, but in the meantime it requires an extra installation step and can only be used on CUDA-enabled devices.
28
+
29
+ 1. Follow the Environment Setup section of our [Installation Guide](./installation). **Attention:** don't install `lerobot` in this step.
30
+ 2. Install [Flash Attention](https://github.com/Dao-AILab/flash-attention) by running:
31
+
32
+ ```bash
33
+ # Check https://pytorch.org/get-started/locally/ for your system
34
+ pip install "torch>=2.2.1,<2.8.0" "torchvision>=0.21.0,<0.23.0" # --index-url https://download.pytorch.org/whl/cu1XX
35
+ pip install ninja "packaging>=24.2,<26.0" # flash attention dependencies
36
+ pip install "flash-attn>=2.5.9,<3.0.0" --no-build-isolation
37
+ python -c "import flash_attn; print(f'Flash Attention {flash_attn.__version__} imported successfully')"
38
+ ```
39
+
40
+ 3. Install LeRobot by running:
41
+
42
+ ```bash
43
+ pip install "lerobot[groot]"  # consider also installing the libero, dev, and test extras
44
+ ```
45
+
46
+ ## Usage
47
+
48
+ To use GR00T in your LeRobot configuration, specify the policy type as:
49
+
50
+ ```python
51
+ policy.type=groot
52
+ ```
53
+
54
+ ## Training
55
+
56
+ ### Training Command Example
57
+
58
+ Here's a complete training command for finetuning the base GR00T model on your own dataset:
59
+
60
+ ```bash
61
+ # Using a multi-GPU setup
62
+ accelerate launch \
63
+ --multi_gpu \
64
+ --num_processes=$NUM_GPUS \
65
+ $(which lerobot-train) \
66
+ --output_dir=$OUTPUT_DIR \
67
+ --save_checkpoint=true \
68
+ --batch_size=$BATCH_SIZE \
69
+ --steps=$NUM_STEPS \
70
+ --save_freq=$SAVE_FREQ \
71
+ --log_freq=$LOG_FREQ \
72
+ --policy.push_to_hub=true \
73
+ --policy.type=groot \
74
+ --policy.repo_id=$REPO_ID \
75
+ --policy.tune_diffusion_model=false \
76
+ --dataset.repo_id=$DATASET_ID \
77
+ --wandb.enable=true \
78
+ --wandb.disable_artifact=true \
79
+ --job_name=$JOB_NAME
80
+ ```
81
+
82
+ ## Performance Results
83
+
84
+ ### Libero Benchmark Results
85
+
86
+ GR00T has demonstrated strong performance on the Libero benchmark suite. To compare and test its LeRobot implementation, we finetuned the GR00T N1.5 model for 30k steps on the Libero dataset and compared the results to the GR00T reference results.
87
+
88
+ | Benchmark | LeRobot Implementation | GR00T Reference |
89
+ | ------------------ | ---------------------- | --------------- |
90
+ | **Libero Spatial** | 82.0% | 92.0% |
91
+ | **Libero Object** | 99.0% | 92.0% |
92
+ | **Libero Long** | 82.0% | 76.0% |
93
+ | **Average** | 87.0% | 87.0% |
94
+
95
+ These results demonstrate GR00T's strong generalization capabilities across diverse robotic manipulation tasks. To reproduce these results, you can follow the instructions in the [Libero](https://huggingface.co/docs/lerobot/libero) section.
96
+
97
+ ### Evaluate in your hardware setup
98
+
99
+ Once you have trained your model with your parameters, you can run inference on your downstream task; point `--policy.path` to your trained model. Follow the instructions in [Imitation Learning for Robots](./il_robots). For example:
100
+
101
+ ```bash
102
+ lerobot-record \
103
+ --robot.type=bi_so100_follower \
104
+ --robot.left_arm_port=/dev/ttyACM1 \
105
+ --robot.right_arm_port=/dev/ttyACM0 \
106
+ --robot.id=bimanual_follower \
107
+ --robot.cameras='{ right: {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30},
108
+ left: {"type": "opencv", "index_or_path": 2, "width": 640, "height": 480, "fps": 30},
109
+ top: {"type": "opencv", "index_or_path": 4, "width": 640, "height": 480, "fps": 30},
110
+ }' \
111
+ --display_data=true \
112
+ --dataset.repo_id=<user>/eval_groot-bimanual \
113
+ --dataset.num_episodes=10 \
114
+ --dataset.single_task="Grab and handover the red cube to the other arm" \
115
+ --policy.path=<user>/groot-bimanual \
116
+ --dataset.episode_time_s=30 \
117
+ --dataset.reset_time_s=10
118
+ ```
119
+
120
+ ## License
121
+
122
+ This model follows the **Apache 2.0 License**, consistent with the original [GR00T repository](https://github.com/NVIDIA/Isaac-GR00T).
docs/source/hilserl.mdx ADDED
@@ -0,0 +1,923 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ # HIL-SERL Real Robot Training Workflow Guide
2
+
3
+ In this tutorial you will go through the full Human-in-the-Loop Sample-Efficient Reinforcement Learning (HIL-SERL) workflow using LeRobot. You will master training a policy with RL on a real robot in just a few hours.
4
+
5
+ HIL-SERL is a sample-efficient reinforcement learning algorithm that combines human demonstrations with online learning and human interventions. The approach starts from a small set of human demonstrations, uses them to train a reward classifier, and then employs an actor-learner architecture where humans can intervene during policy execution to guide exploration and correct unsafe behaviors. In this tutorial, you'll use a gamepad to provide interventions and control the robot during the learning process.
6
+
7
+ It combines three key ingredients:
8
+
9
+ 1. **Offline demonstrations & reward classifier:** a handful of human-teleop episodes plus a vision-based success detector give the policy a shaped starting point.
10
+
11
+ 2. **On-robot actor / learner loop with human interventions:** a distributed Soft Actor-Critic (SAC) learner updates the policy while an actor explores on the physical robot; the human can jump in at any time to correct dangerous or unproductive behaviour.
12
+
13
+ 3. **Safety & efficiency tools:** joint/end-effector (EE) bounds, crop region of interest (ROI) preprocessing and WandB monitoring keep the data useful and the hardware safe.
14
+
15
+ Together these elements let HIL-SERL reach near-perfect task success and faster cycle times than imitation-only baselines.
16
+
17
+ <p align="center">
18
+ <img
19
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hilserl-main-figure.png"
20
+ alt="HIL-SERL workflow"
21
+ title="HIL-SERL workflow"
22
+ width="100%"
23
+ ></img>
24
+ </p>
25
+
26
+ <p align="center">
27
+ <i>HIL-SERL workflow, Luo et al. 2024</i>
28
+ </p>
29
+
30
+ This guide provides step-by-step instructions for training a policy on a real robot with LeRobot's HIL-SERL implementation.
31
+
32
+ ## What do I need?
33
+
34
+ - A gamepad (recommended) or keyboard to control the robot
35
+ - An NVIDIA GPU
36
+ - A real robot with a follower arm and a leader arm (the leader arm is optional if you use the keyboard or the gamepad)
37
+ - A URDF file of the robot for the kinematics package (check `lerobot/model/kinematics.py`)
38
+
39
+ ## What kind of tasks can I train?
40
+
41
+ One can use HIL-SERL to train on a variety of manipulation tasks. Some recommendations:
42
+
43
+ - Start with a simple task to understand how the system works.
44
+ - Push cube to a goal region
45
+ - Pick and lift cube with the gripper
46
+ - Avoid extremely long horizon tasks. Focus on tasks that can be completed in 5-10 seconds.
47
+ - Once you have a good idea of how the system works, you can try more complex tasks and longer horizons.
48
+ - Pick and place cube
49
+ - Bimanual tasks to pick objects with two arms
50
+ - Hand-over tasks to transfer objects from one arm to another
51
+ - Go crazy!
52
+
53
+ ## Install LeRobot with HIL-SERL
54
+
55
+ To install LeRobot with HIL-SERL, you need to install the `hilserl` extra.
56
+
57
+ ```bash
58
+ pip install -e ".[hilserl]"
59
+ ```
60
+
61
+ ## Real Robot Training Workflow
62
+
63
+ ### Understanding Configuration
64
+
65
+ The training process begins with proper configuration for the HILSerl environment. The main configuration class is `GymManipulatorConfig` in `lerobot/rl/gym_manipulator.py`, which contains nested `HILSerlRobotEnvConfig` and `DatasetConfig`. The configuration is organized into focused, nested sub-configs:
66
+
67
+ <!-- prettier-ignore-start -->
68
+ ```python
69
+ class GymManipulatorConfig:
70
+ env: HILSerlRobotEnvConfig # Environment configuration (nested)
71
+ dataset: DatasetConfig # Dataset recording/replay configuration (nested)
72
+ mode: str | None = None # "record", "replay", or None (for training)
73
+ device: str = "cpu" # Compute device
74
+
75
+ class HILSerlRobotEnvConfig(EnvConfig):
76
+ robot: RobotConfig | None = None # Main robot agent (defined in `lerobot/robots`)
77
+ teleop: TeleoperatorConfig | None = None # Teleoperator agent, e.g., gamepad or leader arm
78
+ processor: HILSerlProcessorConfig # Processing pipeline configuration (nested)
79
+ name: str = "real_robot" # Environment name
80
+ task: str | None = None # Task identifier
81
+ fps: int = 10 # Control frequency
82
+
83
+ # Nested processor configuration
84
+ class HILSerlProcessorConfig:
85
+ control_mode: str = "gamepad" # Control mode
86
+ observation: ObservationConfig | None = None # Observation processing settings
87
+ image_preprocessing: ImagePreprocessingConfig | None = None # Image crop/resize settings
88
+ gripper: GripperConfig | None = None # Gripper control and penalty settings
89
+ reset: ResetConfig | None = None # Environment reset and timing settings
90
+ inverse_kinematics: InverseKinematicsConfig | None = None # IK processing settings
91
+ reward_classifier: RewardClassifierConfig | None = None # Reward classifier settings
92
+ max_gripper_pos: float | None = 100.0 # Maximum gripper position
93
+
94
+ # Sub-configuration classes
95
+ class ObservationConfig:
96
+ add_joint_velocity_to_observation: bool = False # Add joint velocities to state
97
+ add_current_to_observation: bool = False # Add motor currents to state
98
+ display_cameras: bool = False # Display camera feeds during execution
99
+
100
+ class ImagePreprocessingConfig:
101
+ crop_params_dict: dict[str, tuple[int, int, int, int]] | None = None # Image cropping parameters
102
+ resize_size: tuple[int, int] | None = None # Target image size
103
+
104
+ class GripperConfig:
105
+ use_gripper: bool = True # Enable gripper control
106
+ gripper_penalty: float = 0.0 # Penalty for inappropriate gripper usage
107
+
108
+ class ResetConfig:
109
+ fixed_reset_joint_positions: Any | None = None # Joint positions for reset
110
+ reset_time_s: float = 5.0 # Time to wait during reset
111
+ control_time_s: float = 20.0 # Maximum episode duration
112
+ terminate_on_success: bool = True # Whether to terminate episodes on success detection
113
+
114
+ class InverseKinematicsConfig:
115
+ urdf_path: str | None = None # Path to robot URDF file
116
+ target_frame_name: str | None = None # End-effector frame name
117
+ end_effector_bounds: dict[str, list[float]] | None = None # EE workspace bounds
118
+ end_effector_step_sizes: dict[str, float] | None = None # EE step sizes per axis
119
+
120
+ class RewardClassifierConfig:
121
+ pretrained_path: str | None = None # Path to pretrained reward classifier
122
+ success_threshold: float = 0.5 # Success detection threshold
123
+ success_reward: float = 1.0 # Reward value for successful episodes
124
+
125
+ # Dataset configuration
126
+ class DatasetConfig:
127
+ repo_id: str # LeRobot dataset repository ID
128
+ task: str # Task identifier
129
+ root: str | None = None # Local dataset root directory
130
+ num_episodes_to_record: int = 5 # Number of episodes for recording
131
+ replay_episode: int | None = None # Episode index for replay
132
+ push_to_hub: bool = False # Whether to push datasets to Hub
133
+ ```
134
+ <!-- prettier-ignore-end -->
135
+
136
+ ### Processor Pipeline Architecture
137
+
138
+ HIL-SERL uses a modular processor pipeline architecture that processes robot observations and actions through a series of composable steps. The pipeline is divided into two main components:
139
+
140
+ #### Environment Processor Pipeline
141
+
142
+ The environment processor (`env_processor`) handles incoming observations and environment state:
143
+
144
+ 1. **VanillaObservationProcessorStep**: Converts raw robot observations into standardized format
145
+ 2. **JointVelocityProcessorStep** (optional): Adds joint velocity information to observations
146
+ 3. **MotorCurrentProcessorStep** (optional): Adds motor current readings to observations
147
+ 4. **ForwardKinematicsJointsToEE** (optional): Computes end-effector pose from joint positions
148
+ 5. **ImageCropResizeProcessorStep** (optional): Crops and resizes camera images
149
+ 6. **TimeLimitProcessorStep** (optional): Enforces episode time limits
150
+ 7. **GripperPenaltyProcessorStep** (optional): Applies penalties for inappropriate gripper usage
151
+ 8. **RewardClassifierProcessorStep** (optional): Automated reward detection using vision models
152
+ 9. **AddBatchDimensionProcessorStep**: Converts data to batch format for neural network processing
153
+ 10. **DeviceProcessorStep**: Moves data to the specified compute device (CPU/GPU)
154
+
155
+ #### Action Processor Pipeline
156
+
157
+ The action processor (`action_processor`) handles outgoing actions and human interventions:
158
+
159
+ 1. **AddTeleopActionAsComplimentaryDataStep**: Captures teleoperator actions for logging
160
+ 2. **AddTeleopEventsAsInfoStep**: Records intervention events and episode control signals
161
+ 3. **InterventionActionProcessorStep**: Handles human interventions and episode termination
162
+ 4. **Inverse Kinematics Pipeline** (when enabled):
163
+ - **MapDeltaActionToRobotActionStep**: Converts delta actions to robot action format
164
+ - **EEReferenceAndDelta**: Computes end-effector reference and delta movements
165
+ - **EEBoundsAndSafety**: Enforces workspace safety bounds
166
+ - **InverseKinematicsEEToJoints**: Converts end-effector actions to joint targets
167
+ - **GripperVelocityToJoint**: Handles gripper control commands
168
+
169
+ #### Configuration Examples
170
+
171
+ **Basic Observation Processing**:
172
+
173
+ ```json
174
+ {
175
+ "env": {
176
+ "processor": {
177
+ "observation": {
178
+ "add_joint_velocity_to_observation": true,
179
+ "add_current_to_observation": false,
180
+ "display_cameras": false
181
+ }
182
+ }
183
+ }
184
+ }
185
+ ```
186
+
187
+ **Image Processing**:
188
+
189
+ ```json
190
+ {
191
+ "env": {
192
+ "processor": {
193
+ "image_preprocessing": {
194
+ "crop_params_dict": {
195
+ "observation.images.front": [180, 250, 120, 150],
196
+ "observation.images.side": [180, 207, 180, 200]
197
+ },
198
+ "resize_size": [128, 128]
199
+ }
200
+ }
201
+ }
202
+ }
203
+ ```
204
+
205
+ **Inverse Kinematics Setup**:
206
+
207
+ ```json
208
+ {
209
+ "env": {
210
+ "processor": {
211
+ "inverse_kinematics": {
212
+ "urdf_path": "path/to/robot.urdf",
213
+ "target_frame_name": "end_effector",
214
+ "end_effector_bounds": {
215
+ "min": [0.16, -0.08, 0.03],
216
+ "max": [0.24, 0.2, 0.1]
217
+ },
218
+ "end_effector_step_sizes": {
219
+ "x": 0.02,
220
+ "y": 0.02,
221
+ "z": 0.02
222
+ }
223
+ }
224
+ }
225
+ }
226
+ }
227
+ ```
228
+
229
+ ### Advanced Observation Processing
230
+
231
+ The HIL-SERL framework supports additional observation processing features that can improve policy learning:
232
+
233
+ #### Joint Velocity Processing
234
+
235
+ Enable joint velocity estimation to provide the policy with motion information:
236
+
237
+ ```json
238
+ {
239
+ "env": {
240
+ "processor": {
241
+ "observation": {
242
+ "add_joint_velocity_to_observation": true
243
+ }
244
+ }
245
+ }
246
+ }
247
+ ```
248
+
249
+ This processor:
250
+
251
+ - Estimates joint velocities using finite differences between consecutive joint position readings
252
+ - Adds velocity information to the observation state vector
253
+ - Useful for policies that need motion awareness for dynamic tasks
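+
+ Conceptually, the finite-difference estimate looks like this (a standalone sketch, not the actual processor implementation):
+
+ ```python
+ import numpy as np
+
+ def estimate_joint_velocity(prev_pos: np.ndarray, curr_pos: np.ndarray, dt: float) -> np.ndarray:
+     """Finite-difference velocity between two consecutive joint position readings."""
+     return (curr_pos - prev_pos) / dt
+
+ prev_pos = np.array([10.0, 20.0, 30.0])
+ curr_pos = np.array([10.5, 19.0, 30.2])
+ print(estimate_joint_velocity(prev_pos, curr_pos, dt=0.1))  # units per second, e.g. [5.0, -10.0, 2.0]
+ ```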
254
+
255
+ #### Motor Current Processing
256
+
257
+ Monitor motor currents to detect contact forces and load conditions:
258
+
259
+ ```json
260
+ {
261
+ "env": {
262
+ "processor": {
263
+ "observation": {
264
+ "add_current_to_observation": true
265
+ }
266
+ }
267
+ }
268
+ }
269
+ ```
270
+
271
+ This processor:
272
+
273
+ - Reads motor current values from the robot's control system
274
+ - Adds current measurements to the observation state vector
275
+ - Helps detect contact events, object weights, and mechanical resistance
276
+ - Useful for contact-rich manipulation tasks
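+
+ For intuition, a simple contact heuristic could threshold the current readings (a sketch with a made-up threshold; real contact detection would be tuned per motor and task):
+
+ ```python
+ import numpy as np
+
+ CURRENT_THRESHOLD = 0.8  # hypothetical value in amps; tune per motor
+
+ def likely_in_contact(motor_currents: np.ndarray) -> bool:
+     """Flag likely contact when any motor draws unusually high current."""
+     return bool((np.abs(motor_currents) > CURRENT_THRESHOLD).any())
+ ```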
277
+
278
+ #### Combined Observation Processing
279
+
280
+ You can enable multiple observation processing features simultaneously:
281
+
282
+ ```json
283
+ {
284
+ "env": {
285
+ "processor": {
286
+ "observation": {
287
+ "add_joint_velocity_to_observation": true,
288
+ "add_current_to_observation": true,
289
+ "display_cameras": false
290
+ }
291
+ }
292
+ }
293
+ }
294
+ ```
295
+
296
+ **Note**: Enabling additional observation features increases the state space dimensionality, which may require adjusting your policy network architecture and potentially collecting more training data.
297
+
298
+ ### Finding Robot Workspace Bounds
299
+
300
+ Before collecting demonstrations, you need to determine the appropriate operational bounds for your robot.
301
+
302
+ This helps simplify the problem of learning on the real robot in two ways: 1) by limiting the robot's operational space to a specific region that solves the task and avoids unnecessary or unsafe exploration, and 2) by allowing training in end-effector space rather than joint space. Empirically, learning in joint space for reinforcement learning in manipulation is often a harder problem - some tasks are nearly impossible to learn in joint space but become learnable when the action space is transformed to end-effector coordinates.
303
+
304
+ **Using lerobot-find-joint-limits**
305
+
306
+ This script helps you find the safe operational bounds for your robot's end-effector. Given that you have a follower and leader arm, you can use the script to find the bounds for the follower arm that will be applied during training.
307
+ Bounding the action space reduces the agent's redundant exploration and helps guarantee safety.
308
+
309
+ ```bash
310
+ lerobot-find-joint-limits \
311
+ --robot.type=so100_follower \
312
+ --robot.port=/dev/tty.usbmodem58760431541 \
313
+ --robot.id=black \
314
+ --teleop.type=so100_leader \
315
+ --teleop.port=/dev/tty.usbmodem58760431551 \
316
+ --teleop.id=blue
317
+ ```
318
+
319
+ **Workflow**
320
+
321
+ 1. Run the script and move the robot through the space that solves the task
322
+ 2. The script will record the minimum and maximum end-effector positions and joint angles and print them to the console, for example:
323
+ ```
324
+ Max ee position [0.2417 0.2012 0.1027]
325
+ Min ee position [0.1663 -0.0823 0.0336]
326
+ Max joint positions [50.0, 50.0, 50.0, 50.0, 50.0, 50.0]
327
+ Min joint positions [-20.0, -20.0, -20.0, -20.0, -20.0, -20.0]
328
+ ```
329
+ 3. Use these values in the configuration of your teleoperation device (TeleoperatorConfig) under the `end_effector_bounds` field
330
+
331
+ **Example Configuration**
332
+
333
+ ```json
334
+ "end_effector_bounds": {
335
+ "max": [0.24, 0.20, 0.10],
336
+ "min": [0.16, -0.08, 0.03]
337
+ }
338
+ ```
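+
+ To sanity-check a target end-effector position against these bounds before sending it to the robot, you could clamp it (a sketch using the values above):
+
+ ```python
+ import numpy as np
+
+ EE_MIN = np.array([0.16, -0.08, 0.03])
+ EE_MAX = np.array([0.24, 0.20, 0.10])
+
+ def clamp_ee_position(pos: np.ndarray) -> np.ndarray:
+     """Clip a target end-effector position into the safe workspace."""
+     return np.clip(pos, EE_MIN, EE_MAX)
+
+ print(clamp_ee_position(np.array([0.30, 0.0, 0.05])))  # -> [0.24 0.   0.05]
+ ```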
339
+
340
+ ### Collecting Demonstrations
341
+
342
+ With the bounds defined, you can safely collect demonstrations for training. Training RL with an off-policy algorithm allows us to use the offline datasets we collect to improve the efficiency of the learning process.
343
+
344
+ **Setting Up Record Mode**
345
+
346
+ Create a configuration file for recording demonstrations (or edit an existing one like [env_config.json](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/env_config.json)):
347
+
348
+ 1. Set `mode` to `"record"` at the root level
349
+ 2. Specify a unique `repo_id` for your dataset in the `dataset` section (e.g., "username/task_name")
350
+ 3. Set `num_episodes_to_record` in the `dataset` section to the number of demonstrations you want to collect
351
+ 4. Set `env.processor.image_preprocessing.crop_params_dict` to `{}` initially (we'll determine crops later)
352
+ 5. Configure `env.robot`, `env.teleop`, and other hardware settings in the `env` section
353
+
354
+ Example configuration section:
355
+
356
+ ```json
357
+ {
358
+ "env": {
359
+ "type": "gym_manipulator",
360
+ "name": "real_robot",
361
+ "fps": 10,
362
+ "processor": {
363
+ "control_mode": "gamepad",
364
+ "observation": {
365
+ "display_cameras": false
366
+ },
367
+ "image_preprocessing": {
368
+ "crop_params_dict": {},
369
+ "resize_size": [128, 128]
370
+ },
371
+ "gripper": {
372
+ "use_gripper": true,
373
+ "gripper_penalty": 0.0
374
+ },
375
+ "reset": {
376
+ "reset_time_s": 5.0,
377
+ "control_time_s": 20.0
378
+ }
379
+ },
380
+ "robot": {
381
+ // ... robot configuration ...
382
+ },
383
+ "teleop": {
384
+ // ... teleoperator configuration ...
385
+ }
386
+ },
387
+ "dataset": {
388
+ "repo_id": "username/pick_lift_cube",
389
+ "root": null,
390
+ "task": "pick_and_lift",
391
+ "num_episodes_to_record": 15,
392
+ "replay_episode": 0,
393
+ "push_to_hub": true
394
+ },
395
+ "mode": "record",
396
+ "device": "cpu"
397
+ }
398
+ ```
399
+
400
+ ### Using a Teleoperation Device
401
+
402
+ Along with your robot, you will need a teleoperation device to control it in order to collect datasets of your task and perform interventions during the online training.
403
+ We support using a gamepad, a keyboard, or the robot's leader arm.
404
+
405
+ HIL-SERL learns actions in the end-effector space of the robot. Therefore, teleoperation controls the end-effector's x, y, z displacements.
406
+
407
+ For that we need to define a version of the robot that takes actions in the end-effector space. Check the robot class `SO100FollowerEndEffector` and its configuration `SO100FollowerEndEffectorConfig` for the default parameters related to the end-effector space.
408
+
409
+ <!-- prettier-ignore-start -->
410
+ ```python
411
+ class SO100FollowerEndEffectorConfig(SO100FollowerConfig):
412
+ """Configuration for the SO100FollowerEndEffector robot."""
413
+
414
+ # Default bounds for the end-effector position (in meters)
415
+ end_effector_bounds: dict[str, list[float]] = field( # bounds for the end-effector in x,y,z direction
416
+ default_factory=lambda: {
417
+ "min": [-1.0, -1.0, -1.0], # min x, y, z
418
+ "max": [1.0, 1.0, 1.0], # max x, y, z
419
+ }
420
+ )
421
+
422
+ max_gripper_pos: float = 50 # maximum gripper position that the gripper will be open at
423
+
424
+ end_effector_step_sizes: dict[str, float] = field( # maximum step size for the end-effector in x,y,z direction
425
+ default_factory=lambda: {
426
+ "x": 0.02,
427
+ "y": 0.02,
428
+ "z": 0.02,
429
+ }
430
+ )
431
+ ```
432
+ <!-- prettier-ignore-end -->
433
+
434
+ The `Teleoperator` defines the teleoperation device. You can check the list of available teleoperators in `lerobot/teleoperators`.
435
+
436
+ **Setting up the Gamepad**
437
+
438
+ The gamepad provides a very convenient way to control the robot and the episode state.
439
+
440
+ To set up the gamepad, set `control_mode` to `"gamepad"` and define the `teleop` section in the configuration file.
441
+
442
+ ```json
443
+ {
444
+ "env": {
445
+ "teleop": {
446
+ "type": "gamepad",
447
+ "use_gripper": true
448
+ },
449
+ "processor": {
450
+ "control_mode": "gamepad",
451
+ "gripper": {
452
+ "use_gripper": true
453
+ }
454
+ }
455
+ }
456
+ }
457
+ ```
458
+
459
+ <p align="center">
460
+ <img
461
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
462
+ alt="Figure shows the control mappings on a Logitech gamepad."
463
+ title="Gamepad Control Mapping"
464
+ width="100%"
465
+ ></img>
466
+ </p>
467
+ <p align="center">
468
+ <i>Gamepad button mapping for robot control and episode management</i>
469
+ </p>
470
+
471
+ **Setting up the SO101 leader**
472
+
473
+ The SO101 leader arm has reduced gearing that allows it to move with and track the follower arm during exploration. Therefore, taking over is much smoother than with the gearless SO100.
474
+
475
+ To set up the SO101 leader, set `control_mode` to `"leader"` and define the `teleop` section in the configuration file.
476
+
477
+ ```json
478
+ {
479
+ "env": {
480
+ "teleop": {
481
+ "type": "so101_leader",
482
+ "port": "/dev/tty.usbmodem585A0077921",
483
+ "use_degrees": true
484
+ },
485
+ "processor": {
486
+ "control_mode": "leader",
487
+ "gripper": {
488
+ "use_gripper": true
489
+ }
490
+ }
491
+ }
492
+ }
493
+ ```
494
+
495
+ To annotate the success/failure of an episode, **you will need** a keyboard: press `s` for success and `esc` for failure.
496
+ During online training, press `space` to take over from the policy and `space` again to give control back to the policy.
497
+
498
+ <details>
499
+ <summary><strong>Video: SO101 leader teleoperation</strong></summary>
500
+
501
+ <div class="video-container">
502
+ <video controls width="600">
503
+ <source
504
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/so101_leader_tutorial.mp4"
505
+ type="video/mp4"
506
+ />
507
+ </video>
508
+ </div>
509
+
510
+ <p align="center"><i>SO101 leader teleoperation example: the leader tracks the follower; press `space` to intervene</i></p>
511
+ </details>
512
+
513
+ **Recording Demonstrations**
514
+
515
+ Start the recording process; an example config file can be found [here](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/env_config_so100.json):
516
+
517
+ ```bash
518
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config_so100.json
519
+ ```
520
+
521
+ During recording:
522
+
523
+ 1. The robot will reset to the initial position defined in the configuration file `env.processor.reset.fixed_reset_joint_positions`
524
+ 2. Complete the task successfully
525
+ 3. The episode ends with a reward of 1 when you press the "success" button
526
+ 4. If the time limit is reached, or the fail button is pressed, the episode ends with a reward of 0
527
+ 5. You can rerecord an episode by pressing the "rerecord" button
528
+ 6. The process automatically continues to the next episode
529
+ 7. After recording all episodes, the dataset is pushed to the Hugging Face Hub (optional) and saved locally
530
+
531
+ ### Processing the Dataset
532
+
533
+ After collecting demonstrations, process them to determine optimal camera crops.
534
+ Reinforcement learning is sensitive to background distractions, so it is important to crop the images to the relevant workspace area.
535
+
536
+ Visual RL algorithms learn directly from pixel inputs, making them vulnerable to irrelevant visual information. Background elements like changing lighting, shadows, people moving, or objects outside the workspace can confuse the learning process. Good ROI selection should:
537
+
538
+ - Include only the essential workspace where the task happens
539
+ - Capture the robot's end-effector and all objects involved in the task
540
+ - Exclude unnecessary background elements and distractions
541
+
542
+ Note: If you already know the crop parameters, you can skip this step and just set the `crop_params_dict` in the configuration file during recording.
543
+
544
+ **Determining Crop Parameters**
545
+
546
+ Use the `crop_dataset_roi.py` script to interactively select regions of interest in your camera images:
547
+
548
+ ```bash
549
+ python -m lerobot.rl.crop_dataset_roi --repo-id username/pick_lift_cube
550
+ ```
551
+
552
+ 1. For each camera view, the script will display the first frame
553
+ 2. Draw a rectangle around the relevant workspace area
554
+ 3. Press `c` to confirm the selection
555
+ 4. Repeat for all camera views
556
+ 5. The script outputs cropping parameters and creates a new cropped dataset
557
+
558
+ Example output:
559
+
560
+ ```
561
+ Selected Rectangular Regions of Interest (top, left, height, width):
562
+ observation.images.side: [180, 207, 180, 200]
563
+ observation.images.front: [180, 250, 120, 150]
564
+ ```
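+
+ The four numbers follow the `(top, left, height, width)` convention, which matches `torchvision.transforms.functional.crop`. As a quick sketch, you can preview what a crop plus resize does to a single frame (the random tensor stands in for a real camera image):
+
+ ```python
+ import torch
+ import torchvision.transforms.functional as F
+
+ frame = torch.rand(3, 480, 640)  # stand-in for one RGB camera frame (C, H, W)
+
+ top, left, height, width = 180, 207, 180, 200  # observation.images.side
+ cropped = F.crop(frame, top, left, height, width)
+ resized = F.resize(cropped, [128, 128])  # match resize_size in the config
+ print(cropped.shape, resized.shape)  # (3, 180, 200) then (3, 128, 128)
+ ```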
565
+
566
+ <p align="center">
567
+ <img
568
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/crop_dataset.gif"
569
+ width="600"
570
+ />
571
+ </p>
572
+
573
+ <p align="center">
574
+ <i>Interactive cropping tool for selecting regions of interest</i>
575
+ </p>
576
+
577
+ **Updating Configuration**
578
+
579
+ Add these crop parameters to your training configuration:
580
+
581
+ ```json
582
+ {
583
+ "env": {
584
+ "processor": {
585
+ "image_preprocessing": {
586
+ "crop_params_dict": {
587
+ "observation.images.side": [180, 207, 180, 200],
588
+ "observation.images.front": [180, 250, 120, 150]
589
+ },
590
+ "resize_size": [128, 128]
591
+ }
592
+ }
593
+ }
594
+ }
595
+ ```
596
+
597
+ **Recommended image resolution**
598
+
599
+ Most vision-based policies have been validated on square inputs of either **128×128** (default) or **64×64** pixels. We therefore advise setting the `resize_size` parameter to `[128, 128]`, or `[64, 64]` if you need to save GPU memory and bandwidth. Other resolutions are possible but have not been extensively tested.
600
+
601
+ ### Training a Reward Classifier
602
+
603
+ The reward classifier plays an important role in the HIL-SERL workflow by automating reward assignment and automatically detecting episode success. Instead of manually defining reward functions or relying on human feedback for every timestep, the reward classifier learns to predict success/failure from visual observations. This enables the RL algorithm to learn efficiently by providing consistent and automated reward signals based on the robot's camera inputs.
604
+
605
+ This section explains how to train a reward classifier for LeRobot's human-in-the-loop reinforcement learning implementation. A reward classifier learns to predict the reward value of a given state, which can be used in an RL setup to train a policy.
606
+
607
+ **Note**: Training a reward classifier is optional. You can start the first round of RL experiments by annotating the success manually with your gamepad or keyboard device.
608
+
609
+ The reward classifier implementation in `modeling_classifier.py` uses a pretrained vision model to process the images. It can output either a single value for binary rewards to predict success/fail cases or multiple values for multi-class settings.
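+
+ For intuition, here is a minimal stand-in for such a classifier. It uses a torchvision ResNet-18 backbone instead of the actual `helper2424/resnet10` model and a single logit for the binary case; it is a sketch, not the real `modeling_classifier.py` implementation:
+
+ ```python
+ import torch
+ import torch.nn as nn
+ import torchvision.models as models
+
+ class TinyRewardClassifier(nn.Module):
+     """Illustrative binary success/fail classifier over multiple camera views."""
+
+     def __init__(self, num_cameras: int = 2, hidden_dim: int = 256):
+         super().__init__()
+         backbone = models.resnet18(weights=None)
+         backbone.fc = nn.Identity()  # keep the 512-d pooled features
+         self.backbone = backbone
+         self.head = nn.Sequential(
+             nn.Linear(512 * num_cameras, hidden_dim),
+             nn.ReLU(),
+             nn.Linear(hidden_dim, 1),  # single logit for binary reward
+         )
+
+     def forward(self, images: list[torch.Tensor]) -> torch.Tensor:
+         feats = torch.cat([self.backbone(img) for img in images], dim=-1)
+         return self.head(feats)
+
+ clf = TinyRewardClassifier()
+ views = [torch.rand(1, 3, 128, 128) for _ in range(2)]  # two camera views
+ print(torch.sigmoid(clf(views)))  # success probability
+ ```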
610
+
611
+ **Collecting a Dataset for the reward classifier**
612
+
613
+ Before training, you need to collect a dataset with labeled examples. The `record_dataset` function in `gym_manipulator.py` handles collecting a dataset of observations, actions, and rewards.
614
+
615
+ To collect a dataset, you need to modify some parameters in the environment configuration, which is based on `HILSerlRobotEnvConfig`.
616
+
617
+ ```bash
618
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/reward_classifier_train_config.json
619
+ ```
620
+
621
+ **Key Parameters for Data Collection**
622
+
623
+ - **mode**: set it to `"record"` to collect a dataset (at root level)
624
+ - **dataset.repo_id**: `"hf_username/dataset_name"`, name of the dataset and repo on the hub
625
+ - **dataset.num_episodes_to_record**: Number of episodes to record
626
+ - **env.processor.reset.terminate_on_success**: Whether to automatically terminate episodes when success is detected (default: `true`)
627
+ - **env.fps**: Number of frames per second to record
628
+ - **dataset.push_to_hub**: Whether to push the dataset to the hub
629
+
630
+ The `env.processor.reset.terminate_on_success` parameter allows you to control episode termination behavior. When set to `false`, episodes will continue even after success is detected, allowing you to collect more positive examples with the reward=1 label. This is crucial for training reward classifiers as it provides more success state examples in your dataset. When set to `true` (default), episodes terminate immediately upon success detection.
631
+
632
+ **Important**: For reward classifier training, set `terminate_on_success: false` to collect sufficient positive examples. For regular HIL-SERL training, keep it as `true` to enable automatic episode termination when the task is completed successfully.
633
+
634
+ Example configuration section for data collection:
635
+
636
+ ```json
637
+ {
638
+ "env": {
639
+ "type": "gym_manipulator",
640
+ "name": "real_robot",
641
+ "fps": 10,
642
+ "processor": {
643
+ "reset": {
644
+ "reset_time_s": 5.0,
645
+ "control_time_s": 20.0,
646
+ "terminate_on_success": false
647
+ },
648
+ "gripper": {
649
+ "use_gripper": true
650
+ }
651
+ },
652
+ "robot": {
653
+ // ... robot configuration ...
654
+ },
655
+ "teleop": {
656
+ // ... teleoperator configuration ...
657
+ }
658
+ },
659
+ "dataset": {
660
+ "repo_id": "hf_username/dataset_name",
661
+ "dataset_root": "data/your_dataset",
662
+ "task": "reward_classifier_task",
663
+ "num_episodes_to_record": 20,
664
+ "replay_episode": null,
665
+ "push_to_hub": true
666
+ },
667
+ "mode": "record",
668
+ "device": "cpu"
669
+ }
670
+ ```
671
+
672
+ **Reward Classifier Configuration**
673
+
674
+ The reward classifier is configured using `configuration_classifier.py`. Here are the key parameters:
675
+
676
+ - **model_name**: Base model architecture (we mainly use `"helper2424/resnet10"`)
677
+ - **model_type**: `"cnn"` or `"transformer"`
678
+ - **num_cameras**: Number of camera inputs
679
+ - **num_classes**: Number of output classes (typically 2 for binary success/failure)
680
+ - **hidden_dim**: Size of hidden representation
681
+ - **dropout_rate**: Regularization parameter
682
+ - **learning_rate**: Learning rate for optimizer
683
+
684
+ Example configuration for training the [reward classifier](https://huggingface.co/datasets/aractingi/lerobot-example-config-files/blob/main/reward_classifier_train_config.json):
685
+
686
+ ```json
687
+ {
688
+ "policy": {
689
+ "type": "reward_classifier",
690
+ "model_name": "helper2424/resnet10",
691
+ "model_type": "cnn",
692
+ "num_cameras": 2,
693
+ "num_classes": 2,
694
+ "hidden_dim": 256,
695
+ "dropout_rate": 0.1,
696
+ "learning_rate": 1e-4,
697
+ "device": "cuda",
698
+ "use_amp": true,
699
+ "input_features": {
700
+ "observation.images.front": {
701
+ "type": "VISUAL",
702
+ "shape": [3, 128, 128]
703
+ },
704
+ "observation.images.side": {
705
+ "type": "VISUAL",
706
+ "shape": [3, 128, 128]
707
+ }
708
+ }
709
+ }
710
+ }
711
+ ```
712
+
713
+ **Training the Classifier**
714
+
715
+ To train the classifier, use the `train.py` script via the `lerobot-train` command with your configuration:
716
+
717
+ ```bash
718
+ lerobot-train --config_path path/to/reward_classifier_train_config.json
719
+ ```
720
+
721
+ **Deploying and Testing the Model**
722
+
723
+ To use your trained reward classifier, configure the `HILSerlRobotEnvConfig` to use your model:
724
+
725
+ <!-- prettier-ignore-start -->
726
+ ```python
727
+ config = GymManipulatorConfig(
728
+ env=HILSerlRobotEnvConfig(
729
+ processor=HILSerlProcessorConfig(
730
+ reward_classifier=RewardClassifierConfig(
731
+ pretrained_path="path_to_your_pretrained_model"
732
+ )
733
+ ),
734
+ # Other environment parameters
735
+ ),
736
+ dataset=DatasetConfig(...),
737
+ mode=None # For training
738
+ )
739
+ ```
740
+ <!-- prettier-ignore-end -->
741
+
742
+ Alternatively, set the argument in the JSON config file:
743
+
744
+ ```json
745
+ {
746
+ "env": {
747
+ "processor": {
748
+ "reward_classifier": {
749
+ "pretrained_path": "path_to_your_pretrained_model",
750
+ "success_threshold": 0.7,
751
+ "success_reward": 1.0
752
+ },
753
+ "reset": {
754
+ "terminate_on_success": true
755
+ }
756
+ }
757
+ }
758
+ }
759
+ ```
760
+
761
+ Run `gym_manipulator.py` to test the model.
762
+
763
+ ```bash
764
+ python -m lerobot.rl.gym_manipulator --config_path path/to/env_config.json
765
+ ```
766
+
767
+ The reward classifier will automatically provide rewards based on the visual input from the robot's cameras.
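+
+ Conceptually, `success_threshold` and `success_reward` map the classifier's probability to a sparse reward, along the lines of this hedged sketch (the environment applies its own version of this logic internally):
+
+ ```python
+ import torch
+
+ def reward_from_logit(logit, success_threshold=0.7, success_reward=1.0):
+     """Turn a classifier logit into a sparse reward and a success flag."""
+     prob = torch.sigmoid(logit).item()
+     success = prob > success_threshold
+     return (success_reward if success else 0.0), success
+
+ # sigmoid(1.2) ~= 0.77 > 0.7, so this yields (1.0, True)
+ print(reward_from_logit(torch.tensor(1.2)))
+ ```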
768
+
769
+ **Example Workflow for training the reward classifier**
770
+
771
+ 1. **Create the configuration files**:
772
+ Create the necessary JSON configuration files for the reward classifier and the environment. Check the examples [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/reward_classifier/config.json).
773
+
774
+ 2. **Collect a dataset**:
775
+
776
+ ```bash
777
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
778
+ ```
779
+
780
+ 3. **Train the classifier**:
781
+
782
+ ```bash
783
+ lerobot-train --config_path src/lerobot/configs/reward_classifier_train_config.json
784
+ ```
785
+
786
+ 4. **Test the classifier**:
787
+ ```bash
788
+ python -m lerobot.rl.gym_manipulator --config_path src/lerobot/configs/env_config.json
789
+ ```
790
+
791
+ ### Training with Actor-Learner
792
+
793
+ The LeRobot system uses a distributed actor-learner architecture for training. This architecture decouples robot interactions from the learning process, allowing them to run concurrently without blocking each other. The actor server handles robot observations and actions, sending interaction data to the learner server. The learner server performs gradient descent and periodically updates the actor's policy weights. You will need to start two processes: a learner and an actor.
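+
+ The sketch below is a toy, single-machine analogue of this decoupling using threads and queues; the real implementation communicates over gRPC, but the data flow is the same: transitions travel one way, fresh parameters the other.
+
+ ```python
+ import queue
+ import threading
+ import time
+
+ transitions: queue.Queue = queue.Queue()
+ params_lock = threading.Lock()
+ params = {"version": 0}  # stand-in for the policy weights
+
+ def actor():
+     for step in range(20):  # env rollout loop
+         with params_lock:
+             version = params["version"]
+         transitions.put({"step": step, "policy_version": version})
+         time.sleep(0.01)
+
+ def learner():
+     for update in range(1, 5):
+         batch = [transitions.get() for _ in range(5)]  # one "gradient step"
+         with params_lock:
+             params["version"] = update  # push fresh weights to the actor
+         versions = [t["policy_version"] for t in batch]
+         print(f"update {update} trained on data from policy versions {versions}")
+
+ threads = [threading.Thread(target=actor), threading.Thread(target=learner)]
+ for t in threads:
+     t.start()
+ for t in threads:
+     t.join()
+ ```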
794
+
795
+ **Configuration Setup**
796
+
797
+ Create a training configuration file (example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/train_config.json)). The training config is based on the main `TrainRLServerPipelineConfig` class in `lerobot/configs/train.py`.
798
+
799
+ 1. Configure the policy settings (`type="sac"`, `device`, etc.)
800
+ 2. Set `dataset` to your cropped dataset
801
+ 3. Configure environment settings with crop parameters
802
+ 4. Check the other parameters related to SAC in [configuration_sac.py](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/sac/configuration_sac.py#L79).
803
+ 5. Verify that the `policy` config is correct with the right `input_features` and `output_features` for your task.
804
+
805
+ **Starting the Learner**
806
+
807
+ First, start the learner server process:
808
+
809
+ ```bash
810
+ python -m lerobot.rl.learner --config_path src/lerobot/configs/train_config_hilserl_so100.json
811
+ ```
812
+
813
+ The learner:
814
+
815
+ - Initializes the policy network
816
+ - Prepares replay buffers
817
+ - Opens a `gRPC` server to communicate with actors
818
+ - Processes transitions and updates the policy
819
+
820
+ **Starting the Actor**
821
+
822
+ In a separate terminal, start the actor process with the same configuration:
823
+
824
+ ```bash
825
+ python -m lerobot.rl.actor --config_path src/lerobot/configs/train_config_hilserl_so100.json
826
+ ```
827
+
828
+ The actor:
829
+
830
+ - Connects to the learner via `gRPC`
831
+ - Initializes the environment
832
+ - Executes rollouts of the policy to collect experience
833
+ - Sends transitions to the learner
834
+ - Receives updated policy parameters
835
+
836
+ **Training Flow**
837
+
838
+ The training proceeds automatically:
839
+
840
+ 1. The actor executes the policy in the environment
841
+ 2. Transitions are collected and sent to the learner
842
+ 3. The learner updates the policy based on these transitions
843
+ 4. Updated policy parameters are sent back to the actor
844
+ 5. The process continues until the specified step limit is reached
845
+
846
+ **Human in the Loop**
847
+
848
+ - The key to learning efficiently is to have human interventions that provide corrective feedback and complete the task, in order to aid the policy's learning and exploration.
849
+ - To perform human interventions, press the upper right trigger button on the gamepad (or the `space` key on the keyboard). This pauses the policy's actions and allows you to take over; a minimal sketch of this take-over logic is shown below.
850
+ - A successful experiment is one where the human has to intervene at the start but then intervenes less and less as the policy improves. You can monitor the intervention rate in the `wandb` dashboard.
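+
+ As a hedged sketch (not the actual actor code), the take-over logic boils down to swapping in the human action and flagging the transition as an intervention:
+
+ ```python
+ def choose_action(policy_action, human_action, intervening: bool):
+     """During an intervention the human action both drives the robot and is
+     stored as an expert transition; otherwise the policy's action is used."""
+     action = human_action if intervening else policy_action
+     return action, {"is_intervention": intervening}
+
+ print(choose_action([0.1, 0.0, 0.0], [0.0, 0.2, 0.0], intervening=True))
+ ```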
851
+
852
+ <p align="center">
853
+ <img
854
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/hil_effect.png?raw=true"
855
+ alt="Figure shows the control mappings on a Logitech gamepad."
856
+ title="Gamepad Control Mapping"
857
+ width="100%"
858
+ ></img>
859
+ </p>
860
+
861
+ <p align="center">
862
+ <i>
863
+ Example showing how human interventions help guide policy learning over time
864
+ </i>
865
+ </p>
866
+
867
+ - The figure plots the episodic reward over interaction steps and shows the effect of human interventions on policy learning.
868
+ - The orange curve is an experiment without any human interventions, while the pink and blue curves are experiments with human interventions.
869
+ - We can observe that the number of steps until the policy starts achieving the maximum reward is cut by a quarter when human interventions are present.
870
+
871
+ **Monitoring and Debugging**
872
+
873
+ If you have `wandb.enable` set to `true` in your configuration, you can monitor training progress in real-time through the [Weights & Biases](https://wandb.ai/site/) dashboard.
874
+
875
+ ### Guide to Human Interventions
876
+
877
+ The learning process is very sensitive to the intervention strategy. It will take a few runs to understand how to intervene effectively. Some tips and hints:
878
+
879
+ - Allow the policy to explore for a few episodes at the start of training.
880
+ - Avoid intervening for long periods of time. Try to intervene in situations where the robot's behaviour needs correcting, such as when it goes off track.
881
+ - Once the policy starts achieving the task, even if it's not perfect, you can limit your interventions to quick actions like a simple grasping command.
882
+
883
+ Ideally, your intervention rate should drop gradually during training, as shown in the figure below.
884
+
885
+ <p align="center">
886
+ <img
887
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/intervention_rate_tutorial_rl.png?raw=true"
888
+ alt="Intervention rate"
889
+ title="Intervention rate during training"
890
+ width="100%"
891
+ ></img>
892
+ </p>
893
+
894
+ <p align="center">
895
+ <i>
896
+ Plot of the intervention rate during a training run on a pick and lift cube
897
+ task
898
+ </i>
899
+ </p>
900
+
901
+ ### Key hyperparameters to tune
902
+
903
+ Some configuration values have a disproportionate impact on training stability and speed:
904
+
905
+ - **`temperature_init`** (`policy.temperature_init`) – initial entropy temperature in SAC. Higher values encourage more exploration; lower values make the policy more deterministic early on. A good starting point is `1e-2`. We observed that setting it too high can make human interventions ineffective and slow down learning.
906
+ - **`policy_parameters_push_frequency`** (`policy.actor_learner_config.policy_parameters_push_frequency`) – interval in _seconds_ between two weight pushes from the learner to the actor. The default is `4 s`. Decrease to **1-2 s** to provide fresher weights (at the cost of more network traffic); increase only if your connection is slow, as this will reduce sample efficiency.
907
+ - **`storage_device`** (`policy.storage_device`) – device on which the learner keeps the policy parameters. If you have spare GPU memory, set this to `"cuda"` (instead of the default `"cpu"`). Keeping the weights on-GPU removes CPU→GPU transfer overhead and can significantly increase the number of learner updates per second.
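+
+ If you edit the JSON training config by hand, the three knobs above live under `policy`, following the dotted paths listed. As a sketch (the file path is an assumption), you can patch them with the standard library:
+
+ ```python
+ import json
+
+ path = "src/lerobot/configs/train_config_hilserl_so100.json"  # assumed path
+ with open(path) as f:
+     cfg = json.load(f)
+
+ cfg["policy"]["temperature_init"] = 1e-2
+ cfg["policy"]["actor_learner_config"]["policy_parameters_push_frequency"] = 2  # seconds
+ cfg["policy"]["storage_device"] = "cuda"  # keep learner weights on-GPU
+
+ with open(path, "w") as f:
+     json.dump(cfg, f, indent=2)
+ ```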
908
+
909
+ Congrats 🎉, you have finished this tutorial!
910
+
911
+ > [!TIP]
912
+ > If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
913
+
914
+ Paper citation:
915
+
916
+ ```
917
+ @article{luo2024precise,
918
+ title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
919
+ author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
920
+ journal={arXiv preprint arXiv:2410.21845},
921
+ year={2024}
922
+ }
923
+ ```
docs/source/hilserl_sim.mdx ADDED
@@ -0,0 +1,154 @@
1
+ # Train RL in Simulation
2
+
3
+ This guide explains how to use the `gym_hil` simulation environments as an alternative to real robots when working with the LeRobot framework for Human-In-the-Loop (HIL) reinforcement learning.
4
+
5
+ `gym_hil` is a package that provides Gymnasium-compatible simulation environments specifically designed for Human-In-the-Loop reinforcement learning. These environments allow you to:
6
+
7
+ - Train policies in simulation to test the RL stack before training on real robots
8
+
9
+ - Collect demonstrations in sim using external devices like gamepads or keyboards
10
+ - Perform human interventions during policy learning
11
+
12
+ Currently, the main environment is a Franka Panda robot simulation based on MuJoCo, with tasks like picking up a cube.
13
+
14
+ ## Installation
15
+
16
+ First, install the `gym_hil` package within the LeRobot environment:
17
+
18
+ ```bash
19
+ pip install -e ".[hilserl]"
20
+ ```
21
+
22
+ ## What do I need?
23
+
24
+ - A gamepad or keyboard to control the robot
25
+ - An NVIDIA GPU
26
+
27
+ ## Configuration
28
+
29
+ To use `gym_hil` with LeRobot, you need to create a configuration file. An example is provided [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/env_config.json). Key configuration sections include:
30
+
31
+ ### Environment Type and Task
32
+
33
+ ```json
34
+ {
35
+ "env": {
36
+ "type": "gym_manipulator",
37
+ "name": "gym_hil",
38
+ "task": "PandaPickCubeGamepad-v0",
39
+ "fps": 10
40
+ },
41
+ "device": "cuda"
42
+ }
43
+ ```
44
+
45
+ Available tasks:
46
+
47
+ - `PandaPickCubeBase-v0`: Basic environment
48
+ - `PandaPickCubeGamepad-v0`: With gamepad control
49
+ - `PandaPickCubeKeyboard-v0`: With keyboard control
50
+
51
+ ### Processor Configuration
52
+
53
+ ```json
54
+ {
55
+ "env": {
56
+ "processor": {
57
+ "control_mode": "gamepad",
58
+ "gripper": {
59
+ "use_gripper": true,
60
+ "gripper_penalty": -0.02
61
+ },
62
+ "reset": {
63
+ "control_time_s": 15.0,
64
+ "fixed_reset_joint_positions": [
65
+ 0.0, 0.195, 0.0, -2.43, 0.0, 2.62, 0.785
66
+ ]
67
+ },
68
+ "inverse_kinematics": {
69
+ "end_effector_step_sizes": {
70
+ "x": 0.025,
71
+ "y": 0.025,
72
+ "z": 0.025
73
+ }
74
+ }
75
+ }
76
+ }
77
+ }
78
+ ```
79
+
80
+ Important parameters:
81
+
82
+ - `gripper.gripper_penalty`: Penalty for excessive gripper movement
83
+ - `gripper.use_gripper`: Whether to enable gripper control
84
+ - `inverse_kinematics.end_effector_step_sizes`: Size of the steps in the x,y,z axes of the end-effector
85
+ - `control_mode`: Set to `"gamepad"` to use a gamepad controller
86
+
87
+ ## Running with HIL RL of LeRobot
88
+
89
+ ### Basic Usage
90
+
91
+ To run the environment, set `mode` to `null`:
92
+
93
+ ```bash
94
+ python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
95
+ ```
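+
+ Outside of the LeRobot scripts, you can also smoke-test the environment directly through the standard Gymnasium API. The sketch below assumes `gym_hil` registers its tasks under the `gym_hil/` namespace; adjust the ID if your install differs:
+
+ ```python
+ import gymnasium as gym
+ import gym_hil  # noqa: F401  # importing registers the environments (assumed)
+
+ env = gym.make("gym_hil/PandaPickCubeBase-v0")  # ID namespace is an assumption
+ obs, info = env.reset()
+ for _ in range(100):
+     action = env.action_space.sample()  # random actions, just to smoke-test
+     obs, reward, terminated, truncated, info = env.step(action)
+     if terminated or truncated:
+         obs, info = env.reset()
+ env.close()
+ ```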
96
+
97
+ ### Recording a Dataset
98
+
99
+ To collect a dataset, set the mode to `"record"` while defining the `repo_id` and the number of episodes to record:
100
+
101
+ ```json
102
+ {
103
+ "env": {
104
+ "type": "gym_manipulator",
105
+ "name": "gym_hil",
106
+ "task": "PandaPickCubeGamepad-v0"
107
+ },
108
+ "dataset": {
109
+ "repo_id": "username/sim_dataset",
110
+ "root": null,
111
+ "task": "pick_cube",
112
+ "num_episodes_to_record": 10,
113
+ "replay_episode": null,
114
+ "push_to_hub": true
115
+ },
116
+ "mode": "record"
117
+ }
118
+ ```
119
+
120
+ ```bash
121
+ python -m lerobot.rl.gym_manipulator --config_path path/to/gym_hil_env.json
122
+ ```
123
+
124
+ ### Training a Policy
125
+
126
+ To train a policy, check out the configuration example available [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/rl/gym_hil/train_config.json) and run the actor and learner servers. First, start the actor:
127
+
128
+ ```bash
129
+ python -m lerobot.rl.actor --config_path path/to/train_gym_hil_env.json
130
+ ```
131
+
132
+ In a different terminal, run the learner server:
133
+
134
+ ```bash
135
+ python -m lerobot.rl.learner --config_path path/to/train_gym_hil_env.json
136
+ ```
137
+
138
+ The simulation environment provides a safe and repeatable way to develop and test your Human-In-the-Loop reinforcement learning components before deploying to real robots.
139
+
140
+ Congrats 🎉, you have finished this tutorial!
141
+
142
+ > [!TIP]
143
+ > If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
144
+
145
+ Paper citation:
146
+
147
+ ```
148
+ @article{luo2024precise,
149
+ title={Precise and Dexterous Robotic Manipulation via Human-in-the-Loop Reinforcement Learning},
150
+ author={Luo, Jianlan and Xu, Charles and Wu, Jeffrey and Levine, Sergey},
151
+ journal={arXiv preprint arXiv:2410.21845},
152
+ year={2024}
153
+ }
154
+ ```
docs/source/hope_jr.mdx ADDED
@@ -0,0 +1,277 @@
1
+ # HopeJR
2
+
3
+ ## Prerequisites
4
+
5
+ - [Hardware Setup](https://github.com/TheRobotStudio/HOPEJr)
6
+
7
+ ## Install LeRobot
8
+
9
+ Follow the [installation instructions](https://github.com/huggingface/lerobot#installation) to install LeRobot.
10
+
11
+ Install LeRobot with HopeJR dependencies:
12
+
13
+ ```bash
14
+ pip install -e ".[hopejr]"
15
+ ```
16
+
17
+ ## Device Configuration
18
+
19
+ Before starting calibration and operation, you need to identify the USB ports for each HopeJR component. Run this command to find the USB ports for the arm, hand, glove, and exoskeleton:
20
+
21
+ ```bash
22
+ lerobot-find-port
23
+ ```
24
+
25
+ This will display the available USB ports and their associated devices. Make note of the port paths (e.g., `/dev/tty.usbmodem58760433331`, `/dev/tty.usbmodem11301`) as you'll need to specify them in the `--robot.port` and `--teleop.port` parameters when recording data, replaying episodes, or running teleoperation scripts.
26
+
27
+ ## Step 1: Calibration
28
+
29
+ Before performing teleoperation, HopeJR's limbs need to be calibrated. Calibration files will be saved in `~/.cache/huggingface/lerobot/calibration`.
30
+
31
+ ### 1.1 Calibrate Robot Hand
32
+
33
+ ```bash
34
+ lerobot-calibrate \
35
+ --robot.type=hope_jr_hand \
36
+ --robot.port=/dev/tty.usbmodem58760432281 \
37
+ --robot.id=blue \
38
+ --robot.side=right
39
+ ```
40
+
41
+ When running the calibration script, a calibration GUI will pop up. Finger joints are named as follows:
42
+
43
+ **Thumb**:
44
+
45
+ - **CMC**: base joint connecting thumb to hand
46
+ - **MCP**: knuckle joint
47
+ - **PIP**: first finger joint
48
+ - **DIP**: fingertip joint
49
+
50
+ **Index, Middle, Ring, and Pinky fingers**:
51
+
52
+ - **Radial flexor**: Moves base of finger towards the thumb
53
+ - **Ulnar flexor**: Moves base of finger towards the pinky
54
+ - **PIP/DIP**: Flexes the distal and proximal phalanx of the finger
55
+
56
+ Each one of these will need to be calibrated individually via the GUI.
57
+ Note that ulnar and radial flexors should have ranges of the same size (but with different offsets) in order to get symmetric movement.
58
+
59
+ <p align="center">
60
+ <img
61
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_1.png"
62
+ alt="Setting boundaries in the hand calibration GUI"
63
+ title="Setting boundaries in the hand calibration GUI"
64
+ width="100%"
65
+ ></img>
66
+ </p>
67
+
68
+ Use the calibration interface to set the range boundaries for each joint as shown above.
69
+
70
+ <p align="center">
71
+ <img
72
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_2.png"
73
+ alt="Saving calibration values"
74
+ title="Saving calibration values"
75
+ width="100%"
76
+ ></img>
77
+ </p>
78
+
79
+ Once you have set the appropriate boundaries for all joints, click "Save" to save the calibration values to the motors.
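+
+ As a quick check after saving, you can compare the recorded ranges (recall that the radial and ulnar flexor pairs should span ranges of the same size). This sketch assumes the calibration JSON maps each joint name to `min`/`max` fields, and the file path matches the hand calibrated above; both are assumptions and may differ from the actual schema:
+
+ ```python
+ import json
+ from pathlib import Path
+
+ # Path and schema are assumptions for illustration.
+ path = Path("~/.cache/huggingface/lerobot/calibration/robots/hope_jr_hand/blue.json").expanduser()
+ calib = json.loads(path.read_text())
+
+ # Radial/ulnar flexor pairs should have ranges of equal size for symmetric motion.
+ for name, joint in calib.items():
+     print(f"{name}: range {joint['max'] - joint['min']}")
+ ```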
80
+
81
+ ### 1.2 Calibrate Teleoperator Glove
82
+
83
+ ```bash
84
+ lerobot-calibrate \
85
+ --teleop.type=homunculus_glove \
86
+ --teleop.port=/dev/tty.usbmodem11201 \
87
+ --teleop.id=red \
88
+ --teleop.side=right
89
+ ```
90
+
91
+ Move each finger through its full range of motion, starting from the thumb.
92
+
93
+ ```
94
+ Move thumb through its entire range of motion.
95
+ Recording positions. Press ENTER to stop...
96
+
97
+ -------------------------------------------
98
+ NAME | MIN | POS | MAX
99
+ thumb_cmc | 1790 | 1831 | 1853
100
+ thumb_mcp | 1497 | 1514 | 1528
101
+ thumb_pip | 1466 | 1496 | 1515
102
+ thumb_dip | 1463 | 1484 | 1514
103
+ ```
104
+
105
+ Continue with each finger:
106
+
107
+ ```
108
+ Move middle through its entire range of motion.
109
+ Recording positions. Press ENTER to stop...
110
+
111
+ -------------------------------------------
112
+ NAME | MIN | POS | MAX
113
+ middle_mcp_abduction | 1598 | 1718 | 1820
114
+ middle_mcp_flexion | 1512 | 1658 | 2136
115
+ middle_dip | 1484 | 1500 | 1547
116
+ ```
117
+
118
+ Once calibration is complete, the system will save the calibration to `/Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_glove/red.json`
119
+
120
+ ### 1.3 Calibrate Robot Arm
121
+
122
+ ```bash
123
+ lerobot-calibrate \
124
+ --robot.type=hope_jr_arm \
125
+ --robot.port=/dev/tty.usbserial-1110 \
126
+ --robot.id=white
127
+ ```
128
+
129
+ This will open a calibration GUI where you can set the range limits for each motor. The arm motions are organized as follows:
130
+
131
+ - **Shoulder**: pitch, yaw, and roll
132
+ - **Elbow**: flex
133
+ - **Wrist**: pitch, yaw, and roll
134
+
135
+ <p align="center">
136
+ <img
137
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/calibration_gui_2.png"
138
+ alt="Setting boundaries in the arm calibration GUI"
139
+ title="Setting boundaries in the arm calibration GUI"
140
+ width="100%"
141
+ ></img>
142
+ </p>
143
+
144
+ Use the calibration interface to set the range boundaries for each joint. Move each joint through its full range of motion and adjust the minimum and maximum values accordingly. Once you have set the appropriate boundaries for all joints, save the calibration.
145
+
146
+ ### 1.4 Calibrate Teleoperator Exoskeleton
147
+
148
+ ```bash
149
+ lerobot-calibrate \
150
+ --teleop.type=homunculus_arm \
151
+ --teleop.port=/dev/tty.usbmodem11201 \
152
+ --teleop.id=black
153
+ ```
154
+
155
+ The exoskeleton allows you to control the robot arm. During calibration, you'll be prompted to move all joints through their full range of motion:
156
+
157
+ ```
158
+ Move all joints through their entire range of motion.
159
+ Recording positions. Press ENTER to stop...
160
+
161
+ -------------------------------------------
162
+ -------------------------------------------
163
+ NAME | MIN | POS | MAX
164
+ shoulder_pitch | 586 | 736 | 895
165
+ shoulder_yaw | 1257 | 1374 | 1390
166
+ shoulder_roll | 449 | 1034 | 2564
167
+ elbow_flex | 3023 | 3117 | 3134
168
+ wrist_roll | 3073 | 3096 | 3147
169
+ wrist_yaw | 2143 | 2171 | 2185
170
+ wrist_pitch | 1975 | 1993 | 2074
171
+ Calibration saved to /Users/your_username/.cache/huggingface/lerobot/calibration/teleoperators/homunculus_arm/black.json
172
+ ```
173
+
174
+ ## Step 2: Teleoperation
175
+
176
+ Due to global variable conflicts in the Feetech middleware, teleoperation for the arm and the hand must run in separate shell sessions:
177
+
178
+ ### Hand
179
+
180
+ ```bash
181
+ lerobot-teleoperate \
182
+ --robot.type=hope_jr_hand \
183
+ --robot.port=/dev/tty.usbmodem58760432281 \
184
+ --robot.id=blue \
185
+ --robot.side=right \
186
+ --teleop.type=homunculus_glove \
187
+ --teleop.port=/dev/tty.usbmodem11201 \
188
+ --teleop.id=red \
189
+ --teleop.side=right \
190
+ --display_data=true \
191
+ --fps=30
192
+ ```
193
+
194
+ ### Arm
195
+
196
+ ```bash
197
+ lerobot-teleoperate \
198
+ --robot.type=hope_jr_arm \
199
+ --robot.port=/dev/tty.usbserial-1110 \
200
+ --robot.id=white \
201
+ --teleop.type=homunculus_arm \
202
+ --teleop.port=/dev/tty.usbmodem11201 \
203
+ --teleop.id=black \
204
+ --display_data=true \
205
+ --fps=30
206
+ ```
207
+
208
+ ## Step 3: Record, Replay, Train
209
+
210
+ Record, Replay and Train with HopeJR are still experimental.
211
+
212
+ ### Record
213
+
214
+ This step records the dataset; an example can be seen [here](https://huggingface.co/datasets/nepyope/hand_record_test_with_video_data).
215
+
216
+ ```bash
217
+ lerobot-record \
218
+ --robot.type=hope_jr_hand \
219
+ --robot.port=/dev/tty.usbmodem58760432281 \
220
+ --robot.id=right \
221
+ --robot.side=right \
222
+ --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
223
+ --teleop.type=homunculus_glove \
224
+ --teleop.port=/dev/tty.usbmodem1201 \
225
+ --teleop.id=right \
226
+ --teleop.side=right \
227
+ --dataset.repo_id=nepyope/hand_record_test_with_video_data \
228
+ --dataset.single_task="Hand recording test with video data" \
229
+ --dataset.num_episodes=1 \
230
+ --dataset.episode_time_s=5 \
231
+ --dataset.push_to_hub=true \
232
+ --dataset.private=true \
233
+ --display_data=true
234
+ ```
235
+
236
+ ### Replay
237
+
238
+ ```bash
239
+ lerobot-replay \
240
+ --robot.type=hope_jr_hand \
241
+ --robot.port=/dev/tty.usbmodem58760432281 \
242
+ --robot.id=right \
243
+ --robot.side=right \
244
+ --dataset.repo_id=nepyope/hand_record_test_with_camera \
245
+ --dataset.episode=0
246
+ ```
247
+
248
+ ### Train
249
+
250
+ ```bash
251
+ lerobot-train \
252
+ --dataset.repo_id=nepyope/hand_record_test_with_video_data \
253
+ --policy.type=act \
254
+ --output_dir=outputs/train/hopejr_hand \
255
+ --job_name=hopejr \
256
+ --policy.device=mps \
257
+ --wandb.enable=true \
258
+ --policy.repo_id=nepyope/hand_test_policy
259
+ ```
260
+
261
+ ### Evaluate
262
+
263
+ This training run can be viewed as an example [here](https://wandb.ai/tino/lerobot/runs/rp0k8zvw?nw=nwusertino).
264
+
265
+ ```bash
266
+ lerobot-record \
267
+ --robot.type=hope_jr_hand \
268
+ --robot.port=/dev/tty.usbmodem58760432281 \
269
+ --robot.id=right \
270
+ --robot.side=right \
271
+ --robot.cameras='{"main": {"type": "opencv", "index_or_path": 0, "width": 640, "height": 480, "fps": 30}}' \
272
+ --display_data=false \
273
+ --dataset.repo_id=nepyope/eval_hopejr \
274
+ --dataset.single_task="Evaluate hopejr hand policy" \
275
+ --dataset.num_episodes=10 \
276
+ --policy.path=outputs/train/hopejr_hand/checkpoints/last/pretrained_model
277
+ ```
docs/source/il_robots.mdx ADDED
@@ -0,0 +1,603 @@
1
+ # Imitation Learning on Real-World Robots
2
+
3
+ This tutorial will explain how to train a neural network to control a real robot autonomously.
4
+
5
+ **You'll learn:**
6
+
7
+ 1. How to record and visualize your dataset.
8
+ 2. How to train a policy using your data and prepare it for evaluation.
9
+ 3. How to evaluate your policy and visualize the results.
10
+
11
+ By following these steps, you'll be able to replicate tasks, such as picking up a Lego block and placing it in a bin with a high success rate, as shown in the video below.
12
+
13
+ <details>
14
+ <summary><strong>Video: pickup lego block task</strong></summary>
15
+
16
+ <div class="video-container">
17
+ <video controls width="600">
18
+ <source
19
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot_task.mp4"
20
+ type="video/mp4"
21
+ />
22
+ </video>
23
+ </div>
24
+
25
+ </details>
26
+
27
+ This tutorial isn’t tied to a specific robot: we walk you through the commands and API snippets you can adapt for any supported platform.
28
+
29
+ During data collection, you'll use a teleoperation device, such as a leader arm or a keyboard, to teleoperate the robot and record its motion trajectories.
30
+
31
+ Once you’ve gathered enough trajectories, you’ll train a neural network to imitate these trajectories and deploy the trained model so your robot can perform the task autonomously.
32
+
33
+ If you run into any issues at any point, jump into our [Discord community](https://discord.com/invite/s3KuuzsPFb) for support.
34
+
35
+ ## Set up and Calibrate
36
+
37
+ If you haven't yet set up and calibrated your robot and teleop device, please do so by following the robot-specific tutorial.
38
+
39
+ ## Teleoperate
40
+
41
+ In this example, we’ll demonstrate how to teleoperate the SO101 robot. For each command, we also provide a corresponding API example.
42
+
43
+ Note that the `id` associated with a robot is used to store the calibration file. It's important to use the same `id` when teleoperating, recording, and evaluating with the same setup.
44
+
45
+ <hfoptions id="teleoperate_so101">
46
+ <hfoption id="Command">
47
+ ```bash
48
+ lerobot-teleoperate \
49
+ --robot.type=so101_follower \
50
+ --robot.port=/dev/tty.usbmodem58760431541 \
51
+ --robot.id=my_awesome_follower_arm \
52
+ --teleop.type=so101_leader \
53
+ --teleop.port=/dev/tty.usbmodem58760431551 \
54
+ --teleop.id=my_awesome_leader_arm
55
+ ```
56
+ </hfoption>
57
+ <hfoption id="API example">
58
+
59
+ <!-- prettier-ignore-start -->
60
+ ```python
61
+ from lerobot.teleoperators.so101_leader import SO101LeaderConfig, SO101Leader
62
+ from lerobot.robots.so101_follower import SO101FollowerConfig, SO101Follower
63
+
64
+ robot_config = SO101FollowerConfig(
65
+ port="/dev/tty.usbmodem58760431541",
66
+ id="my_red_robot_arm",
67
+ )
68
+
69
+ teleop_config = SO101LeaderConfig(
70
+ port="/dev/tty.usbmodem58760431551",
71
+ id="my_blue_leader_arm",
72
+ )
73
+
74
+ robot = SO101Follower(robot_config)
75
+ teleop_device = SO101Leader(teleop_config)
76
+ robot.connect()
77
+ teleop_device.connect()
78
+
79
+ while True:
80
+ action = teleop_device.get_action()
81
+ robot.send_action(action)
82
+ ```
83
+ <!-- prettier-ignore-end -->
84
+
85
+ </hfoption>
86
+ </hfoptions>
87
+
88
+ The teleoperate command will automatically:
89
+
90
+ 1. Identify any missing calibrations and initiate the calibration procedure.
91
+ 2. Connect the robot and teleop device and start teleoperation.
92
+
93
+ ## Cameras
94
+
95
+ To add cameras to your setup, follow this [Guide](./cameras#setup-cameras).
96
+
97
+ ## Teleoperate with cameras
98
+
99
+ With `rerun`, you can teleoperate again while simultaneously visualizing the camera feeds and joint positions. In this example, we’re using the Koch arm.
100
+
101
+ <hfoptions id="teleoperate_koch_camera">
102
+ <hfoption id="Command">
103
+ ```bash
104
+ lerobot-teleoperate \
105
+ --robot.type=koch_follower \
106
+ --robot.port=/dev/tty.usbmodem58760431541 \
107
+ --robot.id=my_awesome_follower_arm \
108
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
109
+ --teleop.type=koch_leader \
110
+ --teleop.port=/dev/tty.usbmodem58760431551 \
111
+ --teleop.id=my_awesome_leader_arm \
112
+ --display_data=true
113
+ ```
114
+ </hfoption>
115
+ <hfoption id="API example">
116
+
117
+ <!-- prettier-ignore-start -->
118
+ ```python
119
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
120
+ from lerobot.teleoperators.koch_leader import KochLeaderConfig, KochLeader
121
+ from lerobot.robots.koch_follower import KochFollowerConfig, KochFollower
122
+
123
+ camera_config = {
124
+ "front": OpenCVCameraConfig(index_or_path=0, width=1920, height=1080, fps=30)
125
+ }
126
+
127
+ robot_config = KochFollowerConfig(
128
+ port="/dev/tty.usbmodem585A0076841",
129
+ id="my_red_robot_arm",
130
+ cameras=camera_config
131
+ )
132
+
133
+ teleop_config = KochLeaderConfig(
134
+ port="/dev/tty.usbmodem58760431551",
135
+ id="my_blue_leader_arm",
136
+ )
137
+
138
+ robot = KochFollower(robot_config)
139
+ teleop_device = KochLeader(teleop_config)
140
+ robot.connect()
141
+ teleop_device.connect()
142
+
143
+ while True:
144
+ observation = robot.get_observation()
145
+ action = teleop_device.get_action()
146
+ robot.send_action(action)
147
+ ```
148
+ <!-- prettier-ignore-end -->
149
+
150
+ </hfoption>
151
+ </hfoptions>
152
+
153
+ ## Record a dataset
154
+
155
+ Once you're familiar with teleoperation, you can record your first dataset.
156
+
157
+ We use the Hugging Face Hub features for uploading your dataset. If you haven't previously used the Hub, make sure you can log in via the CLI using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens).
158
+
159
+ Add your token to the CLI by running this command:
160
+
161
+ ```bash
162
+ huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
163
+ ```
164
+
165
+ Then store your Hugging Face repository name in a variable:
166
+
167
+ ```bash
168
+ HF_USER=$(hf auth whoami | head -n 1)
169
+ echo $HF_USER
170
+ ```
171
+
172
+ Now you can record a dataset. To record 5 episodes and upload your dataset to the hub, adapt the code below for your robot and execute the command or API example.
173
+
174
+ <hfoptions id="record">
175
+ <hfoption id="Command">
176
+ ```bash
177
+ lerobot-record \
178
+ --robot.type=so101_follower \
179
+ --robot.port=/dev/tty.usbmodem585A0076841 \
180
+ --robot.id=my_awesome_follower_arm \
181
+ --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \
182
+ --teleop.type=so101_leader \
183
+ --teleop.port=/dev/tty.usbmodem58760431551 \
184
+ --teleop.id=my_awesome_leader_arm \
185
+ --display_data=true \
186
+ --dataset.repo_id=${HF_USER}/record-test \
187
+ --dataset.num_episodes=5 \
188
+ --dataset.single_task="Grab the black cube"
189
+ ```
190
+ </hfoption>
191
+ <hfoption id="API example">
192
+
193
+ <!-- prettier-ignore-start -->
194
+ ```python
195
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
196
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
197
+ from lerobot.datasets.utils import hw_to_dataset_features
198
+ from lerobot.robots.so100_follower import SO100Follower, SO100FollowerConfig
199
+ from lerobot.teleoperators.so100_leader.config_so100_leader import SO100LeaderConfig
200
+ from lerobot.teleoperators.so100_leader.so100_leader import SO100Leader
201
+ from lerobot.utils.control_utils import init_keyboard_listener
202
+ from lerobot.utils.utils import log_say
203
+ from lerobot.utils.visualization_utils import init_rerun
204
+ from lerobot.record import record_loop
205
+
206
+ NUM_EPISODES = 5
207
+ FPS = 30
208
+ EPISODE_TIME_SEC = 60
209
+ RESET_TIME_SEC = 10
210
+ TASK_DESCRIPTION = "My task description"
211
+
212
+ # Create the robot and teleoperator configurations
213
+ camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
214
+ robot_config = SO100FollowerConfig(
215
+ port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm", cameras=camera_config
216
+ )
217
+ teleop_config = SO100LeaderConfig(port="/dev/tty.usbmodem585A0077581", id="my_awesome_leader_arm")
218
+
219
+ # Initialize the robot and teleoperator
220
+ robot = SO100Follower(robot_config)
221
+ teleop = SO100Leader(teleop_config)
222
+
223
+ # Configure the dataset features
224
+ action_features = hw_to_dataset_features(robot.action_features, "action")
225
+ obs_features = hw_to_dataset_features(robot.observation_features, "observation")
226
+ dataset_features = {**action_features, **obs_features}
227
+
228
+ # Create the dataset
229
+ dataset = LeRobotDataset.create(
230
+ repo_id="<hf_username>/<dataset_repo_id>",
231
+ fps=FPS,
232
+ features=dataset_features,
233
+ robot_type=robot.name,
234
+ use_videos=True,
235
+ image_writer_threads=4,
236
+ )
237
+
238
+ # Initialize the keyboard listener and rerun visualization
239
+ _, events = init_keyboard_listener()
240
+ init_rerun(session_name="recording")
241
+
242
+ # Connect the robot and teleoperator
243
+ robot.connect()
244
+ teleop.connect()
245
+
246
+ episode_idx = 0
247
+ while episode_idx < NUM_EPISODES and not events["stop_recording"]:
248
+ log_say(f"Recording episode {episode_idx + 1} of {NUM_EPISODES}")
249
+
250
+ record_loop(
251
+ robot=robot,
252
+ events=events,
253
+ fps=FPS,
254
+ teleop=teleop,
255
+ dataset=dataset,
256
+ control_time_s=EPISODE_TIME_SEC,
257
+ single_task=TASK_DESCRIPTION,
258
+ display_data=True,
259
+ )
260
+
261
+ # Reset the environment if not stopping or re-recording
262
+ if not events["stop_recording"] and (episode_idx < NUM_EPISODES - 1 or events["rerecord_episode"]):
263
+ log_say("Reset the environment")
264
+ record_loop(
265
+ robot=robot,
266
+ events=events,
267
+ fps=FPS,
268
+ teleop=teleop,
269
+ control_time_s=RESET_TIME_SEC,
270
+ single_task=TASK_DESCRIPTION,
271
+ display_data=True,
272
+ )
273
+
274
+ if events["rerecord_episode"]:
275
+ log_say("Re-recording episode")
276
+ events["rerecord_episode"] = False
277
+ events["exit_early"] = False
278
+ dataset.clear_episode_buffer()
279
+ continue
280
+
281
+ dataset.save_episode()
282
+ episode_idx += 1
283
+
284
+ # Clean up
285
+ log_say("Stop recording")
286
+ robot.disconnect()
287
+ teleop.disconnect()
288
+ dataset.push_to_hub()
289
+ ```
290
+ <!-- prettier-ignore-end -->
291
+
292
+ </hfoption>
293
+ </hfoptions>
294
+
295
+ #### Dataset upload
296
+
297
+ Locally, your dataset is stored in this folder: `~/.cache/huggingface/lerobot/{repo-id}`. At the end of data recording, your dataset will be uploaded to your Hugging Face page (e.g. `https://huggingface.co/datasets/${HF_USER}/so101_test`), whose URL you can obtain by running:
298
+
299
+ ```bash
300
+ echo https://huggingface.co/datasets/${HF_USER}/so101_test
301
+ ```
302
+
303
+ Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
304
+
305
+ You can look for other LeRobot datasets on the hub by searching for `LeRobot` [tags](https://huggingface.co/datasets?other=LeRobot).
306
+
307
+ You can also push your local dataset to the Hub manually, running:
308
+
309
+ ```bash
310
+ huggingface-cli upload ${HF_USER}/record-test ~/.cache/huggingface/lerobot/{repo-id} --repo-type dataset
311
+ ```
312
+
313
+ #### Record function
314
+
315
+ The `record` function provides a suite of tools for capturing and managing data during robot operation:
316
+
317
+ ##### 1. Data Storage
318
+
319
+ - Data is stored in the `LeRobotDataset` format and written to disk during recording.
320
+ - By default, the dataset is pushed to your Hugging Face page after recording.
321
+ - To disable uploading, use `--dataset.push_to_hub=False`.
322
+
323
+ ##### 2. Checkpointing and Resuming
324
+
325
+ - Checkpoints are automatically created during recording.
326
+ - If an issue occurs, you can resume by re-running the same command with `--resume=true`. When resuming a recording, `--dataset.num_episodes` must be set to the **number of additional episodes to be recorded**, not to the targeted total number of episodes in the dataset!
327
+ - To start recording from scratch, **manually delete** the dataset directory.
328
+
329
+ ##### 3. Recording Parameters
330
+
331
+ Set the flow of data recording using command-line arguments:
332
+
333
+ - `--dataset.episode_time_s=60`
334
+ Duration of each data recording episode (default: **60 seconds**).
335
+ - `--dataset.reset_time_s=60`
336
+ Duration for resetting the environment after each episode (default: **60 seconds**).
337
+ - `--dataset.num_episodes=50`
338
+ Total number of episodes to record (default: **50**).
339
+
340
+ ##### 4. Keyboard Controls During Recording
341
+
342
+ Control the data recording flow using keyboard shortcuts:
343
+
344
+ - Press **Right Arrow (`→`)**: Stop the current episode (or reset period) early and move on to the next.
345
+ - Press **Left Arrow (`←`)**: Cancel the current episode and re-record it.
346
+ - Press **Escape (`ESC`)**: Immediately stop the session, encode videos, and upload the dataset.
347
+
348
+ #### Tips for gathering data
349
+
350
+ Once you're comfortable with data recording, you can create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings. Also make sure the object you are manipulating is visible on the cameras. A good rule of thumb: you should be able to do the task yourself by looking only at the camera images.
351
+
352
+ In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
353
+
354
+ Avoid adding too much variation too quickly, as it may hinder your results.
355
+
356
+ If you want to dive deeper into this important topic, you can check out the [blog post](https://huggingface.co/blog/lerobot-datasets#what-makes-a-good-dataset) we wrote on what makes a good dataset.
357
+
358
+ #### Troubleshooting:
359
+
360
+ - On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
361
+
362
+ ## Visualize a dataset
363
+
364
+ If you uploaded your dataset to the hub with `--dataset.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, given by:
365
+
366
+ ```bash
367
+ echo ${HF_USER}/so101_test
368
+ ```
369
+
370
+ ## Replay an episode
371
+
372
+ A useful feature is the `replay` function, which allows you to replay any episode that you've recorded or episodes from any dataset out there. This function helps you test the repeatability of your robot's actions and assess transferability across robots of the same model.
373
+
374
+ You can replay the first episode on your robot with either the command below or with the API example:
375
+
376
+ <hfoptions id="replay">
377
+ <hfoption id="Command">
378
+ ```bash
379
+ lerobot-replay \
380
+ --robot.type=so101_follower \
381
+ --robot.port=/dev/tty.usbmodem58760431541 \
382
+ --robot.id=my_awesome_follower_arm \
383
+ --dataset.repo_id=${HF_USER}/record-test \
384
+ --dataset.episode=0 # choose the episode you want to replay
385
+ ```
386
+ </hfoption>
387
+ <hfoption id="API example">
388
+
389
+ <!-- prettier-ignore-start -->
390
+ ```python
391
+ import time
392
+
393
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
394
+ from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
395
+ from lerobot.robots.so100_follower.so100_follower import SO100Follower
396
+ from lerobot.utils.robot_utils import busy_wait
397
+ from lerobot.utils.utils import log_say
398
+
399
+ episode_idx = 0
400
+
401
+ robot_config = SO100FollowerConfig(port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm")
402
+
403
+ robot = SO100Follower(robot_config)
404
+ robot.connect()
405
+
406
+ dataset = LeRobotDataset("<hf_username>/<dataset_repo_id>", episodes=[episode_idx])
407
+ actions = dataset.hf_dataset.select_columns("action")
408
+
409
+ log_say(f"Replaying episode {episode_idx}")
410
+ for idx in range(dataset.num_frames):
411
+ t0 = time.perf_counter()
412
+
413
+ action = {
414
+ name: float(actions[idx]["action"][i]) for i, name in enumerate(dataset.features["action"]["names"])
415
+ }
416
+ robot.send_action(action)
417
+
418
+ busy_wait(1.0 / dataset.fps - (time.perf_counter() - t0))
419
+
420
+ robot.disconnect()
421
+ ```
422
+ <!-- prettier-ignore-end -->
423
+
424
+ </hfoption>
425
+ </hfoptions>
426
+
427
+ Your robot should replicate movements similar to those you recorded. For example, check out [this video](https://x.com/RemiCadene/status/1793654950905680090) where we use `replay` on an Aloha robot from [Trossen Robotics](https://www.trossenrobotics.com).
428
+
429
+ ## Train a policy
430
+
431
+ To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
432
+
433
+ ```bash
434
+ lerobot-train \
435
+ --dataset.repo_id=${HF_USER}/so101_test \
436
+ --policy.type=act \
437
+ --output_dir=outputs/train/act_so101_test \
438
+ --job_name=act_so101_test \
439
+ --policy.device=cuda \
440
+ --wandb.enable=true \
441
+ --policy.repo_id=${HF_USER}/my_policy
442
+ ```
443
+
444
+ Let's explain the command:
445
+
446
+ 1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/so101_test`.
447
+ 2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
448
+ 3. We provided `policy.device=cuda` since we are training on an NVIDIA GPU, but you could use `policy.device=mps` to train on Apple silicon.
449
+ 4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
450
+
451
+ Training should take several hours. You will find checkpoints in `outputs/train/act_so101_test/checkpoints`.
452
+
453
+ To resume training from a checkpoint, below is an example command to resume from the `last` checkpoint of the `act_so101_test` policy:
454
+
455
+ ```bash
456
+ lerobot-train \
457
+ --config_path=outputs/train/act_so101_test/checkpoints/last/pretrained_model/train_config.json \
458
+ --resume=true
459
+ ```
460
+
461
+ If you do not want to push your model to the hub after training, use `--policy.push_to_hub=false`.
462
+
463
+ Additionally, you can provide extra `tags`, specify a `license`, or make the model repo `private` by adding, for example: `--policy.private=true --policy.tags=\[ppo,rl\] --policy.license=mit`
464
+
465
+ #### Train using Google Colab
466
+
467
+ If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
468
+
469
+ #### Upload policy checkpoints
470
+
471
+ Once training is done, upload the latest checkpoint with:
472
+
473
+ ```bash
474
+ huggingface-cli upload ${HF_USER}/act_so101_test \
475
+ outputs/train/act_so101_test/checkpoints/last/pretrained_model
476
+ ```
477
+
478
+ You can also upload intermediate checkpoints with:
479
+
480
+ ```bash
481
+ CKPT=010000
482
+ huggingface-cli upload ${HF_USER}/act_so101_test${CKPT} \
483
+ outputs/train/act_so101_test/checkpoints/${CKPT}/pretrained_model
484
+ ```
485
+
486
+ ## Run inference and evaluate your policy
487
+
488
+ You can use the `record` script from [`lerobot/record.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/record.py) with a policy checkpoint as input to run inference and evaluate your policy. For instance, run the command or API example below to run inference and record evaluation episodes:
489
+
490
+ <hfoptions id="eval">
491
+ <hfoption id="Command">
492
+ ```bash
493
+ lerobot-record \
494
+ --robot.type=so100_follower \
495
+ --robot.port=/dev/ttyACM1 \
496
+ --robot.cameras="{ up: {type: opencv, index_or_path: /dev/video10, width: 640, height: 480, fps: 30}, side: {type: intelrealsense, serial_number_or_name: 233522074606, width: 640, height: 480, fps: 30}}" \
497
+ --robot.id=my_awesome_follower_arm \
498
+ --display_data=false \
499
+ --dataset.repo_id=${HF_USER}/eval_so100 \
500
+ --dataset.single_task="Put lego brick into the transparent box" \
+ --policy.path=${HF_USER}/my_policy
+ # Teleoperation is optional if you want to teleoperate in between episodes.
+ # To enable it, add these flags before --policy.path:
+ # --teleop.type=so100_leader \
+ # --teleop.port=/dev/ttyACM0 \
+ # --teleop.id=my_awesome_leader_arm \
506
+ ```
507
+ </hfoption>
508
+ <hfoption id="API example">
509
+
510
+ <!-- prettier-ignore-start -->
511
+ ```python
512
+ from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig
513
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
514
+ from lerobot.datasets.utils import hw_to_dataset_features
515
+ from lerobot.policies.act.modeling_act import ACTPolicy
516
+ from lerobot.policies.factory import make_pre_post_processors
517
+ from lerobot.robots.so100_follower.config_so100_follower import SO100FollowerConfig
518
+ from lerobot.robots.so100_follower.so100_follower import SO100Follower
519
+ from lerobot.scripts.lerobot_record import record_loop
520
+ from lerobot.utils.control_utils import init_keyboard_listener
521
+ from lerobot.utils.utils import log_say
522
+ from lerobot.utils.visualization_utils import init_rerun
523
+
524
+
525
+ NUM_EPISODES = 5
526
+ FPS = 30
527
+ EPISODE_TIME_SEC = 60
528
+ TASK_DESCRIPTION = "My task description"
529
+ HF_MODEL_ID = "<hf_username>/<model_repo_id>"
530
+ HF_DATASET_ID = "<hf_username>/<eval_dataset_repo_id>"
531
+
532
+ # Create the robot configuration
533
+ camera_config = {"front": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=FPS)}
534
+ robot_config = SO100FollowerConfig(
535
+     port="/dev/tty.usbmodem58760434471", id="my_awesome_follower_arm", cameras=camera_config
536
+ )
537
+
538
+ # Initialize the robot
539
+ robot = SO100Follower(robot_config)
540
+
541
+ # Initialize the policy
542
+ policy = ACTPolicy.from_pretrained(HF_MODEL_ID)
543
+
544
+ # Configure the dataset features
545
+ action_features = hw_to_dataset_features(robot.action_features, "action")
546
+ obs_features = hw_to_dataset_features(robot.observation_features, "observation")
547
+ dataset_features = {**action_features, **obs_features}
548
+
549
+ # Create the dataset
550
+ dataset = LeRobotDataset.create(
551
+     repo_id=HF_DATASET_ID,
552
+     fps=FPS,
553
+     features=dataset_features,
554
+     robot_type=robot.name,
555
+     use_videos=True,
556
+     image_writer_threads=4,
557
+ )
558
+
559
+ # Initialize the keyboard listener and rerun visualization
560
+ _, events = init_keyboard_listener()
561
+ init_rerun(session_name="recording")
562
+
563
+ # Connect the robot
564
+ robot.connect()
565
+
566
+ preprocessor, postprocessor = make_pre_post_processors(
567
+     policy_cfg=policy,
568
+     pretrained_path=HF_MODEL_ID,
569
+     dataset_stats=dataset.meta.stats,
570
+ )
571
+
572
+ for episode_idx in range(NUM_EPISODES):
573
+     log_say(f"Running inference, recording eval episode {episode_idx + 1} of {NUM_EPISODES}")
574
+
575
+     # Run the policy inference loop
576
+     record_loop(
577
+         robot=robot,
578
+         events=events,
579
+         fps=FPS,
580
+         policy=policy,
581
+         preprocessor=preprocessor,
582
+         postprocessor=postprocessor,
583
+         dataset=dataset,
584
+         control_time_s=EPISODE_TIME_SEC,
585
+         single_task=TASK_DESCRIPTION,
586
+         display_data=True,
587
+     )
588
+
589
+     dataset.save_episode()
590
+
591
+ # Clean up
592
+ robot.disconnect()
593
+ dataset.push_to_hub()
594
+ ```
595
+ <!-- prettier-ignore-end -->
596
+
597
+ </hfoption>
598
+ </hfoptions>
599
+
600
+ As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
601
+
602
+ 1. There is an additional `--policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_so101_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_so101_test`).
603
+ 2. The dataset name begins with `eval_` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_so101_test`).
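+
+ Once recorded, you can inspect the evaluation episodes like any other dataset. A minimal sketch, reusing the example repo id from above (the attribute names follow the `LeRobotDataset` usage shown earlier):
+
+ ```python
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ # Load the evaluation episodes you just recorded
+ eval_dataset = LeRobotDataset("<hf_username>/eval_act_so101_test")
+ print(f"episodes: {eval_dataset.num_episodes}, frames: {eval_dataset.num_frames}, fps: {eval_dataset.fps}")
+ ```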
docs/source/il_sim.mdx ADDED
@@ -0,0 +1,220 @@
1
+ # Imitation Learning in Sim
2
+
3
+ This tutorial will explain how to train a neural network to control a robot in simulation with imitation learning.
4
+
5
+ **You'll learn:**
6
+
7
+ 1. How to record a dataset in simulation with [gym-hil](https://github.com/huggingface/gym-hil) and visualize the dataset.
8
+ 2. How to train a policy using your data.
9
+ 3. How to evaluate your policy in simulation and visualize the results.
10
+
11
+ For the simulation environment we use the same [repo](https://github.com/huggingface/gym-hil) that is also used by the Human-In-the-Loop (HIL) reinforcement learning algorithm.
12
+ This environment is based on [MuJoCo](https://mujoco.org) and allows you to record datasets in LeRobotDataset format.
13
+ Teleoperation is easiest with a controller like the Logitech F710, but you can also use your keyboard if you are up for the challenge.
14
+
15
+ ## Installation
16
+
17
+ First, install the `gym_hil` package within the LeRobot environment: go to your LeRobot folder and run this command:
18
+
19
+ ```bash
20
+ pip install -e ".[hilserl]"
21
+ ```
22
+
23
+ ## Teleoperate and Record a Dataset
24
+
25
+ To use `gym_hil` with LeRobot, you need to use a configuration file. An example config file can be found [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/sim_il/env_config.json).
26
+
27
+ To teleoperate and collect a dataset, we need to modify this config file. Here's an example configuration for imitation learning data collection:
28
+
29
+ ```json
30
+ {
31
+ "env": {
32
+ "type": "gym_manipulator",
33
+ "name": "gym_hil",
34
+ "task": "PandaPickCubeGamepad-v0",
35
+ "fps": 10
36
+ },
37
+ "dataset": {
38
+ "repo_id": "your_username/il_gym",
39
+ "root": null,
40
+ "task": "pick_cube",
41
+ "num_episodes_to_record": 30,
42
+ "replay_episode": null,
43
+ "push_to_hub": true
44
+ },
45
+ "mode": "record",
46
+ "device": "cuda"
47
+ }
48
+ ```
49
+
50
+ Key configuration points:
51
+
52
+ - Set your `repo_id` in the `dataset` section: `"repo_id": "your_username/il_gym"`
53
+ - Set `num_episodes_to_record: 30` to collect 30 demonstration episodes
54
+ - Ensure `mode` is set to `"record"`
55
+ - If you don't have an NVIDIA GPU, change `"device": "cuda"` to `"mps"` for macOS or `"cpu"`
56
+ - To use keyboard instead of gamepad, change `"task"` to `"PandaPickCubeKeyboard-v0"`
57
+
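+ Since the file is plain JSON, you can sanity-check your edits with the standard library before launching (the path is whatever you saved the config as):
+
+ ```python
+ import json
+
+ with open("path/to/env_config_gym_hil_il.json") as f:
+     cfg = json.load(f)
+
+ # Should print your repo id and "record"
+ print(cfg["dataset"]["repo_id"], cfg["mode"])
+ ```
+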
58
+ Then we can run this command to start:
59
+
60
+ <hfoptions id="teleop_sim">
61
+ <hfoption id="Linux">
62
+
63
+ ```bash
64
+ python -m lerobot.rl.gym_manipulator --config_path path/to/env_config_gym_hil_il.json
65
+ ```
66
+
67
+ </hfoption>
68
+ <hfoption id="MacOS">
69
+
70
+ ```bash
71
+ mjpython -m lerobot.rl.gym_manipulator --config_path path/to/env_config_gym_hil_il.json
72
+ ```
73
+
74
+ </hfoption>
75
+ </hfoptions>
76
+
77
+ Once rendered, you can teleoperate the robot with the gamepad or keyboard; you can find the gamepad/keyboard controls below.
78
+
79
+ Note that to teleoperate the robot you have to hold the "Human Take Over Pause Policy" button (`RB`) to enable control!
80
+
81
+ **Gamepad Controls**
82
+
83
+ <p align="center">
84
+ <img
85
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/gamepad_guide.jpg?raw=true"
86
+ alt="Figure shows the control mappings on a Logitech gamepad."
87
+ title="Gamepad Control Mapping"
88
+ width="100%"
89
+ ></img>
90
+ </p>
91
+ <p align="center">
92
+ <i>Gamepad button mapping for robot control and episode management</i>
93
+ </p>
94
+
95
+ **Keyboard controls**
96
+
97
+ For keyboard control, use the `spacebar` to enable control and the following keys to move the robot:
98
+
99
+ ```bash
100
+ Arrow keys: Move in X-Y plane
101
+ Shift and Shift_R: Move in Z axis
102
+ Right Ctrl and Left Ctrl: Open and close gripper
103
+ ESC: Exit
104
+ ```
105
+
106
+ ## Visualize a dataset
107
+
108
+ If you uploaded your dataset to the hub you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id.
109
+
110
+ <p align="center">
111
+ <img
112
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/dataset_visualizer_sim.png"
113
+ alt="Figure shows the dataset visualizer"
114
+ title="Dataset visualization"
115
+ width="100%"
116
+ ></img>
117
+ </p>
118
+ <p align="center">
119
+ <i>Dataset visualizer</i>
120
+ </p>
121
+
122
+ ## Train a policy
123
+
124
+ To train a policy to control your robot, use the [`lerobot-train`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
125
+
126
+ ```bash
127
+ lerobot-train \
128
+ --dataset.repo_id=${HF_USER}/il_gym \
129
+ --policy.type=act \
130
+ --output_dir=outputs/train/il_sim_test \
131
+ --job_name=il_sim_test \
132
+ --policy.device=cuda \
133
+ --wandb.enable=true
134
+ ```
135
+
136
+ Let's explain the command:
137
+
138
+ 1. We provided the dataset as an argument with `--dataset.repo_id=${HF_USER}/il_gym`.
139
+ 2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
140
+ 3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
141
+ 4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
142
+
143
+ Training should take several hours; 100k steps (the default) takes about 1 hour on an Nvidia A100. You will find checkpoints in `outputs/train/il_sim_test/checkpoints`.
144
+
145
+ #### Train using Google Colab
146
+
147
+ If your local computer doesn't have a powerful GPU, you can use Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act).
148
+
149
+ #### Upload policy checkpoints
150
+
151
+ Once training is done, upload the latest checkpoint with:
152
+
153
+ ```bash
154
+ huggingface-cli upload ${HF_USER}/il_sim_test \
155
+ outputs/train/il_sim_test/checkpoints/last/pretrained_model
156
+ ```
157
+
158
+ You can also upload intermediate checkpoints with:
159
+
160
+ ```bash
161
+ CKPT=010000
162
+ huggingface-cli upload ${HF_USER}/il_sim_test${CKPT} \
163
+ outputs/train/il_sim_test/checkpoints/${CKPT}/pretrained_model
164
+ ```
165
+
166
+ ## Evaluate your policy in Sim
167
+
168
+ To evaluate your policy, you have to use a configuration file. An example can be found [here](https://huggingface.co/datasets/lerobot/config_examples/resolve/main/sim_il/eval_config.json).
169
+
170
+ Here's an example evaluation configuration:
171
+
172
+ ```json
173
+ {
174
+ "env": {
175
+ "type": "gym_manipulator",
176
+ "name": "gym_hil",
177
+ "task": "PandaPickCubeGamepad-v0",
178
+ "fps": 10
179
+ },
180
+ "dataset": {
181
+ "repo_id": "your_username/il_sim_dataset",
182
+ "dataset_root": null,
183
+ "task": "pick_cube"
184
+ },
185
+ "pretrained_policy_name_or_path": "your_username/il_sim_model",
186
+ "device": "cuda"
187
+ }
188
+ ```
189
+
190
+ Make sure to replace:
191
+
192
+ - `repo_id` with the dataset you trained on (e.g., `your_username/il_sim_dataset`)
193
+ - `pretrained_policy_name_or_path` with your model ID (e.g., `your_username/il_sim_model`)
194
+
195
+ Then you can run this command to visualize your trained policy:
196
+
197
+ <hfoptions id="eval_policy">
198
+ <hfoption id="Linux">
199
+
200
+ ```bash
201
+ python -m lerobot.rl.eval_policy --config_path=path/to/eval_config_gym_hil.json
202
+ ```
203
+
204
+ </hfoption>
205
+ <hfoption id="MacOS">
206
+
207
+ ```bash
208
+ mjpython -m lerobot.rl.eval_policy --config_path=path/to/eval_config_gym_hil.json
209
+ ```
210
+
211
+ </hfoption>
212
+ </hfoptions>
213
+
214
+ > [!WARNING]
215
+ > While the main workflow of training ACT in simulation is straightforward, there is significant room for exploring how to set up the task, define the initial state of the environment, and determine the type of data required during collection to learn the most effective policy. If your trained policy doesn't perform well, investigate the quality of the dataset it was trained on using our visualizers, as well as the action values and various hyperparameters related to ACT and the simulation.
216
+
217
+ Congrats 🎉, you have finished this tutorial. If you want to continue with using LeRobot in simulation follow this [Tutorial on reinforcement learning in sim with HIL-SERL](https://huggingface.co/docs/lerobot/hilserl_sim)
218
+
219
+ > [!TIP]
220
+ > If you have any questions or need help, please reach out on [Discord](https://discord.com/invite/s3KuuzsPFb).
docs/source/implement_your_own_processor.mdx ADDED
@@ -0,0 +1,273 @@
1
+ # Implement your own Robot Processor
2
+
3
+ In this tutorial, you'll learn how to implement your own Robot Processor.
4
+ It begins by exploring the need for a custom processor, then uses the `NormalizerProcessorStep` as the running example to explain how to implement, configure, and serialize a processor. Finally, it lists all helper processors that ship with LeRobot.
5
+
6
+ ## Why would you need a custom processor?
7
+
8
+ In most cases, when reading raw data from sensors or when models output actions, you need to process this data to make it compatible with your target system. For example, a common need is normalizing data ranges to make them suitable for neural networks.
9
+
10
+ LeRobot's `NormalizerProcessorStep` handles this crucial task:
11
+
12
+ ```python
13
+ # Input: raw joint positions in [0, 180] degrees
14
+ raw_action = torch.tensor([90.0, 45.0, 135.0])
15
+
16
+ # After processing: normalized to [-1, 1] range for model training
17
+ normalizer = NormalizerProcessorStep(features=features, norm_map=norm_map, stats=dataset_stats)
18
+ normalized_result = normalizer(transition)
19
+ # ...
20
+ ```
21
+
22
+ Other common processing needs include:
23
+
24
+ - **Device placement**: Moving tensors between CPU/GPU and converting data types
25
+ - **Format conversion**: Transforming between different data structures
26
+ - **Batching**: Adding/removing batch dimensions for model compatibility
27
+ - **Safety constraints**: Applying limits to robot commands
28
+
29
+ ```python
30
+ # Example pipeline combining multiple processors
31
+ pipeline = PolicyProcessorPipeline([
32
+     RenameObservationsProcessorStep(rename_map={}),
33
+     AddBatchDimensionProcessorStep(),
34
+     NormalizerProcessorStep(features=features, stats=stats),
35
+     DeviceProcessorStep(device="cuda"),
36
+     # ...
37
+ ])
38
+ ```
39
+
40
+ LeRobot provides a pipeline mechanism to implement sequences of processing steps for both input data and output actions, making it easy to compose these transformations in the right order for optimal performance.
41
+
42
+ ## How to implement your own processor?
43
+
44
+ We'll use the `NormalizerProcessorStep` as our main example because it demonstrates essential processor patterns including state management, configuration serialization, and tensor handling that you'll commonly need.
45
+
46
+ Prepare the sequence of processing steps necessary for your problem. A processor step is a class that implements the following methods:
47
+
48
+ - `__call__`: implements the processing step for the input transition.
49
+ - `get_config`: gets the configuration of the processor step.
50
+ - `state_dict`: gets the state of the processor step.
51
+ - `load_state_dict`: loads the state of the processor step.
52
+ - `reset`: resets the state of the processor step.
53
+ - `transform_features`: declares how the processor step modifies the feature space.
54
+
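+ For orientation, here is a minimal sketch of a step implementing this contract. It leaves the data untouched; the registry name is made up, and the import path for `ProcessorStep`, `ProcessorStepRegistry`, and `EnvTransition` is assumed to be the same `lerobot.processor` package used elsewhere on this page:
+
+ ```python
+ from dataclasses import dataclass
+ from typing import Any
+
+ import torch
+
+ from lerobot.processor import EnvTransition, ProcessorStep, ProcessorStepRegistry  # assumed import path
+
+
+ @dataclass
+ @ProcessorStepRegistry.register("my_org/identity_step")  # hypothetical registry name
+ class IdentityStep(ProcessorStep):
+     """A no-op step: returns the transition unchanged."""
+
+     def __call__(self, transition: EnvTransition) -> EnvTransition:
+         return transition.copy()  # copy to avoid side effects
+
+     def get_config(self) -> dict[str, Any]:
+         return {}  # no JSON-serializable parameters
+
+     def state_dict(self) -> dict[str, torch.Tensor]:
+         return {}  # no tensor state
+
+     def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
+         pass  # nothing to restore
+
+     def reset(self) -> None:
+         pass  # no per-episode state to clear
+
+     def transform_features(self, features):
+         return features  # the feature space is unchanged
+ ```
+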
55
+ ### Implement the `__call__` method
56
+
57
+ The `__call__` method is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`. Here's how the `NormalizerProcessorStep` works:
58
+
59
+ ```python
60
+ @dataclass
61
+ @ProcessorStepRegistry.register("normalizer_processor")
62
+ class NormalizerProcessorStep(ProcessorStep):
63
+     """Normalize observations/actions using dataset statistics."""
64
+
65
+     features: dict[str, PolicyFeature]
66
+     norm_map: dict[FeatureType, NormalizationMode]
67
+     stats: dict[str, dict[str, Any]] | None = None
68
+     eps: float = 1e-8
69
+     _tensor_stats: dict = field(default_factory=dict, init=False, repr=False)
70
+
71
+     def __post_init__(self):
72
+         """Convert stats to tensors for efficient computation."""
73
+         self.stats = self.stats or {}
74
+         self._tensor_stats = to_tensor(self.stats, device=self.device, dtype=torch.float32)
75
+
76
+     def __call__(self, transition: EnvTransition) -> EnvTransition:
77
+         new_transition = transition.copy()
78
+         # Normalize observations
79
+         # ...
80
+         # Normalize action
81
+         # ...
82
+         return new_transition
83
+
84
+ ```
85
+
86
+ See the full implementation in `src/lerobot/processor/normalize_processor.py` for complete details.
87
+
88
+ **Key principles:**
89
+
90
+ - **Always use `transition.copy()`** to avoid side effects
91
+ - **Handle both observations and actions** consistently
92
+ - **Separate config from state**: `get_config()` returns JSON-serializable params, `state_dict()` returns tensors
93
+ - **Convert stats to tensors** in `__post_init__()` for efficient computation
94
+
95
+ ### Configuration and State Management
96
+
97
+ Processors support serialization through three methods that separate configuration from tensor state. The `NormalizerProcessorStep` demonstrates this perfectly - it carries dataset statistics (tensors) in its state, and hyperparameters in its config:
98
+
99
+ ```python
100
+ # Continuing the NormalizerProcessorStep example...
101
+
102
+ def get_config(self) -> dict[str, Any]:
103
+     """JSON-serializable configuration (no tensors)."""
104
+     return {
105
+         "eps": self.eps,
106
+         "features": {k: {"type": v.type.value, "shape": v.shape} for k, v in self.features.items()},
107
+         "norm_map": {ft.value: nm.value for ft, nm in self.norm_map.items()},
108
+         # ...
109
+     }
110
+
111
+ def state_dict(self) -> dict[str, torch.Tensor]:
112
+     """Tensor state only (e.g., dataset statistics)."""
113
+     flat: dict[str, torch.Tensor] = {}
114
+     for key, sub in self._tensor_stats.items():
115
+         for stat_name, tensor in sub.items():
116
+             flat[f"{key}.{stat_name}"] = tensor.cpu()  # Always save to CPU
117
+     return flat
118
+
119
+ def load_state_dict(self, state: dict[str, torch.Tensor]) -> None:
120
+     """Restore tensor state at runtime."""
121
+     self._tensor_stats.clear()
122
+     for flat_key, tensor in state.items():
123
+         key, stat_name = flat_key.rsplit(".", 1)
124
+         # Load to processor's configured device
125
+         self._tensor_stats.setdefault(key, {})[stat_name] = tensor.to(
126
+             dtype=torch.float32, device=self.device
127
+         )
128
+     # ...
129
+ ```
130
+
131
+ **Usage:**
132
+
133
+ ```python
134
+ # Save (e.g., inside a policy)
135
+ config = normalizer.get_config()
136
+ tensors = normalizer.state_dict()
137
+
138
+ # Restore (e.g., loading a pretrained policy)
139
+ new_normalizer = NormalizerProcessorStep(**config)
140
+ new_normalizer.load_state_dict(tensors)
141
+ # Now new_normalizer has the same stats and configuration
142
+ ```
143
+
144
+ ### Transform features
145
+
146
+ The `transform_features` method defines how your processor transforms feature names and shapes. This is crucial for policy configuration and debugging.
147
+
148
+ For `NormalizerProcessorStep`, features are typically preserved unchanged since normalization doesn't alter keys or shapes:
149
+
150
+ ```python
151
+ def transform_features(self, features: dict[PipelineFeatureType, dict[str, PolicyFeature]]) -> dict[PipelineFeatureType, dict[str, PolicyFeature]]:
152
+     """Normalization preserves all feature definitions."""
153
+     return features  # No changes to feature structure
154
+     # ...
155
+ ```
156
+
157
+ When your processor renames or reshapes data, implement this method to reflect the mapping for downstream components. For example, a simple rename processor:
158
+
159
+ ```python
160
+ def transform_features(self, features: dict[str, PolicyFeature]) -> dict[str, PolicyFeature]:
161
+     # Simple renaming
162
+     if "pixels" in features:
163
+         features["observation.image"] = features.pop("pixels")
164
+
165
+     # Pattern-based renaming
166
+     for key in list(features.keys()):
167
+         if key.startswith("env_state."):
168
+             suffix = key[len("env_state."):]
169
+             features[f"observation.{suffix}"] = features.pop(key)
170
+     # ...
171
+
172
+     return features
173
+ ```
174
+
175
+ **Key principles:**
176
+
177
+ - Use `features.pop(old_key)` to remove and get the old feature
178
+ - Use `features[new_key] = old_feature` to add the renamed feature
179
+ - Always return the modified features dictionary
180
+ - Document transformations clearly in the docstring
181
+
182
+ ### Using overrides
183
+
184
+ You can override step parameters at load-time using `overrides`. This is handy for non-serializable objects or site-specific settings. It works both in policy factories and with `DataProcessorPipeline.from_pretrained(...)`.
185
+
186
+ **Foundational model adaptation**: This is particularly useful when working with foundational pretrained policies where you rarely have access to the original training statistics. You can inject your own dataset statistics to adapt the normalizer to your specific robot or environment data.
187
+
188
+ Example: during policy evaluation on the robot, override the device and rename map.
189
+ Use this to run a policy trained on CUDA on a CPU-only robot, or to remap camera keys when the robot uses different names than the dataset.
190
+
191
+ Direct usage with `from_pretrained`:
192
+
193
+ ```python
194
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+ from lerobot.processor import RobotProcessorPipeline
195
+
196
+ # Load a foundational policy trained on diverse robot data
197
+ # but adapt normalization to your specific robot/environment
198
+ new_stats = LeRobotDataset(repo_id="username/my-dataset").meta.stats
199
+ processor = RobotProcessorPipeline.from_pretrained(
200
+     "huggingface/foundational-robot-policy",  # Pretrained foundation model
201
+     overrides={
202
+         "normalizer_processor": {"stats": new_stats},  # Inject your robot's statistics
203
+         "device_processor": {"device": "cuda:0"},  # registry name for registered steps
204
+         "rename_processor": {"rename_map": robot_key_map},  # Map your robot's observation keys
205
+         # ...
206
+     },
207
+ )
208
+ ```
209
+
210
+ ## Best Practices
211
+
212
+ Based on analysis of all LeRobot processor implementations, here are the key patterns and practices:
213
+
214
+ ### 1. **Safe Data Handling**
215
+
216
+ Always create copies of input data to avoid unintended side effects. Use `transition.copy()` and `observation.copy()` rather than modifying data in-place. This prevents your processor from accidentally affecting other components in the pipeline.
217
+
218
+ Check for required data before processing and handle missing data gracefully. If your processor expects certain keys (like `"pixels"` for image processing), validate their presence first. For optional data, use safe access patterns like `transition.get()` and handle `None` values appropriately.
219
+
220
+ When data validation fails, provide clear, actionable error messages that help users understand what went wrong and how to fix it.
221
+
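+ As a sketch (the `"observation"` and `"pixels"` keys are illustrative, and the transform itself is a placeholder), a defensive `__call__` might look like:
+
+ ```python
+ def __call__(self, transition: EnvTransition) -> EnvTransition:
+     new_transition = transition.copy()  # never mutate the caller's transition
+     observation = new_transition.get("observation")  # safe access: the key may be absent
+     if observation is None:
+         return new_transition  # early return: nothing to process
+     if "pixels" not in observation:
+         raise ValueError(f"Expected an observation with key 'pixels', got keys: {sorted(observation)}")
+     observation = observation.copy()
+     observation["pixels"] = observation["pixels"].flip(-1)  # placeholder transform on the image tensor
+     new_transition["observation"] = observation
+     return new_transition
+ ```
+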
222
+ ### 2. **Choose Appropriate Base Classes**
223
+
224
+ LeRobot provides specialized base classes that reduce boilerplate code and ensure consistency. Use `ObservationProcessorStep` when you only need to modify observations, `ActionProcessorStep` for action-only processing, and `RobotActionProcessorStep` specifically for dictionary-based robot actions.
225
+
226
+ Only inherit directly from `ProcessorStep` when you need full control over the entire transition or when processing multiple transition components simultaneously. The specialized base classes handle the transition management for you and provide type safety.
227
+
228
+ ### 3. **Registration and Naming**
229
+
230
+ Register your processors with descriptive, namespaced names using `@ProcessorStepRegistry.register()`. Use organization prefixes like `"robotics_lab/safety_clipper"` or `"acme_corp/vision_enhancer"` to avoid naming conflicts. Avoid generic names like `"processor"` or `"step"` that could clash with other implementations.
231
+
232
+ Good registration makes your processors discoverable and enables clean serialization/deserialization when saving and loading pipelines.
233
+
234
+ ### 4. **State Management Patterns**
235
+
236
+ Distinguish between configuration parameters (JSON-serializable values) and internal state (tensors, buffers). Use dataclass fields with `init=False, repr=False` for internal state that shouldn't appear in the constructor or string representation.
237
+
238
+ Implement the `reset()` method to clear internal state between episodes. This is crucial for stateful processors that accumulate data over time, like moving averages or temporal filters.
239
+
240
+ Remember that `get_config()` should only return JSON-serializable configuration, while `state_dict()` handles tensor state separately.
241
+
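+ For example, a hypothetical action smoother might keep a buffer as internal state (excluded from the constructor and the repr), expose only its window size as configuration, and clear the buffer in `reset()`. A sketch, reusing the assumed `lerobot.processor` imports from the skeleton above plus `field` from `dataclasses`:
+
+ ```python
+ @dataclass
+ @ProcessorStepRegistry.register("my_org/action_smoother")  # hypothetical registry name
+ class ActionSmootherStep(ProcessorStep):
+     window: int = 5  # configuration: JSON-serializable
+     _buffer: list = field(default_factory=list, init=False, repr=False)  # internal state
+
+     def __call__(self, transition: EnvTransition) -> EnvTransition:
+         new_transition = transition.copy()
+         action = new_transition.get("action")
+         if action is None:
+             return new_transition
+         self._buffer.append(action)
+         self._buffer = self._buffer[-self.window:]  # keep only the last `window` actions
+         new_transition["action"] = torch.stack(self._buffer).mean(dim=0)
+         return new_transition
+
+     def get_config(self) -> dict[str, Any]:
+         return {"window": self.window}  # configuration only, never tensors
+
+     def reset(self) -> None:
+         self._buffer.clear()  # crucial: clear accumulated state between episodes
+ ```
+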
242
+ ### 5. **Input Validation and Error Handling**
243
+
244
+ Validate input types and shapes before processing. Check tensor properties like `dtype` and dimensions to ensure compatibility with your algorithms. For robot actions, verify that required pose components or joint values are present and within expected ranges.
245
+
246
+ Use early returns for edge cases where no processing is needed. Provide clear, descriptive error messages that include the expected vs. actual data types or shapes. This makes debugging much easier for users.
247
+
248
+ ### 6. **Device and Dtype Awareness**
249
+
250
+ Design your processors to automatically adapt to the device and dtype of input tensors. Internal tensors (like normalization statistics) should match the input tensor's device and dtype to ensure compatibility with multi-GPU training, mixed precision, and distributed setups.
251
+
252
+ Implement a `to()` method that moves your processor's internal state to the specified device. Check device/dtype compatibility at runtime and automatically migrate internal state when needed. This pattern enables seamless operation across different hardware configurations without manual intervention.
253
+
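+ Continuing the `NormalizerProcessorStep` example, here is one way such a `to()` method might migrate the stored statistics (a sketch, not the exact library implementation):
+
+ ```python
+ def to(self, device: str | torch.device | None = None, dtype: torch.dtype | None = None):
+     """Move internal tensor state so it matches the tensors it will process."""
+     for sub in self._tensor_stats.values():
+         for stat_name, tensor in sub.items():
+             sub[stat_name] = tensor.to(device=device, dtype=dtype)
+     return self  # allow chaining, like torch.nn.Module.to()
+ ```
+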
254
+ ## Conclusion
255
+
256
+ You now have all the tools to implement custom processors in LeRobot! The key steps are:
257
+
258
+ 1. **Define your processor** as a dataclass with the required methods (`__call__`, `get_config`, `state_dict`, `load_state_dict`, `reset`, `transform_features`)
259
+ 2. **Register it** using `@ProcessorStepRegistry.register("name")` for discoverability
260
+ 3. **Integrate it** into a `DataProcessorPipeline` with other processing steps
261
+ 4. **Use base classes** like `ObservationProcessorStep` when possible to reduce boilerplate
262
+ 5. **Implement device/dtype awareness** to support multi-GPU and mixed precision setups
263
+
264
+ The processor system is designed to be modular and composable, allowing you to build complex data processing pipelines from simple, focused components. Whether you're preprocessing sensor data for training or post-processing model outputs for robot execution, custom processors give you the flexibility to handle any data transformation your robotics application requires.
265
+
266
+ Key principles for robust processors:
267
+
268
+ - **Device/dtype adaptation**: Internal tensors should match input tensors
269
+ - **Clear error messages**: Help users understand what went wrong
270
+ - **Base class usage**: Leverage specialized base classes to reduce boilerplate
271
+ - **Feature contracts**: Declare data structure changes with `transform_features()`
272
+
273
+ Start simple, test thoroughly, and ensure your processors work seamlessly across different hardware configurations!
docs/source/index.mdx ADDED
@@ -0,0 +1,23 @@
1
+ <div class="flex justify-center">
2
+ <a target="_blank" href="https://huggingface.co/lerobot">
3
+ <img
4
+ alt="HuggingFace Expert Acceleration Program"
5
+ src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/lerobot/lerobot-logo-thumbnail.png"
6
+ style="width: 100%"
7
+ ></img>
8
+ </a>
9
+ </div>
10
+
11
+ # LeRobot
12
+
13
+ **State-of-the-art machine learning for real-world robotics**
14
+
15
+ 🤗 LeRobot aims to provide models, datasets, and tools for real-world robotics in PyTorch. The goal is to lower the barrier for entry to robotics so that everyone can contribute and benefit from sharing datasets and pretrained models.
16
+
17
+ 🤗 LeRobot contains state-of-the-art approaches that have been shown to transfer to the real-world with a focus on imitation learning and reinforcement learning.
18
+
19
+ 🤗 LeRobot already provides a set of pretrained models, datasets with human collected demonstrations, and simulated environments so that everyone can get started.
20
+
21
+ 🤗 LeRobot hosts pretrained models and datasets on the LeRobot HuggingFace page.
22
+
23
+ Join the LeRobot community on [Discord](https://discord.gg/s3KuuzsPFb)
docs/source/installation.mdx ADDED
@@ -0,0 +1,127 @@
1
+ # Installation
2
+
3
+ ## Install [`miniforge`](https://conda-forge.org/download/)
4
+
5
+ ```bash
6
+ wget "https://github.com/conda-forge/miniforge/releases/latest/download/Miniforge3-$(uname)-$(uname -m).sh"
7
+ bash Miniforge3-$(uname)-$(uname -m).sh
8
+ ```
9
+
10
+ ## Environment Setup
11
+
12
+ Create a virtual environment with Python 3.10, using conda:
13
+
14
+ ```bash
15
+ conda create -y -n lerobot python=3.10
16
+ ```
17
+
18
+ Then activate your conda environment (you have to do this each time you open a shell to use lerobot):
19
+
20
+ ```bash
21
+ conda activate lerobot
22
+ ```
23
+
24
+ When using `conda`, install `ffmpeg` in your environment:
25
+
26
+ ```bash
27
+ conda install ffmpeg -c conda-forge
28
+ ```
29
+
30
+ > [!TIP]
31
+ > This usually installs `ffmpeg 7.X` for your platform compiled with the `libsvtav1` encoder. If `libsvtav1` is not supported (check supported encoders with `ffmpeg -encoders`), you can:
32
+ >
33
+ > - _[On any platform]_ Explicitly install `ffmpeg 7.X` using:
34
+ >
35
+ > ```bash
36
+ > conda install ffmpeg=7.1.1 -c conda-forge
37
+ > ```
38
+ >
39
+ > - _[On Linux only]_ If you want to bring your own ffmpeg: Install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1), and make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
40
+
41
+ ## Install LeRobot 🤗
42
+
43
+ ### From Source
44
+
45
+ First, clone the repository and navigate into the directory:
46
+
47
+ ```bash
48
+ git clone https://github.com/huggingface/lerobot.git
49
+ cd lerobot
50
+ ```
51
+
52
+ Then, install the library in editable mode. This is useful if you plan to contribute to the code.
53
+
54
+ ```bash
55
+ pip install -e .
56
+ ```
57
+
58
+ ### Installation from PyPI
59
+
60
+ **Core Library:**
61
+ Install the base package with:
62
+
63
+ ```bash
64
+ pip install lerobot
65
+ ```
66
+
67
+ _This installs only the default dependencies._
68
+
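+ To quickly check that the package is visible in your environment, you can query its installed version (standard library only, so this makes no assumption about lerobot's API):
+
+ ```python
+ # Run inside the activated environment
+ from importlib.metadata import version
+
+ print(version("lerobot"))
+ ```
+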
69
+ **Extra Features:**
70
+ To install additional functionality, use one of the following:
71
+
72
+ ```bash
73
+ pip install 'lerobot[all]' # All available features
74
+ pip install 'lerobot[aloha,pusht]' # Specific features (Aloha & Pusht)
75
+ pip install 'lerobot[feetech]' # Feetech motor support
76
+ ```
77
+
78
+ _Replace `[...]` with your desired features._
79
+
80
+ **Available Tags:**
81
+ For a full list of optional dependencies, see:
82
+ https://pypi.org/project/lerobot/
83
+
84
+ > [!NOTE]
85
+ > For lerobot 0.4.0, if you want to install libero or pi, you will have to do: `pip install "lerobot[pi,libero]@git+https://github.com/huggingface/lerobot.git"`
86
+
87
+ ### Troubleshooting
88
+
89
+ If you encounter build errors, you may need to install additional dependencies: `cmake`, `build-essential`, and `ffmpeg libs`.
90
+ To install these on Linux, run:
91
+
92
+ ```bash
93
+ sudo apt-get install cmake build-essential python-dev pkg-config libavformat-dev libavcodec-dev libavdevice-dev libavutil-dev libswscale-dev libswresample-dev libavfilter-dev
94
+ ```
95
+
96
+ For other systems, see: [Compiling PyAV](https://pyav.org/docs/develop/overview/installation.html#bring-your-own-ffmpeg)
97
+
98
+ ## Optional dependencies
99
+
100
+ LeRobot provides optional extras for specific functionalities. Multiple extras can be combined (e.g., `.[aloha,feetech]`). For all available extras, refer to `pyproject.toml`.
101
+
102
+ ### Simulations
103
+
104
+ Install environment packages: `aloha` ([gym-aloha](https://github.com/huggingface/gym-aloha)), or `pusht` ([gym-pusht](https://github.com/huggingface/gym-pusht))
105
+ Example:
106
+
107
+ ```bash
108
+ pip install -e ".[aloha]" # or "[pusht]" for example
109
+ ```
110
+
111
+ ### Motor Control
112
+
113
+ For Koch v1.1, install the Dynamixel SDK; for SO100/SO101/Moss, install the Feetech SDK.
114
+
115
+ ```bash
116
+ pip install -e ".[feetech]" # or "[dynamixel]" for example
117
+ ```
118
+
119
+ ### Experiment Tracking
120
+
121
+ To use [Weights and Biases](https://docs.wandb.ai/quickstart) for experiment tracking, log in with
122
+
123
+ ```bash
124
+ wandb login
125
+ ```
126
+
127
+ You can now assemble your robot if it's not ready yet: look for your robot type on the left, then follow the link below to use LeRobot with your robot.
docs/source/integrate_hardware.mdx ADDED
1
+ # Bring Your Own Hardware
2
+
3
+ This tutorial will explain how to integrate your own robot design into the LeRobot ecosystem and have it access all of our tools (data collection, control pipelines, policy training and inference).
4
+
5
+ To that end, we provide the [`Robot`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/robot.py) base class in LeRobot, which specifies a standard interface for physical robot integration. Let's see how to implement it.
6
+
7
+ ## Prerequisites
8
+
9
+ - Your own robot which exposes a communication interface (e.g. serial, CAN, TCP)
10
+ - A way to read sensor data and send motor commands programmatically, e.g. manufacturer's SDK or API, or your own protocol implementation.
11
+ - LeRobot installed in your environment. Follow our [Installation Guide](./installation).
12
+
13
+ ## Choose your motors
14
+
15
+ If you're using Feetech or Dynamixel motors, LeRobot provides built-in bus interfaces:
16
+
17
+ - [`FeetechMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/feetech.py) – for controlling Feetech servos
18
+ - [`DynamixelMotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/dynamixel.py) – for controlling Dynamixel servos
19
+
20
+ Please refer to the [`MotorsBus`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/motors_bus.py) abstract class to learn about its API.
21
+ For a good example of how it can be used, you can have a look at our own [SO101 follower implementation](https://github.com/huggingface/lerobot/blob/main/src/lerobot/robots/so101_follower/so101_follower.py).
22
+
23
+ Use these if compatible. Otherwise, you'll need to find or write a Python interface (not covered in this tutorial):
24
+
25
+ - Find an existing SDK in Python (or use bindings to C/C++)
26
+ - Or implement a basic communication wrapper (e.g., via pyserial, socket, or CANopen)
27
+
28
+ You're not alone—many community contributions use custom boards or firmware!
29
+
30
+ For Feetech and Dynamixel, we currently support these servos:
+
+ - Feetech:
+   - STS & SMS series (protocol 0): `sts3215`, `sts3250`, `sm8512bl`
+   - SCS series (protocol 1): `scs0009`
+ - Dynamixel (protocol 2.0 only): `xl330-m077`, `xl330-m288`, `xl430-w250`, `xm430-w350`, `xm540-w270`, `xc430-w150`
31
+
32
+ If you are using Feetech or Dynamixel servos that are not in this list, you can add those in the [Feetech table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/feetech/tables.py) or [Dynamixel table](https://github.com/huggingface/lerobot/blob/main/src/lerobot/motors/dynamixel/tables.py). Depending on the model, this will require you to add model-specific information. In most cases though, there shouldn't be a lot of additions to do.
33
+
34
+ In the next sections, we'll use a `FeetechMotorsBus` as the motors interface for the examples. Replace it and adapt to your motors if necessary.
35
+
36
+ ## Step 1: Subclass the `Robot` Interface
37
+
38
+ You’ll first need to specify the config class and a string identifier (`name`) for your robot. Any setting you'd like to be able to change easily (e.g. port/address, baudrate) should go in this config.
39
+
40
+ Here, we'll add the port name and one camera by default for our robot:
41
+
42
+ <!-- prettier-ignore-start -->
43
+ ```python
44
+ from dataclasses import dataclass, field
45
+
46
+ from lerobot.cameras import CameraConfig
47
+ from lerobot.cameras.opencv import OpenCVCameraConfig
48
+ from lerobot.robots import RobotConfig
49
+
50
+
51
+ @RobotConfig.register_subclass("my_cool_robot")
52
+ @dataclass
53
+ class MyCoolRobotConfig(RobotConfig):
54
+     port: str
55
+     cameras: dict[str, CameraConfig] = field(
56
+         default_factory=lambda: {
57
+             "cam_1": OpenCVCameraConfig(
58
+                 index_or_path=2,
59
+                 fps=30,
60
+                 width=480,
61
+                 height=640,
62
+             ),
63
+         }
64
+     )
65
+ ```
66
+ <!-- prettier-ignore-end -->
67
+
68
+ See the [Cameras tutorial](./cameras) to understand how to detect and add your camera.
69
+
70
+ Next, we'll create our actual robot class which inherits from `Robot`. This abstract class defines a contract you must follow for your robot to be usable with the rest of the LeRobot tools.
71
+
72
+ Here we'll create a simple 5-DoF robot with one camera. It could be a simple arm, but notice that the `Robot` abstract class does not assume anything about your robot's form factor. You can let your imagination run wild when designing new robots!
73
+
74
+ <!-- prettier-ignore-start -->
75
+ ```python
76
+ from lerobot.cameras import make_cameras_from_configs
77
+ from lerobot.motors import Motor, MotorNormMode
78
+ from lerobot.motors.feetech import FeetechMotorsBus
79
+ from lerobot.robots import Robot
80
+
81
+ class MyCoolRobot(Robot):
82
+ config_class = MyCoolRobotConfig
83
+ name = "my_cool_robot"
84
+
85
+ def __init__(self, config: MyCoolRobotConfig):
86
+ super().__init__(config)
87
+ self.bus = FeetechMotorsBus(
88
+ port=self.config.port,
89
+ motors={
90
+ "joint_1": Motor(1, "sts3250", MotorNormMode.RANGE_M100_100),
91
+ "joint_2": Motor(2, "sts3215", MotorNormMode.RANGE_M100_100),
92
+ "joint_3": Motor(3, "sts3215", MotorNormMode.RANGE_M100_100),
93
+ "joint_4": Motor(4, "sts3215", MotorNormMode.RANGE_M100_100),
94
+ "joint_5": Motor(5, "sts3215", MotorNormMode.RANGE_M100_100),
95
+ },
96
+ calibration=self.calibration,
97
+ )
98
+ self.cameras = make_cameras_from_configs(config.cameras)
99
+ ```
100
+ <!-- prettier-ignore-end -->
101
+
102
+ ## Step 2: Define Observation and Action Features
103
+
104
+ These two properties define the _interface contract_ between your robot and tools that consume it (such as data collection or learning pipelines).
105
+
106
+ > [!WARNING]
107
+ > Note that these properties must be callable even if the robot is not yet connected, so avoid relying on runtime hardware state to define them.
108
+
109
+ ### `observation_features`
110
+
111
+ This property should return a dictionary describing the structure of sensor outputs from your robot. The keys match what `get_observation()` returns, and the values describe either the shape (for arrays/images) or the type (for simple values).
112
+
113
+ Example for our 5-DoF arm with one camera:
114
+
115
+ <!-- prettier-ignore-start -->
116
+ ```python
117
+ @property
118
+ def _motors_ft(self) -> dict[str, type]:
119
+     return {
120
+         "joint_1.pos": float,
121
+         "joint_2.pos": float,
122
+         "joint_3.pos": float,
123
+         "joint_4.pos": float,
124
+         "joint_5.pos": float,
125
+     }
126
+
127
+ @property
128
+ def _cameras_ft(self) -> dict[str, tuple]:
129
+     return {
130
+         cam: (self.cameras[cam].height, self.cameras[cam].width, 3) for cam in self.cameras
131
+     }
132
+
133
+ @property
134
+ def observation_features(self) -> dict:
135
+     return {**self._motors_ft, **self._cameras_ft}
136
+ ```
137
+ <!-- prettier-ignore-end -->
138
+
139
+ In this case, observations consist of a simple dict storing each motor's position and a camera image.
140
+
141
+ ### `action_features`
142
+
143
+ This property describes the commands your robot expects via `send_action()`. Again, keys must match the expected input format, and values define the shape/type of each command.
144
+
145
+ Here, we simply reuse the same proprioceptive joint features (`self._motors_ft`) as in `observation_features`: the action sent will simply be the goal position for each motor.
146
+
147
+ <!-- prettier-ignore-start -->
148
+ ```python
149
+ @property
+ def action_features(self) -> dict:
150
+     return self._motors_ft
151
+ ```
152
+ <!-- prettier-ignore-end -->
153
+
154
+ ## Step 3: Handle Connection and Disconnection
155
+
156
+ These methods should handle opening and closing communication with your hardware (e.g. serial ports, CAN interfaces, USB devices, cameras).
157
+
158
+ ### `is_connected`
159
+
160
+ This property should simply reflect that communication with the robot's hardware is established. When this property is `True`, it should be possible to read and write to the hardware using `get_observation()` and `send_action()`.
161
+
162
+ <!-- prettier-ignore-start -->
163
+ ```python
164
+ @property
165
+ def is_connected(self) -> bool:
166
+     return self.bus.is_connected and all(cam.is_connected for cam in self.cameras.values())
167
+ ```
168
+ <!-- prettier-ignore-end -->
169
+
170
+ ### `connect()`
171
+
172
+ This method should establish communication with the hardware. Moreover, if your robot needs calibration and is not calibrated, it should start a calibration procedure by default. If your robot needs some specific configuration, it should also be applied here.
173
+
174
+ <!-- prettier-ignore-start -->
175
+ ```python
176
+ def connect(self, calibrate: bool = True) -> None:
177
+     self.bus.connect()
178
+     if not self.is_calibrated and calibrate:
179
+         self.calibrate()
180
+
181
+     for cam in self.cameras.values():
182
+         cam.connect()
183
+
184
+     self.configure()
185
+ ```
186
+ <!-- prettier-ignore-end -->
187
+
188
+ ### `disconnect()`
189
+
190
+ This method should gracefully terminate communication with the hardware: free any related resources (threads or processes), close ports, etc.
191
+
192
+ Here, we already handle this in our `MotorsBus` and `Camera` classes so we just need to call their own `disconnect()` methods:
193
+
194
+ <!-- prettier-ignore-start -->
195
+ ```python
196
+ def disconnect(self) -> None:
197
+     self.bus.disconnect()
198
+     for cam in self.cameras.values():
199
+         cam.disconnect()
200
+ ```
201
+ <!-- prettier-ignore-end -->
202
+
203
+ ## Step 4: Support Calibration and Configuration
204
+
205
+ LeRobot supports saving and loading calibration data automatically. This is useful for joint offsets, zero positions, or sensor alignment.
206
+
207
+ > Note that depending on your hardware, this may not apply. If that's the case, you can simply leave these methods as no-ops:
208
+
209
+ <!-- prettier-ignore-start -->
210
+ ```python
211
+ @property
212
+ def is_calibrated(self) -> bool:
213
+     return True
214
+
215
+ def calibrate(self) -> None:
216
+     pass
217
+ ```
218
+ <!-- prettier-ignore-end -->
219
+
220
+ ### `is_calibrated`
221
+
222
+ This should reflect whether your robot has the required calibration loaded.
223
+
224
+ <!-- prettier-ignore-start -->
225
+ ```python
226
+ @property
227
+ def is_calibrated(self) -> bool:
228
+     return self.bus.is_calibrated
229
+ ```
230
+ <!-- prettier-ignore-end -->
231
+
232
+ ### `calibrate()`
233
+
234
+ The goal of the calibration is twofold:
235
+
236
+ - Know the physical range of motion of each motor, in order to only send commands within this range.
237
+ - Normalize raw motor positions to sensible continuous values (e.g. percentages, degrees) instead of arbitrary discrete values that depend on the specific motor used and will not replicate elsewhere.
238
+
239
+ It should implement the logic for calibration (if relevant) and update the `self.calibration` dictionary. If you are using Feetech or Dynamixel motors, our bus interfaces already include methods to help with this.
240
+
241
+ <!-- prettier-ignore-start -->
242
+ ```python
243
+ # Assumed imports at the top of your module (check the SO101 follower implementation for reference):
+ # from lerobot.motors import MotorCalibration
+ # from lerobot.motors.feetech import OperatingMode
+ def calibrate(self) -> None:
244
+     self.bus.disable_torque()
245
+     for motor in self.bus.motors:
246
+         self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
247
+
248
+     input(f"Move {self} to the middle of its range of motion and press ENTER....")
249
+     homing_offsets = self.bus.set_half_turn_homings()
250
+
251
+     print(
252
+         "Move all joints sequentially through their entire ranges "
253
+         "of motion.\nRecording positions. Press ENTER to stop..."
254
+     )
255
+     range_mins, range_maxes = self.bus.record_ranges_of_motion()
256
+
257
+     self.calibration = {}
258
+     for motor, m in self.bus.motors.items():
259
+         self.calibration[motor] = MotorCalibration(
260
+             id=m.id,
261
+             drive_mode=0,
262
+             homing_offset=homing_offsets[motor],
263
+             range_min=range_mins[motor],
264
+             range_max=range_maxes[motor],
265
+         )
266
+
267
+     self.bus.write_calibration(self.calibration)
268
+     self._save_calibration()
269
+     print("Calibration saved to", self.calibration_fpath)
270
+ ```
271
+ <!-- prettier-ignore-end -->
272
+
273
+ ### `configure()`
274
+
275
+ Use this to set up any configuration for your hardware (servos control modes, controller gains, etc.). This should usually be run at connection time and be idempotent.
276
+
277
+ <!-- prettier-ignore-start -->
278
+ ```python
279
+ def configure(self) -> None:
280
+     with self.bus.torque_disabled():
281
+         self.bus.configure_motors()
282
+         for motor in self.bus.motors:
283
+             self.bus.write("Operating_Mode", motor, OperatingMode.POSITION.value)
284
+             self.bus.write("P_Coefficient", motor, 16)
285
+             self.bus.write("I_Coefficient", motor, 0)
286
+             self.bus.write("D_Coefficient", motor, 32)
287
+ ```
288
+ <!-- prettier-ignore-end -->
289
+
290
+ ## Step 5: Implement Sensors Reading and Action Sending
291
+
292
+ These are the most important runtime functions: the core I/O loop.
293
+
294
+ ### `get_observation()`
295
+
296
+ Returns a dictionary of sensor values from the robot. These typically include motor states, camera frames, various sensors, etc. In the LeRobot framework, these observations are what will be fed to a policy in order to predict the actions to take. The dictionary keys and structure must match `observation_features`.
297
+
298
+ <!-- prettier-ignore-start -->
299
+ ```python
300
+ def get_observation(self) -> dict[str, Any]:
301
+     if not self.is_connected:
302
+         raise ConnectionError(f"{self} is not connected.")
303
+
304
+     # Read arm position
305
+     obs_dict = self.bus.sync_read("Present_Position")
306
+     obs_dict = {f"{motor}.pos": val for motor, val in obs_dict.items()}
307
+
308
+     # Capture images from cameras
309
+     for cam_key, cam in self.cameras.items():
310
+         obs_dict[cam_key] = cam.async_read()
311
+
312
+     return obs_dict
313
+ ```
314
+ <!-- prettier-ignore-end -->
315
+
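+ Assuming the robot is connected, the returned dict then matches `observation_features` (the values below are illustrative):
+
+ <!-- prettier-ignore-start -->
+ ```python
+ obs = robot.get_observation()
+ obs["joint_1.pos"]  # e.g. 12.5, a normalized position as a float
+ obs["cam_1"].shape  # e.g. (640, 480, 3), i.e. (height, width, channels) per the config above
+ ```
+ <!-- prettier-ignore-end -->
+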
316
+ ### `send_action()`
317
+
318
+ Takes a dictionary that matches `action_features`, and sends it to your hardware. You can add safety limits (clipping, smoothing) and return what was actually sent.
319
+
320
+ For simplicity, we won't modify the actions in our example here (a clipping variant is sketched after the code block).
321
+
322
+ <!-- prettier-ignore-start -->
323
+ ```python
324
+ def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
325
+     goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}
326
+
327
+     # Send goal position to the arm
328
+     self.bus.sync_write("Goal_Position", goal_pos)
329
+
330
+     return action
331
+ ```
332
+ <!-- prettier-ignore-end -->
333
+
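+ Since the example robot normalizes positions to [-100, 100] (`MotorNormMode.RANGE_M100_100`), a variant with simple clipping could look like this (a sketch, not a prescribed safety mechanism):
+
+ <!-- prettier-ignore-start -->
+ ```python
+ def send_action(self, action: dict[str, Any]) -> dict[str, Any]:
+     goal_pos = {key.removesuffix(".pos"): val for key, val in action.items()}
+
+     # Clamp each goal to the normalized range before writing it to the bus
+     goal_pos = {motor: min(max(val, -100.0), 100.0) for motor, val in goal_pos.items()}
+
+     self.bus.sync_write("Goal_Position", goal_pos)
+
+     # Return what was actually sent, in the same format as the input action
+     return {f"{motor}.pos": val for motor, val in goal_pos.items()}
+ ```
+ <!-- prettier-ignore-end -->
+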
334
+ ## Adding a Teleoperator
335
+
336
+ For implementing teleoperation devices, we also provide a [`Teleoperator`](https://github.com/huggingface/lerobot/blob/main/src/lerobot/teleoperators/teleoperator.py) base class. This class is very similar to the `Robot` base class and also doesn't assume anything on form factor.
337
+
338
+ The main differences are in the I/O functions: a teleoperator allows you to produce action via `get_action` and can receive feedback actions via `send_feedback`. Feedback could be anything controllable on the teleoperation device that could help the person controlling it understand the consequences of the actions sent. Think motion/force feedback on a leader arm, vibrations on a gamepad controller for example. To implement a teleoperator, you can follow this same tutorial and adapt it for these two methods.
339
+
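+ As a rough sketch (only `get_action` and `send_feedback` come from the base class; the device-reading helpers are hypothetical):
+
+ <!-- prettier-ignore-start -->
+ ```python
+ from typing import Any
+
+ from lerobot.teleoperators.teleoperator import Teleoperator
+
+
+ class MyGamepadTeleop(Teleoperator):
+     def get_action(self) -> dict[str, Any]:
+         # Map device inputs to the robot's action space (key names are illustrative)
+         x, y = self._read_joystick()  # hypothetical helper reading the hardware
+         return {"joint_1.pos": 100.0 * x, "joint_2.pos": 100.0 * y}
+
+     def send_feedback(self, feedback: dict[str, Any]) -> None:
+         # e.g. vibrate proportionally to a force measured on the robot (hypothetical)
+         self._set_rumble(feedback.get("force", 0.0))
+ ```
+ <!-- prettier-ignore-end -->
+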
340
+ ## Using Your Own `LeRobot` Devices 🔌
341
+
342
+ You can easily extend `lerobot` with your own custom hardware—be it a camera, robot, or teleoperation device—by creating a separate, installable Python package. If you follow a few simple conventions, the `lerobot` command-line tools (like `lerobot-teleop` and `lerobot-record`) will **automatically discover and integrate your creations** without requiring any changes to the `lerobot` source code.
343
+
344
+ This guide outlines the conventions your plugin must follow.
345
+
346
+ ### The 4 Core Conventions
347
+
348
+ To ensure your custom device is discoverable, you must adhere to the following four rules.
349
+
350
+ #### 1\. Create an Installable Package with a Specific Prefix
351
+
352
+ Your project must be a standard, installable Python package. Crucially, the name of your package (as defined in `pyproject.toml` or `setup.py`) must begin with one of these prefixes:
353
+
354
+ - `lerobot_robot_` for a robot.
355
+ - `lerobot_camera_` for a camera.
356
+ - `lerobot_teleoperator_` for a teleoperation device.
357
+
358
+ This prefix system is how `lerobot` automatically finds your plugin in the Python environment.
359
+
360
+ #### 2\. Follow the `SomethingConfig`/`Something` Naming Pattern
361
+
362
+ Your device's implementation class must be named after its configuration class, simply by removing the `Config` suffix.
363
+
364
+ - **Config Class:** `MyAwesomeTeleopConfig`
365
+ - **Device Class:** `MyAwesomeTeleop`
366
+
367
+ #### 3\. Place Your Files in a Predictable Structure
368
+
369
+ The device class (`MyAwesomeTeleop`) must be located in a predictable module relative to its configuration class (`MyAwesomeTeleopConfig`). `lerobot` will automatically search in these locations:
370
+
371
+ - In the **same module** as the config class.
372
+ - In a **submodule named after the device** (e.g., `my_awesome_teleop.py`).
373
+
374
+ The recommended and simplest structure is to place them in separate, clearly named files within the same directory.
375
+
376
+ #### 4\. Expose Classes in `__init__.py`
377
+
378
+ Your package's `__init__.py` file should import and expose both the configuration and the device classes, making them easily accessible.
379
+
380
+ ### Putting It All Together: A Complete Example
381
+
382
+ Let's create a new teleoperator called `my_awesome_teleop`.
383
+
384
+ #### Directory Structure
385
+
386
+ Here is what the project folder should look like. The package name, `lerobot_teleoperator_my_awesome_teleop`, follows **Convention \#1**.
387
+
388
+ ```
389
+ lerobot_teleoperator_my_awesome_teleop/
390
+ ├── pyproject.toml # (or setup.py) lists lerobot as a dependency
391
+ └── lerobot_teleoperator_my_awesome_teleop/
392
+ ├── __init__.py
393
+ ├── config_my_awesome_teleop.py
394
+ └── my_awesome_teleop.py
395
+ ```
396
+
397
+ #### File Contents
398
+
399
+ - **`config_my_awesome_teleop.py`**: Defines the configuration class. Note the `Config` suffix (**Convention \#2**).
400
+
401
+ ```python
402
+ from dataclasses import dataclass
403
+
404
+ from lerobot.teleoperators.config import TeleoperatorConfig
405
+
406
+ @TeleoperatorConfig.register_subclass("my_awesome_teleop")
407
+ @dataclass
408
+ class MyAwesomeTeleopConfig(TeleoperatorConfig):
409
+     # Your configuration fields go here
410
+     port: str = "192.168.1.1"
411
+ ```
412
+
413
+ - **`my_awesome_teleop.py`**: Implements the device. The class name `MyAwesomeTeleop` matches its config class name (**Convention \#2**). This file structure adheres to **Convention \#3**.
414
+
415
+ ```python
416
+ from lerobot.teleoperators.teleoperator import Teleoperator
417
+
418
+ from .config_my_awesome_teleop import MyAwesomeTeleopConfig
419
+
420
+ class MyAwesomeTeleop(Teleoperator):
421
+     config_class = MyAwesomeTeleopConfig
422
+     name = "my_awesome_teleop"
423
+
424
+     def __init__(self, config: MyAwesomeTeleopConfig):
425
+         super().__init__(config)
426
+         self.config = config
427
+
428
+     # Your device logic (e.g., connect) goes here
429
+ ```
430
+
431
+ - **`__init__.py`**: Exposes the key classes (**Convention \#4**).
432
+
433
+ ```python
434
+ from .config_my_awesome_teleop import MyAwesomeTeleopConfig
435
+ from .my_awesome_teleop import MyAwesomeTeleop
436
+ ```
437
+
438
+ ### Installation and Usage
439
+
440
+ 1. **Install your new plugin in your Python environment.** You can install your local plugin package using `pip`'s editable mode, or install it from PyPI.
441
+
442
+ ```bash
443
+ # Locally
444
+ # Navigate to your plugin's root directory and install it
445
+ cd lerobot_teleoperator_my_awesome_teleop
446
+ pip install -e .
447
+
448
+ # From PyPI
449
+ pip install lerobot_teleoperator_my_awesome_teleop
450
+ ```
451
+
452
+ 2. **Use it directly from the command line.** Now, you can use your custom device by referencing its type.
453
+
454
+ ```bash
455
+ lerobot-teleoperate --teleop.type=my_awesome_teleop \
456
+ # other arguments
457
+ ```
458
+
459
+ And that's it\! Your custom device is now fully integrated.
460
+
461
+ ### Looking for an example?
462
+
463
+ Check out these two packages from the community:
464
+
465
+ - https://github.com/SpesRobotics/lerobot-robot-xarm
466
+ - https://github.com/SpesRobotics/lerobot-teleoperator-teleop
467
+
468
+ ## Wrapping Up
469
+
470
+ Once your robot class is complete, you can leverage the LeRobot ecosystem:
471
+
472
+ - Control your robot with the available teleoperators, or directly integrate your own teleoperation device
473
+ - Record training data and visualize it
474
+ - Integrate it into RL or imitation learning pipelines
475
+
476
+ Don't hesitate to reach out to the community for help on our [Discord](https://discord.gg/s3KuuzsPFb) 🤗
docs/source/introduction_processors.mdx ADDED
@@ -0,0 +1,314 @@
1
+ # Introduction to Processors
2
+
3
+ In robotics, there's a fundamental mismatch between the data that robots and humans produce and what machine learning models expect.
4
+ Robots output raw sensor data like camera images and joint positions that need normalization, batching, and device placement before models can process them.
5
+ Language instructions from humans must be tokenized into numerical representations, and different robots use different coordinate systems that need standardization.
6
+
7
+ The challenge extends to model outputs as well.
8
+ Models might output end-effector positions while robots need joint-space commands, or teleoperators produce relative movements while robots expect absolute commands.
9
+ Model predictions are often normalized and need conversion back to real-world scales.
10
+
11
+ Cross-domain translation adds another layer of complexity.
12
+ Training data from one robot setup needs adaptation for deployment on different hardware, models trained with specific camera configurations must work with new arrangements, and datasets with different naming conventions need harmonization.
13
+
14
+ **That's where processors come in.** They serve as universal translators that bridge these gaps, ensuring seamless data flow from sensors to models to actuators.
15
+ Processors handle all the preprocessing and postprocessing steps needed to convert raw environment data into model-ready inputs and vice versa.
16
+
17
+ This means that your favorite policy can be used like this:
18
+
19
+ ```python
20
+ import torch
21
+
22
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
23
+ from lerobot.policies.factory import make_pre_post_processors
24
+ from lerobot.policies.your_policy import YourPolicy
25
+
26
+ dataset = LeRobotDataset("hf_user/dataset", episodes=[0])
27
+ sample = dataset[10]
28
+
29
+ model = YourPolicy.from_pretrained(
30
+ "hf_user/model",
31
+ )
32
+ model.eval()
33
+ model.to("cuda")
34
+ preprocessor, postprocessor = make_pre_post_processors(model.config, pretrained_path="hf_user/model", dataset_stats=dataset.meta.stats)
35
+
36
+ preprocessed_sample = preprocessor(sample)
37
+ action = model.select_action(preprocessed_sample)
38
+ postprocessed_action = postprocessor(action)
39
+ ```
40
+
41
+ ## What are Processors?
42
+
43
+ In robotics, data comes in many forms: images from cameras, joint positions from sensors, text instructions from users, and more. Each type of data requires specific transformations before a model can use it effectively. Models need this data to be:
44
+
45
+ - **Normalized**: Scaled to appropriate ranges for neural network processing
46
+ - **Batched**: Organized with proper dimensions for batch processing
47
+ - **Tokenized**: Text converted to numerical representations
48
+ - **Device-placed**: Moved to the right hardware (CPU/GPU)
49
+ - **Type-converted**: Cast to appropriate data types
50
+
51
+ Processors handle these transformations through composable, reusable steps that can be chained together into pipelines. Think of them as a modular assembly line where each station performs a specific transformation on your data.
52
+
53
+ ## Core Concepts
54
+
55
+ ### EnvTransition: The Universal Data Container
56
+
57
+ The `EnvTransition` is the fundamental data structure that flows through all processors.
58
+ It's a typed dictionary that represents a complete robot-environment interaction:
59
+
60
+ - **OBSERVATION**: All sensor data (images, states, proprioception)
61
+ - **ACTION**: The action to execute or that was executed
62
+ - **REWARD**: Reinforcement learning signal
63
+ - **DONE/TRUNCATED**: Episode boundary indicators
64
+ - **INFO**: Arbitrary metadata
65
+ - **COMPLEMENTARY_DATA**: Task descriptions, indices, padding flags, inter-step data
66
+
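+ For example, a transition can be built with the `create_transition` helper from `lerobot.processor.converters` (introduced below); the keyword names here are assumptions, and the remaining fields fall back to defaults:
+
+ ```python
+ from lerobot.processor.converters import create_transition
+
+ # Build a transition holding a single observation/action pair;
+ # reward, done, info, etc. are filled with default values.
+ transition = create_transition(
+     observation={"observation.state": [0.5, -0.3]},
+     action={"joint_1.pos": 0.2},
+ )
+ ```
+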
67
+ ### ProcessorStep: The Building Block
68
+
69
+ A `ProcessorStep` is a single transformation unit that processes transitions. It's an abstract base class with two required methods:
70
+
71
+ ```python
72
+ from lerobot.processor import ProcessorStep, EnvTransition
73
+
74
+ class MyProcessorStep(ProcessorStep):
75
+ """Example processor step - inherit and implement abstract methods."""
76
+
77
+ def __call__(self, transition: EnvTransition) -> EnvTransition:
78
+ """Transform the transition - REQUIRED abstract method."""
79
+ # Your processing logic here
80
+ return transition
81
+
82
+ def transform_features(self, features):
83
+ """Declare how this step transforms feature shapes/types - REQUIRED abstract method."""
84
+ return features # Most processors return features unchanged
85
+ ```
86
+
87
+ `__call__` is the core of your processor step. It takes an `EnvTransition` and returns a modified `EnvTransition`.
88
+
89
+ `transform_features` is used to declare how this step transforms feature shapes/types.
90
+
91
+ ### DataProcessorPipeline: The Generic Orchestrator
92
+
93
+ The `DataProcessorPipeline[TInput, TOutput]` chains multiple `ProcessorStep` instances together:
94
+
95
+ ```python
96
+ from lerobot.processor import RobotProcessorPipeline, PolicyProcessorPipeline
97
+
98
+ # For robot hardware (unbatched data)
99
+ robot_processor = RobotProcessorPipeline[RobotAction, RobotAction](
100
+     steps=[step1, step2, step3],
101
+     name="robot_pipeline"
102
+ )
103
+
104
+ # For model training/inference (batched data)
105
+ policy_processor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
106
+     steps=[step1, step2, step3],
107
+     name="policy_pipeline"
108
+ )
109
+ ```
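+
+ Once constructed, a pipeline is called like a function and applies each step in order, e.g. `processed_action = robot_processor(raw_action)`.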
110
+
111
+ ## RobotProcessorPipeline vs PolicyProcessorPipeline
112
+
113
+ The key distinction is in the data structures they handle:
114
+
115
+ | Aspect | RobotProcessorPipeline | PolicyProcessorPipeline |
116
+ | --------------- | -------------------------------------------- | ---------------------------------------- |
117
+ | **Input** | `dict[str, Any]` - Individual robot values | `dict[str, Any]` - Batched tensors |
118
+ | **Output** | `dict[str, Any]` - Individual robot commands | `torch.Tensor` - Policy predictions |
119
+ | **Use Case** | Real-time robot control | Model training/inference |
120
+ | **Data Format** | Unbatched, heterogeneous | Batched, homogeneous |
121
+ | **Examples** | `{"joint_1": 0.5}` | `{"observation.state": tensor([[0.5]])}` |
122
+
123
+ **Use `RobotProcessorPipeline`** for robot hardware interfaces:
124
+
125
+ ```python
126
+ # Robot data structures: dict[str, Any] for observations and actions
127
+ robot_obs: dict[str, Any] = {
128
+ "joint_1": 0.5, # Individual joint values
129
+ "joint_2": -0.3,
130
+ "camera_0": image_array # Raw camera data
131
+ }
132
+
133
+ robot_action: dict[str, Any] = {
134
+ "joint_1": 0.2, # Target joint positions
135
+ "joint_2": 0.1,
136
+ "gripper": 0.8
137
+ }
138
+ ```
139
+
140
+ **Use `PolicyProcessorPipeline`** for model training and batch processing:
141
+
142
+ ```python
143
+ # Policy data structures: batch dicts and tensors
144
+ policy_batch: dict[str, Any] = {
145
+ "observation.state": torch.tensor([[0.5, -0.3]]), # Batched states
146
+ "observation.images.camera0": torch.tensor(...), # Batched images
147
+ "action": torch.tensor([[0.2, 0.1, 0.8]]) # Batched actions
148
+ }
149
+
150
+ policy_action: torch.Tensor = torch.tensor([[0.2, 0.1, 0.8]]) # Model output tensor
151
+ ```
152
+
153
+ ## Converter Functions
154
+
155
+ LeRobot provides converter functions to bridge different data formats in `lerobot.processor.converters`. These functions handle the crucial translations between robot hardware data structures, policy model formats, and the internal `EnvTransition` representation that flows through processor pipelines.
156
+
157
+ | Category | Function | Description |
158
+ | ------------------------------ | ----------------------------- | ------------------------------- |
159
+ | **Robot Hardware Converters** | `robot_action_to_transition` | Robot dict → EnvTransition |
160
+ | | `observation_to_transition` | Robot obs → EnvTransition |
161
+ | | `transition_to_robot_action` | EnvTransition → Robot dict |
162
+ | **Policy/Training Converters** | `batch_to_transition` | Batch dict → EnvTransition |
163
+ | | `transition_to_batch` | EnvTransition → Batch dict |
164
+ | | `policy_action_to_transition` | Policy tensor → EnvTransition |
165
+ | | `transition_to_policy_action` | EnvTransition → Policy tensor |
166
+ | **Utilities** | `create_transition` | Build transitions with defaults |
167
+ | | `identity_transition` | Pass-through converter |
168
+
169
+ The key insight is that **robot hardware converters** work with individual values and dictionaries, while **policy/training converters** work with batched tensors and model outputs. The converter functions automatically handle the structural differences, so your processor steps can focus on the core transformations without worrying about data format compatibility.
170
+
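+ As a rough sketch of how converters plug into a pipeline (assuming `RobotProcessorPipeline` accepts the same `to_transition`/`to_output` arguments used in the postprocessor example below):
+
+ ```python
+ from lerobot.processor import RobotProcessorPipeline
+ from lerobot.processor.converters import (
+     robot_action_to_transition,
+     transition_to_robot_action,
+ )
+
+ # A robot-side action pipeline: robot action dict in, robot action dict out
+ pipeline = RobotProcessorPipeline[dict, dict](
+     steps=[],  # your ProcessorStep instances go here (e.g. clamping or remapping actions)
+     to_transition=robot_action_to_transition,  # robot action dict -> EnvTransition
+     to_output=transition_to_robot_action,      # EnvTransition -> robot action dict
+ )
+ ```
+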
171
+ ## Processor Examples
172
+
173
+ The following examples demonstrate real-world processor configurations.
174
+
175
+ Here is a preprocessor/postprocessor pair for policy training and inference; note that data is moved to the GPU before it is normalized, so normalization runs on the GPU:
176
+
177
+ ```python
178
+ # Training data preprocessing (optimized order for GPU performance)
179
+ training_preprocessor = PolicyProcessorPipeline[dict[str, Any], dict[str, Any]](
180
+     steps=[
181
+         RenameObservationsProcessorStep(rename_map={}),    # Standardize keys
182
+         AddBatchDimensionProcessorStep(),                  # Add batch dims
183
+         TokenizerProcessorStep(tokenizer_name="..."),      # Tokenize language
184
+         DeviceProcessorStep(device="cuda"),                # Move to GPU first
185
+         NormalizerProcessorStep(features=..., stats=...),  # Normalize on GPU
186
+     ]
187
+ )
188
+
189
+ # Model output postprocessing
190
+ training_postprocessor = PolicyProcessorPipeline[torch.Tensor, torch.Tensor](
191
+     steps=[
192
+         DeviceProcessorStep(device="cpu"),                   # Move to CPU
193
+         UnnormalizerProcessorStep(features=..., stats=...),  # Denormalize
194
+     ],
195
+     to_transition=policy_action_to_transition,
196
+     to_output=transition_to_policy_action,
197
+ )
198
+ ```
199
+
200
+ ### Connecting a robot and a policy with processors
201
+
202
+ The most common real-world scenario combines both pipeline types. Robot hardware generates observations that need policy preprocessing, and policy outputs need robot-compatible postprocessing:
203
+
204
+ ```python
205
+ # Real deployment: Robot sensors → Model → Robot commands
206
+ with torch.no_grad():
207
+     while not done:
208
+         raw_obs = robot.get_observation()  # dict[str, Any]
209
+
210
+         # Apply your robot-observation → policy-observation processor here
211
+
212
+         policy_input = policy_preprocessor(raw_obs)  # Batched dict
213
+
214
+         policy_output = policy.select_action(policy_input)  # Policy tensor
215
+
216
+         policy_action = policy_postprocessor(policy_output)
217
+
218
+         # Apply your policy-action → robot-action processor here
219
+
220
+         robot.send_action(policy_action)
221
+ ```
222
+
223
+ ## Feature Contracts: Shape and Type Transformation
224
+
225
+ Processors don't just transform data - they can also **change the data structure itself**. The `transform_features()` method declares these changes, which is crucial for dataset recording and policy creation.
226
+
227
+ ### Why Feature Contracts Matter
228
+
229
+ When building datasets or policies, LeRobot needs to know:
230
+
231
+ - **What data fields will exist** after processing
232
+ - **What shapes and types** each field will have
233
+ - **How to configure models** for the expected data structure
234
+
235
+ ```python
236
+ # Example: A processor that adds velocity to observations
237
+ class VelocityProcessor(ObservationProcessorStep):  # an ObservationProcessorStep exposes an observation() hook
238
+     def observation(self, obs):
239
+         new_obs = obs.copy()
240
+         if "observation.state" in obs:
241
+             # _compute_velocity (not shown) returns the state concatenated with its computed velocity
242
+             new_obs["observation.state"] = self._compute_velocity(obs["observation.state"])
243
+         return new_obs
244
+
245
+     def transform_features(self, features):
246
+         """Declare the doubled state shape after concatenating velocity."""
247
+         state_feature = features[PipelineFeatureType.OBSERVATION].get("observation.state")
248
+         if state_feature:
249
+             double_shape = (state_feature.shape[0] * 2,) if state_feature.shape else (2,)
250
+             features[PipelineFeatureType.OBSERVATION]["observation.state"] = PolicyFeature(
251
+                 type=FeatureType.STATE, shape=double_shape
252
+             )
253
+         return features
254
+ ```
255
+
256
+ ### Feature Specification Functions
257
+
258
+ `create_initial_features()` and `aggregate_pipeline_dataset_features()` solve a critical dataset creation problem: determining the exact final data structure before any data is processed.
259
+ Since processor pipelines can add new features (like velocity fields), change tensor shapes (like cropping images), or rename keys, datasets need to know the complete output specification upfront to allocate proper storage and define schemas.
260
+ The two functions work together: `create_initial_features()` starts from the robot hardware specification, and `aggregate_pipeline_dataset_features()` simulates the entire pipeline transformation to compute the final feature dictionary passed to `LeRobotDataset.create()`, ensuring perfect alignment between what processors output and what the dataset expects to store.
261
+
262
+ ```python
263
+ from lerobot.datasets.pipeline_features import aggregate_pipeline_dataset_features
264
+
265
+ # Start with robot's raw features
266
+ initial_features = create_initial_features(
267
+     observation=robot.observation_features,  # {"joint_1.pos": float, "camera_0": (480,640,3)}
268
+     action=robot.action_features,  # {"joint_1.pos": float, "gripper.pos": float}
269
+ )
270
+
271
+ # Apply processor pipeline to compute final features
272
+ final_features = aggregate_pipeline_dataset_features(
273
+     pipeline=my_processor_pipeline,
274
+     initial_features=initial_features,
275
+     use_videos=True,
276
+ )
277
+
278
+ # Use for dataset creation
279
+ dataset = LeRobotDataset.create(
280
+     repo_id="my_dataset",
281
+     features=final_features,  # Knows exactly what data to expect
282
+     ...
283
+ )
284
+ ```
285
+
286
+ ## Common Processor Steps
287
+
288
+ LeRobot provides many registered processor steps. Here are the most commonly used core processors:
289
+
290
+ ### Essential Processors
291
+
292
+ - **`normalizer_processor`**: Normalize observations/actions using dataset statistics (mean/std or min/max)
293
+ - **`device_processor`**: Move tensors to CPU/GPU with optional dtype conversion
294
+ - **`to_batch_processor`**: Add batch dimensions to transitions for model compatibility
295
+ - **`rename_observations_processor`**: Rename observation keys using mapping dictionaries
296
+ - **`tokenizer_processor`**: Tokenize natural language task descriptions into tokens and attention masks
297
+
298
+ ### Next Steps
299
+
300
+ - **[Implement Your Own Processor](./implement_your_own_processor)** - Create custom processor steps
301
+ - **[Debug Your Pipeline](./debug_processor_pipeline)** - Troubleshoot and optimize pipelines
302
+ - **[Processors for Robots and Teleoperators](./processors_robots_teleop)** - Real-world integration patterns
303
+
304
+ ## Summary
305
+
306
+ Processors solve the data translation problem in robotics by providing:
307
+
308
+ - **Modular transformations**: Composable, reusable processing steps
309
+ - **Type safety**: Generic pipelines with static type checking
310
+ - **Performance optimization**: GPU-accelerated operations
311
+ - **Robot/Policy distinction**: Separate pipelines for different data structures
312
+ - **Comprehensive ecosystem**: 30+ registered processors for common tasks
313
+
314
+ The key insight: `RobotProcessorPipeline` handles unbatched robot hardware data, while `PolicyProcessorPipeline` handles batched model data. Choose the right tool for your data structure!