[Git LFS pointer deletions under lerobot/dataset/session_20260116_163348/videos/: removes observation.masks.CameraGripper/chunk-002 files 052-054 and 056, observation.masks.CameraLeft/chunk-001 files 051 and 053-099, and observation.masks.CameraLeft/chunk-002 files 000-011, 013-033, 036-051, and 077-087 (all .mp4). Each deleted file was a three-line LFS pointer (version https://git-lfs.github.com/spec/v1, oid sha256:…, size roughly 15-27 KB). The diff continues below.]
diff --git
a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-088.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-088.mp4 deleted file mode 100644 index 1649203c83251ba2fb0321e2d6313d46e78710ab..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-088.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:9ee8253dac6436181d7320dd651879fc21e900c8e6efd10b310bad6f45e13c48 -size 25278 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-089.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-089.mp4 deleted file mode 100644 index dbdc333127f8d427db211709fd06be1411e3604f..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-089.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:6d612d9e44ad9045844494ddfd5630f57b421ca61b5f25711241ac12fc0650ad -size 26645 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-090.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-090.mp4 deleted file mode 100644 index d96b9534a78fe124f92192bcfc5e9d5b507806ac..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-090.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:f94a7c5ac07ccb69aa0aa1bd8ed13f11abdb9b00c6d756ccf5a8224c56568447 -size 26105 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-091.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-091.mp4 deleted file mode 100644 index 6605a98da089e762d50c9a66003de0d0b99edaaf..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-091.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:1e8c97c34c7609a0a9b73a71d914230a43d2defd017cca2a93e1571fc44c1e70 -size 26275 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-092.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-092.mp4 deleted file mode 100644 index 88a053eda90da317bdc7aa6fce2618406bd1d76d..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-092.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:2fefdc7083973aebe39678fb19ca62a56d689d9e43e4a2525162d289e4fac210 -size 25721 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-093.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-093.mp4 deleted file mode 100644 index 817318cc8d168985d5f349164737e936d7bf6145..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-093.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:83dfdd042348d16259149b9b4ad9f7adb5d9e1f6c6fc8425e0d7492771753e0d -size 
25979 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-094.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-094.mp4 deleted file mode 100644 index 2ab4728136de1134af204610c15d408f1272ebc4..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-094.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:51be7dcf85c7a2c3346ff000e4fd9b004bd0e7d05230551e13227d0387bf9205 -size 26287 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-095.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-095.mp4 deleted file mode 100644 index 0a3e6fdf9c813f2e0644b502164a02bc7f19fb6d..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-095.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:3749a91fbe4b4604b23db758ae13f94d8cd60918bfd34e6d67ffc78c7c680f5e -size 25726 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-096.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-096.mp4 deleted file mode 100644 index 8da85ac0bed384a3dbb2bc8ffc3fe41f782d77ea..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-096.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:62c3b441c5890106f087d0633dccea26806ecfd7d277dca678876cf5e3fa3079 -size 21538 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-097.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-097.mp4 deleted file mode 100644 index 08640ebc1fce469ead5c8f85f97d74b111116a06..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-097.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:d96d31a32a7e912c10e88f2e5e2108c6b5e699cd259a67ed3d6361d3846d3bb3 -size 25714 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-098.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-098.mp4 deleted file mode 100644 index a5fa64fad5d6d1b7c36e6ce4a3ef5263717e14c0..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-098.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid sha256:a47f256a2cfa7a4d83916ea875315fa59ae44fb6f0555eaf8123f3ed881ed391 -size 26803 diff --git a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-099.mp4 b/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-099.mp4 deleted file mode 100644 index 2f634401106b80a60154985c6e305947d198471c..0000000000000000000000000000000000000000 --- a/lerobot/dataset/session_20260116_163348/videos/observation.masks.CameraLeft/chunk-002/file-099.mp4 +++ /dev/null @@ -1,3 +0,0 @@ -version https://git-lfs.github.com/spec/v1 -oid 
sha256:f6754163af4f614e44097be03ff8e9610aa381474586ecb0f964058b9a13ba03 -size 23976 diff --git a/lerobot/docs/source/act.mdx b/lerobot/docs/source/act.mdx deleted file mode 100644 index 0867ebe2f31a02736dcb8c3bb4173a82ef3fb530..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/act.mdx +++ /dev/null @@ -1,92 +0,0 @@ -# ACT (Action Chunking with Transformers) - -ACT is a **lightweight and efficient policy for imitation learning**, especially well-suited for fine-grained manipulation tasks. It's the **first model we recommend when you're starting out** with LeRobot due to its fast training time, low computational requirements, and strong performance. - -
- -_Watch this tutorial from the LeRobot team to learn how ACT works: [LeRobot ACT Tutorial](https://www.youtube.com/watch?v=ft73x0LfGpM)_ - -## Model Overview - -Action Chunking with Transformers (ACT) was introduced in the paper [Learning Fine-Grained Bimanual Manipulation with Low-Cost Hardware](https://arxiv.org/abs/2304.13705) by Zhao et al. The policy was designed to enable precise, contact-rich manipulation tasks using affordable hardware and minimal demonstration data. - -### Why ACT is Great for Beginners - -ACT stands out as an excellent starting point for several reasons: - -- **Fast Training**: Trains in a few hours on a single GPU -- **Lightweight**: Only ~80M parameters, making it efficient and easy to work with -- **Data Efficient**: Often achieves high success rates with just 50 demonstrations - -### Architecture - -ACT uses a transformer-based architecture with three main components: - -1. **Vision Backbone**: ResNet-18 processes images from multiple camera viewpoints -2. **Transformer Encoder**: Synthesizes information from camera features, joint positions, and a learned latent variable -3. **Transformer Decoder**: Generates coherent action sequences using cross-attention - -The policy takes as input: - -- Multiple RGB images (e.g., from wrist cameras, front/top cameras) -- Current robot joint positions -- A latent style variable `z` (learned during training, set to zero during inference) - -And outputs a chunk of `k` future action sequences. - -## Installation Requirements - -1. Install LeRobot by following our [Installation Guide](./installation). -2. ACT is included in the base LeRobot installation, so no additional dependencies are needed! - -## Training ACT - -ACT works seamlessly with the standard LeRobot training pipeline. Here's a complete example for training ACT on your dataset: - -```bash -lerobot-train \ - --dataset.repo_id=${HF_USER}/your_dataset \ - --policy.type=act \ - --output_dir=outputs/train/act_your_dataset \ - --job_name=act_your_dataset \ - --policy.device=cuda \ - --wandb.enable=true \ - --policy.repo_id=${HF_USER}/act_policy -``` - -### Training Tips - -1. **Start with defaults**: ACT's default hyperparameters work well for most tasks -2. **Training duration**: Expect a few hours for 100k training steps on a single GPU -3. **Batch size**: Start with batch size 8 and adjust based on your GPU memory - -### Train using Google Colab - -If your local computer doesn't have a powerful GPU, you can utilize Google Colab to train your model by following the [ACT training notebook](./notebooks#training-act). - -## Evaluating ACT - -Once training is complete, you can evaluate your ACT policy using the `lerobot-record` command with your trained policy. 
This will run inference and record evaluation episodes: - -```bash -lerobot-record \ - --robot.type=so100_follower \ - --robot.port=/dev/ttyACM0 \ - --robot.id=my_robot \ - --robot.cameras="{ front: {type: opencv, index_or_path: 0, width: 640, height: 480, fps: 30}}" \ - --display_data=true \ - --dataset.repo_id=${HF_USER}/eval_act_your_dataset \ - --dataset.num_episodes=10 \ - --dataset.single_task="Your task description" \ - --policy.path=${HF_USER}/act_policy -``` diff --git a/lerobot/docs/source/async.mdx b/lerobot/docs/source/async.mdx deleted file mode 100644 index 732041a19299f5c96c888bb06a8c49e4fd41703e..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/async.mdx +++ /dev/null @@ -1,312 +0,0 @@ -# Asynchronous Inference - -With our [SmolVLA](https://huggingface.co/papers/2506.01844) we introduced a new way to run inference on real-world robots, **decoupling action prediction from action execution**. -In this tutorial, we'll show how to use asynchronous inference (_async inference_) using a finetuned version of SmolVLA, and all the policies supported by LeRobot. -**Try async inference with all the policies** supported by LeRobot! - -**What you'll learn:** - -1. Why asynchronous inference matters and how it compares to, more traditional, sequential inference. -2. How to spin-up a `PolicyServer` and connect a `RobotClient` from the same machine, and even over the network. -3. How to tune key parameters (`actions_per_chunk`, `chunk_size_threshold`) for your robot and policy. - -If you get stuck, hop into our [Discord community](https://discord.gg/s3KuuzsPFb)! - -In a nutshell: with _async inference_, your robot keeps acting while the policy server is already busy computing the next chunk of actions---eliminating "wait-for-inference" lags and unlocking smoother, more reactive behaviours. -This is fundamentally different from synchronous inference (sync), where the robot stays idle while the policy computes the next chunk of actions. - ---- - -## Getting started with async inference - -You can read more information on asynchronous inference in our [blogpost](https://huggingface.co/blog/async-robot-inference). This guide is designed to help you quickly set up and run asynchronous inference in your environment. - -First, install `lerobot` with the `async` tag, to install the extra dependencies required to run async inference. - -```shell -pip install -e ".[async]" -``` - -Then, spin up a policy server (in one terminal, or in a separate machine) specifying the host address and port for the client to connect to. -You can spin up a policy server running: - -```shell -python -m lerobot.async_inference.policy_server \ - --host=127.0.0.1 \ - --port=8080 -``` - -This will start a policy server listening on `127.0.0.1:8080` (`localhost`, port 8080). At this stage, the policy server is empty, as all information related to which policy to run and with which parameters are specified during the first handshake with the client. 
Spin up a client with: - -```shell -python -m lerobot.async_inference.robot_client \ - --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server - --robot.type=so100_follower \ # ROBOT: your robot type - --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port - --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file - --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy - --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act` - --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc) - --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base) - --policy_device=mps \ # POLICY: the device to run the policy on, on the server - --actions_per_chunk=50 \ # POLICY: the number of actions to output at once - --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server - --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions - --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime -``` - -In summary, you need to specify instructions for: - -- `SERVER`: the address and port of the policy server -- `ROBOT`: the type of robot to connect to, the port to connect to, and the local `id` of the robot -- `POLICY`: the type of policy to run, and the model name/path on server to the checkpoint to run. You also need to specify which device should the sever be using, and how many actions to output at once (capped at the policy max actions value). -- `CLIENT`: the threshold for the chunk size before sending a new observation to the server, and the function to aggregate actions on overlapping portions. Optionally, you can also visualize the queue size at runtime, to help you tune the `CLIENT` parameters. - -Importantly, - -- `actions_per_chunk` and `chunk_size_threshold` are key parameters to tune for your setup. -- `aggregate_fn_name` is the function to aggregate actions on overlapping portions. You can either add a new one to a registry of functions, or add your own in `robot_client.py` (see [here](NOTE:addlinktoLOC)) -- `debug_visualize_queue_size` is a useful tool to tune the `CLIENT` parameters. - -## Done! You should see your robot moving around by now 😉 - -## Async vs. synchronous inference - -Synchronous inference relies on interleaving action chunk prediction and action execution. This inherently results in _idle frames_, frames where the robot awaits idle the policy's output: a new action chunk. -In turn, inference is plagued by evident real-time lags, where the robot simply stops acting due to the lack of available actions. -With robotics models increasing in size, this problem risks becoming only more severe. - -

_Figure: Synchronous inference makes the robot idle while the policy is computing the next chunk of actions._
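To make the synchronous bottleneck above concrete, here is a minimal, purely illustrative sketch of a sequential control loop. All names (`get_observation`, `predict_chunk`, `execute`) are hypothetical toy stand-ins, not LeRobot APIs; the `time.sleep` calls stand in for real inference and control latency.

```python
import time

# Toy stand-ins for a real robot and policy (hypothetical, illustration only).
def get_observation():
    return {"state": 0.0}

def predict_chunk(obs, chunk_len=10, inference_time=0.3):
    time.sleep(inference_time)  # the robot sits idle for the whole inference call
    return [obs["state"]] * chunk_len

def execute(action, control_period=0.02):
    time.sleep(control_period)  # one control step

# Synchronous loop: inference and execution strictly alternate,
# so every call to predict_chunk() produces a visible pause on the robot.
for _ in range(3):
    chunk = predict_chunk(get_observation())
    for action in chunk:
        execute(action)
```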

To overcome this, we design async inference, a paradigm where action planning and execution are decoupled, resulting in (1) higher adaptability and, most importantly, (2) no idle frames. Crucially, with async inference the next action chunk is computed _before_ the current one is exhausted, so the robot never has to wait for the policy. Higher adaptability is ensured by aggregating the different action chunks on their overlapping portions, which yields an up-to-date plan and a tighter control loop (a toy sketch of such aggregation follows the figure below).

_Figure: Asynchronous inference results in no idleness because the next chunk is computed before the current chunk is exhausted._
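As a rough illustration of what aggregating overlapping chunks can look like, the sketch below blends the overlapping portion of an old and a new action chunk with a fixed weight. This is only a simplified example under assumed array shapes; it is not the exact `weighted_average` function shipped in `robot_client.py`.

```python
import numpy as np

def blend_chunks(old_chunk: np.ndarray, new_chunk: np.ndarray, overlap: int, new_weight: float = 0.7) -> np.ndarray:
    """Blend the last `overlap` actions of the old chunk with the first `overlap` actions of the new one.

    Both chunks are assumed to have shape (chunk_len, action_dim).
    """
    blended = new_weight * new_chunk[:overlap] + (1.0 - new_weight) * old_chunk[-overlap:]
    # Keep the blended overlap, then append the remaining (fresh) actions from the new chunk.
    return np.concatenate([blended, new_chunk[overlap:]], axis=0)

# Example: two 50-step chunks of 6-DoF actions overlapping on 20 steps.
old = np.zeros((50, 6))
new = np.ones((50, 6))
merged = blend_chunks(old, new, overlap=20)
print(merged.shape)  # (50, 6)
```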

- ---- - -## Start the Policy Server - -Policy servers are wrappers around a `PreTrainedPolicy` interfacing them with observations coming from a robot client. -Policy servers are initialized as empty containers which are populated with the requested policy specified in the initial handshake between the robot client and the policy server. -As such, spinning up a policy server is as easy as specifying the host address and port. If you're running the policy server on the same machine as the robot client, you can use `localhost` as the host address. - - - -```bash -python -m lerobot.async_inference.policy_server \ - --host=127.0.0.1 \ - --port=8080 -``` - - - - -```python -from lerobot.async_inference.configs import PolicyServerConfig -from lerobot.async_inference.policy_server import serve - -config = PolicyServerConfig( - host="localhost", - port=8080, -) -serve(config) -``` - - - - - -This listens on `localhost:8080` for an incoming connection from the associated`RobotClient`, which will communicate which policy to run during the first client-server handshake. - ---- - -## Launch the Robot Client - -`RobotClient` is a wrapper around a `Robot` instance, which `RobotClient` connects to the (possibly remote) `PolicyServer`. -The `RobotClient` streams observations to the `PolicyServer`, and receives action chunks obtained running inference on the server (which we assume to have better computational resources than the robot controller). - - - -```bash -python -m lerobot.async_inference.robot_client \ - --server_address=127.0.0.1:8080 \ # SERVER: the host address and port of the policy server - --robot.type=so100_follower \ # ROBOT: your robot type - --robot.port=/dev/tty.usbmodem585A0076841 \ # ROBOT: your robot port - --robot.id=follower_so100 \ # ROBOT: your robot id, to load calibration file - --robot.cameras="{ laptop: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}, phone: {type: opencv, index_or_path: 0, width: 1920, height: 1080, fps: 30}}" \ # POLICY: the cameras used to acquire frames, with keys matching the keys expected by the policy - --task="dummy" \ # POLICY: The task to run the policy on (`Fold my t-shirt`). Not necessarily defined for all policies, such as `act` - --policy_type=your_policy_type \ # POLICY: the type of policy to run (smolvla, act, etc) - --pretrained_name_or_path=user/model \ # POLICY: the model name/path on server to the checkpoint to run (e.g., lerobot/smolvla_base) - --policy_device=mps \ # POLICY: the device to run the policy on, on the server - --actions_per_chunk=50 \ # POLICY: the number of actions to output at once - --chunk_size_threshold=0.5 \ # CLIENT: the threshold for the chunk size before sending a new observation to the server - --aggregate_fn_name=weighted_average \ # CLIENT: the function to aggregate actions on overlapping portions - --debug_visualize_queue_size=True # CLIENT: whether to visualize the queue size at runtime -``` - - - - -```python -import threading -from lerobot.robots.so_follower import SO100FollowerConfig -from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig -from lerobot.async_inference.configs import RobotClientConfig -from lerobot.async_inference.robot_client import RobotClient -from lerobot.async_inference.helpers import visualize_action_queue_size - -# 1. 
Create the robot instance -"""Check out the cameras available in your setup by running `python lerobot/find_cameras.py`""" -# these cameras must match the ones expected by the policy -# check the config.json on the Hub for the policy you are using -camera_cfg = { - "top": OpenCVCameraConfig(index_or_path=0, width=640, height=480, fps=30), - "side": OpenCVCameraConfig(index_or_path=1, width=640, height=480, fps=30) -} - -robot_cfg = SO100FollowerConfig( - port="/dev/tty.usbmodem585A0076841", - id="follower_so100", - cameras=camera_cfg -) - -# 3. Create client configuration -client_cfg = RobotClientConfig( - robot=robot_cfg, - server_address="localhost:8080", - policy_device="mps", - policy_type="smolvla", - pretrained_name_or_path="/smolvla_async", - chunk_size_threshold=0.5, - actions_per_chunk=50, # make sure this is less than the max actions of the policy -) - -# 4. Create and start client -client = RobotClient(client_cfg) - -# 5. Specify the task -task = "Don't do anything, stay still" - -if client.start(): - # Start action receiver thread - action_receiver_thread = threading.Thread(target=client.receive_actions, daemon=True) - action_receiver_thread.start() - - try: - # Run the control loop - client.control_loop(task) - except KeyboardInterrupt: - client.stop() - action_receiver_thread.join() - # (Optionally) plot the action queue size - visualize_action_queue_size(client.action_queue_size) -``` - - - - - -The following two parameters are key in every setup: - - - - - - - - - - - - - - - - - - - - - -
| Hyperparameter | Default | What it does |
| --- | --- | --- |
| `actions_per_chunk` | 50 | How many actions the policy outputs at once. Typical values: 10-50. |
| `chunk_size_threshold` | 0.7 | Once the action queue drops below this fraction of `actions_per_chunk`, the client sends a fresh observation to the server. Value in [0, 1]. |
> [!TIP]
> Different values of `actions_per_chunk` and `chunk_size_threshold` result in different behaviours.

On the one hand, increasing `actions_per_chunk` reduces the likelihood of ending up with no actions to execute, as more actions are still available when the new chunk is computed. However, larger values of `actions_per_chunk` might also result in less precise actions, due to the compounding errors that come with predicting actions over longer timespans.

On the other hand, increasing `chunk_size_threshold` sends observations to the `PolicyServer` for inference more often, producing a larger number of updated action chunks that overlap on significant portions. This gives high adaptability, in the limit predicting one action chunk for each observation, which is in turn only marginally consumed while a new one is produced. It also puts more pressure on the inference pipeline, as a consequence of the many requests. Conversely, values of `chunk_size_threshold` close to 0.0 collapse to the synchronous edge case, whereby new observations are only sent out whenever the current chunk is exhausted (a simplified sketch of this rule is shown after the checklist below).

We found the default values of `actions_per_chunk` and `chunk_size_threshold` to work well in the experiments we developed for the [SmolVLA paper](https://huggingface.co/papers/2506.01844), but we recommend experimenting with different values to find the best fit for your setup.

### Tuning async inference for your setup

1. **Choose your computational resources carefully.** [PI0](https://huggingface.co/lerobot/pi0) occupies 14GB of memory at inference time, while [SmolVLA](https://huggingface.co/lerobot/smolvla_base) requires only ~2GB. Identify the best computational resource for your use case, keeping in mind that smaller policies require fewer resources. The combination of policy and device (CPU, MPS, or the number of CUDA cores on a given NVIDIA GPU) directly impacts the average inference latency you should expect.
2. **Adjust your `fps` based on inference latency.** While the server generates a new action chunk, the client is not idle: it keeps stepping through its current action queue. If the two processes run at fundamentally different speeds, the client might end up with an empty queue. Reduce your fps if you consistently run out of actions in the queue.
3. **Adjust `chunk_size_threshold`.**
   - Values closer to `0.0` result in almost sequential behavior; values closer to `1.0` send an observation at nearly every step (more bandwidth, and more reliance on the policy's ability to plan ahead).
   - We found values around 0.5-0.6 to work well. If you want to tweak this, spin up a `RobotClient` with `--debug_visualize_queue_size=True`. This plots the action queue size evolution at runtime, and you can use it to find the value of `chunk_size_threshold` that works best for your setup.
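As a mental model, the client-side rule controlled by `chunk_size_threshold` can be thought of as the sketch below. This is a simplified illustration, not the actual `RobotClient` code.

```python
def should_send_observation(queue_len: int, actions_per_chunk: int, chunk_size_threshold: float) -> bool:
    """Ask the server for a new chunk once the local queue has drained below the threshold fraction."""
    return queue_len / actions_per_chunk <= chunk_size_threshold

# chunk_size_threshold -> 0.0: only request a new chunk when the queue is (almost) empty, i.e. sync-like.
# chunk_size_threshold -> 1.0: request a new chunk at (almost) every control step.
print(should_send_observation(queue_len=30, actions_per_chunk=50, chunk_size_threshold=0.5))  # False (queue 60% full)
print(should_send_observation(queue_len=20, actions_per_chunk=50, chunk_size_threshold=0.5))  # True  (queue 40% full)
```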

_Figure: The action queue size is plotted at runtime when the `--debug_visualize_queue_size` flag is passed, for various levels of `chunk_size_threshold` (`g` in the SmolVLA paper)._

- ---- - -## Conclusion - -Asynchronous inference represents a significant advancement in real-time robotics control, addressing the fundamental challenge of inference latency that has long plagued robotics applications. Through this tutorial, you've learned how to implement a complete async inference pipeline that eliminates idle frames and enables smoother, more reactive robot behaviors. - -**Key Takeaways:** - -- **Paradigm Shift**: Async inference decouples action prediction from execution, allowing robots to continue acting while new action chunks are computed in parallel -- **Performance Benefits**: Eliminates "wait-for-inference" lags that are inherent in synchronous approaches, becoming increasingly important as policy models grow larger -- **Flexible Architecture**: The server-client design enables distributed computing, where inference can run on powerful remote hardware while maintaining real-time robot control -- **Tunable Parameters**: Success depends on properly configuring `actions_per_chunk` and `chunk_size_threshold` for your specific hardware, policy, and task requirements -- **Universal Compatibility**: Works with all LeRobot-supported policies, from lightweight ACT models to vision-language models like SmolVLA - -Start experimenting with the default parameters, monitor your action queue sizes, and iteratively refine your setup to achieve optimal performance for your specific use case. -If you want to discuss this further, hop into our [Discord community](https://discord.gg/s3KuuzsPFb), or open an issue on our [GitHub repository](https://github.com/lerobot/lerobot/issues). diff --git a/lerobot/docs/source/backwardcomp.mdx b/lerobot/docs/source/backwardcomp.mdx deleted file mode 100644 index a0546eee722d123bcb7ab8ebaeedcd27b936f6f6..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/backwardcomp.mdx +++ /dev/null @@ -1,151 +0,0 @@ -# Backward compatibility - -## Policy Normalization Migration (PR #1452) - -**Breaking Change**: LeRobot policies no longer have built-in normalization layers embedded in their weights. Normalization is now handled by external `PolicyProcessorPipeline` components. - -### What changed? - -| | Before PR #1452 | After PR #1452 | -| -------------------------- | ------------------------------------------------ | ------------------------------------------------------------ | -| **Normalization Location** | Embedded in model weights (`normalize_inputs.*`) | External `PolicyProcessorPipeline` components | -| **Model State Dict** | Contains normalization statistics | **Clean weights only** - no normalization parameters | -| **Usage** | `policy(batch)` handles everything | `preprocessor(batch)` → `policy(...)` → `postprocessor(...)` | - -### Impact on existing models - -- Models trained **before** PR #1452 have normalization embedded in their weights -- These models need migration to work with the new `PolicyProcessorPipeline` system -- The migration extracts normalization statistics and creates separate processor pipelines - -### Migrating old models - -Use the migration script to convert models with embedded normalization: - -```shell -python src/lerobot/processor/migrate_policy_normalization.py \ - --pretrained-path lerobot/act_aloha_sim_transfer_cube_human \ - --push-to-hub \ - --branch migrated -``` - -The script: - -1. **Extracts** normalization statistics from model weights -2. **Creates** external preprocessor and postprocessor pipelines -3. **Removes** normalization layers from model weights -4. 
**Saves** clean model + processor pipelines -5. **Pushes** to Hub with automatic PR creation - -### Using migrated models - -```python -# New usage pattern (after migration) -from lerobot.policies.factory import make_policy, make_pre_post_processors - -# Load model and processors separately -policy = make_policy(config, ds_meta=dataset.meta) -preprocessor, postprocessor = make_pre_post_processors( - policy_cfg=config, - dataset_stats=dataset.meta.stats -) - -# Process data through pipeline -processed_batch = preprocessor(raw_batch) -action = policy.select_action(processed_batch) -final_action = postprocessor(action) -``` - -## Hardware API redesign - -PR [#777](https://github.com/huggingface/lerobot/pull/777) improves the LeRobot calibration but is **not backward-compatible**. Below is a overview of what changed and how you can continue to work with datasets created before this pull request. - -### What changed? - -| | Before PR #777 | After PR #777 | -| --------------------------------- | ------------------------------------------------- | ------------------------------------------------------------ | -| **Joint range** | Degrees `-180...180°` | **Normalised range** Joints: `–100...100` Gripper: `0...100` | -| **Zero position (SO100 / SO101)** | Arm fully extended horizontally | **In middle of the range for each joint** | -| **Boundary handling** | Software safeguards to detect ±180 ° wrap-arounds | No wrap-around logic needed due to mid-range zero | - ---- - -### Impact on existing datasets - -- Recorded trajectories created **before** PR #777 will replay incorrectly if loaded directly: - - Joint angles are offset and incorrectly normalized. -- Any models directly finetuned or trained on the old data will need their inputs and outputs converted. - -### Using datasets made with the previous calibration system - -We provide a migration example script for replaying an episode recorded with the previous calibration here: `examples/backward_compatibility/replay.py`. -Below we take you through the modifications that are done in the example script to make the previous calibration datasets work. - -```diff -+ key = f"{name.removeprefix('main_')}.pos" - action[key] = action_array[i].item() -+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90) -+ action["elbow_flex.pos"] -= 90 -``` - -Let's break this down. -New codebase uses `.pos` suffix for the position observations and we have removed `main_` prefix: - - -```python -key = f"{name.removeprefix('main_')}.pos" -``` - - -For `"shoulder_lift"` (id = 2), the 0 position is changed by -90 degrees and the direction is reversed compared to old calibration/code. - - -```python -action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90) -``` - - -For `"elbow_flex"` (id = 3), the 0 position is changed by -90 degrees compared to old calibration/code. - - -```python -action["elbow_flex.pos"] -= 90 -``` - - -To use degrees normalization we then set the `--robot.use_degrees` option to `true`. - -```diff -python examples/backward_compatibility/replay.py \ - --robot.type=so101_follower \ - --robot.port=/dev/tty.usbmodem5A460814411 \ - --robot.id=blue \ -+ --robot.use_degrees=true \ - --dataset.repo_id=my_dataset_id \ - --dataset.episode=0 -``` - -### Using policies trained with the previous calibration system - -Policies output actions in the same format as the datasets (`torch.Tensors`). Therefore, the same transformations should be applied. 
- -To find these transformations, we recommend to first try and and replay an episode of the dataset your policy was trained on using the section above. -Then, add these same transformations on your inference script (shown here in the `record.py` script): - -```diff -action_values = predict_action( - observation_frame, - policy, - get_safe_torch_device(policy.config.device), - policy.config.use_amp, - task=single_task, - robot_type=robot.robot_type, - ) - action = {key: action_values[i].item() for i, key in enumerate(robot.action_features)} - -+ action["shoulder_lift.pos"] = -(action["shoulder_lift.pos"] - 90) -+ action["elbow_flex.pos"] -= 90 - robot.send_action(action) -``` - -If you have questions or run into migration issues, feel free to ask them on [Discord](https://discord.gg/s3KuuzsPFb) diff --git a/lerobot/docs/source/bring_your_own_policies.mdx b/lerobot/docs/source/bring_your_own_policies.mdx deleted file mode 100644 index df1401fac82dccb9f7d4d5c3f13dbf7cf4ae0b82..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/bring_your_own_policies.mdx +++ /dev/null @@ -1,175 +0,0 @@ -# Bring Your Own Policies - -This tutorial explains how to integrate your own custom policy implementations into the LeRobot ecosystem, allowing you to leverage all LeRobot tools for training, evaluation, and deployment while using your own algorithms. - -## Step 1: Create a Policy Package - -Your custom policy should be organized as an installable Python package following LeRobot's plugin conventions. - -### Package Structure - -Create a package with the prefix `lerobot_policy_` (IMPORTANT!) followed by your policy name: - -```bash -lerobot_policy_my_custom_policy/ -├── pyproject.toml -└── src/ - └── lerobot_policy_my_custom_policy/ - ├── __init__.py - ├── configuration_my_custom_policy.py - ├── modeling_my_custom_policy.py - └── processor_my_custom_policy.py -``` - -### Package Configuration - -Set up your `pyproject.toml`: - -```toml -[project] -name = "lerobot_policy_my_custom_policy" -version = "0.1.0" -dependencies = [ - # your policy-specific dependencies -] -requires-python = ">= 3.11" - -[build-system] -build-backend = # your-build-backend -requires = # your-build-system -``` - -## Step 2: Define the Policy Configuration - -Create a configuration class that inherits from `PreTrainedConfig` and registers your policy type: - -```python -# configuration_my_custom_policy.py -from dataclasses import dataclass, field -from lerobot.configs.policies import PreTrainedConfig -from lerobot.configs.types import NormalizationMode - -@PreTrainedConfig.register_subclass("my_custom_policy") -@dataclass -class MyCustomPolicyConfig(PreTrainedConfig): - """Configuration class for MyCustomPolicy. - - Args: - n_obs_steps: Number of observation steps to use as input - horizon: Action prediction horizon - n_action_steps: Number of action steps to execute - hidden_dim: Hidden dimension for the policy network - # Add your policy-specific parameters here - """ - # ...PreTrainedConfig fields... 
- pass - - def __post_init__(self): - super().__post_init__() - # Add any validation logic here - - def validate_features(self) -> None: - """Validate input/output feature compatibility.""" - # Implement validation logic for your policy's requirements - pass -``` - -## Step 3: Implement the Policy Class - -Create your policy implementation by inheriting from LeRobot's base `PreTrainedPolicy` class: - -```python -# modeling_my_custom_policy.py -import torch -import torch.nn as nn -from typing import Dict, Any - -from lerobot.policies.pretrained import PreTrainedPolicy -from .configuration_my_custom_policy import MyCustomPolicyConfig - -class MyCustomPolicy(PreTrainedPolicy): - config_class = MyCustomPolicyConfig - name = "my_custom_policy" - - def __init__(self, config: MyCustomPolicyConfig, dataset_stats: Dict[str, Any] = None): - super().__init__(config, dataset_stats) - ... -``` - -## Step 4: Add Data Processors - -Create processor functions: - -```python -# processor_my_custom_policy.py -from typing import Dict, Any -import torch - - -def make_my_custom_policy_pre_post_processors( - config, -) -> tuple[ - PolicyProcessorPipeline[dict[str, Any], dict[str, Any]], - PolicyProcessorPipeline[PolicyAction, PolicyAction], -]: - """Create preprocessing and postprocessing functions for your policy.""" - pass # Define your preprocessing and postprocessing logic here - -``` - -## Step 5: Package Initialization - -Expose your classes in the package's `__init__.py`: - -```python -# __init__.py -"""Custom policy package for LeRobot.""" - -try: - import lerobot # noqa: F401 -except ImportError: - raise ImportError( - "lerobot is not installed. Please install lerobot to use this policy package." - ) - -from .configuration_my_custom_policy import MyCustomPolicyConfig -from .modeling_my_custom_policy import MyCustomPolicy -from .processor_my_custom_policy import make_my_custom_policy_pre_post_processors - -__all__ = [ - "MyCustomPolicyConfig", - "MyCustomPolicy", - "make_my_custom_policy_pre_post_processors", -] -``` - -## Step 6: Installation and Usage - -### Install Your Policy Package - -```bash -cd lerobot_policy_my_custom_policy -pip install -e . - -# Or install from PyPI if published -pip install lerobot_policy_my_custom_policy -``` - -### Use Your Policy - -Once installed, your policy automatically integrates with LeRobot's training and evaluation tools: - -```bash -lerobot-train \ - --policy.type my_custom_policy \ - --env.type pusht \ - --steps 200000 -``` - -## Examples and Community Contributions - -Check out these example policy implementations: - -- [DiTFlow Policy](https://github.com/danielsanjosepro/lerobot_policy_ditflow) - Diffusion Transformer policy with flow-matching objective. Try it out in this example: [DiTFlow Example](https://github.com/danielsanjosepro/test_lerobot_policy_ditflow) - -Share your policy implementations with the community! 🤗 diff --git a/lerobot/docs/source/cameras.mdx b/lerobot/docs/source/cameras.mdx deleted file mode 100644 index 98205ce105bb2191f67da993cc6d85e0f7873b42..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/cameras.mdx +++ /dev/null @@ -1,206 +0,0 @@ -# Cameras - -LeRobot offers multiple options for video capture, including phone cameras, built-in laptop cameras, external webcams, and Intel RealSense cameras. To efficiently record frames from most cameras, you can use either the `OpenCVCamera` or `RealSenseCamera` class. 
For additional compatibility details on the `OpenCVCamera` class, refer to the [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html). - -### Finding your camera - -To instantiate a camera, you need a camera identifier. This identifier might change if you reboot your computer or re-plug your camera, a behavior mostly dependant on your operating system. - -To find the camera indices of the cameras plugged into your system, run the following script: - -```bash -lerobot-find-cameras opencv # or realsense for Intel Realsense cameras -``` - -The output will look something like this if you have two cameras connected: - -``` ---- Detected Cameras --- -Camera #0: - Name: OpenCV Camera @ 0 - Type: OpenCV - Id: 0 - Backend api: AVFOUNDATION - Default stream profile: - Format: 16.0 - Width: 1920 - Height: 1080 - Fps: 15.0 --------------------- -(more cameras ...) -``` - -> [!WARNING] -> When using Intel RealSense cameras in `macOS`, you could get this [error](https://github.com/IntelRealSense/librealsense/issues/12307): `Error finding RealSense cameras: failed to set power state`, this can be solved by running the same command with `sudo` permissions. Note that using RealSense cameras in `macOS` is unstable. - -## Use Cameras - -Below are two examples, demonstrating how to work with the API. - -- **Asynchronous frame capture** using an OpenCV-based camera -- **Color and depth capture** using an Intel RealSense camera - - - - - -```python -from lerobot.cameras.opencv.configuration_opencv import OpenCVCameraConfig -from lerobot.cameras.opencv.camera_opencv import OpenCVCamera -from lerobot.cameras.configs import ColorMode, Cv2Rotation - -# Construct an `OpenCVCameraConfig` with your desired FPS, resolution, color mode, and rotation. -config = OpenCVCameraConfig( - index_or_path=0, - fps=15, - width=1920, - height=1080, - color_mode=ColorMode.RGB, - rotation=Cv2Rotation.NO_ROTATION -) - -# Instantiate and connect an `OpenCVCamera`, performing a warm-up read (default). -camera = OpenCVCamera(config) -camera.connect() - -# Read frames asynchronously in a loop via `async_read(timeout_ms)` -try: - for i in range(10): - frame = camera.async_read(timeout_ms=200) - print(f"Async frame {i} shape:", frame.shape) -finally: - camera.disconnect() -``` - - - - - - -```python -from lerobot.cameras.realsense.configuration_realsense import RealSenseCameraConfig -from lerobot.cameras.realsense.camera_realsense import RealSenseCamera -from lerobot.cameras.configs import ColorMode, Cv2Rotation - -# Create a `RealSenseCameraConfig` specifying your camera’s serial number and enabling depth. -config = RealSenseCameraConfig( - serial_number_or_name="233522074606", - fps=15, - width=640, - height=480, - color_mode=ColorMode.RGB, - use_depth=True, - rotation=Cv2Rotation.NO_ROTATION -) - -# Instantiate and connect a `RealSenseCamera` with warm-up read (default). -camera = RealSenseCamera(config) -camera.connect() - -# Capture a color frame via `read()` and a depth map via `read_depth()`. -try: - color_frame = camera.read() - depth_map = camera.read_depth() - print("Color frame shape:", color_frame.shape) - print("Depth map shape:", depth_map.shape) -finally: - camera.disconnect() -``` - - - - - -## Use your phone - - - - -To use your iPhone as a camera on macOS, enable the Continuity Camera feature: - -- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later. -- Sign in both devices with the same Apple ID. 
-- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection. - -For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac). - -Your iPhone should be detected automatically when running the camera setup script in the next section. - - - - -If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera - -1. _Install `v4l2loopback-dkms` and `v4l-utils`_. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using: - - -```python -sudo apt install v4l2loopback-dkms v4l-utils -``` - - -2. _Install [DroidCam](https://droidcam.app) on your phone_. This app is available for both iOS and Android. -3. _Install [OBS Studio](https://obsproject.com)_. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org): - - -```python -flatpak install flathub com.obsproject.Studio -``` - - -4. _Install the DroidCam OBS plugin_. This plugin integrates DroidCam with OBS Studio. Install it with: - - -```python -flatpak install flathub com.obsproject.Studio.Plugin.DroidCam -``` - - -5. _Start OBS Studio_. Launch with: - - -```python -flatpak run com.obsproject.Studio -``` - - -6. _Add your phone as a source_. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`. -7. _Adjust resolution settings_. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in. -8. _Start virtual camera_. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide). -9. _Verify the virtual camera setup_. Use `v4l2-ctl` to list the devices: - - -```python -v4l2-ctl --list-devices -``` - - -You should see an entry like: - -``` -VirtualCam (platform:v4l2loopback-000): -/dev/video1 -``` - -10. _Check the camera resolution_. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`. - - -```python -v4l2-ctl -d /dev/video1 --get-fmt-video -``` - - -You should see an entry like: - -``` ->>> Format Video Capture: ->>> Width/Height : 640/480 ->>> Pixel Format : 'YUYV' (YUYV 4:2:2) -``` - -Troubleshooting: If the resolution is not correct you will have to delete the Virtual Camera port and try again as it cannot be changed. - -If everything is set up correctly, you can proceed with the rest of the tutorial. - - - diff --git a/lerobot/docs/source/contributing.md b/lerobot/docs/source/contributing.md deleted file mode 100644 index f939e75f21a8badb5c40f527abd0e098fe9bc472..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/contributing.md +++ /dev/null @@ -1 +0,0 @@ -../../CONTRIBUTING.md \ No newline at end of file diff --git a/lerobot/docs/source/debug_processor_pipeline.mdx b/lerobot/docs/source/debug_processor_pipeline.mdx deleted file mode 100644 index d39eda92c2405ea51cfe20fbb0acf8e3f1f71049..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/debug_processor_pipeline.mdx +++ /dev/null @@ -1,299 +0,0 @@ -# Debug Your Processor Pipeline - -Processor pipelines can be complex, especially when chaining multiple transformation steps. 
-Unlike simple function calls, pipelines lack natural observability, you can't easily see what happens -between each step or where things go wrong. -This guide provides debugging tools and techniques specifically designed to address these challenges -and help you understand data flow through your pipelines. - -We'll explore three complementary debugging approaches: **hooks** for runtime monitoring, **step-through debugging** for detailed inspection, and **feature validation** for catching structural mismatches. Each serves a different purpose and together they provide complete visibility into your pipeline's behavior. - -## Understanding Hooks - -Hooks are functions that get called at specific points during pipeline execution. -They provide a way to inspect, monitor, or modify data without changing your pipeline code. -Think of them as "event listeners" for your pipeline. - -### What is a Hook? - -A hook is a callback function that gets automatically invoked at specific moments during pipeline execution. -The concept comes from event-driven programming, imagine you could "hook into" the pipeline's execution flow to observe or react to what's happening. - -Think of hooks like inserting checkpoints into your pipeline. Every time the pipeline reaches one of these checkpoints, it pauses briefly to call your hook function, giving you a chance to inspect the current state, log information, and validate data. - -A hook is simply a function that accepts two parameters: - -- `step_idx: int` - The index of the current processing step (0, 1, 2, etc.) -- `transition: EnvTransition` - The data transition at that point in the pipeline - -The beauty of hooks is their non-invasive nature: you can add monitoring, validation, or debugging logic without changing a single line of your pipeline code. The pipeline remains clean and focused on its core logic, while hooks handle the cross-cutting concerns like logging, monitoring, and debugging. 
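For instance, an observer hook can be as small as the sketch below: a plain function with the `(step_idx, transition)` signature that only records information and never modifies the data. This mirrors the registration pattern used throughout this guide; `processor` is assumed to be a pipeline you have already built and `input_data` a batch it accepts, neither is defined here.

```python
# Illustrative only: `processor` and `input_data` come from your own setup.
step_counts: dict[int, int] = {}

def count_steps(step_idx: int, transition) -> None:
    """An observer hook: it only records which steps ran, without touching the transition."""
    step_counts[step_idx] = step_counts.get(step_idx, 0) + 1

processor.register_after_step_hook(count_steps)
output = processor(input_data)  # hooks fire automatically on every step of this call
print(step_counts)              # e.g. {0: 1, 1: 1, 2: 1} for a three-step pipeline
processor.unregister_after_step_hook(count_steps)  # remove it once you're done debugging
```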
- -### Before vs After Hooks - -The pipeline supports two types of hooks: - -- **Before hooks** (`register_before_step_hook`) - Called before each step executes -- **After hooks** (`register_after_step_hook`) - Called after each step completes - -```python -def before_hook(step_idx: int, transition: EnvTransition): - """Called before step processes the transition.""" - print(f"About to execute step {step_idx}") - # Useful for: logging, validation, setup - -def after_hook(step_idx: int, transition: EnvTransition): - """Called after step has processed the transition.""" - print(f"Completed step {step_idx}") - # Useful for: monitoring results, cleanup, debugging - -processor.register_before_step_hook(before_hook) -processor.register_after_step_hook(after_hook) -``` - -### Implementing a NaN Detection Hook - -Here's a practical example of a hook that detects NaN values: - -```python -def check_nans(step_idx: int, transition: EnvTransition): - """Check for NaN values in observations.""" - obs = transition.get(TransitionKey.OBSERVATION) - if obs: - for key, value in obs.items(): - if isinstance(value, torch.Tensor) and torch.isnan(value).any(): - print(f"NaN detected in {key} at step {step_idx}") - -# Register the hook to run after each step -processor.register_after_step_hook(check_nans) - -# Process your data - the hook will be called automatically -output = processor(input_data) - -# Remove the hook when done debugging -processor.unregister_after_step_hook(check_nans) -``` - -### How Hooks Work Internally - -Understanding the internal mechanism helps you use hooks more effectively. The pipeline maintains two separate lists: one for before-step hooks and another for after-step hooks. When you register a hook, it's simply appended to the appropriate list. - -During execution, the pipeline follows a strict sequence: for each processing step, it first calls all before-hooks in registration order, then executes the actual step transformation, and finally calls all after-hooks in registration order. This creates a predictable, sandwich-like structure around each step. - -The key insight is that hooks don't change the core pipeline logic—they're purely additive. The pipeline's `_forward` method orchestrates this dance between hooks and processing steps, ensuring that your debugging or monitoring code runs at exactly the right moments without interfering with the main data flow. - -Here's a simplified view of how the pipeline executes hooks: - -```python -class DataProcessorPipeline: - def __init__(self): - self.steps = [...] - self.before_step_hooks = [] # List of before hooks - self.after_step_hooks = [] # List of after hooks - - def _forward(self, transition): - """Internal method that processes the transition through all steps.""" - for step_idx, processor_step in enumerate(self.steps): - # 1. Call all BEFORE hooks - for hook in self.before_step_hooks: - hook(step_idx, transition) - - # 2. Execute the actual processing step - transition = processor_step(transition) - - # 3. Call all AFTER hooks - for hook in self.after_step_hooks: - hook(step_idx, transition) - - return transition - - def register_before_step_hook(self, hook_fn): - self.before_step_hooks.append(hook_fn) - - def register_after_step_hook(self, hook_fn): - self.after_step_hooks.append(hook_fn) -``` - -### Execution Flow - -The execution flow looks like this: - -``` -Input → Before Hook → Step 0 → After Hook → Before Hook → Step 1 → After Hook → ... 
→ Output -``` - -For example, with 3 steps and both hook types: - -```python -def timing_before(step_idx, transition): - print(f"⏱️ Starting step {step_idx}") - -def validation_after(step_idx, transition): - print(f"✅ Completed step {step_idx}") - -processor.register_before_step_hook(timing_before) -processor.register_after_step_hook(validation_after) - -# This will output: -# ⏱️ Starting step 0 -# ✅ Completed step 0 -# ⏱️ Starting step 1 -# ✅ Completed step 1 -# ⏱️ Starting step 2 -# ✅ Completed step 2 -``` - -### Multiple Hooks - -You can register multiple hooks of the same type - they execute in the order registered: - -```python -def log_shapes(step_idx: int, transition: EnvTransition): - obs = transition.get(TransitionKey.OBSERVATION) - if obs: - print(f"Step {step_idx} observation shapes:") - for key, value in obs.items(): - if isinstance(value, torch.Tensor): - print(f" {key}: {value.shape}") - -processor.register_after_step_hook(check_nans) # Executes first -processor.register_after_step_hook(log_shapes) # Executes second - -# Both hooks will be called after each step in registration order -output = processor(input_data) -``` - -While hooks are excellent for monitoring specific issues (like NaN detection) or gathering metrics during normal pipeline execution, sometimes you need to dive deeper. When you want to understand exactly what happens at each step or debug complex transformation logic, step-through debugging provides the detailed inspection you need. - -## Step-Through Debugging - -Step-through debugging is like having a slow-motion replay for your pipeline. Instead of watching your data get transformed in one quick blur from input to output, you can pause and examine what happens after each individual step. - -This approach is particularly valuable when you're trying to understand a complex pipeline, debug unexpected behavior, or verify that each transformation is working as expected. Unlike hooks, which are great for automated monitoring, step-through debugging gives you manual, interactive control over the inspection process. - -The `step_through()` method is a generator that yields the transition state after each processing step, allowing you to inspect intermediate results. Think of it as creating a series of snapshots of your data as it flows through the pipeline—each snapshot shows you exactly what your data looks like after one more transformation has been applied. - -### How Step-Through Works - -The `step_through()` method fundamentally changes how the pipeline executes. Instead of running all steps in sequence and only returning the final result, it transforms the pipeline into an iterator that yields intermediate results. - -Here's what happens internally: the method starts by converting your input data into the pipeline's internal transition format, then yields this initial state. Next, it applies the first processing step and yields the result. Then it applies the second step to that result and yields again, and so on. Each `yield` gives you a complete snapshot of the transition at that point. - -This generator pattern is powerful because it's lazy—the pipeline only computes the next step when you ask for it. This means you can stop at any point, inspect the current state thoroughly, and decide whether to continue. You're not forced to run the entire pipeline just to debug one problematic step. 
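
Here is a simplified view of how such a generator could be structured, in the same spirit as the `_forward` sketch shown earlier. This is not the actual implementation: the helper that converts raw input into a transition is hypothetical, and details such as whether the initial, pre-step transition is also yielded depend on the real method.

```python
class DataProcessorPipeline:
    def __init__(self, steps):
        self.steps = steps  # same structure as in the earlier sketch

    def step_through(self, data):
        """Lazily yield the transition as it flows through the pipeline."""
        transition = self._to_transition(data)  # hypothetical helper: raw input -> transition format
        yield transition  # initial state, before any step has run

        for processor_step in self.steps:
            transition = processor_step(transition)
            yield transition  # snapshot after this step
```

Because each `yield` hands control back to your loop, the next step only runs when you ask for the next item, which is what makes stopping early and inspecting intermediate states cheap.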
- -Instead of running the entire pipeline and only seeing the final result, `step_through()` pauses after each step and gives you the intermediate transition: - -```python -# This creates a generator that yields intermediate states -for i, intermediate_result in enumerate(processor.step_through(input_data)): - print(f"=== After step {i} ===") - - # Inspect the observation at this stage - obs = intermediate_result.get(TransitionKey.OBSERVATION) - if obs: - for key, value in obs.items(): - if isinstance(value, torch.Tensor): - print(f"{key}: shape={value.shape}, dtype={value.dtype}") -``` - -### Interactive Debugging with Breakpoints - -You can add breakpoints in the step-through loop to interactively debug: - -```python -# Step through the pipeline with debugging -for i, intermediate in enumerate(processor.step_through(data)): - print(f"Step {i}: {processor.steps[i].__class__.__name__}") - - # Set a breakpoint to inspect the current state - breakpoint() # Debugger will pause here - - # You can now inspect 'intermediate' in the debugger: - # - Check tensor shapes and values - # - Verify expected transformations - # - Look for unexpected changes -``` - -During the debugger session, you can: - -- Examine `intermediate[TransitionKey.OBSERVATION]` to see observation data -- Check `intermediate[TransitionKey.ACTION]` for action transformations -- Inspect any part of the transition to understand what each step does - -Step-through debugging is perfect for understanding the _data_ transformations, but what about the _structure_ of that data? While hooks and step-through help you debug runtime behavior, you also need to ensure your pipeline produces data in the format expected by downstream components. This is where feature contract validation comes in. - -## Validating Feature Contracts - -Feature contracts define what data structure your pipeline expects as input and produces as output. -Validating these contracts helps catch mismatches early. 
- -### Understanding Feature Contracts - -Each processor step has a `transform_features()` method that describes how it changes the data structure: - -```python -# Get the expected output features from your pipeline -initial_features = { - PipelineFeatureType.OBSERVATION: { - "observation.state": PolicyFeature(type=FeatureType.STATE, shape=(7,)), - "observation.image": PolicyFeature(type=FeatureType.IMAGE, shape=(3, 224, 224)) - }, - PipelineFeatureType.ACTION: { - "action": PolicyFeature(type=FeatureType.ACTION, shape=(4,)) - } -} - -# Check what your pipeline will output -output_features = processor.transform_features(initial_features) - -print("Input features:") -for feature_type, features in initial_features.items(): - print(f" {feature_type}:") - for key, feature in features.items(): - print(f" {key}: {feature.type.value}, shape={feature.shape}") - -print("\nOutput features:") -for feature_type, features in output_features.items(): - print(f" {feature_type}:") - for key, feature in features.items(): - print(f" {key}: {feature.type.value}, shape={feature.shape}") -``` - -### Verifying Expected Features - -Check that your pipeline produces the features you expect: - -```python -# Define what features you expect the pipeline to produce -expected_keys = ["observation.state", "observation.image", "action"] - -print("Validating feature contract...") -for expected_key in expected_keys: - found = False - for feature_type, features in output_features.items(): - if expected_key in features: - feature = features[expected_key] - print(f"✅ {expected_key}: {feature.type.value}, shape={feature.shape}") - found = True - break - - if not found: - print(f"❌ Missing expected feature: {expected_key}") -``` - -This validation helps ensure your pipeline will work correctly with downstream components that expect specific data structures. - -## Summary - -Now that you understand the three debugging approaches, you can tackle any pipeline issue systematically: - -1. **Hooks** - For runtime monitoring and validation without modifying pipeline code -2. **Step-through** - For inspecting intermediate states and understanding transformations -3. **Feature validation** - For ensuring data structure contracts are met - -**When to use each approach:** - -- Start with **step-through debugging** when you need to understand what your pipeline does or when something unexpected happens -- Add **hooks** for continuous monitoring during development and production to catch issues automatically -- Use **feature validation** before deployment to ensure your pipeline works with downstream components - -These three tools work together to give you the complete observability that complex pipelines naturally lack. With hooks watching for issues, step-through helping you understand behavior, and feature validation ensuring compatibility, you'll be able to debug any pipeline confidently and efficiently. diff --git a/lerobot/docs/source/earthrover_mini_plus.mdx b/lerobot/docs/source/earthrover_mini_plus.mdx deleted file mode 100644 index a05ec46f6c29d929c612c23b0d109de49f68ae6f..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/earthrover_mini_plus.mdx +++ /dev/null @@ -1,225 +0,0 @@ -# EarthRover Mini Plus - -The EarthRover Mini Plus is a fully open source mobile robot that connects through the cloud using the Frodobots SDK. This lets you control the robot and record datasets for training AI models. 
- -## What You Need - -### Hardware - -- EarthRover Mini robot -- Computer with Python 3.10 or newer -- Internet connection - -### Setting Up the Frodobots SDK - -The robot needs the [Frodobots SDK](https://github.com/frodobots-org/earth-rovers-sdk) running on your computer. Here's how: - -1. Download and install the SDK: - -```bash -git clone https://github.com/frodobots-org/earth-rovers-sdk.git -cd earth-rovers-sdk -pip install -r requirements.txt -``` - -2. Save Credentials: - -Write your .env variables with the SDK API key and bot name provided by the Frodobots team. - -```bash -SDK_API_TOKEN=your_sdk_api_token_here -BOT_SLUG=your_bot_slug_here -CHROME_EXECUTABLE_PATH=/path/to/chrome_or_chromium -# Default value is MAP_ZOOM_LEVEL=18 https://wiki.openstreetmap.org/wiki/Zoom_levels -MAP_ZOOM_LEVEL=18 -MISSION_SLUG=your_mission_slug_here -# Image quality between 0.1 and 1.0 (default: 0.8) -# Recommended: 0.8 for better performance -IMAGE_QUALITY=0.8 -# Image format: jpeg, png or webp (default: png) -# Recommended: jpeg for better performance and lower bandwidth usage -IMAGE_FORMAT=jpeg -``` - -3. Start the SDK: - -```bash -hypercorn main:app --reload -``` - -4. Open your web browser and go to `http://localhost:8000`, then click "Join" - -The SDK gives you: - -- Live video from front and rear cameras - -> [!IMPORTANT] -> The SDK must be running before you can use the robot. - -## Install LeRobot - -Follow our [Installation Guide](./installation) to install LeRobot. - -In addition to the base installation, install the EarthRover Mini dependencies: - -```bash -pip install -e . -``` - -## How It Works - -The robot uses the internet to communicate: - -- **Movement commands**: Sent through the SDK -- **Camera video**: Received from the SDK -- **Robot info**: Battery, location, speed from the SDK - -You don't need to plug anything in - it all works through the SDK. - -## Calibration - -No calibration needed! The robot is ready to use as soon as the SDK is running. - -## Controlling the Robot - -You control the robot using your keyboard - just like playing a video game with WASD keys. 
- -### Keyboard Controls - -| Key | Action | -| --- | -------------------------------- | -| W | Move forward | -| S | Move backward | -| A | Turn left (with forward motion) | -| D | Turn right (with forward motion) | -| Q | Rotate left in place | -| E | Rotate right in place | -| X | Stop all movement | -| +/= | Increase speed | -| - | Decrease speed | -| ESC | Disconnect | - -### Speed Settings - -You can adjust how fast the robot moves: - -- **Forward/backward speed**: Default is full speed (1.0) -- **Turning speed**: Default is full speed (1.0) -- **Speed changes**: Use +/- keys to adjust by 0.1 each time - -### Try It Out - -Test driving the robot before recording data: - -```python -from lerobot.robots.earthrover_mini_plus import EarthRoverMiniPlus, EarthRoverMiniPlusConfig -from lerobot.teleoperators.keyboard import KeyboardRoverTeleop, KeyboardRoverTeleopConfig - -# Initialize robot -robot_config = EarthRoverMiniPlusConfig() -robot = EarthRoverMiniPlus(robot_config) - -# Initialize teleoperator -teleop_config = KeyboardRoverTeleopConfig( - linear_speed=1.0, - angular_speed=1.0, - speed_increment=0.1 -) -teleop = KeyboardRoverTeleop(teleop_config) - -# Connect -robot.connect() -teleop.connect() - -# Teleoperate (use keyboard controls) -try: - while True: - action = teleop.get_action() - robot.send_action(action) -except KeyboardInterrupt: - pass -finally: - robot.disconnect() - teleop.disconnect() -``` - -> [!TIP] -> If you're using a Mac, you might need to give Terminal permission to access your keyboard for teleoperation. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal. - -## Recording Data - -Once you can drive the robot well, you can start recording data to train AI models. The system records: - -- **What you do**: How you move the robot (forward, backward, turning) -- **What the robot sees**: - - Videos from both cameras - - Robot speed and direction - - Battery level and location - - GPS position and signal - - Other sensor data -- **When it happened**: Timestamps for everything - -### Setting Up Hugging Face - -We use Hugging Face to store your data online. First, log in with your token from [Hugging Face settings](https://huggingface.co/settings/tokens): - -```bash -huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential -``` - -Store your Hugging Face username: - -```bash -HF_USER=$(huggingface-cli whoami | head -n 1) -echo $HF_USER -``` - -### Start Recording - -Use the standard recording command: - -```bash -python src/lerobot/scripts/lerobot_record.py \ - --robot.type=earthrover_mini_plus \ - --teleop.type=keyboard_rover \ - --dataset.repo_id=your_username/dataset_name \ - --dataset.num_episodes=2 \ - --dataset.fps=10 \ - --dataset.single_task="Navigate around obstacles" \ - --display_data=true -``` - -Replace `your_username/dataset_name` with your Hugging Face username and a name for your dataset. 
- -### What Gets Saved - -Your dataset includes: - -**Your Actions (2 things)**: - -- How much you moved forward/backward -- How much you turned left/right - -**Robot Observations (12 things)**: - -- Front camera video -- Rear camera video -- Current speed -- Battery level -- Which way the robot is facing -- GPS location (latitude, longitude, signal strength) -- Network signal strength -- Vibration level -- Lamp status (on/off) - -### Where Your Data Goes - -On your computer: `~/.cache/huggingface/lerobot/{repo-id}` - -After recording, your data automatically uploads to your Hugging Face page: - -```bash -echo https://huggingface.co/datasets/${HF_USER}/earthrover-navigation -``` - -Your dataset will be tagged with `LeRobot` for community discovery. diff --git a/lerobot/docs/source/env_processor.mdx b/lerobot/docs/source/env_processor.mdx deleted file mode 100644 index 7f5cee2832b1311f70fa6833b4d15a2effbce904..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/env_processor.mdx +++ /dev/null @@ -1,418 +0,0 @@ -# Environment Processors - -Environment processors are a critical layer in LeRobot's data processing architecture that handle **environment-specific** transformations, separate from policy-specific processing. This separation of concerns enables cleaner code, better modularity, and easier experimentation with different environments and policies. - -## Why Environment Processors? - -When working with different robot environments (LIBERO, MetaWorld, Aloha, etc.), each environment often has unique data formats, coordinate systems, and conventions that need standardization **before** policy processing. Without environment processors, these transformations would be: - -1. **Hardcoded in environment code** - Making it difficult to experiment with different state representations -2. **Duplicated across policies** - Each policy would need to handle environment-specific quirks -3. **Mixed with policy logic** - Violating separation of concerns and making debugging harder - -Environment processors solve this by providing a **dedicated processing layer** between raw environment observations and policy inputs. - -## The Processing Pipeline - -Here's how data flows through the complete processing pipeline during evaluation: - -```python -# In lerobot_eval.py rollout() function: - -# 1. Raw environment observation (numpy arrays, various formats) -raw_observation = env.step(action) - -# 2. Convert numpy to torch, normalize images [0,1] -observation = preprocess_observation(raw_observation) - -# 3. Add task metadata (for multi-task environments) -observation = add_envs_task(env, observation) - -# 4. ENVIRONMENT-SPECIFIC preprocessing (NEW!) -# - Flatten robot states -# - Rotate images to match dataset conventions -# - Handle environment-specific coordinate systems -observation = env_preprocessor(observation) - -# 5. POLICY-SPECIFIC preprocessing -# - Normalize with dataset statistics -# - Add batch dimensions -# - Move to GPU -# - Tokenize language instructions -observation = preprocessor(observation) - -# 6. Policy inference -action = policy.select_action(observation) - -# 7. POLICY-SPECIFIC postprocessing -# - Unnormalize actions -# - Remove batch dimensions -action = postprocessor(action) - -# 8. ENVIRONMENT-SPECIFIC postprocessing (NEW!) -# - Convert action formats if needed -# - Apply environment-specific constraints -action_transition = {"action": action} -action_transition = env_postprocessor(action_transition) -action = action_transition["action"] - -# 9. 
Execute in environment -env.step(action) -``` - -## The Benefits - -### 1. **Separation of Concerns** - -Environment processors handle transformations specific to the **environment's data format**, while policy processors handle transformations specific to the **model's requirements**. - -```python -# ❌ Before: Mixed concerns -class LiberoVLAPolicy: - def preprocess(self, obs): - # Environment-specific: Flatten robot state (shouldn't be in policy!) - state = self._flatten_robot_state(obs["robot_state"]) - # Policy-specific: Normalize with dataset stats - state = self.normalizer(state) - return state - -# ✅ After: Clear separation -# Environment processor: Handles LIBERO's nested robot state -env_preprocessor = LiberoProcessorStep() # Flattens robot_state - -# Policy processor: Handles model requirements -policy_preprocessor = NormalizerProcessorStep(stats=dataset_stats) -``` - -### 2. **Flexibility and Reusability** - -The same policy can work with different environment processors, and the same environment processor can work with different policies: - -```python -# Use SmolVLA policy with LIBERO environment -libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg) -smolvla_preprocessor, smolvla_postprocessor = make_pre_post_processors(smolvla_cfg) - -# Or use ACT policy with the same LIBERO environment -libero_preprocessor, libero_postprocessor = make_env_pre_post_processors(libero_cfg) -act_preprocessor, act_postprocessor = make_pre_post_processors(act_cfg) -``` - -### 3. **Easier Experimentation** - -Want to try different state representations for LIBERO? Just create a new processor: - -```python -# Original: 8D state (pos + quat→axisangle + gripper) -@ProcessorStepRegistry.register("libero_processor") -class LiberoProcessorStep(ObservationProcessorStep): - def _process_observation(self, obs): - eef_pos = robot_state["eef"]["pos"] # 3D - eef_axisangle = quat2axisangle(quat) # 3D - gripper = robot_state["gripper"]["qpos"] # 2D - state = torch.cat([eef_pos, eef_axisangle, gripper], dim=-1) # 8D - return state - -# Experiment: Add velocity for better control -@ProcessorStepRegistry.register("libero_velocity_processor") -class LiberoVelocityProcessorStep(ObservationProcessorStep): - def _process_observation(self, obs): - # Include velocities for 14D state - eef_pos = robot_state["eef"]["pos"] # 3D - eef_axisangle = quat2axisangle(quat) # 3D - eef_vel = robot_state["eef"]["vel"] # 3D (NEW) - gripper_pos = robot_state["gripper"]["qpos"] # 2D - gripper_vel = robot_state["gripper"]["qvel"] # 3D (NEW) - state = torch.cat([eef_pos, eef_axisangle, eef_vel, - gripper_pos, gripper_vel], dim=-1) # 14D - return state -``` - -### 4. 
**Cleaner Environment Code** - -Environments expose **all available data** without needing to know what downstream models will use: - -```python -# LIBERO environment exposes full robot state -observation = { - "pixels": {"image": img, "image2": img2}, - "robot_state": { - "eef": {"pos": ..., "quat": ..., "vel": ..., "mat": ..., "axisangle": ...}, - "gripper": {"qpos": ..., "qvel": ...}, - "joints": {"pos": ..., "vel": ...} - } -} - -# Environment processor decides what to use -# Policy processor handles model-specific transformations -``` - -## Using Environment Processors - -### Factory Function - -The `make_env_pre_post_processors` function follows the same pattern as `make_pre_post_processors` for policies: - -```python -from lerobot.envs.factory import make_env_pre_post_processors -from lerobot.envs.configs import LiberoEnv, PushtEnv - -# For LIBERO: Returns LiberoProcessorStep in preprocessor -libero_cfg = LiberoEnv(task="libero_spatial", camera_name=["agentview"]) -env_preprocessor, env_postprocessor = make_env_pre_post_processors(libero_cfg) - -# For other environments: Returns identity processors (no-op) -pusht_cfg = PushtEnv() -env_preprocessor, env_postprocessor = make_env_pre_post_processors(pusht_cfg) -``` - -### Implementation in `envs/factory.py` - -```python -def make_env_pre_post_processors( - env_cfg: EnvConfig, -) -> tuple[ - PolicyProcessorPipeline[dict[str, Any], dict[str, Any]], - PolicyProcessorPipeline[dict[str, Any], dict[str, Any]], -]: - """ - Create preprocessor and postprocessor pipelines for environment observations. - - Args: - env_cfg: The configuration of the environment. - - Returns: - A tuple containing: - - preprocessor: Pipeline that processes environment observations - - postprocessor: Pipeline that processes environment outputs - """ - # For LIBERO environments, add the LiberoProcessorStep to preprocessor - if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type: - preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()]) - else: - # For all other environments, return an identity preprocessor - preprocessor = PolicyProcessorPipeline(steps=[]) - - # Postprocessor is currently identity for all environments - # Future: Could add environment-specific action transformations - postprocessor = PolicyProcessorPipeline(steps=[]) - - return preprocessor, postprocessor -``` - -### Integration in Evaluation - -In `lerobot_eval.py`, the environment processors are created once and used throughout: - -```python -def eval_main(cfg: EvalPipelineConfig): - # Create environment - envs = make_env(cfg.env, n_envs=cfg.eval.batch_size) - - # Create policy - policy = make_policy(cfg=cfg.policy, env_cfg=cfg.env) - - # Create policy processors - preprocessor, postprocessor = make_pre_post_processors( - policy_cfg=cfg.policy, - pretrained_path=cfg.policy.pretrained_path, - ) - - # Create environment processors (NEW!) 
- env_preprocessor, env_postprocessor = make_env_pre_post_processors(env_cfg=cfg.env) - - # Run evaluation with both processor types - eval_policy_all( - envs=envs, - policy=policy, - env_preprocessor=env_preprocessor, # Environment-specific - env_postprocessor=env_postprocessor, # Environment-specific - preprocessor=preprocessor, # Policy-specific - postprocessor=postprocessor, # Policy-specific - n_episodes=cfg.eval.n_episodes, - ) -``` - -## Example: LIBERO Environment Processor - -The `LiberoProcessorStep` demonstrates a real-world environment processor: - -```python -from lerobot.processor.pipeline import ObservationProcessorStep - -@dataclass -@ProcessorStepRegistry.register(name="libero_processor") -class LiberoProcessorStep(ObservationProcessorStep): - """ - Processes LIBERO observations into the LeRobot format. - - **State Processing:** - - Extracts end-effector position (3D) - - Converts quaternion to axis-angle representation (3D) - - Extracts gripper joint positions (2D) - - Concatenates into 8D state vector - - **Image Processing:** - - Rotates images 180° to match HuggingFaceVLA/libero convention - """ - - def _process_observation(self, observation): - processed_obs = observation.copy() - - # Process images: Flip 180° for camera convention - for key in list(processed_obs.keys()): - if key.startswith("observation.images."): - img = processed_obs[key] - img = torch.flip(img, dims=[2, 3]) # Flip H and W - processed_obs[key] = img - - # Process robot_state: Flatten to 8D vector - if "observation.robot_state" in processed_obs: - robot_state = processed_obs.pop("observation.robot_state") - - eef_pos = robot_state["eef"]["pos"] # (B, 3) - eef_quat = robot_state["eef"]["quat"] # (B, 4) - gripper_qpos = robot_state["gripper"]["qpos"] # (B, 2) - - # Convert quaternion to axis-angle - eef_axisangle = self._quat2axisangle(eef_quat) # (B, 3) - - # Concatenate into single state vector - state = torch.cat((eef_pos, eef_axisangle, gripper_qpos), dim=-1) - state = state.float() - - processed_obs["observation.state"] = state - - return processed_obs -``` - -### Why These Transformations? - -1. **Image Rotation**: The HuggingFaceVLA/libero dataset has images rotated 180° from the raw LIBERO simulator. The processor handles this convention mismatch so policies trained on the dataset work seamlessly. - -2. **State Flattening**: The raw LIBERO environment exposes nested dictionaries with all available state information (position, quaternion, velocity, matrix representation, etc.). The processor: - - Selects the relevant components (pos, quat, gripper) - - Converts quaternion to axis-angle (more suitable for learning) - - Flattens to a single 8D vector that policies expect - -3. **Flexibility**: The environment still exposes **all** raw data. If you want to try different state representations (e.g., including velocities, using matrix representation instead of axis-angle), you can create a new processor without modifying the environment code. - -## Adding Environment Processors for New Environments - -To add environment processors for a new environment: - -### 1. 
Create the Processor Step - -```python -# In src/lerobot/processor/env_processor.py - -@dataclass -@ProcessorStepRegistry.register(name="myenv_processor") -class MyEnvProcessorStep(ObservationProcessorStep): - """Process observations from MyEnv.""" - - def _process_observation(self, observation): - processed = observation.copy() - - # Your environment-specific transformations - if "myenv.specific.state" in processed: - state = processed.pop("myenv.specific.state") - # Transform to standard format - processed["observation.state"] = self._transform_state(state) - - return processed -``` - -### 2. Update the Factory - -```python -# In src/lerobot/envs/factory.py - -def make_env_pre_post_processors(env_cfg: EnvConfig): - if isinstance(env_cfg, LiberoEnv) or "libero" in env_cfg.type: - preprocessor = PolicyProcessorPipeline(steps=[LiberoProcessorStep()]) - elif isinstance(env_cfg, MyEnvConfig) or "myenv" in env_cfg.type: - preprocessor = PolicyProcessorPipeline(steps=[MyEnvProcessorStep()]) - else: - preprocessor = PolicyProcessorPipeline(steps=[]) - - postprocessor = PolicyProcessorPipeline(steps=[]) - return preprocessor, postprocessor -``` - -### 3. Use in Evaluation - -No changes needed! The evaluation script automatically uses the appropriate processor: - -```bash -lerobot-eval \ - --policy.path=lerobot/my_policy \ - --env.type=myenv \ # Automatically uses MyEnvProcessorStep - --eval.n_episodes=10 -``` - -## Future: Environment Postprocessors - -Currently, postprocessors are identity (no-op) for all environments. Future use cases include: - -### Action Space Transformations - -```python -@dataclass -class MyEnvActionPostprocessor(ProcessorStep): - """Convert policy actions to environment-specific format.""" - - def __call__(self, transition: EnvTransition) -> EnvTransition: - action = transition["action"] - - # Example: Convert from Cartesian to joint space - if self.action_space == "joint": - action = self.ik_solver(action) - - # Example: Apply environment-specific safety limits - action = torch.clamp(action, self.min_action, self.max_action) - - transition["action"] = action - return transition -``` - -### Coordinate System Conversions - -```python -@dataclass -class CoordinateTransformPostprocessor(ProcessorStep): - """Transform actions between coordinate systems.""" - - def __call__(self, transition: EnvTransition) -> EnvTransition: - action = transition["action"] - - # Example: Policy outputs in world frame, env expects base frame - action = self.world_to_base_transform(action) - - transition["action"] = action - return transition -``` - -## Best Practices - -1. **Keep environment processors simple**: They should only handle environment-specific data format issues, not complex learning-related transformations. - -2. **Use policy processors for model requirements**: Normalization, batching, device placement, and tokenization belong in policy processors. - -3. **Expose all data from environments**: Let processors decide what to use rather than hardcoding choices in the environment. - -4. **Document conventions**: Clearly document any coordinate system conventions, camera orientations, or data formats that your processor handles. - -5. **Test independently**: Environment processors should be testable without loading full policies or environments. - -## Summary - -Environment processors provide a **clean separation** between environment-specific data transformations and policy-specific model requirements. 
This architecture: - -- ✅ Enables easy experimentation with different state representations -- ✅ Allows policies to work seamlessly across different environments -- ✅ Keeps environment code focused on simulation/hardware interface -- ✅ Makes processor pipelines more maintainable and debuggable -- ✅ Follows the single responsibility principle - -The key insight: **Environments define data formats, processors standardize them, policies consume standardized data.** Each layer has a clear, focused responsibility. diff --git a/lerobot/docs/source/envhub.mdx b/lerobot/docs/source/envhub.mdx deleted file mode 100644 index f19aef6c67ac036a98b65005531976a211864833..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/envhub.mdx +++ /dev/null @@ -1,431 +0,0 @@ -# Loading Environments from the Hub - -The **EnvHub** feature allows you to load simulation environments directly from the Hugging Face Hub with a single line of code. This unlocks a powerful new model for collaboration: instead of environments being locked away inside monolithic libraries, anyone can publish custom environments and share them with the community. - -## What is EnvHub? - -EnvHub lets you create custom robotics simulation environments with your own robot models and scenarios, and make them easily usable by anyone through the LeRobot framework. - -EnvHub packages are stored on the Hugging Face Hub, and can be seamlessly pulled and used in your AI robotics projects through LeRobot with a single line of code. - -Thanks to EnvHub, you can: - -1. **Create and publish environments** to the Hugging Face Hub as Git repositories, and distribute complex physics simulations without packaging hassles -2. **Load environments** dynamically, without installing them as packages -3. **Version and track** environment changes using Git semantics -4. **Discover** new simulation tasks shared by the community - -This design means you can go from discovering an interesting environment on the Hub to running experiments in seconds, or create your own custom robot and environment without worrying about dependency conflicts or complex installation procedures. - -When you create an EnvHub package, you can build anything you want inside it and use any simulation tool you like: this is your own space to play with. The only requirement is that the package contains an `env.py` file that defines the environment and allows LeRobot to load and use your EnvHub package. - -This `env.py` file needs to expose a small API so LeRobot can load and run it. In particular, you must provide a `make_env(n_envs: int = 1, use_async_envs: bool = False)` or `make_env(n_envs: int = 1, use_async_envs: bool = False, cfg: EnvConfig)` function, which is the main entry point for LeRobot. It should return one of: - -- A `gym.vector.VectorEnv` (most common) -- A single `gym.Env` (will be automatically wrapped) -- A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks) - -You can also pass an `EnvConfig` object to `make_env` to configure the environment (e.g. the number of environments, task, camera name, initial states, control mode, episode length, etc.). - -Finally, your environment must implement the standard `gym.vector.VectorEnv` interface so it works with LeRobot, including methods like `reset` and `step`. 
- -## Quick Start - -Loading an environment from the Hub is as simple as: - -```python -from lerobot.envs.factory import make_env - -# Load a hub environment (requires explicit consent to run remote code) -env = make_env("lerobot/cartpole-env", trust_remote_code=True) -``` - - - **Security Notice**: Loading environments from the Hub executes Python code - from third-party repositories. Only use `trust_remote_code=True` with - repositories you trust. We strongly recommend pinning to a specific commit - hash for reproducibility and security. - - -## Repository Structure - -To make your environment loadable from the Hub, your repository must contain at minimum: - -### Required Files - -**`env.py`** (or custom Python file) - -- Must expose a `make_env(n_envs: int, use_async_envs: bool)` function -- This function should return one of: - - A `gym.vector.VectorEnv` (most common) - - A single `gym.Env` (will be automatically wrapped) - - A dict mapping `{suite_name: {task_id: VectorEnv}}` (for multi-task benchmarks) - -### Optional Files - -**`requirements.txt`** - -- List any additional dependencies your environment needs -- Users will need to install these manually before loading your environment - -**`README.md`** - -- Document your environment: what task it implements, observation/action spaces, rewards, etc. -- Include usage examples and any special setup instructions - -**`.gitignore`** - -- Exclude unnecessary files from your repository - -### Example Repository Structure - -``` -my-environment-repo/ -├── env.py # Main environment definition (required) -├── requirements.txt # Dependencies (optional) -├── README.md # Documentation (recommended) -├── assets/ # Images, videos, etc. (optional) -│ └── demo.gif -└── configs/ # Config files if needed (optional) - └── task_config.yaml -``` - -## Creating Your Environment Repository - -### Step 1: Define Your Environment - -Create an `env.py` file with a `make_env` function: - -```python -# env.py -import gymnasium as gym - -def make_env(n_envs: int = 1, use_async_envs: bool = False): - """ - Create vectorized environments for your custom task. 
- - Args: - n_envs: Number of parallel environments - use_async_envs: Whether to use AsyncVectorEnv or SyncVectorEnv - - Returns: - gym.vector.VectorEnv or dict mapping suite names to vectorized envs - """ - def _make_single_env(): - # Create your custom environment - return gym.make("CartPole-v1") - - # Choose vector environment type - env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv - - # Create vectorized environment - vec_env = env_cls([_make_single_env for _ in range(n_envs)]) - - return vec_env -``` - -### Step 2: Test Locally - -Before uploading, test your environment locally: - -```python -from lerobot.envs.utils import _load_module_from_path, _call_make_env, _normalize_hub_result - -# Load your module -module = _load_module_from_path("./env.py") - -# Test the make_env function -result = _call_make_env(module, n_envs=2, use_async_envs=False) -normalized = _normalize_hub_result(result) - -# Verify it works -suite_name = next(iter(normalized)) -env = normalized[suite_name][0] -obs, info = env.reset() -print(f"Observation shape: {obs.shape if hasattr(obs, 'shape') else type(obs)}") -env.close() -``` - -### Step 3: Upload to the Hub - -Upload your repository to Hugging Face: - -```bash -# Install huggingface_hub if needed -pip install huggingface_hub - -# Login to Hugging Face -huggingface-cli login - -# Create a new repository -huggingface-cli repo create my-custom-env --type space --org my-org - -# Initialize git and push -git init -git add . -git commit -m "Initial environment implementation" -git remote add origin https://huggingface.co/my-org/my-custom-env -git push -u origin main -``` - -Alternatively, use the `huggingface_hub` Python API: - -```python -from huggingface_hub import HfApi - -api = HfApi() - -# Create repository -api.create_repo("my-custom-env", repo_type="space") - -# Upload files -api.upload_folder( - folder_path="./my-env-folder", - repo_id="username/my-custom-env", - repo_type="space", -) -``` - -## Loading Environments from the Hub - -### Basic Usage - -```python -from lerobot.envs.factory import make_env - -# Load from the hub -envs_dict = make_env( - "username/my-custom-env", - n_envs=4, - trust_remote_code=True -) - -# Access the environment -suite_name = next(iter(envs_dict)) -env = envs_dict[suite_name][0] - -# Use it like any gym environment -obs, info = env.reset() -action = env.action_space.sample() -obs, reward, terminated, truncated, info = env.step(action) -``` - -### Advanced: Pinning to Specific Versions - -For reproducibility and security, pin to a specific Git revision: - -```python -# Pin to a specific branch -env = make_env("username/my-env@main", trust_remote_code=True) - -# Pin to a specific commit (recommended for papers/experiments) -env = make_env("username/my-env@abc123def456", trust_remote_code=True) - -# Pin to a tag -env = make_env("username/my-env@v1.0.0", trust_remote_code=True) -``` - -### Custom File Paths - -If your environment definition is not in `env.py`: - -```python -# Load from a custom file -env = make_env("username/my-env:custom_env.py", trust_remote_code=True) - -# Combine with version pinning -env = make_env("username/my-env@v1.0:envs/task_a.py", trust_remote_code=True) -``` - -### Async Environments - -For better performance with multiple environments: - -```python -envs_dict = make_env( - "username/my-env", - n_envs=8, - use_async_envs=True, # Use AsyncVectorEnv for parallel execution - trust_remote_code=True -) -``` - -## URL Format Reference - -The hub URL format supports several 
patterns: - -| Pattern | Description | Example | -| -------------------- | ------------------------------ | -------------------------------------- | -| `user/repo` | Load `env.py` from main branch | `make_env("lerobot/pusht-env")` | -| `user/repo@revision` | Load from specific revision | `make_env("lerobot/pusht-env@main")` | -| `user/repo:path` | Load custom file | `make_env("lerobot/envs:pusht.py")` | -| `user/repo@rev:path` | Revision + custom file | `make_env("lerobot/envs@v1:pusht.py")` | - -## Multi-Task Environments - -For benchmarks with multiple tasks (like LIBERO), return a nested dictionary: - -```python -def make_env(n_envs: int = 1, use_async_envs: bool = False): - env_cls = gym.vector.AsyncVectorEnv if use_async_envs else gym.vector.SyncVectorEnv - - # Return dict: {suite_name: {task_id: VectorEnv}} - return { - "suite_1": { - 0: env_cls([lambda: gym.make("Task1-v0") for _ in range(n_envs)]), - 1: env_cls([lambda: gym.make("Task2-v0") for _ in range(n_envs)]), - }, - "suite_2": { - 0: env_cls([lambda: gym.make("Task3-v0") for _ in range(n_envs)]), - } - } -``` - -## Security Considerations - - - **Important**: The `trust_remote_code=True` flag is required to execute - environment code from the Hub. This is by design for security. - - -When loading environments from the Hub: - -1. **Review the code first**: Visit the repository and inspect `env.py` before loading -2. **Pin to commits**: Use specific commit hashes for reproducibility -3. **Check dependencies**: Review `requirements.txt` for suspicious packages -4. **Use trusted sources**: Prefer official organizations or well-known researchers -5. **Sandbox if needed**: Run untrusted code in isolated environments (containers, VMs) - -Example of safe usage: - -```python -# ❌ BAD: Loading without inspection -env = make_env("random-user/untrusted-env", trust_remote_code=True) - -# ✅ GOOD: Review code, then pin to specific commit -# 1. Visit https://huggingface.co/trusted-org/verified-env -# 2. Review the env.py file -# 3. 
Copy the commit hash -env = make_env("trusted-org/verified-env@a1b2c3d4", trust_remote_code=True) -``` - -## Example: CartPole from the Hub - -Here's a complete example using the reference CartPole environment: - -```python -from lerobot.envs.factory import make_env -import numpy as np - -# Load the environment -envs_dict = make_env("lerobot/cartpole-env", n_envs=4, trust_remote_code=True) - -# Get the vectorized environment -suite_name = next(iter(envs_dict)) -env = envs_dict[suite_name][0] - -# Run a simple episode -obs, info = env.reset() -done = np.zeros(env.num_envs, dtype=bool) -total_reward = np.zeros(env.num_envs) - -while not done.all(): - # Random policy - action = env.action_space.sample() - obs, reward, terminated, truncated, info = env.step(action) - total_reward += reward - done = terminated | truncated - -print(f"Average reward: {total_reward.mean():.2f}") -env.close() -``` - -## Benefits of EnvHub - -### For Environment Authors - -- **Easy distribution**: No PyPI packaging required -- **Version control**: Use Git for environment versioning -- **Rapid iteration**: Push updates instantly -- **Documentation**: Hub README renders beautifully -- **Community**: Reach LeRobot users directly - -### For Researchers - -- **Quick experiments**: Load any environment in one line -- **Reproducibility**: Pin to specific commits -- **Discovery**: Browse environments on the Hub -- **No conflicts**: No need to install conflicting packages - -### For the Community - -- **Growing ecosystem**: More diverse simulation tasks -- **Standardization**: Common `make_env` API -- **Collaboration**: Fork and improve existing environments -- **Accessibility**: Lower barrier to sharing research - -## Troubleshooting - -### "Refusing to execute remote code" - -You must explicitly pass `trust_remote_code=True`: - -```python -env = make_env("user/repo", trust_remote_code=True) -``` - -### "Module X not found" - -The hub environment has dependencies you need to install: - -```bash -# Check the repo's requirements.txt and install dependencies -pip install gymnasium numpy -``` - -### "make_env not found in module" - -Your `env.py` must expose a `make_env` function: - -```python -def make_env(n_envs: int, use_async_envs: bool): - # Your implementation - pass -``` - -### Environment returns wrong type - -The `make_env` function must return: - -- A `gym.vector.VectorEnv`, or -- A single `gym.Env`, or -- A dict `{suite_name: {task_id: VectorEnv}}` - -## Best Practices - -1. **Document your environment**: Include observation/action space descriptions, reward structure, and termination conditions in your README -2. **Add requirements.txt**: List all dependencies with versions -3. **Test thoroughly**: Verify your environment works locally before pushing -4. **Use semantic versioning**: Tag releases with version numbers -5. **Add examples**: Include usage examples in your README -6. **Keep it simple**: Minimize dependencies when possible -7. 
**License your work**: Add a LICENSE file to clarify usage terms - -## Future Directions - -The EnvHub ecosystem enables exciting possibilities: - -- **GPU-accelerated physics**: Share Isaac Gym or Brax environments -- **Photorealistic rendering**: Distribute environments with advanced graphics -- **Multi-agent scenarios**: Complex interaction tasks -- **Real-world simulators**: Digital twins of physical setups -- **Procedural generation**: Infinite task variations -- **Domain randomization**: Pre-configured DR pipelines - -As more researchers and developers contribute, the diversity and quality of available environments will grow, benefiting the entire robotics learning community. - -## See Also - -- [Hugging Face Hub Documentation](https://huggingface.co/docs/hub/en/index) -- [Gymnasium Documentation](https://gymnasium.farama.org/index.html) -- [Example Hub Environment](https://huggingface.co/lerobot/cartpole-env) diff --git a/lerobot/docs/source/envhub_isaaclab_arena.mdx b/lerobot/docs/source/envhub_isaaclab_arena.mdx deleted file mode 100644 index efeec2bd10c65bc6dc9a034208fd9435b7807214..0000000000000000000000000000000000000000 --- a/lerobot/docs/source/envhub_isaaclab_arena.mdx +++ /dev/null @@ -1,510 +0,0 @@ -# NVIDIA IsaacLab Arena & LeRobot - -LeRobot EnvHub now supports **GPU-accelerated simulation** with IsaacLab Arena for policy evaluation at scale. -Train and evaluate imitation learning policies with high-fidelity simulation — all integrated into the LeRobot ecosystem. - -IsaacLab Arena - GR1 Microwave Environment - -[IsaacLab Arena](https://github.com/isaac-sim/IsaacLab-Arena) integrates with NVIDIA IsaacLab to provide: - -- 🤖 **Humanoid embodiments**: GR1, G1, Galileo with various configurations -- 🎯 **Manipulation & loco-manipulation tasks**: Door opening, pick-and-place, button pressing, and more -- ⚡ **GPU-accelerated rollouts**: Parallel environment execution on NVIDIA GPUs -- 🖼️ **RTX Rendering**: Evaluate vision-based policies with realistic rendering, reflections and refractions -- 📦 **LeRobot-compatible datasets**: Ready for training with GR00T N1x, PI0, SmolVLA, ACT, and Diffusion policies -- 🔄 **EnvHub integration**: Load environments from HuggingFace EnvHub with one line - -## Installation - -### Prerequisites - -Hardware requirements are shared with Isaac Sim, and are detailed in [Isaac Sim Requirements](https://docs.isaacsim.omniverse.nvidia.com/5.1.0/installation/requirements.html). - -- NVIDIA GPU with CUDA support -- NVIDIA driver compatible with IsaacSim 5.1.0 -- Linux (Ubuntu 22.04 / 24.04) - -### Setup - -```bash -# 1. Create conda environment -conda create -y -n lerobot-arena python=3.11 -conda activate lerobot-arena -conda install -y -c conda-forge ffmpeg=7.1.1 - -# 2. Install Isaac Sim 5.1.0 -pip install "isaacsim[all,extscache]==5.1.0" --extra-index-url https://pypi.nvidia.com - -# Accept NVIDIA EULA (required) -export ACCEPT_EULA=Y -export PRIVACY_CONSENT=Y - -# 3. Install IsaacLab 2.3.0 -git clone https://github.com/isaac-sim/IsaacLab.git -cd IsaacLab -git checkout v2.3.0 -./isaaclab.sh -i -cd .. - -# 4. Install IsaacLab Arena -git clone https://github.com/isaac-sim/IsaacLab-Arena.git -cd IsaacLab-Arena -git checkout release/0.1.1 -pip install -e . -cd .. - - -# 5. Install LeRobot -git clone https://github.com/huggingface/lerobot.git -cd lerobot -pip install -e . -cd .. - - -# 6. 
Install additional dependencies -pip install onnxruntime==1.23.2 lightwheel-sdk==1.0.1 vuer[all]==0.0.70 qpsolvers==4.8.1 -pip install numpy==1.26.0 # Isaac Sim 5.1 depends on numpy==1.26.0, this will be fixed in next release -``` - -## Evaluating Policies - -### Pre-trained Policies - -The following trained policies are available: - -| Policy | Architecture | Task | Link | -| :-------------------------- | :----------- | :------------ | :----------------------------------------------------------------------- | -| pi05-arena-gr1-microwave | PI0.5 | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/pi05-arena-gr1-microwave) | -| smolvla-arena-gr1-microwave | SmolVLA | GR1 Microwave | [HuggingFace](https://huggingface.co/nvidia/smolvla-arena-gr1-microwave) | - -### Evaluate SmolVLA - -```bash -pip install -e ".[smolvla]" -pip install numpy==1.26.0 # revert numpy to version 1.26 -``` - -```bash -lerobot-eval \ - --policy.path=nvidia/smolvla-arena-gr1-microwave \ - --env.type=isaaclab_arena \ - --env.hub_path=nvidia/isaaclab-arena-envs \ - --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \ - --policy.device=cuda \ - --env.environment=gr1_microwave \ - --env.embodiment=gr1_pink \ - --env.object=mustard_bottle \ - --env.headless=false \ - --env.enable_cameras=true \ - --env.video=true \ - --env.video_length=10 \ - --env.video_interval=15 \ - --env.state_keys=robot_joint_pos \ - --env.camera_keys=robot_pov_cam_rgb \ - --trust_remote_code=True \ - --eval.batch_size=1 -``` - -### Evaluate PI0.5 - -```bash -pip install -e ".[pi]" -pip install numpy==1.26.0 # revert numpy to version 1.26 -``` - -PI0.5 requires disabling torch compile for evaluation: - -```bash -TORCH_COMPILE_DISABLE=1 TORCHINDUCTOR_DISABLE=1 lerobot-eval \ - --policy.path=nvidia/pi05-arena-gr1-microwave \ - --env.type=isaaclab_arena \ - --env.hub_path=nvidia/isaaclab-arena-envs \ - --rename_map='{"observation.images.robot_pov_cam_rgb": "observation.images.robot_pov_cam"}' \ - --policy.device=cuda \ - --env.environment=gr1_microwave \ - --env.embodiment=gr1_pink \ - --env.object=mustard_bottle \ - --env.headless=false \ - --env.enable_cameras=true \ - --env.video=true \ - --env.video_length=15 \ - --env.video_interval=15 \ - --env.state_keys=robot_joint_pos \ - --env.camera_keys=robot_pov_cam_rgb \ - --trust_remote_code=True \ - --eval.batch_size=1 -``` - - - To change the number of parallel environments, use the ```--eval.batch_size``` - flag. - - -### What to Expect - -During evaluation, you will see a progress bar showing the running success rate: - -``` -Stepping through eval batches: 8%|██████▍ | 4/50 [00:45<08:06, 10.58s/it, running_success_rate=25.0%] -``` - -### Video Recording - -To enable video recording during evaluation, add the following flags to your command: - -```bash ---env.video=true \ ---env.video_length=15 \ ---env.video_interval=15 -``` - -For more details on video recording, see the [IsaacLab Recording Documentation](https://isaac-sim.github.io/IsaacLab/main/source/how-to/record_video.html). 
- - -When running headless with `--env.headless=true`, you must also enable cameras explicitly for camera enabled environments: - -```bash ---env.headless=true --env.enable_cameras=true -``` - - - -### Output Directory - -Evaluation videos are saved to the output directory with the following structure: - -``` -outputs/eval//__/videos/_/eval_episode_.mp4 -``` - -For example: - -``` -outputs/eval/2026-01-02/14-38-01_isaaclab_arena_smolvla/videos/gr1_microwave_0/eval_episode_0.mp4 -``` - -## Training Policies - -To learn more about training policies with LeRobot, please refer to the training documentation: - -- [SmolVLA](./smolvla) -- [Pi0.5](./pi05) -- [GR00T N1.5](./groot) - -Sample IsaacLab Arena datasets are available on HuggingFace Hub for experimentation: - -| Dataset | Description | Frames | -| :-------------------------------------------------------------------------------------------------------- | :------------------------- | :----- | -| [Arena-GR1-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-GR1-Manipulation-Task-v3) | GR1 microwave manipulation | ~4K | -| [Arena-G1-Loco-Manipulation-Task](https://huggingface.co/datasets/nvidia/Arena-G1-Loco-Manipulation-Task) | G1 loco-manipulation | ~4K | - -## Environment Configuration - -### Full Configuration Options - -```python -from lerobot.envs.configs import IsaaclabArenaEnv - -config = IsaaclabArenaEnv( - # Environment selection - environment="gr1_microwave", # Task environment - embodiment="gr1_pink", # Robot embodiment - object="power_drill", # Object to manipulate - - # Simulation settings - episode_length=300, # Max steps per episode - headless=True, # Run without GUI - device="cuda:0", # GPU device - seed=42, # Random seed - - # Observation configuration - state_keys="robot_joint_pos", # State observation keys (comma-separated) - camera_keys="robot_pov_cam_rgb", # Camera observation keys (comma-separated) - state_dim=54, # Expected state dimension - action_dim=36, # Expected action dimension - camera_height=512, # Camera image height - camera_width=512, # Camera image width - enable_cameras=True, # Enable camera observations - - # Video recording - video=False, # Enable video recording - video_length=100, # Frames per video - video_interval=200, # Steps between recordings - - # Advanced - mimic=False, # Enable mimic mode - teleop_device=None, # Teleoperation device - disable_fabric=False, # Disable fabric optimization - enable_pinocchio=True, # Enable Pinocchio for IK -) -``` - -### Using Environment Hub directly for advanced usage - -Create a file called `test_env_load_arena.py` or [download from the EnvHub](https://huggingface.co/nvidia/isaaclab-arena-envs/blob/main/tests/test_env_load_arena.py): - -```python -import logging -from dataclasses import asdict -from pprint import pformat -import torch -import tqdm -from lerobot.configs import parser -from lerobot.configs.eval import EvalPipelineConfig - - -@parser.wrap() -def main(cfg: EvalPipelineConfig): - """Run random action rollout for IsaacLab Arena environment.""" - logging.info(pformat(asdict(cfg))) - - from lerobot.envs.factory import make_env - - env_dict = make_env( - cfg.env, - n_envs=cfg.env.num_envs, - trust_remote_code=True, - ) - env = next(iter(env_dict.values()))[0] - env.reset() - for _ in tqdm.tqdm(range(cfg.env.episode_length)): - with torch.inference_mode(): - actions = env.action_space.sample() - obs, rewards, terminated, truncated, info = env.step(actions) - if terminated.any() or truncated.any(): - obs, info = env.reset() - env.close() - - 
-if __name__ == "__main__": - main() -``` - -Run with: - -```bash -python test_env_load_arena.py \ - --env.environment=g1_locomanip_pnp \ - --env.embodiment=gr1_pink \ - --env.object=cracker_box \ - --env.num_envs=4 \ - --env.enable_cameras=true \ - --env.seed=1000 \ - --env.video=true \ - --env.video_length=10 \ - --env.video_interval=15 \ - --env.headless=false \ - --env.hub_path=nvidia/isaaclab-arena-envs \ - --env.type=isaaclab_arena -``` - -## Creating New Environments - -First create a new IsaacLab Arena environment by following the [IsaacLab Arena Documentation](https://isaac-sim.github.io/IsaacLab-Arena/release/0.1.1/index.html). - -Clone our EnvHub repo: - -```bash -git clone https://huggingface.co/nvidia/isaaclab-arena-envs -``` - -Modify the `example_envs.yaml` file based on your new environment. -[Upload](./envhub#step-3-upload-to-the-hub) your modified repo to HuggingFace EnvHub. - - - Your IsaacLab Arena environment code must be locally available during - evaluation. Users can clone your environment repository separately, or you can - bundle the environment code and assets directly in your EnvHub repo. - - -Then, when evaluating, use your new environment: - -```bash -lerobot-eval \ - --env.hub_path=/isaaclab-arena-envs \ - --env.environment= \ - ...other flags... -``` - -We look forward to your contributions! - -## Troubleshooting - -### CUDA out of memory - -Reduce `batch_size` or use a GPU with more VRAM: - -```bash ---eval.batch_size=1 -``` - -### EULA not accepted - -Set environment variables before running: - -```bash -export ACCEPT_EULA=Y -export PRIVACY_CONSENT=Y -``` - -### Video recording not working - -Enable cameras when running headless: - -```bash ---env.video=true --env.enable_cameras=true --env.headless=true -``` - -### Policy output dimension mismatch - -Ensure `action_dim` matches your policy: - -```bash ---env.action_dim=36 -``` - -### libGLU.so.1 Errors during Isaac Sim initialization - -Ensure you have the following dependencies installed, this is likely to happen on headless machines. - -```bash -sudo apt update && sudo apt install -y libglu1-mesa libxt6 -``` - -## See Also - -- [EnvHub Documentation](./envhub.mdx) - General EnvHub usage -- [IsaacLab Arena GitHub](https://github.com/isaac-sim/IsaacLab-Arena) -- [IsaacLab Documentation](https://isaac-sim.github.io/IsaacLab/) - -## Lightwheel LW-BenchHub - -[Lightwheel](https://www.lightwheel.ai) is bringing `Lightwheel-Libero-Tasks` and `Lightwheel-RoboCasa-Tasks` with 268 tasks to the LeRobot ecosystem. -LW-BenchHub collects and generates large-scale datasets via teleoperation that comply with the LeRobot specification, enabling out-of-the-box training and evaluation workflows. -With the unified interface provided by EnvHub, developers can quickly build end-to-end experimental pipelines. - -### Install - -Assuming you followed the [Installation](#installation) steps, you can install LW-BenchHub with: - -```bash -conda install pinocchio -c conda-forge -y -pip install numpy==1.26.0 # revert numpy to version 1.26 - -sudo apt-get install git-lfs && git lfs install - -git clone https://github.com/LightwheelAI/lw_benchhub -git lfs pull # Ensure LFS files (e.g., .usd assets) are downloaded - -cd lw_benchhub -pip install -e . -``` - -For more detailed instructions, please refer to the [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/usage/Installation). 
### Lightwheel Tasks Dataset

LW-BenchHub datasets are available on HuggingFace Hub:

| Dataset | Description | Tasks | Frames |
| :--- | :--- | :--- | :--- |
| [Lightwheel-Tasks-X7S](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-X7S) | X7S LIBERO and RoboCasa | 117 | ~10.3M |
| [Lightwheel-Tasks-Double-Piper](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-Double-Piper) | Double-Piper LIBERO | 130 | ~6.0M |
| [Lightwheel-Tasks-G1-Controller](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-Controller) | G1-Controller LIBERO | 62 | ~2.7M |
| [Lightwheel-Tasks-G1-WBC](https://huggingface.co/datasets/LightwheelAI/Lightwheel-Tasks-G1-WBC) | G1-WBC RoboCasa | 32 | ~1.5M |

For training policies, refer to the [Training Policies](#training-policies) section.

### Evaluating Policies

#### Pre-trained Policies

The following trained policies are available:

| Policy | Architecture | Task | Layout | Robot | Link |
| :--- | :--- | :--- | :--- | :--- | :--- |
| smolvla-double-piper-pnp | SmolVLA | L90K1PutTheBlackBowlOnThePlate | libero-1-1 | DoublePiper-Abs | [HuggingFace](https://huggingface.co/LightwheelAI/smolvla-double-piper-pnp/tree/main) |

#### Evaluate SmolVLA

```bash
lerobot-eval \
  --policy.path=LightwheelAI/smolvla-double-piper-pnp \
  --env.type=isaaclab_arena \
  --rename_map='{"observation.images.left_hand_camera_rgb": "observation.images.left_hand", "observation.images.right_hand_camera_rgb": "observation.images.right_hand", "observation.images.first_person_camera_rgb": "observation.images.first_person"}' \
  --env.hub_path=LightwheelAI/lw_benchhub_env \
  --env.kwargs='{"config_path": "configs/envhub/example.yml"}' \
  --trust_remote_code=true \
  --env.state_keys=joint_pos \
  --env.action_dim=12 \
  --env.camera_keys=left_hand_camera_rgb,right_hand_camera_rgb,first_person_camera_rgb \
  --policy.device=cuda \
  --eval.batch_size=10 \
  --eval.n_episodes=100
```

### Environment Configuration

Evaluation can be quickly launched by modifying the `robot`, `task`, and `layout` settings in the configuration file.
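For example, the snippet below sketches switching those three settings programmatically. It assumes PyYAML is installed, that `configs/envhub/example.yml` (the file passed via `--env.kwargs` above) is the config being copied, and that the output filename is arbitrary; the values come from the options documented below.

```python
# Minimal sketch: copy the example config and change robot, task, and layout.
# Assumes PyYAML is available; file paths and the output name are illustrative.
import yaml

with open("configs/envhub/example.yml") as f:
    cfg = yaml.safe_load(f)

cfg["robot"] = "X7S-Abs"                        # one of the robot types listed below
cfg["task"] = "L90K1PutTheBlackBowlOnThePlate"  # task name
cfg["layout"] = "libero-1-1"                    # layout and style ID

with open("configs/envhub/my_eval.yml", "w") as f:
    yaml.safe_dump(cfg, f)
```

You would then point `--env.kwargs='{"config_path": "configs/envhub/my_eval.yml"}'` at the new file when running `lerobot-eval`.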
#### Full Configuration Options

```yml
# =========================
# Basic Settings
# =========================
disable_fabric: false
device: cuda:0
sensitivity: 1.0
step_hz: 50
enable_cameras: true
execute_mode: eval
episode_length_s: 20.0 # Episode length in seconds; increase if episodes time out during eval

# =========================
# Robot Settings
# =========================
robot: DoublePiper-Abs # Robot type: DoublePiper-Abs, X7S-Abs, G1-Controller, or G1-Controller-DecoupledWBC
robot_scale: 1.0

# =========================
# Task & Scene Settings
# =========================
task: L90K1PutTheBlackBowlOnThePlate # Task name
scene_backend: robocasa
task_backend: robocasa
debug_assets: null
layout: libero-1-1 # Layout and style ID
sources:
  - objaverse
  - lightwheel
  - aigen_objs
object_projects: []
usd_simplify: false
seed: 42

# =========================
# Object Placement Retry Settings
# =========================
max_scene_retry: 4
max_object_placement_retry: 3

resample_objects_placement_on_reset: true
resample_robot_placement_on_reset: true

# =========================
# Replay Configuration Settings
# =========================
replay_cfgs:
  add_camera_to_observation: true
  render_resolution: [640, 480]
```

### See Also

- [LW-BenchHub GitHub](https://github.com/LightwheelAI/LW-BenchHub)
- [LW-BenchHub Documentation](https://docs.lightwheel.net/lw_benchhub/)

diff --git a/lerobot/docs/source/envhub_leisaac.mdx b/lerobot/docs/source/envhub_leisaac.mdx
deleted file mode 100644
index f2c6f79d570f1d1f474cf9db4b9930057d75627b..0000000000000000000000000000000000000000
--- a/lerobot/docs/source/envhub_leisaac.mdx
+++ /dev/null
@@ -1,302 +0,0 @@

# LeIsaac × LeRobot EnvHub

LeRobot EnvHub now supports **imitation learning in simulation** with LeIsaac.
Spin up everyday manipulation tasks, teleoperate the robot, collect demos, push them to the Hub, and train policies in LeRobot — all in one loop.

[LeIsaac](https://github.com/LightwheelAI/leisaac) integrates with IsaacLab and the SO101 Leader/Follower setup to provide:

- 🕹️ **Teleoperation-first workflows** for data collection
- 📦 **Built-in data conversion** ready for LeRobot training
- 🤖 **Everyday skills** like picking oranges, lifting cubes, cleaning tables, and folding cloth
- ☁️ **Ongoing upgrades** from [LightWheel](https://lightwheel.ai/): cloud simulation, EnvHub support, Sim2Real tooling, and more

Below you’ll find the currently supported LeIsaac tasks exposed through LeRobot EnvHub.

# Available Environments

The following table lists all available tasks and environments in the LeIsaac × LeRobot EnvHub.
You can also get the latest list of environments by running the following command:

```bash
python scripts/environments/list_envs.py
```

| Environment ID | Task Description | Related Robot |
| :--- | :--- | :--- |
| [LeIsaac-SO101-PickOrange-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/pick_orange_env_cfg.py)<br>[LeIsaac-SO101-PickOrange-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/pick_orange/direct/pick_orange_env.py) | Pick three oranges and put them onto the plate, then reset the arm to the rest state. | Single-Arm SO101 Follower |
| [LeIsaac-SO101-LiftCube-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/lift_cube_env_cfg.py)<br>[LeIsaac-SO101-LiftCube-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/lift_cube/direct/lift_cube_env.py) | Lift the red cube up. | Single-Arm SO101 Follower |
| [LeIsaac-SO101-CleanToyTable-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_env_cfg.py)<br>[LeIsaac-SO101-CleanToyTable-BiArm-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/clean_toy_table_bi_arm_env_cfg.py)<br>[LeIsaac-SO101-CleanToyTable-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/clean_toy_table/direct/clean_toy_table_bi_arm_env.py) | Pick the two letter "e" objects into the box, then reset the arm to the rest state. | Single-Arm SO101 Follower<br>Bi-Arm SO101 Follower |
| [LeIsaac-SO101-FoldCloth-BiArm-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/fold_cloth_bi_arm_env_cfg.py)<br>[LeIsaac-SO101-FoldCloth-BiArm-Direct-v0](https://github.com/LightwheelAI/leisaac/blob/main/source/leisaac/leisaac/tasks/fold_cloth/direct/fold_cloth_bi_arm_env.py) | Fold the cloth, then reset the arm to the rest state.<br>_Note: Only the Direct environment supports `check_success` in this task._ | Bi-Arm SO101 Follower |

# Load LeIsaac directly in LeRobot with one line of code

> EnvHub: Share LeIsaac environments through HuggingFace

[EnvHub](https://huggingface.co/docs/lerobot/envhub) is our reproducible environment hub: spin up a packaged simulation with one line, experiment immediately, and publish your own tasks for the community.

LeIsaac offers EnvHub support, so you can consume or share tasks with only a few commands.