+
+### Assemble arms
+[Assemble arms instruction](https://github.com/huggingface/lerobot/blob/main/examples/10_use_so100.md#d-assemble-the-arms)
+
+## Mobile base (LeKiwi)
+[Assemble LeKiwi](https://github.com/SIGRobotics-UIUC/LeKiwi)
+
+### Update config
+Both config files, the one on the LeKiwi (Raspberry Pi) and the one on the laptop, should be the same. First, find the IP address of the Raspberry Pi of the mobile manipulator; this is the same IP address you use for SSH. We also need the USB port of the leader arm's control board on the laptop and the port of the control board on LeKiwi. We can find these ports with the following script.
+
+#### a. Run the script to find port
+
+
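+To find these ports, you can use the lerobot port-finding helper shown below (a sketch; the script name `find_motors_bus_port.py` is assumed here, check your lerobot version). Run it on the laptop for the leader arm's control board and on the Raspberry Pi for LeKiwi's control board:
+
+```bash
+python lerobot/scripts/find_motors_bus_port.py
+```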
+
+### Calibrate follower arm
+Make sure the arm is connected to the Raspberry Pi and run this script (on the Raspberry Pi) to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=lekiwi \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_follower"]'
+```
+
+### Wired version
+If you have the **wired** LeKiwi version, please run all commands, including this calibration command, on your laptop.
+
+### Calibrate leader arm
+Next, calibrate the leader arm (which is attached to the laptop/PC). You will need to move the leader arm to these positions sequentially:
+
+| 1. Zero position | 2. Rotated position | 3. Rest position |
+| ------------------------------------------------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| *(image)* | *(image)* | *(image)* |
+
+Run this script (on your laptop/pc) to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=lekiwi \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_leader"]'
+```
+
+# F. Teleoperate
+
+> [!TIP]
+> If you're using a Mac, you might need to give Terminal permission to access your keyboard. Go to System Preferences > Security & Privacy > Input Monitoring and check the box for Terminal.
+
+To teleoperate, SSH into your Raspberry Pi, run `conda activate lerobot`, and then run this script:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=lekiwi \
+ --control.type=remote_robot
+```
+
+Then on your laptop, also run `conda activate lerobot` and this script:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=lekiwi \
+ --control.type=teleoperate \
+ --control.fps=30
+```
+
+> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`. For `--control.type=remote_robot` you will also need to set `--control.viewer_ip` and `--control.viewer_port`.
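+
+For example, the remote-robot command on the Raspberry Pi with the viewer options might look like the sketch below. The IP is your laptop's address and `9876` is just an arbitrary free port; adjust both to your setup:
+
+```bash
+python lerobot/scripts/control_robot.py \
+  --robot.type=lekiwi \
+  --control.type=remote_robot \
+  --control.display_data=true \
+  --control.viewer_ip=<your_laptop_ip> \
+  --control.viewer_port=9876
+```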
+
+You should see on your laptop something like this: ```[INFO] Connected to remote robot at tcp://172.17.133.91:5555 and video stream at tcp://172.17.133.91:5556.``` Now you can move the leader arm and use the keyboard (w, a, s, d) to drive forward, left, backward, and right, and (z, x) to turn left or right. Use (r, f) to increase and decrease the speed of the mobile robot. There are three speed modes, see the table below:
+| Speed Mode | Linear Speed (m/s) | Rotation Speed (deg/s) |
+| ---------- | ------------------ | ---------------------- |
+| Fast | 0.4 | 90 |
+| Medium | 0.25 | 60 |
+| Slow | 0.1 | 30 |
+
+
+| Key | Action |
+| --- | -------------- |
+| W | Move forward |
+| A | Move left |
+| S | Move backward |
+| D | Move right |
+| Z | Turn left |
+| X | Turn right |
+| R | Increase speed |
+| F | Decrease speed |
+
+> [!TIP]
+> If you use a different keyboard you can change the keys for each command in the [`LeKiwiRobotConfig`](../lerobot/common/robot_devices/robots/configs.py).
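+
+For illustration, here is a minimal sketch of what remapping the keys could look like, assuming the config exposes a `teleop_keys` dictionary (the class below is a hypothetical stand-in, check the actual field names in `configs.py`):
+
+```python
+from dataclasses import dataclass, field
+
+
+# Hypothetical stand-in for the key-mapping part of LeKiwiRobotConfig.
+@dataclass
+class MyLeKiwiTeleopKeys:
+    teleop_keys: dict[str, str] = field(
+        default_factory=lambda: {
+            "forward": "w",
+            "backward": "s",
+            "left": "a",
+            "right": "d",
+            "rotate_left": "z",
+            "rotate_right": "x",
+            "speed_up": "r",
+            "speed_down": "f",
+        }
+    )
+
+
+config = MyLeKiwiTeleopKeys()
+# Example: move rotation to q/e for keyboards where z/x are awkward to reach.
+config.teleop_keys["rotate_left"] = "q"
+config.teleop_keys["rotate_right"] = "e"
+print(config.teleop_keys)
+```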
+
+### Wired version
+If you have the **wired** LeKiwi version, please run all commands, including both these teleoperation commands, on your laptop.
+
+## Troubleshoot communication
+
+If you are having trouble connecting to the Mobile SO100, follow these steps to diagnose and resolve the issue.
+
+### 1. Verify IP Address Configuration
+Make sure that the correct IP address for the Pi is set in the configuration file. To check the Raspberry Pi's IP address, run (on the Pi command line):
+```bash
+hostname -I
+```
+
+### 2. Check if Pi is reachable from laptop/pc
+Try pinging the Raspberry Pi from your laptop:
+```bash
+ping <your_pi_ip_address>
+```
+
+**Manual calibration of follower arm**
+Make sure both arms are connected and run this script to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_follower"]'
+```
+
+**Manual calibration of leader arm**
+Follow step 6 of the [assembly video](https://www.youtube.com/watch?v=DA91NJOtMic) which illustrates the manual calibration. You will need to move the leader arm to these positions sequentially:
+
+| 1. Zero position | 2. Rotated position | 3. Rest position |
+| ------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| *(image)* | *(image)* | *(image)* |
+
+Run this script to launch manual calibration:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --robot.cameras='{}' \
+ --control.type=calibrate \
+ --control.arms='["main_leader"]'
+```
+
+## Teleoperate
+
+**Simple teleop**
+Then you are ready to teleoperate your robot! Run this simple script (it won't connect to or display the cameras):
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --robot.cameras='{}' \
+ --control.type=teleoperate
+```
+
+
+**Teleop with displaying cameras**
+Follow [this guide to set up your cameras](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#c-add-your-cameras-with-opencvcamera). Then you will be able to display the cameras on your computer while you are teleoperating by running the following command. This is useful to prepare your setup before recording your first dataset.
+
+> **NOTE:** To visualize the data, enable `--control.display_data=true`. This streams the data using `rerun`.
+
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --control.type=teleoperate
+```
+
+## Record a dataset
+
+Once you're familiar with teleoperation, you can record your first dataset with Moss v1.
+
+If you want to use the Hugging Face hub features for uploading your dataset and you haven't previously done it, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
+```bash
+huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
+```
+
+Store your Hugging Face repository name in a variable to run these commands:
+```bash
+HF_USER=$(huggingface-cli whoami | head -n 1)
+echo $HF_USER
+```
+
+Record 2 episodes and upload your dataset to the hub:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --control.type=record \
+ --control.fps=30 \
+ --control.single_task="Grasp a lego block and put it in the bin." \
+ --control.repo_id=${HF_USER}/moss_test \
+ --control.tags='["moss","tutorial"]' \
+ --control.warmup_time_s=5 \
+ --control.episode_time_s=30 \
+ --control.reset_time_s=30 \
+ --control.num_episodes=2 \
+ --control.push_to_hub=true
+```
+
+Note: You can resume recording by adding `--control.resume=true`.
+
+## Visualize a dataset
+
+If you uploaded your dataset to the hub with `--control.push_to_hub=true`, you can [visualize your dataset online](https://huggingface.co/spaces/lerobot/visualize_dataset) by copy-pasting your repo id, which is given by:
+```bash
+echo ${HF_USER}/moss_test
+```
+
+If you didn't upload to the hub (i.e. you used `--control.push_to_hub=false`), you can also visualize the dataset locally with:
+```bash
+python lerobot/scripts/visualize_dataset_html.py \
+ --repo-id ${HF_USER}/moss_test \
+ --local-files-only 1
+```
+
+## Replay an episode
+
+Now try to replay the first episode on your robot:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --control.type=replay \
+ --control.fps=30 \
+ --control.repo_id=${HF_USER}/moss_test \
+ --control.episode=0
+```
+
+## Train a policy
+
+To train a policy to control your robot, use the [`python lerobot/scripts/train.py`](../lerobot/scripts/train.py) script. A few arguments are required. Here is an example command:
+```bash
+python lerobot/scripts/train.py \
+ --dataset.repo_id=${HF_USER}/moss_test \
+ --policy.type=act \
+ --output_dir=outputs/train/act_moss_test \
+ --job_name=act_moss_test \
+ --policy.device=cuda \
+ --wandb.enable=true
+```
+
+Let's explain it:
+1. We provided the dataset as argument with `--dataset.repo_id=${HF_USER}/moss_test`.
+2. We provided the policy with `policy.type=act`. This loads configurations from [`configuration_act.py`](../lerobot/common/policies/act/configuration_act.py). Importantly, this policy will automatically adapt to the number of motor states, motor actions and cameras of your robot (e.g. `laptop` and `phone`) which have been saved in your dataset.
+3. We provided `policy.device=cuda` since we are training on an Nvidia GPU, but you could use `policy.device=mps` to train on Apple silicon.
+4. We provided `wandb.enable=true` to use [Weights and Biases](https://docs.wandb.ai/quickstart) for visualizing training plots. This is optional but if you use it, make sure you are logged in by running `wandb login`.
+
+Training should take several hours. You will find checkpoints in `outputs/train/act_moss_test/checkpoints`.
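+
+If training is interrupted, you can usually restart it from the last checkpoint. The sketch below assumes the `--config_path` and `--resume` options of `train.py`; check your lerobot version for the exact flags:
+
+```bash
+python lerobot/scripts/train.py \
+  --config_path=outputs/train/act_moss_test/checkpoints/last/pretrained_model/train_config.json \
+  --resume=true
+```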
+
+## Evaluate your policy
+
+You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) but with a policy checkpoint as input. For instance, run this command to record 10 evaluation episodes:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=moss \
+ --control.type=record \
+ --control.fps=30 \
+ --control.single_task="Grasp a lego block and put it in the bin." \
+ --control.repo_id=${HF_USER}/eval_act_moss_test \
+ --control.tags='["tutorial"]' \
+ --control.warmup_time_s=5 \
+ --control.episode_time_s=30 \
+ --control.reset_time_s=30 \
+ --control.num_episodes=10 \
+ --control.push_to_hub=true \
+ --control.policy.path=outputs/train/act_moss_test/checkpoints/last/pretrained_model
+```
+
+As you can see, it's almost the same command as previously used to record your training dataset. Two things changed:
+1. There is an additional `--control.policy.path` argument which indicates the path to your policy checkpoint (e.g. `outputs/train/act_moss_test/checkpoints/last/pretrained_model`). You can also use the model repository if you uploaded a model checkpoint to the hub (e.g. `${HF_USER}/act_moss_test`).
+2. The dataset name begins with `eval` to reflect that you are running inference (e.g. `${HF_USER}/eval_act_moss_test`).
+
+## More
+
+Follow this [previous tutorial](https://github.com/huggingface/lerobot/blob/main/examples/7_get_started_with_real_robot.md#4-train-a-policy-on-your-data) for a more in-depth tutorial on controlling real robots with LeRobot.
+
+If you have any question or need help, please reach out on Discord in the channel [`#moss-arm`](https://discord.com/channels/1216765309076115607/1275374638985252925).
diff --git a/project/ManiSkill3/src/maniskill3_environment/lerobot/examples/1_load_lerobot_dataset.py b/project/ManiSkill3/src/maniskill3_environment/lerobot/examples/1_load_lerobot_dataset.py
new file mode 100644
index 0000000000000000000000000000000000000000..07db38a1504aeeb6704ee27577c9a859264aeb70
--- /dev/null
+++ b/project/ManiSkill3/src/maniskill3_environment/lerobot/examples/1_load_lerobot_dataset.py
@@ -0,0 +1,148 @@
+# Copyright 2024 The HuggingFace Inc. team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+"""
+This script demonstrates the use of `LeRobotDataset` class for handling and processing robotic datasets from Hugging Face.
+It illustrates how to load datasets, manipulate them, and apply transformations suitable for machine learning tasks in PyTorch.
+
+Features included in this script:
+- Viewing a dataset's metadata and exploring its properties.
+- Loading an existing dataset from the hub or a subset of it.
+- Accessing frames by episode number.
+- Using advanced dataset features like timestamp-based frame selection.
+- Demonstrating compatibility with PyTorch DataLoader for batch processing.
+
+The script ends with examples of how to batch process data using PyTorch's DataLoader.
+"""
+
+from pprint import pprint
+
+import torch
+from huggingface_hub import HfApi
+
+import lerobot
+from lerobot.common.datasets.lerobot_dataset import LeRobotDataset, LeRobotDatasetMetadata
+
+# We ported a number of existing datasets ourselves, use this to see the list:
+print("List of available datasets:")
+pprint(lerobot.available_datasets)
+
+# You can also browse through the datasets created/ported by the community on the hub using the hub api:
+hub_api = HfApi()
+repo_ids = [info.id for info in hub_api.list_datasets(task_categories="robotics", tags=["LeRobot"])]
+pprint(repo_ids)
+
+# Or simply explore them in your web browser directly at:
+# https://huggingface.co/datasets?other=LeRobot
+
+# Let's take this one for this example
+repo_id = "lerobot/aloha_mobile_cabinet"
+# We can have a look and fetch its metadata to know more about it:
+ds_meta = LeRobotDatasetMetadata(repo_id)
+
+# By instantiating just this class, you can quickly access useful information about the content and the
+# structure of the dataset without downloading the actual data yet (only metadata files — which are
+# lightweight).
+print(f"Total number of episodes: {ds_meta.total_episodes}")
+print(f"Average number of frames per episode: {ds_meta.total_frames / ds_meta.total_episodes:.3f}")
+print(f"Frames per second used during data collection: {ds_meta.fps}")
+print(f"Robot type: {ds_meta.robot_type}")
+print(f"keys to access images from cameras: {ds_meta.camera_keys=}\n")
+
+print("Tasks:")
+print(ds_meta.tasks)
+print("Features:")
+pprint(ds_meta.features)
+
+# You can also get a short summary by simply printing the object:
+print(ds_meta)
+
+# You can then load the actual dataset from the hub.
+# Either load any subset of episodes:
+dataset = LeRobotDataset(repo_id, episodes=[0, 10, 11, 23])
+
+# And see how many frames you have:
+print(f"Selected episodes: {dataset.episodes}")
+print(f"Number of episodes selected: {dataset.num_episodes}")
+print(f"Number of frames selected: {dataset.num_frames}")
+
+# Or simply load the entire dataset:
+dataset = LeRobotDataset(repo_id)
+print(f"Number of episodes selected: {dataset.num_episodes}")
+print(f"Number of frames selected: {dataset.num_frames}")
+
+# The previous metadata class is contained in the 'meta' attribute of the dataset:
+print(dataset.meta)
+
+# LeRobotDataset actually wraps an underlying Hugging Face dataset
+# (see https://huggingface.co/docs/datasets for more information).
+print(dataset.hf_dataset)
+
+# LeRobot datasets also subclass PyTorch datasets, so you can do everything you know and love from working
+# with the latter, like iterating through the dataset.
+# The __getitem__ iterates over the frames of the dataset. Since our datasets are also structured by
+# episodes, you can access the frame indices of any episode using the episode_data_index. Here, we access
+# frame indices associated with the first episode:
+episode_index = 0
+from_idx = dataset.episode_data_index["from"][episode_index].item()
+to_idx = dataset.episode_data_index["to"][episode_index].item()
+
+# Then we grab all the image frames from the first camera:
+camera_key = dataset.meta.camera_keys[0]
+frames = [dataset[idx][camera_key] for idx in range(from_idx, to_idx)]
+
+# The objects returned by the dataset are all torch.Tensors
+print(type(frames[0]))
+print(frames[0].shape)
+
+# Since we're using PyTorch, the shape follows the PyTorch channel-first convention (c, h, w).
+# We can compare this shape with the information available for that feature
+pprint(dataset.features[camera_key])
+# In particular:
+print(dataset.features[camera_key]["shape"])
+# The shape is in (h, w, c) which is a more universal format.
+
+# For many machine learning applications we need to load the history of past observations or trajectories of
+# future actions. Our datasets can load previous and future frames for each key/modality, using timestamp
+# differences with the currently loaded frame. For instance:
+delta_timestamps = {
+ # loads 4 images: 1 second before current frame, 500 ms before, 200 ms before, and current frame
+ camera_key: [-1, -0.5, -0.20, 0],
+ # loads 6 state vectors: 1.5 seconds before, 1 second before, ... 200 ms, 100 ms, and current frame
+ "observation.state": [-1.5, -1, -0.5, -0.20, -0.10, 0],
+ # loads 64 action vectors: current frame, 1 frame in the future, 2 frames, ... 63 frames in the future
+ "action": [t / dataset.fps for t in range(64)],
+}
+# Note that in any case, these delta_timestamps values need to be multiples of (1/fps) so that added to any
+# timestamp, you still get a valid timestamp.
+
+dataset = LeRobotDataset(repo_id, delta_timestamps=delta_timestamps)
+print(f"\n{dataset[0][camera_key].shape=}") # (4, c, h, w)
+print(f"{dataset[0]['observation.state'].shape=}") # (6, c)
+print(f"{dataset[0]['action'].shape=}\n") # (64, c)
+
+# Finally, our datasets are fully compatible with PyTorch dataloaders and samplers because they are just
+# PyTorch datasets.
+dataloader = torch.utils.data.DataLoader(
+ dataset,
+ num_workers=0,
+ batch_size=32,
+ shuffle=True,
+)
+
+for batch in dataloader:
+ print(f"{batch[camera_key].shape=}") # (32, 4, c, h, w)
+ print(f"{batch['observation.state'].shape=}") # (32, 6, c)
+ print(f"{batch['action'].shape=}") # (32, 64, c)
+ break
diff --git a/project/ManiSkill3/src/maniskill3_environment/lerobot/examples/7_get_started_with_real_robot.md b/project/ManiSkill3/src/maniskill3_environment/lerobot/examples/7_get_started_with_real_robot.md
new file mode 100644
index 0000000000000000000000000000000000000000..3562c0e666105902117cdf397e45532eeef59a1a
--- /dev/null
+++ b/project/ManiSkill3/src/maniskill3_environment/lerobot/examples/7_get_started_with_real_robot.md
@@ -0,0 +1,1003 @@
+# Getting Started with Real-World Robots
+
+This tutorial will guide you through the process of setting up and training a neural network to autonomously control a real robot.
+
+**What You'll Learn:**
+1. How to order and assemble your robot.
+2. How to connect, configure, and calibrate your robot.
+3. How to record and visualize your dataset.
+4. How to train a policy using your data and prepare it for evaluation.
+5. How to evaluate your policy and visualize the results.
+
+By following these steps, you'll be able to replicate tasks like picking up a Lego block and placing it in a bin with a high success rate, as demonstrated in [this video](https://x.com/RemiCadene/status/1814680760592572934).
+
+This tutorial is specifically made for the affordable [Koch v1.1](https://github.com/jess-moss/koch-v1-1) robot, but it contains additional information so it can easily be adapted to various types of robots, like the [Aloha bimanual robot](https://aloha-2.github.io), by changing some configurations. The Koch v1.1 consists of a leader arm and a follower arm, each with 6 motors. It can work with one or several cameras to record the scene, which serve as visual sensors for the robot.
+
+During the data collection phase, you will control the follower arm by moving the leader arm. This process is known as "teleoperation." This technique is used to collect robot trajectories. Afterward, you'll train a neural network to imitate these trajectories and deploy the network to enable your robot to operate autonomously.
+
+If you encounter any issues at any step of the tutorial, feel free to seek help on [Discord](https://discord.com/invite/s3KuuzsPFb) or don't hesitate to iterate with us on the tutorial by creating issues or pull requests. Thanks!
+
+## 1. Order and Assemble your Koch v1.1
+
+Follow the sourcing and assembling instructions provided on the [Koch v1.1 Github page](https://github.com/jess-moss/koch-v1-1). This will guide you through setting up both the follower and leader arms, as shown in the image below.
+
+
+
+*(image of the assembled follower and leader arms)*
+
+And here are the corresponding positions for the leader arm:
+
+| 1. Zero position | 2. Rotated position | 3. Rest position |
+| ----------------------------------------------------------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| *(image)* | *(image)* | *(image)* |
+
+You can watch a [video tutorial of the calibration procedure](https://youtu.be/8drnU9uRY24) for more details.
+
+During calibration, we count the number of full 360-degree rotations your motors have made since they were first used. That's why we ask you to move to this arbitrary "zero" position. We don't actually "set" the zero position, so you don't need to be accurate. After calculating these "offsets" to shift the motor values around 0, we need to assess the rotation direction of each motor, which might differ. That's why we ask you to rotate all motors to roughly 90 degrees, to measure if the values changed negatively or positively.
+
+Finally, the rest position ensures that the follower and leader arms are roughly aligned after calibration, preventing sudden movements that could damage the motors when starting teleoperation.
+
+Importantly, once calibrated, all Koch robots will move to the same positions (e.g. zero and rotated position) when commanded.
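+
+To make the idea concrete, here is a toy illustration of how an offset and a rotation direction could be derived from raw readings at the zero and rotated poses. This is a sketch with made-up numbers, not the actual lerobot calibration code:
+
+```python
+import numpy as np
+
+# Hypothetical raw motor readings (in degrees) at the two calibration poses.
+raw_at_zero = np.array([2.4, 361.8, -356.9])      # roughly "zero", give or take full turns
+raw_at_rotated = np.array([93.1, 270.7, -447.2])  # roughly +/-90 degrees away from zero
+
+# Offset: whatever each motor reports at the "zero" pose.
+offsets = raw_at_zero
+
+# Direction: did the value increase or decrease when moving to ~90 degrees?
+directions = np.sign(raw_at_rotated - raw_at_zero)  # +1 or -1 per motor
+
+def calibrated(raw):
+    # Shift by the offset and flip motors that rotate the "wrong" way.
+    return (raw - offsets) * directions
+
+print(calibrated(raw_at_zero))     # ~[0, 0, 0]
+print(calibrated(raw_at_rotated))  # ~[90, 90, 90]
+```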
+
+Run the following code to calibrate and connect your robot:
+```python
+robot.connect()
+```
+
+The output will look like this:
+```
+Connecting main follower arm
+Connecting main leader arm
+
+Missing calibration file '.cache/calibration/koch/main_follower.json'
+Running calibration of koch main follower...
+Move arm to zero position
+[...]
+Move arm to rotated position
+[...]
+Move arm to rest position
+[...]
+Calibration is done! Saving calibration file '.cache/calibration/koch/main_follower.json'
+
+Missing calibration file '.cache/calibration/koch/main_leader.json'
+Running calibration of koch main leader...
+Move arm to zero position
+[...]
+Move arm to rotated position
+[...]
+Move arm to rest position
+[...]
+Calibration is done! Saving calibration file '.cache/calibration/koch/main_leader.json'
+```
+
+*Verifying Calibration*
+
+Once calibration is complete, you can check the positions of the leader and follower arms to ensure they match. If the calibration was successful, the positions should be very similar.
+
+Run this code to get the positions in degrees:
+```python
+leader_pos = robot.leader_arms["main"].read("Present_Position")
+follower_pos = robot.follower_arms["main"].read("Present_Position")
+
+print(leader_pos)
+print(follower_pos)
+```
+
+Example output:
+```
+array([-0.43945312, 133.94531, 179.82422, -18.984375, -1.9335938, 34.541016], dtype=float32)
+array([-0.58723712, 131.72314, 174.98743, -16.872612, 0.786213, 35.271973], dtype=float32)
+```
+
+These values are in degrees, which makes them easier to interpret and debug. The zero position used during calibration should roughly correspond to 0 degrees for each motor, and the rotated position should roughly correspond to 90 degrees for each motor.
+
+**Teleoperate your Koch v1.1**
+
+You can easily teleoperate your robot by reading the positions from the leader arm and sending them as goal positions to the follower arm.
+
+To teleoperate your robot for 30 seconds at a frequency of approximately 200Hz, run the following code:
+```python
+import tqdm
+seconds = 30
+frequency = 200
+for _ in tqdm.tqdm(range(seconds*frequency)):
+ leader_pos = robot.leader_arms["main"].read("Present_Position")
+ robot.follower_arms["main"].write("Goal_Position", leader_pos)
+```
+
+*Using `teleop_step` for Teleoperation*
+
+Alternatively, you can teleoperate the robot using the `teleop_step` method from [`ManipulatorRobot`](../lerobot/common/robot_devices/robots/manipulator.py).
+
+Run this code to teleoperate:
+```python
+for _ in tqdm.tqdm(range(seconds*frequency)):
+ robot.teleop_step()
+```
+
+*Recording data during Teleoperation*
+
+Teleoperation is particularly useful for recording data. You can use `teleop_step(record_data=True)` to return both the follower arm's position as `"observation.state"` and the leader arm's position as `"action"`. This function also converts the numpy arrays into PyTorch tensors. If you're working with a robot that has two leader and two follower arms (like the Aloha), the positions are concatenated.
+
+Run the following code to see how slowly moving the leader arm affects the observation and action:
+```python
+leader_pos = robot.leader_arms["main"].read("Present_Position")
+follower_pos = robot.follower_arms["main"].read("Present_Position")
+observation, action = robot.teleop_step(record_data=True)
+
+print(follower_pos)
+print(observation)
+print(leader_pos)
+print(action)
+```
+
+Expected output:
+```
+array([7.8223, 131.1328, 165.5859, -23.4668, -0.9668, 32.4316], dtype=float32)
+{'observation.state': tensor([7.8223, 131.1328, 165.5859, -23.4668, -0.9668, 32.4316])}
+array([3.4277, 134.1211, 179.8242, -18.5449, -1.5820, 34.7168], dtype=float32)
+{'action': tensor([3.4277, 134.1211, 179.8242, -18.5449, -1.5820, 34.7168])}
+```
+
+*Asynchronous Frame Recording*
+
+Additionally, `teleop_step` can asynchronously record frames from multiple cameras and include them in the observation dictionary as `"observation.images.CAMERA_NAME"`. This feature will be covered in more detail in the next section.
+
+*Disconnecting the Robot*
+
+When you're finished, make sure to disconnect your robot by running:
+```python
+robot.disconnect()
+```
+
+Alternatively, you can unplug the power cord, which will also disable torque.
+
+*/!\ Warning*: These motors tend to overheat, especially under torque or if left plugged in for too long. Unplug after use.
+
+### c. Add your cameras with OpenCVCamera
+
+**(Optional) Use your phone as camera on Linux**
+
+If you want to use your phone as a camera on Linux, follow these steps to set up a virtual camera:
+
+1. *Install `v4l2loopback-dkms` and `v4l-utils`*. Those packages are required to create virtual camera devices (`v4l2loopback`) and verify their settings with the `v4l2-ctl` utility from `v4l-utils`. Install them using:
+```bash
+sudo apt install v4l2loopback-dkms v4l-utils
+```
+2. *Install [DroidCam](https://droidcam.app) on your phone*. This app is available for both iOS and Android.
+3. *Install [OBS Studio](https://obsproject.com)*. This software will help you manage the camera feed. Install it using [Flatpak](https://flatpak.org):
+```bash
+flatpak install flathub com.obsproject.Studio
+```
+4. *Install the DroidCam OBS plugin*. This plugin integrates DroidCam with OBS Studio. Install it with:
+```bash
+flatpak install flathub com.obsproject.Studio.Plugin.DroidCam
+```
+5. *Start OBS Studio*. Launch with:
+```bash
+flatpak run com.obsproject.Studio
+```
+6. *Add your phone as a source*. Follow the instructions [here](https://droidcam.app/obs/usage). Be sure to set the resolution to `640x480`.
+7. *Adjust resolution settings*. In OBS Studio, go to `File > Settings > Video`. Change the `Base(Canvas) Resolution` and the `Output(Scaled) Resolution` to `640x480` by manually typing it in.
+8. *Start virtual camera*. In OBS Studio, follow the instructions [here](https://obsproject.com/kb/virtual-camera-guide).
+9. *Verify the virtual camera setup*. Use `v4l2-ctl` to list the devices:
+```bash
+v4l2-ctl --list-devices
+```
+You should see an entry like:
+```
+VirtualCam (platform:v4l2loopback-000):
+/dev/video1
+```
+10. *Check the camera resolution*. Use `v4l2-ctl` to ensure that the virtual camera output resolution is `640x480`. Change `/dev/video1` to the port of your virtual camera from the output of `v4l2-ctl --list-devices`.
+```bash
+v4l2-ctl -d /dev/video1 --get-fmt-video
+```
+You should see an entry like:
+```
+>>> Format Video Capture:
+>>> Width/Height : 640/480
+>>> Pixel Format : 'YUYV' (YUYV 4:2:2)
+```
+
+Troubleshooting: If the resolution is not correct, you will have to delete the virtual camera port and try again, as it cannot be changed.
+
+If everything is set up correctly, you can proceed with the rest of the tutorial.
+
+**(Optional) Use your iPhone as a camera on MacOS**
+
+To use your iPhone as a camera on macOS, enable the Continuity Camera feature:
+- Ensure your Mac is running macOS 13 or later, and your iPhone is on iOS 16 or later.
+- Sign in both devices with the same Apple ID.
+- Connect your devices with a USB cable or turn on Wi-Fi and Bluetooth for a wireless connection.
+
+For more details, visit [Apple support](https://support.apple.com/en-gb/guide/mac-help/mchl77879b8a/mac).
+
+Your iPhone should be detected automatically when running the camera setup script in the next section.
+
+**Instantiate an OpenCVCamera**
+
+The [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py) class allows you to efficiently record frames from most cameras using the [`opencv2`](https://docs.opencv.org) library. For more details on compatibility, see [Video I/O with OpenCV Overview](https://docs.opencv.org/4.x/d0/da7/videoio_overview.html).
+
+To instantiate an [`OpenCVCamera`](../lerobot/common/robot_devices/cameras/opencv.py), you need a camera index (e.g. `OpenCVCamera(camera_index=0)`). When you only have one camera, like a laptop webcam, the camera index is usually `0`, but it might differ, and it might change if you reboot your computer or re-plug your camera. This behavior depends on your operating system.
+
+To find the camera indices, run the following utility script, which will save a few frames from each detected camera:
+```bash
+python lerobot/common/robot_devices/cameras/opencv.py \
+ --images-dir outputs/images_from_opencv_cameras
+```
+
+The output will look something like this if you have two cameras connected:
+```
+Mac or Windows detected. Finding available camera indices through scanning all indices from 0 to 60
+[...]
+Camera found at index 0
+Camera found at index 1
+[...]
+Connecting cameras
+OpenCVCamera(0, fps=30.0, width=1920.0, height=1080.0, color_mode=rgb)
+OpenCVCamera(1, fps=24.0, width=1920.0, height=1080.0, color_mode=rgb)
+Saving images to outputs/images_from_opencv_cameras
+Frame: 0000 Latency (ms): 39.52
+[...]
+Frame: 0046 Latency (ms): 40.07
+Images have been saved to outputs/images_from_opencv_cameras
+```
+
+Check the saved images in `outputs/images_from_opencv_cameras` to identify which camera index corresponds to which physical camera (e.g. `0` for `camera_00` or `1` for `camera_01`):
+```
+camera_00_frame_000000.png
+[...]
+camera_00_frame_000047.png
+camera_01_frame_000000.png
+[...]
+camera_01_frame_000047.png
+```
+
+Note: Some cameras may take a few seconds to warm up, and the first frame might be black or green.
+
+Finally, run this code to instantiate and connect your camera:
+```python
+from lerobot.common.robot_devices.cameras.configs import OpenCVCameraConfig
+from lerobot.common.robot_devices.cameras.opencv import OpenCVCamera
+
+config = OpenCVCameraConfig(camera_index=0)
+camera = OpenCVCamera(config)
+camera.connect()
+color_image = camera.read()
+
+print(color_image.shape)
+print(color_image.dtype)
+```
+
+Expected output for a laptop camera on a MacBook Pro:
+```
+(1080, 1920, 3)
+uint8
+```
+
+Or like this if you followed our tutorial to set up a virtual camera:
+```
+(480, 640, 3)
+uint8
+```
+
+With certain cameras, you can also specify additional parameters like frame rate, resolution, and color mode during instantiation. For instance:
+```python
+config = OpenCVCameraConfig(camera_index=0, fps=30, width=640, height=480)
+```
+
+If the provided arguments are not compatible with the camera, an exception will be raised.
+
+*Disconnecting the camera*
+
+When you're done using the camera, disconnect it by running:
+```python
+camera.disconnect()
+```
+
+**Instantiate your robot with cameras**
+
+Additionally, you can set up your robot to work with your cameras.
+
+Modify the following Python code with the appropriate camera names and configurations:
+```python
+robot = ManipulatorRobot(
+ KochRobotConfig(
+ leader_arms={"main": leader_arm},
+ follower_arms={"main": follower_arm},
+ calibration_dir=".cache/calibration/koch",
+ cameras={
+ "laptop": OpenCVCameraConfig(0, fps=30, width=640, height=480),
+ "phone": OpenCVCameraConfig(1, fps=30, width=640, height=480),
+ },
+ )
+)
+robot.connect()
+```
+
+As a result, `teleop_step(record_data=True)` will return a frame for each camera following the PyTorch "channel first" convention, but we keep images in `uint8` with pixels in range [0,255] to easily save them.
+
+Modify this code with the names of your cameras and run it:
+```python
+observation, action = robot.teleop_step(record_data=True)
+print(observation["observation.images.laptop"].shape)
+print(observation["observation.images.phone"].shape)
+print(observation["observation.images.laptop"].min().item())
+print(observation["observation.images.laptop"].max().item())
+```
+
+The output should look like this:
+```
+torch.Size([3, 480, 640])
+torch.Size([3, 480, 640])
+0
+255
+```
+
+### d. Use `control_robot.py` and our `teleoperate` function
+
+Instead of manually running the python code in a terminal window, you can use [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) to instantiate your robot by providing the robot configurations via command line and control your robot with various modes as explained next.
+
+Try running this code to teleoperate your robot (if you don't have a camera, keep reading):
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=koch \
+ --control.type=teleoperate
+```
+
+You will see a lot of lines appearing like this one:
+```
+INFO 2024-08-10 11:15:03 ol_robot.py:209 dt: 5.12 (195.1hz) dtRlead: 4.93 (203.0hz) dtWfoll: 0.19 (5239.0hz)
+```
+
+It contains:
+- `2024-08-10 11:15:03` which is the date and time of the call to the print function.
+- `ol_robot.py:209` which is the end of the file name and the line number where the print function is called (`lerobot/scripts/control_robot.py` line `209`).
+- `dt: 5.12 (195.1hz)` which is the "delta time" or the number of milliseconds spent between the previous call to `robot.teleop_step()` and the current one, associated with the frequency (5.12 ms equals 195.1 Hz); note that you can control the maximum frequency by adding an fps argument such as `--fps 30`.
+- `dtRlead: 4.93 (203.0hz)` which is the number of milliseconds it took to read the position of the leader arm using `leader_arm.read("Present_Position")`.
+- `dtWfoll: 0.19 (5239.0hz)` which is the number of milliseconds it took to set a new goal position for the follower arm using `follower_arm.write("Goal_position", leader_pos)`; note that writing is done asynchronously so it takes less time than reading.
+
+Importantly: If you don't have any camera, you can remove them dynamically with this [draccus](https://github.com/dlwh/draccus) syntax `--robot.cameras='{}'`:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=koch \
+ --robot.cameras='{}' \
+ --control.type=teleoperate
+```
+
+We advise creating a new yaml file when the command becomes too long.
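+
+A minimal sketch of what this could look like, assuming your version of the script accepts a draccus-style `--config_path` file (the file name and exact yaml layout here are illustrative, so double-check against your installed version):
+
+```bash
+cat > koch_teleop.yaml <<'EOF'
+robot:
+  type: koch
+  cameras: {}
+control:
+  type: teleoperate
+  fps: 30
+EOF
+
+python lerobot/scripts/control_robot.py --config_path=koch_teleop.yaml
+```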
+
+## 3. Record your Dataset and Visualize it
+
+Using what you've learned previously, you can now easily record a dataset of states and actions for one episode. You can use `busy_wait` to control the speed of teleoperation and record at a fixed `fps` (frames per second).
+
+Try this code to record 30 seconds at 60 fps:
+```python
+import time
+from lerobot.scripts.control_robot import busy_wait
+
+record_time_s = 30
+fps = 60
+
+states = []
+actions = []
+for _ in range(record_time_s * fps):
+ start_time = time.perf_counter()
+ observation, action = robot.teleop_step(record_data=True)
+
+ states.append(observation["observation.state"])
+ actions.append(action["action"])
+
+ dt_s = time.perf_counter() - start_time
+ busy_wait(1 / fps - dt_s)
+
+# Note that observation and action are available in RAM, but
+# you could potentially store them on disk with pickle/hdf5 or
+# our optimized format `LeRobotDataset`. More on this next.
+```
+
+Importantly, many utilities are still missing. For instance, if you have cameras, you will need to save the images on disk to not go out of RAM, and to do so in threads to not slow down communication with your robot. Also, you will need to store your data in a format optimized for training and web sharing like [`LeRobotDataset`](../lerobot/common/datasets/lerobot_dataset.py). More on this in the next section.
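+
+For illustration, here is a minimal sketch (not lerobot code) of off-loading image writes to a background thread so that disk I/O does not slow down the control loop; it assumes `numpy` and `Pillow` are installed:
+
+```python
+import queue
+import threading
+from pathlib import Path
+
+import numpy as np
+from PIL import Image
+
+save_queue = queue.Queue()
+
+def writer_loop():
+    # Consume (path, image) tuples until a None sentinel is received.
+    while True:
+        item = save_queue.get()
+        if item is None:
+            break
+        path, image = item
+        Image.fromarray(image).save(path)
+
+thread = threading.Thread(target=writer_loop, daemon=True)
+thread.start()
+
+out_dir = Path("outputs/example_frames")
+out_dir.mkdir(parents=True, exist_ok=True)
+
+# Inside the recording loop, enqueue frames instead of writing them synchronously.
+for i in range(3):
+    frame = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for camera.read()
+    save_queue.put((out_dir / f"frame_{i:06d}.png", frame))
+
+save_queue.put(None)  # signal the writer to finish
+thread.join()
+```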
+
+### a. Use the `record` function
+
+You can use the `record` function from [`lerobot/scripts/control_robot.py`](../lerobot/scripts/control_robot.py) to achieve efficient data recording. It encompasses many recording utilities:
+1. Frames from cameras are saved on disk in threads, and encoded into videos at the end of each episode recording.
+2. Video streams from cameras are displayed in a window so that you can verify them.
+3. Data is stored with [`LeRobotDataset`](../lerobot/common/datasets/lerobot_dataset.py) format which is pushed to your Hugging Face page (unless `--control.push_to_hub=false` is provided).
+4. Checkpoints are done during recording, so if any issue occurs, you can resume recording by re-running the same command again with `--control.resume=true`. You will need to manually delete the dataset directory if you want to start recording from scratch.
+5. Set the flow of data recording using command line arguments:
+ - `--control.warmup_time_s=10` defines the number of seconds before starting data collection. It allows the robot devices to warmup and synchronize (10 seconds by default).
+ - `--control.episode_time_s=60` defines the number of seconds for data recording for each episode (60 seconds by default).
+ - `--control.reset_time_s=60` defines the number of seconds for resetting the environment after each episode (60 seconds by default).
+ - `--control.num_episodes=50` defines the number of episodes to record (50 by default).
+6. Control the flow during data recording using keyboard keys:
+ - Press right arrow `->` at any time during episode recording to early stop and go to resetting. Same during resetting, to early stop and to go to the next episode recording.
+ - Press left arrow `<-` at any time during episode recording or resetting to early stop, cancel the current episode, and re-record it.
+ - Press escape `ESC` at any time during episode recording to end the session early and go straight to video encoding and dataset uploading.
+7. Similarly to `teleoperate`, you can also use the command line to override anything.
+
+Before trying `record`, if you want to push your dataset to the hub, make sure you've logged in using a write-access token, which can be generated from the [Hugging Face settings](https://huggingface.co/settings/tokens):
+```bash
+huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
+```
+Also, store your Hugging Face repository name in a variable (e.g. `cadene` or `lerobot`). For instance, run this to use your Hugging Face user name as repository:
+```bash
+HF_USER=$(huggingface-cli whoami | head -n 1)
+echo $HF_USER
+```
+If you don't want to push to hub, use `--control.push_to_hub=false`.
+
+Now run this to record 2 episodes:
+```bash
+python lerobot/scripts/control_robot.py \
+ --robot.type=koch \
+ --control.type=record \
+ --control.single_task="Grasp a lego block and put it in the bin." \
+ --control.fps=30 \
+ --control.repo_id=${HF_USER}/koch_test \
+ --control.tags='["tutorial"]' \
+ --control.warmup_time_s=5 \
+ --control.episode_time_s=30 \
+ --control.reset_time_s=30 \
+ --control.num_episodes=2 \
+ --control.push_to_hub=true
+```
+
+
+This will write your dataset locally to `~/.cache/huggingface/lerobot/{repo-id}` (e.g. `~/.cache/huggingface/lerobot/cadene/koch_test`) and push it to the hub at `https://huggingface.co/datasets/{HF_USER}/{repo-id}`. Your dataset will be automatically tagged with `LeRobot` for the community to find it easily, and you can also add custom tags (in this case `tutorial` for example).
+
+You can look for other LeRobot datasets on the hub by searching for `LeRobot` tags: https://huggingface.co/datasets?other=LeRobot
+
+You will see a lot of lines appearing like this one:
+```
+INFO 2024-08-10 15:02:58 ol_robot.py:219 dt:33.34 (30.0hz) dtRlead: 5.06 (197.5hz) dtWfoll: 0.25 (3963.7hz) dtRfoll: 6.22 (160.7hz) dtRlaptop: 32.57 (30.7hz) dtRphone: 33.84 (29.5hz)
+```
+It contains:
+- `2024-08-10 15:02:58` which is the date and time of the call to the print function,
+- `ol_robot.py:219` which is the end of the file name and the line number where the print function is called (`lerobot/scripts/control_robot.py` line `219`).
+- `dt:33.34 (30.0hz)` which is the "delta time" or the number of milliseconds spent between the previous call to `robot.teleop_step(record_data=True)` and the current one, associated with the frequency (33.34 ms equals 30.0 Hz); note that we use `--fps 30` so we expect 30.0 Hz; when a step takes more time, the line appears in yellow.
+- `dtRlead: 5.06 (197.5hz)` which is the delta time of reading the present position of the leader arm.
+- `dtWfoll: 0.25 (3963.7hz)` which is the delta time of writing the goal position on the follower arm; writing is asynchronous so it takes less time than reading.
+- `dtRfoll: 6.22 (160.7hz)` which is the delta time of reading the present position on the follower arm.
+- `dtRlaptop: 32.57 (30.7hz)` which is the delta time of capturing an image from the laptop camera in the thread running asynchronously.
+- `dtRphone: 33.84 (29.5hz)` which is the delta time of capturing an image from the phone camera in the thread running asynchronously.
+
+Troubleshooting:
+- On Linux, if you encounter any issue during video encoding with `ffmpeg: unknown encoder libsvtav1`, you can:
+ - install with conda-forge by running `conda install -c conda-forge ffmpeg` (it should be compiled with `libsvtav1`),
+> **NOTE:** This usually installs `ffmpeg 7.X` for your platform (check the version installed with `ffmpeg -encoders | grep libsvtav1`). If it isn't `ffmpeg 7.X` or lacks `libsvtav1` support, you can explicitly install `ffmpeg 7.X` using: `conda install ffmpeg=7.1.1 -c conda-forge`
+ - or, install [ffmpeg build dependencies](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#GettheDependencies) and [compile ffmpeg from source with libsvtav1](https://trac.ffmpeg.org/wiki/CompilationGuide/Ubuntu#libsvtav1),
+ - and, make sure you use the corresponding ffmpeg binary to your install with `which ffmpeg`.
+- On Linux, if the left and right arrow keys and escape key don't have any effect during data recording, make sure you've set the `$DISPLAY` environment variable. See [pynput limitations](https://pynput.readthedocs.io/en/latest/limitations.html#linux).
+
+At the end of data recording, your dataset will be uploaded on your Hugging Face page (e.g. https://huggingface.co/datasets/cadene/koch_test) that you can obtain by running:
+```bash
+echo https://huggingface.co/datasets/${HF_USER}/koch_test
+```
+
+### b. Advice for recording dataset
+
+Once you're comfortable with data recording, it's time to create a larger dataset for training. A good starting task is grasping an object at different locations and placing it in a bin. We suggest recording at least 50 episodes, with 10 episodes per location. Keep the cameras fixed and maintain consistent grasping behavior throughout the recordings.
+
+In the following sections, you’ll train your neural network. After achieving reliable grasping performance, you can start introducing more variations during data collection, such as additional grasp locations, different grasping techniques, and altering camera positions.
+
+Avoid adding too much variation too quickly, as it may hinder your results.
+
+In the coming months, we plan to release a foundational model for robotics. We anticipate that fine-tuning this model will enhance generalization, reducing the need for strict consistency during data collection.
+
+### c. Visualize all episodes
+
+You can visualize your dataset by running:
+```bash
+python lerobot/scripts/visualize_dataset_html.py \
+ --repo-id ${HF_USER}/koch_test
+```
+
+Note: You might need to add `--local-files-only 1` if your dataset was not uploaded to the Hugging Face hub.
+
+This will launch a local web server that looks like this:
+
+*(screenshot of the local visualization page: an episode list sidebar, the synchronized camera videos, the language instruction and the current timestamp)*