# Description of Collected Data

**Task 1: pass the knife**

The task has 3 modes: pass the knife with the sharp end pointing towards the human; pass the knife with the handle pointing towards the human and the sharp end pointing right; and pass the knife with the handle pointing towards the human and the sharp end pointing left. Each mode has 35 demos.


**Task 2: push the block**

The task has 2 modes: push the block from the left and push the block from the right. Each mode has 50 demos.

**Task 3: put items in the box**

The task has 3 modes, distinguished by the order in which the items are placed in the box: black box, strawberries, blue box; blue box, black box, strawberries; and strawberries, black box, blue box. Each mode has 30 demos.




# Robot Data Processing

Utilities for turning raw robot HDF5 recordings into a synchronized dataset and then into the [LeRobot v2.1](https://github.com/huggingface/lerobot) format for training.

Pipeline:

```
raw image + low-dim HDF5
        │  sync_image_low_dim.py

    synced HDF5  ──►  visualize_synced_data.py  (per-demo MP4 previews)

        │  convert_synced_h5_to_lerobot.py

    LeRobot v2.1 dataset folder
```

## Environment setup

Create and activate a conda env, then install the dependencies:

```
conda create -n robotdata python=3.10 -y
conda activate robotdata
pip install h5py numpy opencv-python datasets
pip install "lerobot @ git+https://github.com/huggingface/lerobot@0cf864870cf29f4738d3ade893e6fd13fbd7cdb5"
```

**Why this specific commit is required for Pi-0.5 / OpenPI:**
LeRobot 0.4.3+ writes datasets in the v3.0 format, which **OpenPI (Pi-0 / Pi-0.5) cannot read**. OpenPI requires the v2.1 format, which is produced by LeRobot commit `0cf864870cf29f4738d3ade893e6fd13fbd7cdb5` (reports itself as version `0.1.0`). Do **not** upgrade LeRobot unless you also upgrade the downstream training code.

Notes:
- Python 3.10 is the most broadly compatible with this LeRobot commit; 3.11 also works.
- `lerobot` pulls in `torch`, `huggingface_hub`, and other heavy deps. If you need a specific CUDA build of `torch`, install it before `lerobot` using the selector on pytorch.org (example below).
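
For example, a CUDA 12.1 build can be installed like this before `lerobot`; the exact index URL depends on your CUDA version, so take it from the pytorch.org selector and treat this line as an illustration:

```
pip install torch --index-url https://download.pytorch.org/whl/cu121
```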

Quick sanity check:

```
python -c "import h5py, numpy, cv2, datasets, lerobot; print('ok')"
```
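
To confirm the pinned commit is actually the one installed (as noted above, it should report version `0.1.0`):

```
pip show lerobot   # expect "Version: 0.1.0"
```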

## Scripts

### 1. `sync_image_low_dim.py` — align two HDF5 streams

Merges an image HDF5 and a low-dimensional HDF5 into a single synced file. Image timestamps are the master timeline; low-dim samples are aligned to them by nearest timestamp. Zero-valued timestamps, sudden timestamp jumps, and non-overlapping intervals are handled by skipping the affected demos. After excluded or skipped demos are dropped, the remaining demos are **renamed to be consecutive** (`demo_0`, `demo_1`, …) in the output so there are no gaps.

**Inputs (per HDF5):** `data/<demo>/obs/<timestamp_key>` plus any number of per-demo datasets.

**Output:** `data/<demo>/obs/{timestamp, <image_keys…>, <lowdim_keys…>}` and optional `data/<demo>/actions`.
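
For reference, the nearest-timestamp matching is conceptually the following (a minimal numpy sketch with illustrative names, not the script itself):

```
import numpy as np

def nearest_indices(image_ts: np.ndarray, lowdim_ts: np.ndarray) -> np.ndarray:
    """For each image timestamp, return the index of the closest low-dim sample (lowdim_ts sorted)."""
    right = np.searchsorted(lowdim_ts, image_ts)        # insertion points into the low-dim timeline
    left = np.clip(right - 1, 0, len(lowdim_ts) - 1)    # candidate neighbour on the left
    right = np.clip(right, 0, len(lowdim_ts) - 1)       # candidate neighbour on the right
    pick_right = np.abs(lowdim_ts[right] - image_ts) < np.abs(lowdim_ts[left] - image_ts)
    return np.where(pick_right, right, left)
```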

**Example:**

```
python sync_image_low_dim.py --image-h5 /path/raw_images.hdf5 --lowdim-h5 /path/raw_lowdim.hdf5 --output-h5 /path/synced.h5 --allow-missing
```

**Useful flags:**
- `--image-keys`, `--lowdim-keys` — restrict which obs datasets to copy (defaults to all except timestamp).
- `--exclude-demo demo_4 demo_5` — drop specific demos. Remaining demos are reindexed.
- `--skip-n N` — keep every `(N+1)`-th frame after syncing (e.g. `--skip-n 2` → keep 0, 3, 6, …).
- `--allow-missing` — log and skip demos with missing keys instead of failing.

### 2. `visualize_synced_data.py` — render per-demo MP4 previews

Renders each demo to an MP4 with selected camera views side-by-side and optional lowdim overlays as on-frame text. Useful to sanity-check a sync before running the LeRobot conversion.

**Example:**

```
python visualize_synced_data.py /path/synced.h5 --out-dir ./vis --fps 10 --image-keys agentview_image oak_image --overlay-keys robot0_eef_pos robot0_gripper_qpos
```

Outputs `./vis/<demo>.mp4` for each demo.
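
Conceptually, each output frame is the selected camera views concatenated side-by-side with the overlay values drawn as text. A minimal OpenCV sketch of one frame (illustrative only, not the script's exact rendering):

```
import cv2
import numpy as np

def render_frame(images: list[np.ndarray], overlays: dict[str, np.ndarray]) -> np.ndarray:
    """images: HxWx3 uint8 arrays of equal height; overlays: name -> 1-D value array for this frame."""
    frame = cv2.hconcat(images)                          # camera views side-by-side
    for i, (name, values) in enumerate(overlays.items()):
        text = f"{name}: " + " ".join(f"{v:.3f}" for v in values)
        cv2.putText(frame, text, (10, 20 + 20 * i),
                    cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    return frame
```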

### 3. `convert_synced_h5_to_lerobot.py` — synced HDF5 → LeRobot v2.1

Produces a LeRobot dataset directly in `--output-dir`. The folder must not already exist.

**Example (30 Hz → 10 Hz, 2 cameras, 8-dim state):**

```
python convert_synced_h5_to_lerobot.py --synced-h5 /path/synced.h5 --output-dir /path/lerobot_dataset --fps 10 --source-fps 30 --task "pass the knife by the sharp side" --image-map agentview_image:base_rgb oak_image:wrist_rgb --state-keys robot0_joint_pos robot0_gripper_qpos --action-source next_state --image-size 256 256
```

**Key flags:**
- `--output-dir PATH` — final dataset folder (must not exist; parent is created if needed).
- `--fps N` / `--source-fps M` — target and source frame rates. `M` must be divisible by `N`; the script subsamples by stride `M/N`. If `--source-fps` is omitted, it is estimated from the first demo's timestamps.
- `--image-map src:dst [...]` — rename HDF5 image keys to LeRobot feature names.
- `--state-keys k1 k2 [...]` — concatenate these lowdim datasets into a single `state` vector (order matters).
- `--action-source {next_state, hdf5_actions}` — `next_state` uses the next state as the action (useful when the HDF5 has no `actions` dataset); `hdf5_actions` uses the actions stored in the HDF5 (see the sketch after this list).
- `--image-size H W` — resize images. Omit to keep native resolution.
- `--task "..."` — language instruction stored with every frame.
- `--repo-id user/name` + `--push-to-hub` — optional, pushes to HuggingFace.
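
To make the subsampling and the `next_state` convention concrete, here is a minimal numpy sketch of the per-demo state/action construction (illustrative only; the real script also handles images, timestamps, and the LeRobot writer):

```
import numpy as np

def build_state_action(lowdim: dict[str, np.ndarray], state_keys: list[str],
                       source_fps: int, fps: int):
    """Assumes each lowdim[k] has shape (T, Dk): concatenate state keys, subsample by
    stride source_fps // fps, and use the next state as the action."""
    stride = source_fps // fps                                  # e.g. 30 // 10 == 3
    state = np.concatenate([lowdim[k] for k in state_keys], axis=-1)[::stride]
    action = state[1:]                                          # --action-source next_state
    return state[:-1], action                                   # drop the last frame (no next state)
```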

**Output layout** (LeRobot v2.1):

```
<output-dir>/
  meta/    info.json, episodes.jsonl, tasks.jsonl, episodes_stats.jsonl
  data/    chunk-000/episode_<6digit>.parquet
```
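
After conversion you can spot-check the metadata without loading the whole dataset, e.g. by reading `meta/info.json` (a small sketch; the exact fields come from the LeRobot v2.1 writer):

```
import json
from pathlib import Path

info = json.loads(Path("lerobot_dataset/meta/info.json").read_text())
for key in ("codebase_version", "fps", "total_episodes", "total_frames"):
    print(key, info.get(key))   # .get() so a missing field prints None instead of raising
```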

## Typical workflow

```
# 1. sync raw HDF5s
python sync_image_low_dim.py --image-h5 raw_images.hdf5 --lowdim-h5 raw_lowdim.hdf5 --output-h5 synced.h5 --allow-missing

# 2. eyeball the result
python visualize_synced_data.py synced.h5 --out-dir vis --fps 10 --image-keys agentview_image oak_image --overlay-keys robot0_eef_pos robot0_gripper_qpos

# 3. convert to LeRobot
python convert_synced_h5_to_lerobot.py --synced-h5 synced.h5 --output-dir ./lerobot_dataset --fps 10 --source-fps 30 --task "your instruction" --image-map agentview_image:base_rgb oak_image:wrist_rgb --state-keys robot0_joint_pos robot0_gripper_qpos --action-source next_state --image-size 256 256
```