# UniHand Human Data Preview (Customized LeRobot v3 Format)

This is a preview of the human-data portion of UniHand, stored in our customized LeRobot v3 format.

We customized the LeRobot v3 format to better fit the structure of our human data and to facilitate downstream usage. The dataset layout and file formats are designed to be intuitive and efficient for common use cases.

## 1. Dataset Layout

```text
{dataset_root}/
  data/
    chunk-000/file-000000.parquet
    chunk-000/file-000001.parquet
    ...
  videos/
    ego-view/
      chunk-000/file-000000.mp4
      chunk-000/file-000001.mp4
      ...
  meta/
    info.json
    tasks_instruction.jsonl
    tasks_description.jsonl
    episodes/
      instruction/
        chunk-000/file-000000.parquet
        ...
      description/
        chunk-000/file-000000.parquet
        ...
```

## 2. `meta/info.json`

`meta/info.json` stores dataset-level metadata.

Example:

```json
{
  "base_dataset": "arctic",
  "dataset_variants": ["arctic", "arctic_aug", "arctic_aug2"],
  "fps": 30.0,
  "action_stride": 1,
  "robot_type": "human hand",
  "layout": "file-centric-split-mirror-v4",
  "mirror_video_transform": "horizontal_flip",
  "mirror_text_transform_version": "left-right-clockwise-v1",
  "num_files": 1641,
  "num_episodes": 229814
}
```

Common keys:

- `base_dataset`
- `dataset_variants`
- `fps`
- `action_stride`
- `robot_type`
- `layout`
- `mirror_video_transform`
- `mirror_text_transform_version`
- `num_files`
- `num_episodes`

How to use it:

- read `layout` as the dataset layout identifier
- read `action_stride` to interpret action horizons
- read `fps` as the default frame rate if you need dataset-level timing metadata
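
As a sketch, the metadata can be loaded with the standard `json` module; the snippet below parses a trimmed copy of the example above (in practice you would call `json.load` on `meta/info.json` under your dataset root):

```python
import json

# A trimmed copy of the example info.json above.
info = json.loads("""
{
  "fps": 30.0,
  "action_stride": 1,
  "layout": "file-centric-split-mirror-v4"
}
""")

fps = info["fps"]                      # default frame rate
action_stride = info["action_stride"]  # stride for interpreting action horizons
layout = info["layout"]                # dataset layout identifier
```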

## 3. `meta/tasks_instruction.jsonl`

This file is the text registry for instruction / prediction samples.

Each line is one JSON object:

```json
{"task_index": 0, "task": "Pick up the object ..."}
```

Fields:

- `task_index`: integer task id
- `task`: task text

How to use it:

- if you are reading from `meta/episodes/instruction/...`, use this file to resolve `task_index`
- `task_index` is contiguous and starts from `0` within this file
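
A small helper (the function name is hypothetical) can turn the JSONL lines into a `task_index` → text map; the same helper works for `meta/tasks_description.jsonl`:

```python
import json

def load_task_registry(lines):
    """Map task_index -> task text from JSONL lines, one JSON object per line."""
    tasks = {}
    for line in lines:
        line = line.strip()
        if line:  # skip blank lines
            obj = json.loads(line)
            tasks[obj["task_index"]] = obj["task"]
    return tasks

# Usage: tasks = load_task_registry(open("meta/tasks_instruction.jsonl"))
```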

## 4. `meta/tasks_description.jsonl`

This file is the text registry for description samples.

Each line is one JSON object:

```json
{"task_index": 0, "task": "The hands lift the object ..."}
```

Fields:

- `task_index`: integer task id
- `task`: text description

How to use it:

- if you are reading from `meta/episodes/description/...`, use this file to resolve `task_index`
- `task_index` is contiguous and starts from `0` within this file

## 5. `meta/episodes/instruction/**/*.parquet`

These parquet shards store instruction / prediction episode rows.

Each row describes one temporal slice inside one exported file.

Common fields:

- `file_id`
- `start_timestep`
- `end_timestep`
- `embodiment`
- `task_index`
- `row_id`

Meaning:

- `file_id`: file-level id used to locate motion parquet and video
- `start_timestep`: inclusive start frame
- `end_timestep`: exclusive end frame
- `embodiment`: effective embodiment for this row
- `task_index`: text id in `meta/tasks_instruction.jsonl`
- `row_id`: stable row id

How to use it:

- read one row
- resolve text from `meta/tasks_instruction.jsonl`
- resolve motion/video from `file_id`
- slice the file timeline with `[start_timestep, end_timestep)`
- each parquet shard contains many episode rows, and those rows may reference many different `file_id` values
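
Because one shard's rows may point at many files, one practical pattern (an illustrative optimization, not part of the format) is to group episode rows by `file_id` so each motion parquet is loaded only once:

```python
from collections import defaultdict

def group_rows_by_file(rows):
    """Group episode rows by file_id.

    `rows` is any iterable of mappings carrying the episode fields listed
    above; each group collects (start_timestep, end_timestep) slices, with
    inclusive start and exclusive end.
    """
    by_file = defaultdict(list)
    for row in rows:
        by_file[int(row["file_id"])].append(
            (int(row["start_timestep"]), int(row["end_timestep"]))
        )
    return dict(by_file)
```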

## 6. `meta/episodes/description/**/*.parquet`

These parquet shards store description episode rows.

The row structure is the same as the instruction split:

- `file_id`
- `start_timestep`
- `end_timestep`
- `embodiment`
- `task_index`
- `row_id`

How to use it:

- read one row
- resolve text from `meta/tasks_description.jsonl`
- resolve motion/video from `file_id`
- slice the file timeline with `[start_timestep, end_timestep)`
- each parquet shard contains many episode rows, and those rows may reference many different `file_id` values

## 7. `data/chunk-xxx/file-xxxxxx.parquet`

Each motion parquet stores frame-level hand motion for one `file_id`.

Common per-frame columns:

- `camera_c2w`
- `left.trans_w`
- `left.rot_axis_angle_w`
- `left.theta`
- `left.beta`
- `right.trans_w`
- `right.rot_axis_angle_w`
- `right.theta`
- `right.beta`
- `valid.left_horizon`
- `valid.right_horizon`
- `valid.joint_horizon`

Meaning:

- `camera_c2w`: flattened camera-to-world transform
- `left/right.trans_w`: wrist translation in world frame
- `left/right.rot_axis_angle_w`: wrist rotation in axis-angle form
- `left/right.theta`: MANO pose parameters
- `left/right.beta`: MANO shape parameters
- `valid.*_horizon`: future validity horizon for action extraction

How to use it:

- locate the file from `file_id`
- load the parquet
- use frame indices in the global file timeline
- if an episode row gives `start_timestep` and `end_timestep`, only use frames in `[start_timestep, end_timestep)`

Path rule:

```text
data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet
```
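
The rule maps directly to an f-string (the function name is hypothetical):

```python
def motion_parquet_path(file_id: int) -> str:
    """Apply the chunking rule above: 1000 files per chunk, zero-padded ids."""
    return f"data/chunk-{file_id // 1000:03d}/file-{file_id:06d}.parquet"
```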

## 8. `videos/ego-view/chunk-xxx/file-xxxxxx.mp4`

These mp4 files store the ego-view video aligned with the motion parquet.

How to resolve the path:

1. Read `file_id`.
2. Use:

```text
videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4
```
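
This is the same chunking rule as the motion parquet, with the video prefix and `.mp4` extension; as a one-line sketch (hypothetical function name):

```python
def ego_video_path(file_id: int) -> str:
    """Video path for a file_id: same chunk rule as the motion parquet."""
    return f"videos/ego-view/chunk-{file_id // 1000:03d}/file-{file_id:06d}.mp4"
```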

## 9. How To Read One Sample

### Instruction / prediction sample

1. Read one row from `meta/episodes/instruction/**/*.parquet`.
2. Use `task_index` to look up text in `meta/tasks_instruction.jsonl`.
3. Use `file_id` to locate the motion parquet in `data/...`.
4. Use `file_id` to locate the ego-view video in `videos/ego-view/...`.
5. Restrict the valid episode range to `start_timestep <= t < end_timestep`.
6. Read the frame(s) you need from the motion parquet.
7. Read the aligned video frame(s) using the same file-global frame index.
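
Steps 1-5 can be composed into one resolver, sketched below; `row` is one episode row (as a mapping), `tasks` is the registry loaded from `meta/tasks_instruction.jsonl`, and all function and argument names are hypothetical:

```python
def read_instruction_sample(row, tasks, root="."):
    """Resolve one instruction episode row into text, paths, and frame range.

    Returned "frames" are file-global indices with inclusive start and
    exclusive end, valid for both the motion parquet and the video.
    """
    file_id = int(row["file_id"])
    stem = f"chunk-{file_id // 1000:03d}/file-{file_id:06d}"
    return {
        "text": tasks[int(row["task_index"])],
        "motion_path": f"{root}/data/{stem}.parquet",
        "video_path": f"{root}/videos/ego-view/{stem}.mp4",
        "frames": range(int(row["start_timestep"]), int(row["end_timestep"])),
    }
```

Steps 6-7 then load the resolved paths with your parquet and video readers of choice and index both with the same `frames`.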

### Description sample

1. Read one row from `meta/episodes/description/**/*.parquet`.
2. Use `task_index` to look up text in `meta/tasks_description.jsonl`.
3. Use `file_id` to locate the motion parquet in `data/...`.
4. Use `file_id` to locate the ego-view video in `videos/ego-view/...`.
5. Restrict the valid episode range to `start_timestep <= t < end_timestep`.
6. Read the frame(s) you need from the motion parquet.
7. Read the aligned video frame(s) using the same file-global frame index.

## 10. Valid Horizon

The `valid.*_horizon` columns are used to check whether a timestep can support future action extraction.

Use:

- `valid.left_horizon` for left-hand rows
- `valid.right_horizon` for right-hand rows
- `valid.joint_horizon` for bimanual rows

The horizon is measured in units of `meta/info.json["action_stride"]`.

If your base timestep is `t` and your future chunk needs `K` stride-steps, require:

```text
selected_horizon[t] >= K
```

and also require all sampled future timesteps to stay inside the episode range:

```text
t + K * action_stride < end_timestep
```
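
Both conditions can be combined into a single check, sketched below with hypothetical names:

```python
def can_extract_action(t, horizon, K, action_stride, end_timestep):
    """True if timestep t supports a future chunk of K stride-steps.

    `horizon` is the selected valid.*_horizon value at t, measured in units
    of action_stride; the second clause keeps every sampled future timestep
    inside the episode range.
    """
    return horizon >= K and t + K * action_stride < end_timestep
```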