# Cloth-Sphere Push Datasets

Four splits of a cloth-being-pushed-over-a-sphere dataset, generated with
`examples/generate_cloth_sphere_push.py` (see `gendata.sh` at repo root).

The train/test split is defined by **cloth size** (particles per side):

| Split | Directory | Episodes | Cloth size range | Seed |
|-------|-----------|----------|------------------|------|
| Train | `cloth_push_sphere/` | 2000 | 100–150 × 100–150 | 42 |
| ID test | `cloth_push_sphere_id_test/` | 100 | 100–150 × 100–150 | 88 |
| OOD test (smaller) | `cloth_push_sphere_ood_test_smaller/` | 50 | 50–100 × 50–100 | 42 |
| OOD test (larger) | `cloth_push_sphere_ood_test_larger/` | 50 | 150–200 × 150–200 | 42 |

All splits share the same scene setup, horizon, resolution, and action parameters.

---

## Directory Layout (per split)

```
<split>/
├── dataset.h5      ← actions + picker positions for all episodes
└── episodes/
    ├── episode_00000.mp4
    └── ...
```

No `observations` array is stored in the HDF5 file — the MP4s are the
canonical source of visual observations (`hdf5+mp4` format).
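
Since the MP4s are the canonical observations, a cheap sanity check is that the number of videos in `episodes/` matches the episode count recorded in `dataset.h5` (its `num_episodes` root attribute). A minimal sketch — the helper name is ours:

```python
from pathlib import Path

def count_episode_videos(split_dir):
    """Count episode_*.mp4 files under <split>/episodes/.

    Compare the result against the `num_episodes` root attribute
    of the split's dataset.h5 to catch truncated downloads.
    """
    return len(list(Path(split_dir, "episodes").glob("episode_*.mp4")))
```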

---

## Common Parameters

| Parameter | Value |
|-----------|-------|
| Horizon (steps/episode) | 64 |
| Image resolution | 512 × 512 |
| Video FPS | 16 |
| Grab steps | 3 |
| Action repeat | 16 |
| Sphere radius | 0.25 m |
| Sphere centre | (0.0, 0.25, 0.0) m |
| Num variations | 20 |
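
These values are also stored as root attributes in each split's `dataset.h5` (see the HDF5 layout below), so they can be verified programmatically. A hedged sketch — the attribute names follow the layout listed in this README, and the helper name is ours:

```python
# Expected shared parameters; keys mirror the root attribute names
# listed in the HDF5 layout section of this README.
EXPECTED_COMMON = {
    "horizon": 64,
    "img_size": 512,
    "grab_steps": 3,
    "action_repeat": 16,
    "sphere_radius": 0.25,
}

def common_param_mismatches(attrs, expected=EXPECTED_COMMON):
    """Return {name: actual_value} for any shared parameter that deviates."""
    return {k: attrs[k] for k in expected if k in attrs and attrs[k] != expected[k]}
```

Pass it `dict(f.attrs)` from an open `h5py.File`; an empty result means the split matches the table above.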

---

## HDF5 Layout

```
dataset.h5
├── episode_00000/
│   ├── actions           (64, 8)    float32
│   └── picker_positions  (65, 2, 3) float32
│       attrs: cloth_dimx, cloth_dimy
├── episode_00001/
│   └── ...
attrs: num_episodes, horizon, img_size, grab_steps, action_repeat,
       cloth_dimx_range, cloth_dimy_range, sphere_radius, sphere_center,
       seed, num_variations, notes
```
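
The tree above can be reproduced by walking the file with `h5py`'s `visititems`. A minimal sketch (the function name is ours):

```python
import h5py

def print_layout(path):
    """Print every group and dataset in a dataset.h5, mirroring the tree above."""
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(f"{name}  {obj.shape} {obj.dtype}")
            else:
                print(name + "/")
        f.visititems(visit)
```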

### `actions` — `(T, 8)` float32

```
actions[t] = [dx1, dy1, dz1, grab1, dx2, dy2, dz2, grab2]
              ←—— picker 0 ——→     ←—— picker 1 ——→
```

| Field | Units | Notes |
|-------|-------|-------|
| `dx`, `dy`, `dz` | metres / sub-step | Actual executed delta: `(pos_after − pos_before) / action_repeat` |
| `grab` | — | Always 1.0; the cloth is held throughout |

**Faithful recording**: the stored delta is the *actual* motion observed in the
simulator, not the intended command. Any clamping by the physics engine is
already reflected, so `actions[t]` exactly explains the motion in the MP4.

The first `grab_steps = 3` actions have `dx = dy = dz = 0` (grab-only, no motion).
Net picker displacement per step is `delta × action_repeat` (up to `16 × 0.01 = 0.16 m`).
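
Because the stored deltas are the executed ones, integrating them (scaled by `action_repeat`) from the initial picker state should reproduce `picker_positions` up to float32 rounding. A minimal sketch, with a helper name of our choosing:

```python
import numpy as np

def reconstruct_positions(actions, p0, action_repeat=16):
    """Integrate executed per-sub-step deltas into picker trajectories.

    actions: (T, 8) — [dx1, dy1, dz1, grab1, dx2, dy2, dz2, grab2]
    p0:      (2, 3) — picker_positions[0]
    Returns a (T+1, 2, 3) array that should match picker_positions
    (up to float32 rounding), since the deltas record the actual motion.
    """
    deltas = actions.reshape(-1, 2, 4)[:, :, :3] * action_repeat  # (T, 2, 3)
    return np.concatenate([p0[None], p0[None] + np.cumsum(deltas, axis=0)])
```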

### `picker_positions` — `(T+1, 2, 3)` float32

World-space XYZ in metres (Y-up). Index `t` is the state *before* `actions[t]`.
Index `T` is the terminal state after the last action.

### Episode attributes

| Attribute | Description |
|-----------|-------------|
| `cloth_dimx` | Cloth width in particles (sampled from `cloth_dimx_range`) |
| `cloth_dimy` | Cloth height in particles (sampled from `cloth_dimy_range`) |

---

## Temporal Alignment

```
t=0    picker_positions[0]  → actions[0]  → picker_positions[1]
       video frame 0                         video frame 1
...
t=63   picker_positions[63] → actions[63] → picker_positions[64]
       video frame 63                        video frame 64
```

`picker_positions[t]` and video frame `t` are the state *before* `actions[t]`.
Each MP4 has `T + 1 = 65` frames (including the terminal frame).
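
The alignment above can be encoded as a small indexing helper (the function name is ours):

```python
def transition(frames, actions, picker_positions, t):
    """(state_t, action_t, state_{t+1}) for step t.

    frames:           T+1 video frames (frame t is the pre-action frame for actions[t])
    actions:          length-T action sequence
    picker_positions: T+1 picker states (index t is pre-action, index T terminal)
    """
    assert len(frames) == len(actions) + 1 == len(picker_positions)
    s_t = (frames[t], picker_positions[t])
    s_next = (frames[t + 1], picker_positions[t + 1])
    return s_t, actions[t], s_next
```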

---

## Reading the Dataset

```python
import h5py
import imageio

with h5py.File('data/cloth_push_sphere/dataset.h5', 'r') as f:
    horizon = int(f.attrs['horizon'])              # 64
    action_repeat = int(f.attrs['action_repeat'])  # 16

    for ep_key in sorted(f.keys()):
        grp = f[ep_key]
        acts = grp['actions'][:]           # (64, 8) float32 ndarray
        ppos = grp['picker_positions'][:]  # (65, 2, 3) float32 ndarray
        dimx = int(grp.attrs['cloth_dimx'])
        dimy = int(grp.attrs['cloth_dimy'])

        # (state_t, action_t, state_{t+1}) triples
        for t in range(horizon):
            pos_before = ppos[t]      # (2, 3)
            action = acts[t]          # (8,)
            pos_after = ppos[t + 1]   # (2, 3)

# Load the corresponding video (frame t matches picker_positions[t])
frames = imageio.v2.mimread('data/cloth_push_sphere/episodes/episode_00000.mp4')
# frames[t]: (512, 512, 3) uint8, pre-action frame for actions[t]
```

---

## Storage

| Split | HDF5 | Videos | Total |
|-------|------|--------|-------|
| `cloth_push_sphere` (train) | ~11 MB | ~172 MB | ~183 MB |
| `cloth_push_sphere_id_test` | ~0.6 MB | ~8.7 MB | ~9.3 MB |
| `cloth_push_sphere_ood_test_smaller` | ~0.3 MB | ~2.4 MB | ~2.7 MB |
| `cloth_push_sphere_ood_test_larger` | ~0.3 MB | ~6.6 MB | ~6.9 MB |
| **Total** | | | **~202 MB** |
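
To recompute these figures for a local copy, summing file sizes under a split directory is enough. A minimal sketch using decimal megabytes (the helper name is ours):

```python
from pathlib import Path

def split_size_mb(split_dir):
    """Total on-disk size of one split (dataset.h5 + MP4s) in decimal MB."""
    total = sum(p.stat().st_size for p in Path(split_dir).rglob("*") if p.is_file())
    return total / 1e6
```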