---
task_categories:
- robotics
language:
- en
tags:
- RDT
- rdt
- RDT 2
- manipulation
- bimanual
- ur5e
- webdataset
- vision-language-action
---

## Dataset Summary

This dataset provides shards in the **WebDataset** format for fine-tuning [RDT-2](https://github.com/thu-ml/RDT2) or other policy models on **bimanual manipulation**.
Each sample packs:

* a **binocular RGB image** (left and right wrist cameras concatenated horizontally)
* a **relative action chunk** (continuous control, 0.8 s at 30 Hz)
* a **discrete action token sequence** (e.g., from a [Residual VQ action tokenizer](https://huggingface.co/robotics-diffusion-transformer/RVQActionTokenizer))
* a **metadata JSON** with an instruction key, `sub_task_instruction_key`, used to look up the corresponding instruction in `instructions.json`

Data were collected on a **bimanual UR5e** setup.

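Because the two wrist views are concatenated along the width axis, they can be recovered by splitting each decoded frame in half. A minimal sketch, using a zero-filled array as a stand-in for a decoded frame:

```python
import numpy as np

# Synthetic stand-in for a decoded binocular frame (H=384, W=768, C=3).
frame = np.zeros((384, 768, 3), dtype=np.uint8)

# Each wrist view occupies one half of the width.
half = frame.shape[1] // 2
left_view = frame[:, :half]   # shape (384, 384, 3)
right_view = frame[:, half:]  # shape (384, 384, 3)
```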
---

## Supported Tasks

* **Instruction-conditioned bimanual manipulation**, including:
  - Pouring water: different water bottles and cups
  - Cleaning the desktop: different dustpans and paper balls
  - Folding towels: towels of different sizes and colors
  - Stacking cups: cups of different sizes and colors
  - Folding clothes: clothes of different sizes and colors

---

## Data Structure

### Shard layout

Shards are named `shard-*.tar`. Inside each shard:

```
shard-000000.tar
├── 0.image.jpg         # binocular RGB, H=384, W=768, C=3, uint8
├── 0.action.npy        # relative actions, shape (24, 20), float32
├── 0.action_token.npy  # action tokens, shape (27,), int16 ∈ [0, 1024]
├── 0.meta.json         # metadata; includes "sub_task_instruction_key"
├── 1.image.jpg
├── 1.action.npy
├── 1.action_token.npy
├── 1.meta.json
└── ...
shard-000001.tar
shard-000002.tar
...
```
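The layout above can be reproduced with Python's standard `tarfile` module. The following is only an illustration of the naming scheme, not the official conversion script; the JPEG entry is omitted for brevity, and the arrays are zero-filled placeholders with the documented shapes:

```python
import io
import json
import os
import tarfile
import tempfile

import numpy as np

def add_bytes(tar, name, payload):
    # Add a raw byte payload to the tar under the given member name.
    info = tarfile.TarInfo(name=name)
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

def npy_bytes(arr):
    # Serialize an array in .npy format.
    buf = io.BytesIO()
    np.save(buf, arr)
    return buf.getvalue()

tmpdir = tempfile.mkdtemp()
shard_path = os.path.join(tmpdir, "shard-000000.tar")
with tarfile.open(shard_path, "w") as tar:
    add_bytes(tar, "0.action.npy", npy_bytes(np.zeros((24, 20), dtype=np.float32)))
    add_bytes(tar, "0.action_token.npy", npy_bytes(np.zeros((27,), dtype=np.int16)))
    add_bytes(tar, "0.meta.json",
              json.dumps({"sub_task_instruction_key": "example_key"}).encode())

with tarfile.open(shard_path) as tar:
    names = sorted(tar.getnames())
```

All members sharing the prefix `0.` are grouped into one training sample by WebDataset.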

> **Image:** binocular wrist cameras concatenated horizontally → `np.ndarray` of shape `(384, 768, 3)` with `dtype=uint8` (stored as JPEG).
>
> **Action (continuous):** `np.ndarray` of shape `(24, 20)`, `dtype=float32` (24-step chunk, 20-D control).
>
> **Action tokens (discrete):** `np.ndarray` of shape `(27,)`, `dtype=int16`, values in `[0, 1024]`.
>
> **Metadata:** `meta.json` contains at least `sub_task_instruction_key`, pointing to an entry in the top-level `instructions.json`.
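If you read the raw tar members yourself (i.e., without WebDataset's `.decode()` step), the `.npy` and `.json` payloads can be restored with `np.load` and `json.loads`. A self-contained sketch that round-trips the documented payload types through raw bytes:

```python
import io
import json

import numpy as np

# Serialize a placeholder action chunk the way a tar member would store it.
action = np.zeros((24, 20), dtype=np.float32)
buf = io.BytesIO()
np.save(buf, action)
action_bytes = buf.getvalue()

meta_bytes = json.dumps({"sub_task_instruction_key": "fold_cloth_step_3"}).encode()

# Decode the raw bytes back into usable objects.
decoded_action = np.load(io.BytesIO(action_bytes))
decoded_meta = json.loads(meta_bytes)
```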

---

## Example Data Instance

```json
{
  "image": "0.image.jpg",
  "action": "0.action.npy",
  "action_token": "0.action_token.npy",
  "meta": {
    "sub_task_instruction_key": "fold_cloth_step_3"
  }
}
```
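The key in `meta` is resolved against the top-level `instructions.json` to obtain the natural-language instruction. A sketch with a hypothetical instructions file (the key and text below are illustrative, not taken from the real file):

```python
import json

# Hypothetical contents of instructions.json: a mapping from
# sub-task keys to natural-language instructions.
instructions = json.loads('{"fold_cloth_step_3": "Fold the cloth in half."}')

meta = {"sub_task_instruction_key": "fold_cloth_step_3"}
instruction = instructions[meta["sub_task_instruction_key"]]
```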

---

## How to Use

### 1) Official guidelines for fine-tuning the RDT 2 series

Follow the example [scripts](https://github.com/thu-ml/RDT2/blob/cf71b69927f726426c928293e37c63c4881b0165/data/utils.py#L48) and [guidelines](https://github.com/thu-ml/RDT2/blob/cf71b69927f726426c928293e37c63c4881b0165/data/utils.py#L48) in the RDT2 repository.

### 2) Minimal loading example

```python
import glob
import json
import os
import random

import webdataset as wds


def no_split(src):
    # Keep all shards on every node/worker instead of splitting them.
    yield from src


def get_train_dataset(shards_dir):
    shards = sorted(glob.glob(os.path.join(shards_dir, "shard-*.tar")))
    assert shards, f"No shards under {shards_dir}"
    random.shuffle(shards)

    # Split shards across dataloader workers only if there are enough shards.
    num_workers = wds.utils.pytorch_worker_info()[-1]
    workersplitter = wds.split_by_worker if len(shards) > num_workers else no_split

    dataset = (
        wds.WebDataset(
            shards,
            shardshuffle=False,
            nodesplitter=no_split,
            workersplitter=workersplitter,
            resampled=True,
        )
        .repeat()
        .shuffle(8192, initial=8192)
        .decode("pil")
        .map(
            lambda sample: {
                "image": sample["image.jpg"],
                "action_token": sample["action_token.npy"],
                "meta": sample["meta.json"],
            }
        )
        .with_epoch(nsamples=(2048 * 30 * 60 * 60))  # 2048 hours at 30 Hz
    )

    return dataset


with open(os.path.join("<Dataset Directory>", "instructions.json")) as fp:
    instructions = json.load(fp)
dataset = get_train_dataset(os.path.join("<Dataset Directory>", "shards"))
```

---

## Ethical Considerations

* Contains robot teleoperation/automation data. No PII is present by design.
* Ensure safe deployment/testing on real robots; follow lab safety and manufacturer guidelines.

---

## Citation

If you use this dataset, please cite the dataset and your project appropriately. For example:

```bibtex
TBD
```

---

## License

* **Dataset license:** Apache-2.0 (unless otherwise noted by the maintainers of your fork/release).
* Ensure compliance when redistributing derived data or models.

---

## Maintainers & Contributions

We welcome fixes and improvements to the conversion scripts and docs (see https://github.com/thu-ml/RDT2/tree/main#troubleshooting).
Please open issues/PRs with:

* OS + Python versions
* Minimal repro code
* Error tracebacks
* Any other helpful context