---
language:
- en
license: apache-2.0
task_categories:
- robotics
tags:
- RDT
- rdt
- RDT 2
- manipulation
- bimanual
- ur5e
- webdataset
- vision-language-action
arxiv: 2602.03310
---

# RDT2 Dataset

[Project page](https://rdt-robotics.github.io/rdt2/) | [Paper](https://huggingface.co/papers/2602.03310) | [GitHub](https://github.com/thu-ml/RDT2)

## Dataset Summary

This dataset provides shards in the **WebDataset** format for fine-tuning [RDT-2](https://rdt-robotics.github.io/rdt2/) or other policy models on **bimanual manipulation**.
Each sample packs:

* a **binocular RGB image** (left + right wrist cameras concatenated horizontally)
* a **relative action chunk** (continuous control, 0.8 s at 30 Hz)
* a **discrete action token sequence** (e.g., from a [Residual VQ action tokenizer](https://huggingface.co/robotics-diffusion-transformer/RVQActionTokenizer))
* a **metadata JSON** whose `sub_task_instruction_key` indexes the corresponding instruction in the top-level `instructions.json`

Data were collected on a **bimanual UR5e** setup.

---

## Supported Tasks

* **Instruction-conditioned bimanual manipulation**, including:
  - Pouring water: different water bottles and cups
  - Cleaning the desktop: different dustpans and paper balls
  - Folding towels: towels of different sizes and colors
  - Stacking cups: cups of different sizes and colors

---

## Data Structure

### Shard layout

Shards are named `shard-*.tar`. Inside each shard:

```
shard-000000.tar
├── 0.image.jpg         # binocular RGB, H=384, W=768, C=3, uint8
├── 0.action.npy        # relative actions, shape (24, 20), float32
├── 0.action_token.npy  # action tokens, shape (27,), int16 ∈ [0, 1024)
├── 0.meta.json         # metadata; includes "sub_task_instruction_key"
├── 1.image.jpg
├── 1.action.npy
├── 1.action_token.npy
├── 1.meta.json
└── ...
shard-000001.tar
shard-000002.tar
...
```

> **Image:** binocular wrist cameras concatenated horizontally → `np.ndarray` of shape `(384, 768, 3)` with `dtype=uint8` (stored as JPEG).
>
> **Action (continuous):** `np.ndarray` of shape `(24, 20)`, `dtype=float32` (24-step chunk = 0.8 s × 30 Hz, 20-D control).
>
> **Action tokens (discrete):** `np.ndarray` of shape `(27,)`, `dtype=int16`, values in `[0, 1024)`.
>
> **Metadata:** `meta.json` contains at least `sub_task_instruction_key`, pointing to an entry in the top-level `instructions.json`.
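
For quick inspection outside a dataloader, a sample can be read straight from a shard with the standard library. The sketch below builds a tiny synthetic shard (zero-filled arrays, invented key, image member omitted for brevity) purely for illustration; real shards follow the layout above:

```python
import io
import json
import os
import tarfile
import tempfile

import numpy as np

# Synthetic sample matching the documented layout (illustrative values only).
action = np.zeros((24, 20), dtype=np.float32)
tokens = np.zeros((27,), dtype=np.int16)
meta = {"sub_task_instruction_key": "fold_cloth_step_3"}

def npy_bytes(arr):
    # Serialize an array to .npy bytes in memory.
    buf = io.BytesIO()
    np.save(buf, arr)
    return buf.getvalue()

shard_path = os.path.join(tempfile.mkdtemp(), "shard-demo.tar")
with tarfile.open(shard_path, "w") as tar:
    for name, payload in [
        ("0.action.npy", npy_bytes(action)),
        ("0.action_token.npy", npy_bytes(tokens)),
        ("0.meta.json", json.dumps(meta).encode()),
    ]:
        info = tarfile.TarInfo(name)
        info.size = len(payload)
        tar.addfile(info, io.BytesIO(payload))

# Read one sample back; WebDataset groups these members by the "0." prefix.
with tarfile.open(shard_path) as tar:
    loaded_action = np.load(io.BytesIO(tar.extractfile("0.action.npy").read()))
    loaded_tokens = np.load(io.BytesIO(tar.extractfile("0.action_token.npy").read()))
    loaded_meta = json.loads(tar.extractfile("0.meta.json").read())

print(loaded_action.shape, loaded_tokens.dtype)  # (24, 20) int16
```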

---

## Example Data Instance

```json
{
  "image": "0.image.jpg",
  "action": "0.action.npy",
  "action_token": "0.action_token.npy",
  "meta": {
    "sub_task_instruction_key": "fold_cloth_step_3"
  }
}
```

---

## How to Use

### 1) Official guidelines for fine-tuning the RDT-2 series

Use the official [scripts](https://github.com/thu-ml/RDT2) and [guidelines](https://github.com/thu-ml/RDT2) provided in the GitHub repository.

### 2) Minimal loading example

```python
import os
import glob
import json
import random

import webdataset as wds


def no_split(src):
    # Pass shards through unchanged (no splitting across nodes/workers).
    yield from src


def get_train_dataset(shards_dir):
    shards = sorted(glob.glob(os.path.join(shards_dir, "shard-*.tar")))
    assert shards, f"No shards under {shards_dir}"
    random.shuffle(shards)

    # Only split shards across dataloader workers if there are enough of them.
    num_workers = wds.utils.pytorch_worker_info()[-1]
    workersplitter = wds.split_by_worker if len(shards) > num_workers else no_split

    dataset = (
        wds.WebDataset(
            shards,
            shardshuffle=False,
            nodesplitter=no_split,
            workersplitter=workersplitter,
            resampled=True,
        )
        .repeat()
        .shuffle(8192, initial=8192)
        .decode("pil")
        .map(
            lambda sample: {
                "image": sample["image.jpg"],
                "action_token": sample["action_token.npy"],
                "meta": sample["meta.json"],
            }
        )
        # One "epoch" = 2048 hours of 30 Hz data.
        .with_epoch(nsamples=(2048 * 30 * 60 * 60))
    )

    return dataset


with open(os.path.join("<Dataset Directory>", "instructions.json")) as fp:
    instructions = json.load(fp)
dataset = get_train_dataset(os.path.join("<Dataset Directory>", "shards"))
```
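
Each sample's `meta` carries only a key; the instruction text itself lives in the top-level `instructions.json`. A minimal lookup sketch, where the dictionary contents are invented placeholders rather than real dataset entries:

```python
import json

# Invented stand-ins for instructions.json and one sample's meta.json.
instructions_json = '{"fold_cloth_step_3": "Fold the towel in half with both arms."}'
instructions = json.loads(instructions_json)
meta = {"sub_task_instruction_key": "fold_cloth_step_3"}

# Resolve the language instruction for this sample.
instruction = instructions[meta["sub_task_instruction_key"]]
print(instruction)  # Fold the towel in half with both arms.
```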
---

## Ethical Considerations

* Contains robot teleoperation/automation data. No PII is present by design.
* Ensure safe deployment/testing on real robots; follow lab safety and manufacturer guidelines.

---

## Citation

```bibtex
@article{rdt2,
  title={RDT2: Exploring the Scaling Limit of UMI Data Towards Zero-Shot Cross-Embodiment Generalization},
  author={RDT Team},
  journal={arXiv preprint arXiv:2602.03310},
  year={2026}
}
```

---

## License

* **Dataset license:** Apache-2.0.
* Ensure compliance when redistributing derived data or models.