---
language:
- en
license: apache-2.0
task_categories:
- robotics
tags:
- RDT
- rdt
- RDT 2
- manipulation
- bimanual
- ur5e
- webdataset
- vision-language-action
arxiv: 2602.03310
---

# RDT2 Dataset

[Project page](https://rdt-robotics.github.io/rdt2/) | [Paper](https://huggingface.co/papers/2602.03310) | [GitHub](https://github.com/thu-ml/RDT2)

## Dataset Summary

This dataset provides shards in the **WebDataset** format for fine-tuning [RDT-2](https://rdt-robotics.github.io/rdt2/) or other policy models on **bimanual manipulation**.
Each sample packs:

* a **binocular RGB image** (left + right wrist cameras concatenated horizontally)
* a **relative action chunk** (continuous control, 0.8s, 30Hz)
* a **discrete action token sequence** (e.g., from a [Residual VQ action tokenizer](https://huggingface.co/robotics-diffusion-transformer/RVQActionTokenizer))
* a **metadata JSON** whose instruction key `sub_task_instruction_key` indexes the corresponding instruction in the top-level `instructions.json`

Data were collected on a **bimanual UR5e** setup.

---

## Supported Tasks

* **Instruction-conditioned bimanual manipulation**, including:
  - Pouring water: different water bottles and cups
  - Cleaning the desktop: different dustpans and paper balls
  - Folding towels: towels of different sizes and colors
  - Stacking cups: cups of different sizes and colors

---

## Data Structure

### Shard layout

Shards are named `shard-*.tar`. Inside each shard:

```
shard-000000.tar
├── 0.image.jpg          # binocular RGB, H=384, W=768, C=3, uint8
├── 0.action.npy         # relative actions, shape (24, 20), float32
├── 0.action_token.npy   # action tokens, shape (27,), int16 ∈ [0, 1024)
├── 0.meta.json          # metadata; includes "sub_task_instruction_key"
├── 1.image.jpg
├── 1.action.npy
├── 1.action_token.npy
├── 1.meta.json
└── ...
shard-000001.tar
shard-000002.tar
...
```

> **Image:** binocular wrist cameras concatenated horizontally → `np.ndarray` of shape `(384, 768, 3)` with `dtype=uint8` (stored as JPEG).
> 
> **Action (continuous):** `np.ndarray` of shape `(24, 20)`, `dtype=float32` (24-step chunk = 0.8s at 30Hz, 20-D control).
> 
> **Action tokens (discrete):** `np.ndarray` of shape `(27,)`, `dtype=int16`, values in `[0, 1024)`.
> 
> **Metadata:** `meta.json` contains at least `sub_task_instruction_key`, which points to an entry in the top-level `instructions.json`.
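
For a quick sanity check of these shapes without any dataloader, a single sample can be read straight from a tar shard. This is a minimal sketch, assuming a local `shards/shard-000000.tar`; the member names follow the layout above.

```python
import io
import json
import tarfile

import numpy as np
from PIL import Image

# Read the first sample directly from a shard (path is illustrative).
with tarfile.open("shards/shard-000000.tar") as tar:
    def read(name: str) -> bytes:
        return tar.extractfile(name).read()

    image = np.asarray(Image.open(io.BytesIO(read("0.image.jpg"))))  # (384, 768, 3) uint8
    action = np.load(io.BytesIO(read("0.action.npy")))               # (24, 20) float32
    tokens = np.load(io.BytesIO(read("0.action_token.npy")))         # (27,) int16
    meta = json.loads(read("0.meta.json"))

print(image.shape, action.shape, tokens.shape, meta["sub_task_instruction_key"])
```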


---

## Example Data Instance

```json
{
  "image": "0.image.jpg",
  "action": "0.action.npy",
  "action_token": "0.action_token.npy",
  "meta": {
    "sub_task_instruction_key": "fold_cloth_step_3"
  }
}
```
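
Note that `sub_task_instruction_key` is a key, not the instruction text itself. A minimal lookup sketch, assuming `instructions.json` maps each key to an instruction string (the exact value type is not documented here):

```python
import json

# instructions.json sits at the dataset root, next to the shards/ directory.
with open("instructions.json") as fp:
    instructions = json.load(fp)

key = "fold_cloth_step_3"   # taken from a sample's meta.json
print(instructions[key])    # the natural-language instruction for this sub-task
```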

---

## How to Use

### 1) Official guidelines for fine-tuning the RDT-2 series

Follow the official fine-tuning [scripts and guidelines](https://github.com/thu-ml/RDT2) in the GitHub repository.

### 2) Minimal loading example

```python
import os
import glob
import json
import random

import webdataset as wds


def no_split(src):
    # Identity splitter: every node/worker sees the full shard list.
    yield from src


def get_train_dataset(shards_dir):
    shards = sorted(glob.glob(os.path.join(shards_dir, "shard-*.tar")))
    assert shards, f"No shards under {shards_dir}"
    random.shuffle(shards)

    # Split shards across DataLoader workers only when there are enough shards.
    num_workers = wds.utils.pytorch_worker_info()[-1]
    workersplitter = wds.split_by_worker if len(shards) > num_workers else no_split
    dataset = (
        wds.WebDataset(
            shards,
            shardshuffle=False,
            nodesplitter=no_split,
            workersplitter=workersplitter,
            resampled=True,
        )
        .repeat()
        .shuffle(8192, initial=8192)
        .decode("pil")  # .jpg -> PIL.Image, .npy -> np.ndarray, .json -> dict
        .map(
            lambda sample: {
                "image": sample["image.jpg"],
                "action_token": sample["action_token.npy"],
                "meta": sample["meta.json"],
            }
        )
        .with_epoch(nsamples=(2048 * 30 * 60 * 60))  # one epoch = 2048 hours of 30 Hz samples
    )
    
    return dataset

# Replace "<Dataset Directory>" with the local path to this dataset.
with open(os.path.join("<Dataset Directory>", "instructions.json")) as fp:
    instructions = json.load(fp)
dataset = get_train_dataset(os.path.join("<Dataset Directory>", "shards"))
```
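
As a quick smoke test, the returned dataset can be iterated directly; the loop below just resolves each sample's instruction and prints shapes (illustrative only, not part of the official pipeline):

```python
# Pull a few samples and resolve their instructions.
for i, sample in enumerate(dataset):
    instruction = instructions[sample["meta"]["sub_task_instruction_key"]]
    print(sample["image"].size, sample["action_token"].shape, instruction)
    if i >= 2:
        break
```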

---

## Ethical Considerations

* Contains robot teleoperation/automation data. No PII is present by design.
* Ensure safe deployment/testing on real robots; follow lab safety and manufacturer guidelines.

---

## Citation

```bibtex
@article{rdt2,
  title={RDT2: Exploring the Scaling Limit of UMI Data Towards Zero-Shot Cross-Embodiment Generalization},
  author={RDT Team},
  journal={arXiv preprint arXiv:2602.03310},
  year={2026}
}
```

---

## License

* **Dataset license:** Apache-2.0.
* Ensure compliance when redistributing derived data or models.