# RoboChallenge Dataset
## Tasks and Embodiments
The dataset includes 30 diverse manipulation tasks (the Table30 benchmark) across 4 robot embodiments:

### Available Tasks
- `arrange_flowers`
- `arrange_fruits_in_basket` 
- `arrange_paper_cups` 
- `clean_dining_table` 
- `fold_dishcloth` 
- `hang_toothbrush_cup` 
- `make_vegetarian_sandwich` 
- `move_objects_into_box` 
- `open_the_drawer` 
- `place_shoes_on_rack` 
- `plug_in_network_cable` 
- `pour_fries_into_plate` 
- `press_three_buttons` 
- `put_cup_on_coaster` 
- `put_opener_in_drawer` 
- `put_pen_into_pencil_case` 
- `scan_QR_code` 
- `search_green_boxes` 
- `set_the_plates` 
- `shred_scrap_paper` 
- `sort_books` 
- `sort_electronic_products` 
- `stack_bowls` 
- `stack_color_blocks` 
- `stick_tape_to_box`
- `sweep_the_rubbish` 
- `turn_on_faucet` 
- `turn_on_light_switch` 
- `water_potted_plant` 
- `wipe_the_table` 
### Embodiments
- **ARX5** - Single-arm with triple camera setup (wrist + global + right-side views)
- **UR5** - Single-arm with dual camera setup (wrist + global views)
- **Franka** - Single-arm with triple camera setup (wrist + main + side views)
- **ALOHA** - Dual-arm with triple wrist camera setup (left wrist + right wrist + global views)


## Dataset Structure
### Hierarchy
The dataset is organized by tasks, with each task containing multiple demonstration episodes:
```
.
β”œβ”€β”€ <task_name>/                    # e.g., arrange_flowers, fold_dishcloth
β”‚   β”œβ”€β”€ task_desc.json              # Task description
β”‚   β”œβ”€β”€ meta/                       # Task-level metadata
β”‚   β”‚   └── task_info.json
β”‚   └── data/                       # Episode data
β”‚       β”œβ”€β”€ episode_000000/         # Individual episode
β”‚       β”‚   β”œβ”€β”€ meta/
β”‚       β”‚   β”‚   └── episode_meta.json    # Episode metadata
β”‚       β”‚   β”œβ”€β”€ states/
β”‚       β”‚   β”‚   # for single-arm (ARX5, UR5, Franka)
β”‚       β”‚   β”‚   β”œβ”€β”€ states.jsonl         # Single-arm robot states
β”‚       β”‚   β”‚   # for dual-arm (ALOHA)
β”‚       β”‚   β”‚   β”œβ”€β”€ left_states.jsonl    # Left arm states
β”‚       β”‚   β”‚   └── right_states.jsonl   # Right arm states
β”‚       β”‚   └── videos/
β”‚       β”‚       # Video configurations vary by robot model:
β”‚       β”‚       # ARX5
β”‚       β”‚       β”œβ”€β”€ arm_realsense_rgb.mp4       # Wrist view 
β”‚       β”‚       β”œβ”€β”€ global_realsense_rgb.mp4    # Global view 
β”‚       β”‚       └── right_realsense_rgb.mp4     # Side view
β”‚       β”‚       # UR5
β”‚       β”‚       β”œβ”€β”€ global_realsense_rgb.mp4    # Global view 
β”‚       β”‚       └── handeye_realsense_rgb.mp4   # Wrist view
β”‚       β”‚       # Franka
β”‚       β”‚       β”œβ”€β”€ handeye_realsense_rgb.mp4   # Wrist view
β”‚       β”‚       β”œβ”€β”€ main_realsense_rgb.mp4      # Global view 
β”‚       β”‚       └── side_realsense_rgb.mp4      # Side view 
β”‚       β”‚       # ALOHA
β”‚       β”‚       β”œβ”€β”€ cam_high_rgb.mp4            # Global view 
β”‚       β”‚       β”œβ”€β”€ cam_wrist_left_rgb.mp4      # Left wrist view
β”‚       β”‚       └── cam_wrist_right_rgb.mp4     # Right wrist view
β”‚       β”œβ”€β”€ episode_000001/
β”‚       └── ...
β”œβ”€β”€ convert_to_lerobot.py           # Conversion script
└── README.md
```
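For orientation, here is a minimal sketch that walks this hierarchy and counts episodes per task. `DATASET_ROOT` is a placeholder for your local copy; the `task_info.json` check simply skips non-task entries such as `convert_to_lerobot.py` and `README.md`.

```python
from pathlib import Path

DATASET_ROOT = Path("/path/to/robochallenge")  # placeholder: your local copy

for task_dir in sorted(DATASET_ROOT.iterdir()):
    # A task directory is identified by its meta/task_info.json file.
    if not (task_dir / "meta" / "task_info.json").is_file():
        continue
    episodes = sorted((task_dir / "data").glob("episode_*"))
    print(f"{task_dir.name}: {len(episodes)} episodes")
```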
### Metadata Schema
`task_info.json`
```json
{
    "robot_id": "arx5_1",                    // Robot model identifier
    "task_desc": {
        "task_name": "arrange_flowers",      // Task identifier
        "prompt": "insert the three flowers on the table into the vase one by one",
        "scoring": "...",                    // Scoring criteria
        "task_tag": [                        // Task characteristics
            "repeated",
            "single-arm", 
            "ARX5",
            "precise3d"
        ]
    },
    "video_info": {
        "fps": 30,                           // Video frame rate
        "ext": "mp4",                        // Video format
        "encoding": {
            "vcodec": "libx264",             // Video codec
            "pix_fmt": "yuv420p"             // Pixel format
        }
    }
}
```
`episode_meta.json`
```json
{
    "episode_index": 0,                      // Episode number
    "start_time": 1750405586.3430033,       // Unix timestamp (start)
    "end_time": 1750405642.5247612,         // Unix timestamp (end)
    "frames": 1672                          // Total video frames
}
```
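As a quick sanity check, the sketch below (the task path is a placeholder) loads both metadata files and compares the episode's wall-clock duration against its frame count and the task-level fps:

```python
import json
from pathlib import Path

task_dir = Path("/path/to/robochallenge/arrange_flowers")  # placeholder

with open(task_dir / "meta" / "task_info.json") as f:
    fps = json.load(f)["video_info"]["fps"]

with open(task_dir / "data" / "episode_000000" / "meta" / "episode_meta.json") as f:
    meta = json.load(f)

# frames / fps should roughly match the recorded wall-clock duration
duration = meta["end_time"] - meta["start_time"]
print(f"{meta['frames']} frames at {fps} fps, {duration:.1f}s wall clock")
```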

### Robot States Schema
Each episode stores robot state data in JSONL format. The file layout differs slightly by embodiment:
- **Single-arm robots (ARX5, UR5, Franka)** β†’ `states.jsonl`  
- **Dual-arm robots (ALOHA)** β†’ `left_states.jsonl` and `right_states.jsonl`  

Each file records the robot's proprioceptive signals per frame, including joint angles, end-effector poses, gripper states, and timestamps. The exact field definitions and coordinate conventions vary by platform, as summarized in the tables below.
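A minimal reader sketch, assuming one JSON object per line (the standard JSONL convention); the episode path is a placeholder and the key names follow the tables below:

```python
import json
from pathlib import Path

def read_states(path: Path):
    """Yield one state dict per frame from a JSONL states file."""
    with open(path) as f:
        for line in f:
            if line.strip():
                yield json.loads(line)

episode_dir = Path("/path/to/episode_000000")  # placeholder
for frame in read_states(episode_dir / "states" / "states.jsonl"):
    print(frame["timestamp"], frame["joint_positions"])
    break  # inspect only the first frame
```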

#### ARX5
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:---------:|
| Joint control | joint_positions | (6,) | Joint angles (in radians), ordered from the base to the end effector. |
| Pose control | end_effector_pose | (6,) | End-effector pose (tx, ty, tz, roll, pitch, yaw), where (roll, pitch, yaw) are Euler angles relative to the arm base frame. X: back to front; Y: right to left; Z: down to up. |
| Gripper control | gripper_width | (1,) | Actual gripper width measurement in meters. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
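For training pipelines that expect a flat state vector, a common choice is to concatenate the joint angles with the gripper width. This is a sketch, not part of the dataset tooling; `np.atleast_1d` covers the case where `gripper_width` is serialized as a bare scalar rather than a one-element list.

```python
import numpy as np

def arx5_state_vector(frame: dict) -> np.ndarray:
    """Concatenate ARX5 joint angles (6,) and gripper width (1,) into a (7,) vector."""
    joints = np.atleast_1d(np.asarray(frame["joint_positions"], dtype=np.float32))
    width = np.atleast_1d(np.asarray(frame["gripper_width"], dtype=np.float32))
    return np.concatenate([joints, width])
```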

#### UR5
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:---------:|
| Joint control | joint_positions | (6,) | Joint angles (in radians), ordered from the base to the end effector. |
| Pose control | ee_positions | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is a quaternion rotation. X: front to back; Y: left to right; Z: down to up. |
| Gripper control | gripper | (1,) | Gripper closing value: 0 for fully open, 255 for fully closed. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
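Because the UR5 gripper is reported on a 0-255 scale rather than in meters, you may want to normalize it before mixing embodiments. Mapping it to a [0, 1] closedness value is an assumption of this sketch, not a dataset convention:

```python
def normalize_ur5_gripper(raw) -> float:
    """Map the raw UR5 gripper value (0 = fully open, 255 = fully closed) to [0, 1]."""
    value = raw[0] if isinstance(raw, (list, tuple)) else raw
    return min(max(value / 255.0, 0.0), 1.0)
```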

#### Franka
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:---------:|
| Joint control | joint_positions | (7,) | Joint angles (in radians), ordered from the base to the end effector. |
| Pose control | ee_positions | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is a quaternion rotation. X: back to front; Y: right to left; Z: down to up. |
| Gripper control | gripper | (2,) | Gripper trigger signals in (close_button, open_button) order. |
| Gripper width | gripper_width | (1,) | Actual gripper width measurement. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
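Since the Franka gripper field carries two trigger signals rather than a single command, one way to consume it is to collapse the pair into a single open/close action. The -1/0/+1 encoding below is an illustrative assumption, not a dataset convention:

```python
def franka_gripper_command(frame: dict) -> int:
    """Collapse (close_button, open_button) trigger signals into -1/0/+1 (sketch)."""
    close_btn, open_btn = frame["gripper"]
    if close_btn and not open_btn:
        return -1  # close
    if open_btn and not close_btn:
        return +1  # open
    return 0       # no change / ambiguous
```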


#### ALOHA
| Data Name | Data Key | Shape | Semantics |
|:---------:|:--------:|:-----:|:---------:|
| Master joint control | joint_positions | (6,) | Master joint angles (in radians), ordered from the base to the end effector. |
| Joint velocity | joint_vel | (7,) | Velocities of the 6 joints and the gripper. |
| Puppet joint control | qpos | (6,) | Puppet joint angles (in radians), ordered from the base to the end effector. |
| Puppet pose control | ee_pose_quaternion | (7,) | End-effector pose (tx, ty, tz, rx, ry, rz, rw), where (tx, ty, tz) is the position relative to the arm base frame and (rx, ry, rz, rw) is a quaternion rotation. X: back to front; Y: right to left; Z: down to up. |
| Puppet pose control | ee_pose_rpy | (6,) | End-effector pose (tx, ty, tz, rr, rp, ry), where (tx, ty, tz) is the position relative to the arm base frame and (rr, rp, ry) are Euler angles (in radians). X: back to front; Y: right to left; Z: down to up. |
| Gripper control | gripper | (1,) | Actual gripper width measurement in meters. |
| Time stamp | timestamp | (1,) | Floating-point timestamp (in milliseconds) of each frame. |
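Since ALOHA provides the end-effector rotation both as a quaternion and as Euler angles, one representation can be cross-checked against the other. The sketch below uses SciPy; the (rx, ry, rz, rw) ordering matches SciPy's scalar-last `from_quat` convention, while the `"xyz"` Euler order is an assumption you should verify against your data.

```python
import numpy as np
from scipy.spatial.transform import Rotation

def rotations_agree(frame: dict, atol: float = 1e-3) -> bool:
    """Check ee_pose_quaternion against ee_pose_rpy for one ALOHA frame."""
    quat = np.asarray(frame["ee_pose_quaternion"][3:])  # (rx, ry, rz, rw), scalar-last
    rpy = np.asarray(frame["ee_pose_rpy"][3:])          # (rr, rp, ry); "xyz" order assumed
    return np.allclose(Rotation.from_quat(quat).as_euler("xyz"), rpy, atol=atol)
```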


## Convert to LeRobot

While you can implement a custom Dataset class to read RoboChallenge data directly, **we strongly recommend converting to LeRobot format** to take advantage of [LeRobot](https://github.com/huggingface/lerobot)'s comprehensive data processing and loading utilities.

The example script **`convert_to_lerobot.py`** converts **ARX5** data to the LeRobot format. For the other robot embodiments (UR5, Franka, ALOHA), adapt the script accordingly.

### Prerequisites
- Python 3.9+ with the following packages:
  - `lerobot==0.1.0`
  - `opencv-python`
  - `numpy`
- Configure `$LEROBOT_HOME` (defaults to `~/.lerobot` if unset).

```bash
pip install lerobot==0.1.0 opencv-python numpy
export LEROBOT_HOME="/path/to/lerobot_home"
```

### Usage

Run the converter from the repository root (or provide an absolute path):

```bash
python convert_to_lerobot.py \
  --repo-name example_repo \
  --raw-dataset /path/to/example_dataset \
  --frame-interval 1 
```

### Output
- Frames and metadata are saved to `$LEROBOT_HOME/<repo-name>`.
- At the end, the script calls `dataset.consolidate(run_compute_stats=False)`. If you require aggregated statistics, run it with `run_compute_stats=True` or execute a separate stats job.
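
Once converted, the data can be loaded through LeRobot's dataset class. A minimal sketch, assuming `lerobot==0.1.0` as pinned above; the exact constructor arguments may differ in other versions:

```python
import os
from pathlib import Path

# Import path and constructor per lerobot 0.1.0; verify against your installed version.
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

root = Path(os.environ.get("LEROBOT_HOME", Path.home() / ".lerobot"))
dataset = LeRobotDataset("example_repo", root=root)

print(len(dataset))       # total number of frames
print(dataset[0].keys())  # observation / action keys of the first frame
```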