---
pretty_name: RealSource World
size_categories:
- 100B<n<1T
task_categories:
- robotics
language:
- en
tags:
- real-world
- dual-arm
- robotics manipulation
- humanoid robot
license: cc-by-nc-4.0
---

<div align="center">
 <video controls autoplay src="https://realmanrobot.github.io/real_source_dataset/assets/real_source_video-CQfv30ls.mp4"></video>
</div>

# RealSource World

RealSource World is a large-scale real-world robotics manipulation dataset collected using the RS-02 dual-arm humanoid robot. This dataset contains diverse long-horizon manipulation tasks performed in real-world environments, with detailed annotations for atomic skills and quality assessments.

# Key Features

- **14+ million** frames of real-world dual-arm manipulation demonstrations.
- **11,428+** episodes across **36** distinct manipulation tasks.
- **57-dimensional** proprioceptive state space including joint positions, velocities, forces, torques, and end-effector poses.
- **Multi-camera** visual observations (head camera, left hand camera, right hand camera) at 720x1280 resolution, 30 FPS.
- **Fine-grained annotations** with atomic skill segmentation and quality assessments for each episode.
- **Diverse scenes** including kitchen, conference room, convenience store, and household environments.
- **Dual-arm coordination** tasks demonstrating complex bimanual manipulation skills.

# News
- **`[2025/12]`** RealSource World dataset fully uploaded to Hugging Face, containing 36 tasks with a total size of 549GB. [Download Link](https://huggingface.co/datasets/RealSourceData/RealSource-World)
- **`[2025/11]`** RealSource World released on Hugging Face. [Download Link](https://huggingface.co/datasets/RealSourceData/RealSource-World)

# Changelog
## Version History

### Version 1.1 (December 2025)
- **Complete Dataset Upload**
 - Fully uploaded all dataset files to Hugging Face
 - Total dataset size: 549GB
 - Total files: approximately 104,907
 - Contains 36 manipulation tasks

### Version 1.0 (November 2025)
- **Initial Release**
 - Released RealSource World dataset on Hugging Face
 - 36 manipulation tasks with 11,428 episodes
 - 14+ million frames of real-world dual-arm manipulation demonstrations
 - 57-dimensional proprioceptive state space
 - Multi-camera visual observations (head, left hand, right hand cameras)
 - Fine-grained annotations with atomic skill segmentation
 - Complete camera parameters (intrinsic and extrinsic) for all episodes
 - Quality assessments for each episode

# Table of Contents

- [Key Features](#key-features)
- [News](#news)
- [Changelog](#changelog)
- [Get Started](#get-started)
 - [Dataset Access](#dataset-access)
 - [Download the Dataset](#download-the-dataset)
 - [Dataset Structure](#dataset-structure)
 - [Understanding the Dataset Format](#understanding-the-dataset-format)
 - [Loading and Using the Dataset](#loading-and-using-the-dataset)
- [Data Format Details](#data-format-details)
 - [Proprioceptive State (57-dimensional)](#proprioceptive-state-57-dimensional)
 - [Action Space (17-dimensional)](#action-space-17-dimensional)
 - [Visual Observations](#visual-observations)
 - [Camera Parameters](#camera-parameters)
 - [Sub-task Annotations](#sub-task-annotations)
- [Dataset Statistics](#dataset-statistics)
- [Robot URDF Model](#robot-urdf-model)
- [License and Citation](#license-and-citation)

# Get Started

## Dataset Access

The RealSource World dataset has been fully uploaded to Hugging Face and can be accessed via:
- **Hugging Face Repository**: [RealSourceData/RealSource-World](https://huggingface.co/datasets/RealSourceData/RealSource-World)
- **Dataset Size**: 549GB
- **File Format**: LeRobot v2.1 format
- **Data Organization**: Organized by tasks, each task contains data/, meta/, and videos/ directories

## Download the Dataset

To download the full dataset, you can use the following code. If you encounter any issues, please refer to the official Hugging Face documentation.

**Note**: Due to the large dataset size (549GB), it is recommended to use Git LFS for downloading, or use the Hugging Face Datasets library to load data on-demand.

```bash
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use an access token with read permissions.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/RealSourceData/RealSource-World

# If you want to clone without large files - just their pointers
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/RealSourceData/RealSource-World
```

If you only want to download a specific task from the RealSource World dataset, such as `Arrange_the_cups`, follow these steps:

```bash
# Ensure Git LFS is installed (https://git-lfs.com)
git lfs install

# Initialize an empty Git repository
git init RealSource-World
cd RealSource-World

# Set the remote repository
git remote add origin https://huggingface.co/datasets/RealSourceData/RealSource-World

# Enable sparse-checkout
git sparse-checkout init

# Specify the folders and files you want to download
git sparse-checkout set Arrange_the_cups scripts

# Pull the data from the main branch
git pull origin main
```
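
If you prefer the Hugging Face Hub API to Git, the following is a minimal sketch (not part of the official tooling) that fetches a single task with `huggingface_hub.snapshot_download`; the `allow_patterns` value assumes the task folder layout shown in the next section.

```python
from huggingface_hub import snapshot_download

# Download only the Arrange_the_cups task folder (pattern assumed from the
# dataset structure below). Pass token="hf_..." if authentication is required.
local_dir = snapshot_download(
    repo_id="RealSourceData/RealSource-World",
    repo_type="dataset",
    allow_patterns=["Arrange_the_cups/*"],
)
print(f"Task downloaded to: {local_dir}")
```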

## Dataset Structure

### Folder Hierarchy

```
RealSource-World/
├── Arrange_the_cups/                          # Task name (36 tasks in total)
│   ├── data/
│   │   └── chunk-000/
│   │       ├── episode_000000.parquet
│   │       ├── episode_000001.parquet
│   │       └── ...                            # 871 parquet files for this task
│   ├── meta/
│   │   ├── info.json                          # Dataset metadata and feature definitions
│   │   ├── episodes.jsonl                     # Episode-level metadata
│   │   ├── episodes_stats.jsonl               # Episode statistics
│   │   ├── tasks.jsonl                        # Task descriptions
│   │   ├── sub_tasks.jsonl                    # Fine-grained sub-task annotations
│   │   └── camera.json                        # Camera parameters for all episodes
│   └── videos/
│       └── chunk-000/
│           ├── observation.images.head_camera/
│           │   ├── episode_000000.mp4
│           │   └── ...
│           ├── observation.images.left_hand_camera/
│           │   ├── episode_000000.mp4
│           │   └── ...
│           └── observation.images.right_hand_camera/
│               ├── episode_000000.mp4
│               └── ...
├── Arrange_the_items_on_the_conference_table/
│   └── ...
├── Clean_the_convenience_store/
│   └── ...
└── ...                                        # 36 tasks in total
```

## Understanding the Dataset Format

This dataset follows the **LeRobot v2.1** format. Each task directory contains:

- **`data/`**: Parquet files storing time-series data (proprioceptive states, actions, timestamps)
- **`meta/`**: JSON/JSONL files with metadata, episode information, and annotations
- **`videos/`**: MP4 video files from three camera perspectives

### Key Metadata Files

- **`meta/info.json`**: Contains dataset-level metadata including:
 - Total episodes, frames, videos
 - Feature definitions (action and observation shapes, names)
 - Video specifications (resolution, codec, FPS)
 - Robot type and codebase version

- **`meta/episodes.jsonl`**: One JSON object per line, each representing an episode with:
 - `episode_index`: Episode identifier
 - `length`: Number of frames in the episode
 - `tasks`: List of task descriptions
 - `videos`: Paths to video files for each camera

- **`meta/sub_tasks.jsonl`**: Fine-grained annotations for each episode, including:
 - `task_steps`: List of atomic skill segments with start/end frames
 - `success_rating`: Overall task success score (1-5)
 - `quality_assessments`: Detailed quality metrics (PASS/FAIL/VALID)
 - `notes`: Annotation metadata

- **`meta/camera.json`**: Camera intrinsic and extrinsic parameters for each episode
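
As a minimal sketch, these files can be read with the standard library alone. The local path assumes the single-task download from above; the `total_episodes`/`total_frames` keys are assumed from the LeRobot format, while the other fields appear in the annotation examples later in this document.

```python
import json
from pathlib import Path

# Local path to one downloaded task (illustrative)
meta_dir = Path("RealSource-World/Arrange_the_cups/meta")

# info.json: dataset-level metadata (key names assumed from the LeRobot format)
info = json.loads((meta_dir / "info.json").read_text())
print(info["total_episodes"], info["total_frames"])

# episodes.jsonl: one JSON object per line, one per episode
episodes = [json.loads(line) for line in (meta_dir / "episodes.jsonl").open()]
print(episodes[0]["episode_index"], episodes[0]["length"])

# sub_tasks.jsonl: fine-grained annotations per episode
sub_tasks = [json.loads(line) for line in (meta_dir / "sub_tasks.jsonl").open()]
print(sub_tasks[0]["success_rating"])
```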

## Loading and Using the Dataset

This dataset is compatible with the [LeRobot library](https://github.com/huggingface/lerobot). Here's how to load and use it:

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load a specific task
dataset_path = "RealSource-World/Arrange_the_cups"
repo_id = "RealSourceData/RealSource-World"

# Initialize the dataset
dataset = LeRobotDataset(dataset_path, repo_id=repo_id)

# Access episode data
episode_0 = dataset[0]                   # First frame of first episode
episode_info = dataset.episode_data[0]   # Episode metadata

# Iterate through episodes
for episode_idx in range(len(dataset.episode_data)):
    episode_length = dataset.episode_data[episode_idx]["length"]
    print(f"Episode {episode_idx} has {episode_length} frames")

# Visualize an episode
dataset.show_video(episode_idx=0, video_key="observation.images.head_camera")
```

# Data Format Details

## Proprioceptive State (57-dimensional)

The `observation.state` field contains comprehensive proprioceptive information:

| Index Range | Components | Description |
|------------|-----------|-------------|
| 0-15 | Joint positions | 7 joints × 2 arms + 2 grippers = 16 DOF |
| 16 | Lift position | Mobile base lift height |
| 17-22 | Left arm force/torque | 6D force (fx, fy, fz, mx, my, mz) |
| 23-28 | Right arm force/torque | 6D force (fx, fy, fz, mx, my, mz) |
| 29-35 | Left joint velocities | 7 joints = 7 DOF |
| 36-42 | Right joint velocities | 7 joints = 7 DOF |
| 43-49 | Left end-effector pose | Position (x, y, z) + Quaternion (qw, qx, qy, qz) |
| 50-56 | Right end-effector pose | Position (x, y, z) + Quaternion (qw, qx, qy, qz) |

### State Field Names

```python
[
 "LeftFollowerArm_Joint1.pos", ..., "LeftFollowerArm_Joint7.pos",
 "LeftGripper.pos",
 "RightFollowerArm_Joint1.pos", ..., "RightFollowerArm_Joint7.pos",
 "RightGripper.pos",
 "Lift.position",
 "LeftForce.fx", "LeftForce.fy", "LeftForce.fz",
 "LeftForce.mx", "LeftForce.my", "LeftForce.mz",
 "RightForce.fx", "RightForce.fy", "RightForce.fz",
 "RightForce.mx", "RightForce.my", "RightForce.mz",
 "LeftJoint_Vel1", ..., "LeftJoint_Vel7",
 "RightJoint_Vel1", ..., "RightJoint_Vel7",
 "LeftEnd_X", "LeftEnd_Y", "LeftEnd_Z",
 "LeftEnd_Qw", "LeftEnd_Qx", "LeftEnd_Qy", "LeftEnd_Qz",
 "RightEnd_X", "RightEnd_Y", "RightEnd_Z",
 "RightEnd_Qw", "RightEnd_Qx", "RightEnd_Qy", "RightEnd_Qz"
]
```
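
The index ranges above can be turned into named slices. Below is a minimal sketch (the helper name is illustrative, not part of the dataset tooling) that splits a 57-dimensional state vector accordingly:

```python
import numpy as np

def split_state(state: np.ndarray) -> dict:
    """Split a 57-dim observation.state vector per the index ranges above."""
    assert state.shape[-1] == 57
    return {
        "joint_positions": state[..., 0:16],      # 7 joints + gripper, left then right
        "lift_position": state[..., 16],
        "left_force_torque": state[..., 17:23],   # fx, fy, fz, mx, my, mz
        "right_force_torque": state[..., 23:29],
        "left_joint_velocities": state[..., 29:36],
        "right_joint_velocities": state[..., 36:43],
        "left_ee_pose": state[..., 43:50],        # x, y, z, qw, qx, qy, qz
        "right_ee_pose": state[..., 50:57],
    }

# Example with a dummy state vector
parts = split_state(np.zeros(57))
print(parts["left_ee_pose"].shape)  # (7,)
```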

## Action Space (17-dimensional)

The `action` field contains commands sent to the robot:

| Components | Description |
|-----------|-------------|
| 0-6 | Left arm joint positions (7 DOF) |
| 7 | Left gripper position |
| 8-14 | Right arm joint positions (7 DOF) |
| 15 | Right gripper position |
| 16 | Lift command |

### Action Field Names

```python
[
 "LeftLeaderArm_Joint1.pos", ..., "LeftLeaderArm_Joint7.pos",
 "LeftGripper.pos",
 "RightLeaderArm_Joint1.pos", ..., "RightLeaderArm_Joint7.pos",
 "RightGripper.pos",
 "Lift.command"
]
```
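
A matching sketch for the 17-dimensional action vector (again, the helper is illustrative only):

```python
import numpy as np

def split_action(action: np.ndarray) -> dict:
    """Split a 17-dim action vector per the component table above."""
    assert action.shape[-1] == 17
    return {
        "left_arm_joints": action[..., 0:7],
        "left_gripper": action[..., 7],
        "right_arm_joints": action[..., 8:15],
        "right_gripper": action[..., 15],
        "lift_command": action[..., 16],
    }
```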

## Visual Observations

Each episode includes synchronized video from three camera perspectives:

- **`observation.images.head_camera`**: Overhead/head-mounted view
- **`observation.images.left_hand_camera`**: Left end-effector mounted camera
- **`observation.images.right_hand_camera`**: Right end-effector mounted camera

**Video Specifications:**
- Resolution: 720 × 1280 pixels
- Frame rate: 30 FPS
- Codec: H.264
- Format: MP4
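
A minimal sketch for decoding frames from one episode video with OpenCV (the path follows the folder hierarchy above; `opencv-python` is assumed to be installed):

```python
import cv2

video_path = (
    "RealSource-World/Arrange_the_cups/videos/chunk-000/"
    "observation.images.head_camera/episode_000000.mp4"
)

cap = cv2.VideoCapture(video_path)
frames = []
while True:
    ok, frame = cap.read()  # frame is a HxWx3 BGR array (720x1280 at 30 FPS)
    if not ok:
        break
    frames.append(frame)
cap.release()
print(f"Decoded {len(frames)} frames")
```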

## Camera Parameters

Each episode has corresponding camera parameters stored in `meta/camera.json`, keyed by `episode_XXXXXX`. The camera parameters include intrinsic parameters (camera matrix and distortion coefficients) and extrinsic parameters (hand-eye calibration).

### File Structure

The `camera.json` file contains camera parameters for all episodes:

```json
{
  "episode_000000": {
    "camera_ids": {
      "head": "245022300889",
      "left_arm": "245022301980",
      "right_arm": "245022300408",
      "foot": ""
    },
    "camera_parameters": {
      "head": {
        "720P": {
          "MTX": [[648.57, 0, 645.54], [0, 647.80, 375.38], [0, 0, 1]],
          "DIST": [-0.0513, 0.0587, -0.0006, 0.00096, -0.0186]
        },
        "480P": { ... }
      },
      "left_arm": { ... },
      "right_arm": { ... }
    },
    "hand_eye": {
      "left_arm_in_eye": {
        "R": [[...], [...], [...]],
        "T": [x, y, z]
      },
      "right_arm_in_eye": { ... },
      "left_arm_to_eye": { ... },
      "right_arm_to_eye": { ... }
    }
  },
  "episode_000001": { ... }
}
```

### Camera Intrinsic Parameters

Each camera (head, left_arm, right_arm) has intrinsic parameters for two resolutions:

- **`MTX`**: 3×3 camera intrinsic matrix
 ```
 [fx 0 cx]
 [0 fy cy]
 [0 0 1]
 ```
 - `fx`, `fy`: Focal lengths in pixels
 - `cx`, `cy`: Principal point (optical center) in pixels

- **`DIST`**: 5-element distortion coefficients (k1, k2, p1, p2, k3)
 - Used for correcting radial and tangential distortion

**Available Resolutions:**
- `720P`: Parameters for 720p video (720 × 1280)
- `480P`: Parameters for 480p video (480 × 640)
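
As a minimal sketch, the intrinsics for a given episode and resolution can be loaded and used to undistort a frame with OpenCV (paths are illustrative and follow the layouts shown above):

```python
import json
import cv2
import numpy as np

# Load camera parameters for one episode (path per the folder hierarchy above)
with open("RealSource-World/Arrange_the_cups/meta/camera.json") as f:
    cameras = json.load(f)

head_720p = cameras["episode_000000"]["camera_parameters"]["head"]["720P"]
K = np.array(head_720p["MTX"], dtype=np.float64)      # 3x3 intrinsic matrix
dist = np.array(head_720p["DIST"], dtype=np.float64)  # k1, k2, p1, p2, k3

# Undistort the first frame of the matching head-camera video
cap = cv2.VideoCapture(
    "RealSource-World/Arrange_the_cups/videos/chunk-000/"
    "observation.images.head_camera/episode_000000.mp4"
)
ok, frame = cap.read()
cap.release()
if ok:
    undistorted = cv2.undistort(frame, K, dist)
```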

### Hand-Eye Calibration (Extrinsic Parameters)

The `hand_eye` section contains transformations between the robot end-effectors and cameras:

- **`left_arm_in_eye`**: Transformation from left end-effector camera to left arm end-effector center
 - `R`: 3×3 rotation matrix
 - `T`: 3×1 translation vector [x, y, z] in meters
 - Represents the pose of the left wrist-mounted camera relative to the left arm end-effector center

- **`right_arm_in_eye`**: Transformation from right end-effector camera to right arm end-effector center
 - Represents the pose of the right wrist-mounted camera relative to the right arm end-effector center

- **`left_arm_to_eye`**: Transformation from head camera to left arm base coordinate frame
 - `R`: 3×3 rotation matrix
 - `T`: 3×1 translation vector [x, y, z] in meters
 - Represents the pose of the head camera relative to the left arm base frame

- **`right_arm_to_eye`**: Transformation from head camera to right arm base coordinate frame
 - Represents the pose of the head camera relative to the right arm base frame

These parameters enable coordinate transformations between:
- Robot end-effector poses and camera image coordinates
- 3D positions in robot space and pixel coordinates in images
- Multi-view geometric operations and calibration
- Wrist camera frames and end-effector centers
- Head camera frame and arm base frames
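
The sketch below shows the mechanics of assembling an `R`/`T` pair into a 4×4 homogeneous transform and applying it to a 3D point. The values are placeholders, and the direction of each transform should be checked against the descriptions above before use:

```python
import numpy as np

def to_homogeneous(R, T):
    """Build a 4x4 homogeneous transform from a 3x3 rotation and a 3-vector translation."""
    M = np.eye(4)
    M[:3, :3] = np.asarray(R)
    M[:3, 3] = np.asarray(T)
    return M

# Placeholder values standing in for the actual camera.json entries
R = np.eye(3)
T = [0.05, 0.0, 0.10]
T_frame = to_homogeneous(R, T)

# Transform a 3D point (homogeneous coordinates) from one frame to the other
point = np.array([0.3, 0.0, 0.2, 1.0])
transformed = T_frame @ point
# Mapping in the opposite direction uses the inverse transform
back = np.linalg.inv(T_frame) @ point
print(transformed[:3], back[:3])
```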

### Camera IDs

Each camera has a unique identifier:
- **`head`**: Head-mounted camera ID
- **`left_arm`**: Left end-effector camera ID
- **`right_arm`**: Right end-effector camera ID
- **`foot`**: Foot camera ID (if available)

## Sub-task Annotations

Each episode in `meta/sub_tasks.jsonl` contains detailed annotations:

```json
{
  "task": "Separate the two stacked cups in the dish and place them on the two sides of the dish.",
  "language": "zh",
  "task_index": 0,
  "episode_index": 0,
  "task_steps": [
    {
      "step_name": "Left arm picks up the stack of cups from the center of the plate",
      "start_frame": 100,
      "end_frame": 180,
      "description": "Left arm picks up the stack of cups from the center of the plate",
      "duration_frames": 80
    },
    ...
  ],
  "success_rating": 5,
  "notes": "annotation_date: 2025/11/13",
  "quality_assessments": {
    "overall_valid": "VALID",
    "movement_fluency": "PASS",
    "grasp_success": "PASS",
    "placement_quality": "PASS",
    ...
  },
  "total_frames": 946
}
```

### Quality Assessment Metrics

- **`overall_valid`**: Overall episode validity (VALID/INVALID)
- **`movement_fluency`**: Smoothness of robot movements (PASS/FAIL)
- **`grasp_success`**: Success of grasping actions (PASS/FAIL)
- **`placement_quality`**: Quality of object placement (PASS/FAIL)
- **`no_drop`**: No objects were dropped during the task (PASS/FAIL)
- **`grasp_collisions`**: No collisions during grasping (PASS/FAIL)
- **`arm_collisions`**: No arm collisions (PASS/FAIL)
- **`operation_completeness`**: Task completion status (PASS/FAIL)
- And more...
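
A minimal sketch for filtering episodes on these annotations (field names taken from the example above; the local path assumes the single-task download):

```python
import json

sub_tasks_path = "RealSource-World/Arrange_the_cups/meta/sub_tasks.jsonl"

# Keep only high-quality episodes: valid overall and success_rating >= 4
good_episodes = []
with open(sub_tasks_path) as f:
    for line in f:
        ann = json.loads(line)
        qa = ann.get("quality_assessments", {})
        if qa.get("overall_valid") == "VALID" and ann.get("success_rating", 0) >= 4:
            good_episodes.append(ann["episode_index"])

print(f"{len(good_episodes)} episodes pass the quality filter")
```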

# Dataset Statistics

## Overall Statistics

- **Total Tasks**: 36
- **Total Dataset Size**: 549GB
- **Total Files**: approximately 104,907
- **Total Episodes**: 11,428
- **Total Frames**: 14,085,107
- **Total Videos**: 34,284 (3 cameras × 11,428 episodes)
- **Robot Type**: RS-02 (dual-arm humanoid robot)
- **Dataset Format**: LeRobot v2.1
- **Video Resolution**: 720 Γ— 1280
- **Frame Rate**: 30 FPS

## Task Distribution

The dataset includes diverse manipulation tasks across multiple domains:

- **Kitchen Tasks**: Arranging cups, cooking rice, steaming, cleaning counters, making toast, preparing birthday cake, etc.
- **Organization Tasks**: Organizing magazines, tools, toys, glass tubes, pen holders, TV cabinets, etc.
- **Household Tasks**: Tidying up rooms, placing books and slippers, hanging clothes to dry, etc.
- **Convenience Store Tasks**: Cleaning store, organizing items, collecting mail, etc.
- **Industrial Tasks**: Moving parts between containers, organizing glass tubes, etc.
- **Other Tasks**: Cable plugging, replenishing tea bags, organizing repair tools, etc.

**Complete Task List (36 tasks):**
1. Arrange_the_cups
2. Arrange_the_items_on_the_conference_table
3. Cable_Plugging_able
4. Clean_the_convenience_store
5. Collect_the_mail
6. Cook_rice_using_an_electric_rice_cooker
7. Hang_out_the_clothes_to_dry
8. Make_toast
9. Making_steamed_potatoes
10. Move_industrial_parts_to_different_plastic_boxes
11. Organize_the_TV_cabinet
12. Organize_the_glass_tube_on_the_rack
13. Organize_the_magazines
14. Organize_the_pen_holder
15. Organize_the_repair_tools
16. Organize_the_toys
17. Pack_the_badminton_shuttlecock
18. Place_the_books
19. Place_the_hairdryer
20. Place_the_slippers
21. Prepare_the_birthday_cake
22. Prepare_the_bread
23. Put_the_milk_in_the_refrigerator
24. Refill_the_laundry_detergent
25. Replace_the_tissues_and_arrange_them
26. Replenish_tea_bags
27. Stack_the_cups
28. Steam_buns
29. Steaming_rice_in_a_rice_cooker
30. Take_down_the_book
31. Take_out_the_trash
32. Tidy_up_the_children's_room
33. Tidy_up_the_children_s_room
34. Tidy_up_the_conference_room_table
35. Tidy_up_the_cooking_counter
36. Tidy_up_the_kitchen_counter

# Robot URDF Model

The RealSource World dataset was collected using the **RS-02** dual-arm humanoid robot. For simulation, visualization, and research purposes, we provide the URDF (Unified Robot Description Format) model of the RS-02 robot.

## RS-02 Robot Specifications

- **Robot Type**: Dual-arm humanoid robot
- **Total Links**: 46 links
- **Total Joints**: 45 joints
- **Arms**: 2 × 7-DOF arms (left and right)
- **End-effectors**: Dual-arm grippers with 8 DOF each
- **Base**: Mobile platform with wheels and lift mechanism
- **Sensors**: Head camera, left/right hand cameras

## URDF Package Structure

The RS-02 URDF package includes:

```
RS-02/
├── urdf/
│   ├── RS-02.urdf          # Main URDF file (59KB)
│   └── RS-02.csv           # Joint configuration data
├── meshes/                 # 3D mesh models (46 STL files)
│   ├── base_link.STL
│   ├── L_Link_1-7.STL      # Left arm links
│   ├── R_Link_1-7.STL      # Right arm links
│   ├── ltool_*.STL         # Left gripper components
│   ├── rtool_*.STL         # Right gripper components
│   ├── head_*.STL          # Head components
│   └── camera_*.STL        # Camera mounts
├── config/
│   └── joint_names_RS-02.yaml  # Joint name configuration
├── launch/
│   ├── display.launch      # RViz visualization
│   └── gazebo.launch       # Gazebo simulation
└── package.xml             # ROS package metadata
```

## Using the URDF Model

### For ROS/ROS2 Users

The URDF model can be used with ROS tools:

**Visualization in RViz:**
```bash
roslaunch RS-02 display.launch
```

**Simulation in Gazebo:**
```bash
roslaunch RS-02 gazebo.launch
```
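
Outside of ROS, the model can also be inspected in a generic simulation tool. The sketch below loads the URDF in PyBullet (the path follows the package layout above; whether mesh paths resolve may depend on your working directory):

```python
import pybullet as p

# Start a GUI session and load the RS-02 model with a fixed base for inspection
p.connect(p.GUI)
robot_id = p.loadURDF("RS-02/urdf/RS-02.urdf", useFixedBase=True)

# Cross-check the joint list against the 45 joints described above
for i in range(p.getNumJoints(robot_id)):
    info = p.getJointInfo(robot_id, i)
    print(i, info[1].decode())  # joint index and joint name
```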

# License and Citation

All the data and code within this repo are under [CC BY-NC-SA 4.0](https://creativecommons.org/licenses/by-nc-sa/4.0/). Please consider citing our project if it helps your research.

```BibTeX
@misc{realsourceworld,
  title={RealSource World: A Large-Scale Real-World Dual-Arm Manipulation Dataset},
  author={RealSource},
  howpublished={\url{https://huggingface.co/datasets/RealSourceData/RealSource-World}},
  year={2025}
}
```