---
pretty_name: RealSource World
size_categories:
  - 100B<n<1T
task_categories:
  - robotics
language:
  - en
tags:
  - real-world
  - dual-arm
  - robotics manipulation
  - humanoid robot
license: cc-by-nc-4.0
---

RealSource World

RealSource World is a large-scale real-world robotics manipulation dataset collected using the RS-02 dual-arm humanoid robot. This dataset contains diverse long-horizon manipulation tasks performed in real-world environments, with detailed annotations for atomic skills and quality assessments.

Key Features

  • 14+ million frames of real-world dual-arm manipulation demonstrations.
  • 11,428+ episodes across 36 distinct manipulation tasks.
  • 57-dimensional proprioceptive state space including joint positions, velocities, forces, torques, and end-effector poses.
  • Multi-camera visual observations (head camera, left hand camera, right hand camera) at 720x1280 resolution, 30 FPS.
  • Fine-grained annotations with atomic skill segmentation and quality assessments for each episode.
  • Diverse scenes including kitchen, conference room, convenience store, and household environments.
  • Dual-arm coordination tasks demonstrating complex bimanual manipulation skills.

News

  • [2025/12] RealSource World dataset fully uploaded to Hugging Face, containing 36 tasks with a total size of 549GB. [Download Link](https://huggingface.co/datasets/RealSourceData/RealSource-World)
  • [2025/11] RealSource World released on Hugging Face. [Download Link](https://huggingface.co/datasets/RealSourceData/RealSource-World)

Changelog

Version History

Version 1.1 (December 2025)

  • Complete Dataset Upload
  • Fully uploaded all dataset files to Hugging Face
  • Total dataset size: 549GB
  • Total files: approximately 104,907 files
  • Contains 36 manipulation tasks

Version 1.0 (November 2025)

  • Initial Release
  • Released RealSource World dataset on Hugging Face
  • 36 manipulation tasks with 11,428 episodes
  • 14+ million frames of real-world dual-arm manipulation demonstrations
  • 57-dimensional proprioceptive state space
  • Multi-camera visual observations (head, left hand, right hand cameras)
  • Fine-grained annotations with atomic skill segmentation
  • Complete camera parameters (intrinsic and extrinsic) for all episodes
  • Quality assessments for each episode


Get Started

Dataset Access

The RealSource World dataset has been fully uploaded to Hugging Face and can be accessed via:

  • Hugging Face Repository: RealSourceData/RealSource-World
  • Dataset Size: 549GB
  • File Format: LeRobot v2.1 format
  • Data Organization: Organized by tasks, each task contains data/, meta/, and videos/ directories

Download the Dataset

To download the full dataset, use the following commands. If you encounter any issues, please refer to the official Hugging Face documentation.

Note: Due to the large dataset size (549GB), it is recommended to use Git LFS for downloading, or use the Hugging Face Datasets library to load data on-demand.


```shell
# Make sure you have git-lfs installed (https://git-lfs.com)
git lfs install

# When prompted for a password, use an access token with read permissions.
# Generate one from your settings: https://huggingface.co/settings/tokens
git clone https://huggingface.co/datasets/RealSourceData/RealSource-World

# To clone without large files (just their pointers):
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/RealSourceData/RealSource-World
```

If you only want to download a specific task from the RealSource World dataset, such as Arrange_the_cups, follow these steps:


```shell
# Ensure Git LFS is installed (https://git-lfs.com)
git lfs install

# Initialize an empty Git repository
git init RealSource-World
cd RealSource-World

# Set the remote repository
git remote add origin https://huggingface.co/datasets/RealSourceData/RealSource-World

# Enable sparse-checkout
git sparse-checkout init

# Specify the folders and files you want to download
git sparse-checkout set Arrange_the_cups scripts

# Pull the data from the main branch
git pull origin main
```
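As an alternative to sparse checkout, the huggingface_hub library can fetch a single task directory via glob patterns. A minimal sketch, assuming the repository layout above; the `task_patterns` and `download_task` helpers are illustrative and not part of any dataset tooling:

```python
def task_patterns(task_name: str) -> list:
    """Glob patterns matching one task directory plus the shared scripts folder."""
    return [f"{task_name}/**", "scripts/**"]

def download_task(task_name: str, local_dir: str = "RealSource-World") -> str:
    """Fetch a single task instead of the full 549GB repository."""
    # Imported lazily so the pattern helper works without huggingface_hub installed.
    from huggingface_hub import snapshot_download

    return snapshot_download(
        repo_id="RealSourceData/RealSource-World",
        repo_type="dataset",
        allow_patterns=task_patterns(task_name),
        local_dir=local_dir,
    )

# Example (requires network access and sufficient disk space):
# download_task("Arrange_the_cups")
```

`allow_patterns` restricts the download to matching paths, so only the chosen task is materialized locally.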

Dataset Structure

Folder Hierarchy

```
RealSource-World/
β”œβ”€β”€ Arrange_the_cups/                # Task name (36 tasks in total)
β”‚   β”œβ”€β”€ data/
β”‚   β”‚   └── chunk-000/
β”‚   β”‚       β”œβ”€β”€ episode_000000.parquet
β”‚   β”‚       β”œβ”€β”€ episode_000001.parquet
β”‚   β”‚       └── ...                  # 871 parquet files for this task
β”‚   β”œβ”€β”€ meta/
β”‚   β”‚   β”œβ”€β”€ info.json                # Dataset metadata and feature definitions
β”‚   β”‚   β”œβ”€β”€ episodes.jsonl           # Episode-level metadata
β”‚   β”‚   β”œβ”€β”€ episodes_stats.jsonl     # Episode statistics
β”‚   β”‚   β”œβ”€β”€ tasks.jsonl              # Task descriptions
β”‚   β”‚   β”œβ”€β”€ sub_tasks.jsonl          # Fine-grained sub-task annotations
β”‚   β”‚   └── camera.json              # Camera parameters for all episodes
β”‚   └── videos/
β”‚       └── chunk-000/
β”‚           β”œβ”€β”€ observation.images.head_camera/
β”‚           β”‚   β”œβ”€β”€ episode_000000.mp4
β”‚           β”‚   └── ...
β”‚           β”œβ”€β”€ observation.images.left_hand_camera/
β”‚           β”‚   β”œβ”€β”€ episode_000000.mp4
β”‚           β”‚   └── ...
β”‚           └── observation.images.right_hand_camera/
β”‚               β”œβ”€β”€ episode_000000.mp4
β”‚               └── ...
β”œβ”€β”€ Arrange_the_items_on_the_conference_table/
β”‚   └── ...
β”œβ”€β”€ Clean_the_convenience_store/
β”‚   └── ...
└── ...                              # 36 tasks in total
```

Understanding the Dataset Format

This dataset follows the LeRobot v2.1 format. Each task directory contains:

  • data/: Parquet files storing time-series data (proprioceptive states, actions, timestamps)
  • meta/: JSON/JSONL files with metadata, episode information, and annotations
  • videos/: MP4 video files from three camera perspectives

Key Metadata Files

  • meta/info.json: Dataset-level metadata, including:
      • Total episodes, frames, and videos
      • Feature definitions (action and observation shapes and names)
      • Video specifications (resolution, codec, FPS)
      • Robot type and codebase version
  • meta/episodes.jsonl: One JSON object per line, each describing an episode:
      • episode_index: Episode identifier
      • length: Number of frames in the episode
      • tasks: List of task descriptions
      • videos: Paths to the video files for each camera
  • meta/sub_tasks.jsonl: Fine-grained annotations for each episode, including:
      • task_steps: List of atomic skill segments with start and end frames
      • success_rating: Overall task success score (1-5)
      • quality_assessments: Detailed quality metrics (PASS/FAIL/VALID)
      • notes: Annotation metadata
  • meta/camera.json: Camera intrinsic and extrinsic parameters for each episode
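The .jsonl files above are JSON Lines: one JSON object per line. They can be parsed with the standard library alone. The sketch below builds a tiny synthetic episodes.jsonl (the episode values are made up) to demonstrate the parsing:

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Parse a JSON Lines file: one JSON object per non-empty line."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

# Synthetic two-episode file mirroring the episodes.jsonl fields described above.
demo = Path("episodes_demo.jsonl")
demo.write_text(
    '{"episode_index": 0, "length": 946, "tasks": ["Arrange the cups"]}\n'
    '{"episode_index": 1, "length": 812, "tasks": ["Arrange the cups"]}\n'
)

episodes = read_jsonl(demo)
total_frames = sum(ep["length"] for ep in episodes)
print(len(episodes), total_frames)  # 2 1758
```

The same helper works for episodes_stats.jsonl, tasks.jsonl, and sub_tasks.jsonl, since all share the JSON Lines layout.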

Loading and Using the Dataset

This dataset is compatible with the LeRobot library. Here's how to load and use it (attribute names below follow the LeRobot v2.x API and may vary slightly between library versions):

```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# Load a specific task; root points at a locally downloaded task directory
repo_id = "RealSourceData/RealSource-World"
dataset = LeRobotDataset(repo_id, root="RealSource-World/Arrange_the_cups")

# Access the first frame of the first episode
frame_0 = dataset[0]

# Iterate through episode metadata (loaded from meta/episodes.jsonl)
for episode_idx in range(dataset.num_episodes):
    episode_length = dataset.meta.episodes[episode_idx]["length"]
    print(f"Episode {episode_idx} has {episode_length} frames")

# To play back an episode's videos, see LeRobot's dataset visualization
# script (lerobot/scripts/visualize_dataset.py)
```

Data Format Details

Proprioceptive State (57-dimensional)

The observation.state field contains comprehensive proprioceptive information:

| Index Range | Component | Description |
|---|---|---|
| 0-15 | Joint positions | 7 joints Γ— 2 arms + 2 grippers = 16 DOF |
| 16 | Lift position | Mobile base lift height |
| 17-22 | Left arm force/torque | 6D wrench (fx, fy, fz, mx, my, mz) |
| 23-28 | Right arm force/torque | 6D wrench (fx, fy, fz, mx, my, mz) |
| 29-35 | Left joint velocities | 7 joints |
| 36-42 | Right joint velocities | 7 joints |
| 43-49 | Left end-effector pose | Position (x, y, z) + quaternion (qw, qx, qy, qz) |
| 50-56 | Right end-effector pose | Position (x, y, z) + quaternion (qw, qx, qy, qz) |

State Field Names

```
[
  "LeftFollowerArm_Joint1.pos", ..., "LeftFollowerArm_Joint7.pos",
  "LeftGripper.pos",
  "RightFollowerArm_Joint1.pos", ..., "RightFollowerArm_Joint7.pos",
  "RightGripper.pos",
  "Lift.position",
  "LeftForce.fx", "LeftForce.fy", "LeftForce.fz",
  "LeftForce.mx", "LeftForce.my", "LeftForce.mz",
  "RightForce.fx", "RightForce.fy", "RightForce.fz",
  "RightForce.mx", "RightForce.my", "RightForce.mz",
  "LeftJoint_Vel1", ..., "LeftJoint_Vel7",
  "RightJoint_Vel1", ..., "RightJoint_Vel7",
  "LeftEnd_X", "LeftEnd_Y", "LeftEnd_Z",
  "LeftEnd_Qw", "LeftEnd_Qx", "LeftEnd_Qy", "LeftEnd_Qz",
  "RightEnd_X", "RightEnd_Y", "RightEnd_Z",
  "RightEnd_Qw", "RightEnd_Qx", "RightEnd_Qy", "RightEnd_Qz"
]
```
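Putting the index table and field names together, a 57-D state vector can be split into named components. A minimal sketch; the `STATE_SLICES` mapping is derived from the table above and is not shipped with the dataset:

```python
import numpy as np

# Index layout of observation.state, following the table above.
STATE_SLICES = {
    "joint_positions": slice(0, 16),   # arms (7+7) + grippers (1+1)
    "lift_position":   slice(16, 17),
    "left_force":      slice(17, 23),  # fx, fy, fz, mx, my, mz
    "right_force":     slice(23, 29),
    "left_joint_vel":  slice(29, 36),
    "right_joint_vel": slice(36, 43),
    "left_ee_pose":    slice(43, 50),  # x, y, z, qw, qx, qy, qz
    "right_ee_pose":   slice(50, 57),
}

def split_state(state):
    """Split a 57-D proprioceptive state into named sub-vectors."""
    assert state.shape[-1] == 57, "expected a 57-D state vector"
    return {name: state[..., s] for name, s in STATE_SLICES.items()}

parts = split_state(np.arange(57.0))
print(parts["left_ee_pose"])  # the values at indices 43..49
```

The same slices apply along the last axis of a batched `(T, 57)` array, so a whole episode can be split at once.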

Action Space (17-dimensional)

The action field contains commands sent to the robot:

| Index Range | Description |
|---|---|
| 0-6 | Left arm joint positions (7 DOF) |
| 7 | Left gripper position |
| 8-14 | Right arm joint positions (7 DOF) |
| 15 | Right gripper position |
| 16 | Lift command |

Action Field Names

```
[
  "LeftLeaderArm_Joint1.pos", ..., "LeftLeaderArm_Joint7.pos",
  "LeftGripper.pos",
  "RightLeaderArm_Joint1.pos", ..., "RightLeaderArm_Joint7.pos",
  "RightGripper.pos",
  "Lift.command"
]
```
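Pairing the 17 action values with their field names gives a readable view of a single command. A small sketch; `action_to_dict` is an illustrative helper, not part of the dataset tooling:

```python
# Action field names in index order, per the layout above.
ACTION_NAMES = (
    [f"LeftLeaderArm_Joint{i}.pos" for i in range(1, 8)]
    + ["LeftGripper.pos"]
    + [f"RightLeaderArm_Joint{i}.pos" for i in range(1, 8)]
    + ["RightGripper.pos"]
    + ["Lift.command"]
)

def action_to_dict(action):
    """Map a 17-D action vector to {field_name: value}."""
    assert len(action) == 17, "expected a 17-D action vector"
    return dict(zip(ACTION_NAMES, action))

named = action_to_dict(list(range(17)))
print(named["RightGripper.pos"], named["Lift.command"])  # 15 16
```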

Visual Observations

Each episode includes synchronized video from three camera perspectives:

  • observation.images.head_camera: Overhead/head-mounted view
  • observation.images.left_hand_camera: Left end-effector mounted camera
  • observation.images.right_hand_camera: Right end-effector mounted camera

Video Specifications:

  • Resolution: 720 Γ— 1280 pixels
  • Frame rate: 30 FPS
  • Codec: H.264
  • Format: MP4

Camera Parameters

Each episode has corresponding camera parameters stored in meta/camera.json, keyed by episode_XXXXXX. The camera parameters include intrinsic parameters (camera matrix and distortion coefficients) and extrinsic parameters (hand-eye calibration).

File Structure

The camera.json file contains camera parameters for all episodes:

```
{
  "episode_000000": {
    "camera_ids": {
      "head": "245022300889",
      "left_arm": "245022301980",
      "right_arm": "245022300408",
      "foot": ""
    },
    "camera_parameters": {
      "head": {
        "720P": {
          "MTX": [[648.57, 0, 645.54], [0, 647.80, 375.38], [0, 0, 1]],
          "DIST": [-0.0513, 0.0587, -0.0006, 0.00096, -0.0186]
        },
        "480P": { ... }
      },
      "left_arm": { ... },
      "right_arm": { ... }
    },
    "hand_eye": {
      "left_arm_in_eye": {
        "R": [[...], [...], [...]],
        "T": [x, y, z]
      },
      "right_arm_in_eye": { ... },
      "left_arm_to_eye": { ... },
      "right_arm_to_eye": { ... }
    }
  },
  "episode_000001": { ... }
}
```

Camera Intrinsic Parameters

Each camera (head, left_arm, right_arm) has intrinsic parameters for two resolutions:

  • MTX: 3Γ—3 camera intrinsic matrix

    ```
    [fx  0 cx]
    [ 0 fy cy]
    [ 0  0  1]
    ```

      • fx, fy: Focal lengths in pixels
      • cx, cy: Principal point (optical center) in pixels

  • DIST: 5-element distortion coefficients (k1, k2, p1, p2, k3), used to correct radial and tangential distortion

Available Resolutions:

  • 720P: Parameters for 720p video (720 Γ— 1280)
  • 480P: Parameters for 480p video (480 Γ— 640)
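With these intrinsics, a 3D point in the camera frame projects to pixel coordinates via the standard pinhole model, u = fx·x/z + cx and v = fy·y/z + cy. A sketch using the sample 720P head-camera matrix from the camera.json excerpt above; lens distortion is ignored here (OpenCV's `cv2.projectPoints` can apply the DIST coefficients for a distortion-aware projection):

```python
import numpy as np

# Sample head-camera 720P intrinsic matrix from the camera.json excerpt.
K = np.array([[648.57, 0.0, 645.54],
              [0.0, 647.80, 375.38],
              [0.0, 0.0, 1.0]])

def project(point_cam, K):
    """Pinhole projection: camera-frame 3D point (meters) -> pixel (u, v)."""
    x, y, z = point_cam
    assert z > 0, "point must lie in front of the camera"
    u = K[0, 0] * x / z + K[0, 2]
    v = K[1, 1] * y / z + K[1, 2]
    return np.array([u, v])

# A point on the optical axis lands exactly on the principal point.
uv = project(np.array([0.0, 0.0, 1.0]), K)
print(uv)  # [645.54 375.38]
```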

Hand-Eye Calibration (Extrinsic Parameters)

The hand_eye section contains the transformations between the robot end-effectors and the cameras:

  • left_arm_in_eye: Pose of the left wrist-mounted camera relative to the left arm end-effector center
      • R: 3Γ—3 rotation matrix
      • T: 3Γ—1 translation vector [x, y, z] in meters
  • right_arm_in_eye: Pose of the right wrist-mounted camera relative to the right arm end-effector center (same R/T structure)
  • left_arm_to_eye: Pose of the head camera relative to the left arm base coordinate frame
      • R: 3Γ—3 rotation matrix
      • T: 3Γ—1 translation vector [x, y, z] in meters
  • right_arm_to_eye: Pose of the head camera relative to the right arm base coordinate frame (same R/T structure)

These parameters enable coordinate transformations between:

  • Robot end-effector poses and camera image coordinates
  • 3D positions in robot space and pixel coordinates in images
  • Multi-view geometric operations and calibration
  • Wrist camera frames and end-effector centers
  • Head camera frame and arm base frames
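Each R/T pair above defines a rigid transform p' = R·p + T between two frames. A minimal sketch with made-up calibration values; the real R and T come from meta/camera.json, and the direction of each transform follows the in_eye/to_eye conventions described above:

```python
import numpy as np

def transform_point(R, T, p):
    """Apply a rigid transform: p' = R @ p + T."""
    return np.asarray(R) @ np.asarray(p) + np.asarray(T)

# Hypothetical hand-eye calibration: camera rotated 180 degrees about z
# relative to the end-effector, offset 5 cm along z. Illustrative values only.
R = np.array([[-1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, 1.0]])
T = np.array([0.0, 0.0, 0.05])

p = transform_point(R, T, np.array([0.1, 0.0, 0.0]))
print(p)  # -> (-0.1, 0.0, 0.05)
```

To invert a transform, use R.T and -R.T @ T, which is often needed when chaining the wrist-camera and head-camera frames.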

Camera IDs

Each camera has a unique identifier:

  • head: Head-mounted camera ID
  • left_arm: Left end-effector camera ID
  • right_arm: Right end-effector camera ID
  • foot: Foot camera ID (if available)

Sub-task Annotations

Each episode in meta/sub_tasks.jsonl contains detailed annotations:

```
{
  "task": "Separate the two stacked cups in the dish and place them on the two sides of the dish.",
  "language": "zh",
  "task_index": 0,
  "episode_index": 0,
  "task_steps": [
    {
      "step_name": "Left arm picks up the stack of cups from the center of the plate",
      "start_frame": 100,
      "end_frame": 180,
      "description": "Left arm picks up the stack of cups from the center of the plate",
      "duration_frames": 80
    },
    ...
  ],
  "success_rating": 5,
  "notes": "annotation_date: 2025/11/13",
  "quality_assessments": {
    "overall_valid": "VALID",
    "movement_fluency": "PASS",
    "grasp_success": "PASS",
    "placement_quality": "PASS",
    ...
  },
  "total_frames": 946
}
```

Quality Assessment Metrics

  • overall_valid: Overall episode validity (VALID/INVALID)
  • movement_fluency: Smoothness of robot movements (PASS/FAIL)
  • grasp_success: Success of grasping actions (PASS/FAIL)
  • placement_quality: Quality of object placement (PASS/FAIL)
  • no_drop: No objects were dropped during the task (PASS/FAIL)
  • grasp_collisions: No collisions during grasping (PASS/FAIL)
  • arm_collisions: No arm collisions (PASS/FAIL)
  • operation_completeness: Task completion status (PASS/FAIL)
  • And more...
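These per-metric flags make it straightforward to filter episodes when building a training set. A sketch; the choice of required metrics is illustrative, and `episode_passes` is not part of the dataset tooling:

```python
REQUIRED_METRICS = ("movement_fluency", "grasp_success", "placement_quality")

def episode_passes(annotation, required=REQUIRED_METRICS):
    """True if the episode is VALID overall and PASSes every required metric."""
    qa = annotation.get("quality_assessments", {})
    return qa.get("overall_valid") == "VALID" and all(
        qa.get(metric) == "PASS" for metric in required
    )

# Synthetic annotation in the sub_tasks.jsonl shape shown earlier.
demo = {
    "episode_index": 0,
    "success_rating": 5,
    "quality_assessments": {
        "overall_valid": "VALID",
        "movement_fluency": "PASS",
        "grasp_success": "PASS",
        "placement_quality": "FAIL",
    },
}
print(episode_passes(demo))  # False: placement_quality failed
```

Applied over all records in meta/sub_tasks.jsonl, this yields the episode indices worth keeping for a given quality bar.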

Dataset Statistics

Overall Statistics

  • Total Tasks: 36
  • Total Dataset Size: 549GB
  • Total Files: approximately 104,907 files
  • Total Episodes: 11,428
  • Total Frames: 14,085,107
  • Total Videos: 34,284 (3 cameras Γ— 11,428 episodes)
  • Robot Type: RS-02 (dual-arm humanoid robot)
  • Dataset Format: LeRobot v2.1
  • Video Resolution: 720 Γ— 1280
  • Frame Rate: 30 FPS

Task Distribution

The dataset includes diverse manipulation tasks across multiple domains:

  • Kitchen Tasks: Arranging cups, cooking rice, steaming, cleaning counters, making toast, preparing birthday cake, etc.
  • Organization Tasks: Organizing magazines, tools, toys, glass tubes, pen holders, TV cabinets, etc.
  • Household Tasks: Tidying up rooms, placing books and slippers, hanging clothes to dry, etc.
  • Convenience Store Tasks: Cleaning store, organizing items, collecting mail, etc.
  • Industrial Tasks: Moving parts between containers, organizing glass tubes, etc.
  • Other Tasks: Cable plugging, replenishing tea bags, organizing repair tools, etc.

Complete Task List (36 tasks):

  1. Arrange_the_cups
  2. Arrange_the_items_on_the_conference_table
  3. Cable_Plugging_able
  4. Clean_the_convenience_store
  5. Collect_the_mail
  6. Cook_rice_using_an_electric_rice_cooker
  7. Hang_out_the_clothes_to_dry
  8. Make_toast
  9. Making_steamed_potatoes
  10. Move_industrial_parts_to_different_plastic_boxes
  11. Organize_the_TV_cabinet
  12. Organize_the_glass_tube_on_the_rack
  13. Organize_the_magazines
  14. Organize_the_pen_holder
  15. Organize_the_repair_tools
  16. Organize_the_toys
  17. Pack_the_badminton_shuttlecock
  18. Place_the_books
  19. Place_the_hairdryer
  20. Place_the_slippers
  21. Prepare_the_birthday_cake
  22. Prepare_the_bread
  23. Put_the_milk_in_the_refrigerator
  24. Refill_the_laundry_detergent
  25. Replace_the_tissues_and_arrange_them
  26. Replenish_tea_bags
  27. Stack_the_cups
  28. Steam_buns
  29. Steaming_rice_in_a_rice_cooker
  30. Take_down_the_book
  31. Take_out_the_trash
  32. Tidy_up_the_children's_room
  33. Tidy_up_the_children_s_room
  34. Tidy_up_the_conference_room_table
  35. Tidy_up_the_cooking_counter
  36. Tidy_up_the_kitchen_counter

Robot URDF Model

The RealSource World dataset was collected using the RS-02 dual-arm humanoid robot. For simulation, visualization, and research purposes, we provide the URDF (Unified Robot Description Format) model of the RS-02 robot.

RS-02 Robot Specifications

  • Robot Type: Dual-arm humanoid robot
  • Total Links: 46 links
  • Total Joints: 45 joints
  • Arms: 2 Γ— 7-DOF arms (left and right)
  • End-effectors: Two grippers (one per arm), 8 DOF each
  • Base: Mobile platform with wheels and lift mechanism
  • Sensors: Head camera, left/right hand cameras

URDF Package Structure

The RS-02 URDF package includes:

```
RS-02/
β”œβ”€β”€ urdf/
β”‚   β”œβ”€β”€ RS-02.urdf          # Main URDF file (59KB)
β”‚   └── RS-02.csv           # Joint configuration data
β”œβ”€β”€ meshes/                 # 3D mesh models (46 STL files)
β”‚   β”œβ”€β”€ base_link.STL
β”‚   β”œβ”€β”€ L_Link_1-7.STL      # Left arm links
β”‚   β”œβ”€β”€ R_Link_1-7.STL      # Right arm links
β”‚   β”œβ”€β”€ ltool_*.STL         # Left gripper components
β”‚   β”œβ”€β”€ rtool_*.STL         # Right gripper components
β”‚   β”œβ”€β”€ head_*.STL          # Head components
β”‚   └── camera_*.STL        # Camera mounts
β”œβ”€β”€ config/
β”‚   └── joint_names_RS-02.yaml  # Joint name configuration
β”œβ”€β”€ launch/
β”‚   β”œβ”€β”€ display.launch      # RViz visualization
β”‚   └── gazebo.launch       # Gazebo simulation
└── package.xml             # ROS package metadata
```

Using the URDF Model

For ROS/ROS2 Users

The URDF model can be used with ROS tools:

Visualization in RViz:

```shell
roslaunch RS-02 display.launch
```

Simulation in Gazebo:

```shell
roslaunch RS-02 gazebo.launch
```

License and Citation

All the data and code within this repo are under CC BY-NC-SA 4.0. Please consider citing our project if it helps your research.

```bibtex
@misc{realsourceworld,
  title={RealSource World: A Large-Scale Real-World Dual-Arm Manipulation Dataset},
  author={RealSource},
  howpublished={\url{https://huggingface.co/datasets/RealSourceData/RealSource-World}},
  year={2025}
}
```