---
license: cc-by-sa-4.0
task_categories:
  - robotics
  - reinforcement-learning
language:
  - en
  - zh
tags:
  - agent
  - robotic
  - real-world
  - dual-arm
  - video
  - vla
  - embodied intelligence
size_categories:
  - n>1T
---

Boasting over 10,000 hours of cumulative data and more than 1 million clips, this is the largest open-source embodied intelligence dataset in the industry.

Update Notes: Stage 2 data upload is complete.

  1. 35,000 new clips featuring manual sorting and organizing of everyday objects.
  2. Enlarged field of view (FOV) for a fuller, more complete view of the environment below the gripper.
  3. More realistic and diverse targets and scenarios, covering flexible, irregular, and variously sized objects in different storage boxes.
  4. 40% higher trajectory accuracy and 50% better trajectory stability. We look forward to your feedback.

Compared with other datasets, it has the following advantages:

  1. Ample Data Volume & Strong Generalization: Each skill is supported by sufficient data, collected from over 3,000 households and nearly 10,000 distinct fine-grained targets. This avoids simple repetition and ensures robust generalization.
  2. Authentic Scenarios & Focused Skills: Captured from natural operations in real households, the data avoids the skill fragmentation that compromises quality. We focus on 10 key household scenarios and 30 core skills.
  3. Bimanual & Long-duration Tasks: Full recordings of the entire process of complex household chores and cleaning, collected with the GenDAS Gripper.
  4. Multi-modal & High-quality Data: Includes large-FOV raw images, trajectories, annotations, and joint movements. Trajectory reconstruction ensures industry-leading precision and quality.

Dataset Statistics

| Attribute | Value |
| --- | --- |
| Median Clip Length | 210.0 seconds |
| Storage Size | 95 TB |
| Format | mcap |
| Resolution | 1600 × 1296 |
| Frame Rate | 30 fps |
| Camera Type | Large-FOV fisheye camera |
| IMU | Yes (6-axis) |
| Tactile Array | Yes |
| Array Spatial Resolution | 1 mm |
| Device | GenDAS Gripper |

Stage 1+2 Content:

We have uploaded the data of Stages 1 and 2. This is only a small fraction of the dataset, and we will complete uploads for the remaining skills as soon as possible.

  1. Stage 1 covers 12 skills across 4 major scenario tasks. Total duration: 950 hours; clips: 39,761; storage: 3.45 TB.
  2. Stage 2 covers 4 major scenario tasks, capturing the process of organizing a variety of cluttered items in a real household environment. Total duration: 653 hours; clips: 36,267; storage: 1.92 TB.
| Task | Skill |
| --- | --- |
| Folding_Clothes_and_Zipper_Operations | fold_and_store_clothes |
| | zip_clothes |
| Cooking_and_Kitchen_Clean | clean_container |
| | unscrew_bottle_cap_and_pour |
| | clean_bowl |
| Organize_Clutter | fold_and_store_shopping_bag |
| | fold_towel |
| | desktop_object_sorting |
| | drawer_to_take_items |
| Shoes_Handling | lace_up_shoes_with_both_hands |
| | organize_scattered_shoes |
| Clutter_Tidy-Up (Stage 2) | irregular_object_clutter |
| | flexible_grasping_and_sorting |
| | carton_sorting_clutter |
| | small_object_storage |

We will also synchronize progress updates across major social platforms. In addition to the data, we provide relevant support, including format conversion and usage guidance: https://github.com/genrobot-ai/das-datakit.

Contact Us

Questions, suggestions, and requests for data collection scenarios/skills are welcome. Let's co-build this project to digitize all human skills.

X: https://x.com/GenrobotAI
LinkedIn: https://www.linkedin.com/company/108767412/admin/dashboard/
Email: opendata@genrobot.ai

Dataset Structure

The mcap files are stored in the leaf folders of the directory structure. Note: each mcap file represents one piece of task data.
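Since every leaf folder holds mcap files and each file is one task recording, a plain recursive walk is enough to index the dataset. A minimal sketch (the root path you pass in is a placeholder):

```python
import os

def find_mcap_files(root):
    """Recursively collect all .mcap files (one file = one task recording)."""
    hits = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if name.endswith(".mcap"):
                hits.append(os.path.join(dirpath, name))
    return sorted(hits)
```

For example, `find_mcap_files("10Kh-RealOmin-OpenData/stage2")` would return every clip under that subtree in a stable order.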

Data Format

Dual-arm tasks: robot0 and robot1 represent the left and right grippers respectively. Each gripper contains the following topics:

/robot0/sensor/camera0/compressed   # Fisheye camera image data — compressed and encoded in H.264 format.
/robot0/sensor/camera0/camera_info  # Fisheye Intrinsic and Extrinsic Parameters
/robot0/sensor/imu                  # Inertial Measurement Unit (IMU) Data
/robot0/sensor/magnetic_encoder     # Magnetic encoder data: gripper opening distance
/robot0/vio/eef_pose                # Trajectory data

Topics are serialized using Protobuf for persistent storage.
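Because each topic is a Protobuf-serialized mcap channel, the files should be readable with the off-the-shelf `mcap` and `mcap-protobuf-support` pip packages. This is a sketch based on the public mcap tooling, not an API this dataset mandates; the topic default below is just one of the topics listed above:

```python
def iter_gripper_messages(path, topics=("/robot0/vio/eef_pose",)):
    """Yield (topic, decoded protobuf message) pairs from one task's mcap file.

    Requires:  pip install mcap mcap-protobuf-support
    """
    # Imports are local so the helper can be defined even without the packages installed.
    from mcap.reader import make_reader
    from mcap_protobuf.decoder import DecoderFactory

    with open(path, "rb") as f:
        reader = make_reader(f, decoder_factories=[DecoderFactory()])
        for _schema, channel, _message, decoded in reader.iter_decoded_messages(
            topics=list(topics)
        ):
            yield channel.topic, decoded
```

Usage would look like `for topic, msg in iter_gripper_messages("path/to/clip.mcap"): ...`, with `msg` being the decoded Protobuf message for that topic.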

/robot0/sensor/camera0/compressed:

// A compressed image
message CompressedImage {
  // Timestamp of image
  google.protobuf.Timestamp timestamp = 1;

  // frame id
  string frame_id = 4;

  // Compressed image data, h264 video stream
  bytes data = 2;

  // Image format
  // Supported values: `webp`, `jpeg`, `png`, `h264`
  string format = 3;

  // common header, timestamp is inside it
  Header header = 8;
}

message Header {
  string module_name = 1;
  uint32 sequence_num = 2;
  uint64 timestamp = 3;
  string topic_name = 4;
  double expect_hz = 5;
  repeated Input inputs = 6;
}
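Since `data` carries an H.264 stream rather than standalone images, one hedged way to recover frames (assuming the per-message payloads concatenate into a raw Annex-B stream, which should be verified against the actual files) is to dump them to a `.h264` file and let ffmpeg decode it:

```python
def dump_h264_stream(payloads, out_path):
    """Concatenate CompressedImage.data payloads (bytes) into a raw .h264 file."""
    with open(out_path, "wb") as f:
        for chunk in payloads:
            f.write(chunk)
    return out_path

# Afterwards, extract frames with e.g.:
#   ffmpeg -i stream.h264 frames_%05d.png
```

Here `payloads` is assumed to be the `data` fields of consecutive `CompressedImage` messages from one camera topic, in timestamp order.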

/robot0/sensor/camera0/camera_info:

// Camera calibration parameters
message CameraCalibration {
  // not used
  google.protobuf.Timestamp timestamp = 1;

  // frame id
  string frame_id = 9;

  // Image width
  fixed32 width = 2;

  // Image height
  fixed32 height = 3;

  // Name of distortion model
  string distortion_model = 4;

  // Distortion parameters
  repeated double D = 5;

  // Intrinsic camera matrix (3x3 row-major matrix)
  // 
  // A 3x3 row-major matrix for the raw (distorted) image.
  // 
  // Projects 3D points in the camera coordinate frame to 2D pixel coordinates using the focal lengths (fx, fy) and principal point (cx, cy).
  // 
  // ```
  //     [fx  0 cx]
  // K = [ 0 fy cy]
  //     [ 0  0  1]
  // ```
  repeated double K = 6; // length 9

  // Rectification matrix (stereo cameras only, 3x3 row-major matrix)
  // 
  // A rotation matrix aligning the camera coordinate system to the ideal stereo image plane so that epipolar lines in both stereo images are parallel.
  repeated double R = 7; // length 9

  // Projection/camera matrix (stereo cameras only, 3x4 row-major matrix)
  //     [fx'  0  cx' Tx]
  // P = [ 0  fy' cy' Ty]
  //     [ 0   0   1   0]
  repeated double P = 8; // length 12

  // transform from camera to base frame
  repeated double T_b_c = 10; // length 7, [tx ty tz qx qy qz qw]
  // common header
  Header header = 11;
}
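Ignoring distortion for a moment (accurate results near the image edges need the fisheye model in `distortion_model`/`D`), the flat row-major `K` projects a camera-frame 3D point to pixels as u = fx·X/Z + cx, v = fy·Y/Z + cy. A minimal sketch with made-up intrinsics, not this camera's calibration:

```python
def project_point(K, point):
    """Pinhole projection of a camera-frame 3D point using the flat 3x3 row-major K."""
    fx, cx, fy, cy = K[0], K[2], K[4], K[5]
    x, y, z = point
    if z <= 0:
        raise ValueError("point must be in front of the camera")
    return (fx * x / z + cx, fy * y / z + cy)

# Illustrative intrinsics only (fx = fy = 600, principal point at image center):
K = [600.0, 0.0, 800.0,
     0.0, 600.0, 648.0,
     0.0, 0.0, 1.0]
u, v = project_point(K, (0.1, 0.2, 1.0))  # → (860.0, 768.0)
```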

/robot0/sensor/imu:

// IMU message
message IMUMeasurement {
  // common header
  arnold.common.proto.Header header = 1;
  // frame id
  string frame_id = 2;
  foxglove.Vector3 angular_velocity = 3;
  // Acceleration data in g-force units
  foxglove.Vector3 linear_acceleration = 4;
  // float temperature = 5;
  // repeated float angular_velocity_covariance = 6;
  // repeated float linear_acceleration_covariance = 7;
}
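Because `linear_acceleration` is reported in g-force units, most dynamics code will want m/s² instead. Using the standard-gravity constant here is a convention choice on our part, not something the dataset specifies:

```python
STANDARD_GRAVITY = 9.80665  # m/s^2 per g (ISO 80000-3 standard gravity)

def accel_g_to_ms2(accel_g):
    """Convert an (x, y, z) acceleration tuple from g to m/s^2."""
    return tuple(a * STANDARD_GRAVITY for a in accel_g)

# accel_g_to_ms2((0.0, 0.0, 1.0)) → (0.0, 0.0, 9.80665)
```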

/robot0/sensor/magnetic_encoder:

message MagneticEncoderMeasurement {
  // common header
  arnold.common.proto.Header header = 1;
  // frame id
  string frame_id = 2;
  // Distance between gripper fingers, 0-0.103m, 0 means closed
  double value = 3;
}
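Since `value` is the finger gap in metres over a 0–0.103 m range, a normalized open fraction is often more convenient for policy learning. The 0.103 m maximum comes from the comment in the message definition above:

```python
MAX_OPENING_M = 0.103  # from the message comment: 0 means closed

def gripper_open_fraction(value_m):
    """Map the magnetic-encoder finger gap (metres) to an open fraction in [0, 1]."""
    return min(max(value_m / MAX_OPENING_M, 0.0), 1.0)
```

Clamping guards against small sensor readings outside the nominal range.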

/robot0/vio/eef_pose:

// A timestamped pose for an object or reference frame in 3D space
message PoseInFrame {
  // not used
  google.protobuf.Timestamp timestamp = 1;

  // Frame id
  string frame_id = 2;

  // Pose in 3D space
  foxglove.Pose pose = 3;
  // linear vel
  foxglove.Vector3 linear_vel= 4;
  // angular_vel
  foxglove.Vector3 angular_vel = 5;
  // common header
  arnold.common.proto.Header header = 6;
}
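Both `T_b_c` above and `foxglove.Pose` encode orientation as a quaternion, so applying either to a 3D point works the same way. This sketch assumes the (x, y, z, w) component order documented for `T_b_c`:

```python
def quat_to_matrix(qx, qy, qz, qw):
    """Unit quaternion (x, y, z, w) to a 3x3 rotation matrix (row-major nested lists)."""
    return [
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ]

def apply_pose(translation, quat, point):
    """Transform a 3D point by a (translation, quaternion) pose: R @ p + t."""
    R = quat_to_matrix(*quat)
    return tuple(
        sum(R[i][j] * point[j] for j in range(3)) + translation[i] for i in range(3)
    )
```

For example, a 90° rotation about z, `quat = (0, 0, sqrt(0.5), sqrt(0.5))`, maps the point (1, 0, 0) to approximately (0, 1, 0).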

How to Visualize Data

Web view tool:

https://monitor.genrobot.click/#/index

URDF

https://huggingface.co/datasets/genrobot2025/10Kh-RealOmin-OpenData/blob/main/DAS_Gripper_V3.zip

Coordinate System

https://github.com/genrobot-ai/product-docs/blob/main/docs/md-productions-das-Matrix%20Studio%20Manual-en.md#5-data-intruction

How to Load Data

Reference:

https://github.com/genrobot-ai/das-datakit