license: cc-by-nc-sa-4.0
LET: Full-Size Humanoid Robot Real-Robot Force and Tactile Dataset
📰 Updates
- 2025-12-19: 🆕 Released new tactile dexterous hand and exoskeleton teleoperation data 🖐️🦾
✨ Key Features
- Large-scale, multi-view, multi-modal data collected on real full-size humanoid robots, continuously updated
- Covers multiple domains including industry, home, medical, and service, with 31 sub-task scenarios
- Includes 117 atomic skills such as grasping, bimanual manipulation, and tool use, with a total duration of over 1,000 hours
- Expert-labeled and human-verified data to ensure high quality
- Provides a complete toolchain spanning data conversion, model training, inference, and validation
🤖 Hardware Platform
Robot Body
The main hardware platform is the Kuavo 4 Pro and its wheeled variant, with the following specifications:
- Robot parameters: Height 1.66 m, weight 55 kg, supports hot-swappable batteries
- Motion control: 40 degrees of freedom, max walking speed 7 km/h, supports bipedal autonomous SLAM
- Generalization: Supports multi-modal large models (e.g., Pangu, DeepSeek, ChatGPT), with 20+ atomic skills
Dexterous Hands and Teleoperation Devices
Note: the following devices are described from left to right as they appear in the images:
- Linker TA: A 14-DoF teleoperated arm that precisely maps human arm trajectories for high-precision motion capture.
- Linker TG: A high-precision flexible teleoperation glove that captures hand trajectories and finger movements in real-time.
- Linker Hand L6: A high-precision bionic hand with a "6 active + 5 passive" joint design, balancing high rigidity and lightweight.
- Linker FFG: A professional force feedback glove integrating high-precision motion capture and grasp force feedback for immersive interaction.
🚀 Usage Guide
Tool Repository
We provide a complete tool repository, including:
- Data conversion tool (rosbag2lerobot): converts rosbag recordings into a format suitable for model training; a quick way to inspect a bag before converting it is sketched below
- Two imitation learning models: Diffusion Policy and ACT
- Model training scripts
- Code and deployment instructions for both real robots and simulation environments
For details, see the open-source repository: kuavo_data_challenge 🔥
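As a quick sanity check before running the conversion tool, you can list the topics a bag contains. A minimal sketch using the ROS1 rosbag Python API (the bag file name is a hypothetical example):

```python
# Minimal sketch: list the topics and message counts in a downloaded bag
# before converting it with rosbag2lerobot. Requires a ROS1 environment
# with the rosbag Python package; the file name below is hypothetical.
import rosbag

with rosbag.Bag("episode_0.bag") as bag:
    info = bag.get_type_and_topic_info()
    for topic, meta in sorted(info.topics.items()):
        print(f"{topic}: {meta.msg_type} ({meta.message_count} messages)")
```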
🎬 Tasks and Data Overview
This dataset covers scenarios such as automobile factories, FMCG, hotel services, 3C factories, life services, and logistics, and includes multi-modal observations (RGB, depth, joint states, etc.) together with a rich set of atomic skills (grasping, bimanual manipulation, tool use, etc.).
Semantic Labels
The LET dataset decomposes complex tasks into a series of atomic action steps with clear semantics, using a standardized annotation scheme to provide a sub-task-level timeline and natural-language annotations for each task.
Each data entry is accompanied by multi-dimensional semantic label information, including:
- Object labels: industrial parts, tableware, daily utensils, medicines, etc.
- Skill labels: grasp, place, rotate, push, pull, press, etc.
- Task and scene identifiers: unified task-name codes, with a scene dimension that distinguishes the semantics of the operating context
- End effector type: records actions performed by gripper and dexterous hand separately
- Language description: e.g., "Pick up the medicine box from the conveyor belt and place it on the designated tray", supporting natural language and action alignment modeling
Dataset Directory Structure
```
.
└── rosbag
    └── tactile
        └── real
            └── Labelled
                ├── Clean-up_desktop_items-P4-Linker_Hand_L6
                ├── SF_Express_Parcel_Sorting-P4-Linker_Hand_L6
                ├── Smart_postal_packages-P4-Linker_Hand_L6
                └── Smartandfast-forward-P4-Linker_Hand_L6
```
Data Format
ROSbag Data Format
General
| Topic Type | Topic Name | Message Type | Main Fields / Description |
|---|---|---|---|
| Camera RGB Image | /cam_x/color/image_raw/compressed | sensor_msgs/CompressedImage | x is h/l/r, for head/left wrist/right wrist camera respectively; header (message header with timestamp, sequence, frame, etc.), format (image encoding format), data (image data) |
| Camera Depth Image | /cam_x/depth/image_rect_raw/compressed | sensor_msgs/CompressedImage | x is h/l/r, for head/left wrist/right wrist camera respectively; header (message header), format (encoding format), data (image data) |
| Arm Trajectory Control | /kuavo_arm_traj | sensor_msgs/JointState | header (message header), name (joint name list, 14 joints, arm_joint_1~arm_joint_14), position (desired joint position, structure same as raw sensor data items 12-25) |
| Raw Sensor Data | /sensors_data_raw | kuavo_msgs/sensorsData | sensor_time (timestamp), joint_data (joint data: position, velocity, acceleration, current), imu_data (IMU data: gyroscope, accelerometer, quaternion), end_effector_data (end effector data, currently unused) |
| Dexterous Hand Position (Real Robot) | /control_robot_hand_position | kuavo_msgs/robotHandPosition | left_hand_position (left hand 6D, 0 open, 100 closed), right_hand_position (right hand 6D, 0 open, 100 closed) |
| Dexterous Hand State (Real Robot) | /dexhand/state | sensor_msgs/JointState | name (12 joint names), position (12 joint positions, first 6 for left hand, last 6 for right hand), velocity (12 joint velocities), effort (12 joint currents) |
| Gripper Control (Real Robot) | /leju_claw_command | kuavo_msgs/leju_claw_command | name (length 2, left_claw/right_claw), position (length 2, 0 open, 100 closed), velocity (length 2, target velocity, default 50), effort (length 2, target current in A, default 1) |
| Gripper State (Real Robot) | /leju_claw_state | kuavo_msgs/lejuClawState | state (int8[2], left/right gripper state, see details below), data (kuavo_msgs/endEffectorData, contains gripper position, velocity, current) |
| Simulation Gripper Control | /gripper/command | sensor_msgs/JointState | header (message header), position (length 2, 0 open, 255 closed) |
| Simulation Gripper State | /gripper/state | sensor_msgs/JointState | header (message header), position (length 2, 0 open, 0.8 closed) |
| Robot Position Command | /cmd_pose_world | geometry_msgs/Twist | linear.x/y/z (translation in world frame in m), angular.x/y/z (rotation in world frame in radians) |
Detailed Field Descriptions
/cam_x/color/image_raw/compressed, /cam_x/depth/image_rect_raw/compressed:
- header (std_msgs/Header): Message header with timestamp, sequence number, and frame information
- format (string): Image encoding format
- data (uint8[]): Image data
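A minimal sketch of decoding an RGB frame from one of these topics with OpenCV (the bag file name is hypothetical; depth frames compressed with the compressedDepth transport need extra header handling, which is omitted here):

```python
# Minimal sketch: decode the first head-camera RGB frame from a bag.
import cv2
import numpy as np
import rosbag

with rosbag.Bag("episode_0.bag") as bag:  # hypothetical file name
    for _, msg, t in bag.read_messages(topics=["/cam_h/color/image_raw/compressed"]):
        img = cv2.imdecode(np.frombuffer(msg.data, np.uint8), cv2.IMREAD_COLOR)
        print(t.to_sec(), img.shape)  # (height, width, 3) BGR image
        break
```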
/kuavo_arm_traj:
- header: Message header
- name: Joint name list, 14 joints named arm_joint_1~arm_joint_14
- position: Desired joint positions, in the same order as items 12–25 of the raw sensor data
/sensors_data_raw:
- sensor_time (time): Timestamp
- joint_data (kuavo_msgs/jointData): Joint data including position, velocity, acceleration, and current
- Data order:
  - First 12 entries are lower-limb motor data:
    - Indices 0–5: left leg (l_leg_roll, l_leg_yaw, l_leg_pitch, l_knee, l_foot_pitch, l_foot_roll)
    - Indices 6–11: right leg (r_leg_roll, r_leg_yaw, r_leg_pitch, r_knee, r_foot_pitch, r_foot_roll)
  - Next 14 entries are arm motor data:
    - Indices 12–18: left arm (l_arm_pitch, l_arm_roll, l_arm_yaw, l_forearm_pitch, l_hand_yaw, l_hand_pitch, l_hand_roll)
    - Indices 19–25: right arm (r_arm_pitch, r_arm_roll, r_arm_yaw, r_forearm_pitch, r_hand_yaw, r_hand_pitch, r_hand_roll)
  - Last 2 entries are head motor data: head_yaw, head_pitch
- Units: position in rad, velocity in rad/s, acceleration in rad/s², current in A
- imu_data (kuavo_msgs/imuData): IMU data including gyroscope (gyro, in rad/s), accelerometer (acc, in m/s²), and orientation quaternion (quat)
- end_effector_data (kuavo_msgs/endEffectorData): End-effector data, currently unused
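Given this layout, a small helper can split the 28-entry position array into named groups (a sketch; the function name is ours):

```python
# Minimal sketch: split the 28-entry joint_data.position array from
# /sensors_data_raw into the groups documented above.
def split_joint_positions(position):
    assert len(position) >= 28, "expected 12 leg + 14 arm + 2 head entries"
    return {
        "left_leg":  position[0:6],
        "right_leg": position[6:12],
        "left_arm":  position[12:19],
        "right_arm": position[19:26],
        "head":      position[26:28],  # head_yaw, head_pitch
    }
```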
/control_robot_hand_position:
- left_hand_position (float[6]): Left hand, 6 values, each in [0, 100]; 0 fully open, 100 fully closed
- right_hand_position (float[6]): Right hand, same meaning as above
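For replaying commands on a real robot, a minimal publisher sketch (assumption: kuavo_msgs is built in your ROS workspace and exposes robotHandPosition as a Python message class):

```python
# Minimal sketch: close both dexterous hands halfway.
import rospy
from kuavo_msgs.msg import robotHandPosition  # assumed import path for the msg type above

rospy.init_node("hand_command_demo")
pub = rospy.Publisher("/control_robot_hand_position", robotHandPosition, queue_size=1)
rospy.sleep(1.0)  # give the publisher time to connect

msg = robotHandPosition()
msg.left_hand_position = [50] * 6   # 0 = fully open, 100 = fully closed
msg.right_hand_position = [50] * 6
pub.publish(msg)
```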
/dexhand/state:
- name (string[12]): 12 joint names
- position (float[12]): 12 joint positions; first 6 for the left hand, last 6 for the right hand
- velocity (float[12]): 12 joint velocities, same ordering
- effort (float[12]): 12 joint currents, same ordering
/leju_claw_command:
- name (string[2]): left_claw, right_claw
- position (float[2]): Left/right gripper target position, [0, 100]; 0 open, 100 closed
- velocity (float[2]): Target velocity, [0, 100], default 50
- effort (float[2]): Target current in A, default 1
/leju_claw_state:
- state (int8[2]): Left/right gripper state, with the following meanings:
  - -1: Error (execution anomaly)
  - 0: Unknown (default initialization state)
  - 1: Moving
  - 2: Reached target position
  - 3: Object grasped
- data (kuavo_msgs/endEffectorData): Gripper position, velocity, and current, with the same structure as /leju_claw_command
/gripper/command (simulation):
- header: Message header
- position (float[2]): Left/right gripper target position, [0, 255]; 0 open, 255 closed
/gripper/state (simulation):
- header: Message header
- position (float[2]): Left/right gripper current position, [0, 0.8]; 0 open, 0.8 closed
/cmd_pose_world (simulation Task 4 only):
- linear.x/y/z (float): Translation in the world frame, in meters
- angular.x/y/z (float): Rotation in the world frame, in radians
Tactile
| Topic Type | Topic Name | Message Type | Main Fields / Description |
|---|---|---|---|
| Hand Control Command | /cb_$pos_hand_control_cmd | sensor_msgs/JointState | header (message header); name (joint names: thumb flexion, thumb abduction, index finger flexion, middle finger flexion, ring finger flexion, little finger flexion); position (target position per joint; thumb flexion initializes around 155, and a joint set to 255 does not move); velocity (velocity threshold per joint) |
| Hand State Feedback | /cb_$pos_hand_state | sensor_msgs/JointState | header (message header); position (current actual position per joint; 255 fully open, 0 fully closed); velocity (per-joint velocity value, not functionally used) |
| Hand Tactile Matrix | /cb_$pos_hand_matrix_touch_pc2 | sensor_msgs/PointCloud2 | header (message header); height, width (data layout dimensions); fields (data type declaration; datatype 2 corresponds to uint8); is_bigendian (whether the data is big-endian); point_step (bytes per point); row_step (bytes per row); data (taxel array: each finger carries 72 taxels arranged as 12 rows × 6 columns, so 5 fingers × 72 = 360 taxels in total, filled in order from thumb to little finger; 255 = full pressure, 0 = no pressure) |
| Six-Axis Force/Torque | /force6d_$pos_hand_force_torque | geometry_msgs/WrenchStamped | header (message header); wrench.force (x/y/z components of the applied external force, in N); wrench.torque (x/y/z components of the applied external torque, in N·m) |
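A minimal sketch of unpacking the taxel array from the tactile matrix message into a per-finger grid (assumption: the data field is the raw 360 uint8 taxels in the order above, one byte per taxel):

```python
# Minimal sketch: reshape the 360-taxel array into (finger, row, col).
import numpy as np

FINGERS = ["thumb", "index", "middle", "ring", "little"]

def decode_taxels(msg):
    """msg is a sensor_msgs/PointCloud2 from /cb_$pos_hand_matrix_touch_pc2."""
    taxels = np.frombuffer(bytes(msg.data), dtype=np.uint8)[:360]
    grid = taxels.reshape(5, 12, 6)  # 5 fingers x 12 rows x 6 columns
    return {name: grid[i] for i, name in enumerate(FINGERS)}

# Example: mean pressure per finger (255 = full pressure, 0 = none)
# mean_pressure = {f: g.mean() for f, g in decode_taxels(msg).items()}
```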
Label Format
Label information is stored in a JSON file with the same name as the data file. Example:
```json
{
  "loaction": "Yangtze River Delta Integrated Demonstration Zone Intelligent Robot Training Center",
  "primaryScene": "Default primary scene",
  "primarySceneCode": "default_level_one_scene",
  "secondaryScene": "3C factory scene",
  "secondarySceneCode": "3C factory manufacturing",
  "tertiaryScene": "Coil sorting",
  "tertiarySceneCode": "Coil sorting",
  "initSceneText": "Coils of various colors are placed in the middle of the table, material boxes are placed on both sides of the table, and the robot is located at the back of the table",
  "englishInitSceneText": "Coils of various colors are placed in the middle of the table, material boxes are placed on both sides of the table, and the robot is located at the back of the table",
  "taskGroupName": "Single coil sorting",
  "taskGroupCode": "single_coil_sorting",
  "taskName": "7-22-Coil classification",
  "taskCode": "XQFL_11",
  "deviceSn": "P4-209",
  "taskPrompt": "",
  "marks": [
    {
      "taskId": "1947326026455584768",
      "markStart": "2025-07-22 9:18:39.640",
      "markEnd": "2025-07-22 9:18:39.814",
      "duration": 0.233,
      "startPosition": 0.7363737795977026,
      "endPosition": 0.769568869806783,
      "skillAtomic": "pick",
      "skillDetail": "Pick up the coil from the table",
      "enSkillDetail": "pick coil from table",
      "markType": "step"
    }
  ]
}
```
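A minimal sketch of loading such a label file and listing its atomic-skill steps (the file name is hypothetical; each label JSON shares the name of its data file):

```python
# Minimal sketch: print the atomic-skill timeline stored in a label file.
import json

with open("example_label.json", encoding="utf-8") as f:  # hypothetical name
    label = json.load(f)

print(label["taskName"], "|", label["tertiaryScene"])
for mark in label["marks"]:
    if mark["markType"] == "step":
        print(f'{mark["markStart"]} -> {mark["markEnd"]}: '
              f'{mark["skillAtomic"]} ({mark["enSkillDetail"]})')
```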
📥 Data Access
- Official request: You can request access by contacting the official email wangsong@lejurobot.com.
- Public platforms: The LET dataset will be publicly released on major platforms such as Openloong, ModelScope, and Hugging Face for the convenience of developers and researchers worldwide.
📋 Data Communication Group
- Data communication QQ group: 1043359345
📝 Citation
If you use this dataset in your research, please cite it according to the platform from which you accessed it:
Citation for Hugging Face
```bibtex
@misc{LET_Touch2025,
  title={LET: Full-size humanoid robot real machine force and tactile dataset},
  author={LejuRobotics},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/LejuRobotics/LET-touch-dataset}}
}
```
Citation for ModelScope
```bibtex
@misc{LET_Touch2025,
  title={LET: Full-size humanoid robot real machine force and tactile dataset},
  author={LejuRobotics},
  year={2025},
  howpublished={\url{https://www.modelscope.cn/datasets/lejurobot/LET-touch-dataset}}
}
```
Citation for Atomgit AI
```bibtex
@misc{LET_Touch2025,
  title={LET: Full-size humanoid robot real machine force and tactile dataset},
  author={LejuRobotics},
  year={2025},
  howpublished={\url{https://ai.atomgit.com/lejurobot/LET-touch-dataset}}
}
```
📄 License
All data and code in this repository are released under CC BY-NC-SA 4.0.