---
license: cc-by-4.0
task_categories:
  - robotics
language:
  - en
size_categories:
  - n<1K
tags:
  - lerobot
  - Unitree G1
---

# Dataset Card for HuMI

[Project Page] | [Paper]

## Dataset Summary

This dataset was collected using the HuMI data collection pipeline and converted into the LeRobot format. It provides robot-free demonstrations for humanoid whole-body manipulation.

**Task Description:** This repository features a walk-to-clean-table task in which a humanoid robot navigates to a desk and cleans the tabletop, using a lint roller to remove scattered paper scraps.

The dataset consists of 105 demonstrations collected within a single environment.

## Dataset Structure

### Data Instances

Each instance in the dataset represents a single frame from an episode. The data includes synchronized camera observations, end-effector poses, joint positions, and task instructions.

### Data Fields

The dataset contains the following features:

- `task` (string): Natural language description of the task.
- `observation.images.camera{cam_id}_rgb` (video, (H, W, 3)): RGB observations from the cameras.
- `{robot_name}_eef_pos` (float32, (3,)): End-effector position (x, y, z).
- `{robot_name}_eef_rot_axis_angle` (float32, (3,)): End-effector rotation in axis-angle representation.
- `{robot_name}_gripper_width` (float32, (1,)): Gripper width (present only for gripper end-effectors).
- `{robot_name}_demo_start_pose` (float32, (6,)): Initial 6-DoF pose of the end-effector for the demonstration.
- `{robot_name}_demo_end_pose` (float32, (6,)): Final 6-DoF pose of the end-effector for the demonstration.
- `joint_pos` (float32, (29,)): Full joint positions of the Unitree G1 robot, computed via inverse kinematics using the end-effector poses as targets.
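The `{robot_name}_eef_rot_axis_angle` fields store orientation as a single 3-vector whose direction is the rotation axis and whose norm is the rotation angle. A minimal, dependency-free sketch of converting such a vector to a rotation matrix via Rodrigues' formula (the function name is illustrative, not part of the dataset tooling):

```python
import math

def axis_angle_to_matrix(aa):
    """Convert an axis-angle vector (3,) into a 3x3 rotation matrix
    using Rodrigues' rotation formula."""
    theta = math.sqrt(sum(v * v for v in aa))
    if theta < 1e-12:
        # Near-zero rotation: return the identity matrix.
        return [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
    kx, ky, kz = (v / theta for v in aa)  # unit rotation axis
    c, s = math.cos(theta), math.sin(theta)
    C = 1.0 - c
    return [
        [c + kx * kx * C,      kx * ky * C - kz * s, kx * kz * C + ky * s],
        [ky * kx * C + kz * s, c + ky * ky * C,      ky * kz * C - kx * s],
        [kz * kx * C - ky * s, kz * ky * C + kx * s, c + kz * kz * C],
    ]
```

In practice a library routine (e.g. `scipy.spatial.transform.Rotation.from_rotvec`) does the same conversion.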

For the HuMI setup, we recorded images from two wrist-mounted cameras: `camera0` corresponds to the right gripper and `camera1` to the left. We also recorded trajectories for five end-effectors, adopting the naming convention from UMI:

- `robot0`: right gripper
- `robot1`: left gripper
- `robot2`: pelvis
- `robot3`: right foot
- `robot4`: left foot
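Combining this naming convention with the field list above, the full set of per-frame keys can be enumerated programmatically. A small sketch (the helper name and structure are illustrative, not part of the released tooling):

```python
# Mapping from UMI-style robot names to body parts, as listed above.
EEF_NAMES = {
    "robot0": "right gripper",
    "robot1": "left gripper",
    "robot2": "pelvis",
    "robot3": "right foot",
    "robot4": "left foot",
}
# Only the two grippers carry a gripper_width channel.
GRIPPERS = {"robot0", "robot1"}

def feature_keys():
    """Enumerate the expected per-frame feature keys of the dataset."""
    keys = ["task", "joint_pos"]
    # Two wrist-mounted cameras: camera0 (right), camera1 (left).
    keys += [f"observation.images.camera{i}_rgb" for i in range(2)]
    for name in EEF_NAMES:
        keys += [
            f"{name}_eef_pos",
            f"{name}_eef_rot_axis_angle",
            f"{name}_demo_start_pose",
            f"{name}_demo_end_pose",
        ]
        if name in GRIPPERS:
            keys.append(f"{name}_gripper_width")
    return keys
```

This can be handy for validating that a loaded episode exposes every expected field before training.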

## Citation Information

```bibtex
@article{nai2026humanoid,
  title={Humanoid Manipulation Interface: Humanoid Whole-Body Manipulation from Robot-Free Demonstrations},
  author={Nai, Ruiqian and Zheng, Boyuan and Zhao, Junming and Zhu, Haodong and Dai, Sicong and Chen, Zunhao and Hu, Yihang and Hu, Yingdong and Zhang, Tong and Wen, Chuan and others},
  journal={arXiv preprint arXiv:2602.06643},
  year={2026}
}
```