---
license: cc-by-4.0
task_categories:
- robotics
language:
- en
size_categories:
- n<1K
tags:
- lerobot
- Unitree G1
---

# Dataset Card for HuMI

[[Project Page]](https://humanoid-manipulation-interface.github.io/) | [[Paper]](http://arxiv.org/abs/2602.06643)

### Dataset Summary

This dataset was collected using the [HuMI](https://humanoid-manipulation-interface.github.io/) data collection pipeline and converted into the LeRobot format. It provides robot-free demonstrations for humanoid whole-body manipulation.

**Task Description:** This repository includes a `marriage-proposal` task in which a humanoid kneels, picks up a ring-shaped toy from the ground with its right hand, and raises it in a proposal gesture.

The dataset comprises 103 demonstrations collected within a single environment.

## Dataset Structure

### Data Instances

Each instance in the dataset represents a single frame from an episode. The data includes synchronized camera observations, end-effector poses, joint positions, and task instructions.

### Data Fields

The dataset contains the following features:

- `task` (`string`): The natural language description of the task.
- `observation.images.camera{cam_id}_rgb` (video, `(H, W, 3)`): RGB observations from the cameras.
- `{robot_name}_eef_pos` (`float32`, `(3,)`): End-effector position (x, y, z).
- `{robot_name}_eef_rot_axis_angle` (`float32`, `(3,)`): End-effector rotation in axis-angle representation.
- `{robot_name}_gripper_width` (`float32`, `(1,)`): Gripper width (only present for gripper end-effectors).
- `{robot_name}_demo_start_pose` (`float32`, `(6,)`): Initial 6DoF pose of the end-effector for the demonstration.
- `{robot_name}_demo_end_pose` (`float32`, `(6,)`): Final 6DoF pose of the end-effector for the demonstration.
- `joint_pos` (`float32`, `(29,)`): Full joint positions of the Unitree G1 robot, computed via inverse kinematics using the end-effector poses as targets.
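As a concrete illustration of the `{robot_name}_eef_rot_axis_angle` field: an axis-angle 3-vector (direction = rotation axis, norm = angle in radians) can be converted to a rotation matrix with Rodrigues' formula. The sketch below is a minimal NumPy example; `axis_angle_to_matrix` is an illustrative helper, not part of the dataset tooling.

```python
import numpy as np

def axis_angle_to_matrix(aa: np.ndarray) -> np.ndarray:
    """Convert a 3-vector axis-angle rotation (direction = axis,
    norm = angle in radians) to a 3x3 rotation matrix."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:          # near-zero rotation: return identity
        return np.eye(3)
    k = aa / theta            # unit rotation axis
    # Skew-symmetric cross-product matrix of k
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    # Rodrigues' formula: R = I + sin(theta) K + (1 - cos(theta)) K^2
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

# A 90-degree rotation about the z-axis maps the x-axis onto the y-axis
R = axis_angle_to_matrix(np.array([0.0, 0.0, np.pi / 2]))
```

(`scipy.spatial.transform.Rotation.from_rotvec` implements the same convention if SciPy is available.)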
For the HuMI setup, we recorded images from two wrist-mounted cameras, where `camera0` corresponds to the right gripper and `camera1` corresponds to the left.
Additionally, we recorded trajectories for five end-effectors, adopting the naming convention from [UMI](https://umi-gripper.github.io/):

- `robot0`: right gripper
- `robot1`: left gripper
- `robot2`: pelvis
- `robot3`: right foot
- `robot4`: left foot
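Putting the naming convention together with the per-robot fields above, a single frame can be unpacked as follows. The `frame` dict below is illustrative sample data (not read from the real dataset), and the layout of the `(6,)` pose as position followed by axis-angle rotation is our assumption, not stated by the card.

```python
import numpy as np

# Illustrative sample frame (hypothetical values), with flat per-robot
# fields following the naming scheme described in this card.
frame = {
    "robot0_eef_pos": np.array([0.30, -0.15, 0.95], dtype=np.float32),
    "robot0_eef_rot_axis_angle": np.array([0.0, 0.0, 1.57], dtype=np.float32),
    "robot0_gripper_width": np.array([0.06], dtype=np.float32),
}

def eef_pose_6dof(frame: dict, robot_name: str) -> np.ndarray:
    """Concatenate position and axis-angle rotation into one (6,) vector --
    the layout we assume for the demo_start_pose / demo_end_pose fields."""
    return np.concatenate([
        frame[f"{robot_name}_eef_pos"],
        frame[f"{robot_name}_eef_rot_axis_angle"],
    ])

pose = eef_pose_6dof(frame, "robot0")  # shape (6,): x, y, z, rx, ry, rz
```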
### Citation Information

```bibtex
@article{nai2026humanoid,
  title={Humanoid Manipulation Interface: Humanoid Whole-Body Manipulation from Robot-Free Demonstrations},
  author={Nai, Ruiqian and Zheng, Boyuan and Zhao, Junming and Zhu, Haodong and Dai, Sicong and Chen, Zunhao and Hu, Yihang and Hu, Yingdong and Zhang, Tong and Wen, Chuan and others},
  journal={arXiv preprint arXiv:2602.06643},
  year={2026}
}
```