
H&R

Dataset for the paper Human2Robot: Learning Robot Actions from Paired Human-Robot Videos (AAAI 2026 Oral).

Data Details

  • /cam_data
    • /human_camera: observations of the human
    • /robot_camera: observations of the robot
  • /end_position: 6-DoF end-effector (EEF) pose, [x, y, z, roll, pitch, yaw], giving position and orientation; the rotation is expressed as Euler angles (degrees) in XYZ order.
  • /gripper_state: 0/1; 1 means the gripper is open, 0 means it is closed.
  • /action: not available in version v0; introduced in version v1. It is the spatial pose of the human hand in the robot frame, a 7-DoF vector [x, y, z, roll, pitch, yaw, gripper].
  • qpos: joint angles of the robot arm
  • qvel: joint velocities of the robot arm
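
The snippet below is a minimal loading sketch. It assumes each episode is a single HDF5 file whose top-level keys match the names above; the actual layout, key names, and array shapes should be verified against the released files.

```python
# Minimal sketch for inspecting one episode.
# ASSUMPTION: one episode = one HDF5 file with top-level keys matching
# the names listed above; verify against the released files.
import h5py
import numpy as np
from scipy.spatial.transform import Rotation

with h5py.File("/path/to/the/file", "r") as f:
    human = np.asarray(f["cam_data/human_camera"])  # human observations
    robot = np.asarray(f["cam_data/robot_camera"])  # robot observations
    eef = np.asarray(f["end_position"])             # (T, 6): x, y, z, roll, pitch, yaw
    grip = np.asarray(f["gripper_state"])           # (T,): 1 = open, 0 = closed
    qpos = np.asarray(f["qpos"])                    # joint angles
    qvel = np.asarray(f["qvel"])                    # joint velocities

# Euler angles are in degrees, XYZ order. "XYZ" below assumes intrinsic
# rotations; switch to "xyz" if the convention is actually extrinsic.
rot_mats = Rotation.from_euler("XYZ", eef[:, 3:6], degrees=True).as_matrix()
print("frames:", human.shape, robot.shape, "| rotations:", rot_mats.shape)
```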

Version

  • data
    • v0: the older version of the dataset; /end_position together with /gripper_state can be used as the action.
    • v1: everything in v0, plus an additional /action entry.
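
Since v0 lacks /action, a version-aware loader can fall back to concatenating /end_position with /gripper_state, as suggested above. A minimal sketch follows (same HDF5-layout assumption as before; note that the stored v1 /action is the human hand pose, so whether to use it directly for training is a modeling choice):

```python
# Sketch: build a (T, 7) action array for either dataset version.
# ASSUMPTION: same HDF5 layout as in the loading sketch above.
import h5py
import numpy as np

def load_action(file_path: str) -> np.ndarray:
    with h5py.File(file_path, "r") as f:
        if "action" in f:  # v1 episodes carry /action directly
            return np.asarray(f["action"])                    # (T, 7)
        # v0 episodes: concatenate the EEF pose with the gripper bit
        eef = np.asarray(f["end_position"])                   # (T, 6)
        grip = np.asarray(f["gripper_state"]).reshape(-1, 1)  # (T, 1)
        return np.concatenate([eef, grip], axis=1)            # (T, 7)
```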

Visualization Code

Visualize the paired human and robot videos:

```bash
python show_video.py --file_path /path/to/the/file
```
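
show_video.py is the supported entry point. For reference only, the sketch below shows what such side-by-side playback might look like, assuming the HDF5 layout above and that both camera entries decode to (T, H, W, 3) uint8 RGB arrays of equal height:

```python
# Sketch: side-by-side playback of the paired human/robot streams.
# ASSUMPTION: camera entries are (T, H, W, 3) uint8 RGB arrays of equal
# height; adjust the decoding step if frames are stored compressed.
import cv2
import h5py
import numpy as np

with h5py.File("/path/to/the/file", "r") as f:
    human = np.asarray(f["cam_data/human_camera"])
    robot = np.asarray(f["cam_data/robot_camera"])

for h, r in zip(human, robot):
    pair = np.hstack([h, r])  # human on the left, robot on the right
    cv2.imshow("human | robot", cv2.cvtColor(pair, cv2.COLOR_RGB2BGR))
    if cv2.waitKey(33) & 0xFF == ord("q"):  # ~30 fps; press 'q' to quit
        break
cv2.destroyAllWindows()
```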

Citation

If you use this dataset, please cite:

@inproceedings{xie2025human2robot,
  title={Human2Robot: Learning Robot Actions from Paired Human-Robot Videos},
  author={Xie, Sicheng and Cao, Haidong and Weng, Zejia and Xing, Zhen and Chen, Haoran and Shen, Shiwei and Leng, Jiaqi and Wu, Zuxuan and Jiang, Yu-Gang},
  booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
  year={2026}
}