---
license: apache-2.0
---
# HORA: Hand–Object to Robot Action Dataset
## Dataset Summary
HORA (Hand–Object to Robot Action) is a large-scale multimodal dataset that converts human hand–object interaction (HOI) demonstrations into robot-usable supervision for cross-embodiment learning. It combines HOI-style annotations (e.g., MANO hand parameters, object pose, contact) with embodied-robot learning signals (e.g., robot observations, end-effector trajectories) under a unified canonical action space.
HORA is constructed from three sources/subsets:
- HORA(Mocap): custom multi-view motion capture system with tactile-sensor gloves (includes tactile maps).
- HORA(Recordings): custom RGB(D) HOI recording setup (no tactile).
- HORA(Public Dataset): derived from multiple public HOI datasets and retargeted to robot embodiments (6/7-DoF arms).
Overall scale: ~150k trajectories across all subsets.
## Key Features
- Unified multimodal representation across subsets, covering both HOI analysis and downstream robotic learning.
- HOI modalities: MANO hand parameters (pose/shape + global transform), object 6DoF pose, object assets, hand–object contact annotations.
- Robot modalities: wrist-view & third-person observations, and end-effector pose trajectories for robotic arms, all mapped to a canonical action space.
- Tactile (mocap subset): dense tactile map for both hand and object (plus object pose & assets).
## Dataset Statistics
| Subset | Tactile | #Trajectories | Notes |
|---|---|---|---|
| HORA(Mocap) | ✅ | 63,141 | 6-DoF object pose + assets + tactile map |
| HORA(Recordings) | ❌ | 23,560 | 6-DoF object pose + assets |
| HORA(Public Dataset) | ❌ | 66,924 | retargeted cross-embodiment robot modalities |
| Total |  | 153,625 (~150k) |  |
## Supported Tasks and Use Cases
HORA is suitable for:
- Imitation Learning (IL) / Visuomotor policy learning
- Vision–Language–Action (VLA) model training and evaluation
- HOI-centric research: contact analysis, pose/trajectory learning, hand/object dynamics
## Data Format
### Example Episode Structure
Each episode/trajectory may include:
#### HOI fields
- `hand_mano`: MANO parameters (pose/shape, global rotation/translation)
- `object_pose_6d`: 6-DoF object pose sequence
- `contact`: hand–object contact annotations
- `object_asset`: mesh/texture id or path
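As a rough sketch of how these fields fit together, the snippet below builds a synthetic episode with the standard MANO parameterization (10 shape betas, a 45-D articulated pose, plus 3-D global rotation and translation) and locates the first contact frame. The nested key names and the `[pos, quat]` layout of `object_pose_6d` are illustrative assumptions, not the dataset's guaranteed schema.

```python
import numpy as np

T = 50  # hypothetical episode length

# Synthetic stand-ins for the HOI fields; shapes follow the standard MANO
# convention, but the dataset's exact layout may differ.
episode = {
    "hand_mano": {
        "betas": np.zeros((10,)),            # shape parameters
        "pose": np.zeros((T, 45)),           # per-frame articulated pose
        "global_orient": np.zeros((T, 3)),   # axis-angle root rotation
        "transl": np.zeros((T, 3)),          # root translation
    },
    "object_pose_6d": np.zeros((T, 7)),      # e.g. [pos (3), quat wxyz (4)]
    "contact": np.zeros((T,), dtype=bool),   # per-frame hand-object contact
}

def first_contact_frame(contact):
    """Index of the first frame where hand-object contact occurs, or -1."""
    idx = np.flatnonzero(contact)
    return int(idx[0]) if idx.size else -1

episode["contact"][12:30] = True
print(first_contact_frame(episode["contact"]))  # -> 12
```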
#### Robot fields
##### Global Attributes
- `task_description`: natural-language instruction for the task (stored as an HDF5 attribute).
- `total_demos`: total number of trajectories in the file.
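A minimal `h5py` sketch of reading these attributes: it first writes a tiny synthetic file mirroring the documented layout (`task_description` and `total_demos` as root-level HDF5 attributes), then reads them back. The filename and instruction text are made up; the real files may attach attributes at a different level.

```python
import h5py

# Write a synthetic file with the documented global attributes
# (hypothetical filename and task text).
with h5py.File("hora_demo.hdf5", "w") as f:
    f.attrs["task_description"] = "pick up the mug and place it on the shelf"
    f.attrs["total_demos"] = 1

# Read the attributes back.
with h5py.File("hora_demo.hdf5", "r") as f:
    task = f.attrs["task_description"]
    n = int(f.attrs["total_demos"])

# Older h5py versions may return bytes for string attributes.
if isinstance(task, bytes):
    task = task.decode("utf-8")

print(task, n)
```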
##### Observations (`obs` group)

- `agentview_rgb`: JPEG byte stream (variable-length `uint8`). Decodes to `(T, 480, 640, 3)`.
- `eye_in_hand_{side}_rgb`: JPEG byte stream (variable-length `uint8`). Decodes to `(T, 480, 640, 3)`.
- `{prefix}_joint_states`: arm joint positions in radians. Shape `(T, N_dof)`.
- `{prefix}_gripper_states`: gripper joint positions. Shape `(T, N_grip)`.
- `{prefix}_eef_pos`: end-effector position in the Robot Base Frame. Shape `(T, 3)`.
- `{prefix}_eef_quat`: end-effector orientation `(w, x, y, z)` in the Robot Base Frame. Shape `(T, 4)`.
- `object_{name}_pos`: ground-truth object position in the World Frame. Shape `(T, 3)`.
- `object_{name}_quat`: ground-truth object orientation `(w, x, y, z)` in the World Frame. Shape `(T, 4)`.
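Since the pose fields use the `(w, x, y, z)` quaternion ordering, a common first step is assembling a 4x4 homogeneous end-effector pose from `{prefix}_eef_pos` and `{prefix}_eef_quat`. The sketch below does this with plain NumPy; it assumes the documented wxyz ordering.

```python
import numpy as np

def quat_wxyz_to_matrix(q):
    """Rotation matrix from a (w, x, y, z) quaternion, the ordering
    documented for *_eef_quat and object_*_quat."""
    w, x, y, z = q / np.linalg.norm(q)
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])

def eef_pose_to_T(pos, quat_wxyz):
    """Assemble a 4x4 homogeneous end-effector pose (Robot Base Frame)."""
    T = np.eye(4)
    T[:3, :3] = quat_wxyz_to_matrix(np.asarray(quat_wxyz, dtype=float))
    T[:3, 3] = pos
    return T

# Identity orientation (w=1) at an example position:
T = eef_pose_to_T([0.4, 0.0, 0.2], [1.0, 0.0, 0.0, 0.0])
```

The RGB fields can be decoded per frame from their JPEG byte streams with any standard image library (e.g. OpenCV's `cv2.imdecode` or Pillow).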
##### Actions & States
Note: for multi-robot setups, the fields below concatenate data from all robots in order (e.g., `[robot0, robot1]`).

- `actions`: joint-space control targets. Shape `(T, N_dof + 1)`. Format: `[joint_positions, normalized_gripper]`, where the gripper value is in `[0, 1]`.
- `actions_ee`: Cartesian control targets. Shape `(T, 7)`. Format: `[pos (3), axis-angle (3), normalized_gripper (1)]`.
- `robot_states`: robot base pose in the World Frame. Shape `(T, 7 * N_robots)`. Format: `[pos (3), quat (4)]` per robot; the quaternion is `(w, x, y, z)`.
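A minimal sketch of consuming the documented `(T, 7)` Cartesian action layout: split each row into position, axis-angle rotation, and gripper command, and convert the axis-angle part to a rotation matrix via Rodrigues' formula. The helper names are our own; only the column layout comes from the spec above.

```python
import numpy as np

def split_actions_ee(actions_ee):
    """Split the (T, 7) layout [pos (3), axis-angle (3), gripper (1)]."""
    a = np.asarray(actions_ee)
    return a[:, 0:3], a[:, 3:6], a[:, 6:7]

def axis_angle_to_matrix(aa):
    """Rodrigues' formula: 3x3 rotation from an axis-angle vector."""
    theta = np.linalg.norm(aa)
    if theta < 1e-8:
        return np.eye(3)
    k = aa / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1 - np.cos(theta)) * (K @ K)

# One action: move to (0.5, 0, 0.3), rotate pi/2 about z, close gripper.
act = np.array([[0.5, 0.0, 0.3, 0.0, 0.0, np.pi / 2, 1.0]])
pos, aa, grip = split_actions_ee(act)
R = axis_angle_to_matrix(aa[0])
```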
#### Tactile fields (mocap only)
- `tactile_hand`: dense tactile map (time × sensors/vertices)
- `tactile_object`: dense tactile map (time × sensors/vertices)
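One simple use of the dense tactile maps is segmenting contact intervals by thresholding taxel activations per frame. The sketch below runs on a synthetic `(T, S)` map; the threshold, taxel count, and activation scale are illustrative assumptions, not values defined by the dataset.

```python
import numpy as np

# Hypothetical tactile map: T frames x S taxels, arbitrary pressure units.
T_frames, S = 100, 256
rng = np.random.default_rng(0)
tactile_hand = np.zeros((T_frames, S))
tactile_hand[40:70] = rng.uniform(0.5, 1.0, size=(30, S))  # a contact burst

def contact_frames(tactile, thresh=0.1, min_active=8):
    """Frames where at least `min_active` taxels exceed `thresh`
    (both parameters are illustrative, not dataset-defined)."""
    return np.flatnonzero((tactile > thresh).sum(axis=1) >= min_active)

frames = contact_frames(tactile_hand)  # -> frames 40..69
```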