# HORA: Hand–Object to Robot Action Dataset
## Dataset Summary

HORA (Hand–Object to Robot Action) is a large-scale multimodal dataset that converts human hand–object interaction (HOI) demonstrations into robot-usable supervision for cross-embodiment learning. It combines HOI-style annotations (e.g., MANO hand parameters, object pose, contact) with embodied-robot learning signals (e.g., robot observations, end-effector trajectories) under a unified canonical action space.
HORA is constructed from three subsets, each from a different source:
- HORA(Mocap): custom multi-view motion capture system with tactile-sensor gloves (includes tactile maps).
- HORA(Recordings): custom RGB(D) HOI recording setup (no tactile).
- HORA(Public Dataset): derived from multiple public HOI datasets and retargeted to robot embodiments (6/7-DoF arms).
Overall scale: ~150k trajectories across all subsets.
## Key Features
- Unified multimodal representation across subsets, covering both HOI analysis and downstream robotic learning.
- HOI modalities: MANO hand parameters (pose/shape + global transform), 6-DoF object pose, object assets, hand–object contact annotations.
- Robot modalities: wrist-view & third-person observations, and end-effector pose trajectories for robotic arms, all mapped to a canonical action space.
- Tactile (mocap subset): dense tactile map for both hand and object (plus object pose & assets).
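To make the idea of a canonical action space concrete, the sketch below converts an absolute end-effector SE(3) trajectory into per-step delta actions. The function name, the delta-pose convention, and the `xyzw` quaternion ordering are illustrative assumptions, not the dataset's documented convention.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R


def ee_traj_to_delta_actions(positions, quats_xyzw):
    """Convert an absolute end-effector trajectory into per-step deltas.

    positions:  (T, 3) array of translations in metres.
    quats_xyzw: (T, 4) array of orientations as xyzw quaternions.
    Returns:    (T-1, 6) array of [dx, dy, dz, rotvec] actions.

    NOTE: this is one common canonicalisation; HORA's exact
    convention may differ.
    """
    # Translation deltas between consecutive steps
    dpos = positions[1:] - positions[:-1]
    rots = R.from_quat(quats_xyzw)
    # Relative rotation from step t to t+1, expressed as a rotation vector
    drot = (rots[:-1].inv() * rots[1:]).as_rotvec()
    return np.concatenate([dpos, drot], axis=-1)
```

A policy trained on such deltas can be deployed on any arm whose controller accepts relative end-effector commands, which is the usual motivation for canonicalising actions across embodiments.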
## Dataset Statistics
| Subset | Tactile | #Trajectories | Notes |
|---|---|---|---|
| HORA(Mocap) | ✅ | 63,141 | 6-DoF object pose + assets + tactile map |
| HORA(Recordings) | ❌ | 23,560 | 6-DoF object pose + assets |
| HORA(Public Dataset) | ❌ | 66,924 | retargeted cross-embodiment robot modalities |
| Total | | 153,625 (~150k) | |
## Supported Tasks and Use Cases
HORA is suitable for:
- Imitation Learning (IL) / Visuomotor policy learning
- Vision–Language–Action (VLA) model training and evaluation
- HOI-centric research: contact analysis, pose/trajectory learning, hand/object dynamics
## Data Format

### Example Episode Structure
Each episode/trajectory may include:
**HOI fields**
- `hand_mano`: MANO parameters (pose/shape, global rotation/translation)
- `object_pose_6d`: 6-DoF object pose sequence
- `contact`: hand–object contact annotations
- `object_asset`: mesh/texture ID or path
**Robot fields**
- `obs_wrist_rgb`: wrist-view RGB observations
- `obs_third_rgb`: third-person RGB observations
- `ee_pose`: end-effector pose trajectory (SE(3))
- `gripper`: gripper open/close command (optional)
- `action_space`: canonical action space metadata
**Tactile fields (mocap only)**
- `tactile_hand`: dense tactile map for the hand (time × sensors/vertices)
- `tactile_object`: dense tactile map for the object
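The structure above can be sketched as a plain Python dictionary. Field names follow the card; all array shapes, the quaternion pose encoding, the per-vertex contact representation, and the asset path are illustrative assumptions (MANO's 45 pose parameters, 10 shape betas, and 778 mesh vertices are standard to the MANO model itself).

```python
import numpy as np

T = 120  # assumed number of timesteps in one episode

episode = {
    # --- HOI fields ---
    "hand_mano": {
        "pose": np.zeros((T, 45)),           # 15 joints x 3 axis-angle params
        "shape": np.zeros((10,)),            # MANO beta parameters (per subject)
        "global_orient": np.zeros((T, 3)),   # global wrist rotation (axis-angle)
        "transl": np.zeros((T, 3)),          # global wrist translation
    },
    "object_pose_6d": np.zeros((T, 7)),      # position (3) + quaternion (4)
    "contact": np.zeros((T, 778), dtype=bool),  # per-MANO-vertex contact flags
    "object_asset": "assets/objects/example.obj",  # placeholder path
    # --- Robot fields ---
    "obs_wrist_rgb": np.zeros((T, 224, 224, 3), dtype=np.uint8),
    "obs_third_rgb": np.zeros((T, 224, 224, 3), dtype=np.uint8),
    "ee_pose": np.zeros((T, 7)),             # SE(3) as position + quaternion
    "gripper": np.zeros((T, 1)),             # open/close command in [0, 1]
    "action_space": {"type": "ee_delta_pose", "frame": "base"},  # assumed metadata
}

# Basic consistency check: time-indexed fields share the same length
assert all(
    episode[key].shape[0] == T
    for key in ("object_pose_6d", "contact", "obs_wrist_rgb", "ee_pose")
)
```

Tactile-subset episodes would additionally carry `tactile_hand` and `tactile_object` arrays with a leading time dimension.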