# GraspXL
GraspXL is a large-scale dataset of physically plausible dexterous grasping motion sequences over Objaverse objects. It contains generated grasping motions for different hand models, together with processed object meshes used by the recorded sequences.
The main archives cover diverse hand-object interactions where hands approach objects from different directions. The tabletop archives provide a complementary setting where objects are placed on a table and hands approach from above, which is useful for applications focused on tabletop grasping scenes.
## Highlights
- 10M+ diverse grasping motions for 500k+ objects and multiple dexterous hand models.
- Motions generated with physics simulation to improve physical plausibility.
- Frame-level object and hand poses for each motion sequence.
- Both diverse-approach and tabletop grasping settings.
## Potential Applications
- Generating large-scale pseudo 3D RGBD grasping motions with texture generation methods, which can support downstream applications such as training pose estimation or mesh reconstruction models (e.g., HGGT).
- Simulating general human hand motions for human-robot interaction.
- Serving as expert demonstrations for imitation learning and generative models (e.g., PAM).
- Adding precise finger motions to full-body grasping motion generation.
- Achieving zero-shot text-to-motion generation with off-the-shelf text-to-mesh generation methods.
## Archive Overview
The dataset is stored as `.zip` archives. To make downloading easier, the recorded motion sequences are split across several archives. Each sequence archive contains several motion sequences for many objects, so users can download only the archives they need.
| Archive | Content |
|---|---|
| `object_dataset.zip` | Processed object meshes used by the recorded sequences. |
| `objaverse_urdf.zip` | URDF files of the Objaverse objects used by the simulator. |
| `allegro_dataset_1.zip`, `allegro_dataset_2.zip` | Allegro Hand motions with diverse approach directions. |
| `mano_dataset_1.zip.part00`, `mano_dataset_1.zip.part01` | Split parts of `mano_dataset_1.zip`, containing MANO motions with diverse approach directions. |
| `mano_dataset_2.zip.part00`, `mano_dataset_2.zip.part01` | Split parts of `mano_dataset_2.zip`, containing additional MANO motions with diverse approach directions. |
| `mano_dataset_3.zip.part00`, `mano_dataset_3.zip.part01` | Split parts of `mano_dataset_3.zip`, containing additional MANO motions with diverse approach directions. |
| `leap_dataset_1.zip` | LEAP Hand motions for a subset of the objects. |
| `allegro_tabletop.zip` | Allegro Hand tabletop grasping motions. |
| `mano_tabletop.zip` | MANO tabletop grasping motions. |
| `sharpa_tabletop.zip` | SharpA Hand tabletop grasping motions. |
## Object Data
`object_dataset.zip` contains processed object meshes. The meshes are scaled and decimated before use.
```
object_dataset.zip
|-- small
|   |-- <object_id>
|   |   `-- <object_id>.obj
|   `-- ...
|-- medium
|   |-- <object_id>
|   |   `-- <object_id>.obj
|   `-- ...
`-- large
    |-- <object_id>
    |   `-- <object_id>.obj
    `-- ...
```
`small`, `medium`, and `large` contain object meshes at the different scales used by the recorded sequences. Please check the paper for more details.
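The `.obj` meshes can be read with any standard mesh library; as a dependency-free sketch, a minimal reader that only collects vertex positions looks like this (the demo mesh below is synthetic; a real file lives at `<scale>/<object_id>/<object_id>.obj` inside the extracted archive):

```python
import os
import tempfile

import numpy as np

def load_obj_vertices(path):
    """Minimal .obj reader: collect vertex positions from 'v x y z' lines."""
    verts = []
    with open(path) as f:
        for line in f:
            if line.startswith("v "):
                verts.append([float(x) for x in line.split()[1:4]])
    return np.asarray(verts)

# Demo on a tiny synthetic triangle standing in for a real object mesh.
demo_path = os.path.join(tempfile.mkdtemp(), "demo.obj")
with open(demo_path, "w") as f:
    f.write("v 0 0 0\nv 1 0 0\nv 0 1 0\nf 1 2 3\n")

verts = load_obj_vertices(demo_path)
```

For anything beyond vertex positions (faces, normals, decimation), a full mesh library is the better choice.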
objaverse_urdf.zip contains the corresponding URDF files for the Objaverse
objects. These URDF files are used by the simulation code to generate motions.
## Motion Archives
All motion archives use the same high-level layout:
```
<motion_archive>.zip
|-- small
|   |-- <object_id>
|   |   |-- <hand_prefix>_0.npy
|   |   |-- <hand_prefix>_1.npy
|   |   `-- ...
|   `-- ...
|-- medium
|   |-- <object_id>
|   |   |-- <hand_prefix>_0.npy
|   |   `-- ...
|   `-- ...
`-- large
    |-- <object_id>
    |   |-- <hand_prefix>_0.npy
    |   `-- ...
    `-- ...
```
Not every object has the same number of recorded sequences.
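Because the per-object sequence counts vary, a small helper that walks an extracted archive and groups the `.npy` files per object can be handy (a sketch; the demo directory tree below is synthetic, and the object ids and counts are illustrative):

```python
import glob
import os
import tempfile

def collect_sequences(archive_root):
    """Return {(scale, object_id): sorted list of .npy sequence paths}."""
    sequences = {}
    for scale in ("small", "medium", "large"):
        for obj_dir in sorted(glob.glob(os.path.join(archive_root, scale, "*"))):
            object_id = os.path.basename(obj_dir)
            files = sorted(glob.glob(os.path.join(obj_dir, "*.npy")))
            if files:  # objects may have different numbers of sequences
                sequences[(scale, object_id)] = files
    return sequences

# Demo on a synthetic layout mirroring the archive structure.
root = tempfile.mkdtemp()
for scale, object_id, n in [("small", "obj_a", 2), ("medium", "obj_b", 1)]:
    obj_dir = os.path.join(root, scale, object_id)
    os.makedirs(obj_dir)
    for i in range(n):
        open(os.path.join(obj_dir, "allegro_%d.npy" % i), "w").close()

seqs = collect_sequences(root)
```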
### Diverse-Approach Motions
These archives contain motions where hands approach objects from diverse directions to generate broad hand-object interaction coverage.
| Archive | Hand model | File prefix | Hand pose shape |
|---|---|---|---|
| `allegro_dataset_1.zip`, `allegro_dataset_2.zip` | Allegro | `allegro_*.npy` | `(frame_num, 22)` |
| `mano_dataset_1.zip`, `mano_dataset_2.zip`, `mano_dataset_3.zip` | MANO | `mano_*.npy` | `(frame_num, 45)` |
| `leap_dataset_1.zip` | LEAP | `leap_*.npy` | `(frame_num, 22)` |
The MANO archives are stored as split files to keep each repository file under the hosting file-size limit. Reconstruct them before extracting:
```bash
cat mano_dataset_1.zip.part00 mano_dataset_1.zip.part01 > mano_dataset_1.zip
cat mano_dataset_2.zip.part00 mano_dataset_2.zip.part01 > mano_dataset_2.zip
cat mano_dataset_3.zip.part00 mano_dataset_3.zip.part01 > mano_dataset_3.zip
```
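On systems without `cat`, the same byte-level concatenation can be done in a few lines of Python (a sketch; the demo below joins tiny synthetic parts, while the real part names are those listed in the table above):

```python
import glob
import os
import tempfile

def join_parts(part_glob, output_path):
    """Concatenate split archive parts (sorted by name) into one file."""
    with open(output_path, "wb") as out:
        for part in sorted(glob.glob(part_glob)):
            with open(part, "rb") as f:
                out.write(f.read())

# Demo with small synthetic parts; with the real data use e.g.
# join_parts("mano_dataset_1.zip.part*", "mano_dataset_1.zip").
tmp = tempfile.mkdtemp()
for i, payload in enumerate([b"hello ", b"world"]):
    with open(os.path.join(tmp, "demo.zip.part%02d" % i), "wb") as f:
        f.write(payload)

joined = os.path.join(tmp, "demo.zip")
join_parts(os.path.join(tmp, "demo.zip.part*"), joined)
```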
### Tabletop Motions
These archives contain motions in a tabletop setting. Objects are placed on a table and the hand approaches from above. The tabletop setting uses part of the method from RobustDexGrasp and is intended for applications that need tabletop grasping data.
| Archive | Hand model | File prefix | Hand pose shape |
|---|---|---|---|
| `allegro_tabletop.zip` | Allegro | `allegro_*.npy` | `(frame_num, 22)` |
| `mano_tabletop.zip` | MANO | `mano_*.npy` | `(frame_num, 45)` |
| `sharpa_tabletop.zip` | SharpA | `sharpa_*.npy` | `(frame_num, 28)` |
## Sequence Format
### Common Fields
Each .npy file contains a single motion sequence:
```python
data = np.load("<sequence>.npy", allow_pickle=True).item()
```
The sequence dictionary contains a right_hand entry and one object entry keyed
by <object_id>.
- `data["right_hand"]["trans"]`: NumPy array of shape `(frame_num, 3)`. Wrist position sequence.
- `data["right_hand"]["rot"]`: NumPy array of shape `(frame_num, 3)`. Wrist orientation sequence in axis-angle representation.
- `data["right_hand"]["pose"]`: Hand-model-specific pose sequence. See the archive tables above for the pose dimensionality.
- `data[object_id]["trans"]`: NumPy array of shape `(frame_num, 3)`. Object position sequence.
- `data[object_id]["rot"]`: NumPy array of shape `(frame_num, 3)`. Object orientation sequence in axis-angle representation.
### Pose Conventions
- Diverse-approach archives also include `data[object_id]["angle"]`, which is a reserved field and is not used.
- The actual wrist pose is always represented by `data["right_hand"]["trans"]` and `data["right_hand"]["rot"]`. For non-MANO robotic dexterous hands, the first 6 dimensions of `data["right_hand"]["pose"]` are virtual wrist joint values not used for the actual wrist pose; the remaining dimensions are the hand joint angles.
- For MANO, `data["right_hand"]["rot"]` and `data["right_hand"]["pose"]` concatenate to the original MANO pose parameter. The diverse-approach MANO archives use `flat_hand_mean=False`, while the tabletop MANO archive uses `flat_hand_mean=True` with an additional `wrist_bias` applied to `data["right_hand"]["trans"]`.
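As a shape-level sketch of the MANO convention above (zero arrays stand in for a real loaded sequence), the 3-dim wrist rotation and the 45-dim hand pose concatenate into the 48-dim MANO pose parameter per frame:

```python
import numpy as np

frame_num = 4  # illustrative sequence length

# Stand-ins for data["right_hand"]["rot"] and data["right_hand"]["pose"].
wrist_rot = np.zeros((frame_num, 3))   # axis-angle wrist orientation
hand_pose = np.zeros((frame_num, 45))  # MANO hand articulation

# Original MANO pose parameter, e.g. for feeding a MANO layer frame by frame.
mano_pose = np.concatenate([wrist_rot, hand_pose], axis=1)
```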
## Data Loading
For more detailed data loading and visualization examples, please refer to the visualizer repository.
```python
import numpy as np

data = np.load("allegro_0.npy", allow_pickle=True).item()

# The only key other than "right_hand" is the object id.
object_id = next(key for key in data.keys() if key != "right_hand")

right_hand_trans = data["right_hand"]["trans"]
right_hand_rot = data["right_hand"]["rot"]
right_hand_pose = data["right_hand"]["pose"]
object_trans = data[object_id]["trans"]
object_rot = data[object_id]["rot"]
```
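The `rot` fields are axis-angle vectors; if rotation matrices are needed (e.g. to transform the object mesh per frame), a small NumPy Rodrigues conversion suffices (a sketch; the dummy sequence below stands in for `data[object_id]["rot"]`):

```python
import numpy as np

def axis_angle_to_matrix(rotvecs):
    """Convert (N, 3) axis-angle vectors to (N, 3, 3) rotation matrices (Rodrigues)."""
    mats = np.empty((len(rotvecs), 3, 3))
    for i, r in enumerate(rotvecs):
        theta = np.linalg.norm(r)
        if theta < 1e-12:
            mats[i] = np.eye(3)  # near-zero rotation: identity
            continue
        k = r / theta  # unit rotation axis
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        mats[i] = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)
    return mats

# Dummy orientation sequence: identity, then a 90-degree rotation about z.
object_rot = np.array([[0.0, 0.0, 0.0],
                       [0.0, 0.0, np.pi / 2]])
R = axis_angle_to_matrix(object_rot)
```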
## Citation
If you use GraspXL, please cite:
```bibtex
@inproceedings{zhang2024graspxl,
  title={{GraspXL}: Generating Grasping Motions for Diverse Objects at Scale},
  author={Zhang, Hui and Christen, Sammy and Fan, Zicong and Hilliges, Otmar and Song, Jie},
  booktitle={European Conference on Computer Vision (ECCV)},
  year={2024}
}
```
The tabletop setting uses part of the method from RobustDexGrasp. If you find this setting useful, please consider citing:
```bibtex
@inproceedings{zhang2025RobustDexGrasp,
  title={{RobustDexGrasp}: Robust Dexterous Grasping of General Objects},
  author={Zhang, Hui and Wu, Zijian and Huang, Linyi and Christen, Sammy and Song, Jie},
  booktitle={Conference on Robot Learning (CoRL)},
  year={2025}
}
```
## License
This dataset is released under the Creative Commons Attribution-NonCommercial 4.0 International License.