H-RDT: Human Manipulation Enhanced Bimanual Robotic Manipulation
📝Paper | 🌍Project Page | 🤗Model | 💬WeChat Contact
📰 News
• [2025.8.12] Updated RoboTwin2 inference code
H-RDT (Human to Robotics Diffusion Transformer) is a novel approach that leverages large-scale egocentric human manipulation data to enhance robot manipulation capabilities. Our key insight is that large-scale egocentric human manipulation videos with paired 3D hand pose annotations provide rich behavioral priors that capture natural manipulation strategies and can benefit robotic policy learning.
🚀 Installation
Create a conda environment:

```bash
conda create -n hrdt python=3.10
conda activate hrdt
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Download the pre-trained models:

```bash
export HF_ENDPOINT=https://hf-mirror.com
huggingface-cli download --resume-download embodiedfoundation/H-RDT --local-dir ./
```
🔧 Usage
Stage 1: Human Data Pre-training (EgoDex)
Data Preprocessing
Before training, preprocess the EgoDex dataset:
Configure paths:

```bash
# Edit datasets/pretrain/setup_pretrain.sh with your paths
nano datasets/pretrain/setup_pretrain.sh

# Set your EgoDex dataset and T5 model paths:
export EGODEX_DATA_ROOT="/path/to/your/egodex/dataset"
export T5_MODEL_PATH="/path/to/your/t5-v1_1-xxl"
```

Set up the environment:

```bash
source datasets/pretrain/setup_pretrain.sh
```

Run the data processing pipeline:

```bash
# Automatically runs: precompute_48d_actions.py → calc_stat.py → encode_lang_batch.py
./datasets/pretrain/run_pretrain_pipeline.sh
```
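Between precomputing the 48-dimensional actions and encoding language instructions, the pipeline computes per-dimension normalization statistics over the action data. As a rough, hypothetical sketch of what the `calc_stat.py` step does (function and key names below are illustrative, not the repository's API), this amounts to a streaming mean/std over every action vector:

```python
# Hypothetical sketch of the normalization-statistics step; not the repo's code.
import json
import math

def action_stats(episodes):
    """Compute per-dimension mean/std over a list of episodes,
    where each episode is a list of fixed-length action vectors."""
    dims = len(episodes[0][0])
    n = 0
    mean = [0.0] * dims
    m2 = [0.0] * dims  # running sum of squared deviations (Welford's algorithm)
    for episode in episodes:
        for action in episode:
            n += 1
            for d in range(dims):
                delta = action[d] - mean[d]
                mean[d] += delta / n
                m2[d] += delta * (action[d] - mean[d])
    std = [math.sqrt(v / n) for v in m2]
    return {"mean": mean, "std": std}

# Toy usage with 2-D actions across two episodes:
stats = action_stats([[[0.0, 2.0], [2.0, 2.0]], [[4.0, 2.0]]])
print(json.dumps(stats))
```

The resulting statistics would then be used to normalize actions to zero mean and unit variance during training.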
Start Pre-training
After data preprocessing is complete:
1. EgoDex pretrain (fresh start):

Configure the dataset:

```python
# Edit datasets/dataset.py, line ~45
self.dataset_name = "egodex"
```

Run training:

```bash
bash pretrain.sh
```
2. Pretrain resume:

Edit `pretrain.sh` and add this line:

```bash
--resume_from_checkpoint="checkpoint-450000" \
```
Stage 2: Cross-Embodiment Fine-tuning
Data Preprocessing (for RoboTwin2)
Pre-computed language embeddings are already provided - no preprocessing needed!
Set up the environment:

```bash
# Edit datasets/robotwin2/setup_robotwin2.sh if needed (only for regenerating files)
source datasets/robotwin2/setup_robotwin2.sh
```

Data processing pipeline (not required):

```bash
# Not needed - lang_embeddings/ is already provided in the repository.
# Only run if you want to regenerate the files:
# ./datasets/robotwin2/run_robotwin2_pipeline.sh
```
Robot fine-tuning (loads the human pre-trained backbone):

Configure the dataset:

```python
# Edit datasets/dataset.py, line ~45
self.dataset_name = "robotwin_agilex"  # or your robot name

# Add your dataset initialization if it does not exist yet:
elif self.dataset_name == "your_robot":
    self.hdf5_dataset = YourRobotDataset(config=config)
```

Run training:

```bash
bash finetune.sh  # Already configured with pretrained_backbone_path
```
Finetune Resume:
Edit your current finetune script and make these changes:

```bash
# Change this line:
--mode="finetune" \
# To:
--mode="pretrain" \
# And add:
--resume_from_checkpoint="checkpoint-5000" \
```
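The mode switch when resuming can look counter-intuitive. One way to read the convention above (this is an illustrative sketch, not the repository's API; the function and return values are hypothetical): resuming in either stage restores the full training state from a checkpoint, while a fresh fine-tune loads only the pre-trained backbone and re-initializes the action layers.

```python
# Hypothetical sketch of the loading decision implied by the mode flags above.
def resolve_loading(mode, resume_from_checkpoint=None, pretrained_backbone_path=None):
    if mode == "pretrain" and resume_from_checkpoint:
        # Resume: restore model, optimizer state, and step counter.
        return ("resume_full_state", resume_from_checkpoint)
    if mode == "finetune" and pretrained_backbone_path:
        # Fresh fine-tune: load backbone weights only; action layers start fresh.
        return ("load_backbone_only", pretrained_backbone_path)
    return ("from_scratch", None)
```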
🎯 Training Modes
| Training Scenario | Base Script | Required Shell Script Modifications | Mode & Key Parameters |
|---|---|---|---|
| Human Pretrain (Fresh) | `pretrain.sh` | `--mode="pretrain"` | Start pre-training on EgoDex human data |
| Human Pretrain Resume | `pretrain.sh` | Add: `--resume_from_checkpoint="checkpoint-450000" \` | `--mode="pretrain"` |
| Robot Fine-tuning | `finetune.sh` | Change: `--mode="finetune" \`; Add: `--pretrained_backbone_path="./checkpoints/pretrain-0618/checkpoint-500000/pytorch_model.bin" \`; Change: `--config_path="configs/hrdt_finetune.yaml" \` | Load the human pre-trained backbone with fresh action layers |
| Robot Finetune Resume | Your finetune script | Change: `--mode="finetune"` → `--mode="pretrain"`; Add: `--resume_from_checkpoint="checkpoint-5000" \` | Continue robot fine-tuning |
Dataset Configuration
Before training, select a dataset via CLI flags or configure it in `datasets/dataset.py`:
For human pre-training (EgoDex):

```bash
--dataset_name=egodex
```

For robot fine-tuning:

```bash
--dataset_name=robotwin_agilex
```
Adding New Robot Datasets:
1. Create your dataset folder: `datasets/your_robot/`
2. Implement your dataset class (see `datasets/robotwin2/` as an example)
3. Create data processing scripts (see `datasets/pretrain/` or `datasets/robotwin2/` as examples)
4. Import it in `datasets/dataset.py`
5. Add initialization logic in `VLAConsumerDataset.__init__`
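The steps above can be sketched as a minimal registration pattern. This is an illustrative stand-in, assuming names from the fine-tuning instructions (`YourRobotDataset`, the `elif` chain in `VLAConsumerDataset.__init__`); the real classes live in the repository.

```python
# Illustrative sketch only: class and key names are hypothetical stand-ins
# mirroring the dataset-selection elif chain described above.
class YourRobotDataset:
    """Minimal stand-in for an HDF5-backed robot dataset."""
    def __init__(self, config):
        self.config = config

    def __len__(self):
        return self.config.get("num_episodes", 0)

def build_dataset(dataset_name, config):
    # Mirrors the selection logic in VLAConsumerDataset.__init__:
    # each supported dataset name maps to a concrete dataset class.
    if dataset_name == "robotwin_agilex":
        raise NotImplementedError("provided by the repository")
    elif dataset_name == "your_robot":
        return YourRobotDataset(config=config)
    raise ValueError(f"unknown dataset: {dataset_name}")

ds = build_dataset("your_robot", {"num_episodes": 12})
```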
LeRobot (HE -> LeRobot) Fine-tuning:
```bash
--dataset_name=lerobot --dataset_root=/hfm/data/pick_n_squat
```

See `datasets/lerobot/README.md` for action stats and language embeddings setup.
Key Configuration Files
- `configs/hrdt_pretrain.yaml`: human pre-training configuration
- `configs/hrdt_finetune.yaml`: robot fine-tuning configuration
- `datasets/dataset.py`: dataset selection and initialization
  - Modify `state_dim`, `action_dim`, `output_size` for your robot
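To see why these dimensions matter when adapting a new robot: the pre-training pipeline works in a 48-dimensional action space (see `precompute_48d_actions.py`), so a robot whose native action vector is narrower must be padded up to the model's width. The sketch below is a hypothetical illustration of that idea, not the repository's implementation; the function name and mask convention are assumptions.

```python
# Hypothetical illustration of aligning a robot's native action width
# with a wider unified action dimension (48-d in the pre-training stage).
UNIFIED_ACTION_DIM = 48

def pad_action(action, target_dim=UNIFIED_ACTION_DIM):
    """Zero-pad a native robot action vector to the unified width,
    returning the padded vector and a validity mask (1 = real dim)."""
    if len(action) > target_dim:
        raise ValueError("native action wider than unified dimension")
    pad = target_dim - len(action)
    return action + [0.0] * pad, [1.0] * len(action) + [0.0] * pad

# Toy usage with a 3-dof action and a small target width:
vec, mask = pad_action([0.1, -0.2, 0.3], target_dim=6)
# vec  -> [0.1, -0.2, 0.3, 0.0, 0.0, 0.0]
# mask -> [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]
```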
📞 Contact Us
WeChat Discussion Group
Join our WeChat group to discuss H-RDT related technical issues:
WeChat Group QR Code
Personal WeChat
For other questions or collaboration opportunities, please add personal WeChat:
Personal WeChat QR Code
Note: If the QR code expires, please contact us through project Issues for the latest contact information.