# UMI-on-Air - Pre-trained Checkpoints
Pre-trained diffusion policy checkpoints for UMI-on-Air: Embodiment-Aware Guidance for Embodiment-Agnostic Visuomotor Policies.
UMI-on-Air is a framework for embodiment-aware deployment of embodiment-agnostic manipulation policies. It leverages diverse, unconstrained human demonstrations to train generalizable visuomotor policies that can be adapted to constrained robotic embodiments at inference time.
- 📄 **Paper**: [arXiv](https://arxiv.org/abs/2510.02614)
- 🌐 **Project Website**: [umi-on-air.github.io](https://umi-on-air.github.io)
- 💻 **Code**: GitHub
## Checkpoints Included
| Task | Description |
|---|---|
| `umi_cabinet` | Cabinet door opening task |
| `umi_peg` | Peg insertion task |
| `umi_pick` | Object picking task |
| `umi_valve` | Valve turning task |
## Usage

### 1. Download and Extract
```bash
# Download and extract the checkpoints
wget https://huggingface.co/LeCAR-Lab/umi-on-air_checkpoints/resolve/main/checkpoints.tar.gz
tar -xzf checkpoints.tar.gz
```
Each checkpoint folder contains:
- `checkpoints/latest.ckpt` - Model weights
- `normalizer.pkl` - Data normalization parameters
- `.hydra/config.yaml` - Training configuration
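The exact contents of `normalizer.pkl` follow the training code, but diffusion-policy pipelines commonly store per-dimension min/max statistics and map observations and actions into `[-1, 1]`. A minimal, illustrative sketch of that convention (not the repository's exact implementation):

```python
import numpy as np

def normalize(x, stat_min, stat_max):
    # Map raw values into [-1, 1] per dimension (common diffusion-policy convention).
    return 2.0 * (x - stat_min) / (stat_max - stat_min) - 1.0

def unnormalize(x_norm, stat_min, stat_max):
    # Invert the mapping to recover raw action/observation values.
    return (x_norm + 1.0) / 2.0 * (stat_max - stat_min) + stat_min

# Example with made-up statistics: the round trip recovers the raw values.
stat_min, stat_max = np.array([-0.5, 0.0]), np.array([0.5, 1.0])
raw = np.array([0.25, 0.5])
assert np.allclose(unnormalize(normalize(raw, stat_min, stat_max), stat_min, stat_max), raw)
```

At inference time, observations are normalized before being fed to the policy and predicted actions are unnormalized before execution.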
### 2. Policy Evaluation
After setting up the environment according to the official repository, you can evaluate a policy in simulation with Embodiment-Aware Diffusion Policy (EADP) guidance.
For example, to evaluate the Unmanned Aerial Manipulator (UAM) on the cabinet task with guidance:
```bash
cd am_mujoco_ws/policy_learning
python imitate_episodes.py \
    --task_name uam_cabinet \
    --num_rollouts 30 \
    --guidance 1.5 \
    --use_3d_viewer
```
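At a high level, the `--guidance` flag weights an embodiment-aware cost whose gradient steers each diffusion denoising step toward trajectories the deployed embodiment can track. A toy numpy sketch of gradient-guided sampling, for intuition only (the quadratic cost and all names here are hypothetical, not the repository's implementation):

```python
import numpy as np

def embodiment_cost_grad(action, feasible_center):
    # Hypothetical quadratic cost: squared distance from a region the
    # embodiment tracks well; its gradient pulls actions toward that region.
    return 2.0 * (action - feasible_center)

def guided_denoise_step(action, denoiser, guidance_scale, feasible_center, step_size=0.1):
    # One illustrative reverse-diffusion step: apply the policy's denoiser,
    # then descend the embodiment cost, weighted by the guidance scale.
    action = denoiser(action)
    return action - step_size * guidance_scale * embodiment_cost_grad(action, feasible_center)

# With an identity "denoiser", guided steps contract toward the feasible region.
center = np.zeros(2)
a = np.array([1.0, 1.0])
for _ in range(20):
    a = guided_denoise_step(a, lambda x: x, guidance_scale=1.5, feasible_center=center)
```

In this sketch, a guidance scale of 0 leaves the denoiser's output untouched, which corresponds to running the embodiment-agnostic policy without embodiment-aware adaptation.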
## Citation
If you use these checkpoints, please cite our work:
```bibtex
@misc{gupta2025umionairembodimentawareguidanceembodimentagnostic,
    title={UMI-on-Air: Embodiment-Aware Guidance for Embodiment-Agnostic Visuomotor Policies},
    author={Harsh Gupta and Xiaofeng Guo and Huy Ha and Chuer Pan and Muqing Cao and Dongjae Lee and Sebastian Scherer and Shuran Song and Guanya Shi},
    year={2025},
    eprint={2510.02614},
    archivePrefix={arXiv},
    primaryClass={cs.RO},
    url={https://arxiv.org/abs/2510.02614},
}
```