# 🦾 Heterogeneous Pre-trained Transformers
[Lirui Wang](https://liruiw.github.io/), [Xinlei Chen](https://xinleic.xyz/), [Jialiang Zhao](https://alanz.info/), [Kaiming He](https://people.csail.mit.edu/kaiming/)

Neural Information Processing Systems (Spotlight), 2024
You can find more details on our [project page](https://liruiw.github.io/hpt). An alternative, clean implementation of HPT in Hugging Face's LeRobot can also be found [here](https://github.com/liruiw/lerobot/tree/hpt_squash/lerobot/common/policies/hpt).
**TL;DR:** HPT aligns different embodiments to a shared latent space and investigates the scaling behaviors of policy learning. Put a scalable transformer in the middle of your policy and don’t train from scratch!
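For intuition, here is a minimal PyTorch sketch of that stem-trunk-head layout. It is an illustrative assumption, not the repository's actual implementation: the class name `HPTPolicySketch`, the dimensions, and the mean-pooled readout are all hypothetical, and the real HPT also tokenizes visual inputs alongside proprioception.

```python
import torch
import torch.nn as nn


class HPTPolicySketch(nn.Module):
    """Sketch only: an embodiment-specific stem tokenizes heterogeneous
    inputs into a shared latent space, a shared transformer trunk
    processes the tokens, and a small head decodes actions."""

    def __init__(self, proprio_dim: int, action_dim: int,
                 embed_dim: int = 256, num_tokens: int = 16):
        super().__init__()
        self.embed_dim = embed_dim
        self.num_tokens = num_tokens
        # Stem (per embodiment): project proprioception to a fixed set of tokens.
        self.stem = nn.Linear(proprio_dim, num_tokens * embed_dim)
        # Trunk (shared): the scalable transformer in the middle of the policy,
        # the component that is pre-trained rather than trained from scratch.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=8,
                                           batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers=4)
        # Head (per task/embodiment): decode pooled trunk features into actions.
        self.head = nn.Linear(embed_dim, action_dim)

    def forward(self, proprio: torch.Tensor) -> torch.Tensor:
        tokens = self.stem(proprio).view(-1, self.num_tokens, self.embed_dim)
        feats = self.trunk(tokens)
        return self.head(feats.mean(dim=1))  # pool tokens, predict an action


# Usage: a batch of 7-dim proprioception mapped to 8-dim actions.
policy = HPTPolicySketch(proprio_dim=7, action_dim=8)
actions = policy(torch.randn(2, 7))
print(actions.shape)  # torch.Size([2, 8])
```

Swapping embodiments means swapping the stem (and head) while keeping the trunk fixed, which is what lets the shared representation transfer across robots.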
If you find HPT useful in your research, please consider citing:
```bibtex
@inproceedings{wang2024hpt,
  author    = {Lirui Wang and Xinlei Chen and Jialiang Zhao and Kaiming He},
  title     = {Scaling Proprioceptive-Visual Learning with Heterogeneous Pre-trained Transformers},
  booktitle = {Advances in Neural Information Processing Systems},
  year      = {2024}
}
```
## Contact
If you have any questions, feel free to contact me via email (liruiw@mit.edu). Enjoy!