---
license: cc-by-nc-4.0
---

# DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos

📄 [Paper](https://arxiv.org/abs/2512.13644)   |   💻 [Code](https://github.com/facebookresearch/dexwm)   |   🌐 [Project Page](https://raktimgg.github.io/dexwm/)
## Description

This dataset contains the **RoboCasa simulation data** used in *DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos*. It includes two data regimes for training and evaluating DexWM.

- **RoboCasa Random**: Contains `exploratory_movement` and `gripper_open_and_close` sequences. These are random interaction trajectories collected using a Franka arm with an Allegro hand, used for model fine-tuning.
- **Pick-and-Place**: Contains the `pick-and-place-2.0` dataset, used exclusively for evaluating manipulation performance.

All data is stored in `.hdf5` format, where each file contains sequential robot interaction trajectories, including states and actions for dexterous manipulation.

## Citation

```bibtex
@article{goswami2025dexwm,
  title={World Models for Learning Dexterous Hand-Object Interactions from Human Videos},
  author={Goswami, Raktim Gautam and Bar, Amir and Fan, David and Yang, Tsung-Yen and Zhou, Gaoyue and Krishnamurthy, Prashanth and Rabbat, Michael and Khorrami, Farshad and LeCun, Yann},
  journal={arXiv preprint arXiv:2512.13644},
  year={2026}
}
```
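Since the internal key layout of the `.hdf5` files is not documented here, a generic inspection helper is a reasonable first step after downloading. The sketch below (using `h5py`; the function name `list_datasets` is our own, not part of the DexWM codebase) recursively lists every dataset in a file along with its shape and dtype, which should reveal where the state and action arrays live:

```python
import h5py


def list_datasets(path):
    """Return (name, shape, dtype) for every dataset in an HDF5 file."""
    entries = []

    def visit(name, obj):
        # visititems walks the full group hierarchy; keep only leaf datasets.
        if isinstance(obj, h5py.Dataset):
            entries.append((name, obj.shape, str(obj.dtype)))

    with h5py.File(path, "r") as f:
        f.visititems(visit)
    return entries


if __name__ == "__main__":
    # Replace with a file from this dataset, e.g. one of the
    # exploratory_movement or pick-and-place-2.0 trajectories.
    for name, shape, dtype in list_datasets("trajectory.hdf5"):
        print(f"{name}: shape={shape}, dtype={dtype}")
```

Once the key names are known, individual trajectories can be sliced directly from the returned paths without loading the whole file into memory.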