---
license: cc-by-nc-4.0
---
<div align="center">
<h1><strong>DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos</strong></h1>
📄 <a href="https://arxiv.org/abs/2512.13644">Paper</a> &nbsp;&nbsp;|&nbsp;&nbsp; 💻 <a href="https://github.com/facebookresearch/dexwm">Code</a> &nbsp;&nbsp;|&nbsp;&nbsp; 🌐 <a href="https://raktimgg.github.io/dexwm/">Project Page</a>
</div>

## Description
This dataset contains the **RoboCasa simulation data** used in *DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos*. It includes two data regimes for training and evaluation of DexWM.
- **RoboCasa Random**: Contains the `exploratory_movement` and `gripper_open_and_close` sequences, roughly 4 hours of random interaction trajectories collected in RoboCasa with a Franka arm and an Allegro hand, used for model fine-tuning.
- **Pick-and-Place**: Contains the `pick-and-place-2.0` dataset, used exclusively for evaluating manipulation performance.
All data is stored in `.hdf5` format, where each file contains sequential robot interaction trajectories, including states and actions for dexterous manipulation.
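
For a quick look at the files, the sketch below downloads one trajectory file and prints the layout of its groups and datasets. The `repo_id` and `filename` are assumptions for illustration (check the repository's file listing for the actual paths); only the `.hdf5` format itself is taken from this card.

```python
import h5py
from huggingface_hub import hf_hub_download

# Fetch a single trajectory file from the dataset repo.
# NOTE: repo_id and filename below are assumptions; substitute the real ones.
path = hf_hub_download(
    repo_id="facebook/dexwm",              # assumed repository id
    repo_type="dataset",
    filename="exploratory_movement.hdf5",  # hypothetical file name
)

with h5py.File(path, "r") as f:
    # Walk the file and print every dataset with its shape and dtype,
    # e.g. the state and action arrays stored for each trajectory.
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")

    f.visititems(show)
```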
## Citation
```bibtex
@article{goswami2025dexwm,
  title={DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos},
  author={Goswami, Raktim Gautam and Bar, Amir and Fan, David and Yang, Tsung-Yen and Zhou, Gaoyue and Krishnamurthy, Prashanth and Rabbat, Michael and Khorrami, Farshad and LeCun, Yann},
  journal={arXiv preprint arXiv:2512.13644},
  year={2025}
}
```