---
license: cc-by-nc-4.0
---
<!-- Data from [DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos](https://arxiv.org/abs/2512.13644). 4 hours of exploratory sequences of random arm movements collected in RoboCasa. -->

<div align="center">

<h1><strong>DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos</strong></h1>

📄 [Paper](https://arxiv.org/abs/2512.13644) &nbsp;&nbsp;|&nbsp;&nbsp; 💻 [Code](https://github.com/facebookresearch/dexwm) &nbsp;&nbsp;|&nbsp;&nbsp; 🌐 [Project Page](https://raktimgg.github.io/dexwm/)

</div>


## Description

This dataset contains the **RoboCasa simulation data** used in *DexWM: World Models for Learning Dexterous Hand-Object Interactions from Human Videos*. It includes two data regimes for training and evaluation of DexWM.

- **RoboCasa Random**: Contains `exploratory_movement` and `gripper_open_and_close` sequences. These are random interaction trajectories collected using a Franka arm with an Allegro hand, used for model fine-tuning.
- **Pick-and-Place**: Contains the `pick-and-place-2.0` dataset, used exclusively for evaluating manipulation performance.

All data is stored in `.hdf5` format, where each file contains sequential robot interaction trajectories, including states and actions for dexterous manipulation.
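As a starting point for working with these files, the small sketch below lists every dataset stored in one of the `.hdf5` files using `h5py`. The group and dataset names inside the files are not documented here, so the helper simply walks the file tree; any key names you see in its output come from the file itself, not from this card.

```python
import h5py

def summarize(path):
    """Map each HDF5 dataset in the file to its (shape, dtype)."""
    out = {}
    with h5py.File(path, "r") as f:
        def visit(name, obj):
            # visititems walks groups and datasets; keep only datasets.
            if isinstance(obj, h5py.Dataset):
                out[name] = (obj.shape, str(obj.dtype))
        f.visititems(visit)
    return out
```

Calling `summarize("<some_file>.hdf5")` on a downloaded file prints nothing and returns a dict you can inspect to discover the actual trajectory keys (e.g. state and action arrays) before writing a loader.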


## Citation
```bibtex
@article{goswami2025dexwm,
  title={World Models for Learning Dexterous Hand-Object Interactions from Human Videos},
  author={Goswami, Raktim Gautam and Bar, Amir and Fan, David and Yang, Tsung-Yen and Zhou, Gaoyue and Krishnamurthy, Prashanth and Rabbat, Michael and Khorrami, Farshad and LeCun, Yann},
  journal={arXiv preprint arXiv:2512.13644},
  year={2025}
}
```