---
license: other
license_name: slab-license
license_link: LICENSE
size_categories:
- 100K<n<1M
task_categories:
- robotics
tags:
- visual-language-action
- vla
---

# Dynamic Object Manipulation (DOM)

[**Project Page**](https://haozhexie.com/project/dynamic-vla) | [**Paper**](https://huggingface.co/papers/2601.22153) | [**Code**](https://github.com/hzxie/DynamicVLA)

**TL;DR:** DOM is a large-scale dynamic manipulation dataset with 200K episodes, 2,800+ scenes, and 206 objects for training and evaluating VLA models.

## Introduction

The Dynamic Object Manipulation (DOM) benchmark is designed to address the challenges of rapid perception and temporal anticipation in robotics. It includes:
- **200K synthetic episodes** across 2,800+ scenes and 206 objects.
- **2K real-world episodes** collected without teleoperation.
- Support for evaluating VLA models in dynamic scenarios requiring continuous control and closed-loop adaptation.
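As an illustration of how episode metadata of this shape might be consumed, here is a minimal sketch. The field names (`scene_id`, `source`, `object`) and the example records are assumptions for illustration only, not the actual DOM schema:

```python
# Minimal sketch of summarizing DOM-style episode metadata.
# NOTE: the fields "scene_id", "source", and "object" are illustrative
# assumptions, not the real dataset schema.
from collections import Counter

episodes = [
    {"scene_id": "scene_0001", "source": "synthetic", "object": "mug"},
    {"scene_id": "scene_0001", "source": "synthetic", "object": "ball"},
    {"scene_id": "scene_0042", "source": "real", "object": "mug"},
]

def count_by_source(episodes):
    """Count episodes per data source (synthetic vs. real-world)."""
    return Counter(ep["source"] for ep in episodes)

print(count_by_source(episodes))  # Counter({'synthetic': 2, 'real': 1})
```

A split like this mirrors the dataset's composition (200K synthetic vs. 2K real-world episodes) and is a common first step before sampling training batches.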

## Citation

If you find this dataset or the DynamicVLA framework useful for your research, please cite:

```bibtex
@article{xie2026dynamicvla,
  title     = {DynamicVLA: A Vision-Language-Action Model for 
               Dynamic Object Manipulation},
  author    = {Xie, Haozhe and 
               Wen, Beichen and 
               Zheng, Jiarui and 
               Chen, Zhaoxi and 
               Hong, Fangzhou and 
               Diao, Haiwen and 
               Liu, Ziwei},
  journal   = {arXiv preprint arXiv:2601.22153},
  year      = {2026}
}
```

## Changelog

- [2026/01/31] The repository is created. Please stay tuned!