---
license: other
license_name: slab-license
license_link: LICENSE
size_categories:
  - 100K<n<1M
task_categories:
  - robotics
tags:
  - visual-language-action
  - vla
---

# Dynamic Object Manipulation (DOM)

Project Page | Paper | Code

**TL;DR:** DOM is a large-scale dynamic manipulation dataset with 200K episodes, 2,800+ scenes, and 206 objects for training and evaluating VLA models.

## Introduction

The Dynamic Object Manipulation (DOM) benchmark is designed to address the challenges of rapid perception and temporal anticipation in robotics. It includes:

- 200K synthetic episodes across 2,800+ scenes and 206 objects.
- 2K real-world episodes collected without teleoperation.
- Support for evaluating VLA models in dynamic scenarios requiring continuous control and closed-loop adaptation.

## Citation

If you find this dataset or the DynamicVLA framework useful for your research, please cite:

```bibtex
@article{xie2026dynamicvla,
  title     = {DynamicVLA: A Vision-Language-Action Model for
               Dynamic Object Manipulation},
  author    = {Xie, Haozhe and
               Wen, Beichen and
               Zheng, Jiarui and
               Chen, Zhaoxi and
               Hong, Fangzhou and
               Diao, Haiwen and
               Liu, Ziwei},
  journal   = {arXiv},
  volume    = {2601.22153},
  year      = {2026}
}
```

## Changelog

- [2026/01/31] Repository created. Please stay tuned!