nielsr HF Staff committed
Commit da5c56d · verified · 1 Parent(s): 3503802

Add robotics task category, project links and citation


Hi! I'm Niels from the community science team at Hugging Face. This PR improves the dataset card for the Dynamic Object Manipulation (DOM) dataset by:
- Adding the `robotics` task category to the YAML metadata.
- Including links to the project page, the original paper, and the official GitHub repository.
- Adding the BibTeX citation from the GitHub README.
- Providing a brief summary of the dataset's size and composition for better context.
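For reference, this is a sketch of the dataset card's YAML front matter after these metadata changes, reconstructed from the diff and assuming the PR applies cleanly:

```yaml
---
license: other
license_name: slab-license
license_link: LICENSE
size_categories:
- 100K<n<1M
task_categories:
- robotics
tags:
- visual-language-action
- vla
---
```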

Files changed (1)
1. README.md (+34 −7)
README.md CHANGED

@@ -2,22 +2,49 @@
 license: other
 license_name: slab-license
 license_link: LICENSE
-tags:
+size_categories:
+- 100K<n<1M
+task_categories:
 - robotics
+tags:
 - visual-language-action
 - vla
-size_categories:
-- 100K<n<1M
 ---
 
-# Dynamic Object Manipulation
+# Dynamic Object Manipulation (DOM)
+
+[**Project Page**](https://haozhexie.com/project/dynamic-vla) | [**Paper**](https://huggingface.co/papers/2601.22153) | [**Code**](https://github.com/hzxie/DynamicVLA)
 
 **TL;DR:** DOM is a large-scale dynamic manipulation dataset with 200K episodes, 2,800+ scenes, and 206 objects for training and evaluating VLA models.
 
-[![arXiv](https://img.shields.io/badge/arXiv-2601.22153-b31b1b.svg)](https://arxiv.org/abs/2601.22153)
-
----
+## Introduction
+
+The Dynamic Object Manipulation (DOM) benchmark is designed to address the challenges of rapid perception and temporal anticipation in robotics. It includes:
+- **200K synthetic episodes** across 2,800+ scenes and 206 objects.
+- **2K real-world episodes** collected without teleoperation.
+- Support for evaluating VLA models in dynamic scenarios requiring continuous control and closed-loop adaptation.
+
+## Citation
+
+If you find this dataset or the DynamicVLA framework useful for your research, please cite:
+
+```bibtex
+@article{xie2026dynamicvla,
+  title   = {DynamicVLA: A Vision-Language-Action Model for
+             Dynamic Object Manipulation},
+  author  = {Xie, Haozhe and
+             Wen, Beichen and
+             Zheng, Jiarui and
+             Chen, Zhaoxi and
+             Hong, Fangzhou and
+             Diao, Haiwen and
+             Liu, Ziwei},
+  journal = {arXiv},
+  volume  = {2601.22153},
+  year    = {2026}
+}
+```
 
-# Changelog
+## Changelog
 
 - [2026/01/31] Repo is created. Please stay tuned!