---
license: mit
---


# OpenEAI-Dataset

## Dataset Summary

Preprocessed pretraining datasets for [OpenEAI-VLA](https://github.com/eai-yeslab/OpenEAI-VLA).

This dataset aggregates and unifies data from multiple embodied AI sources (e.g., Open X-Embodiment, UMI Community) for large-scale VLA pretraining.  
All samples are preprocessed and stored in a common format compatible with OpenEAI-VLA's dataset loader.  
Images have been compressed to reduce storage cost.  
**Total size:** ~3.12TB.

## Supported Tasks

- Visual-Language-Action (VLA) Pretraining

## Source Datasets

This dataset contains reformatted or reprocessed samples from the following datasets:

- Open X-Embodiment  
   - [Open X-Embodiment: Robotic Learning Datasets and RT-X Models](https://robotics-transformer-x.github.io/)
   - License: Apache License 2.0
- UMI Community Dataset  
   - [UMI Robot Dataset Community](https://umi-data.github.io/)
   - License: MIT

If you use this dataset, please also cite these original datasets as appropriate.

## Dataset Structure

**meta/**
```
meta/
├── pretrain_meta.json
├── bc_z_meta.npy
├── droid_meta.npy
└── ... (other meta)
```
Fields in `bc_z_meta.npy`:
- `batch_size`, `episode_length`
- `episode_0`, `episode_1`, ...: each a `(2,)` array of (accumulated start index, trajectory length)
- `total_steps`
- `state_dim`, `action_dim`
- `state_stat` (`mean`, `std`, `q01`, `q99`)
- `action_stat` (`mean`, `std`, `q01`, `q99`)
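To illustrate the meta format, here is a minimal sketch of saving and loading such a file and normalizing actions with the `q01`/`q99` statistics. The field names follow the list above; the file name `bc_z_meta_demo.npy` and all numeric values are synthetic stand-ins, not the real statistics.

```python
import numpy as np

# Synthetic stand-in for meta/bc_z_meta.npy (field names from the list
# above; the numbers here are made up for illustration).
meta = {
    "total_steps": 100,
    "state_dim": 7,
    "action_dim": 7,
    "action_stat": {
        "mean": np.zeros(7),
        "std": np.ones(7),
        "q01": np.full(7, -1.0),
        "q99": np.full(7, 1.0),
    },
}
np.save("bc_z_meta_demo.npy", meta)

# Round-trip exactly as the loader would: .item() unwraps the 0-d object array.
loaded = np.load("bc_z_meta_demo.npy", allow_pickle=True).item()

# A common use of q01/q99: rescale raw actions into [-1, 1].
q01 = loaded["action_stat"]["q01"]
q99 = loaded["action_stat"]["q99"]
raw_action = np.full(7, 0.5)
norm_action = 2.0 * (raw_action - q01) / (q99 - q01) - 1.0
print(norm_action[0])  # 0.5 maps to 0.5 when q01=-1 and q99=1
```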

**bc_z/**

```
bc_z/
├── 0000.hdf5
│   ├── episode_0
│   │   ├── attrs (dict: action_type, instruction, length)
│   │   ├── action (traj_length, action_dim)
│   │   ├── state  (traj_length, state_dim)
│   │   └── image_mid (traj_length)
│   ├── episode_1
│   └── ...
├── 0001.hdf5
├── ...
```
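For reference, a minimal sketch of writing an episode in this layout with `h5py`. The group and dataset names follow the tree above; the shapes, attribute values, and image bytes are synthetic placeholders (real files store compressed JPEG/PNG buffers in `image_mid`).

```python
import h5py
import numpy as np

T, state_dim, action_dim = 5, 7, 7  # tiny synthetic episode

with h5py.File("demo_0000.hdf5", "w") as f:
    ep = f.create_group("episode_0")
    # Per-episode attrs, matching the fields shown in the tree above
    # (values here are illustrative).
    ep.attrs["action_type"] = "eef_pose"
    ep.attrs["instruction"] = "pick up the block"
    ep.attrs["length"] = T
    ep.create_dataset("action", data=np.zeros((T, action_dim), np.float32))
    ep.create_dataset("state", data=np.zeros((T, state_dim), np.float32))
    # image_mid holds one variable-length byte buffer (a compressed image)
    # per timestep; dummy bytes stand in for real JPEG data here.
    vlen_u8 = h5py.vlen_dtype(np.uint8)
    imgs = ep.create_dataset("image_mid", (T,), dtype=vlen_u8)
    for t in range(T):
        imgs[t] = np.frombuffer(b"\xff\xd8 fake jpeg", np.uint8)

with h5py.File("demo_0000.hdf5", "r") as f:
    print(dict(f["episode_0"].attrs)["instruction"])  # pick up the block
```

The variable-length `uint8` dtype is what lets each timestep store a compressed buffer of a different size.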

**droid/** 



...



### Dataset Loading



The dataset is automatically loaded via [hdf5_loader](https://github.com/eai-yeslab/OpenEAI-VLA/blob/main/openeai/dataset/hdf5_dataset.py).



**Manual loading example:**

```python
import h5py
import numpy as np
import cv2
from PIL import Image

# Load trajectory data
dataset = h5py.File("bc_z/0000.hdf5", "r")
episode = dataset["episode_0"]
action = episode["action"][:]  # (traj_length, action_dim)
state = episode["state"][:]    # (traj_length, state_dim)
attrs = dict(episode.attrs)
print(attrs["instruction"])

# Decode a compressed image (cv2 decodes to BGR; convert to RGB for PIL)
imgs = episode["image_mid"]
frame = cv2.imdecode(np.frombuffer(imgs[0], np.uint8), cv2.IMREAD_COLOR)
image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))

# Load meta data
meta = np.load("meta/bc_z_meta.npy", allow_pickle=True).item()
print(meta["action_stat"])
```



### Dataset Creation



You can convert your own dataset into this format for pretraining or finetuning.



**Recommended process:**

1. Use our collector:
   [`collect_data.py`](https://github.com/eai-yeslab/OpenEAI-Arm/blob/main/software/ros2/src/openeai_arm/examples/collect_data.py)
   (use a task name starting with `openeai_arm`)
2. (Optional) Write and register a data worker:
   [register here](https://github.com/eai-yeslab/OpenEAI-VLA/blob/main/data_utils/vla_data_utils/unique_worker_fn/__init__.py),
   and see the [worker example](https://github.com/eai-yeslab/OpenEAI-VLA/blob/main/data_utils/vla_data_utils/unique_worker_fn/openeai_arm.py)
3. Convert the dataset:
    ```bash
    cd OpenEAI-VLA/data_utils
    bash run.sh openeai_arm_<task_name>
    ```

4. Set your dataset path in the training config file.



### Licensing Information



- License: MIT



## Citation Information



If you use our dataset to pretrain your model, please cite our paper:

```
@inproceedings{openeai_platform,
  title   = {OpenEAI-Platform: Open-source Embodied Artificial Intelligence Hardware-Software Unified Platform},
  author  = {Jinyuan Zhang and Luoyi Fan and Leiyu Wang and Yeqiang Wang and Yichen Zhu and Cewu Lu and Nanyang Ye},
  year    = {2026}
}
```



Also, if you are using datasets converted from Open X-Embodiment, please also cite the following:

```

@misc{open_x_embodiment_rt_x_2023,
title={Open {X-E}mbodiment: Robotic Learning Datasets and {RT-X} Models},
author = {Open X-Embodiment Collaboration and Abby O'Neill and Abdul Rehman and Abhinav Gupta and Abhiram Maddukuri and Abhishek Gupta and Abhishek Padalkar and Abraham Lee and Acorn Pooley and Agrim Gupta and Ajay Mandlekar and Ajinkya Jain and Albert Tung and Alex Bewley and Alex Herzog and Alex Irpan and Alexander Khazatsky and Anant Rai and Anchit Gupta and Andrew Wang and Andrey Kolobov and Anikait Singh and Animesh Garg and Aniruddha Kembhavi and Annie Xie and Anthony Brohan and Antonin Raffin and Archit Sharma and Arefeh Yavary and Arhan Jain and Ashwin Balakrishna and Ayzaan Wahid and Ben Burgess-Limerick and Beomjoon Kim and Bernhard Schölkopf and Blake Wulfe and Brian Ichter and Cewu Lu and Charles Xu and Charlotte Le and Chelsea Finn and Chen Wang and Chenfeng Xu and Cheng Chi and Chenguang Huang and Christine Chan and Christopher Agia and Chuer Pan and Chuyuan Fu and Coline Devin and Danfei Xu and Daniel Morton and Danny Driess and Daphne Chen and Deepak Pathak and Dhruv Shah and Dieter Büchler and Dinesh Jayaraman and Dmitry Kalashnikov and Dorsa Sadigh and Edward Johns and Ethan Foster and Fangchen Liu and Federico Ceola and Fei Xia and Feiyu Zhao and Felipe Vieira Frujeri and Freek Stulp and Gaoyue Zhou and Gaurav S. 
Sukhatme and Gautam Salhotra and Ge Yan and Gilbert Feng and Giulio Schiavi and Glen Berseth and Gregory Kahn and Guangwen Yang and Guanzhi Wang and Hao Su and Hao-Shu Fang and Haochen Shi and Henghui Bao and Heni Ben Amor and Henrik I Christensen and Hiroki Furuta and Homanga Bharadhwaj and Homer Walke and Hongjie Fang and Huy Ha and Igor Mordatch and Ilija Radosavovic and Isabel Leal and Jacky Liang and Jad Abou-Chakra and Jaehyung Kim and Jaimyn Drake and Jan Peters and Jan Schneider and Jasmine Hsu and Jay Vakil and Jeannette Bohg and Jeffrey Bingham and Jeffrey Wu and Jensen Gao and Jiaheng Hu and Jiajun Wu and Jialin Wu and Jiankai Sun and Jianlan Luo and Jiayuan Gu and Jie Tan and Jihoon Oh and Jimmy Wu and Jingpei Lu and Jingyun Yang and Jitendra Malik and João Silvério and Joey Hejna and Jonathan Booher and Jonathan Tompson and Jonathan Yang and Jordi Salvador and Joseph J. Lim and Junhyek Han and Kaiyuan Wang and Kanishka Rao and Karl Pertsch and Karol Hausman and Keegan Go and Keerthana Gopalakrishnan and Ken Goldberg and Kendra Byrne and Kenneth Oslund and Kento Kawaharazuka and Kevin Black and Kevin Lin and Kevin Zhang and Kiana Ehsani and Kiran Lekkala and Kirsty Ellis and Krishan Rana and Krishnan Srinivasan and Kuan Fang and Kunal Pratap Singh and Kuo-Hao Zeng and Kyle Hatch and Kyle Hsu and Laurent Itti and Lawrence Yunliang Chen and Lerrel Pinto and Li Fei-Fei and Liam Tan and Linxi "Jim" Fan and Lionel Ott and Lisa Lee and Luca Weihs and Magnum Chen and Marion Lepert and Marius Memmel and Masayoshi Tomizuka and Masha Itkina and Mateo Guaman Castro and Max Spero and Maximilian Du and Michael Ahn and Michael C. 
Yip and Mingtong Zhang and Mingyu Ding and Minho Heo and Mohan Kumar Srirama and Mohit Sharma and Moo Jin Kim and Muhammad Zubair Irshad and Naoaki Kanazawa and Nicklas Hansen and Nicolas Heess and Nikhil J Joshi and Niko Suenderhauf and Ning Liu and Norman Di Palo and Nur Muhammad Mahi Shafiullah and Oier Mees and Oliver Kroemer and Osbert Bastani and Pannag R Sanketi and Patrick "Tree" Miller and Patrick Yin and Paul Wohlhart and Peng Xu and Peter David Fagan and Peter Mitrano and Pierre Sermanet and Pieter Abbeel and Priya Sundaresan and Qiuyu Chen and Quan Vuong and Rafael Rafailov and Ran Tian and Ria Doshi and Roberto Mart{'i}n-Mart{'i}n and Rohan Baijal and Rosario Scalise and Rose Hendrix and Roy Lin and Runjia Qian and Ruohan Zhang and Russell Mendonca and Rutav Shah and Ryan Hoque and Ryan Julian and Samuel Bustamante and Sean Kirmani and Sergey Levine and Shan Lin and Sherry Moore and Shikhar Bahl and Shivin Dass and Shubham Sonawani and Shubham Tulsiani and Shuran Song and Sichun Xu and Siddhant Haldar and Siddharth Karamcheti and Simeon Adebola and Simon Guist and Soroush Nasiriany and Stefan Schaal and Stefan Welker and Stephen Tian and Subramanian Ramamoorthy and Sudeep Dasari and Suneel Belkhale and Sungjae Park and Suraj Nair and Suvir Mirchandani and Takayuki Osa and Tanmay Gupta and Tatsuya Harada and Tatsuya Matsushima and Ted Xiao and Thomas Kollar and Tianhe Yu and Tianli Ding and Todor Davchev and Tony Z. 
Zhao and Travis Armstrong and Trevor Darrell and Trinity Chung and Vidhi Jain and Vikash Kumar and Vincent Vanhoucke and Vitor Guizilini and Wei Zhan and Wenxuan Zhou and Wolfram Burgard and Xi Chen and Xiangyu Chen and Xiaolong Wang and Xinghao Zhu and Xinyang Geng and Xiyuan Liu and Xu Liangwei and Xuanlin Li and Yansong Pang and Yao Lu and Yecheng Jason Ma and Yejin Kim and Yevgen Chebotar and Yifan Zhou and Yifeng Zhu and Yilin Wu and Ying Xu and Yixuan Wang and Yonatan Bisk and Yongqiang Dou and Yoonyoung Cho and Youngwoon Lee and Yuchen Cui and Yue Cao and Yueh-Hua Wu and Yujin Tang and Yuke Zhu and Yunchu Zhang and Yunfan Jiang and Yunshuang Li and Yunzhu Li and Yusuke Iwasawa and Yutaka Matsuo and Zehan Ma and Zhuo Xu and Zichen Jeff Cui and Zichen Zhang and Zipeng Fu and Zipeng Lin},
howpublished = {\url{https://arxiv.org/abs/2310.08864}},
year = {2023},
}



@inproceedings{jang2021bc,
title={{BC}-Z: Zero-Shot Task Generalization with Robotic Imitation Learning},
author={Eric Jang and Alex Irpan and Mohi Khansari and Daniel Kappler and Frederik Ebert and Corey Lynch and Sergey Levine and Chelsea Finn},
booktitle={5th Annual Conference on Robot Learning},
year={2021},
url={https://openreview.net/forum?id=8kbp23tSGYv}
}



@misc{walke2024bridgedatav2datasetrobot,
      title={BridgeData V2: A Dataset for Robot Learning at Scale},
      author={Homer Walke and Kevin Black and Abraham Lee and Moo Jin Kim and Max Du and Chongyi Zheng and Tony Zhao and Philippe Hansen-Estruch and Quan Vuong and Andre He and Vivek Myers and Kuan Fang and Chelsea Finn and Sergey Levine},
      year={2024},
      eprint={2308.12952},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2308.12952},
}



@misc{khazatsky2025droidlargescaleinthewildrobot,
      title={DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset},
      author={Alexander Khazatsky and Karl Pertsch and Suraj Nair and Ashwin Balakrishna and Sudeep Dasari and Siddharth Karamcheti and Soroush Nasiriany and Mohan Kumar Srirama and Lawrence Yunliang Chen and Kirsty Ellis and Peter David Fagan and Joey Hejna and Masha Itkina and Marion Lepert and Yecheng Jason Ma and Patrick Tree Miller and Jimmy Wu and Suneel Belkhale and Shivin Dass and Huy Ha and Arhan Jain and Abraham Lee and Youngwoon Lee and Marius Memmel and Sungjae Park and Ilija Radosavovic and Kaiyuan Wang and Albert Zhan and Kevin Black and Cheng Chi and Kyle Beltran Hatch and Shan Lin and Jingpei Lu and Jean Mercat and Abdul Rehman and Pannag R Sanketi and Archit Sharma and Cody Simpson and Quan Vuong and Homer Rich Walke and Blake Wulfe and Ted Xiao and Jonathan Heewon Yang and Arefeh Yavary and Tony Z. Zhao and Christopher Agia and Rohan Baijal and Mateo Guaman Castro and Daphne Chen and Qiuyu Chen and Trinity Chung and Jaimyn Drake and Ethan Paul Foster and Jensen Gao and Vitor Guizilini and David Antonio Herrera and Minho Heo and Kyle Hsu and Jiaheng Hu and Muhammad Zubair Irshad and Donovon Jackson and Charlotte Le and Yunshuang Li and Kevin Lin and Roy Lin and Zehan Ma and Abhiram Maddukuri and Suvir Mirchandani and Daniel Morton and Tony Nguyen and Abigail O'Neill and Rosario Scalise and Derick Seale and Victor Son and Stephen Tian and Emi Tran and Andrew E. Wang and Yilin Wu and Annie Xie and Jingyun Yang and Patrick Yin and Yunchu Zhang and Osbert Bastani and Glen Berseth and Jeannette Bohg and Ken Goldberg and Abhinav Gupta and Abhishek Gupta and Dinesh Jayaraman and Joseph J Lim and Jitendra Malik and Roberto Martín-Martín and Subramanian Ramamoorthy and Dorsa Sadigh and Shuran Song and Jiajun Wu and Michael C. Yip and Yuke Zhu and Thomas Kollar and Sergey Levine and Chelsea Finn},
      year={2025},
      eprint={2403.12945},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2403.12945},
}



@inproceedings{gu2023maniskill2,
  title={ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},
  author={Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},
  booktitle={International Conference on Learning Representations},
  year={2023}
}



@inproceedings{fu2024mobile,
  author    = {Fu, Zipeng and Zhao, Tony Z. and Finn, Chelsea},
  title     = {Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation},
  booktitle = {arXiv},
  year      = {2024}
}



@misc{fang2023rh20tcomprehensiveroboticdataset,
      title={RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot},
      author={Hao-Shu Fang and Hongjie Fang and Zhenyu Tang and Jirong Liu and Chenxi Wang and Junbo Wang and Haoyi Zhu and Cewu Lu},
      year={2023},
      eprint={2307.00595},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2307.00595},
}



@misc{bharadhwaj2023roboagent,
      title={RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking},
      author={Homanga Bharadhwaj and Jay Vakil and Mohit Sharma and Abhinav Gupta and Shubham Tulsiani and Vikash Kumar},
      year={2023},
      eprint={2309.01918},
      archivePrefix={arXiv},
      primaryClass={cs.RO}
}



@article{brohan2022rt,
  title={{RT-1}: Robotics Transformer for Real-World Control at Scale},
  author={Brohan, Anthony and Brown, Noah and Carbajal, Justice and Chebotar, Yevgen and Dabis, Joseph and Finn, Chelsea and Gopalakrishnan, Keerthana and Hausman, Karol and Herzog, Alex and Hsu, Jasmine and others},
  journal={arXiv preprint arXiv:2212.06817},
  year={2022}
}

```



If you are using datasets converted from UMI, please also cite the following:

```

@article{rayyan2025mv,
  title={MV-UMI: A Scalable Multi-View Interface for Cross-Embodiment Learning},
  author={Rayyan, Omar and Abanes, John and Hafez, Mahmoud and Tzes, Anthony and Abu-Dakka, Fares},
  journal={arXiv preprint arXiv:2509.18757},
  year={2025}
}

@article{zhu2025touch,
  title={Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper},
  author={Zhu, Xinyue and Huang, Binghao and Li, Yunzhu},
  journal={arXiv preprint arXiv:2507.15062},
  year={2025}
}



@article{liu2025vitamin,
  title={ViTaMIn: Learning Contact-Rich Tasks Through Robot-Free Visuo-Tactile Manipulation Interface},
  author={Liu, Fangchen and Li, Chuanyu and Qin, Yihua and Shaw, Ankit and Xu, Jing and Abbeel, Pieter and Chen, Rui},
  journal={arXiv preprint arXiv:2504.06156},
  year={2025}
}



@article{lin2024data,
  title={Data Scaling Laws in Imitation Learning for Robotic Manipulation},
  author={Lin, Fanqi and Hu, Yingdong and Sheng, Pingyue and Wen, Chuan and You, Jiacheng and Gao, Yang},
  journal={arXiv preprint arXiv:2410.18647},
  year={2024}
}



@inproceedings{ha2024umilegs,
  title={{UMI} on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers},
  author={Huy Ha and Yihuai Gao and Zipeng Fu and Jie Tan and Shuran Song},
  booktitle={Proceedings of the 2024 Conference on Robot Learning},
  year={2024}
}



@article{liu2024maniwav,
  title={ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data},
  author={Liu, Zeyi and Chi, Cheng and Cousineau, Eric and Kuppuswamy, Naveen and Burchfiel, Benjamin and Song, Shuran},
  journal={arXiv preprint arXiv:2406.19464},
  year={2024}
}

```



## Additional Information



- Contact: ynylincoln@sjtu.edu.cn



## Notes



- The dataset is large (~3.12 TB); plan download bandwidth and storage accordingly.

- If you use this dataset in research, please cite OpenEAI-VLA and the relevant source datasets.