Commit 0dbc092 · ffllyy467 committed · 1 parent: f832871

doc: add README and LICENSE

Files changed (2):
  1. LICENSE +18 -0
  2. README.md +257 -0
LICENSE ADDED
@@ -0,0 +1,18 @@
+ MIT License
+
+ Copyright 2026 EAI-YesLab
+
+ Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+ The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+ THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
+
+ -----------------------------------------------------------------------------
+
+ NOTICE REGARDING THIRD-PARTY DATASETS:
+
+ This dataset includes or is derived in part from external datasets distributed under their own licenses.
+ The original datasets are listed and cited in the README.
+ You must comply with the license terms and any use restrictions of those datasets in addition to this MIT license.
README.md CHANGED
@@ -1,3 +1,260 @@
  ---
  license: mit
  ---
+
+ # OpenEAI-Dataset
+
+ ## Dataset Summary
+
+ Preprocessed pretraining datasets for [OpenEAI-VLA](https://github.com/eai-yeslab/OpenEAI-VLA).
+
+ This dataset aggregates and unifies data from multiple embodied AI sources (e.g., Open X-Embodiment, UMI Community) for large-scale VLA pretraining.
+ All samples are preprocessed and stored in a common format compatible with OpenEAI-VLA's dataset loader.
+ Images have been compressed to reduce storage cost.
+ **Total size:** ~3.12 TB.
+
+ ## Supported Tasks
+
+ - Vision-Language-Action (VLA) Pretraining
+
+ ## Source Datasets
+
+ This dataset contains reformatted or reprocessed samples from the following datasets:
+
+ - Open X-Embodiment
+   - [Open X-Embodiment: Robotic Learning Datasets and RT-X Models](https://robotics-transformer-x.github.io/)
+   - License: Apache License 2.0
+ - UMI Community Dataset
+   - [UMI Robot Dataset Community](https://umi-data.github.io/)
+   - License: MIT
+
+ If you use this dataset, please also cite these original datasets as appropriate.
+
+ ## Dataset Structure
+
+ **meta/**
+ ```
+ meta/
+ ├── pretrain_meta.json
+ ├── bc_z_meta.npy
+ ├── droid_meta.npy
+ └── ... (other meta)
+ ```
+ Fields in `bc_z_meta.npy`:
+ - `batch_size`, `episode_length`, `episode_0`: (2,) (accumulated start index, trajectory length)
+ - `total_steps`
+ - `state_dim`, `action_dim`
+ - `state_stat` (`mean`, `std`, `q01`, `q99`)
+ - `action_stat` (`mean`, `std`, `q01`, `q99`)
+
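The quantile statistics are typically used to normalize actions before training. A minimal sketch, assuming a q01/q99 min-max convention scaled to [-1, 1] (a common choice; the exact scheme used by the OpenEAI-VLA loader may differ):

```python
import numpy as np

def normalize_actions(actions, action_stat):
    """Scale actions to [-1, 1] using the q01/q99 quantiles from the meta file."""
    q01 = np.asarray(action_stat['q01'])
    q99 = np.asarray(action_stat['q99'])
    scaled = 2.0 * (actions - q01) / np.maximum(q99 - q01, 1e-8) - 1.0
    # Clip outliers that fall outside the [q01, q99] range
    return np.clip(scaled, -1.0, 1.0)

# Example with synthetic statistics (real values come from meta/bc_z_meta.npy)
stat = {'q01': np.array([-1.0, 0.0]), 'q99': np.array([1.0, 2.0])}
acts = np.array([[0.0, 1.0], [1.0, 2.0]])
print(normalize_actions(acts, stat))  # [[0. 0.] [1. 1.]]
```

Quantile-based scaling is preferred over mean/std normalization when action distributions have heavy tails, which is why both `mean`/`std` and `q01`/`q99` are stored.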
+ **bc_z/**
+ ```
+ bc_z/
+ ├── 0000.hdf5
+ │   ├── episode_0
+ │   │   ├── attrs (dict: action_type, instruction, length)
+ │   │   ├── action (traj_length, action_dim)
+ │   │   ├── state (traj_length, state_dim)
+ │   │   └── image_mid (traj_length)
+ │   ├── episode_1
+ │   └── ...
+ ├── 0001.hdf5
+ └── ...
+ ```
+ **droid/**
+
+ ...
+
+ ### Dataset Loading
+
+ The dataset is loaded automatically via [hdf5_loader](https://github.com/eai-yeslab/OpenEAI-VLA/blob/main/openeai/dataset/hdf5_dataset.py).
+
+ **Manual loading example:**
+ ```python
+ import h5py
+ import numpy as np
+
+ # Load trajectory data
+ dataset = h5py.File("bc_z/0000.hdf5", "r")
+ episode = dataset['episode_0']
+ action = episode['action'][:]  # (traj_length, action_dim)
+ state = episode['state'][:]    # (traj_length, state_dim)
+ attrs = dict(episode.attrs)
+ print(attrs['instruction'])
+
+ # Decode a compressed image (cv2 decodes to BGR; convert to RGB for PIL)
+ import cv2
+ from PIL import Image
+ imgs = episode['image_mid']
+ frame = cv2.imdecode(np.frombuffer(imgs[0], np.uint8), cv2.IMREAD_COLOR)
+ image = Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
+
+ # Load meta data
+ meta = np.load('meta/bc_z_meta.npy', allow_pickle=True).item()
+ print(meta['action_stat'])
+ ```
+
+ ### Dataset Creation
+
+ You can convert your own dataset into this format for pretraining or finetuning.
+
+ **Recommended process:**
+ 1. Collect data with our collector:
+    [`collect_data.py`](https://github.com/eai-yeslab/OpenEAI-Arm/blob/main/software/ros2/src/openeai_arm/examples/collect_data.py)
+    (use a task name starting with `'openeai_arm'`)
+ 2. (Optional) Write and register a data worker:
+    [register here](https://github.com/eai-yeslab/OpenEAI-VLA/blob/main/data_utils/vla_data_utils/unique_worker_fn/__init__.py),
+    and see the [worker example](https://github.com/eai-yeslab/OpenEAI-VLA/blob/main/data_utils/vla_data_utils/unique_worker_fn/openeai_arm.py)
+ 3. Convert the dataset:
+    ```bash
+    cd OpenEAI-VLA/data_utils
+    bash run.sh openeai_arm_<task_name>
+    ```
+ 4. Set your dataset path in the training config file.
+
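For orientation, an episode in the HDF5 layout described above can also be written directly with h5py. This is a minimal sketch, not the converter itself (`run.sh` above is the supported path); the shapes and attribute values here are hypothetical, and only the group/dataset/attribute names come from this card:

```python
import h5py
import numpy as np

# Hypothetical episode: 10 steps, 7-D state, 7-D action
traj_length, state_dim, action_dim = 10, 7, 7

with h5py.File("0000.hdf5", "w") as f:
    ep = f.create_group("episode_0")
    ep.create_dataset("action", data=np.zeros((traj_length, action_dim), dtype=np.float32))
    ep.create_dataset("state", data=np.zeros((traj_length, state_dim), dtype=np.float32))
    # image_mid holds one compressed byte buffer per step,
    # stored as variable-length uint8 arrays
    img = ep.create_dataset("image_mid", (traj_length,), dtype=h5py.vlen_dtype(np.uint8))
    for i in range(traj_length):
        img[i] = np.frombuffer(b"\xff\xd8placeholder", np.uint8)  # not a real JPEG
    ep.attrs["action_type"] = "eef_pose"         # hypothetical value
    ep.attrs["instruction"] = "pick up the cup"  # hypothetical value
    ep.attrs["length"] = traj_length
```

A file written this way round-trips through the manual loading example in the previous section, which is a quick sanity check before running a full conversion.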
+ ### Licensing Information
+
+ - License: MIT
+
+ ## Citation Information
+
+ If you use our dataset to pretrain your model, please cite our paper:
+ ```
+ @inproceedings{openeai_platform,
+   title  = {OpenEAI-Platform: Open-source Embodied Artificial Intelligence Hardware-Software Unified Platform},
+   author = {Jinyuan Zhang and Luoyi Fan and Leiyu Wang and Yeqiang Wang and Yichen Zhu and Cewu Lu and Nanyang Ye},
+   year   = {2026}
+ }
+ ```
+
+ If you use datasets converted from Open X-Embodiment, please also cite the following:
+ ```
+ @misc{open_x_embodiment_rt_x_2023,
+   title = {Open {X-E}mbodiment: Robotic Learning Datasets and {RT-X} Models},
+   author = {Open X-Embodiment Collaboration and Abby O'Neill and Abdul Rehman and Abhinav Gupta and Abhiram Maddukuri and Abhishek Gupta and Abhishek Padalkar and Abraham Lee and Acorn Pooley and Agrim Gupta and Ajay Mandlekar and Ajinkya Jain and Albert Tung and Alex Bewley and Alex Herzog and Alex Irpan and Alexander Khazatsky and Anant Rai and Anchit Gupta and Andrew Wang and Andrey Kolobov and Anikait Singh and Animesh Garg and Aniruddha Kembhavi and Annie Xie and Anthony Brohan and Antonin Raffin and Archit Sharma and Arefeh Yavary and Arhan Jain and Ashwin Balakrishna and Ayzaan Wahid and Ben Burgess-Limerick and Beomjoon Kim and Bernhard Schölkopf and Blake Wulfe and Brian Ichter and Cewu Lu and Charles Xu and Charlotte Le and Chelsea Finn and Chen Wang and Chenfeng Xu and Cheng Chi and Chenguang Huang and Christine Chan and Christopher Agia and Chuer Pan and Chuyuan Fu and Coline Devin and Danfei Xu and Daniel Morton and Danny Driess and Daphne Chen and Deepak Pathak and Dhruv Shah and Dieter Büchler and Dinesh Jayaraman and Dmitry Kalashnikov and Dorsa Sadigh and Edward Johns and Ethan Foster and Fangchen Liu and Federico Ceola and Fei Xia and Feiyu Zhao and Felipe Vieira Frujeri and Freek Stulp and Gaoyue Zhou and Gaurav S. Sukhatme and Gautam Salhotra and Ge Yan and Gilbert Feng and Giulio Schiavi and Glen Berseth and Gregory Kahn and Guangwen Yang and Guanzhi Wang and Hao Su and Hao-Shu Fang and Haochen Shi and Henghui Bao and Heni Ben Amor and Henrik I Christensen and Hiroki Furuta and Homanga Bharadhwaj and Homer Walke and Hongjie Fang and Huy Ha and Igor Mordatch and Ilija Radosavovic and Isabel Leal and Jacky Liang and Jad Abou-Chakra and Jaehyung Kim and Jaimyn Drake and Jan Peters and Jan Schneider and Jasmine Hsu and Jay Vakil and Jeannette Bohg and Jeffrey Bingham and Jeffrey Wu and Jensen Gao and Jiaheng Hu and Jiajun Wu and Jialin Wu and Jiankai Sun and Jianlan Luo and Jiayuan Gu and Jie Tan and Jihoon Oh and Jimmy Wu and Jingpei Lu and Jingyun Yang and Jitendra Malik and João Silvério and Joey Hejna and Jonathan Booher and Jonathan Tompson and Jonathan Yang and Jordi Salvador and Joseph J. Lim and Junhyek Han and Kaiyuan Wang and Kanishka Rao and Karl Pertsch and Karol Hausman and Keegan Go and Keerthana Gopalakrishnan and Ken Goldberg and Kendra Byrne and Kenneth Oslund and Kento Kawaharazuka and Kevin Black and Kevin Lin and Kevin Zhang and Kiana Ehsani and Kiran Lekkala and Kirsty Ellis and Krishan Rana and Krishnan Srinivasan and Kuan Fang and Kunal Pratap Singh and Kuo-Hao Zeng and Kyle Hatch and Kyle Hsu and Laurent Itti and Lawrence Yunliang Chen and Lerrel Pinto and Li Fei-Fei and Liam Tan and Linxi "Jim" Fan and Lionel Ott and Lisa Lee and Luca Weihs and Magnum Chen and Marion Lepert and Marius Memmel and Masayoshi Tomizuka and Masha Itkina and Mateo Guaman Castro and Max Spero and Maximilian Du and Michael Ahn and Michael C. Yip and Mingtong Zhang and Mingyu Ding and Minho Heo and Mohan Kumar Srirama and Mohit Sharma and Moo Jin Kim and Muhammad Zubair Irshad and Naoaki Kanazawa and Nicklas Hansen and Nicolas Heess and Nikhil J Joshi and Niko Suenderhauf and Ning Liu and Norman Di Palo and Nur Muhammad Mahi Shafiullah and Oier Mees and Oliver Kroemer and Osbert Bastani and Pannag R Sanketi and Patrick "Tree" Miller and Patrick Yin and Paul Wohlhart and Peng Xu and Peter David Fagan and Peter Mitrano and Pierre Sermanet and Pieter Abbeel and Priya Sundaresan and Qiuyu Chen and Quan Vuong and Rafael Rafailov and Ran Tian and Ria Doshi and Roberto Mart{\'i}n-Mart{\'i}n and Rohan Baijal and Rosario Scalise and Rose Hendrix and Roy Lin and Runjia Qian and Ruohan Zhang and Russell Mendonca and Rutav Shah and Ryan Hoque and Ryan Julian and Samuel Bustamante and Sean Kirmani and Sergey Levine and Shan Lin and Sherry Moore and Shikhar Bahl and Shivin Dass and Shubham Sonawani and Shubham Tulsiani and Shuran Song and Sichun Xu and Siddhant Haldar and Siddharth Karamcheti and Simeon Adebola and Simon Guist and Soroush Nasiriany and Stefan Schaal and Stefan Welker and Stephen Tian and Subramanian Ramamoorthy and Sudeep Dasari and Suneel Belkhale and Sungjae Park and Suraj Nair and Suvir Mirchandani and Takayuki Osa and Tanmay Gupta and Tatsuya Harada and Tatsuya Matsushima and Ted Xiao and Thomas Kollar and Tianhe Yu and Tianli Ding and Todor Davchev and Tony Z. Zhao and Travis Armstrong and Trevor Darrell and Trinity Chung and Vidhi Jain and Vikash Kumar and Vincent Vanhoucke and Vitor Guizilini and Wei Zhan and Wenxuan Zhou and Wolfram Burgard and Xi Chen and Xiangyu Chen and Xiaolong Wang and Xinghao Zhu and Xinyang Geng and Xiyuan Liu and Xu Liangwei and Xuanlin Li and Yansong Pang and Yao Lu and Yecheng Jason Ma and Yejin Kim and Yevgen Chebotar and Yifan Zhou and Yifeng Zhu and Yilin Wu and Ying Xu and Yixuan Wang and Yonatan Bisk and Yongqiang Dou and Yoonyoung Cho and Youngwoon Lee and Yuchen Cui and Yue Cao and Yueh-Hua Wu and Yujin Tang and Yuke Zhu and Yunchu Zhang and Yunfan Jiang and Yunshuang Li and Yunzhu Li and Yusuke Iwasawa and Yutaka Matsuo and Zehan Ma and Zhuo Xu and Zichen Jeff Cui and Zichen Zhang and Zipeng Fu and Zipeng Lin},
+   howpublished = {\url{https://arxiv.org/abs/2310.08864}},
+   year = {2023},
+ }
+
+ @inproceedings{jang2021bc,
+   title = {{BC}-Z: Zero-Shot Task Generalization with Robotic Imitation Learning},
+   author = {Eric Jang and Alex Irpan and Mohi Khansari and Daniel Kappler and Frederik Ebert and Corey Lynch and Sergey Levine and Chelsea Finn},
+   booktitle = {5th Annual Conference on Robot Learning},
+   year = {2021},
+   url = {https://openreview.net/forum?id=8kbp23tSGYv}
+ }
+
+ @misc{walke2024bridgedatav2datasetrobot,
+   title = {BridgeData V2: A Dataset for Robot Learning at Scale},
+   author = {Homer Walke and Kevin Black and Abraham Lee and Moo Jin Kim and Max Du and Chongyi Zheng and Tony Zhao and Philippe Hansen-Estruch and Quan Vuong and Andre He and Vivek Myers and Kuan Fang and Chelsea Finn and Sergey Levine},
+   year = {2024},
+   eprint = {2308.12952},
+   archivePrefix = {arXiv},
+   primaryClass = {cs.RO},
+   url = {https://arxiv.org/abs/2308.12952}
+ }
+
+ @misc{khazatsky2025droidlargescaleinthewildrobot,
+   title = {DROID: A Large-Scale In-The-Wild Robot Manipulation Dataset},
+   author = {Alexander Khazatsky and Karl Pertsch and Suraj Nair and Ashwin Balakrishna and Sudeep Dasari and Siddharth Karamcheti and Soroush Nasiriany and Mohan Kumar Srirama and Lawrence Yunliang Chen and Kirsty Ellis and Peter David Fagan and Joey Hejna and Masha Itkina and Marion Lepert and Yecheng Jason Ma and Patrick Tree Miller and Jimmy Wu and Suneel Belkhale and Shivin Dass and Huy Ha and Arhan Jain and Abraham Lee and Youngwoon Lee and Marius Memmel and Sungjae Park and Ilija Radosavovic and Kaiyuan Wang and Albert Zhan and Kevin Black and Cheng Chi and Kyle Beltran Hatch and Shan Lin and Jingpei Lu and Jean Mercat and Abdul Rehman and Pannag R Sanketi and Archit Sharma and Cody Simpson and Quan Vuong and Homer Rich Walke and Blake Wulfe and Ted Xiao and Jonathan Heewon Yang and Arefeh Yavary and Tony Z. Zhao and Christopher Agia and Rohan Baijal and Mateo Guaman Castro and Daphne Chen and Qiuyu Chen and Trinity Chung and Jaimyn Drake and Ethan Paul Foster and Jensen Gao and Vitor Guizilini and David Antonio Herrera and Minho Heo and Kyle Hsu and Jiaheng Hu and Muhammad Zubair Irshad and Donovon Jackson and Charlotte Le and Yunshuang Li and Kevin Lin and Roy Lin and Zehan Ma and Abhiram Maddukuri and Suvir Mirchandani and Daniel Morton and Tony Nguyen and Abigail O'Neill and Rosario Scalise and Derick Seale and Victor Son and Stephen Tian and Emi Tran and Andrew E. Wang and Yilin Wu and Annie Xie and Jingyun Yang and Patrick Yin and Yunchu Zhang and Osbert Bastani and Glen Berseth and Jeannette Bohg and Ken Goldberg and Abhinav Gupta and Abhishek Gupta and Dinesh Jayaraman and Joseph J Lim and Jitendra Malik and Roberto Martín-Martín and Subramanian Ramamoorthy and Dorsa Sadigh and Shuran Song and Jiajun Wu and Michael C. Yip and Yuke Zhu and Thomas Kollar and Sergey Levine and Chelsea Finn},
+   year = {2025},
+   eprint = {2403.12945},
+   archivePrefix = {arXiv},
+   primaryClass = {cs.RO},
+   url = {https://arxiv.org/abs/2403.12945}
+ }
+
+ @inproceedings{gu2023maniskill2,
+   title = {ManiSkill2: A Unified Benchmark for Generalizable Manipulation Skills},
+   author = {Gu, Jiayuan and Xiang, Fanbo and Li, Xuanlin and Ling, Zhan and Liu, Xiqiang and Mu, Tongzhou and Tang, Yihe and Tao, Stone and Wei, Xinyue and Yao, Yunchao and Yuan, Xiaodi and Xie, Pengwei and Huang, Zhiao and Chen, Rui and Su, Hao},
+   booktitle = {International Conference on Learning Representations},
+   year = {2023}
+ }
+
+ @inproceedings{fu2024mobile,
+   author = {Fu, Zipeng and Zhao, Tony Z. and Finn, Chelsea},
+   title = {Mobile ALOHA: Learning Bimanual Mobile Manipulation with Low-Cost Whole-Body Teleoperation},
+   booktitle = {arXiv},
+   year = {2024}
+ }
+
+ @misc{fang2023rh20tcomprehensiveroboticdataset,
+   title = {RH20T: A Comprehensive Robotic Dataset for Learning Diverse Skills in One-Shot},
+   author = {Hao-Shu Fang and Hongjie Fang and Zhenyu Tang and Jirong Liu and Chenxi Wang and Junbo Wang and Haoyi Zhu and Cewu Lu},
+   year = {2023},
+   eprint = {2307.00595},
+   archivePrefix = {arXiv},
+   primaryClass = {cs.RO},
+   url = {https://arxiv.org/abs/2307.00595}
+ }
+
+ @misc{bharadhwaj2023roboagent,
+   title = {RoboAgent: Generalization and Efficiency in Robot Manipulation via Semantic Augmentations and Action Chunking},
+   author = {Homanga Bharadhwaj and Jay Vakil and Mohit Sharma and Abhinav Gupta and Shubham Tulsiani and Vikash Kumar},
+   year = {2023},
+   eprint = {2309.01918},
+   archivePrefix = {arXiv},
+   primaryClass = {cs.RO}
+ }
+
+ @article{brohan2022rt,
+   title = {{RT}-1: Robotics transformer for real-world control at scale},
+   author = {Brohan, Anthony and Brown, Noah and Carbajal, Justice and Chebotar, Yevgen and Dabis, Joseph and Finn, Chelsea and Gopalakrishnan, Keerthana and Hausman, Karol and Herzog, Alex and Hsu, Jasmine and others},
+   journal = {arXiv preprint arXiv:2212.06817},
+   year = {2022}
+ }
+ ```
+
+ If you use datasets converted from UMI, please also cite the following:
+ ```
+ @article{rayyan2025mv,
+   title = {MV-UMI: A Scalable Multi-View Interface for Cross-Embodiment Learning},
+   author = {Rayyan, Omar and Abanes, John and Hafez, Mahmoud and Tzes, Anthony and Abu-Dakka, Fares},
+   journal = {arXiv preprint arXiv:2509.18757},
+   year = {2025}
+ }
+
+ @article{zhu2025touch,
+   title = {Touch in the Wild: Learning Fine-Grained Manipulation with a Portable Visuo-Tactile Gripper},
+   author = {Zhu, Xinyue and Huang, Binghao and Li, Yunzhu},
+   journal = {arXiv preprint arXiv:2507.15062},
+   year = {2025}
+ }
+
+ @article{liu2025vitamin,
+   title = {ViTaMIn: Learning Contact-Rich Tasks Through Robot-Free Visuo-Tactile Manipulation Interface},
+   author = {Liu, Fangchen and Li, Chuanyu and Qin, Yihua and Shaw, Ankit and Xu, Jing and Abbeel, Pieter and Chen, Rui},
+   journal = {arXiv preprint arXiv:2504.06156},
+   year = {2025}
+ }
+
+ @article{lin2024data,
+   title = {Data Scaling Laws in Imitation Learning for Robotic Manipulation},
+   author = {Lin, Fanqi and Hu, Yingdong and Sheng, Pingyue and Wen, Chuan and You, Jiacheng and Gao, Yang},
+   journal = {arXiv preprint arXiv:2410.18647},
+   year = {2024}
+ }
+
+ @inproceedings{ha2024umilegs,
+   title = {{UMI} on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers},
+   author = {Huy Ha and Yihuai Gao and Zipeng Fu and Jie Tan and Shuran Song},
+   year = {2024},
+   booktitle = {Proceedings of the 2024 Conference on Robot Learning}
+ }
+
+ @article{liu2024maniwav,
+   title = {ManiWAV: Learning Robot Manipulation from In-the-Wild Audio-Visual Data},
+   author = {Liu, Zeyi and Chi, Cheng and Cousineau, Eric and Kuppuswamy, Naveen and Burchfiel, Benjamin and Song, Shuran},
+   journal = {arXiv preprint arXiv:2406.19464},
+   year = {2024}
+ }
+ ```
+
+ ## Additional Information
+
+ - Contact: ynylincoln@sjtu.edu.cn
+
+ ## Notes
+
+ - The dataset is large (~3.12 TB); plan download bandwidth and storage accordingly.
+ - If you use it in research, please cite OpenEAI-VLA and the relevant source datasets.