---
license: cc-by-4.0
task_categories:
- robotics
tags:
- LeRobot
- Robotic manipulation
pretty_name: BridgeData V2 Scripted Demos
size_categories:
- 100K<n<1M
---

## BridgeData V2 Scripted Demos

Scripted demonstrations from [BridgeData V2](https://rail-berkeley.github.io/bridgedata/).

Ported from the raw *scripted_6_18* data at full resolution to the LeRobotDataset v3.0 format (0.01 TiB, 0.1k inodes).

<div align="center" style="margin: 16px 0;">
  <video controls autoplay loop muted playsinline style="max-width: 100%; border-radius: 10px;">
    <source src="https://huggingface.co/datasets/jnogga/bridge_data_v2_scripted/resolve/main/bridge_example_episode.mp4" type="video/mp4">
    Your browser does not support the video tag.
  </video>
</div>

For the teleoperated trajectories with language annotations, see [jnogga/bridge_data_v2_teleop](https://huggingface.co/datasets/jnogga/bridge_data_v2_teleop).
## Dataset Structure

Note that the available cameras vary between episodes. Missing camera perspectives are padded, and the corresponding *_available* sample fields serve as a mask.
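As a sketch of this convention: a frame from an episode recorded without camera 1 carries a padded image plus a `False` availability flag. The field names follow `meta/info.json`; the zero padding value and the helper below are illustrative assumptions, not part of the dataset API.

```python
import numpy as np

# Illustrative frame: camera_0 was recorded, camera_1 is a padded placeholder.
# The zero-filled padding value here is an assumption for illustration.
frame = {
    "observation.images.camera_0": np.random.rand(480, 640, 3).astype(np.float32),
    "observation.images.camera_1": np.zeros((480, 640, 3), dtype=np.float32),  # padded
    "observation.images.camera_0_available": np.array([True]),
    "observation.images.camera_1_available": np.array([False]),
}

def available_views(frame, n_cameras=5):
    """Collect only the camera images whose *_available mask is set."""
    views = {}
    for i in range(n_cameras):
        key = f"observation.images.camera_{i}"
        mask = frame.get(f"{key}_available")
        if mask is not None and bool(mask[0]):
            views[key] = frame[key]
    return views

views = available_views(frame)  # only camera_0 survives the mask
```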
[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v3.0",
    "robot_type": "widow_x",
    "fps": 5,
    "data_files_size_in_mb": 100.0,
    "video_files_size_in_mb": 200.0,
    "chunks_size": 1000,
    "total_episodes": 9701,
    "total_frames": 456260,
    "total_tasks": 9701,
    "splits": {"train": "0:9701"},
    "data_path": "data/chunk-{chunk_index:03d}/file_{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file_{file_index:03d}.mp4",
    "features": {
        "action.cartesian": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"],
            "fps": 5
        },
        "action.gripper_position": {"dtype": "float32", "shape": [1], "names": null, "fps": 5},
        "observation.cartesian": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"],
            "fps": 5
        },
        "observation.gripper_position": {"dtype": "float32", "shape": [1], "names": null, "fps": 5},
        "observation.eef_transform": {
            "dtype": "float32",
            "shape": [7],
            "names": ["position.x", "position.y", "position.z", "quaternion.w", "quaternion.x", "quaternion.y", "quaternion.z"],
            "fps": 5
        },
        "observation.joint_position": {
            "dtype": "float32",
            "shape": [6],
            "names": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5"],
            "fps": 5
        },
        "observation.joint_velocity": {
            "dtype": "float32",
            "shape": [6],
            "names": ["joint_0", "joint_1", "joint_2", "joint_3", "joint_4", "joint_5"],
            "fps": 5
        },
        "frame_index": {"dtype": "int64", "shape": [1], "names": null, "fps": 5},
        "timestamp": {"dtype": "float32", "shape": [1], "names": null, "fps": 5},
        "index": {"dtype": "int64", "shape": [1], "names": null, "fps": 5},
        "task_index": {"dtype": "int64", "shape": [1], "names": null, "fps": 5},
        "episode_index": {"dtype": "int64", "shape": [1], "names": null, "fps": 5},
        "observation.images.camera_0_available": {"dtype": "bool", "shape": [1], "names": null, "fps": 5},
        "observation.images.camera_1_available": {"dtype": "bool", "shape": [1], "names": null, "fps": 5},
        "observation.images.camera_2_available": {"dtype": "bool", "shape": [1], "names": null, "fps": 5},
        "observation.images.camera_3_available": {"dtype": "bool", "shape": [1], "names": null, "fps": 5},
        "observation.images.camera_4_available": {"dtype": "bool", "shape": [1], "names": null, "fps": 5},
        "observation.images.camera_0": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 5,
                "video.channels": 3,
                "has_audio": false
            },
            "fps": 5
        },
        "observation.images.camera_1": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 5,
                "video.channels": 3,
                "has_audio": false
            },
            "fps": 5
        },
        "observation.images.camera_2": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 5,
                "video.channels": 3,
                "has_audio": false
            },
            "fps": 5
        },
        "observation.images.camera_3": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 5,
                "video.channels": 3,
                "has_audio": false
            },
            "fps": 5
        },
        "observation.images.camera_4": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "h264",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 5,
                "video.channels": 3,
                "has_audio": false
            },
            "fps": 5
        }
    }
}
```
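The counts in `meta/info.json` also give a quick sense of scale, and the `data_path` template shows where the parquet chunks live. A small sanity check (the episode-to-file assignment itself is stored in the dataset's episode metadata, so the indices formatted here are purely illustrative):

```python
# Scale check using the counts from meta/info.json.
total_episodes = 9701
total_frames = 456260
fps = 5

avg_frames = total_frames / total_episodes   # ~47 frames per episode
avg_seconds = avg_frames / fps               # ~9.4 s per episode at 5 fps

# Resolve a concrete file location from the data_path template.
data_path = "data/chunk-{chunk_index:03d}/file_{file_index:03d}.parquet"
first_file = data_path.format(chunk_index=0, file_index=0)
```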

## Getting started

```py
# pip install lerobot
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("jnogga/bridge_data_v2_scripted")
```

See [bridge_example.ipynb](bridge_example.ipynb) for a more detailed example.
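The `*.cartesian` and `observation.eef_transform` features pack an end-effector pose as position (x, y, z) followed by a wxyz quaternion, per the `names` in `meta/info.json`. A minimal numpy sketch of unpacking such a 7-vector (the sample pose values are made up; only the layout follows the metadata):

```python
import numpy as np

def unpack_cartesian(vec):
    """Split a 7-D cartesian pose into position and a wxyz quaternion,
    and build the corresponding 3x3 rotation matrix."""
    pos, (w, x, y, z) = vec[:3], vec[3:]
    rot = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return pos, rot

# Identity orientation: quaternion (w=1, x=y=z=0) yields the identity matrix.
pose = np.array([0.3, 0.0, 0.15, 1.0, 0.0, 0.0, 0.0], dtype=np.float32)
position, rotation = unpack_cartesian(pose)
```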

## Citation

All credit goes to the original authors of BridgeData V2. If you find their work helpful, please cite:

**BibTeX:**

```bibtex
@inproceedings{walke2023bridgedata,
  title={BridgeData V2: A Dataset for Robot Learning at Scale},
  author={Walke, Homer and Black, Kevin and Lee, Abraham and Kim, Moo Jin and Du, Max and Zheng, Chongyi and Zhao, Tony and Hansen-Estruch, Philippe and Vuong, Quan and He, Andre and Myers, Vivek and Fang, Kuan and Finn, Chelsea and Levine, Sergey},
  booktitle={Conference on Robot Learning (CoRL)},
  year={2023}
}
```