# LeRobot PushT (Lance Format)

A Lance-formatted version of `lerobot/pusht` — the canonical PushT benchmark from the Diffusion Policy paper — packaged using the same three-table layout as `lance-format/lerobot-xvla-soft-fold`, so consumers can switch between datasets without changing code. Available directly from the Hub at `hf://datasets/lance-format/lerobot-pusht-lance/data`.
## Key features

- Three-table layout — `frames`, `episodes`, `videos` — so frame-level training, episode-level trajectory work, and raw video access live side by side without scattered parquet shards or sidecar MP4 directories.
- Inline MP4 segments in `episodes.lance` (one blob per camera, with `from_timestamp`/`to_timestamp` bounds) and full source MP4s in `videos.lance`, all surfaced as lazy `BlobFile` handles via `take_blobs`, so metadata scans never read the bytes.
- Frame-level observations and actions in `frames.lance`, with stable `episode_index`, `frame_index`, and `index` columns for joining or temporal iteration.
- Schema-evolution friendly — add alternate camera streams, language annotations, or model predictions later without rewriting the data.
## Tables

| Table | Rows | Purpose |
|---|---|---|
| `frames.lance` | one row per frame | Per-frame observations, actions, episode/task indices |
| `episodes.lance` | one row per episode | Full per-episode trajectories plus per-camera MP4 segment blobs and timestamp bounds |
| `videos.lance` | one row per source MP4 | Raw source video blobs and file-level provenance (path, size, sha256) |

Use `frames.lance` for low-level training (loss per timestep, state-conditioned policies). Use `episodes.lance` when you need the full trajectory and the matching video segments together. Use `videos.lance` when you want direct access to the original encoded video files.
## Schemas

### frames.lance

| Column | Type | Notes |
|---|---|---|
| `observation_state` | `list<float32>` | Robot state vector for that frame |
| `action` | `list<float32>` | Action vector for that frame |
| `timestamp` | `float32` | Canonical frame timestamp (seconds) |
| `frame_index` | `int64` | Frame index within the episode |
| `episode_index` | `int64` | Parent episode id |
| `index` | `int64` | Global frame index |
| `task_index` | `int64` | Task id |
### episodes.lance

| Column | Type | Notes |
|---|---|---|
| `episode_index` | `int64` | Episode id |
| `task_index` | `int64` | Task id |
| `fps` | `int32` | Frame rate of the episode video segments |
| `timestamps` | `list<float32>` | Per-frame timestamps |
| `actions` | `list<list<float32>>` | Per-frame action vectors |
| `observation_state` | `list<list<float32>>` | Per-frame robot state vectors |
| `<camera>_video_blob` | `large_binary` (blob-encoded) | Inline MP4 segment for each camera, read lazily via `take_blobs` |
| `<camera>_from_timestamp` | `float64` | Segment start time |
| `<camera>_to_timestamp` | `float64` | Segment end time |
### videos.lance

| Column | Type | Notes |
|---|---|---|
| `camera_angle` | `string` | Camera key |
| `chunk_index`, `file_index` | `int32` | IDs parsed from the source path |
| `relative_path`, `filename` | `string` | Provenance |
| `file_size_bytes` | `int64` | Source MP4 size |
| `sha256` | `string` | SHA-256 of the MP4 bytes |
| `video_blob` | `large_binary` (blob-encoded) | Raw source MP4 bytes |
## Pre-built indices

None bundled. Build indices on a local copy if a workload calls for them — e.g., a `BTREE` index on `frames.episode_index` for fast episode lookup, or a vector index after attaching observation embeddings via Evolve.
## Why Lance?
- Blazing Fast Random Access: Optimized for fetching scattered rows, making it ideal for random sampling, real-time ML serving, and interactive applications without performance degradation.
- Native Multimodal Support: Store text, embeddings, and other data types together in a single file. Large binary objects are loaded lazily, and vectors are optimized for fast similarity search.
- Native Index Support: Lance comes with fast, on-disk, scalable vector and FTS indexes that sit right alongside the dataset on the Hub, so you can share not only your data but also your embeddings and indexes without your users needing to recompute them.
- Efficient Data Evolution: Add new columns and backfill data without rewriting the entire dataset. This is perfect for evolving ML features, adding new embeddings, or introducing moderation tags over time.
- Versatile Querying: Supports combining vector similarity search, full-text search, and SQL-style filtering in a single query, accelerated by on-disk indexes.
- Data Versioning: Every mutation commits a new version; previous versions remain intact on disk. Tags pin a snapshot by name, so retrieval systems and training runs can reproduce against an exact slice of history.
## Load with `datasets.load_dataset`

You can load Lance datasets via the standard Hugging Face `datasets` interface, suitable when your pipeline already speaks `Dataset` / `IterableDataset` or you want a quick streaming sample. Each Lance table is a separate `datasets` config.

```python
import datasets

hf_ds = datasets.load_dataset("lance-format/lerobot-pusht-lance", split="frames", streaming=True)
for row in hf_ds.take(3):
    print(row["episode_index"], row["frame_index"], row["action"])
```
## Load with LanceDB

LanceDB is the embedded retrieval library built on top of the Lance format (docs), and it is the interface most users interact with. Each `.lance` file in `data/` is a table — open it by name. The same handles are used by the Search, Curate, Evolve, Train, Versioning, and Materialize-a-subset sections below.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/lerobot-pusht-lance/data")
frames = db.open_table("frames")
episodes = db.open_table("episodes")
videos = db.open_table("videos")
print(len(frames), len(episodes), len(videos))
```
Load with Lance
pylance is the Python binding for the Lance format and works directly with the format's lower-level APIs. Reach for it when you want to inspect dataset internals — schema, scanner, fragments, the list of pre-built indices — or when you need the blob-level take_blobs entry point that streams MP4 bytes lazily from inline storage.
import lance
ds = lance.dataset("hf://datasets/lance-format/lerobot-pusht-lance/data/frames.lance")
print(ds.count_rows(), ds.schema.names)
print(ds.list_indices())
Tip — for production use, download locally first. Streaming from the Hub works for exploration, but heavy random access to video segments and any kind of indexed search are dramatically faster against a local copy:

```shell
hf download lance-format/lerobot-pusht-lance --repo-type dataset --local-dir ./lerobot-pusht
```

Then point Lance or LanceDB at `./lerobot-pusht/data`.
## Search

PushT does not ship a vector index out of the box — observation states are low-dimensional, and most robotics workflows look up by index rather than by similarity. The bundled identifier columns (`episode_index`, `task_index`, `frame_index`) make exact lookups a single filtered scan. The example below pulls the first few frames of episode 0 from the `frames` table.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/lerobot-pusht-lance/data")
frames = db.open_table("frames")

slice_ = (
    frames.search()
    .where("episode_index = 0 AND frame_index < 10", prefilter=True)
    .select(["episode_index", "frame_index", "timestamp", "action", "observation_state"])
    .limit(10)
    .to_list()
)
for r in slice_:
    print(r["frame_index"], r["timestamp"], r["action"])
```
For similarity-style search across states or actions, attach an embedding column via Evolve and build an `IVF_PQ` index on it (see Evolve below). For visual similarity over rendered frames, the pre-extracted-frames pattern in Train below produces a table that can carry a learned image embedding alongside the pixels.
## Curate

A typical curation pass for a robotics workflow starts with an episode-level filter — pick episodes with a particular task, length, or initial condition — and then drops down to the frames within those episodes. Stacking predicates inside a single filtered scan keeps the result small and explicit, and the bounded `.limit(...)` makes it cheap to inspect.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/lerobot-pusht-lance/data")
episodes = db.open_table("episodes")
frames = db.open_table("frames")

# Pick a handful of episodes for the default task.
ep_rows = (
    episodes.search()
    .where("task_index = 0", prefilter=True)
    .select(["episode_index", "fps", "observation_images_image_from_timestamp",
             "observation_images_image_to_timestamp"])
    .limit(10)
    .with_row_id(True)
    .to_list()
)
ep_ids = [r["episode_index"] for r in ep_rows]

# Pull the frames belonging to those episodes for the next stage.
frame_rows = (
    frames.search()
    .where(f"episode_index IN ({', '.join(map(str, ep_ids))})", prefilter=True)
    .select(["episode_index", "frame_index", "timestamp", "action", "observation_state"])
    .limit(2000)
    .to_list()
)
print(f"{len(ep_rows)} episodes, {len(frame_rows)} frames selected")
```

Neither scan reads any video bytes. The MP4 segments live in the blob-encoded `_video_blob` columns and stay on disk until something explicitly asks for them.
## Evolve

Lance stores each column independently, so a new column can be appended without rewriting the existing data. The lightest form is a SQL expression: derive the new column from columns that already exist, and Lance computes it once and persists it. The example below adds an `action_magnitude` and a `large_action` flag to the `frames` table, either of which can then be used directly in `where` clauses.

Note: Mutations require a local copy of the dataset, since the Hub mount is read-only. See the Materialize-a-subset section at the end of this card for a streaming pattern that downloads only the rows and columns you need.

```python
import lancedb

db = lancedb.connect("./lerobot-pusht/data")  # local copy required for writes
frames = db.open_table("frames")

frames.add_columns({
    "action_magnitude": "SQRT(action[1] * action[1] + action[2] * action[2])",
    "large_action": "SQRT(action[1] * action[1] + action[2] * action[2]) > 5.0",
})
```
If the values you want to attach already live in another table (offline reward labels, classifier predictions, learned observation embeddings), merge them in by joining on the appropriate key — `index` for frames or `episode_index` for episodes:

```python
import pyarrow as pa

rewards = pa.table({
    "index": pa.array([0, 1, 2]),
    "reward_to_go": pa.array([1.4, 1.3, 1.2]),
})
frames.merge(rewards, on="index")
```
The original columns and the inline video blobs are untouched, so existing code that does not reference the new columns continues to work unchanged. For column values that require a Python computation (e.g., running a visual encoder over the decoded video frames), Lance provides a batch-UDF API — see the Lance data evolution docs.
## Train

A common pattern for vision-conditioned policy training is to pre-extract decoded frame pixels once into a derived LanceDB table — one row per frame, with the per-frame `action` and `observation_state` already joined in — and train against that table with the regular projection-based dataloader. `take_blobs` is the mechanism that makes the extraction step tractable: each episode's MP4 segment is randomly addressable in `episodes.lance` (the `from_timestamp` / `to_timestamp` columns give the segment bounds), so the pass can subset bytes on demand and write decoded frames into a fresh table without an external file store. Other workflows project the `_video_blob` columns from `episodes.lance` directly and decode at the batch boundary, or skip pixels entirely and train a state-only policy on `frames.lance` — the right shape is workload-specific. The actual training loop is the same `Permutation.identity(tbl).select_columns(...)` snippet in every case; only the source table and the column list change.
For a state-only policy, the `frames` table is already in the right shape — no pre-extraction needed:

```python
import lancedb
from lancedb.permutation import Permutation
from torch.utils.data import DataLoader

db = lancedb.connect("hf://datasets/lance-format/lerobot-pusht-lance/data")
frames = db.open_table("frames")

train_ds = Permutation.identity(frames).select_columns(["observation_state", "action"])
loader = DataLoader(train_ds, batch_size=128, shuffle=True, num_workers=4)
```
For a vision-conditioned policy, train against a pre-extracted frames-with-pixels table that joins each frame's decoded image to its `action` and `observation_state`:

```python
import lancedb
from lancedb.permutation import Permutation
from torch.utils.data import DataLoader

db = lancedb.connect("./lerobot-pusht-frames")  # local table produced by the one-time extraction
tbl = db.open_table("train")

train_ds = Permutation.identity(tbl).select_columns(["image", "observation_state", "action"])
loader = DataLoader(train_ds, batch_size=64, shuffle=True, num_workers=4)
```
The inline `_video_blob` storage and `take_blobs` still earn their place outside of the training loop — visualizing an episode in a notebook, sampling for human review, one-off evaluation against a held-out task, and the pre-extraction step itself — but they are not the dataloader.
## Versioning

Every mutation to a Lance table, whether it adds a column, merges labels, or builds an index, commits a new version. Each of `frames`, `episodes`, and `videos` is versioned independently, so a column added to `frames` does not bump the version of `episodes`. You can list versions and inspect the history directly from the Hub copy; creating new tags requires a local copy, since tags are writes.

```python
import lancedb

db = lancedb.connect("hf://datasets/lance-format/lerobot-pusht-lance/data")
frames = db.open_table("frames")

print("frames version:", frames.version)
print("history:", frames.list_versions())
print("tags:", frames.tags.list())
```
Once you have a local copy, tag the table for reproducibility:

```python
local_db = lancedb.connect("./lerobot-pusht/data")
local_frames = local_db.open_table("frames")
local_frames.tags.create("pusht-v1", local_frames.version)
```
Reopen by tag or by version number against either the Hub copy or a local one:

```python
frames_v1 = db.open_table("frames", version="pusht-v1")
frames_v5 = db.open_table("frames", version=5)
```
Pinning supports two workflows. A policy locked to `pusht-v1` keeps reproducing the same behavior while the dataset evolves in parallel. A training experiment pinned to the same tag can be rerun later against the exact same frames, so changes in metrics reflect model changes rather than data drift.
## Materialize a subset

Reads from the Hub are lazy, so exploratory queries only transfer the columns and row groups they touch. Mutating operations (Evolve, tag creation, index builds) need a writable backing store, and a training pipeline benefits from a local copy with fast random access into the video blobs. Both can be served by a subset of the dataset rather than the full corpus. The pattern is to stream a filtered query through `.to_batches()` into a new local table; only the projected columns and matching row groups cross the wire, and the bytes never fully materialize in Python memory.

```python
import lancedb

remote_db = lancedb.connect("hf://datasets/lance-format/lerobot-pusht-lance/data")
remote_frames = remote_db.open_table("frames")

batches = (
    remote_frames.search()
    .where("task_index = 0 AND episode_index < 50")
    .select(["episode_index", "frame_index", "index", "timestamp", "action", "observation_state"])
    .to_batches()
)

local_db = lancedb.connect("./pusht-task0-subset")
local_db.create_table("frames", batches)
```
The resulting `./pusht-task0-subset` is a first-class LanceDB database. Every snippet in the Evolve, Train, and Versioning sections above works against it by swapping `hf://datasets/lance-format/lerobot-pusht-lance/data` for `./pusht-task0-subset`. The same pattern applies to `episodes` and `videos` — narrow each table to the rows your workload needs, and the resulting database stays small enough to index and iterate on cheaply.
## Source & license

Converted from `lerobot/pusht` (LeRobot v3.0 dataset format). PushT is released under the Apache 2.0 license by the LeRobot project and the Diffusion Policy authors.
## Citation

```bibtex
@misc{cadene2024lerobot,
  title={LeRobot: State-of-the-art Machine Learning for Real-World Robotics in PyTorch},
  author={R{\'e}mi Cadene and Simon Alibert and Alexander Soare and Quentin Gallou{\'e}dec and Adil Zouitine and Steven Palma and Pepijn Kooijmans and Michel Aractingi and Mustafa Shukor and Martino Russi and Francesco Capuano and Caroline Pascal and Jade Choghari and Jess Moss and Thomas Wolf},
  year={2024},
  url={https://github.com/huggingface/lerobot}
}

@inproceedings{chi2023diffusion,
  title={Diffusion Policy: Visuomotor Policy Learning via Action Diffusion},
  author={Chi, Cheng and Feng, Siyuan and Du, Yilun and Xu, Zhenjia and Cousineau, Eric and Burchfiel, Benjamin and Song, Shuran},
  booktitle={Robotics: Science and Systems},
  year={2023}
}
```