Description of Collected Data
Task 1: pass the knife
The task has 3 modes, each with 35 demos:
- pass the knife with the sharp end pointing towards the human
- pass the knife with the handle pointing towards the human and the sharp end pointing right
- pass the knife with the handle pointing towards the human and the sharp end pointing left
Task 2: push the block
The task has 2 modes, each with 50 demos:
- push the block from the left
- push the block from the right
Task 3: put items in the box
The task has 3 modes, each with 30 demos, distinguished by the order in which the items go into the box:
- black box, strawberries, blue box
- blue box, black box, strawberries
- strawberries, black box, blue box
Robot Data Processing
Utilities for turning raw robot HDF5 recordings into a synchronized dataset and then into the LeRobot v2.1 format for training.
Pipeline:
raw image + low-dim HDF5
        │  sync_image_low_dim.py
        ▼
synced HDF5 ───► visualize_synced_data.py (per-demo MP4 previews)
        │
        │  convert_synced_h5_to_lerobot.py
        ▼
LeRobot v2.1 dataset folder
Environment setup
Create and activate a conda env, then install the dependencies:
conda create -n robotdata python=3.10 -y
conda activate robotdata
pip install h5py numpy opencv-python datasets
pip install "lerobot @ git+https://github.com/huggingface/lerobot@0cf864870cf29f4738d3ade893e6fd13fbd7cdb5"
Why this specific commit is required for Pi-0.5 / OpenPI:
LeRobot 0.4.3+ writes datasets in the v3.0 format, which OpenPI (Pi-0 / Pi-0.5) cannot read. OpenPI requires the v2.1 format, which is produced by LeRobot commit 0cf864870cf29f4738d3ade893e6fd13fbd7cdb5 (reports itself as version 0.1.0). Do not upgrade LeRobot unless you also upgrade the downstream training code.
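To confirm that pip actually resolved the pinned commit rather than a newer release, check the reported version (assuming the package exposes the standard __version__ attribute):

python -c "import lerobot; print(lerobot.__version__)"   # expect 0.1.0, not 0.4.x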
Notes:
- Python 3.10 is the most broadly compatible with this LeRobot commit; 3.11 also works.
- lerobot pulls in torch, huggingface_hub, and other heavy deps. If you need a specific CUDA build of torch, install it before lerobot using the selector on pytorch.org.
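For example, a CUDA 12.1 build of torch can be installed first from the official index (URL pattern from pytorch.org; swap cu121 for your CUDA version):

pip install torch --index-url https://download.pytorch.org/whl/cu121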
Quick sanity check:
python -c "import h5py, numpy, cv2, datasets, lerobot; print('ok')"
Scripts
1. sync_image_low_dim.py – align two HDF5 streams
Merges an image HDF5 and a low-dimensional HDF5 into a single synced file. Image timestamps are the master timeline; low-dim samples are aligned by nearest timestamp. Handles zero-valued timestamps, sudden timestamp jumps, and non-overlapping intervals by skipping affected demos. After demos are excluded or skipped, the remaining demos are renamed to be consecutive (demo_0, demo_1, …) in the output so there are no gaps.
Inputs (per HDF5): data/<demo>/obs/<timestamp_key> plus any number of per-demo datasets.
Output: data/<demo>/obs/{timestamp, <image_keys…>, <lowdim_keys…>} and optional data/<demo>/actions.
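The alignment itself is ordinary nearest-neighbor matching on sorted timestamp arrays. A minimal sketch of that step (array names are placeholders; the script's real implementation may differ):

import numpy as np

def nearest_indices(image_ts, lowdim_ts):
    # For each image timestamp, pick the index of the closest low-dim timestamp.
    # Both arrays are assumed 1-D, sorted, and in the same time units.
    right = np.clip(np.searchsorted(lowdim_ts, image_ts), 1, len(lowdim_ts) - 1)
    left = right - 1
    closer_right = (lowdim_ts[right] - image_ts) < (image_ts - lowdim_ts[left])
    return np.where(closer_right, right, left)

idx = nearest_indices(image_ts, lowdim_ts)   # one low-dim row per image frame
synced_lowdim = lowdim_data[idx]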
Example:
python sync_image_low_dim.py --image-h5 /path/raw_images.hdf5 --lowdim-h5 /path/raw_lowdim.hdf5 --output-h5 /path/synced.h5 --allow-missing
Useful flags:
- --image-keys, --lowdim-keys – restrict which obs datasets to copy (defaults to all except timestamp).
- --exclude-demo demo_4 demo_5 – drop specific demos. Remaining demos are reindexed.
- --skip-n N – keep every (N+1)-th frame after syncing (e.g. --skip-n 2 → keep frames 0, 3, 6, …); see the one-liner below.
- --allow-missing – log and skip demos with missing keys instead of failing.
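The --skip-n rule is just a stride over the already-synced frames; in NumPy terms (illustrative only):

kept = synced_frames[:: skip_n + 1]   # --skip-n 2 keeps indices 0, 3, 6, ...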
2. visualize_synced_data.py – render per-demo MP4 previews
Renders each demo to an MP4 with selected camera views side-by-side and optional lowdim overlays as on-frame text. Useful to sanity-check a sync before running the LeRobot conversion.
Example:
python visualize_synced_data.py /path/synced.h5 --out-dir ./vis --fps 10 --image-keys agentview_image oak_image --overlay-keys robot0_eef_pos robot0_gripper_qpos
Outputs ./vis/<demo>.mp4 for each demo.
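The rendering reduces to stacking the selected views and writing frames with OpenCV. A stripped-down sketch of that loop (key names from the example above; assumes every view shares the same (T, H, W, 3) uint8 shape, already in BGR order):

import cv2
import h5py
import numpy as np

with h5py.File("synced.h5", "r") as f:
    obs = f["data/demo_0/obs"]
    views = [obs["agentview_image"][:], obs["oak_image"][:]]
    T, H, W = views[0].shape[:3]
    out = cv2.VideoWriter("vis/demo_0.mp4", cv2.VideoWriter_fourcc(*"mp4v"), 10,
                          (W * len(views), H))   # size is (width, height)
    for t in range(T):
        frame = np.hstack([v[t] for v in views])              # side-by-side panel
        cv2.putText(frame, f"t={t}", (10, 30), cv2.FONT_HERSHEY_SIMPLEX,
                    0.8, (255, 255, 255), 2)                  # simple text overlay
        out.write(frame)
    out.release()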
3. convert_synced_h5_to_lerobot.py – synced HDF5 → LeRobot v2.1
Produces a LeRobot dataset directly in --output-dir. The folder must not already exist.
Example (30 Hz → 10 Hz, 2 cameras, 8-dim state):
python convert_synced_h5_to_lerobot.py --synced-h5 /path/synced.h5 --output-dir /path/lerobot_dataset --fps 10 --source-fps 30 --task "pass the knife by the sharp side" --image-map agentview_image:base_rgb oak_image:wrist_rgb --state-keys robot0_joint_pos robot0_gripper_qpos --action-source next_state --image-size 256 256
Key flags:
- --output-dir PATH – final dataset folder (must not exist; parent is created if needed).
- --fps N / --source-fps M – target and source frame rates. M must be divisible by N; the script subsamples by stride M/N (see the sketch below). If --source-fps is omitted, it is estimated from the first demo's timestamps.
- --image-map src:dst [...] – rename HDF5 image keys to LeRobot feature names.
- --state-keys k1 k2 [...] – concatenate these lowdim datasets into a single state vector (order matters).
- --action-source {next_state, hdf5_actions} – use the next state as the action when the HDF5 has no actions dataset.
- --image-size H W – resize images. Omit to keep native resolution.
- --task "..." – language instruction stored with every frame.
- --repo-id user/name + --push-to-hub – optional, pushes to the Hugging Face Hub.
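How the stride and the next_state action construction fit together, as a sketch (the repeated final action is an assumption about the boundary handling, not confirmed from the script):

import numpy as np

def subsample_and_build_actions(state, source_fps, fps):
    # state: (T, D) lowdim array recorded at source_fps
    assert source_fps % fps == 0, "--source-fps must be divisible by --fps"
    stride = source_fps // fps
    s = state[::stride]                            # e.g. 30 Hz -> 10 Hz, stride 3
    # --action-source next_state: action[t] = state[t + 1]; the last action is
    # repeated so observations and actions stay the same length
    actions = np.concatenate([s[1:], s[-1:]], axis=0)
    return s, actions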
Output layout (LeRobot v2.1):
<output-dir>/
  meta/   info.json, episodes.jsonl, tasks.jsonl, episodes_stats.jsonl
  data/   chunk-000/episode_<6digit>.parquet
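The format version is recorded in meta/info.json, so a quick pre-flight check that OpenPI will accept the folder looks like:

import json
from pathlib import Path

info = json.loads(Path("lerobot_dataset/meta/info.json").read_text())
assert info["codebase_version"] == "v2.1", info["codebase_version"]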
Typical workflow
# 1. sync raw HDF5s
python sync_image_low_dim.py --image-h5 raw_images.hdf5 --lowdim-h5 raw_lowdim.hdf5 --output-h5 synced.h5 --allow-missing
# 2. eyeball the result
python visualize_synced_data.py synced.h5 --out-dir vis --fps 10 --image-keys agentview_image oak_image --overlay-keys robot0_eef_pos robot0_gripper_qpos
# 3. convert to LeRobot
python convert_synced_h5_to_lerobot.py --synced-h5 synced.h5 --output-dir ./lerobot_dataset --fps 10 --source-fps 30 --task "your instruction" --image-map agentview_image:base_rgb oak_image:wrist_rgb --state-keys robot0_joint_pos robot0_gripper_qpos --action-source next_state --image-size 256 256
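As a final check, the converted folder should load with the pinned LeRobot (import path as of commit 0cf8648; the repo id below is a placeholder, since the data is read from the local root):

from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

ds = LeRobotDataset("your_user/your_dataset", root="./lerobot_dataset")
print(ds.num_episodes, ds.num_frames)   # should match your demo and frame counts
print(ds[0].keys())                     # per-frame features: state, action, images, ...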