LivUMI Dataset

This repository contains an imitation-learning / robot-learning dataset for the LivUMI dual-arm robot, laid out in the LeRobot-compatible v2.1 format (codebase_version: v2.1). Data are organized by episode and include multiple camera streams, depth, proprioceptive state, and time indexing. See meta/info.json for the full schema and statistics.
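
For a quick look at the schema, the feature definitions and path templates in meta/info.json can be inspected directly. The following is a minimal sketch, assuming a local checkout at a hypothetical dataset_root and LeRobot-style dtype/shape keys in the feature entries:

```python
import json
from pathlib import Path

dataset_root = Path("LivUMI")  # hypothetical path to a local checkout of this repository

# meta/info.json carries the global metadata: codebase_version, robot_type,
# chunking parameters, path templates, and the per-feature definitions.
info = json.loads((dataset_root / "meta" / "info.json").read_text())

print(info.get("codebase_version"), info.get("robot_type"))

# "features" maps each stream name to its definition; the dtype/shape keys
# used below follow the usual LeRobot convention and are assumptions here.
for name, feature in info.get("features", {}).items():
    print(name, feature.get("dtype"), feature.get("shape"))
```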


Directory layout

  • meta/ — metadata and indices: info.json (global info, feature definitions, path templates for data and videos), episodes.jsonl (per-episode tasks and lengths), tasks.jsonl (task strings and their task_index mapping), plus optional cache files such as .cached_info.json.
  • data/ — frame-wise tabular data, usually Parquet. Path pattern: data/chunk-{chunk:03d}/episode_{episode:06d}.parquet, matching data_path in info.json. Contains timestamps, frame indices, dual-arm end-effector poses and gripper values, and other non-image fields (image-like modalities are typically referenced via external video or image files). A path-resolution sketch follows this list.
  • videos/ — MP4 videos, grouped by chunk and observation key. Path pattern: videos/chunk-{chunk:03d}/{video_key}/episode_{episode:06d}.mp4, matching video_path in info.json. Typical video_key values include left/right RealSense RGB, colorized depth, and left/right fisheye streams (see the actual subdirectories in this dataset).
  • images/ — per-frame images (here, mainly depth maps). Streams whose dtype is image in info.json are stored under this tree. Layout: {feature_path}/episode_{episode:06d}/frame_{frame:06d}.png (e.g. raw depth for the left/right cameras).
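
Putting the path patterns together, resolving the files for one episode might look like the sketch below. It assumes the common LeRobot convention that an episode's chunk id is episode_index // chunks_size, and it uses hypothetical names for dataset_root and video_key; the paths are built directly from the patterns above rather than via str.format on the templates, since the exact placeholder names inside data_path / video_path may differ.

```python
import json
from pathlib import Path

import pandas as pd

dataset_root = Path("LivUMI")  # hypothetical local checkout
info = json.loads((dataset_root / "meta" / "info.json").read_text())

episode_index = 0
# Assumed convention: episodes fill chunks of at most chunks_size episodes,
# so the chunk id is an integer division.
chunk = episode_index // info["chunks_size"]

parquet_path = dataset_root / f"data/chunk-{chunk:03d}/episode_{episode_index:06d}.parquet"
video_key = "left_realsense_rgb"  # hypothetical key; list the real ones under videos/chunk-000/
video_path = dataset_root / f"videos/chunk-{chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4"

frames = pd.read_parquet(parquet_path)  # timestamps, indices, poses, gripper values, ...
print(parquet_path, video_path, len(frames))
```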

Statistics for this copy (from meta/info.json)

This checkout may be a small or example subset. Its key properties include:

  • Robot type: LivUMI
  • Chunking: directories like chunk-000; chunks_size is the maximum number of episodes per chunk
  • Features: dual-arm 6D end-effector poses and gripper values, multiple RGB / depth / fisheye modalities (video or PNG), timestamp / frame_index / episode_index / task_index, etc.

For the complete list and tensor shapes, refer to the features section in meta/info.json.


Usage notes

  • Episode list and tasks: parse meta/episodes.jsonl and meta/tasks.jsonl (a minimal parsing sketch follows this list).
  • Multimodal alignment: within an episode, align Parquet rows, video frames, and PNGs under images/ by frame_index or timestamp.
  • Training stacks: with the LeRobot ecosystem, point data loaders at the dataset root and follow the v2.1 dataset conventions.
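
As a rough illustration of the first two notes, the sketch below reads the episode and task indices and joins a few per-frame rows back to their task strings. Field names such as episode_index, task_index, task, frame_index, and timestamp follow the usual LeRobot v2.1 layout and are assumptions about this particular copy; dataset_root is again a hypothetical local path.

```python
import json
from pathlib import Path

import pandas as pd

dataset_root = Path("LivUMI")  # hypothetical local checkout

def read_jsonl(path):
    """Read a JSON-Lines file (one JSON object per line)."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

info = json.loads((dataset_root / "meta" / "info.json").read_text())
episodes = read_jsonl(dataset_root / "meta" / "episodes.jsonl")
tasks = read_jsonl(dataset_root / "meta" / "tasks.jsonl")

# Assumed field names: tasks.jsonl maps task_index -> task string.
task_by_index = {t["task_index"]: t["task"] for t in tasks}

ep_index = episodes[0]["episode_index"]
chunk = ep_index // info["chunks_size"]
rows = pd.read_parquet(
    dataset_root / f"data/chunk-{chunk:03d}/episode_{ep_index:06d}.parquet"
)

# Within an episode, Parquet rows, video frames, and PNGs under images/ share
# the same frame_index, so a row's images can be located from that index
# (or from the timestamp when frame rates differ).
for _, row in rows.head(3).iterrows():
    print(int(row["frame_index"]), float(row["timestamp"]),
          task_by_index.get(int(row["task_index"])))
```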

License and citation

If the data come from a third-party project or paper, follow the original license and cite the appropriate references (add specific citations here if applicable).


Chinese documentation: README_zh.md.
