
Block Stacking MimicGen Dataset

Description

202 demonstrations of a Franka Emika Panda robot stacking colored blocks into a bowl. Generated using NVIDIA MimicGen from source teleoperation demonstrations and augmented with Cosmos world foundation models.

Task

Objective: Pick up 3 colored cubes (Jenga-style blocks) and place them into a bowl on the table.

Success Criteria: All 3 blocks must be placed inside the bowl.

Robot Platform

  • Robot: Franka Emika Panda (7-DoF manipulator)
  • Gripper: Franka Hand (parallel jaw)
  • Simulator: NVIDIA Isaac Lab (Isaac Sim)
  • Control Mode: Joint velocity control
  • Control Frequency: 30 Hz
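Joint-velocity control at 30 Hz means each action specifies arm joint velocities that are held for one control period; the implied position update is a single Euler step. A minimal sketch (all numbers illustrative, not taken from the dataset):

```python
import numpy as np

dt = 1.0 / 30.0                 # control period at 30 Hz
q = np.zeros(7)                 # current arm joint positions (illustrative)
qd_cmd = np.full(7, 0.1)        # a joint-velocity command in rad/s (illustrative)

# Euler integration: the joint positions implied after one control step
q_next = q + qd_cmd * dt
print(q_next)
```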

Data Modalities

| Modality | Key | Shape | Type | Description |
|---|---|---|---|---|
| Table Camera RGB | obs/table_cam | (T, 200, 200, 3) | uint8 | Third-person view |
| Wrist Camera RGB | obs/wrist_cam | (T, 200, 200, 3) | uint8 | Eye-in-hand view |
| Table Camera Depth | obs/table_cam_depth | (T, 200, 200, 1) | float32 | Depth map |
| Surface Normals | obs/table_cam_normals | (T, 200, 200, 3) | float32 | Normal vectors |
| Segmentation | obs/table_cam_segmentation | (T, 200, 200, 4) | uint8 | Instance segmentation |
| End-Effector Position | obs/eef_pos | (T, 3) | float32 | XYZ position |
| End-Effector Orientation | obs/eef_quat | (T, 4) | float32 | Quaternion (x, y, z, w) |
| Joint Positions | obs/joint_pos | (T, 9) | float32 | 7 arm + 2 gripper |
| Joint Velocities | obs/joint_vel | (T, 9) | float32 | 7 arm + 2 gripper |
| Gripper Position | obs/gripper_pos | (T, 2) | float32 | Finger positions |
| Cube Positions | obs/cube_positions | (T, 9) | float32 | 3 cubes × XYZ |
| Cube Orientations | obs/cube_orientations | (T, 12) | float32 | 3 cubes × quaternion |
| Actions | actions | (T, 7) | float32 | Joint velocity commands |
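The per-cube state vectors are stored flattened; they can be reshaped into per-cube views. A small sketch with synthetic arrays (episode length and quaternion layout per cube are assumed to follow the table above):

```python
import numpy as np

T = 5  # illustrative episode length
cube_positions = np.zeros((T, 9), dtype=np.float32)      # 3 cubes × XYZ, flattened
cube_orientations = np.zeros((T, 12), dtype=np.float32)  # 3 cubes × quaternion, flattened

# Reshape into (T, n_cubes, dim) views for per-cube indexing
pos = cube_positions.reshape(T, 3, 3)
quat = cube_orientations.reshape(T, 3, 4)

print(pos.shape, quat.shape)
```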

Dataset Statistics

  • Total Demonstrations: 202
  • Average Episode Length: ~515 steps
  • Total Frames: ~104,000
  • File Size: 16 GB

File Format

HDF5 file with the following structure:

cosmos_generated_202.hdf5
└── data/
    ├── demo_0/
    │   ├── actions
    │   ├── obs/
    │   │   ├── table_cam
    │   │   ├── wrist_cam
    │   │   ├── eef_pos
    │   │   └── ...
    │   ├── states/
    │   └── initial_state/
    ├── demo_1/
    └── ...
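To inspect the layout of any HDF5 file like this one, h5py's `visititems` walks every group and dataset. A self-contained sketch using a toy file that mirrors the structure above (the real file is not required):

```python
import os
import tempfile

import h5py
import numpy as np

# Build a toy file mirroring the layout above (structure assumed from this card)
path = os.path.join(tempfile.mkdtemp(), "toy.hdf5")
with h5py.File(path, "w") as f:
    grp = f.create_group("data/demo_0")
    grp.create_dataset("actions", data=np.zeros((10, 7), dtype=np.float32))
    grp.create_dataset("obs/eef_pos", data=np.zeros((10, 3), dtype=np.float32))

# Print every group and dataset, with shapes for datasets
with h5py.File(path, "r") as f:
    f.visititems(lambda name, obj: print(name, getattr(obj, "shape", "")))
```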

Usage

Python (h5py)

import h5py
import numpy as np

# Load dataset
with h5py.File('cosmos_generated_202.hdf5', 'r') as f:
    demos = list(f['data'].keys())
    print(f"Number of demos: {len(demos)}")

    # Access a single demo
    demo = f['data/demo_0']
    actions = demo['actions'][:]
    images = demo['obs/table_cam'][:]
    eef_pos = demo['obs/eef_pos'][:]

    print(f"Episode length: {len(actions)}")
    print(f"Image shape: {images.shape}")
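Because the file is around 16 GB, it is worth knowing that h5py slicing reads only the requested portion from disk, so single frames or short clips can be fetched without loading a whole episode. A self-contained sketch on a toy file with the same camera shape (the real file is not required):

```python
import os
import tempfile

import h5py
import numpy as np

# Toy file standing in for the real dataset (shape taken from the table above)
path = os.path.join(tempfile.mkdtemp(), "toy.hdf5")
with h5py.File(path, "w") as f:
    f.create_dataset("data/demo_0/obs/table_cam",
                     data=np.zeros((100, 200, 200, 3), dtype=np.uint8))

with h5py.File(path, "r") as f:
    cam = f["data/demo_0/obs/table_cam"]
    frame = cam[0]      # reads a single frame from disk
    clip = cam[10:20]   # reads a 10-frame slice, not the whole episode
    print(frame.shape, clip.shape)
```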

PyTorch DataLoader

import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class BlockStackingDataset(Dataset):
    def __init__(self, hdf5_path):
        self.hdf5_path = hdf5_path
        self.f = None
        # Read the demo names once, then close the file so the dataset
        # object stays picklable for DataLoader worker processes
        with h5py.File(hdf5_path, 'r') as f:
            self.demos = list(f['data'].keys())

    def __len__(self):
        return len(self.demos)

    def __getitem__(self, idx):
        # Open lazily so each DataLoader worker gets its own file handle;
        # h5py handles cannot be shared safely across processes
        if self.f is None:
            self.f = h5py.File(self.hdf5_path, 'r')
        demo = self.f[f'data/{self.demos[idx]}']
        return {
            'images': torch.from_numpy(demo['obs/table_cam'][:]),
            'actions': torch.from_numpy(demo['actions'][:]),
            'eef_pos': torch.from_numpy(demo['obs/eef_pos'][:])
        }

# batch_size=1 because episode lengths vary; use padding or a custom
# collate_fn to batch episodes of different lengths
dataset = BlockStackingDataset('cosmos_generated_202.hdf5')
loader = DataLoader(dataset, batch_size=1, shuffle=True)
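Most policy networks expect normalized channel-first tensors rather than raw uint8 frames. A minimal preprocessing sketch in NumPy (shapes follow the modality table; pixel values are random stand-ins for illustration):

```python
import numpy as np

# A batch of frames shaped like obs/table_cam (values random, for illustration)
images = np.random.randint(0, 256, size=(4, 200, 200, 3), dtype=np.uint8)

# Scale to [0, 1] and move channels first: (T, H, W, C) -> (T, C, H, W)
x = images.astype(np.float32) / 255.0
x = np.transpose(x, (0, 3, 1, 2))
print(x.shape, x.dtype)
```

The same transform can be applied inside `__getitem__` before converting to a torch tensor.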

Generation Pipeline

  1. Source Demonstrations: Human teleoperation via VR hand tracking
  2. MimicGen Augmentation: Automated trajectory generation with randomized object poses
  3. Cosmos Enhancement: Visual augmentation using NVIDIA Cosmos world foundation models
  4. Quality Filtering: 202/714 trajectories passed success criteria (28.3% success rate)

Compatible Frameworks

This dataset can be converted for use with:

  • LeRobot (HuggingFace) - Convert to Parquet + MP4
  • OpenVLA - Convert to RLDS TFRecord
  • Pi0/OpenPI (Physical Intelligence) - LeRobot v2 format
  • GROOT N1 (NVIDIA) - LeRobot v2 with 224x224 images
  • Cosmos Transfer (NVIDIA) - MP4 videos + JSON annotations

See DATASET_FORMAT_GUIDE.md for conversion instructions.

Citation

If you use this dataset, please cite:

@misc{block_stacking_mimicgen_2026,
  title={Block Stacking MimicGen Dataset},
  author={Tshiamo},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/tshiamor/block-stacking-mimic}
}

License

MIT License

Acknowledgments

  • NVIDIA Isaac Lab and MimicGen teams
  • NVIDIA Cosmos for visual augmentation
  • HuggingFace for dataset hosting