---
license: mit
task_categories:
- robotics
tags:
- robotics
- manipulation
- franka
- visuomotor
- isaac-sim
- isaac-lab
- mimicgen
- cosmos
- block-stacking
- imitation-learning
size_categories:
- n<1K
---
# Block Stacking MimicGen Dataset

## Description
202 demonstrations of a Franka Emika Panda robot stacking colored blocks into a bowl. Generated using NVIDIA MimicGen from source teleoperation demonstrations and augmented with Cosmos world foundation models.
## Task

**Objective:** Pick up 3 colored cubes (Jenga-style blocks) and place them into a bowl on the table.

**Success criteria:** All 3 blocks must be inside the bowl.
## Robot Platform
- Robot: Franka Emika Panda (7-DoF manipulator)
- Gripper: Franka Hand (parallel jaw)
- Simulator: NVIDIA Isaac Lab (Isaac Sim)
- Control Mode: Joint velocity control
- Control Frequency: 30 Hz
## Data Modalities

| Modality | Key | Shape | Type | Description |
|---|---|---|---|---|
| Table Camera RGB | `obs/table_cam` | (T, 200, 200, 3) | uint8 | Third-person view |
| Wrist Camera RGB | `obs/wrist_cam` | (T, 200, 200, 3) | uint8 | Eye-in-hand view |
| Table Camera Depth | `obs/table_cam_depth` | (T, 200, 200, 1) | float32 | Depth map |
| Surface Normals | `obs/table_cam_normals` | (T, 200, 200, 3) | float32 | Normal vectors |
| Segmentation | `obs/table_cam_segmentation` | (T, 200, 200, 4) | uint8 | Instance segmentation |
| End-Effector Position | `obs/eef_pos` | (T, 3) | float32 | XYZ position |
| End-Effector Orientation | `obs/eef_quat` | (T, 4) | float32 | Quaternion (x, y, z, w) |
| Joint Positions | `obs/joint_pos` | (T, 9) | float32 | 7 arm + 2 gripper |
| Joint Velocities | `obs/joint_vel` | (T, 9) | float32 | 7 arm + 2 gripper |
| Gripper Position | `obs/gripper_pos` | (T, 2) | float32 | Finger positions |
| Cube Positions | `obs/cube_positions` | (T, 9) | float32 | 3 cubes x XYZ |
| Cube Orientations | `obs/cube_orientations` | (T, 12) | float32 | 3 cubes x quaternion |
| Actions | `actions` | (T, 7) | float32 | Joint velocity commands |
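The packed cube-state vectors are per-cube concatenations, so they can be unpacked with a single reshape. A minimal sketch on synthetic arrays (the dataset file is not assumed present, and the (x, y, z, w) layout for cube quaternions is an assumption, mirroring `obs/eef_quat`):

```python
import numpy as np

T = 5  # episode length (synthetic, for illustration)
cube_positions = np.zeros((T, 9), dtype=np.float32)      # stand-in for obs/cube_positions
cube_orientations = np.zeros((T, 12), dtype=np.float32)  # stand-in for obs/cube_orientations
cube_orientations[:, 3::4] = 1.0  # identity quaternions, w at index 3 of each (x, y, z, w)

# Unpack flat per-step vectors into (T, n_cubes, dim) arrays
pos = cube_positions.reshape(T, 3, 3)      # (T, cube, xyz)
quat = cube_orientations.reshape(T, 3, 4)  # (T, cube, xyzw)

print(pos.shape, quat.shape)  # (5, 3, 3) (5, 3, 4)
```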
## Dataset Statistics

- Total Demonstrations: 202
- Average Episode Length: ~515 steps
- Total Frames: ~104,000
- File Size: 16 GB
## File Format

A single HDF5 file with the following structure:

```
cosmos_generated_202.hdf5
└── data/
    ├── demo_0/
    │   ├── actions
    │   ├── obs/
    │   │   ├── table_cam
    │   │   ├── wrist_cam
    │   │   ├── eef_pos
    │   │   └── ...
    │   ├── states/
    │   └── initial_state/
    ├── demo_1/
    └── ...
```
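The layout above can also be inspected programmatically with h5py's `visititems`. A sketch on a tiny in-memory stand-in file (the real dataset is not assumed present; `driver='core'` with `backing_store=False` never touches disk):

```python
import h5py
import numpy as np

# Build a tiny in-memory file mimicking the documented layout
with h5py.File('demo.hdf5', 'w', driver='core', backing_store=False) as f:
    g = f.create_group('data/demo_0')
    g.create_dataset('actions', data=np.zeros((4, 7), dtype=np.float32))
    g.create_dataset('obs/eef_pos', data=np.zeros((4, 3), dtype=np.float32))

    # Walk the hierarchy; datasets have a .shape, groups report None
    names = []
    f.visititems(lambda name, obj: names.append((name, getattr(obj, 'shape', None))))
    for name, shape in names:
        print(name, shape)
```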
## Usage

### Python (h5py)

```python
import h5py

with h5py.File('cosmos_generated_202.hdf5', 'r') as f:
    demos = list(f['data'].keys())
    print(f"Number of demos: {len(demos)}")

    # Access a single demo
    demo = f['data/demo_0']
    actions = demo['actions'][:]
    images = demo['obs/table_cam'][:]
    eef_pos = demo['obs/eef_pos'][:]

    print(f"Episode length: {len(actions)}")
    print(f"Image shape: {images.shape}")
```
### PyTorch DataLoader

```python
import h5py
import torch
from torch.utils.data import Dataset, DataLoader

class BlockStackingDataset(Dataset):
    def __init__(self, hdf5_path):
        self.f = h5py.File(hdf5_path, 'r')
        self.demos = list(self.f['data'].keys())

    def __len__(self):
        return len(self.demos)

    def __getitem__(self, idx):
        demo = self.f[f'data/{self.demos[idx]}']
        return {
            'images': torch.from_numpy(demo['obs/table_cam'][:]),
            'actions': torch.from_numpy(demo['actions'][:]),
            'eef_pos': torch.from_numpy(demo['obs/eef_pos'][:]),
        }

dataset = BlockStackingDataset('cosmos_generated_202.hdf5')
# batch_size=1 because episodes have different lengths
loader = DataLoader(dataset, batch_size=1, shuffle=True)
```
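Because episode lengths vary (~515 steps on average), the per-episode `Dataset` above can only batch with `batch_size=1`. Frame-level sampling, common for behavior cloning, instead needs a map from a global frame index to a (demo, step) pair. A minimal sketch with synthetic episode lengths (the lengths and function name here are illustrative, not part of the dataset):

```python
import bisect
import numpy as np

episode_lengths = [4, 6, 5]          # synthetic stand-ins for per-demo lengths
cum = np.cumsum(episode_lengths)     # cumulative frame counts: [4, 10, 15]

def frame_to_demo_step(i):
    """Map a global frame index to (episode index, step within episode)."""
    ep = bisect.bisect_right(cum, i)
    step = i - (cum[ep - 1] if ep > 0 else 0)
    return ep, int(step)

print(frame_to_demo_step(0))   # (0, 0) -> first frame of demo_0
print(frame_to_demo_step(4))   # (1, 0) -> first frame of demo_1
print(frame_to_demo_step(14))  # (2, 4) -> last frame of demo_2
```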
## Generation Pipeline

- Source Demonstrations: Human teleoperation via VR hand tracking
- MimicGen Augmentation: Automated trajectory generation with randomized object poses
- Cosmos Enhancement: Visual augmentation using NVIDIA Cosmos world foundation models
- Quality Filtering: 202/714 trajectories passed the success criteria (28.3% success rate)
## Compatible Frameworks

This dataset can be converted for use with:

- LeRobot (HuggingFace) - Convert to Parquet + MP4
- OpenVLA - Convert to RLDS TFRecord
- Pi0/OpenPI (Physical Intelligence) - LeRobot v2 format
- GROOT N1 (NVIDIA) - LeRobot v2 with 224x224 images
- Cosmos Transfer (NVIDIA) - MP4 videos + JSON annotations

See DATASET_FORMAT_GUIDE.md for conversion instructions.
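One concrete step several of these conversions share is resizing the 200x200 frames to 224x224 (e.g. for GROOT N1). A dependency-free nearest-neighbor sketch, shown only as an illustration; real conversion pipelines would typically use OpenCV or PIL with proper interpolation:

```python
import numpy as np

def resize_nearest(frames, size=224):
    """Nearest-neighbor resize of (T, H, W, C) frames using numpy indexing only."""
    T, H, W, C = frames.shape
    rows = np.arange(size) * H // size   # source row for each output row
    cols = np.arange(size) * W // size   # source column for each output column
    return frames[:, rows[:, None], cols[None, :], :]

frames = np.zeros((2, 200, 200, 3), dtype=np.uint8)  # stand-in for obs/table_cam
print(resize_nearest(frames).shape)  # (2, 224, 224, 3)
```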
## Citation

If you use this dataset, please cite:

```bibtex
@misc{block_stacking_mimicgen_2026,
  title={Block Stacking MimicGen Dataset},
  author={Tshiamo},
  year={2026},
  publisher={HuggingFace},
  url={https://huggingface.co/datasets/tshiamor/block-stacking-mimic}
}
```
## License

MIT License

## Acknowledgments

- NVIDIA Isaac Lab and MimicGen teams
- NVIDIA Cosmos for visual augmentation
- HuggingFace for dataset hosting