---
license: mit
task_categories:
- robotics
- reinforcement-learning
tags:
- LeRobot-v3
- piper-robot
- teleoperation
- manipulation
- imitation-learning
size_categories:
- 1K<n<10K
language:
- en
pretty_name: PiPER Robot Teaching Episodes
---
# PiPER Robot Teaching Episodes Dataset
**13 teleoperation demonstrations** for robot manipulation using a 7-DOF PiPER arm. Fully compatible with LeRobot v3 format.
## Quick Info
- **Episodes**: 13 | **Tasks**: 12 | **Size**: ~6.2 GB | **FPS**: 30
- **Robot**: PiPER 7-DOF arm | **Cameras**: Table (800×720) + Wrist | **Version**: v0.3
- **Format**: HDF5 + PNG images | **Compatible**: LeRobot v3, ACT, Diffusion Policy, SmolVLA
## Tasks
`cleaningcloth` `fillamentroll` `gamecontroller` `hexwrench` `pencil` `scissors` `scissors_hidden` `screwdriver` `smallkey` `smallpaper` `smallwoodenstick` `thinmetaldisk`
## Dataset Structure
```
{episode_name}_{timestamp}.hdf5 # Robot state, actions, compressed images
{episode_name}_{timestamp}.json # Episode metadata (frames, fps, stats)
{episode_name}_images/
├── observation.images.table_cam/ # 800×720 PNG frames
└── observation.images.wrist_cam/ # PNG frames (vertically flipped)
meta_data/
├── info.json # Dataset config, encoding, shapes
├── tasks.jsonl # Task definitions
└── episodes.jsonl # Episode-task mapping
info.json # Root metadata (LeRobot v3)
```
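The `meta_data` files are plain JSON Lines, so the task definitions and episode-task mapping can be read without any LeRobot tooling. A minimal sketch (field names inside each record follow LeRobot v3 conventions and may differ slightly from what is shown here):

```python
import json
from pathlib import Path

def read_jsonl(path):
    """Parse a JSON Lines file into a list of dicts."""
    with open(path) as f:
        return [json.loads(line) for line in f if line.strip()]

tasks = read_jsonl(Path("meta_data/tasks.jsonl"))
episodes = read_jsonl(Path("meta_data/episodes.jsonl"))
print(f"{len(tasks)} tasks, {len(episodes)} episodes")
```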
### HDF5 Structure
- `observations/state`: 7-DOF joint angles (degrees)
- `observations/images/table_cam`: Compressed JPEG images
- `observations/images/wrist_cam`: Compressed JPEG images
- `actions`: 7-DOF commands
- `timestamps`: Frame timestamps
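
A quick way to confirm this layout against a downloaded episode is to walk the file with `h5py` (the filename below is one example episode from this dataset; substitute any episode you have downloaded):

```python
import h5py

# Walk the episode file and print every dataset's path, shape, and dtype.
with h5py.File("screwdriver_20251104_203022.hdf5", "r") as f:
    def show(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(name, obj.shape, obj.dtype)
    f.visititems(show)
```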
### Metadata (JSON)
Each episode JSON contains: `episode_name`, `n_frames`, `duration_seconds`, `fps`, `state_dim`, `cameras`, `state_stats` (mean/std/min/max), `recording_date`
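
The per-episode statistics can be used to normalize states before training. A minimal sketch, assuming the JSON filename mirrors the HDF5 episode name and `state_stats` holds per-dimension arrays as listed above:

```python
import json
import numpy as np

# Load per-episode metadata and z-score states with the stored statistics.
with open("screwdriver_20251104_203022.json") as f:
    meta = json.load(f)

mean = np.array(meta["state_stats"]["mean"])
std = np.array(meta["state_stats"]["std"])

def normalize_state(state_deg):
    """Normalize a 7-DOF joint-angle vector (degrees) with episode stats."""
    return (np.asarray(state_deg) - mean) / (std + 1e-8)
```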
## Usage
### LeRobot Library (Recommended)
```python
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# revision pins the v0.3 release tag (requires a recent lerobot version)
dataset = LeRobotDataset("charithmunasinghe/piper_picking_tests", revision="v0.3")

for frame in dataset:
    # Each item is a flat dict keyed by feature name.
    state = frame["observation.state"]
    action = frame["action"]
    table_img = frame["observation.images.table_cam"]
    wrist_img = frame["observation.images.wrist_cam"]
```
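Because `LeRobotDataset` is a PyTorch-compatible dataset, it can be wrapped in a standard `DataLoader` for policy training. A minimal sketch, assuming `torch` is installed; batch keys mirror the per-frame keys above:

```python
import torch

# Collate individual frames into training batches; dict keys are preserved.
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))
print(batch["observation.state"].shape)  # e.g. (32, 7) joint angles
print(batch["action"].shape)             # e.g. (32, 7) commands
```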
### Visualize
```python
from lerobot.scripts.visualize_dataset import visualize_dataset
visualize_dataset(
    repo_id="charithmunasinghe/piper_picking_tests",
    episode_index=0,
    version="v0.3",
)
```
### Direct HDF5 Access
```python
import h5py
from PIL import Image

# Read robot states and actions for one episode.
with h5py.File("screwdriver_20251104_203022.hdf5", "r") as f:
    states = f["observations/state"][:]   # (n_frames, 7) joint angles in degrees
    actions = f["actions"][:]             # (n_frames, 7) commands

# Load a decoded PNG frame from the matching image directory.
img = Image.open("screwdriver_images/observation.images.table_cam/frame_000000.png")
```
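To pair every decoded frame with its state and action, the PNG directory can be walked in order. A sketch, assuming frames are named sequentially (`frame_000000.png`, `frame_000001.png`, ...) as in the example above:

```python
from pathlib import Path

import h5py
from PIL import Image

episode = "screwdriver_20251104_203022"
frame_dir = Path("screwdriver_images/observation.images.table_cam")

with h5py.File(f"{episode}.hdf5", "r") as f:
    states = f["observations/state"][:]
    actions = f["actions"][:]

# Zero-padded frame names make lexicographic sort match temporal order.
for frame_path, state, action in zip(sorted(frame_dir.glob("frame_*.png")), states, actions):
    image = Image.open(frame_path)
    # ... feed (image, state) into your model, supervise on action ...
```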
## Dataset Info
- **Collection**: Human teleoperation in lab environment
- **Preprocessing**: Table camera cropped (+300,0 offset); wrist camera flipped vertically (see the snippet below this list to undo the flip if you need the original orientation)
- **Split**: Single train split (13 episodes)
- **License**: MIT
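
If you need the wrist frames in their original orientation, the stored vertical flip can be reversed with Pillow (a minimal sketch; the frame path follows the naming shown above):

```python
from PIL import Image, ImageOps

# Wrist-camera PNGs are stored vertically flipped; ImageOps.flip reverses
# the top-to-bottom flip to recover the original camera orientation.
wrist = Image.open("screwdriver_images/observation.images.wrist_cam/frame_000000.png")
wrist_original = ImageOps.flip(wrist)
```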
## Citation
```bibtex
@dataset{piper_teaching_episodes_2025,
title={PiPER Robot Teaching Episodes Dataset},
author={Munasinghe, Charith and Toffetti, Giovanni},
year={2025},
publisher={Hugging Face},
howpublished={\url{https://huggingface.co/datasets/charithmunasinghe/piper_picking_tests}}
}
```
**Contact**: charithmunasinghe (Hugging Face)