---
license: mit
task_categories:
  - robotics
  - imitation-learning
tags:
  - robotics
  - teleoperation
  - manipulation
  - robot-learning
  - demonstration-data
size_categories:
  - 1K<n<10K
---

# PiPER Robot Teaching Episodes Dataset

This dataset contains teleoperation demonstrations recorded using the PiPER robotic system.

## Dataset Description

- Total Episodes: 13
- Unique Tasks: 12
- Total Size: ~6.2 GB
- Cameras: Dual camera setup (table_cam + wrist_cam)
- Recording Date: November 2025

## Tasks Included

- cleaningcloth
- fillamentroll
- gamecontroller
- hexwrench
- pencil
- scissors
- scissors_hidden
- screwdriver
- smallkey
- smallpaper
- smallwoodenstick
- thinmetaldisk

## Dataset Structure

```
dataset/
├── {episode_name}_{timestamp}.hdf5  # Robot state, action, and compressed image data
├── {episode_name}_{timestamp}.json  # Episode metadata
└── {episode_name}_images/
    ├── observation.images.table_cam/    # Table view images (800x720, cropped)
    │   └── frame_XXXXXX.png
    └── observation.images.wrist_cam/    # Wrist view images (vertically flipped)
        └── frame_XXXXXX.png
```

## Data Format

### HDF5 Files

Each `.hdf5` file contains:

- `observations/state`: Robot joint states (7-DOF)
- `observations/images/table_cam`: Compressed table camera images
- `observations/images/wrist_cam`: Compressed wrist camera images
- `actions`: Robot actions/commands
- `timestamps`: Frame timestamps

### JSON Metadata

Each `.json` file contains:

- Episode name and duration
- Number of frames and FPS
- State dimension and statistics (mean, std, min, max)
- Recording timestamp
- Camera configuration
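The per-dimension state statistics can be recomputed from the raw state array. A minimal NumPy sketch (the returned dict layout is illustrative, not necessarily the dataset's exact JSON schema):

```python
import numpy as np

def state_statistics(states):
    """Per-dimension statistics for an (n_frames, state_dim) array,
    matching the fields listed in the metadata (mean, std, min, max)."""
    states = np.asarray(states, dtype=np.float64)
    return {
        "mean": states.mean(axis=0).tolist(),
        "std": states.std(axis=0).tolist(),
        "min": states.min(axis=0).tolist(),
        "max": states.max(axis=0).tolist(),
    }
```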

### Image Folders

Extracted and processed images:

- `table_cam`: Cropped to 800x720 pixels (offset 300,0)
- `wrist_cam`: Vertically flipped for correct orientation
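To reproduce this preprocessing on raw frames, a Pillow sketch of the two operations described above (function names are illustrative; the raw frame resolution isn't stated, so the crop assumes frames at least 1100x720):

```python
from PIL import Image, ImageOps

def preprocess_table_cam(img):
    """Crop to the stated 800x720 region starting at x-offset 300."""
    return img.crop((300, 0, 300 + 800, 720))

def preprocess_wrist_cam(img):
    """Flip vertically (top to bottom) so the wrist view is upright."""
    return ImageOps.flip(img)
```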

## Usage

```python
from pathlib import Path
import h5py
import json
from PIL import Image

# Load episode
episode_path = "screwdriver_20251104_203022.hdf5"
with h5py.File(episode_path, 'r') as f:
    states = f['observations/state'][:]
    actions = f['actions'][:]
    timestamps = f['timestamps'][:]

# Load metadata
with open(episode_path.replace('.hdf5', '.json'), 'r') as f:
    metadata = json.load(f)
    print(f"Episode: {metadata['episode_name']}")
    print(f"Frames: {metadata['n_frames']}, Duration: {metadata['duration_seconds']:.2f}s")

# Load images
image_dir = Path(f"{metadata['episode_name']}_images")
frame_0_table = Image.open(image_dir / "observation.images.table_cam" / "frame_000000.png")
frame_0_wrist = Image.open(image_dir / "observation.images.wrist_cam" / "frame_000000.png")
```

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{piper_teaching_episodes_2025,
  title={PiPER Robot Teaching Episodes Dataset},
  author={Munasinghe, Charith and Toffetti, Giovanni},
  year={2025},
  publisher={Hugging Face},
  howpublished={\url{https://huggingface.co/datasets/charithmunasinghe/piper_picking_tests}}
}
```

## License

MIT License. See the LICENSE file for details.

## Additional Information

- Robot Platform: PiPER
- Control Interface: Teleoperation with human demonstrations
- Processing: Images have been preprocessed (cropping, flipping) for ML training

For questions or issues, please open an issue in the repository.