---
license: apache-2.0
task_categories:
  - robotics
  - reinforcement-learning
tags:
  - robotics
  - imitation-learning
  - diffusion-policy
  - manipulation
  - fetch
  - mujoco
  - lerobot
size_categories:
  - 1K<n<10K
---

# DiffPick: Fetch Pick-and-Place Demonstrations

A clean dataset of 200 successful pick-and-place demonstrations collected with a scripted expert policy in the `FetchPickAndPlace-v4` MuJoCo environment. Designed for training vision-based imitation-learning policies (Diffusion Policy, ACT, BC).

Part of the DiffPick project — a from-scratch implementation of a Diffusion Policy pipeline with ROS2 deployment.

## Dataset Stats

| Property | Value |
| --- | --- |
| Episodes | 200 |
| Total frames | 5,489 |
| Mean episode length | 27.4 steps |
| Min / max episode length | 20 / 35 steps |
| FPS | 25 |
| Image resolution | 96×96 RGB |
| Success rate (during collection) | 97.1% (200 of 206 attempts kept) |

## Features

| Key | Shape | Type | Description |
| --- | --- | --- | --- |
| `observation.image` | (3, 96, 96) | float32 | Front-view RGB camera |
| `observation.state` | (10,) | float32 | Robot proprioception only (gripper xyz, finger widths, velocities). No object pose; it must be inferred from the image. |
| `action` | (4,) | float32 | End-effector delta (dx, dy, dz) ∈ [-1, 1] plus gripper command (-1 close, +1 open) |
| `task` | | string | "Pick up the block and place it at the target location." |

### Why proprioception-only state?

The state vector deliberately excludes object position. This forces a learned policy to develop visual grounding rather than copying ground-truth coordinates. The result: policies trained on this dataset must actually see the object in the RGB stream to succeed — closer to a real-world deployment scenario where object pose isn't directly observable.
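For reference, the 10-D state could be sliced out of the standard 25-D Fetch observation vector roughly as sketched below. The index layout is an assumption based on the gymnasium-robotics Fetch environments; the exact extraction used by this dataset's collection code may differ.

```python
import numpy as np

def proprio_state(obs):
    """Sketch: extract a 10-D proprioceptive state from a Fetch observation dict.

    Assumed gymnasium-robotics layout of obs["observation"] (25-D):
      [0:3]   gripper xyz            [3:6]   object xyz          (dropped)
      [6:9]   object relative pos    (dropped)
      [9:11]  finger widths          [11:20] object rot / vels   (dropped)
      [20:23] gripper linear vel     [23:25] finger velocities
    """
    o = obs["observation"]
    state = np.concatenate([o[0:3], o[9:11], o[20:23], o[23:25]])
    return state.astype(np.float32)  # shape (10,)
```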

## Expert Policy

Demonstrations were generated by a hand-crafted state machine controller:

```text
APPROACH  (gripper open, hover above object)
    ↓
DESCEND   (gripper open, lower to object)
    ↓
GRASP     (close gripper, hold for 8 steps)
    ↓
PLACE     (move to target, gripper closed)
```

The controller uses proportional control in end-effector space (no inverse kinematics required, since the environment exposes a 4-D end-effector action interface). Episodes that succeeded too quickly (< 15 steps, indicating the object spawned near the target at reset) were filtered out.

Source: `data_collection/scripted_policy.py`
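
For illustration, here is a minimal sketch of what such a state-machine controller can look like. It is not the repo's `scripted_policy.py`; the observation indices, thresholds, and gains are assumptions based on the standard gymnasium-robotics Fetch interface.

```python
import numpy as np

class ScriptedPickPlace:
    """Illustrative APPROACH -> DESCEND -> GRASP -> PLACE controller (sketch only).

    Not the repo's data_collection/scripted_policy.py. Indices assume the
    gymnasium-robotics Fetch layout: gripper xyz at obs["observation"][0:3],
    object xyz at [3:6]; thresholds and gains are guesses.
    """

    def __init__(self, kp=10.0, hover=0.08, grasp_steps=8):
        self.kp, self.hover, self.grasp_steps = kp, hover, grasp_steps
        self.grasp_counter = 0  # steps spent holding the grasp

    def act(self, obs):
        grip = obs["observation"][0:3]
        obj = obs["observation"][3:6]
        goal = obs["desired_goal"]

        if self.grasp_counter >= self.grasp_steps:              # PLACE: carry to target
            delta, fingers = goal - grip, -1.0
        elif self.grasp_counter > 0:                            # GRASP: keep holding
            self.grasp_counter += 1
            delta, fingers = np.zeros(3), -1.0
        elif np.linalg.norm(grip - obj) < 0.01:                 # contact -> start GRASP
            self.grasp_counter = 1
            delta, fingers = np.zeros(3), -1.0
        elif np.linalg.norm((grip - obj)[:2]) > 0.01:           # APPROACH: hover above object
            delta, fingers = obj + np.array([0.0, 0.0, self.hover]) - grip, 1.0
        else:                                                   # DESCEND: drop onto object
            delta, fingers = obj - grip, 1.0

        # Proportional control on the position delta; the env expects actions in [-1, 1].
        return np.clip(np.concatenate([self.kp * delta, [fingers]]), -1.0, 1.0)
```

Because the clipped proportional term saturates far from the target, a controller like this moves at near-maximum speed during transit and slows smoothly as it closes in on the object and the goal.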

## Usage

```python
from lerobot.datasets.lerobot_dataset import LeRobotDataset

dataset = LeRobotDataset("e-cagan/diffpick")
sample = dataset[0]

print(sample["observation.image"].shape)  # torch.Size([3, 96, 96])
print(sample["observation.state"].shape)  # torch.Size([10])
print(sample["action"].shape)             # torch.Size([4])
```

## Reproduction

```bash
git clone https://github.com/e-cagan/diffpick
cd diffpick
pip install -r requirements.txt

# Collect raw demos
python -m data_collection.collect --n_episodes 200

# Convert to LeRobotDataset format
python -m data_collection.to_lerobot_dataset \
    --raw_dir data/raw_demos \
    --repo_id <your-username>/diffpick \
    --fps 25
```

## Intended Use

- Training Diffusion Policy for vision-conditioned manipulation
- Benchmarking imitation learning algorithms (BC vs ACT vs DP); a minimal BC baseline sketch follows this list
- Learning resource for ROS2 + MuJoCo + LeRobot integration
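
As a reference point for the BC baseline, here is a minimal behavior-cloning training loop on this dataset. `BCPolicy` and all hyperparameters are illustrative assumptions, not part of the DiffPick repo.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from lerobot.datasets.lerobot_dataset import LeRobotDataset

class BCPolicy(nn.Module):
    """Hypothetical BC baseline: small CNN over the 96x96 image + 10-D state -> 4-D action."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, 5, stride=2), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2), nn.ReLU(),
            nn.Flatten(),
        )
        with torch.no_grad():
            feat_dim = self.encoder(torch.zeros(1, 3, 96, 96)).shape[1]
        self.head = nn.Sequential(
            nn.Linear(feat_dim + 10, 256), nn.ReLU(),
            nn.Linear(256, 4), nn.Tanh(),  # actions live in [-1, 1]
        )

    def forward(self, image, state):
        return self.head(torch.cat([self.encoder(image), state], dim=-1))

dataset = LeRobotDataset("e-cagan/diffpick")
loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)
policy = BCPolicy()
optim = torch.optim.Adam(policy.parameters(), lr=1e-4)

for epoch in range(10):
    for batch in loader:
        pred = policy(batch["observation.image"], batch["observation.state"])
        loss = nn.functional.mse_loss(pred, batch["action"])
        optim.zero_grad()
        loss.backward()
        optim.step()
```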

## Limitations

- Single environment seed family (`FetchPickAndPlace-v4` defaults). No domain randomization for backgrounds, lighting, or distractors.
- Single front-facing 96×96 camera. No wrist cam, no depth.
- The scripted expert is deterministic given a seed, so there is no behavioral diversity (no left-hand/right-hand approach modes, etc.). This may limit the multi-modal advantages of Diffusion Policy.
- The object is a single blue cube. No category generalization.

## Citation

If you use this dataset, please cite:

```bibtex
@misc{apaydin2026diffpick,
  author = {Apaydın, Emin Çağan},
  title = {DiffPick: A Diffusion Policy Pipeline for Fetch Pick-and-Place},
  year = {2026},
  publisher = {GitHub},
  url = {https://github.com/e-cagan/diffpick}
}
```

## License

Apache 2.0