e-cagan committed on
Commit 6e803ac · verified · 1 Parent(s): d352621

Upload README.md with huggingface_hub

Files changed (1)
  1. README.md +113 -140
README.md CHANGED
@@ -1,153 +1,126 @@
  ---
  license: apache-2.0
  task_categories:
- - robotics
  tags:
- - LeRobot
- - robotics
- - imitation-learning
- - diffusion-policy
- - manipulation
- - fetch
- configs:
- - config_name: default
-   data_files: data/*/*.parquet
  ---

- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
-
- <a class="flex" href="https://huggingface.co/spaces/lerobot/visualize_dataset?path=e-cagan/diffpick">
-   <img class="block dark:hidden" src="https://huggingface.co/datasets/huggingface/badges/resolve/main/visualize-this-dataset-xl.svg"/>
-   <img class="hidden dark:block" src="https://huggingface.co/datasets/huggingface/badges/resolve/main/visualize-this-dataset-xl-dark.svg"/>
- </a>
-
- ## Dataset Description
-
- - **Homepage:** [More Information Needed]
- - **Paper:** [More Information Needed]
- - **License:** apache-2.0
-
- ## Dataset Structure
-
- [meta/info.json](meta/info.json):
- ```json
- {
-   "codebase_version": "v3.0",
-   "fps": 25,
-   "features": {
-     "observation.image": {
-       "dtype": "video",
-       "shape": [96, 96, 3],
-       "names": ["height", "width", "channels"],
-       "info": {
-         "video.height": 96,
-         "video.width": 96,
-         "video.codec": "av1",
-         "video.pix_fmt": "yuv420p",
-         "video.is_depth_map": false,
-         "video.fps": 25,
-         "video.channels": 3,
-         "has_audio": false
-       }
-     },
-     "observation.state": {
-       "dtype": "float32",
-       "shape": [10],
-       "names": ["gripper_x", "gripper_y", "gripper_z", "finger_left", "finger_right", "gripper_vx", "gripper_vy", "gripper_vz", "finger_left_v", "finger_right_v"]
-     },
-     "action": {
-       "dtype": "float32",
-       "shape": [4],
-       "names": ["dx", "dy", "dz", "gripper_cmd"]
-     },
-     "timestamp": {"dtype": "float32", "shape": [1], "names": null},
-     "frame_index": {"dtype": "int64", "shape": [1], "names": null},
-     "episode_index": {"dtype": "int64", "shape": [1], "names": null},
-     "index": {"dtype": "int64", "shape": [1], "names": null},
-     "task_index": {"dtype": "int64", "shape": [1], "names": null}
-   },
-   "total_episodes": 200,
-   "total_frames": 5489,
-   "total_tasks": 1,
-   "chunks_size": 1000,
-   "data_files_size_in_mb": 100,
-   "video_files_size_in_mb": 200,
-   "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
-   "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
-   "robot_type": "fetch",
-   "splits": {"train": "0:200"}
- }
  ```

  ## Citation

- **BibTeX:**

  ```bibtex
- [More Information Needed]
- ```
  ---
  license: apache-2.0
  task_categories:
+ - robotics
+ - reinforcement-learning
  tags:
+ - robotics
+ - imitation-learning
+ - diffusion-policy
+ - manipulation
+ - fetch
+ - mujoco
+ - lerobot
+ size_categories:
+ - 1K<n<10K
  ---

+ # DiffPick: Fetch Pick-and-Place Demonstrations
+
+ A clean dataset of **200 successful pick-and-place demonstrations** collected from a scripted expert policy in the [`FetchPickAndPlace-v4`](https://robotics.farama.org/envs/fetch/pick_and_place/) MuJoCo environment. Designed for training **vision-based imitation learning** policies (Diffusion Policy, ACT, BC).
+
+ Part of the [DiffPick project](https://github.com/e-cagan/diffpick) — a from-scratch implementation of a Diffusion Policy pipeline with ROS2 deployment.
+
+ ## Dataset Stats
+
+ | Property | Value |
+ |---|---|
+ | Episodes | 200 |
+ | Total frames | 5,489 |
+ | Mean episode length | 27.4 steps |
+ | Min / max length | 20 / 35 steps |
+ | FPS | 25 |
+ | Image resolution | 96×96 RGB |
+ | Success rate (during collection) | 97.1% (200 of 206 attempts kept) |
+ ## Features
+
+ | Key | Shape | Type | Description |
+ |---|---|---|---|
+ | `observation.image` | (3, 96, 96) | float32 | Front-view RGB camera |
+ | `observation.state` | (10,) | float32 | Robot proprioception only (gripper xyz, finger widths, velocities). **No object pose** — must be inferred from the image. |
+ | `action` | (4,) | float32 | End-effector delta (dx, dy, dz) ∈ [-1, 1] + gripper command (-1 close, +1 open) |
+ | `task` | string | — | "Pick up the block and place it at the target location." |
+
+ ### Why proprioception-only state?
+
+ The state vector deliberately **excludes object position**. This forces a learned policy to develop visual grounding rather than copying ground-truth coordinates. The result: policies trained on this dataset must actually *see* the object in the RGB stream to succeed — closer to a real-world deployment scenario where object pose isn't directly observable.
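The 10-D state layout in the table above can be unpacked by position; a minimal numpy sketch (the `unpack_state` helper is hypothetical, not part of this repo or LeRobot — the field order comes from the dataset's `observation.state` feature spec):

```python
import numpy as np

# Field order as declared in the dataset's observation.state feature spec.
STATE_NAMES = [
    "gripper_x", "gripper_y", "gripper_z",
    "finger_left", "finger_right",
    "gripper_vx", "gripper_vy", "gripper_vz",
    "finger_left_v", "finger_right_v",
]

def unpack_state(state: np.ndarray) -> dict:
    """Map a (10,) proprioception vector to named scalar fields."""
    assert state.shape == (len(STATE_NAMES),)
    return dict(zip(STATE_NAMES, state.tolist()))

state = np.arange(10, dtype=np.float32)
print(unpack_state(state)["gripper_z"])  # 2.0
```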
+ ## Expert Policy
+
+ Demonstrations were generated by a hand-crafted state-machine controller:
+
+ ```
+ APPROACH (gripper open, hover above object)
+     ↓
+ DESCEND (gripper open, lower to object)
+     ↓
+ GRASP (close gripper, hold for 8 steps)
+     ↓
+ PLACE (move to target, gripper closed)
+ ```
+
+ The controller uses proportional control in end-effector space (no IK required, since the env exposes a 4-D end-effector action interface). Episodes that succeeded too quickly (< 15 steps, indicating the object spawned near the target at reset) were filtered out.
+
+ Source: [`data_collection/scripted_policy.py`](https://github.com/e-cagan/diffpick/blob/main/data_collection/scripted_policy.py)
+
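The four phases above can be sketched as a proportional state machine; a minimal, self-contained approximation (the class name, gain, hover offset, and thresholds are assumptions — the actual controller lives in `data_collection/scripted_policy.py`):

```python
import numpy as np

# Phases of the scripted expert, in execution order.
APPROACH, DESCEND, GRASP, PLACE = range(4)

HOVER_HEIGHT = 0.10   # assumed hover offset above the object (metres)
K_P = 5.0             # assumed proportional gain
GRASP_STEPS = 8       # "hold for 8 steps" from the diagram above

class ScriptedPickSketch:
    """Proportional pick-and-place state machine (illustrative only)."""

    def __init__(self):
        self.phase = APPROACH
        self.grasp_counter = 0

    def act(self, grip_pos, obj_pos, goal_pos):
        """Return a 4-D action: clipped (dx, dy, dz) plus gripper command."""
        if self.phase == APPROACH:
            target, gripper = obj_pos + np.array([0.0, 0.0, HOVER_HEIGHT]), 1.0
            if np.linalg.norm(grip_pos - target) < 0.01:
                self.phase = DESCEND
        elif self.phase == DESCEND:
            target, gripper = obj_pos, 1.0
            if np.linalg.norm(grip_pos - target) < 0.005:
                self.phase = GRASP
        elif self.phase == GRASP:
            target, gripper = grip_pos, -1.0   # hold still while closing
            self.grasp_counter += 1
            if self.grasp_counter >= GRASP_STEPS:
                self.phase = PLACE
        else:  # PLACE
            target, gripper = goal_pos, -1.0
        # Delta commands are clipped to [-1, 1], matching the action spec.
        delta = np.clip(K_P * (target - grip_pos), -1.0, 1.0)
        return np.concatenate([delta, [gripper]]).astype(np.float32)
```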
+ ## Usage
+
+ ```python
+ from lerobot.datasets.lerobot_dataset import LeRobotDataset
+
+ dataset = LeRobotDataset("e-cagan/diffpick")
+ sample = dataset[0]
+
+ print(sample["observation.image"].shape)  # torch.Size([3, 96, 96])
+ print(sample["observation.state"].shape)  # torch.Size([10])
+ print(sample["action"].shape)             # torch.Size([4])
+ ```
+
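Diffusion-style policies are commonly trained on states and actions normalized to [-1, 1] using per-dimension dataset statistics; a minimal numpy sketch (helper names are hypothetical, not the LeRobot normalization API):

```python
import numpy as np

def minmax_stats(x: np.ndarray):
    """Per-dimension min/max over an (N, D) array of states or actions."""
    return x.min(axis=0), x.max(axis=0)

def normalize(x, lo, hi):
    """Map each dimension to [-1, 1]; guard against zero span."""
    span = np.where(hi > lo, hi - lo, 1.0)
    return 2.0 * (x - lo) / span - 1.0

def denormalize(x, lo, hi):
    """Inverse of normalize, used when decoding predicted actions."""
    span = np.where(hi > lo, hi - lo, 1.0)
    return (x + 1.0) * span / 2.0 + lo

# Example on synthetic (dx, dy, dz, gripper_cmd) actions.
actions = np.random.default_rng(0).uniform(-0.6, 0.6, size=(5489, 4))
lo, hi = minmax_stats(actions)
norm = normalize(actions, lo, hi)   # per-dimension range is now [-1, 1]
```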
+ ## Reproduction
+
+ ```bash
+ git clone https://github.com/e-cagan/diffpick
+ cd diffpick
+ pip install -r requirements.txt
+
+ # Collect raw demos
+ python -m data_collection.collect --n_episodes 200
+
+ # Convert to LeRobotDataset format
+ python -m data_collection.to_lerobot_dataset \
+     --raw_dir data/raw_demos \
+     --repo_id <your-username>/diffpick \
+     --fps 25
  ```

+ ## Intended Use
+
+ - Training **Diffusion Policy** for vision-conditioned manipulation
+ - Benchmarking imitation learning algorithms (BC vs. ACT vs. DP)
+ - Learning resource for ROS2 + MuJoCo + LeRobot integration
+
+ ## Limitations
+
+ - Single environment seed family (`FetchPickAndPlace-v4` defaults). No domain randomization for backgrounds, lighting, or distractors.
+ - Single front-facing 96×96 camera. No wrist cam, no depth.
+ - The scripted expert is deterministic given a seed — no behavioral diversity (no left-hand/right-hand approach modes, etc.). This may limit the multi-modal advantages of Diffusion Policy.
+ - The object is a single blue cube. No category generalization.

  ## Citation

+ If you use this dataset, please cite:

  ```bibtex
+ @misc{apaydin2026diffpick,
+   author = {Apaydın, Emin Çağan},
+   title = {DiffPick: A Diffusion Policy Pipeline for Fetch Pick-and-Place},
+   year = {2026},
+   publisher = {GitHub},
+   url = {https://github.com/e-cagan/diffpick}
+ }
+ ```
+
+ ## License
+
+ Apache 2.0