---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description

This dataset contains the first set of teleoperated demonstrations collected during a two-day hackathon using the LeRobot library and SO-101 robot arms in a leader–follower setup. Each episode shows the follower arm picking up one colored cube and placing it onto the matching colored cross inside a 2×2 grid.

Two synchronized RGB cameras were used:

- **Top camera** (`observation.images.front`): overhead, provides a full 2D view of the workspace (arm, cube, grid).
- **Front/low camera** (`observation.images.left`): mounted slightly above ground level, facing the arm and grid to capture z-axis cues and the arm's own pose.

The background was masked with cardboard panels, but ambient lighting varied throughout the day; this variation is preserved in the data and is useful for robustness studies.

The dataset is intended for vision-based imitation learning, multi-view fusion, and tabletop manipulation research.

### Use Cases

- **Imitation Learning**: Behavior cloning from teleoperated demonstrations.
- **Multi-view Perception**: Fusing the top and front perspectives for depth inference without explicit depth sensors.
- **Robustness to Lighting**: Evaluating policy sensitivity to illumination drift.
- **State–Action Alignment**: Leveraging synchronized proprioception and images.
- **Policy Bootstrapping**: Pretraining on this single-cube task before multi-cube curricula.

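As a minimal illustration of the state–action alignment above, the sketch below builds (state, action) supervision pairs for behavior cloning. This is not the LeRobot API; the per-frame record structure is a hypothetical stand-in mirroring one parquet row, and the joint ordering is taken from `meta/info.json`:

```python
# Joint ordering shared by `action` and `observation.state` in meta/info.json.
JOINTS = [
    "shoulder_pan.pos",
    "shoulder_lift.pos",
    "elbow_flex.pos",
    "wrist_flex.pos",
    "wrist_roll.pos",
    "gripper.pos",
]

def encode(named_positions):
    """Flatten a dict of named joint positions into the 6-dim float
    vector layout used by the `action` / `observation.state` columns."""
    return [float(named_positions[name]) for name in JOINTS]

def to_bc_pairs(frames):
    """Build (state, action) pairs for behavior cloning from per-frame
    records (hypothetical dicts mirroring one parquet row each)."""
    return [(f["observation.state"], f["action"]) for f in frames]
```

Because proprioception and images are synchronized per frame, the same pairing extends directly to (image, state, action) triples for vision-based policies.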
## Data Collection

### Teleoperation & Hardware

- **Leader–follower teleop**: A human drives a leader arm; the follower SO-101 replicates the motion to produce demonstrations.
- **Workspace**: Tabletop with a 2×2 grid; only one cell holds a colored cross, and one cube is placed onto its matching cross per episode.
- **Cameras**:
  - **Front** (`observation.images.front`): static overhead view.
  - **Left** (`observation.images.left`): static frontal view emphasizing depth.
- **Environment**: Cardboard background; illumination changes over time are present in the data.

### Episode Protocol

1. Move to a pre-grasp pose and visually localize the target cube.
2. Approach and grasp the cube.
3. Transport the cube and align it over the colored cross.
4. Place, release, and return to a neutral pose.

## Known Limitations

- **Lighting drift**: Brightness and color temperature vary across episodes; consider color constancy, normalization, or photometric augmentation.
- **Occlusions**: The gripper and cube may occlude each other from the front camera during close approaches.
- **No depth sensor**: RGB only; consider multi-view fusion or learned depth cues.
- **Action semantics**: Confirm whether actions are delta-pose or joint velocities; the `.pos` feature names in `meta/info.json` suggest absolute joint positions, but verify against the metadata.
- **Early-phase variability**: As the first batch, some episodes include exploratory motions, hesitations, or failed initial grasps that later recover. These are useful for learning robustness, but consider filtering them out for clean behavior cloning.

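As a concrete example of the color-constancy suggestion above, here is a minimal gray-world normalization sketch in plain Python. It is a simplification (real pipelines would operate on NumPy or torch tensors), but it shows the idea: rescale each channel so its mean matches the global mean, countering illumination color drift.

```python
def gray_world(image):
    """Gray-world color constancy: rescale each RGB channel so its mean
    equals the global mean across channels.

    `image` is a nested list of shape (height, width, 3).
    """
    n = len(image) * len(image[0])
    # Per-channel means over all pixels.
    means = [sum(px[c] for row in image for px in row) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [[[px[c] * gray / means[c] for c in range(3)] for px in row]
            for row in image]

# A frame with a strong red cast is pulled back toward neutral gray:
balanced = gray_world([[[120, 60, 60], [180, 90, 90]]])
```

Applying such a correction per frame (or per episode) reduces the lighting drift noted above before feeding images to a policy.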
## Additional Information

- **Homepage:** [deel-ai](https://www.irt-saintexupery.com/deel/)
- **License:** apache-2.0

## Dataset Structure

[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v3.0",
    "robot_type": "so101_follower",
    "total_episodes": 206,
    "total_frames": 84098,
    "total_tasks": 1,
    "chunks_size": 1000,
    "data_files_size_in_mb": 100,
    "video_files_size_in_mb": 500,
    "fps": 30,
    "splits": {
        "train": "0:206"
    },
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [6]
        },
        "observation.state": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [6]
        },
        "observation.images.left": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.front": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
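The `data_path` and `video_path` templates above expand with standard Python string formatting; a small sketch (the helper names are mine, not LeRobot's):

```python
# Path templates copied from meta/info.json (indices are zero-padded to 3 digits).
DATA_PATH = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
VIDEO_PATH = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

def data_file(chunk_index, file_index):
    """Relative path of one parquet shard."""
    return DATA_PATH.format(chunk_index=chunk_index, file_index=file_index)

def video_file(video_key, chunk_index, file_index):
    """Relative path of one video shard for a given camera key."""
    return VIDEO_PATH.format(video_key=video_key,
                             chunk_index=chunk_index, file_index=file_index)

print(data_file(0, 0))  # data/chunk-000/file-000.parquet
print(video_file("observation.images.front", 0, 0))
```

With `chunks_size: 1000` and 206 episodes, all data here should fit in chunk 0, but the same helpers cover larger collections.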