---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Description
This dataset contains the first set of teleoperated demonstrations collected during a two-day hackathon using the LeRobot library and SO-101 robot arms in a leader–follower setup.
Each episode shows the follower arm picking one colored cube and placing it onto the matching colored cross inside a 2×2 grid.
Two synchronized RGB cameras were used:
- **Top camera** (stream `observation.images.front`): overhead, provides a full 2D view of the workspace (arm, cube, grid).
- **Front/low camera** (stream `observation.images.left`): mounted slightly above ground level, facing the arm and grid to capture z-axis cues and the arm's own pose.
The background was masked with cardboard panels, but ambient lighting varied throughout the day; this variation is preserved and is useful for robustness studies.
The dataset is intended for vision-based imitation learning, multi-view fusion, and tabletop manipulation research.
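For a quick look at the tabular part of the dataset (states, actions, timestamps), the Parquet files declared in the card's `configs` entry can be loaded with the Hugging Face `datasets` library. This is a minimal sketch: the repo ID below is a placeholder, and for training, LeRobot's own `LeRobotDataset` loader is the more natural entry point.

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute this dataset's actual "<org>/<name>".
ds = load_dataset("ORG/DATASET_NAME", split="train")

frame = ds[0]
print(frame["observation.state"])  # 6-dim joint state (see meta/info.json)
print(frame["action"])             # 6-dim action vector
print(frame["episode_index"], frame["timestamp"])

# Note: the camera streams are stored separately as MP4 files under videos/
# and are not part of these Parquet rows.
```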
### Use Cases
- **Imitation Learning**: Behavior cloning from teleop demonstrations.
- **Multiview Perception**: Fusing the top and front perspectives for depth inference without explicit depth sensors (see the fusion sketch after this list).
- **Robustness to Lighting**: Evaluating policy sensitivity to illumination drift.
- **State–Action Alignment**: Leveraging synchronized proprioception and images.
- **Policy Bootstrapping for Curricula**: Pretraining on this single-cube task before moving to multi-cube tasks.
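To make the multiview bullet concrete, here is a minimal two-stream fusion sketch in PyTorch. It is our illustration, not part of the dataset tooling: `TwoViewPolicy`, its layer sizes, and the embedding dimension are arbitrary choices; only the 480×640 RGB input shape and the 6-dim action come from the metadata below.

```python
import torch
import torch.nn as nn

class TwoViewPolicy(nn.Module):
    """Illustrative two-stream encoder: one small CNN per camera, fused by concatenation."""

    def __init__(self, action_dim: int = 6, embed_dim: int = 128):
        super().__init__()

        def encoder() -> nn.Sequential:
            return nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=5, stride=4), nn.ReLU(),
                nn.Conv2d(16, 32, kernel_size=5, stride=4), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(32, embed_dim), nn.ReLU(),
            )

        self.enc_front = encoder()  # overhead view
        self.enc_left = encoder()   # low frontal view
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, img_front: torch.Tensor, img_left: torch.Tensor) -> torch.Tensor:
        # Encode each view independently, then fuse by concatenation.
        z = torch.cat([self.enc_front(img_front), self.enc_left(img_left)], dim=-1)
        return self.head(z)

# Usage with dummy frames shaped like the dataset's 480x640 RGB streams:
policy = TwoViewPolicy()
front = torch.rand(1, 3, 480, 640)
left = torch.rand(1, 3, 480, 640)
print(policy(front, left).shape)  # torch.Size([1, 6])
```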
## Data Collection
### Teleoperation & Hardware
- **Leader–Follower teleop**: A human drives the leader arm; the follower SO-101 replicates its motion to produce demonstrations.
- **Workspace**: Tabletop with a 2×2 grid; only one cell carries a colored cross, and each episode places one cube onto its matching cross.
- **Cameras**:
  - **Front** (`observation.images.front`): static overhead view.
  - **Left** (`observation.images.left`): static low frontal view emphasizing depth cues.
- **Environment**: Cardboard background; illumination changes across time are present in the data.
### Episode Protocol
1. Move to pre-grasp and visually localize the target cube.
2. Approach and grasp the cube.
3. Transport and align over the colored cross.
4. Place, release, and return to neutral.
## Known Limitations
- **Lighting drift**: Brightness and color temperature vary across episodes; apply color constancy, normalization, or photometric augmentation (see the sketch below).
- **Occlusions**: The gripper and cube may occlude each other in the front camera view during close approaches.
- **No depth sensor**: RGB only; consider multi-view fusion or learned depth cues.
- **Action semantics**: The feature names in [meta/info.json](meta/info.json) (`shoulder_pan.pos`, ..., `gripper.pos`) suggest joint positions; confirm before assuming delta-pose or velocity actions.
- **Early-phase variability**: As the first batch, some episodes include exploratory motions, hesitations, or failed initial grasps that later recover. These are useful for robustness studies, but consider filtering for clean behavior cloning.
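For the lighting-drift point above, here is a minimal photometric-augmentation sketch with torchvision. It is our assumption about how one might handle the drift; the jitter ranges are illustrative, not values tuned on this dataset.

```python
import torch
from torchvision import transforms

# Illustrative photometric pipeline: random brightness/contrast/saturation/hue
# jitter followed by per-channel normalization (ImageNet statistics).
photometric_aug = transforms.Compose([
    transforms.ColorJitter(brightness=0.3, contrast=0.3, saturation=0.2, hue=0.05),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

frame = torch.rand(3, 480, 640)  # stand-in for a decoded RGB frame in [0, 1]
augmented = photometric_aug(frame)
```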
## Additional Information
- **Homepage:** [deel-ai](https://www.irt-saintexupery.com/deel/)
- **License:** apache-2.0
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v3.0",
    "robot_type": "so101_follower",
    "total_episodes": 206,
    "total_frames": 84098,
    "total_tasks": 1,
    "chunks_size": 1000,
    "data_files_size_in_mb": 100,
    "video_files_size_in_mb": 500,
    "fps": 30,
    "splits": {
        "train": "0:206"
    },
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "names": ["shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos"],
            "shape": [6]
        },
        "observation.state": {
            "dtype": "float32",
            "names": ["shoulder_pan.pos", "shoulder_lift.pos", "elbow_flex.pos", "wrist_flex.pos", "wrist_roll.pos", "gripper.pos"],
            "shape": [6]
        },
        "observation.images.left": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.front": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": { "dtype": "float32", "shape": [1], "names": null },
        "frame_index": { "dtype": "int64", "shape": [1], "names": null },
        "episode_index": { "dtype": "int64", "shape": [1], "names": null },
        "index": { "dtype": "int64", "shape": [1], "names": null },
        "task_index": { "dtype": "int64", "shape": [1], "names": null }
    }
}
```
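To work with the raw files directly instead, the `data_path` and `video_path` templates above expand per chunk. Below is a minimal sketch with pandas, assuming the repository has been downloaded locally (e.g. with `huggingface_hub.snapshot_download`):

```python
import pandas as pd

# Expand the templates from meta/info.json for the first chunk/file.
data_path = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
video_path = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

first_parquet = data_path.format(chunk_index=0, file_index=0)
front_video = video_path.format(video_key="observation.images.front",
                                chunk_index=0, file_index=0)

df = pd.read_parquet(first_parquet)
print(df.columns.tolist())                        # action, observation.state, timestamp, ...
print(df["episode_index"].nunique(), "episodes")  # episodes stored in this file
print(df["action"].iloc[0])                       # 6 values, ordered as in "names" above
```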