---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
|
|
|
|
|
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot). |
|
|
|
|
|
## Dataset Description |
|
|
|
|
|
This dataset contains the final batch of teleoperated demonstrations collected during a two-day hackathon using the LeRobot library and SO-101 robot arms in a leader–follower configuration.

Each episode shows the follower arm picking up two colored cubes (one after the other) and placing each onto the matching colored cross within a 2×2 grid. Two RGB cameras were used:
|
|
|
|
|
- **Top camera**: mounted above the workspace for a clear 2D view of the arm, cubes, and grid.

- **Front/low camera**: positioned slightly above the ground, facing the arm and grid to provide better z-axis cues and arm self-perception.
|
|
|
|
|
Despite the cardboard background panels, the room’s illumination varied over time; this variation is deliberately preserved in the data, as it proved to be a limiting factor and may be valuable for robustness research.
|
|
|
|
|
This dataset is intended for vision-based imitation learning (e.g., behavior cloning, goal-conditioned policies), multi-view fusion, and perception-control studies on tabletop manipulation. |
|
|
|
|
|
### Use Cases |
|
|
|
|
|
- **Imitation Learning**: Behavior cloning from teleop demonstrations. |
|
|
|
|
|
- **Multiview Perception**: Fusing top + front perspectives for depth inference without explicit depth sensors. |
|
|
|
|
|
- **Robustness to Lighting**: Evaluating policy sensitivity to illumination drift. |
|
|
|
|
|
- **State–Action Alignment**: Leveraging synchronized proprioception and images. |
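
For state–action alignment, frames are logged at a fixed 30 fps (see `meta/info.json`), so each frame's timestamp within an episode can be recovered from its `frame_index`. A minimal sketch (the helper name is ours, not part of LeRobot):

```python
# Recover a frame's timestamp from its index within an episode,
# assuming the fixed 30 fps capture rate recorded in meta/info.json.
FPS = 30

def frame_timestamp(frame_index: int, fps: int = FPS) -> float:
    """Seconds elapsed since the start of the episode."""
    return frame_index / fps

# The action at a given frame is paired with the observation
# (state + images) captured at the same timestamp.
print(frame_timestamp(0))    # 0.0
print(frame_timestamp(90))   # 3.0 -> three seconds into the episode
```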
|
|
|
|
|
|
|
|
## Data Collection |
|
|
|
|
|
### Teleoperation Setup |
|
|
|
|
|
- **Leader–Follower**: Human teleoperates a leader arm; follower SO-101 replicates motion to generate demonstrations. |
|
|
|
|
|
- **Workspace**: Tabletop with a 2×2 grid. Each cell contains a colored cross; two colored cubes must be placed on matching crosses. |
|
|
|
|
|
- **Cameras**: |
|
|
|
|
|
- **Top**: overhead, full scene. |
|
|
|
|
|
- **Front**: low angle, emphasizes depth and arm self-pose. |
|
|
|
|
|
- **Background control**: Cardboard panels; lighting varies during the day and is preserved in data. |
|
|
|
|
|
### Episode Protocol |
|
|
|
|
|
1. Move to pre-grasp; localize target cube(s) visually.
2. Grasp the first cube; transport it; place it on the correct colored cross.
3. Repeat for the second cube.
4. Return to neutral.
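
For scripts that segment episodes by phase, the protocol above can be written down as an ordered list of labels. The labels below are illustrative; the dataset does not ship phase annotations:

```python
# Ordered phases of one episode, following the collection protocol.
# These labels are illustrative; the dataset contains no phase annotations.
EPISODE_PHASES = (
    "pre_grasp",         # move to pre-grasp, localize cubes visually
    "pick_place_cube1",  # grasp first cube, transport, place on its cross
    "pick_place_cube2",  # repeat for the second cube
    "return_neutral",    # return the arm to its neutral pose
)

for step, phase in enumerate(EPISODE_PHASES, start=1):
    print(f"{step}. {phase}")
```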
|
|
|
|
|
|
|
|
## Known Limitations
|
|
|
|
|
- **Lighting drift**: Significant variation during the day; expect distribution shift. Consider color constancy or data augmentation. |
|
|
|
|
|
- **Camera motion**: Cameras are fixed for the batch, but small nudges may occur; rely on metadata intrinsics/extrinsics if provided. |
|
|
|
|
|
- **Occlusions**: Self-occlusion of the gripper and cubes in certain positions, especially in the left camera view during close approaches.
|
|
|
|
|
- **No depth**: RGB only; no depth maps are provided.
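
To counter the lighting drift noted above, a simple brightness/color-jitter augmentation at training time often helps. A minimal numpy sketch, with illustrative jitter ranges (not tuned on this dataset):

```python
import numpy as np

def color_jitter(img: np.ndarray, rng: np.random.Generator,
                 brightness: float = 0.2, channel_gain: float = 0.1) -> np.ndarray:
    """Randomly scale overall brightness and per-channel gains of an HxWx3
    uint8 image to mimic illumination drift at training time."""
    gain = rng.uniform(1 - brightness, 1 + brightness)
    channel = rng.uniform(1 - channel_gain, 1 + channel_gain, size=3)
    out = img.astype(np.float32) * gain * channel
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame
augmented = color_jitter(frame, rng)
print(augmented.shape, augmented.dtype)  # (480, 640, 3) uint8
```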
|
|
|
|
|
|
|
|
## Additional Information |
|
|
|
|
|
- **Homepage:** [deel-ai](https://www.irt-saintexupery.com/deel/) |
|
|
|
|
|
- **License:** apache-2.0 |
|
|
|
|
|
## Dataset Structure |
|
|
|
|
|
[meta/info.json](meta/info.json): |
|
|
```json
{
    "codebase_version": "v3.0",
    "robot_type": "so101_follower",
    "total_episodes": 50,
    "total_frames": 31189,
    "total_tasks": 1,
    "chunks_size": 1000,
    "data_files_size_in_mb": 100,
    "video_files_size_in_mb": 500,
    "fps": 30,
    "splits": {
        "train": "0:50"
    },
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [6]
        },
        "observation.state": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [6]
        },
        "observation.images.left": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.front": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
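
The `data_path` and `video_path` entries are Python format templates; resolving them for a given chunk/file pair is straightforward (the index values below are illustrative):

```python
# Resolve the storage-path templates from meta/info.json.
DATA_PATH = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
VIDEO_PATH = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

data_file = DATA_PATH.format(chunk_index=0, file_index=0)
video_file = VIDEO_PATH.format(video_key="observation.images.front",
                               chunk_index=0, file_index=0)
print(data_file)   # data/chunk-000/file-000.parquet
print(video_file)  # videos/observation.images.front/chunk-000/file-000.mp4
```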
|
|
|