---
license: apache-2.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Description
This dataset contains the first set of teleoperated demonstrations collected during a two-day hackathon using the LeRobot library and SO-101 robot arms in a leader–follower setup. Each episode shows the follower arm picking one colored cube and placing it onto the matching colored cross inside a 2×2 grid.
Two synchronized RGB cameras were used:

- **Top camera:** overhead, providing a full 2D view of the workspace (arm, cube, grid).
- **Front/low camera:** slightly above ground level, facing the arm and grid to capture z-axis cues and the arm's own pose.
The background was masked with cardboard panels, but ambient lighting varied throughout the day; this variation is preserved and is useful for robustness studies.
Intended for vision-based imitation learning, multi-view fusion, and tabletop manipulation research.
## Use Cases

- **Imitation Learning:** behavior cloning from teleoperated demonstrations.
- **Multi-view Perception:** fusing the top and front perspectives for depth inference without explicit depth sensors.
- **Robustness to Lighting:** evaluating policy sensitivity to illumination drift.
- **State–Action Alignment:** leveraging synchronized proprioception and images.
- **Policy Bootstrapping for Curricula:** pretraining on the single-cube task before multi-cube tasks.
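As a toy illustration of the behavior-cloning use case, a linear policy can be fit to (state, action) pairs by least squares. This is a minimal sketch on synthetic arrays; real use would substitute the dataset's 6-D `observation.state` and `action` features and a neural policy.

```python
import numpy as np

# Behavior-cloning sketch: regress demonstrated actions from states with a
# linear policy (a stand-in for a real network), trained by least squares.
# The data below is synthetic, not from the dataset.
rng = np.random.default_rng(0)
states = rng.normal(size=(100, 6)).astype(np.float32)           # stand-in for observation.state
W_true = rng.normal(size=(6, 6)).astype(np.float32)
actions = states @ W_true                                        # stand-in for demonstrated actions

W, *_ = np.linalg.lstsq(states, actions, rcond=None)             # fit the linear policy
mse = float(np.mean((states @ W - actions) ** 2))
print(f"training MSE: {mse:.2e}")
```

Because the synthetic actions are an exact linear function of the states, the least-squares fit recovers them to numerical precision; with real demonstrations the residual reflects how much of the action is explained by the instantaneous state alone.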
## Data Collection

### Teleoperation & Hardware

- **Leader–follower teleop:** a human drives the leader arm; the follower SO-101 replicates its motion to produce the demonstrations.
- **Workspace:** tabletop with a 2×2 grid; only one cell has a colored cross. One cube is placed on its matching cross per episode.
- **Cameras:**
  - **Front** (`observation.images.front`): static overhead view (the top camera above).
  - **Left** (`observation.images.left`): static frontal view emphasizing depth (the front/low camera above).
- **Environment:** cardboard background; illumination changes over time are present in the data.
### Episode Protocol

1. Move to a pre-grasp pose and visually localize the target cube.
2. Approach and grasp the cube.
3. Transport the cube and align it over the colored cross.
4. Place, release, and return to neutral.
## Known Limitations

- **Lighting drift:** brightness and color temperature vary across episodes; consider color constancy, normalization, or photometric augmentation.
- **Occlusions:** the gripper and cube may occlude each other in the front camera during close approaches.
- **No depth sensor:** RGB only; consider multi-view fusion or learned depth cues.
- **Action semantics:** confirm whether actions are delta poses or joint velocities in each metadata.json (the feature names ending in `.pos` suggest joint positions).
- **Early-phase variability:** as a first batch, some episodes include exploratory motions, hesitations, or failed initial grasps that later recover. These are useful for learning robustness, but consider filtering them out for clean behavior cloning.
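To mitigate the lighting drift noted above, photometric augmentation can be applied to frames during training. Below is a minimal NumPy sketch of random brightness/contrast jitter (library transforms such as torchvision's `ColorJitter` are a common alternative); the jitter ranges are illustrative, not tuned for this dataset.

```python
import numpy as np

rng = np.random.default_rng(0)

def photometric_augment(img: np.ndarray) -> np.ndarray:
    """Random brightness/contrast jitter to simulate the lighting drift
    present across episodes. img: uint8 HxWx3 RGB frame."""
    x = img.astype(np.float32) / 255.0
    brightness = rng.uniform(-0.2, 0.2)   # additive shift (illustrative range)
    contrast = rng.uniform(0.8, 1.2)      # multiplicative gain around mid-gray
    x = (x - 0.5) * contrast + 0.5 + brightness
    return (np.clip(x, 0.0, 1.0) * 255).astype(np.uint8)

# Dummy frame at the dataset's 480x640 RGB resolution.
frame = np.full((480, 640, 3), 128, dtype=np.uint8)
aug = photometric_augment(frame)
print(aug.shape, aug.dtype)
```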
## Additional Information

- **Homepage:** deel-ai
- **License:** apache-2.0
## Dataset Structure

```json
{
  "codebase_version": "v3.0",
  "robot_type": "so101_follower",
  "total_episodes": 206,
  "total_frames": 84098,
  "total_tasks": 1,
  "chunks_size": 1000,
  "data_files_size_in_mb": 100,
  "video_files_size_in_mb": 500,
  "fps": 30,
  "splits": {
    "train": "0:206"
  },
  "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
  "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "names": [
        "shoulder_pan.pos",
        "shoulder_lift.pos",
        "elbow_flex.pos",
        "wrist_flex.pos",
        "wrist_roll.pos",
        "gripper.pos"
      ],
      "shape": [6]
    },
    "observation.state": {
      "dtype": "float32",
      "names": [
        "shoulder_pan.pos",
        "shoulder_lift.pos",
        "elbow_flex.pos",
        "wrist_flex.pos",
        "wrist_roll.pos",
        "gripper.pos"
      ],
      "shape": [6]
    },
    "observation.images.left": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.height": 480,
        "video.width": 640,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "observation.images.front": {
      "dtype": "video",
      "shape": [480, 640, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.height": 480,
        "video.width": 640,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "timestamp": {
      "dtype": "float32",
      "shape": [1],
      "names": null
    },
    "frame_index": {
      "dtype": "int64",
      "shape": [1],
      "names": null
    },
    "episode_index": {
      "dtype": "int64",
      "shape": [1],
      "names": null
    },
    "index": {
      "dtype": "int64",
      "shape": [1],
      "names": null
    },
    "task_index": {
      "dtype": "int64",
      "shape": [1],
      "names": null
    }
  }
}
```
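The `data_path` template and `chunks_size` above determine where each parquet file lives on disk. A minimal sketch of resolving a file path, under the assumption that files are grouped sequentially with `chunks_size` files per chunk:

```python
# Metadata fields copied from the dataset's info block above.
info = {
    "chunks_size": 1000,
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
}

def parquet_path(file_index: int) -> str:
    # Assumption: files are numbered globally and grouped into chunks of
    # `chunks_size`, so chunk 0 holds files 0..999, chunk 1 holds 1000.., etc.
    chunk_index = file_index // info["chunks_size"]
    return info["data_path"].format(chunk_index=chunk_index, file_index=file_index)

print(parquet_path(0))    # data/chunk-000/file-000.parquet
print(parquet_path(999))  # data/chunk-000/file-999.parquet
```

In practice, prefer loading through the LeRobot library, which resolves these paths from the metadata itself rather than relying on this grouping assumption.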