This dataset was created using LeRobot.
## Dataset Summary
- Task: Move the red block into the white box (pick-and-place / block-in-bin).
- Target object (red):
  - Position: fixed at the nominal start location across episodes.
  - Orientation: varies between episodes (the red block is rotated).
- Distractor objects (green, beige):
  - Positions: vary across episodes (small translations around their nominal areas, or broader, depending on the collection setup).
  - Orientations: vary across episodes (rotated differently each episode).
- Learning goal: train policies to follow a color/identity-based objective ("red block") under scene clutter and object pose variation.
## Why This Dataset
Adding distractors changes the difficulty from “find the only object” to “select the correct object.” This dataset targets:
- target identification (choose red, ignore green/beige),
- robustness to clutter and occlusions (distractors may partially block the target),
- grasping and reaching robustness with variable object orientations.
It is a clean intermediate step between single-object pick-and-place and more realistic multi-object scenes.
## Supported Tasks and Use Cases
This dataset is suitable for:
- Imitation learning for pick-and-place with distractors.
- Object-centric policy learning (condition on target identity/color).
- Robust perception: selecting the correct object under pose changes.
- Ablations:
  - red-only vs. red + 2 distractors,
  - fixed distractors vs. randomized distractors,
  - rotation-only vs. position + rotation distractor variation.
## Task Description
Instruction: “Place the red block into the white box.”
A typical episode includes: approach the red block → grasp → transport → release into the box → optional return.
Across episodes, the robot must:
- identify the red block even when green/beige blocks move and rotate,
- avoid mistakenly grasping a distractor,
- adapt grasp pose to the red block’s changing orientation.
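The first of these requirements, identifying the red block among green and beige distractors, can be illustrated with a minimal color-thresholding sketch. This is purely an assumption for illustration (the thresholds, the `red_mask`/`target_centroid` helpers, and the synthetic scene are not part of the dataset); a trained policy would learn this selection implicitly from the camera streams.

```python
import numpy as np

def red_mask(rgb: np.ndarray) -> np.ndarray:
    """Boolean mask of strongly red pixels in a uint8 RGB image.
    Thresholds are illustrative, not tuned on this dataset."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > 150) & (r - g > 60) & (r - b > 60)

def target_centroid(rgb: np.ndarray):
    """Return the (row, col) centroid of the red region, or None if absent."""
    ys, xs = np.nonzero(red_mask(rgb))
    if ys.size == 0:
        return None
    return float(ys.mean()), float(xs.mean())

# Synthetic 100x100 scene: a red target patch and a green distractor.
img = np.zeros((100, 100, 3), dtype=np.uint8)
img[25:35, 35:45] = (200, 20, 20)   # red target
img[65:75, 65:75] = (20, 200, 20)   # green distractor, ignored by the mask
print(target_centroid(img))  # → (29.5, 39.5), the red patch only
```

Because the mask keys on the red channel dominating both green and blue, the green and beige distractors never contribute to the centroid, which is the same selection problem the policy must solve visually.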
## Dataset Structure

```json
{
  "codebase_version": "v3.0",
  "robot_type": "so_follower",
  "total_episodes": 50,
  "total_frames": 18846,
  "total_tasks": 1,
  "chunks_size": 1000,
  "data_files_size_in_mb": 100,
  "video_files_size_in_mb": 200,
  "fps": 30,
  "splits": { "train": "0:50" },
  "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
  "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "shape": [6],
      "names": [
        "shoulder_pan.pos",
        "shoulder_lift.pos",
        "elbow_flex.pos",
        "wrist_flex.pos",
        "wrist_roll.pos",
        "gripper.pos"
      ]
    },
    "observation.state": {
      "dtype": "float32",
      "shape": [6],
      "names": [
        "shoulder_pan.pos",
        "shoulder_lift.pos",
        "elbow_flex.pos",
        "wrist_flex.pos",
        "wrist_roll.pos",
        "gripper.pos"
      ]
    },
    "observation.images.gripper": {
      "dtype": "video",
      "shape": [1080, 1920, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.height": 1080,
        "video.width": 1920,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "observation.images.top": {
      "dtype": "video",
      "shape": [1080, 1920, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.height": 1080,
        "video.width": 1920,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "timestamp": { "dtype": "float32", "shape": [1], "names": null },
    "frame_index": { "dtype": "int64", "shape": [1], "names": null },
    "episode_index": { "dtype": "int64", "shape": [1], "names": null },
    "index": { "dtype": "int64", "shape": [1], "names": null },
    "task_index": { "dtype": "int64", "shape": [1], "names": null }
  }
}
```
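The metadata above is self-describing: the `data_path` and `video_path` templates give on-disk locations via zero-padded chunk/file indices, `splits` uses `"start:stop"` episode ranges (stop exclusive), and the frame and episode counts determine average episode length. A small sketch of working with these fields (plain Python, no LeRobot dependency; the `info` dict simply copies values from the metadata):

```python
# Values copied from the dataset's metadata shown above.
info = {
    "total_episodes": 50,
    "total_frames": 18846,
    "fps": 30,
    "splits": {"train": "0:50"},
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
}

# The {...:03d} placeholders zero-pad the chunk and file indices.
first_parquet = info["data_path"].format(chunk_index=0, file_index=0)
top_video = info["video_path"].format(
    video_key="observation.images.top", chunk_index=0, file_index=0
)
print(first_parquet)  # data/chunk-000/file-000.parquet
print(top_video)      # videos/observation.images.top/chunk-000/file-000.mp4

# Split ranges are "start:stop" episode indices, stop exclusive.
start, stop = map(int, info["splits"]["train"].split(":"))
n_train = stop - start  # 50 episodes, i.e. the full dataset

# Average episode length: ~377 frames, ~12.6 s at 30 fps.
avg_frames = info["total_frames"] / info["total_episodes"]
avg_seconds = avg_frames / info["fps"]
print(round(avg_frames, 1), round(avg_seconds, 1))
```

In practice one would load the dataset through LeRobot's own dataset class rather than resolving paths by hand; this sketch only makes the metadata conventions concrete.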
📊 Interested in the dataset? Contact Unidata to discuss purchase terms: https://unidata.pro
## Citation
BibTeX:
[More Information Needed]