---
task_categories:
- robotics
tags:
- LeRobot
- robotics
- robot-learning
- imitation-learning
- manipulation
- pick-and-place
- so101
- domain-randomization
configs:
- config_name: default
  data_files: data/*/*.parquet
---
This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
## Dataset Summary
- **Task:** Move a **red block** into a **white box** (pick-and-place / block-in-bin).
- **Scene setup:** The **white box remains fixed** in the workspace.
- **Variations across episodes:**
- **Red block position randomization:** The block starts at different positions within a **small neighborhood** around the nominal start location (small translations).
- **Lighting variation:** Ambient illumination changes slightly between episodes (e.g., intensity and/or direction), while the scene layout remains the same.
- **Learning goal:** Encourage policies that are robust to **small spatial shifts** and **appearance changes** from lighting.
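The per-episode randomization described above can be sketched as follows. Note that the exact jitter radius, nominal block position, and lighting range are **not** published with the dataset; the numbers below are purely illustrative assumptions:

```python
import random

def sample_episode_conditions(nominal_xy=(0.20, 0.00), jitter_m=0.02, seed=None):
    """Sample randomized start conditions for one episode.

    The red block is translated within a small neighborhood of a nominal
    position, and ambient lighting intensity is jittered slightly.
    All ranges here are hypothetical, not the dataset's actual values.
    """
    rng = random.Random(seed)
    block_xy = (
        nominal_xy[0] + rng.uniform(-jitter_m, jitter_m),
        nominal_xy[1] + rng.uniform(-jitter_m, jitter_m),
    )
    light_intensity = rng.uniform(0.8, 1.2)  # relative ambient brightness
    return {"block_xy": block_xy, "light_intensity": light_intensity}
```

The white box position is deliberately left out of the sampler, since it stays fixed across episodes.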
## Supported Tasks and Use Cases
This dataset is suitable for:
- **Imitation learning** with moderate domain randomization.
- **Visual robustness experiments** (sensitivity to illumination changes).
- **Generalization evaluation** for pick-and-place under small distribution shifts.
- **Baseline comparisons** against:
- fixed block position / fixed lighting datasets,
- block-rotation-only datasets,
- position-randomized-only datasets.
## Task Description
**Instruction:** “Place the red block into the white box.”
A typical episode includes: approach the block → grasp → move to the box → release into the box → optional return to rest.
With **position randomization** and **lighting variation**, successful policies should:
- localize the block under changing appearance,
- plan motion conditioned on current block position,
- maintain grasp reliability when shadows/highlights vary.
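One common way to train for the appearance robustness listed above is to add photometric augmentation on top of the dataset's own lighting variation. A minimal sketch with a global brightness jitter (the `max_delta` value is an illustrative choice, not part of the dataset):

```python
import numpy as np

def jitter_brightness(frame, rng, max_delta=0.15):
    """Apply a random global brightness shift to a uint8 RGB frame.

    Mimics episode-to-episode illumination changes during training.
    max_delta is the maximum relative shift (fraction of full scale).
    """
    delta = rng.uniform(-max_delta, max_delta) * 255.0
    out = frame.astype(np.float32) + delta
    return np.clip(out, 0, 255).astype(np.uint8)

rng = np.random.default_rng(0)
frame = np.full((1080, 1920, 3), 128, dtype=np.uint8)  # dummy camera frame
aug = jitter_brightness(frame, rng)
```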
## Dataset Structure
[meta/info.json](meta/info.json):
```json
{
  "codebase_version": "v3.0",
  "robot_type": "so_follower",
  "total_episodes": 50,
  "total_frames": 20724,
  "total_tasks": 1,
  "chunks_size": 1000,
  "data_files_size_in_mb": 100,
  "video_files_size_in_mb": 200,
  "fps": 30,
  "splits": {
    "train": "0:50"
  },
  "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
  "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
  "features": {
    "action": {
      "dtype": "float32",
      "names": [
        "shoulder_pan.pos",
        "shoulder_lift.pos",
        "elbow_flex.pos",
        "wrist_flex.pos",
        "wrist_roll.pos",
        "gripper.pos"
      ],
      "shape": [6]
    },
    "observation.state": {
      "dtype": "float32",
      "names": [
        "shoulder_pan.pos",
        "shoulder_lift.pos",
        "elbow_flex.pos",
        "wrist_flex.pos",
        "wrist_roll.pos",
        "gripper.pos"
      ],
      "shape": [6]
    },
    "observation.images.gripper": {
      "dtype": "video",
      "shape": [1080, 1920, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.height": 1080,
        "video.width": 1920,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "observation.images.top": {
      "dtype": "video",
      "shape": [1080, 1920, 3],
      "names": ["height", "width", "channels"],
      "info": {
        "video.height": 1080,
        "video.width": 1920,
        "video.codec": "av1",
        "video.pix_fmt": "yuv420p",
        "video.is_depth_map": false,
        "video.fps": 30,
        "video.channels": 3,
        "has_audio": false
      }
    },
    "timestamp": {"dtype": "float32", "shape": [1], "names": null},
    "frame_index": {"dtype": "int64", "shape": [1], "names": null},
    "episode_index": {"dtype": "int64", "shape": [1], "names": null},
    "index": {"dtype": "int64", "shape": [1], "names": null},
    "task_index": {"dtype": "int64", "shape": [1], "names": null}
  }
}
```
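The fields in `meta/info.json` above are enough to derive basic dataset statistics and resolve the templated file paths. A small sketch (using values copied from the metadata):

```python
# Values taken directly from meta/info.json above.
info = {
    "total_episodes": 50,
    "total_frames": 20724,
    "fps": 30,
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
}

duration_s = info["total_frames"] / info["fps"]               # total recorded time
mean_ep_frames = info["total_frames"] / info["total_episodes"]  # frames per episode

# The templated data_path resolves with str.format, e.g. for the first file:
first_file = info["data_path"].format(chunk_index=0, file_index=0)

print(f"{duration_s:.1f} s total, {mean_ep_frames:.1f} frames/episode on average")
print(first_file)  # data/chunk-000/file-000.parquet
```

At 30 fps this works out to roughly 690.8 s of data, or about 14 s per episode on average.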
📊 Interested in the dataset? Contact Unidata to discuss purchase terms: [https://unidata.pro](https://unidata.pro/datasets/lerobot-so-101-manipulations/?utm_source=huggingface-Martsinian&utm_medium=referral&utm_campaign=record-red-block)
## Citation
**BibTeX:**
```bibtex
[More Information Needed]
```