---
task_categories:
- robotics
tags:
- LeRobot
- robotics
- robot-learning
- imitation-learning
- manipulation
- pick-and-place
- so101
- domain-randomization
- vision
configs:
- config_name: default
  data_files: data/*/*.parquet
---

This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).

## Dataset Summary

- **Task:** Move a **red block** into a **white box** (pick-and-place / block-in-bin).
- **Scene setup:** Tabletop scene with a fixed target receptacle (white box) and a red block.
- **Variations across episodes:**
  - **Lighting (primary factor):**
    - **Warm vs. cool illumination** (different color temperatures).
    - **Different lighting angles/positions**, producing diverse shadow directions and contrast patterns.
  - **Red block position (secondary factor):** The block's start position varies within a small neighborhood around a nominal location (small translations), consistent with the previous dataset in this series.
- **Learning goal:** Train and evaluate policies that are robust to **large appearance changes caused by lighting** while still solving a straightforward manipulation task.

## Why This Dataset

Lighting changes can significantly alter pixel appearance even when geometry is unchanged. This dataset targets:

- **color constancy / illumination invariance** challenges,
- robustness to **strong shadows** and **specular highlights**,
- stable perception of object boundaries and grasp points under **shifting contrast**.

It is especially useful as a stress test for vision-based policies that otherwise perform well in stable lab lighting.

## Supported Tasks and Use Cases

This dataset is suitable for:

- **Imitation learning** for pick-and-place with heavy illumination randomization.
- **Robust visual policy training** (e.g., augmentation studies, representation learning).
- **Generalization benchmarks**: train on a subset of lighting conditions, test on unseen ones.
- **Comparisons** against:
  - fixed-lighting datasets,
  - mildly varied-lighting datasets,
  - position-only randomized datasets.
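
A minimal sketch of such a generalization benchmark, assuming you annotate a per-episode lighting label yourself (the dataset does not ship one; the labels below are hypothetical), over the 50 episodes in this dataset:

```python
# Hypothetical per-episode lighting labels (e.g., assigned by reviewing the
# videos). This dataset does NOT include such labels out of the box.
episode_lighting = {ep: "warm" if ep < 25 else "cool" for ep in range(50)}

def holdout_split(labels: dict, held_out_condition: str):
    """Train on all lighting conditions except `held_out_condition`,
    test on episodes recorded under the held-out condition."""
    train = sorted(ep for ep, cond in labels.items() if cond != held_out_condition)
    test = sorted(ep for ep, cond in labels.items() if cond == held_out_condition)
    return train, test

# Train under warm light, evaluate under unseen cool light.
train_eps, test_eps = holdout_split(episode_lighting, "cool")
```

The resulting episode-index lists can then be passed to whatever dataloader you use to select episodes.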

## Task Description

**Instruction:** “Place the red block into the white box.”

A typical episode: approach the block → grasp → transport → release into the box → optional return.

Under varied lighting, successful policies must:

- reliably detect the block despite changes in brightness and color cast,
- handle shadowed/overexposed regions,
- maintain a consistent grasp approach when the block’s visual features shift.
## Dataset Structure

[meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v3.0",
    "robot_type": "so_follower",
    "total_episodes": 50,
    "total_frames": 22165,
    "total_tasks": 1,
    "chunks_size": 1000,
    "data_files_size_in_mb": 100,
    "video_files_size_in_mb": 200,
    "fps": 30,
    "splits": {
        "train": "0:50"
    },
    "data_path": "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet",
    "video_path": "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4",
    "features": {
        "action": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [6]
        },
        "observation.state": {
            "dtype": "float32",
            "names": [
                "shoulder_pan.pos",
                "shoulder_lift.pos",
                "elbow_flex.pos",
                "wrist_flex.pos",
                "wrist_roll.pos",
                "gripper.pos"
            ],
            "shape": [6]
        },
        "observation.images.gripper": {
            "dtype": "video",
            "shape": [1080, 1920, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 1080,
                "video.width": 1920,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.top": {
            "dtype": "video",
            "shape": [1080, 1920, 3],
            "names": ["height", "width", "channels"],
            "info": {
                "video.height": 1080,
                "video.width": 1920,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "timestamp": {"dtype": "float32", "shape": [1], "names": null},
        "frame_index": {"dtype": "int64", "shape": [1], "names": null},
        "episode_index": {"dtype": "int64", "shape": [1], "names": null},
        "index": {"dtype": "int64", "shape": [1], "names": null},
        "task_index": {"dtype": "int64", "shape": [1], "names": null}
    }
}
```
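
The `data_path` and `video_path` entries above are Python format-string templates. A minimal, dependency-free sketch of how they resolve to on-disk file names (the chunk/file indices below are examples, not a listing of this dataset's actual files):

```python
# Templates copied verbatim from meta/info.json above.
DATA_PATH = "data/chunk-{chunk_index:03d}/file-{file_index:03d}.parquet"
VIDEO_PATH = "videos/{video_key}/chunk-{chunk_index:03d}/file-{file_index:03d}.mp4"

def parquet_file(chunk_index: int, file_index: int) -> str:
    # Indices are zero-padded to three digits by the :03d format spec.
    return DATA_PATH.format(chunk_index=chunk_index, file_index=file_index)

def video_file(video_key: str, chunk_index: int, file_index: int) -> str:
    return VIDEO_PATH.format(
        video_key=video_key, chunk_index=chunk_index, file_index=file_index
    )

first_parquet = parquet_file(0, 0)  # "data/chunk-000/file-000.parquet"
first_top_video = video_file("observation.images.top", 0, 0)
```

Valid `video_key` values are the video features listed above: `observation.images.gripper` and `observation.images.top`.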

📊 Interested in the dataset? Contact Unidata to discuss purchase terms: [https://unidata.pro](https://unidata.pro/datasets/lerobot-so-101-manipulations/?utm_source=huggingface-Martsinian&utm_medium=referral&utm_campaign=record-red-block)

## Citation

**BibTeX:**

```bibtex
[More Information Needed]
```