---
license: mit
---

# RLBench-OG Dataset

## Overview

RLBench-OG is derived from the RLBench benchmark and is designed to evaluate model robustness under occlusion, as well as generalization under various environmental perturbations. It comprises ten tasks selected from the original RLBench task list, spanning both simple scenarios and more complex long-horizon tasks, and is organized into two main components: an Occlusion Suite and a Generalization Suite.

## Dataset Components

### Occlusion Suite

The Occlusion Suite focuses on scenarios where the camera's line of sight to key task-relevant regions is fully or partially blocked, leading to incomplete observations. Occlusions are introduced into the `front_camera` view through two mechanisms:

1. **Self-occlusion by object pose perturbation:** modifying the position or orientation of task-relevant objects so that essential interaction points (e.g., drawer handles, target objects) are occluded.
2. **Occlusion by external distractors:** placing task-irrelevant objects (cabinets, TVs, doors, etc.) in front of the workspace to partially block the scene.
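Mechanism 1 can be pictured as a small random jitter applied to an object's planar pose. The sketch below is illustrative only: the function name, sampling ranges, and uniform distribution are assumptions, not the dataset's actual generation code.

```python
import random

def perturb_pose(position, yaw, trans_range=0.05, yaw_range=0.5):
    """Illustrative self-occlusion perturbation (ranges are assumptions):
    jitter an object's x/y position and yaw so that its own geometry
    may end up blocking the interaction point (e.g. a drawer handle)."""
    x, y, z = position
    new_position = (
        x + random.uniform(-trans_range, trans_range),
        y + random.uniform(-trans_range, trans_range),
        z,  # keep the object on the table surface
    )
    new_yaw = yaw + random.uniform(-yaw_range, yaw_range)
    return new_position, new_yaw

pos, yaw = perturb_pose((0.3, 0.0, 0.75), 0.0)
```

In the actual benchmark the perturbed pose is only kept if it produces the desired occlusion in the `front_camera` view, so a sampler like this would sit inside an accept/reject loop.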

**Task-specific occlusion configurations:**

- `basketball_in_hoop`: basket and trash-can poses are perturbed to occlude the basketball
- `block_pyramid`: a cabinet is placed in front of the workspace to occlude part of the blocks
- `close_drawer`: the drawer is rotated such that its geometry occludes the handle
- `scoop_with_spatula`: a wine bottle is positioned to block the target cube
- `solve_puzzle`: a storage cabinet is placed to occlude puzzle pieces
- `straighten_rope`: a desk lamp is placed in front of one end of the rope
- `take_plate_off_colored_dish_rack`: a box with a laptop blocks visibility of the plate
- `take_usb_out_of_computer`: a cabinet blocks the USB port area
- `toilet_seat_down`: a door is placed such that it occludes the toilet seat
- `water_plants`: a television partially blocks both the watering can and the plant

### Generalization Suite

The Generalization Suite evaluates robustness to environment-conditioned variations. Based on the same ten tasks, the suite includes six types of environment variation, each modifying exactly one factor while keeping all others unchanged. Following the pipeline from COLOSSEUM, variation types are specified via YAML configuration files, and data-collection procedures via JSON metadata.

**Variation types:**

- `light_color`: RGB values sampled within predefined ranges and applied to directional lights
- `table_texture`: textures sampled from a texture dataset and applied to the table
- `table_color`: RGB values sampled within predefined ranges and applied to the table surface
- `background_texture`: background textures randomly sampled and applied
- `distractor`: two distractor objects sampled from a 3D asset dataset and spawned within workspace boundaries
- `camera_pose`: camera position and orientation offsets sampled and applied to the front, left-shoulder, and right-shoulder cameras
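A COLOSSEUM-style single-factor variation config might look like the sketch below. This is a hypothetical fragment: the exact keys and value ranges depend on the COLOSSEUM schema and the per-task files shipped with the dataset, so every field name here is an assumption.

```yaml
# Hypothetical sketch of a single-factor variation config
# (field names and ranges are illustrative, not the actual schema).
task: close_drawer
variations:
  - type: table_color
    color_range:            # RGB sampled uniformly per episode
      low: [0.1, 0.1, 0.1]
      high: [0.9, 0.9, 0.9]
```

Keeping one factor per file is what makes the "exactly one factor changes" guarantee easy to enforce during data collection.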

## Dataset Structure

The dataset is organized by task and variation type. Each configuration includes:

- RGB-D images from multiple camera viewpoints
- Robot state information
- Robot actions
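The task/variation organization can be navigated with a small path helper like the sketch below. The `root/task/variation/episodeN` nesting and the `front_rgb` folder name are assumptions about this dataset's on-disk tree (the latter borrowed from RLBench's usual naming), so verify them against the actual files.

```python
from pathlib import Path

def episode_dir(root: str, task: str, variation: str, episode: int) -> Path:
    """Build the path to one episode under a hypothetical
    root/task/variation/episodeN layout (naming is an assumption)."""
    return Path(root) / task / variation / f"episode{episode}"

# Example: the front-camera RGB folder for one close_drawer episode
rgb_dir = episode_dir("RLBench-OG", "close_drawer", "camera_pose", 0) / "front_rgb"
```

A helper like this keeps task/variation/episode indexing in one place when iterating over the whole benchmark.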

## Visualizations

For visualizations of the different variant settings for each task, refer to the figures below:

**Variant Visualizations, Part 1:** different variants for the `basketball_in_hoop`, `block_pyramid`, `close_drawer`, `scoop_with_spatula`, and `solve_puzzle` tasks.

**Variant Visualizations, Part 2:** different variants for the `straighten_rope`, `take_plate_off_colored_dish_rack`, `take_usb_out_of_computer`, `toilet_seat_down`, and `water_plants` tasks.

## Citation

If you use this dataset in your research, please cite our paper:

```bibtex
@misc{bai2025learningacttaskawareview,
  title={Learning to See and Act: Task-Aware Virtual View Exploration for Robotic Manipulation},
  author={Yongjie Bai and Zhouxia Wang and Yang Liu and Kaijun Luo and Yifan Wen and Mingtong Dai and Weixing Chen and Ziliang Chen and Lingbo Liu and Guanbin Li and Liang Lin},
  year={2025},
  eprint={2508.05186},
  archivePrefix={arXiv},
  primaryClass={cs.RO},
  url={https://arxiv.org/abs/2508.05186},
}
```

## License

This dataset is released under the MIT License.

## Contact

For questions about this dataset, please contact: baiyj26@mail2.sysu.edu.cn