Keith-Luo committed on
Commit 2fdc4ee · 1 Parent(s): 2e41ca7

commit readme

Files changed (5)
  1. .gitattributes +1 -0
  2. .gitignore +8 -0
  3. Fig/Fig-og-1.png +3 -0
  4. Fig/Fig-og-2.png +3 -0
  5. README.md +83 -0
.gitattributes CHANGED
@@ -57,3 +57,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  # Video files - compressed
  *.mp4 filter=lfs diff=lfs merge=lfs -text
  *.webm filter=lfs diff=lfs merge=lfs -text
+ *.tar.gz filter=lfs diff=lfs merge=lfs -text
.gitignore ADDED
@@ -0,0 +1,8 @@
+ Occlusion/train/
+ Occlusion/test/
+ Generalization/train/
+ Generalization/test/
+ Fig/Fig-og-1.pdf
+ Fig/Fig-og-2.pdf
+ upload.py
+ .vscode/
Fig/Fig-og-1.png ADDED

Git LFS Details

  • SHA256: 20ccbfdfc78b7f7ae2e98ca3040aefd64ad23c7d1fe4a68d2e01cf40fa9d7b6c
  • Pointer size: 132 Bytes
  • Size of remote file: 9.12 MB
Fig/Fig-og-2.png ADDED

Git LFS Details

  • SHA256: 63dd614966de8334452f9f337d9b31f8a030b902d23d188f12de94cfc16f62b7
  • Pointer size: 132 Bytes
  • Size of remote file: 9.36 MB
README.md CHANGED
@@ -1,3 +1,86 @@
  ---
  license: mit
  ---
+ # RLBench-OG Dataset
+
+ ## Overview
+
+ RLBench-OG is derived from the RLBench benchmark and is designed to evaluate the robustness of models under occlusion, as well as their generalization under environmental perturbations. The dataset comprises ten tasks selected from the original RLBench task list, covering both simple scenarios and more complex long-horizon tasks. The benchmark consists of two main components: an **Occlusion Suite** and a **Generalization Suite**.
+
+ ## Dataset Components
+
+ ### Occlusion Suite
+
+ The Occlusion Suite focuses on scenarios where the camera's line of sight to key task-relevant regions is fully or partially blocked, leading to incomplete observations. Occlusions are introduced to the `front_camera` through two mechanisms:
+
+ 1. **Self-occlusion by object pose perturbation:** modifying the position or orientation of task-relevant objects so that essential interaction points (e.g., drawer handles, target objects) are occluded.
+
+ 2. **Occlusion by external distractors:** placing task-irrelevant objects (cabinets, TVs, doors, etc.) in front of the workspace to partially block the scene.
+
+ #### Task-specific Occlusion Configurations:
+
+ - **basketball_in_hoop:** Basket and trash can poses are perturbed to occlude the basketball
+ - **block_pyramid:** A cabinet is placed in front of the workspace to occlude part of the blocks
+ - **close_drawer:** The drawer is rotated such that its geometry occludes the handle
+ - **scoop_with_spatula:** A wine bottle is positioned to block the target cube
+ - **solve_puzzle:** A storage cabinet is placed to occlude puzzle pieces
+ - **straighten_rope:** A desk lamp is placed in front of one end of the rope
+ - **take_plate_off_colored_dish_rack:** A box with a laptop blocks visibility of the plate
+ - **take_usb_out_of_computer:** A cabinet blocks the USB port area
+ - **toilet_seat_down:** A door is placed such that it occludes the toilet seat
+ - **water_plants:** A television partially blocks both the watering can and the plant
+
+ ### Generalization Suite
+
+ The Generalization Suite evaluates robustness to environment-conditioned variations. Built on the same ten tasks, the suite includes six types of environment variation, each modifying exactly one factor while keeping all others unchanged. Following the COLOSSEUM pipeline, variation types are specified in `yaml` configuration files, and data-collection procedures are recorded in `json` metadata.
+
+ #### Variation Types:
+
+ - **light_color:** RGB values sampled within predefined ranges and applied to directional lights
+ - **table_texture:** Textures sampled from a texture dataset and applied to the table
+ - **table_color:** RGB values sampled within predefined ranges and applied to the table surface
+ - **background_texture:** Background textures randomly sampled and applied
+ - **distractor:** Two distractor objects sampled from a 3D asset dataset and spawned within workspace boundaries
+ - **camera_pose:** Camera position and orientation offsets sampled and applied to front, left-shoulder, and right-shoulder cameras
+
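As a rough illustration of how a COLOSSEUM-style single-factor variation might be specified in `yaml` (the field names below are hypothetical and not the dataset's actual schema), a configuration could look like:

```yaml
# Hypothetical variation config (illustrative field names only)
env:
  task_name: close_drawer
  seed: 42
variations:
  - factor: light_color      # one of the six variation types
    rgb_range:               # sampled once per episode
      low:  [0.2, 0.2, 0.2]
      high: [1.0, 1.0, 1.0]
```

Keeping exactly one `factor` entry per file mirrors the suite's design of changing one environmental factor at a time.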
+ ## Dataset Structure
+
+ The dataset is organized by task and variation type. Each configuration includes:
+
+ - RGB-D images from multiple camera viewpoints
+ - Robot state information
+ - Robot actions
+
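A minimal sketch of locating one episode on disk, assuming a hypothetical `<suite>/<split>/<task>/<variation>/episode<N>` directory layout (the layout and the `episode_dir` helper are illustrative, not confirmed by the dataset itself):

```python
from pathlib import Path

def episode_dir(root, suite, split, task, variation, episode):
    """Build the path to one episode under the assumed layout.

    `suite` is "Occlusion" or "Generalization", `split` is "train" or "test",
    and `variation` is one of the six variation types (hypothetical level).
    """
    return Path(root) / suite / split / task / variation / f"episode{episode}"

# Example: first camera_pose episode of the close_drawer task
p = episode_dir("RLBench-OG", "Occlusion", "test", "close_drawer", "camera_pose", 0)
print(p.as_posix())
```

Adjust the path components to match the actual extracted archive before use.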
+ ## Visualizations
+
+ For visualizations of the different variant settings for each task, refer to the figures below:
+
+ ![Variant Visualizations Part 1](Fig/Fig-og-1.png)
+ *Visualization of different variants for the **basketball_in_hoop**, **block_pyramid**, **close_drawer**, **scoop_with_spatula**, and **solve_puzzle** tasks.*
+
+ ![Variant Visualizations Part 2](Fig/Fig-og-2.png)
+ *Visualization of different variants for the **straighten_rope**, **take_plate_off_colored_dish_rack**, **take_usb_out_of_computer**, **toilet_seat_down**, and **water_plants** tasks.*
+
+ ## Citation
+
+ If you use this dataset in your research, please cite our paper:
+
+ ```bibtex
+ @misc{bai2025learningacttaskawareview,
+   title={Learning to See and Act: Task-Aware Virtual View Exploration for Robotic Manipulation},
+   author={Yongjie Bai and Zhouxia Wang and Yang Liu and Kaijun Luo and Yifan Wen and Mingtong Dai and Weixing Chen and Ziliang Chen and Lingbo Liu and Guanbin Li and Liang Lin},
+   year={2025},
+   eprint={2508.05186},
+   archivePrefix={arXiv},
+   primaryClass={cs.RO},
+   url={https://arxiv.org/abs/2508.05186},
+ }
+ ```
+
+ ## License
+
+ This dataset is released under the [MIT License](https://opensource.org/licenses/MIT).
+
+ ## Contact
+
+ For questions about this dataset, please contact: [baiyj26@mail2.sysu.edu.cn](mailto:baiyj26@mail2.sysu.edu.cn)