Yinxuan committed
Commit 2c394d2 · 1 Parent(s): 45fe36d

Update README.md

Files changed (1): README.md (+200 -14)
---
language:
- en
license:
- cc-by-nc-4.0
tags:
- object-centric learning
size_categories:
- 10K<n<100K
task_categories:
- image-segmentation
paperswithcode_id: octscenes
dataset_info:
  features:
  - name: scene_id
    dtype: string
  - name: frame_id
    dtype: string
  - name: resolution
    dtype: string
  - name: image
    dtype: image
  - name: depth
    dtype: image
  - name: segment
    dtype: image
  - name: intrinsic_matrix
    dtype: array
  - name: camera_pose
    dtype: array
configs:
- config_name: OCTScenes-A
  splits:
  - name: train
    num_examples: 3000
  - name: validation
    num_examples: 100
  - name: test
    num_examples: 100
- config_name: OCTScenes-B
  splits:
  - name: train
    num_examples: 4800
  - name: validation
    num_examples: 100
  - name: test
    num_examples: 100
viewer: false
---

# OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for Object-Centric Learning

### Dataset Summary

The OCTScenes dataset is a versatile real-world dataset of tabletop scenes for object-centric learning. It contains 5000 tabletop scenes built from 15 objects in total, and each scene is captured in 60 frames covering a 360-degree perspective, so the dataset supports the evaluation of object-centric learning methods based on single images, videos, and multiple views.

The 15 distinct object types are shown in Figure 1, and some examples of the data are shown in Figure 2.

![Figure 1](assets/objects.png)

<p align="center">Figure 1: Objects of the dataset.</p>

![Figure 2](assets/datasets.png)

<p align="center">Figure 2: Examples of images, depth maps, and segmentation maps of the dataset.</p>

### Supported Tasks and Leaderboards

- `object-centric learning`: The dataset can be used to train a model for [object-centric learning](https://arxiv.org/abs/2202.07135), which aims to learn compositional scene representations in an unsupervised manner. Segmentation performance is measured by Adjusted Mutual Information (AMI), Adjusted Rand Index (ARI), and mean Intersection over Union (mIoU). Two variants of AMI and ARI are used to evaluate segmentation more thoroughly: AMI-A and ARI-A are computed over all pixels in the image and measure how accurately different layers of visual concepts (both objects and the background) are separated, while AMI-O and ARI-O are computed only over pixels in object regions and focus on how accurately different objects are separated. Reconstruction performance is measured by Mean Squared Error (MSE) and Learned Perceptual Image Patch Similarity (LPIPS). Success on this task is typically marked by high AMI, ARI, and mIoU and low MSE and LPIPS.
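
The ARI-A / ARI-O distinction can be sketched with scikit-learn's `adjusted_rand_score`; the tiny flattened segmentation maps below are made-up placeholders, not dataset values:

```python
import numpy as np
from sklearn.metrics import adjusted_rand_score

# Hypothetical flattened segmentation maps: 0 is the background,
# positive integers are object indices.
true_seg = np.array([0, 0, 1, 1, 2, 2, 2, 0])
pred_seg = np.array([0, 1, 1, 1, 2, 2, 1, 0])

# ARI-A: computed over all pixels, background included.
ari_a = adjusted_rand_score(true_seg, pred_seg)

# ARI-O: computed only over pixels that belong to objects in the ground truth.
obj_mask = true_seg > 0
ari_o = adjusted_rand_score(true_seg[obj_mask], pred_seg[obj_mask])

print(f"ARI-A: {ari_a:.3f}, ARI-O: {ari_o:.3f}")
```

AMI-A and AMI-O follow the same pattern with `adjusted_mutual_info_score`.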

### Languages

English.

## Dataset Structure

We provide images at three resolutions for each scene: 640x480, 256x256, and 128x128. The name of each image has the form `[scene_id]_[frame_id].png`, and the images are available in `./640x480`, `./256x256`, and `./128x128`, respectively.

The images are compressed using `tar`, and the names of the compressed files start with the resolution, e.g. `image_128x128_`. Please download all the compressed files and use the `tar` command to decompress them.

For example, for the 128x128 resolution images, download all the files matching `image_128x128_*` and merge them into `image_128x128.tar.gz`:

```
cat image_128x128_* > image_128x128.tar.gz
```

And then decompress the file:

```
tar xvzf image_128x128.tar.gz
```

### Data Instances

Each data instance contains an RGB image, its depth map, its camera intrinsic matrix, its camera pose, and its segmentation map; the segmentation map is `None` in the training and validation sets.
+
104
+ ### Data Fields
105
+
106
+ - `scene_id`: a string scene identifier for each example
107
+ - `frame_id`: a string frame identifier for each example
108
+ - `resolution`: a string for the image resolution of each example (e.g. 640x480, 256x256, 128x128)
109
+ - `image`: a `PIL.Image.Image` object containing the image
110
+ - `depth`: a `PIL.Image.Image` object containing the depth map
111
+ - `segment`: a `PIL.Image.Image` object containing the segmentation map, where the int number in each pixel represents the index of the object (ranges from 1 to 10, with 0 representing the background).
112
+ - `intrinsic_matrix`: a `numpy.ndarray` for the camera intrinsic matrix of each image
113
+ - `camera_pose`: a `numpy.ndarray` for the camera pose of each image
114
+
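
The `segment` encoding can be illustrated with a tiny synthetic map; a real annotation would be loaded from its `[scene_id]_[frame_id].png` with PIL, while the 4x4 array here is made up:

```python
import numpy as np

# A made-up 4x4 segmentation map standing in for a real annotation image.
segment = np.array([
    [0, 0, 1, 1],
    [0, 2, 2, 1],
    [0, 2, 2, 0],
    [0, 0, 0, 0],
])

# Pixel value 0 is the background; values 1..10 index the objects.
object_ids = np.unique(segment[segment > 0])
masks = {i: segment == i for i in object_ids}  # per-object boolean masks

print(f"objects present: {object_ids.tolist()}")   # [1, 2]
print(f"object 2 covers {masks[2].sum()} pixels")  # 4
```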

### Data Splits

The data is split into two subsets with different levels of difficulty. Both subsets are randomly divided into training, validation, and testing sets: the validation and testing sets each consist of 100 scenes, while the remaining scenes form the training set. Only the data in the testing sets contains segmentation annotations for evaluation.

OCTScenes-A contains 3200 scenes (`scene_id` from 0000 to 3199) and includes only the first 11 object types, with scenes consisting of 1 to 6 objects, making it comparatively smaller and less complex. Scenes with `scene_id` 0000-2999 are used for training, scenes 3000-3099 for validation, and scenes 3100-3199 for testing.

OCTScenes-B contains 5000 scenes (`scene_id` from 0000 to 4999) and includes all 15 object types, with scenes consisting of 1 to 10 objects, resulting in a larger and more complex dataset. Scenes with `scene_id` 0000-4799 are used for training, scenes 4800-4899 for validation, and scenes 4900-4999 for testing.
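
The split boundaries above can be expressed as a small helper; the function name and signature are our own, not part of the dataset's tooling:

```python
def split_of(scene_id: int, subset: str = "A") -> str:
    """Return the split of a scene in OCTScenes-A (default) or OCTScenes-B."""
    n_train = 3000 if subset == "A" else 4800
    if scene_id < n_train:
        return "train"
    if scene_id < n_train + 100:
        return "validation"
    return "test"

print(split_of(2999))       # train
print(split_of(3050))       # validation
print(split_of(3100))       # test
print(split_of(4899, "B"))  # validation
```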

<table align="center">
  <tr>
    <th style="text-align: center;">Dataset</th>
    <th colspan="3" style="text-align: center;">OCTScenes-A</th>
    <th colspan="3" style="text-align: center;">OCTScenes-B</th>
  </tr>
  <tr>
    <th style="text-align: center;">Resolution</th>
    <td align="center">640x480</td>
    <td align="center">256x256</td>
    <td align="center">128x128</td>
    <td align="center">640x480</td>
    <td align="center">256x256</td>
    <td align="center">128x128</td>
  </tr>
  <tr>
    <th style="text-align: center;">Split</th>
    <td align="center">train</td>
    <td align="center">validation</td>
    <td align="center">test</td>
    <td align="center">train</td>
    <td align="center">validation</td>
    <td align="center">test</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of scenes</th>
    <td align="center">3000</td>
    <td align="center">100</td>
    <td align="center">100</td>
    <td align="center">4800</td>
    <td align="center">100</td>
    <td align="center">100</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of object categories</th>
    <td colspan="3" align="center">11</td>
    <td colspan="3" align="center">15</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of objects in a scene</th>
    <td colspan="3" align="center">1~6</td>
    <td colspan="3" align="center">1~10</td>
  </tr>
  <tr>
    <th style="text-align: center;">Number of views in a scene</th>
    <td colspan="3" align="center">60</td>
    <td colspan="3" align="center">60</td>
  </tr>
</table>

## Dataset Creation

### Curation Rationale

OCTScenes was designed as a novel benchmark for unsupervised object-centric learning. It serves as a versatile real-world dataset that aims to address the scarcity of real-world datasets specifically tailored to this field.

### Source Data

#### Initial Data Collection and Normalization

A three-wheel omnidirectional robot equipped with an Orbbec Astra 3D camera was employed for data collection, which took place in a school conference room where a small wooden table was positioned on the floor and surrounded by baffles. Between 1 and 10 randomly selected objects were manually placed on the table without any stacking, and the data was collected directly from these visual scenes.

### Annotations

#### Annotation process

- Segmentation Annotation: We use [EISeg](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.8/EISeg), a high-performance interactive annotation tool for image segmentation, to label the segmentation maps. We manually labeled 6 images of each scene and used them to train PP-LiteSeg, a supervised real-time semantic segmentation model, with the [PaddleSeg](https://github.com/PaddlePaddle/PaddleSeg) framework to annotate the rest of the data. The annotated images were split into 90% for training and 10% for validation, achieving a mean Intersection over Union (mIoU) of 0.92 on the validation set.
- Intrinsic Matrix: We obtained the intrinsic matrix of the camera through camera calibration.
- Camera Pose: We obtained the camera pose of each image through 3D reconstruction with [COLMAP](https://github.com/colmap/colmap), which is commonly used to create real-world NeRF datasets.
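
To illustrate how the calibrated intrinsics and per-frame depth can be used together, here is a minimal back-projection sketch; the intrinsic values below are made-up placeholders, not the calibrated Orbbec Astra parameters:

```python
import numpy as np

# Placeholder intrinsic matrix: fx, fy on the diagonal, principal point (cx, cy).
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def backproject(u, v, depth, K):
    """Map pixel (u, v) with depth z to a 3D point in camera coordinates."""
    uv1 = np.array([u, v, 1.0])
    return depth * (np.linalg.inv(K) @ uv1)

point = backproject(320, 240, 1.5, K)  # principal point -> on the optical axis
print(point)  # x = 0, y = 0, z = 1.5
```

Combining such camera-frame points with the `camera_pose` field would place them in a common world frame across the 60 views of a scene.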

#### Who are the annotators?

Some annotations are manually labelled by the authors, while others are generated by the model.

### Personal and Sensitive Information

N/A

## Considerations for Using the Data

### Social Impact of Dataset

N/A

### Discussion of Biases

N/A

### Other Known Limitations

The main limitation of the dataset is its simplicity: it has a single background type and uncomplicated object shapes, most of which are symmetrical and lack the variation in appearance that arises when they are viewed from different perspectives. As a result, the object representations learned by a model are relatively simple, and simple modeling methods may produce better segmentation results than complex ones.

To overcome this issue and further enhance the dataset, we have devised a plan for the next version of OCTScenes. In future work, we will introduce a wider range of diverse and complex backgrounds, including tables of different types, patterns, and materials, as well as a greater variety of objects, particularly objects with asymmetric shapes, complex textures, and mixed colors, which will increase the complexity and diversity of the dataset.

## Additional Information

### Dataset Curators

The dataset was created by Yinxuan Huang, Tonglin Chen, Zhimeng Shen, Jinghao Huang, Bin Li, and Xiangyang Xue as members of the [Visual Intelligence Lab at Fudan University](https://github.com/FudanVI).

### Licensing Information

The dataset is available under the [CC BY-NC 4.0 license](https://creativecommons.org/licenses/by-nc/4.0/).

### Citation Information

```
@article{huang2023octscenes,
  title={OCTScenes: A Versatile Real-World Dataset of Tabletop Scenes for Object-Centric Learning},
  author={Huang, Yinxuan and Chen, Tonglin and Shen, Zhimeng and Huang, Jinghao and Li, Bin and Xue, Xiangyang},
  journal={arXiv preprint arXiv:2306.09682},
  year={2023}
}
```