Tasks: Image Segmentation
Languages: English
Size: 10K<n<100K
Tags: object-centric learning
### Dataset Summary
The OCTScenes dataset is a versatile real-world dataset of tabletop scenes for object-centric learning. It contains 5000 tabletop scenes built from a total of 15 objects, and each scene is captured in 60 frames covering a 360-degree perspective. It supports the evaluation of object-centric learning methods in single-image, video, and multi-view settings.
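Since each scene sweeps a full 360 degrees in 60 frames, consecutive frames are nominally 6 degrees apart. The sketch below illustrates this frame-to-angle relationship; the uniform-sweep assumption and the `frame_azimuth` helper are illustrative, not documented dataset metadata.

```python
# Illustrative sketch only: 60 frames per scene covering a 360-degree
# sweep implies a nominal 6-degree spacing between consecutive frames.
# The exact frame-to-angle mapping is an assumption.
FRAMES_PER_SCENE = 60

def frame_azimuth(frame_idx: int) -> float:
    """Nominal azimuth (in degrees) of a frame under a uniform sweep."""
    return (360.0 / FRAMES_PER_SCENE) * frame_idx

# Azimuths for one full scene: 0.0, 6.0, ..., 354.0 degrees.
angles = [frame_azimuth(i) for i in range(FRAMES_PER_SCENE)]
```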
### Supported Tasks and Leaderboards
- `object-centric learning`: The dataset can be used to train a model for [object-centric learning](https://arxiv.org/abs/2202.07135), which aims to learn compositional scene representations in an unsupervised manner. Segmentation performance is measured by Adjusted Mutual Information (AMI), Adjusted Rand Index (ARI), and mean Intersection over Union (mIoU). Two variants of AMI and ARI are used to evaluate segmentation more thoroughly: AMI-A and ARI-A are computed over all pixels in the image and measure how accurately the different layers of visual concepts (both objects and the background) are separated, while AMI-O and ARI-O are computed only over pixels in object regions and focus on how accurately different objects are separated from one another. Reconstruction performance is measured by Mean Squared Error (MSE) and Learned Perceptual Image Patch Similarity (LPIPS). Success on this task is typically indicated by high AMI, ARI, and mIoU, and low MSE and LPIPS.
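The "-A" versus "-O" metric variants described above can be sketched with scikit-learn's clustering metrics. This is a minimal illustration, not the official OCTScenes evaluation code: the label-map convention (integer masks with background label 0) and the `segmentation_scores` helper are assumptions.

```python
# Hedged sketch of the AMI/ARI metric variants: "-A" scores use every
# pixel, "-O" scores use only pixels that belong to objects in the
# ground truth. Mask format (2-D integer label maps, background = 0)
# is an assumption, not the official evaluation protocol.
import numpy as np
from sklearn.metrics import adjusted_mutual_info_score, adjusted_rand_score

def segmentation_scores(true_mask, pred_mask, background_label=0):
    """Compute AMI/ARI over all pixels (-A) and object pixels only (-O)."""
    t = true_mask.ravel()
    p = pred_mask.ravel()
    # "-A" variants: the entire image, background included.
    ami_a = adjusted_mutual_info_score(t, p)
    ari_a = adjusted_rand_score(t, p)
    # "-O" variants: restrict to ground-truth object pixels.
    obj = t != background_label
    ami_o = adjusted_mutual_info_score(t[obj], p[obj])
    ari_o = adjusted_rand_score(t[obj], p[obj])
    return {"AMI-A": ami_a, "ARI-A": ari_a, "AMI-O": ami_o, "ARI-O": ari_o}

# Toy example: the prediction is a perfect relabeling of the ground
# truth (0->0, 1->2, 2->1), so every score is 1.0.
true_mask = np.array([[0, 0, 1, 1],
                      [0, 2, 2, 1]])
pred_mask = np.array([[0, 0, 2, 2],
                      [0, 1, 1, 2]])
scores = segmentation_scores(true_mask, pred_mask)
```

Because AMI and ARI are permutation-invariant, a model is not penalized for assigning different slot indices than the ground-truth labels.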
In the OCTScenes-A dataset, scenes 0–3099 (without segmentation annotations) are for training, while scenes 3100–3199 (with segmentation annotations) can be used for testing. In the OCTScenes-B dataset, scenes 0–4899 (without segmentation annotations) are for training, while scenes 4900–4999 (with segmentation annotations) can be used for testing.
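The index ranges above can be encoded directly. In this sketch, only the scene-index ranges come from the dataset card; the `scene_00000`-style directory names and the `scene_dirs` helper are hypothetical.

```python
# Hedged sketch of the train/test split described above. The index
# ranges are from the dataset card; the "scene_00000" naming scheme
# is an assumption for illustration.
SPLITS = {
    "OCTScenes-A": {"train": range(0, 3100), "test": range(3100, 3200)},
    "OCTScenes-B": {"train": range(0, 4900), "test": range(4900, 5000)},
}

def scene_dirs(subset: str, split: str) -> list[str]:
    """Return hypothetical per-scene directory names for a subset/split."""
    return [f"scene_{i:05d}" for i in SPLITS[subset][split]]

train_a = scene_dirs("OCTScenes-A", "train")  # 3100 unannotated scenes
test_b = scene_dirs("OCTScenes-B", "test")    # 100 annotated scenes
```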