insomnia7 committed · Commit 124b379 · verified · Parent(s): 1241580

Update README.md

---
license: mit
language:
- en
- zh
---
# Pretrained Models for SIU3R
We provide pretrained models for the panoptic segmentation task. For SIU3R initialization, we train a MASt3R backbone with an adapter on the COCO dataset.

# Preprocessed ScanNet Dataset for SIU3R Training
This dataset is a processed version of the ScanNet dataset, which is available at http://www.scan-net.org/. It is provided by WU-CVGL (https://github.com/WU-CVGL) for research purposes only.

The dataset is split into two parts: train and val. Both splits provide color images, depth images in millimeters (divide by 1000.0 to convert to meters), ground-truth camera-to-world (c2w) poses in txt files, ground-truth camera intrinsics in txt files, ground-truth annotations for 2D semantic segmentation, 2D instance segmentation, and 2D panoptic segmentation, and IoU overlap values between images in an iou.pt file. The annotations are provided in the formats described below:
- 2D semantic segmentation: a single-channel uint8 image with pixel-wise class labels. The classes are defined as follows:
```yaml
0: "unlabeled",
1: "wall",
2: "floor",
3: "cabinet",
4: "bed",
5: "chair",
6: "sofa",
7: "table",
8: "door",
9: "window",
10: "bookshelf",
11: "picture",
12: "counter",
13: "desk",
14: "curtain",
15: "refrigerator",
16: "shower curtain",
17: "toilet",
18: "sink",
19: "bathtub",
20: "otherfurniture",
```
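For convenience, the same table can be expressed as a Python mapping (a small sketch; the helper name is ours, not part of the dataset tooling):

```python
# Class table for the 2D semantic segmentation labels listed above.
SEMANTIC_CLASSES = {
    0: "unlabeled", 1: "wall", 2: "floor", 3: "cabinet", 4: "bed",
    5: "chair", 6: "sofa", 7: "table", 8: "door", 9: "window",
    10: "bookshelf", 11: "picture", 12: "counter", 13: "desk",
    14: "curtain", 15: "refrigerator", 16: "shower curtain",
    17: "toilet", 18: "sink", 19: "bathtub", 20: "otherfurniture",
}

def class_name(pixel_value: int) -> str:
    """Map a uint8 pixel value from a semantic label image to its class name."""
    return SEMANTIC_CLASSES.get(pixel_value, "unlabeled")
```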
- 2D instance segmentation: a 3-channel uint8 image, encoded as follows.
The segment id is defined as 1000 * semantic_label + instance_label. Note that this semantic_label is NOT the same as in 2D semantic segmentation. The instance_label is a unique id for each instance within the same semantic class.
The semantic labels are defined as follows:
```yaml
0: "unlabeled",
1: "cabinet",
2: "bed",
3: "chair",
4: "sofa",
5: "table",
6: "door",
7: "window",
8: "bookshelf",
9: "picture",
10: "counter",
11: "desk",
12: "curtain",
13: "refrigerator",
14: "shower curtain",
15: "toilet",
16: "sink",
17: "bathtub",
18: "otherfurniture",
```
The segment_id is then encoded into the 3-channel image as follows:
```yaml
R: segment_id % 256,
G: segment_id // 256,
B: segment_id // 256 // 256.
```
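As a concrete illustration, this encoding can be applied and inverted with a few lines of Python (helper names are ours; the sketch assumes segment ids below 256 * 256, which holds for 1000 * semantic_label + instance_label with these label ranges):

```python
def encode_segment_id(semantic_label: int, instance_label: int) -> tuple[int, int, int]:
    """Pack labels into the (R, G, B) uint8 channels described above."""
    segment_id = 1000 * semantic_label + instance_label
    return (segment_id % 256, segment_id // 256 % 256, segment_id // 256 // 256)

def decode_segment_id(r: int, g: int, b: int) -> tuple[int, int]:
    """Recover (semantic_label, instance_label) from the RGB channels."""
    segment_id = r + 256 * g + 256 * 256 * b
    return segment_id // 1000, segment_id % 1000
```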
- 2D panoptic segmentation: a 3-channel uint8 image, encoded in the same way as for instance segmentation, except that the semantic labels here are the same as in 2D semantic segmentation.

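Because the instance label set omits the two "stuff" classes ("wall" and "floor"), each instance-segmentation semantic label is offset by 2 from the corresponding semantic/panoptic label: "cabinet" is 1 in the instance set and 3 in the semantic set, and so on. This matches the `instance_label_id`/`panoptic_label_id` pairs in the referring-segmentation annotations below. A minimal sketch of the correspondence (variable names are ours):

```python
# "Thing" classes shared by the instance and semantic/panoptic label sets.
THING_CLASSES = [
    "cabinet", "bed", "chair", "sofa", "table", "door", "window",
    "bookshelf", "picture", "counter", "desk", "curtain", "refrigerator",
    "shower curtain", "toilet", "sink", "bathtub", "otherfurniture",
]

# Instance-segmentation semantic label i (1-based index into THING_CLASSES)
# maps to semantic/panoptic label i + 2, because the semantic label set
# additionally contains the "stuff" classes "wall" (1) and "floor" (2).
instance_to_panoptic = {i: i + 2 for i in range(1, len(THING_CLASSES) + 1)}
```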
- iou.pt: this file stores the IoU overlap values between images as a tensor of shape (N, N), where N is the maximum image index in the dataset (note that we remove images whose pose is unavailable or whose semantic annotation is blank). The value iou[i, j] is computed by unprojecting depth[i] into 3D space and projecting the points into image j's camera coordinates; the detailed calculation can be found in the code.

We also provide image pairs for validation and testing, stored in the val_pair.json file. The image pairs are defined as follows:
```json
[
  {
    "scan": "scene0011_00",
    "context_ids": [
      1727,
      1744
    ],
    "target_ids": [
      1727,
      1729,
      1732,
      1738,
      1739,
      1744
    ],
    "iou": 0.38273486495018005
  },
  {
    "scan": "scene0011_00",
    "context_ids": [
      255,
      337
    ],
    "target_ids": [
      255,
      267,
      310,
      325,
      331,
      337
    ],
    "iou": 0.47921222448349
  },
  ...
]
```
The "scan" field is the scan name, "context_ids" lists the ids of the context images, "target_ids" lists the ids of the target images, and "iou" is the IoU overlap between the two context images. The context images are used as input to the model, and the target images are used as ground truth for evaluation.
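For illustration, a pair entry can be consumed like this (the inline sample mirrors the first entry above; in practice you would `json.load` the val_pair.json file instead):

```python
import json

# One entry mirroring the val_pair.json structure shown above.
pairs = json.loads("""
[
  {
    "scan": "scene0011_00",
    "context_ids": [1727, 1744],
    "target_ids": [1727, 1729, 1732, 1738, 1739, 1744],
    "iou": 0.38273486495018005
  }
]
""")

for pair in pairs:
    # Context images are the model inputs; target images are the
    # ground-truth views used for evaluation.
    print(pair["scan"], pair["context_ids"], "->", pair["target_ids"], pair["iou"])
```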
For the referring segmentation task, we provide annotations in train_refer_seg_data.json and val_refer_seg_data.json. The annotations are provided in the format described below:
```json
{
  "scene0011_00": {
    "2": {
      "object_name": "kitchen_cabinets",
      "instance_label_id": 1,
      "panoptic_label_id": 3,
      "frame_id": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96, ...],
      "text": ["there are brwon wooden cabinets. placed on the side of the kitchen.", "there is a set of bottom kitchen cabinets in the room. it has a microwave in the middle of it.", "there is a set of bottom kitchen cabinets in the room. there is a microwave in the middle of them.", "brown kitchen cabinets, the top is decorated with marble layers it is placed on the left in the direction of view. the right are 4 brown chairs.", "the kitchen cabinets are located along the right wall. they are below the counter top. the kitchen cabinets are located to the right of the table and chairs."],
      "text_token": [
        [49406, 997, 631, 711, 1749, 9057, 33083, 269, 9729, 525, 518, 1145, 539, 518, 4485, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 997, 533, 320, 1167, 539, 5931, 4485, 33083, 530, 518, 1530, 269, 585, 791, 320, 24240, 530, 518, 3694, 539, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 997, 533, 320, 1167, 539, 5931, 4485, 33083, 530, 518, 1530, 269, 997, 533, 320, 24240, 530, 518, 3694, 539, 1180, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 2866, 4485, 33083, 267, 518, 1253, 533, 15917, 593, 13071, 15900, 585, 533, 9729, 525, 518, 1823, 530, 518, 5407, 539, 1093, 269, 518, 1155, 631, 275, 2866, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 518, 4485, 33083, 631, 5677, 2528, 518, 1155, 2569, 269, 889, 631, 3788, 518, 7352, 1253, 269, 518, 4485, 33083, 631, 5677, 531, 518, 1155, 539, 518, 2175, 537, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
      ]
    },
    "3": {
      "object_name": "table",
      "instance_label_id": 5,
      "panoptic_label_id": 7,
      "frame_id": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72, 73, 74, 75, 76, 77, 78, 79, ...],
      "text": ["this is a long table. there are three brown chairs behind it.", "this is a long table. it is surrounded by chairs.", "there is a large table in the room. it has ten chairs pulled up to it.", "a brown table, placed in the middle of the room, on the left is 4 brown chairs, on the right are 4 brown chairs. the front is a brown door with light shining on.", "this is a brown table. it is surrounded by quite a few matching chairs."],
      "text_token": [
        [49406, 589, 533, 320, 1538, 2175, 269, 997, 631, 2097, 2866, 12033, 2403, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 589, 533, 320, 1538, 2175, 269, 585, 533, 13589, 638, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 997, 533, 320, 3638, 2175, 530, 518, 1530, 269, 585, 791, 2581, 12033, 8525, 705, 531, 585, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 320, 2866, 2175, 267, 9729, 530, 518, 3694, 539, 518, 1530, 267, 525, 518, 1823, 533, 275, 2866, 12033, 267, 525, 518, 1155, 631, 275, 2866, 12033, 269, 518, 2184, 533, 320, 2866, 2489, 593, 1395, 10485, 525, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0],
        [49406, 589, 533, 320, 2866, 2175, 269, 585, 533, 13589, 638, 4135, 320, 1939, 11840, 12033, 269, 49407, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]
      ]
    },
    ...
  }
  ...
}
```
In this example, "scene0011_00" is the scan name and "2" is the object id (which is also the instance_label). The "object_name" field is the object name; "instance_label_id" is the semantic label id used in the instance segmentation task; "panoptic_label_id" is the semantic label id used in the panoptic segmentation task; "frame_id" lists the ids of the frames that contain this object; "text" contains the referring text descriptions; and "text_token" contains the texts tokenized by OpenCLIP (https://github.com/mlfoundations/open_clip). Note that we use the `convnext_large_d_320` model (https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft-soup). The referring segmentation task is to segment the referred object in the image based on the text description. This part of the data is obtained from the UniSeg3D repository (https://github.com/dk-liang/UniSeg3D); thanks for their great work.
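For illustration, the nested annotation structure can be flattened into per-description training samples like this (the inline sample is a trimmed copy of the entry above with `text_token` omitted; variable names are ours):

```python
import json

# A trimmed entry mirroring train/val_refer_seg_data.json.
refer_data = json.loads("""
{
  "scene0011_00": {
    "2": {
      "object_name": "kitchen_cabinets",
      "instance_label_id": 1,
      "panoptic_label_id": 3,
      "frame_id": [0, 1, 2],
      "text": ["there are brwon wooden cabinets. placed on the side of the kitchen."]
    }
  }
}
""")

# One (scan, object_id, object_name, referring text) tuple per description.
samples = [
    (scan, obj_id, obj["object_name"], text)
    for scan, objects in refer_data.items()
    for obj_id, obj in objects.items()
    for text in obj["text"]
]
```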