separately to the UNet module of our proposed method |
using shared weights. We use the embedding layers from the |
encoder part of the UNet architecture for pre- and post-disaster |
images to learn about the changes. In other words, the second |
module of our model is a separate decoder that conducts a |
damage classification task on the subtracted embedding layers |
using several convolutional layers. This idea is based on the |
approach proposed in [4]. Figure 3 demonstrates the overall |
schema of the architecture. Our UNet architecture has five |
convolution blocks for the encoder part and four convolution |
blocks for the decoder. Each downsampling block consists |
of convolution, batch normalization, ReLU, and max-pooling |
layers. Each upsampling block consists of upsampling with |
bilinear interpolation, convolution, batch normalization, and |
ReLU layers. For the damage decoder, the same upsampling
blocks are applied to the subtracted and concatenated representations
at each step. The details of the layers can be found in our |
code repository made publicly available1. The output of the |
damage classification mask has five channels for four damage |
levels and one background label. We use weighted binary |
cross-entropy loss for building segmentation and multi-label |
cross-entropy loss for damage classification.

1 https://github.com/microsoft/building-damage-assessment-cnn-siamese

Fig. 3. We use a Siamese U-Net model architecture where the pre- and post-disaster imagery are fed into an encoder-decoder style segmentation model (U-Net) with shared weights (blocks with the same color in the figure share weights). The features generated by the segmentation encoder from both inputs are subtracted and passed to an additional damage classification decoder that generates per-pixel damage level predictions. The weights of the damage classification decoder can be fine-tuned for specific disaster types, while relying on building segmentation output from the building decoder.

In the building segmentation loss function shown in Equation 1, ω_{s,1} and ω_{s,0} denote the weights on building pixels and background pixels, respectively. The subscript s denotes the segmentation task, y_s is the ground-truth label for each pixel, and p_s is the predicted probability. For both the pre- and post-disaster image frames, the loss functions L_{s,pre} and L_{s,post} are defined identically, and the UNet model shares weights across these two components.

L_{s,pre} = L_{s,post} = −(ω_{s,1} y_s log p_s + ω_{s,0} (1 − y_s) log(1 − p_s))   (1)
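As a concrete illustration, Equation 1 can be written out in code. The following is a minimal NumPy sketch, not the authors' implementation; the function name, tensor shapes, and the default weight values are illustrative assumptions (the weights [15, 1] are the stage-1 values reported later in the text).

```python
import numpy as np

def weighted_bce(y, p, w1=15.0, w0=1.0, eps=1e-7):
    """Per-pixel weighted binary cross-entropy as in Equation 1.

    y: ground-truth building mask (1 = building, 0 = background).
    p: predicted building probabilities, same shape as y.
    w1 weights building pixels (omega_{s,1}); w0 weights background
    pixels (omega_{s,0}).
    """
    p = np.clip(p, eps, 1.0 - eps)  # guard against log(0)
    loss = -(w1 * y * np.log(p) + w0 * (1.0 - y) * np.log(1.0 - p))
    return loss.mean()

# The same function serves both the pre- and post-disaster frames,
# mirroring the shared-weight UNet branches.
y = np.array([[1.0, 0.0], [0.0, 1.0]])
p = np.array([[0.9, 0.2], [0.1, 0.8]])
l_pre = weighted_bce(y, p)
l_post = weighted_bce(y, p)
```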
In Equation 2, ω_{d,c} denotes the weight on each damage class c. We use subscript d to denote the damage classification task; y_d is the damage ground-truth label for each pixel and p_d is the predicted probability. The damage loss, L_dmg, is calculated only for pixels predicted as buildings, i.e., where ŷ_s = 1.

L_dmg = −Σ_{c=1}^{5} ω_{d,c} (y_d(c) log p_d(c)),  if ŷ_s = 1   (2)
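A possible NumPy rendering of Equation 2 is sketched below; the function name, array layout, and the normalization over masked pixels are assumptions for illustration (the per-class weights shown are the ones given later in the text, with channels 0..4 here corresponding to c = 1..5 in Equation 2).

```python
import numpy as np

def damage_loss(y_onehot, p, weights, building_mask):
    """Weighted multi-class cross-entropy over the 5 damage channels,
    evaluated only where the building head predicts a building pixel.

    y_onehot:      (H, W, 5) one-hot ground-truth damage labels
    p:             (H, W, 5) predicted class probabilities
    weights:       length-5 per-class weights (omega_{d,c})
    building_mask: (H, W) boolean array, True where y_hat_s == 1
    """
    eps = 1e-7
    per_pixel = -(np.asarray(weights, dtype=float)
                  * y_onehot * np.log(np.clip(p, eps, 1.0))).sum(axis=-1)
    per_pixel = np.where(building_mask, per_pixel, 0.0)  # skip non-buildings
    return per_pixel.sum() / max(building_mask.sum(), 1)

w = [1, 35, 70, 150, 120]  # per-class weights from the text
y_onehot = np.zeros((1, 2, 5)); y_onehot[0, 0, 1] = 1; y_onehot[0, 1, 0] = 1
p = np.full((1, 2, 5), 0.2); p[0, 0, 1] = 0.5
mask = np.array([[True, False]])   # only the first pixel is a building
loss = damage_loss(y_onehot, p, w, mask)
```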
Equation 3 gives the combined weighted loss function for the tasks along with their corresponding weights.

L_total = ω_{s,pre} L_{s,pre} + ω_{s,post} L_{s,post} + ω_d L_dmg   (3)
V. EXPERIMENTS AND RESULTS
For our first experiment, we randomly divide the tiles of each
disaster in the xBD dataset, keeping the original 1024x1024-pixel
tiles, into train/validation/test splits at a ratio of 80:10:10
to train and evaluate the performance of our model. With this
way of splitting the dataset, it is
possible to have different tiles from the same disaster incident |
across the training, validation, and test sets. To reduce the |
size of the input images, we further crop each tile into 20
patches of 256x256 pixels, yielding 176,700 training and 22,220
validation patches. We conduct tile-wise normalization on the
pre-disaster and post-disaster imagery separately. We also apply
random horizontal and vertical
flipping during the training to reduce overfitting. |
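The preprocessing above can be sketched as follows. This is a minimal NumPy illustration, not the released pipeline: the exact normalization statistic and how the 20 crops per tile are placed are not specified in the text, so per-tile standardization and random placement are assumptions here.

```python
import numpy as np

def normalize_tile(tile):
    """Tile-wise normalization: standardize each tile on its own
    statistics (one plausible reading of 'tile-wise normalization')."""
    mean = tile.mean(axis=(0, 1), keepdims=True)
    std = tile.std(axis=(0, 1), keepdims=True) + 1e-8
    return (tile - mean) / std

def random_crops(tile, n=20, size=256, rng=None):
    """Cut n size-by-size patches from a 1024x1024 tile. Since a 4x4
    grid only yields 16 patches, 20 patches implies overlap; random
    placement here is an illustrative assumption."""
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = tile.shape[:2]
    patches = []
    for _ in range(n):
        y = rng.integers(0, h - size + 1)
        x = rng.integers(0, w - size + 1)
        patches.append(tile[y:y + size, x:x + size])
    return patches

tile = np.random.default_rng(1).random((1024, 1024, 3))
pre = normalize_tile(tile)    # pre- and post-disaster tiles are
patches = random_crops(pre)   # normalized separately
```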
We observed that it is quite challenging to train the entire |
model from scratch for both tasks simultaneously as the |
performance of the building segmentation step impacts the |
performance of the damage classification task significantly. As |
such, we train the model sequentially based on two different |
sets of weights. First, we train the building segmentation |
module by setting the weight for damage classification as |
zero and setting the weights for the UNet in the loss function |
equal to 0.5 for both pre-disaster and post-disaster building |
segmentation tasks. We also set weights for building pixels |
equal to 15 and background pixels equal to 1 as there |
is a significant imbalance between the number of pixels |
across these two classes. In other words, [ω_{s,pre}, ω_{s,post}, ω_d] =
[0.5, 0.5, 0] and [ω_{s,c=0}, ω_{s,c=1}] = [1, 15]. Label c = 0 denotes
background pixels and label c = 1 denotes building pixels.
Once we get reasonable performance on the validation |
set for the first task, we freeze the parameters of the |
UNet and we start training the model for the second task, |
i.e., damage classification. Thus, we set the weights in the |
loss function for pre-disaster and post-disaster segmentation |
task as zero and we set the damage classification task |
equal to 1. Due to high imbalance across different damage |
classes, we assign higher weights to the major-damage class
(label = 3) and the destroyed class (label = 4). In other words,
[ω_{d,c=0}, ω_{d,c=1}, ω_{d,c=2}, ω_{d,c=3}, ω_{d,c=4}] = [1, 35, 70, 150, 120]
for the damage classification task and [ω_{s,pre}, ω_{s,post}, ω_d] =
[0, 0, 1] for building segmentation. Label d = 0 denotes
background pixels, and labels d = 1 to d = 4 denote damage
levels scaled from not-damaged to destroyed.
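The two-stage training schedule and the combined loss of Equation 3 can be summarized in code. The dictionary layout and function name below are illustrative assumptions; the numeric weights are the values stated in the text.

```python
# Loss-weight settings for the two sequential training stages.
STAGE1 = {"w_s_pre": 0.5, "w_s_post": 0.5, "w_d": 0.0,
          "seg_class_w": [1, 15]}                # c = 0 (bg), c = 1 (building)
STAGE2 = {"w_s_pre": 0.0, "w_s_post": 0.0, "w_d": 1.0,
          "dmg_class_w": [1, 35, 70, 150, 120]}  # d = 0 .. d = 4

def total_loss(l_s_pre, l_s_post, l_dmg, cfg):
    """Equation 3: weighted combination of the three task losses."""
    return (cfg["w_s_pre"] * l_s_pre
            + cfg["w_s_post"] * l_s_post
            + cfg["w_d"] * l_dmg)

# Stage 1 optimizes only building segmentation; stage 2 freezes the
# UNet and optimizes only the damage classification decoder.
stage1_loss = total_loss(2.0, 4.0, 10.0, STAGE1)
stage2_loss = total_loss(2.0, 4.0, 10.0, STAGE2)
```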
Since our model handles two tasks, we report performance
results separately for each task. The results for the tile-wise
random split are shown in the
first row of Table III where the model is evaluated both |
on the validation set and test set. Columns in the table |
are named BLD-1, DMG-0, DMG-1, DMG-2, DMG-3, and |
DMG-mean. The BLD-1 column denotes the F1 score for
class 1, which indicates building pixels. DMG-0, DMG-1,