Workshop at the Conference on Computer Vision and
Pattern Recognition to benchmark automated computer vision
capabilities for localizing and scoring the degree of damage
to buildings after natural disasters [3]. In this challenge,
participants had to train their models offline and upload their
predictions on a single downloadable, unlabeled test dataset
for evaluation and display on the public leaderboard. While
this challenge provided a great
opportunity for AI researchers to weigh in on damage
assessment tasks, it assumed no constraints on the level of
computational resources available to participants for model
training and did not strictly prevent the potential hand-labeling
and use of the test datasets in the training phase. The winning
solutions used large ensembles of models, and although they
performed well on the test set, they were not optimized for
inference runtime and require a prohibitively large amount
of compute resources to be run on large amounts of satellite
imagery on demand during disaster events. For example, the
first-place winner proposed an ensemble of four different
models, requiring 24 inference passes for each input.
In this study, we propose a single model that predicts
both building edges and damage levels and can be run
efficiently on large amounts of input imagery. The proposed
multitask model includes a building segmentation module
and a damage classification module. We use a model
architecture similar to those proposed by previous studies on
building damage assessment [4], [5]; however, we use a simpler encoder and do
not include attention layers. We evaluate the performance of
our model extensively for several different splits of the dataset
to assess its robustness to unseen disaster scenarios. From an
operational perspective, the model’s runtime is of paramount
importance. Thus, we benchmark the inference speed of our
model against the winning solutions in the xView2 competition
and the existing models deployed by our stakeholder. We show
that our model runs three times faster than the fastest xView2
winning solution and over 50 times faster than the slowest one,
the first-place solution. The baseline solution available to
our stakeholder consists of two separate models for building
segmentation and damage classification [6]. We show that
our proposed approach runs 20% faster than the stakeholder's
baseline model and also performs the task end-to-end in a
more automated way, which can improve their field operations
and deployment.
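As a rough illustration of the shared-encoder, two-head design described above, the sketch below pairs a small convolutional encoder with a building segmentation head and a damage classification head that compares pre- and post-disaster features. The layer sizes, class count, and names are illustrative assumptions, not the authors' architecture:

```python
import torch
import torch.nn as nn

class MultitaskDamageNet(nn.Module):
    """Illustrative multitask model: one shared encoder, two heads
    (binary building segmentation and per-pixel damage classification)."""

    def __init__(self, n_damage_classes=4):
        super().__init__()
        # Small convolutional encoder shared by both tasks (assumed sizes)
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Segmentation head operates on pre-disaster features only
        self.seg_head = nn.Conv2d(32, 1, kernel_size=1)
        # Damage head compares concatenated pre- and post-disaster features
        self.dmg_head = nn.Conv2d(64, n_damage_classes, kernel_size=1)

    def forward(self, pre, post):
        f_pre = self.encoder(pre)
        f_post = self.encoder(post)
        seg = self.seg_head(f_pre)                               # (B, 1, H, W)
        dmg = self.dmg_head(torch.cat([f_pre, f_post], dim=1))   # (B, C, H, W)
        return seg, dmg

# One forward pass on a dummy pre/post image pair
pre = torch.randn(1, 3, 64, 64)
post = torch.randn(1, 3, 64, 64)
model = MultitaskDamageNet()
seg, dmg = model(pre, post)
```

Because both heads share a single encoder pass per image, inference cost stays close to that of a single-task model, which is the property that matters for the runtime comparisons above.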
Finally, we develop a web-based visualizer that can display
the before and after imagery along with the model’s building
damage predictions on a custom map. This is an important
step in deploying a model for real-world use cases. Even
a perfect building damage assessment model will not be
practically useful if there is no mechanism for running
that model on new imagery and communicating the results
to decision-makers who are responding to live events. A web-
based visualizer allows anyone to see both the imagery and the
predictions for any type of disaster without GIS software.
II. RELATED WORK
Convolutional neural networks (CNNs) have been used
for change detection tasks in satellite imagery for disaster
response and in other domains, including but not limited to
changes in infrastructure. [7] proposed using pre-trained
CNN features extracted through different convolutional layers
and concatenation of feature maps for pre- and post-event
images. The authors used pixel-wise Euclidean distance
to compute change maps and thresholding methods to
conduct classification. [8] leverages Hurricane Harvey data, in
particular, to train CNNs to classify images as damaged or
undamaged. While they report very high accuracy numbers,
they did not focus on detecting building edges and used a
binary damage scale at the image-frame level. A Siamese
CNN approach was proposed in [9] to extract features directly
from the images, pixel by pixel. To reduce the influence of
imbalance between changed and unchanged pixels, the authors
used weighted contrastive loss. The unique property of the
extracted features was that the feature vectors associated with
changed pixel pairs were far away from each other in the
feature space, whereas the ones of unchanged pixel pairs
were close. Fully convolutional Siamese networks for change
detection were introduced in [4] and were proposed by other
studies as well [10], [11].
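The weighted contrastive loss used in [9] can be sketched as follows; the margin and class weights here are illustrative assumptions, not the values used in that work:

```python
import numpy as np

def weighted_contrastive_loss(d, y, margin=2.0, w_changed=0.7, w_unchanged=0.3):
    """Weighted contrastive loss over pixel-wise feature distances.
    d: Euclidean distances between pre/post feature vectors per pixel.
    y: labels, 1 for changed pixels, 0 for unchanged.
    Unchanged pairs are pulled together (small d) and changed pairs
    are pushed at least `margin` apart; the weights counter the heavy
    imbalance between changed and unchanged pixels."""
    d = np.asarray(d, dtype=float)
    y = np.asarray(y, dtype=float)
    pull = (1.0 - y) * w_unchanged * d ** 2                   # unchanged: penalize distance
    push = y * w_changed * np.maximum(margin - d, 0.0) ** 2   # changed: penalize closeness
    return float(np.mean(pull + push))

# An unchanged pair at zero distance contributes no loss, while a
# changed pair at zero distance is penalized by w_changed * margin**2.
loss_same = weighted_contrastive_loss([0.0], [0])   # → 0.0
loss_diff = weighted_contrastive_loss([0.0], [1])   # → 0.7 * 2.0**2 = 2.8
```

The asymmetric weights implement the imbalance handling described above: the rarer changed pixels receive a larger weight so they are not dominated by the abundant unchanged pixels during training.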
In [4], convolutional Siamese networks are trained end-to-
end from scratch using only the available change detection
datasets. The authors proposed fully convolutional encoder-
decoder networks that use the skip connection concept.
[12] presented an improved UNet++ model with dense skip
connections to learn multiscale and different semantic levels
of visual feature representations. Attention layers have been
proposed for general change detection networks [13] as well
as building damage assessment tasks as presented in [5]. Also,
[14] proposes an attention-based two-stream high-resolution
network to unify the building localization and classification
tasks into an end-to-end model by replacing the residual
blocks in HRNet [15] with attention-based residual blocks
to improve the model’s performance. RescueNet, an end-
to-end model that handles both segmentation and damage
classification tasks, was proposed in [16]. It was trained using
a localization-aware loss function that consists of a binary
cross-entropy loss and dice loss for building segmentation and
a foreground-only selective categorical cross-entropy loss for
damage classification. [6] explored the applicability of CNN-
based models under scenarios similar to operational emergency
conditions, with unseen data and time constraints. [17]
proposed a dual-task Siamese transformer
model to capture non-local features. Their model adopts