DMG-2, and DMG-3 indicate pixel-wise F1 scores for the no-damage, minor-damage, major-damage, and destroyed classes, respectively. DMG-mean denotes the harmonic mean of the F1 scores across all damage levels, computed with the following equation.
\[
\mathrm{F1}_{\mathrm{dmg}} = \frac{4}{\sum_{c=1}^{4} \frac{1}{\mathrm{F1}_c + \epsilon}} \tag{4}
\]
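As a concrete illustration, the harmonic mean in Eq. (4) can be computed in a few lines; the exact value of ε is an assumption, since the paper does not state it.

```python
# Harmonic mean of per-class F1 scores as in Eq. (4).
# eps guards against division by zero when a class F1 is 0;
# its value here is an assumption (the paper does not state it).
def harmonic_f1(per_class_f1, eps=1e-6):
    n = len(per_class_f1)
    return n / sum(1.0 / (f + eps) for f in per_class_f1)

# DMG-0..DMG-3 scores from the random-split row of Table III
dmg_mean = harmonic_f1([0.89, 0.43, 0.54, 0.73])  # ~0.60
```

The harmonic mean penalizes weak classes more heavily than the arithmetic mean, so a single poorly predicted damage level pulls DMG-mean down sharply.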
As shown in the columns for the extreme classes, i.e., DMG-0 (no-damage) and DMG-3 (destroyed) in Table III, we observe superior performance compared to the DMG-1 and DMG-2 columns for the minor-damage and major-damage classes, owing to the stronger signal for the extreme classes. We trained the model on one Nvidia Tesla V100 GPU with 32 GB of memory, and training took six days to complete. The Adam optimization algorithm was used, with the learning rate and batch size set to 0.001 and 32, respectively.
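For concreteness, a minimal single-parameter Adam update step is sketched below with the stated learning rate; the β₁, β₂, and ε values are the standard defaults from Kingma and Ba and are assumptions here, as the paper only specifies the learning rate and batch size.

```python
# Minimal single-parameter Adam update with lr = 0.001 as in the paper.
# beta1, beta2, and eps are the standard defaults (an assumption).
def adam_step(theta, grad, m, v, t, lr=0.001,
              beta1=0.9, beta2=0.999, eps=1e-8):
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)              # bias correction
    v_hat = v / (1 - beta2 ** t)
    theta = theta - lr * m_hat / (v_hat ** 0.5 + eps)
    return theta, m, v
```

On the first step (t = 1) the bias corrections cancel the moment decay exactly, so the update magnitude is approximately the learning rate times the sign of the gradient.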
Fig. 4. Example model predictions at three locations with varying amounts of observed damage.
Fig. 5. Legend for the damage-level colors shown in Figures 2 and 4.
Figure 4 shows the predicted building polygons along with their predicted labels. In the second row, the model was able to capture two buildings missing from the ground-truth mask. Green, orange, purple, and dark pink indicate the no-damage, minor-damage, major-damage, and destroyed classes, respectively; see Fig. 5 for the legend.
The baseline model available to our stakeholder achieves an F1 score of 0.64 on the test set for building segmentation, which is inferior to our result of 0.74 shown in Table III. For the stakeholder's baseline damage-classification performance, we do not have access to results on a comparable data split to report here. We also compare our model's performance against the baseline model presented in [26] in Table II. Our proposed solution demonstrates a significant improvement on the damage classification task.
TABLE II
COMPARISON WITH THE BASELINE MODEL PRESENTED IN [26]. BOTH RESULTS ARE BASED ON TRAINING MODELS ON THE xBD TIER 1 DATASET.

Model    | BLD  | DMG
---------|------|-----
Baseline | 0.79 | 0.03
Ours     | 0.74 | 0.58
VI. MODEL ROBUSTNESS TO UNSEEN DISASTERS
To assess the robustness of the model performance to unseen
disasters, we conduct four additional experiments outlined in
Table III. To that end, each time, we leave either the Joplintornado or the Nepal flooding out for testing purposes and
we train and validate the model based on the random split of
the remaining data. Additionally, to see the impact of training
damage classification only based on a specific type of disaster,
we conduct two additional experiments: (I) when damage
classification is trained only on wind-caused data, and (II)
when damage classification is trained only on flood disasters.
In both cases, the building segmentation module is trained on 90% of the entire training data, not on a specific disaster type as in the damage classifier module. In Table III, the second
row shows the results for the case when we leave out the
Joplin tornado for testing purposes and use a random split of
the remaining data for training and validation for both building
segmentation and damage classification tasks. For this unseen
disaster, the harmonic mean of F1 scores on the test set drops by 4% compared to the completely random split of the dataset. The drop in performance is significantly larger, 0.54, when we leave the Nepal flooding out, as outlined in the fourth
row of Table III. The regression in performance is also notable for the building segmentation task on the Nepal flooding. This observation can be attributed to the geographical distribution of the data. Unlike the Nepal flooding, the majority of the disasters in the training data are concentrated in North and Central America, which could explain the dramatic decrease in the F1 score when testing the model on a completely new geographical region. Test set results outlined
in rows three and five of Table III demonstrate that training the damage classifier on a specific disaster type boosts performance when testing on a completely unseen disaster event.
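The leave-one-disaster-out protocol above can be sketched as follows; the sample representation, field names, and event identifiers are hypothetical, chosen only to illustrate the split.

```python
import random

# Hold out every sample from one disaster event for testing and split
# the remainder at random into train/validation. The "event" field is
# a hypothetical stand-in for the dataset's actual metadata.
def leave_one_disaster_out(samples, held_out_event, val_frac=0.1, seed=0):
    test = [s for s in samples if s["event"] == held_out_event]
    rest = [s for s in samples if s["event"] != held_out_event]
    rng = random.Random(seed)
    rng.shuffle(rest)
    n_val = int(len(rest) * val_frac)
    return rest[n_val:], rest[:n_val], test  # train, val, test

samples = ([{"event": "joplin-tornado"}] * 30
           + [{"event": "nepal-flooding"}] * 30
           + [{"event": "hurricane-harvey"}] * 40)
train, val, test = leave_one_disaster_out(samples, "joplin-tornado")
```

Splitting by event rather than by tile guarantees that no imagery from the held-out disaster leaks into training, which is what makes the test a genuine measure of robustness to unseen disasters.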
VII. INFERENCE SPEED BENCHMARKING
Inference speed (specifically, the number of pixels/second
that a model can process) is an important property of models
that will be deployed to run on imagery collected from future
disasters. Slow models will result in larger compute costs and
potentially delayed results in time-sensitive disaster response
applications.
We benchmark the inference speed of the top-ranked solutions on the xView2 challenge against our proposed model and find that our model is three times faster than the fastest winning solution and over 50 times faster than the slowest first-place solution. Table IV shows the performance results of each solution (except the 4th-place solution, which was not reproducible). To benchmark each solution, we use the following setup:
• An NC6 virtual machine instance on Microsoft Azure, which contains a Tesla K80 GPU.
• The single-input inference script provided in each solution's code release from the official "DIUx-xView" GitHub account. If the inference script did not contain a flag for enabling GPU acceleration, we modified it to use the GPU for model inference.
• Three pre- and post-disaster inputs from the xBD dataset.
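The pixels/second metric used in this setup can be sketched as below; `run_inference` is a hypothetical stand-in for each solution's inference script, and the 1024×1024 tile size matches the xBD imagery.

```python
import time

# Throughput in pixels/second: total pixels processed divided by the
# wall-clock time of the inference loop. xBD tiles are 1024x1024 pixels.
def pixels_per_second(run_inference, inputs, height=1024, width=1024):
    start = time.perf_counter()
    for sample in inputs:
        run_inference(sample)
    elapsed = time.perf_counter() - start
    return len(inputs) * height * width / elapsed

# Dummy stand-in that just sleeps instead of running a model
rate = pixels_per_second(lambda sample: time.sleep(0.01), range(3))
```

Normalizing by pixels rather than by image count makes the comparison fair across solutions that tile or resize their inputs differently.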
Experiment      | Train             | Test          | BLD-1 | DMG-0 | DMG-1 | DMG-2 | DMG-3 | DMG-mean
Random splits   | 80% at random     | 10% at random | 0.74  | 0.89  | 0.43  | 0.54  | 0.73  | 0.60
Joplin held out | 90% of non-Joplin | Joplin only   | 0.76  | 0.89  | 0.50  | 0.36  | 0.81  | 0.56