TABLE III
Pixel-wise F1 scores across various splits of the xBD dataset. We test generalization performance of models on the Joplin wind and Nepal flooding events in two settings: one in which we train on all available data that is not from the specific event, and another in which we train the damage classification decoder on other wind-only events (for testing on the Joplin event) and other flood-only events (for testing on the Nepal flooding event).

Experiment | Train set | Test set | Pixel-wise F1 scores
Joplin held out (wind-only damage classifier) | 90% of non-Joplin | Joplin only | 0.74 0.89 0.42 0.54 0.77 0.60
Nepal held out | 90% of non-Nepal | Nepal only | 0.63 0.42 0.17 0.23 0.02 0.06
Nepal held out (flood-only damage classifier) | 90% of non-Nepal | Nepal only | 0.64 0.54 0.12 0.27 0.07 0.14
• The same Python virtual environment for all experiments, to remove the effect of differing package versions on performance.
Additionally, the inference times reported in Table IV include the file I/O, model loading, pre-processing, and post-processing costs associated with each approach, and therefore represent an upper bound on the time taken to process any given 1024×1024 input (i.e., when running these approaches over large amounts of input, the models would only need to be loaded from disk a single time).
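The upper-bound framing can be made concrete with a toy amortization calculation. Note that the split of a per-tile time into one-off and recurring costs below uses assumed numbers for illustration, not measurements from this work:

```python
# Hypothetical split of a per-tile wall time into one-off costs (model
# loading, environment setup) and recurring per-tile costs (compute, I/O).
ONE_OFF_S = 2.0    # assumed: paid once per run
PER_TILE_S = 1.8   # assumed: paid for every 1024x1024 tile

def effective_seconds_per_tile(n_tiles):
    """Average seconds per tile once the one-off cost is amortized."""
    return (ONE_OFF_S + n_tiles * PER_TILE_S) / n_tiles

print(effective_seconds_per_tile(1))       # 3.8 s: worst case, a single tile
print(effective_seconds_per_tile(10_000))  # ~1.8 s: one-off cost amortized away
```

As the number of tiles grows, the measured single-tile times overstate the true per-tile cost by the (amortized) one-off share.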
As previously discussed, the xView2 challenge² encouraged
participants to optimize for leaderboard performance instead |
of throughput. As such, many of the top-placed solutions used |
techniques such as ensembling and test time augmentation, as |
well as larger, more complex models in order to improve their |
performance at the cost of inference speed. The top-performing |
solution, for instance, consists of an ensemble of 12 models |
that are run 4 times for each input (test time augmentation with |
4 rotations). These solutions are prohibitively costly to run |
on large inputs. For example, the Maxar Open Data program
released ∼20,000 km² of pre- and post-disaster imagery
covering areas impacted by Hurricane Ida in 2021. Assuming
the inference times from Table IV, a 0.3 m/px spatial resolution
for the input imagery, and a $0.9/hr cost of running a Tesla K80
(based on current Azure pricing), the first place solution would |
cost $6,500 to run, while our solution would only cost $100 |
to run. In this case, our solution would generate results for the |
area affected by Hurricane Ida in 4.7 days while the first place |
solution would take up to 301.4 days using a single NVIDIA |
Tesla K80 GPU. |
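These totals follow from simple arithmetic over the per-tile times in Table IV. The sketch below reproduces them; the 10,000 km² figure is our reading that the ∼20,000 km² release splits evenly into pre/post pairs covering the same area, which the text does not state explicitly:

```python
# Reproducing the Hurricane Ida cost estimate. Assumptions (ours, for this
# sketch): ~10,000 km^2 of unique area covered as pre/post pairs, tiles of
# 1024x1024 px at 0.3 m/px, and a Tesla K80 at $0.9/hr.
TILE_PX, M_PER_PX = 1024, 0.3
AREA_KM2, USD_PER_HR = 10_000, 0.9

tile_km2 = (TILE_PX * M_PER_PX / 1000) ** 2   # ~0.094 sq. km per tile
n_tiles = AREA_KM2 / tile_km2                 # ~106k pre/post tile pairs

def cost_and_days(seconds_per_tile):
    hours = n_tiles * seconds_per_tile / 3600
    return hours * USD_PER_HR, hours / 24

print(cost_and_days(245.75))  # 1st place: ~$6,510 and ~301.4 days
print(cost_and_days(3.8))     # our method: ~$101 and ~4.7 days
```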
Finally, we benchmark our proposed solution in an
optimized setting (in contrast to the setup above): we load data
with a parallel data-loader (vs. loading a single tile on the main |
thread), we run pre- and post-processing steps on the GPU, |
we maximize the amount of imagery that is run through the |
model at once (vs. running on a single 1024×1024 tile of |
imagery), and we use the most recent version of all relevant |
packages (vs. the earliest version pinned in the environments |
from the xView2 solution repositories). Here, we find that |
our model is able to process 612.29 square kilometers per |
hour compared to 89.35 square kilometers per hour under the |
same assumptions as in the previous setup, despite using the same
hardware. In this case, our model could process the Hurricane
Ida imagery in 2 days at a cost of $14.70. The stakeholder's
baseline solution runs at 1,000 square kilometers per hour
on an Azure NC12 GPU; we project our runtime to be 20% faster
than their baseline solution on a similar GPU.

² Most machine learning competitions follow a similar format, whereby participant solutions are only ranked in terms of their held-out test set performance.
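The structure of these optimizations can be sketched as follows. Everything here is a stand-in, not the actual pipeline: the "model" is a dummy function, and in the real system the batched call would also run pre- and post-processing on the GPU:

```python
# Stdlib-only sketch of the optimized inference loop: tiles are loaded by a
# pool of worker threads (vs. serially on the main thread), and the model is
# invoked on batches of tiles (vs. one 1024x1024 tile at a time).
from concurrent.futures import ThreadPoolExecutor

def load_tile(path):
    # Placeholder for file I/O + decoding of one tile.
    return [0.0] * 16

def run_model(batch):
    # Placeholder for batched GPU inference: one prediction per tile.
    return [sum(tile) for tile in batch]

paths = [f"tile_{i}.tif" for i in range(10)]  # hypothetical tile paths
BATCH_SIZE = 4

with ThreadPoolExecutor(max_workers=4) as pool:   # parallel data loading
    tiles = list(pool.map(load_tile, paths))

predictions = []
for i in range(0, len(tiles), BATCH_SIZE):        # batched inference
    predictions.extend(run_model(tiles[i:i + BATCH_SIZE]))

print(len(predictions))  # one prediction per input tile
```

Overlapping I/O with compute and increasing the batch size are what drive the ~7× throughput gain reported above; the exact speedup depends on hardware and tile storage format.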
Method             Inference time (s)   sq. km/hr
xView2 1st place   245.75 (0.73)        1.38
xView2 2nd place   121.03 (0.36)        2.81
xView2 3rd place   108.21 (0.6)         3.14
xView2 4th place   not reproducible     not reproducible
xView2 5th place   10.94 (0.06)         31.07
Our method         3.8 (0.02)           89.35
TABLE IV
Comparison of building damage model inference times on a single 1024×1024-pixel tile for different methods using a single Tesla K80 GPU (on an Azure NC6 machine). Times are in seconds and are averaged over three runs, with the standard deviation in parentheses. The results for the winning xView2 solutions are reproduced through the official GitHub repositories published for each, where the only modifications to the original code were to enable GPU processing for each inference script. The rightmost column shows the inference speed in (sq. km)/hr assuming a 0.3 m/pixel input spatial resolution.
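The rightmost column follows directly from the per-tile times; a quick sketch of the conversion (the small gap between a computed ~89.4 and the reported 89.35 for our method is presumably rounding in the measured time):

```python
# Convert a per-tile inference time into throughput at 0.3 m/px:
# one 1024x1024 tile covers (1024 * 0.3 / 1000)^2 ≈ 0.0944 sq. km.
tile_km2 = (1024 * 0.3 / 1000) ** 2

def km2_per_hour(seconds_per_tile):
    return tile_km2 * 3600 / seconds_per_tile

print(round(km2_per_hour(245.75), 2))  # 1.38 (xView2 1st place)
print(round(km2_per_hour(3.8), 2))     # ~89.4 (our method)
```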
VIII. WEB VISUALIZER TOOL
In contrast to standard vision applications, semantic |
segmentation models that operate over satellite imagery need |
to be applied over arbitrarily large scenes at inference-time. As |
such, distributing the imagery and predictions made by such |
models is non-trivial. First, high-resolution satellite imagery |
scenes can be many gigabytes in size, difficult to visualize |
(e.g. requiring GIS software and normalization steps), and may |
require pre-processing to correctly align temporal samples. |
Second, the predictions from a building damage model are |
strongly coupled to the imagery itself. In other words, |
only distributing georeferenced polygons of where damaged |
buildings are predicted to be is not useful in a disaster response |
setting. The corresponding imagery is necessary to interpret |
and perform quality assessment on the predictions. |
Considering these difficulties, we implement a web-based |
visualizer to distribute the predictions made by our model over |
satellite image scenes. This approach bypasses the need for any |
specialized GIS software, allowing any modern web-browser |
Fig. 6. Screenshot of the building damage visualizer instance for the August
2021 Haiti earthquake. The left side of the map interface shows the pre-
disaster imagery while the right side shows the post-disaster imagery. The |
slider in the middle of the interface allows a user to switch between the pre- |
and post-disaster layers to quickly see the difference in the imagery. Finally, |