On the Deployment of Post-Disaster Building |
Damage Assessment Tools using Satellite Imagery: |
A Deep Learning Approach |
Shahrzad Gholami1, Caleb Robinson1, Anthony Ortiz1, Siyu Yang1, Jacopo Margutti2, |
Cameron Birge1, Rahul Dodhia1, Juan Lavista Ferres1 |
1AI for Good Research Lab, Microsoft, Redmond, USA, |
2510 an initiative of the Netherlands Red Cross, The Hague, The Netherlands |
Abstract—The frequency of natural disasters is growing globally. Every year, 350 million people are affected and billions of dollars in damage are incurred. Providing timely and appropriate humanitarian interventions, such as shelter, medical aid, and food, to affected communities is a challenging problem. AI frameworks
can help support existing efforts in solving these problems in |
various ways. In this study, we propose using high-resolution |
satellite imagery from before and after disasters to develop a |
convolutional neural network model for localizing buildings and |
scoring their damage level. We categorize damage to buildings |
into four levels, spanning from not damaged to destroyed, based |
on the xView2 dataset’s scale. Due to the emergency nature |
of disaster response efforts, the value of automating damage |
assessment lies primarily in the inference speed, rather than |
accuracy. We show that our proposed solution runs three times faster than the fastest xView2 challenge-winning solution and over 50 times faster than the slowest first-place solution,
which indicates a significant improvement from an operational |
viewpoint. Our proposed model achieves a pixel-wise F1 score of 0.74 for building localization and a pixel-wise harmonic F1 score of 0.6 for damage classification, while using a simpler architecture than other studies. Additionally, we develop
a web-based visualizer that can display the before and after |
imagery along with the model’s building damage predictions on |
a custom map. This study was conducted in collaboration with a humanitarian stakeholder organization that plans to deploy and assess the model, along with the visualizer, in its disaster response efforts in the field.
Index Terms—satellite imagery datasets, neural networks,
image segmentation, building damage classification, natural |
disasters, humanitarian action |
I. INTRODUCTION
Natural disasters affect 350 million people each year, causing billions of dollars in damage, and were the main driver of hunger for 29 million people in 2021 [1]. Providing timely
humanitarian aid to affected communities is increasingly |
challenging due to the growing frequency and severity of |
such events [2]. Impact assessment of natural disasters in |
a short time frame is a crucial step in emergency response |
efforts as it helps first responders allocate resources effectively. |
For example, dispatching aid, sending shelters, and allocating |
building material for reconstruction can be more efficient with |
estimates of where damaged buildings are, and how badly |
damaged they are.

Microsoft AI for Good/Humanitarian Action has collaborated with the Netherlands Red Cross to use high-resolution
satellite imagery from before and after natural disasters, |
delineated in the publicly available xBD dataset, to develop |
an end-to-end Siamese convolutional neural network that can |
localize buildings and score their damage level. Such a model |
is trained on historical disaster data and then applied on |
demand to identify damaged buildings during future disasters. |
Such AI and data-driven decision-aid tools can empower |
humanitarian organizations to take more informed actions |
at the time of disaster and allocate their resources more |
strategically during their field deployments. Throughout the |
course of our collaboration, extensive deployment experience |
shared by field experts and their valuable perspective as |
a stakeholder were instrumental in informing our empirical |
analysis of the model pipeline and will be vital in future |
assessments of the model performance in the field when actual disasters occur.
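To make the damage-scoring metric mentioned above concrete, the sketch below computes a pixel-wise harmonic mean of per-class F1 scores over toy masks with four damage levels. The function names and the example arrays are illustrative, not taken from the paper; the harmonic-mean formulation follows the xView2-style scoring referenced in the abstract.

```python
import numpy as np

def per_class_f1(y_true, y_pred, cls):
    # Pixel-wise F1 for a single damage class over flattened masks.
    tp = np.sum((y_true == cls) & (y_pred == cls))
    fp = np.sum((y_true != cls) & (y_pred == cls))
    fn = np.sum((y_true == cls) & (y_pred != cls))
    denom = 2 * tp + fp + fn
    return 2 * tp / denom if denom > 0 else 0.0

def harmonic_f1(y_true, y_pred, classes):
    # Harmonic mean of per-class F1 scores; unlike a plain average,
    # it is dragged toward zero if any single damage level is ignored.
    scores = [per_class_f1(y_true, y_pred, c) for c in classes]
    if any(s == 0 for s in scores):
        return 0.0
    return len(scores) / sum(1.0 / s for s in scores)

# Toy masks with four damage levels (1 = no damage .. 4 = destroyed),
# flattened to 1-D pixel arrays for simplicity.
y_true = np.array([1, 1, 2, 2, 3, 3, 4, 4])
y_pred = np.array([1, 1, 2, 3, 3, 3, 4, 4])
print(round(harmonic_f1(y_true, y_pred, classes=[1, 2, 3, 4]), 3))  # → 0.842
```

Because one misclassified pixel lowers the class-2 and class-3 scores, the harmonic mean (0.842) sits below the best per-class score; a model that entirely misses one damage level would score 0 regardless of its other classes.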
In 2019, the xView2 challenge and the xBD dataset were |
announced at the Computer Vision for Global Challenges |