the geospatial community to continually benefit from the
most recent progress in computer vision, enabling a smarter,
safer, and healthier planet.
References
[1] Peri Akiva, Matthew Purri, and Matthew J. Leotta. Self-supervised material and texture representation learning for remote sensing tasks. CoRR, abs/2112.01715, 2021. 3, 6, 7, 9
[2] Kumar Ayush, Burak Uzkent, Chenlin Meng, Kumar Tanmay, Marshall Burke, David B. Lobell, and Stefano Ermon. Geography-aware self-supervised learning. CoRR, abs/2011.09980, 2020. 1, 2, 9
[3] Hangbo Bao, Li Dong, and Furu Wei. BEiT: BERT pre-training of image transformers. CoRR, abs/2106.08254, 2021. 3
[4] Rodrigo Caye Daudt, Bertrand Le Saux, and Alexandre Boulch. Fully convolutional siamese networks for change detection. In 2018 25th IEEE International Conference on Image Processing (ICIP), pages 4063–4067, 2018. 7
[5] R. Caye Daudt, B. Le Saux, A. Boulch, and Y. Gousseau. Urban change detection for multispectral earth observation using convolutional neural networks. In IEEE International Geoscience and Remote Sensing Symposium (IGARSS), July 2018. 6
[6] Mark Chen, Alec Radford, Rewon Child, Jeffrey Wu, Heewoo Jun, David Luan, and Ilya Sutskever. Generative pretraining from pixels. In Hal Daumé III and Aarti Singh, editors, Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, pages 1691–1703. PMLR, 13–18 Jul 2020. 3
[7] Xinlei Chen, Haoqi Fan, Ross B. Girshick, and Kaiming He. Improved baselines with momentum contrastive learning. CoRR, abs/2003.04297, 2020. 2
[8] Gong Cheng, Junwei Han, and Xiaoqiang Lu. Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE, 105(10):1865–1883, Oct 2017. 4
[9] Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, and Stefano Ermon. SatMAE: Pre-training transformers for temporal and multi-spectral satellite imagery. arXiv preprint arXiv:2207.08051, 2022. 1, 2, 3, 6, 7, 8
[10] MMSegmentation Contributors. MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark. https://github.com/open-mmlab/mmsegmentation, 2020. 7
[11] Jia Deng, Wei Dong, Richard Socher, Li-Jia Li, Kai Li, and Li Fei-Fei. ImageNet: A large-scale hierarchical image database. In 2009 IEEE Conference on Computer Vision and Pattern Recognition, pages 248–255. IEEE, 2009. 1
[12] Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805, 2018. 3
[13] Ivica Dimitrovski, Ivan Kitanovski, Dragi Kocev, and Nikola Simidjievski. Current trends in deep learning for earth observation: An open-source benchmark arena for image classification, 2022. 7
[14] Alexey Dosovitskiy, Lucas Beyer, Alexander Kolesnikov, Dirk Weissenborn, Xiaohua Zhai, Thomas Unterthiner, Mostafa Dehghani, Matthias Minderer, Georg Heigold, Sylvain Gelly, Jakob Uszkoreit, and Neil Houlsby. An image is worth 16x16 words: Transformers for image recognition at scale. CoRR, abs/2010.11929, 2020. 2, 3, 6, 7, 8
[15] Jean-Bastien Grill, Florian Strub, Florent Altché, Corentin Tallec, Pierre H. Richemond, Elena Buchatskaya, Carl Doersch, Bernardo Ávila Pires, Zhaohan Daniel Guo, Mohammad Gheshlaghi Azar, Bilal Piot, Koray Kavukcuoglu, Rémi Munos, and Michal Valko. Bootstrap your own latent: A new approach to self-supervised learning. CoRR, abs/2006.07733, 2020. 3
[16] Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. Don't stop pretraining: Adapt language models to domains and tasks. CoRR, abs/2004.10964, 2020. 2, 3
[17] Rujun Han, Xiang Ren, and Nanyun Peng. DEER: A data efficient language model for event temporal reasoning. CoRR, abs/2012.15283, 2020. 2, 3
[18] Kaiming He, Xinlei Chen, Saining Xie, Yanghao Li, Piotr Dollár, and Ross B. Girshick. Masked autoencoders are scalable vision learners. CoRR, abs/2111.06377, 2021. 3
[19] Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015. 6, 7, 8
[20] Neal Jean, Sherrie Wang, Anshul Samar, George Azzari, David B. Lobell, and Stefano Ermon. Tile2Vec: Unsupervised representation learning for spatially distributed data. CoRR, abs/1805.02855, 2018. 2
[21] Shunping Ji, Shiqing Wei, and Meng Lu. Fully convolutional networks for multisource building extraction from an open aerial and satellite imagery data set. IEEE Transactions on Geoscience and Remote Sensing, 57(1):574–586, 2019. 7
[22] András Kalapos and Bálint Gyires-Tóth. Self-supervised pretraining for 2d medical image segmentation. arXiv preprint arXiv:2209.00314, 2022. 3
[23] Jian Kang, Ruben Fernandez-Beltran, Puhong Duan, Sicong Liu, and Antonio J. Plaza. Deep unsupervised embedding for remotely sensed images based on spatially augmented momentum contrast. IEEE Transactions on Geoscience and Remote Sensing, 59(3):2598–2610, 2021. 2