this study (R&T project “Application des techniques de Visual
Question Answering à des données d’imagerie satellitaire”).
REFERENCES
[1] K. Anderson, B. Ryan, W. Sonntag, A. Kavvada, and L. Friedl, “Earth
observation in service of the 2030 agenda for sustainable development,”
Geo-spatial Information Science, vol. 20, no. 2, pp. 77–96, 2017.
[2] Y. Gu, J. Chanussot, X. Jia, and J. A. Benediktsson, “Multiple kernel
learning for hyperspectral image classification: A review,” IEEE
Transactions on Geoscience and Remote Sensing, vol. 55, no. 11, pp. 6547–
6565, Nov 2017.
[3] S. Li, W. Song, L. Fang, Y. Chen, P. Ghamisi, and J. A. Benediktsson,
“Deep learning for hyperspectral image classification: An overview,”
IEEE Transactions on Geoscience and Remote Sensing, pp. 1–20, 2019.
[4] J. E. Vargas-Muñoz, S. Lobry, A. X. Falcão, and D. Tuia, “Correcting
rural building annotations in OpenStreetMap using convolutional neural
networks,” ISPRS Journal of Photogrammetry and Remote Sensing, vol.
147, pp. 283–293, 2019.
[5] S. Antol, A. Agrawal, J. Lu, M. Mitchell, D. Batra, C. L. Zitnick,
and D. Parikh, “VQA: Visual Question Answering,” in International
Conference on Computer Vision, 2015.
[6] S. Lobry, J. Murray, D. Marcos, and D. Tuia, “Visual question answering
from remote sensing images,” in IEEE International Geoscience and
Remote Sensing Symposium, July 2019.
[7] Z. Shi and Z. Zou, “Can a machine generate humanlike language
descriptions for a remote sensing image?” IEEE Transactions on Geoscience
and Remote Sensing, vol. 55, no. 6, pp. 3623–3634, June 2017.
[8] X. X. Zhu, D. Tuia, L. Mou, G. Xia, L. Zhang, F. Xu, and F. Fraundorfer,
“Deep learning in remote sensing: A comprehensive review and list of
resources,” IEEE Geoscience and Remote Sensing Magazine, vol. 5,
no. 4, pp. 8–36, Dec 2017.
[9] F. Hu, G.-S. Xia, J. Hu, and L. Zhang, “Transferring deep convolutional
neural networks for the scene classification of high-resolution remote
sensing imagery,” Remote Sensing, vol. 7, no. 11, pp. 14680–14707,
2015.
[10] J. Deng, W. Dong, R. Socher, L.-J. Li, K. Li, and L. Fei-Fei, “ImageNet:
A Large-Scale Hierarchical Image Database,” in IEEE Conference on
Computer Vision and Pattern Recognition, 2009.
[11] Q. Wang, S. Liu, J. Chanussot, and X. Li, “Scene classification with
recurrent attention of VHR remote sensing images,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 57, no. 2, pp. 1155–1167, Feb
2019.
[12] G.-S. Xia, X. Bai, J. Ding, Z. Zhu, S. Belongie, J. Luo, M. Datcu,
M. Pelillo, and L. Zhang, “DOTA: A large-scale dataset for object
detection in aerial images,” in IEEE Conference on Computer Vision
and Pattern Recognition, 2018, pp. 3974–3983.
[13] Q. Li, L. Mou, Q. Xu, Y. Zhang, and X. X. Zhu, “R3-Net: A deep
network for multioriented vehicle detection in aerial images and videos,”
IEEE Transactions on Geoscience and Remote Sensing, pp. 1–15, 2019.
[14] M. Volpi and D. Tuia, “Dense semantic labeling of subdecimeter
resolution images with convolutional neural networks,” IEEE Transactions
on Geoscience and Remote Sensing, vol. 55, no. 2, pp. 881–893, 2016.
[15] E. Maggiori, Y. Tarabalka, G. Charpiat, and P. Alliez, “Can semantic
labeling methods generalize to any city? The INRIA aerial image labeling
benchmark,” in IEEE International Geoscience and Remote Sensing
Symposium. IEEE, 2017, pp. 3226–3229.
[16] B. Huang, K. Lu, N. Audebert, A. Khalel, Y. Tarabalka, J. Malof,
A. Boulch, B. Le Saux, L. Collins, K. Bradbury et al., “Large-scale
semantic classification: outcome of the first year of INRIA aerial image
labeling benchmark,” in IEEE International Geoscience and Remote
Sensing Symposium. IEEE, 2018, pp. 6947–6950.
[17] I. Demir, K. Koperski, D. Lindenbaum, G. Pang, J. Huang, S. Basu,
F. Hughes, D. Tuia, and R. Raskar, “DeepGlobe 2018: A challenge to
parse the earth through satellite images,” in IEEE/CVF Conference on
Computer Vision and Pattern Recognition Workshops. IEEE, 2018.
[18] L. Zhou, C. Zhang, and M. Wu, “D-LinkNet: LinkNet with pretrained
encoder and dilated convolution for high resolution satellite imagery
road extraction,” in IEEE/CVF Conference on Computer Vision and
Pattern Recognition Workshops, 2018, pp. 182–186.
[19] R. Hamaguchi and S. Hikosaka, “Building detection from satellite
imagery using ensemble of size-specific detectors,” in IEEE/CVF
Conference on Computer Vision and Pattern Recognition Workshops.
IEEE, 2018, pp. 223–2234.
[20] C. Tian, C. Li, and J. Shi, “Dense fusion classmate network for land
cover classification,” in IEEE/CVF Conference on Computer Vision and
Pattern Recognition Workshops, 2018, pp. 192–196.
[21] X. Zhang, X. Li, J. An, L. Gao, B. Hou, and C. Li, “Natural language
description of remote sensing images based on deep learning,” in IEEE
International Geoscience and Remote Sensing Symposium, July 2017,
pp. 4798–4801.
[22] X. Zhang, X. Wang, X. Tang, H. Zhou, and C. Li, “Description
generation for remote sensing images using attribute attention mechanism,”
Remote Sensing, vol. 11, no. 6, p. 612, 2019.
[23] B. Wang, X. Lu, X. Zheng, and X. Li, “Semantic descriptions of
high-resolution remote sensing images,” IEEE Geoscience and Remote
Sensing Letters, 2019.
[24] Q. Wu, D. Teney, P. Wang, C. Shen, A. Dick, and A. van den Hengel,
“Visual question answering: A survey of methods and datasets,” Computer
Vision and Image Understanding, 2017.
[25] A. Fukui, D. H. Park, D. Yang, A. Rohrbach, T. Darrell, and
M. Rohrbach, “Multimodal compact bilinear pooling for visual question
answering and visual grounding,” arXiv preprint arXiv:1606.01847,
2016.
[26] H. Ben-Younes, R. Cadene, M. Cord, and N. Thome, “MUTAN:
Multimodal Tucker fusion for Visual Question Answering,” in IEEE
International Conference on Computer Vision, 2017, pp. 2612–2620.
[27] P. Anderson, X. He, C. Buehler, D. Teney, M. Johnson, S. Gould, and
L. Zhang, “Bottom-up and top-down attention for image captioning and
visual question answering,” in IEEE Conference on Computer Vision
and Pattern Recognition, 2018, pp. 6077–6086.