results of this experiment. The red spot indicates the location with the highest activation for the given query. For each query, the left figure shows the total area of inference, and the right figure shows a fine-grained image, obtained from Google, at the location with the highest activation. We see that our model produces reasonable localizations for the given queries. For example, in (a) our model activates over a soccer stadium. Similarly, for (c) and (d), our model has high activations over an amusement park and the “National Railway Museum”, respectively. Figure (b) shows that when we compose the concept of people with animals, our model has very high activation over farm-like areas, where these two concepts are most likely to co-occur. These results show that our model can reasonably localize the most plausible point within a given area at which one might observe a given query. This property can be beneficial for visual search problems in the geospatial domain.
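As a rough illustration of this localization step, the sketch below scores a grid of overhead-image embeddings against a text-query embedding in a shared CLIP-style space and returns the highest-activation location. It is a minimal sketch under assumed interfaces: the names localize_query, tile_embs, and latlons are illustrative and not the paper's actual API.

# Minimal sketch of text-query localization over a geospatial region,
# assuming a CLIP-style text embedding and Sat2Cap-style overhead-tile
# embeddings that live in a shared space. All names here are illustrative.
import torch
import torch.nn.functional as F

@torch.no_grad()
def localize_query(query_emb, tile_embs, latlons):
    """Return the lat/lon whose overhead tile best matches the text query.

    query_emb : (D,)   text embedding of the query
    tile_embs : (N, D) embeddings of overhead tiles covering the region
    latlons   : (N, 2) lat/lon center of each tile
    """
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(tile_embs, dim=-1)
    scores = t @ q                      # cosine similarity per tile
    best = scores.argmax().item()       # the "red spot": highest activation
    return latlons[best], scores        # scores can also be drawn as a heatmap

Rendering the per-tile scores as a heatmap over the region recovers the qualitative figures described above, with the argmax tile marking the predicted location.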
5. Conclusion
We introduced a novel weakly supervised framework to learn a rich embedding space between geolocation and fine-grained captions. Our method does not require any text-labeled data, making it easy to train and scale. We demonstrated four interesting applications of our model. First, we showed that our model can be used for cross-view image retrieval even when using uncurated ground-level images. Second, we showed that our model can be used to generate fine-grained and dynamic captions for geolocations. Third, we showed that our model can effectively localize textual concepts within a given geospatial region. Finally, we demonstrated how Sat2Cap embeddings can be used for the newly defined task of large-scale zero-shot mapping.