Sat2Cap: Mapping Fine-Grained Textual Descriptions from Satellite Images |
(Supplementary Material) |
A. Text to Overhead Image Retrieval |
Our framework uses ground-level images as pseudo-labels to learn the textual concepts of geolocation. Although Sat2Cap does not require any text labels during training, it effectively learns an embedding space where geoloca-