map fine-grained textual descriptions (captions). Our ap- |
proach allows us to theoretically map anything that can be |
expressed in natural language, and thus, serves as a general |
framework for zero-shot mapping. |
Recently, several works [7, 8, 9] have delved into model- |
ing the relationship between images and text. Models such |
as CLIP [7] and ALBEF [8] are trained on large datasets of
captioned images to learn a multimodal embedding space
that unifies vision and text space. These embedding spaces |
can be utilized to learn the textual descriptors of a given |
image. However, a limitation of using overhead imagery |
is its tendency to provide coarse and generic features of a |
given area. We observe that this property of overhead im- |
ages holds true in the CLIP embedding space as well, where |
these images are related to coarse textual concepts like city, |
beach, or property. These images capture a broad perspec- |
tive from above, offering limited insight into the intricate |
concepts and dynamics within the location. Ground-level |
images, on the other hand, provide more detailed infor- |
mation about a place. The CLIP embedding space has a |
better understanding of fine-grained concepts for ground- |
level images since it was primarily trained on them and |
their descriptive captions. Yet, several challenges hinder |
the direct utilization of ground-level imagery for mapping |
tasks. Firstly, ground-level images are sparsely available;
obtaining a ground-level image for every location on
Earth is not feasible. Secondly, the coverage and quality of |
a ground-level image of the same location can vary widely,
which could introduce unwanted noise during inference.
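The way a shared embedding space relates an image to textual concepts can be sketched concretely. The snippet below scores one image embedding against several concept text embeddings by cosine similarity, which is how coarse concepts like "city" or "beach" surface for overhead imagery; the random vectors are stand-ins for real CLIP embeddings, and the function name is illustrative, not an API from the paper.

```python
import numpy as np

def cosine_scores(image_emb, text_embs):
    """Score one image embedding against a set of concept text embeddings."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    return txt @ img  # one cosine similarity per concept

rng = np.random.default_rng(0)
dim = 512  # typical CLIP embedding size (e.g., ViT-B/32)
concepts = ["city", "beach", "a busy farmers market", "a jazz concert"]
text_embs = rng.normal(size=(len(concepts), dim))  # stand-ins for CLIP text embeddings
overhead_emb = rng.normal(size=dim)                # stand-in for an overhead-image embedding

scores = cosine_scores(overhead_emb, text_embs)
best_concept = concepts[int(np.argmax(scores))]
```

In practice the highest-scoring concepts for overhead images are the coarse ones, which is exactly the limitation motivating a cross-view approach.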
To address these issues, we present a novel weakly- |
supervised cross-view approach for learning fine-grained
and dynamic textual concepts for geographic locations.
First, we create a large-scale dataset with paired overhead |
and ground-level images. Our dataset uses a subset of the |
arXiv:2307.15904v1 [cs.CV] 29 Jul 2023
Figure 1: Captions generated by the CLIPCAP model [1] using CLIP embeddings vs. dynamic Sat2Cap embeddings. (Row 1)
shows the results from CLIP embeddings, which produce many generic descriptions. (Rows 2 and 3) show the results from
our Sat2Cap embeddings for the months of May and January, respectively. The captions generated using Sat2Cap embeddings
are more fine-grained and dynamic. Although (d) does not add any winter properties for the January query, this behavior is
expected, as the image is over Australia, where January falls in the middle of summer.
YFCC100M [10]. More details about the dataset are pre- |
sented in Section 3.1. Using this paired dataset, we learn the |
CLIP distribution of the ground-level scene for a given loca- |
tion. CLIP embeddings of ground-level images can describe |
detailed textual concepts of that location. Our Sat2Cap |
model learns to predict the expected CLIP embedding of the |
ground-level scene using the overhead image. Compared to |
the CLIP embeddings, Sat2Cap embeddings tend to capture |
more fine-grained textual concepts for a given geolocation |
as seen in Figure 1. |
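Training a model to predict the expected ground-level CLIP embedding from an overhead image can be sketched as an alignment objective between predicted and target embeddings. The excerpt does not state the exact loss, so the symmetric InfoNCE objective below is one plausible formulation (the standard choice for cross-view contrastive training), written in NumPy over random stand-in embeddings:

```python
import numpy as np

def info_nce(pred, target, temperature=0.07):
    """Symmetric InfoNCE loss between predicted and target embeddings.

    pred:   (B, D) model outputs for a batch of overhead images.
    target: (B, D) frozen CLIP embeddings of the paired ground-level images.
    """
    p = pred / np.linalg.norm(pred, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    logits = (p @ t.T) / temperature   # (B, B) similarity matrix
    labels = np.arange(len(pred))      # i-th overhead pairs with i-th ground image

    def xent(l):
        # row-wise cross-entropy with the diagonal as the positive class
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average over both retrieval directions (overhead->ground, ground->overhead)
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
B, D = 8, 512
target = rng.normal(size=(B, D))
loss_random = info_nce(rng.normal(size=(B, D)), target)  # untrained predictions
loss_perfect = info_nce(target, target)                   # predictions match targets
```

A model that exactly reproduces the paired ground-level CLIP embeddings drives this loss toward zero, which is the sense in which the overhead encoder learns the "expected" ground-level embedding.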
To account for the temporal associations between vari- |
ous concepts and a location, our model is conditioned on |
temporal data, specifically, the date and time stamps from |
the Flickr imagery. This allows our model to learn fine- |
grained concepts that can be dynamically adapted to dif- |
ferent date and time settings. Figure 1 shows an example |
of CLIP-generated coarse captions vs. Sat2Cap-generated |
fine-grained dynamic captions. |
Our method is also weakly-supervised and thus does not |
require any text-labeled data. Creating a large-scale dataset
of fine-grained captions paired with geolocations is challenging.
However, our approach only requires geotagged and
timestamped ground-level images which are easily accessi- |
ble and scalable. Additionally, our framework is designed |
to learn high-resolution information of a location. This rich |
information can be used as an additional signal to solve a |
number of other downstream tasks. The following points summarize the primary contributions of our work:
• A novel weakly-supervised approach for learning fine- |
grained dynamic textual concepts of geographic loca- |
tions |
• A model for effective cross-view image retrieval be- |
tween overhead images and ground-level images taken |
in the wild |
• A zero-shot approach for creating large-scale textual |
maps |
• A new large-scale cross-view dataset |
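The zero-shot mapping contribution can be sketched end to end: given precomputed per-cell embeddings over a spatial grid and a CLIP text embedding of any natural-language query, a similarity heatmap is a textual map of that concept. The grid and query below are random stand-ins, and the function is illustrative rather than the paper's implementation:

```python
import numpy as np

def zero_shot_map(grid_embs, query_emb):
    """Turn per-cell embeddings into a similarity heatmap for one text query.

    grid_embs: (H, W, D) embeddings over a spatial grid (e.g., Sat2Cap outputs).
    query_emb: (D,) CLIP text embedding of the concept to map.
    """
    g = grid_embs / np.linalg.norm(grid_embs, axis=-1, keepdims=True)
    q = query_emb / np.linalg.norm(query_emb)
    return g @ q  # (H, W) cosine-similarity map

rng = np.random.default_rng(0)
H, W, D = 4, 5, 512
grid = rng.normal(size=(H, W, D))  # stand-ins for precomputed embeddings
query = rng.normal(size=D)         # stand-in for e.g. "a lively street festival"
heatmap = zero_shot_map(grid, query)
```

Because the query is just a text embedding, any concept expressible in natural language can be mapped this way without task-specific labels, which is the sense in which the approach is zero-shot.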
2. Related Works |
2.1. Deep Learning Based Mapping |
Creating maps of attributes of interest is an important |
task in many domains. Deep Learning methods have been |
used extensively [11, 12, 13, 14, 15] in recent years to make |
mapping efficient and scalable. Alhassan et al. [16] fine-tuned
ImageNet-pretrained models to make land-cover predictions.
Similarly, [17, 18] leveraged large-scale annotated
data from different sensors to improve land-use and land-
cover classification using deep learning methods. Apart |
from remote sensing, other areas have also leveraged deep |
learning for their own mapping tasks. Using high-resolution |
images, [19] trained several deep learning architectures for |
automated CNN-based mapping of Martian rockfalls. On |
the other hand, [20] used an unsupervised approach to map |
regions with high “Au” deposits. |
There have been other works that specifically focus on |
mapping visual attributes. For instance, [21] used features |
from both overhead and ground-level imagery, and intro- |
duced a cross-view approach to map scenicness. Later |
works focused on creating dynamic maps. Both [22, 2] |
conditioned their model on temporal information along with |