A. Text-to-Overhead Image Retrieval
We show that the overhead images of a location and their fine-grained descriptions are well aligned.
To show this, we randomly select 1000 overhead images from our training set and compute their Sat2Cap embeddings. For a given text query, we generate the CLIP [1] text embedding and compute its similarity with each of these 1000 images. Figure 1 shows examples of the 4 closest overhead images retrieved for a given query.
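For concreteness, the retrieval step is a cosine-similarity search in the shared embedding space. Below is a minimal sketch, assuming the Sat2Cap embeddings of the overhead images have already been computed and L2-normalized into a tensor `overhead_embeds`; that tensor and the helper name are illustrative, not the paper's released code.

```python
import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
clip_model, _ = clip.load("ViT-B/32", device=device)

def top_k_overhead(query: str, overhead_embeds: torch.Tensor, k: int = 4):
    """Indices of the k overhead images whose (assumed L2-normalized)
    Sat2Cap embeddings are closest to the CLIP embedding of the query."""
    tokens = clip.tokenize([query]).to(device)
    with torch.no_grad():
        text_embed = clip_model.encode_text(tokens).float()
    text_embed = text_embed / text_embed.norm(dim=-1, keepdim=True)
    sims = overhead_embeds @ text_embed.squeeze(0)  # cosine similarities, shape (N,)
    return sims.topk(k).indices

# e.g., indices = top_k_overhead("people driving cars", overhead_embeds)
```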
We experiment with small perturbations of the prompts to analyze how the retrieval results change with minute variations in the query. As Figure 1 shows, the prompt “people driving cars” retrieves city or residential areas. However, replacing the phrase “driving cars” with “riding horses” retrieves locations with farmland. Similarly, the prompt “person on a long hike” exclusively retrieves mountainous regions, while the prompt “person on a long run” retrieves images that look like trails near residential areas. Hence, Sat2Cap embeddings demonstrate a good understanding of fine-grained variations of textual concepts.
B. More Large-scale Textual Maps
We create country-level maps of England and the Netherlands for four different prompts: a) Farmers harvesting crops, b) Cars stuck in traffic, c) Animals grazing in the fields, and d) People fishing on a boat. To generate the textual map for each prompt, we compute the Sat2Cap embeddings for all images of the country and compute their similarity with the CLIP text embedding of the given prompt. We normalize the similarities and plot them to create the textual maps. Figure 2 shows the textual maps of the Netherlands and England for each prompt. Below each textual map, we also place a landcover map as a reference for places where the given prompt is likely to activate.
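A minimal sketch of the map-generation step, assuming per-image similarities computed as above together with each image's latitude and longitude; min-max normalization is our assumption here, since the exact normalization is not specified in this section.

```python
import numpy as np
import matplotlib.pyplot as plt

def plot_textual_map(sims, lats, lons, prompt):
    """Scatter the min-max normalized prompt similarities at each location."""
    sims = np.asarray(sims, dtype=float)
    norm = (sims - sims.min()) / (sims.max() - sims.min() + 1e-8)
    plt.scatter(lons, lats, c=norm, s=1, cmap="viridis")
    plt.colorbar(label="normalized similarity")
    plt.title(prompt)
    plt.gca().set_aspect("equal")
    plt.show()
```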
By comparing with the respective landcover maps, we see that the Sat2Cap embeddings activate reasonable locations on the map for a given prompt. For example, the prompt “Farmers harvesting crops” is activated mostly over cropland, while the prompt “Cars stuck in traffic” is activated in urban areas. Similarly, the textual maps for the prompts “Animals grazing in the fields” and “People fishing on a boat” resemble the rangeland and water landcover, respectively. In (d), we see high activations in the top-left corner of England, which do not match the water landcover. This region is the Lake District, which has numerous beautiful lakes.
C. Dynamic Caption Generation
We take a single overhead image and show the dynamic captions that Sat2Cap embeddings can generate. Figure 3 shows our results on a test image at four different temporal settings. The generated captions capture both the semantic concepts of the given image and the temporal concepts that are added to it. As we move from May to December, the concepts of winter become more prominent in the captions. Similarly, as we move from 10:00 am to 11:00 pm, the concepts associated with night are highlighted more strongly. Interestingly, the changes are not trivial, such as simply appending “in winter” or “at night” to the captions; rather, the entire concept that the captions describe also changes.
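The generation loop can be sketched as follows. Both `sat2cap_encode` (an encoder conditioned on temporal metadata) and `decode_caption` (a decoder from the CLIP embedding space to text) are hypothetical stand-ins for the actual components, shown only to make the data flow explicit.

```python
from datetime import datetime

def dynamic_captions(image, timestamps, sat2cap_encode, decode_caption):
    """Generate one caption per temporal setting for a fixed overhead image.

    sat2cap_encode(image, time=ts) -> time-conditioned embedding (hypothetical)
    decode_caption(embedding)      -> natural-language caption (hypothetical)
    """
    return {ts: decode_caption(sat2cap_encode(image, time=ts)) for ts in timestamps}

# e.g., settings spanning season and time of day, as in Figure 3:
times = [datetime(2022, 5, 1, 10, 0), datetime(2022, 12, 1, 23, 0)]
```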
D. Dataset
We introduce a cross-view dataset of overhead images paired with co-located ground-level images taken from the YFCC100M [2] dataset. Figure 4 shows a few samples from our dataset. The ground-level images provide detailed, fine-grained concepts of a location that cannot be directly inferred from the overhead imagery alone.
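For concreteness, each sample in the dataset can be viewed as a record like the following; the field names are illustrative, not the released schema.

```python
from dataclasses import dataclass

@dataclass
class CrossViewSample:
    """One co-located pair: an overhead image centered on the location of a
    geotagged YFCC100M ground-level photo (field names are illustrative)."""
    overhead_path: str  # path to the overhead image tile
    ground_path: str    # path to the co-located ground-level photo
    lat: float          # latitude of the ground-level photo
    lon: float          # longitude of the ground-level photo
    timestamp: str      # capture time of the ground-level photo
```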
E. Overhead to Ground Image Retrieval
In Figure 5, we show additional results for overhead-to-ground image retrieval. We see that Sat2Cap embeddings accurately relate overhead imagery with fine-grained ground-level concepts. Notably, the relationship is not based primarily on visual feature matching, but rather on agreement of concepts. For example, the overhead image of a running track retrieves images of people playing different sports, while an image over the ocean retrieves images of people enjoying various water sports.
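This direction mirrors the text-to-overhead search in Section A: a sketch, assuming precomputed, L2-normalized CLIP image embeddings of the ground-level photos; names are illustrative.

```python
import torch

def top_k_ground(overhead_embed: torch.Tensor, ground_embeds: torch.Tensor, k: int = 9):
    """Indices of the k ground-level images whose CLIP image embeddings are
    closest to one Sat2Cap overhead embedding (all assumed L2-normalized)."""
    sims = ground_embeds @ overhead_embed  # (N, D) @ (D,) -> (N,)
    return sims.topk(k).indices
```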
References
[1] Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, et al. Learning transferable visual models from natural language supervision. In International Conference on Machine Learning, pages 8748–8763. PMLR, 2021.
[2] Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, and Li-Jia Li. YFCC100M: The new data in multimedia research. Communications of the ACM, 59(2):64–73, 2016.
Figure 1: Top-4 text-to-overhead retrieval: We retrieve the 4 closest overhead images for a given text prompt. Our results show that Sat2Cap embeddings can accurately relate geolocations with fine-grained textual prompts.
Figure 2: Zero-shot maps of countries: We show the textual maps of England and the Netherlands for different queries. We also show a landcover map as a guide to plausible locations where the query is likely to be activated. In (d), we see high activations in the top-left corner of England, which lies in the “Lake District National Park”.
Figure 3: Dynamic Caption Generation: Our Sat2Cap embeddings dynamically adapt to temporal manipulations, facilitat-
ing dynamic caption generation.
Figure 4: Examples of co-located overhead and ground images in our dataset. The ground-level images describe more
detailed concepts of the given locations than their overhead counterparts.
Figure 5: Top-9 overhead-to-ground retrieval: Our model is capable of inferring fine-grained concepts of ground-level scenes through overhead imagery. Sat2Cap accurately retrieves probable concepts for a given geolocation using an overhead image.