the R@10 score by 12.4%. Another important observa-
tion is that the effect of removing meta-information dur-
ing inference is less severe when using dropout. Hence,
our model achieves good cross-view retrieval scores even if
meta-information is not available during inference.
Figure 3 shows the top 9 closest images retrieved from a
given overhead image. We see that our model is able to re-
trieve ground-level images by relating concepts rather than
direct visual matching. For example, in (a), our model re-
trieves images of people playing golf for an overhead image
of a golf course. Similarly, in (d), our query image seems
to be located over a farm. Here, Sat2Cap has learned to
associate the concept of farmland with cattle and livestock.
It retrieves images of horses and goats which are concepts
that likely reside in the location but are not visible in the
overhead image. This suggests that our model can map fine-
grained concepts of the ground-level scene to a given geolo-
cation. Sat2Cap is also capable of dynamic image retrieval.
Figure 4 shows the top 9 images retrieved at two different time settings (11:00 p.m. vs. 08:00 a.m.).
Figure 5: Country-level maps of textual descriptions: (Col 1-2) show the country-level maps created using Sat2Cap for two prompts: “Kids playing in the sand” and “A busy street in downtown”. (Col 3) shows a landcover map of the respective countries for comparison.
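The cross-view retrieval used here reduces to a nearest-neighbor search in the shared embedding space. A minimal sketch, with function and variable names of our own (not from the Sat2Cap codebase):

```python
import numpy as np

def top_k_retrieval(query_emb, gallery_embs, k=9):
    """Return indices of the k gallery embeddings closest to the query.

    query_emb: (d,) Sat2Cap embedding of the overhead image
    gallery_embs: (n, d) CLIP embeddings of candidate ground-level images
    """
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q                   # cosine similarities, shape (n,)
    return np.argsort(-sims)[:k]   # indices of the k highest similarities
```

Encoding the same overhead image with different time inputs changes `query_emb`, and hence the ranking, which is what enables the dynamic retrieval shown in Figure 4.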
4.2. Application 2: Fine-grained and Dynamic Caption Generation
Our embedding space captures detailed and fine-grained
textual concepts for geographic locations. While the CLIP
space can only provide coarse-level generic descriptions
from an overhead image, our model learns more fine-
grained visual concepts that someone on the ground might
observe. To generate captions from our embeddings, we
use the CLIPCAP [1] model, which maps CLIP space to
text space.
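Conceptually, caption generation is a two-stage pipeline: encode the overhead image into the shared CLIP space with Sat2Cap, then decode that embedding into text with ClipCap. A schematic sketch, where the two callables are placeholders rather than the actual ClipCap API:

```python
def generate_caption(overhead_image, sat2cap_encoder, clipcap_decoder):
    """Caption an overhead image via the shared CLIP embedding space.

    sat2cap_encoder: maps an overhead image to a CLIP-space embedding
    clipcap_decoder: maps a CLIP-space embedding to a text caption
    """
    emb = sat2cap_encoder(overhead_image)  # fine-grained, ground-level-aware embedding
    return clipcap_decoder(emb)            # ClipCap-style decoding into natural language
```

Swapping `sat2cap_encoder` for a plain CLIP image encoder yields the coarser baseline captions compared in Figure 1.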
Figure 1 shows that using CLIP embeddings of the over-
head images, the model can only describe generic concepts
of a location like a beach, island, property, etc. Our Sat2Cap
embeddings, on the other hand, produce much more fine-grained and aesthetically pleasing captions. For
example, in figure (a), CLIP generates the caption “aerial view of a beach”, missing other important details of the area. Our model, on the other hand, generates the caption “sea facing apartment with swimming pool, terrace in a quiet residential area”, capturing many intricate concepts
that reside within that location.
Sat2Cap also models temporal variations, allowing us to generate different captions for different times. Figure 1 shows the captions generated for two different months, May vs. January. We see that the model reasonably accounts
for the seasonal variations for different months of the year.
However, in figure (d), we see that the model does not add
any cold/winter-specific information for the January input.
This is expected behavior since the image is from Australia,
where the month of January falls right in the middle of sum-
mer.
4.3. Application 3: Zero-Shot Map of Fine-grained
Concepts
We use the rich Sat2Cap embedding space to create
country-level maps of fine-grained textual prompts in a
zero-shot manner. First, we choose two countries to create maps for: England and the Netherlands. Then, we download
satellite imagery that covers these regions. Specifically, we
download 800×800 patches of Bing Maps imagery at 0.6 m/px resolution. We precompute the Sat2Cap embeddings for all the images and save them to disk. Now, for any given
text query, we compute the similarity of the CLIP text em-
bedding with all overhead images of the region. Then we
normalize these similarities between 0 and 1 and use the
normalized similarities to create textual maps. The process
of computing similarities for an entire country took only
about 4-5 seconds. Hence, our framework is quite efficient
for mapping large regions and thus could be easily extended
to map the entire world.
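The mapping procedure above (precomputed tile embeddings, one CLIP text embedding per query, similarities min-max normalized to [0, 1]) can be sketched as follows; the names are illustrative, not from the paper's code:

```python
import numpy as np

def zero_shot_map(text_emb, tile_embs):
    """Score every overhead tile of a region against one text query.

    text_emb: (d,) CLIP text embedding, e.g. of "a photo of kids playing in the sand"
    tile_embs: (n, d) precomputed Sat2Cap embeddings of the region's tiles
    Returns per-tile scores min-max normalized to [0, 1].
    """
    t = text_emb / np.linalg.norm(text_emb)
    g = tile_embs / np.linalg.norm(tile_embs, axis=1, keepdims=True)
    sims = g @ t                                      # cosine similarity per tile
    return (sims - sims.min()) / (sims.max() - sims.min())
```

Because the tile embeddings are computed once and reused for every query, scoring an entire country is a single matrix-vector product, consistent with the 4-5 second runtime reported above.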
Figure 6: Localizing textual queries at finer resolution: For each prompt, the image on the left shows the larger region used for inference. The image on the right shows a ground-level scene at the point with the highest activation, obtained by entering that location in Google Maps.
Figure 5 shows the maps for two prompts: “Kids playing
in the sand” and “A busy street in downtown”. We added the
phrase “a photo of” at the beginning of each prompt. For
the first prompt, we see that our model activates locations
around the ocean and beaches. Activations in both coun-
tries are high in areas where you might observe a kid play-
ing in the sand. The second prompt activates locations with
major cities in both countries. For England, we see high
activations around London, Oxford, Birmingham, Manch-
ester, Liverpool, etc. For the Netherlands, we see high acti-
vations in Amsterdam, Rotterdam, The Hague, Maastricht,
Groningen, etc., as well as other smaller cities. We compare
this map with ESRI’s Sentinel-2 landcover map. From the
landcover maps, we see that our model correctly activates
the fine-grained prompt “A busy street in downtown” in the
urban areas. Thus, we introduce a novel way to create
large-scale maps in a zero-shot setting.
4.4. Application 4: Geolocalizing Textual Queries
Our model can be used to localize textual queries at a
finer resolution. For this experiment, we draw a 24 km²
bounding box over a region. We compute the Sat2Cap sim-
ilarity for all the overhead images in that box with a given
text query. We then normalize the similarities between 0 and 1 and clip the values below 0.5. Figure 6 shows the