Table 1: Cross-view retrieval performance of the Sat2Cap model: We experiment with three different settings in our model. First, we examine the effect of using the Dynamic Encoder. Second, we look at the performance degradation in scenarios where meta-information is not available during inference. Finally, we experiment with randomly dropping out the Dynamic Encoder during training.
to the model. We encode this meta-information using sin-cos encoding. We use a very shallow fully connected layer, which we call the Dynamic Encoder, represented by h_θ. The encoded meta-information is passed through the Dynamic Encoder, whose output is added element-wise to the unnormalized output of the Sat2Cap model. We then normalize the final sum to compute our objective. Our framework, shown in Figure 2, is defined as:
O_i = g_θ(o_i)    (4)
E_i = h_θ(e_i)    (5)

where e_i is the output of the sin-cos encoding of the date, time, and location information for sample i.

S_i = O_i + E_i    (6)
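The meta-information encoding and Dynamic Encoder can be sketched as follows. This is a minimal NumPy sketch: the particular cyclic features (hour, month, latitude, longitude), their periods, and the layer dimensions are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def sincos_encode(value, period):
    """Encode one cyclic quantity (hour, month, ...) as (sin, cos) features."""
    angle = 2.0 * np.pi * value / period
    return np.array([np.sin(angle), np.cos(angle)])

def encode_meta(hour, month, lat, lon):
    """Sin-cos encoding of date, time, and location -> vector e_i."""
    return np.concatenate([
        sincos_encode(hour, 24.0),   # time of day
        sincos_encode(month, 12.0),  # time of year
        sincos_encode(lat, 180.0),   # latitude
        sincos_encode(lon, 360.0),   # longitude
    ])

class DynamicEncoder:
    """Shallow fully connected layer h_theta mapping the meta encoding
    into the embedding space (single linear layer for illustration)."""
    def __init__(self, in_dim, out_dim, rng):
        self.W = rng.standard_normal((in_dim, out_dim)) * 0.02
        self.b = np.zeros(out_dim)

    def __call__(self, e):
        return e @ self.W + self.b   # E_i = h_theta(e_i)

def combine(O_i, E_i):
    """S_i = O_i + E_i (Eq. 6), then L2-normalize before the objective."""
    S = O_i + E_i
    return S / np.linalg.norm(S)
```

The element-wise addition followed by normalization keeps S_i on the unit sphere, so the dot products in the contrastive objective remain cosine similarities.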
Now the objective function is updated to:
L_dynamic = (1/k) ∑_{i=0}^{k} −log [ exp(S_i · G_i / τ) / ∑_{j=0}^{k} exp(S_i · G_j / τ) ]    (7)
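The loss above is an InfoNCE-style objective with S_i as the query and the ground-level CLIP embeddings G_j as keys, the diagonal pairs being the positives. A minimal NumPy sketch, assuming both embedding sets are already L2-normalized:

```python
import numpy as np

def dynamic_loss(S, G, tau=0.07):
    """InfoNCE loss of Eq. 7. S, G: (k, d) arrays of L2-normalized
    overhead (S_i) and ground-level CLIP (G_j) embeddings; the
    matched pair (S_i, G_i) is the positive for row i."""
    logits = S @ G.T / tau                        # (k, k) similarity matrix
    logits = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    # log-softmax over each row, then take the diagonal (positive) term
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

When S and G match exactly, the diagonal dominates and the loss approaches zero; for unrelated embeddings it approaches log k.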
Figure 3: Top-9 overhead-to-ground image retrieval: We use the Sat2Cap embeddings of the overhead images and CLIP
embeddings of the ground-level images and show the 9 closest ground-level images retrieved for a query overhead image.
The retrieval was performed using 10,000 samples.
Our training dataset captures the ground-level scenes at various times. If the model is only allowed to learn using the overhead image of a location, it will be forced to learn an average concept for all temporal settings. By conditioning the problem on additional temporal data, our model learns different ground-level concepts for different times of the day and year. This ultimately allows Sat2Cap to dynamically adapt to temporal variations for the same geolocation.
To prevent overfitting to the meta-information, we implement random dropout of the Dynamic Encoder during training. This improves retrieval performance but can decrease the model's sensitivity to temporal variations. The dropout causes the model to learn to disregard the meta-information as it is frequently dropped out during training. Therefore, we view dropout as a hyperparameter that can be adjusted to control the dynamic sensitivity of our model.
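The random dropout of the Dynamic Encoder can be sketched as a simple branch in the forward pass. The drop probability `p_drop` is the tunable hyperparameter described above; its value and this exact formulation are assumptions:

```python
import numpy as np

def fused_embedding(O_i, E_i, p_drop, rng, training=True):
    """With probability p_drop during training, skip the Dynamic Encoder
    output entirely, so the model cannot rely only on meta-information."""
    S = O_i.copy()
    if not (training and rng.random() < p_drop):
        S = S + E_i                      # S_i = O_i + E_i (Eq. 6)
    return S / np.linalg.norm(S)         # normalize before the objective
```

Setting `p_drop = 0` always uses the meta-information (maximally dynamic), while `p_drop = 1` always ignores it, recovering the static model.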
3.4. Implementation Details
We use a ViT-B/32 as the CLIP image encoder. This image encoder is kept frozen throughout our training procedure. We use a ViT-B/32 architecture as the backbone for our Sat2Cap model. The Sat2Cap backbone is initialized using CLIP weights. Following [7], we use an AdamW optimizer [37] with a learning rate of 1e-5, β1 = 0.9, and β2 = 0.98. We also use a learnable temperature parameter, which is initialized at τ = 0.07. We use Cosine Annealing with Warm Restarts [38] as the learning rate scheduler.
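The optimizer and scheduler setup above can be sketched in PyTorch. This is a hedged configuration sketch: the backbone placeholder, the log-space parameterization of the temperature (a common CLIP-style choice), and the restart period `T_0` are assumptions not stated in the text.

```python
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingWarmRestarts

# Placeholder standing in for the Sat2Cap backbone (ViT-B/32, CLIP-initialized).
model = torch.nn.Linear(512, 512)

# Learnable temperature initialized at tau = 0.07, optimized in log space
# (assumed parameterization).
log_tau = torch.nn.Parameter(torch.log(torch.tensor(0.07)))

optimizer = AdamW(
    list(model.parameters()) + [log_tau],
    lr=1e-5,
    betas=(0.9, 0.98),
)

# T_0 (steps until the first warm restart) is a hypothetical value.
scheduler = CosineAnnealingWarmRestarts(optimizer, T_0=10)
```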
We augment the overhead images using RandomResizedCrop and RandAugment [39]. The overhead images are normalized using the mean and standard deviation of the dataset. Training was carried out on an Nvidia A100 40GB GPU. Since a larger number of negative samples is beneficial for contrastive learning, we simulate a large batch size using a memory-bank approach. We initialize a queue of size 9600 and fill it with precomputed ground-level image CLIP embeddings, which are used as negative samples for computing the loss.
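The memory bank can be sketched as a FIFO queue of precomputed ground-level CLIP embeddings. Beyond the stated queue size of 9600, the details here (eviction policy, interface) are assumptions:

```python
import numpy as np
from collections import deque

class EmbeddingQueue:
    """FIFO memory bank of ground-level CLIP embeddings used as extra
    negatives, simulating a large contrastive batch."""
    def __init__(self, size=9600):
        self.buf = deque(maxlen=size)   # oldest embeddings are evicted first

    def enqueue(self, batch_embeddings):
        """Push each embedding of the current batch into the queue."""
        for e in batch_embeddings:
            self.buf.append(e)

    def negatives(self):
        """Return all queued embeddings as an (n, d) array, n <= size."""
        return np.stack(self.buf)
```

During training, `negatives()` would be concatenated with the in-batch ground-level embeddings G_j when evaluating the denominator of Eq. 7.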
4. Experiments and Results
Our model learns a powerful geo-text embedding space that can be used for a variety of applications. We experiment with four tasks and present quantitative and qualitative results for each.
4.1. Application 1: Cross-View Image Retrieval
In this experiment, we show that our model learns a strong relationship between co-located overhead images and ground-level images in the CLIP space. We randomly sample 10,000 image pairs from the test set for this experiment. First, we compute the Sat2Cap embeddings for all
Figure 4: Top-9 overhead-to-ground image retrieval with temporal manipulation: We show the 9 closest ground-level
images for a query overhead image at two different time settings (11:00 p.m. and 08:00 a.m.).
overhead images in the 10k test set. Then we compute the CLIP embeddings for all ground-level images in the set. We then compute top-k and median-rank metrics between the Sat2Cap overhead embeddings and the CLIP ground embeddings. Table 1 shows all retrieval results.
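The top-k and median-rank metrics can be computed directly from the two embedding matrices. A minimal NumPy sketch, assuming L2-normalized embeddings (so dot products are cosine similarities) and that pair i is the correct match for query i:

```python
import numpy as np

def retrieval_metrics(Q, G, k=10):
    """Cross-view retrieval metrics. Q: (n, d) overhead (query)
    embeddings, G: (n, d) ground-level embeddings, both L2-normalized;
    ground truth is the diagonal pairing."""
    sims = Q @ G.T                        # (n, n) cosine similarity matrix
    order = np.argsort(-sims, axis=1)     # indices sorted by similarity
    # rank of the true match within each row (1 = retrieved first)
    ranks = np.array(
        [np.where(order[i] == i)[0][0] + 1 for i in range(len(Q))]
    )
    return {
        "R@k": float(np.mean(ranks <= k)),
        "median_rank": float(np.median(ranks)),
    }
```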
As a baseline, we use the distance between CLIP overhead embeddings and CLIP ground embeddings. We get an extremely low R@10 score of 0.013 and a median rank of 1700. These low scores essentially tell us that the overhead images and corresponding ground-level images lie far apart in the CLIP space. For Sat2Cap, we first experiment without using the Dynamic Encoder. Just by contrastively training the Sat2Cap image encoder with ground-level CLIP embeddings, we achieve a high R@10 score of 0.493 and a median rank of 15.
All remaining experiments are conducted on models that were trained using the Dynamic Encoder. Table 1 shows that initially, the retrieval scores drop when using the Dynamic Encoder. We suspect this happens because the model starts to overfit on the meta-information, ignoring important cues from the overhead images. We also see a 5.4% drop in R@10 metrics when we remove the meta-information during inference. To reduce the possibility of overfitting, we randomly drop the Dynamic Encoder during training. We see that simply adding dropout during training increases