overhead images to learn dynamic concepts for a given location. While prior works mostly focus on mapping a specific attribute, we attempt to generalize the mapping process. Hence, in our work, we introduce a framework to create maps of free-form textual prompts, which we call Textual Maps.
2.2. Vision-Language Pretraining |
Recently, Vision-Language (VL) models have shown great promise in their ability to model complex relationships between the vision and text spaces. ConVIRT [23] and VirTex [24] both introduced methods that use image-text pairs to learn rich visual representations. CLIP [7] demonstrated the results of VL pretraining on a large-scale dataset (400M pairs) and validated the efficacy of large-scale VL pretraining for several downstream tasks. Florence [25] and ALIGN [26] further increased the scale of data by training on 900M and 1.8B pairs, respectively. Other works [8, 9, 27, 28] have since focused on learning a better VL embedding space. Given these powerful pretrained VL models, many researchers have utilized their embedding spaces to solve specific downstream tasks. CLIPCap [1] and [29] use the CLIP space to generate image captions. Other models [30, 31, 32] utilize the CLIP space for text-to-image generation. Several works [33, 34, 35] have also used the CLIP space for image retrieval tasks. In our work, we utilize the rich CLIP space to bridge the gap between geolocations and their fine-grained textual descriptions.
3. Method |
Our objective is to learn an embedding space that describes the expected ground-level scene given a geographic location and an overhead image. Secondly, our embedding space needs to dynamically adapt to temporal manipulations for the same location. We have ground-level images {g_1, g_2, ..., g_n}, corresponding overhead images {o_1, o_2, ..., o_n}, and respective metadata for the ground-level images {e_1, e_2, ..., e_n}. Each e_i contains the latitude and longitude information of the sample, as well as the date and time when the ground-level image was captured. We also have a CLIP image encoder f_θ that generates CLIP embeddings for a given ground-level image.
3.1. Dataset
We created a large-scale cross-view dataset to train our model. The ground-level images in the dataset are taken from the YFCC100M [10] dataset. The YFCC100M dataset contains 99.3 million images, collected from Flickr. Our cross-view dataset uses a smaller sample from this collection which excludes all US imagery. Our dataset contains close to 6M images. Each of these images has a geolocation, timestamp, and other meta information such as tags, description, camera type, etc. For each Flickr image, we download an overhead image centered at its location. We use the Bing Maps API to download 800×800 satellite image patches at 0.6 m/px resolution.
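For context, an 800×800 patch at 0.6 m/px covers roughly a 480 m × 480 m footprint on the ground. A minimal sketch of how the bounding box of such a patch might be computed around a Flickr image's geolocation; the helper name and the equirectangular approximation are our own illustration, not part of the paper's pipeline:

```python
import math

def patch_bbox(lat, lon, size_px=800, res_m_per_px=0.6):
    """Approximate (south, west, north, east) bounds in degrees of a
    square satellite patch centered at (lat, lon).

    Uses an equirectangular approximation: ~111,320 m per degree of
    latitude; longitude degrees shrink by cos(latitude).
    """
    half_m = size_px * res_m_per_px / 2  # 240 m for the defaults above
    dlat = half_m / 111_320
    dlon = half_m / (111_320 * math.cos(math.radians(lat)))
    return (lat - dlat, lon - dlon, lat + dlat, lon + dlon)

south, west, north, east = patch_bbox(48.8584, 2.2945)  # example location
```

The north–south extent maps back to exactly 480 m under this approximation; the east–west extent in degrees grows with latitude as longitude lines converge.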
3.2. Approach |
We initialize our Sat2Cap image encoder g_θ with the weights of f_θ. A batch of ground-level images {g_1, g_2, ..., g_k} is passed through the CLIP encoder to get the ground-level CLIP embeddings. These embeddings serve as the target for alignment. A batch of corresponding overhead images {o_1, o_2, ..., o_k} is passed through the Sat2Cap image encoder to obtain the embeddings, as follows:
G_i = f_θ(g_i)    (1)
O_i = g_θ(o_i)    (2)
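A toy numpy sketch of equations (1)–(2), with linear maps standing in for the CLIP and Sat2Cap image encoders (the real encoders are deep networks; the matrices, dimensions, and L2 normalization here are illustrative assumptions, though CLIP embeddings are conventionally compared after normalization):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins: f_theta (frozen CLIP encoder) and g_theta (Sat2Cap),
# each mapping a flattened image to a d-dimensional embedding.
d, img_dim, k = 16, 64, 4
W_f = rng.standard_normal((img_dim, d))   # frozen CLIP weights
W_g = W_f.copy()                          # Sat2Cap initialized from CLIP

def f_theta(g):                # Eq. (1): G_i = f_theta(g_i)
    z = g @ W_f
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

def g_theta(o):                # Eq. (2): O_i = g_theta(o_i)
    z = o @ W_g
    return z / np.linalg.norm(z, axis=-1, keepdims=True)

ground = rng.standard_normal((k, img_dim))    # batch of ground-level images
overhead = rng.standard_normal((k, img_dim))  # co-located overhead images
G, O = f_theta(ground), g_theta(overhead)     # targets and predictions
```

Since g_θ starts as a copy of f_θ, the two encoders agree at initialization; training then moves only W_g.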
To align the overhead image embeddings with the |
ground-level CLIP embeddings, we contrastively train our |
model using the InfoNCE [36] loss as follows: |
L = -(1/k) Σ_{i=1}^{k} log [ exp(O_i · G_i / τ) / Σ_{j=1}^{k} exp(O_i · G_j / τ) ]    (3)
We optimize this loss to minimize the distance between |
co-located overhead and ground-level images in the CLIP |
space. It is worth noting that throughout the training pro- |
cess, the CLIP image encoder remains frozen. Hence, |
with our training procedure, we essentially allow the overhead image embeddings to move closer to images of their respective ground-level scenes in the CLIP space. Our results in
Section 4.1 show that Sat2Cap learns a strong correlation |
between co-located overhead and ground-level images. |
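The InfoNCE objective above can be sketched in a few lines of numpy; the temperature value below is a common default, not a value reported in the paper:

```python
import numpy as np

def info_nce(O, G, tau=0.07):
    """InfoNCE loss of Eq. (3): each overhead embedding O_i should be
    most similar to its co-located ground-level embedding G_i, with the
    other batch members serving as negatives."""
    logits = (O @ G.T) / tau                      # pairwise scores O_i . G_j
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))           # -log p(G_i | O_i), averaged

# Identical embeddings (perfect alignment) give a near-zero loss,
# while shuffled (mismatched) pairs are heavily penalized.
I = np.eye(4)
aligned = info_nce(I, I, tau=0.01)
shuffled = info_nce(I, np.roll(I, 1, axis=0), tau=0.01)
```

Minimizing this loss pulls each O_i toward its matching G_i while pushing it away from the other ground-level embeddings in the batch, which is what moves co-located overhead and ground-level images together in the CLIP space.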
3.3. Learning Dynamic Concepts of Places |
Many ground-level concepts are temporally dependent. Concepts like ‘crowded street’ or ‘snowy place’ can dramatically vary based on the exact time we query about them. In order to model such dynamic concepts, we condition Sat2Cap on the timestamps of the ground-level images. For each sample, we extract the year, month, day, and hour in which the ground-level image was taken. We also add the geolocation information to provide a stronger signal
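The text above does not specify how these metadata fields are encoded; one common choice for cyclic quantities like month and hour is a sin/cos embedding, so that adjacent times stay adjacent in feature space. A hypothetical sketch (the function names and the feature layout are our own assumptions):

```python
import math

def cyclic(value, period):
    """Encode a cyclic quantity (hour, month, ...) as (sin, cos) so that
    e.g. hour 23 and hour 0 end up close together in feature space."""
    angle = 2 * math.pi * value / period
    return (math.sin(angle), math.cos(angle))

def encode_metadata(lat, lon, year, month, day, hour):
    """Fixed-length feature vector from a sample's metadata e_i.
    Latitude/longitude are scaled to roughly [-1, 1]; date/time fields
    use the cyclic encoding above. This layout is illustrative only."""
    feats = [lat / 90.0, lon / 180.0, (year - 2000) / 25.0]
    feats += cyclic(month, 12) + cyclic(day, 31) + cyclic(hour, 24)
    return feats

v = encode_metadata(48.8584, 2.2945, 2012, 7, 14, 18)
```

A plain scalar hour would place 23:00 and 00:00 far apart; the cyclic encoding avoids that discontinuity, which matters when modeling concepts that vary smoothly with time of day or season.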
Figure 2: A weakly-supervised learning framework to learn fine-grained and dynamic concepts of geolocations without |
explicit text labels. |
(O2G = Overhead2Ground retrieval, G2O = Ground2Overhead retrieval; both over a 10K gallery.)

| Model | Dynamic Encoder | Dropout | Meta Information | O2G R@5 ↑ | O2G R@10 ↑ | O2G Median-R ↓ | G2O R@5 ↑ | G2O R@10 ↑ | G2O Median-R ↓ |
|---|---|---|---|---|---|---|---|---|---|
| CLIP | – | – | – | 0.007 | 0.013 | 1700 | 0.108 | 0.019 | 2857 |
| ours | ✗ | ✗ | ✗ | 0.398 | 0.493 | 15 | 0.356 | 0.450 | 11 |
| ours | ✓ | ✗ | ✗ | 0.322 | 0.413 | 34 | 0.254 | 0.343 | 20 |
| ours | ✓ | ✗ | ✓ | 0.368 | 0.467 | 23 | 0.298 | 0.398 | 13 |
| ours | ✓ | ✓ | ✗ | 0.467 | 0.564 | 13.5 | 0.366 | 0.462 | 7 |