[Figure panels: query steps 7, 9, 11, 13, 15.]
(b) (Top row) Query sequences and corresponding heat maps (darker indicates higher probability), obtained using VAS. (Middle row) Query sequences and corresponding heat maps, obtained using VAS while enforcing the query outcome at every stage to be "unsuccessful". (Bottom row) Query sequences and corresponding heat maps, obtained using VAS while enforcing the query outcome at every stage to be "successful".
Figure 13: Sensitivity analysis of VAS with a sample test image and large vehicle as target class under distance-based query cost.
(a) The original image
[Figure panels: query steps 1, 3, 5, 7, 9, 11, 13, 15.]
(b) (Top row) Query sequences and corresponding heat maps (darker indicates higher probability), obtained using VAS. (Middle row) Query sequences and corresponding heat maps, obtained using VAS while enforcing the query outcome at every stage to be "unsuccessful". (Bottom row) Query sequences and corresponding heat maps, obtained using VAS while enforcing the query outcome at every stage to be "successful".
Figure 14: Sensitivity analysis of VAS with a sample test image and car as target class under distance-based query cost.
(a) The original image
[Figure panels: query steps 1, 3, 5, 7, 9, 11, 13, 15.]
(b) (Top row) Query sequences and corresponding heat maps (darker indicates higher probability), obtained using VAS. (Middle row) Query sequences and corresponding heat maps, obtained using VAS while enforcing the query outcome at every stage to be "unsuccessful". (Bottom row) Query sequences and corresponding heat maps, obtained using VAS while enforcing the query outcome at every stage to be "successful".
Figure 15: Sensitivity analysis of VAS with a sample test image and ship as target class under distance-based query cost.
(a) The original image with query sequence.
[Figure panels: steps 1, 5, 10, 15.]
(b) Saliency maps (red indicates high saliency), obtained using VAS at different stages of the search process with large vehicle as target.
Figure 16: Saliency-map visualization of VAS under uniform cost budget.
(a) The original image with query sequence.
[Figure panels: steps 1, 5, 10, 15.]
(b) Saliency maps (red indicates high saliency), obtained using VAS at different stages of the search process with small car as target.
Figure 17: Saliency-map visualization of VAS under uniform cost budget.
(a) The original image with query sequence.
[Figure panels: steps 1, 5, 10, 15.]
(b) Saliency maps (red indicates high saliency), obtained using VAS at different stages of the search process with small car as target.
Figure 18: Saliency-map visualization of VAS under uniform cost budget.
Sat2Cap: Mapping Fine-Grained Textual Descriptions from Satellite Images |
Aayush Dhakal¹, Adeel Ahmad¹·², Subash Khanal¹, Srikumar Sastry¹, Nathan Jacobs¹
¹Washington University in St. Louis, ²Taylor Geospatial Institute
Abstract |
We propose a novel weakly supervised approach for creating maps using free-form textual descriptions (or captions). We refer to this new line of work, creating textual maps, as zero-shot mapping. Prior works have approached mapping tasks by developing models that predict over a fixed set of attributes using overhead imagery. However, these models are very restrictive, as they can only solve the highly specific tasks for which they were trained. Mapping text, on the other hand, allows us to solve a large variety of mapping problems with minimal restrictions. To achieve this, we train a contrastive learning framework called Sat2Cap on a new large-scale dataset of paired overhead and ground-level images. For a given location, our model predicts the expected CLIP embedding of the ground-level scenery. Sat2Cap is also conditioned on temporal information, enabling it to learn dynamic concepts that vary over time. Our experimental results demonstrate that our models successfully capture fine-grained concepts and effectively adapt to temporal variations. Our approach does not require any text-labeled data, making the training easily scalable. The code, dataset, and models will be made publicly available.
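As a rough illustration of the kind of objective the abstract describes, the following is a minimal sketch of a symmetric CLIP-style contrastive (InfoNCE) loss between overhead-image embeddings and frozen CLIP embeddings of the paired ground-level images. All names and the temperature value are illustrative assumptions, not taken from the Sat2Cap codebase, and the inputs are assumed to be L2-normalized.

```python
import numpy as np

def symmetric_info_nce(overhead_emb, ground_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    overhead_emb: (N, D) L2-normalized embeddings from the overhead encoder.
    ground_emb:   (N, D) L2-normalized CLIP embeddings of ground-level images.
    Row i of each matrix corresponds to the same geographic location.
    """
    logits = overhead_emb @ ground_emb.T / temperature  # (N, N) similarity matrix
    labels = np.arange(len(logits))                     # positive pairs on the diagonal

    def cross_entropy(l):
        # Row-wise log-softmax; the target class for row i is column i.
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the overhead-to-ground and ground-to-overhead directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))
```

Under this objective, embeddings of co-located overhead and ground-level images are pulled together while mismatched pairs in the batch are pushed apart, so the overhead encoder learns to predict the expected CLIP embedding of the scene at that location.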
1. Introduction |
Creating maps of different attributes is an important task in many domains. Traditionally, mapping methods involve exhaustive data collection across vast regions, which is both time-consuming and labor-intensive. To address this issue, recent studies have explored the use of deep learning models, with their strong visual learning capabilities, to directly predict attributes of interest from overhead imagery. Salem et al. [2] used overhead images to map transient attributes [3] and scene categories [4] across large regions, while Streltsov et al. [5] predicted residential building energy consumption using overhead imagery. Similarly, Bency et al. [6] used satellite images to map housing prices. However, all these prior methods focused on learning specific pre-defined attributes. Such attribute-specific models are quite restrictive, as they cannot map anything beyond their preset list of variables. To overcome this limitation, we created a novel framework that enables us to