ration of the masked image modeling framework in geospa- |
tial applications is still in its early stages, and could help |
alleviate some concerns with contrastive approaches in this |
domain. In particular, choosing augmentations for contrastive methods can be difficult: common selections such as greyscale, color jitter, and others that heavily alter image intensity can instill undesirable
invariances [29]. On the other hand, MIM objectives like |
[41, 18] rely only on simple spatial augmentations such |
as flipping and cropping. Furthermore, a common remote |
sensing application is that of change detection, which re- |
quires a model to detect changes in two images from the |
same location but at different times. In order to still be ef- |
fective on this task, works that use contrastive approaches on temporal positives introduce various design choices. For
instance, SeCo [28] creates multiple feature subspaces dur- |
ing pretraining, each one invariant to a separate form of aug- |
mentation. [1] also employs temporal positives, but instead |
chooses the sampling locations for the pretraining data to |
ensure that image pairs contain primarily natural illumination and viewing-angle variation, without major changes such as new urban developments.
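The contrast drawn above, between intensity-altering augmentations and the purely spatial ones that MIM objectives rely on, can be illustrated with a minimal numpy sketch (the function name and crop size are ours, for illustration only, not from [41, 18]):

```python
import numpy as np

rng = np.random.default_rng(0)

def spatial_augment(img: np.ndarray, crop_size: int) -> np.ndarray:
    """Random crop followed by a random horizontal flip.

    `img` is an (H, W, C) array. These operations move pixels around
    but never change their values, so they cannot instill the
    intensity invariances that color jitter or greyscale would.
    """
    h, w, _ = img.shape
    top = int(rng.integers(0, h - crop_size + 1))
    left = int(rng.integers(0, w - crop_size + 1))
    crop = img[top:top + crop_size, left:left + crop_size]
    if rng.random() < 0.5:
        crop = crop[:, ::-1]  # horizontal flip
    return crop

# Example: a 256x256 "image" cropped to 224x224.
img = rng.random((256, 256, 3))
out = spatial_augment(img, 224)
```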
Continual Pretraining. Continual pretraining was primarily introduced in the natural language domain [16, 17, 26] in order to improve large language models (LLMs).
[16] illustrates the viability of two additional stages of |
pretraining, using in-domain data (domain-adaptive), and |
then even further using task-specific data (task-adaptive). |
[17] proposes a continual training paradigm for enabling |
temporal reasoning abilities to pretrained language mod- |
els. [26] focuses on using continual pretraining to enable mixed-language neural machine translation. In the vision
domain, [22] employs a BYOL [15] style continual pre- |
training paradigm for 2D medical image segmentation. [34] |
explores a hierarchical pretraining approach for task adap- |
tation. However, they primarily focus on adapting to a spe- |
cific downstream task at a time, employing three training |
stages on top of an existing pretrained model for each task |
individually. In contrast, we employ one efficient in-domain |
pretraining setting that can generalize to many downstream |
tasks, as illustrated in Section 4. Furthermore, rather than |
directly loading the pretrained weights from existing mod- |
els as initialization, we find instead that leveraging the rep- |
resentations as an auxiliary distillation objective during the |
pretraining process enables learning stronger representa- |
tions. |
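The auxiliary distillation idea can be sketched as a combined objective: a MIM reconstruction loss plus a term pulling student features toward those of a frozen pretrained model. This is an illustrative numpy sketch under our own assumptions (a cosine-similarity distillation term with weight `lam`); the actual objective used is defined by the method itself, not here.

```python
import numpy as np

def cosine_distill_loss(student_feat: np.ndarray, teacher_feat: np.ndarray) -> float:
    """Mean cosine distance between student and frozen-teacher features.

    Both inputs are (batch, dim) arrays of (unnormalized) features.
    """
    s = student_feat / np.linalg.norm(student_feat, axis=-1, keepdims=True)
    t = teacher_feat / np.linalg.norm(teacher_feat, axis=-1, keepdims=True)
    return float(np.mean(1.0 - np.sum(s * t, axis=-1)))

def total_loss(mim_loss: float, student_feat: np.ndarray,
               teacher_feat: np.ndarray, lam: float = 1.0) -> float:
    """MIM reconstruction loss plus a weighted auxiliary distillation term."""
    return mim_loss + lam * cosine_distill_loss(student_feat, teacher_feat)
```

When the student's features already match the teacher's, the auxiliary term vanishes and only the reconstruction loss remains.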
3. Methodology |
In the following sections, we discuss the pretraining data |
selection (Sec. 3.1), investigate vanilla continual pretraining
(Sec. 3.2), and present our GFM method (Sec. 3.3). |
3.1. Pre-training Data Selection |
A particularly common choice of source data among |
geospatial contrastive pretraining works is Sentinel-2 im- |
agery [28, 1, 37] due to its large corpus of available data |
and ease of access. Therefore, to begin our study, we first |
gather a pretraining dataset of 1.3 million Sentinel-2 im- |
ages using the sampling technique from [28]. After gather- |
ing the Sentinel-2 data, we employ it to pretrain a Swin-B |
[25] model with the masked image modeling (MIM) objec- |
tive from [41]. We then finetune and evaluate this model |
on a wide variety of downstream datasets to get a broad un- |
derstanding of its performance potential in many tasks (see |
Section 4 for task details). For comparison, we finetune the ImageNet-22k pretrained Swin-B from the official Swin Transformer repository [25] on all downstream tasks as a baseline.
Figure 2. We visualize example images from the pretraining datasets with Sentinel-2 (left) and GeoPile (right). Sentinel-2 has noticeably lower feature diversity, both within a single image and across images, than our GeoPile pretraining dataset.
In order to compare these models across all tasks,
we introduce an average relative performance metric (ARP) |
in which we take the relative difference on each task with |
respect to the ImageNet-22k baseline, and then average that |
difference: |
\[
\mathrm{ARP}(M) = \frac{1}{N} \sum_{i=1}^{N} \frac{\mathrm{score}(M, \mathrm{task}_i) - \mathrm{score}(\mathrm{baseline}, \mathrm{task}_i)}{\mathrm{score}(\mathrm{baseline}, \mathrm{task}_i)}. \tag{1}
\]
Here “baseline” is the Swin-B model pretrained on |
ImageNet-22k, as mentioned above. M denotes the model under evaluation, and N is the number of tasks. There are 7 tasks used in Section 4 covering important
geospatial applications such as classification, multi-label |
classification, semantic segmentation, change detection, |
and super-resolution. The reported ARP value is scaled by 100 and shown as a percentage.
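Equation 1 translates directly into code; a minimal sketch, where the per-task scores are hypothetical placeholders rather than results from the paper:

```python
def average_relative_performance(model_scores, baseline_scores):
    """ARP (Eq. 1): mean relative difference w.r.t. the baseline, scaled by 100."""
    assert len(model_scores) == len(baseline_scores) and baseline_scores
    rel = [(m - b) / b for m, b in zip(model_scores, baseline_scores)]
    return 100.0 * sum(rel) / len(rel)

# Hypothetical scores on two tasks: 20% worse on one, 20% better on the other,
# so the relative differences cancel out.
arp = average_relative_performance([80.0, 60.0], [100.0, 50.0])
```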
We compare these two models in Table 1. Interest- |
ingly, we find that the Sentinel-2 model performs poorly |
on downstream tasks compared to the ImageNet-22k base- |
line. To investigate further, we visualize multiple samples |
from Sentinel-2 in the left columns of Figure 2. Upon in- |
spection, we note that the feature diversity within a single |
image and across images of Sentinel-2 is perceivably low. |
To further quantify this suspicion, we calculate the average |
image entropy over a randomly sampled set of 3000 im- |
ages from the collected Sentinel-2 data as well as the typ- |
ical ImageNet dataset as a baseline. Overall, the Sentinel |
images have an average entropy of 3.9 compared to 5.1 of |
ImageNet. Such an evaluation provides insights into the |
potential pitfalls of Sentinel-2 data in pretraining transform- |
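The text does not specify which entropy measure is used; a common choice, which we assume here, is the Shannon entropy of the intensity histogram. A minimal numpy sketch:

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (bits) of the intensity histogram of an 8-bit image.

    This is one standard definition; the exact procedure used for the
    3.9 vs. 5.1 comparison above may differ.
    """
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # by convention, 0 * log2(0) contributes 0
    return float(-(p * np.log2(p)).sum())

def average_entropy(images) -> float:
    """Mean entropy over a sample of images."""
    return float(np.mean([image_entropy(im) for im in images]))
```

A constant image scores 0 bits, while uniform random 8-bit noise approaches the 8-bit maximum; low-diversity imagery like the Sentinel-2 samples falls between these extremes.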