more, such natural image models are constantly being im-
proved and released by the general computer vision commu-
nity, providing a consistent source of better baseline mod-
els. Therefore, an approach that could enable the geospa-
tial domain to leverage these improvements with minimal
resource needs and carbon footprint paves the way for con-
tinual, sustainable benefits for the geospatial community.
However, when we initially experiment with the standard
continual pretraining formulation, we find it provides only
marginal benefits (Section 3.2). Instead, we discover that
utilizing ImageNet representations as an auxiliary distilla-
tion objective during pretraining leads to a stronger geospa-
tial foundation model. Building upon this principle, we pro-pose a multi-objective continual pretraining paradigm that
significantly enhances performance while requiring mini-
mal resources. Our approach leverages ImageNet’s power-
ful representations to facilitate and expedite learning, while
also enabling the acquisition of valuable in-domain features
via self-supervised learning on geospatial data. Further-
more, our proposed Geospatial Foundation Model (GFM)
exhibits strong performance, surpassing previous state-of-
the-art (SOTA) methods across a diverse range of down-
stream tasks (Section 4). Our contributions are as follows:
• We investigate a novel paradigm for creating highly ef-
fective geospatial models with minimal resource costs.
Our methodology begins with data selection and con-
struction of a compact yet diverse dataset from multi-
ple sources to promote feature diversity and enhance
pretraining effectiveness, which we term GeoPile. We
further explore the potential of continual pretraining
from ImageNet models, but find it is not satisfactory in
its standard formulation.
• Therefore, to achieve better performance with minimal
resource needs, we propose a multi-objective contin-
ual pretraining paradigm. Our design is surprisingly
simple yet effective, constructed as a teacher-student
strategy with both a distillation objective and self-
supervised masked image modeling. This approach
allows GFM to leverage the strong representations of
ImageNet to guide and quicken learning, while simul-
taneously providing the freedom to learn valuable in-
domain features.
• We evaluate our GFM approach, as well as several
baseline and SOTA methods, on 7 datasets covering
important geospatial applications such as change de-
tection, classification, multi-label classification, se-
mantic segmentation, and super-resolution. Overall,
our GFM performs favorably over previous methods
(as shown in Figure 1).
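The teacher-student design described above combines two terms: a masked image modeling loss on geospatial data and an auxiliary distillation loss against frozen ImageNet features. The following is a minimal sketch of how such a combined objective could be computed; the function names, the choice of an L1 pixel loss, the cosine-distance distillation term, and the weighting factor `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def mim_loss(pred_pixels, target_pixels, mask):
    # Reconstruction loss on withheld patches only: pred/target have shape
    # (batch, num_patches, pixels_per_patch); mask is (batch, num_patches).
    diff = np.abs(pred_pixels - target_pixels) * mask[..., None]
    return diff.sum() / max(mask.sum(), 1)

def distill_loss(student_feat, teacher_feat):
    # Cosine distance between student features and features from a frozen
    # ImageNet-pretrained teacher; shapes are (batch, feature_dim).
    s = student_feat / np.linalg.norm(student_feat, axis=-1, keepdims=True)
    t = teacher_feat / np.linalg.norm(teacher_feat, axis=-1, keepdims=True)
    return 1.0 - (s * t).sum(-1).mean()

def multi_objective_loss(pred_pixels, target_pixels, mask,
                         student_feat, teacher_feat, lam=1.0):
    # Total objective: in-domain MIM plus lam-weighted ImageNet distillation.
    return (mim_loss(pred_pixels, target_pixels, mask)
            + lam * distill_loss(student_feat, teacher_feat))
```

The key property is that the two terms pull in complementary directions: the distillation term anchors the student to strong general-purpose representations, while the MIM term lets it acquire in-domain geospatial features.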
2. Related Work
Geospatial Pretraining. Various works have experimented
with supervised or self-supervised pretraining paradigms
in the geospatial domain. The classical work of [29], and
the more recent study of [38], investigate supervised pre-
training on individual datasets of various sizes. Interest-
ingly, these studies still often found ImageNet-pretrained
models to perform very well, particularly with vision
transformers [14, 25]. Other works have explored
self-supervised learning paradigms for remote sensing, pri-
marily focused on contrastive methods. [28] and [2] employ
a MoCo [7] style objective using spatially aligned but tem-
porally different images as the positive pairs. [23] and [20]
also utilize a MoCo-inspired objective, but specify a crop-
ping procedure to generate positives and negatives within
and across images. [37] employs a colorization objective
on Sentinel-2 imagery utilizing the various spectral bands.
Most recently, SatMAE [9] explores the use of masked im-
age modeling to train a large ViT model. This work is sim-
ilar in some respects to ours, as we also train a transformer
model with an MIM objective. However, we find that Sat-
MAE often does not perform better than the off-the-shelf
ImageNet-22k pretrained ViT (Section 4). This indicates
both the difficulty of building strong geospatial pretrained
models from scratch and highlights the potential usefulness
of leveraging continual pretraining instead, as we investi-
gate in this work.
Masked Image Modeling. Masked image modeling
(MIM) has been proposed in various forms in recent years,
and has recently been found to be particularly effective
in the natural image domain, surpassing many contrastive
works and being shown to be friendlier to downstream op-
timization [41, 18, 44, 3, 40]. In general, the goal is to learn
from data in a self-supervised manner by asking the model
to generate pixel values for intentionally-withheld regions
in an image. [32] is an early work with an aim of learning
strong visual representations through inpainting masked re-
gions. In [6], Chen et al. train a large transformer to pre-
dict pixels autoregressively. After the introduction of vi-
sion transformers (ViT) [14], many works continued to im-
prove various MIM variants. [3] and [44] take inspiration
from BERT [12] in natural language processing, and tok-
enize the image patches with either a pretrained model or
jointly trained online tokenizer, with the objective being to
reconstruct at a token-level rather than raw pixels. Recently,
[41] and [18] show that a masked image modeling task of
simply regressing directly on the image pixels is sufficient
and effective. In this work, we leverage the framework from
[41], as it is compatible with hierarchical transformer archi-
tectures [25].
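The pixel-regression variant of MIM described above can be sketched in a few lines: randomly withhold a fraction of image patches, then regress the raw pixels of the masked patches and average the loss over them only. This is a minimal illustration in the spirit of [41, 18]; the mask-ratio value and helper names are assumptions for the example, not values taken from those papers.

```python
import numpy as np

def random_patch_mask(num_patches, mask_ratio, rng):
    # Randomly select a subset of patches to withhold from the encoder.
    num_masked = int(num_patches * mask_ratio)
    mask = np.zeros(num_patches, dtype=bool)
    idx = rng.choice(num_patches, size=num_masked, replace=False)
    mask[idx] = True
    return mask

def masked_pixel_loss(pred, target, mask):
    # Regress directly on raw pixel values (no tokenizer); pred/target have
    # shape (num_patches, pixels_per_patch). Only masked patches contribute.
    per_patch = np.abs(pred - target).mean(axis=-1)  # L1 error per patch
    return per_patch[mask].mean()
```

Averaging over masked patches only is the detail that distinguishes this from plain autoencoding: the model receives no reconstruction signal for patches it was allowed to see.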
In this work, we develop our pretraining objective based
on a masked image modeling approach like [41, 18]. Explo-