in Figure 5, this is not the optimal option. Such initialization is unnecessary in our framework, since it already allows for seamless integration of ImageNet representations with valuable in-domain features. Forcing the initialization likely introduces too much bias toward the natural image representations. Therefore, an unbiased student is the most effective choice.
5.3. GeoPile Pretraining Dataset
To ablate the components of GeoPile, we remove each constituent dataset individually to gauge its relative importance. In addition, we compare using only the labeled data portion against using only the unlabeled NAIP imagery portion. As expected, using only data from labeled datasets gives better performance with fewer images than using images gathered from NAIP alone. The human-curated samples in these datasets are
more likely to contain relevant objects and features, as they
each correspond to a particular class of interest. Still, unlabeled data like NAIP can be sourced easily and at scale. Further scaling of both the labeled and unlabeled portions could improve performance further; however, it would also increase the training time and environmental impact. Therefore, we maintain GeoPile at approximately 600,000 images.
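The leave-one-out procedure described above can be sketched in a few lines. This is a hypothetical illustration only: `pretrain_and_evaluate` stands in for a full pretraining plus downstream-evaluation run, and the source names and dummy metric are not the authors' actual code or data.

```python
# Hypothetical sketch of the leave-one-out ablation over GeoPile's sources.
# `pretrain_and_evaluate` is a placeholder, NOT the authors' pipeline.

def pretrain_and_evaluate(sources):
    """Placeholder: pretrain on `sources` and return a downstream score."""
    return float(len(sources))  # dummy metric, purely for illustration

def leave_one_out(all_sources):
    """Drop each source in turn and record the performance delta vs. the full set."""
    baseline = pretrain_and_evaluate(all_sources)
    return {
        held_out: baseline - pretrain_and_evaluate(
            [s for s in all_sources if s != held_out]
        )
        for held_out in all_sources
    }

# A larger positive delta means the held-out source mattered more.
print(leave_one_out(["labeled_rs", "naip_unlabeled"]))
```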
Table 9. Ablation results for the training objectives in GFM. For w/o teacher, we only conduct MIM with GeoPile. For w/o MIM, we
simply perform the distillation objective from the ImageNet-22k model to our student model with GeoPile. We abbreviate the following
for horizontal space: UC Merced (UCM), BigEarthNet (BEN), WHU Aerial (WHU), Vaihingen (Vai), SpaceNet2 (SN2).
Method OSCD (F1) DSIFN (F1) UCM BEN 10% BEN 1% WHU Vai. SN2 (PSNR) SN2 (SSIM)
w/o teacher 57.3 67.65 98.8 86.5 80.0 90.5 74.0 22.509 0.631
w/o MIM 59.58 71.86 98.8 86.1 80.2 90.2 72.6 22.069 0.608
GFM 59.82 71.24 99.0 86.3 80.7 90.7 75.3 22.599 0.638
Table 10. Results for employing temporal pairs and datasets from SeCo [28] in our multi-objective pretraining framework. TP indicates
that the teacher receives one image from a temporal pair, and the student receives the other. SI indicates that the same image is inputted to
the teacher and student.
Dataset Inputs OSCD (F1) DSIFN (F1) UCM BEN 10% BEN 1% WHU Vai. SN2 (PSNR) SN2 (SSIM)
SeCo 100k [28] TP 57.03 62.48 80.0 80.6 68.6 88.3 66.3 22.078 0.572
SeCo 100k [28] SI 58.41 67.92 92.1 83.9 76.5 88.8 68.1 22.439 0.602
SeCo 1M [28] SI 58.87 69.41 95.7 86.2 77.1 89.6 71.0 22.281 0.626
GeoPile SI 59.82 71.24 99.0 86.3 80.7 90.7 75.3 22.599 0.638
5.4. Multi-objective Ablation
To further evaluate GFM, we conduct experiments in which we exclude the teacher component and the MIM component individually, as detailed in Table 9. We find that training with the multi-objective approach is the best performer overall. This shows that the integrated distillation and MIM objectives within the GFM framework both contribute to producing a well-balanced model for downstream tasks, and are important aspects of efficient and effective geospatial learning.
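The ablated objectives can be written as a single combined loss. The following is a minimal, framework-free sketch in NumPy; the function names, the cosine-distance distillation term, the L1 MIM term, and the equal default weighting are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cosine_distill(feat_student, feat_teacher):
    """Distillation term: 1 - cosine similarity between the student's features
    on the full image and the frozen ImageNet-22k teacher's features."""
    num = (feat_student * feat_teacher).sum(axis=-1)
    den = (np.linalg.norm(feat_student, axis=-1)
           * np.linalg.norm(feat_teacher, axis=-1) + 1e-8)
    return float((1.0 - num / den).mean())

def mim_recon(recon, target, mask):
    """MIM term: L1 reconstruction error, averaged over masked positions only."""
    return float((np.abs(recon - target) * mask).sum() / (mask.sum() + 1e-8))

def multi_objective_loss(feat_s, feat_t, recon, target, mask,
                         w_distill=1.0, w_mim=1.0):
    """Combined objective. Setting w_distill=0 corresponds to the 'w/o teacher'
    row of Table 9, and w_mim=0 to the 'w/o MIM' row."""
    return (w_distill * cosine_distill(feat_s, feat_t)
            + w_mim * mim_recon(recon, target, mask))
```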
5.5. Temporal Pairs Experiment
Some works employ temporal pairs in the pretraining
procedure [28, 2, 1], meaning two satellite images from the
same spatial region but taken at different times. We also ex-
periment with the use of temporal positives in our training
paradigm using the dataset proposed in SeCo [28]. In this
case, the teacher receives one image from a temporal pair,
and the student receives the other. The temporal changes
can possibly serve as a form of natural augmentation for the
distillation objective. However, as shown in Table 10, we
find that using temporal positives (TP) is worse than simply
using the same image (SI) for both branches. Therefore, we
simply use the same image for both branches for other ex-
periments. We further scale up the data by employing the 1M-sample Sentinel-based dataset from SeCo. Nonethe-
less, GeoPile proves to be more effective as a pretraining
data source for our GFM.
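The two input configurations compared in Table 10 amount to a simple routing decision per temporal pair. A minimal sketch follows; the function name and pair representation are hypothetical, not the paper's code.

```python
import random

def make_branch_inputs(temporal_pair, mode):
    """Route a temporal pair (two images of the same region at different times)
    to the teacher and student branches.

    mode "TP": teacher sees one timestamp, student sees the other, so the
    temporal change acts as a natural augmentation for distillation.
    mode "SI": both branches receive the same image, the setting that
    performed better in Table 10.
    """
    img_t0, img_t1 = temporal_pair
    if mode == "TP":
        return img_t0, img_t1          # (teacher_input, student_input)
    if mode == "SI":
        img = random.choice(temporal_pair)
        return img, img
    raise ValueError(f"unknown mode: {mode!r}")
```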
6. Conclusion
In summary, this paper investigates an alternative paradigm from previous work towards producing better geospatial foundation models at substantially lower resource cost. To this end, we first construct a concise yet diverse collection of data from various remote sensing sources for pretraining. Second, we propose a surprisingly simple yet effective multi-objective continual pretraining paradigm, in which we leverage the strong representations of ImageNet-22k to guide and accelerate learning,
while simultaneously providing the freedom to learn valu-
able in-domain features through self-supervised learning on
geospatial data. We hope that our GFM approach will serve
as an example to inspire other works in investigating ef-
ficient and sustainable methods for developing geospatial
foundation models.
Broader Impact and Limitations. As the geospatial community continues to innovate, the resulting advances promise to benefit both the Earth and society. Automating the extraction of useful information from geospatial data can aid scientists, engineers, and others in making data-informed decisions on infrastructure advancement, food supply improvements, and natural disaster response. A potential limitation of our GFM approach is that it may still be somewhat constrained by the performance
of the ImageNet-22k model. If a model were instead trained from scratch on an extremely large corpus of remote sensing data, it might eventually surpass ImageNet-initialized baselines. However, this would incur a substantial amount of training time and CO2 impact.
Furthermore, as mentioned in Section 1, natural image mod-
els are constantly being improved and released by the gen-
eral computer vision community. Therefore, our approach
enables the geospatial domain to effectively leverage these
improvements for better in-domain performance with mini-
mal carbon impact. We believe this is a sustainable way for