gressively increased for various applications worldwide. |
Progress in this domain can substantially improve our abil- |
ity to understand the earth and how we interact with it. With |
the rising popularity of foundation models in vision and nat- |
ural language, researchers have begun to investigate apply- |
*Work done as an intern at Amazon Web Services |
[Figure 1 chart: per-task results on DSFIN (Change Detection), WHU Aerial (Segmentation), Vaihingen (Segmentation), SpaceNet2 (Super-resolution), BigEarthNet (Multi-label Classification), UC Merced (Classification), and OSCD (Change Detection).]
Figure 1. Our geospatial foundation model (GFM) achieves favor-
able performance on a broad set of tasks in comparison to other
state-of-the-art geospatial pretraining methods (SeCo [28], Sat-
MAE [9]) and ImageNet supervised pretraining baselines. Leg-
end is as follows. Cyan: ImageNet-1k Supervised (ResNet50),
Blue: SeCo [28], Purple: ImageNet-22k Supervised (ViT), Or-
ange: SatMAE [9], Gray: ImageNet-22k Supervised (Swin),
Green: GFM (ours).
ing such principles to the geospatial domain in order to en-
hance the suitability of deep learning models for down-
stream tasks [29, 28, 9, 2]. In the literature, various works have ex-
plored two prominent approaches for introducing pretrained |
foundation models in geospatial applications. The first ob- |
vious approach is to leverage existing foundation models |
from the natural image domain, like those trained on the |
large-scale ImageNet-22k dataset [11]. In practice, this |
is done by directly finetuning publicly-available ImageNet |
pretrained models on the downstream tasks . This approach |
has the advantage of being straight-forward, as ImageNet |
models can be simply downloaded from many open-source |
model zoos, and has been shown to be effective [29, 30]. |
However, due to the domain gap between natural images |
and remote sensing, this approach is not optimal for geospa- |
tial data, and still leaves performance gains on the table. |
In recent years, a second approach has gained significant |
traction, where researchers aim to pretrain models specific
to the geospatial domain [28, 2, 9, 38]. These methods
typically train a network from scratch on a large corpus |
of remote sensing imagery to learn in-domain representa- |
tions transferable to downstream tasks. Unfortunately, this |
can require a significant amount of data and training time |
to achieve good performance, especially when employing |
large state-of-the-art (SOTA) transformer models. For in- |
stance, the current SOTA in geospatial foundation models, |
SatMAE [9], requires 768 hours on a V100 GPU for train- |
ing a vision transformer [14]. This has substantial cost |
associated with producing the model, not just in terms of |
time and computation but also environmentally, with a to-
tal estimated carbon footprint of 109.44 kg CO2-equivalent.
Additionally, the final performance of such models is not
consistently better across various tasks than simply utilizing
publicly-available ImageNet pretrained models (Section 4), |
despite the high resource expense. |
In this work, we propose to investigate a different |
paradigm for producing more effective geospatial founda-
tion models at substantially lower resource cost. First, we
begin with a discussion on pretraining data selection, and |
ultimately construct a concise yet diverse collection of data |
from various sources to promote feature diversity and ef- |
fective pretraining. Second, rather than following the afore- |
mentioned typical approaches, we investigate the potential |
of continual pretraining for the geospatial domain from
readily-available ImageNet models. Continual pretraining |
has been practiced in the NLP domain with success in var- |
ious works [16, 17, 26]. In this paradigm, existing founda- |
tion models are further improved for a specific domain or |
task through a secondary pretraining stage. This new single |
model can now be fine-tuned on the various downstream |
tasks in that domain. In principle, we reason that such a |
paradigm has the potential to boost performance by utiliz- |
ing large-scale ImageNet representations as a base on which |
stronger geospatial foundation models can be built. Further- |