ers. For MIM objectives, training data with a substantially |
lower entropy can make for an easier reconstruction task, |
since masked regions may be more similar to their neigh- |
bors. Therefore, the network does not have to work as hard |
to fill in the blanks, limiting the learning potential. Overall, |
these results indicate that the noticeably narrow scope of features and limited per-sample information in Sentinel-2 data may be limiting the potential of the pretrained model.

Table 1. Dataset Analysis. To evaluate each method, we finetune the pretrained model on seven different tasks, outlined in Section 4, and report the ARP metric defined in Equation 1. We also report the training time in hours on a V100 GPU, as well as the carbon impact estimations² in kg CO₂ equivalent [24]. Overall, our collected GeoPile pretraining dataset significantly improves downstream performance. † indicates the vanilla continual pretraining approach of initializing the model with ImageNet-22k weights prior to conducting MIM training on GeoPile. To further improve the performance in an efficient manner, we introduce our continual pretraining paradigm GFM.

Method              # Images   Epochs   ARP ↑   Time ↓   CO₂ ↓
ImageNet-22k Sup.   14M        -         0.0     -        -
Sentinel-2 [28]     1.3M       100      -5.83   155.6    22.2
GeoPile             600k       200       0.92   133.3    19.0
GeoPile†            600k       200       1.24   133.3    19.0
GeoPile†            600k       800       1.45   533.2    76.0
GFM                 600k       100       3.31    93.3    13.3

Table 2. Breakdown of datasets in the GeoPile. We gather approximately 600k samples from a combination of labeled and unlabeled satellite imagery with various ground sample distances and scenes.

Dataset           # Images   GSD           # Classes
NAIP [31]         300,000    1m            n/a
RSD46-WHU [27]    116,893    0.5m - 2m     46
MLRSNet [33]      109,161    0.1m - 10m    60
RESISC45 [8]      31,500     0.2m - 30m    45
PatternNet [45]   30,400     0.1m - 0.8m   38
Therefore, we set out to collect a diverse geospatial pre- |
training dataset. Sourcing from both labeled and unlabeled
data, we form a new pretraining dataset which we term |
GeoPile. The breakdown of GeoPile is shown in Table 2. |
For textural detail, we ensure a variety of ground sample |
distances (GSD), including images with much higher reso- |
lution than Sentinel-2 (which has a GSD of 10m). Further- |
more, the selected labeled datasets encompass a wide vari- |
ety of classes from general remote sensing scenes, ensuring |
visual diversity across samples. We calculate the average |
entropy of our GeoPile dataset, and find it to be 4.6, much |
higher than that of Sentinel-2. Furthermore, the textural and |
visual diversity is qualitatively evident in Figure 2. In Table 1, the benefit of this data selection is clearly shown by the substantial performance increase.
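For reference, a per-image Shannon entropy of this kind can be computed from the intensity histogram and averaged over the dataset; the sketch below is a plausible reconstruction (the paper's exact binning and channel handling are assumptions):

```python
import numpy as np

def image_entropy(img: np.ndarray, bins: int = 256) -> float:
    """Shannon entropy (in bits) of an image's intensity histogram."""
    hist, _ = np.histogram(img.ravel(), bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins to avoid log(0)
    return float(-(p * np.log2(p)).sum())

def dataset_entropy(images) -> float:
    """Average per-image entropy over an iterable of grayscale arrays."""
    return float(np.mean([image_entropy(im) for im in images]))
```

Under this measure a constant image scores 0 bits and an 8-bit image using all 256 intensities equally scores the maximum of 8 bits, so the reported 4.6 for GeoPile sits well inside the attainable range.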
3.2. Vanilla Continual Pretraining |
Next, after establishing our pretraining data selection, we investigate an alternate pretraining paradigm that bridges the gap between the two common approaches mentioned in Section 1.

² CO₂ estimations were completed with mlco2.github.io from [24].

[Figure 3 schematic: GeoPile is fed to two parallel branches, a frozen teacher F_T (drawn from a model zoo of models pretrained on a large-scale dataset) and a student F_S; a feature loss L_feat compares the teacher features f_L^T against the projected student features P(f_L^S), while an MIM loss L_MIM is applied to the student, which becomes the foundation model.]

Figure 3. Our GFM continual pretraining pipeline, which leverages publicly-available large-scale models in concert with our compiled geospatial dataset and pretraining objective. First, we select a concise set of data from various sources, which we term GeoPile (Section 3.1). Next, we train GFM with our multi-objective continual pretraining approach. Our GFM framework is constructed as a teacher-student paradigm, with two parallel model branches. The teacher F_T is initialized with ImageNet-22k weights (top) and frozen during training. The student F_S is randomly initialized (bottom) and is trained to serve as the final geospatial foundation model. In a continual pretraining fashion, we leverage the intermediate features of an ImageNet-22k pretrained model to guide and accelerate learning. Furthermore, we build in an MIM objective on the student branch to learn valuable in-domain features directly from the geospatial data.

Specifically, we investigate the potential of
continual pretraining in the context of geospatial pretrained |
models. To do so, we first employ the vanilla continual pre- |
training approach; that is, using the ImageNet-22k weights |
as initialization prior to beginning the pretraining step with |
GeoPile. We find this to be helpful in improving perfor- |
mance over starting from scratch. This validates the pos- |
sibility of continual pretraining as a beneficial paradigm to |
provide performance gain without additional resource costs. |
Nonetheless, the improvement is still limited, with ∼0.3% |
ARP increase over starting from scratch and ∼1.24% ARP |
over the baseline. |
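Operationally, this vanilla recipe only changes the initialization step before MIM training. A minimal sketch, with NumPy arrays standing in for model tensors and the helper name hypothetical: pretrained weights are copied wherever name and shape match, while everything else (e.g. a newly added decoder head) keeps its random initialization.

```python
import numpy as np

def continual_init(random_state: dict, pretrained_state: dict) -> dict:
    """Build the starting weights for continual pretraining: prefer a
    pretrained tensor when its name and shape match, otherwise fall back
    to the freshly initialized one."""
    out = {}
    for name, tensor in random_state.items():
        src = pretrained_state.get(name)
        if src is not None and src.shape == tensor.shape:
            out[name] = src.copy()    # reuse the pretrained tensor
        else:
            out[name] = tensor.copy() # keep the random initialization
    return out
```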
To further improve the performance of our pretrained |
model in comparison to the ImageNet-22k baseline, we in- |
crease the number of pretraining epochs in the next row of |
Table 1. While we are able to make improvements, they come at the cost of substantially more compute and carbon footprint for marginal gain. Therefore, we ask
the question: how can we significantly improve the per- |
formance further while maintaining minimal compute and |
carbon footprint overhead? To this end, we propose a sim- |
ple and efficient approach for building geospatial pretrained |
models capable of strong downstream performance. |
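Concretely, the objective sketched in Figure 3 pairs an MIM reconstruction term on masked patches with a feature distillation term against the frozen teacher. The sketch below is a simplified illustration of that multi-objective setup, not the paper's implementation: the masking scheme, the loss choices (L1 reconstruction, mean-squared feature distance), and the equal weighting are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def mask_patches(patches: np.ndarray, ratio: float = 0.6):
    """Randomly mask a fraction of patch tokens; returns the corrupted
    input and the boolean mask (True = masked)."""
    mask = rng.random(patches.shape[0]) < ratio
    corrupted = patches.copy()
    corrupted[mask] = 0.0             # zero out masked patches
    return corrupted, mask

def mim_loss(pred: np.ndarray, target: np.ndarray, mask: np.ndarray) -> float:
    """L1 reconstruction loss, computed only on the masked patches."""
    assert mask.any(), "need at least one masked patch"
    return float(np.abs(pred[mask] - target[mask]).mean())

def feat_loss(student_feats: np.ndarray, teacher_feats: np.ndarray) -> float:
    """Mean-squared distance between (projected) student features and
    the frozen teacher's intermediate features."""
    return float(((student_feats - teacher_feats) ** 2).mean())

def gfm_loss(pred, target, mask, student_feats, teacher_feats) -> float:
    """Combined objective: MIM on the student plus feature distillation."""
    return mim_loss(pred, target, mask) + feat_loss(student_feats, teacher_feats)
```

In the actual pipeline only the student branch would receive gradients; the teacher is frozen and serves purely as a feature target.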
3.3. GFM Pretraining |
A significant number of geospatial foundation model |
studies disregard the existing large-scale model represen- |
tations. This is far from ideal, particularly for large trans- |
former models known to require a vast amount of data and |