compute power to train. Instead, we reason that the valuable knowledge available in models like those trained on ImageNet-22k should be leveraged to produce strong performance with minimized overhead. To this end, we propose an unsupervised multi-objective training paradigm for effective and efficient pretraining of geospatial models, illustrated in Figure 3.
There are two main components in our framework. First, we randomly initialize an encoder $F_S$ and decoder $D$ set up for MIM as in [41]. During training, the input is randomly masked, and the network attempts to reconstruct the image at the output. This MIM objective is enforced with an L1 loss [41]:
$$\mathcal{L}_{MIM} = \frac{\|O_\kappa - G_\kappa\|_1}{N}, \qquad (2)$$
where $O_\kappa$ are the original pixel values from $\kappa$ masked regions, $G_\kappa$ are the generated reconstructions for those regions, and $N$ is the total number of masked pixels.
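As a minimal sketch of Eq. (2) (NumPy, with illustrative array shapes; the function name and mask layout are assumptions, not the paper's exact implementation), the MIM loss averages the absolute reconstruction error over only the masked pixels:

```python
import numpy as np

def mim_loss(original, generated, mask):
    """L1 reconstruction loss over masked regions only, as in Eq. (2).

    original, generated: (H, W, C) pixel arrays.
    mask: (H, W) boolean array, True where the input was masked.
    """
    o_masked = original[mask]   # O_kappa: ground-truth pixels in masked regions
    g_masked = generated[mask]  # G_kappa: reconstructed pixels in masked regions
    n = o_masked.size           # N: total number of masked pixels
    return np.abs(o_masked - g_masked).sum() / n
```

Note that unmasked pixels contribute nothing to the loss, so the network is only penalized on the regions it must infer from context.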
For the continual pretraining of our framework, we initialize a second encoder branch $F_T$ up to a chosen stage $L$ and load the ImageNet-22k pretrained weights. This branch behaves as a teacher to the student branch ($F_S$) during training; the student will serve as our final model. We freeze the ImageNet teacher's weights, both to ensure that its structured representations are maintained during training and to reduce the computation required during optimization.
Rather than using the masked input as in the student branch, the teacher receives the unmasked image as input and provides a feature output $f^T_L$ at stage $L$. This feature has access to the full context of the input, enabling it to capture informative representations. We utilize this feature to guide the representations of the student, and form a secondary objective with the cosine similarity between branch features:
$$\mathcal{L}_{feat} = -\frac{P(f^S_L)}{\|P(f^S_L)\|_2} \cdot \frac{f^T_L}{\|f^T_L\|_2}, \qquad (3)$$
where $f^S_L$ and $f^T_L$ are the intermediate features of the student and teacher branches at stage $L$, and $P$ is a linear projection layer. Therefore, the final loss during training is simply the summation of these objectives:
$$\mathcal{L} = \mathcal{L}_{MIM} + \mathcal{L}_{feat}. \qquad (4)$$
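The feature-distillation term of Eq. (3) and the summed objective of Eq. (4) can be sketched as follows (NumPy; the function names are illustrative, and the projection $P$ is represented as a plain matrix rather than a trainable layer):

```python
import numpy as np

def feature_loss(f_s, f_t, proj):
    """Negative cosine similarity between the projected student feature
    and the teacher feature, as in Eq. (3).

    f_s: student feature f^S_L (1-D vector).
    f_t: teacher feature f^T_L (1-D vector, no gradient in practice).
    proj: matrix standing in for the linear projection layer P.
    """
    p = proj @ f_s                 # P(f^S_L)
    p = p / np.linalg.norm(p)      # l2-normalize the student side
    t = f_t / np.linalg.norm(f_t)  # l2-normalize the teacher side
    return -float(p @ t)           # cosine similarity, negated for minimization

def total_loss(l_mim, l_feat):
    """Final training objective, Eq. (4): the simple sum of both terms."""
    return l_mim + l_feat
```

When the projected student feature aligns perfectly with the teacher feature, the cosine term reaches its minimum of -1, so minimizing the sum pulls the student toward the teacher's representation while the MIM term adapts it to in-domain data.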
This training paradigm enables an ideal two-fold optimization. Distillation from the intermediate features of the teacher ensures that the student benefits from the teacher's diverse knowledge, learning more in less time. Furthermore, the student is simultaneously given the freedom to adapt to in-domain data through its own pretraining objective, gathering new features to improve performance.
We analyze the ARP and resource cost of this approach in Table 1. Notably, our GFM is able to achieve better overall performance with substantially less computation and emissions impact compared to vanilla continual pretraining with the same dataset, illustrating that our multi-objective continual pretraining paradigm is an effective method for training these models. Comparatively, the SOTA geospatial pretraining method SatMAE [9] requires 768 hours on a V100 GPU and 109.44 kg CO2-equivalent according to its reported results. Therefore, GFM enables more than an 8× reduction in total training time and carbon impact. Moreover, we find that the performance of SatMAE is often not superior to the off-the-shelf ImageNet-22k pretrained ViT (Section 4). This implies that building powerful geospatial pretrained models from scratch is challenging, and it further underscores the benefits of utilizing continual pretraining instead. We show these results in the following section.
4. Experiments |
To verify the effectiveness of our model in detail, we conduct experiments on seven geospatial datasets covering various tasks, including change detection (Section 4.1), classification (Section 4.2), segmentation (Section 4.3), and super-resolution (Section 4.4).
For pretraining, we employ 8 NVIDIA V100 GPUs with a batch size of 2048 (128 per GPU) and an image size of 192×192. All pretraining settings are the same as in [41]. For downstream tasks, 4 NVIDIA A10G GPUs are employed. During the pretraining stage, we utilize RGB bands, as they are the most commonly available across data sources and tasks. For downstream tasks with additional band inputs, we initialize the RGB patch embeddings with the pretrained weights and randomly initialize the remaining channels, potentially improving performance even further through the employment of additional data modalities.

Table 3. Onera Satellite Change Detection Results

Method                        Precision ↑  Recall ↑  F1 ↑
ResNet50 (ImageNet-1k) [19]   70.42        25.12     36.20
SeCo [28]                     65.47        38.06     46.94
MATTER [1]                    61.80        57.13     59.37
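The band-expansion strategy described above, copying the pretrained RGB patch-embedding weights and randomly initializing the channels for extra spectral bands, can be sketched as follows (NumPy; the function name, the 0.02 initialization scale, and the weight layout are illustrative assumptions, not the paper's exact implementation):

```python
import numpy as np

def expand_patch_embed(pretrained_w, n_extra, rng=None):
    """Build a patch-embedding weight for RGB plus extra spectral bands.

    pretrained_w: (embed_dim, 3, ph, pw) pretrained RGB patch-embed weights.
    n_extra: number of additional input bands for the downstream task.

    The RGB slice keeps the pretrained weights; the extra channels are
    randomly initialized (small-scale normal values, a common choice).
    """
    rng = np.random.default_rng(0) if rng is None else rng
    d, _, ph, pw = pretrained_w.shape
    extra = rng.normal(0.0, 0.02, size=(d, n_extra, ph, pw))
    return np.concatenate([pretrained_w, extra], axis=1)
```

This way, a model pretrained on RGB imagery can still ingest multispectral inputs downstream without discarding any of the pretrained embedding knowledge.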