Method Precision ↑ Recall ↑ F1 ↑
ViT (ImageNet-22k) [14] 48.34 22.52 30.73
SatMAE [9] 48.19 42.24 45.02
Swin (random) [25] 51.80 47.69 49.66
Swin (ImageNet-22k) [25] 46.88 59.28 52.35
GFM 58.07 61.67 59.82
Figure 4. Qualitative results of downstream performance on OSCD comparing our GFM with ImageNet-22k and randomly initialized baselines. White, green, and red indicate true positive, false positive, and false negative, respectively.
Table 4. DSIFN Change Detection Results
Method Precision ↑ Recall ↑ F1 ↑
ResNet50 (ImageNet-1k) [19] 28.74 92.07 43.80
SeCo [28] 39.68 81.02 53.27
ViT (ImageNet-22k) [14] 70.77 66.34 68.49
SatMAE [9] 70.45 60.29 64.98
Swin (random) [25] 57.97 62.06 59.94
Swin (ImageNet-22k) [25] 67.11 72.33 69.62
GFM 74.83 67.98 71.24
will be an intriguing avenue for future research. Additional training details for these tasks are provided in the supplementary material.
4.1. Change Detection |
Change detection is a particularly important remote sensing task: it helps us understand both how humans interact with our planet over time and how natural phenomena reshape its landscape. We conduct experiments on the Onera Satellite Change Detection dataset (OSCD) [5] in Table 3 and on DSIFN [43] in Table 4.
OSCD consists of 24 Sentinel-2 image pairs extracted from various regions around the world between 2015 and 2018, with GSDs ranging from 10m to 60m; 14 pairs are used for training and 10 for evaluation. The pixel-level annotations indicate whether change has occurred and focus primarily on urban development. We also test our method on the DSIFN dataset, which contains high-resolution imagery from sources such as WorldView-3 and GeoEye-1 [43], with 3490 samples for training and 48 images for evaluation. Each pair of images from a given location at two different timestamps is fed into the Swin encoder [25] for feature extraction. The difference between the features of each pair is computed and fed into a UPerNet [39] to generate the final binary segmentation mask [28, 4]. The encoder is initialized with the pretrained weights.
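The pipeline above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `encode` is a hypothetical stand-in for the shared pretrained Swin encoder, and the norm-and-threshold step stands in for the UPerNet decoder head.

```python
import numpy as np

def encode(image, weights):
    # Hypothetical stand-in for the shared (Siamese) encoder:
    # a single linear projection of flattened patches.
    patches = image.reshape(-1, weights.shape[0])
    return patches @ weights

def change_map(img_t1, img_t2, weights, threshold=0.5):
    # Both timestamps pass through the SAME encoder weights;
    # the per-location feature difference is then collapsed
    # and thresholded into a binary change mask.
    f1 = encode(img_t1, weights)
    f2 = encode(img_t2, weights)
    diff = np.abs(f1 - f2)      # feature difference per patch
    score = diff.mean(axis=1)   # collapse the feature dimension
    return (score > threshold).astype(np.uint8)
```

Identical inputs yield an all-zero mask, since the shared encoder produces identical features for both timestamps.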
For both datasets, we report the precision, recall, and F1 score on the “change” class. As the results from OSCD (Table 3 and Figure 4) and DSIFN (Table 4) show, GFM achieves a consistent improvement over the ImageNet-22k baseline on both datasets. Notably, SatMAE improves over its ImageNet-22k baseline on OSCD but lags behind on DSIFN. This further highlights the difficulty of training large vision transformers from scratch that perform consistently across different GSDs.
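Since the metrics in Tables 3 and 4 are computed on the positive “change” class only, they can be sketched directly from the confusion-matrix counts (function name is our own):

```python
import numpy as np

def change_metrics(pred, gt):
    # pred, gt: binary masks where 1 marks the "change" class.
    tp = np.sum((pred == 1) & (gt == 1))  # true positives
    fp = np.sum((pred == 1) & (gt == 0))  # false positives
    fn = np.sum((pred == 0) & (gt == 1))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

This makes the precision/recall trade-off in the tables concrete: a model that over-predicts change (e.g. ResNet50 on DSIFN) scores high recall but low precision, which the F1 score penalizes.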
4.2. Classification |
Another common remote sensing application is classification. We evaluate two datasets common in the literature [28, 1]: the UC Merced Land Use Dataset [42] and BigEarthNet [36]. The UC Merced Land Use Dataset is a classic dataset in the remote sensing field. It contains 21 classes, each with 100 images at 256x256 pixels and an approximate GSD of 1 foot. We split the data into train and validation sets according to [13]. BigEarthNet [36] (BEN) is a large-scale remote sensing dataset for multi-label classification. The data consist of 12-band Sentinel-2 images with sizes of 120x120, 60x60, and 20x20 pixels for the bands at 10m, 20m, and 60m GSD, respectively. We employ the data split and 19-class evaluation common in the literature [29, 28, 9].
In Table 5, we report the classification accuracy on UC Merced (UCM) and the mean average precision on BigEarthNet (BEN) for all methods. On UC Merced, the SeCo [28] pretrained model performs significantly worse than its ImageNet-1k pretrained ResNet-50 counterpart. The two datasets differ markedly in classes, satellite source, and GSD, so diverse feature knowledge is imperative for maintaining performance across these distinctions. Our model provides robust performance in both cases by leveraging both ImageNet representations and remote sensing data in its learning. Furthermore, one key motivation for training a geospatial foundation model is to improve the sample efficiency for
Table 5. UC Merced classification accuracy and BigEarthNet
multi-label classification mean average precision results. |
Method UCM BEN 10% BEN 1%
ResNet50 (ImageNet-1k) [19] 98.8 80.0 41.3
SeCo [28] 97.1 82.6 63.6
ViT (ImageNet-22k) [14] 93.1 84.7 73.6
SatMAE [9] 92.6 81.8 68.9
Swin (random) [25] 66.9 80.6 65.7
Swin (ImageNet-22k) [25] 99.0 85.7 79.5
GFM 99.0 86.3 80.7
downstream tasks. Notably, we find that our model maintains strong performance on BigEarthNet even when given only 1% of the training data.
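The BEN numbers are mean average precision over the 19 classes. A minimal sketch of that metric, assuming every class has at least one positive example (function names are our own):

```python
import numpy as np

def average_precision(scores, labels):
    # AP for one class: mean of precision@k over the ranks k
    # at which a positive example appears.
    order = np.argsort(-scores)           # rank by descending score
    labels = labels[order]
    cum_tp = np.cumsum(labels)            # true positives up to rank k
    prec_at_k = cum_tp / np.arange(1, len(labels) + 1)
    return float((prec_at_k * labels).sum() / labels.sum())

def mean_average_precision(scores, labels):
    # scores, labels: (num_samples, num_classes) arrays;
    # mAP is the unweighted mean of per-class AP.
    return float(np.mean([average_precision(scores[:, c], labels[:, c])
                          for c in range(labels.shape[1])]))
```

Because AP is ranking-based, it rewards a model that scores the relevant labels above the irrelevant ones for each class, independent of any decision threshold.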
4.3. Segmentation |
Segmentation is a popular remote sensing application, enabling the automated extraction of building footprints and land cover maps over wide regions. We therefore conduct experiments on this task with two different datasets. Vaihingen [35] is an urban semantic segmentation dataset col-