Figure 2. Scale-MAE employs the Masked Autoencoder framework. An input image is patchified and masked before being passed into an MAE encoder. A Ground Sample Distance Positional Encoding (GSDPE) is added to the encoder input, which scales the positional encodings to the area of ground covered. The Scale-MAE decoder has three stages: (1) Decoding, which uses a smaller number of transformer layers than MAE to decode the encoded values; (2) Upsampling, which progressively deconvolves the decoded feature map to a larger size before passing it through the Laplacian Blocks (abbreviated LB, see Section 3); (3) Reconstruction, which then reconstructs low and high frequency features at different scales. These outputs are used to compute an aggregate loss with ground truth low and high frequency features, where, following the super resolution literature [2], an L1 loss is used for the high frequency output to better reconstruct edges and an L2 loss is used for the low frequency output to better reconstruct average values.
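To make the aggregate loss in the caption concrete, the following is a minimal sketch combining an L2 term on a low-frequency reconstruction with an L1 term on a high-frequency (Laplacian residual) reconstruction. The function and tensor names, shapes, and the unweighted sum are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def scale_mae_style_loss(pred_low, pred_high, target_low, target_high):
    """Illustrative aggregate loss in the spirit of Figure 2 (not the paper's code).

    pred_low / target_low:   low-frequency reconstruction and its ground truth
                             (e.g. a blurred, downsampled view of the input).
    pred_high / target_high: high-frequency (Laplacian residual) reconstruction
                             and its ground truth at a larger output resolution.
    """
    # L2 on the low-frequency branch: favors accurate average values.
    loss_low = F.mse_loss(pred_low, target_low)
    # L1 on the high-frequency branch: favors sharp edges / sparse residuals.
    loss_high = F.l1_loss(pred_high, target_high)
    return loss_low + loss_high

# Toy usage with hypothetical shapes (batch of 2 RGB images).
pred_low, target_low = torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224)
pred_high, target_high = torch.randn(2, 3, 448, 448), torch.randn(2, 3, 448, 448)
loss = scale_mae_style_loss(pred_low, pred_high, target_low, target_high)
```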
data at a specific scale [13, 20, 22, 32, 41]. In this paper we present Scale-MAE, a masked reconstruction model that explicitly learns relationships between data at different, known scales throughout the pretraining process. By leveraging this information, Scale-MAE produces a pretrained model that performs better across a wide range of GSDs and tasks.
Masked Autoencoders [26] offer self-supervised learning without explicit augmentations. A standard Masked Autoencoder resizes/crops an image, masks the majority of the transformed image, and then tasks a Vision Transformer (ViT) based autoencoder with embedding the unmasked components. A decoding ViT then decodes the full image from these learned embeddings; the decoder is later discarded and the encoder is used to produce representations for an unmasked input image.
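As a rough illustration of the masking step described above, the sketch below patchifies an image and keeps a random subset of patches for the encoder. The patch size, mask ratio, and helper names are assumptions for illustration rather than the exact MAE code.

```python
import torch

def patchify(imgs, patch_size=16):
    """Split (B, C, H, W) images into (B, N, patch_size**2 * C) flat patches."""
    B, C, H, W = imgs.shape
    h, w = H // patch_size, W // patch_size
    x = imgs.reshape(B, C, h, patch_size, w, patch_size)
    x = x.permute(0, 2, 4, 3, 5, 1).reshape(B, h * w, patch_size ** 2 * C)
    return x

def random_mask(patches, mask_ratio=0.75):
    """Keep a random subset of patches; return them with the shuffle indices."""
    B, N, D = patches.shape
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                # one random score per patch
    ids_shuffle = noise.argsort(dim=1)      # patches with the lowest scores are kept
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    return visible, ids_shuffle

# Toy usage: a 224px image with 16px patches yields 196 patches, 49 kept.
imgs = torch.randn(2, 3, 224, 224)
visible, ids_shuffle = random_mask(patchify(imgs))   # visible: (2, 49, 768)
```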
Existing MAE-based pretraining approaches fail to generalize across domains with images at multiple scales. Scale-MAE (Figure 1) overcomes this through a GSD-based positional encoding derived from the land area covered in the image. This informs the ViT of both the position and scale of the input image. Scale-MAE also uses a Laplacian-pyramid decoder to encourage the network to learn multiscale representations. The embeddings are decoded to two images containing low and residual high frequency information, respectively (see Figure 2). As we discuss in Section 3, this structure allows the ViT decoder to use fewer parameters than MAE while still producing strong representations across multiple scales.
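One way to picture a GSD-aware positional encoding is to scale the patch position index by the ratio of the image's GSD to a reference GSD before applying a standard sinusoidal encoding, so that coarser imagery maps to proportionally larger encoded positions. The sketch below follows that idea in 1D; the reference GSD value, the 1D layout, and the function name are simplifying assumptions, and the precise formulation used by Scale-MAE is given later in the paper.

```python
import torch

def gsd_positional_encoding(num_positions, dim, gsd, reference_gsd=0.3):
    """Sketch of a scale-aware 1D sinusoidal encoding (illustrative only).

    A standard sinusoidal encoding uses the patch index directly; here the
    index is multiplied by gsd / reference_gsd, so the same patch index maps
    to a larger encoded position when each pixel covers more ground.
    """
    pos = torch.arange(num_positions, dtype=torch.float32) * (gsd / reference_gsd)
    i = torch.arange(0, dim, 2, dtype=torch.float32)
    angles = pos[:, None] / (10000 ** (i[None, :] / dim))   # (num_positions, dim // 2)
    pe = torch.zeros(num_positions, dim)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

# The same 14-patch row gets "wider" positions for a coarser 0.7 m GSD image.
pe_fine = gsd_positional_encoding(14, 64, gsd=0.3)
pe_coarse = gsd_positional_encoding(14, 64, gsd=0.7)
```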
We show that Scale-MAE leads to better performing, more robust multiscale representations than both a standard MAE and the recently proposed state-of-the-art MAEs SatMAE [13] and ConvMAE [21] across remote sensing datasets with a variety of scale and resolution characteristics. To the best of our knowledge, Scale-MAE is the first self-supervised MAE to include a scale-aware positional encoding and Laplacian pyramids. In our experiments, Scale-MAE achieves an average 5.6% nonparametric kNN classification improvement across eight remote sensing datasets compared to the current state of the art, in addition to a 0.9 mIoU to 1.7 mIoU improvement on the SpaceNet building segmentation transfer task for a range of evaluation scales (see Figure 1).
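For context on the kNN evaluation referenced above, a nonparametric kNN probe classifies held-out images by comparing their frozen encoder features against labeled training features. The sketch below is an illustrative version with placeholder features, cosine similarity, and an assumed k; it is not the exact evaluation protocol from the paper.

```python
import torch

def knn_classify(train_feats, train_labels, test_feats, k=20):
    """Nonparametric kNN probe over frozen, pre-extracted features."""
    train_feats = torch.nn.functional.normalize(train_feats, dim=1)
    test_feats = torch.nn.functional.normalize(test_feats, dim=1)
    sims = test_feats @ train_feats.T              # cosine similarity matrix
    topk = sims.topk(k, dim=1).indices             # (num_test, k) nearest neighbors
    neighbor_labels = train_labels[topk]           # labels of those neighbors
    preds = neighbor_labels.mode(dim=1).values     # majority vote per test sample
    return preds

# Toy usage with hypothetical pre-extracted features (100 train, 5 test, 10 classes).
train_feats, train_labels = torch.randn(100, 768), torch.randint(0, 10, (100,))
test_feats = torch.randn(5, 768)
preds = knn_classify(train_feats, train_labels, test_feats, k=20)
```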
2. Related Work
Representation learning and the Masked Autoencoder. Representation learning aims to extract meaningful, intrinsic features from data for downstream use [5]. In practice, this often entails pretraining a deep network so that a lightweight learning routine can then finetune it for a particular downstream task; see [15, 16, 17, 24, 27, 30, 37, 49, 66]. The Masked Autoencoder (MAE) is a recent state-of-the-art self-supervised representation learning method in computer vision that pretrains a ViT encoder by masking an image, feeding the unmasked portion into a transformer-based encoder, and then tasking the decoder with reconstructing the input image [26]. MAEs fail to leverage scale information in scale-dependent domains as they are often reliant on absolute or relative positional encodings. To the best of our knowledge, Scale-MAE is the first MAE-based self-supervised learning method to incorporate a scale-variant positional encoding.
Remote Sensing Representation Learning. Neumann et al. [46] were among the first to exhaustively share results on existing representation learning and semi-supervised learning techniques for remote sensing imagery. Gao et al. [22] demonstrated the effectiveness of MAE pretraining for remote sensing image classification. Ayush et al. [3] leveraged the metadata from remote sensing images by using spatially aligned but temporally separated images as positive pairs for contrastive learning and by predicting latitude and longitude as pretext tasks. Gupta et al. [25] demonstrated the use