is 2x2 with stride 2 that outputs a feature map at 2x the input resolution (28 in Figure 2), followed by a LayerNorm and GELU, and then another 2x2 deconvolution layer that outputs a feature map at 2x the previous resolution (56 in Figure 2). See the supplementary material for a full architectural diagram.
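The two-stage deconvolutional upsampling described above can be sketched in PyTorch. This is a minimal sketch, not the paper's implementation: the decoder width of 512 channels and the channel-last LayerNorm placement are assumptions for illustration.

```python
import torch
import torch.nn as nn

class UpsampleBlock(nn.Module):
    """Sketch of progressive 4x upsampling: two 2x2 stride-2 deconvolutions
    with a LayerNorm and GELU between them (14 -> 28 -> 56).
    The 512-channel width is an assumption, not taken from the paper."""

    def __init__(self, dim: int = 512):
        super().__init__()
        # 2x2 deconv with stride 2 doubles the spatial resolution (14 -> 28)
        self.deconv1 = nn.ConvTranspose2d(dim, dim, kernel_size=2, stride=2)
        self.norm = nn.LayerNorm(dim)
        self.act = nn.GELU()
        # second 2x2 stride-2 deconv doubles the resolution again (28 -> 56)
        self.deconv2 = nn.ConvTranspose2d(dim, dim, kernel_size=2, stride=2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.deconv1(x)  # (B, C, 28, 28)
        # LayerNorm normalizes the channel dim, so move it last and back
        x = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = self.act(x)
        return self.deconv2(x)  # (B, C, 56, 56)

feats = torch.randn(1, 512, 14, 14)  # 14x14 latent feature map
out = UpsampleBlock()(feats)
print(out.shape)
```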
Reconstruction After having been upsampled, the lower resolution and higher resolution feature maps are passed into Laplacian Blocks (LBs in Figure 2) that reconstruct high and low resolution images for the high and low frequency reconstruction, respectively. Architecturally, the Laplacian Blocks consist of a sequence of three sub-blocks: a Laplacian Feature Mapping Block, a Laplacian Upsample Block, and a Laplacian Pyramid Reconstruction Block. The Feature Mapping Block projects features within a particular layer of the Laplacian Pyramid back to RGB space. The Laplacian Upsample Block is a learnable upsampling function that maps latent features from one layer of the Laplacian Pyramid to a higher level. Finally, the Laplacian Pyramid Reconstruction Block reconstructs information at the different frequencies in RGB space. Following the super-resolution literature [2], an L1 loss is used for the high frequency output to better reconstruct edges, and an L2 loss is used for the low frequency output to better reconstruct average values. The supplementary material has architectural diagrams for each block.
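The frequency-specific losses can be sketched as follows. The function name and the equal weighting of the two terms are assumptions for illustration; the paper may weight the terms differently.

```python
import torch
import torch.nn.functional as F

def frequency_losses(pred_high, target_high, pred_low, target_low):
    """Sketch of the frequency-specific reconstruction losses:
    L1 on the high-frequency output (sharper edges), L2/MSE on the
    low-frequency output (better average values).
    Equal weighting of the two terms is an assumption."""
    loss_high = F.l1_loss(pred_high, target_high)  # edges, fine detail
    loss_low = F.mse_loss(pred_low, target_low)    # smooth color gradients
    return loss_high + loss_low

# toy usage with tensors shaped like the decoder outputs
ph, th = torch.rand(1, 3, 448, 448), torch.rand(1, 3, 448, 448)
pl, tl = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
loss = frequency_losses(ph, th, pl, tl)
print(float(loss))
```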
4. Experiments
We investigate the quality of representations learned from Scale-MAE pretraining through a set of experiments that explore their robustness to scale as well as their transfer performance to additional tasks. First, we present our main experiments in Section 4.1 and compare with SatMAE [13], a current state-of-the-art MAE for remote sensing imagery, ConvMAE [21], a state-of-the-art multiscale MAE, as well as several other approaches detailed throughout. The exact implementation of Scale-MAE for the main experiments was determined through a set of ablation experiments presented in Section 4.2.

Figure 4. Scale-MAE reconstruction (panels: Input Image, Mask, Low Frequency, High Frequency, Reconstruction). Examples from Functional Map of the World are shown. From left to right, an input image at 224x224 resolution is shown, along with its corresponding mask. Columns 3 and 4 show the low and high frequency outputs produced by the Scale-MAE decoder. The last column is the reconstruction obtained by summing the low and high frequency features together.
We pretrain a ViT-Large model with Scale-MAE using the Functional Map of the World (FMoW) [12] RGB training set, which consists of 363.6k images of varying image resolution and GSD, for 800 epochs. The initial higher resolution image I_hr is taken as a random 448px² crop of the input image, and the input image I is then a 224px² downsample of I_hr. The low frequency groundtruth is obtained by downscaling I_hr to 14px² and then upscaling to 224px², while the high frequency groundtruth is obtained by downscaling I_hr to 56px², upscaling to 448px², and subtracting this image from I_hr.
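The groundtruth construction above can be sketched directly from the stated resolutions. Bilinear interpolation is an assumption here; the paper may use a different resampling kernel.

```python
import torch
import torch.nn.functional as F

def make_frequency_targets(i_hr: torch.Tensor):
    """Sketch of the low/high-frequency targets built from a 448px^2
    crop I_hr, following the resolutions stated in the text.
    Bilinear resampling is an assumption."""
    # low-frequency target: 448 -> 14 -> 224 (keeps only coarse content)
    low = F.interpolate(i_hr, size=(14, 14), mode="bilinear",
                        align_corners=False)
    low = F.interpolate(low, size=(224, 224), mode="bilinear",
                        align_corners=False)
    # high-frequency target: I_hr minus its 56 -> 448 blurred version
    blur = F.interpolate(i_hr, size=(56, 56), mode="bilinear",
                         align_corners=False)
    blur = F.interpolate(blur, size=(448, 448), mode="bilinear",
                         align_corners=False)
    high = i_hr - blur
    return low, high

i_hr = torch.rand(1, 3, 448, 448)
low, high = make_frequency_targets(i_hr)
print(low.shape, high.shape)
```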
Figure 4 shows examples of the masked input, low resolution/frequency, high resolution/frequency, and combined reconstruction of FMoW images during training. The low resolution/frequency images capture color gradients and landscapes, while the residual high resolution/frequency images capture object edges, roads, and building outlines.
4.1. Representation Quality
We evaluate the quality of representations from Scale-MAE by freezing the encoder and performing a non-parametric k-nearest-neighbor (kNN) classification with eight different remote sensing imagery classification datasets
Figure 5. Learning better representations at all scales (kNN accuracy vs. relative GSD across eight panels: RESISC, Optimal-31, MLRSNet, CV-BrCT, WHU-RS19, EuroSAT, AiRound, UC Merced; curves for Scale-MAE, SatMAE, and ConvMAE). Scale-MAE (blue) features perform better than state-of-the-art. We evaluate kNN accuracy on eight datasets with a large variance in GSD. Scale-MAE consistently produces better results at coarser resolutions. In addition to using evaluation datasets at different GSDs, to further test the multiscale representations, we create multiple test sets for each dataset in which we downsample the full resolution validation set to coarser GSDs at fixed percentages: X^{G%}_{val}, G ∈ {12.5, 25, 50, 100}, where EuroSAT does not include the 12.5% setting because its images are at a resolution of 64px, our patch size is 16px, and an input image of 8px is too small.
with different GSDs, none of which were encountered during pretraining. The kNN classifier operates by encoding all train and validation instances, where each embedded instance in the validation set computes the cosine distance
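A cosine-distance kNN evaluation on frozen embeddings of this kind can be sketched as below. The value k=5 and majority voting are assumptions for illustration; the paper's exact k and vote weighting are not specified here.

```python
import numpy as np

def knn_classify(train_emb, train_labels, val_emb, k=5):
    """Sketch of non-parametric kNN classification on frozen embeddings
    using cosine distance. k=5 and unweighted majority voting are
    assumptions, not taken from the paper."""
    # L2-normalize so a dot product equals cosine similarity
    tr = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    va = val_emb / np.linalg.norm(val_emb, axis=1, keepdims=True)
    sim = va @ tr.T                             # (n_val, n_train) similarities
    nn_idx = np.argsort(-sim, axis=1)[:, :k]    # indices of k nearest neighbors
    votes = train_labels[nn_idx]                # (n_val, k) neighbor labels
    # unweighted majority vote per validation instance
    return np.array([np.bincount(v).argmax() for v in votes])

# toy usage: two well-separated classes in embedding space
rng = np.random.default_rng(0)
tr = np.vstack([rng.normal(0, 0.1, (20, 8)) + 1,
                rng.normal(0, 0.1, (20, 8)) - 1])
labels = np.array([0] * 20 + [1] * 20)
va = np.vstack([np.ones((3, 8)), -np.ones((3, 8))])
preds = knn_classify(tr, labels, va)
print(preds)
```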