Figure 7. (top) The Laplacian Block (LB) is a fully convolutional architecture consisting of a chain of Feature Mapping Blocks followed by one final Reconstruction Block. (bottom) The Upsampling Block (UB) consists of a series of transpose convolution layers separated by LayerNorm and GELU activations.
B.2. Upsampling Block |
Upsampling Blocks are used to upsample the feature map to a higher resolution. Each block consists of a series of 2x2 transpose convolution layers with LayerNorm and GELU activations between them. The number of transpose convolution layers is a function of the input and output resolutions: the feature map is repeatedly upsampled by a factor of 2 until it reaches the desired target resolution. Figure 7 illustrates the architecture of these two blocks.
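The progressive 2x upsampling described above can be sketched in PyTorch as follows. The class name, the channel-last LayerNorm wrapper, and the placement of the norm/activation strictly between transpose convolutions are illustrative assumptions; the text specifies only the overall pattern.

```python
import math
import torch
import torch.nn as nn

class LayerNorm2d(nn.Module):
    """Applies LayerNorm over the channel dimension of an NCHW feature map."""
    def __init__(self, channels):
        super().__init__()
        self.norm = nn.LayerNorm(channels)

    def forward(self, x):
        x = x.permute(0, 2, 3, 1)   # NCHW -> NHWC
        x = self.norm(x)            # normalize over channels
        return x.permute(0, 3, 1, 2)

class UpsamplingBlock(nn.Module):
    """Repeatedly doubles spatial resolution with 2x2 transpose convolutions
    until the target resolution is reached, with LayerNorm + GELU between them."""
    def __init__(self, channels, in_res, out_res):
        super().__init__()
        n_steps = int(math.log2(out_res // in_res))  # number of 2x doublings
        layers = []
        for i in range(n_steps):
            layers.append(nn.ConvTranspose2d(channels, channels,
                                             kernel_size=2, stride=2))
            if i < n_steps - 1:  # norm + activation between conv layers
                layers.append(LayerNorm2d(channels))
                layers.append(nn.GELU())
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

ub = UpsamplingBlock(channels=512, in_res=14, out_res=56)
out = ub(torch.randn(1, 512, 14, 14))
print(tuple(out.shape))  # (1, 512, 56, 56)
```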
C. Evaluation Details |
As discussed in the main experimental section, we investigated the quality of representations learned from Scale-MAE |
pretraining through a set of experiments that explore their robustness to scale as well as their transfer performance to additional |
tasks. We provide more information and details on these evaluations here. In order to compare with SatMAE [13] and |
ConvMAE [21], for our main experiments, we pretrained Scale-MAE with a ViT-Large model using the Functional Map of |
the World (FMoW) RGB training set, which consists of 363.6k images of varying image resolution and GSD. The initial |
higher-resolution image I_hr is taken as a random 448x448 px crop of the input image, and the input image I is then a 224x224 px downsample of I_hr. The low-frequency ground truth is obtained by downscaling I_hr to 14x14 px and then upscaling to 224x224 px, while the high-frequency ground truth is obtained by downscaling I_hr to 56x56 px, upscaling to 448x448 px, and subtracting this image from I_hr. This is a common band-pass filtering method used in several super-resolution works: a high-to-low-to-high resolution interpolation keeps only low-frequency content, and the high-frequency content is then obtained by subtracting the low-frequency image.
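The down/up interpolation scheme above can be sketched as follows. The function name and the bilinear interpolation mode are assumptions (the text does not specify the interpolation kernel); only the resolutions and the subtraction come from the description.

```python
import torch
import torch.nn.functional as F

def bandpass_targets(i_hr):
    """Builds low- and high-frequency reconstruction targets from a
    448x448 crop via high-to-low-to-high interpolation. Bilinear mode
    is an assumption; the resolutions follow the text."""
    # Low-frequency target: 448 -> 14 -> 224 keeps only coarse content.
    low = F.interpolate(
        F.interpolate(i_hr, size=14, mode="bilinear", align_corners=False),
        size=224, mode="bilinear", align_corners=False)
    # High-frequency target: subtract a 448 -> 56 -> 448 blurred copy.
    blurred = F.interpolate(
        F.interpolate(i_hr, size=56, mode="bilinear", align_corners=False),
        size=448, mode="bilinear", align_corners=False)
    high = i_hr - blurred
    return low, high

i_hr = torch.randn(1, 3, 448, 448)
low, high = bandpass_targets(i_hr)
print(tuple(low.shape), tuple(high.shape))  # (1, 3, 224, 224) (1, 3, 448, 448)
```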
As further discussed in the main experimental section, we evaluate the quality of representations from Scale-MAE by |
freezing the encoder and performing a nonparametric k-nearest-neighbor (kNN) classification with eight different remote |
sensing imagery classification datasets with different GSDs, none of which were encountered during pretraining. All kNN |
evaluations were conducted on 4 GPUs. Results are in Table 11. The kNN classifier encodes all training and validation instances; each embedded validation instance computes the cosine distance to every embedded training instance and is classified correctly if the majority of its k nearest neighbors share the validation instance's class. The justification for a kNN classifier evaluation is that a strong pretrained network will output semantically grouped representations for unseen data of the same class. This evaluation of representation quality is also used
Figure 8. Visualization of Segmentation Results on SpaceNet. The left, center, and right columns are ground truth labels, Scale-MAE, and
vanilla MAE, respectively. The top row shows a 0.3m GSD image and the bottom row shows a 3.0m GSD image. As shown in the figure, |
Scale-MAE performs better at both higher and lower GSDs. |
in other notable works [7, 9, 57]. |
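The kNN protocol described above can be sketched in NumPy as follows. The function name and the choice k=5 are illustrative; the cosine distance and majority vote follow the description.

```python
import numpy as np

def knn_classify(train_emb, train_labels, val_emb, k=5):
    """Classifies each validation embedding by the majority label of its
    k nearest training embeddings under cosine distance. Name and default
    k are illustrative, not from the paper."""
    # L2-normalize so a dot product equals cosine similarity.
    train = train_emb / np.linalg.norm(train_emb, axis=1, keepdims=True)
    val = val_emb / np.linalg.norm(val_emb, axis=1, keepdims=True)
    sim = val @ train.T                       # (n_val, n_train) similarities
    nn_idx = np.argsort(-sim, axis=1)[:, :k]  # k most similar per row
    preds = []
    for row in train_labels[nn_idx]:
        labels, counts = np.unique(row, return_counts=True)
        preds.append(labels[np.argmax(counts)])  # majority vote
    return np.array(preds)

# Toy check with two well-separated clusters.
rng = np.random.default_rng(0)
train = np.vstack([rng.normal(0, 0.1, (20, 8)) + 1,
                   rng.normal(0, 0.1, (20, 8)) - 1])
labels = np.array([0] * 20 + [1] * 20)
val = np.vstack([np.ones((5, 8)), -np.ones((5, 8))])
preds = knn_classify(train, labels, val)
print(preds)  # [0 0 0 0 0 1 1 1 1 1]
```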
D. Visualization of SpaceNet Segmentation |
Figure 8 shows an additional set of segmentation examples comparing Scale-MAE and vanilla MAE, both pretrained on FMoW and finetuned on SpaceNet v1. The left, center, and right columns show ground truth labels, Scale-MAE, and vanilla MAE, respectively.
The top row shows a 0.3m GSD image and the bottom row shows a 3.0m GSD image. As shown in the figure, Scale-MAE |
performs better at both higher and lower GSDs. |
E. Glossary |
E.1. Ground sample distance |
Ground sample distance (GSD) is the distance from the center of one pixel to the center of an adjacent pixel in a remote sensing image. GSD is a function of sensor parameters (such as its dimensions and focal length), image parameters (the target dimensions of the formed image), and the geometry of the sensor with respect to the object being imaged on the Earth. Remote sensing platforms frequently carry multiple sensors to capture different wavelengths of light. Each of these sensors has different parameters, resulting in different GSDs for images of the same area. Additionally, the ground is not a uniform surface; changes in elevation are common across the swath of the sensor. In total, a remote sensing platform has a sense of absolute scale that varies along two dimensions: (1) spectrally, depending on the sensor used to capture light, and (2) spatially, depending on surface elevation.
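The dependence of GSD on sensor parameters can be illustrated with the standard nadir-view approximation, in which the ground distance covered by one pixel scales with pixel pitch and altitude and inversely with focal length. This simplified formula and the example numbers are illustrative, not from the text, and ignore off-nadir viewing angles and terrain relief, which the text notes also affect scale.

```python
def ground_sample_distance(pixel_pitch_m, focal_length_m, altitude_m):
    """Simplified nadir-view GSD in meters per pixel.
    Assumes a flat scene directly below the sensor."""
    return pixel_pitch_m * altitude_m / focal_length_m

# e.g. a 5 um pixel pitch, 0.6 m focal length, 500 km orbital altitude:
gsd = ground_sample_distance(5e-6, 0.6, 500e3)
print(round(gsd, 3))  # 4.167 m per pixel
```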
Domain Adaptation for the Classification of Remote Sensing Data
An overview of recent advances
Devis Tuia, Claudio Persello, and Lorenzo Bruzzone
Advances in Machine Learning for Remote Sensing and Geosciences
IEEE Geoscience and Remote Sensing Magazine, June 2016. 0274-6638/16 © 2016 IEEE.
The success of the supervised classification of remotely sensed images acquired over large geographical areas or at short time intervals strongly depends on the representativity of the samples used to train the classification algorithm and to define the model. When training samples are collected from an image or a spatial region that is different from the one used for mapping, spectral shifts between the two distributions are likely to make the model fail. Such shifts are generally due to differences in acquisition and atmospheric conditions or to changes in the nature of the object observed. To design classification methods that are robust to data set shifts, recent remote sensing literature has considered solutions based on domain adaptation (DA) approaches. Inspired by the machine-learning literature, several DA methods have been proposed to solve specific problems in remote sensing data classification. This article provides a critical review of the recent advances in DA approaches for remote sensing and presents an overview of DA methods divided into four categories: 1) invariant feature selection, 2) representation matching, 3) adaptation of classifiers, and 4) selective sampling. We provide an overview of recent
Digital Object Identifier 10.1109/MGRS.2016.2548504
Date of publication: 13 June 2016 |
methodologies, examples of applications of the considered techniques to real remote sensing images characterized by very high spatial and spectral resolution, as well as possible guidelines for the selection of the method to use in real application scenarios.
Remote Sensing Facing New Opportunities
With the advent of the new generation of satellite missions, which are often made up of constellations of satellites with short revisit times and very high-resolution sensors, the amount of remote sensing images available has increased significantly. Nowadays, the monitoring of dynamic processes has become possible [1], [2], and biophysical parameter estimation and classification problems can be addressed with the use of several data sources [3]–[6]. As a consequence, analysts have the opportunity to use multitemporal and multisource images for tasks such as repetitive monitoring of the territory, change de-