RESULTS OF VITAEV2-S WITH DIFFERENT SETTINGS OF TRAINING EPOCHS ON THE MILLIONAID VALIDATION SET.

Epoch   Acc@1   Acc@5
  5     94.53   99.41
 10     96.45   99.64
 15     97.38   99.74
 20     98.00   99.81
 40     98.64   99.86
 60     98.87   99.83
 80     98.90   99.85
100     98.97   99.88
TABLE IV
RESULTS OF THE CANDIDATE MODELS FOR THE SUBSEQUENT FINETUNING EXPERIMENTS ON THE MILLIONAID VALIDATION SET.

Backbone     Epoch   Acc@1   Acc@5
ResNet-50     40     97.99   99.81
             120     98.76   99.83
             300     98.99   99.82
Swin-T        40     97.80   99.84
             120     98.63   99.89
             300     98.59   99.88
ViTAEv2-S     40     98.64   99.86
             100     98.97   99.88
is set to 384. The remaining settings are the same as in the
previous experiment. All experiments are conducted on 4
V100 GPUs, and the results are shown in Table III. According
to the results, it can be observed that the model starts to saturate
after about 40 epochs: training for 40 epochs improves top-1
accuracy by only 0.64% over training for 20 epochs, and the next
20 epochs bring a further gain of only 0.23%. Thus, the network
weights trained for 40 epochs are first chosen as the RSP
parameters of ViTAEv2-S to be applied to the subsequent
tasks. Intuitively, a model that achieves good performance on
the large-scale pretraining dataset should also perform well on
the downstream tasks. Therefore, we also use the network
weights trained for 100 epochs in the downstream tasks.
These models are denoted with the suffixes "E40" and "E100",
respectively.
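The saturation argument above can be checked directly against the Table III numbers; a minimal sketch (the `gain` helper is ours, introduced only for illustration):

```python
# Top-1 accuracy of ViTAEv2-S on the MillionAID validation set (Table III),
# keyed by training epochs.
acc = {5: 94.53, 10: 96.45, 15: 97.38, 20: 98.00,
       40: 98.64, 60: 98.87, 80: 98.90, 100: 98.97}

def gain(e1, e2):
    """Top-1 accuracy gain (in percentage points) when extending
    training from e1 to e2 epochs."""
    return round(acc[e2] - acc[e1], 2)

print(gain(20, 40))  # 0.64: 20 -> 40 epochs
print(gain(40, 60))  # 0.23: 40 -> 60 epochs, diminishing returns
```

The per-interval gain shrinks sharply after 40 epochs, which motivates picking the 40-epoch checkpoint as the first set of RSP weights.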
For ResNet-50 and Swin-T, we follow [13] to configure
the training settings, where the networks are trained for 300
epochs. In the experiments, we observe that the top-1 accuracy
of Swin-T-E120 on the validation set is roughly equivalent
to that of ViTAEv2-S-E40. Thus, the weights of Swin-T-
E120 are selected. Similarly, we also choose the final network
weights, Swin-T-E300, for comparison with ViTAEv2-S-E100.
For a fair comparison, the ResNet-50 and Swin-T weights
trained for 40 epochs are also considered,
WANG et al. : EMPIRICAL STUDY OF REMOTE SENSING PRETRAINING 7
since they are trained for the same number of epochs as
ViTAEv2-S-E40.
The final pretraining models are listed in Table IV. It can be
seen that the validation accuracy generally increases with the
number of training epochs. However, Swin-T-E300 does not
perform as well as Swin-T-E120. Nonetheless, we still keep it,
since it may generalize better after seeing more diverse
samples.
IV. FINETUNING ON DOWNSTREAM TASKS
In this section, the pretrained models are further finetuned
on a series of downstream tasks, including scene recognition,
semantic segmentation, and object detection in aerial scenes,
as well as change detection. It should be clarified that the
models for scene recognition in this section are trained and
evaluated on commonly used aerial scene datasets rather than
MillionAID, which is used for RSP.
A. Aerial Scene Recognition
We first introduce the used scene recognition datasets and
the implementation details, then present the experimental
results and analyses.
1) Dataset: Three of the most popular scene recognition
datasets, namely the UC Merced Land Use (UCM) dataset
[69], the Aerial Image Dataset (AID) [70], and the benchmark
for RS image scene classification created by Northwestern
Polytechnical University (NWPU-RESISC) [21], are used to
comprehensively evaluate the impact of RSP and the
representation ability of the adopted backbones.
•UCM: This is the most important dataset for scene
recognition. It contains 2,100 images, each of size
256×256 with a pixel resolution of 0.3 m. The 2,100
images are divided equally among 21 categories, so each
category has 100 images. All samples are manually
extracted from the large images in the USGS National
Map Urban Area Imagery Database, collected from
various urban areas around the country.
•AID: This is a challenging dataset, generated by
collecting images from multi-source sensors on GE.
It has high intra-class diversity, since the images are
carefully chosen from different countries and extracted
at different times and seasons under different imaging
conditions. It contains 10,000 images of size 600×600,
belonging to 30 categories.
•NWPU-RESISC: This dataset is characterized by a large
number of samples. It contains 31,500 images across 45
categories in total, where each category has 700 samples.
Each image has 256×256 pixels. The spatial resolution
varies from 0.2 m to 30 m. Some special landforms,
such as islands, lakes, regular mountains, and snow
mountains, may be captured at lower resolutions.
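All three datasets are commonly distributed with one folder per category, so preparing train/validation file lists for finetuning can be sketched with the standard library alone. The function name, the image extensions, and the 50% ratio below are illustrative assumptions, not the paper's protocol:

```python
import random
from pathlib import Path

def split_scene_dataset(root, train_ratio=0.5, seed=0):
    """Split a folder-per-class scene dataset (e.g. UCM, AID, or
    NWPU-RESISC as commonly distributed) into per-class train/val
    lists of (path, class_name) pairs. A fixed seed keeps the
    split reproducible across runs."""
    rng = random.Random(seed)
    train, val = [], []
    for cls_dir in sorted(Path(root).iterdir()):
        if not cls_dir.is_dir():
            continue
        files = sorted(p for p in cls_dir.iterdir()
                       if p.suffix.lower() in {".jpg", ".png", ".tif"})
        rng.shuffle(files)
        k = int(len(files) * train_ratio)  # per-class split keeps classes balanced
        train += [(p, cls_dir.name) for p in files[:k]]
        val += [(p, cls_dir.name) for p in files[k:]]
    return train, val
```

Splitting within each class folder, rather than over the pooled file list, preserves the balanced class distribution that UCM (100 images per class) and NWPU-RESISC (700 images per class) provide.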