2) Implementation Details and Experimental Settings: The training settings are the same as in the previous experiments. The number of training epochs and the batch size are set to 200 and 64, respectively. These experiments are conducted on a single V100 GPU. Following [34], five settings across the three datasets are adopted to comprehensively evaluate the RS pretrained models and make the experiments convincing, including UCM (8:2), AID (2:8), AID (5:5), NWPU-RESISC (1:9), and NWPU-RESISC (2:8). Note that m:n means 10m% of the samples are used for training, while the others form the testing set.
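The m:n convention and the per-category proportional split can be sketched as follows. This is an illustrative helper, not code from the paper; the function name and the random seeding are assumptions.

```python
import random

def ratio_split(samples_per_class, m, n, seed=0):
    """Split each category under an m:n setting, i.e. 10*m percent of
    every class goes to the training set and the rest to the testing set."""
    assert m + n == 10, "the m:n settings in the paper always sum to 10"
    rng = random.Random(seed)
    train, test = [], []
    for label, samples in samples_per_class.items():
        samples = samples[:]                # avoid mutating the caller's list
        rng.shuffle(samples)
        k = round(len(samples) * m / 10)    # 10*m% of this category
        train += [(s, label) for s in samples[:k]]
        test += [(s, label) for s in samples[k:]]
    return train, test

# e.g. AID (2:8): 20% of each category for training, 80% for testing
data = {"airport": list(range(100)), "beach": list(range(50))}
train, test = ratio_split(data, m=2, n=8)
print(len(train), len(test))  # → 30 120
```

Because the split is performed within each category, the class proportions of the whole dataset are preserved in both subsets.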
Similar to the previous section, the images in each category are proportionally divided into two groups used for training and evaluation, respectively. Besides the above three backbones we selected, the ImageNet pretrained ResNet-50 and the ResNet-50 pretrained by SeCo [58], an RS self-supervised method that considers seasonal variation, are also adopted for a fair comparison. When fine-tuning
on each scene recognition task, only the output dimension of the last linear layer is changed to match the number of categories in the target dataset. The overall accuracy (OA), the most commonly used criterion in the aerial scene recognition community, which counts the proportion of correctly classified images relative to all images in the testing set, is utilized in the experiments. The models are repeatedly trained and evaluated five times at each setting, and the average value µ and standard deviation σ of the results across the trials are recorded as µ±σ.
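The evaluation protocol above (OA per trial, then µ±σ over five trials) can be sketched with a few lines of standard-library Python. The helper name is illustrative; whether the paper reports the sample or the population standard deviation is not stated, so the sample version is assumed here.

```python
from statistics import mean, stdev

def overall_accuracy(preds, labels):
    """OA: proportion of correctly classified images in the testing set."""
    assert len(preds) == len(labels)
    correct = sum(p == y for p, y in zip(preds, labels))
    return correct / len(labels)

# five trials of one model at one setting -> reported as "mu ± sigma"
trial_oas = [0.9667, 0.9702, 0.9688, 0.9655, 0.9691]   # hypothetical values
mu, sigma = mean(trial_oas), stdev(trial_oas)          # stdev = sample std
print(f"{100 * mu:.2f} ± {100 * sigma:.2f}")           # → 96.81 ± 0.19
```

This matches the µ±σ entries reported in Table V, where values are given as percentages.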
3) Experimental Results: Quantitative Results and Analyses: Table V presents the results of the above selected backbones pretrained using different methods and other SOTA
methods. Since this research focuses on the pretraining of deep networks, especially vision transformers, we only list the DL-based aerial scene recognition methods. For convenience, “IMP” and “RSP” are used to represent
“ImageNet Pretraining” and “Remote Sensing Pretraining”,
respectively. It can be seen that the methods are split into five
groups. The first group comprises the methods that adopt ResNet-50 as the backbone network, where the ResNet-50 is initialized with the ImageNet pretrained weights. This group can be compared with the third group. The second group includes recent advanced methods whose backbones are popular networks other than ResNet-50, such as the ImageNet pretrained VGG-16, ResNet-101, DenseNet-121,
and so on. Then, the ResNet-50, Swin-T, and ViTAEv2-S
networks, whose pretrained weights are obtained by IMP, RSP,
or SeCo, form the last three groups, respectively. In addition,
it should be noted that besides the network types, the weights
pretrained for different epochs are also considered. The bold fonts in the last three groups indicate the best results in each group, while “*” denotes the best among all models (the same conventions apply in the other tasks).
Many methods have been developed on the foundation of the ImageNet pretrained ResNet-50, as shown in the first group. Among these methods, many flexible and advanced modules have been explored. For example, attention mechanisms (CBAM [35], EAM [71], MBLANet [34]) highlight specific channels or spatial positions of the features, while multiscale designs (F2BRBM [72] and GRMANet [73]) also employ the intermediate features. In addition, self-distillation combined with specially designed loss functions (ESD-MBENet [25]) and multibranch siamese networks (IDCCP [74]) have also been applied. In the second group, more diverse frameworks with various backbones are presented. Besides
TABLE V
RESULTS OF THE SELECTED MODELS AND SOTA METHODS ON THE THREE SCENE RECOGNITION DATASETS UNDER DIFFERENT SETTINGS. THE BOLD FONTS IN THE LAST THREE GROUPS MEAN THE BEST RESULTS, WHILE “*” DENOTES THE BEST AMONG ALL MODELS.
Model Publication UCM (8:2) AID (2:8) AID (5:5) NWPU-RESISC (1:9) NWPU-RESISC (2:8)
CBAM [35] ECCV2018 99.04±0.23 94.66±0.39 96.90±0.04 92.10±0.04 94.26±0.12
EAM (IMP-ResNet-50) [71] GRSL2021 98.98±0.37 93.64±0.25 96.62±0.13 90.87±0.15 93.51±0.12
F2BRBM (IMP-ResNet-50) [72] JSTARS2021 99.58±0.23 96.05±0.31 96.97±0.22 92.74±0.23 94.87±0.15
MBLANet (IMP-ResNet-50) [34] TIP2021 99.64±0.12 95.60±0.17 97.14±0.13 92.32±0.15 94.66±0.11
GRMANet (IMP-ResNet-50) [73] TGRS2021 99.19±0.10 95.43±0.32 97.39±0.24 93.19±0.42 94.72±0.25
IDCCP (IMP-ResNet-50) [74] TGRS2021 99.05±0.20 94.80±0.18 96.95±0.13 91.55±0.16 93.76±0.12
ESD-MBENet-v1 (IMP-ResNet-50) [25] TGRS2021 99.81±0.10 96.00±0.15 98.54±0.17 92.50±0.22 95.58±0.08
ESD-MBENet-v2 (IMP-ResNet-50) [25] TGRS2021 99.86±0.12 95.81±0.24 98.66±0.20 93.03±0.11 95.24±0.23
ARCNet (IMP-VGG-16) [75] TGRS2019 99.12±0.40 88.75±0.40 93.10±0.55 — —
SCCov (IMP-VGG-16) [76] TNNLS2019 99.05±0.25 93.12±0.25 96.10±0.16 89.30±0.35 92.10±0.25
KFBNet (IMP-DenseNet-121) [77] TGRS2020 99.88±0.12 95.50±0.27 97.40±0.10 93.08±0.14 95.11±0.10
GBNet (IMP-VGG-16) [24] TGRS2020 98.57±0.48 92.20±0.23 95.48±0.12 — —
MG-CAP (IMP-VGG-16) [78] TIP2020 99.00±0.10 93.34±0.18 96.12±0.12 90.83±0.12 92.95±0.13
EAM (IMP-ResNet-101) [71] GRSL2021 99.21±0.26 94.26±0.11 97.06±0.19 91.91±0.22 94.29±0.09
IMP-ViT-B [44] ICLR2021 99.28±0.23 93.81±0.21 96.08±0.14 90.96±0.08 93.96±0.17
MSANet (IMP-ResNet-101) [79] JSTARS2021 98.96±0.21 93.53±0.21 96.01±0.43 90.38±0.17 93.52±0.21
CTNet (IMP-MobileNet-V2+IMP-ViT-B) [52] GRSL2021 — 96.25±0.10 97.70±0.11 93.90±0.14 95.40±0.15
LSENet (IMP-VGG-16) [32] TIP2021 99.78±0.18 94.41±0.16 96.36±0.19 92.23±0.14 93.34±0.15
DFAGCN (IMP-VGG-16) [23] TNNLS2021 98.48±0.42 — 94.88±0.22 — 89.29±0.28
MGML-FENet (IMP-DenseNet-121) [26] TNNLS2021 99.86±0.12 96.45±0.18 98.60±0.04 92.91±0.22 95.39±0.08
ESD-MBENet-v1 (IMP-DenseNet-121) [25] TGRS2021 99.86±0.12 96.20±0.15 98.85±0.13* 93.24±0.15 95.50±0.09
ESD-MBENet-v2 (IMP-DenseNet-121) [25] TGRS2021 99.81±0.10 96.39±0.21 98.40±0.23 93.05±0.18 95.36±0.14
IMP-ResNet-50 [12] CVPR2016 98.81±0.23 94.67±0.15 95.74±0.10 90.09±0.13 94.10±0.15
SeCo-ResNet-50 [58] ICCV2021 97.86±0.23 93.47±0.08 95.99±0.13 89.64±0.17 92.91±0.13
RSP-ResNet-50-E40 Ours 99.43±0.24 95.88±0.07 97.29±0.07 92.86±0.09 94.40±0.05
RSP-ResNet-50-E120 Ours 99.52±0.15 96.60±0.04 97.78±0.08 93.76±0.03 94.97±0.07
RSP-ResNet-50-E300 Ours 99.48±0.10 96.81±0.03 97.89±0.08 93.93±0.10 95.02±0.06
IMP-Swin-T [13] ICCV2021 99.62±0.19 96.55±0.03 98.10±0.06 92.73±0.09 94.70±0.10
RSP-Swin-T-E40 Ours 99.24±0.18 95.95±0.06 97.52±0.04 91.22±0.18 93.30±0.08
RSP-Swin-T-E120 Ours 99.52±0.00 96.73±0.07 98.20±0.02 92.02±0.14 93.84±0.07
RSP-Swin-T-E300 Ours 99.52±0.00 96.83±0.08 98.30±0.04 93.02±0.12 94.51±0.05
IMP-ViTAEv2-S [29] arXiv2022 99.71±0.10 96.61±0.07 98.08±0.03 93.90±0.07 95.29±0.12