RSP-ViTAEv2-S-E40 (Ours):  99.71±0.10  | 96.72±0.06  | 97.92±0.06 | 94.12±0.07  | 95.35±0.03
RSP-ViTAEv2-S-E100 (Ours): 99.90±0.13* | 96.91±0.06* | 98.22±0.09 | 94.41±0.11* | 95.60±0.06*
the traditional CNN, the recent ViT has also been applied in |
some works. Compared with the IMP-ViT-B, the RSP-Swin- |
T-E300 performs better, although the former model has more |
trainable parameters. It can also be observed that the preferred
backbones have changed over time: the VGG-16 used in the early
years has gradually been replaced by deeper networks such as
ResNet-101 or DenseNet-121 due to their better representation ability.
Among the implemented networks, the SeCo-ResNet-50 performs
the worst of its counterparts, possibly because of the gap
between the Sentinel-2 multispectral images that SeCo was
trained on and the RGB images used for aerial scene
recognition. Compared with the ImageNet
pretrained ResNet-50, our RS pretrained ResNet-50 improves |
the accuracy in all settings. These results imply that RSP
provides a better starting point for the optimization of the
subsequent finetuning process, which we attribute to the aerial
images used for pretraining, as opposed to the natural images
in ImageNet. Similarly, the RSP-Swin-T outperforms IMP-
Swin-T in three settings and achieves comparable results
in the other two. In addition, ResNet-50 and
Swin-T become competitive with other, more
complicated methods simply by using the RSP weights, without
changing the network structures. Besides, when comparing
the ImageNet pretrained ResNet-50 and Swin-T, we can find |
that IMP-Swin-T performs better in all settings, since
vision transformers have stronger context modeling capability.
When initialized with RSP weights, however, the ResNet
becomes more competitive and surpasses IMP-Swin-T in
the AID (2:8), NWPU-RESISC (1:9), and NWPU-RESISC (2:8) settings, again showing the benefit of RSP. Owing to
the excellent representation ability of ViTAEv2-S, which has |
both the locality modeling ability and long-range dependency |
modeling ability, it outperforms both ResNet-50 and Swin-
T in almost all settings, under both IMP and RSP.
Moreover, the RSP-ViTAEv2-S achieves the best performance
among all methods in almost all settings, the only exception
being AID (5:5), where it still performs comparably
to the best method, i.e., RSP-Swin-T-E300.
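The "better starting point" described above comes from initializing the backbone with pretrained weights before finetuning. A minimal PyTorch sketch of that initialization step, using a toy stand-in model rather than the actual RSP checkpoints (the class, layer names, and key filtering here are illustrative assumptions):

```python
import torch
import torch.nn as nn

# Toy classifier standing in for ResNet-50 / Swin-T / ViTAEv2-S (placeholder).
class TinyBackbone(nn.Module):
    def __init__(self, num_classes=45):
        super().__init__()
        self.features = nn.Linear(16, 8)       # stand-in for the backbone body
        self.head = nn.Linear(8, num_classes)  # scene-classification head

    def forward(self, x):
        return self.head(torch.relu(self.features(x)))

# Simulate "pretraining": keep only the backbone weights of one model ...
pretrained = TinyBackbone()
state = {k: v for k, v in pretrained.state_dict().items()
         if k.startswith("features")}

# ... then initialize a fresh model from them before finetuning.
model = TinyBackbone()
missing, unexpected = model.load_state_dict(state, strict=False)
print(sorted(missing))  # -> ['head.bias', 'head.weight']
```

Only the classification head is left randomly initialized and must be learned from scratch during finetuning, which is why a domain-matched pretrained backbone can ease convergence.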
In our experiments, RSP helps the networks obtain better
performance on small datasets, possibly because the
models converge more easily when adopting the RS pre-
trained weights. When training samples are abundant,
as in AID (5:5), the representation ability of
deeper models can instead be fully exploited; for example, the
DenseNet-121-based ESD-MBENeT obtains the best accuracy.
Nevertheless, it should be noted that only the feature output |
from the last layer of RSP-ResNet-50, RSP-Swin-T, or RSP- |
ViTAEv2-S is used for classification, and it is expected that |
their performance can be further improved when employing |
the multilayer intermediate features. In this sense, these RS |
pretrained models can serve as effective backbones for future |
research in the aerial recognition field. Furthermore, Table
V also shows that models pretrained for more epochs
tend to have stronger representation abilities. Since
RSP-ResNet-50-E40 and RSP-Swin-T-E40 fall behind their
counterparts pretrained for more epochs, we only evaluate the “E120”
and “E300” pretrained weights for these two types of networks
in the remaining experiments, while for ViTAEv2-S, both the “E40”
WANG et al. : EMPIRICAL STUDY OF REMOTE SENSING PRETRAINING 9 |
[Fig. 4 rows, top to bottom: Terrace, Mountain, River, Bridge, Church, Airplane, Stadium, Airport, Thermal power station, Sparse residential, Medium residential, Dense residential, School-1, School-2.]
Fig. 4. Response maps of the evaluated models on different scenes. (a) |
Original image. (b) IMP-ResNet-50. (c) SeCo-ResNet-50. (d) RSP-ResNet-50. |
(e) IMP-Swin-T. (f) RSP-Swin-T. (g) IMP-ViTAEv2-S. (h) RSP-ViTAEv2-S. |
and “E100” weights are still used. |
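As noted above, only the last-layer feature is used for classification; a common way to also expose the multilayer intermediate features is to attach forward hooks. A minimal PyTorch sketch with a toy stand-in network (the layer sizes and stage indices are illustrative assumptions, not one of the actual pretrained backbones):

```python
import torch
import torch.nn as nn

# Placeholder 3-stage network standing in for a pretrained backbone.
backbone = nn.Sequential(
    nn.Linear(16, 32), nn.ReLU(),
    nn.Linear(32, 64), nn.ReLU(),
    nn.Linear(64, 128),
)

features = {}

def save_output(name):
    def hook(module, inputs, output):
        features[name] = output.detach()  # record this stage's feature
    return hook

# Register a forward hook after each "stage" to capture intermediate features.
for idx in (0, 2, 4):
    backbone[idx].register_forward_hook(save_output(f"stage{idx}"))

x = torch.randn(4, 16)
last = backbone(x)

# Multi-layer representation: concatenate intermediate and final features.
multi = torch.cat([features["stage0"], features["stage2"], last], dim=1)
print(multi.shape)  # -> torch.Size([4, 224]), i.e. 32 + 64 + 128 channels
```

A downstream head fed with `multi` would then see features at several semantic levels rather than only the final one.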
Qualitative Results and Analyses: Figure 4 shows the |
response maps of the above evaluated models using Grad- |
CAM++ [80] on images from various scenes. The warmer the |
color is, the higher the response is. To better show the impact |
of RSP, we use the pretrained weights of “E300” for ResNet- |
50 and Swin-T, and the weights of “E100” for ViTAEv2-S. |
The first three rows are natural landscapes, the scenes
in rows 4-8 mainly contain specific foreground objects, and
the last six rows present scenes with different artificial
constructions. For example, the “Thermal power station” scene
includes not only chimneys but also cooling towers. |
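The Grad-CAM++ weighting [80] behind these response maps can be written down compactly. Below is a minimal NumPy sketch of that weighting, operating on random stand-in activation maps and gradients rather than real network outputs; squaring and cubing the first-order gradients for the higher-order terms follows the usual implementation shortcut:

```python
import numpy as np

def grad_cam_pp(acts, g1, g2, g3, eps=1e-8):
    """Grad-CAM++ map from activations acts (K, H, W) and the first-,
    second-, and third-order gradients of the class score w.r.t. acts."""
    # alpha_{k,ij} = g2 / (2 * g2 + sum_ab(acts_k) * g3)
    denom = 2.0 * g2 + acts.sum(axis=(1, 2), keepdims=True) * g3
    alpha = g2 / (denom + eps)
    # Channel weights: spatial sum of alpha * ReLU(first-order gradients).
    weights = (alpha * np.maximum(g1, 0.0)).sum(axis=(1, 2))
    # Response map: ReLU of the weighted sum of activation maps.
    return np.maximum((weights[:, None, None] * acts).sum(axis=0), 0.0)

rng = np.random.default_rng(0)
acts = rng.random((8, 7, 7))          # stand-in activation maps, K=8, 7x7
g1 = rng.standard_normal((8, 7, 7))   # stand-in first-order gradients
cam = grad_cam_pp(acts, g1, g2=g1**2, g3=g1**3)
print(cam.shape)  # -> (7, 7)
```

The resulting non-negative map is what gets upsampled and color-coded into the warm/cool responses shown in Fig. 4.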
Corresponding to the quantitative results in Table V, the |
response maps of SeCo-ResNet-50 are scattered and cannot
precisely capture the semantically relevant areas, especially
in the natural landscapes or complex scenes with artificial