optimizer with a learning rate of 6e-5 and a weight decay
of 0.01. These models are trained for 200 epochs with a
batch size of 8, while the learning rate is linearly decayed
until the end of training. Following [140], the LEVIR dataset
is cropped into non-overlapping patches of size 256×256.
Thus, the training, validation, and testing sets contain 7,120,
1,024, and 2,048 patches, respectively. The final performance of different
models is evaluated on the testing set, while the results on the
validation set are only used to select the best model during
training. We use the F1 score as the evaluation metric, and the
experiments are conducted on a single V100 GPU.
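The patch-cropping and learning-rate schedule above can be sketched as follows. This is a minimal illustration, assuming the LEVIR images are 1024×1024 (so each image yields 16 non-overlapping 256×256 patches); the function names are illustrative and not taken from the paper's code.

```python
def patch_grid(height, width, patch=256):
    """Top-left corners of the non-overlapping patch x patch crops."""
    return [(r, c)
            for r in range(0, height - patch + 1, patch)
            for c in range(0, width - patch + 1, patch)]

def linear_decay_lr(step, total_steps, base_lr=6e-5):
    """Learning rate linearly decayed from base_lr to 0 over training."""
    return base_lr * (1.0 - step / total_steps)

# A 1024x1024 image gives 16 patches, so 445 training images would
# produce the 7,120 training patches quoted above.
corners = patch_grid(1024, 1024)
print(len(corners))              # 16
print(445 * len(corners))        # 7120
print(linear_decay_lr(50, 100))  # 3e-05, halfway through training
```

Under this assumption, the 7,120/1,024/2,048 split sizes correspond to 445/64/128 source images cropped at a factor of 16.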
3) Experimental Results: Quantitative Results and Analyses:
The quantitative results are summarized in Table X.
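Since the comparison rests on the F1 score over changed pixels, a minimal sketch of that metric may help; the counts below are hypothetical and only illustrate the formula F1 = 2PR/(P+R).

```python
def f1_score(tp, fp, fn):
    """F1 over the changed class, from true-positive, false-positive,
    and false-negative pixel counts of a binary change map."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Hypothetical counts: 90 changed pixels found correctly, 10 false
# alarms, 20 missed changes -> P = 0.9, R = 9/11, F1 = 6/7.
print(f1_score(90, 10, 20))  # 0.857142...
```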
Unsurprisingly, the self-supervised SeCo pretrained weights
perform well on this task, e.g., the SeCo-ResNet-50 based
BIT performs better than its IMP counterpart. Although the
SeCo weights are trained to achieve seasonal invariance, the
change features can still be encoded via the multi-head sub-space
embedding [58]. Nevertheless, ViTAEv2-S pretrained with either
IMP or RSP performs better than SeCo-ResNet-50, showing
the benefit of using an advanced backbone.
Compared with other methods, ViTAEv2-S undoubtedly
achieves the best performance, showing the potential
of applying advanced vision transformers to the RS
field. As before, we analyze the performance difference
between RSP and IMP from the perspective of the task
WANG et al. : EMPIRICAL STUDY OF REMOTE SENSING PRETRAINING 15
Fig. 9. Visualization of the change detection maps. The first and second rows show the change detection results of a sample image from the CDD and
LEVIR datasets, respectively. Here, (a)(k) and (b)(l) are the first and second temporal images of the same regions, and (c)(m) are the corresponding
change annotations. (d)(n) are generated by the IMP-ResNet-50 based BIT, while (e)(o), (f)(p), (g)(q), (h)(r), (i)(s), and (j)(t) are respectively the results
from the SeCo-ResNet-50, RSP-ResNet-50, IMP-Swin-T, RSP-Swin-T, IMP-ViTAE-S-Stage-Win, and RSP-ViTAE-S-Stage-Win backbones.
We hope this study can drive future work in the aerial image field
using vision transformers based on remote sensing pretraining.
ACKNOWLEDGEMENT
The authors would like to thank PhD candidate Yang Long
and Prof. Gui-Song Xia for providing the MillionAID dataset,
and PhD candidate Qiming Zhang for offering the ViTAE
series models. This work was done by Di Wang as a research
intern at JD Explore Academy.
REFERENCES
[1] X. Zhang, Y. Sun, K. Shang, L. Zhang, and S. Wang, “Crop classification based on feature band set construction and object-oriented approach using hyperspectral images,” IEEE J. Sel. Topics Appl. Earth Observ. Remote Sens., vol. 9, no. 9, pp. 4117–4128, Sep. 2016.
[2] X. Yang and Y. Yu, “Estimating soil salinity under various moisture conditions: An experimental study,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 5, pp. 2525–2533, May 2017.
[3] M. J. Swain and D. H. Ballard, “Color indexing,” Int. J. Comput. Vis., vol. 7, no. 1, pp. 11–32, 1991. [Online]. Available: https://doi.org/10.1007/BF00130487
[4] R. M. Haralick, K. Shanmugam, and I. Dinstein, “Textural features for image classification,” IEEE Trans. Syst. Man Cybern., vol. SMC-3, no. 6, pp. 610–621, 1973.
[5] A. Oliva and A. Torralba, “Modeling the shape of the scene: A holistic representation of the spatial envelope,” Int. J. Comput. Vis., vol. 42, no. 3, pp. 145–175, 2001. [Online]. Available: https://doi.org/10.1023/A:1011139631724
[6] A. Avramović and V. Risojević, “Block-based semantic classification of high-resolution multispectral aerial images,” Signal, Image Video Process., vol. 10, no. 1, pp. 75–84, 2016.
[7] O. A. Penatti, K. Nogueira, and J. A. Dos Santos, “Do deep features generalize from everyday objects to remote sensing and aerial scenes domains?” in CVPRW, 2015, pp. 44–51.
[8] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vis., vol. 60, no. 2, pp. 91–110, 2004.
[9] N. Dalal and B. Triggs, “Histograms of oriented gradients for human detection,” in CVPR, vol. 1. IEEE, 2005, pp. 886–893.
[10] Y. Bengio, A. Courville, and P. Vincent, “Representation learning: A review and new perspectives,” IEEE Trans. Pattern Anal. Mach. Intell., vol. 35, no. 8, pp. 1798–1828, Aug. 2013.
[11] T. Hofmann, “Unsupervised learning by probabilistic latent semantic analysis,” Mach. Learn., vol. 42, no. 1, pp. 177–196, 2001.
[12] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, “Object retrieval with large vocabularies and fast spatial matching,” in CVPR. IEEE, 2007, pp. 1–8.
[13] Q. Zhang, J. Zhang, W. Liu, and D. Tao, “Category anchor-guided unsupervised domain adaptation for semantic segmentation,” in NeurIPS, vol. 32. Curran Associates, Inc., 2019.
[14] K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image