epoch 33. We use one of the SOTA OBB detection frameworks, ORCN [126], to evaluate the performance of different pretrained backbones. We adopt the default hyper-parameters of ORCN as implemented in OBBDetection². Following [126], the DOTA images are sampled and cropped into 1,024×1,024 patches with a stride of 824, while the HRSC2016 images are rescaled, keeping the aspect ratio, so that the shorter side equals 800 and the longer side is at most 1,333. Data augmentation during training includes random horizontal and vertical flipping. For convenience, the original training and validation sets are merged for training,
²https://github.com/jbwang1997/OBBDetection
WANG et al.: EMPIRICAL STUDY OF REMOTE SENSING PRETRAINING 13
TABLE VIII
RESULTS OF THE ORCN DETECTION MODEL WITH DIFFERENT BACKBONES AND SOTA METHODS ON THE TESTING SET OF THE DOTA DATASET. †: THE RESULT IS FROM AERIAL DETECTION [110]. ‡: THE RESULT IS FROM THE ORIGINAL ORCN PAPER.
Method Backbone mAP AP per category¹
Ship ST BD TC BC GTF Bridge LV SV HC SP RA SBF Plane Harbor
One-stage
RetinaNet [111]† IMP-ResNet-50-FPN 68.43 79.11 74.32 77.62 90.29 82.18 58.17 41.81 71.64 74.58 60.64 69.67 60.60 54.75 88.67 62.57
DAL [112] IMP-ResNet-50-FPN 71.44 79.74 78.45 76.55 90.84 79.54 66.80 45.08 76.76 67.00 60.11 73.14 62.27 57.71 88.68 69.05
RSDet [113] IMP-ResNet-101-FPN 72.20 70.20 83.40 82.90 90.50 85.60 65.20 48.60 70.10 69.50 68.00* 67.20 63.90 62.50 89.80 65.60
R3Det [114] IMP-ResNet-101-FPN 71.69 77.54 83.54 81.99 90.80 81.39 62.52 48.46 74.29 70.48 60.05 67.46 59.82 61.97 89.54 65.44
R3Det [114] IMP-ResNet-152-FPN 73.74 78.21 84.23 81.17 90.81 85.26 66.10 50.53 78.66 70.92 67.17 69.83 63.77 61.81 89.49 68.16
S2ANet [115] IMP-ResNet-50-FPN 74.12 87.25 85.64 82.84 90.83 84.90 71.11 48.37 78.39 78.11 57.94 69.13 62.60 60.36 89.11 65.26
S2ANet [115] IMP-ResNet-101-FPN 76.11 88.04 86.22 81.41 90.69 84.75 69.75 54.28 80.54 78.04 58.86 73.37* 65.81 65.03 88.70 76.16
Two-stage
ICN [116] IMP-ResNet-101-FPN 68.16 69.98 78.20 74.30 90.76 79.06 70.32 47.70 67.82 64.89 50.23 64.17 62.90 53.64 81.36 67.02
Faster R-CNN [117]† IMP-ResNet-50-FPN 69.05 77.11 83.90 73.06 90.84 78.94 59.09 44.86 71.49 73.25 56.18 64.91 62.95 48.59 88.44 62.18
CAD-Net [118] IMP-ResNet-101-FPN 69.90 76.60 73.30 82.40 90.90* 79.20 73.50 49.40 63.50 71.10 62.20 67.00 60.90 48.40 87.80 62.00
ROI Transformer [119] IMP-ResNet-101-FPN 69.56 83.59 81.46 78.52 90.74 77.27 75.92 43.44 73.68 68.81 47.67 58.93 53.54 58.39 88.64 62.83
SCRDet [120] IMP-ResNet-101-FPN 72.61 72.41 86.86* 80.65 90.85 87.94 68.36 52.09 60.32 68.36 65.21 68.24 66.68 65.02 89.98 66.25
ROI Transformer† [119] IMP-ResNet-50-FPN 74.61 86.87 82.51 82.60 90.71 83.83 70.87 52.53 76.67 77.93 61.03 68.75 67.61 53.95 88.65 74.67
Gliding Vertex [121] IMP-ResNet-101-FPN 75.02 86.82 86.81 85.00 90.74 79.02 77.34* 52.26 73.14 73.01 57.32 70.86 70.91* 59.55 89.64 72.94
FAOD [122] IMP-ResNet-101-FPN 73.28 79.56 84.68 79.58 90.83 83.40 76.41 45.49 68.27 73.18 64.86 69.69 65.42 53.40 90.21* 74.17
CenterMap-Net [123] IMP-ResNet-50-FPN 71.74 78.10 83.61 81.24 88.83 77.80 60.65 53.15 66.55 78.62 58.70 72.36 66.19 49.36 88.88 72.10
FR-Est [124] IMP-ResNet-101-FPN 74.20 86.44 83.56 81.17 90.82 84.13 70.19 50.44 77.98 73.52 60.55 66.72 66.59 60.64 89.63 70.59
Mask OBB [125] IMP-ResNet-50-FPN 74.86 85.57 85.05 85.09* 90.37 82.08 72.90 51.85 73.23 75.28 66.33 69.87 68.39 55.73 89.61 71.61
ORCN‡ [126] IMP-ResNet-50-FPN 75.87 88.20* 84.68 82.12 90.90* 87.50 70.86 54.78 83.00 78.93 52.28 68.84 67.69 63.97 89.46 74.94
ORCN‡ [126] IMP-ResNet-101-FPN 76.28 87.52 85.33 83.48 90.90* 85.56 76.92 55.27 82.10 74.27 57.28 70.15 66.82 65.51* 88.86 74.36
ORCN IMP-ResNet-50-FPN 76.14 88.16 84.91 81.35 90.90* 87.43 71.35 54.86 83.03 79.04 58.14 69.05 66.67 63.39 89.58 74.19
ORCN SeCo-ResNet-50-FPN 70.02 86.33 81.31 73.32 90.88 79.46 67.07 49.94 76.48 76.15 49.71 65.32 58.55 41.31 88.64 65.90
ORCN RSP-ResNet-50-FPN 76.50 88.17 85.72 81.88 90.84 86.17 70.91 54.39 83.01 78.67 62.22 72.21 67.45 62.22 89.78 73.99
ORCN IMP-Swin-T-FPN 76.07 88.02 84.92 82.23 90.90* 87.42 74.37 52.25 83.55 77.99 63.07 69.30 65.99 57.70 89.48 73.88
ORCN RSP-Swin-T-FPN 76.12 87.83 84.84 79.74 90.86 85.90 74.50 52.91 84.02 78.96 57.36 70.61 67.33 62.90 89.54 74.45
ORCN IMP-ViTAEv2-S-FPN 77.38 88.14 86.35 83.50 90.90* 87.51 75.38 53.42 85.15* 79.99* 66.03 66.12 70.91* 61.02 89.27 76.95*
ORCN RSP-ViTAEv2-S-FPN 77.72* 88.04 85.58 83.04 90.90* 88.17* 75.16 55.85* 84.34 79.95 67.89 67.15 70.60 62.64 89.66 76.77
¹ST: storage tank. BD: baseball diamond. TC: tennis court. BC: basketball court. GTF: ground track field. LV: large vehicle. SV: small vehicle. HC: helicopter. SP: swimming pool. RA: roundabout. SBF: soccer ball field. *: best result in the column.
TABLE IX
RESULTS OF THE ORCN DETECTION MODEL WITH DIFFERENT BACKBONES AND SOTA METHODS ON THE TESTING SET OF THE HRSC2016 DATASET. †: THE RESULT IS FROM THE ORIGINAL ORCN PAPER.
Method Backbone mAP
R2PN [127] IMP-VGG-16 79.6
RRD [128] IMP-VGG-16 84.3
FoRDet [129] IMP-VGG-16 89.9
R2CNN [130] IMP-ResNet-101 73.1
Rotated RPN [131] IMP-ResNet-101 79.1
ROI Transformer [119] IMP-ResNet-101-FPN 86.2
Gliding Vertex [121] IMP-ResNet-101-FPN 88.2
GRS-Det [132] IMP-ResNet-50-FPN 88.9
GRS-Det [132] IMP-ResNet-101-FPN 89.6
R3Det [114] IMP-ResNet-101-FPN 89.3
DAL [112] IMP-ResNet-101-FPN 89.8
AproNet [115] IMP-ResNet-101-FPN 90.0
S2ANet [115] IMP-ResNet-101-FPN 90.2
ORCN† [126] IMP-ResNet-50-FPN 90.4
CHPDet [133] Hourglass104 90.6*
ORCN IMP-ResNet-50-FPN 90.4
ORCN SeCo-ResNet-50-FPN 88.9
ORCN RSP-ResNet-50-FPN 90.3
ORCN IMP-Swin-T-FPN 89.7
ORCN RSP-Swin-T-FPN 90.0
ORCN IMP-ViTAEv2-S-FPN 90.4
ORCN RSP-ViTAEv2-S-FPN 90.4
while the original testing sets of DOTA and HRSC2016 are used for evaluation, respectively. We report the mean average precision (mAP) over all categories and the average precision (AP) of each class on the corresponding testing set. All models are trained on a single V100 GPU.
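The cropping and rescaling rules above can be sketched in a few lines. This is a minimal illustration under our own assumptions; the function names and border handling are ours and are not taken from the actual OBBDetection implementation:

```python
def crop_patches(width, height, patch=1024, stride=824):
    """Enumerate top-left corners of overlapping patches covering an image.

    DOTA images are cropped into 1024x1024 patches with a stride of 824,
    so neighbouring patches overlap by 200 pixels.
    """
    xs = list(range(0, max(width - patch, 0) + 1, stride))
    ys = list(range(0, max(height - patch, 0) + 1, stride))
    # Add an extra patch so the right/bottom borders are fully covered.
    if xs[-1] + patch < width:
        xs.append(width - patch)
    if ys[-1] + patch < height:
        ys.append(height - patch)
    return [(x, y) for y in ys for x in xs]

def rescale_keep_ratio(width, height, short=800, long_cap=1333):
    """HRSC2016-style resize: shorter side -> 800, longer side <= 1333."""
    scale = short / min(width, height)
    if round(max(width, height) * scale) > long_cap:
        # The longer side would exceed the cap; rescale to hit the cap instead.
        scale = long_cap / max(width, height)
    return round(width * scale), round(height * scale)
```

For example, a 2048×2048 DOTA image yields a 3×3 grid of patches (corners at 0, 824, and 1024), while a 1200×600 HRSC2016 image is capped by the longer side and resized to 1333×666.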
3) Experimental Results: Quantitative Results and Analyses: Tables VIII and IX show the results of the OBB detection experiments. On the challenging DOTA dataset, it can be seen that, with the advanced ORCN framework, the models whose backbone is either ResNet-50 or Swin-T perform well, although the mAPs of the Swin-T models are slightly lower than those of the ResNet models. ViTAEv2-S, a vision transformer that introduces the inductive biases of CNNs, including locality and scale invariance, achieves remarkable performance, improving the ORCN baseline by nearly 2% mAP. Another notable point is that the RSP weights outperform their ImageNet-pretrained counterparts on all three backbones. These results support our previous claim that the granularity of the representation required for the detection task is closer to that of the scene recognition task than to that of the segmentation task. Thus, the performance difference between RSP and IMP in the detection experiments aligns with the results of the scene recognition experiments.
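As a sanity check on the reported metric, the mAP column is the unweighted mean of the fifteen per-category APs. For instance, averaging the RSP-ViTAEv2-S row of Table VIII reproduces its mAP entry (a small illustrative script of ours, not part of the evaluation toolkit):

```python
# Per-category APs of ORCN with the RSP-ViTAEv2-S-FPN backbone, Table VIII
# (Ship, ST, BD, TC, BC, GTF, Bridge, LV, SV, HC, SP, RA, SBF, Plane, Harbor).
aps = [88.04, 85.58, 83.04, 90.90, 88.17, 75.16, 55.85, 84.34,
       79.95, 67.89, 67.15, 70.60, 62.64, 89.66, 76.77]

# mAP is simply the unweighted mean over the categories.
map_score = sum(aps) / len(aps)
print(f"{map_score:.2f}")  # prints 77.72, matching the mAP column
```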