UperNet SeCo-ResNet-50 57.2 63.9 71.7 66.9 69.9 54.5 45.9 38.9 58.2 44.8 33.2 9.3 52.3 71.6 83.3 51.4
UperNet RSP-ResNet-50 61.6 64.2 75.9 68.8 69.9 58.5 54.4 40.2 59.6 47.5 32.1 43.8 65.4 76.5 82.8 51.5
UperNet IMP-Swin-T 64.6 69.2 76.5 74.1 69.9 56.3 60.1 41.9 62.3 51.6 44.7* 45.8 64.5 75.9 85.7 56.7
UperNet RSP-Swin-T 64.1 67.0 74.6 73.7 70.7 59.0 60.1 44.3 62.0 50.6 37.6 46.8 64.9 76.2 85.2 53.8
UperNet IMP-ViTAEv2-S 65.3* 71.4* 77.5* 68.2 71.0 60.8 61.9 43.0 63.8* 53.6* 43.4 44.8 65.1 77.9* 86.4* 57.7*
UperNet RSP-ViTAEv2-S 64.3 71.3 74.3 72.2 70.4 57.4 63.0* 44.0 62.5 51.6 35.4 47.0 62.2 77.7 85.2 54.7
ST: storage tank. BD: baseball diamond. TC: tennis court. BC: basketball court. GTF: ground track field. LV: large vehicle. SV: small vehicle. HC: helicopter. SP: swimming pool. RA: roundabout. SBF: soccer ball field.
high-resolution features have not encoded sufficient high-level semantics, whereas LANet [92] not only enhances the high- and low-level features simultaneously but also enriches the semantics of the high-resolution features. Thus, the segmentation performance of the evaluated UperNet-based models on small objects, such as cars, still has room for improvement. On the other hand, IMP-Swin-T is competitive and IMP-ViTAEv2-S achieves the best performance on the iSAID dataset, outperforming SOTA methods such as HRNet [103] and OCR [106], as well as a series of methods specially designed for aerial semantic segmentation, e.g., FarSeg [108] and FactSeg [109].
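As a reminder of how the per-class and mean scores in these tables are typically obtained, the following is a minimal sketch of IoU computation from a confusion matrix; the toy matrix and class choices are illustrative and not taken from the paper's evaluation code.

```python
import numpy as np

def per_class_iou(conf):
    """Per-class IoU from a confusion matrix (rows: ground truth, cols: prediction)."""
    tp = np.diag(conf).astype(float)
    fp = conf.sum(axis=0) - tp
    fn = conf.sum(axis=1) - tp
    # IoU_c = TP_c / (TP_c + FP_c + FN_c); guard against empty classes
    return tp / np.maximum(tp + fp + fn, 1.0)

# Toy 3-class confusion matrix (e.g., building / low vegetation / car)
conf = np.array([[50, 5, 0],
                 [4, 40, 1],
                 [0, 2, 8]])
iou = per_class_iou(conf)
miou = iou.mean()
```

The mIoU is simply the unweighted average of the per-class IoUs, which is why poor performance on small classes such as cars drags the mean down noticeably.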
Table VII also shows that the advantage of the RSP models lies in the “Bridge” category, which is consistent with the finding in the previous scene recognition task. Nevertheless, we can also see from Tables VI-VII that, on the segmentation task, the performances of RSP are not as good as those of the classical IMP.
We conjecture that there are two possible reasons. The first is the difference between the pretraining dataset and the evaluation one. Besides the dataset volume (note that the numbers of training samples and categories of MillionAID are smaller than those of ImageNet-1k), the spectral disparities also adversely affect the performance, especially on the Potsdam dataset, which adopts IR-R-G channels instead of ordinary RGB images (see Figure 6). We attribute the other reason to the difference between tasks. The representation used for scene recognition needs a global understanding of the whole scene, as Figure 4 shows, while the segmentation task requires the features to be more detailed while simultaneously possessing high-level semantic information, since they conduct scene-level and pixel-level classification, respectively. To verify this conjecture, we then evaluate these networks on the aerial
Fig. 7. Segmentation maps of the UperNet with different backbones on the Potsdam dataset. (a) Ground Truth. (b) IMP-ResNet-50. (c) SeCo-ResNet-50. (d) RSP-ResNet-50. (e) IMP-Swin-T. (f) RSP-Swin-T. (g) IMP-ViTAEv2-S. (h) RSP-ViTAEv2-S. Classes: impervious surface, building, low vegetation, tree, car, ignore.
object detection task in the next section. The granularity of the representation needed for detection probably lies between those for the segmentation and recognition tasks, since one of the aims of the detection task is object-level classification.
Qualitative Results and Analyses: We present some visual segmentation results of the UperNet with different backbones on the Potsdam dataset in Figure 7. As can be seen, only the ViTAEv2-S models successfully connect the long strips of low vegetation (see the red boxes), while IMP-ViTAEv2-S performs slightly better than RSP-ViTAEv2-S, which is consistent with the quantitative results in Table VI.
C. Aerial Object Detection |
Since aerial images are captured from a top-down view, objects can appear in arbitrary orientations in the bird's-eye view. Thus, aerial object detection is formulated as oriented bounding box (OBB) detection, which is distinguished from the usual horizontal bounding box (HBB) task on natural images [111], [117], [134]. In this paper, similar to segmentation, we also use different detection datasets in the experiments. Concretely, we evaluate on the multi-category RS object detection and the single-category ship detection subtasks, respectively.
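To make the OBB-vs-HBB distinction concrete, here is a minimal sketch of one common OBB parameterization, a box given by center, size, and rotation angle, converted to its four corner points; the function name and the (cx, cy, w, h, angle) convention are illustrative assumptions, as angle conventions vary across detectors.

```python
import math

def obb_to_corners(cx, cy, w, h, angle_rad):
    """Rotate the four corners of a w x h box about its center (cx, cy)."""
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # standard 2D rotation applied to each corner offset, then translated
    return [(cx + x * c - y * s, cy + x * s + y * c) for x, y in half]

# with zero rotation, the OBB reduces to an ordinary horizontal box (HBB)
corners = obb_to_corners(100.0, 50.0, 40.0, 20.0, 0.0)
```

An HBB is thus the special case angle = 0; an OBB detector must additionally regress the rotation, which is why it suits arbitrarily oriented aerial objects.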
1) Dataset: Two datasets including the large-scale DOTA |
[135] scenes and the commonly used HRSC2016 [136] dataset |
are separately utilized for the above objectives. |
•DOTA: This is the most famous large-scale dataset for OBB detection. It contains 2,806 images in total, with sizes ranging from 800×800 to 4,000×4,000 pixels, comprising 188,282 instances belonging to 15 categories. The training, validation, and testing sets have 1,411/458/937 tiles, respectively. It should be noticed that the categories are completely the same as those of the iSAID dataset, since the two datasets share the same set of scenes. The difference lies in the annotations for different tasks.
•HRSC2016: This is a specialized ship detection dataset, where the bounding boxes are annotated in arbitrary orientations. It includes 1,061 images with sizes ranging from 300×300 to 1,500×900 pixels. In the official division, 436/181/444 images are used for training, validation, and testing, respectively. The dataset only has one category, since there is no need to recognize the type of each ship.
2) Implementation Detail and Experimental Setting: Similar to segmentation, the ResNet models are trained using the SGDM algorithm with a learning rate of 0.005, a momentum of 0.9, and a weight decay of 0.0001, while the vision transformers are trained with the AdamW optimizer, where the learning rate and weight decay are set to 0.0001 and 0.05, respectively. These models are trained for 12 and 36 epochs with a batch size of 2 on the DOTA and HRSC2016 scenes, respectively. The learning rate is adjusted by a multi-step scheduler. On the DOTA dataset, the learning rate is reduced by 10× after the 8th epoch and the 11th epoch, while on the HRSC2016 scene, the corresponding settings are epoch 24 and
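The multi-step schedule described above can be sketched in a few lines of pure Python; the zero-based epoch indexing and the helper name are assumptions for illustration, not the authors' training code.

```python
def multistep_lr(base_lr, epoch, milestones, gamma=0.1):
    """Learning rate under a multi-step schedule: multiplied by `gamma`
    once for every milestone the current epoch has passed."""
    return base_lr * gamma ** sum(epoch >= m for m in milestones)

# DOTA schedule from the text: 12 epochs, 10x drops after epochs 8 and 11,
# with the SGDM base learning rate of 0.005
lrs = [multistep_lr(0.005, e, milestones=(8, 11)) for e in range(12)]
```

This reproduces the stepwise decay 0.005 → 0.0005 → 0.00005 over the 12-epoch DOTA run; the HRSC2016 run uses the same mechanism with later milestones over 36 epochs.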