lected over Vaihingen, Germany at a GSD of 0.9m. We employ the data split implemented in the MMSegmentation library [10] for our experiments, with 344 images for training and 398 for validation, all with an image size of 512×512 pixels. The WHU Aerial building [21] dataset is sampled over Christchurch, New Zealand at a GSD of 0.3m. Image tiles are provided at 512×512 pixels, split into 4,736 for training and 2,416 for evaluation.
We report intersection over union (IoU) segmentation results for all methods in Table 6. ImageNet-pretrained models are notably strong performers in all cases. On both datasets, SeCo lags substantially behind its ImageNet counterpart. Interestingly, SatMAE brings an improvement over ImageNet-22k on WHU, but falls short of it on Vaihingen. Our approach, in contrast, is able to leverage the already strong ImageNet-22k representations and guide them towards the geospatial domain, resulting in an overall improvement.
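As a concrete reference for the metric, per-class IoU is the ratio of the intersection to the union of the predicted and ground-truth masks, and mIoU averages it over classes. A minimal NumPy sketch (function names are ours, not from any released codebase):

```python
import numpy as np

def class_iou(pred, target, cls):
    """IoU for one class, given integer label maps of equal shape."""
    p, t = (pred == cls), (target == cls)
    inter = np.logical_and(p, t).sum()
    union = np.logical_or(p, t).sum()
    return inter / union if union > 0 else float("nan")

def mean_iou(pred, target, num_classes):
    """mIoU: average per-class IoU, skipping classes absent from both maps."""
    ious = [class_iou(pred, target, c) for c in range(num_classes)]
    return float(np.nanmean(ious))

# Toy 2x2 label maps with 2 classes.
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
# class 1: intersection 2, union 3 -> IoU = 2/3
```

For WHU, only the building-class IoU is reported; for Vaihingen, the average is taken over all 6 classes.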
4.4. Super-resolution
In the previous experiments, we evaluated several common high-level tasks. Nonetheless, the low-level task of super-resolution is also important in the geospatial domain. For this task, we re-purpose the SpaceNet2 dataset, which contains 10,593 8-band images from four cities: Las Vegas, Paris, Shanghai, and Khartoum. The data is provided at both a GSD of 1.24m (multi-spectral, 162×162 pixels) and 0.3m (pan-sharpened multi-spectral, 650×650 pixels).
We formulate a super-resolution task, taking as input the 1.24m multi-spectral images and generating the 0.3m pan-sharpened equivalent. We evaluate the super-resolution performance of our model and several baselines with the peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) in Table 7. The ViT-L ImageNet-22k model and our model achieve the best PSNR and SSIM, respectively. Interestingly, SatMAE is not able to improve over its baseline. On the other hand, our method improves considerably over its ImageNet-22k baseline.

Table 6. Results on the WHU Aerial and Vaihingen segmentation datasets. We finetune all methods for 40k iterations, and report the IoU for the building class on WHU and the mean IoU (mIoU) across the 6 classes (impervious surface, building, low vegetation, tree, car, clutter) of Vaihingen.

Method                         WHU Aerial   Vaihingen
ResNet50 (ImageNet-1k) [19]    88.5         74.0
SeCo [28]                      86.7         68.9
ViT (ImageNet-22k) [14]        81.6         72.6
SatMAE [9]                     82.5         70.6
Swin (random) [25]             88.2         67.0
Swin (ImageNet-22k) [25]       90.4         74.7
GFM                            90.7         75.3

Table 7. SpaceNet2 super-resolution results. Notably, while SatMAE fails to enhance its baseline (ViT ImageNet-22k), our method exhibits substantial improvement over its respective baseline (Swin ImageNet-22k) in both PSNR and SSIM.

Method                      PSNR ↑   SSIM ↑
ViT (ImageNet-22k) [14]     23.279   0.619
SatMAE [9]                  22.742   0.621
Swin (random) [25]          21.825   0.594
Swin (ImageNet-22k) [25]    21.655   0.612
GFM                         22.599   0.638
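For reference, PSNR follows directly from the mean squared error between the reference and the estimate; SSIM is more involved and is usually taken from an existing implementation (e.g. scikit-image's `structural_similarity`). A minimal PSNR sketch for images scaled to [0, 1] (not the evaluation code used in the paper):

```python
import numpy as np

def psnr(ref, est, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(max_val ** 2 / mse)

# Toy example: a uniform error of 0.1 on a [0, 1] image gives MSE = 0.01,
# i.e. 10 * log10(1 / 0.01) = 20 dB.
ref = np.full((8, 8), 0.5)
est = ref + 0.1
```

In practice, PSNR rewards low pixel-wise error while SSIM is more sensitive to local structure, which is why a method can lead on one metric but not the other, as in Table 7.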
5. Ablation Studies
We perform multiple ablation studies on the choice of distillation stage, student initialization, training objectives, and the pretraining dataset components. Further detailed results and discussions are provided in the supplementary material.
5.1. Distillation Stage
When implementing our feature map distillation objective, a natural question is at which point the mapping should take place. We experiment with different locations by stage in the Swin transformer and report the corresponding ARP in Figure 5. Overall, performing the distillation after Stage 3 yields the highest ARP, hence we employ this scheme for all downstream experiments. This result is also intuitively expected: distilling at Stage 3 gives a large portion of the model the supervisory signal from the teacher, while still allowing for purely domain-specific feature learning in the final layers.
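The scheme can be sketched as follows: the frozen teacher's Stage-3 feature map supervises the student's Stage-3 output through a distance loss, leaving the student's final stage free to learn domain-specific features. A simplified NumPy illustration with a mean-squared distance (the actual objective and any feature projection in our implementation may differ; all names here are ours):

```python
import numpy as np

def stagewise_distill_loss(student_feats, teacher_feats, distill_stage=3):
    """Distillation loss applied only at the chosen Swin stage.

    `student_feats` / `teacher_feats`: dicts mapping stage index to a
    feature array of shape (tokens, channels). Only `distill_stage` is
    penalized, so later stages stay unconstrained by the teacher.
    """
    s = student_feats[distill_stage]
    t = teacher_feats[distill_stage]
    return float(np.mean((s - t) ** 2))

# Toy features for the four Swin stages.
rng = np.random.default_rng(0)
teacher = {i: rng.normal(size=(16, 8)) for i in range(1, 5)}
student = {i: rng.normal(size=(16, 8)) for i in range(1, 5)}
loss = stagewise_distill_loss(student, teacher)  # supervises Stage 3 only
```

Moving `distill_stage` earlier would leave more of the student unconstrained but supervise less of it, which is the trade-off the ablation in Figure 5 probes.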
Figure 5. (a) Distillation stage ablation results. (b) Student initialization ablation results. “Both” indicates that the teacher and student branches are initialized with ImageNet weights prior to geospatial pretraining. “Teacher” indicates that just the teacher branch is initialized, as described in Section 3.3.
Table 8. GeoPile pretraining dataset ablation. We remove each dataset individually from GeoPile and report the number of images remaining and the resulting ARP. The row “w/o curated datasets” removes all data other than NAIP imagery.

Data                   # Images   ARP ↑
w/o WHU-RSD46          444,061    1.77
w/o MLRSNet            451,793    2.17
w/o Resisc45           529,454    1.57
w/o PatternNet         557,554    1.79
w/o curated datasets   300,000    0.53
w/o NAIP               260,954    1.50
5.2. Student Initialization

In our proposed framework, we keep the teacher model frozen with ImageNet-pretrained weights and randomly initialize the student. An alternative is to also initialize the student with ImageNet weights prior to beginning the geospatial pretraining process. However, as shown