4. Experiments
We apply TSViT to two tasks using SITS records X ∈ R^{T×H×W×C} as inputs: classification and semantic segmentation. At the object level, classification models learn a mapping f(X) ∈ R^K for the object occupying the center of the H×W region. Semantic segmentation models learn a mapping f(X) ∈ R^{H×W×K}, predicting class probabilities for each pixel over the spatial extent of the SITS record.
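The two task interfaces can be illustrated with a small shape sketch (illustrative values only; the sizes T, H, W, C, K follow the notation above, and the random arrays stand in for a trained model's outputs):

```python
import numpy as np

# Illustrative sizes only: T acquisitions, H×W pixels, C bands, K classes.
T, H, W, C, K = 16, 24, 24, 13, 18

X = np.random.rand(T, H, W, C)        # one SITS record

# Classification: a single class distribution for the object at the center.
y_cls = np.random.rand(K)
y_cls /= y_cls.sum()                  # normalize to class probabilities
assert y_cls.shape == (K,)

# Semantic segmentation: class probabilities for every pixel.
y_seg = np.random.rand(H, W, K)
y_seg /= y_seg.sum(axis=-1, keepdims=True)
assert y_seg.shape == (H, W, K)
```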
We use an ablation study on semantic segmentation to guide model design and hyperparameter tuning, and proceed with presenting our main results on three publicly available SITS semantic segmentation and classification datasets.
4.1. Training and evaluation
Datasets To evaluate the performance of our proposed semantic segmentation model we use three publicly available Sentinel-2 (S2) land cover recognition datasets. The dataset presented in [47] covers a densely cultivated area of interest of 102×42 km² north of Munich, Germany and contains 17 distinct classes. Individual image samples cover a 240×240 m² area (24×24 pixels) and contain 13 bands. The PASTIS dataset [14] contains images from four different regions in France with diverse climate and crop distributions, spanning over 4000 km² and including 18 crop types. In total, it includes 2.4k SITS samples of size 128×128, each containing 33-61 acquisitions and 10 image bands. Because the PASTIS sample size is too large for efficiently training TSViT with the available hardware, we split each sample into 24×24 patches and retain all acquisition times, for a total of 60k samples. To accommodate a large set of experiments we only use fold-1 among the five folds provided in PASTIS. Finally, we use the T31TFM-1618 dataset [60], which covers a densely cultivated S2 tile in France for the years 2016-18 and includes 20 distinct classes. In total, it includes 140k samples of size 48×48, each containing 14-33 acquisitions and 13 image bands. For the SITS classification experiments, we construct the datasets from the respective segmentation datasets. More specifically, for PASTIS we use the provided object instance ids to extract 24×24 pixel regions whose center pixel falls inside each object and use the class of this object as the sample class. The remaining two datasets contain samples of smaller spatial extent, making the above strategy infeasible in practice. Here, we choose to retain the samples as they are and assign the class of the center pixel as the global class. We note that this strategy forces us to discard samples in which the center pixel belongs to the background class. Additional details are provided in the supplementary material.
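The dataset construction described above can be sketched as follows (a hedged NumPy illustration, not the authors' code; note that 128×128 is not exactly divisible by 24, so the real split must handle borders in some way — here incomplete patches are simply dropped, which is an assumption):

```python
import numpy as np

def split_into_patches(x, p=24):
    """Split a (T, H, W, C) SITS sample into non-overlapping (T, p, p, C)
    patches, keeping all acquisition times. Incomplete border patches are
    dropped (an assumption; the paper does not detail border handling)."""
    T, H, W, C = x.shape
    return [x[:, i:i + p, j:j + p, :]
            for i in range(0, H - p + 1, p)
            for j in range(0, W - p + 1, p)]

def center_pixel_class(labels, background=0):
    """Class of the center pixel of an (H, W) label map; None signals a
    background-centered sample, which is discarded for classification."""
    H, W = labels.shape
    c = int(labels[H // 2, W // 2])
    return None if c == background else c

x = np.zeros((33, 128, 128, 10))       # one PASTIS sample
patches = split_into_patches(x)        # 5 × 5 = 25 full 24×24 patches
```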
Implementation details For all experiments presented we train for the same number of epochs, using the provided data splits from the respective publications for a fair comparison. More specifically, we train on all datasets using the provided training sets and report results on the validation sets for Germany and T31TFM-1618, and on the test set for PASTIS. For training TSViT we use the AdamW optimizer [23] with a learning rate schedule which includes a warmup period starting from zero to a maximum value of 10⁻³ at epoch 10, followed by cosine learning rate decay [33] down to 5×10⁻⁶ at the end of training. For Germany and T31TFM-1618 we train with the above settings and report the best performances between what we achieve and the original studies. Since we split PASTIS, we train with both settings and report the best results. Overall, we find that our settings improve model performance. We train with a batch size of 16 or 32 and no regularization on 2× Nvidia Titan Xp GPUs in a data parallel fashion. All models are trained with a Masked Cross-Entropy loss, masking the effect of the background class in both the training loss and the evaluation metrics. We report overall accuracy (OA), averaged over pixels, and mean intersection over union (mIoU), averaged over classes. For SITS classification, in addition to the 1D models presented in section 2, we modify the best performing semantic segmentation models by aggregating
extracted features across space prior to the application of a classifier, thus outputting a single prediction. Classification models are trained with Focal Loss [30]. We report OA and mean accuracy (mAcc) averaged over classes.

Ablation                          Settings              mIoU
Factorization order               Spatial & Temporal    48.8
                                  Temporal & Spatial    78.5
# cls tokens                      1                     78.5
                                  K                     83.6
Position encodings                Static                80.8
                                  Date lookup           83.6
Interactions between cls tokens   Temporal only         81.5
                                  Temporal & Spatial    83.6
Patch size                        2×2                   84.8
                                  3×3                   83.6
                                  6×6                   79.6

Table 1. Ablation on design choices for TSViT. All proposed design choices are found to have a positive effect on performance.
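The warmup-plus-cosine learning rate schedule from the implementation details can be sketched per epoch as below, using the values stated in the text (maximum 10⁻³ at epoch 10, decayed to 5×10⁻⁶ at the end of training); the exact formulation used for training TSViT is an assumption:

```python
import math

def lr_at_epoch(epoch, total_epochs, warmup_epochs=10,
                lr_max=1e-3, lr_min=5e-6):
    """Linear warmup from zero to lr_max at epoch `warmup_epochs`,
    then cosine decay down to lr_min at `total_epochs`."""
    if epoch < warmup_epochs:
        return lr_max * epoch / warmup_epochs
    progress = (epoch - warmup_epochs) / (total_epochs - warmup_epochs)
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * progress))
```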
4.2. Ablation studies
We perform an ablation study on design parameters of our framework using the Germany dataset [47]. Starting with a baseline TSViT with L = 4 for both encoder networks, a single cls token, h = w = 3, t = 1 and d = 128, we successively update our design after each ablation. Here, we present the effect of the most important design