choices; additional ablations are presented in the supplementary material. Overall, we find that the order of factorization is the most important design choice in our proposed framework. Using a spatio-temporal factorization from the video recognition literature performs poorly, at 48.8% mIoU. Changing the factorization order to temporo-spatial raises performance by an absolute +29.7%, to 78.5% mIoU.
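To make the two factorization orders concrete, the sketch below contrasts them using a toy attention block in place of the full transformer encoders; `toy_encoder`, `spatio_temporal`, and `temporo_spatial` are illustrative names for this sketch, not the paper's implementation.

```python
import numpy as np

def toy_encoder(x):
    # x: (batch, seq, d). A stand-in for a transformer block:
    # softmax self-attention over the seq axis with identity projections.
    s = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    a = np.exp(s - s.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)
    return a @ x

def spatio_temporal(tokens):
    # tokens: (T, N, d), with N = H*W patch tokens per acquisition date.
    T, N, d = tokens.shape
    x = toy_encoder(tokens)          # spatial stage: T sequences of length N
    x = x.transpose(1, 0, 2)         # (N, T, d)
    x = toy_encoder(x)               # then temporal stage: N sequences of length T
    return x.transpose(1, 0, 2)

def temporo_spatial(tokens):
    # Same components, opposite order: attend over time first, then space.
    T, N, d = tokens.shape
    x = tokens.transpose(1, 0, 2)    # (N, T, d)
    x = toy_encoder(x)               # temporal stage first
    x = x.transpose(1, 0, 2)         # (T, N, d)
    return toy_encoder(x)            # then spatial stage
```

Both variants use identical components and preserve the token grid's shape; only the axis processed first differs, which is the design choice the ablation isolates.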
Including additional cls tokens increases performance to 83.6% mIoU (+5.1%), so we proceed with K cls tokens in our design. We test the effect of our date-specific position encodings against a fixed set of values and find a significant −2.8% performance drop when using a fixed-size PT instead of our proposed lookup encodings. Further analysis is provided in the supplementary material.
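The distinction between the two encoding schemes can be sketched as follows; the table size of 366 (day-of-year), `d_model`, and the helper names are assumptions for illustration, not the paper's code.

```python
import numpy as np

# Illustrative sketch: date-specific position encodings are rows of a
# lookup table indexed by acquisition day-of-year, so the same calendar
# date always maps to the same encoding, regardless of sequence length
# or of where that date falls within the sequence.
rng = np.random.default_rng(0)
d_model = 16
date_table = rng.standard_normal((366, d_model))  # one (learnable) row per day

def lookup_encodings(days_of_year):
    """days_of_year: int array of acquisition dates for one sequence."""
    return date_table[np.asarray(days_of_year)]

# A fixed-size PT, by contrast, is indexed by sequence position only:
fixed_pt = rng.standard_normal((52, d_model))     # assumes T = 52

seq_a = [10, 45, 120]   # two sequences observing partly different dates
seq_b = [45, 120, 300]
enc_a = lookup_encodings(seq_a)
enc_b = lookup_encodings(seq_b)
# The shared date 45 receives the same encoding under the lookup scheme,
# although it sits at position 1 in seq_a and position 0 in seq_b; under
# fixed positional indexing those positions would receive different rows.
assert np.allclose(enc_a[1], enc_b[0])
```

This is one plausible reading of why lookup encodings suit irregularly sampled SITS: the encoding carries calendar information rather than sequence-index information.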
As discussed in section 3.4, our spatial encoder blocks cross cls-token interactions. Allowing interactions among all tokens comes at a significant increase in compute cost, from O(K) to O(K²), and is found to decrease performance by 2.1% mIoU. Finally, we find that smaller patch sizes generally
work better, which is reasonable given that tokens retain a
higher degree of spatial granularity and are used to predict
smaller regions. Using 2×2 patches raises performance by +1.2% mIoU to 84.8%, compared to 3×3 patches. Our final design, which is used in the main experiments presented
Germany [47] PASTIS [14] T31TFM-1618 [60]
Model #params. (M) IT (ms) Semantic segmentation (OA / mIoU)
BiCGRU [47] 4.5 38.6 91.3 / 72.3 80.5 / 56.2 88.6 / 57.7
FPN-CLSTM [7] 1.2 19.5 91.8 / 73.7 81.9 / 59.5 88.4 / 57.8
UNET3D [45] 6.2 11.2 92.4 / 75.2 82.3 / 60.4 88.4 / 57.6
UNET3Df [60] 7.2 19.7 92.4 / 75.4 82.1 / 60.2 88.6 / 57.7
UNET2D-CLSTM [45] 2.3 35.5 92.9 / 76.2 82.7 / 60.7 89.0 / 58.8
U-TAE [14] 1.1 8.8 93.1 / 77.1 82.9 / 62.4 (83.2 / 63.1) 88.9 / 58.5
TSViT (ours) 1.7 11.8 95.0 / 84.8 83.4 / 65.1 (83.4 / 65.4) 90.3 / 63.1
Model #params. (M) IT (ms) Object classification (OA / mAcc)
TempCNN∗[40] 0.9 0.5 89.8 / 78.4 84.8 / 69.1 84.7 / 62.6
DuPLo∗[24] 5.2 2.9 93.1 / 82.2 84.8 / 69.4 83.9 / 69.5
Transformer∗[48] 18.9 4.3 92.4 / 84.3 84.4 / 68.1 84.3 / 71.4
UNET3D [45] 6.2 11.2 92.7 / 83.9 84.8 / 70.2 84.8 / 71.4
UNET2D-CLSTM [45] 2.3 35.5 93.0 / 84.0 84.7 / 70.3 84.7 / 71.6
U-TAE [14] 1.1 8.8 92.6 / 83.7 84.9 / 71.8 84.8 / 71.7
TSViT (ours) 1.7 11.8 94.7 / 88.1 87.1 / 75.5 87.8 / 74.2
Table 2. Comparison with state-of-the-art models from the literature. (Top) Semantic segmentation. (Bottom) Object classification. ∗1D-temporal-only models. For each model we report its number of parameters (#params. ×10⁶) and inference time (IT) for a single sample of size T=52, H=W=24, C=13 on an NVIDIA Titan Xp GPU. We report overall accuracy (OA), mean intersection over union (mIoU), and mean accuracy (mAcc). For PASTIS we report results for fold 1 only; average test-set performance across all five folds is shown in parentheses for direct comparison with [14].
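The three metrics reported in Table 2 can all be derived from a confusion matrix; a minimal sketch, with `metrics` as an illustrative helper rather than the evaluation code used in the paper:

```python
import numpy as np

def metrics(conf):
    """conf: (C, C) confusion matrix, conf[i, j] = number of pixels of
    true class i predicted as class j.
    Returns (overall accuracy, mean IoU, mean accuracy)."""
    tp = np.diag(conf).astype(float)
    per_class_acc = tp / conf.sum(axis=1)                     # recall per class
    iou = tp / (conf.sum(axis=1) + conf.sum(axis=0) - tp)     # TP / (TP+FP+FN)
    oa = tp.sum() / conf.sum()                                # pixel-averaged
    return oa, iou.mean(), per_class_acc.mean()               # class-averaged means
```

OA is dominated by frequent classes, while mIoU and mAcc weight all classes equally; this is why, under heavy class imbalance, gains can look different on pixel-averaged versus class-averaged metrics.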
Figure 5. Visualization of predictions in Germany. The background class is shown in black; "x" indicates a false prediction.
in Table 2 employs a temporo-spatial design with K cls tokens, acquisition-time-specific position encodings, 2×2 input patches, and four layers for both encoders.
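The blocked cls-token design discussed above can be sketched as follows: treating the K class groups as a batch dimension gives K independent sequences of length N (cost linear in K), whereas one joint sequence of K·N tokens allows cross cls-token attention at quadratic cost in K. The function names are illustrative, not the paper's code.

```python
import numpy as np

def toy_attention(x):
    # x: (batch, seq, d); softmax self-attention with identity projections.
    s = x @ x.transpose(0, 2, 1) / np.sqrt(x.shape[-1])
    a = np.exp(s - s.max(-1, keepdims=True))
    a /= a.sum(-1, keepdims=True)
    return a @ x

def spatial_encoder_blocked(cls_tokens):
    """cls_tokens: (K, N, d) -- one cls token per class at each of N
    spatial positions. Each of the K groups is encoded as its own
    sequence, so no attention crosses class boundaries: K sequences
    of length N, cost linear in K."""
    return toy_attention(cls_tokens)

def spatial_encoder_full(cls_tokens):
    """All K*N tokens as one sequence: cross cls-token attention is
    allowed, and cost grows quadratically in K."""
    K, N, d = cls_tokens.shape
    out = toy_attention(cls_tokens.reshape(1, K * N, d))
    return out.reshape(K, N, d)
```

In this reading, blocking is both the cheaper and, per the ablation, the better-performing choice.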
4.3. Comparison with SOTA
In Table 2 and Fig. 1, we compare the performance of TSViT with the state-of-the-art models presented in section 2. For semantic segmentation, we find that all models from the literature perform similarly, with BiCGRU being the overall worst performer, matching CNN-based architectures only on T31TFM-1618. On all datasets, TSViT outperforms previously suggested approaches by a very large margin. A visualization of predictions in Germany for the top-3 performers is shown in Fig. 5. In object classification, we observe that 1D temporal models are generally outperformed by spatio-temporal models, with the exception of the Transformer [48]. Again, TSViT trained for classification consistently outperforms all other approaches by a large margin across all datasets. In both tasks, we find smaller improvements for the pixel-averaged metrics than for the class-averaged metrics, which is reasonable given the large class imbalance that characterizes the datasets.
5. Conclusion
In this paper we proposed TSViT, which is the first
fully-attentional architecture for general SITS processing.
Overall, TSViT has been shown to significantly outperform
state-of-the-art models in three publicly available land cover
recognition datasets, while being comparable to other mod-
els in terms of the number of parameters and inference time.
However, our method is limited by its quadratic complexity
with respect to the input size, which can lead to increased
hardware requirements when working with larger inputs.
While this may not pose a significant issue for semantic seg-
mentation or SITS classification, it can present challenges
for detection tasks that require isolating large objects, thus
limiting its application. Future research is needed to address
this limitation and enable TSViT to scale more effectively
to larger inputs.
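The scaling concern can be made concrete with back-of-the-envelope arithmetic, assuming spatial-attention cost proportional to the square of the number of spatial tokens; the helper names and figures below are illustrative, not measurements from the paper.

```python
# Doubling the spatial extent of the input quadruples the number of
# spatial tokens and therefore increases spatial-attention cost 16-fold.
def n_tokens(h, w, patch):
    return (h // patch) * (w // patch)

def relative_attention_cost(h, w, patch=2):
    # Proportional cost only; constant factors are omitted.
    return n_tokens(h, w, patch) ** 2

base = relative_attention_cost(24, 24)    # the 24x24 inputs used above
double = relative_attention_cost(48, 48)  # doubled spatial extent
assert double == 16 * base
```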
Acknowledgements MT and EC acknowledge funding
from the European Union’s Climate-KIC ARISE grant. SZ
was partially funded by the EPSRC Fellowship DEFORM:
Large Scale Shape Analysis of Deformable Models of Hu-
mans (EP/S010203/1).
References
[1] European Space Agency. The Sentinel missions. https://www.esa.int/Applications/Observing_the_Earth/Copernicus/The_Sentinel_missions. Accessed: 2022-11-11.
[2] European Space Agency. Sentinels for common agriculture policy. http://esa-sen4cap.org/. Accessed: 2022-