ViTs for SITS: Vision Transformers for Satellite Image Time Series
Michail Tarasiou Erik Chavez Stefanos Zafeiriou
Imperial College London
{michail.tarasiou10, erik.chavez, s.zafeiriou }@imperial.ac.uk
Abstract
In this paper we introduce the Temporo-Spatial Vision Transformer (TSViT), a fully-attentional model for general Satellite Image Time Series (SITS) processing based on the Vision Transformer (ViT). TSViT splits a SITS record into non-overlapping patches in space and time, which are tokenized and subsequently processed by a factorized temporo-spatial encoder. We argue that, in contrast to natural images, a temporal-then-spatial factorization is more intuitive for SITS processing, and present experimental evidence for this claim. Additionally, we enhance the model's discriminative power by introducing two novel mechanisms for acquisition-time-specific temporal positional encodings and multiple learnable class tokens. The effect of all novel design choices is evaluated through an extensive ablation study. Our proposed architecture achieves state-of-the-art performance, surpassing previous approaches by a significant margin on three publicly available SITS semantic segmentation and classification datasets. All model, training, and evaluation code can be found at https://github.com/michaeltrs/DeepSatModels.
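As a concrete illustration of the tokenization summarized above, the following sketch splits a SITS record into non-overlapping temporo-spatial patches and flattens each into a token vector. The function name and patch sizes are hypothetical choices for illustration, not the paper's exact implementation:

```python
import torch

def tokenize_sits(x, pt=2, ph=3, pw=3):
    """Split a SITS record x of shape (T, C, H, W) into non-overlapping
    patches of size (pt, ph, pw) in time/height/width and flatten each
    patch into a token vector (sketch; patch sizes are illustrative)."""
    T, C, H, W = x.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0
    # split each axis into (num_patches, patch_size)
    x = x.reshape(T // pt, pt, C, H // ph, ph, W // pw, pw)
    # bring the patch-index axes to the front, patch-content axes to the back
    x = x.permute(0, 3, 5, 1, 2, 4, 6)
    # flatten: one row per patch, one column per value inside the patch
    return x.reshape(-1, pt * C * ph * pw)  # (num_tokens, token_dim)
```

In a full model, each flattened patch would then be linearly projected to the Transformer's embedding dimension.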
1. Introduction
Monitoring man-made impacts and activities on the Earth's surface is essential for designing effective interventions that increase the welfare and resilience of societies. One example is the agricultural sector, in which monitoring crop development can help design optimal strategies aimed at improving the welfare of farmers and the resilience of the food production system. The second of the United Nations Sustainable Development Goals (SDG), Ending Hunger, relies on increasing the crop productivity and revenues of farmers in poor and developing countries [35]; approximately 2.5 billion people's livelihoods depend mainly on producing crops [10]. Achieving SDG 2 requires the ability to accurately monitor yields and the evolution of cultivated areas, both to measure progress towards its targets and to evaluate the effectiveness of different policies and interventions. In the European Union
Figure 1. Model and performance overview. (top) TSViT architecture; a more detailed schematic is presented in Fig.4. (bottom) TSViT performance compared with prior art (Table 2).
(EU), the Sentinel for Common Agricultural Policy program (Sen4CAP) [2] focuses on developing tools and analytics to support the verification of direct payments to farmers with underlying environmental conditionalities, such as the adoption of environmentally-friendly [50] and crop diversification [51] practices, based on real-time monitoring by the European Space Agency's (ESA) Sentinel high-resolution satellite constellation [1], complementing on-site verification. Recently, the volume and diversity of space-borne Earth Observation (EO) data [63] and post-processing tools [18, 61, 70] have increased exponentially. This wealth of resources, combined with important developments in machine learning for computer vision [20, 28, 53], provides an important opportunity for developing tools for the automated monitoring of crop development.
This CVPR paper is the Open Access version, provided by the Computer Vision Foundation.
Except for this watermark, it is identical to the accepted version;
the final published version of the proceedings is available on IEEE Xplore.
Towards more accurate automatic crop type recognition, we introduce TSViT, the first fully-attentional architecture for general SITS processing. An overview of the proposed architecture can be seen in Fig.1 (top). Our novel design introduces several inductive biases that make TSViT particularly suitable for the target domain:
• Satellite imagery for monitoring land surface variability boasts a high revisit frequency, leading to long temporal sequences. To reduce the amount of computation, we factorize the input dimensions into their temporal and spatial components, providing intuition (section 3.4) and experimental evidence (section 4.2) for why the order of factorization matters.
• TSViT uses a Transformer backbone [64], following the recently proposed ViT framework [13]. As a result, every TSViT layer has a global receptive field in time or space, in contrast to previously proposed convolutional and recurrent architectures [14, 24, 40, 45, 49].
• To make our approach more suitable for SITS modelling, we propose a tokenization scheme for the input image timeseries, together with acquisition-time-specific temporal position encodings, in order to extract date-aware features and to account for irregularities in SITS acquisition times (section 3.6).
• We modify the ViT framework (section 3.2) to enhance its capacity to gather class-specific evidence, which we argue suits the problem at hand, and design two custom decoder heads to accommodate both global and dense predictions (section 3.5).
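The temporal-then-spatial factorization and the date-aware positional encodings outlined above can be sketched as follows. This is a simplified illustration with hypothetical hyper-parameter names, omitting the paper's class tokens and decoder heads:

```python
import torch
import torch.nn as nn

class FactorizedTSEncoder(nn.Module):
    """Sketch of a temporal-then-spatial factorized Transformer encoder
    for SITS patch tokens (simplified; omits class tokens and decoders)."""

    def __init__(self, dim=64, heads=4, depth=2, max_doy=366):
        super().__init__()
        # acquisition-time-specific positional encoding: one learnable
        # vector per calendar day, looked up from each acquisition date
        self.date_pos = nn.Embedding(max_doy, dim)
        make_layer = lambda: nn.TransformerEncoderLayer(
            dim, heads, dim_feedforward=2 * dim, batch_first=True)
        self.temporal = nn.TransformerEncoder(make_layer(), depth)
        self.spatial = nn.TransformerEncoder(make_layer(), depth)

    def forward(self, tokens, doy):
        # tokens: (B, T, N, D) patch tokens; doy: (B, T) acquisition days
        B, T, N, D = tokens.shape
        x = tokens + self.date_pos(doy)[:, :, None, :]  # date-aware features
        # temporal stage: each spatial location attends across time only
        x = x.permute(0, 2, 1, 3).reshape(B * N, T, D)
        x = self.temporal(x)
        # spatial stage: each time step attends across space only
        x = x.reshape(B, N, T, D).permute(0, 2, 1, 3).reshape(B * T, N, D)
        x = self.spatial(x)
        return x.reshape(B, T, N, D)
```

Factorizing in this order keeps per-layer attention cost on the order of T² + N² per token group, rather than the (TN)² cost of joint spatio-temporal attention.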
Our provided intuitions are tested through extensive ablation studies on design parameters, presented in section 4.2. Overall, our architecture achieves state-of-the-art performance on three publicly available datasets for classification