crop type recognition.
3.3. Tokenization of SITS inputs |
A SITS record $X \in \mathbb{R}^{T \times H \times W \times C}$ consists of a series of $T$ satellite images of spatial dimensions $H \times W$ with $C$ channels. For the tokenization of our 3D inputs, we can extend the tokenization-as-convolution approach to 3D data and apply a 3D kernel of size $(t \times h \times w)$ at stride $(t, h, w)$ across the temporal and spatial dimensions. In this manner $N = \lfloor T/t \rfloor \lfloor H/h \rfloor \lfloor W/w \rfloor$ non-overlapping tokens $x_i \in \mathbb{R}^{thwC}$ are extracted, and subsequently projected to $d$ dimensions.
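As an illustration, this patch extraction can be sketched with plain NumPy reshapes, which for non-overlapping patches are equivalent to a strided 3D convolution. This is a minimal sketch: the function name `tokenize_sits` and all array sizes below are our own illustrative choices, not values from the paper.

```python
import numpy as np

def tokenize_sits(X, t, h, w, W_proj):
    """Split a SITS cube X of shape (T, H, W, C) into non-overlapping
    (t, h, w) patches and project each flattened patch to d dimensions.
    W_proj has shape (t*h*w*C, d)."""
    T, H, W, C = X.shape
    nT, nH, nW = T // t, H // h, W // w
    # reshape-and-transpose is equivalent to applying a 3D kernel of
    # size (t, h, w) at stride (t, h, w) across time and space
    patches = (X[:nT * t, :nH * h, :nW * w]
               .reshape(nT, t, nH, h, nW, w, C)
               .transpose(0, 2, 4, 1, 3, 5, 6)
               .reshape(nT, nH, nW, t * h * w * C))
    # linear projection to d dimensions; spatial structure is retained
    return patches @ W_proj  # shape (nT, nH, nW, d)

# toy example with t=1 (the value used in the paper) and assumed sizes
rng = np.random.default_rng(0)
X = rng.standard_normal((6, 8, 8, 13))           # T=6, 8x8 pixels, 13 S2 bands
W_proj = rng.standard_normal((1 * 2 * 2 * 13, 32))
tokens = tokenize_sits(X, t=1, h=2, w=2, W_proj=W_proj)
print(tokens.shape)                               # (6, 4, 4, 32)
```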
Using $t > 1$, all extracted tokens contain spatio-temporal information. For the special case of $t = 1$, each token contains spatial-only information for each acquisition time, and temporal information is accounted for only through the encoder layers. Since the computational cost of global self-attention layers is quadratic in the length of the token sequence, $O(N^2)$, choosing larger values for $t, h, w$ can lead to a significantly reduced number of FLOPs. In our experiments, however, we have found small values of $t, h, w$ to work much better in practice. For all presented experiments we use $t = 1$, motivated in part by the fact that this choice simplifies the implementation of acquisition-time-specific temporal position encodings, described in section 3.6. With regard to the spatial dimensions of the extracted patches, we have found small values to work best for semantic segmentation, which is reasonable given that small patches retain additional spatial granularity. In the end, our tokenization scheme is similar to ViT's, applied in parallel for each acquisition as shown in Fig. 3; however, at this stage, instead of unrolling feature dimensions, we retain the spatial structure of the original input, as reshape operations will be handled by the TSViT encoder submodules.
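To make the cost trade-off concrete, a quick back-of-the-envelope computation of the token count $N$ and the resulting quadratic attention cost. The SITS dimensions below are illustrative choices of our own, not values from the paper:

```python
def num_tokens(T, H, W, t, h, w):
    # N = floor(T/t) * floor(H/h) * floor(W/w) non-overlapping tokens
    return (T // t) * (H // h) * (W // w)

# illustrative dimensions: T=60 acquisitions of 48x48 pixels
N_small = num_tokens(60, 48, 48, t=1, h=2, w=2)   # 60 * 24 * 24 = 34560
N_large = num_tokens(60, 48, 48, t=3, h=4, w=4)   # 20 * 12 * 12 = 2880
# global self-attention scales as O(N^2), so the larger patches cut the
# attention cost by a factor of (34560 / 2880)^2 = 144
print(N_small, N_large, (N_small // N_large) ** 2)
```

This is why small $t, h, w$ values, despite their accuracy benefits, require the factorized encoder described next.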
3.4. Encoder architecture |
In the previous section we presented a motivation for using small values of $t, h, w$ for the extracted patches. Unless other measures are taken to reduce the model's computational cost, this choice would be prohibitive for processing SITS with multiple acquisition times. To avoid such problems, we choose to factorize our inputs across their temporal and spatial dimensions, a practice commonly employed for video processing [17, 27, 37, 56, 66, 72].
Figure 4. TSViT submodules. (a) Temporal encoder. We reshape tokenized inputs, retaining the spatio-temporal structure of SITS, into a set of timeseries for each spatial location, add temporal position encodings $P_T[t,:]$ for acquisition times $t$, concatenate local cls tokens $Z^T_{cls}$ (eq. 5) and process in parallel with a Transformer. Only the first $K$ output tokens are retained. (b) Spatial encoder. We reshape the outputs of the temporal encoder into a set of spatial feature maps for each cls token, add spatial position encodings $P_S$, concatenate global cls tokens $Z^S_{cls}$ (eq. 6) and process in parallel with a Transformer. (c) Segmentation head. Each local cls token is projected into $hw$ values denoting class-specific evidence for every pixel in a patch. All patches are then reassembled into the original image dimensions. (d) Classification head. Global cls tokens are projected into scalar values, each denoting evidence for the presence of a specific class.
We note
that all these works use a spatial-temporal factorization order, which is reasonable when dealing with natural images, given that it allows the extraction of higher-level, semantically aware spatial features, whose relationship in time is useful for scene understanding. However, we argue that in
a meaningful design choice for the following reasons: 1) in contrast to natural images, in which context can be useful for recognising an object, in crop type recognition context can provide little information, or can even be misleading. This arises from the fact that the shape of an agricultural parcel does not need to follow its intended use, i.e. most crops can generally be cultivated independently of a field's size or shape. Of course, there exist variations in the shapes and sizes of agricultural fields [34], but these depend mostly on local agricultural practices and are not expected to generalize to unseen regions. Furthermore, agricultural parcels do not inherently contain sub-components or structure. Thus, knowing what is cultivated in a piece of land is not expected to provide information about what grows nearby. This is in
contrast to other objects which clearly contain structure, e.g. |
in human face parsing there are clear expectations about |
the relative positions of various face parts. We test this hypothesis in the supplementary material by enumerating over all agricultural parcels belonging to the most popular crop types in the T31TFM S2 tile in France and taking crop-type-conditional pixel counts over a 1 km square region from their centers. Then, we calculate the cosine similarity of these values with unconditional pixel counts over the extent of the T31TFM tile and find a high degree of similarity, suggesting that there are no significant variations between these distributions; 2) a small region in SITS is far more informative than its equivalent in natural images, as it contains more channels than regular RGB images (S2 imagery contains 13 bands in total) whose intensities are averaged over a relatively large area (the highest resolution of S2 images is 10 × 10 m²); 3) SITS for land cover recognition do not typically
contain moving objects. As a result, a timeseries of single pixel values can be used for extracting features that are informative of a specific object part found at that particular location. Therefore, several objects can be recognised using only information found in a single location; plants, for example, can be recognised by variations of their spectral signatures during their growth cycle. Many works performing crop classification do so using only temporal information in the form of timeseries of small patches [47], pixel statistics over the extent of parcels [46] or even values from single pixels [40, 48]. On the other hand, the spatial pat-