The authors of [9] evaluated the possibility of adapting networks pre-trained
on large natural image databases (such as ImageNet [10]) to |
classify hyperspectral remote sensing images. More recently, |
[11] used an intermediate high-level representation based on
recurrent attention maps to classify images. Object detection is also
often approached using deep learning methods. To this effect, |
[12] introduced an object detection dataset and evaluated clas- |
sical deep learning approaches. Methods taking into account |
the specificity of remote sensing data have been developed, |
such as [13], which proposed to modify the classical approach
by generating rotatable region proposals, which are particularly
relevant for top-view imagery. Deep learning methods have |
also been developed for semantic segmentation. In [14], the |
authors evaluated different strategies for segmenting remote |
sensing data. More recently, a contest organized on the dataset |
of building segmentation created by [15] has motivated the |
development of a number of new methods to improve results |
on this task [16]. Similarly, [17] introduced a contest including |
three tasks: road extraction, building detection and land cover |
classification. Best results for each challenge were obtained |
using deep neural networks: [18], [19], [20]. |
Natural language processing has also been used in remote |
sensing. For instance, [21] used a convolutional neural |
network (CNN) to generate classification probabilities for a |
given image, and used a recurrent neural network (RNN) to |
generate its description. In a similar fashion, [7] used a CNN |
to obtain a multi-level semantic representation of an image
(object, land class, landscape) and generate a description |
using a simple static model. More recently, [22] used an
encoder/decoder architecture in which a CNN encodes
the image and an RNN decodes it into a textual representation,
while [23] projected the textual representation and the image
into a common space. While these works are use cases of natural
language processing, they do not enable interactions with the
user as we propose with VQA. |
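The CNN-encoder/RNN-decoder captioning pipeline discussed above can be illustrated with a minimal sketch. All layer shapes, the random weights, and the helper names (`cnn_encode`, `rnn_decode`) are hypothetical stand-ins for pretrained components, not the architecture of any cited work:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy dimensions; real systems use a pretrained CNN (e.g. on ImageNet)
# and a trained LSTM. Random weights only illustrate the data flow.
feat_dim, hidden, vocab = 64, 32, 10

def cnn_encode(image):
    """Stand-in for a CNN encoder: flatten + linear projection."""
    W = rng.standard_normal((feat_dim, image.size))
    return np.tanh(W @ image.ravel())

def rnn_decode(feature, steps=5):
    """Stand-in for an RNN decoder: emit one token id per step."""
    Wh = rng.standard_normal((hidden, hidden))
    Wx = rng.standard_normal((hidden, feat_dim))
    Wo = rng.standard_normal((vocab, hidden))
    h = np.zeros(hidden)
    tokens = []
    for _ in range(steps):
        h = np.tanh(Wh @ h + Wx @ feature)  # recurrent state update
        tokens.append(int(np.argmax(Wo @ h)))  # greedy token choice
    return tokens

image = rng.standard_normal((8, 8))
caption_ids = rnn_decode(cnn_encode(image))  # token ids, to be mapped to words
```

The sketch makes the one-way nature of captioning visible: the image is encoded once and decoded into text, with no mechanism for a user to pose a question.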
A VQA model is generally made of 4 distinct components: |
1) a visual feature extractor, 2) a language feature extractor, 3) |
a fusion step between the two modalities and 4) a prediction |
component. Since VQA is a relatively new task, a large
number of methodological developments have been published
in both the computer vision and natural language processing |
communities during the past 5 years, reviewed in [24]. VQA |
models are able to benefit from advances in the computer
vision and natural language processing communities for
the feature extraction components. However, the multi-modal
fusion has been less explored and, therefore, a significant
amount of work has been dedicated to this step. First VQA
models relied on a non-spatial fusion method, i.e. a point-wise
multiplication between the visual and language feature vectors |
[5]. While straightforward, this method does not allow every
component from both feature vectors to interact with each |
other. This interaction would ideally be achieved by multiply- |
ing the first feature vector by the transpose of the other, but |
this operation would be computationally intractable in practice. |
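This trade-off can be sketched numerically. The feature dimension used here (2048) is an assumption for illustration, not a value taken from the cited works: the point-wise (Hadamard) product keeps the fused representation at dimension d, whereas the full bilinear interaction via an outer product grows as d squared.

```python
import numpy as np

# Illustrative feature dimension (hypothetical; chosen only for the example).
d = 2048
rng = np.random.default_rng(0)
v = rng.standard_normal(d)  # visual feature vector (e.g. from a CNN)
q = rng.standard_normal(d)  # question feature vector (e.g. from an RNN)

# Non-spatial fusion as in early VQA models: element-wise product.
# Each visual component interacts only with the matching question component.
fused = v * q            # shape (d,)

# Full bilinear interaction: the outer product lets every pair of
# components interact, but it has d*d entries, which becomes intractable
# once followed by a dense prediction layer.
outer = np.outer(v, q)   # shape (d, d) -> 2048 * 2048 = 4,194,304 entries
```

Attention mechanisms and factorizations such as the Tucker decomposition can be read as ways of approximating this full interaction without materializing the d-by-d matrix.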
Instead, [25] proposed a fusion method which first selects |
relevant visual features based on the textual feature (attention |
step) and then, combines them with the textual feature. In [26], |
the authors used Tucker decomposition to achieve a similar |
purpose in one step. While these attention mechanisms are |
interesting for finding visual elements aligned with the words |
within the question, they require the image to be divided into a
regular grid for the computation of the attention, which is not
suitable for objects of varying size. A solution is presented in
[27], which learns an object detector to select relevant parts of |
the image. In this research, we use a non-spatial fusion step to
keep the model relatively simple. Most traditional VQA
works are designed for a specific dataset, either composed |
of natural images (with questions covering an unconstrained |
range of topics) or synthetic images. While interesting for |
the methodological developments that they have facilitated, |
these datasets limit the potential applications of such systems |
to other problems. Indeed, it has been shown in [28] that |
VQA models trained on a specific dataset do not generalize |
well to other datasets. This generalization gap raises questions |
concerning the applicability of such models to specific tasks. |
A notable use-case of VQA is helping visually impaired |
people through natural language interactions [29]. Images |
acquired by visually impaired people represent a significant
domain shift, and thus pose a challenge to the applicability
of VQA models. In [30], the authors confirm that networks
trained on generic datasets do not generalize to their specific |
one. However, they manage to obtain much better results |
by fine-tuning or training models from scratch on their |
task-specific dataset. |
In this study, we propose a new application for VQA, |
specifically for the interaction with remote sensing images. |
To this effect, we propose the first remote sensing-oriented |
VQA datasets, and evaluate the applicability of this task to
remote sensing images. We propose a method to automatically |
PRE-PRINT. FINAL VERSION IN IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 3 |
[Figure: catalogs used for question generation — an elements catalog of land usages (road, water area, commercial building, industrial building, residential area, retail, religious area), an attributes catalog (shape, size, count), and a relations catalog (positional).]