and answers is more limited than in traditional VQA
datasets (9 possible answers for the LR dataset, 98 for the HR dataset).
III. VQA MODEL
We investigate the difficulty of the VQA task for remote |
sensing using a basic VQA model based on deep learning. An |
illustration of the proposed network is shown in Figure 7. In |
their simplest form, VQA models are composed of three parts
[24]: |
A. feature extraction; |
B. fusion of these features to obtain a single feature vector |
representing both the visual information and the question; |
C. prediction based on this vector. |
As the model shown in Figure 7 is learned end-to-end, the |
vector obtained after the fusion (in green in Figure 7) can be |
seen as a joint embedding of both the image and the question |
which is used as an input for the prediction step. We detail |
each of these three parts in the following.
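As a rough illustration, the three components can be sketched in PyTorch as follows. This is a hypothetical skeleton, not the authors' code: the two encoders are left abstract, and only the dimensions (1200-dimensional features, a 256-unit hidden layer) follow the description of Figure 7.

```python
import torch
import torch.nn as nn

class SimpleVQA(nn.Module):
    """Minimal three-part VQA skeleton: extraction, fusion, prediction."""

    def __init__(self, visual_encoder, question_encoder, n_answers):
        super().__init__()
        self.visual_encoder = visual_encoder      # A: image -> 1200-d vector
        self.question_encoder = question_encoder  # A: question -> 1200-d vector
        self.classifier = nn.Sequential(          # C: MLP over the joint embedding
            nn.Linear(1200, 256),
            nn.ReLU(),
            nn.Linear(256, n_answers),
        )

    def forward(self, image, question):
        v = torch.tanh(self.visual_encoder(image))
        q = torch.tanh(self.question_encoder(question))
        joint = v * q                             # B: point-wise fusion
        return self.classifier(joint)             # C: scores over answer classes
```

Because the answer space is treated as a set of classes, `n_answers` would be 9 for the LR dataset and 98 for the HR dataset.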
A. Feature extraction |
The first component of our VQA model is the feature extrac- |
tion. Its purpose is to obtain a low-dimensional representation |
of the information contained in the image and the question. |
1) Visual part: To extract information from a 2D image, |
a common choice is to use a Convolutional Neural Network |
(CNN). Specifically, we use a Resnet-152 model [32] pre- |
trained on ImageNet [10]. The principal motivation for this |
choice is that this architecture manages to avoid the
undesirable degradation problem (decreasing performance with
deeper networks) by using residual mappings of the layers’
inputs, which are easier to learn than the common choice of
direct mappings. This architecture has been successfully used
in a wide range of work in the remote sensing community |
(e.g. [8], [17], [33]). The last average pooling layer and fully |
connected layer are replaced by a 1×1 2D convolution which
outputs a total of 2048 features, which are vectorized. A final
fully connected layer is learned to obtain a 1200-dimensional
vector.
PRE-PRINT. FINAL VERSION IN IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING
Fig. 7. Framework of the proposed Visual Question Answering model. [Figure: A.1) visual part: a CNN (ResNet-152) extracts 2048 features from the image; A.2) language part: an RNN (skip-thoughts) encodes the question (e.g. "Is there a building?") into 2400 features; each branch is projected to 1200 dimensions by a fully connected layer, the two vectors are fused (B) by point-wise multiplication ⊙, and C) prediction passes the result through a 256-unit fully connected layer to the output vector over answer classes ("Yes", "No", "89", ...).]
2) Language part: The feature vector is obtained using the |
skip-thoughts model [34] trained on the BookCorpus dataset |
[35]. This model is a recurrent neural network, which aims at |
producing a vector representing a sequence of words (in our |
case, a question). To make this vector informative, the model |
is trained in the following way: it encodes a sentence from a |
book in a latent space, and tries to decode it to obtain the two |
adjacent sentences in the book. By doing so, it ensures that |
the latent space embeds semantic information. Note that this |
semantic information is not specific to remote sensing, since the
model is trained on the BookCorpus dataset. However, several
works, including [36], have successfully applied non-domain |
specific NLP models to remote sensing. In our model, we use |
the encoder which is then followed by a fully-connected layer |
(from 2400 to 1200 elements).
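As an illustration, a recurrent encoder with the same interface can be sketched as follows. This is a GRU stand-in, not the actual skip-thoughts model [34]; the vocabulary and embedding sizes are placeholders, while the 2400-to-1200 fully-connected projection follows the text.

```python
import torch
import torch.nn as nn

class QuestionEncoder(nn.Module):
    """GRU stand-in for the skip-thoughts question encoder."""

    def __init__(self, vocab_size, embed_dim=300, hidden_dim=2400):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.proj = nn.Linear(hidden_dim, 1200)  # 2400 -> 1200 fully connected layer

    def forward(self, tokens):  # tokens: (batch, sequence_length) word indices
        _, hidden = self.rnn(self.embed(tokens))
        return self.proj(hidden[-1])  # last hidden state summarizes the question
```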
B. Fusion |
At this step, we have two feature vectors (one representing |
the image, one representing the question) of the same size. To |
merge them into a single vector, we use a simple strategy: a |
point-wise multiplication after applying the hyperbolic tangent |
function to the vectors’ elements. While being a fixed (i.e. |
not learnt) operation, the end-to-end training of our model |
encourages both feature vectors to be comparable with respect |
to this operation. |
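This fusion step amounts to a single fixed operation:

```python
import torch

def fuse(visual, question):
    # Point-wise (Hadamard) product after squashing both vectors to (-1, 1).
    return torch.tanh(visual) * torch.tanh(question)
```

Since the hyperbolic tangent bounds each element in (-1, 1), the fused vector is bounded as well; the gradients flowing through this product are what push the two encoders toward comparable representations during end-to-end training.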
C. Prediction |
Finally, we project this 1200 dimensional vector to the |
answer space by using a MLP with one hidden layer of 256 |
elements. We formulate the problem as a classification task, in |
which each possible answer is a class. Therefore, the size of |
the output vector depends on the number of possible answers.
D. Training procedure
We train the model using the Adam optimizer [37] with a |
learning rate of 10^-5 until convergence (150 epochs in the
case of the LR dataset, and 35 epochs in the case of the HR |
dataset). We use a dropout of 0.5 for every fully connected |
layer. Due to the difference of input size between the two |
datasets (HR images are 4 times larger), we use batches of |
70 instances for the HR dataset and 280 for the LR dataset. |
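A minimal training step under these settings might look as follows. The model here is a stand-in linear layer; only the Adam optimizer, the 10^-5 learning rate, and the formulation as a classification loss come from the text.

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 9)  # placeholder for the full VQA model; 9 answers (LR dataset)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # learning rate from the paper
criterion = nn.CrossEntropyLoss()  # one class per possible answer

def training_step(features, answers):
    optimizer.zero_grad()
    loss = criterion(model(features), answers)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the full model, a dropout of 0.5 would additionally be applied to every fully connected layer, and the batch size would be 70 (HR) or 280 (LR).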
Furthermore, when the questions do not contain a positional |
component relative to the image space (i.e. "left of", "top of",
"right of" or "bottom of", see subsection II-A), we augment the
dataset by randomly applying vertical and/or horizontal
flipping to the images.
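This conditional augmentation can be sketched as follows. The positional keywords come from subsection II-A; the 0.5 flip probabilities are an assumption.

```python
import torch

POSITIONAL = ("left of", "top of", "right of", "bottom of")

def augment(image, question):
    """Randomly flip a (C, H, W) image unless the question is position-dependent."""
    if any(keyword in question for keyword in POSITIONAL):
        return image  # flips would invalidate positional answers
    if torch.rand(1).item() < 0.5:
        image = torch.flip(image, dims=[-1])  # horizontal flip
    if torch.rand(1).item() < 0.5:
        image = torch.flip(image, dims=[-2])  # vertical flip
    return image
```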
IV. RESULTS AND DISCUSSION
We report the results obtained by our model on the test sets |