of 85.24% in the first test set of the HR dataset. This is
illustrated in Figures 8(a,b), where the presence of buildings (by
means of the covered area) is well detected.
However, we found that our model performs poorly on
questions regarding the relative positions of objects, such
as those illustrated in Figures 8(c-e). While Figure 8(c)
is correct despite the difficulty of the question, Figure 8(d)
shows a small mistake from the model and Figure 8(e) is
completely incorrect. These problems can be explained by
the fact that such questions lie at a high semantic level and
are therefore difficult for a model with a simple fusion
scheme such as the one presented in Section III.
Fig. 9. Samples from the low resolution test set. [Figure: four LR test-set
samples with question, ground truth and prediction — (a) "Is it a rural or an
urban area?", ground truth Rural, prediction Rural; (b) "Is it a rural or an
urban area?", ground truth Urban, prediction Urban; (c) "Are there more water
areas than commercial buildings?", ground truth Yes, prediction No; (d) "Are
there less buildings than water areas?", ground truth No, prediction No.]
Regarding the low resolution dataset, rural/urban questions
are generally well answered (90% accuracy), as shown
in Figures 9(a,b). Note that the ground truth for this type
of question is defined by a hard threshold on the number
of buildings, which causes an area such as the one shown in
Figure 9(b) to be labeled as urban.
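The hard-threshold labeling rule described above can be sketched as follows; the threshold value is a hypothetical placeholder for illustration, not the one used by the authors:

```python
# Hypothetical sketch of the rural/urban ground-truth rule: an area is
# labeled "urban" when its building count exceeds a hard threshold.
# URBAN_THRESHOLD is an assumed value, not the authors' actual cut-off.
URBAN_THRESHOLD = 10

def rural_or_urban(num_buildings: int) -> str:
    """Label an area from its building count with a hard threshold."""
    return "urban" if num_buildings > URBAN_THRESHOLD else "rural"
```

Such a rule explains the edge case of Figure 9(b): an area that looks rural can still fall on the "urban" side of the cut-off.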
However, the low resolution of Sentinel-2 images can be
problematic when answering questions about relatively small
objects. For instance, in Figures 9(c,d), we cannot see any
water area nor determine the type of buildings, which makes
the model's answer unreliable.
Generalization to unseen areas: |
The performance on the second test set of the HR dataset
shows that generalization to new geographic areas
is problematic for the model, with an accuracy drop of
approximately 5%. This new domain has a stronger impact
on the most difficult tasks (counting and area computation).
This can be explained by looking at Figures 8(g-i): the
domain shift is significant in the image space, as
a different sensor was used for the acquisition. Furthermore,
the urban organization of Philadelphia is different from that
of the city of New York. This causes the buildings to go
undetected by the model in Figure 8(h), while the parking lots
can still be detected in Figure 8(g), possibly thanks to the
cars. This decrease in performance could be reduced by
using domain adaptation techniques. Such a method could
be developed for the image space only (a review of domain
adaptation for remote sensing is given in [39]) or at the
question/image level (see [40], which presents a method for
domain adaptation in the context of VQA).

PRE-PRINT. FINAL VERSION IN IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING

Fig. 10. Confusion matrix for the low resolution dataset (logarithmic scale)
on the test set. Red lines group answers by type ("Yes/No", "Rural/Urban",
numbers). [Figure: matrix of true vs. predicted answers over the classes
Yes, No, Rural, Urban, 0, 1-10, 11-100, 101-1000, 1000+.]
Answer categories:
The confusion matrices indicate that the models generally
provide logical answers, even when making mistakes
(e.g. they might answer "yes" instead of "no" to a question
about the presence of an object, but not a number). Rare
exceptions are observed on the first test set of the
HR dataset (see Figure 11(a)), on which the model gives 23
illogical answers (out of the 316,941 questions of this test set).
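The check above can be sketched as follows; the answer sets are assumptions based on the classes visible in Figure 10, and a prediction is counted as illogical when its answer type differs from that of the ground truth:

```python
# Assumed answer-type groups, inferred from the classes in Figure 10.
ANSWER_GROUPS = [
    {"yes", "no"},
    {"rural", "urban"},
    {"0", "1-10", "11-100", "101-1000", "1000+"},  # numeric bins
]

def group_of(answer: str) -> int:
    """Return the index of the answer-type group an answer belongs to."""
    for i, group in enumerate(ANSWER_GROUPS):
        if answer in group:
            return i
    raise ValueError(f"unknown answer: {answer}")

def count_illogical(pairs):
    """Count predictions whose answer type differs from the ground truth's."""
    return sum(1 for truth, pred in pairs if group_of(truth) != group_of(pred))
```

For example, answering "1-10" to a yes/no question is illogical, while answering "no" instead of "yes" is merely wrong.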
Language biases: |
A common issue in VQA models, raised in [41], is that
strong language biases are captured by the model. When
this is the case, the answer provided by the model depends
mostly on the question rather than on the image. To assess
this, we evaluated the proposed models by randomly selecting
an image from the test set for each question. We obtained an
overall accuracy of 73.78% on the LR test set, 73.78% on
the first test set of the HR dataset and 72.51% on the second
test set. This small drop in accuracy indicates that the
models indeed rely more on the questions than on the images to
provide an answer. Furthermore, the strongest drop in accuracy
is seen on the HR dataset, indicating that the proposed model
extracts more information from the high resolution data.
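The evaluation protocol described above can be sketched as a small loop; `model`, the dataset structure, and the seeding are hypothetical placeholders, not the authors' actual pipeline:

```python
import random

def shuffled_image_accuracy(model, questions, images, answers, seed=0):
    """Accuracy when every question is answered on a randomly drawn image.

    A high accuracy here, close to the normal test accuracy, suggests the
    model relies on language biases rather than on visual evidence.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    correct = 0
    for question, answer in zip(questions, answers):
        image = rng.choice(images)  # image unrelated to the question
        if model(image, question) == answer:
            correct += 1
    return correct / len(questions)
```

A model that ignores the image entirely would score the same here as on the unshuffled test set, which is why the small gap reported above points to a language bias.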
Importance of the number of training samples: |
We show in Figure 12 the evolution of the accuracies when the
model is trained with a fraction of the HR training samples.
When using only 1% of the available training samples, the
model already reaches 65% average accuracy (vs. 83% for the
model trained on the whole training set). However, it can be
seen that, for numerical tasks (counting and area estimation),
larger amounts of samples are needed to achieve the performance
reported in Table III. This experiment also shows that
the performance starts to plateau after 10% of the training data
is used: this indicates that the proposed model would not profit