of Philadelphia, which is not seen during training. Note
that this second test set also uses another sensor (marked as
unknown in the USGS data catalog), not seen during training.
Differences between the two datasets:
Due to their characteristics, the two datasets represent two |
different possible use cases of VQA: |
•The LR dataset allows for large spatial and temporal |
coverage thanks to the frequent acquisitions made by |
Sentinel-2. This characteristic could be of interest for |
future applications of VQA, such as large-scale queries
(e.g. rural/urban questions) or temporal queries (which are out of
the scope of this study). However, due to the relatively
low resolution (10 m), some objects (such as small houses,
roads, or trees) cannot be seen in such images.
This fact severely limits the questions to which the model
could give an accurate answer.
2 https://earthexplorer.usgs.gov/
•Thanks to the much finer resolution of the HR dataset, |
much of the information needed to answer typical
questions is present. Therefore, in contrast to the LR
dataset, questions concerning objects’ coverage or count- |
ing relatively small objects can possibly be answered |
from such data. However, data of such resolution is |
generally less frequently updated and more expensive to |
acquire. |
Based on these differences, we constructed different types |
of questions for the two datasets. Questions concerning the |
area of objects are only asked in the HR dataset. On the other |
hand, questions about urban/rural area classification are only |
asked in the LR dataset, as the level of zoom of images from |
the HR dataset would prevent a meaningful answer from being |
provided. |
To account for the data distributions and error margins, we
also quantize some answers in both datasets:
•Counting in LR: as the coverage is relatively large |
(6.55km2), the number of small objects contained in one |
tile can be high, giving a heavy-tailed distribution for the
numerical answers, as shown in Figure 6. More precisely, |
while 26.7% of the numerical answers are ’0’ and 50% |
of the answers are less than ’7’, the highest numerical |
answer goes up to ’17139’. Besides making the problem
complex, allowing such a range of numerical answers
arguably does not make sense for data of this resolution:
in most cases, it would be impossible to distinguish
17139 objects in an image of 65536 pixels. Therefore,
numerical answers are quantized into the following
categories:
–’0’; |
–’between 1 and 10’; |
–’between 11 and 100’; |
–’between 101 and 1000’; |
–’more than 1000’. |
•In a similar manner, we quantize the answers to questions
regarding area in the HR dataset. A great majority (60.9%)
of the answers of this type are ’0m2’, while the distribution
also presents a heavy tail. Therefore, we use the same
quantization as the one proposed for counts for the LR |
dataset. Note that we do not quantize purely numerical |
answers (i.e. answers to questions of type ’count’) as |
the maximum number of objects is 89 in our dataset. |
Counting answers therefore correspond to 89 classes in |
the model in this case (see section III). |
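The quantization rule above can be sketched as follows. This is an illustrative snippet, not code from the paper; the function name `quantize_answer` is our own choice. The same five bins are used both for counting answers in the LR dataset and for area answers (in m2) in the HR dataset, while exact HR counts are kept as-is (at most 89 classes).

```python
def quantize_answer(n: int) -> str:
    """Map a raw count (LR dataset) or an area in m^2 (HR dataset)
    to one of the five answer categories described in the text."""
    if n == 0:
        return "0"
    elif n <= 10:
        return "between 1 and 10"
    elif n <= 100:
        return "between 11 and 100"
    elif n <= 1000:
        return "between 101 and 1000"
    return "more than 1000"

# Examples taken from the statistics reported in the text: the largest
# LR counting answer (17139) falls in the top bin, while the most
# frequent HR area answer (0 m^2) keeps its own category.
print(quantize_answer(0))      # prints "0"
print(quantize_answer(17139))  # prints "more than 1000"
```

Note that HR counting answers bypass this function entirely: since the maximum count is 89, they are treated as 89 distinct classes by the model (see section III).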
C. Discussion |
Questions/answers distributions:
We show the final distribution of answers per question type |
for both datasets in Figure 5. We can see that most question |
types (with the exception of ’rural/urban’ questions in the |
LR dataset, asked only once per image) are close to evenly |
distributed by construction. The answer ’no’ dominates
the answer distribution of the HR dataset with a frequency
of 37.7%. In the LR dataset, the answer ’yes’ occurs 34.9% |
of the time while the ’no’ frequency is 34.3%. The strongest |
PRE-PRINT. FINAL VERSION IN IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 6 |
(a) Distribution of answers for the LR dataset. (b) Distribution of answers for the HR dataset (numerical answers are ordered,
and 0 is the most frequent).
Fig. 5. Distributions of answers in the Low resolution (LR) and High resolution (HR) datasets. |
Fig. 6. Frequencies of exact counting answers in the LR dataset. Only the |
left part of the histogram is shown (until 200 objects), the largest (single) |
count being 17139. 50% of the answers are less than 7 objects in the tile. |
imbalance occurs for the answer ’0’ in the HR dataset |
(with a frequency of 60.9% for the numerical answer). This |
imbalance is greatly reduced by the quantization process |
described in the previous paragraph. |
Limitations of the proposed method:
While the proposed method for generating image/question/answer
triplets has the advantage of being automatic and easily
scalable while relying on human-annotated data, a few
limitations have been observed. First, some annotations can be
missing or badly registered [4]. Furthermore,
it was not possible to match the acquisition date of the imagery
to that of the OSM data, the main reason being that it is
impossible to know whether a newly added element appeared at
the same time in reality or was simply entered in OSM for the first time.
As OSM is the main source of data for our process, errors in |
OSM will negatively impact the accuracy of our databases. |
Furthermore, due to the templates used to automatically |
construct questions and provide answers, the set of questions |