scale. When dealing with low resolution data, visible objects of interest are larger. To deal with this disparity, we adapt the size thresholds to the resolution of the images.
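As an illustration of this resolution-dependent thresholding, the following sketch scales a "small object" cut-off with the ground area of one pixel. The base threshold value and the quadratic scaling rule are assumptions for illustration, not the exact values used in the paper.

```python
# Hypothetical sketch: adapting object-size thresholds to image resolution.
# base_small_m2 and the pixel-area scaling are illustrative assumptions.

def size_category(area_m2: float, resolution_m: float, base_small_m2: float = 1000.0) -> str:
    """Classify an object's footprint as 'small' or 'large'.

    The threshold grows with the ground area covered by one pixel,
    so coarser imagery uses a larger cut-off.
    """
    pixel_area = resolution_m ** 2          # ground area of one pixel (m^2)
    threshold = base_small_m2 * pixel_area  # resolution-scaled threshold
    return "small" if area_m2 < threshold else "large"

# At 10 m resolution (Sentinel-2), the cut-off here is 1000 * 100 = 100,000 m^2.
print(size_category(50_000, resolution_m=10.0))   # small
print(size_category(200_000, resolution_m=10.0))  # large
```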
"next to" is handled as a hard threshold on the relative distance between the two objects (less than 1000 m). When looking at relative positions, we select the second element following the procedure previously defined.
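The "next to" relation above can be sketched as a simple distance check, assuming object positions are given as centroids in a metric (projected) coordinate system; the helper name is hypothetical.

```python
import math

# Sketch of the "next to" relation: a hard threshold on the distance
# between two object centroids, as described in the text.
NEXT_TO_THRESHOLD_M = 1000.0

def next_to(a_xy, b_xy):
    """Return True if the two centroids are less than 1000 m apart."""
    return math.hypot(a_xy[0] - b_xy[0], a_xy[1] - b_xy[1]) < NEXT_TO_THRESHOLD_M

print(next_to((0.0, 0.0), (600.0, 500.0)))  # distance ~781 m -> True
print(next_to((0.0, 0.0), (1200.0, 0.0)))   # 1200 m -> False
```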
Question construction: At this point of the procedure, we have an element (e.g. road), with an optional attribute (e.g. small road) and an optional relative position (e.g. small road on the left of a water area). The final step is to generate a "base question" about this element. We define 5 types of questions of interest ("Question catalog" in Figure 2(a)), from which a specific type is randomly selected to obtain a base question. For instance, in the case of comparison questions, we randomly choose among "less than", "equals to" and "more than" and construct a second element.
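The random selection from the question catalog can be sketched as follows. The five question types and the three comparison modes come from the text; the second-element pool and the structure of the returned record are hypothetical placeholders.

```python
import random

# Illustrative sketch of stochastic base-question construction.
QUESTION_CATALOG = ["count", "presence", "area", "comparison", "rural_urban"]
COMPARISON_MODES = ["less than", "equals to", "more than"]

def build_base_question(element, attribute, rng):
    """Randomly pick a question type; comparisons also get a mode and a second element."""
    q = {"type": rng.choice(QUESTION_CATALOG), "element": element, "attribute": attribute}
    if q["type"] == "comparison":
        q["mode"] = rng.choice(COMPARISON_MODES)
        q["second_element"] = rng.choice(["building", "water area", "road"])  # placeholder pool
    return q

rng = random.Random(0)
print(build_base_question("road", "small", rng))
```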
This base question is then turned into a natural language question using pre-defined templates for each question type and object. For some question types (e.g. count), more than one template is defined (e.g. "How many ... are there?", "What is the number of ...?" or "What is the amount of ...?"). In this case, the template to be used is randomly selected. This stochastic process ensures diversity, both in the question types and in the question templates used.
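The template step above can be sketched as a random pick over per-type template strings. The count templates mirror the examples in the text; the `{obj}` slot and the presence template are assumptions for illustration.

```python
import random

# Sketch of template-based question rendering with random template selection.
TEMPLATES = {
    "count": [
        "How many {obj} are there?",
        "What is the number of {obj}?",
        "What is the amount of {obj}?",
    ],
    "presence": ["Is there a {obj}?"],  # assumed template for illustration
}

def render_question(q_type, obj, rng):
    """Pick one of the templates for this question type and fill in the object."""
    template = rng.choice(TEMPLATES[q_type])
    return template.format(obj=obj)

print(render_question("count", "small roads", random.Random(42)))
```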
2) Answer construction: To obtain the answer to the constructed question, we extract the objects from the OSM database corresponding to the image footprint. The objects b corresponding to the element category and its attributes are then selected and used depending on the question type:
• Count: In the case of counting, the answer is simply the number of objects b.
• Presence: A presence question is answered by comparing the number of objects b to 0.
• Area: The answer to a question about the area is the sum of the areas of the objects b.
• Comparison: Comparison is a specific case for which a second element and the relative position statement are needed. This question is then answered by comparing the number of objects b to the number of objects of the second element.
• Rural/Urban: The case of rural/urban questions is handled in a specific way. In this case, we do not create a specific element, but rather count the number of buildings (both commercial and residential). This number of buildings is then compared to a predefined threshold, which depends on the resolution of the input data (to obtain a density), to answer the question. Note that we are using a generic definition of rural and urban areas, but this can be easily adapted using the precise definition of each country.
Fig. 3. Images selected for the LR dataset over the Netherlands. Each point represents one Sentinel-2 image which was later split into tiles. Red points represent training samples, the green pentagon represents the validation image, and the blue triangle is for the test image. Note that one training image is not visible (as it overlaps with the left-most image).
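The five answer rules above can be sketched in one dispatch function. Objects are represented here by their areas in m²; the urban building-count threshold is an illustrative assumption (the paper makes it depend on the input resolution).

```python
# Sketch of per-type answer construction. Thresholds are illustrative.

def answer(q_type, objects, second_objects=None, mode="more than", urban_threshold=100):
    if q_type == "count":                       # number of matching objects
        return len(objects)
    if q_type == "presence":                    # compare count to 0
        return "yes" if len(objects) > 0 else "no"
    if q_type == "area":                        # sum of object areas (m^2)
        return sum(objects)
    if q_type == "comparison":                  # compare counts of the two elements
        n, m = len(objects), len(second_objects or [])
        ok = {"less than": n < m, "equals to": n == m, "more than": n > m}[mode]
        return "yes" if ok else "no"
    if q_type == "rural_urban":                 # objects = buildings here
        return "urban" if len(objects) >= urban_threshold else "rural"
    raise ValueError(q_type)

print(answer("count", [120.0, 80.0, 300.0]))                       # 3
print(answer("area", [120.0, 80.0, 300.0]))                        # 500.0
print(answer("comparison", [1.0], [1.0, 2.0], mode="less than"))   # yes
```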
B. Data |
Following the method presented in subsection II-A, we |
construct two datasets with different characteristics. |
Low resolution (LR): this dataset is based on Sentinel-2 images acquired over the Netherlands. Sentinel-2 satellites provide 10 m resolution images (for the visible bands used in this dataset) with frequent updates (around 5 days) at a global scale. These images are openly available through ESA's Copernicus Open Access Hub1.
To generate the dataset, we selected 9 Sentinel-2 tiles covering the Netherlands with a low cloud cover (selected tiles are shown in Figure 3). These tiles were divided into 772 images of size 256×256 (each covering 6.55 km²), retaining the RGB bands. From these, we constructed 77,232 questions and answers following the methodology presented in subsection II-A. We split the data into a training set (77.8% of the original tiles), a validation set (11.1%) and a test set (11.1%) at the tile level (the spatial split is shown in Figure 3). This limits spatial correlation between the different splits.
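The figures quoted for the LR dataset can be sanity-checked with two lines of arithmetic: a 256×256 image at 10 m per pixel covers (2560 m)², and the question count works out to roughly 100 questions per image.

```python
# Sanity check of the LR dataset numbers quoted above.
side_m = 256 * 10                  # 2560 m per side at 10 m/pixel
area_km2 = (side_m / 1000) ** 2    # 6.5536 km^2, i.e. the ~6.55 km^2 stated
print(round(area_km2, 2))          # 6.55

print(77_232 / 772)                # roughly 100 questions per image
```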
High resolution (HR): this dataset uses 15 cm resolution aerial RGB images extracted from the High Resolution Orthoimagery (HRO) data collection of the USGS. This collection covers most urban areas of the USA, along with a few areas of interest (e.g. national parks). For most areas covered by the dataset, only one tile is available, with acquisition dates ranging from year 2000 to 2016 and various sensors. The tiles are openly accessible through USGS' EarthExplorer tool2.
1https://scihub.copernicus.eu/
PRE-PRINT. FINAL VERSION IN IEEE TRANSACTIONS ON GEOSCIENCE AND REMOTE SENSING 5
Fig. 4. Extent of the HR dataset with a zoom on the Portland, Manhattan (New York City) and Philadelphia areas. Each point represents one image (generally of size 5000×5000) which was later split into tiles. The images cover the New York City/Long Island region, Philadelphia and Portland. Red points represent training samples, green pentagons represent validation samples, and blue indicators are for the test sets (blue triangles for test set 1, blue stars for test set 2).
From this collection, we extracted 161 tiles belonging to the North-East coast of the USA (see Figure 4) that were split into 10,659 images of size 512×512 (each covering 5898 m²). We constructed 1,066,316 questions and answers following the methodology presented in subsection II-A. We split the data into a training set (61.5% of the tiles), a validation set (11.2%), and test sets (20.5% for test set 1, 6.8% for test set 2). As can be seen in Figure 4, test set 1 covers similar regions as the training and validation sets, while test set 2 covers the city